
Bhushan (RID : 210ulp7x966t)

Designation: AWS Data Engineer

Location: Gurgaon

Experience: 8 Years

Rate: $16 / Hourly

Availability: Immediate

Work From: Offsite

Category: Information Technology & Services

Key Skills
Data Engineer, AWS, MySQL, AWS Redshift, Linux
Description

Project 1: Paya Payment Gateway
Work Contribution:

  • Adhered to timelines to meet quality assurance targets.
  • Coordinated with systems partners to finalize designs and confirm requirements.
  • Boosted network, system and data availability and integrity through preventive maintenance and upgrades.
  • Received and prioritized service requests to optimize resources.
  • Responsible for the automation of various tasks by creating a data pipeline using AWS Glue, S3 and AWS Lambda, along with AWS SES and SNS to send the reports.
  • Proficient in writing Python scripts for ETL processes in AWS Glue and AWS Lambda functions.
  • Responsible for creating an effective ETL process to load and transform data from a MySQL database to DynamoDB.
  • Proficient in using AI tools for prompt engineering to create and analyze Python code for the developers' team.
  • Built and deployed ETL jobs using Ab Initio.
  • Involved in AWS lab creation for the Enterprise Data team.
  • Created the data architecture for DLSC tasks to migrate data from Azure services to AWS services.
  • Worked on data modeling using AWS Redshift and AWS Database Migration Service (DMS).
  • Worked on designing, implementing and maintaining structured and semi-structured data pipelines using AWS Glue, AWS Lambda, AWS S3, AWS Kinesis, Kafka and PySpark.
  • Worked on designing and implementing near-real-time streaming data pipelines using PySpark integration with Kafka.
  • Worked on designing and implementing semi-structured streaming data pipelines using AWS Kinesis.
  • Worked on designing and implementing structured streaming data pipelines using PySpark integration with AWS Redshift.
  • Worked on implementing source data connections for data pipelines using AWS Glue, along with notification services such as AWS SNS/SES.
  • Worked on creating graphical dashboards using AWS QuickSight for cloud data and Tableau for DB data.
  • Conducted a client visit during the initial phase of the project to learn about the business requirements and gather information for requirement analysis.
  • Worked collaboratively with the project team to analyze gathered information and develop a comprehensive project plan.
  • Conducted requirement analysis to identify project objectives, scope, constraints, risks and success criteria.
  • Facilitated meetings with stakeholders to gather feedback, address concerns and ensure project alignment with business goals.
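The MySQL-to-DynamoDB ETL step described above could be sketched as follows. This is a minimal illustration, not the actual pipeline code: the row schema is hypothetical, and the helper simply converts a MySQL row (as a dict) into the typed attribute format that DynamoDB's low-level API expects, the kind of transform a Glue or Lambda job would apply before writing each item.

```python
from decimal import Decimal

def mysql_row_to_dynamodb_item(row):
    """Convert a MySQL row (dict) into DynamoDB's typed attribute format.

    DynamoDB's low-level API tags every attribute with a type:
    "S" (string), "N" (number), "BOOL" (boolean), or "NULL".
    """
    item = {}
    for column, value in row.items():
        if value is None:
            item[column] = {"NULL": True}
        elif isinstance(value, bool):  # check bool before int: bool subclasses int
            item[column] = {"BOOL": value}
        elif isinstance(value, (int, float, Decimal)):
            item[column] = {"N": str(value)}
        else:
            item[column] = {"S": str(value)}
    return item

# Example: a row fetched from MySQL (hypothetical payment schema)
row = {"payment_id": 1001, "merchant": "acme", "settled": True, "memo": None}
item = mysql_row_to_dynamodb_item(row)
# item == {"payment_id": {"N": "1001"}, "merchant": {"S": "acme"},
#          "settled": {"BOOL": True}, "memo": {"NULL": True}}
```

In the actual pipeline, a Glue or Lambda job would pass each converted item to DynamoDB's `put_item` API.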

 

Project 2: Bakkt Crypto Services
Work Contribution:

  • Adhered to timelines to meet quality assurance targets.
  • Coordinated with systems partners to finalize designs and confirm requirements.
  • Boosted network, system and data availability and integrity through preventive maintenance and upgrades.
  • Received and prioritized service requests to optimize resources.
  • Responsible for the automation of GCP services by creating and deploying data pipelines using various GCP services such as GCS, Cloud Functions, Cloud Scheduler and Python.
  • Worked on the data architecture of the on-premises-to-GCP data migration process.
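A GCS/Cloud Functions/Cloud Scheduler pipeline of the kind described above might look like the sketch below. The bucket and prefix names are hypothetical, and the GCP SDK is imported lazily inside the handler so only the date-partitioning helper runs outside GCP; this is an assumption-laden illustration, not the project's code.

```python
from datetime import date

def partitioned_path(prefix, run_date):
    """Build a date-partitioned GCS object prefix, e.g. 'archive/2022/03/15/'."""
    return f"{prefix}/{run_date:%Y/%m/%d}/"

def move_daily_export(event, context):
    """Cloud Function entry point (triggered by Cloud Scheduler via Pub/Sub).

    Moves the day's export files into a date-partitioned archive prefix.
    Bucket and prefix names here are hypothetical.
    """
    # Imported lazily so the module loads even without the GCP SDK installed.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("daily-exports")  # hypothetical bucket name
    dest_prefix = partitioned_path("archive", date.today())
    for blob in client.list_blobs(bucket, prefix="incoming/"):
        bucket.copy_blob(blob, bucket, dest_prefix + blob.name.split("/")[-1])
        blob.delete()
```

Cloud Scheduler would publish to a Pub/Sub topic on a cron schedule, which in turn invokes the function.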

 

 

Senior Data Engineer                                                                08/2021 to 08/2022

Mannara Technologies Pvt Ltd – Pune

Project: LMS

Work Contribution:

  • Designed and implemented effective database solutions and models to store and retrieve data; designed and developed analytical data structures. Contributed to internal activities for overall process improvements, efficiencies and innovation.
  • Worked on designing, implementing and maintaining structured and semi-structured data pipelines using AWS Glue, AWS Lambda, AWS S3, AWS Kinesis, Kafka and PySpark.
  • Worked on designing and implementing near-real-time streaming data pipelines using PySpark integration with Kafka.
  • Built and deployed ETL jobs using Ab Initio.
  • Worked on designing and implementing semi-structured streaming data pipelines using AWS Kinesis.
  • Worked on designing and implementing structured streaming data pipelines using PySpark integration with AWS Redshift.
  • Worked on implementing source data connections for data pipelines using AWS Glue, along with notification services such as AWS SNS/SES.
  • Proficient in monitoring AWS RDS MySQL databases for disk space, replication health and backups.
  • Worked on upgrading AWS RDS instances for clients.
  • Worked on data visualization, generating graphical reports using Tableau.
  • Developed automation scripts using shell scripting and stored procedures to optimize and streamline database processes; skilled in setting up and monitoring master-slave replication in AWS RDS MySQL databases.
  • Knowledgeable in AWS RDS MySQL security, implementing password policies using AWS Secrets Manager and encryption techniques.
  • Experienced in writing stored procedures and views to automate tasks such as user creation and granting privileges.
  • Expertise in fine-tuning AWS RDS MySQL database performance.
  • Proficient in installing and securing MySQL on AWS RDS instances.
  • Worked on developing data pipelines for reporting and analytics requirements using read replicas, defining physical data structures/models for Redshift/RDS.
  • Working experience in data replication, extraction, loading and cleansing.
  • Involved in the development and execution of Tableau projects.
  • Worked on ERDs using the draw.io design tool; shortened time-consuming database processes through automation with shell scripts and stored procedures; developed stored procedures for the Summary project; performed query optimization and performance tuning of MySQL queries for developers.
  • Involved in the development and deployment of ETL jobs for clients, including modeling, processing, transferring and representation of datasets through visualization using ETL, AWS and Tableau.
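The replication-health monitoring mentioned above could be sketched as a check over the output of MySQL's `SHOW SLAVE STATUS` (fetched with a client such as PyMySQL and passed in as a dict). The lag threshold and the exact set of checks are illustrative assumptions, not the monitoring script itself.

```python
def replication_health(slave_status, max_lag_seconds=300):
    """Evaluate a SHOW SLAVE STATUS row (as a dict); return (healthy, issues).

    MySQL reports Slave_IO_Running / Slave_SQL_Running as 'Yes'/'No' and
    Seconds_Behind_Master as an integer (or None when replication is broken).
    """
    issues = []
    if slave_status.get("Slave_IO_Running") != "Yes":
        issues.append("I/O thread not running")
    if slave_status.get("Slave_SQL_Running") != "Yes":
        issues.append("SQL thread not running")
    lag = slave_status.get("Seconds_Behind_Master")
    if lag is None:
        issues.append("replication lag unknown")
    elif lag > max_lag_seconds:
        issues.append(f"replica lagging by {lag}s (limit {max_lag_seconds}s)")
    return (not issues, issues)

# A healthy replica reports both threads running and low lag:
ok, issues = replication_health(
    {"Slave_IO_Running": "Yes", "Slave_SQL_Running": "Yes",
     "Seconds_Behind_Master": 12}
)
# ok == True, issues == []
```

In a monitoring script, the status dict would come from a `SHOW SLAVE STATUS` query, and any issues could be published via AWS SNS, in line with the notification services noted above.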

 

 