
Farooq (RID : 4kv2lps4cv4m)

Designation : Azure Data Engineer

Location : Jaipur

Experience : 8 Years

Rate : $16 / Hour

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
Azure, SQL Server, SAP HANA, ADLS, Oracle
Description

Professional Experience

Project 1

Development and Support

Organization

Technogrid IT Systems

 

Description

 

  • Involved in creating Azure Data Factory pipelines that move, transform, and analyse data from a wide variety of sources
  • Transformed the data to Parquet format and performed file-based incremental loads of data as per the vendor refresh schedule
  • Created triggers to run pipelines as per schedule
  • Configured ADF pipeline parameters and variables
  • Created pipelines in a parent-and-child pattern
  • Created triggers to execute pipelines sequentially
  • Monitored the Dremio Data Lake Engine to deliver data to customers as per business needs
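The file-based incremental-load pattern described above can be sketched in plain Python. This is an illustrative sketch only: inside ADF this filtering is typically built from Get Metadata and Filter activities, and the function name and watermark handling here are assumptions, not the actual project code.

```python
import os
from datetime import datetime, timezone

def files_to_load(source_dir, last_watermark):
    """Return files modified after the previous watermark, plus a new watermark.

    A minimal sketch of a file-based incremental load: only files whose
    modification time is newer than the stored watermark are selected for
    the next load run.
    """
    new_files = []
    new_watermark = last_watermark
    for name in sorted(os.listdir(source_dir)):
        path = os.path.join(source_dir, name)
        # Compare each file's modified time against the last watermark.
        modified = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
        if modified > last_watermark:
            new_files.append(path)
            new_watermark = max(new_watermark, modified)
    return new_files, new_watermark
```

On each scheduled run, the returned watermark would be persisted (e.g. in a control table) so the next run picks up only newly refreshed vendor files.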

 

Project 2

Development and Support

Organization

Technogrid IT Systems

 

Description

 

  • Created linked services for multiple source systems (Oracle, SQL Server, Teradata, SAP HANA, ADLS, Blob, File Storage and Table Storage).
  • Created pipelines to extract data from on-premises source systems to Azure cloud data lake storage; worked extensively on copy activities and implemented copy behaviours such as flatten hierarchy, preserve hierarchy and merge hierarchy; implemented error handling through the copy activity.
  • Exposure to Azure Data Factory activities such as Lookup, Stored Procedure, If Condition, ForEach, Set Variable, Append Variable, Get Metadata, Filter and Wait.
  • Configured Logic Apps to send email notifications to end users and key stakeholders with the help of the web activity; created dynamic pipelines to handle extraction from multiple sources to multiple targets; extensively used Azure Key Vault to configure the connections in linked services.
  • Configured and implemented Azure Data Factory triggers and scheduled the pipelines; monitored the scheduled pipelines and configured alerts to get notified of failed pipelines.
  • Implemented delta-logic extractions for various sources with the help of a control table; implemented data frameworks to handle deadlocks, recovery and logging of pipeline data.
  • Deployed the code to multiple environments through the CI/CD process, worked on code defects during SIT and UAT testing, supported data loads for testing, and implemented reusable components to reduce manual intervention.
  • Knowledge of Azure Databricks to run Spark-Python notebooks through ADF pipelines.
  • Used Databricks utilities called widgets to pass parameters at run time from ADF to Databricks.
  • Created triggers, PowerShell scripts and parameter JSON files for deployments.
  • Reviewed individual work on ingesting data into Azure Data Lake and provided feedback based on the reference architecture, naming conventions, guidelines and best practices.
  • Implemented end-to-end logging frameworks for Data Factory pipelines.
  • Worked extensively on different types of transformations such as Query, Merge, Validation, Map Operation, History Preserving and Table Comparison transformations.
  • Extensively used ETL to load data from flat files and from relational databases.
  • Used BODS scripts and global variables.
  • Extracted data from different sources such as flat files and Oracle to load into a SQL database.
  • Ensured proper dependencies and proper running of loads (incremental and complete loads).
  • Maintained warehouse metadata, naming standards and warehouse standards for future application development.
  • Involved in preparation and execution of unit, integration and end-to-end test cases.
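The widget pattern mentioned above, where ADF passes run-time parameters to a Databricks notebook, can be sketched as follows. This is illustrative only: `dbutils` exists only inside a Databricks runtime, so the helper falls back to a default value elsewhere, and the parameter name used in the example is hypothetical.

```python
def get_widget_param(name, default):
    """Read a notebook parameter supplied by ADF via Databricks widgets.

    Inside a Databricks notebook, ADF's Notebook activity passes
    baseParameters that are read with dbutils.widgets.get(). Outside
    Databricks, `dbutils` is undefined, so fall back to a default value
    (useful for local testing of the same notebook code).
    """
    try:
        return dbutils.widgets.get(name)  # only defined on Databricks
    except NameError:
        return default
```

In ADF, each entry in the Notebook activity's `baseParameters` maps by name onto a widget in the target notebook, which is how a single notebook can be reused across pipelines with different run-time inputs.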
 