
Kalaneni

Data Engineer
Location: Hyderabad
Total Views: 81
Shortlist: 0
Member Since: 30+ days ago
Contact Details
Phone: {{contact.cdata.phone}}
Email: {{contact.cdata.email}}
Candidate Information
  • Experience: 2 years
  • Hourly Rate: $5
  • Availability: Immediate
  • Work From: Offsite
  • Category: Information Technology & Services
  • Last Active On: July 26, 2024
Key Skills
Data Engineer, Azure Databricks, Python, SQL, PySpark
Summary

WORK EXPERIENCE

Software Engineer 1, Microsoft Azure
MAQ Software, Hyderabad
Duration: Aug 22 – Present
Technologies/Tools: Azure Data Factory, Azure Databricks, SQL, Azure Data Lake Storage, Azure Synapse, PySpark
Description: Built an orchestration framework to bring data from external sources such as SFTP, Google Storage, and Amazon S3 buckets into ADLS and on to Synapse, making it available to downstream end users.
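
As an illustration of the ingestion step described above, here is a minimal PySpark sketch, assuming a Databricks cluster already authorized against the storage account; the storage account, container, and column names are hypothetical stand-ins for the real pipeline parameters.

# Minimal PySpark ingestion sketch; storage account, container, and column
# names are hypothetical stand-ins for the real pipeline parameters.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_external_sources").getOrCreate()

# Hypothetical ADLS Gen2 locations (the landing zone is filled by the
# upstream ADF copy activity from SFTP / Google Storage / S3).
landing_path = "abfss://landing@examplestore.dfs.core.windows.net/sftp_drop/"
curated_path = "abfss://curated@examplestore.dfs.core.windows.net/sales/"

# Read the raw CSV files dropped into the landing zone.
df = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load(landing_path)
)

# Stamp each row with its load date and write to the curated zone as
# Parquet, partitioned so downstream Synapse queries can prune by date.
(
    df.withColumn("ingest_date", F.current_date())
    .write.mode("overwrite")
    .partitionBy("ingest_date")
    .parquet(curated_path)
)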

• Preparing Requirements Analysis and Data Architecture Design.

• Building the pipelines to copy the data from source to destination in ADF.

• Automating the creation of linked services and datasets on both source and destination servers.

• Transforming data in Azure Data Factory using its built-in transformations.

• Scheduling pipelines and monitoring the data movement from source to destination.

• Creating multiple types of triggers to schedule the pipelines.

• Working on PRDs for multiple data sources.

• Working with Databricks notebooks to transform data in alignment with specific use cases using PySpark and SQL.

• Creating, configuring, and monitoring interactive and job clusters.

• Automating mail notification alerts from Databricks notebooks through API calls (a hedged sketch follows this list).

• Creating STTMs (Source-to-Target Mappings) for data modelling.

• Building and running Continuous Integration and Continuous Delivery (CI/CD) pipelines in Azure Data Factory.

• Building Databricks validation notebooks to monitor fetched and transformed data.

• Creating data assets and data quality checks in an internal tool.

• Experience with Agile delivery methodologies.
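
The mail-notification bullet above refers to the following pattern: a hedged sketch, assuming a simple HTTP mail endpoint reachable from the notebook; the URL, payload shape, and helper name are hypothetical.

# Hedged sketch of the mail-alert automation: a Databricks notebook posts a
# run summary to a mail endpoint over HTTP. The endpoint URL, payload shape,
# and helper name are hypothetical; the real service was internal.
import requests

def send_run_alert(pipeline_name: str, status: str, rows_loaded: int) -> None:
    payload = {
        "subject": f"[{status}] {pipeline_name}",
        "body": (
            f"Pipeline {pipeline_name} finished with status {status}; "
            f"{rows_loaded} rows loaded."
        ),
    }
    # Hypothetical internal mail API; in practice the URL and auth token
    # would come from pipeline config and a Databricks secret scope.
    resp = requests.post(
        "https://mail-api.example.internal/send", json=payload, timeout=30
    )
    resp.raise_for_status()

send_run_alert("copy_sftp_to_adls", "Succeeded", rows_loaded=120000)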

Software Engineer, Microsoft Azure
Capgemini, Hyderabad
Duration: Aug 21 – Aug 22
Technologies/Tools: Azure Data Factory, Azure SQL, Azure Databricks
Description: Worked on ETL pipelines to automate fetching data from multiple sources and loading it into multiple destinations.

• Worked with Python and SQL to write the transformation logic (see the sketch after this list).

• Building the pipelines to copy the data from source to destination in ADF.

• Creating linked services and multiple datasets on both source and destination servers.

• Scheduling pipelines and monitoring the data movement from source to destination.
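
The first bullet of this role refers to the sketch below: a minimal example of the Python + SQL transformation pattern in a Databricks notebook, with hypothetical paths, table, and column names.

# Hedged sketch of the Python + SQL transformation pattern: stage the raw
# DataFrame as a temp view, express the business logic in SQL, and write the
# result out. Paths, table, and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("transform_orders").getOrCreate()

raw = spark.read.parquet("abfss://raw@examplestore.dfs.core.windows.net/orders/")
raw.createOrReplaceTempView("orders_raw")

# SQL carries the business rules; Python handles orchestration and I/O.
daily = spark.sql("""
    SELECT customer_id,
           CAST(order_ts AS DATE) AS order_date,
           SUM(amount)            AS daily_amount
    FROM orders_raw
    WHERE amount IS NOT NULL
    GROUP BY customer_id, CAST(order_ts AS DATE)
""")

daily.write.mode("overwrite").parquet(
    "abfss://curated@examplestore.dfs.core.windows.net/orders_daily/"
)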
