
Priyanka

Azure Data Engineer
Location: Guntur
Total Views: 152
Member Since: 30+ days ago
Contact Details
Phone: {{contact.cdata.phone}}
Email: {{contact.cdata.email}}
Candidate Information
  • Experience: 3 Years
  • Hourly Rate: $20
  • Availability: Immediate
  • Work From: Offsite
  • Category: Training & Development
  • Last Active On: July 24, 2024
Key Skills
Azure, Python, SQL, PySpark, Databricks, ADF
Summary

Experience

Data Engineer
Assetmonk – Hyderabad, Telangana | 05/2020 – Present
  • Created ADF data pipelines to extract, transform, and load data from multiple sources into the data lake.
  • Understood the production applications and the impact of new implementations on existing business processes.
  • Performed extensive debugging, data validation, and error handling for very large datasets.
  • Implemented pipelines, mapping data flows, datasets, linked services, and triggers.
  • Worked with DevOps engineers to create automated CI/CD pipelines and practice test-driven development using Azure services.
  • Created notebooks in Azure Databricks.
  • Designed and implemented efficient database solutions for storing and retrieving data using Azure Blob Storage.
  • Created Azure SQL databases to store the data.
  • Created SQL scripts to support automated workflows.
  • Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns (a minimal sketch follows this list).
  • Responsible for estimating cluster size and for monitoring and troubleshooting the Spark Databricks cluster.
  • Good understanding of Spark architecture, including DataFrames, Spark Streaming, driver node, worker nodes, stages, executors, and tasks.
  • Experience with MS SQL Server Integration Services (SSIS), T-SQL, stored procedures, and triggers.
  • Azure Data Factory (ADF), Integration Runtime (IR), file system data ingestion, relational data ingestion.
  • Extracted, transformed, and loaded data from source systems into Azure data storage services using a combination of Azure data services.
  • Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data from different sources such as Azure SQL, Blob Storage, Azure SQL Data Warehouse, write-back tool, and backwards…

Frontend Developer
Assetmonk – Hyderabad, Telangana | 03/2019
  • Developed websites using HTML, CSS, Bootstrap, and JavaScript. The project was an application that lets clients learn how Assetmonk works as a smart and secure real estate investment platform and understand the complete application process. Live link: https://assetmonk.com/.
  • Developed an investor portal using Angular, HTML, CSS, Bootstrap, and TypeScript. This internal project was built for Assetmonk investors; in the portal, an investor can see the complete details of a particular project they have invested in, including payments, receipts, and other portal sections.
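The Spark work described above (reading multiple file formats, transforming the data, and aggregating customer usage) can be illustrated with a minimal PySpark sketch. Everything in it is an assumption for illustration only: the storage account, container names, file paths, and column names (customer_id, event_ts, event_type, amount) are hypothetical and not taken from this profile.

```python
# Minimal PySpark sketch: multi-format ingestion, transformation, and aggregation.
# All paths, storage accounts, and column names are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("customer-usage-aggregation").getOrCreate()

# Ingest from two file formats (CSV and Parquet) stored in the data lake.
csv_df = (spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("abfss://raw@examplelake.dfs.core.windows.net/events_csv/"))
parquet_df = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/events_parquet/")

# Align both sources on a common set of columns and union them.
columns = ["customer_id", "event_ts", "event_type", "amount"]
events = csv_df.select(*columns).unionByName(parquet_df.select(*columns))

# Basic transformation: parse timestamps and drop malformed rows.
events = (events
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .dropna(subset=["customer_id", "event_ts"]))

# Aggregate usage per customer to surface usage patterns.
usage = (events
         .groupBy("customer_id")
         .agg(F.count("*").alias("event_count"),
              F.sum("amount").alias("total_amount"),
              F.max("event_ts").alias("last_seen")))

# Write the curated result back to the data lake as Parquet.
usage.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/customer_usage/")
```

On Databricks, the spark session is already provided by the notebook runtime, and the abfss:// paths assume the cluster has been granted access to the storage account; the same logic could also be triggered from an ADF pipeline as a notebook activity.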
