
Mallikarjuna (RID: g8qolgdduc64)

Designation: Data Engineer

Location: Bengaluru, India

Experience: 4 Years

Rate: $7 / Hourly

Availability: Immediate

Work From: Any

Category: Information Technology & Services

Key Skills
Azure Data Factory (ADF), Azure Databricks (ADB), SQL
Description

Mallikarjuna

Mobile:

E-Mail:

OBJECTIVE

A skilled and talented IT professional with diverse experience in databases, data warehousing, business intelligence suites, and cloud technologies, seeking an environment that enhances my knowledge and career while striving for excellence.

SUMMARY

  • Around 4 years of experience in the IT industry.
  • Around 3 years of experience in Microsoft Azure technologies.
  • Working on Azure cloud technologies: Azure Data Factory (ADF) and Azure Databricks (ADB) with SQL and PySpark.
  • Strong knowledge of business intelligence ETL development using Microsoft SSIS.
  • Experience in writing T-SQL objects such as stored procedures, indexes, views, joins, temporary tables, and user-defined functions.
  • Strong experience with Azure Data Lake, Azure Data Factory, Azure SQL Data Warehouse, Azure Blob Storage, and Azure Storage Explorer.
  • Designed and developed data ingestion pipelines from on-premises sources into different layers of ADLS using Azure Data Factory (ADF V2).
  • Hands-on experience with Azure Data Factory core concepts such as datasets, pipelines, activities, scheduling, and execution.
  • Experience integrating data from multiple data sources.
  • Worked extensively with the Copy Data activity.
  • Implemented a single dynamic pipeline that extracts multiple files into multiple targets (a minimal sketch follows this list).
  • Implemented control-flow activities (Copy, Execute Pipeline, Get Metadata, If Condition, Lookup, Set Variable, Filter, and ForEach) for on-cloud ETL processing.
  • Experienced in ADF data migration projects from on-premises systems to the Azure cloud.
  • Completed activities in compliance with delivery schedules.
  • Able to quickly learn new concepts and technologies.
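
The dynamic single-pipeline pattern mentioned above is metadata-driven: a list of source files and targets drives one generic copy flow. Below is a minimal PySpark sketch of the same idea as it might look in a Databricks notebook; the storage paths and table names are hypothetical placeholders, and an actual ADF pipeline would use parameterized datasets with Lookup and ForEach activities instead.

    # Hypothetical sketch of a metadata-driven "one pipeline, many files/targets" pattern.
    # Paths, storage account, and table names are placeholders, not taken from the projects below.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("dynamic-ingest-sketch").getOrCreate()

    # In ADF this list would typically come from a Lookup activity (a config table or JSON file).
    ingest_config = [
        {"source": "abfss://raw@examplestore.dfs.core.windows.net/sales/*.csv", "target": "staging.sales"},
        {"source": "abfss://raw@examplestore.dfs.core.windows.net/vendors/*.csv", "target": "staging.vendors"},
    ]

    # The loop plays the role of ADF's ForEach activity; each read/write pair mirrors a Copy activity.
    for item in ingest_config:
        df = (spark.read
                   .option("header", "true")
                   .option("inferSchema", "true")
                   .csv(item["source"]))
        df.write.mode("overwrite").saveAsTable(item["target"])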

TECHNICAL SKILLS

Operating System

Windows XP, 7, 10; Windows Server 2003, Windows Server 2008

Database Tools

SQL Server 2016, 2019; SQL Server Management Studio (SSMS)

Query languages

SQL, PySpark

Cloud Technologies

Azure Synapse Analytics, Azure SQL Database, Azure Data Factory (ADF), Azure Databricks, ADLS Gen1 & Gen2, Blob Storage

WORK EXPERIENCE:

    • Currently working as a Data Engineer at Wipro Technologies, Bengaluru, since Sept 2018.

EDUCATION QUALIFICATION

  • B.Tech in Electronics and Communication Engineering (ECE) from Nagole Institute of Technology & Science, JNTUH University

PROJECT DETAILS

Project 1: US Foods

Role: Data Engineer

Technologies: Azure Synapse Analytics, ADLS Gen2, Azure Databricks notebooks, Azure SQL Database, PySpark

Client: Wipro Limited

Team Size: 8

Duration: Jan 2021 – till date

Responsibilities:

    • Understanding the business and its requirements.
    • Created Azure Synapse Analytics workspaces and worked within them.
    • Analyzed data with both serverless and dedicated SQL pools.
    • Analyzed data with Apache Spark pools (a minimal PySpark sketch follows this list).
    • Analyzed data in different storage services such as ADLS Gen2 and Azure SQL Server.
    • Implemented pipelines and notebooks in Azure Synapse Analytics.
    • Monitored pipeline runs and notebook logs in Azure Synapse Analytics.
    • Created and worked with external tables in Azure Synapse Analytics.
    • Created Azure Data Factory pipelines to copy data from different sources into an Azure Blob container as CSV files, then loaded the same into Azure SQL Data Warehouse tables.
    • Used activities such as If Condition, ForEach, Get Metadata, Stored Procedure, Lookup, Execute Pipeline, and Copy in pipeline development.
    • Created stored procedures and views to implement business logic.
    • Used ARM template export and import for ADF code migration to higher environments (e.g., Dev to PRD).
    • Scheduled ADF pipelines by creating schedule triggers.
    • After each data load, validated data between the source, Azure SQL Data Warehouse, and reports, and prepared documentation for the same.
    • Developed the project following the Agile Scrum methodology.
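
As an illustration of the Spark-pool analysis and CSV-to-warehouse flow described above, the following is a minimal PySpark sketch as it might appear in a Synapse notebook. The storage account, container, file, column, and table names are hypothetical placeholders, not details of the US Foods project.

    # Hypothetical sketch: read CSV files landed in ADLS Gen2 by the ADF copy pipeline,
    # compute a simple reconciliation summary, and persist it for downstream validation.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("synapse-analysis-sketch").getOrCreate()

    # Placeholder ADLS Gen2 path written by the Copy activity.
    orders = (spark.read
                   .option("header", "true")
                   .option("inferSchema", "true")
                   .csv("abfss://curated@examplestore.dfs.core.windows.net/orders/*.csv"))

    # Row counts and totals per load date, useful when reconciling
    # source vs. Azure SQL Data Warehouse vs. report figures.
    summary = (orders.groupBy("load_date")
                     .agg(F.count("*").alias("row_count"),
                          F.sum("order_amount").alias("total_amount")))

    summary.write.mode("overwrite").saveAsTable("staging.orders_load_summary")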

Project 2: Altria Complete Data Process

Role: Associate Data Engineer

 

 