Hardip (RID : a49xldwlkg9h)

Designation : Sr. Data Engineer

Location : Hyderabad, India

Experience : 10 Years

Rate : $20 / Hourly

Availability : Immediate

Work From : Any

Category : Information Technology & Services

Key Skills
Azure Data Factory, Hadoop, Scala, Java, Python, Kafka, Databricks
Description

Hari S

Sr. Data Engineer-Azure

Designer, builder and manager of Big Data infrastructures

A collaborative engineering professional with substantial experience designing and executing solutions for complex business problems involving large scale data warehousing, real-time analytics and reporting solutions. Known for using the right tools when and where they make sense and creating an intuitive architecture that helps organizations effectively analyze and process terabytes of structured and unstructured data.

Competency Synopsis 

  • Over 10 years of IT experience in data-driven application design and development, including 3.5 years of hands-on experience with various Azure data services.
  • Proficient in Azure technologies such as Azure Data Factory (ADF), Azure Databricks (ADB), Azure Synapse Analytics, Azure Active Directory, Azure Storage, Azure Data Lake Storage (ADLS), Azure Key Vault, Azure SQL DB, and Azure HDInsight.
  • Good hands-on experience with Azure DevOps (ADO) services such as Repos, Boards, and Build Pipelines (CI/CD), plus Ansible (YAML scripting) for resource orchestration and code deployment.
  • Hands-on experience developing data engineering frameworks and notebooks in Azure Databricks using Spark SQL, Scala, and PySpark.
  • Experience with big data ingestion tools such as Kafka and Spark Streaming for real-time ingestion and Sqoop for batch ingestion (see the sketch after this list).
  • Experience with Apache Hadoop components such as HDFS, MapReduce, and Hive.
  • Experience with Microsoft Azure services including Data Factory, linked services, HDInsight clusters, Data Lake Storage Gen2, and Databricks.
  • Good knowledge of Azure Synapse Analytics.
  • Worked with big data distributions such as Hortonworks HDP 2.1 with Ambari.
  • Hands-on experience in application development using Java, RDBMS, and Linux shell scripting.
  • Hands-on experience with IDE and build tools such as Eclipse, NetBeans, and Maven.
  • Worked with Tableau to generate reports.
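To illustrate the Kafka plus Spark Streaming ingestion pattern referenced above, here is a minimal PySpark Structured Streaming sketch. The broker address, topic name, message schema, and storage paths are hypothetical placeholders, not details from any project below, and the Kafka source assumes the spark-sql-kafka connector is on the classpath.

```python
# Minimal sketch: consume JSON events from a Kafka topic and land them
# in ADLS Gen2 as Parquet. All endpoint names (broker, topic, paths)
# are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Assumed shape of each Kafka message payload.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream; 'value' arrives as bytes.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
       .option("subscribe", "events")                     # placeholder topic
       .load())

# Parse the JSON payload into typed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), event_schema).alias("e"))
          .select("e.*"))

# Write micro-batches to storage with checkpointing for fault tolerance.
query = (events.writeStream
         .format("parquet")
         .option("path", "abfss://container@account.dfs.core.windows.net/raw/events")  # placeholder
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .start())

query.awaitTermination()
```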

Professional Experience:

  • Working as an Azure Data Engineer at Deltacubes Technology since April 2021.
  • Worked as a Technical Lead at Wipro Limited, Hyderabad, India, from January 2019 to April 2021.
  • Worked as a Big Data Specialist at Verizon Data Services Pvt Ltd from May 2016 to December 2018.
  • Worked as a Senior Member Technical at ADP from October 2011 to May 2016.

Education:

  • Master of Computer Applications from JNTU Hyderabad, 2011.
  • Bachelor of Computer Science from Kakatiya University, 2008.

Technical Skills:

Azure: Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage (ADLS), Azure Blob Storage, Azure SQL DB, Azure Active Directory (AAD), Azure DevOps
Big Data Ecosystem: Hadoop, MapReduce, Pig, Hive, Kafka, YARN, Spark. 

Languages: Scala, Core Java, Python

Databases: Hive, HBase

Data Ingestion: Sqoop, Kafka, Spark Streaming

Data Visualization: Tableau


PROJECT Details:

PROJECT #1:

Project Name : Data Lake Data Engineering
Client : Communication and Media
Environment : Azure Data Factory, Azure Databricks, Azure SQL DB, ADLS Gen2
Duration : April 2021 - Present
Role : Sr. Azure Data Engineer

Description:

The Data Lake Technology Platform is a modern technology foundation delivered in a secure, hosted ecosystem. It integrates client data, industry-specific data feeds, and unique media capabilities in data analytics and advanced AI to deliver enhanced opportunities throughout the customer lifecycle.

Roles and Responsibilities:

  • Built ETL workflows using Azure Data Factory and Databricks with PySpark per business requirements, including extracting data from relational databases and loading it into Azure SQL DB (see the sketch after this list).
  • Extensively involved in writing PySpark transformations and pushing the results into ADLS Gen2.
  • Stored the resulting data in Azure SQL DB for Power BI and Spotfire consumption.
  • Migrated data from on-premises systems to Azure Cloud using Databricks.
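As a minimal sketch of the batch ETL pattern described in this list (JDBC extract, PySpark transform, load to ADLS Gen2 and Azure SQL DB): all connection strings, table names, column names, and paths below are hypothetical placeholders, not details from the project.

```python
# Minimal PySpark batch ETL sketch: extract from a relational source over
# JDBC, transform, then land the result in ADLS Gen2 and Azure SQL DB.
# Connection details, tables, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("batch-etl-sketch").getOrCreate()

# Extract: pull a source table over JDBC (placeholder connection details).
orders = (spark.read
          .format("jdbc")
          .option("url", "jdbc:sqlserver://source-db:1433;databaseName=sales")
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")
          .option("password", "****")  # in practice, fetch from Azure Key Vault
          .load())

# Transform: basic cleansing and typing in PySpark.
curated = (orders
           .filter(col("order_amount") > 0)
           .withColumn("order_date", to_date(col("order_ts"))))

# Load 1: write curated data to ADLS Gen2 as Parquet, partitioned by date.
(curated.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("abfss://curated@account.dfs.core.windows.net/sales/orders"))

# Load 2: publish to Azure SQL DB for Power BI / Spotfire consumption.
(curated.write
 .format("jdbc")
 .option("url", "jdbc:sqlserver://target-db.database.windows.net:1433;databaseName=reporting")
 .option("dbtable", "dbo.orders_curated")
 .option("user", "report_user")
 .option("password", "****")
 .mode("append")
 .save())
```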
 