
Phani (RID : qzyclk2e0g95)

Designation : Data Engineer

Location : Andhra Pradesh, India

Experience : 7 Years

Rate : $11 / Hour

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Shortlisted : 1
Total Views : 94
Key Skills
Data Engineer Python Spark Azure Hive Hadoop ETL
Description

Professional Summary:

  • A software engineering professional with 6 years 4 months of total experience, including 2.5 years as a Hadoop/PySpark developer using Big Data technologies such as the Hadoop and Spark ecosystems, with experience in Azure Databricks, Azure Data Factory, Azure Data Lake, and Azure Storage, including creating linked services, datasets, pipelines, and triggers.
  • Good knowledge of the Python programming language.
  • Designed various ingestion and processing patterns based on use cases in Delta Lake.
  • Experience in managing and storing confidential credentials in Azure Key Vault.
  • Built complex data ingestion/processing frameworks using Azure Databricks/Python/PySpark.
  • Managed data in Databricks delta tables across layers such as bronze, silver, and gold.
  • Hands-on experience in writing Databricks notebooks using PySpark.
  • Orchestrated end-to-end data integration pipelines using Azure Data Factory.
  • Worked on creating RDDs and DataFrames for the required input data and performed data transformations using Spark Core.
  • Good knowledge of SQL joins and subqueries.
  • Experience in writing complex queries and reporting mechanisms (SQL, PL/SQL).
  • Motivated to take independent responsibility, with the ability to contribute as a productive team member.
  • Excellent communication, presentation, and organizational skills.
  • Basic Knowledge of Shell scripting.
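The bronze/silver/gold Delta Lake layering mentioned above can be illustrated with a small helper for deriving per-layer storage paths. This is a minimal pure-Python sketch; the container, storage-account, and table names are hypothetical placeholders, not from the original resume.

```python
# Minimal sketch of a medallion (bronze/silver/gold) path convention
# for Delta tables in ADLS Gen2. Container, account, and table names
# below are hypothetical placeholders.

VALID_LAYERS = ("bronze", "silver", "gold")

def delta_table_path(layer: str, table: str,
                     container: str = "datalake",
                     account: str = "mystorageaccount") -> str:
    """Return the abfss:// path for a Delta table in the given layer."""
    if layer not in VALID_LAYERS:
        raise ValueError(f"unknown layer: {layer!r}")
    return f"abfss://{container}@{account}.dfs.core.windows.net/{layer}/{table}"

# Example: path for a silver-layer table
print(delta_table_path("silver", "bookings"))
# → abfss://datalake@mystorageaccount.dfs.core.windows.net/silver/bookings
```

In a Databricks notebook, a path built this way would typically be passed to `spark.read.format("delta").load(...)` or used as a write target for the next layer.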

Areas of Exposure:

  • Strong knowledge of HDFS, Spark, Python, Hive, Azure Databricks, Azure Data Factory, Azure Data Lake, and Azure Storage.
  • Good knowledge of SQL.
  • Good knowledge of Spark Streaming using Kafka.
  • Strong knowledge of Automation Anywhere v11 (RPA development).

Educational Qualification:

  • Bachelor of Engineering (2007), Anna University.

Career Profile:

  • Hoonar Tekwurks Private Limited: March 2022 to 16 June 2023.
  • HCL Global System (offshore): February 2018 to March 2022.
  • Adecco India: 14 March 2016 to 21 November 2017.

Professional Experience:

Current Project : Data Migration and Processing

Client : OYO Rooms

Role : Azure Data Engineer

Environment : Spark, Python, Spark SQL, Spark Core, Hadoop

Description:

OYO Rooms, also known as OYO Hotels & Homes, is an Indian multinational hospitality chain of leased and franchised hotels, homes, and living spaces. Founded in 2012, OYO initially consisted mainly of budget hotels. As of January 2020, it had more than 43,000 properties and 10 lakh (1 million) rooms across 800 cities in 80 countries, including India, Malaysia, the UAE, Nepal, China, Brazil, Mexico, the UK, the Philippines, Japan, Saudi Arabia, Sri Lanka, Indonesia, Vietnam, and the United States.

Responsibilities:

  • Loaded data into Spark DataFrames for processing such as cleaning checks, validation checks, and data quality assurance checks.
  • Primarily involved in data migration using SQL, SQL Azure, Azure Data Lake, and Azure Data Factory.
  • Responsible for extracting data from OLTP and OLAP sources into the Data Lake using Azure Data Factory and Databricks.
  • Used Azure Databricks notebooks to extract data from the Data Lake and load it into Azure and on-prem SQL databases.
  • Developed pipelines that extract data from various sources and merge it into single-source datasets in the Data Lake using Databricks.
  • Performed encryption of business-sensitive data in the Data Lake using a cipher algorithm.
  • Decrypted sensitive data using keys to produce refined datasets for analytics, providing end users access.
  • Gathered business requirements from different user groups.
  • Ingested data from disparate sources including, but not limited to, Workday, Tenrox, and SharePoint.
  • Wrote notebooks to process data using Spark.
  • Designed various ingestion and processing patterns based on use cases.
  • Implemented a complete end-to-end data process in Azure Delta Lake.
  • Converted on-premises stored procedures into Spark DataFrames.
  • Built complex data ingestion/processing frameworks using Azure Databricks/Python/PySpark.
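The cleaning and validation checks described in the responsibilities above can be sketched in pure Python (in PySpark the same rules would typically be expressed as DataFrame filters or `when`/`otherwise` columns). The column names and rules below are hypothetical examples, not taken from the actual project.

```python
# Pure-Python sketch of row-level data-quality checks of the kind
# applied before loading records into refined datasets.
# Column names and rules are hypothetical examples.

def validate_row(row: dict) -> list:
    """Return the list of failed-check names for one record."""
    failures = []
    if not row.get("booking_id"):
        failures.append("missing_booking_id")
    if row.get("amount") is not None and row["amount"] < 0:
        failures.append("negative_amount")
    if row.get("city") is not None and not row["city"].strip():
        failures.append("blank_city")
    return failures

def split_clean_and_rejected(rows: list) -> tuple:
    """Partition rows into clean records and (row, reasons) rejects."""
    clean, rejected = [], []
    for row in rows:
        failures = validate_row(row)
        if failures:
            rejected.append((row, failures))
        else:
            clean.append(row)
    return clean, rejected

# Example: one clean record, one that fails two checks
rows = [
    {"booking_id": "B1", "amount": 120.0, "city": "Delhi"},
    {"booking_id": "", "amount": -5.0},
]
clean, rejected = split_clean_and_rejected(rows)
```

Keeping the rejected rows together with the reasons they failed makes it easy to route them to a quarantine table for review instead of silently dropping them.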
 
Copyright© Cosette Network Private Limited All Rights Reserved