
MAYANK (RID : 6bz5ld7nbd15)

Designation : Data Engineer

Location : Delhi

Experience : 4 Years

Rate : $9 / Hourly

Availability : Immediate

Work From : Any

Category : Information Technology & Services

Shortlisted : 0
Total Views : 145
Key Skills
Python, MySQL, MongoDB, Flask, Power BI, GCP, ARIMA, NumPy, pandas
Description

AFZAL

Profile Summary

  • Data Engineer at Deltacubes Technology Pvt. Ltd. with 5 years of experience as an analytics professional working with different clients.
  • Strong experience in the Big Data ecosystem.
  • Hands-on experience with cloud platforms such as Google Cloud and AWS.

  • Google Cloud Certified Data Engineer.

  • Exceptional problem-solving and sound decision-making capabilities, coupled with an excellent track record of meeting deadlines and submitting deliverables on time.
  • Experience with Hadoop, Python, pandas, Google Cloud, Google BigQuery, Dataflow, and Dataprep as part of the Big Data stack, and familiarity with other components of the Big Data ecosystem (Spark, Hive, Spark SQL, PySpark); a brief BigQuery example in Python follows this list.
  • Efficient in conducting sessions for gathering requirements, resolving open issues, and understanding change requests.
  • Proactive team player, creating innovative and logical solutions to solve complex problems.
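
As a purely illustrative companion to the BigQuery experience listed above, the sketch below runs a simple aggregation query with the google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical placeholders, not taken from any actual engagement.

    from google.cloud import bigquery  # assumes the google-cloud-bigquery package is installed

    # Hypothetical project/dataset/table names, used only for illustration.
    client = bigquery.Client(project="my-gcp-project")

    query = """
        SELECT product_id, SUM(sales_volume) AS total_volume
        FROM `my-gcp-project.retail_ds.daily_sales`
        GROUP BY product_id
        ORDER BY total_volume DESC
        LIMIT 10
    """

    # Run the query and print the top products by volume.
    for row in client.query(query).result():
        print(row.product_id, row.total_volume)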

Professional Qualifications

Bachelor of Engineering (Computer Science and Engineering) from Andhra Loyola Institute of Engineering and Technology (JNTUK) Vijayawada, AP in 2015.

Skill Set

    • GCP – Google BigQuery, Pub/Sub, Cloud SQL, Dataproc, Cloud Dataflow, Cloud Composer, GCS, Cloud IAM
    • Python
    • Pandas (data analytics)
    • Databases – Oracle, SQL, PostgreSQL, PrestoDB
    • Hadoop, Hive, Impala, Spark, PySpark
    • AWS – Redshift, S3, Glue
    • Docker
    • Data Visualization – Power BI, Tableau, Looker
    • Git, JIRA

Professional Experience

Nov 2017 - Mar 2018, Data Engineer, Nielsen, Hyderabad, India

Nielsen - The objective of this project was to create a product price and volume validation platform on top of the Nielsen Data Exchange (NDX) for BUY data for the RF (South Asian), LATAM (Latin American), and CIP (European) data factories.

The platform is developed using Apache Spark and Hive. The existing validation system was designed in a legacy Oracle system for CIP countries; for LATAM and RF countries, validation is carried out manually by a dedicated team. The aim of the Zero Touch Validation project is to remove manual intervention from validation processing by using open-source technologies.

  • Converted PL/SQL functionality to Scala code.
  • Created Kudu tables and loaded them with data from Oracle.
  • Fine-tuned the performance of the applications.
  • Created Scala/Spark jobs for data transformation and aggregation (an illustrative PySpark sketch follows this list).
  • Understood customer requirements and analyzed data.
  • Identified key metrics and performance indicators.
  • Developed code as per customer requirements.
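
The transformation and aggregation work above can be illustrated with a minimal PySpark sketch (the production jobs were written in Scala). The input path, column names, and validation rule below are hypothetical and only convey the general shape of such a job.

    from pyspark.sql import SparkSession, functions as F

    # Minimal PySpark sketch of a price/volume aggregation and sanity check;
    # paths, columns, and thresholds are placeholders, not Nielsen specifics.
    spark = SparkSession.builder.appName("price_volume_validation").getOrCreate()

    raw = spark.read.parquet("/data/ndx/buy_facts")  # placeholder input path

    agg = (
        raw.groupBy("country", "period")
           .agg(F.avg("price").alias("avg_price"),
                F.sum("volume").alias("total_volume"))
    )

    # Flag rows whose average price falls outside a simple sanity range.
    validated = agg.withColumn(
        "price_ok", (F.col("avg_price") > 0) & (F.col("avg_price") < 10000)
    )

    validated.write.mode("overwrite").parquet("/data/ndx/validated")  # placeholder output path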

Apr 2018 - Nov 2019, Data Science, BMS Life Sciences, Hyderabad, India

Bristol Myers Squibb (BMS) is an American multinational pharmaceutical company. Headquartered in New York City, BMS is one of the world's largest pharmaceutical companies and consistently ranks on the Fortune 500 list of the largest U.S. corporations. For fiscal 2021, it had a total revenue of $46.4 billion.

Bristol Myers Squibb manufactures prescription pharmaceuticals and biologics in several therapeutic areas, including cancer, HIV/AIDS, cardiovascular disease, diabetes, hepatitis, rheumatoid arthritis, and psychiatric disorders.

  • Participated in requirement-gathering and business meetings to understand the requirements and translated them into a technical design document.
  • Responsible for extracting, transforming, and loading data from CSV and Parquet files.
  • Worked with EDL services for the BUS Rapid Data Factory project and demonstrated the use of EDL services in handling large volumes of data.
  • Transformed data using Python in EDL services into the BUS (client) required format.
  • Ingested processed and transformed data into the database.
  • Read input data from AWS S3 buckets using Python (a minimal sketch follows this list).
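
A minimal sketch of the Python-based extract/transform/load flow described above is shown below. The S3 path, column names, target table, and connection string are hypothetical, and it assumes s3fs/pyarrow and SQLAlchemy are available with AWS credentials configured.

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical S3 location; assumes s3fs and pyarrow are installed so pandas
    # can read s3:// paths directly.
    SOURCE = "s3://example-edl-bucket/raw/claims_2021.parquet"

    df = pd.read_parquet(SOURCE)

    # Illustrative transformation into a client-required layout:
    # rename columns and drop rows missing an identifier.
    df = (
        df.rename(columns={"clm_id": "claim_id", "clm_amt": "claim_amount"})
          .dropna(subset=["claim_id"])
    )

    # Ingest the transformed frame into a database table (placeholder connection string).
    engine = create_engine("postgresql://user:password@host:5432/analytics")
    df.to_sql("claims_clean", engine, if_exists="replace", index=False)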

Dec 2019 - May 2021, Data Science, Unilever, Hyderabad, India

Unilever - Unilever Plc (Unilever) is a manufacturer and supplier of fast-moving consumer goods. The company's product

 