
Kunal (RID : yb8plldqp39w)

Designation : Data Engineer

Location : Delhi, India

Experience : 8 Years

Rate : $15 / hour

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
Data Engineer, PL/SQL, PySpark, Python, Linux, MongoDB
Description

Kunal Sachdeva

PROFILE SUMMARY

 

  • 8+ years of hands-on experience in Information Technology using Big Data and analytics tools such as PySpark, Redshift, and Python, along with reporting.
  • Worked on developing applications in PostgreSQL, SQL, and stored procedures.
  • Provided AWS technical expertise, including strategic design, architectural mentorship, assessments, and POCs, in support of the overall sales lifecycle and consulting engagements.
  • Extensive experience pre-processing data from multiple sources using PySpark (see the sketch after this list).
  • Worked on containerizing applications using Docker & Kubernetes.
  • Built dashboards for log & metrics monitoring using Tableau.
  • Built RESTful APIs using Python.
  • Strong knowledge of Linux.
  • Ranked among the top 1.1% of all entrants in the nationwide All India Engineering Entrance Examination (AIEEE), held in 2011 for admission into the National Institutes of Technology.
  • Excellent written, verbal, and interpersonal communication skills.
  • Extensive experience working in an Agile development environment.
  • Handled client meetings, weekly status calls, code walkthroughs, review sessions with peers and onsite counterparts, mentoring of juniors, and minutes of meetings (MOMs).
  • Excellent ability to learn new technologies and adapt on the job.
  • Handled complex and big-ticket projects with proven expertise in resolving issues in a timely and sustainable manner.
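A minimal sketch of the kind of multi-source pre-processing with PySpark mentioned above; the sources, paths, connection details, and column names are hypothetical placeholders for illustration, not details from the actual projects.

```python
# Minimal sketch: combine two hypothetical sources (a CSV extract on S3 and a
# Postgres table) into one analytics-ready dataset. All names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("preprocess_example").getOrCreate()

# Source 1: raw order extracts landed as CSV.
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Source 2: customer master data read over JDBC (driver assumed available).
customers = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://example-host:5432/example_db")
    .option("dbtable", "public.customers")
    .option("user", "example_user")
    .option("password", "example_password")
    .load()
)

# Basic cleaning: de-duplicate, normalise types, fill missing amounts.
orders_clean = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("order_amount", F.col("order_amount").cast("double"))
    .na.fill({"order_amount": 0.0})
)

# Join the sources and write a curated Parquet dataset for downstream use.
curated = orders_clean.join(customers, on="customer_id", how="left")
curated.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
```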

 

 

EDUCATION

 

Bachelor of Technology (B.Tech) in Computer Science from the National Institute of Technology, Raipur (2011–2015).

 

TECHNOLOGY AND APPLICATIONS

 

  • Operating Systems: Windows, Linux.
  • Big Data Technologies: Spark, Kafka, Kinesis.
  • Databases: AWS Redshift, MySQL, SQL Server, PostgreSQL, T-SQL.
  • Cloud Tools: AWS, AWS Lambda.
  • Data Transformation: Pandas, PySpark, Spark SQL.
  • NoSQL Databases: MongoDB.
  • Programming Languages: Python.
  • Workflow Scheduler: Airflow.
  • Tools/IDE: IntelliJ.

EXPERIENCE

 

  • SQL Developer at ISYS Technologies, July 2015 – March 2017.
  • Business Technology Analyst at ZS Associates India Pvt. Ltd., April 2017 – July 2018.
  • Lead Business Analyst at Shiprocket, August 2018 – February 2023.

PROJECT DETAILS

Project 1: AWS Infrastructure Setup for BI and Reporting

 

 

 

Role: Sr. Data Engineer Consultant

Responsibilities:

  • Major projects: Smart AI Courier Assignment, Churn Prediction, Real-time BI Infrastructure Development, and ETD Prediction
  • Provided Business Intelligence/reporting solutions to different stakeholders
  • Redesigned the e-commerce logistics pipeline by designing 360-degree performance metrics at the client level
  • Developed a Smart AI courier assignment model using historical data
  • Responsible for defining, analysing, and communicating key metrics and business trends to the product and engineering teams
  • Modelled data to create reporting infrastructure/dashboards for business process management and data mining
  • Developed applications in Big Data technologies: PySpark, Python, AWS Glue, PostgreSQL, and T-SQL (a representative Glue job sketch follows this list).
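A minimal sketch of an AWS Glue PySpark job of the kind this reporting infrastructure could use; the job itself is an assumption for illustration, and the catalog database, table, column, and bucket names are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw shipment events from the Glue Data Catalog (hypothetical names).
shipments = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="shipment_events"
).toDF()

# Aggregate courier-level performance metrics for the reporting layer.
metrics = shipments.groupBy("courier_id").agg(
    {"delivered_on_time": "avg", "shipment_id": "count"}
)

# Write curated metrics to S3, where BI dashboards can pick them up.
metrics.write.mode("overwrite").parquet(
    "s3://example-reporting-bucket/courier_metrics/"
)
job.commit()
```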

Project 2: Events Management Data

 

Role: Sr. Business Analyst Consultant

Responsibilities:

  • Developed a fully automated Incentive Compensation system on the Javelin suite for quota setting, quota refinement, eligibility calculations, and incentive payouts for two business units.
  • Developed automated sales crediting, sales analysis, reporting, and data integration solutions using Amazon Redshift on the back end and MicroStrategy on the front end for visualization.
  • Gathered requirements and configured various business rules for field-level and HQ-level reports.

Project 3: Bidding Tool for Ads & Ad Network

 

Role: Spark & Backend Developer

Responsibilities:

  • Pre-processed the data using PySpark so that it could fit the company's existing systems
  • Developed a real-time bidding tool for ad networks that provides a complete ETL solution
  • Developed ETL pipelines to fetch and process large volumes of publishing & advertising data from the ad-tech industry
  • Created ETL pipelines to process the data and store it in Elasticsearch, with Kibana for visualization
  • Technology: GCP, Parquet files on GCS, PySpark, Google BigQuery (SQL); a representative sketch follows below
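A hedged sketch of one GCS-to-BigQuery step in a pipeline of the kind described above; the bucket, dataset, and column names are hypothetical, and the spark-bigquery connector is assumed to be available on the cluster.

```python
# Read ad-impression Parquet from GCS, aggregate, and load into BigQuery.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("adtech_etl_example").getOrCreate()

# Read raw ad-impression logs stored as Parquet on GCS.
impressions = spark.read.parquet("gs://example-bucket/raw/impressions/")

# Aggregate bid metrics per publisher and hour for the bidding tool.
hourly = (
    impressions
    .withColumn("hour", F.date_trunc("hour", F.col("event_ts")))
    .groupBy("publisher_id", "hour")
    .agg(
        F.count("*").alias("impressions"),
        F.avg("bid_price").alias("avg_bid"),
    )
)

# Write the aggregates to BigQuery via the Spark BigQuery connector.
(
    hourly.write.format("bigquery")
    .option("table", "example_project.adtech.hourly_bids")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("overwrite")
    .save()
)
```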

Project 4: Azure Data Factory and ML

 

Role: Data Analyst (Data Science team)

Responsibilities:

  • Created an Azure Data Factory pipeline to copy data from an Oracle DB to AWS web services.
  • There were two pipelines: one for full load and the other for delta load (the sketch after this list illustrates the pattern).
  • Azure Blob storage was used for intermediate storage, and Azure Table storage was used to store metadata.
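Azure Data Factory pipelines are defined in the ADF authoring UI or as JSON, so the following is only a conceptual Python sketch of the full-load vs. watermark-based delta-load pattern described above; the table, column, and timestamp values are hypothetical.

```python
import datetime as dt
from typing import Optional


def build_extract_query(table: str, watermark_column: str,
                        last_watermark: Optional[dt.datetime]) -> str:
    """Full load when no watermark has been stored yet; otherwise a delta
    load that only pulls rows changed since the last successful run."""
    if last_watermark is None:
        return f"SELECT * FROM {table}"
    return (
        f"SELECT * FROM {table} "
        f"WHERE {watermark_column} > TIMESTAMP '{last_watermark.isoformat(sep=' ')}'"
    )


# First run: no watermark stored, so the query is a full load.
print(build_extract_query("ORDERS", "LAST_MODIFIED", None))

# Later runs: only copy rows modified after the stored watermark.
print(build_extract_query("ORDERS", "LAST_MODIFIED",
                          dt.datetime(2024, 1, 1, 0, 0, 0)))
```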

Project 5: ETL Pipeline for Media Mix Modeling

Role: Sr. Data & ETL Engineer Consultant

Responsibilities:

  • Collected user touchpoints from different sources and stored them in Google BigQuery (GBQ)
  • Created the pipeline using Apache Beam and stored the data in SQL (a Beam sketch follows below)
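A minimal sketch of a touchpoint pipeline in Apache Beam's Python SDK, writing to BigQuery as in the first bullet above; the bucket, table, schema, and field names are hypothetical placeholders.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_touchpoint(line: str) -> dict:
    """Turn one JSON log line into a flat row for BigQuery."""
    event = json.loads(line)
    return {
        "user_id": event.get("user_id"),
        "channel": event.get("channel"),
        "event_ts": event.get("timestamp"),
    }


options = PipelineOptions(temp_location="gs://example-temp-bucket/tmp")
with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadLogs" >> beam.io.ReadFromText("gs://example-bucket/touchpoints/*.json")
        | "Parse" >> beam.Map(parse_touchpoint)
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "example_project:marketing.touchpoints",
            schema="user_id:STRING,channel:STRING,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```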

 

 