
Kaish (RID : pp3lnvea929)

Designation : AWS Data Engineer

Location : Noida

Experience : 7 Years

Rate : $14 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
AWS Data Engineer, AWS, DBT, ETL, SQL, Hadoop, AWS Redshift
Description

WORK EXPERIENCE:

Sr. Data Engineer (October 2022 - Present):

Currently working as a Sr. Data Engineer.

Environment: 

Hadoop, PySpark, MapReduce, AWS, DBT, ETL, SQL

Responsibilities:

As a dedicated data engineer, I have successfully executed projects leveraging AWS Redshift, ETL processes, DBT (Data Build Tool), and SQL to transform raw data into actionable insights. My work with AWS Redshift involved developing optimized data warehousing solutions, ensuring high performance and scalability for our analytical workloads. Using ETL methodologies, I crafted robust data pipelines that extracted data from various sources, transformed it into a structured format, and loaded it into Redshift.
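The extract-transform-load pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual project code: the source rows, table name, and schema are hypothetical, and sqlite3 stands in locally for Redshift.

```python
import sqlite3

# Extract: raw rows as they might arrive from a source system (illustrative data).
raw_orders = [
    {"order_id": "1", "amount": "19.99", "region": "north"},
    {"order_id": "2", "amount": "5.50",  "region": "south"},
    {"order_id": "3", "amount": "12.00", "region": "north"},
]

def transform(rows):
    """Transform: cast types and normalize values into a structured format."""
    return [
        (int(r["order_id"]), float(r["amount"]), r["region"].upper())
        for r in rows
    ]

def load(conn, rows):
    """Load: insert the transformed rows into the warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(conn, transform(raw_orders))
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

In a real Redshift pipeline the load step would typically be a bulk `COPY` from S3 rather than row-by-row inserts, but the extract/transform/load separation is the same.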

My proficiency in DBT allowed me to model and document data transformations efficiently, facilitating collaboration with data analysts and scientists. Throughout these projects, I utilized SQL to design and implement complex data transformations and aggregations, enabling data-driven decision-making for the organization. My experience with these technologies has not only strengthened my data engineering skills but has also empowered my teams to make data-driven decisions more effectively.
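The kind of SQL aggregation described above, which a DBT model would typically wrap as a versioned `SELECT` statement, can be sketched as follows. Table and column names are hypothetical, and sqlite3 again stands in for Redshift.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("NORTH", 100.0), ("SOUTH", 40.0), ("NORTH", 60.0)],
)

# Aggregate revenue per region -- the sort of transformation a DBT model
# materializes as a view or table for analysts downstream.
rows = conn.execute(
    """
    SELECT region, SUM(amount) AS total_amount, COUNT(*) AS n_orders
    FROM sales
    GROUP BY region
    ORDER BY total_amount DESC
    """
).fetchall()
# rows -> [("NORTH", 160.0, 2), ("SOUTH", 40.0, 1)]
```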

 

Monitored and troubleshot data loading from the data lake in Spark (Java).

Used AWS cloud services for PySpark development, including AWS EMR, AWS EC2, AWS S3, AWS MySQL, AWS Athena, AWS Glue, AWS Redshift, and AWS Lambda.
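As an illustration of how Lambda typically slots into such a pipeline, here is a minimal, hypothetical S3-triggered handler. The bucket/key parsing follows the standard S3 event shape; the handler itself and its return format are assumptions for illustration, not code from the actual project.

```python
def lambda_handler(event, context):
    """Hypothetical handler: extract bucket/key pairs from an S3 put event
    so a downstream step (e.g. a Glue job or Redshift COPY) can pick them up."""
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]  # standard S3 event notification structure
        objects.append(
            {"bucket": s3["bucket"]["name"], "key": s3["object"]["key"]}
        )
    return {"status": "ok", "objects": objects}

# Example invocation with a minimal S3-style event:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data"},
                "object": {"key": "2023/01/orders.csv"}}}
    ]
}
result = lambda_handler(sample_event, None)
```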
 

Optimized data processing workflows, improving efficiency by 30% and reducing latency by 20%.

Collaborated with cross-functional teams to integrate AWS Kinesis and AWS Glue with other AWS services such as Lambda, S3, and Redshift for end-to-end data processing and analytics.

Wrote multiple PySpark scripts to achieve the desired output as per the requirements.

 