
Ajay (RID : 210ulp2nvm79)

Designation : Sr. Data Engineer

Location : Indore, India

Experience : 5 Years

Rate : $14 / Hour

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
Data Engineer, Python, AWS, AWS Glue, Lambda, API Gateway, Redshift, Kubernetes, GitHub, Jira, MySQL
Description

Ajay
Data Engineer
---------------------------------------------------------------------------------------------------------------------
SUMMARY
• 5+ years of software development experience
• Education: Bachelor of Engineering

● Maintain high standards of software quality within the team by establishing good practices and habits; 5+ years of experience working as a Python developer.
● Extensive experience in IT data analytics projects; hands-on experience migrating on-premise ETL pipelines.
● Experience handling the Python and Spark contexts when writing PySpark ETL programs.
● Diverse experience in all phases of the software development life cycle (SDLC), especially analysis, design, development, testing, and deployment of applications.
● Feature development and maintenance.
● Mentoring team members.
● Excellent communication and interpersonal skills; capable of learning new technologies quickly.
● Hands-on experience working with Docker, Kubernetes & Apache Airflow.
● Hands-on experience with AWS Glue, CodePipeline, MWAA, CloudFormation, SQS, Lambda, RDS, EventBridge, S3, EC2, CodeCommit, Athena, QuickSight, Kinesis, DynamoDB, API Gateway, SNS, CDK, Redshift.


CORE SKILLS

● Programming Languages / Frameworks / Libraries: Python, Pandas, NumPy, PySpark
● DBMS: PostgreSQL, MySQL, DynamoDB, RDS
● Cloud Technologies: AWS
● AWS Services: AWS Glue, MWAA, CloudFormation, SQS, Lambda, EventBridge, S3, EC2, Athena, API Gateway, SNS, CDK, Redshift
● Containerization and Deployment: Docker, Kubernetes
● Version Control & Project Management tools: Git, GitLab, GitHub, Jira, Trello




PROJECT UNDERTAKEN

Project Name : NTT Global
Role : Sr. Developer
Description : NTT believes in contributing to society through its business operations by applying technology for good. Its services help clients accelerate their growth and develop new business models. Involved in predictive modeling with ML algorithms on customer and contract data.
Technology Stack : Python, Redshift, EventBridge, MWAA.

Project Name : Job Target
Role : Sr. Developer
Description : JobTarget is a company dedicated to helping job seekers and employers connect. It is a team of nearly 500 people who are passionate about improving online job search, which has earned it the opportunity to serve thousands of companies and millions of job seekers each month.
Technology Stack : Python, AWS Glue, CloudFormation, SQS, Lambda, EventBridge, S3, EC2, Athena, API Gateway, SNS, Docker.
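The JobTarget stack above pairs Lambda with API Gateway. As a minimal illustrative sketch (not the actual project code), a Python Lambda handler for an API Gateway proxy integration receives the request as an `event` dict and returns a status code, headers, and a JSON body; the `title`/`company` fields here are hypothetical:

```python
import json


def lambda_handler(event, context):
    """Minimal API Gateway (proxy integration) Lambda handler sketch.

    Parses the JSON request body and echoes back a normalized record.
    Field names ("title", "company") are illustrative assumptions,
    not taken from the project description.
    """
    body = json.loads(event.get("body") or "{}")
    record = {
        "title": (body.get("title") or "").strip(),
        "company": (body.get("company") or "").strip(),
    }
    # API Gateway proxy integrations expect this response shape.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(record),
    }
```

In a proxy integration, API Gateway passes the raw request (including the body as a string) straight through, so the handler owns all parsing and response formatting.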

Project Name : Sales Monster
Role : Developer
Description : SalesMonster is a sales management app. It lets you easily organize your sales pipeline into simple chunks, helping you see exactly where every account and every lead is going. Users can create a new person, organization, deal, or activity in SalesMonster, and at the same time add a new product to a particular deal. SalesMonster provides an easy and convenient way to sell goods within a good sales-marketing environment. Responsible for developing ETL jobs.
Technology Stack : Python, AWS Services, Docker.


Project Name : Build an ETL Process with Clickstream Data
Role : Data Engineer
Description : Clickstream data is the information collected about a user while they browse a website. The company wanted to use clickstream data to understand user behavior on its website. Implemented an ETL process and generated a table that helps the company learn more about its user base.
Technology Stack : Python, PySpark, EMR, S3, EC2, Bitbucket, Docker, Kubernetes, Airflow.
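The project above ran on PySpark/EMR; as a plain-Python sketch of the core transformation such an ETL might perform (sessionizing raw click events into a per-user summary table), assuming hypothetical event fields `user_id`, `ts`, and `page` and a 30-minute session gap:

```python
from collections import defaultdict
from datetime import datetime

# Events further apart than this start a new session (assumed threshold).
SESSION_GAP_SECONDS = 30 * 60


def sessionize(events):
    """Group clickstream events into per-user sessions.

    `events` is an iterable of dicts with illustrative fields
    `user_id`, `ts` (ISO-8601 string), and `page`. Returns one
    summary row per user: session count and total page views.
    """
    by_user = defaultdict(list)
    for e in events:
        by_user[e["user_id"]].append(e)

    summary = []
    for user, evs in sorted(by_user.items()):
        evs.sort(key=lambda e: e["ts"])
        sessions = 1
        prev = datetime.fromisoformat(evs[0]["ts"])
        for e in evs[1:]:
            cur = datetime.fromisoformat(e["ts"])
            if (cur - prev).total_seconds() > SESSION_GAP_SECONDS:
                sessions += 1
            prev = cur
        summary.append({"user_id": user, "sessions": sessions, "page_views": len(evs)})
    return summary
```

In the actual stack this logic would be expressed as PySpark DataFrame operations (a window over `user_id` ordered by timestamp) so it scales across an EMR cluster; the pure-Python version only shows the shape of the output table.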




Project Name : Data Analysis on Chess Dataset
Role : Data Engineer
Description : The goal of this project was to use a large database provided by the client to predict the result of chess matches after twenty full moves (i.e., after the black player has moved), using all the characteristics and properties of the data captured in our data process. PySpark over an EMR cluster was used to process this database.
Technology Stack : Python, PySpark, EMR, S3, CodeCommit, CodePipeline, Athena, Airflow.
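One concrete feature a pipeline like this might derive from the position after move twenty is the material balance. A minimal sketch, assuming positions are stored as the piece-placement field of a FEN string (an assumption, not stated in the project description):

```python
# Conventional piece values; the king is not counted.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}


def material_balance(fen_board):
    """Material difference (white minus black) from the piece-placement
    field of a FEN string, e.g. "rnbqkbnr/pppppppp/8/.../RNBQKBNR".

    Uppercase letters are white pieces, lowercase are black; digits
    and '/' (empty squares and rank separators) are ignored.
    """
    balance = 0
    for ch in fen_board:
        value = PIECE_VALUES.get(ch.lower())
        if value is not None:
            balance += value if ch.isupper() else -value
    return balance
```

In the PySpark job this would run per row as a UDF (or equivalent column expression) to add a feature column alongside the other captured properties before model training.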

 

 
Copyright © Cosette Network Private Limited. All Rights Reserved.