
Ayan (RID : pp3lnsm8vcg)

Designation : AWS Data Engineer

Location : Delhi, India

Experience : 8 Years

Rate : $15 / Hourly

Availability : 1 Week

Work From : Offsite

Category : Information Technology & Services

Key Skills : AWS, Python, Spark
Description

Ayan

Objective:
To work on the latest technologies for a reputed organization that provides a challenging and innovative environment with dedicated people, which will help me to explore myself fully and realize my potential.

EDUCATION:


Examination                      Year of Passing   Board/University        % or CGPA
Master of Computer Application   2015              Jadavpur University     70.77% / 7.7 CGPA
B.Sc (Mathematics Honors)        2012              University of Kalyani   62.75%
Higher Secondary (+2), Science   2009              WBCHSE                  70.2%
Secondary                        2007              WBBSE                   81.37%


PROFESSIONAL SUMMARY:


- Around 7 years 3 months of IT experience (and continuing), including 1 year 8 months in the present organization (Accenture) and 5 years 8 months in TCS. 3.4+ years of experience in the Hadoop environment and 1.8 years in the AWS environment, with 5+ years in PySpark in total.
- Working with various AWS compute services such as AWS EMR, Lambda, and AWS Glue.
- Using Terraform to manage AWS resources and Docker images to build and deploy additions, updates, and deletions of those resources.
- Good knowledge of HDFS and YARN architecture and aspects such as replication, rack awareness, and the master-slave model.
- Strong experience in processing big data using PySpark and Spark SQL (see the sketch after this list).
- Involved in end-to-end development projects in Agile methodology; responsible for understanding the source data, analyzing the architecture, understanding the transformation for each field, and writing Python/PySpark code to produce the output file/table.
- Exposure to the various stages of the Software Development Life Cycle: requirements analysis, project planning, design, coding, development, testing, deployment, and implementation.
- An effective communicator with excellent interpersonal, analytical, and client-serving abilities.
- Leading an AWS data engineering team and working with members to understand and resolve any issues they face in delivering their day-to-day work.
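
A minimal sketch of the kind of PySpark and Spark SQL batch processing described above. The bucket, file layout, column names, and filter condition are all hypothetical, chosen only to illustrate the pattern:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a Spark session (on EMR or Glue the session/configs would differ).
spark = SparkSession.builder.appName("batch-etl-sketch").getOrCreate()

# Read raw source data; the path and header option are illustrative only.
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Typical field-level transformations: cast, derive, and filter.
cleaned = (
    orders
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
    .filter(F.col("amount") > 0)
)

# Expose the same data to Spark SQL for aggregation.
cleaned.createOrReplaceTempView("orders_clean")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM orders_clean
    GROUP BY order_date
""")

# Write the prepared output back to S3 as Parquet.
daily.write.mode("overwrite").parquet("s3://example-bucket/prep/daily_orders/")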


SKILLS:

Big Data Tools/Services : AWS, EMR, AWS Glue, Lambda, S3, Athena, Step Functions, DynamoDB, Secrets Manager, CloudWatch, AWS Cost Explorer, ECS, SQS, SNS
Databases : MySQL, NoSQL
Other Tools : JIRA, Terraform, PyCharm, GitHub, Docker
Languages : Unix shell, SQL, HiveQL, Python, PySpark, Spark SQL

PROJECTS:

August 2021 to Present:
Organization : Northcorp Software Pvt Ltd
Client : BMW - BigData & AI
Role : Team Lead (AWS Data Engineer)
Technologies/Tools : AWS, PySpark, Terraform, JIRA, Docker, GitHub, PyCharm
Team Size : 8

- Worked on data migration from HDFS to AWS S3.
- Migrated Oracle, DB2, Kafka topic, and flat-file data to the AWS cloud using a PySpark framework.
- Worked with AWS Lambda, S3, CloudWatch, Athena, DynamoDB, EMR, AWS Glue, Step Functions, KMS, Secrets Manager, Security Hub, IAM, SQS, SNS, ECS, and EC2 to ingest streaming data in near real time and to ingest batch files on a schedule.
- Ingested source data into the raw layer, ran the ETL process on it, and landed the results in the preparation layer; finally built data assets on top of it, based on the various requirements and business logic from the data steward of the BMW account.
- Aligned with the client/data steward to gather requirements and, as per those requirements, designed the architectural view and specification document.
- Developed a Python Lambda function for ingesting near-real-time streaming data (a sketch follows this list).
- Developed AWS Glue jobs and AWS EMR clusters to process big data as scheduled batch jobs.
- Ensured deliverables were on time and met the acceptance criteria described in each JIRA story.
- Created Athena tables on top of the data in S3 buckets, using schemas from AWS Glue, to validate source data and final output data.
- Worked with different team members to resolve various join-condition and AWS-related issues and meet delivery timelines.
- Created JIRA stories as per client requirements and managed the team to deliver the committed JIRA story points in each sprint.
- Working in an Agile methodology and a DevOps culture.
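
A minimal sketch of a Python Lambda ingestion function of the kind mentioned above, assuming a Kinesis-style event source and an illustrative landing bucket. All names (bucket, key layout, environment variable) are hypothetical, not the actual BMW implementation:

import base64
import json
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
# Hypothetical landing bucket; in practice this would come from infra config.
RAW_BUCKET = os.environ.get("RAW_BUCKET", "example-raw-bucket")

def handler(event, context):
    """Write incoming streaming records to the S3 raw layer, one batch per invocation."""
    records = []
    for record in event.get("Records", []):
        # Kinesis-style events carry base64-encoded payloads.
        payload = base64.b64decode(record["kinesis"]["data"])
        records.append(json.loads(payload))

    if not records:
        return {"written": 0}

    # Partition by ingestion date so Athena/Glue tables can prune by dt.
    now = datetime.now(timezone.utc)
    key = f"raw/events/dt={now:%Y-%m-%d}/{context.aws_request_id}.json"
    s3.put_object(
        Bucket=RAW_BUCKET,
        Key=key,
        Body="\n".join(json.dumps(r) for r in records).encode("utf-8"),
    )
    return {"written": len(records), "key": key}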





April 2019 to August 2021:
Organization : Tata Consultancy Services
Client : Citi Bank
Role : Data Engineer
Technologies/Tools : Hadoop, HDFS, Cloudera, Hive, Unix, Spark, Python, Scala, Bitbucket, RLM, Jenkins, Autosys, SAS EG
Team Size : 8

- Understood the business objectives and converted models from SAS to Spark using Python or Scala (a sketch of such a conversion follows).
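
As an illustration of the SAS-to-Spark conversion work described above, a hedged sketch: a PROC MEANS style aggregation rewritten in PySpark. The dataset, path, and column names are hypothetical, not Citi's actual models:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas-to-spark-sketch").getOrCreate()

# The SAS original might look like:
#   PROC MEANS DATA=accounts MEAN SUM;
#     CLASS region;
#     VAR balance;
#   RUN;
accounts = spark.read.parquet("hdfs:///data/accounts/")  # illustrative path

# Equivalent PySpark: group by the CLASS variable, aggregate the VAR column.
summary = (
    accounts
    .groupBy("region")
    .agg(
        F.mean("balance").alias("mean_balance"),
        F.sum("balance").alias("total_balance"),
    )
)
summary.show()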
