
SAPNA (RID : 6v6vlv2p139o)

Designation : Data Engineer

Location : Mumbai

Experience : 3 Years

Rate : $13 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills




Experienced Data Analyst and Engineer with 3 years of expertise in developing data models and optimizing infrastructure. Seeking roles where I can leverage my comprehensive skill set in data analytics, data engineering, machine learning, and AWS services to drive business growth and innovation.



Software/Tools:             Python, Tableau, Power BI, MongoDB, PySpark, PostgreSQL, Jenkins, Git, GitLab, Postman, Confluence, Jira, Golang, ScyllaDB, ChatGPT, Docker, Machine Learning, Deep Learning, OpenAI


Cloud Platforms:             AWS, GCP



Data Analyst and Engineer                                                                                                                                                                      Jul 2021 - June 2023


Role Overview: Contributed to the development and optimization of data-driven solutions aimed at enhancing sales acceleration and infrastructure functionality within the organization. Collaborated with cross-functional teams to implement robust data models, analytics frameworks, and scalable data infrastructure.

  • Data Model Development and Segmentation: Contributed to the development of a comprehensive data model for prospect segmentation in the US and Canada regions, leveraging Python, SQL, and machine learning algorithms, with the objective of optimizing prospect classification to maximize conversion rates and sales outcomes.
  • Algorithm Development and Implementation: Implemented multi-armed bandit logic in Python, facilitating efficient prospect segmentation and identification of look-alike prospects across regions. Developed and deployed a similarity search algorithm to improve targeting efficiency and increase conversion probabilities.
  • ETL Process Optimization and Data Validation: Led the development of Extract, Transform, Load (ETL) processes using PySpark for data ingestion and transformation into the Data Lake infrastructure. Ensured seamless data flow from source to destination by scheduling and monitoring Glue jobs and event-driven workflows. Heavily involved in data validation, identifying and resolving discrepancies.
  • Infrastructure Development and Automation: Contributed to the establishment of a robust Data Mesh infrastructure, standardizing data formats and access methods across applications and machine learning models. Developed Lambda functions to streamline data retrieval and implemented notification systems using SQS to alert users of query execution statuses through API Gateway.
  • Dashboard Development: Designed and implemented dashboards in Power BI to provide stakeholders with insights into ML model performance and conversion rates, facilitating data-driven decision-making processes.
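The multi-armed bandit logic described above can be sketched as a minimal epsilon-greedy loop, where each arm is a prospect segment and a conversion is a reward of 1. This is an illustrative assumption, not the production implementation; the segment count, epsilon value, and simulated conversion rates are all hypothetical:

```python
import random

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy multi-armed bandit: each arm is a
    prospect segment; reward is 1 for a conversion, 0 otherwise."""

    def __init__(self, n_arms, epsilon=0.1, seed=None):
        self.epsilon = epsilon
        self.counts = [0] * n_arms    # pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm
        self.rng = random.Random(seed)

    def select_arm(self):
        # Explore with probability epsilon, otherwise exploit the
        # arm with the highest empirical conversion rate.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update for the chosen arm.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]


if __name__ == "__main__":
    # Simulate three segments with different (hypothetical) true
    # conversion rates; the bandit should concentrate on the best one.
    true_rates = [0.05, 0.15, 0.30]
    bandit = EpsilonGreedyBandit(n_arms=3, epsilon=0.1, seed=42)
    sim = random.Random(0)
    for _ in range(5000):
        arm = bandit.select_arm()
        reward = 1 if sim.random() < true_rates[arm] else 0
        bandit.update(arm, reward)
    print("best segment:", max(range(3), key=lambda a: bandit.values[a]))
```

In practice the reward signal would come from observed conversions rather than a simulator, and the "look-alike" identification mentioned above would feed candidate prospects into the arms.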
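The data-validation side of the ETL work above can be illustrated with a small pure-Python sketch; the actual pipeline used PySpark and AWS Glue, and every record field here (`prospect_id`, `region`) is a hypothetical stand-in:

```python
def transform(rows):
    """Normalize raw prospect records: trim strings, default a missing
    region, and drop rows missing the required 'prospect_id' key."""
    cleaned = []
    for row in rows:
        if not row.get("prospect_id"):
            continue  # reject records with no primary key
        cleaned.append({
            "prospect_id": row["prospect_id"].strip(),
            "region": (row.get("region") or "unknown").strip().lower(),
        })
    return cleaned


def validate(source_rows, loaded_rows):
    """Reconcile source vs. destination: surface discrepancies
    instead of letting them reach the Data Lake silently."""
    issues = []
    rejected = len(source_rows) - len(loaded_rows)
    if rejected:
        issues.append(f"{rejected} row(s) rejected during transform")
    ids = [r["prospect_id"] for r in loaded_rows]
    if len(ids) != len(set(ids)):
        issues.append("duplicate prospect_id values in destination")
    return issues


raw = [
    {"prospect_id": " A1 ", "region": "US "},
    {"prospect_id": "", "region": "CA"},  # missing key -> rejected
    {"prospect_id": "B2"},                # missing region -> defaulted
]
loaded = transform(raw)
print(validate(raw, loaded))  # prints: ['1 row(s) rejected during transform']
```

The same reconciliation idea (row counts, key uniqueness) scales to Spark DataFrames, where the checks would run as a post-load Glue job.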

Golang Developer

Copyright© Cosette Network Private Limited All Rights Reserved
