
Chetan (RID: c94qlegruxwi)

Designation: Data Engineer

Location: Jaipur

Experience: 11 Years

Rate: $23 / Hourly

Availability: Immediate

Work From: Offsite

Category: Information Technology & Services

Key Skills
Data Engineer, ETL, SDLC, Oracle, Unix
Description

Chetan

Developer/Data Modeller/Data Analyst

A results-oriented and strategic data analyst with 11+ years of extensive experience executing diverse and challenging projects. An effective leader of experienced, high-performing teams, ensuring business goals and customer requirements are met in an agile and competitive work environment.

Key Accomplishments:

  • Currently working as a Senior Data Modeling/Engineering Lead for almost a year with Microsoft Azure services such as Azure Synapse Analytics, ADF, Databricks, ADLS, PySpark, and Python.
  • Experience in project coordination, client interaction, and tracking complete live project activities.
  • Well versed in debugging, problem analysis, issue resolution, and documentation.
  • Extensive knowledge of Teradata, Teradata utilities, Informatica, Oracle Warehouse Builder, Oracle, and Unix.
  • Extensive use of Teradata stored procedures and Teradata utilities such as FastLoad, BTEQ, MultiLoad, and FastExport.
  • Extensive experience in the ETL process: data sourcing, mapping, transformation, conversion, and loading.
  • Experience and strong understanding of all phases of the SDLC (software development life cycle).
  • 6 years' experience in data modeling, primarily on large-scale DWH.
  • Demonstrated expertise in logical and physical data modeling (ER and dimensional).
  • Demonstrated expertise in logical and physical data mapping for the development team.

Key Responsibility Areas:

  • Work closely with the business management team to understand the business and technical requirements for the project.
  • Prepare detailed estimates, scope of work, and cost estimates for different projects such as the GDPR2, DGSD, CAIS, and MDD regulatory projects in the UK.
  • Review estimates to ensure accuracy, completeness, and compliance with defined scope of work.

Technical Skills:

  • Erwin: Data Modeler, Data Mart
  • Teradata: TPT, TPump, FastExport, FastLoad, MultiLoad, BTEQ, Teradata stored procedures
  • Azure: Azure Synapse Analytics, Azure Data Factory, Azure Databricks, PySpark
  • Oracle: PL/SQL, Oracle Warehouse Builder, SQL*Loader
  • Unix: Shell Scripting; AIX: Tivoli Workload Scheduler
  • Languages (basics): Informatica Designer, DataStage Designer

Functional Skills:

  • Team leadership, business analysis, and development
  • Excellent communication and interpersonal skills

Certification:

  • NCFM Beginners Certified, External (NSDL)
  • AZ-900 Certified

Education:

  • B.E. (Electronics Engineering) from Kavikulguru Institute of Technology and Science, Ramtek, Nagpur, affiliated to RTMU, Nagpur
  • 12th from I.G.S.I College, Mandu, Ranchi, affiliated to Jharkhand Board
  • 10th from St. Teresa’s School, Bhagalpur, affiliated to I.C.S.E. Board

Industry:-

Supply Chain

Project:-

SCIP – Supply Chain Integration Project (E&Y)

Duration:-

Oct 2021 – Till Date

Project Details:-

SCIP: a product created to be used for multiple clients.

A supply chain is a network between a company and its suppliers to produce and distribute a specific product to the final buyer. This network includes different activities, people, entities, information, and resources. The supply chain also represents the steps it takes to get the product or service from its original state to the customer.

Companies develop supply chains to reduce their costs and remain competitive in the business landscape. It is divided into multiple groups such as Make, Move, and Supply.

Role/Title

Designer / Data Modeler / Developer

Activities

  • Data modeling/mapping for the EDW data model for the SCIP model.
  • Finding data sources from different data marts (SAP, DB2, Oracle, file systems, Hadoop) and developing data mappings.
  • Defined conceptual, logical, and physical data models and maintained the data dictionary.
  • Implemented the analytical data layer for Power BI to publish data on dashboards.
  • Re-architected SCIP using a PySpark approach instead of the traditional SQL approach to reduce data load time and latency.
  • Worked on writing PySpark jobs on Azure Synapse Analytics and Azure Databricks.
  • Created ADF ETL pipelines to run the PySpark jobs, using various ADF components such as Copy Activity, ForEach, Get Metadata, and Execute Pipeline.
  • Completely reformed the stored procedures written for each file load into a single PySpark job (a sketch of this consolidation follows below).
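The consolidation in the last activity can be illustrated with a brief sketch: one parameterized PySpark job driven by a file list, in place of one stored procedure per file. This is a minimal, hypothetical example; the file paths, table names, and schema below are illustrative assumptions, not the actual SCIP code.

    # Minimal sketch: a single parameterized PySpark load job replacing
    # per-file stored procedures. All paths and table names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("scip_file_load").getOrCreate()

    # One config entry per source file; previously each file had its own
    # stored procedure for loading.
    FILE_CONFIGS = [
        {"path": "/landing/sap/orders.csv", "target": "edw.orders"},
        {"path": "/landing/db2/shipments.csv", "target": "edw.shipments"},
    ]

    def load_file(path: str, target: str) -> None:
        """Read a landed file, add audit columns, and append to the target table."""
        df = (
            spark.read.option("header", True).csv(path)
            .withColumn("load_ts", F.current_timestamp())
            .withColumn("source_file", F.input_file_name())
        )
        df.write.mode("append").saveAsTable(target)

    for cfg in FILE_CONFIGS:
        load_file(cfg["path"], cfg["target"])

In an ADF pipeline, the same file list could drive a ForEach activity that triggers this job, consistent with the Copy Activity, ForEach, and Get Metadata components listed above.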
 