
Arpit Raj (RID : 1gkbsl9wg7wv0)

Designation : Data Warehouse Engineer

Location : Bangalore, India

Experience : 11 Years

Rate : $16 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
Data Warehouse SQL Python Informatica MDM Unix Teradata



  • Total 11+ years of software development experience.
  • Excellent experience in data engineering and Big Data technologies, ETL with Informatica PowerCenter, and data/dimensional modeling.
  • Good experience in Python, machine learning algorithms, Spark, SQL, Unix, Teradata, Control-M, UC4, Informatica MDM, and Informix.
  • Good knowledge of databases such as Oracle SQL and SAP HANA.
  • Worked with Tata Consultancy Services Pvt. Ltd., Wipro Limited, and Quinnox. Currently working as a Consultant.
  • Worked on 8 development projects, implementing them end to end.
  • Domain knowledge across multiple areas: Reinsurance, Telecommunications, Retail, Life Science, and Energy.
  • Working in Agile methodology (Jira tool); also experienced with the Waterfall model.
  • Experience in various roles: Data Engineer, Team Lead, Developer, Support, and minor architect role.
  • Strong (Communication + Presentation + Negotiation) = Leadership

Personal Project Highlights:

  • Compared classification algorithms (k-NN, Naive Bayes, SVM, Decision Tree, and Logistic Regression) using K-Fold cross-validation.
  • Filtered ham and spam messages using a multinomial Naive Bayes classifier, Support Vector Classification, and a Logistic Regression model.
  • Performed data visualization with Matplotlib; analyzed the confusion matrix and word clouds for ham and spam messages.
  • Built a neural network model using Keras for classification.
  • Applied Singular Value Decomposition as a matrix decomposition method, reducing feature vectors from 1x501 to 1x6 for better accuracy.
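The highlights above can be sketched roughly as follows. This is an illustrative outline only: the toy ham/spam corpus, hyperparameters, and the 6-component SVD are placeholders standing in for the project's actual 1x501 feature vectors, not the original data or code.

```python
# Compare several classifiers with K-Fold cross-validation, then reduce
# the TF-IDF feature matrix with truncated SVD. All data is placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative ham/spam corpus (hypothetical, not the project's data).
ham = ["meeting at noon tomorrow", "lunch with the team today",
       "please review the design doc", "call me when you are free",
       "project status update attached", "see you at the standup"]
spam = ["win a free prize now", "claim your cash reward today",
        "free offer click now", "you won a lottery prize",
        "urgent reward claim now", "cash prize waiting for you"]
texts = ham + spam
labels = [0] * len(ham) + [1] * len(spam)

# K-Fold cross-validation over several classifiers.
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
models = {
    "MultinomialNB": MultinomialNB(),
    "SVC": SVC(kernel="linear"),
    "LogisticRegression": LogisticRegression(max_iter=1000),
}
scores = {}
for name, model in models.items():
    pipe = make_pipeline(TfidfVectorizer(), model)
    scores[name] = cross_val_score(pipe, texts, labels, cv=cv).mean()

# SVD-based dimensionality reduction, analogous to shrinking each
# feature vector from 1x501 down to 1x6 as described above.
tfidf = TfidfVectorizer().fit_transform(texts)
reduced = TruncatedSVD(n_components=6, random_state=42).fit_transform(tfidf)
print(scores, reduced.shape)
```

`TruncatedSVD` is used rather than a dense SVD because it operates directly on the sparse TF-IDF matrix without centering it.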


April 2018 to Present (Consultant)

Role: Data Engineer

Project Abstract:

We are building an enterprise data lake for the client, whose aim is to become a data-driven company. Data comes from various sources such as RDBMS and files, and we work end to end across the data ingestion life cycle. This data is used by the Reporting, Data Science, and Data Analyst teams and by business users to build dashboards, generate insights, and support effective decision-making. We use the Palantir Foundry platform, a cloud-based platform built on AWS that supports data ingestion, data analysis, ML/AI, and many more functionalities.


  • Coordinate with the client and understand customer requirements.
  • Follow Agile methodology (Jira tool).
  • Apply business logic and move data from RDBMS/files to the Palantir Cloud Ontology.
  • Provide high- and low-level technical ETL solutions.
  • Cleanse the data and apply SCD Type 1.
  • Use PySpark and Spark SQL within the Palantir tool.
  • Visualize the data in Contour using the Palantir tool.
  • Apply various transformations in the codework/usecase folder.
  • Primary aim is to have data in one place so that the business can make effective decisions.
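The SCD Type 1 step above (overwrite changed attributes, keep no history) can be sketched in plain Python. This is a minimal illustration of the merge logic only; the `customer_id`/`city` column names are hypothetical placeholders, and in practice the same upsert would be expressed in PySpark against Foundry datasets.

```python
# SCD Type 1: update matching business keys in place, insert new keys.
# No history rows are kept -- old attribute values are simply overwritten.
def apply_scd1(target, incoming, key="customer_id"):
    """Upsert incoming rows into target keyed by a business key."""
    merged = {row[key]: dict(row) for row in target}
    for row in incoming:
        merged[row[key]] = dict(row)  # Type 1: overwrite, no history
    return list(merged.values())

# Hypothetical example rows.
target = [{"customer_id": 1, "city": "Pune"},
          {"customer_id": 2, "city": "Delhi"}]
incoming = [{"customer_id": 2, "city": "Mumbai"},   # changed -> overwrite
            {"customer_id": 3, "city": "Chennai"}]  # new -> insert
result = apply_scd1(target, incoming)
```

After the merge, customer 2's city reads "Mumbai" with no record of "Delhi", which is exactly the Type 1 (as opposed to Type 2) behavior.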

Sep 2017 to April 2018 (Quinnox - Waste Management) Role: Senior Consultant

Project Abstract:

Waste Management, Inc. is an American waste management, comprehensive waste, and environmental services company in North America.


  • Apply business logic using Informatica transformations.
  • The business flow consists of 4 main steps: load data from Oracle into Netezza staging using Informatica; load the data into the Dim table via Netezza scripts; generate a simple XML file using Informatica; and update the various target tables.
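The "generate a simple XML file" step can be illustrated with Python's standard library in place of Informatica. This is a hedged sketch only: the `records`/`record` element names and the sample row are hypothetical, not the project's actual schema.

```python
# Serialize flat rows (e.g., from a staging table) into a simple XML file,
# analogous to the XML-generation step in the Informatica flow above.
import xml.etree.ElementTree as ET

def rows_to_xml(rows, root_tag="records", row_tag="record"):
    """Wrap each row dict in a <record> element under a <records> root."""
    root = ET.Element(root_tag)
    for row in rows:
        rec = ET.SubElement(root, row_tag)
        for col, val in row.items():
            ET.SubElement(rec, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

xml_str = rows_to_xml([{"id": 1, "site": "Houston"}])
# -> <records><record><id>1</id><site>Houston</site></record></records>
```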

May 2017 to Aug 2017 (Wipro - NV Energy) Role: Team Lead

Project Abstract:

NV Energy is working towards the goal of aligning the telecommunication business processes to Enterprise Work & Asset Management process model.

The purpose is to effectively manage inventory to optimize cost and visibility for both stocked and non-stocked items.


  • Led the development of the Informatica ETL, collaborating with developers, solution designers, and internal as well as external clients.
  • Migrated data from MS Access and provided the XML file to the Maximo/Ventyx/Reporting teams; loaded the data into Oracle staging/landing tables.
  • Prepared low-level design documents; mentored junior associates in the ETL tool.

Feb 2016 to May 2017 (Wipro – Singtel Optus) Role: Senior Software Engineer (and Architect)

Project Abstract:

The business purpose was to maintain correct and up-to-date data for its customers.

We extracted data from Teradata and files and loaded it into Oracle, applying business logic using Informatica.


  • Worked extensively on performance issues and performance tuning.