OnBenchMark

Ujjwal (RID : c94qleh01clz)

Designation : Data Engineer

Location : Jaipur

Experience : 9 Years

Rate : $24 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
Data Engineering, Power BI, AWS, MySQL
Description

PROFESSIONAL SUMMARY:

  • 9+ years of experience in designing, deploying, and operating highly available, scalable, and fault-tolerant systems using Big Data tools, Amazon Web Services (AWS), and Azure.
  • Working on distributed computing and Big Data processing with Hadoop, Apache Spark (Python), Apache Hive, Apache Pig, Apache HBase, Sqoop, and MySQL.
  • Hands-on experience with the BI tools Tableau Desktop and Power BI.
  • Hands-on experience with Kubernetes on GCP.
  • Hands-on experience with Databricks and Delta Lake.
  • Hands-on experience with Pandas, NumPy, and Matplotlib.
  • Experienced with event-driven and scheduled AWS Lambda functions to trigger various AWS resources.
  • Highly skilled in deployment, data security, and troubleshooting of applications using AWS services.
  • Worked on different Hadoop distributions such as Cloudera and Hortonworks.
  • Experience in importing and exporting data with Sqoop between HDFS and relational database systems.
  • Experienced in working with Big Data and the Hadoop Distributed File System (HDFS). Expertise in SQL programming with hands-on experience in Oracle.

  • Experienced with version control and source code management tools such as Git, SVN, and Bitbucket.
  • Basic knowledge of graph databases (Neo4j) and the Cypher query language.

 

WORK EXPERIENCE:

XYZ December 2019 - Present

Associate Technical Lead - Big Data

Umbrella Infocare February 2018 - December 2019

Big Data Developer - Big Data

Artech Infosystem May 2017 - December 2017

Associate Technology - Big Data

Mymind Infotech December 2015 - May 2017

Hadoop Developer - Big Data

Teleperformance February 2014 - November 2015

Data Analyst - Analyst

 

TECHNICAL SKILLS:

Programming Languages : Python, Scala, Java

Big Data Technologies : Apache Hadoop, Apache PySpark, Apache Kafka, Apache Hive, Apache HBase, Apache Pig, Sqoop, Apache Flume, Spark SQL, Delta Lake

Big Data Cluster Managers : Cloudera, HDP, EMR, Databricks

Databases : MySQL, HBase, Redshift, Vertica, Snowflake, DynamoDB

BI Tools : Tableau Desktop, Power BI

Workflow Management : Apache Airflow, Apache NiFi

CERTIFICATIONS:

  • AWS Certified Developer Associate

Credential id - 31WLB83C1BFEQD5F

  • AWS Certified Big Data - Specialty

Credential id - J1YFB5J1DFBE1JKM

  • Microsoft Azure Data Fundamentals (DP-900)
  • Oracle Cloud Infrastructure Foundations 2020 Certified Associate
  • Scrum Fundamentals Certified (SFC)

Credential id - 777523

  • Databricks Certified Data Engineer Associate

Credential ID - 53617629

  • Databricks Certified Associate Developer for Spark 3.0

Credential ID - 64712243

  • Databricks Certified Data Analyst Associate

Credential ID - 61142221

PROJECTS HIGHLIGHTS:

Share DWH (May 20 - Now)

About client

Dubai based client having business in multiple domains

Product Overview

Building a data warehouse for the Share app, a loyalty program that manages points earned from transactions across multiple business domains.

Technology / Tools

Airflow, Spark, Delta Lake, Databricks, Snowflake, Kubernetes

Roles

Lead Data Engineer

Responsibilities

  • Create the data model for the data warehouse.
  • Create the Bronze, Silver, and Gold Delta Lake layers.
  • Create Databricks pipelines and jobs.
  • Write Spark jobs for data processing.
  • Perform data validation, data transformation, and data extraction.
  • Manage the team and communicate with clients and stakeholders.
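The Bronze/Silver/Gold layering above is typically implemented as successive ingestion, cleaning, and aggregation passes over Delta tables. A minimal stdlib-only sketch of that flow, using plain lists of dicts in place of Spark DataFrames and Delta tables (the field names "user_id" and "points" are illustrative, not the project's actual schema):

```python
# Minimal sketch of a medallion (Bronze/Silver/Gold) pipeline.
# Lists of dicts stand in for Spark DataFrames / Delta tables.

def bronze_ingest(raw_records):
    """Bronze: land raw transaction events as-is."""
    return list(raw_records)

def silver_clean(bronze):
    """Silver: drop malformed rows and normalize types."""
    cleaned = []
    for rec in bronze:
        if rec.get("user_id") is None or rec.get("points") is None:
            continue  # discard malformed events
        cleaned.append({"user_id": str(rec["user_id"]),
                        "points": int(rec["points"])})
    return cleaned

def gold_aggregate(silver):
    """Gold: business-level aggregate - total loyalty points per user."""
    totals = {}
    for rec in silver:
        totals[rec["user_id"]] = totals.get(rec["user_id"], 0) + rec["points"]
    return totals

raw = [{"user_id": 1, "points": "10"},
       {"user_id": 1, "points": 5},
       {"user_id": None, "points": 3},   # malformed, dropped at Silver
       {"user_id": 2, "points": 7}]
print(gold_aggregate(silver_clean(bronze_ingest(raw))))  # {'1': 15, '2': 7}
```

In a real Databricks job, each stage would read from and write to its own Delta table, which is what gives the Silver and Gold layers reproducibility and time travel.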

Photovoltaic Inverter Power Forecasting (Jan 20 - Apr 20)

About client

Skytron is a Germany-based company that operates in the energy domain.

Product Overview

Building a prediction model for forecasting the AC power output of inverters.

Technology / Tools

LSTM, Keras, TensorFlow, Python, Matplotlib, JupyterLab

Roles

Data Scientist

Responsibilities

  • Created an ML model for power forecasting.
  • Used LSTM layers in TensorFlow/Keras to build the model.
  • Performed data validation, data transformation, and data extraction.
  • Used Matplotlib for visualization.
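Training an LSTM on a power time series starts by windowing the readings into (history, next-value) pairs, which is the data-transformation step listed above. A small stdlib-only sketch of that preprocessing (the window length of 3 and the sample readings are illustrative):

```python
# Sliding-window preprocessing for sequence models such as an LSTM:
# turn a univariate AC-power series into (history, next-value) pairs.

def make_windows(series, window=3):
    """Return (input_window, target) pairs for supervised training."""
    pairs = []
    for i in range(len(series) - window):
        pairs.append((series[i:i + window], series[i + window]))
    return pairs

ac_power = [0.0, 1.2, 2.5, 3.1, 2.8, 1.4]  # illustrative inverter readings
for x, y in make_windows(ac_power):
    print(x, "->", y)
# [0.0, 1.2, 2.5] -> 3.1
# [1.2, 2.5, 3.1] -> 2.8
# [2.5, 3.1, 2.8] -> 1.4
```

The resulting pairs would then be reshaped to the (samples, timesteps, features) layout that Keras recurrent layers expect before fitting the model.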

Rummy ETL (Sep 19 - Dec 19)

About client

Junglee Games is a Gurgaon b

 