
Shivanu (RID: c94qlemkjq3f)

Designation: Data Engineer

Location: Ghaziabad, India

Experience: 3.2 Years

Rate: $9 / Hourly

Availability: Immediate

Work From: Offsite

Category: Information Technology & Services

Key Skills
Python, SQL, MySQL, Big Data, Cloud, PySpark, Datalake, Data Warehouse
Description

Shivanu

Current Role Description

Working as a Data Engineer with BrizSolution Technology, Pvt. Ltd., Noida.

Skill / Experience Summary

  • Over 3 years of experience in the IT industry with technologies such as Big Data, PySpark, Hive, Hadoop and Python.
  • Sound knowledge of the Big Data ecosystem, DWH, BI and ETL.
  • Experience in requirement analysis and client presentations.
  • Proficient in working on large-scale infrastructure and data platforms.
  • Knowledge of data best practices.
  • Good experience in database/data-warehouse operations.
  • Good experience in setting up data pipelines, exploring new tools and technologies, and running POCs on them.
  • Creates meaningful and error-free datasets for analytics and data science team members.
  • Experience in automating manual tasks using Python and shell scripting (a brief illustrative sketch follows this list).
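
A minimal sketch of the kind of task automation mentioned above, assuming a hypothetical landing directory that needs periodic cleanup (the paths, file pattern and retention window are placeholders, not details from this profile):

```python
# Sketch: replace a recurring manual file-cleanup task with a small Python script.
# SOURCE_DIR, ARCHIVE_DIR and RETENTION_DAYS are hypothetical placeholders.
import shutil
import time
from pathlib import Path

SOURCE_DIR = Path("/data/landing")    # assumed landing area for raw extracts
ARCHIVE_DIR = Path("/data/archive")   # assumed archive location
RETENTION_DAYS = 7                    # assumed retention window

def archive_old_files() -> None:
    """Move CSV files older than RETENTION_DAYS from the landing area to the archive."""
    cutoff = time.time() - RETENTION_DAYS * 24 * 3600
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    for path in SOURCE_DIR.glob("*.csv"):
        if path.stat().st_mtime < cutoff:
            shutil.move(str(path), str(ARCHIVE_DIR / path.name))

if __name__ == "__main__":
    archive_old_files()
```

A script like this can be scheduled (for example with cron), turning a manual step into an unattended job.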

Technology Experience

Technologies

Big Data, PySpark, Cloud, Datalake/Datawarehouse

Programming Languages

PySpark, Python, SQL, Shell scripting

Tools

Cloudera, PyCharm, Hue, Hive, Grafana, Trino

Domain Experience

Service Industry

Databases

MySQL

Education Summary

Degree    University                               Major and Specialization
B.Tech    UPTU                                     Mechanical Engineering
12th      Central Board of Secondary Education     Science
10th      Central Board of Secondary Education     Science

Experience Profile – Key Projects

Company: BrizSolution Technology, Pvt. Ltd., Noida

Project: Big Data Analytics

Duration: Jan 2020 – Till Date

Role: Software Engineer

Description: The project involves identifying KPIs for contact-center agents and building datasets to enable reporting on them. I work with the client and their engineering team to identify areas of improvement in their project, and I design and develop scalable, robust data engineering and data-insight solutions using PySpark, Hive, Python and the Big Data platform.
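
As an illustration of the dataset builds described above, the sketch below computes a simple per-agent KPI (average handle time) with PySpark and writes a curated Parquet dataset; the database, table, column names and S3 path are hypothetical, not taken from the project.

```python
# Illustrative PySpark sketch: build a per-agent KPI dataset from a Hive table.
# The table contact_center.call_logs and its columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("agent-kpi-dataset")
    .enableHiveSupport()
    .getOrCreate()
)

calls = spark.table("contact_center.call_logs")  # hypothetical source table

agent_kpis = (
    calls
    .filter(F.col("handle_time_sec") > 0)        # drop obviously invalid records
    .groupBy("agent_id", "call_date")
    .agg(
        F.count("*").alias("calls_handled"),
        F.avg("handle_time_sec").alias("avg_handle_time_sec"),
    )
)

# Persist the curated dataset for downstream reporting (path is a placeholder).
agent_kpis.write.mode("overwrite").partitionBy("call_date").parquet(
    "s3a://example-bucket/curated/agent_kpis/"
)
```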

Role:

  • Handling requirement gathering and client discussions for the project.
  • Working directly with the client on design improvements and implementations.
  • Responsible for developing data pipelines.
  • Developing and implementing use cases.
  • Planning the migration of tools and technologies from legacy systems to new designs.
  • Participating in the design of the datalake and data pipelines/ETL using existing and emerging technologies.
  • Communicating project status to different stakeholders.
  • Helping improve reliability and stability and tackling scalability challenges with engineering teams.

Tools & Technology:

Linux, Hadoop, PySpark, AWS S3, Python, Nifi, Hive, Grafana, Trino

 