
Data Engineer

No. of Positions: 1

Location: Chennai

Tentative Start Date: July 31, 2024

Work From: Onsite

Rate: $15 - $18 (Hourly)

Experience: 6 to 15 Years

Job Applicants: 6
Job Views: 180
Job Category: Information Technology & Services
Duration: 6-12 Months
Required Skills: Python, SQL, PySpark, Databricks, CI/CD, DevOps
Description

Note: Only vendors holding an ISO 27001 certificate and offering Databricks-certified candidates are eligible to apply.

Job description:

We are looking for self-motivated, responsive individuals who are passionate about data. You will build data solutions to address complex business questions, taking data through its full lifecycle: building pipelines for data processing, managing data infrastructure, and creating datasets and data products.

Core responsibilities:

  • Monitor and optimize the performance of data pipelines and queries, addressing any bottlenecks and improving efficiency.

  • Design, develop, and maintain robust data pipelines using SQL, Python, and PySpark.

  • Partner with business teams to understand their requirements, assess the impact on existing systems, and design and implement new data-provisioning pipeline processes for the Finance/External Reporting domains.

  • Monitor and troubleshoot operational or data issues in the data pipelines.

  • Drive architectural plans and implementation for future data storage, reporting, and analytics solutions.

  • Work closely with data analysts, data scientists, and other stakeholders to understand data needs and deliver high-quality data solutions.

  • Identify, troubleshoot, and resolve data-related issues, ensuring high availability and reliability of data systems.

Qualifications:

  • 5+ years of relevant work experience in data engineering or an equivalent software engineering role

  • 3+ years of experience implementing big data processing technologies: AWS/Azure/GCP, Apache Spark, Python

  • Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets

  • Hands-on experience with PySpark for big data processing, including data frame operations, Spark SQL, and Spark Streaming.

  • Proven experience with SQL, Python, and PySpark in a production environment.

  • Experience with big data technologies such as Hadoop, Spark, and Kafka is a plus.

  • Ability to monitor and troubleshoot data pipeline issues to ensure seamless data flow.

  • Working experience in a DevOps CI/CD environment.

  • A Databricks certification is important to apply for this profile.
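As a rough illustration of the SQL-optimization skill set listed above (not part of the posting itself), the sketch below uses Python's built-in sqlite3 module in place of a warehouse. The `events` table, its columns, and the sample rows are all invented for the example; the point is the pattern of indexing a filter column and running a reporting-style aggregation.

```python
import sqlite3

# In-memory database standing in for a warehouse table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event_type TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, "purchase", 10.0), (1, "refund", -10.0), (2, "purchase", 25.0)],
)

# Indexing the column used in the WHERE clause lets the planner avoid a full scan
# on large tables -- the kind of targeted optimization the role calls for.
conn.execute("CREATE INDEX idx_events_type ON events (event_type)")

# Aggregate purchase totals per user, as a reporting pipeline might.
rows = conn.execute(
    """
    SELECT user_id, SUM(amount) AS total
    FROM events
    WHERE event_type = 'purchase'
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()
print(rows)  # [(1, 10.0), (2, 25.0)]
```

In production the same query shape would run against Spark SQL or Databricks rather than SQLite, but the filter-index-aggregate pattern carries over.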

Copyright © Cosette Network Private Limited. All Rights Reserved.