
Data Engineer

No of Positions: 1

Location: Noida

Tentative Start Date: October 14, 2022

Work From: Offsite

Rate: $8 - 16 (Hourly)

Experience: 3 to 5 Years

Job Applicants: 2
Job Views: 268
Job Category : Information Technology & Services
Duration: 6-12 Months
Required Skills
SQL, NoSQL, Hadoop v2, HDFS, Big Data
Description

Experience Required: 3+ years

● Able to work within the GMT+8 time zone.

Responsibilities:

• Design, develop and maintain an infrastructure for streaming, processing, and storage of data. Build tools for effective maintenance and monitoring of the data infrastructure.

• Contribute to key data pipeline architecture decisions and lead the implementation of major initiatives.

• Work closely with stakeholders to develop scalable and performant solutions for their data requirements, including extraction, transformation, and loading of data from a range of data sources.

• Develop the team’s data capabilities: share knowledge, enforce best practices, and encourage data-driven decisions.

• Develop data retention policies and backup strategies, and ensure that the firm’s data is stored redundantly and securely.

 

Requirements:

● Solid Computer Science fundamentals, excellent problem-solving skills, and a strong understanding of distributed computing principles.

● At least 3 years of experience in a similar role, with a proven track record of building scalable and performant data infrastructure.

● Expert SQL knowledge and deep experience working with relational and NoSQL databases.

● Advanced knowledge of Apache Kafka and demonstrated proficiency in Hadoop v2, HDFS, and MapReduce.

● Experience with stream-processing systems (e.g. Storm, Spark Streaming), big data querying tools (e.g. Pig, Hive, Spark), and data serialization frameworks (e.g. Protobuf, Thrift, Avro).

● Bachelor’s or Master’s degree in Computer Science or a related field from a top university.

 

 

Copyright© Cosette Network Private Limited All Rights Reserved