
Data Engineer (GMT+8 time zone)

No of Positions: 1

Location: Hyderabad

Tentative Start Date: February 22, 2023

Work From: Any Location

Rate: $12 - 13 (Hourly)

Experience: 3 to 7 Years

Job Applicants: 8
Job Views: 439
Job Category: Information Technology & Services
Duration: Long-Term
Required Skills
NoSQL, Kafka/RabbitMQ, Spark, Hive, Hadoop
Description

Note: A GitHub score (or any related coding score) and a LinkedIn account link are mandatory.

 

We are looking for world-class talent to join a crack team of engineers, product managers, and designers. We want people who are passionate about creating software that makes a difference to the world. We like people who are brimming with ideas and who take initiative rather than wait to be told what to do. We prize a team-first mentality, personal responsibility, and the tenacity to solve hard problems and meet deadlines. As part of a small and lean team, you will have a very direct impact on the success of the company. As a data engineer you will:

• Design, develop, and maintain infrastructure for the streaming, processing, and storage of data. Build tools for effective maintenance and monitoring of the data infrastructure.

• Contribute to key data pipeline architecture decisions and lead the implementation of major initiatives.

• Work closely with stakeholders to develop scalable and performant solutions for their data requirements, including extraction, transformation, and loading of data from a range of data sources.

• Develop the team’s data capabilities: share knowledge, enforce best practices, and encourage data-driven decisions.

• Develop data retention policies and backup strategies, and ensure that the firm’s data is stored redundantly and securely.
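For illustration only, the extraction-transformation-loading work mentioned above can be sketched as a minimal pipeline. All names and record shapes here are hypothetical, not part of the role or any specific system:

```python
# A toy extract-transform-load (ETL) step. Illustrative only; real pipelines
# would read from and write to external systems (Kafka, HDFS, a warehouse).

def extract(rows):
    """Pretend source: yield raw records from an upstream system."""
    yield from rows

def transform(record):
    """Normalize a raw record: trim/lowercase the user, convert the amount to cents."""
    return {
        "user": record["user"].strip().lower(),
        "amount_cents": int(round(float(record["amount"]) * 100)),
    }

def load(records, sink):
    """Append transformed records to a destination (here, a plain list)."""
    for rec in records:
        sink.append(rec)

raw = [{"user": " Alice ", "amount": "12.50"}, {"user": "BOB", "amount": "3"}]
sink = []
load((transform(r) for r in extract(raw)), sink)
print(sink)  # normalized records with integer cent amounts
```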

Job requirements:

● Solid Computer Science fundamentals, excellent problem-solving skills, and a strong understanding of distributed computing principles.

● At least 3 years of experience in a similar role, with a proven track record of building scalable and performant data infrastructure.

● Expert SQL knowledge and deep experience working with relational and NoSQL databases.

● Advanced knowledge of Apache Kafka and demonstrated proficiency in Hadoop v2, HDFS, and MapReduce.

● Experience with stream-processing systems (e.g. Storm, Spark Streaming), big data querying tools (e.g. Pig, Hive, Spark), and data serialization frameworks (e.g. Protobuf, Thrift, Avro).

● Bachelor’s or Master’s degree in Computer Science or a related field from a top university.

● Able to work within the GMT+8 time zone.
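As a toy illustration of the kind of aggregation that stream-processing systems such as Spark Streaming perform at scale, here is a tumbling-window count in plain Python. The function name and event shape are hypothetical, purely for the sketch:

```python
# Illustrative only: count events per key in fixed (tumbling) time windows,
# the basic operation that stream processors distribute across a cluster.
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed windows and count per key."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs
        counts[window_start][key] += 1
    return {window: dict(per_key) for window, per_key in counts.items()}

events = [(0, "click"), (3, "click"), (5, "view"), (12, "click")]
print(tumbling_window_counts(events, 10))
# window [0, 10) holds two clicks and a view; window [10, 20) holds one click
```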

What we offer:

● An exciting and passionate working environment within a young and fast-growing company

● The opportunity to work with a high-performing team

● A competitive salary package

● The ability to work from anywhere in the world (assuming a stable internet connection)

● The chance to be a fundamental part of the team and make a difference

 

What will the process look like?

 

● Application: you will submit an online application form, which will take you less than 10 minutes to complete

● Test: you will take a 75-minute online test

● Interview: there will be 2 rounds of interviews

 

 

Qualifications

Bachelor’s or Master’s degree in Computer Science or a related field from a top university; candidates from premium institutes are preferred.

 

 

Copyright © Cosette Network Private Limited. All Rights Reserved.