No. of Positions: 1
Location: Other city
Tentative Start Date: May 21, 2023
Work From: Any Location
Rate: $10 - 15 (Hourly)
Experience: 3 to 7 Years
Dear Candidate,

We have an opening for a Data Engineer – Hive, Spark (Remote).

Position: Data Engineer
Qualification: Bachelor's degree in Computer Science or a related field
Number of Positions: 3
Office Location: Remote
CTC: As per market standard
Work Experience: 3 to 7 years
Selection Process: 2 rounds
Skills
● Solid Computer Science fundamentals, excellent problem-solving skills and a strong understanding of distributed computing principles.
● At least 3 years of experience in a similar role, with a proven track record of building scalable and performant data infrastructure.
● Expert SQL knowledge and deep experience working with relational and NoSQL databases.
● Advanced knowledge of Apache Kafka and demonstrated proficiency in Hadoop v2, HDFS, and MapReduce.
● Experience with stream-processing systems (e.g. Storm, Spark Streaming), big data querying tools (e.g. Pig, Hive, Spark) and data serialization frameworks (e.g. Protobuf, Thrift, Avro).
Job Description
We are looking for world-class talent to join a crack team of engineers, product managers and designers. We want people who are passionate about creating software that makes a difference to the world. We like people who are brimming with ideas and who take initiative rather than wait to be told what to do. We prize a team-first mentality, personal responsibility and the tenacity to solve hard problems and meet deadlines. As part of a small and lean team, you will have a very direct impact on the success of the company.
As a data engineer you will:
• Design, develop and maintain infrastructure for the streaming, processing and storage of data, and build tools for effective maintenance and monitoring of the data infrastructure.
• Contribute to key data pipeline architecture decisions and lead the implementation of major initiatives.
• Work closely with stakeholders to develop scalable and performant solutions for their data requirements, including extraction, transformation and loading of data from a range of data sources.
• Develop the team's data capabilities: share knowledge, enforce best practices and encourage data-driven decisions.
• Develop data retention policies and backup strategies, and ensure that the firm's data is stored redundantly and securely.