
Data Engineer

No. of Positions: 2

Location: Pune

Tentative Start Date: September 17, 2022

Work From: Offsite

Rate: $7–16 (Hourly)

Experience: 3 to 4 Years

Job Applicants: 12
Job Views: 315
Job Category: Information Technology & Services
Duration: 6–12 Months
Required Skills
SQL, Python, Java, cloud computing (AWS, Azure)

1. Understand the business requirements and translate them into data services that solve the business and data problems

2. Develop and manage data pipelines (ETL/ELT jobs) and retrieve applicable datasets for specific use cases using cloud data platforms and tools

3. Explore new technologies and tools to design complex data modeling scenarios and transformations, and provide optimal data engineering solutions

4. Build data integration layers to connect with different heterogeneous sources using various approaches

5. Understand data and metadata to support consistency of information retrieval, combination, analysis, and reporting

6. Troubleshoot and monitor data pipelines to ensure high availability of the reporting layer

7. Collaborate with many teams, both engineering and business, to build scalable and optimized data solutions

8. Spend significant time enhancing technical excellence via certifications, contributions to blogs, etc.

1. Hands-on knowledge of and experience with SQL, Python, Java, or Scala programming

2. Experience with any cloud computing platform, e.g. AWS (S3, Lambda, Redshift, Athena), GCP (GCS, Cloud SQL, Spanner, Dataplex, BigLake, Dataflow, Dataproc, Pub/Sub, BigQuery), or Azure (Blob Storage, Synapse)

3. Experience with or knowledge of Big Data tools (Hadoop, Hive, Sqoop, Pig, Spark, Presto)

4. Data pipeline and orchestration tools (Oozie, Airflow, NiFi)

5. Any streaming engine (Kafka, Storm, Spark Streaming, Pub/Sub)

6. Any relational database or data warehousing experience

7. Experience with any ETL tool (Informatica, Talend, Pentaho, Business Objects Data Services, Hevo), or at least working knowledge of one

8. Good communication skills, the right attitude, open-mindedness, flexibility to learn and adapt to a fast-growing data culture, and proactive coordination with other teams to provide quick data solutions

9. Ability to work independently and strong analytical skills
