Consultant (Data Engineer)

No of Positions: 1

Location: Bengaluru

Tentative Start Date: July 22, 2022

Work From: Offsite

Rate: $12 - 16 (Hourly)

Experience: 3 to 7 Years

Posted: 23 days ago
Job Applicants: 13
Job Views: 48
Job Category: Information Technology & Services
Duration: 3-6 Months
Required Skills:
GCP (primary), ETL/ELT processes and tools, Big Data, ETL, PySpark (primary), Python (primary), SQL
Description

Objective

The Data Engineer will be responsible for expanding and optimizing our data and database architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing and building data systems. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure that an optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

Roles and Responsibilities:

➢ Good knowledge of GCP services such as Dataflow, BigQuery, Google Cloud Storage, Cloud Functions, Cloud Composer, etc.

➢ Good knowledge of working with access permissions, such as IAM.

➢ Understanding upstream data, designing efficient data pipelines, building data marts in BigQuery, and exposing the transformed data to downstream applications.

➢ Comfortable building and optimizing performant data pipelines covering data ingestion, data cleansing, and curation into a data warehouse, database, or any other data platform using PySpark.

➢ Experience with distributed computing environments and Spark architecture.

➢ Optimize performance for data access requirements by choosing the appropriate file formats (AVRO, Parquet, ORC, etc.) and compression codecs.

➢ Experience writing production-ready code in Python, with tests; participate in code reviews to maintain and improve code quality, stability, and supportability.

➢ Experience in designing data warehouses/data marts.

➢ Experience with any RDBMS, preferably SQL Server, and must be able to write complex SQL queries.

➢ Expertise in requirement gathering, technical design, and functional documents.

➢ Experience with Agile/Scrum practices.

➢ Experience leading other developers and guiding them technically.

➢ Experience deploying data pipelines using an automated CI/CD approach.

➢ Ability to write modularized, reusable code components.

➢ Proficient in identifying data issues and anomalies during analysis.

➢ Strong analytical and logical skills.

➢ Must be comfortable tackling new challenges and learning.

➢ Must have strong verbal and written communication skills.
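To illustrate the ingest-cleanse-curate flow described above, here is a minimal plain-Python sketch (on the job itself this would typically be done with PySpark DataFrames, which are omitted here so the sketch stays self-contained); the record shape and column names (`order_id`, `amount`) are hypothetical:

```python
# Hypothetical sketch of a cleansing step: drop null keys, deduplicate,
# and coerce types, flagging unparseable values rather than silently dropping rows.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrderRecord:
    order_id: str
    amount: Optional[float]  # None marks an anomaly for downstream review

def cleanse(raw_rows):
    """Drop rows with missing keys, deduplicate by order_id, coerce amounts."""
    seen = set()
    curated = []
    for row in raw_rows:
        order_id = (row.get("order_id") or "").strip()
        if not order_id or order_id in seen:
            continue  # reject null keys and duplicates
        seen.add(order_id)
        try:
            amount = float(row["amount"])
        except (KeyError, TypeError, ValueError):
            amount = None  # keep the row, flag the bad value
        curated.append(OrderRecord(order_id, amount))
    return curated

rows = [
    {"order_id": "A1", "amount": "19.99"},
    {"order_id": "A1", "amount": "19.99"},  # duplicate -> dropped
    {"order_id": "",   "amount": "5.00"},   # null key  -> dropped
    {"order_id": "B2", "amount": "oops"},   # bad value -> amount None
]
print(cleanse(rows))
```

In a PySpark pipeline the equivalent logic would be expressed with DataFrame operations (filters, `dropDuplicates`, casts) and the curated output written to the warehouse in a columnar format such as Parquet.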

Copyright 2022 OnBenchMark. All Rights Reserved.