Big Data Engineer

No. of Positions: 3

Location: Delhi

Tentative Start Date: August 15, 2020

Work From: Any Location

Rate: $8-11 (Hourly)

Experience: 3 to 6 Years

Job Category: Information Technology & Services
Duration: 3-6 Months
Required Skills: Hadoop, Big Data, Data Analysis, AWS or Azure, Data Modeling

1. 3+ years of development experience in at least one of MySQL, Oracle, PostgreSQL, or MSSQL, and with Big Data frameworks / platforms / data stores such as Apache Drill, Arrow, Hadoop, HDFS, Spark, MapR, etc.

2. Strong experience setting up data warehouses, data modeling, data wrangling, and dataflow architecture on the cloud.

3. 3+ years of experience with public cloud services such as AWS, Azure, or GCP, and with languages like Java or Python.

4. 3+ years of development experience with Amazon Redshift, Google BigQuery, or Azure data warehouse platforms preferred.

5. Knowledge of statistical analysis tools like R, SAS, etc.

6. Familiarity with any data visualization software.

7. A growth mindset and a passion for building things from the ground up; most importantly, you should be fun to work with.

As a data engineer, you will:

1. Create and maintain optimal data pipeline architecture.

2. Assemble large, complex data sets that meet functional / non-functional business requirements.

3. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

4. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.

5. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

6. Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

7. Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

8. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

9. Work with data and analytics experts to strive for greater functionality in our data systems.
