
PRAKASAM (RID : 4kv2lpm9sm37)

Designation : AWS Data Engineer

Location : Vadodara

Experience : 20 Years

Rate : $16 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
AWS, Azure DevOps, Scala, SQL Server, Redshift, Databricks SQL Warehouse, ADLS
Description

Project Name

Corporate Analytics Engineering Core Team

Role

Data Engineer

 

Duration

1.5 Years

Description

The project is to develop a Data Engineering Framework named Fundamentum. The framework is rolled out as releases to the enterprise-wide Analytics team, and Data Engineers within the Analytics organization consume it to build solutions for Data Analytics products and projects.
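To illustrate what consuming such a framework release might look like, below is a minimal PySpark sketch in the style such a framework could expose on a Databricks cluster; the helper name, source path, and table names are hypothetical and are not the actual Fundamentum API.

# Minimal sketch (hypothetical): a framework-style helper a Data Engineer
# might call after installing a Fundamentum release on a Databricks cluster.
# The function name, source path, and target table are assumptions.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

def ingest_to_delta(spark, source_path: str, target_table: str) -> None:
    # Read raw files landed in ADLS, apply a basic sanity check,
    # and append the result to a Delta table.
    df = spark.read.format("parquet").load(source_path)
    if df.limit(1).count() == 0:  # stand-in for a Great Expectations suite
        raise ValueError(f"No rows found at {source_path}")
    df.write.format("delta").mode("append").saveAsTable(target_table)

# A consuming project would call the released helper like this:
ingest_to_delta(
    spark,
    source_path="abfss://raw@storageaccount.dfs.core.windows.net/sales/",  # hypothetical path
    target_table="analytics.sales_raw",                                    # hypothetical table
)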

Working Environment

PyCharm, PySpark, Azure Databricks, Azure Data Factory, Azure Log Analytics Workspace, Azure Monitor, Azure Key Vault, ADLS, Azure DevOps, SonarQube, Fossa, Great Expectations, Azure SQL Server, Azure Event Hub, Azure Stream Analytics, Apache Kafka, Scala, Databricks dbx, Delta Live Tables, Pytest, Pytest-bdd, Pylint, Hive, YAML, Redshift

 

Responsibilities:

  • Developed the Fundamentum framework and rolled it out as releases to the enterprise-wide Analytics team, so that Data Engineers across the Analytics organization can consume it to build solutions for Data Analytics products and projects.
 

Project Name

AAS Migration to Databricks SQL Warehouse POC

Role

Data Architect

Duration

3 Months

Description

The purpose of the project is to migrate the analytical solution currently running on Azure Analysis Services to Databricks SQL Warehouse. The activities involve building data pipelines, building external and internal (managed) Delta tables, integrating with Power BI, and creating and managing the SQL Warehouse and Data Engineering clusters.

 

Working Environment

Azure Databricks, Apache Spark, PySpark, Delta Tables, Databricks SQL Warehouse, ADLS, Power BI.

 

Responsibilities:

  • Built data pipelines, built external and internal (managed) Delta tables, integrated the solution with Power BI, and created and managed the SQL Warehouse and Data Engineering clusters, as illustrated in the sketch below.
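A minimal sketch of the Delta table work listed above, assuming Spark SQL on an Azure Databricks cluster; the schema, table names, and ADLS location are hypothetical examples, not the project's actual objects.

# Sketch: creating a managed (internal) and an external Delta table on Databricks.
# Table names, columns, and the ADLS path are hypothetical examples.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Managed (internal) table: Databricks controls the storage location.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.sales_managed (
        order_id BIGINT,
        amount   DECIMAL(18, 2),
        order_dt DATE
    ) USING DELTA
""")

# External table: data lives at an explicit ADLS location and survives DROP TABLE.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.sales_external (
        order_id BIGINT,
        amount   DECIMAL(18, 2),
        order_dt DATE
    ) USING DELTA
    LOCATION 'abfss://curated@storageaccount.dfs.core.windows.net/sales_external/'
""")

# Power BI would then typically query these tables through the Databricks SQL
# Warehouse endpoint (server hostname and HTTP path) rather than the cluster itself.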

 

 

Project Name

SpongeBob Phase 1 & 2

Role

Data Engineer

 

Duration

5 Months, plus maintenance & support.

Description

The purpose of the project is to build a Power BI dashboard for Trade Compliance Analytics. The dashboard consists of reports for Trade Details, Trade Detail Pivot, Trade Route, Trade Segment and Site Details, Tariff Growth (year to date), Brands, Mars Site Growth (year to date), Trade Values (invoices and their distribution), Trade Invoice Details, and YoY Variance Analysis. To build the dashboard, data pipelines are built in ADF, transformed in Databricks, and loaded into Azure SQL; the Arria Natural Language Generation tool is also integrated with Power BI.
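A minimal sketch of the Databricks transformation and load step described above, assuming the ADF pipeline lands data in ADLS and the result is written to Azure SQL over JDBC; all paths, column names, secret scope/key names, and table names are hypothetical.

# Sketch: transform data staged by ADF in ADLS and load it into Azure SQL.
# All paths, columns, secret names, and table names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read trade invoice data that an ADF copy activity landed in ADLS.
trades = spark.read.format("delta").load(
    "abfss://staging@storageaccount.dfs.core.windows.net/trade_invoices/"
)

# Example transformation: aggregate invoice values per trade route and year.
trade_summary = (
    trades
    .withColumn("invoice_year", F.year("invoice_date"))
    .groupBy("trade_route", "invoice_year")
    .agg(F.sum("invoice_value").alias("total_invoice_value"))
)

# Write the result to Azure SQL over JDBC for the Power BI dashboard.
# dbutils is available in Databricks notebooks/jobs for reading Key Vault-backed secrets.
jdbc_url = (
    "jdbc:sqlserver://myserver.database.windows.net:1433;"
    "database=trade_compliance"
)
(
    trade_summary.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.trade_route_summary")
    .option("user", dbutils.secrets.get("kv-scope", "sql-user"))
    .option("password", dbutils.secrets.get("kv-scope", "sql-password"))
    .mode("overwrite")
    .save()
)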

 

Working Environment

Azure Data Factory, ADLS, Azure Databricks, PySpark, Spark SQL, Azure SQL, Arria NLG, Power BI.

 

 