No. of Positions: 14
Location: Hyderabad
Tentative Start Date: February 21, 2024
Work Mode: Onsite
Rate: $7-9 (Hourly)
Experience: 4 to 6 Years
Good-to-have skills: Azure Databricks, Azure Data Factory, PySpark, SQL, Scala/Python, Power BI, and DevOps knowledge.
Candidates must have a solid understanding of the Azure data tech stack and should be proficient in developing ingestion and compute (data transformation) pipelines for large, complex enterprise databases/warehouses using Microsoft Azure data technologies. Candidates should have very strong SQL skills, and should design and develop high-quality code in accordance with privacy, security, and coding-standards guidelines.
Job qualifications:
Desired Educational Qualification & Technical Skills: [BE/B.Tech/MS/MCA or equivalent]
Should have strong programming skills with the ability to write optimized, reusable, high-quality code.
4+ years' experience building ingestion pipelines using Azure Data Factory, and building data-transformation pipelines to compute analytical metrics using SQL/Scala/Python in the Azure big-data tech stack (Azure Databricks/Synapse).
Solid understanding of data modelling, including building facts and dimensions.
4+ years of hands-on SQL/T-SQL experience, including optimized query writing, performance tuning, troubleshooting, debugging, and development.
4+ years' experience in data visualization, building dashboards and reports using Microsoft Power BI.
DevOps knowledge of code check-ins and building release pipelines for CI/CD.
Desired Soft Skills:
Ability to rapidly assimilate new information and techniques. Displays a high degree of confidence and the ability to work in ambiguous situations.
Strong analytical skills in understanding business requirements.
Please provide the following details:
Organization name:
SPOC name & contact number:
Resource name:
Resource skillset:
Resource LinkedIn ID: