No. of Positions: 1
Location: Thiruvananthapuram
Tentative Start Date: September 04, 2022
Work From: Any Location
Rate: $18 - 20 (Hourly)
Experience: 6 to 8 Years
Job Description
• Need to drive and create Source to Target Mappings for Data Warehouse pipelines
• Need to understand the source systems and attributes and use them to create and translate business KPIs to Source to Target Mappings
• Work experience as a data analyst or in a related field.
• Create and maintain optimal data pipeline architecture
• Assemble large, complex data sets that meet business requirements
• Optimize data delivery and re-design infrastructure for greater scalability
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Kafka and Azure technologies
• High-level experience in methodologies and processes for managing large-scale databases
• Work with internal and external stakeholders to assist with data-related technical issues and support data infrastructure needs.
• Strong mathematical skills to help collect, measure, organize and analyse data
• Work closely with the business intelligence team to understand their requirements, query the backend Postgres and Snowflake data warehouses via APIs, and update facts and dimensions accordingly. A strong understanding of data warehouse concepts is required.
Responsibilities
The candidate should be able to work with stakeholders to assess potential risks, analyse existing tools and databases, and recommend software solutions. The candidate will work closely with the business intelligence and business teams to understand their requirements and identify/recommend solutions for data management and engineering capacity. A strong understanding of data warehouse concepts is required.
Primary Skills
• Technical proficiency in database design and development, data models, data mining techniques, and segmentation
• Knowledge of programming languages such as SQL and Python
• Proficiency in statistics and statistical packages such as Excel, SPSS, and SAS for analysing data sets
• Knowledge of data visualization software like Power BI
• Hands-on expertise in data warehouses, data modelling, data analysis, Kafka, Azure Data Lake, Postgres, Snowflake, APIs, data ingestion, distributed systems, and Apache Airflow
Secondary Skills
• Adept at queries, writing reports, and making presentations
• Knowledge of how to create and apply the most accurate algorithms to datasets to find solutions
• Problem-solving skills
• Accuracy and attention to detail
• Verbal and written communication skills
• Proven working experience in data analysis