No. of Positions: 1
Location: Pune
Tentative Start Date: December 31, 2023
Work From: Any Location
Rate: $10 - $11 (Hourly)
Experience: 5 to 7 Years
JD:
● Experience with programming in Python, Spark, and SQL
● Prior experience with AWS services (such as AWS Lambda, Glue, Step Functions, CloudFormation, and CDK)
● Knowledge of building bespoke ETL solutions
● MS SQL Server (data modelling and T-SQL) for managing business data and reporting
● Ability to design, build, and manage data pipelines encompassing data transformation, data models, schemas, metadata, and workload management
● A combination of IT skills, data governance skills, analytics skills, and economics knowledge
● An advanced degree in computer science (MS), information science (MIS or a post-graduate diploma), data management, information systems, or a related quantitative field, or equivalent work experience
● Experience in working with data science teams in refining and optimizing data science and machine learning models and algorithms.
To succeed in this role, it would be an advantage if you possess:
● Experience with advanced analytics tools for object-oriented/functional scripting using languages such as C# and R
● Familiarity with AWS cloud and its services
● An advanced degree in computer science (MS), statistics or applied mathematics (Ph.D.), information science (MIS or a post-graduate diploma), data management, information systems, or a related quantitative field, or equivalent work experience
● Adept in agile methodologies and capable of applying DevOps and, increasingly, DataOps principles to data pipelines to improve the communication, integration, reuse, and automation of data flows between data managers and consumers across an organization