No. of Positions: 4
Tentative Start Date: January 10, 2022
Work From: Any Location
Rate: $7-$15 (Hourly)
Experience: 3 to 10 Years
• Create and maintain optimal data pipeline architecture.
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies.
• Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
• Work with data and analytics experts to strive for greater functionality in our data systems.
• 3+ years of experience with Docker, containers with Kubernetes, Java, Spring Boot, Python, Jenkins, and CI/CD.
• Development and engineering: Docker and cloud containers with Kubernetes.
• Experience with Google Cloud Platform, AWS, Azure, or other public cloud technologies is preferred.
Azure (AZ-400) and Kubernetes certifications preferred.