
Data Engineer

No of Positions: 2

Location: Pune

Tentative Start Date: July 25, 2023

Work From: Offsite

Rate: $11 - 12 (Hourly)

Experience: 4 to 5 Years

Job Applicants: 16
Job Views: 497
Job Category: Information Technology & Services
Duration: 3-6 Months
Required Skills: SQL, Python, Azure, Agile, SQL Server, Java, Cosmos DB

Position: Data Engineer

Education: BE/B.Tech

Notice Period: Immediate

Responsibilities:
• Design data solutions to meet business, technical, and user requirements, including building modern data pipelines that satisfy functional and non-functional business requirements and provide end-to-end data solutions.
• Preprocess data using languages such as SQL and Python.
• Analyze and combine raw data from multiple sources; develop and maintain datasets while improving data quality and efficiency.
• Conduct complex data analysis and report on results.
• Create pipelines on Azure Synapse or similar to copy, move, or process data.
• Demonstrate good knowledge of Azure data processing tools.
• Identify, design, and implement process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
• Implement ETL pipelines using Azure Synapse or similar resources.
• Provide datasets modeled according to Data Analysts' and Data Scientists' needs.
• Improve the use of data resources and processes to achieve better performance and reduce cost.
• Support compliance with the data, information, and security management requirements of the Unit (GDPR, LGPD, or other specific data protection laws).
• Master the data lifecycle, standards, and technologies used by the team and the company.
• Deliver on time with high quality.
• Produce technical documentation for resources, pipelines, data sources, and datasets.
• Apply troubleshooting and debugging skills to fix data defects faster.
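As an illustrative sketch only (not part of the posting), the "preprocess, combine raw data from multiple sources, and improve data quality" responsibilities above might look like the following minimal Python ETL. It uses only the standard library; the source data, table name, and column names are all hypothetical:

```python
import csv
import io
import sqlite3

# Two hypothetical raw sources (in practice: files, APIs, or database extracts).
ORDERS_EU = "order_id,amount\n1,10.5\n2,20.0\n"
ORDERS_US = "order_id,amount\n3,7.25\n3,7.25\n"  # note the duplicate row

def extract(raw, region):
    """Parse CSV text, cast types, and tag each row with its source region."""
    for row in csv.DictReader(io.StringIO(raw)):
        yield {"order_id": int(row["order_id"]),
               "amount": float(row["amount"]),
               "region": region}

def transform(rows):
    """Combine sources and drop duplicate order_ids (a simple data-quality rule)."""
    seen, clean = set(), []
    for row in rows:
        if row["order_id"] in seen:
            continue  # keep only the first occurrence of each order
        seen.add(row["order_id"])
        clean.append(row)
    return clean

def load(rows, conn):
    """Load the cleaned rows into a target table."""
    conn.execute(
        "CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)")
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :amount, :region)", rows)

conn = sqlite3.connect(":memory:")
rows = transform(list(extract(ORDERS_EU, "EU")) + list(extract(ORDERS_US, "US")))
load(rows, conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
# total holds the row count and revenue sum after deduplication.
```

In a real engagement the extract step would read from Azure storage or source databases and the load step would target Synapse rather than SQLite, but the extract/transform/load shape is the same.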
Must Have:
• Background in data modeling for relational databases, NoSQL databases, and data warehouses.
• Knowledge required to understand and build complex database systems for businesses.
• Experience with big data tools: Hadoop, Spark, Kafka, Spark & Kafka Streaming, Python, Scala, Talend, etc.
• Bachelor’s degree in computer science or an IT-related field.
• Technical expertise with data models, data mining, and segmentation techniques.
• Solid knowledge of relational and NoSQL databases (SQL Server, Snowflake, Cosmos DB).
• Knowledge of programming languages (e.g., Java and Python).
• Strong hands-on experience building pipelines and ETL with Azure Synapse, dbt, Stitch, or similar tools.
• Analytical, problem-solving, and decision-making skills.
• Strong knowledge of working in and managing big data environments.
• Excellent analytical, technical, interpersonal, and organizational skills; a good team player.
• Experience working in an agile environment as an important part of the team.
• Excellent communication skills with sponsors, CSM teams, and clients.
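To make the "data modeling for relational databases and data warehouses" requirement concrete, here is a hedged sketch of a minimal dimensional (star-schema-style) model in Python with SQLite; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Dimension table: descriptive attributes analysts slice by.
cur.execute(
    "CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT)")

# Fact table: numeric measures plus foreign keys into the dimensions.
cur.execute("""CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    quantity INTEGER,
    revenue REAL)""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
                [(1, "Widget", "Hardware"), (2, "Gadget", "Hardware")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(10, 1, 2, 40.0), (11, 2, 1, 15.0), (12, 1, 3, 60.0)])

# Typical analytical query: aggregate fact measures grouped by a dimension attribute.
rows = cur.execute("""
    SELECT p.name, SUM(f.quantity) AS units, SUM(f.revenue) AS revenue
    FROM fact_sales f JOIN dim_product p USING (product_id)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
```

The same fact/dimension split is what tools like Synapse or Snowflake expect at much larger scale; the design choice is to keep measures narrow and push descriptive attributes into dimensions so analytical joins stay cheap and models stay readable.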
We Value:
• Good working knowledge of Continuous Delivery practices with Azure DevOps or similar frameworks.
• Experience with data quality assurance.
• Ability to work within a team, with strong analytical, problem-solving, and communication skills.
• Flexible and adaptable; able to work in ambiguous situations.
• Experience working within an Agile team.
• Understanding of Agile practices and ability to use tools such as Azure DevOps to enable the delivery of high-quality data resources.

Copyright© Cosette Network Private Limited All Rights Reserved