
Ramesh (RID : 1fdd4l9dwrz02)

Designation : Data Engineer

Location : Noida

Experience : 16 Years

Rate : $22 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
Hadoop/Big Data, XML, JSON, Mappers, Spark, PySpark, Java, Spring, MySQL, Teradata, MongoDB, Docker, Maven, Python, Apache Kafka
Description

Ramesh Natarajan

Summary

  • 16+ years of experience in the IT industry, including 8+ years in Hadoop/Big Data
  • Experience developing in an Agile software development environment
  • Involved in product development from scratch and in all of its development activities
  • Well versed in the application development lifecycle and in implementing Java/Scala/Big Data applications using OO programming techniques
  • Hands-on experience with Big Data solutions and the underlying infrastructure of Hadoop clusters, using the Hortonworks and Cloudera distributions
  • Solid Hadoop experience; familiar with Hadoop tools and languages such as Hive, Sqoop, Oozie, Impala, and HBase
  • Solid understanding of MapReduce/Spark and of how the Hadoop Distributed File System works
  • Written multiple MapReduce/Spark jobs using the Java/Scala APIs and Hive for data extraction, transformation, and aggregation across multiple file formats, including Parquet, Avro, XML, JSON, CSV, and ORC, and compression codecs such as gzip and Snappy (see the sketch after this list)
  • Good experience optimizing MapReduce jobs using mappers, reducers, combiners, and partitioners to deliver the best results on large datasets
  • Experienced in writing ad hoc queries with Cloudera Impala, including Impala analytic functions
  • Developed Spark, Flink, Kafka, Scala, and PySpark applications and modules, and HDFS workflows using Oozie
  • Experience working within an Agile/Scrum framework as part of a Scrum team, in an iterative product development methodology
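Below is a minimal Scala sketch of the kind of Spark job described in the bullets above: reading mixed-format input, running an aggregation with map-side partial combining (the Spark analogue of a MapReduce combiner), and writing Snappy-compressed Parquet. All paths, column names, and the application name are illustrative assumptions, not details from an actual project.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MultiFormatAggregation {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("multi-format-aggregation") // hypothetical job name
      .getOrCreate()

    // Extraction: the same DataFrame reader covers Parquet, Avro, JSON, CSV, and ORC.
    val parquetEvents = spark.read.parquet("hdfs:///data/events/parquet")
    val jsonEvents    = spark.read.json("hdfs:///data/events/json")

    // Transformation: union the sources and aggregate per key. Spark performs
    // map-side partial aggregation here, the analogue of a MapReduce combiner.
    val daily = parquetEvents.unionByName(jsonEvents)
      .groupBy(col("event_date"), col("event_type"))
      .agg(count(lit(1)).as("events"), sum(col("amount")).as("total_amount"))

    // Load: write Snappy-compressed Parquet (Snappy is Spark's default Parquet codec).
    daily.write
      .option("compression", "snappy")
      .mode("overwrite")
      .parquet("hdfs:///data/events/daily_agg")

    spark.stop()
  }
}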

Educational Qualifications

  • B.Tech (Information Technology) degree, First Class

Courses

  • Infosys internal Big Data training
  • AZ-500 (Azure security) and AZ-304 (Azure architecture) training by GAVS in partnership with the Microsoft team

Technical Experience

Languages : Java/J2EE, Spring

Databases & NoSQL : MySQL, Teradata, DB2, MongoDB, HBase

Big Data Technologies & Others : Hadoop, MapReduce, Spark, Flink, Apache Kafka, Hive, HBase, Impala, Oozie, Sqoop, Python

Tools & IDEs : Eclipse IDE, IntelliJ IDE, Jupyter Notebook, GitHub, Docker, Jenkins, Maven

Operating Systems : Windows, Unix

Major Assignments

Apr 2021 – Mar 2022

Project : Ad-reports

Client : Self (InvestingChannel)

Technologies : Java, Spring, Redshift, AWS Cloud, AWS Glue

IDE & Tools : Eclipse, GitHub, Maven

Role : Lead Developer and Product Owner

Brief description of the project:

InvestingChannel is the largest and most innovative online financial marketing platform, reaching an audience of 20 million influential decision makers. It helps publishers generate revenue efficiently with new technology and by identifying gaps where their revenue can be improved.

Responsibilities:

    • Lead developer and part of the architecture team
    • Requirements gathering for new modules and providing implementation plans
    • Product ownership and end-to-end deliverables
    • Updating the Director and senior management daily on progress and on new tools/modules in the product
    • Team management, allocating tasks and deliverables

July 2020 – Mar 2021

Project : PCE

Client : Premier (USA)

Technologies : Spark 2.0, Scala, Sqoop, Oozie, Hive/Impala, Netezza/Teradata

IDE & Tools : Eclipse, GitHub, Maven

Role : Lead Developer

Brief description of the project:

Premier Connect Enterprise is a transformation layer that processes hospital data and delivers it back to hospitals. An ETL process loads data from an RDBMS (Netezza/Teradata) and stores it in HDFS for an effective transformation process. Spark serves as the transformation layer, creating dimension and fact tables from the source data. The end data is exported back to the RDBMS using Sqoop, and the entire process is scheduled with an Oozie workflow, as sketched below.
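A condensed Scala sketch of the Spark transformation step in this pipeline, under stated assumptions: the Sqoop import has already landed source extracts in HDFS, and the downstream Sqoop export plus the Oozie workflow sit outside this job. Every path, table, and column name here is hypothetical.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object PceTransform {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("pce-transform").getOrCreate()

    // Source extract landed in HDFS by the Sqoop import step.
    val encounters = spark.read.parquet("hdfs:///pce/staging/encounters")

    // Dimension table: one row per hospital, with a surrogate key.
    val dimHospital = encounters
      .select(col("hospital_id"), col("hospital_name"), col("region"))
      .distinct()
      .withColumn("hospital_sk", monotonically_increasing_id())

    // Fact table: encounter measures joined to the dimension's surrogate key.
    val factEncounter = encounters
      .join(dimHospital, Seq("hospital_id"))
      .select(col("hospital_sk"), col("encounter_id"),
              col("admit_date"), col("total_charge"))

    // Written to HDFS; a downstream Sqoop job exports these to Netezza/Teradata,
    // and an Oozie workflow chains import -> transform -> export.
    dimHospital.write.mode("overwrite").parquet("hdfs:///pce/out/dim_hospital")
    factEncounter.write.mode("overwrite").parquet("hdfs:///pce/out/fact_encounter")

    spark.stop()
  }
}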

Responsibilities:

    • Lead developer, individual contributor, and part of the architecture team
    • Requirements gathering for new enhancements and implementing the changes
    • Production support on a daily basis
    • Providing hot fixes for production failures and then fixing them permanently

Challenges:

  • Migrated from Netezza to Teradata as the source and target database
 