
Vikram Kumar (RID : 1ate6l8mqhg6h)

Designation : Consultant

Location : Hyderabad, India

Experience : 6.8 Years

Rate : $20 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
AWS, Spark, SQL, Airflow
Description

Vikram Kumar Kandie

 

PROFESSIONAL SUMMARY:

  • Highly self-motivated and goal-oriented professional committed to pursuing a long-term career.
  • 6.8+ years of overall experience as an IT professional, demonstrating good analytical and problem-solving skills in Big Data technologies using AWS, Hadoop, Python, and Apache Spark.
  • 4+ years of exclusive experience as a Hadoop Developer with a strong background in Big Data Hadoop components (Python, HDFS, Hive, Sqoop) and cloud technologies such as Amazon Web Services (AWS) EMR, EC2, S3, Airflow, Athena, SNS, and Aurora DB.

 

  • 2 years of experience in Spring, Hibernate, and RESTful services using Java.
  • Strong analytical skills to find root causes and remediate issues in large applications.
  • Passionate about learning new technologies and able to apply them.

WORK EXPERIENCE:

  • Worked as Intern for Ness Technologies from March 2015 to September 2015
  • Worked as Software Engineer for Ness Technologies from September 2015 to September 2017
  • Presently working as Technology Lead from October 2017 till date

 

EXTERNAL CERTIFICATION:

  • Completed the AZ-900 Azure Fundamentals external certification.

EDUCATIONAL QUALIFICATION:

Bachelor of Technology in Electrical and Electronics Engineering (2009-2013) from J.N.T.U University, Hyderabad, with 70.3%.

TECHNICAL SKILLS:

  • Languages : Python, Spark, PySpark, Core Java
  • AWS Skills : S3, EC2, EMR, AWS Glue, Athena, DynamoDB
  • Hadoop Skills : Hadoop, Apache Spark, Hive, HDFS, Sqoop, Databricks
  • Job Scheduler Tool : Airflow
  • CI/CD : Bitbucket, Jenkins, Git
  • Tools : Attunity, Informatica
  • Frameworks : Spring, Hibernate
  • Database : Oracle 10g, MySQL, Aurora
  • IDE : Eclipse, PyCharm
  • Application Server : Tomcat 7, WebLogic 9.2, 12c
  • Web Services : RESTful

 

PROJECT DETAILS: 1 (Started: May 2022)

Project Title : Uniform Data Platform (UDP)

Client : Northwestern Mutual

Role : Developer

Team Size : 4

Environment : Databricks, PySpark, S3, Apache Spark, Lucidchart, Airflow, AutoSys, and Informatica

Description:

Northwestern Mutual is a leading financial company that provides consultation on income protection and investment advisory services in the financial sector. To improve their customer services, the client wants to migrate data from on-prem Netezza to cloud Databricks for analytical reporting purposes. This project sets out to ingest data from Netezza sources into cloud Databricks. We use PySpark to ingest the data so that any volume of data can be handled. Data is cleansed, transformed, mapped to target data types, and finally stored in Databricks.
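A minimal PySpark sketch of this kind of Netezza-to-Databricks ingestion step is shown below; the JDBC URL, table names, and column names are hypothetical placeholders, not details taken from the project, and a Netezza JDBC driver is assumed to be available on the cluster.

    # Sketch of one ingestion step: read a Netezza table over JDBC,
    # cleanse and cast columns, then store as a Delta table in Databricks.
    # All connection details and names below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("udp_ingest").getOrCreate()

    src = (spark.read.format("jdbc")
           .option("url", "jdbc:netezza://netezza-host:5480/SALESDB")
           .option("dbtable", "CUSTOMER")
           .option("user", "etl_user")
           .option("password", "****")
           .load())

    # Cleanse and map to target data types before loading.
    clean = (src.dropDuplicates(["CUSTOMER_ID"])
                .withColumn("CREATED_TS", F.to_timestamp("CREATED_TS"))
                .withColumn("BALANCE", F.col("BALANCE").cast("decimal(18,2)")))

    # Store as a Delta table in Databricks for analytical reporting.
    clean.write.format("delta").mode("overwrite").saveAsTable("udp.customer")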

Responsibilities:

  • Created Lucidchart diagrams for the transformation of source tables from Netezza to cloud Databricks.
  • Involved in table creation using the existing framework.
  • Scheduled the Airflow jobs, using AutoSys for job monitoring (a minimal Airflow DAG sketch follows this list).
  • Experience in checking code in and out of the GitLab repository.
  • Hands-on code deployment using the GitLab CI/CD tool.
  • Handled a small team for tasks assigned in the project.
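As referenced above, a minimal Airflow DAG sketch for wrapping an ingestion step is shown here; the DAG id, schedule, and ingest callable are hypothetical placeholders rather than the project's actual job definitions.

    # Minimal Airflow DAG sketch for a daily ingestion job.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def run_ingest(**context):
        # Placeholder for the PySpark ingestion step sketched earlier.
        print("ingesting Netezza tables into Databricks")

    with DAG(
        dag_id="udp_netezza_to_databricks",
        start_date=datetime(2022, 5, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest = PythonOperator(task_id="ingest_customer",
                                python_callable=run_ingest)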

PROJECT DETAILS: 2

Project Title : AMFAM – AWD CLOUD

Client : American Family Insurance

Role : Developer

Team Size : 5

Environment : EMR, PySpark, Hive, S3, Apache Spark, Athena, Sqoop, Airflow, SQLAlchemy, DynamoDB

 

Description:

The client is a leading insurance company. To improve their customer services, the client wants to periodically analyze their data to repair faults and problems. This project sets out to ingest data from different types of sources. We use PySpark to ingest the data in order to handle any volume of data. Data is cleansed, transformed, mapped to target data types, and finally stored in Hive.
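A minimal PySpark sketch of this S3-to-Hive ingestion flow is shown below; the bucket, paths, table, and column names are hypothetical placeholders, assuming data has already been landed in S3 by a replication task.

    # Sketch of one step: read raw files from S3, cleanse and cast columns,
    # then store the curated data in a partitioned Hive table.
    # All paths and names below are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("awd_cloud_ingest")
             .enableHiveSupport()
             .getOrCreate())

    # Ingest raw files landed in S3.
    raw = spark.read.parquet("s3://awd-landing/claims/")

    # Cleanse and map to target data types.
    clean = (raw.dropna(subset=["claim_id"])
                .withColumn("claim_amount", F.col("claim_amount").cast("double"))
                .withColumn("load_date", F.current_date()))

    # Finally store in Hive, partitioned by load date.
    (clean.write.mode("append")
          .partitionBy("load_date")
          .saveAsTable("awd.claims_curated"))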

 

Responsibilities:

 

 

  • Created the task in the Attunity tool by providing the S3 security credentials, started the tasks that have to run in real time, and scheduled the tasks that have to push data periodically.

 

  • Involved in developing the following frameworks using Python and Spark.
 