
Kartik (RID : yb8plldqrkxn)

Designation : Senior Data Engineer

Location : Jaipur, India

Experience : 7 Years

Rate : $16 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
Data Engineer, Python, PL/SQL, Spark SQL, Oracle SE2, SQL Server, NoSQL
Description

     KARTIK PRAJAPAT

 

SUMMARY
 

  • 7+ years of IT experience across a variety of industries working on Big Data with the Cloudera and Hortonworks distributions. Hadoop working environment includes Hadoop, Spark, MapReduce, Kafka, Hive, Ambari, Sqoop, HBase, and Impala. Fluent programming experience with Scala, Java, Python, SQL, T-SQL, and R.

 

  • Hands-on experience in developing and deploying enterprise-based applications using major Hadoop ecosystem components like MapReduce, YARN, Hive, HBase, Flume, Sqoop, Spark MLlib, Spark GraphX, Spark SQL, Kafka.

 

  • Adept at configuring and installing Hadoop/Spark Ecosystem Components.

 

  • Proficient with Spark Core, Spark SQL, Spark MLlib, Spark GraphX and Spark Streaming for processing and transforming complex data using in-memory computing capabilities written in Scala. Worked with Spark to improve the efficiency of existing algorithms using Spark Context, Spark SQL, Spark MLlib, DataFrames, pair RDDs and Spark on YARN.
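
A minimal Scala sketch of the kind of Spark DataFrame and pair-RDD work described above; the dataset, paths and column names are illustrative assumptions, not taken from an actual project:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object EventAggregation {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("EventAggregation")
          .getOrCreate()
        import spark.implicits._

        // Hypothetical event data landed on HDFS
        val events = spark.read.json("hdfs:///data/events")

        // DataFrame API: filter and aggregate in memory
        val dailyCounts = events
          .filter($"status" === "completed")
          .groupBy($"event_date", $"event_type")
          .agg(count("*").as("event_count"))

        // A similar count expressed as a pair RDD
        val pairCounts = events.rdd
          .map(row => (row.getAs[String]("event_type"), 1L))
          .reduceByKey(_ + _)

        dailyCounts.write.mode("overwrite").parquet("hdfs:///warehouse/daily_event_counts")
        pairCounts.saveAsTextFile("hdfs:///warehouse/event_type_counts")
        spark.stop()
      }
    }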

 

  • Experience ingesting data from various sources such as Oracle SE2, SQL Server, flat files and unstructured files into a data warehouse.
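
A hedged Scala/Spark sketch of that kind of ingestion; the JDBC connection details, table names and paths are placeholders rather than real project values:

    import org.apache.spark.sql.SparkSession

    object SourceIngestion {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("SourceIngestion").getOrCreate()

        // Relational source read over JDBC (Oracle shown; SQL Server is analogous)
        val orders = spark.read
          .format("jdbc")
          .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
          .option("dbtable", "SALES.ORDERS")
          .option("user", sys.env.getOrElse("DB_USER", "reader"))
          .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
          .load()

        // Flat-file source; schema inferred here for brevity
        val customers = spark.read
          .option("header", "true")
          .option("inferSchema", "true")
          .csv("hdfs:///landing/customers/*.csv")

        // Land both in the warehouse zone as Parquet
        orders.write.mode("append").parquet("hdfs:///warehouse/orders")
        customers.write.mode("overwrite").parquet("hdfs:///warehouse/customers")
        spark.stop()
      }
    }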

 

  • Able to use Sqoop to migrate data between RDBMS, NoSQL databases and HDFS.

 

  • Experience in Extraction, Transformation and Loading (ETL) of data from various sources into data warehouses, as well as data processing such as collecting, aggregating and moving data using Apache Flume, Kafka, Power BI and Microsoft SSIS.

 

  • Hands-on experience with Hadoop architecture and its components, including the Hadoop Distributed File System (HDFS), JobTracker, TaskTracker, NameNode, DataNode and Hadoop MapReduce programming.

 

  • Comprehensive experience in developing simple to complex MapReduce and Streaming jobs using Scala and Java for data cleansing, filtering and aggregation, along with detailed knowledge of the MapReduce framework.
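
As an illustration of such a cleansing and aggregation job, here is a small Scala sketch using Spark's RDD API; the input layout, field positions and paths are assumptions made for the example:

    import org.apache.spark.sql.SparkSession

    object LogCleansing {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("LogCleansing").getOrCreate()
        val sc = spark.sparkContext

        // Hypothetical tab-delimited log files on HDFS
        val raw = sc.textFile("hdfs:///raw/logs/*.gz")

        val statusCounts = raw
          .map(_.split("\t", -1))
          .filter(fields => fields.length >= 3 && fields(2).nonEmpty) // cleansing: drop malformed rows
          .map(fields => (fields(1), 1L))                             // key by status field (position assumed)
          .reduceByKey(_ + _)                                         // map-reduce style aggregation

        statusCounts.saveAsTextFile("hdfs:///processed/log_status_counts")
        spark.stop()
      }
    }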

 

  • Used IDEs like Eclipse, IntelliJ IDEA, PyCharm, Notepad++, and Visual Studio for development.

 

  • Seasoned in Machine Learning algorithms and predictive modeling, including Linear Regression, Logistic Regression, Naïve Bayes, Decision Trees, Random Forests, KNN, Neural Networks, and K-means Clustering.
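
A minimal Spark MLlib example in Scala of the kind of predictive modeling listed above (logistic regression on a hypothetical churn feature table; all column names and paths are made up for illustration):

    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.sql.SparkSession

    object ChurnModel {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("ChurnModel").getOrCreate()

        // Hypothetical feature table with a binary "label" column
        val data = spark.read.parquet("hdfs:///features/churn")
        val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42)

        // Assemble numeric features into a single vector column
        val assembler = new VectorAssembler()
          .setInputCols(Array("tenure", "monthly_spend", "support_calls"))
          .setOutputCol("features")

        val lr = new LogisticRegression().setMaxIter(50).setRegParam(0.01)

        val model = new Pipeline().setStages(Array(assembler, lr)).fit(train)
        model.transform(test).select("label", "prediction", "probability").show(10)
        spark.stop()
      }
    }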

 

  • Ample knowledge of data architecture including data ingestion pipeline design, Hadoop/Spark architecture, data modeling, data mining, machine learning and advanced data processing.

 

  • Experience working with NoSQL databases like Cassandra and HBase; developed real-time read/write access to very large datasets via HBase.
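
A short Scala sketch of that HBase read/write pattern using the standard HBase client API; the table, row key and column family names are placeholders:

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put}
    import org.apache.hadoop.hbase.util.Bytes

    object HBaseLookup {
      def main(args: Array[String]): Unit = {
        val conf = HBaseConfiguration.create()
        val connection = ConnectionFactory.createConnection(conf)
        val table = connection.getTable(TableName.valueOf("user_profiles"))

        // Write one cell (column family "cf" is assumed to exist)
        val put = new Put(Bytes.toBytes("user123"))
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("last_login"), Bytes.toBytes("2023-06-01"))
        table.put(put)

        // Random read of the same row key
        val result = table.get(new Get(Bytes.toBytes("user123")))
        val lastLogin = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("last_login")))
        println(s"last_login = $lastLogin")

        table.close()
        connection.close()
      }
    }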

 

  • Developed Spark Applications that can handle data from various RDBMS (MySQL, Oracle Database) and Streaming sources.
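
As a sketch of combining a relational source with a streaming one in a Spark application (a Structured Streaming stream-static join; the broker, topic, database and column names are illustrative assumptions):

    import org.apache.spark.sql.SparkSession

    object StreamEnrichment {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("StreamEnrichment").getOrCreate()

        // Static dimension table read from MySQL over JDBC (placeholder connection details)
        val products = spark.read
          .format("jdbc")
          .option("url", "jdbc:mysql://dbhost:3306/shop")
          .option("dbtable", "products")
          .option("user", "reader")
          .option("password", sys.env.getOrElse("DB_PASSWORD", ""))
          .load()

        // Streaming source from Kafka; the message value is assumed to carry a product id
        val clicks = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "clickstream")
          .load()
          .selectExpr("CAST(value AS STRING) AS product_id")

        // Stream-static join written out in micro-batches
        val query = clicks.join(products, Seq("product_id"))
          .writeStream
          .format("parquet")
          .option("path", "hdfs:///warehouse/enriched_clicks")
          .option("checkpointLocation", "hdfs:///checkpoints/enriched_clicks")
          .start()

        query.awaitTermination()
      }
    }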

 

  • Proficient SQL experience in querying, data extraction/transformations and developing queries for a wide range of applications.

 

  • Capable of processing large datasets (gigabytes in size) of structured, semi-structured or unstructured data.

 

  • Experience in analyzing data using HiveQL, Pig, HBase and custom MapReduce programs in Java 8.
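
A small example of the HiveQL-style analysis mentioned above, run from Scala through a Hive-enabled SparkSession; the database, table and column names are placeholders:

    import org.apache.spark.sql.SparkSession

    object HiveAnalysis {
      def main(args: Array[String]): Unit = {
        // Hive support lets the job run HiveQL against existing warehouse tables
        val spark = SparkSession.builder()
          .appName("HiveAnalysis")
          .enableHiveSupport()
          .getOrCreate()

        val topRegions = spark.sql(
          """SELECT region, SUM(amount) AS total_sales
            |FROM sales.transactions
            |WHERE txn_date >= '2023-01-01'
            |GROUP BY region
            |ORDER BY total_sales DESC
            |LIMIT 10""".stripMargin)

        topRegions.show()
        spark.stop()
      }
    }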

 

  • Experience working with GitHub/Git 2.12 source and version control systems.

 

  • Strong in core Java concepts including Object-Oriented Design (OOD) and Java components like Collections Framework, Exception handling, I/O system.
     

TECHNICAL SKILLS

Hadoop/Big Data Technologies: HDFS, Hive, Pig, Sqoop, Yarn, Spark, Spark SQL, Kafka

Hadoop Distributions: Hortonworks and Cloudera

Languages: C, C++, Python, Scala, UNIX Shell Script, COBOL, SQL and PL/SQL

Tools: Teradata SQL Assistant, PyCharm, Autosys

Operating Systems: Linux, Unix, z/OS and Windows

Databases: Teradata, Oracle 9i/10g, DB2, SQL Server, MySQL 4.x/5.

ETL Tools: IBM InfoSphere Information Server V8, V8.5, V9.1

Reporting: Tableau


 



 

 

 