RAKESH

Lead Big Data Administrator

Location : Gurgaon

Experience : 10 Years

Last active on : July 26, 2021

Rate: $20 / Hourly

Availability : Immediate

Work From : Any

Category : Information Technology & Services

 
Key Skills
Microsoft Azure Admin, Shell Scripting, Jenkins, Hadoop, TFS, Eclipse
Summary

Hi,


I feel that my skills and experience are a great fit for this position. 

Please feel free to contact me to arrange an interview. I look forward to learning more about this opportunity.

Please find the attached resume.


Experience Summary: I have 10 years of working experience across a mix of profiles, including Hadoop Administration and Software Testing.

I have 4 years of work experience as a Hadoop Administrator.


Big Data Exposure:

*Installation, verification, configuration, support and management of Hadoop clusters using Apache Hadoop.

*Verified backup configuration and recovery from a NameNode failure.

*Verified installation of various Hadoop ecosystem components and Hadoop daemons.

*Checking system health via the heartbeat mechanism.

*Good experience in designing, configuring and managing backup and disaster recovery for Hadoop data.

*Hands-on experience in analyzing log files for Hadoop and ecosystem services and finding the root cause.

*As an administrator, performed cluster maintenance, troubleshooting and monitoring, and followed proper backup & recovery strategies.

*Experience in HDFS data storage and support for running MapReduce jobs.

*Verified installation and configuration of Hadoop ecosystem components such as Sqoop, Pig, Hive, HBase, Flume, Oozie, and Kafka.

*Configured various property files such as core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml based on job requirements (see the config-check sketch after this list).

*Importing and exporting data into HDFS using Sqoop (see the Sqoop/HDFS transfer sketch after this list).

*Importing and exporting data between HDFS and the local filesystem, and copying data between local paths.

*Good working knowledge of Hadoop security tools such as Kerberos and Sentry.

*Experienced in Cloudera installation, configuration and deployment on Linux distributions.

*Commissioning and decommissioning of nodes as required (see the refreshNodes sketch after this list).

*Managing and monitoring Hadoop services such as NameNode, DataNode & YARN (see the health-check sketch after this list).

*Performance tuning and resolving Hadoop issues via the CLI or the web UI.

*Troubleshooting Hadoop cluster runtime errors and ensuring they do not recur.

*Accountable for storage and volume management of Hadoop clusters.

*Ensuring that the Hadoop cluster is up and running at all times (high availability, big data cluster, etc.).

*Evaluation of Hadoop infrastructure requirements and design/deployment of solutions.

*Backup and recovery tasks by creating snapshot policies and backup schedules, and recovering from node failure (see the snapshot sketch after this list).

*Responsible for configuring alerts for the different services running in the Hadoop ecosystem.

*Moving data from one cluster to another (see the DistCp sketch after this list).
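
A few illustrative command sketches for the tasks above follow; every hostname, path, table and service name in them is a placeholder, not something from an actual engagement.

Config check (for the property-file bullet): a minimal sketch using the standard hdfs getconf tool, which reports the values picked up from the loaded core-site.xml / hdfs-site.xml:

    # Which NameNode(s) the client configuration points at
    hdfs getconf -namenodes
    # Effective values from core-site.xml / hdfs-site.xml
    hdfs getconf -confKey fs.defaultFS
    hdfs getconf -confKey dfs.replication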
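
Sqoop/HDFS transfers (for the import/export bullets): a minimal sketch; the JDBC URL, credentials, table names and directories are hypothetical:

    # Import an RDBMS table into HDFS
    sqoop import --connect jdbc:mysql://dbhost/sales --username etl -P \
        --table orders --target-dir /user/etl/orders
    # Export the HDFS data back into an RDBMS table
    sqoop export --connect jdbc:mysql://dbhost/sales --username etl -P \
        --table orders_copy --export-dir /user/etl/orders
    # Local filesystem <-> HDFS copies
    hdfs dfs -put /tmp/orders.csv /user/etl/staging/
    hdfs dfs -get /user/etl/orders /tmp/orders_export/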
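
refreshNodes flow (for the commissioning/decommissioning bullet): a sketch of the usual include/exclude-file approach; the exclude-file path is whatever dfs.hosts.exclude points to on the cluster, and the hostname is a placeholder:

    # Add the host to the exclude file, then ask HDFS and YARN to re-read their host lists
    echo "worker05.example.com" >> /etc/hadoop/conf/dfs.exclude
    hdfs dfsadmin -refreshNodes
    yarn rmadmin -refreshNodes
    # Watch decommissioning progress
    hdfs dfsadmin -report | grep -A 3 worker05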
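
Health checks (for the monitoring bullet): a sketch assuming an HA NameNode pair whose configured service ID is nn1 (a placeholder):

    hdfs dfsadmin -report              # DataNode capacity, usage, dead/decommissioning nodes
    hdfs dfsadmin -safemode get        # confirm the NameNode is out of safe mode
    hdfs haadmin -getServiceState nn1  # active/standby state of a NameNode in an HA pair
    yarn node -list                    # NodeManager health as seen by the ResourceManager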
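
Snapshot-based backup and recovery (for the snapshot bullet): directory, snapshot and file names are placeholders:

    # Allow snapshots on a directory, take one, and restore a file from it
    hdfs dfsadmin -allowSnapshot /data/warehouse
    hdfs dfs -createSnapshot /data/warehouse nightly-backup
    hdfs dfs -cp /data/warehouse/.snapshot/nightly-backup/part-00000 /data/warehouse/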
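
DistCp (for the cluster-to-cluster copy bullet): NameNode addresses and paths are placeholders; -update skips files already present and unchanged, -p preserves file attributes:

    hadoop distcp -update -p hdfs://nn-prod:8020/data/warehouse hdfs://nn-dr:8020/data/warehouse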

 

Warm Regards

Rakesh Dubey

8744855286








