
Priti (RID : 210ulpcek3wo)

Designation : Big Data Developer

Location : Jaipur

Experience : 3 Years

Rate : $12 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Shortlisted : 1
Total Views : 36
Key Skills
Big Data, Hadoop, Hive, GCP, PySpark
Description

Projects

Project 1 (Banking Domain)

Tools used : Hive, Hadoop, HBase, Cloudera, HDFS, Sqoop, GCP

Roles & Responsibilities :

  • Writing script files for processing data and loading it to HDFS.
  • Loading files to HDFS and writing Hive queries to process the required data.
  • Completely involved in the requirement analysis phase.
  • Involved in partitioning Hive tables; creating Hive tables to store the processed data.
  • Analyzing the requirements to set up a cluster.
  • Set up Hive with MySQL as a remote metastore.
  • Moved all log/text files generated by various products into an HDFS location.
  • Investigated and analyzed alternative solutions for data storing, processing, etc. to ensure the most streamlined approaches are implemented.

Languages : English, Hindi, Marathi
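The Hive table partitioning mentioned in Project 1 can be sketched as below. This is an illustrative fragment only: the table and column names (`txn_raw`, `txn_part`, `account_id`, `amount`, `txn_date`) are assumptions, not taken from the actual project.

```sql
-- Sketch: a date-partitioned Hive table for processed banking data,
-- loaded from a staging table via dynamic partitioning.
CREATE TABLE txn_part (
  account_id STRING,
  amount     DOUBLE
)
PARTITIONED BY (txn_date STRING)
STORED AS ORC;

SET hive.exec.dynamic.partition.mode=nonstrict;

INSERT OVERWRITE TABLE txn_part PARTITION (txn_date)
SELECT account_id, amount, txn_date
FROM txn_raw;
```

Partitioning by date keeps each day's data in its own HDFS directory, so queries that filter on `txn_date` only scan the relevant partitions.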

 
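Setting up Hive with MySQL as a remote metastore, as listed above, typically comes down to a `hive-site.xml` along these lines. The host, database name, and credentials below are placeholders, not values from the project.

```xml
<!-- hive-site.xml fragment: MySQL-backed remote metastore (placeholder values) -->
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://metastore-host:3306/metastore?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive_password</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
</configuration>
```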

Project 2 (Retail Domain)

Tools used : PySpark, Hive, HDFS, HBase, GCP

Roles & Responsibilities :

  • Development of an end-to-end data transformation framework using PySpark on GCP.
  • Running Hive queries for transformations.
  • The transformation involved a series of 100+ PySpark and Hive scripts.
  • Built a generic data migration framework from legacy databases to Hadoop using Spark 2.4.
  • Built big data pipelines using various Google Cloud Platform (GCP) services: Google Cloud Storage, Google Cloud SQL, Dataproc, BigQuery, Cloud Composer/Airflow.
  • Shell scripting followed by transformations as part of data generation and cleaning.
 
Copyright© Cosette Network Private Limited All Rights Reserved
