
Pariksheet (RID : a49xle45o565)

Designation : Data Architect

Location : Noida

Experience : 11 Years

Rate : $19 / Hourly

Availability : Immediate

Work From : Offsite

Category : Information Technology & Services

Key Skills
Python, Hadoop, PySpark, Teradata SQL
Description

PARIKSHEET DE

OBJECTIVE

To pursue a challenging career in the ever-growing world of technology, where my knowledge can be used to its fullest and I can be part of a team that contributes to the growth of both individuals and the organization.

EXPERIENCE SUMMARY

  • Teradata Vantage Certified, Oracle Certified Professional (OCP), and Oracle Certified Expert (OCE) with 11+ years of IT-industry experience in data engineering, ETL, databases, and data-warehousing techniques and technology. Currently working as a Data Engineer using Python, Apache PySpark, and Python libraries (e.g., Pandas), along with Informatica PowerCenter, Teradata, Hive, and Sqoop.
  • My technology forte is Python, PySpark, Scala Spark, stream data processing, Teradata utilities, Teradata SQL, Hive, Sqoop, Amazon Web Services (AWS), and Microsoft Azure.
  • 5+ years of experience in PySpark, Scala Spark
  • 5+ years of experience in Informatica Power Center
  • 5+ years of experience in Teradata Utilities and SQL
  • 7+ years of experience in data warehousing and ETL, including mapping and mapplet creation and design in Informatica PowerCenter
  • 6+ years of experience in Big Data Engineering and Analytics.
  • Extensively worked on Teradata SQL, Teradata utilities like BTEQ, FastLoad, FastExport, MLoad, TPT
  • Experience designing data lakes in Microsoft Azure and Amazon Web Services cloud, built from disparate systems for predictive analysis.
  • Build and review data model designs and database development standards; implement and manage data warehouses and data analytics systems.
  • Manage end-to-end data architecture, from selecting the platform and designing the technical architecture to developing, testing, and implementing the proposed solution.
  • Experience on Azure Databricks implementing Big Data technologies: Spark (Scala and PySpark), Python, and Scala
  • Programming in the Big Data space: Python, Hadoop, PySpark, etc.
  • Ensuring very large databases and compute clusters operate optimally.
  • Build ETL pipelines in Spark, Python, and Hive that process transaction- and account-level data and standardize data fields across various data sources
  • Exposure in Financial domain and good knowledge on Data warehousing concepts.
  • Experience in SQL (Structured Query Language).
  • Experience in designing dimensional models and data models in the banking and retail domains to leverage the existing business model into the analytical platform; data extracted from various sources is transformed into meaningful insights for user requirements.
  • Implemented 4 end-to-end data warehouse projects in Retail and BFSI using ETL, Informatica PowerCenter, Teradata, and Oracle.
  • Experience in processing high volumes of data using Hadoop, Spark, and Azure.
  • Designed the Financial Services Data Model (FSDM) for a banking project.
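
The field-standardization step mentioned above can be sketched in plain Python (a minimal illustration only; the production pipelines described here use Spark and Hive, and the field names and mapping below are hypothetical):

```python
# Hypothetical sketch of standardizing field names across data sources.
# Real pipelines would apply the same mapping to PySpark DataFrames
# (e.g., via withColumnRenamed); all names here are made up.

CANONICAL = {
    "txn_amt": "transaction_amount",
    "amount": "transaction_amount",
    "acct_no": "account_id",
    "account_number": "account_id",
}

def standardize(record: dict) -> dict:
    """Rename known source-specific fields to canonical names."""
    return {CANONICAL.get(key, key): value for key, value in record.items()}

# Two sources that name the same fields differently:
source_a = {"txn_amt": 125.50, "acct_no": "A-1001"}
source_b = {"amount": 99.00, "account_number": "B-2002"}

print(standardize(source_a))
print(standardize(source_b))
```

After standardization, both records expose the same `transaction_amount` and `account_id` fields, so downstream joins and aggregations can treat all sources uniformly.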

WORK EXPERIENCE

  • June 2022 to date: Working as Architect
  • Sep 2020 to June 2022: Working as Solution Architect
  • Sep 2019 to July 2020: Worked as Senior Technical Consultant
  • Dec 2017 to July 2019: Worked as Technical Consultant
  • June 2015 to Dec 2017: Worked as Senior Software Engineer
  • August 2012 to October 2014: Worked as Senior Associate
  • April 2011 to Jun 2012: Worked as a Database Developer

EDUCATIONAL QUALIFICATION

  • Computer Science from NIIT (GNIIT) in 2011 with 72.4%
  • B. Com from Acharya Jagadish Chandra Bose College (University of Calcutta) in 2008 with 50%
  • Class XII from Kalyani Central Model School in 2005 with 55%
  • Class X from North Point School in 2003 with 62%

TECHNOLOGY SKILL SETS

  • Python Programming
  • Pandas
  • NoSQL
  • MongoDB
  • Cassandra
  • Amazon Web Services (AWS)
  • Amazon EMR (Elastic Map Reduce)
  • Amazon Redshift
  • DynamoDB
  • Amazon Glue ETL
  • AWS Data Catalog
  • Amazon RDS (MySQL, PostgreSQL, Oracle)
  • AWS Athena
  • AWS Lambda
  • Microsoft Azure
  • Azure Databricks
  • Azure Data Factory
  • Azure Synapse Analytics
  • Azure SQL
  • Azure Storage
  • Snowflake Data Warehouse
  • Apache Spark
  • Scala Spark
 