
Mihir (RID : 6v6vlqqjnxjg)

Designation : Python Developer

Location : Jaipur

Experience : 7 Years

Rate : $18 / Hourly

Availability : Immediate

Work From :

Category : Information Technology & Services

Shortlisted : 1
Total Views : 34
Key Skills
C C++ Java JavaScript jQuery Python HTML CSS Machine Learning Deep Learning Data Mining Django regex web scraping
Description


Mihir – 6+ Years – Python Developer

Career Objective:
To gain in-depth knowledge through an expertly designed curriculum and to obtain
employment in a leading Computer Science organization as a Python Developer or
Machine Learning Engineer.
Education:
Bachelor of Technology, Computer Engineering (CGPA: 9.48/10) [University Gold
Medalist] (August 2014 – June 2017)

Career Projects:
Medicines Scraper
- Used Scrapy to scrape medicine details from websites such as 1mg, Netmeds,
Comedz, etc.
- The scraper walks the categories alphabetically from A to Z, fetches the links
of the individual categories, and saves them in a CSV.
- It then traverses each category and fetches all the medicine details, such as
name, price, and description, storing them in a CSV.
- Apart from the CSV, we also stored the data directly in a PostgreSQL database,
building a large medicines dataset from these websites (a sketch of the crawl
follows the references below).

Some of the websites we scraped:
https://www.1mg.com/drugs-all-medicines
https://www.netmeds.com/
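
A minimal sketch of the A-to-Z category crawl described above, using Scrapy's
feed export for the CSV step. The letter-indexed URL pattern and every CSS
selector here are assumptions for illustration, not the sites' real markup:

import string

import scrapy


class MedicineSpider(scrapy.Spider):
    name = "medicines"
    # Scrapy's feed export writes every yielded item to a CSV.
    custom_settings = {"FEEDS": {"medicines.csv": {"format": "csv"}}}

    def start_requests(self):
        # One listing page per letter, A to Z (URL pattern is assumed).
        for letter in string.ascii_lowercase:
            url = f"https://www.1mg.com/drugs-all-medicines?label={letter}"
            yield scrapy.Request(url, callback=self.parse_category)

    def parse_category(self, response):
        # Follow each medicine's detail link (selector is hypothetical).
        for href in response.css("a.medicine-link::attr(href)").getall():
            yield response.follow(href, callback=self.parse_medicine)

    def parse_medicine(self, response):
        # Field selectors are placeholders for the site's actual markup.
        yield {
            "name": response.css("h1::text").get(),
            "price": response.css(".price::text").get(),
            "description": response.css(".description ::text").get(),
        }

Storing the same items in PostgreSQL would take one extra item pipeline (e.g. a
psycopg2 INSERT per item) registered under ITEM_PIPELINES.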


E-commerce Scrapers
- Scraped product details from around 10 e-commerce websites such as
ShopClues, Snapdeal, Ajio, Myntra, etc.
- Used Scrapy and scrapy-splash to scrape the details of all the
different products on each site.
- The scraper first fetches the top-level categories from the header of the
e-commerce site.
- It then visits each top category individually, fetches the sub-categories,
and stores their links in a CSV.
- Each product listing page is paginated, so the scraper follows the
pagination and fetches the details of the products on every page (see the
sketch after the references below).
- Finally, the details are stored in a CSV as well as directly in a PostgreSQL
database.

Some of the websites we scraped:
https://www.shopclues.com/
https://www.ajio.com/
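
A rough sketch of the paginated crawl, assuming a running Splash instance and
the usual scrapy-splash settings (SPLASH_URL plus its downloader middlewares);
the category and pagination selectors are placeholders, not any real site's
markup:

import scrapy
from scrapy_splash import SplashRequest


class ProductSpider(scrapy.Spider):
    name = "products"

    def start_requests(self):
        # Render the home page with Splash so JavaScript content loads.
        yield SplashRequest("https://www.shopclues.com/",
                            callback=self.parse_home, args={"wait": 2})

    def parse_home(self, response):
        # Top-level categories from the site header (selector assumed).
        for href in response.css("nav a.category::attr(href)").getall():
            yield SplashRequest(response.urljoin(href),
                                callback=self.parse_listing, args={"wait": 2})

    def parse_listing(self, response):
        # One item per product card on the current page.
        for product in response.css("div.product"):
            yield {
                "name": product.css(".name::text").get(),
                "price": product.css(".price::text").get(),
            }
        # Pagination: keep following the "next" link until it disappears.
        next_page = response.css("a.next::attr(href)").get()
        if next_page:
            yield SplashRequest(response.urljoin(next_page),
                                callback=self.parse_listing, args={"wait": 2})

Each listing page yields its products before the spider queues the next page,
so the crawl covers the full pagination without keeping any extra state.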

Google Trends
- Used Scrapy and scrapy-splash to fetch the currently trending topics on Google.
- The scraper first fetches the categories, such as Business, Sports, and
Politics, along with the "All" category.
- Based on these categories, it then fetches all the current trending topics,
both overall and per individual category.
- The details are stored in individual CSVs: for instance, a separate CSV for
the Business category holds only the Business trending topics, and so on for
each category (a sketch of this per-category storage follows the link below).


The website we used:
https://trends.google.com/trends/trendingsearches/realtime?geo=IN&category=b
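
A small sketch of the one-CSV-per-category storage step as a Scrapy item
pipeline; the "category" and "topic" item fields are assumptions for
illustration:

import csv


class PerCategoryCsvPipeline:
    # Routes each scraped item into a CSV named after its category.

    def open_spider(self, spider):
        self.files = {}
        self.writers = {}

    def process_item(self, item, spider):
        category = item.get("category", "all")
        if category not in self.writers:
            # Lazily open one CSV per category the first time it appears.
            f = open(f"trends_{category}.csv", "w", newline="", encoding="utf-8")
            writer = csv.DictWriter(f, fieldnames=["category", "topic"])
            writer.writeheader()
            self.files[category] = f
            self.writers[category] = writer
        self.writers[category].writerow(
            {"category": category, "topic": item.get("topic")})
        return item

    def close_spider(self, spider):
        for f in self.files.values():
            f.close()

Registering the class under ITEM_PIPELINES in settings.py is all that is
needed to activate it.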

Company Details Scraper
- Used Selenium to fetch the details of all the companies working in Python
(or any other technology), as well as the details of their employees.
- The scraper first selects the category chosen by the user, then fetches the
details of the individual companies listed on the site.
- It then visits each company's own website and searches for a LinkedIn URL
anywhere on the site (see the sketch after the link below).
- If a LinkedIn URL is found, the scraper fetches the details of the company's
owners, CTO, CEO, and employees and stores them in a database.
- The scraper is integrated with Django, so every detail fetched is stored in
an SQLite database (switchable to PostgreSQL or MongoDB by changing a single
database setting in the Django project).

The website we used:
https://clutch.co/
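
A condensed sketch of the Selenium flow: open a category listing, visit each
company's own site, and look for a LinkedIn link anywhere on the page. The
listing URL path and the website-link selector are assumptions for
illustration:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Category page listing Python companies (URL path is assumed).
    driver.get("https://clutch.co/developers/python-django")
    # Each company's own website link from the listing (selector assumed).
    sites = [a.get_attribute("href")
             for a in driver.find_elements(By.CSS_SELECTOR, "a.website-link")]
    for site in sites:
        driver.get(site)
        # Any anchor whose href points at linkedin.com counts as a match.
        links = driver.find_elements(By.CSS_SELECTOR, "a[href*='linkedin.com']")
        if links:
            print(site, "->", links[0].get_attribute("href"))
finally:
    driver.quit()

In the Django-integrated version, each match would be saved through a model's
objects.create(...) call instead of printed.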

Indian Post Office Scraper
- Used Scrapy to fetch the client details of Indian Post Office agents.
- The scraper first enters the credentials and logs in to the portal (see the
sketch after the link below).
- It then fills in a form containing multiple fields; after submitting the
form, the scraper fetches the details of all the people in the portal's
database.
- Data filtering is also needed, so the filter values are read from a CSV file
and the page is filtered accordingly.
- The data is stored in a CSV as well as in the client's PostgreSQL database.


The website we used:
https://www.indiapost.gov.in/
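
A bare-bones sketch of the login-then-scrape step, using Scrapy's
FormRequest.from_response to submit the portal's login form; the form field
names, credentials, and post-login selectors are all hypothetical:

import scrapy


class PostOfficeSpider(scrapy.Spider):
    name = "postoffice"
    start_urls = ["https://www.indiapost.gov.in/"]

    def parse(self, response):
        # from_response pre-fills hidden fields from the login page, then
        # submits our credentials (field names are hypothetical).
        yield scrapy.FormRequest.from_response(
            response,
            formdata={"username": "agent_id", "password": "secret"},
            callback=self.after_login,
        )

    def after_login(self, response):
        # Scrapy keeps the session cookies, so this request is authenticated.
        for row in response.css("table.clients tr"):
            yield {"client": row.css("td::text").getall()}

Applying the CSV-driven filters would mean reading the filter values with the
csv module and sending one FormRequest per filter combination.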

Gun Scrapers
- Used Scrapy and Selenium to fetch the armory details of 50
ammunition websites.
- The scraper performs pagination, fetches the categories, and saves their

 