
Rahul

DevOps Engineer
Location: Meerut
Total Views: 88
Shortlist: 0
Member Since: 30+ days ago
Candidate Information
  • Experience: 18 Years
  • Hourly Rate: $20
  • Availability: Immediate
  • Work From: Offsite
  • Category: Information Technology & Services
  • Last Active On: July 18, 2024
Key Skills
DevOps, AWS, Python, JavaScript, Terraform, TypeScript, Microsoft Azure, Git or SVN, Nagios, DataDog, GCP, Docker, Kubernetes, HTML, CSS, WordPress
Education
2012-2013
(Marketing)
Periyar University

2021-2023
(Computer Science)
Shubarti University

Summary

Certified AWS DevOps professional, targeting a challenging DevOps Engineer role to apply nearly two decades of experience in infrastructure automation, container orchestration, CI/CD pipelines, security protocols, and data compliance. Aiming to boost operational excellence and drive business growth through a blend of creative thinking, effective problem-solving, and strategic planning.

Project Details
Title: AWS S3 Buckets Deletion
Duration: 1
Role and Responsibilities:

Created Python code to delete unused S3 buckets across the organisation's AWS account.

Description:

This project involved deleting unused AWS S3 buckets in the company's AWS account.
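
A minimal sketch of how such a cleanup script might look, assuming boto3 and treating an empty bucket as "unused"; the selection rule, dry-run flag, and names are illustrative placeholders rather than the project's actual logic:

# Hypothetical sketch: delete empty ("unused") S3 buckets with boto3.
# The "empty bucket" selection rule and the dry_run flag are assumptions.
import boto3

s3 = boto3.client("s3")

def is_empty(bucket_name: str) -> bool:
    # A bucket with zero objects is treated as unused here.
    resp = s3.list_objects_v2(Bucket=bucket_name, MaxKeys=1)
    return resp.get("KeyCount", 0) == 0

def delete_unused_buckets(dry_run: bool = True) -> None:
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        if not is_empty(name):
            continue
        if dry_run:
            print(f"Would delete empty bucket: {name}")
        else:
            s3.delete_bucket(Bucket=name)  # delete_bucket only succeeds on empty buckets
            print(f"Deleted bucket: {name}")

if __name__ == "__main__":
    delete_unused_buckets(dry_run=True)

Running with dry_run=True first lets the candidate buckets be reviewed before anything is actually deleted.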


Title: AWS ECR Image Deployment to K8S
Duration: 1
Role and Responsibilities:

Kubernetes deployment.

TypeScript code usage.

Pulumi IaC tool usage.

Description:

This project was about deploying an AWS ECR image to K8S (Kubernetes) using Pulumi with TypeScript.
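
As a rough illustration of this approach, here is a sketch using Pulumi's Python SDK (the project itself used the TypeScript SDK); the ECR image URL, labels, and resource names are placeholders:

# Hypothetical sketch: deploy a pre-built ECR image to Kubernetes with Pulumi.
# The project used TypeScript; this is a rough Python equivalent with placeholder names.
import pulumi
import pulumi_kubernetes as k8s

image = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"  # placeholder ECR image
app_labels = {"app": "my-app"}

deployment = k8s.apps.v1.Deployment(
    "my-app",
    spec={
        "selector": {"matchLabels": app_labels},
        "replicas": 2,
        "template": {
            "metadata": {"labels": app_labels},
            "spec": {"containers": [{"name": "my-app", "image": image}]},
        },
    },
)

pulumi.export("deploymentName", deployment.metadata["name"])

Running pulumi up would then create or update the Deployment against whichever cluster the Kubernetes provider is configured for.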


Title: Build secure application networks with Amazon VPC Lattice, Amazon ECS, and AWS Lambda
Duration: 6
Role and Responsibilities:

VPC configuration.

AWS Lambda development using Python.

Description:

This project consists of sample code to seamlessly connect Amazon ECS and AWS Lambda workloads using Amazon VPC Lattice.
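
A minimal sketch of the Lambda side of such a setup, assuming the function reaches the ECS workload through a VPC Lattice service endpoint passed in via an environment variable (the variable name, URL, and /health path are assumptions):

# Hypothetical sketch: Lambda handler calling an ECS service through VPC Lattice.
# LATTICE_SERVICE_URL and the /health path are illustrative assumptions.
import json
import os
import urllib.request

SERVICE_URL = os.environ.get("LATTICE_SERVICE_URL", "http://my-service.example.internal")

def handler(event, context):
    # Call the ECS workload via its VPC Lattice endpoint.
    with urllib.request.urlopen(f"{SERVICE_URL}/health", timeout=5) as resp:
        body = resp.read().decode("utf-8")
        status = resp.status
    return {
        "statusCode": 200,
        "body": json.dumps({"upstream_status": status, "upstream_body": body}),
    }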


Title: GitHub to GitLab Migration
Duration: 1
Role and Responsibilities:

Our process typically takes 15-20 minutes for most of our services. We achieve this because we host our code repository on GitLab, which provides the GitLab Runner feature. However, you need a runner for this feature to work: you can either use GitLab's shared runners or run your own. We run our own runner because we want better performance for our CI process, and we size the runner resources (CPU and memory) to our requirements. Running our own runner also ensures that secret credentials stay inside our AWS environment.

 

Autoscaling requires one master server whose task is to launch a new runner whenever there is a new job to handle. In our autoscale configuration, we chose Docker as the runner executor. Autoscaling also keeps costs down, since a new runner server is launched only when needed. We set the runner's idle timeout very short, around 5 minutes, which means that if no jobs arrive within 5 minutes of the server being idle, it automatically shuts down. This configuration has a drawback: it increases the cold-start time when a job arrives and there is no active runner server. However, we have a very small engineering team and don't mind this for now, since our deployment rate is still small. Since we implemented this architecture two months ago, we have run only around 700 deployments (~11 deployments/day).

 

Moving on, we have a container registry: once the Docker build finishes, the resulting image is pushed to ECR and then deployed to ECS. Setting up ECR and ECS is fairly simple since both are AWS services. On ECS, we wanted to keep things simple, so we chose EC2 as our container cluster; it costs more, but less maintenance work is required.
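
A sketch of the kind of deploy step this flow could end with, assuming the pipeline pushes the new image to ECR and then forces the ECS service to roll out again (the cluster and service names are placeholders):

# Hypothetical deploy step: after pushing a new image tag to ECR, force the
# ECS service to redeploy so running tasks pick up the new image.
# Cluster and service names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

def redeploy(cluster: str = "prod-cluster", service: str = "web-service") -> None:
    ecs.update_service(
        cluster=cluster,
        service=service,
        forceNewDeployment=True,
    )
    # Optionally block until the service reaches a steady state again.
    ecs.get_waiter("services_stable").wait(cluster=cluster, services=[service])

if __name__ == "__main__":
    redeploy()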

 

In GitLab, you need a configuration file called .gitlab-ci.yml, which is used to configure each step of the Continuous Integration (CI) process. In our setup, we have three stages: test, build, and deploy. So our configuration looks like this:

 

stages:
  - test
  - build
  - deploy

 

Our process ensures that we deliver high-quality services quickly and efficiently.

Description:

Resources to create a GitLab CI/CD pipeline on GitLab for ECS, Postgres, and Django (see git-cheat-sheet.pdf). GitLab CI with ECS: the flow is pretty much like this:

1. Push changes to your code repository.
2. GitLab creates a new pipeline and notifies the Master Runner (hosted on AWS EC2).
3. The Master Runner launches a new GitLab Runner server or uses an existing one (also on AWS EC2).
4. The GitLab Runner runs the tests.
5. Once the tests are complete, the Docker build process begins.
6. Once the build is complete, the Docker image is stored in AWS Elastic Container Registry (ECR) and the deployment process starts.

