Satyaki – AWS, Python, Terraform
Satyaki is a Mid-to-Senior DevOps and Site Reliability Engineer with strong experience in AWS, Docker, Terraform, and CI/CD automation. He has delivered infrastructure solutions across domains, demonstrating solid communication and client management skills. His profile highlights proficiency in container management and AWS, along with leadership and founding experience.
9 years of commercial experience
Main technologies
Additional skills
Direct hire
Possible
Experience Highlights
Tech Lead
This project involved developing a comprehensive, end-to-end web application with frontend, backend, AI, and database components. The product plugs into a Sprinklr environment as a focused add-on, giving users exactly the extra capability they need without requiring them to onboard another piece of software.
- Led a team of 3 people, including designers and junior software developers.
- Handled the product owner role: gathered requirements, converted business requests into technical requirements, and communicated with the team.
- Created the cloud infrastructure and built CI/CD pipelines.
- Provided code reviews (Python and JavaScript code).
- Implemented the Claude AI integration (a minimal sketch follows this list).
- Integrated payments.
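To illustrate the kind of Claude AI integration mentioned above, here is a minimal sketch of a server-side call using the Anthropic Python SDK. The model name, prompt, and helper function are illustrative assumptions rather than the project's actual code.

```python
# Minimal sketch of a Claude call via the Anthropic Python SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment; the model name
# and the summarize_ticket helper are illustrative, not project code.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY from the environment


def summarize_ticket(ticket_text: str) -> str:
    """Ask Claude for a two-sentence summary of a support ticket (hypothetical use case)."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=512,
        messages=[
            {
                "role": "user",
                "content": f"Summarize this ticket in two sentences:\n{ticket_text}",
            }
        ],
    )
    # The response carries a list of content blocks; take the text of the first one.
    return response.content[0].text


if __name__ == "__main__":
    print(summarize_ticket("Customer reports the export button times out on large reports."))
```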
Lead Full-Stack Engineer
One of the first metal-poster-selling websites in India. For the mini-launch, the client was looking for a proficient full-stack developer to handle the complete lifecycle, from build to production. The website now handles about 300 visitors daily.
- Led a small team of one UX designer and one app developer to get the product ready.
- Created frontend and backend architecture.
- Built frontend and backend.
- Integrated an AI chatbot.
- Implemented SEO.
Senior DevOps Engineer
This project involved designing a pipeline that processes 80 GB of image data per day. The pipeline ingests the data, cleans it, adds metadata, converts it to tabular format, and stores it in a relational database (a minimal sketch of this flow follows the bullet points below). The complete infrastructure was defined with Terraform and deployed to development, testing, and production environments. Application teams successfully used the infrastructure to deploy proprietary image-processing logic.
- Deployed the pipeline with 96% uptime.
- Trained juniors in writing infrastructure as code.
- Hand-held application teams through adopting the new infrastructure, supported by proper documentation.
- Debugged user-specific use cases and added them as new features.
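To make the ingest-to-tabular flow described above concrete, here is a minimal sketch of one pipeline step in Python: it pulls an image from S3, extracts basic metadata, and writes a row to a relational table. The bucket, table, and column names are illustrative, and SQLite stands in for the project's actual relational database.

```python
# Sketch of one pipeline step: ingest an image from S3, extract metadata,
# and store it as a tabular row. All names are illustrative; SQLite stands
# in for the relational database actually used on the project.
import io
import sqlite3

import boto3
from PIL import Image

s3 = boto3.client("s3")
db = sqlite3.connect("image_metadata.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS images ("
    "s3_key TEXT PRIMARY KEY, width INTEGER, height INTEGER, fmt TEXT)"
)


def ingest_image(bucket: str, key: str) -> None:
    """Download one image, read basic metadata, and upsert a row into the table."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    img = Image.open(io.BytesIO(body))
    db.execute(
        "INSERT OR REPLACE INTO images (s3_key, width, height, fmt) VALUES (?, ?, ?, ?)",
        (key, img.width, img.height, img.format),
    )
    db.commit()


if __name__ == "__main__":
    # Hypothetical bucket and key; in the real pipeline this would be driven by ingest events.
    ingest_image("raw-image-landing-zone", "2024/01/01/sample.jpg")
```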
Infrastructure and Software Engineer
A Dutch information technology company that specializes in using crowdsourcing for businesses in the retail, tech, and care sectors. The project focused on the full-stack DevOps pipeline.
- Built an end-to-end data scraping pipeline using Docker and AWS components to scrape multiple websites daily.
- Wrote Python pipelines for wrangling this data (cleaning, Spark processing, and storage), all event-driven and orchestrated with Airflow.
- Implemented horizontal scaling on each node.
- Orchestrated the complete pipeline using Airflow (see the DAG sketch below).
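As a rough illustration of the Airflow orchestration described in this entry, the sketch below wires scrape, clean, and store steps into a daily DAG. The task bodies and the DAG id are placeholders, not the project's actual pipeline code.

```python
# Minimal Airflow DAG sketch: a daily scrape -> clean -> store chain.
# Task bodies and the DAG id are placeholders for illustration only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def scrape():
    # Placeholder for the scraping step (ran on Docker/AWS in the real pipeline).
    print("scraping target sites...")


def clean():
    # Placeholder for the wrangling step (cleaning, Spark processing).
    print("cleaning and processing scraped data...")


def store():
    # Placeholder for writing the processed data to storage.
    print("storing results...")


with DAG(
    dag_id="daily_scrape_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    scrape_task = PythonOperator(task_id="scrape", python_callable=scrape)
    clean_task = PythonOperator(task_id="clean", python_callable=clean)
    store_task = PythonOperator(task_id="store", python_callable=store)

    scrape_task >> clean_task >> store_task
```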