Steven – AWS, Python, Terraform
A Senior DevOps Engineer with a strong background in cloud engineering, specializing in AWS, Terraform, and CI/CD practices. He has experience leading teams, including serving as a Tech Lead, and has worked in both corporate environments and startups. While his strengths lie in cloud and automation, he continues to deepen his expertise in Kubernetes, Linux, and networking. Steven has also been exploring AI since 2016 and actively leverages modern tools such as GitHub Copilot, JetBrains AI, and ChatGPT in his daily workflow.
9 years of commercial experience
Direct hire: Possible
Experience Highlights
DevOps Architect
A lightweight, event-driven compute platform designed to let users connect over WebSocket, authenticate, and trigger Dockerized jobs hosted in ECR. The platform is built using AWS API Gateway (WebSocket), Lambda, and ECS Fargate. It supports running short-lived containers (≤ 60 seconds) in real time—ideal for interactive tooling, micro-automation, or command execution-as-a-service. The architecture focuses on serverless execution, stateless session management, and tight security boundaries, making it highly scalable and cost-effective.
- Designed a WebSocket-triggered serverless compute layer using AWS Lambda and API Gateway.
- Enabled secure real-time container execution of user-hosted Docker images in ECS Fargate.
- Implemented session-aware routing and message handling using a modular MCP (Message Command Processor) structure.
- Used CloudWatch and DynamoDB to track connection state and enforce TTL expiration on inactive sessions.
- Leveraged Lambda timeouts and Fargate constraints to limit execution windows to ≤ 1 minute.
- Built API key–based customer authentication with scoped image access permissions via IAM.
- Deployed with Terraform, with branch-based environment separation (dev/qa/prod).
- Optimized cost by avoiding idle infrastructure—no servers or prewarmed containers required.
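A minimal sketch of what the WebSocket "$default" route handler could look like in Python: it checks the caller's session record in DynamoDB (including TTL expiry) and then launches a short-lived Fargate task for the requested image. All resource names here are illustrative placeholders, not the project's actual identifiers.

```python
# Minimal sketch of a WebSocket "$default" route handler: look up the caller's
# session in DynamoDB, then launch a short-lived Fargate task for the requested
# job. SESSIONS_TABLE, ECS_CLUSTER, TASK_DEFINITION, and SUBNETS are placeholders.
import json
import os
import time

import boto3

dynamodb = boto3.resource("dynamodb")
ecs = boto3.client("ecs")

SESSIONS_TABLE = os.environ["SESSIONS_TABLE"]
CLUSTER = os.environ["ECS_CLUSTER"]
TASK_DEF = os.environ["TASK_DEFINITION"]
SUBNETS = os.environ["SUBNETS"].split(",")


def handler(event, context):
    connection_id = event["requestContext"]["connectionId"]
    body = json.loads(event.get("body") or "{}")

    # Reject messages from connections that never authenticated (no session
    # row, or the TTL attribute has already expired).
    table = dynamodb.Table(SESSIONS_TABLE)
    session = table.get_item(Key={"connectionId": connection_id}).get("Item")
    if not session or session.get("ttl", 0) < int(time.time()):
        return {"statusCode": 401, "body": "session expired or unknown"}

    # Run the customer's Dockerized job as a one-off Fargate task. The short
    # execution window is enforced by Lambda timeouts and the task definition's
    # constraints (per the highlights above), not in this handler.
    ecs.run_task(
        cluster=CLUSTER,
        taskDefinition=TASK_DEF,
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": SUBNETS,
                "assignPublicIp": "DISABLED",
            }
        },
        overrides={
            "containerOverrides": [
                {"name": "job", "command": body.get("command", [])}
            ]
        },
    )
    return {"statusCode": 200, "body": "job started"}
```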
Data Engineer
A beauty retailer in the United States and Mexico. The company offers branded and private label beauty products, including cosmetics, fragrance, haircare, skincare, bath and body products, professional hair products, and salon styling tools. The role involved supporting the modernization of their legacy Data Exchange Layer (DXL), which powered data flow across their e-commerce systems.
- Re-architected the Data Exchange Layer for improved scalability and security.
- Built robust, testable GraphQL endpoints with improved schema design.
- Streamlined Jenkins pipelines to accelerate build and deploy cycles.
- Improved CI test coverage using MochaJS and shell-based test runners.
- Helped teams transition from legacy data flows to modern, scalable APIs.
- Supported system performance audits and resolved high-impact latency issues.
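For illustration only, a Python rendition of the kind of endpoint smoke test this work enabled; the original suite used MochaJS and shell-based runners, and the endpoint URL, query, and field names below are hypothetical.

```python
# Illustrative smoke test for a GraphQL endpoint of the kind described above.
# The real suite used MochaJS; this Python version is only an illustration,
# and the endpoint URL and field names are hypothetical.
import requests

DXL_URL = "https://dxl.example.com/graphql"  # placeholder endpoint

QUERY = """
query ProductById($id: ID!) {
  product(id: $id) {
    id
    name
    brand
  }
}
"""


def test_product_query_returns_expected_shape():
    resp = requests.post(
        DXL_URL,
        json={"query": QUERY, "variables": {"id": "123"}},
        timeout=10,
    )
    resp.raise_for_status()
    payload = resp.json()

    # A well-formed GraphQL response has no top-level "errors" key and
    # returns exactly the fields the schema promises.
    assert "errors" not in payload
    product = payload["data"]["product"]
    assert set(product) == {"id", "name", "brand"}
```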
Cloud Automation Engineer
A fully automated, serverless platform that ingests trending news, generates AI-powered summaries, renders videos, and publishes them directly to YouTube. The project involved architecting the cross-cloud infrastructure using AWS and GCP services, integrating compute workflows across Lambda, ECS, and Cloud Functions.
- Designed and implemented a fully serverless pipeline using AWS Lambda, GCP Cloud Functions, and ECS.
- Built event-driven orchestration to automate video ingestion, summarization, rendering, and publishing to YouTube.
- Integrated OpenAI and Node.js microservices for news analysis and script generation.
- Implemented secure multi-cloud communication via IAM roles, Secrets Manager, and Pub/Sub triggers.
- Optimized for cost by combining Lambda for burst operations and ECS for long-running batch jobs.
- Deployed the platform using Terraform, enabling reproducible infrastructure and rapid iteration.
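A hedged sketch of one event-driven stage in such a pipeline: a Lambda triggered by an S3 upload that produces a summary and enqueues a render job. The summarize() helper stands in for the OpenAI/Node.js microservice mentioned above, and the bucket and queue names are placeholders.

```python
# Sketch of one event-driven stage: a Lambda fired by an S3 "article uploaded"
# event that produces a short script and enqueues a render job. summarize() is
# a stand-in for the external summarization service; names are placeholders.
import json
import os

import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

RENDER_QUEUE_URL = os.environ["RENDER_QUEUE_URL"]


def summarize(article_text: str) -> str:
    """Placeholder for the OpenAI-backed summarization microservice."""
    return article_text[:280]


def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Pull the raw article that landed in the ingest bucket.
        article = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

        # Hand the generated script to the rendering stage via SQS so the
        # long-running ECS batch job can pick it up asynchronously.
        sqs.send_message(
            QueueUrl=RENDER_QUEUE_URL,
            MessageBody=json.dumps({"source_key": key, "script": summarize(article)}),
        )
    return {"processed": len(event["Records"])}
```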
DevOps Engineer
This project involved designing and deploying a fully serverless, distributed video rendering system using AWS. The pipeline transformed dynamic JSON data into branded videos via a React + Remotion frontend, orchestrated by AWS Step Functions and backed by Lambda. It was created to enable scalable rendering workflows without provisioning persistent infrastructure, highlighting the engineer's ability to leverage event-driven compute, microservice coordination, and DevOps principles in media pipelines.
- Architected an end-to-end serverless video rendering pipeline using AWS Lambda, Step Functions, and S3.
- Integrated with a Remotion + React frontend to allow dynamic JSON-driven content rendering.
- Implemented fine-grained workflow stages (e.g., script compile, video render, thumbnail export) as modular Lambda functions.
- Designed with horizontal scalability, automatically scaling rendering based on demand.
- Reduced compute costs by shifting from traditional render farms to on-demand Lambda execution.
- Built infrastructure using Terraform, enabling reusable environments and CI/CD integration.
- Provided logging, failure recovery, and retry mechanisms using Step Functions error handling.
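The retry and failure handling described above maps naturally onto Step Functions' Retry and Catch fields. Below is an illustrative definition expressed as a Python dict and serialized to Amazon States Language; the state names and Lambda ARNs are placeholders.

```python
# Illustrative Step Functions definition chaining modular render stages, with
# Retry/Catch for failure recovery. Stage names and Lambda ARNs are placeholders.
import json

RETRY = [
    {
        "ErrorEquals": ["States.TaskFailed", "Lambda.ServiceException"],
        "IntervalSeconds": 5,
        "MaxAttempts": 3,
        "BackoffRate": 2.0,
    }
]

definition = {
    "Comment": "JSON in, branded video out",
    "StartAt": "CompileScript",
    "States": {
        "CompileScript": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:compile-script",
            "Retry": RETRY,
            "Next": "RenderVideo",
        },
        "RenderVideo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:render-video",
            "Retry": RETRY,
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "NotifyFailure"}],
            "Next": "ExportThumbnail",
        },
        "ExportThumbnail": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:export-thumbnail",
            "Retry": RETRY,
            "End": True,
        },
        "NotifyFailure": {
            "Type": "Fail",
            "Error": "RenderFailed",
            "Cause": "See CloudWatch logs for the failed stage",
        },
    },
}

if __name__ == "__main__":
    # The serialized JSON is what Terraform's aws_sfn_state_machine resource
    # (or boto3's create_state_machine) would receive.
    print(json.dumps(definition, indent=2))
```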
Lead DevOps Engineer
The project involved creating a Lambda failover mechanism that automatically detects service outages and routes traffic to a backup AWS region. The system used a mix of control logic, error handling, and health-check feedback to decide whether to continue executing in the primary region or fail over. The solution ensured uptime for critical workflows even during regional AWS outages.
- Designed a dual-region AWS Lambda setup with automatic failover logic.
- Used CloudWatch alarms and fallback control flows to monitor and redirect execution.
- Achieved high availability with minimal cost by leveraging existing serverless primitives.
- Abstracted routing logic to enable multi-service compatibility across teams.
- Documented the solution and provided onboarding sessions for the internal team.
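A simplified sketch of the failover control flow, assuming a client-side invoker: try the Lambda in the primary region first and fall back to the standby region when the call errors out or the function reports a failure. The function name and region pair are placeholders.

```python
# Sketch of the dual-region failover flow: invoke the Lambda in the primary
# region and, if the call fails or the function errors, retry in the standby
# region. Function name and regions are placeholders.
import json

import boto3
from botocore.config import Config
from botocore.exceptions import BotoCoreError, ClientError

PRIMARY_REGION = "us-east-1"
FAILOVER_REGION = "us-west-2"
FUNCTION_NAME = "critical-workflow"  # placeholder


def invoke_with_failover(payload: dict) -> dict:
    for region in (PRIMARY_REGION, FAILOVER_REGION):
        client = boto3.client(
            "lambda",
            region_name=region,
            config=Config(retries={"max_attempts": 2}, read_timeout=30),
        )
        try:
            response = client.invoke(
                FunctionName=FUNCTION_NAME,
                Payload=json.dumps(payload).encode("utf-8"),
            )
            # "FunctionError" is set when the function itself raised, which is
            # also treated as a reason to fail over.
            if "FunctionError" not in response:
                return json.loads(response["Payload"].read())
        except (BotoCoreError, ClientError):
            # Connectivity or service errors in this region: try the next one.
            continue
    raise RuntimeError("both regions failed to execute the workflow")
```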
Lead DevOps Engineer
The project was designed to allow the company to rapidly and consistently deploy all baseline infrastructure to new AWS regions, in preparation for scaling into other areas as business needs dictated and for disaster recovery scenarios.
- Spearheaded the implementation of this regional deployment framework, which served as a single entry point for spinning up VPCs, IAM policies, ECS clusters, and other foundational services.
- Created an automated Terraform-based deployment pipeline for multi-region AWS environments.
- Reduced time-to-deploy a complete base region from weeks to under an hour.
- Integrated with CI/CD to support ephemeral region spin-up for testing.
- Enabled disaster recovery readiness by pre-creating failover regions.
- Standardized all core infrastructure through reusable Terraform modules.
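One way such a framework can be driven is a thin wrapper that applies the same reusable Terraform root module once per region, with one workspace per region. The sketch below is illustrative; in practice this logic ran inside the CI/CD pipeline, and the region list and module path are placeholders.

```python
# Illustrative driver that applies one baseline Terraform root module to
# several regions, keeping state separate via one workspace per region.
# The region list and module path are placeholders.
import subprocess

REGIONS = ["us-east-1", "eu-west-1", "ap-southeast-2"]
MODULE_DIR = "infra/baseline"  # reusable root module (VPC, IAM, ECS, etc.)


def run(*args: str, check: bool = True) -> subprocess.CompletedProcess:
    return subprocess.run(["terraform", *args], cwd=MODULE_DIR, check=check)


def deploy_region(region: str) -> None:
    # One workspace per region reuses the exact same module code while
    # isolating state; create the workspace on first use.
    if run("workspace", "select", region, check=False).returncode != 0:
        run("workspace", "new", region)
    run("apply", "-auto-approve", f"-var=aws_region={region}")


if __name__ == "__main__":
    run("init", "-input=false")
    for region in REGIONS:
        deploy_region(region)
```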
Platform Engineer
A free streaming service that offers users hundreds of live, linear channels and thousands of on-demand movies and TV shows across the Americas and Europe. The initiative began with service-level experiments and culminated in a company-wide mesh implementation.
- Helped lead the service mesh rollout across the company’s Kubernetes services.
- Implemented service-level Istio configurations during the initial rollout.
- Guided teams through onboarding and migrated early adopters.
- Tuned ingress/egress traffic, circuit breaking, and observability.
- Collaborated with platform teams to design a scalable mesh architecture.
- Contributed to Terraform/Helm scripts for Istio injection and sidecar policies.
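The circuit-breaking and traffic tuning mentioned above typically lands in an Istio DestinationRule. Below is an illustrative manifest rendered from Python; the host name and thresholds are made up, and in the actual rollout such settings would live in the Terraform/Helm scripts mentioned above.

```python
# Sketch of circuit-breaking tuning as an Istio DestinationRule manifest.
# Host name, namespace, and thresholds are illustrative only.
import yaml  # pip install pyyaml

destination_rule = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "DestinationRule",
    "metadata": {"name": "catalog-circuit-breaker", "namespace": "prod"},
    "spec": {
        "host": "catalog.prod.svc.cluster.local",
        "trafficPolicy": {
            # Cap concurrent connections and pending requests per destination.
            "connectionPool": {
                "tcp": {"maxConnections": 100},
                "http": {
                    "http1MaxPendingRequests": 50,
                    "maxRequestsPerConnection": 10,
                },
            },
            # Eject pods that keep returning 5xx so traffic shifts to healthy ones.
            "outlierDetection": {
                "consecutive5xxErrors": 5,
                "interval": "30s",
                "baseEjectionTime": "60s",
                "maxEjectionPercent": 50,
            },
        },
    },
}

if __name__ == "__main__":
    # Pipe the output to `kubectl apply -f -` or drop it into a Helm template.
    print(yaml.safe_dump(destination_rule, sort_keys=False))
```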
Cloud DevOps Engineer
An American electric power and natural gas holding company. This project involved supporting its enterprise cloud migration.
- Led cloud engineering support during AWS account migration efforts.
- Built internal documentation and guides for Terraform-based infrastructure.
- Standardized account promotion workflows and CI/CD environment transitions.
- Hosted internal workshops to accelerate DevOps maturity.
- Assisted 15+ projects in resolving infrastructure-related blockers.
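A common building block behind account promotion workflows like the ones mentioned above is assuming a deployment role in the target AWS account via STS. The sketch below is purely illustrative; the role ARN, session name, and example usage are placeholders.

```python
# Illustrative helper for cross-account promotion workflows: assume a
# deployment role in the target account via STS and return a client scoped
# to that account. Role ARN and session name are placeholders.
import boto3


def client_for_account(role_arn: str, service: str, region: str = "us-east-1"):
    """Return a boto3 client that operates inside the target account."""
    sts = boto3.client("sts")
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName="env-promotion",
        DurationSeconds=3600,
    )["Credentials"]

    return boto3.client(
        service,
        region_name=region,
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )


if __name__ == "__main__":
    # Example: list S3 buckets in a QA account before promoting artifacts.
    s3_qa = client_for_account("arn:aws:iam::123456789012:role/deploy-role", "s3")
    print([b["Name"] for b in s3_qa.list_buckets()["Buckets"]])
```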