
Ivan
From Ukraine (UTC+2)
9 years of commercial experience
Ivan – Python, AWS, LLM
Ivan is a Machine Learning Engineer with 7+ years of experience and a strong foundation in software engineering, currently focused on MLOps and cloud infrastructure. Brings a practical, end-to-end understanding of the ML lifecycle — from model development to deployment and monitoring — with hands-on experience in AWS, Azure, and production-grade ML systems. Known for clear communication, cross-functional collaboration, and a proactive mindset, making him a valuable asset to any team. Combines technical depth with a team-oriented approach to help deliver scalable, reliable, and maintainable ML solutions.
Main technologies
Additional skills
Direct hire
Possible
Experience Highlights
Senior ML Engineer
A computer vision–based solution enables noninvasive anemia detection using a simple photo of the eye. The system analyzes visual features captured from the conjunctiva region to assess hemoglobin levels, providing a fast, accessible, and needle-free screening method suitable for remote and clinical settings.
- Created a pipeline for incoming data and data versioning, and automated TensorFlow model training with SkyPilot on AWS spot instances to cut the cost of continuous training and experiments (a sketch follows this list);
- Developed a customer-facing dashboard with authentication for viewing patient records and visualizing prediction results against lab results;
- Converted the inference pipeline from Python to C++, with the model running on a mobile phone;
- Developed a closed-source, cross-platform C++ SDK used in multiple Android and iOS applications.
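A rough sketch of how such a SkyPilot spot-instance training launch could look; the accelerator type, entry-point script, and cluster name are illustrative assumptions, not details from the project:

```python
# Rough sketch: automate a TensorFlow training run on AWS spot instances with SkyPilot.
# The accelerator type, script name (train.py), and cluster name are assumptions.
import sky

task = sky.Task(
    name="cv-train",                              # hypothetical task name
    workdir=".",                                  # sync the local project dir to the VM
    setup="pip install -r requirements.txt",      # install training dependencies
    run="python train.py --epochs 50",            # assumed training entry point
)

# Request a preemptible (spot) GPU instance on AWS to reduce training cost.
task.set_resources(
    sky.Resources(cloud=sky.AWS(), accelerators="T4:1", use_spot=True)
)

# Provision the cluster, run the task, and tear the cluster down when done.
sky.launch(task, cluster_name="cv-train-spot", down=True)
```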
ML Engineer
A Retrieval-Augmented Generation (RAG) system was developed to support internal search needs within a technology-focused organization. The goal was to provide engineers with efficient access to a large-scale knowledge base spanning over a decade of historical work. More than 50,000 documents across 2,000 projects were digitized using OCR, indexed, and integrated into a custom-built RAG interface. The solution was deployed on-premise to ensure data security and tailored to meet specific domain and user requirements.
- Set up a backend API and inference server for an on-premise Hugging Face model using the vLLM inference engine;
- Deployed and configured a local ChromaDB vector store to support fast and scalable retrieval;
- Implemented hybrid search combining dense and sparse retrieval methods for improved result relevance (see the sketch after this list);
- Designed and built an internal dashboard to compare responses across different algorithm variants;
- Ensured seamless integration between retrieval components and inference pipeline for robust RAG performance;
- Optimized infrastructure for on-premise deployment, prioritizing performance, data security, and maintainability.
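A minimal sketch of the hybrid retrieval idea: dense results from a local ChromaDB collection are fused with sparse BM25 rankings via reciprocal rank fusion. The corpus, collection name, and fusion constant below are placeholders, not project specifics.

```python
# Minimal hybrid retrieval sketch: ChromaDB (dense) + BM25 (sparse), fused with
# reciprocal rank fusion (RRF). Corpus, collection name, and constants are placeholders.
import chromadb
from rank_bm25 import BM25Okapi

docs = ["project alpha deployment notes ...", "2014 migration postmortem ...", "OCR-digitized design doc ..."]
ids = [f"doc-{i}" for i in range(len(docs))]

# Dense index: persistent local ChromaDB store with the default embedding function.
client = chromadb.PersistentClient(path="./chroma")
collection = client.get_or_create_collection("kb")
collection.add(documents=docs, ids=ids)

# Sparse index: BM25 over whitespace-tokenized documents.
bm25 = BM25Okapi([d.split() for d in docs])

def hybrid_search(query: str, top_k: int = 3, k: int = 60) -> list[str]:
    """Fuse dense and sparse rankings with reciprocal rank fusion."""
    n = min(top_k, collection.count())
    dense_ids = collection.query(query_texts=[query], n_results=n)["ids"][0]
    sparse_scores = bm25.get_scores(query.split())
    sparse_order = sorted(range(len(docs)), key=lambda i: -sparse_scores[i])[:top_k]
    sparse_ids = [ids[i] for i in sparse_order]
    fused: dict[str, float] = {}
    for ranking in (dense_ids, sparse_ids):
        for rank, doc_id in enumerate(ranking):
            fused[doc_id] = fused.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(fused, key=fused.get, reverse=True)[:top_k]

print(hybrid_search("how was project alpha deployed?"))
```

Reciprocal rank fusion merges the two rankings without requiring dense similarity and BM25 scores to share a scale, which is why it is a common default for this kind of combination.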
Senior MLOps Engineer
A smart internal search tool designed to support the assessment and certification of maritime equipment. The solution helps identify renewal opportunities and potential clients by leveraging fuzzy matching algorithms, integrating open web search, and applying advanced reasoning capabilities through large language models (LLMs). This enables more efficient decision-making and improved lead discovery within the maritime compliance domain.
- Designed the solution architecture using Azure AI Search, Azure OpenAI, and the Serper Search API;
- Implemented hybrid search with automatic filter field extraction using advanced prompting (sketched below);
- Developed CI/CD automation for back-end deployment;
- Introduced unit tests and automated quality assurance for LLM responses to ensure stability during model upgrades;
- Implemented outage alerting mechanisms;
- Set up a controlled software release process across DEV, UAT, and PROD environments.
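A rough sketch of the filter-extraction step, assuming an Azure OpenAI deployment reached through the standard openai client; the deployment name, API version, and filter schema (vessel_type, certificate_status) are hypothetical:

```python
# Rough sketch: a JSON-mode chat call turns a free-text question into structured
# filter fields for hybrid search. Deployment name, API version, and schema are assumptions.
import json
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version
)

SYSTEM_PROMPT = (
    "Extract search filters from the user's question. "
    "Return JSON with keys: vessel_type, certificate_status, free_text_query. "
    "Use null for any field the question does not mention."
)

def extract_filters(question: str) -> dict:
    """Ask the model for structured filter fields instead of raw keywords."""
    response = client.chat.completions.create(
        model="gpt-4o",                           # Azure deployment name (assumption)
        response_format={"type": "json_object"},  # force valid JSON output
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return json.loads(response.choices[0].message.content)

filters = extract_filters("Which bulk carriers have certificates expiring this year?")
print(filters)  # free_text_query feeds the search query; the rest become index filters
```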
Senior MLOps Engineer
An end-to-end agtech solution developed for one of the largest food manufacturers specializing in full-cycle sugar production. The goal was to enhance operational monitoring during critical stages of the process: beet harvesting in the field and sugar boiling at the manufacturing facility. The project focused on improving process visibility, data accuracy, and real-time decision-making through the integration of digital tools and sensors.
- Built a Docker-based asynchronous inference pipeline for image processing, deployed on AWS ECS;
- Developed an automated data labeling tool, accelerating the workflow by 4× for data and computer vision engineers;
- Converted PyTorch models to ONNX format for optimized CPU inference (see the sketch after this list);
- Established performance benchmarks for model inference and system throughput;
- Reduced batch scoring time for 1M+ images from 36 to 4 hours by parallelizing computation on ECS, orchestrated via AWS CDK.
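A minimal sketch of the PyTorch-to-ONNX conversion and CPU inference path; a torchvision ResNet-18 and a 224x224 input stand in for the project's actual models.

```python
# Minimal sketch: export a PyTorch model to ONNX and run CPU inference with onnxruntime.
# The model (torchvision ResNet-18) and input shape are stand-ins, not project specifics.
import numpy as np
import onnxruntime as ort
import torch
import torchvision

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)

# Export with a dynamic batch axis so one file serves both online and batch scoring.
torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}, "logits": {0: "batch"}},
)

# CPU-only inference session (the ECS batch-scoring containers would run something similar).
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
batch = np.random.rand(8, 3, 224, 224).astype(np.float32)
logits = session.run(["logits"], {"input": batch})[0]
print(logits.shape)  # (8, 1000)
```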
MLOps Engineer
An internal document intelligence platform was developed for a leading Oil & Gas enterprise to automate the extraction of analytical insights from complex technical documentation, including well logs and engineering reports. The system leverages natural language processing and domain-specific parsing techniques to structure unstructured data, enhancing decision-making and operational efficiency across teams.
- Maintained a scalable Kubernetes-based inference stack for serving ML models used in document information parsing;
- Deployed and supported MLflow infrastructure to streamline experiment tracking for the Data Science team;
- Built a proof of concept for deploying custom Docker containers to Amazon SageMaker for flexible model serving;
- Developed a data versioning pipeline enabling the DS team to efficiently track experiments using custom tags and S3 artifact paths, integrated with GitLab CI (see the sketch after this list).
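A minimal sketch of the tagging pattern, assuming an MLflow tracking server and a GitLab CI environment; the experiment name, tag keys, and logged values are illustrative, not taken from the project.

```python
# Minimal sketch: MLflow runs tagged with a dataset version and Git commit,
# with artifacts stored under the experiment's (S3-backed) artifact root.
# Tracking URI, experiment name, tag keys, and values are illustrative.
import os

import mlflow

# Fall back to a local file store when no tracking server is configured.
mlflow.set_tracking_uri(os.environ.get("MLFLOW_TRACKING_URI", "file:./mlruns"))
mlflow.set_experiment("document-parsing")

with mlflow.start_run(run_name="layout-model") as run:
    # Custom tags the DS team can filter on in the MLflow UI or search API;
    # CI_COMMIT_SHA would be injected by the GitLab CI pipeline.
    mlflow.set_tags({
        "dataset_version": "v2024-05",
        "git_commit": os.environ.get("CI_COMMIT_SHA", "local"),
    })
    mlflow.log_param("learning_rate", 3e-4)
    mlflow.log_metric("val_f1", 0.91)  # placeholder value
    # Artifacts land under the run's artifact root, which on a real setup
    # would resolve to an S3 path such as s3://<bucket>/mlflow/...
    mlflow.log_text("run notes for the parsing experiment", "notes.txt")
    print("artifact root:", run.info.artifact_uri)
```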