Levi – Python, LLM, LangChain
Levi is a senior AI engineer with strong hands-on experience in Python, LLMs, LangChain, and LangGraph, focusing on agentic AI systems and applied automation. He has delivered multi-agent chatbots, document automation, and compliance solutions for business and financial domains. Screenings confirm his solid theoretical grounding, product-mindedness, and effective communication with both technical and non-technical stakeholders.
5 years of commercial experience
Main technologies
Additional skills
Direct hire: Possible
Experience Highlights
Senior AI Engineer
Built an automated data quality control and compliance validation system for a major financial institution, designed to monitor, flag, and report on data integrity issues across internal banking datasets. The system uses AI-powered agents to evaluate transactions and records against regulatory compliance rules, generating structured reports and alerts when anomalies or violations are detected. The pipeline was version-controlled and integrated into existing internal tooling for continuous monitoring.
- Designed and implemented an agentic AI pipeline for automated data quality checks across large-scale financial datasets;
- Built compliance validation logic to evaluate records against internal regulatory rules and flag potential violations;
- Developed automated HTML report generation summarizing data quality status and compliance outcomes per run;
- Integrated the system with internal data warehouse infrastructure for scheduled and event-driven pipeline execution;
- Version-controlled all workflow logic in a dedicated GitHub repository for auditability and team collaboration;
- Deployed workflows on a secured internal server with access controls aligned to financial data governance requirements;
- Collaborated with DevOps and data engineering teams to ensure pipeline reliability and maintainability.
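The validate-and-report step described above can be sketched in miniature. This is an illustrative stand-in, not the production system: the AI-agent evaluation is replaced by two hand-written rules, and the rule names, threshold, and record fields are invented for the example.

```python
from dataclasses import dataclass
from html import escape

# Hypothetical compliance rules: flag transactions above a reporting
# threshold or with a missing customer ID. Names and values are
# illustrative, not the institution's actual rule set.
REPORTING_THRESHOLD = 10_000

@dataclass
class Violation:
    record_id: str
    rule: str
    detail: str

def validate_record(record: dict) -> list[Violation]:
    """Evaluate one transaction record against the compliance rules."""
    violations = []
    if record.get("amount", 0) > REPORTING_THRESHOLD:
        violations.append(Violation(
            record["id"], "threshold",
            f"amount {record['amount']} exceeds {REPORTING_THRESHOLD}"))
    if not record.get("customer_id"):
        violations.append(Violation(
            record["id"], "missing_customer", "customer_id is empty"))
    return violations

def html_report(violations: list[Violation]) -> str:
    """Render a minimal HTML summary, one table row per violation."""
    rows = "".join(
        f"<tr><td>{escape(v.record_id)}</td><td>{escape(v.rule)}</td>"
        f"<td>{escape(v.detail)}</td></tr>"
        for v in violations)
    return ("<html><body><h1>Data quality report</h1>"
            "<table><tr><th>Record</th><th>Rule</th><th>Detail</th></tr>"
            f"{rows}</table></body></html>")

records = [
    {"id": "t1", "amount": 12_500, "customer_id": "c9"},
    {"id": "t2", "amount": 40, "customer_id": ""},
]
all_violations = [v for r in records for v in validate_record(r)]
print(len(all_violations))  # 2
```

In the real pipeline this report generation ran per execution, so each HTML file doubles as an audit artifact alongside the version-controlled workflow logic.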
Lead Developer
A multi-agent conversational system designed to answer business formation questions and guide users through completing orders end-to-end. The solution initially used workflow-based orchestration, with a classifier routing messages between specialized agents for information retrieval (RAG over a knowledge base), order processing via API integrations, handling out-of-scope queries, and collecting feedback. The system was later reimplemented on a more robust framework to improve reliability, control, and structured multi-step flows, with typed state management and validation. Session data is persisted in a database, with additional workflows extracting structured insights for reporting. The infrastructure runs on a secure, containerized environment.

- Architected a multi-agent system in n8n, routing queries between specialized agents (Information, Order Builder, Reject, and Feedback) via a GPT-4o-mini text classifier;
- Integrated pgvector on PostgreSQL for RAG-powered semantic retrieval over a chunked business formation knowledge base (33 semantically meaningful chunks with metadata);
- Built a Postgres-backed session store for persistent chat memory across multi-turn conversations;
- Built and exposed an HTTP MCP server (FastMCP/JSON-RPC) with four business formation tools (order_init, update_order, review_order, get_offering) consumed by Claude Sonnet agents;
- Migrated core order collection logic to LangGraph to support typed AgentState, deterministic field validation, and more reliable multi-step order completion;
- Enabled the agent to autonomously call backend APIs and submit business formation orders end-to-end;
- Deployed the full stack on a Tailscale-secured Docker server for safe internal access.
Lead Developer
An internal AI-powered chatbot enabling non-technical stakeholders, including executives and department heads, to query live business data (sales, marketing, advertising, and customer analytics) using natural language. Built a multi-agent architecture connected to a SQL data warehouse, with tools allowing the AI to understand schema context and return accurate, grounded responses. The system was secured within a private network for safe internal use and included a feedback loop to support continuous improvement.

- Built a full-stack conversational UI enabling natural language queries over live business data;
- Integrated a multi-agent AI system with an Azure SQL data warehouse, enabling real-time schema-aware querying;
- Architected the end-to-end agentic pipeline, from intent routing to query execution and response synthesis;
- Designed custom AI tools for schema introspection, query generation, and result interpretation;
- Demoed the live system to C-suite stakeholders, translating complex AI capabilities into clear business value;
- Built a feedback mechanism to capture user corrections and drive continuous agent refinement;
- Deployed the system on a zero-trust private network (Tailscale), ensuring secure internal-only access.
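The schema-introspection tool idea can be sketched as follows, with SQLite standing in for the Azure SQL warehouse: the agent calls a tool that returns a compact schema listing, which is then injected into the prompt so generated SQL stays grounded in real tables. The table and columns are invented for illustration.

```python
import sqlite3

def describe_schema(conn: sqlite3.Connection) -> str:
    """Return a compact 'table(col type, ...)' listing to ground the LLM prompt."""
    cur = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")
    lines = []
    for (table,) in cur.fetchall():
        # PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
        cols = conn.execute(f"PRAGMA table_info({table})").fetchall()
        col_desc = ", ".join(f"{c[1]} {c[2]}" for c in cols)
        lines.append(f"{table}({col_desc})")
    return "\n".join(lines)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL)")
print(describe_schema(conn))  # sales(id INTEGER, region TEXT, amount REAL)
```

Against Azure SQL the same tool would query `INFORMATION_SCHEMA.COLUMNS` instead of `sqlite_master`, but the contract is identical: a short, current schema string the agent can trust over its parametric memory.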
Model Developer
A machine learning system to predict the probability of hospitalization for elderly Mexican individuals within the next year, using demographic data, health indicators, and medical history. The project involved training and comparing multiple classification models across a high-dimensional dataset (thousands of features), reducing the input to under 50 clinically meaningful features through feature importance analysis, and achieving an AUC score above 0.9. The final system was deployed as a REST API with a Streamlit form-based UI, containerized with Docker for reproducible, production-ready delivery.
- Conducted full exploratory data analysis (EDA) on a large healthcare dataset with thousands of features targeting elderly Mexican patient populations;
- Trained and compared multiple classification models, including Decision Trees, Random Forests, and Gradient Boosting, to identify the best-performing approach;
- Applied feature importance analysis to reduce the input dimensionality from thousands of features to under 50, preserving predictive power while enabling a usable form-based UI;
- Achieved an AUC score above 0.9 on the final hospitalization risk prediction model;
- Built a REST API to serve real-time predictions from the trained model;
- Developed a Streamlit web interface with a patient intake form for live risk prediction demos;
- Containerized the full stack (model, API, UI, database) using Docker Compose for consistent deployment across environments.
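The importance-based reduction step can be sketched with the stdlib alone: score each feature by the absolute correlation of its values with the binary hospitalization label, then keep the top k. The real project derived importances from trained models (e.g. Random Forest feature importances); the data here is synthetic and the scoring is a simplified stand-in.

```python
import random
from statistics import fmean, pstdev

def correlation(xs, ys) -> float:
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = fmean(xs), fmean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0
    cov = fmean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

def select_top_features(X: dict, y: list, k: int) -> list[str]:
    """X maps feature name -> list of values; keep the k highest-scoring names."""
    scores = {name: abs(correlation(vals, y)) for name, vals in X.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Synthetic data: one informative feature, two pure-noise features.
random.seed(0)
y = [random.randint(0, 1) for _ in range(200)]
X = {
    "age": [60 + 20 * label + random.gauss(0, 5) for label in y],
    "noise_1": [random.gauss(0, 1) for _ in y],
    "noise_2": [random.gauss(0, 1) for _ in y],
}
print(select_top_features(X, y, 1))  # ['age']
```

Capping the model at under 50 such features is what made the Streamlit intake form practical: a clinician can realistically fill in dozens of fields, not thousands.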