David
From Lithuania (GMT+3)
11 years of commercial experience
Meet David, a Rust developer extraordinaire with experience in adjacent technologies such as C++. Focused on systems programming, he has worked in DevOps roles specializing in infrastructure, data processing, and traffic management at scale, expertise that translates well into any backend development project in Rust. His prior commercial experience as an architectural decision-maker makes him a proper Senior-level engineer with deep knowledge of the language and a knack for tackling even the toughest engineering problems.
Main technologies: Rust, Golang, Apache Hadoop
Additional skills
Ready to start: ASAP
Direct hire: Potentially possible
Experience Highlights
Architect
The project involved designing and implementing a complete infrastructure for a crypto hedge fund that had previously operated on approximately 400 virtual servers. The goal was to rebuild the infrastructure on just 20 owned bare-metal servers, cutting operating costs by over 90%. This required careful planning and execution across hardware and software to arrive at an optimal architecture. The final outcome is an efficient, cost-effective infrastructure whose only recurring expenses are colocation and electricity.
- Implemented infrastructure provisioning with Ansible;
- Implemented three separate test environments;
- Made it possible to test the full infrastructure on a single physical machine running many logical VMs;
- Architected and implemented the internal VLAN network;
- Provisioned ZFS on all servers;
- Deployed the HashiCorp Nomad orchestrator;
- Deployed HashiCorp Consul for service discovery;
- Deployed HashiCorp Vault for storing secrets (see the sketch after this list);
- Deployed Prometheus and VictoriaMetrics for metric storage and analysis;
- Deployed Alertmanager for infrastructure alerts;
- Developed alerting rules for the servers;
- Deployed Grafana for metric visualization;
- Deployed Datadog Vector for log forwarding;
- Deployed Elasticsearch and Kibana for log storage and viewing;
- Deployed over 50 services onto the Nomad orchestrator.
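Purely illustrative: a minimal Rust sketch of the kind of glue code services in a Consul/Vault stack use to read a secret from Vault's KV v2 HTTP API. The mount name, secret path, and environment-variable wiring are assumptions for the example, not details of the actual deployment.

```rust
// Minimal sketch: fetch a secret from Vault's KV v2 HTTP API (using the ureq crate).
use std::env;

fn read_vault_secret(path: &str) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
    let addr = env::var("VAULT_ADDR")?;   // e.g. http://127.0.0.1:8200
    let token = env::var("VAULT_TOKEN")?; // in practice injected by the orchestrator
    // KV v2 secrets live under /v1/<mount>/data/<path>; "secret" is the default mount.
    let url = format!("{addr}/v1/secret/data/{path}");
    let body: serde_json::Value = ureq::get(&url)
        .set("X-Vault-Token", &token)
        .call()?
        .into_json()?;
    // KV v2 nests the actual key/value payload under data.data.
    Ok(body["data"]["data"].clone())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // "db/readonly" is a hypothetical secret path for illustration.
    let creds = read_vault_secret("db/readonly")?;
    println!("db user: {}", creds["username"]);
    Ok(())
}
```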
Senior Back-end Developer
Zero-downtime migration of over 20 plain MySQL shards, holding more than 20 TB of data, to Vitess.
- Identified performance issues (e.g., sequential scans) in the existing MySQL queries;
- Provisioned new servers to take over the old shards;
- Dumped replicated data from the legacy ProxySQL setup to the new Vitess shards;
- Carefully switched traffic to the new Vitess shards without downtime (see the verification sketch after this list);
- Deprovisioned old legacy servers;
- Debugged and resolved any issues arising along the way;
- Completed all of the above for more than 20 shards holding over 10 TB of data in total.
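For illustration, a minimal Rust sketch (using the sqlx crate) of the kind of consistency check a careful cutover like this involves: comparing row counts between a legacy shard and its Vitess replacement. The connection strings and table names are hypothetical.

```rust
// Minimal sketch: compare row counts between a legacy MySQL shard and its
// Vitess replacement before switching traffic over.
use sqlx::mysql::MySqlPoolOptions;

#[tokio::main]
async fn main() -> Result<(), sqlx::Error> {
    // Hypothetical connection strings; vtgate speaks the MySQL protocol.
    let legacy = MySqlPoolOptions::new()
        .connect("mysql://user:pass@legacy-shard-01/app").await?;
    let vitess = MySqlPoolOptions::new()
        .connect("mysql://user:pass@vtgate-01/app").await?;

    // Table names come from a fixed, trusted list, so format!() is safe here.
    for table in ["orders", "users"] {
        let q = format!("SELECT COUNT(*) FROM {table}");
        let old: i64 = sqlx::query_scalar(&q).fetch_one(&legacy).await?;
        let new: i64 = sqlx::query_scalar(&q).fetch_one(&vitess).await?;
        assert_eq!(old, new, "row count mismatch in {table}");
    }
    Ok(())
}
```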
Senior Back-end Developer
Proprietary tooling for automating test infrastructure clusters with Chef in the DigitalOcean cloud. This was a breakthrough: previously, testing was possible only on bare-metal servers.
- Implemented Terraform code generation for test infrastructure (see the sketch after this list);
- Added pre-deployment logical checks that catch cluster configuration errors before tests run;
- Implemented Chef bootstrapping;
- Implemented NFS file-sharing functionality across the cluster;
- Created example clusters for everyone to try;
- Added unit tests for small, self-contained helper functions;
- Wrote thorough documentation so teammates could pick the tooling up quickly;
- Reviewed additional improvements contributed by teammates.
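For illustration, a minimal Rust sketch of the Terraform-generation approach: emitting a *.tf.json file (Terraform's JSON configuration syntax) that describes DigitalOcean droplets for a test cluster. The resource attributes and naming are assumptions for the example, not the proprietary tooling itself.

```rust
// Minimal sketch: generate Terraform JSON for a DigitalOcean test cluster.
use serde_json::json;
use std::{error::Error, fs};

fn main() -> Result<(), Box<dyn Error>> {
    // Hypothetical input: the size of the test cluster to generate.
    let nodes = 3;

    // Build one digitalocean_droplet resource per test node.
    let droplets: serde_json::Map<String, serde_json::Value> = (0..nodes)
        .map(|i| {
            (
                format!("test_node_{i}"),
                json!({
                    "name": format!("test-node-{i}"),
                    "region": "ams3",
                    "size": "s-2vcpu-4gb",
                    "image": "ubuntu-22-04-x64",
                }),
            )
        })
        .collect();

    // Terraform also accepts JSON configuration (*.tf.json), which is much
    // easier to generate programmatically than HCL.
    let config = json!({ "resource": { "digitalocean_droplet": droplets } });
    fs::write("cluster.tf.json", serde_json::to_string_pretty(&config)?)?;
    Ok(())
}
```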
Senior Back-end Developer
A metrics service that collects, aggregates, and forwards metrics to external data stores, such as Graphite, for viewing.
- Forked the original StatsD service for internal use at Vinted;
- Made the Rust StatsD implementation wire-compatible with the original Node.js StatsD (see the parsing sketch after this list);
- Added more unit tests;
- Analyzed performance issues as the company grew;
- Optimized it several times over the years, using ZeroMQ messaging and Cap'n Proto message encoding, until it handled over a million requests per second end to end;
- Rolled out the changes across the entire Vinted infrastructure via automated provisioning with Chef;
- Was responsible for reviewing additional modifications to that code.
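For illustration, a minimal Rust sketch of parsing the StatsD wire format ("name:value|type", with an optional "@rate" sampling suffix), which is the protocol the service had to stay compliant with. The struct and field names are illustrative, not the Vinted implementation.

```rust
// Minimal sketch: parse one line of the StatsD wire format.
#[derive(Debug, PartialEq)]
struct Metric<'a> {
    name: &'a str,
    value: f64,
    kind: &'a str, // "c" (counter), "g" (gauge), "ms" (timer), ...
    sample_rate: Option<f64>,
}

fn parse(line: &str) -> Option<Metric<'_>> {
    let (name, rest) = line.split_once(':')?;
    let mut parts = rest.split('|');
    let value: f64 = parts.next()?.parse().ok()?;
    let kind = parts.next()?;
    // Optional "@0.1"-style sampling suffix.
    let sample_rate = parts
        .next()
        .and_then(|p| p.strip_prefix('@'))
        .and_then(|r| r.parse().ok());
    Some(Metric { name, value, kind, sample_rate })
}

fn main() {
    assert_eq!(
        parse("requests:1|c|@0.1"),
        Some(Metric { name: "requests", value: 1.0, kind: "c", sample_rate: Some(0.1) })
    );
    println!("{:?}", parse("latency:23.5|ms"));
}
```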
Senior Back-end Developer
A new big-data ingestion pipeline for all tracking events, reimplemented with Apache Spark because the old Camus-based pipeline could no longer keep up with the scale.
- Architected the approach for dumping events from Kafka to HDFS (see the partitioning sketch after this list);
- Added unit tests;
- Stress-tested the pipeline at over 400k events per second;
- Debugged and resolved all issues along the way;
- Deployed all changes in production;
- Deprecated and removed the old event ingestion pipeline.
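The production pipeline itself was built with Apache Spark; purely for flavor, here is a minimal sketch, in Rust rather than Spark, of the hourly partitioning logic a Kafka-to-HDFS dump typically involves. The path layout and event shape are assumptions for the example.

```rust
// Minimal sketch: map an event to an HDFS-style hourly partition directory.
use chrono::{TimeZone, Utc};

struct Event {
    name: String,
    timestamp_secs: i64, // event time as a Unix timestamp
}

fn partition_path(base: &str, event: &Event) -> String {
    let ts = Utc.timestamp_opt(event.timestamp_secs, 0).unwrap();
    format!(
        "{base}/{name}/date={date}/hour={hour}",
        name = event.name,
        date = ts.format("%Y-%m-%d"),
        hour = ts.format("%H"),
    )
}

fn main() {
    let e = Event { name: "item_view".into(), timestamp_secs: 1_700_000_000 };
    // Prints something like /events/item_view/date=2023-11-14/hour=22
    println!("{}", partition_path("/events", &e));
}
```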