Ayse
From Bulgaria (GMT+3)
11 years of commercial experience
Lemon.io stats
2 projects done
1203 hours worked
Open to new offers
Ayse – Python, Linux, Docker
Meet one more lady in our long-bearded company! Ayse has rich experience in commercial Data Science projects backed by a Ph.D. degree in Information Systems. She worked on projects of different scales and led Data Science teams, which highlights her leadership and ownership capabilities. Ayse is also a co-founder of a pet project for predicting football match results, where she takes care of all tech-related tasks and data analysis. Ayse is open to new challenges that require efficient solutions!
Main technologies
Additional skills
Ready to start: To be verified
Direct hire: Potentially possible
Experience Highlights
Tech Lead
The project focuses on building a portfolio and opportunity management platform for football agents, which also allows them to do data-driven scouting on a global scale (roughly 600k players). The goal is to let agents manage their deals by recording and keeping track of the players and teams they represent, uploading their contracts and other legal documents, and performing data-driven scouting with proprietary algorithms to gain an advantage in a highly competitive market.
The main features are CRM/deal management tools, opportunity discovery tools, opportunity scoring and matching tools, legal & document management tools and sales tools.
- Built the data collection pipeline to collect data from various sources such as REST APIs, public sources like websites and social media, as well as user-generated data
- Built linking algorithms to match the different data sources to arrive at a unified view of the data
- Built data pipelines to process large amounts of raw data to be consumed by analytical solutions
- Developed a REST API backend to serve the platform
- Automated deployment of back-end and front-end APIs to Cloud Run using GitHub Actions
- Built ML algorithms to enable intelligent scouting by sifting through the 600k players
- Built ML algorithms to recommend the best matches between opportunities and assets (see the matching sketch below)
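A minimal sketch of the opportunity-to-player matching idea from the last highlight, assuming a scikit-learn nearest-neighbour search over scaled player features; the feature names and sample data are illustrative, not the production model:

```python
# Hypothetical player features as produced by the data pipeline.
import pandas as pd
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

players = pd.DataFrame({
    "player_id": [101, 102, 103],
    "age": [19, 27, 31],
    "minutes_played": [2100, 2900, 1500],
    "goals_per_90": [0.45, 0.30, 0.10],
    "market_value_eur": [2.5e6, 8.0e6, 1.2e6],
})
feature_cols = ["age", "minutes_played", "goals_per_90", "market_value_eur"]

# Scale features so no single attribute dominates the distance metric.
scaler = StandardScaler()
X = scaler.fit_transform(players[feature_cols])
index = NearestNeighbors(n_neighbors=2).fit(X)

# An "opportunity" expressed as a desired player profile (illustrative values).
opportunity = pd.DataFrame([{
    "age": 22, "minutes_played": 2500, "goals_per_90": 0.40, "market_value_eur": 3.0e6,
}])
distances, indices = index.kneighbors(scaler.transform(opportunity[feature_cols]))
print(players.iloc[indices[0]].assign(match_distance=distances[0]))
```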
Lead Data Scientist
The objective was to understand the financial dynamics behind football player transfer fees and how they change based on performance and other environmental factors. To this end, we built a stacked machine-learning model using two different base learners and a top-level meta-learner to outperform all existing methods in the literature (the stacking setup is sketched after the highlights below).
- Built a machine learning model that outperforms existing methods;
- Showed that not all dynamics are captured by the available data and that most other models suffer from data leakage, which makes them unusable in real life;
- Facilitated informed decisions on several player transfers.
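A minimal sketch of the stacking setup described above, assuming scikit-learn's StackingRegressor; the particular base learners, meta-learner, and synthetic data are illustrative rather than the actual model:

```python
# Two base learners feed a top-level meta-learner (Ridge) via stacking.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

stack = StackingRegressor(
    estimators=[
        ("gbr", GradientBoostingRegressor(random_state=0)),
        ("knn", KNeighborsRegressor(n_neighbors=10)),
    ],
    final_estimator=Ridge(alpha=1.0),
    cv=5,  # meta-learner is trained on out-of-fold predictions only
)

scores = cross_val_score(stack, X, y, cv=5, scoring="neg_mean_absolute_error")
print("MAE:", -scores.mean())
```

Training the meta-learner on out-of-fold predictions is one standard way to avoid the data-leakage problem called out in the highlights above.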
Lead Data Scientist
The aim of the project was to develop custom performance rating metrics where the performance is affected by external factors such as the environment or the competition. To this end, instead of relying on black-box machine learning models, the solution was to mathematically encode the most prominent environmental factors and quantify them in a functional manner to compute their aggregated impact on performance.
- Built a performance rating model for over 50,000 individuals;
- Built a novel time-spent-on-work function to quantify throughput objectively;
- Built a system of environmental factor aggregations (see the sketch below);
- Deployed a REST API and a front-end to dynamically perform scenario analysis by selecting different values for environmental factors and cross-referencing the performance values.
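A minimal sketch of the factor-aggregation idea described above: environmental factors are encoded as explicit functions and multiplied into the raw throughput. The factor names, functional forms, and constants here are hypothetical, not the actual metrics:

```python
import math

def time_spent_weight(hours_worked: float, saturation_hours: float = 8.0) -> float:
    """Hypothetical time-spent-on-work function with diminishing returns."""
    return 1.0 - math.exp(-hours_worked / saturation_hours)

def difficulty_factor(task_difficulty: float) -> float:
    """Scale output up when tasks are harder than the baseline (difficulty 1.0)."""
    return 1.0 + 0.3 * math.tanh(task_difficulty - 1.0)

def adjusted_rating(raw_throughput: float, hours_worked: float,
                    task_difficulty: float) -> float:
    """Aggregate the environmental factors multiplicatively into one rating."""
    rating = raw_throughput
    for factor in (time_spent_weight(hours_worked), difficulty_factor(task_difficulty)):
        rating *= factor
    return rating

print(adjusted_rating(raw_throughput=42.0, hours_worked=6.5, task_difficulty=1.4))
```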
Lead Data Scientist
A US-based medical informatics start-up wanted to understand the expected number of hospitalizations and the frequency of future procedures to optimize hospital staff and budget. They needed a machine learning pipeline that consumes historical hospitalization and procedure data to continuously build models and forecast procedure- and region-specific operation/hospitalization frequencies. The project became more challenging with the rise of COVID-19, which made standard forecasting techniques inapplicable.
- Built an end-to-end machine learning pipeline that builds unique forecasting models for 6 regions and hundreds of hospitals, using historical data as well as external factors such as working/non-working days and COVID-19 forecasts;
- Operationalized a Prophet model instead of standard time-series forecasting models (sketched below);
- Built a REST API to expose the model predictions for consumption by the rest of the app;
- Deployed the REST API onto AWS SageMaker for continuous training and forecasting.
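A minimal sketch of a per-region Prophet model with external regressors for working days and a COVID-19 signal, as referenced in the highlights; the column names and the toy history are illustrative:

```python
import pandas as pd
from prophet import Prophet

# Hypothetical training frame: one row per day for a single region.
history = pd.DataFrame({
    "ds": pd.date_range("2020-01-01", periods=365, freq="D"),
    "y": range(365),                       # daily hospitalization counts
})
history["is_working_day"] = (history["ds"].dt.dayofweek < 5).astype(int)
history["covid_forecast"] = 0.0            # external COVID-19 incidence signal

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.add_regressor("is_working_day")
model.add_regressor("covid_forecast")
model.fit(history)

# Regressor values must also be supplied for the forecast horizon.
future = model.make_future_dataframe(periods=30)
future["is_working_day"] = (future["ds"].dt.dayofweek < 5).astype(int)
future["covid_forecast"] = 0.0             # would come from an upstream forecast
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```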
Lead Data Scientist
One of the largest chemical manufacturers in Germany wanted to prioritize its sales opportunities to increase staff efficiency and conversion. They used SAP as their CRM and needed a machine learning model to score each opportunity by its conversion likelihood, as well as a custom dashboard to visualize their sales funnel. The in-house data science team used R as its primary analytics programming language, so she built the machine-learning pipeline and custom dashboard on the R analytics stack.
- Built a Dockerized data processing pipeline and an opportunity scoring model using XGBoost and R (the scoring idea is sketched below);
- Deployed a custom, interactive Shiny/JavaScript dashboard to visualize the sales funnel using Sankey and network diagrams;
The project was initially for their polyol department; however, they liked the solution so much that it ended up scaling to the rest of the business.
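The project itself was delivered on the R stack; purely as an illustration of the same scoring idea in Python (the profile's main language), a sketch with the xgboost package might look like this. The feature names and synthetic data are hypothetical:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Hypothetical CRM opportunity features and conversion labels.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(1, 6, n),          # sales stage
    rng.normal(50_000, 20_000, n),  # opportunity value
    rng.integers(0, 365, n),        # days since last contact
])
y = rng.integers(0, 2, n)           # 1 = converted

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

# Conversion-likelihood score per open opportunity, used to rank the funnel.
scores = model.predict_proba(X_test)[:, 1]
print(scores[:5])
```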
Lead Data Scientist
A leading client in the robotic vacuum cleaner industry was losing market share in their Amazon online store. They wanted to be competitive on price but did not want to harm their brand perception with prices that were too low.
- Applied NLP to understand customer sentiment about the product's functionality. The results showed that customers with pets found the hair-cleaning capacity underwhelming and, therefore, the price too high for those products;
- Built a price-to-sold-quantity model to find the ideal price point (sketched below). The results showed that top-end product prices should stay high to maintain brand perception, because the competitors' high-end products did not meet customer expectations and were therefore not a threat; the mid-range products, however, needed a price reduction to be competitive;
- Built the data ingestion pipeline, the NLP sentiment analysis model, and the price optimization model;
- Conducted exploratory analysis and BI reports to highlight important differences;
- Deployed and operationalized the model that continuously learns and fine-tunes the estimations.
Accomplishments:
- 8% increase in online sales through the client's Amazon store;
- The client understood the weak points of their products, which guided marketing campaigns and R&D;
- As the client's first advanced data science project, it was demonstrably valuable to the business, and the client used it as an example to refine their analytics strategy.
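A minimal sketch of the price-to-sold-quantity idea as a log-log regression, whose slope approximates price elasticity of demand; the numbers and candidate price are illustrative, not the client's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical weekly observations for one product: (price, units sold).
prices = np.array([299, 279, 259, 249, 239, 229, 219, 209], dtype=float)
units  = np.array([120, 135, 150, 170, 185, 210, 240, 265], dtype=float)

X = np.log(prices).reshape(-1, 1)
y = np.log(units)

model = LinearRegression().fit(X, y)
elasticity = model.coef_[0]          # below -1 suggests demand is price-elastic
print(f"estimated price elasticity: {elasticity:.2f}")

# Predicted quantity and revenue at a candidate price point.
candidate_price = 199.0
predicted_units = np.exp(model.predict(np.log([[candidate_price]]))[0])
print(f"predicted units at {candidate_price}: {predicted_units:.0f}, "
      f"revenue: {candidate_price * predicted_units:.0f}")
```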
Data Scientist
The project was a controller that ingested 9D inertial sensor data and operator commands to reliably and safely operate an underwater observation vehicle. The task was to build the controller on Arduino and Raspberry Pi to allow low-latency action decisions.
- Built a PID controller (sketched below);
- Implemented Unscented Kalman Filter for signal processing;
- Developed the main library for controllers for the company.
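A minimal sketch of a discrete PID controller of the kind referenced above, written in Python as it might run on the Raspberry Pi side (the Arduino portion would be C/C++); the gains, time step, and depth-holding example are illustrative:

```python
class PID:
    """Discrete PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: hold a target depth of 2.0 m given a (filtered) depth reading.
controller = PID(kp=1.2, ki=0.05, kd=0.3, dt=0.02)   # 50 Hz control loop
thrust_command = controller.update(setpoint=2.0, measurement=1.8)
print(thrust_command)
```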
Researcher
The project aimed at privacy-preserving anomaly detection from the CCTV video feeds of pedestrian areas. The objective was to respond in a timely manner to anomalous activity such as unexpected objects (e.g., vehicles) or unexpected motion (e.g., fights).
- Built a computer vision model that estimated anomalous activity at 60 fps (see the sketch below);
- Wrote and published several research papers at international conferences;
- Won a 2nd place research award.
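A minimal sketch of motion-based anomaly flagging on a video feed using OpenCV background subtraction; this shows one plausible building block, not the published model, and the video file name and threshold are hypothetical:

```python
import cv2

cap = cv2.VideoCapture("pedestrian_area.mp4")          # hypothetical CCTV clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Flag frames where an unusually large area is moving (e.g. a vehicle
    # entering a pedestrian zone) for downstream, privacy-preserving handling.
    moving_ratio = cv2.countNonZero(mask) / mask.size
    if moving_ratio > 0.15:
        print("anomalous activity candidate at frame",
              int(cap.get(cv2.CAP_PROP_POS_FRAMES)))

cap.release()
```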