My name is Kirill Mavreshko, and I have been working as a software developer for the last 14 years. I have solved all sorts of challenging problems, ranging from web development to high-performance distributed systems analyzing real-time data. I've also had my share of struggles and learned a lot of lessons along the way, which taught me how to build working, reliable software from scratch and make sure a project will have a long and prosperous life.
In recent years I have taken a deep dive into the field of machine learning (ML). I have had the opportunity to study and apply many popular ML algorithms and deep learning architectures, as well as to become intimately familiar with various ML frameworks and the intricacies of their inner workings.
Some quick facts about me:
IPONWEB is an AI, data & engineering company that specialises in programmatic and real-time advertising technology and infrastructure. BidSwitch was created to help solve many of the underlying technical challenges and inefficiencies that hamper platform interconnectivity and trading at the infrastructure level.
Helped to start several internal projects that later grew into new products or parts of them, most notably the BidSwitch UI, internal financial reporting, ad traffic forecasting, and automatic creative approval.
Originally started as a web frontend and backend developer using Python, Django, and Angular.
I spent my last years in the company primarily focused on high-load server-side projects related to processing and analyzing large amounts of data and communicating with other services. During this time I used C++ and Python, gained extensive experience with PostgreSQL and with non-relational Cassandra and MongoDB clusters, and worked with many critical parts of modern IT infrastructure, including automatic monitoring, testing, real-time error reporting, and continuous delivery.
- Design and implementation of an online marketplace of actors playing the role of Santa Claus (Ded Moroz in Russia). Integrated the project with the QIWI online payment system. Technologies: Python, Django, JavaScript, HTML, CSS, PostgreSQL.
Smart Links is a Ukrainian-Russian marketing and advertising company.
- Design and implementation of a high-performance distributed morphology system for the Russian and Ukrainian languages using C and ZeroMQ, and of a distributed HTML content analyzer for an advertisement system. Wrote Python bindings for some parts of the system.
- Design and implementation of an architecture for a universal web store of digital services, selling things like VoIP telephony, web domains, and hosting. Integrated the store with online payment systems: Webmoney, Yandex Money, PayPal. Technologies: Python, Django, JavaScript, HTML, CSS, PostgreSQL.
- Design and development of the backend and, partially, the frontend (the admin site) of the portal. Performed optimizations to withstand high traffic. Technologies: Python, Django, JavaScript, MySQL, PostgreSQL, Nginx, HTML, CSS, SVN, Trac.
Gzt.ru was closed in 2011.
Co-founder.
- Was responsible for the development of both the backend and the frontend parts of the portal.
Integrated the portal's database with those of multiple real estate agencies.
Technologies: Python, Django, PostgreSQL.
- Website maintenance & support
Backend Web developer. Technologies: Python, Django, PostgreSQL, Linux, Mercurial.
- Design and implementation of several web-oriented systems using Perl, PHP, .NET, and Java, with MySQL and PostgreSQL as DB backends. OS: Linux (mostly) / Windows.
Keras-transformer is a library implementing the nuts and bolts for building (Universal) Transformer models with Keras. It allows you to assemble a multi-step Transformer model in a flexible way.
The library supports positional encoding and embeddings, attention masking, memory-compressed attention, and ACT (adaptive computation time). All pieces of the model (like self-attention, the activation function, layer normalization) are available as Keras layers, so, if necessary, you can build your own version of the Transformer by re-arranging or replacing some of them.
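To give a flavor of what such a building block looks like, here is a minimal sketch of a single-head scaled dot-product self-attention layer in Keras. It illustrates only the mechanism and is not keras-transformer's actual (more general, multi-head, maskable) API:

```python
import tensorflow as tf
from tensorflow.keras import layers

class SimpleSelfAttention(layers.Layer):
    """Single-head scaled dot-product self-attention (illustration only)."""
    def __init__(self, d_model, **kwargs):
        super().__init__(**kwargs)
        self.d_model = d_model
        # Learned projections for queries, keys, and values.
        self.q_proj = layers.Dense(d_model, use_bias=False)
        self.k_proj = layers.Dense(d_model, use_bias=False)
        self.v_proj = layers.Dense(d_model, use_bias=False)

    def call(self, x):
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
        scores = tf.matmul(q, k, transpose_b=True) / self.d_model ** 0.5
        return tf.matmul(tf.nn.softmax(scores, axis=-1), v)

# Attend over a batch of two 10-step sequences of 64-dimensional vectors.
seq = tf.random.normal((2, 10, 64))
print(SimpleSelfAttention(d_model=64)(seq).shape)  # (2, 10, 64)
```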
The (Universal) Transformer is a deep learning architecture described in arguably two of the most impressive DL papers of 2017 and 2018: "Attention Is All You Need" and "Universal Transformers" by the Google Brain team.
Their authors introduced the idea of multi-head self-attention (applied recurrently in the Universal Transformer), which has inspired a big wave of new research models ever since, demonstrating new state-of-the-art results in many natural language processing tasks, including translation, parsing, question answering, and even algorithmic tasks.
My impression is that, indeed, unlike the now-classical recurrent neural networks, the Transformer trains much faster, measured both in time per epoch and in wall clock time. It is also capable of efficiently handling multiple long-term dependencies in texts.
For instance, when applied to text generation, the Transformer creates more coherent stories whose quality doesn't degrade as they grow longer, as is typically the case with recurrent networks.
An implementation of the paper "Mimicking Word Embeddings using Subword RNNs". MIMICK avoids the OOV (out-of-vocabulary) problem by imitating the original pre-trained word embeddings with a small character-based model, thus making <UNK> word embeddings unnecessary.
Benefits:
A demonstration of the pre-trained model, which can mimic word embeddings for OOV words and show which of the known words are closest to the produced vector:
The word "trintiful" is fictional but MIMICK "understands" that this is probably some kind of adjective. It assumed that "unfrogable" is a negating adjective. It was also able to guess that "Kaa" looks like an asian name, and "Karll" is something european. "abroktose" turns out to be something vaguely scientific, probably about chemistry or biology, which also makes sense.
KERL is a collection of various Reinforcement Learning algorithms and related techniques implemented purely using Keras.
The goal of the project is to create implementations of state-of-the-art RL algorithms, as well as a platform for developing and testing new ones, while keeping the code simple and portable thanks to Keras and its ability to use various backends. This makes KERL very similar to OpenAI Baselines, only with a focus on Keras.
What works in KERL:
All algorithms support Pop-Art adaptive normalization of returns, described in DeepMind's paper "Learning values across many orders of magnitude". This greatly simplifies training, often making it possible to just throw the algorithm at a task and get a decent result.
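For the curious, the core of Pop-Art fits in a few lines. Here is a NumPy sketch of the idea (not KERL's actual code): the running statistics of the returns are updated, and the final linear layer of the value head is rescaled at the same time so that its un-normalized outputs stay unchanged.

```python
import numpy as np

class PopArt:
    """Sketch of Pop-Art return normalization (van Hasselt et al., 2016).

    Tracks running estimates of the mean and scale of the returns, and
    rescales the final linear layer (w, b) of the value head so that its
    un-normalized outputs are preserved exactly across every update.
    """
    def __init__(self, beta=1e-3):
        self.mean, self.sq_mean, self.beta = 0.0, 1.0, beta

    @property
    def std(self):
        return np.sqrt(max(self.sq_mean - self.mean ** 2, 1e-8))

    def update(self, returns, w, b):
        returns = np.asarray(returns)
        old_mean, old_std = self.mean, self.std
        # Exponential moving estimates of the first two moments.
        self.mean += self.beta * (np.mean(returns) - self.mean)
        self.sq_mean += self.beta * (np.mean(returns ** 2) - self.sq_mean)
        # "Preserve Outputs Precisely": keeps std * y + mean unchanged.
        w *= old_std / self.std
        b[:] = (old_std * b + old_mean - self.mean) / self.std

    def normalize(self, returns):
        # Targets the value head is actually trained against.
        return (returns - self.mean) / self.std
```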
With KERL you can quickly train various agents to play Atari games from pixels and dive into the details of their implementation. Here's an example of such an agent trained with KERL (YouTube video): Deep RL A2C Agent Playing Ms. Pacman
Limitations:
Currently KERL does not support continuous control tasks and so far has been tested only on various Atari games supported by the Arcade Learning Environment via OpenAI Gym.
Avalanche is a simple deep learning framework written in C++ and Python. Unlike the majority of existing tools, it is based on OpenCL, an open computing standard. This allows Avalanche to work on pretty much any GPU, including those made by Intel and AMD, even quite old models.
The project was created as an attempt to better understand how modern deep learning frameworks like TensorFlow do their job, and to practice programming GPUs. Like any decent deep learning framework these days, Avalanche is based on a computational graph model. It supports automatic differentiation, broadcasted operations, and automatic memory management, and can utilize multiple GPUs if needed.
The framework also works as a backend for Keras, so if you already know Keras, you can begin using Avalanche without having to learn anything new.
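If you are curious what sits at the heart of such a framework, here is a toy sketch of reverse-mode automatic differentiation on a computational graph. It only illustrates the general mechanism; Avalanche's actual implementation (in C++, with OpenCL kernels) is of course far more involved:

```python
# Toy reverse-mode autodiff on a computational graph (illustration only).
class Node:
    """A value in the graph plus rules for passing gradients to parents."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Node(self.value + other.value,
                    ((self, lambda g: g), (other, lambda g: g)))

    def __mul__(self, other):
        return Node(self.value * other.value,
                    ((self, lambda g: g * other.value),
                     (other, lambda g: g * self.value)))

    def backward(self, grad=1.0):
        self.grad += grad
        for parent, backprop in self.parents:
            parent.backward(backprop(grad))

x, y = Node(2.0), Node(3.0)
z = x * y + x          # the forward pass builds the graph
z.backward()           # the reverse pass accumulates gradients
print(x.grad, y.grad)  # dz/dx = y + 1 = 4.0, dz/dy = x = 2.0
```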
CartGP is a very simple and minimalistic C++/Python library implementing Cartesian Genetic Programming (CGP). The library currently supports the classic form of CGP, where nodes are arranged into a grid and no recurrent connections are allowed.
Check this Jupyter notebook to see how to use the library from Python.
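To illustrate what "classic CGP" means here: a program is encoded as a fixed-length list of integer genes, where each node selects a function and the indices of its inputs among the program inputs and earlier nodes. A minimal evaluator for such a genome might look like this (an illustration of the encoding, not CartGP's actual API):

```python
# Minimal evaluator for a classic CGP genome (illustration only).
import operator

FUNCTIONS = [operator.add, operator.sub, operator.mul]

def evaluate(genome, output_gene, inputs):
    """genome: list of (function_index, input_a, input_b) triples.

    Node indices start right after the program inputs, and each node may
    only reference inputs or previously evaluated nodes (no recurrence).
    """
    values = list(inputs)
    for func_idx, a, b in genome:
        values.append(FUNCTIONS[func_idx](values[a], values[b]))
    return values[output_gene]

# Genome encoding x^2 + y: node 2 = x*x, node 3 = node 2 + y.
genome = [(2, 0, 0),   # node 2: mul(inputs[0], inputs[0]) = x*x
          (0, 2, 1)]   # node 3: add(node 2, inputs[1]) = x*x + y
print(evaluate(genome, output_gene=3, inputs=[3.0, 4.0]))  # 13.0
```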
I love gardening. After months of studying how to grow plants the right way, I created a calculator that uses mathematical optimization (linear programming) to make sure my mix of fertilizers always fits the recommendations of experts in plant nutrition and contains everything necessary for plants' healthy growth.
With this calculator, you can quickly create a fully balanced Dr. Jacob Mittleider Weekly-Feed mix using virtually any fertilizers locally available to you.
An extra benefit of this tool is that it can guide the gardener step by step through the process of creating the mix by pointing out problems with the existing recipe.
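Under the hood, the problem maps naturally onto a linear program: minimize the total amount (or cost) of fertilizers, subject to the mix delivering at least the recommended dose of each nutrient. Here is a sketch with SciPy (the fertilizer analyses and targets below are made-up numbers, not the calculator's real data):

```python
from scipy.optimize import linprog

# Fraction of each nutrient (N, P, K) per gram of each fertilizer.
# These analyses are invented for the example.
fertilizers = [[0.34, 0.00, 0.00],   # "ammonium nitrate"
               [0.11, 0.48, 0.00],   # "MAP"
               [0.00, 0.00, 0.60]]   # "potassium chloride"
targets = [10.0, 5.0, 8.0]           # grams of N, P, K the mix must deliver

# Minimize total grams used, subject to A_ub @ x <= b_ub.  The "at least
# the target of every nutrient" constraints are flipped into <= form.
result = linprog(
    c=[1.0, 1.0, 1.0],  # objective weights (could be per-gram prices)
    A_ub=[[-f[n] for f in fertilizers] for n in range(3)],
    b_ub=[-t for t in targets],
    bounds=[(0, None)] * 3,  # you can't add a negative amount
)
print(result.x)  # grams of each fertilizer in a balanced mix
```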
5-course specialization by deeplearning.ai
by Stanford University on Coursera
Applied Mathematics and Computer Science
Apart from being a software developer and AI enthusiast, I enjoy bicycling, 3D printing and modeling devices in Fusion 360.
I am also familiar with electronics, to the level where I can design and build my own few-hundred-watt power supply entirely from scratch, fully understanding how it works. I have also had experience designing and programming MCU-based devices (various lines of the STM32 family).
I love plants and have been gardening with my family for several years.