
Adapting and Evolving: Year Two of Aggregata
As the second year of Aggregata draws to a close, we take the opportunity to review the changes and provide some insight into future developments.
I’m a software engineer from Germany. My focus is to share the fascination of machine learning with anyone interested while growing my own skills in the process.
Transparency is an important factor in building user trust in a product. Today we want to take a closer look at our use of machine learning methods in Aggregata.
Automatically answering questions about images is a powerful tool for making a variety of processes faster and more efficient. In this article, we introduce Matcha-Quarta, a neural network trained for this task.
Image segmentation is crucial for object recognition and advanced image processing: it perceives object descriptions and associates them with their respective image regions. Today we will explore use cases for such a model.
Actor-critic reinforcement learning is a significant advancement in the field, combining the advantages of both policy-based and value-based approaches. In this post, I would like to introduce this algorithm.
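The combination of both approaches can be sketched in a few lines: a critic learns a value estimate, and its TD error scales the actor's policy-gradient update. Below is a minimal toy sketch of my own on a two-armed bandit (not code from the post itself; all names and the setup are illustrative assumptions).

```python
import math
import random

def softmax(logits):
    # numerically stable softmax over action logits
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

random.seed(0)
logits = [0.0, 0.0]        # actor parameters (policy logits)
value = 0.0                # critic parameter (scalar value estimate)
alpha_actor, alpha_critic = 0.1, 0.1
arm_means = [0.2, 0.8]     # arm 1 pays more on average

for _ in range(2000):
    probs = softmax(logits)
    action = 0 if random.random() < probs[0] else 1
    reward = arm_means[action] + random.gauss(0, 0.1)
    td_error = reward - value              # critic's TD error (one-step, no next state)
    value += alpha_critic * td_error       # value-based update (critic)
    for a in range(2):                     # policy-gradient update (actor),
        grad = (1.0 if a == action else 0.0) - probs[a]
        logits[a] += alpha_actor * td_error * grad  # scaled by the critic's TD error

probs = softmax(logits)
print(probs)  # the better arm should dominate
```

The critic's estimate acts as a baseline, which reduces the variance of the actor's gradient compared with using raw rewards.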
Dimensionality reduction is an increasingly important part of the machine learning process as data sets grow larger. Today we will look at Principal Component Analysis, a linear transformation method, applied to low-dimensional vector spaces to potentially improve the learning process.
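As a quick illustration of the linear transformation involved, here is a minimal PCA sketch via eigendecomposition of the covariance matrix (my own toy data, not an example from the post):

```python
import numpy as np

rng = np.random.default_rng(42)

# Correlated 3-D data that mostly varies along one direction.
base = rng.normal(size=(200, 1))
X = np.hstack([base, 2.0 * base, 0.5 * base]) + rng.normal(scale=0.1, size=(200, 3))

Xc = X - X.mean(axis=0)                  # center the data
cov = np.cov(Xc, rowvar=False)           # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # sort descending by variance
components = eigvecs[:, order[:2]]       # keep the top 2 principal axes

X_reduced = Xc @ components              # project onto the 2-D subspace
explained = eigvals[order][:2].sum() / eigvals.sum()
print(X_reduced.shape, round(explained, 3))
```

Because the data is nearly one-dimensional by construction, the first two components capture almost all of the variance.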
Image captioning is important because it provides a textual representation of the content and context of an image, improving accessibility and understanding for all users, especially those with visual impairments. In this post, we introduce BLIP for this use case.
Sentiment analysis is an increasingly important part of the evaluation of news from social networks. In the following article, we would like to present a pretrained transformer that is tailored for this task: the Emotion Text Classifier.
The Naive Bayes classifier is a simple and efficient algorithm for classification tasks that assumes independence of the features. This post aims to introduce this algorithm to the reader.
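The "naive" independence assumption means the joint likelihood factorizes into per-feature terms. A minimal Gaussian Naive Bayes sketch (my own toy example, not code from the post) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes with well-separated feature means.
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def fit(X, y):
    # Per class: feature means, feature variances, and class prior.
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0), len(Xc) / len(X))
    return params

def predict(params, x):
    best, best_score = None, -np.inf
    for c, (mean, var, prior) in params.items():
        # log prior + sum of per-feature Gaussian log-likelihoods
        # (the sum is where the independence assumption enters)
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)
        score = np.log(prior) + log_lik
        if score > best_score:
            best, best_score = c, score
    return best

params = fit(X, y)
pred_a = predict(params, np.array([0.2, -0.1]))
pred_b = predict(params, np.array([2.8, 3.1]))
print(pred_a, pred_b)
```

Working in log space avoids numerical underflow when many features are multiplied together.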
Large amounts of data can be challenging for many reasons. Today, I will present an algorithm that can be used to reduce the size of a data set: Random Projections.
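The idea behind random projections is that a random linear map to a much lower dimension approximately preserves pairwise distances (this is the Johnson-Lindenstrauss lemma). A minimal sketch with a Gaussian projection matrix, using toy data of my own:

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, k = 50, 1000, 200                   # 50 points, 1000-D, projected to 200-D
X = rng.normal(size=(n, d))
R = rng.normal(size=(d, k)) / np.sqrt(k)  # scaled Gaussian projection matrix
X_proj = X @ R                            # one matrix multiply reduces the dimension

# Compare one pairwise distance before and after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(X_proj[0] - X_proj[1])
print(X_proj.shape, round(proj / orig, 2))
```

Unlike PCA, the projection matrix is data-independent, so it is cheap to build even for very large data sets.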
T5 is a powerful language model capable of performing a wide range of text-to-text tasks, including text classification, language translation, and text summarization. The aim of this post is to introduce this pretrained transformer to the reader.
Optimizing a (complex) function can be a difficult task. Here I present a library that I use and consider well suited to such tasks, and show an implementation of a comparatively simple optimization task as a usage example.
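Since the library isn't named in this summary, here is a generic plain-gradient-descent sketch of the kind of simple optimization task meant (the function and all names are my own illustrative choices, not taken from the post):

```python
def f(x):
    # simple convex objective with minimum at x = 3, f(3) = 1
    return (x - 3.0) ** 2 + 1.0

def grad_f(x):
    return 2.0 * (x - 3.0)

x = 0.0      # starting point
lr = 0.1     # learning rate
for _ in range(200):
    x -= lr * grad_f(x)   # step against the gradient

print(round(x, 4), round(f(x), 4))
```

For non-convex or higher-dimensional objectives, a dedicated optimization library offers more robust methods than this hand-rolled loop.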