Whether it is ChatGPT (built on the underlying GPT-4) or research projects on autonomous driving, these projects have one thing in common: large amounts of data. Large amounts of data pose several challenges to developers of deep learning models, such as enormous hardware requirements, difficult or impossible visualization, (extremely) long training times, or poor training results due to the curse of dimensionality. To reduce the dimensionality of a dataset while still retaining the relevant information, various dimensionality reduction algorithms have been developed. In this article I would like to introduce the principle of random projections.
Mathematical basics
To talk about the mathematical foundations of random projections, let's assume we have data from a high-dimensional Euclidean space, say $\mathbb{R}^d$, where $d \in \mathbb{N}$.
The Euclidean property of the vector space is necessary to compute distances between two (or more) points in our vector space.
We further assume that $d$ is large and that we want to bring the data from $\mathbb{R}^d$ into a space of small dimension, i.e. $\mathbb{R}^k$ with $k \in \mathbb{N}$ and $k \ll d$. In this case we can use the Johnson-Lindenstrauss lemma.
The Johnson-Lindenstrauss lemma tells us that we can implement the process described above: for every tolerance $0 < \varepsilon < 1$ and every set of $n$ points there exists a $k = O(\varepsilon^{-2} \log n)$ such that the points from $\mathbb{R}^d$ can be embedded in $\mathbb{R}^k$ while approximately retaining the pairwise distances between the points. 1 2 3 4
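In its usual quantitative form, the lemma guarantees that a linear map $f: \mathbb{R}^d \to \mathbb{R}^k$ exists such that for all pairs of points $u, v$ in the data set

$$(1 - \varepsilon)\,\lVert u - v \rVert^2 \;\leq\; \lVert f(u) - f(v) \rVert^2 \;\leq\; (1 + \varepsilon)\,\lVert u - v \rVert^2,$$

where $\varepsilon \in (0, 1)$ is the chosen distortion tolerance.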
Preserving the distances between the points is very important here, as the distances describe the differences between the data points. If the distances were not (roughly) preserved, two (potentially) very different data points could appear very similar in $\mathbb{R}^k$, even though they are not objectively similar.
The remaining step in reducing the data set is to project the data points. This is done with random projections by generating a random matrix $R \in \mathbb{R}^{k \times d}$ and computing $X^{RP}_{k \times N} = R_{k \times d}\, X_{d \times N}$, where $N$ is the number of data points in the data set.
In short, a random matrix is a matrix whose elements are random variables.
Many approaches can be used to generate the elements $r_{ij}$ of $R$. Often the $r_{ij}$ are assumed to follow a Gaussian normal distribution, and there are efficient methods to generate such elements. Alternatively, it has been shown that the generation can be done with a much simpler distribution: 1 5 6 7

$$r_{ij} = \sqrt{3} \cdot \begin{cases} +1 & \text{with probability } \tfrac{1}{6} \\ \;\;\,0 & \text{with probability } \tfrac{2}{3} \\ -1 & \text{with probability } \tfrac{1}{6} \end{cases}$$
With the simpler distribution presented here, greater efficiency can be achieved in terms of computation time, even if the results may be slightly worse.
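As a minimal sketch of this idea (not the code from the application example below), the sparse distribution above can be sampled with NumPy; the dimensions `d`, `k` and the dummy data matrix `X` are placeholder values:

```python
import numpy as np

d, k = 10_000, 1_000  # original and reduced dimension (example values)
rng = np.random.default_rng(42)

# Sparse entries: +1 with prob. 1/6, 0 with prob. 2/3, -1 with prob. 1/6,
# scaled by sqrt(3) so that the entries have unit variance.
R = np.sqrt(3) * rng.choice([1.0, 0.0, -1.0], size=(d, k), p=[1 / 6, 2 / 3, 1 / 6])

X = rng.normal(size=(100, d))       # 100 dummy data points in R^d
X_reduced = (X @ R) / np.sqrt(k)    # projected points in R^k, distances roughly preserved
```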
Application example
As an application example, we will use a data set that ideally has a relatively large number of input dimensions. Images are well suited for this because they inherently have a high number of input dimensions. In the context of this article, we have chosen to work with the colorectal_histology dataset. This dataset contains images of different tissue types, with a focus on colorectal cancer. An algorithm is used to classify this dataset into a total of eight different tissue types. 8
The images in the dataset are 150 pixels high, 150 pixels wide, and distributed over three color channels, i.e. they are available as RGB images. This gives us a dataset with a total input dimension of $150 \cdot 150 \cdot 3 = 67{,}500$.
Our goal is to show that random projections can improve classification. To write an example, we start with all the necessary imports:
Note: This script runs on Tensorflow version 2.12.0.
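The original listing is not preserved here; a minimal set of imports that covers the steps sketched below could look like this (TensorFlow Datasets, scikit-learn, NumPy, and Matplotlib are assumed to be installed alongside TensorFlow):

```python
import time

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
from sklearn.random_projection import SparseRandomProjection

print(tf.__version__)  # expected: 2.12.0
```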
We then load the dataset from TensorFlow Datasets and process it so that we have all the images and all the labels:
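A sketch of this step, loading the single `train` split of `colorectal_histology` into memory and flattening each image into one feature vector:

```python
# Load all examples of the colorectal_histology dataset as NumPy arrays.
ds = tfds.load("colorectal_histology", split="train", as_supervised=True, batch_size=-1)
images, labels = tfds.as_numpy(ds)

# Flatten every 150x150x3 image into a single vector of 67,500 features
# and scale the pixel values to [0, 1].
images = images.reshape(images.shape[0], -1).astype("float32") / 255.0
labels = labels.astype("int32")

print(images.shape, labels.shape)
```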
To get meaningful results, the next step is to create a train-test split of the data.
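The split ratio used originally is not reproduced here; the sketch below assumes an 80/20 split and draws the training indices with `np.random.choice`:

```python
num_samples = images.shape[0]
train_size = int(0.8 * num_samples)  # assumed 80/20 split

np.random.seed(42)
# Draw unique training indices; replace=False prevents duplicated indices.
train_idx = np.random.choice(num_samples, size=train_size, replace=False)
test_idx = np.setdiff1d(np.arange(num_samples), train_idx)

x_train, y_train = images[train_idx], labels[train_idx]
x_test, y_test = images[test_idx], labels[test_idx]
```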
Note: The argument `replace=False` within `np.random.choice(...)` prevents the duplication of indices. Without this addition, we would not get a correct train-test split.
The next step is to create a model that uses all the input data to perform a classification on the data set. This is not a specially optimized model, but a simple neural network to illustrate the process. In reality, a model optimized for the task would be used.
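The original architecture is not shown here; a simple fully connected network of the kind described could look like this (the layer sizes and the helper name `build_model` are assumptions):

```python
def build_model(input_dim: int) -> tf.keras.Model:
    """Simple fully connected classifier for the eight tissue classes."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_dim,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(8, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

model_full = build_model(input_dim=x_train.shape[1])
```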
We then train the model. The model can be automatically evaluated.
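A sketch of the training run on the full-dimensional data, keeping the history for the plot discussed below; the epoch and batch-size values are placeholders, not the original settings:

```python
history_full = model_full.fit(
    x_train, y_train,
    validation_data=(x_test, y_test),
    epochs=20,      # placeholder value
    batch_size=32,  # placeholder value
)

# Plot training vs. test accuracy over the epochs.
plt.plot(history_full.history["accuracy"], label="train accuracy")
plt.plot(history_full.history["val_accuracy"], label="test accuracy")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```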
From this graph we can see that our training is unfortunately not very stable. There are several reasons for this:
- Incorrect setting of the training epochs,
- Incorrect setting of the batch size,
- Incorrect architecture of the neural network,
- A data set that is too small or
- Input dimensions of the data set are too high.
Ultimately, unfortunately, this can be due to many factors. However, the input dimensionality of the data set is one factor we can directly influence. To do this, we now use the random projections already discussed.
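A sketch of this step using scikit-learn's `SparseRandomProjection`; the `eps` value controls the distance-preservation tolerance from the Johnson-Lindenstrauss lemma and is an assumed setting, not necessarily the one used originally:

```python
projector = SparseRandomProjection(eps=0.1, random_state=42)  # eps is an assumed value

start = time.time()
x_train_reduced = projector.fit_transform(x_train)  # chooses k automatically from eps
x_test_reduced = projector.transform(x_test)
execution_time = time.time() - start

print("Reduced shape:", x_train_reduced.shape)
print(f"Execution time: {execution_time:.2f} s")
```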
A closer look at the reduced dataset shows that the number of features could be reduced substantially compared to the original 67,500, i.e. to a small fraction of the original data volume. To demonstrate the effect of dimensionality reduction, we use a neural network with an identical architecture and retrain it completely on the reduced dataset:
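Reusing the `build_model` helper sketched above, retraining on the reduced features could look like this (same placeholder hyperparameters as before):

```python
model_reduced = build_model(input_dim=x_train_reduced.shape[1])

history_reduced = model_reduced.fit(
    x_train_reduced, y_train,
    validation_data=(x_test_reduced, y_test),
    epochs=20,      # placeholder value
    batch_size=32,  # placeholder value
)
```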
The new course of the training can now be visualized as follows:
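The corresponding plot for the reduced data set, mirroring the earlier one:

```python
plt.plot(history_reduced.history["accuracy"], label="train accuracy (reduced)")
plt.plot(history_reduced.history["val_accuracy"], label="test accuracy (reduced)")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()
```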
We will then be able to compare all the results produced here:
| Name | Full data set | Reduced data set |
|---|---|---|
| Maximum training accuracy | | |
| Final training accuracy | | |
| Maximum test accuracy | | |
| Final test accuracy | | |
| Number of input pixels | | |
| Execution time | | |
| Training time | | |
All computations were done on Google Colab hardware. The experiments can be repeated there, although the execution time, for example, may vary depending on the hardware Google Colab allocates.
The parameter Execution time specifies the amount of time required for the dimensionality reduction with the random projections algorithm.
All in all, it can be seen that dimensionality reduction as a preprocessing step can help to improve the accuracy of a model. In principle, this is not limited to neural networks, but can also be helpful for other machine learning techniques. Equally important is the reduction in training time, which saves costs.
The improvement in training and testing behavior when training with the reduced data set can be attributed to the curse of dimensionality.
Advantages and disadvantages of random projections
Random projections have a number of advantages and disadvantages due to the stochastic nature of the algorithm. [Principal Component Analysis (PCA)](https://en.wikipedia.org/wiki/Principal_component_analysis) is often used as the counterpart to this algorithm for comparison.
Advantages
The process has a number of positive attributes: 1 7 9

- Random projections are incredibly fast. The time complexity of random projections is $O(dkN)$, which is significantly lower than the $O(d^2N) + O(d^3)$ of principal component analysis.
- Random projections are not as strongly affected by outliers. This is probably due to the stochastic nature of the method.
Disadvantages
Compared to principal component analysis, random projections have a number of disadvantages: 9 10

- Random projections do not preserve the structure of the data as accurately as principal component analysis, which is, in theory, the optimal linear method for reducing the dimension of a data set.
Areas of application
Random projections are already used in some areas of big data. Here are some examples: 9 11
- Speech and language processing: In this area, sound recordings can introduce noise into the input data. Projecting the data onto a random lower-dimensional subspace yields results comparable to conventional dimension reduction methods such as principal component analysis.
- Image and video processing: Both image and video processing deal with very large amounts of data, where the efficiency of random projections is advantageous.
- Bioinformatics: Gene expression profiles can be viewed as matrices with tens of thousands of continuous values. The high dimensionality leads to computational burden and curse-of-dimensionality problems. Therefore, dimensionality reduction techniques are often used in machine learning tasks to mitigate these problems.