Graphics from Pawel Czerwinski – https://unsplash.com/de/@pawel_czerwinski

Enabling the Power of Dimensionality Reduction: An Introduction to Principal Component Analysis

Dimensionality reduction is an increasingly important part of the machine learning workflow as data sets grow larger. Today we look at a linear transformation method that projects data onto a lower-dimensional vector space and can thereby improve the learning process. Principal Component Analysis is introduced for this purpose.

Henrik Bartsch

The texts in this article were partly composed with the help of artificial intelligence and corrected and revised by us. The following services were used for the generation:

Introduction

Data reduction is an important step in data analysis and machine learning, revealing hidden patterns and relationships in data. One of the most popular and effective techniques for unsupervised dimensionality reduction is Principal Component Analysis (PCA). PCA is a linear transformation that projects the original data into a lower-dimensional space defined by the eigenvectors of the covariance matrix. This allows PCA to capture the most important features of the data while drastically reducing its dimensionality, which makes it an ideal tool for data visualization, feature extraction, and data compression. In addition, PCA is a deterministic algorithm, meaning that it always produces the same results given the same input data. This makes it easier to reproduce and validate the results of PCA, which is not always the case with other unsupervised dimensionality reduction techniques. In the following blog entry, we want to introduce this algorithm.

Mathematical Basics

The basic idea of Principal Component Analysis is relatively simple: we try to find out whether certain variables in our input data are redundant, i.e. whether a part of this data can already be represented by another part of our data and is therefore “hidden” in it. This property is called correlation. We also examine how much information is contained in an input variable. We measure this by the variance, which allows us to eliminate variables that deviate little from their mean and therefore carry little information. 1 2

To understand these mathematical properties, they are briefly introduced here. 1 Suppose we have two (or more) data points of the form $x_1, x_2 \in \mathbb{R}^n$ with $n \in \mathbb{N}$. In this case, the covariance of the two data points can be defined as follows:

$$\kappa(x_1, x_2) := \frac{1}{n} (x_1 - \bar{x_1})^T (x_2 - \bar{x_2}).$$

The expression $\bar{x}$ for a vector $x \in \mathbb{R}^n$ represents the mean of the vector over all its components.

With this definition, we can now introduce the standard deviation $\sigma(x)$ of a vector $x \in \mathbb{R}^n$:

$$\sigma(x) := \sqrt{\kappa(x, x)}.$$

We have used the term variance above. The variance is the square of our standard deviation $\sigma(x)$, which would cost us another computational step. 3 Accordingly, it is typically not used directly in algorithms; the standard deviation is used instead.

In the next step, we still need the correlation between two data points, which we can introduce using the last two definitions:

$$\chi(x_1, x_2) := \frac{\kappa(x_1, x_2)}{\sigma(x_1)\sigma(x_2)}.$$
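To make these definitions concrete, here is a minimal NumPy sketch that follows the formulas above (including the 1/n normalization from the covariance definition). The file name, helper functions and example vectors are our own illustration and not part of the code used later in this post:

covariance_example.py
import numpy as np

def covariance(x1, x2):
    # Covariance of two vectors, following the 1/n definition above
    n = x1.shape[0]
    return (x1 - x1.mean()) @ (x2 - x2.mean()) / n

def std_dev(x):
    # Standard deviation as the square root of the covariance of x with itself
    return np.sqrt(covariance(x, x))

def correlation(x1, x2):
    # Correlation: covariance normalized by both standard deviations
    return covariance(x1, x2) / (std_dev(x1) * std_dev(x2))

# Arbitrary example vectors, purely illustrative
x1 = np.array([1.0, 2.0, 3.0, 4.0])
x2 = np.array([2.0, 4.0, 6.0, 8.0])

print(covariance(x1, x2))   # 2.5
print(correlation(x1, x2))  # 1.0, since x2 is a linear function of x1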

To find uncorrelated principal components, a number of properties from linear algebra are needed. We start with the properties of eigenvalues and eigenvectors: 1 4

A vector $v \in \mathbb{R}^m$ with $m \in \mathbb{N}$ is said to be an eigenvector of a matrix $A \in \mathbb{R}^{m \times m}$ if there exists a scalar value $\lambda \in \mathbb{R}$ such that

$$Av = \lambda v$$

is satisfied. In this case, we denote $\lambda$ as the eigenvalue of $A$ for the eigenvector $v$.

Put simply, this means that multiplying such a vector by the given matrix (a transformation that can in general contain both stretching and rotation components) merely stretches this particular vector by a factor $\lambda$.

Such a definition can be used to perform what is known as an eigenspace decomposition. This can be described as follows:

For a given matrix $A \in \mathbb{R}^{m \times m}$, let $v_1, \dots, v_m \in \mathbb{R}^m$ be the eigenvectors with the corresponding eigenvalues $\lambda_1, \dots, \lambda_m$. In this case we can collect them in the matrices $V \in \mathbb{R}^{m \times m}$ and $\Lambda \in \mathbb{R}^{m \times m}$:

$$V = (v_1, \dots, v_m),$$

$$\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_m).$$

In this instance, we can decompose the eigenspace with

$$A V = V \Lambda.$$
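As a brief numerical illustration (not part of the original derivation), NumPy's np.linalg.eig returns exactly these two objects, and we can check the relation $AV = V\Lambda$ directly:

eigendecomposition_example.py
import numpy as np

# Small symmetric example matrix (arbitrary values, purely illustrative)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues, V = np.linalg.eig(A)   # the columns of V are the eigenvectors
Lambda = np.diag(eigenvalues)       # diagonal matrix of the eigenvalues

# A V and V Lambda agree up to floating-point error
print(np.allclose(A @ V, V @ Lambda))  # True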

This decomposition is a useful property for Principal Component Analysis. After all, we are looking for input variables that are correlated with each other: such variables can in principle be represented by linear combinations of other variables and therefore constitute redundancy in our data. An eigenspace decomposition allows us to carry out this analysis for the entire data set. Directions with particularly large eigenvalues have a high variance and thus a high information content, which is what we are interested in; directions with small eigenvalues have a low variance and contribute little to the information in the data set. The eigenvectors are the principal components here, forming a transformed coordinate system in which we can represent our data more efficiently.

Because it searches for linear combinations of the input variables, the algorithm is limited to linear dependencies between the variables and therefore cannot capture non-linear relationships.

To get all this information, we need to do an eigenspace decomposition on our covariance matrix. We can represent our covariance matrix as follows:

$$K = \begin{pmatrix} \kappa(x_1, x_1) & \cdots & \kappa(x_1, x_m) \\ \vdots & \ddots & \vdots \\ \kappa(x_m, x_1) & \cdots & \kappa(x_m, x_m) \end{pmatrix}.$$

We can now apply an eigenspace decomposition to this matrix. By filtering for the largest eigenvalues (and thereby determining the number of resulting features), we can use the eigenvectors belonging to the $l \in \mathbb{N}$ largest eigenvalues to represent our reduced data. We call these eigenvectors principal components to emphasize their connection to the algorithm. These principal components are uncorrelated and orthogonal to each other, i.e. they are not redundant and cannot be represented by a combination of the other principal components. 1 5
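Putting the pieces together, the whole recipe can be sketched in a few lines of NumPy. This is not the implementation we use later (the application example below relies on scikit-learn); the file name and helper function are our own illustration of the eigendecomposition-based procedure described above, with l as the chosen number of principal components:

pca_from_scratch.py
import numpy as np

def pca_reduce(X, l):
    # X is expected to have shape (n_samples, n_features)
    # Center the data so that the covariance computation matches the definition above
    X_centered = X - X.mean(axis=0)

    # Covariance matrix of the features (shape: n_features x n_features);
    # np.cov uses a 1/(n-1) normalization, which only rescales the eigenvalues
    # and does not change the eigenvectors
    K = np.cov(X_centered, rowvar=False)

    # Eigendecomposition; eigh is appropriate because K is symmetric
    eigenvalues, eigenvectors = np.linalg.eigh(K)

    # Sort the eigenvectors by decreasing eigenvalue and keep the first l of them
    order = np.argsort(eigenvalues)[::-1]
    components = eigenvectors[:, order[:l]]

    # Project the centered data onto the principal components
    return X_centered @ components

# Example: reduce 100 samples with 10 features down to 3 dimensions
X = np.random.rand(100, 10)
X_reduced = pca_reduce(X, l=3)
print(X_reduced.shape)  # (100, 3)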

Application Example

In the following, we use training data similar to that in our blog post on the random projection algorithm in order to ensure comparability within this discipline. Again, we specifically use version 2.12.0 of TensorFlow to avoid errors in the code. The first step is to import all the necessary libraries:

pca.py
import tensorflow_datasets as tfds
import tensorflow as tf
import numpy as np

import math

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import InputLayer, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import sparse_categorical_crossentropy
from sklearn.decomposition import PCA

Next, we download the colorectal_histology dataset. This contains image data with an image size of $150 \times 150 \times 3$, i.e. 67,500 values per image. We want to reduce the dimensionality of this input data in the hope of improving the training accuracy of the neural networks we train for classification. The image data is reshaped directly so that all features of an image are contained in a single vector, which simplifies the later dimensionality reduction.

pca.py
hist_data = tfds.load("colorectal_histology")

df_train = tfds.as_dataframe(hist_data["train"])
images = df_train["image"].values

# Flatten each image into a single feature vector
temp = []
for element in images:
    element = np.reshape(element, [-1])
    temp.append(element)

images = np.array(temp)
labels = df_train["label"].values

The next step is to set a percentage for a train-test split. In this case, we use an $80\%$-$20\%$ split, which is a common choice.

pca.py
train, test = 0.8, 0.2
amount_train, amount_test = math.ceil(images.shape[0] * train), math.floor(images.shape[0] * test)
print(amount_train, amount_test)

In connection with this, it is necessary to extract the corresponding training and test data from all the data. This is done in the following code section:

pca.py
indices_train = np.random.choice(images.shape[0], amount_train, replace=False)
indices_train = np.sort(indices_train)

indices_test = []
for i in range(images.shape[0]):
    if i not in indices_train:
        indices_test.append(i)

indices_test = np.array(indices_test)

train_x, train_y, test_x, test_y = [], [], [], []

train_x = images[indices_train]
train_y = labels[indices_train]

train_x, train_y = tf.convert_to_tensor(train_x), tf.convert_to_tensor(train_y)

test_x = images[indices_test]
test_y = labels[indices_test]

test_x, test_y = tf.convert_to_tensor(test_x), tf.convert_to_tensor(test_y)

We then define a neural network to classify our data.

pca.py
model = Sequential([
    InputLayer(input_shape=(67500,)),
    Dense(units=128, activation="relu"),
    Dense(units=128, activation="relu"),
    Dense(units=64, activation="relu"),
    Dense(units=8, activation="softmax")  # colorectal_histology contains 8 classes
])

optimizer = Adam(learning_rate=10e-4)
model.compile(optimizer=optimizer,
          loss=sparse_categorical_crossentropy,
          metrics=["accuracy"])
model.summary()

In the next step, we train this with our training data and display the results during training.

pca.py
history = model.fit(x=train_x, y=train_y,
                epochs=20, batch_size=64,
                validation_data=(test_x, test_y))

Finally, we can perform a specific test run of the neural network on our test data.

pca.py
model.evaluate(x=test_x, y=test_y)

From the training history we can see that our training is unfortunately not very stable. There could be several reasons for this:

  • An unsuitable number of training epochs,
  • An unsuitable batch size,
  • A flawed neural network architecture,
  • A data set that is too small, or
  • Input dimensions of the data set that are too high.

Now we get to the really interesting part: Dimensionality reduction using principal component analysis. We can implement this in a few lines of code using scikit-learn:

pca.py
pca = PCA()
reduced_images = pca.fit_transform(images)
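A note on the snippet above: PCA() without arguments keeps min(n_samples, n_features) components, which is 5,000 here, since the data set contains 5,000 images. If you want to choose the target dimension yourself, or check how much variance the retained components explain, scikit-learn also offers the following (shown only as an optional variant with our own variable names; the rest of this post keeps the default):

pca_variance_example.py
# Keep only as many components as are needed to explain 95% of the variance
pca_95 = PCA(n_components=0.95)
reduced_images_95 = pca_95.fit_transform(images)

# Fraction of the total variance captured by the retained components
print(pca_95.explained_variance_ratio_.sum())
print(reduced_images_95.shape)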

The reduced output data is then used to create the reduced training and test data sets.

pca.py
reduced_train_x = reduced_images[indices_train]

reduced_test_x = reduced_images[indices_test]

Now we need to redefine our neural network so that it can correctly process the reduced input data and does not reuse the weights from the previous training run, which could distort the final result.

pca.py
model = Sequential([
    InputLayer(input_shape=(5000,)),  # 5000 principal components after the reduction
    Dense(units=128, activation="relu"),
    Dense(units=128, activation="relu"),
    Dense(units=64, activation="relu"),
    Dense(units=8, activation="softmax")
])

optimizer = Adam(learning_rate=10e-4)
model.compile(optimizer=optimizer, loss=sparse_categorical_crossentropy, metrics=["accuracy"])
model.summary()

Finally, we train and test the neural network again to get the data we need.

pca.py
history = model.fit(x=reduced_train_x, y=train_y,
                epochs=20, batch_size=64,
                validation_data=(reduced_test_x, test_y))
pca.py
model.evaluate(x=reduced_test_x, y=test_y)

As an illustration, we again track the training accuracy and the test accuracy during this run.

We can compare all the results generated here after applying the above code.

| Name | Complete Dataset | Reduced Dataset |
| --- | --- | --- |
| Maximum Training Accuracy | 59% | 99.2% |
| Final Training Accuracy | 58.38% | 97% |
| Maximum Test Accuracy | 53.7% | 57% |
| Final Test Accuracy | 42.4% | 56.5% |
| Input Dimension | 67500 | 5000 |
| Execution Time | - | ≈ 13 min |
| Training Time | ≈ 240 s | 13 s |

All computations were performed on Google Colab hardware. The experiments can be repeated there, although the execution time may vary depending on the hardware available at any given time.

The Execution Time parameter indicates how long the dimensionality reduction using Principal Component Analysis took.

Overall, it can be seen that dimensionality reduction as part of preprocessing can be very helpful in improving the accuracy of a model. In principle, this is not limited to neural networks, but can also be helpful for other machine learning techniques. Equally important is the reduction of training time, which leads to cost savings.

The improvement in training and testing behavior when training with the reduced data set can be attributed to the curse of dimensionality.

Advantages and Disadvantages

In the following, we examine the advantages and disadvantages of the method used here. In particular, we compare Principal Component Analysis with Random Projections, which is also a popular algorithm in this area.

Advantages

  1. Distortions: Due to the property of Principal Component Analysis to maintain orthogonality by using eigenspaces, we do not introduce distortions into the data. Random projections are only approximately orthogonal, which can introduce bias. 6

This property is only satisfied if the dependencies between the individual features are indeed linear. If non-linearities occur, this property is not necessarily met.

  2. Optimality: The use of Principal Component Analysis leads to optimal results because we use mathematically exact properties (as far as possible) instead of relying on approximations, as is the case with random projections. This usually means that the dimension of the reduced data is smaller than it would be with random projections. 6

Disadvantages

  1. Runtime: Although Principal Component Analysis provides better results, this comes at the cost of a longer runtime. 6 7 8 In our example, Principal Component Analysis requires a runtime that is $\approx 6.5$ times longer than random projections. This is mainly due to the numerically expensive computation of eigenvalues and eigenvectors, which can become problematic for increasingly large data sets. Random projections are therefore more suitable for very high-dimensional data.

In principle, it is possible to find a compromise between optimal results and computation time by using algorithms to approximate the eigenvalues and eigenvectors under certain conditions. Examples of such algorithms are the power iteration method or the Lanczos method. 6
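As a rough sketch of what such an approximation looks like, the power iteration method repeatedly multiplies a random start vector by the matrix and re-normalizes it; under mild assumptions (in particular a sufficiently large gap between the largest and second-largest eigenvalue) the vector converges towards the dominant eigenvector. The file name, function and example matrix below are purely illustrative:

power_iteration_example.py
import numpy as np

def power_iteration(A, num_iterations=1000):
    # Approximates the dominant eigenvector and eigenvalue of a square matrix A
    v = np.random.rand(A.shape[0])

    for _ in range(num_iterations):
        v = A @ v
        v = v / np.linalg.norm(v)  # re-normalize to avoid overflow

    # Rayleigh quotient as an estimate of the corresponding eigenvalue
    eigenvalue = v @ A @ v
    return eigenvalue, v

# Small symmetric example matrix (arbitrary values)
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
eigenvalue, eigenvector = power_iteration(A)
print(eigenvalue)  # close to the largest eigenvalue of A (approximately 4.62)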

  2. Similarity: In some examples, random projections are better than Principal Component Analysis at preserving similarity when transforming the data into a lower-dimensional vector space. 6

  3. Linearity: Principal Component Analysis can only capture linear dependencies in the data. If the relevant relationships are non-linear, the algorithm cannot produce good results.

  4. Outliers: The presence of outliers in the data can make the results significantly worse than they would be without the outliers in the data set. This is because the principal components are altered by the outliers and no longer necessarily reflect the actual structure of the data set. 9

As a workaround, outlier analysis can be applied to the data set, or the variables can be normalized. Normalizing the variables is recommended for Principal Component Analysis. 10 11
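A minimal sketch of such a normalization step, using scikit-learn's StandardScaler before fitting the PCA (the file name is our own; the variable names follow the application example above):

pca_scaled_example.py
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# Scale every feature to zero mean and unit variance before the reduction
scaler = StandardScaler()
scaled_images = scaler.fit_transform(images)

pca = PCA()
reduced_images = pca.fit_transform(scaled_images)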

Fields of Application

Principal component analysis can theoretically be used in many areas. Here are some examples of where this algorithm is used: 6 9

  1. Marketing: When analyzing customer data, PCA can be used to filter out irrelevant factors from data sets that only have a minor influence on purchasing behaviour. This allows advertising to be more targeted and sales to be increased.

  2. Statistical Analysis: In a statistical model with many features, the complexity caused by many features can reduce the quality of the model. Reducing the model parameters through PCA can increase the quality and the probability of correct prediction.

  3. Image Processing: When analyzing satellite imagery, there are often large data sets that are difficult to analyze in their raw form. Reducing the data using PCA helps to simplify the analysis and improve the results.

  4. Text Analysis: Due to the high dimensionality of text data, reduction is sometimes necessary. PCA can perform such a reduction.

Sources

Footnotes

  1. imsc.uni-graz.at

  2. link.springer.com

  3. wikipedia.com

  4. wikipedia.com

  5. ibm.com

  6. users.ics.aalto.fi

  7. arxiv.org

  8. medium.com

  9. wikipedia.com

  10. link.springer.com

  11. wikipedia.org