The texts in this article were partly generated by artificial intelligence and corrected and revised by us.

## Classification

Neural networks are powerful learning models from machine learning that can learn even complex patterns from data. They are commonly used for various classification tasks, such as identifying objects in images or predicting sentiments in text. But what if we want to use neural networks for regression tasks, such as estimating home prices or forecasting company sales? How can we adapt neural networks to handle continuous outputs instead of discrete outputs? In this blog post, we will address these questions and present some of the options and techniques available for using neural networks for regression tasks.

## Regression as a task

Basically, regression is about approximating a function as accurately as possible by another function. There can be various reasons for this: for example, evaluating the original function may take too long (a trained neural network, by contrast, is typically fast to evaluate), the function may be too complicated, or a number of other reasons.

Regression is also called *regression analysis* according to Wikipedia.^{1} In the following, however, only the term *regression* is used.

One regression method is the *method of least squares* ^{2}, where at each data point the error between the original function $f$ and the replacement function $f^*$ is calculated,
squared, and summed over all data points:

$e := \sum_{i=1}^n \left( f(x_i) - f^*(x_i) \right)^2.$

The total error here is $e$; in the formula, $n \in \mathbb{N}$ stands for the total number of data points. The task of a regression algorithm is now to minimize this error:

$\min \left( \sum_{i=1}^n \left( f(x_i) - f^*(x_i) \right)^2 \right).$
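To make this concrete, here is a minimal sketch of least-squares minimization with NumPy. The target function, sample values, and noise level are made up for illustration; `np.polyfit` minimizes exactly the summed squared error from the formula above (up to the choice of model class, here a line).

```python
import numpy as np

# Hypothetical example: recover f(x) = 2x + 1 from noisy samples with a
# linear replacement function f*(x) = a*x + b chosen to minimize the
# summed squared error.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, size=x.shape)

# np.polyfit performs a least-squares fit of a degree-1 polynomial
a, b = np.polyfit(x, y, deg=1)

# total squared error e of the fitted replacement function
error = np.sum((y - (a * x + b)) ** 2)
print(a, b, error)
```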

## Loss metrics

There are many different loss metrics to calculate corresponding errors in the data.

*Mean Absolute Error*: Here, the error is computed as the mean of the absolute deviations:

$\frac{1}{n} \sum_{i=1}^n \left| f(x_i) - f^*(x_i) \right|.$

*Mean Squared Error*: Here, each individual error is squared and the results are then averaged:

$\frac{1}{n} \sum_{i=1}^n \left( f(x_i) - f^*(x_i) \right)^2.$

*Mean Squared Logarithmic Error*: Here, the values are first shifted by one and logarithmized, and the squared differences are then averaged:

$\frac{1}{n} \sum_{i=1}^n \left( \log(f(x_i) + 1) - \log(f^*(x_i) + 1) \right)^2.$

There are a number of other loss metrics beyond these.

In the following, the *Mean Squared Error* is used. This is a good basis and often leads to good results. Sometimes other loss functions yield better final results, but this depends on the problem.
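The three metrics above can be computed directly with NumPy; the sample values below are made up for illustration, with `f_true` playing the role of $f(x_i)$ and `f_pred` the role of $f^*(x_i)$.

```python
import numpy as np

f_true = np.array([0.0, 0.5, 1.0, 0.5])
f_pred = np.array([0.1, 0.4, 0.9, 0.6])

mae = np.mean(np.abs(f_true - f_pred))                      # Mean Absolute Error
mse = np.mean((f_true - f_pred) ** 2)                       # Mean Squared Error
msle = np.mean((np.log1p(f_true) - np.log1p(f_pred)) ** 2)  # Mean Squared Logarithmic Error
print(mae, mse, msle)
```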

## Implementation using TensorFlow

In the following, a simple regression task is solved: the sine function

$f: \mathbb{R} \rightarrow \mathbb{R}: x \mapsto \sin(\pi x)$

is approximated by a neural network on the interval $x \in [0, 1]$. The sine function lends itself very well to this example, since on this interval its values satisfy

$f(x) \in [0, 1] \enspace \forall x \in [0, 1]$

and the data is thus already normalized. This sine function is visualized below:

### Generation of the data set

Before any dataset can be generated, imports are necessary to work later:
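The original import block is not included in this version of the article; a plausible set, assuming NumPy, TensorFlow, and Matplotlib are used later, would be:

```python
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
```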

Then we define a function in which the sine function is evaluated.
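The original function definition is not shown; a straightforward version (the name `f` is an assumption) would be:

```python
import numpy as np

def f(x):
    """Evaluate the target function f(x) = sin(pi * x)."""
    return np.sin(np.pi * x)
```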

Now we generate $100$ equidistant points on the interval $[0,1]$ as the training data set and $200$ equidistant points on the same interval as the test data set.
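This generation step might look as follows (the variable names are assumptions, not the original code):

```python
import numpy as np

def f(x):
    return np.sin(np.pi * x)

# 100 equidistant training points and 200 equidistant test points on [0, 1]
x_train = np.linspace(0.0, 1.0, 100)
y_train = f(x_train)
x_test = np.linspace(0.0, 1.0, 200)
y_test = f(x_test)
```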

Here with we have all necessary data to perform an appropriate training with a neural network.

### Training the model

Next, we define a corresponding neural network:
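The tuned architecture from the original post is not reproduced here; as a stand-in, a small fully connected network for this 1D-to-1D regression (layer sizes are guesses) could look like:

```python
import tensorflow as tf

# Assumed architecture: two hidden ReLU layers and a linear output unit,
# since the network must produce a continuous value rather than a class.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # linear output for regression
])
```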

This neural network configuration arose from hyperparameter optimization. However, smaller or hand-guessed networks also work very well for this regression task.

Next, the `optimizer`, the `loss` function, and the `metric` to be minimized have to be defined:
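A plausible compile step, assuming the Adam optimizer and the Mean Squared Error discussed above (the original configuration is not shown):

```python
import tensorflow as tf

# Model as sketched earlier; the layer sizes are assumptions
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])

# MeanSquaredError serves here as both the loss and the reported metric
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.MeanSquaredError()],
)
```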

Now the neural network can be trained.
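A training call could look like this; the epoch count and batch size are assumptions, not values from the original post:

```python
import numpy as np
import tensorflow as tf

# Training data as generated above
x_train = np.linspace(0.0, 1.0, 100)
y_train = np.sin(np.pi * x_train)

# Assumed architecture and compile settings (the originals are not shown)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mean_squared_error",
              metrics=["mean_squared_error"])

history = model.fit(x_train, y_train, epochs=50, batch_size=10, verbose=0)
```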

As a final step, the neural network can still be evaluated to find out the error and accuracy on the test data set:
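The evaluation step, sketched with the same assumed model and data as above; `model.evaluate` returns the loss followed by each configured metric:

```python
import numpy as np
import tensorflow as tf

x_train = np.linspace(0.0, 1.0, 100)
y_train = np.sin(np.pi * x_train)
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(np.pi * x_test)

# Assumed architecture and settings (the originals are not shown)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mean_squared_error",
              metrics=["mean_squared_error"])
model.fit(x_train, y_train, epochs=50, batch_size=10, verbose=0)

# Returns [loss, metric]; both are MSE here, so they should coincide
results = model.evaluate(x_test, y_test, verbose=0)
print(results)
```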

Here, two identical results are obtained, because `MeanSquaredError` was used both as the loss and as the metric. As an alternative metric, `MeanAbsoluteError` would also be conceivable.

### Visualization of the results

The neural network from above was trained by us for different numbers of epochs. A comparison between the actual function to be approximated and the output of the neural network is visualized below:
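The original plots are not included in this version; a sketch of how such a comparison could be produced, under the same assumptions about the model as above, is:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import tensorflow as tf

x_train = np.linspace(0.0, 1.0, 100)
y_train = np.sin(np.pi * x_train)
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(np.pi * x_test)

# Assumed architecture and settings (the originals are not shown)
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(1,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(x_train, y_train, epochs=50, batch_size=10, verbose=0)

# Compare the target function with the network output on the test points
y_pred = model.predict(x_test, verbose=0).ravel()
plt.plot(x_test, y_test, label="sin(pi*x)")
plt.plot(x_test, y_pred, "--", label="network output")
plt.xlabel("x")
plt.legend()
plt.savefig("comparison.png")
```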

Furthermore, we calculated and plotted the squared error at each point for each number of training epochs:

It is relatively clear that the regression does not converge at the same rate at all points. This is a common problem with many regression algorithms.

This example was a relatively simple example. Often the dimensions of both the input and output data are significantly higher. Accordingly, a learning process can require more time due to the higher complexity of the data set.

## Notes

As shown above, neural networks are suited for use in regression problems. However, it is important to ensure that the results are rigorously tested so that errors in interpretation do not occur in critical areas of the regression and go undetected.