## Testing analysis class

The purpose of testing is to compare the outputs of the neural network against the targets in an independent testing set. This assesses the quality of the model before deployment.

This is the last part of a set of documents that explain how to use the main methods of OpenNN. Before continuing, it is advisable to read the previous chapter, ModelSelection class.

The easiest and most common way to create a testing analysis object is from a neural network object and a data set object:

TestingAnalysis testing_analysis(&neural_network, &data_set);

For classification problems, the most common testing method is the confusion matrix, which allows us to evaluate the accuracy of the model. It can be calculated with the following code:

Tensor<Index, 2> confusion = testing_analysis.calculate_confusion();

It is also possible to calculate the testing errors, which allow us to compare the accuracy of the model's predictions on the testing instances against the training and selection instances. The following method returns a vector with four components: the sum squared error, the mean squared error, the root mean squared error, and the normalized squared error.

Tensor<type, 1> testing_errors = testing_analysis.calculate_testing_errors();

For function regression applications, it is usual to calculate the errors on the testing instances. It is also common to compute basic error statistics and to plot error histograms. However, the most standard method of testing a neural network for function regression is linear regression analysis. The command is the following:

const Tensor<TestingAnalysis::LinearRegressionAnalysis, 1> linear_regression_analysis = testing_analysis.perform_linear_regression_analysis();

linear_regression_analysis(0).print();

If you need more information, visit the TestingAnalysis class reference documentation.