Testing analysis class
The goal of testing is to evaluate the neural network’s performance by comparing its outputs to the target values in an independent test set. This helps assess the model’s quality before deployment.
This is the last part of a set of documents explaining how to use OpenNN's main methods. Before continuing, it is advisable to read the previous chapter, which covers the ModelSelection class.
The easiest and most common way to create a testing analysis object is from the neural network and data set objects:
TestingAnalysis testing_analysis(&neural_network, &data_set);
The most common testing method for classification problems is the confusion matrix, which compares the predicted classes against the actual ones and from which the model's accuracy can be evaluated.
Tensor<Index, 2> confusion = testing_analysis.calculate_confusion();
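As an illustration, the entries of the returned tensor can be combined into an overall accuracy. The following is a minimal sketch, assuming a binary classification problem in which the rows of the confusion matrix correspond to the target classes and the columns to the predicted classes (this row/column convention is an assumption):

// Sketch: overall accuracy from a 2x2 confusion matrix.
// The row/column convention (targets vs. outputs) is assumed here.
const Index correct = confusion(0, 0) + confusion(1, 1); // diagonal: correctly classified samples

Index total = 0;

for(Index i = 0; i < confusion.dimension(0); i++)
    for(Index j = 0; j < confusion.dimension(1); j++)
        total += confusion(i, j);

const type accuracy = static_cast<type>(correct) / static_cast<type>(total);

cout << "Testing accuracy: " << accuracy << endl;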
It is also possible to calculate the testing errors, which measure how accurate the model's predictions are on the testing instances and can be compared with the corresponding training and selection errors. The following method returns a vector with four components containing the sum squared error, the mean squared error, the root mean squared error, and the normalized squared error:
Tensor<type, 1> errors = testing_analysis.calculate_errors();
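The components of the returned vector can then be read individually. A short sketch, assuming the errors are stored in the order listed above:

// Sketch: printing the testing errors, assuming the order stated above.
cout << "Sum squared error:        " << errors(0) << endl;
cout << "Mean squared error:       " << errors(1) << endl;
cout << "Root mean squared error:  " << errors(2) << endl;
cout << "Normalized squared error: " << errors(3) << endl;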
In function regression applications, it is common to calculate the errors on the testing instances, compute basic error statistics, and plot error histograms. However, the most standard method for evaluating a neural network in function regression is to perform a goodness-of-fit analysis, which fits a linear regression between the neural network outputs and the corresponding targets:
Tensor<TestingAnalysis::GoodnessOfFitAnalysis, 1> goodness_of_fit_analysis = testing_analysis.perform_goodness_of_fit_analysis();
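The returned tensor contains one analysis per output variable. The sketch below iterates over the outputs and prints a determination coefficient; the determination member name is an assumption and may differ between OpenNN versions, so check the TestingAnalysis header for the exact fields of the GoodnessOfFitAnalysis structure:

// Sketch: one GoodnessOfFitAnalysis per output variable.
// The member name "determination" is an assumption; consult your OpenNN version
// for the fields actually provided by the structure.
for(Index i = 0; i < goodness_of_fit_analysis.size(); i++)
{
    cout << "Output " << i << " determination (R^2): "
         << goodness_of_fit_analysis(i).determination << endl;
}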
For more information on these and other testing methods, visit the TestingAnalysis class reference.