Testing analysis class

The purpose of testing is to compare the outputs of the neural network against the targets in an independent testing set. This shows the quality of the model before its deployment.

For function regression applications, it is usual to calculate the errors on the testing instances. It is also common to compute basic error statistics and to draw error histograms. Nevertheless, the most standard method of testing a neural network for function regression is to perform a linear regression analysis.
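As an illustration of such basic error statistics, the following sketch computes the mean and maximum absolute error from two plain vectors of outputs and targets. The vectors here are placeholders and are not produced by any particular OpenNN call.

#include <vector>
#include <cmath>
#include <algorithm>
#include <numeric>
#include <iostream>

using namespace std;

int main()
{
   // Placeholder outputs and targets for a single output variable.
   vector<double> outputs = {0.9, 1.8, 3.2, 4.1};
   vector<double> targets = {1.0, 2.0, 3.0, 4.0};

   vector<double> absolute_errors(outputs.size());

   for(size_t i = 0; i < outputs.size(); i++)
   {
      absolute_errors[i] = fabs(outputs[i] - targets[i]);
   }

   const double mean_error = accumulate(absolute_errors.begin(), absolute_errors.end(), 0.0)/absolute_errors.size();
   const double maximum_error = *max_element(absolute_errors.begin(), absolute_errors.end());

   cout << "Mean absolute error: " << mean_error << endl;
   cout << "Maximum absolute error: " << maximum_error << endl;

   return 0;
}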

This chapter is the last one in the set of documents that explain how to use the main methods of OpenNN, so before continuing it is advisable to read the previous chapter, ModelSelection class.

The easiest and most common way to create a testing analysis object is to pass references to the neural network and data set objects to its constructor:

TestingAnalysis testing_analysis(&neural_network, &data_set);
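The snippet above assumes that a neural_network and a data_set object already exist. A minimal sketch of that setup might look as follows; the data file name and the network architecture are placeholders, and the constructor and method names follow the usual OpenNN examples, so check them against your version of the library.

#include "opennn.h"

using namespace OpenNN;

int main()
{
   // Data set read from a placeholder data file.
   DataSet data_set;
   data_set.set_data_file_name("data.dat");
   data_set.load_data();

   // Multilayer perceptron with 1 input, 3 hidden neurons and 1 output (placeholder architecture).
   NeuralNetwork neural_network(1, 3, 1);

   // Testing analysis associated to the neural network and the data set.
   TestingAnalysis testing_analysis(&neural_network, &data_set);

   return 0;
}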

For function regression problems, the most common testing method is linear regression analysis. This can be done with the following code:

Vector<TestingAnalysis::LinearRegressionAnalysis> linear_regression_results = testing_analysis.perform_linear_regression_analysis();

for(size_t i=0; i<linear_regression_results.size(); i++)
{
   cout << "Linear correlation for output " << i << ": " << linear_regression_results[i].correlation << endl;
}

This method returns a structure for each output, composed of the parameters intercept, slope and correlation.
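For example, assuming the field names above, the intercept and slope can be printed alongside the correlation for each output:

for(size_t i = 0; i < linear_regression_results.size(); i++)
{
   cout << "Output " << i << endl;
   cout << "   Intercept: " << linear_regression_results[i].intercept << endl;
   cout << "   Slope: " << linear_regression_results[i].slope << endl;
   cout << "   Correlation: " << linear_regression_results[i].correlation << endl;
}

A perfect fit corresponds to an intercept of 0, a slope of 1 and a correlation of 1; values far from these indicate a poor model.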

In pattern recognition problems, the confusion matrix is the main method to assess the accuracy of the model. OpenNN calculates this matrix with the following code:

Matrix<size_t> confusion_matrix = testing_analysis.calculate_confusion();
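The diagonal entries of the confusion matrix count the correctly classified testing instances, so the overall accuracy can be obtained by dividing the sum of the diagonal by the total number of instances. A minimal sketch, assuming the element access and size methods of the OpenNN Matrix template behave as shown below:

size_t correctly_classified = 0;
size_t total_instances = 0;

for(size_t i = 0; i < confusion_matrix.get_rows_number(); i++)
{
   for(size_t j = 0; j < confusion_matrix.get_columns_number(); j++)
   {
      total_instances += confusion_matrix(i, j);

      if(i == j)
      {
         correctly_classified += confusion_matrix(i, j);
      }
   }
}

cout << "Accuracy: " << (double)correctly_classified/(double)total_instances << endl;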

If you need more information about the TestingAnalysis class, visit the TestingAnalysis Class Reference.