Neural network class

A neural network is a biologically inspired computational model consisting of a network architecture composed of artificial neurons. This structure contains a set of parameters that can be adjusted to perform a given task.

In this tutorial we use the iris data set to show how to use some of the main methods of the NeuralNetwork class. Before continuing, it is advisable to read the previous chapter, DataSet class.

Like the DataSet class, NeuralNetwork implements a wide variety of constructors. In this example we use the following one:

NeuralNetwork neural_network(4, 6, 3);

Through the constructor's arguments we set the NeuralNetwork architecture. As the picture above shows, the first number is the number of inputs, the second is the number of neurons in the hidden layer of the multilayer perceptron, and the last one is the number of outputs.

NeuralNetwork class provides methods to set input and output information:

Inputs* inputs_pointer = neural_network.get_inputs_pointer();

inputs_pointer->set_information(inputs_information);

Outputs* outputs_pointer = neural_network.get_outputs_pointer();

outputs_pointer->set_information(targets_information);

These methods tell the neural network which variables are the inputs and which are the outputs.

Once we have set the information, the next step is to construct the scaling layer:

neural_network.construct_scaling_layer();

In practice it is always convenient to scale the inputs so that all of them are of order zero. The ScalingLayer performs this task during the selection phase.

In the training phase the neural network receives the data already scaled, so we set the scaling method to NoScaling. When the network is trained and we want to compute the selection error, the selection data is passed unscaled, so the ScalingLayer will scale it using the statistics vector calculated in the previous chapter.

ScalingLayer* scaling_layer_pointer = neural_network.get_scaling_layer_pointer();

scaling_layer_pointer->set_statistics(inputs_statistics);

scaling_layer_pointer->set_scaling_methods(ScalingLayer::NoScaling);

Since we are working with a pattern recognition example, a probabilistic layer is needed as well. This layer takes the outputs and produces new outputs whose elements can be interpreted as probabilities. In this way, the probabilistic outputs always fall in the range [0,1], and their sum is always 1.

neural_network.construct_probabilistic_layer();

ProbabilisticLayer* probabilistic_layer_pointer = neural_network.get_probabilistic_layer_pointer();

probabilistic_layer_pointer->set_probabilistic_method(ProbabilisticLayer::Softmax);

If you need more information about the NeuralNetwork class, visit the NeuralNetwork Class Reference.