Neural network class

A neural network is a biologically inspired computational model consisting of a network architecture composed of interconnected artificial neurons. This structure contains a set of parameters that can be adjusted to perform specific tasks.

In this tutorial, we will use the iris data set to show how to use some of the main methods of the NeuralNetwork class. Before continuing, it is advisable to read the previous chapter, DataSet class.

Like the DataSet class, NeuralNetwork implements a wide variety of constructors. In this concrete example we are going to use the following:

NeuralNetwork neural_network(NeuralNetwork::Classification, {4, 6, 3});

The first argument to the constructor indicates the model type, which can be one of three kinds: Approximation, Classification, or Forecasting. In the case of classification problems, the default architecture consists of a scaling layer, two perceptron layers, and a probabilistic layer.

In the second argument, the first number represents the number of inputs, the second refers to the number of neurons in the perceptron layer, and the last represents the number of neurons in the probabilistic layer. If more numbers are introduced in the second argument, more perceptron layers will be added to the neural network.

It is also possible to set the input and target variables names:

Vector<string> inputs_names = data_set.get_input_variables_names();
neural_network.set_inputs_names(inputs_names);
Vector<string> targets_names = data_set.get_target_variables_names();
neural_network.set_outputs_names(targets_names);

Once the neural network has been designed, we can display all the information about it as follows:

neural_network.print();

In the training phase, the neural network receives the data already scaled, so we set the scaling method to NoScaling. Once the network is trained and we want to check the selection error, the selection data will be passed unscaled, so the scaling layer will scale it using the descriptives calculated in the previous chapter.

ScalingLayer* scaling_layer_pointer = neural_network.get_scaling_layer_pointer();
scaling_layer_pointer->set_descriptives(inputs_descriptives);
scaling_layer_pointer->set_scaling_methods(ScalingLayer::ScalingMethod::NoScaling);

The probabilistic layer takes the outputs of the previous perceptron layer and produces new outputs whose elements can be interpreted as probabilities. In this way, the probabilistic outputs always fall in the range [0, 1], and their sum is always 1. We can change the activation function of the neurons of this layer as follows.

ProbabilisticLayer* probabilistic_layer_pointer = neural_network.get_probabilistic_layer_pointer();
probabilistic_layer_pointer->set_activation_function(ProbabilisticLayer::ActivationFunction::Softmax);

If you need more information about the NeuralNetwork class, visit the NeuralNetwork Class Reference.
