OpenNN
Open-source neural networks library
opennn::Loss Class Reference

Trainable loss function attached to a NeuralNetwork and a Dataset. More...

#include <loss.h>

Classes

struct  EvaluationResult
 Output of calculate_error(). More...
 

Public Types

enum class  Error {
  MeanSquaredError , NormalizedSquaredError , WeightedSquaredError , CrossEntropy ,
  CrossEntropy3d , MinkowskiError
}
 Built-in loss functions. More...
 
enum class  Regularization { L1 , L2 , ElasticNet , NoRegularization }
 Parameter-norm regularization terms. More...
 

Public Member Functions

 Loss (NeuralNetwork *neural_network=nullptr, Dataset *dataset=nullptr)
 Constructs a Loss bound to a network and dataset.
 
virtual ~Loss ()=default
 Virtual destructor.
 
const NeuralNetwork * get_neural_network () const
 Read-only access to the bound network.
 
NeuralNetwork * get_neural_network ()
 Mutable access to the bound network.
 
const Dataset * get_dataset () const
 Read-only access to the bound dataset.
 
Dataset * get_dataset ()
 Mutable access to the bound dataset.
 
void set (NeuralNetwork *neural_network=nullptr, Dataset *dataset=nullptr)
 Re-initializes the Loss by binding network and dataset pointers.
 
void set_neural_network (NeuralNetwork *new_neural_network)
 Sets the bound network.
 
virtual void set_dataset (Dataset *new_dataset)
 Sets the bound dataset; subclasses may override to refresh cached state derived from the dataset.
 
void set_regularization (const string &new_regularization_method)
 Sets the regularization method by name.
 
void set_regularization (Regularization new_regularization)
 Sets the regularization method directly.
 
void set_regularization_weight (const float new_regularization_weight)
 Sets the strength of the regularization term.
 
void set_normalization_coefficient ()
 Recomputes the dataset-derived normalization coefficient.
 
EvaluationResult calculate_error (const Batch &batch, const ForwardPropagation &forward_propagation) const
 Computes the loss on a batch given its forward intermediates.
 
void set_error (const Error &)
 Sets the loss term directly.
 
void set_error (const string &)
 Sets the loss term by name.
 
Error get_error () const
 Currently selected loss term.
 
void back_propagate (const Batch &batch, ForwardPropagation &forward_propagation, BackPropagation &back_propagation) const
 Computes the gradient of the loss with respect to the parameters.
 
float calculate_regularization (const VectorR &parameters) const
 Computes the regularization term given the parameter vector.
 
void from_JSON (const JsonDocument &)
 Loads loss hyperparameters from a parsed JSON document.
 
void to_JSON (JsonWriter &) const
 Writes loss hyperparameters to a streaming JSON writer.
 
void regularization_from_JSON (const JsonDocument &)
 Reads only the regularization fields from a parsed JSON document.
 
void regularization_to_JSON (JsonWriter &) const
 Writes only the regularization fields to a streaming JSON writer.
 
const string & get_name () const
 Canonical name of the loss (e.g. "Loss").
 
void print () const
 Prints a human-readable summary of the loss to stdout.
 

Static Public Member Functions

static const EnumMap< Regularization > & regularization_map ()
 Returns the singleton string<->enum mapping for Regularization values.
 
static const string & regularization_to_string (Regularization regularization)
 Converts a Regularization to its canonical string name.
 
static Regularization string_to_regularization (const string &name)
 Parses a Regularization from a string.
 
static float calculate_h (const float parameter)
 Computes the finite-difference perturbation step for a parameter.
 

Protected Attributes

Error error = Error::MeanSquaredError
 Currently selected loss term.
 
float normalization_coefficient = 1.0f
 Variance of the targets, used by NormalizedSquaredError.
 
float positives_weight = 1.0f
 Weight applied to positive-class samples (WeightedSquaredError).
 
float negatives_weight = 1.0f
 Weight applied to negative-class samples (WeightedSquaredError).
 
float minkowski_parameter = 1.5f
 Exponent p used by MinkowskiError.
 
Regularization regularization_method = Regularization::L2
 Currently selected regularization term.
 
float regularization_weight = 0.001f
 Multiplier applied to the regularization term in the total loss.
 
NeuralNetwork * neural_network = nullptr
 Network whose parameters are being trained; not owned.
 
Dataset * dataset = nullptr
 Dataset that provides training data; not owned.
 
string name = "Loss"
 Canonical name of the loss instance.
 

Detailed Description

Trainable loss function attached to a NeuralNetwork and a Dataset.

Holds non-owning pointers to a NeuralNetwork and a Dataset, along with the choice of loss term (Loss::Error) and regularization term (Loss::Regularization). Per-loss hyperparameters (e.g. minkowski_parameter, positives_weight) are stored as protected fields and configured through the corresponding setters.

Member Enumeration Documentation

◆ Error

enum class opennn::Loss::Error
strong

Built-in loss functions.

Enumerator
MeanSquaredError 

Mean of squared errors over outputs and samples.

NormalizedSquaredError 

MSE divided by the variance of the targets.

WeightedSquaredError 

MSE with per-class weights for imbalanced binary tasks.

CrossEntropy 

Binary or multi-class cross entropy.

CrossEntropy3d 

Sequence-level cross entropy used by language models.

MinkowskiError 

Mean of absolute errors raised to the exponent p (minkowski_parameter).

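The error terms above reduce to simple per-sample formulas. A minimal sketch of three of them, using the documented hyperparameters (positives_weight, negatives_weight, minkowski_parameter); the reductions are illustrative assumptions, not OpenNN's exact implementation:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mean of squared errors over outputs and samples.
double mean_squared_error(const std::vector<double>& outputs,
                          const std::vector<double>& targets)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < outputs.size(); ++i) {
        const double e = outputs[i] - targets[i];
        sum += e * e;
    }
    return sum / static_cast<double>(outputs.size());
}

// MSE with per-class weights for imbalanced binary targets (0/1).
double weighted_squared_error(const std::vector<double>& outputs,
                              const std::vector<double>& targets,
                              double positives_weight,
                              double negatives_weight)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < outputs.size(); ++i) {
        const double e = outputs[i] - targets[i];
        const double w = targets[i] == 1.0 ? positives_weight
                                           : negatives_weight;
        sum += w * e * e;
    }
    return sum / static_cast<double>(outputs.size());
}

// Minkowski error: mean of |error|^p, p = minkowski_parameter.
double minkowski_error(const std::vector<double>& outputs,
                       const std::vector<double>& targets, double p)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < outputs.size(); ++i)
        sum += std::pow(std::abs(outputs[i] - targets[i]), p);
    return sum / static_cast<double>(outputs.size());
}
```

With p = 2 the Minkowski error coincides with the mean squared error; smaller exponents such as the default 1.5 reduce the influence of outliers.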
◆ Regularization

enum class opennn::Loss::Regularization
strong

Parameter-norm regularization terms.

Enumerator
L1 

L1 norm of the parameters (sparsity).

L2 

L2 norm of the parameters (weight decay).

ElasticNet 

Mix of L1 and L2 regularization.

NoRegularization 

No regularization term is added to the loss.

Constructor & Destructor Documentation

◆ Loss()

opennn::Loss::Loss ( NeuralNetwork * neural_network = nullptr,
Dataset * dataset = nullptr )

Constructs a Loss bound to a network and dataset.

Parameters
neural_network: Network whose parameters are being trained.
dataset: Dataset that provides training data.

◆ ~Loss()

virtual opennn::Loss::~Loss ( )
virtual default

Virtual destructor.

Member Function Documentation

◆ back_propagate()

void opennn::Loss::back_propagate ( const Batch & batch,
ForwardPropagation & forward_propagation,
BackPropagation & back_propagation ) const

Computes the gradient of the loss with respect to the parameters.

Parameters
batch: Current training batch.
forward_propagation: Forward intermediates for the batch.
back_propagation: Output buffer in which to accumulate gradients.

◆ calculate_error()

EvaluationResult opennn::Loss::calculate_error ( const Batch & batch,
const ForwardPropagation & forward_propagation ) const

Computes the loss on a batch given its forward intermediates.

Parameters
batch: Current training batch.
forward_propagation: Forward intermediates for the batch.
Returns
Error term, accuracy and active token count.
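The return value bundles the error with a masked accuracy, as a sequence loss such as CrossEntropy3d would need when padded positions must not count. A sketch of that bookkeeping; the struct's field names are assumptions, since EvaluationResult's members are not shown on this page:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical mirror of Loss::EvaluationResult; real field names
// may differ.
struct EvaluationResult {
    float error = 0.0f;           // value of the selected error term
    float accuracy = 0.0f;        // fraction of correct predictions
    std::size_t active_count = 0; // positions that contributed
};

// Accuracy over predictions, skipping padded positions (mask == 0).
EvaluationResult masked_accuracy(const std::vector<int>& predicted,
                                 const std::vector<int>& target,
                                 const std::vector<int>& mask)
{
    EvaluationResult result;
    std::size_t correct = 0;
    for (std::size_t i = 0; i < predicted.size(); ++i) {
        if (mask[i] == 0) continue;   // padding: not an active token
        ++result.active_count;
        if (predicted[i] == target[i]) ++correct;
    }
    result.accuracy = result.active_count == 0
        ? 0.0f
        : static_cast<float>(correct)
              / static_cast<float>(result.active_count);
    return result;
}
```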

◆ calculate_h()

static float opennn::Loss::calculate_h ( const float parameter)
static

Computes the finite-difference perturbation step for a parameter.

Parameters
parameter: Parameter value at which the gradient is evaluated.
Returns
Perturbation magnitude h used by numerical-gradient checks.
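A common recipe for this step (a sketch of the standard numeric choice, not necessarily the exact formula calculate_h uses) scales the square root of machine epsilon by the parameter's magnitude, so the perturbation stays representable relative to the parameter:

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Step size for central differences: sqrt(eps) scaled by |x|.
float calculate_h(const float parameter)
{
    const float eps = std::numeric_limits<float>::epsilon();
    return std::sqrt(eps) * (1.0f + std::abs(parameter));
}

// Central-difference check for f(x) = x^2, whose derivative is 2x.
float numerical_derivative_x2(const float x)
{
    const float h = calculate_h(x);
    const float f_plus = (x + h) * (x + h);
    const float f_minus = (x - h) * (x - h);
    return (f_plus - f_minus) / (2.0f * h);
}
```

The `1.0f +` guard keeps h strictly positive even when the parameter is zero, where a purely relative step would vanish.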

◆ calculate_regularization()

float opennn::Loss::calculate_regularization ( const VectorR & parameters) const

Computes the regularization term given the parameter vector.

Parameters
parameters: Flat vector of all trainable parameters.
Returns
Scalar regularization term (already weighted).
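The three parameter-norm penalties can be sketched as follows. Whether the L2 term takes the norm or the squared norm, and the elastic-net mixing ratio (0.5/0.5 here), are assumptions about the implementation:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// L1 norm of the parameters, scaled by regularization_weight.
float l1_term(const std::vector<float>& parameters, float weight)
{
    float sum = 0.0f;
    for (float v : parameters) sum += std::abs(v);
    return weight * sum;
}

// Squared L2 norm of the parameters (weight decay), scaled.
float l2_term(const std::vector<float>& parameters, float weight)
{
    float sum = 0.0f;
    for (float v : parameters) sum += v * v;
    return weight * sum;
}

// Equal-parts mix of the L1 and L2 penalties.
float elastic_net_term(const std::vector<float>& parameters, float weight)
{
    return 0.5f * l1_term(parameters, weight)
         + 0.5f * l2_term(parameters, weight);
}
```

The returned value is already multiplied by the weight, matching the "already weighted" note above, so the caller simply adds it to the error term.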

◆ from_JSON()

void opennn::Loss::from_JSON ( const JsonDocument & )

Loads loss hyperparameters from a parsed JSON document.

◆ get_dataset() [1/2]

Dataset * opennn::Loss::get_dataset ( )
inline

Mutable access to the bound dataset.

◆ get_dataset() [2/2]

const Dataset * opennn::Loss::get_dataset ( ) const
inline

Read-only access to the bound dataset.

◆ get_error()

Error opennn::Loss::get_error ( ) const
inline

Currently selected loss term.

◆ get_name()

const string & opennn::Loss::get_name ( ) const
inline

Canonical name of the loss (e.g. "Loss").

◆ get_neural_network() [1/2]

NeuralNetwork * opennn::Loss::get_neural_network ( )
inline

Mutable access to the bound network.

◆ get_neural_network() [2/2]

const NeuralNetwork * opennn::Loss::get_neural_network ( ) const
inline

Read-only access to the bound network.

◆ print()

void opennn::Loss::print ( ) const
inline

Prints a human-readable summary of the loss to stdout.

◆ regularization_from_JSON()

void opennn::Loss::regularization_from_JSON ( const JsonDocument & )

Reads only the regularization fields from a parsed JSON document.

◆ regularization_map()

static const EnumMap< Regularization > & opennn::Loss::regularization_map ( )
inline static

Returns the singleton string<->enum mapping for Regularization values.

Returns
Reference to a process-wide EnumMap initialized on first call.
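The function-local-static ("Meyers singleton") pattern behind this accessor can be sketched with a plain std::map; EnumMap's real interface is not shown on this page, so the container choice is an assumption. The "None" key mirrors the alias documented under string_to_regularization():

```cpp
#include <cassert>
#include <map>
#include <stdexcept>
#include <string>

enum class Regularization { L1, L2, ElasticNet, NoRegularization };

// Built once on first call, shared by all subsequent callers.
const std::map<std::string, Regularization>& regularization_map()
{
    static const std::map<std::string, Regularization> map = {
        {"L1", Regularization::L1},
        {"L2", Regularization::L2},
        {"ElasticNet", Regularization::ElasticNet},
        {"None", Regularization::NoRegularization}};
    return map;
}

Regularization string_to_regularization(const std::string& name)
{
    // Accept the class's alias as well as the map's own key.
    const std::string key =
        name == "NoRegularization" ? "None" : name;
    const auto it = regularization_map().find(key);
    if (it == regularization_map().end())
        throw std::invalid_argument("Unknown regularization: " + name);
    return it->second;
}
```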

◆ regularization_to_JSON()

void opennn::Loss::regularization_to_JSON ( JsonWriter & ) const

Writes only the regularization fields to a streaming JSON writer.

◆ regularization_to_string()

static const string & opennn::Loss::regularization_to_string ( Regularization regularization)
inline static

Converts a Regularization to its canonical string name.

Parameters
regularizationRegularization value.
Returns
Reference to the canonical string.

◆ set()

void opennn::Loss::set ( NeuralNetwork * neural_network = nullptr,
Dataset * dataset = nullptr )

Re-initializes the Loss by binding network and dataset pointers.

Parameters
neural_network: Network whose parameters are being trained.
dataset: Dataset that provides training data.

◆ set_dataset()

virtual void opennn::Loss::set_dataset ( Dataset * new_dataset)
inline virtual

Sets the bound dataset; subclasses may override to refresh cached state derived from the dataset.

Parameters
new_dataset: Dataset that provides training data.

◆ set_error() [1/2]

void opennn::Loss::set_error ( const Error & )

Sets the loss term directly.

Receives the Loss::Error enum value to install.

◆ set_error() [2/2]

void opennn::Loss::set_error ( const string & )

Sets the loss term by name.

Receives the canonical name of the loss term.

◆ set_neural_network()

void opennn::Loss::set_neural_network ( NeuralNetwork * new_neural_network)
inline

Sets the bound network.

Parameters
new_neural_network: Network whose parameters are being trained.

◆ set_normalization_coefficient()

void opennn::Loss::set_normalization_coefficient ( )

Recomputes the dataset-derived normalization coefficient.

Called automatically after the dataset is bound; only relevant for NormalizedSquaredError.
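Since the coefficient is described above as the variance of the targets, a natural implementation computes the spread of the targets around their mean; whether OpenNN uses the sum of squared deviations (as below) or divides by the sample count is an assumption:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum of squared deviations of the targets from their mean.
// Dividing the squared error by this makes the normalized error
// scale-free: a value of 1 means "no better than predicting the mean".
float normalization_coefficient(const std::vector<float>& targets)
{
    float mean = 0.0f;
    for (float t : targets) mean += t;
    mean /= static_cast<float>(targets.size());

    float coefficient = 0.0f;
    for (float t : targets)
        coefficient += (t - mean) * (t - mean);
    return coefficient;
}
```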

◆ set_regularization() [1/2]

void opennn::Loss::set_regularization ( const string & new_regularization_method)
inline

Sets the regularization method by name.

Parameters
new_regularization_method: Canonical name (see Regularization).

◆ set_regularization() [2/2]

void opennn::Loss::set_regularization ( Regularization new_regularization)
inline

Sets the regularization method directly.

Parameters
new_regularization: Regularization enum value.

◆ set_regularization_weight()

void opennn::Loss::set_regularization_weight ( const float new_regularization_weight)
inline

Sets the strength of the regularization term.

Parameters
new_regularization_weight: Multiplier applied to the regularization term.

◆ string_to_regularization()

static Regularization opennn::Loss::string_to_regularization ( const string & name)
inline static

Parses a Regularization from a string.

Accepts canonical names plus the alias "NoRegularization" (also encoded as "None" in the map).

Parameters
name: String to parse.
Returns
Matching Regularization value.

◆ to_JSON()

void opennn::Loss::to_JSON ( JsonWriter & ) const

Writes loss hyperparameters to a streaming JSON writer.

Member Data Documentation

◆ dataset

Dataset* opennn::Loss::dataset = nullptr
protected

Dataset that provides training data; not owned.

◆ error

Error opennn::Loss::error = Error::MeanSquaredError
protected

Currently selected loss term.

◆ minkowski_parameter

float opennn::Loss::minkowski_parameter = 1.5f
protected

Exponent p used by MinkowskiError.

◆ name

string opennn::Loss::name = "Loss"
protected

Canonical name of the loss instance.

◆ negatives_weight

float opennn::Loss::negatives_weight = 1.0f
protected

Weight applied to negative-class samples (WeightedSquaredError).

◆ neural_network

NeuralNetwork* opennn::Loss::neural_network = nullptr
protected

Network whose parameters are being trained; not owned.

◆ normalization_coefficient

float opennn::Loss::normalization_coefficient = 1.0f
protected

Variance of the targets, used by NormalizedSquaredError.

◆ positives_weight

float opennn::Loss::positives_weight = 1.0f
protected

Weight applied to positive-class samples (WeightedSquaredError).

◆ regularization_method

Regularization opennn::Loss::regularization_method = Regularization::L2
protected

Currently selected regularization term.

◆ regularization_weight

float opennn::Loss::regularization_weight = 0.001f
protected

Multiplier applied to the regularization term in the total loss.