npdl.objectives
Provides some minimal help with building loss expressions for training or validating a neural network.
These functions build element- or item-wise loss expressions from network predictions and targets.
Examples¶
Assuming you have a simple neural network for 3-way classification:
>>> import npdl
>>> model = npdl.model.Model()
>>> model.add(npdl.layers.Dense(n_out=100, n_in=50))
>>> model.add(npdl.layers.Dense(n_out=3,
...           activation=npdl.activation.Softmax()))
>>> model.compile(loss=npdl.objectives.SCCE(),
...               optimizer=npdl.optimizers.SGD(lr=0.005))
Objectives¶
BinaryCrossEntropy | Computes the binary cross-entropy between predictions and targets.
SoftmaxCategoricalCrossEntropy | Computes the categorical cross-entropy between predictions and targets.
MeanSquaredError | Computes the element-wise squared difference between two tensors.
HellingerDistance | Computes the item-wise Hellinger distance between predictions and targets.
Detailed Description¶
-
class npdl.objectives.Objective[source]¶
An objective function (or loss function, or optimization score function) is one of the two parameters required to compile a model.
-
class npdl.objectives.MeanSquaredError[source]¶
Computes the element-wise squared difference between two tensors.
\[L = (p - t)^2\]
Parameters: a, b : Theano tensor
The tensors to compute the squared difference between.
Returns: Theano tensor
An expression for the element-wise squared difference.
Notes
This is the loss function of choice for many regression problems or auto-encoders with linear output units.
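The element-wise formula above can be sketched directly in NumPy. This is a minimal illustration of the math, not npdl's actual implementation; the function name and signature here are hypothetical:

```python
import numpy as np

def mean_squared_error(predictions, targets):
    # Element-wise squared difference, L = (p - t)^2.
    predictions = np.asarray(predictions, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return (predictions - targets) ** 2

# Example: per-element loss for a small regression output
p = np.array([0.5, 1.5, 2.0])
t = np.array([1.0, 1.0, 2.0])
loss = mean_squared_error(p, t)  # array([0.25, 0.25, 0.  ])
```

Averaging the element-wise result (e.g. with `loss.mean()`) gives the usual scalar training objective.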
-
npdl.objectives.MSE[source]¶
alias of MeanSquaredError
-
class npdl.objectives.HellingerDistance[source]¶
Computes the item-wise Hellinger distance between predicted and target probability distributions.
\[L_i = \frac{1}{\sqrt{2}} \sqrt{\sum_j \left(\sqrt{p_{i,j}} - \sqrt{t_{i,j}}\right)^2}\]
Parameters: predictions : Theano 2D tensor
Predictions in (0, 1), such as softmax output of a neural network, with data points in rows and class probabilities in columns.
targets : Theano 2D tensor or 1D tensor
Either a vector of int giving the correct class index per data point or a 2D tensor of one-hot encodings of the correct class in the same layout as predictions.
Returns: Theano 1D tensor
An expression for the item-wise Hellinger distance.
Notes
This is an alternative to the categorical cross-entropy loss for multi-class classification problems.
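As the class name indicates, the Hellinger distance between two discrete distributions p and t is (1/√2)·‖√p − √t‖₂. A minimal NumPy sketch of this row-wise computation, assuming both inputs are 2D probability matrices (this is an illustration, not npdl's implementation):

```python
import numpy as np

def hellinger_distance(predictions, targets):
    # Item-wise Hellinger distance between rows of two probability
    # matrices: L_i = (1/sqrt(2)) * ||sqrt(p_i) - sqrt(t_i)||_2.
    diff = np.sqrt(predictions) - np.sqrt(targets)
    return np.sqrt(np.sum(diff ** 2, axis=1)) / np.sqrt(2)

p = np.array([[0.2, 0.8],
              [0.5, 0.5]])
t = np.array([[0.0, 1.0],   # one-hot target
              [0.5, 0.5]])  # identical distribution
hellinger_distance(p, t)    # second item is 0: the rows match exactly
```

The distance is bounded in [0, 1], which makes it easy to interpret as a per-item score.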
-
npdl.objectives.HeD[source]¶
alias of HellingerDistance
-
class npdl.objectives.BinaryCrossEntropy(epsilon=1e-11)[source]¶
Computes the binary cross-entropy between predictions and targets.
\[L = -t \log(p) - (1 - t) \log(1 - p)\]
Returns: Theano tensor
An expression for the element-wise binary cross-entropy.
Notes
This is the loss function of choice for binary classification problems and sigmoid output units.
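The formula can be sketched in NumPy as follows. The `epsilon` clipping mirrors the `epsilon=1e-11` constructor parameter above and guards against `log(0)`; the function name and signature are otherwise hypothetical, not npdl's implementation:

```python
import numpy as np

def binary_cross_entropy(predictions, targets, epsilon=1e-11):
    # L = -t*log(p) - (1-t)*log(1-p), with predictions clipped to
    # [epsilon, 1-epsilon] so the logs stay finite.
    p = np.clip(predictions, epsilon, 1 - epsilon)
    return -targets * np.log(p) - (1 - targets) * np.log(1 - p)

p = np.array([0.9, 0.1])
t = np.array([1.0, 0.0])
binary_cross_entropy(p, t)  # both items ~0.105: confident, correct predictions
```

Without the clipping, a prediction of exactly 0 or 1 on the wrong class would yield an infinite loss and break gradient-based training.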
-
npdl.objectives.BCE[source]¶
alias of BinaryCrossEntropy
-
class npdl.objectives.SoftmaxCategoricalCrossEntropy(epsilon=1e-11)[source]¶
Computes the categorical cross-entropy between predictions and targets.
\[L_i = - \sum_j{t_{i,j} \log(p_{i,j})}\]
Parameters: predictions : Theano 2D tensor
Predictions in (0, 1), such as softmax output of a neural network, with data points in rows and class probabilities in columns.
targets : Theano 2D tensor or 1D tensor
Either targets in [0, 1] matching the layout of predictions, or a vector of int giving the correct class index per data point.
Returns: Theano 1D tensor
An expression for the item-wise categorical cross-entropy.
Notes
This is the loss function of choice for multi-class classification problems and softmax output units. For hard targets, i.e., targets that assign all of the probability to a single class per data point, providing a vector of int for the targets is usually slightly more efficient than providing a matrix with a single 1.0 per row.
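Both target layouts described above (a 2D one-hot matrix or a 1D vector of class indices) can be handled in a few lines of NumPy. This sketch pairs the softmax with the cross-entropy, as the class name suggests; the names and the `epsilon` clipping (matching the `epsilon=1e-11` constructor parameter) are illustrative assumptions, not npdl's implementation:

```python
import numpy as np

def softmax(x):
    # Row-wise softmax with the usual max-shift for numerical stability.
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def softmax_categorical_cross_entropy(logits, targets, epsilon=1e-11):
    # L_i = -sum_j t_ij * log(p_ij), where p = softmax(logits).
    p = np.clip(softmax(logits), epsilon, 1 - epsilon)
    targets = np.asarray(targets)
    if targets.ndim == 1:
        # Int class indices: pick out the log-probability of the
        # correct class for each row directly.
        return -np.log(p[np.arange(len(targets)), targets])
    # One-hot (or soft) targets in the same layout as predictions.
    return -np.sum(targets * np.log(p), axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
softmax_categorical_cross_entropy(logits, np.array([0]))
```

The index-based branch illustrates why int targets are slightly cheaper for hard labels: it reads one probability per row instead of multiplying and summing over the full one-hot matrix.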
-
npdl.objectives.SCCE[source]¶
alias of SoftmaxCategoricalCrossEntropy