Core Layers

class npdl.layers.Linear(n_out, n_in=None, init='glorot_uniform')

A fully connected layer implemented as the dot product of inputs and weights.

Parameters:
n_out : int or tuple

Desired size or shape of layer output

n_in : int, tuple, or None

The layer input size feeding into this layer

init : str, or npdl.initializations.Initializer

Initializer object to use for initializing layer weights
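
A minimal usage sketch, not taken from the npdl docs: it assumes that connect_to() (documented below) must be called once to initialize the weights before the first forward() call.

>>> import numpy as np
>>> from npdl.layers import Linear
>>> layer = Linear(n_out=32, n_in=64)             # explicit input size
>>> layer.connect_to()                            # assumption: allocates W and b from n_in/n_out
>>> out = layer.forward(np.random.randn(16, 64))  # a batch of 16 input vectors
>>> out.shape
(16, 32)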

backward(pre_grad, *args, **kwargs)

Apply the backward pass transformation to the input data.

Parameters:
pre_grad : numpy.array

deltas back-propagated from the adjacent higher layer

Returns:
numpy.array

deltas to propagate to the adjacent lower layer

connect_to(prev_layer=None)

Connect this layer to the output of the previous layer, inferring the input shape and initializing the layer's weights.

Parameters:
prev_layer : Layer, or None

The previous layer whose output feeds into this layer. If None, n_in must have been set at construction.
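
A short sketch of chaining layers through connect_to, under the assumption that a layer constructed with n_in=None infers its input size from prev_layer:

>>> from npdl.layers import Linear
>>> first = Linear(n_out=128, n_in=784)
>>> first.connect_to()         # no previous layer, so n_in must have been given
>>> second = Linear(n_out=10)  # n_in left as None
>>> second.connect_to(first)   # assumption: input size inferred from first's output shape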

forward(input, *args, **kwargs)

Apply the forward pass transformation to the input data.

Parameters:
input : numpy.array

input data

Returns:
numpy.array

output data

grads

Get layer parameter gradients as calculated from backward().

params

Layer parameters.

Returns a list of numpy.array variables or expressions that parameterize the layer.

Returns:
list of numpy.array variables or expressions

A list of variables that parameterize the layer

Notes

For layers without any parameters, this will return an empty list.
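
A sketch tying backward(), grads and params together. Assumptions not stated on this page: connect_to() initializes the parameters, and params/grads are aligned lists of numpy arrays.

>>> import numpy as np
>>> from npdl.layers import Linear
>>> layer = Linear(n_out=8, n_in=4)
>>> layer.connect_to()
>>> out = layer.forward(np.random.randn(2, 4))
>>> delta = layer.backward(np.ones_like(out))  # stand-in for upstream deltas
>>> delta.shape                                # gradient w.r.t. the layer input
(2, 4)
>>> len(layer.params) == len(layer.grads)      # assumption: one gradient per parameter
True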

class npdl.layers.Dense(n_out, n_in=None, init='glorot_uniform', activation='tanh')

A fully connected layer implemented as the dot product of inputs and weights. Generally used to apply a nonlinearity to produce the layer's post-activations.

Parameters:
n_out : int

Desired size or shape of layer output

n_in : int, or None

The layer input size feeding into this layer

activation : str, or npdl.activations.Activation

Activation function to apply to the layer output. Defaults to tanh.

init : str, or npdl.initializations.Initializer

Initializer object to use for initializing layer weights
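
A construction sketch; the keyword names come from the signature above, and it assumes connect_to() initializes the weights as with Linear.

>>> from npdl.layers import Dense
>>> hidden = Dense(n_out=256, n_in=100, activation='tanh')  # init defaults to 'glorot_uniform'
>>> hidden.connect_to()  # assumption: allocates the weight matrix and bias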

backward(pre_grad, *args, **kwargs)

Apply the backward pass transformation to the input data.

Parameters:
pre_grad : numpy.array

deltas back-propagated from the adjacent higher layer

Returns:
numpy.array

deltas to propagate to the adjacent lower layer

connect_to(prev_layer=None)

Connect this layer to the output of the previous layer, inferring the input shape and initializing the layer's weights.

Parameters:
prev_layer : Layer, or None

The previous layer whose output feeds into this layer. If None, n_in must have been set at construction.

forward(input, *args, **kwargs)

Apply the forward pass transformation to the input data.

Parameters:
input : numpy.array

input data

Returns:
numpy.array

output data

grads

Get layer parameter gradients as calculated from backward().

params

Layer parameters.

Returns a list of numpy.array variables or expressions that parameterize the layer.

Returns:
list of numpy.array variables or expressions

A list of variables that parameterize the layer

Notes

For layers without any parameters, this will return an empty list.
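
A minimal end-to-end sketch of one hand-rolled gradient step using only the methods on this page. Assumptions: connect_to() wires shapes between layers, params returns the live weight arrays so in-place updates take effect, and the loss gradient is supplied by hand rather than by an npdl objective.

>>> import numpy as np
>>> from npdl.layers import Dense
>>> l1 = Dense(n_out=32, n_in=20)
>>> l2 = Dense(n_out=5)
>>> l1.connect_to()
>>> l2.connect_to(l1)
>>> x = np.random.randn(8, 20)
>>> y = l2.forward(l1.forward(x))        # forward through the stack
>>> grad = l2.backward(np.ones_like(y))  # stand-in for a loss gradient
>>> grad = l1.backward(grad)             # propagate to the lower layer
>>> for layer in (l1, l2):
...     for p, g in zip(layer.params, layer.grads):
...         p -= 0.01 * g                # assumption: p is the live parameter array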

class npdl.layers.Softmax(n_out, n_in=None, init='glorot_uniform')

A fully connected layer implemented as the dot product of inputs and weights, with a softmax applied to the output.

Parameters:
n_out : int

Desired size or shape of layer output

n_in : int, or None

The layer input size feeding into this layer

init : str, or npdl.initializations.Initializer

Initializer object to use for initializing layer weights
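
A sketch under the assumption, suggested by the class name but not stated above, that the affine output is passed through a softmax so each output row forms a probability distribution:

>>> import numpy as np
>>> from npdl.layers import Softmax
>>> clf = Softmax(n_out=10, n_in=4)
>>> clf.connect_to()
>>> probs = clf.forward(np.random.randn(3, 4))
>>> np.allclose(probs.sum(axis=1), 1.0)  # assumption: rows are softmax-normalized
True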

class npdl.layers.Dropout(p=0.0)

A dropout layer.

Applies an element-wise multiplication of inputs with a keep mask.

A keep mask is a tensor of ones and zeros of the same shape as the input.

Each forward() call stochastically generates a new keep mask; the fraction of ones in the mask is controlled by the p parameter.

Parameters:
p : float

fraction of the inputs to stochastically drop (set to zero); p=0.0 disables dropout.

backward(pre_grad, *args, **kwargs)

Calculate the gradient with respect to the layer input.

connect_to(prev_layer)

Connect this layer to the output of the previous layer.

Parameters:
prev_layer : Layer

The previous layer whose output feeds into this layer.

forward(input, train=True, *args, **kwargs)

Apply the forward pass transformation to the input data.

Parameters:
input : numpy.array

input data

train : bool

If True (training), the stochastic keep mask is applied; if False (inference), it is not.

Returns:
numpy.array

output data
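
A usage sketch, assuming that the stochastic mask is only applied when train=True and that a parameter-free layer can be used without initialization:

>>> import numpy as np
>>> from npdl.layers import Dropout
>>> drop = Dropout(p=0.5)
>>> x = np.ones((2, 6))
>>> noisy = drop.forward(x, train=True)   # stochastic keep mask applied
>>> clean = drop.forward(x, train=False)  # assumption: no masking at inference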