Core Layers

class npdl.layers.Linear(n_out, n_in=None, init='glorot_uniform')[source]

A fully connected layer implemented as the dot product of inputs and weights.

Parameters:

n_out : (int, tuple)

Desired size or shape of layer output

n_in : (int, tuple) or None

The layer input size feeding into this layer

init : (Initializer, optional)

Initializer object to use for initializing layer weights

backward(pre_grad, *args, **kwargs)[source]

Apply the backward pass transformation to the input data.

Parameters:

pre_grad : numpy.array

deltas back propagated from the adjacent higher layer

Returns:

numpy.array

deltas to propagate to the adjacent lower layer

forward(input, *args, **kwargs)[source]

Apply the forward pass transformation to the input data.

Parameters:

input : numpy.array

input data

Returns:

numpy.array

output data
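The forward and backward passes above can be sketched in plain NumPy. This is a hypothetical illustration of the documented behavior (dot product of inputs and weights, with deltas propagated back through the transpose); npdl's actual internals may differ:

```python
import numpy as np

class LinearSketch:
    """Illustrative fully connected layer; not npdl's implementation."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        # Glorot-style uniform initialization
        limit = np.sqrt(6.0 / (n_in + n_out))
        self.W = rng.uniform(-limit, limit, (n_in, n_out))
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.last_input = x          # cached for the backward pass
        return x @ self.W + self.b

    def backward(self, pre_grad):
        # Gradients w.r.t. the layer's parameters
        self.dW = self.last_input.T @ pre_grad
        self.db = pre_grad.sum(axis=0)
        # Deltas to propagate to the adjacent lower layer
        return pre_grad @ self.W.T

layer = LinearSketch(n_in=4, n_out=3)
out = layer.forward(np.ones((2, 4)))      # shape (2, 3)
grad = layer.backward(np.ones((2, 3)))    # shape (2, 4)
```

Note how backward() returns deltas with the same shape as the layer's input, so lower layers can consume them directly.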

class npdl.layers.Dense(n_out, n_in=None, init='glorot_uniform', activation='tanh')[source]

A fully connected layer implemented as the dot product of inputs and weights. Generally used to implement nonlinearities for layer post-activations.

Parameters:

n_out : int

Desired size or shape of layer output

n_in : int, or None

The layer input size feeding into this layer

activation : str, or npdl.activations.Activation

Defaults to Tanh

init : str, or npdl.initializations.Initializer

Initializer object to use for initializing layer weights

backward(pre_grad, *args, **kwargs)[source]

Apply the backward pass transformation to the input data.

Parameters:

pre_grad : numpy.array

deltas back propagated from the adjacent higher layer

Returns:

numpy.array

deltas to propagate to the adjacent lower layer

forward(input, *args, **kwargs)[source]

Apply the forward pass transformation to the input data.

Parameters:

input : numpy.array

input data

Returns:

numpy.array

output data
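A minimal sketch of what Dense computes per the description above: a linear transform followed by the default tanh activation, with the backward pass chaining through the tanh derivative. This assumes the documented behavior and is not npdl's actual implementation:

```python
import numpy as np

def dense_forward(x, W, b):
    # Linear transform followed by the tanh nonlinearity
    return np.tanh(x @ W + b)

def dense_backward(pre_grad, x, W, out):
    # Chain rule through tanh: d(tanh z)/dz = 1 - tanh(z)**2
    dz = pre_grad * (1.0 - out ** 2)
    dW = x.T @ dz            # gradient w.r.t. weights
    db = dz.sum(axis=0)      # gradient w.r.t. bias
    dx = dz @ W.T            # deltas for the adjacent lower layer
    return dx, dW, db

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4))
W = rng.standard_normal((4, 3))
b = np.zeros(3)
out = dense_forward(x, W, b)
dx, dW, db = dense_backward(np.ones_like(out), x, W, out)
```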

class npdl.layers.Softmax(n_out, n_in=None, init='glorot_uniform')[source]

A fully connected layer that applies the softmax function to the dot product of inputs and weights.

Parameters:

n_out : int

Desired size or shape of layer output

n_in : int, or None

The layer input size feeding into this layer

init : str, or npdl.initializations.Initializer

Initializer object to use for initializing layer weights
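The output of a softmax layer can be sketched as follows, assuming softmax is applied over the last axis of the linear transform. The max-subtraction shift is a standard numerical-stability trick and is shown here as an illustration, not necessarily npdl's internal approach:

```python
import numpy as np

def softmax_forward(x, W, b):
    z = x @ W + b
    z = z - z.max(axis=-1, keepdims=True)  # guard against exp overflow
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# With identical logits in every column, each class gets equal probability
x = np.ones((2, 4))
W = np.full((4, 3), 0.5)
b = np.zeros(3)
probs = softmax_forward(x, W, b)  # each row sums to 1
```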

class npdl.layers.Dropout(p=0.0)[source]

A dropout layer.

Applies an element-wise multiplication of inputs with a keep mask.

A keep mask is a tensor of ones and zeros of the same shape as the input.

Each forward() call stochastically generates a new keep mask, where the distribution of ones in the mask is controlled by the keep parameter p.

Parameters:

p : float

fraction of the inputs that should be stochastically kept.

forward(input, train=True, *args, **kwargs)[source]

Apply the forward pass transformation to the input data.

Parameters:

input : numpy.array

input data

train : bool

whether the layer is in training mode; if False (inference), no mask is applied and inputs pass through unchanged

Returns:

numpy.array

output data
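The masking behavior described above can be sketched directly: during training, multiply the input element-wise by a stochastic mask of ones and zeros; at inference, return the input unchanged. This treats p as the keep probability per the parameter description, and is an illustration rather than npdl's actual code:

```python
import numpy as np

def dropout_forward(x, p, train=True, rng=None):
    if not train:
        return x                          # inference: pass through
    rng = rng if rng is not None else np.random.default_rng()
    # Keep mask: 1 with probability p, 0 otherwise, same shape as input
    mask = (rng.random(x.shape) < p).astype(x.dtype)
    return x * mask

x = np.ones((4, 5))
y_train = dropout_forward(x, p=0.8, rng=np.random.default_rng(0))
y_infer = dropout_forward(x, p=0.8, train=False)
```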