npdl.layers

Base Layers

Layer The Layer class represents a single layer of a neural network.

Core Layers

Linear A fully connected layer that computes the dot product of inputs and weights.
Dense A fully connected layer: the dot product of inputs and weights followed by a nonlinear activation.
Softmax A fully connected layer whose output is passed through the softmax activation.
Dropout A layer that randomly sets a fraction of its inputs to zero during training to reduce overfitting.
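
As a rough illustration of what these core layers compute, here is a minimal NumPy sketch of their forward passes; the function names, shapes, and activation choice are illustrative and do not reflect npdl's actual API:

    import numpy as np

    def linear_forward(x, W, b):
        # Linear: dot product of inputs and weights, plus bias.
        return x @ W + b

    def dense_forward(x, W, b, activation=np.tanh):
        # Dense: linear transform followed by a nonlinearity.
        return activation(linear_forward(x, W, b))

    def softmax_forward(x, W, b):
        # Softmax layer: linear transform, then row-wise softmax.
        z = linear_forward(x, W, b)
        z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def dropout_forward(x, p=0.5, training=True, rng=np.random.default_rng(0)):
        # Dropout: zero inputs with probability p during training, scaling
        # the survivors so the expected activation is unchanged.
        if not training or p == 0.0:
            return x
        mask = rng.random(x.shape) >= p
        return x * mask / (1.0 - p)

    x = np.random.randn(4, 8)                  # batch of 4 inputs, 8 features
    W, b = np.random.randn(8, 3), np.zeros(3)
    probs = softmax_forward(dropout_forward(x), W, b)
    print(probs.sum(axis=1))                   # each row sums to 1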

Convolution Layers

Convolution Convolution operator for filtering windows of two-dimensional inputs.
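
The filtering operation itself can be sketched in a few lines of NumPy; this assumes a single channel, a single filter, and "valid" padding, and is not npdl's implementation:

    import numpy as np

    def conv2d_valid(image, kernel):
        H, W = image.shape
        kH, kW = kernel.shape
        out = np.zeros((H - kH + 1, W - kW + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                # multiply each window elementwise by the kernel and sum
                out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
        return out

    image = np.arange(25, dtype=float).reshape(5, 5)
    kernel = np.ones((3, 3)) / 9.0             # simple averaging filter
    print(conv2d_valid(image, kernel).shape)   # (3, 3)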

Embedding Layer

Embedding A layer that turns non-negative integer indices into dense vectors by lookup in a trainable embedding matrix.
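
What an embedding layer computes is just a row lookup in a trainable matrix, as the following NumPy sketch shows; vocab_size and embed_dim are illustrative parameters:

    import numpy as np

    vocab_size, embed_dim = 1000, 16
    E = np.random.randn(vocab_size, embed_dim) * 0.01  # trainable embedding matrix

    token_ids = np.array([[3, 7, 42], [0, 1, 2]])  # batch of 2 sequences, length 3
    vectors = E[token_ids]                         # fancy indexing does the lookup
    print(vectors.shape)                           # (2, 3, 16)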

Normalization Layer

BatchNormal Batch normalization layer (Ioffe and Szegedy, 2015) [R1].
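
A minimal sketch of the training-time forward pass: normalize each feature over the batch, then apply a learned scale (gamma) and shift (beta). The parameter names and eps are illustrative, and the running statistics used at inference time are omitted:

    import numpy as np

    def batchnorm_forward(x, gamma, beta, eps=1e-5):
        mean = x.mean(axis=0)                  # per-feature mean over the batch
        var = x.var(axis=0)                    # per-feature variance over the batch
        x_hat = (x - mean) / np.sqrt(var + eps)
        return gamma * x_hat + beta

    x = np.random.randn(32, 8) * 3.0 + 5.0     # batch of 32, 8 features
    out = batchnorm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
    print(out.mean(axis=0).round(3))           # ~0 per feature
    print(out.std(axis=0).round(3))            # ~1 per feature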

Pooling Layers

MeanPooling Average pooling operation for spatial data.
MaxPooling Max pooling operation for spatial data.
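
Both operations reduce each spatial window to a single value; a minimal NumPy sketch for non-overlapping 2x2 windows (assuming the spatial dimensions divide evenly by the pool size) follows:

    import numpy as np

    def pool2d(x, size=2, mode="max"):
        H, W = x.shape
        # reshape so each pooling window becomes its own pair of axes
        windows = x.reshape(H // size, size, W // size, size)
        reduce = np.max if mode == "max" else np.mean
        return reduce(windows, axis=(1, 3))

    fmap = np.arange(16, dtype=float).reshape(4, 4)
    print(pool2d(fmap, mode="max"))    # [[ 5.  7.] [13. 15.]]
    print(pool2d(fmap, mode="mean"))   # [[ 2.5  4.5] [10.5 12.5]]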

Recurrent Layers

Recurrent A recurrent neural network (RNN) is a class of artificial neural networks in which connections between units form a directed cycle.
SimpleRNN Fully connected RNN whose output is fed back into its input.
GRU Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014.
LSTM Long short-term memory (LSTM) is a recurrent neural network (RNN) architecture proposed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber [R7] and further improved in 2000 by Felix Gers et al. [R8]. Like most RNNs, an LSTM network is universal: given enough units and a suitable weight matrix, which may be viewed as its program, it can compute anything a conventional computer can.
BatchLSTM An LSTM variant whose four gate transformations are computed together in a single batched matrix operation.
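
The gating mechanism these descriptions refer to can be sketched as a single LSTM time step in NumPy; the weight shapes and names are illustrative, with the four gate weight matrices concatenated so one matrix product computes all gates:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def lstm_step(x_t, h_prev, c_prev, W, U, b):
        # one matrix product yields input (i), forget (f), output (o),
        # and candidate (g) pre-activations
        z = x_t @ W + h_prev @ U + b
        i, f, o, g = np.split(z, 4, axis=-1)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c_t = f * c_prev + i * g          # update the cell state
        h_t = o * np.tanh(c_t)            # expose a gated view of the cell
        return h_t, c_t

    n_in, n_hid = 8, 16
    W = np.random.randn(n_in, 4 * n_hid) * 0.1
    U = np.random.randn(n_hid, 4 * n_hid) * 0.1
    b = np.zeros(4 * n_hid)
    h = c = np.zeros((1, n_hid))
    for t in range(5):                    # run five time steps
        h, c = lstm_step(np.random.randn(1, n_in), h, c, W, U, b)
    print(h.shape)                        # (1, 16)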

Shape Layers

Flatten Flattens the input to two dimensions (batch, features) without affecting the batch size; commonly placed between convolutional and fully connected layers.
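
A flatten layer is just a reshape that preserves the batch axis; a short NumPy sketch:

    import numpy as np

    x = np.random.randn(4, 3, 5, 5)       # e.g. (batch, channels, height, width)
    flat = x.reshape(x.shape[0], -1)      # keep the batch axis, merge the rest
    print(flat.shape)                     # (4, 75)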