`npdl.layers`

ΒΆ

`Layer` |
The `Layer` class represents a single layer of a neural network. |

`Linear` |
A fully connected layer implemented as the dot product of inputs and weights. |

`Dense` |
A fully connected layer implemented as the dot product of inputs and weights. |
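As a sketch of what a fully connected layer computes (plain NumPy, not the npdl API), the forward pass is a matrix product of the input batch and the weight matrix, plus a bias:

```python
import numpy as np

def dense_forward(x, W, b):
    # Fully connected forward pass: dot product of inputs and weights, plus bias.
    return x @ W + b

# Batch of 2 samples, 3 input features, 4 output units.
x = np.ones((2, 3))
W = np.full((3, 4), 0.5)
b = np.zeros(4)
out = dense_forward(x, W, b)  # shape (2, 4); each entry is 3 * 0.5 = 1.5
```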

`Softmax` |
A fully connected layer whose outputs are normalized into a probability distribution with the softmax function. |
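A minimal softmax sketch in plain NumPy (not the npdl API); each row of the output is a probability distribution over the classes:

```python
import numpy as np

def softmax(z):
    # Shift by the row max for numerical stability; the result is unchanged
    # because softmax is invariant to adding a constant to each row.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

probs = softmax(np.array([[1.0, 2.0, 3.0]]))  # each row sums to 1
```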

`Dropout` |
A dropout layer. |
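The usual "inverted dropout" formulation can be sketched as follows (plain NumPy, not the npdl API): units are zeroed with probability `p` at training time, and the survivors are rescaled so the expected activation is unchanged:

```python
import numpy as np

def dropout(x, p, rng, training=True):
    # Inverted dropout: zero each unit with probability p and rescale the
    # survivors by 1 / (1 - p) so the expected activation is unchanged.
    if not training or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
out = dropout(np.ones((4, 5)), p=0.5, rng=rng)  # entries are either 0.0 or 2.0
```

At inference time (`training=False`) the layer is an identity, so no rescaling is needed.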

`Convolution` |
Convolution operator for filtering windows of two-dimensional inputs. |

`Embedding` |
Maps non-negative integer indices to dense vectors of fixed size via a learned lookup table. |
`BatchNormal` |
Batch normalization layer (Ioffe and Szegedy, 2014). |
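The core of batch normalization can be sketched in plain NumPy (a simplified training-time version, not the npdl API): normalize each feature over the batch to zero mean and unit variance, then apply a learned scale and shift:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the batch dimension to zero mean and
    # unit variance, then apply a learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
out = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
```

A full implementation also tracks running statistics for use at inference time, which this sketch omits.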

`MeanPooling` |
Average pooling operation for spatial data. |

`MaxPooling` |
Max pooling operation for spatial data. |
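For a single 2-D feature map with even side lengths, non-overlapping 2x2 max pooling can be sketched with a reshape (plain NumPy, not the npdl API):

```python
import numpy as np

def max_pool_2x2(x):
    # Non-overlapping 2x2 max pooling on an (H, W) feature map with even
    # H and W: view the map as a grid of 2x2 blocks and take each block's max.
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

x = np.arange(16.0).reshape(4, 4)
out = max_pool_2x2(x)  # [[5, 7], [13, 15]]
```

Mean pooling is identical except that `.max(axis=(1, 3))` becomes `.mean(axis=(1, 3))`.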

`Recurrent` |
A recurrent neural network (RNN) is a class of artificial neural network where connections between units form a directed cycle. |

`SimpleRNN` |
Fully-connected RNN where the output is to be fed back to input. |
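The feedback loop of a simple RNN can be sketched as one tanh step applied repeatedly, with the hidden state fed back at each time step (plain NumPy, not the npdl API):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One recurrent step: the previous hidden state is fed back and
    # combined with the current input through a tanh nonlinearity.
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

# Unroll over a short sequence: 5 time steps, 3 input features, 4 hidden units.
rng = np.random.default_rng(0)
W_x = rng.standard_normal((3, 4)) * 0.1
W_h = rng.standard_normal((4, 4)) * 0.1
b = np.zeros(4)
h = np.zeros(4)
for x_t in rng.standard_normal((5, 3)):
    h = rnn_step(x_t, h, W_x, W_h, b)
```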

`GRU` |
Gated recurrent units (GRUs) are a gating mechanism in recurrent neural networks, introduced in 2014. |

`LSTM` |
LSTM layer; supports masking, but does not support training. |

`BatchLSTM` |
Batch LSTM layer; supports training, but does not support masking. |

`Flatten` |
Flattens the input, collapsing the dimensions after the batch axis into one. |