Dense layers

class lasagne.layers.DenseLayer(incoming, num_units, W=lasagne.init.GlorotUniform(), b=lasagne.init.Constant(0.), nonlinearity=lasagne.nonlinearities.rectify, **kwargs)[source]

A fully connected layer.

Parameters:

incoming : a Layer instance or a tuple

The layer feeding into this layer, or the expected input shape

num_units : int

The number of units of the layer

W : Theano shared variable, numpy array or callable

An initializer for the weights of the layer. If a shared variable or a numpy array is provided, the shape should be (num_inputs, num_units). See lasagne.utils.create_param() for more information.

b : Theano shared variable, numpy array, callable or None

An initializer for the biases of the layer. If a shared variable or a numpy array is provided, the shape should be (num_units,). If None is provided, the layer will have no biases. See lasagne.utils.create_param() for more information.

nonlinearity : callable or None

The nonlinearity that is applied to the layer activations. If None is provided, the layer will be linear.

Notes

If the input to this layer has more than two axes, it will flatten the trailing axes. This is useful when a dense layer follows a convolutional layer, for example. It is not necessary to insert a FlattenLayer in this case.

Examples

>>> from lasagne.layers import InputLayer, DenseLayer
>>> l_in = InputLayer((100, 20))
>>> l1 = DenseLayer(l_in, num_units=50)

class lasagne.layers.NonlinearityLayer(incoming, nonlinearity=lasagne.nonlinearities.rectify, **kwargs)[source]

A layer that just applies a nonlinearity.

Parameters:

incoming : a Layer instance or a tuple

The layer feeding into this layer, or the expected input shape

nonlinearity : callable or None

The nonlinearity that is applied to the layer activations. If None is provided, the layer will be linear.

class lasagne.layers.NINLayer(incoming, num_units, untie_biases=False, W=lasagne.init.GlorotUniform(), b=lasagne.init.Constant(0.), nonlinearity=lasagne.nonlinearities.rectify, **kwargs)[source]

Network-in-network layer. Like DenseLayer, but broadcasting across all trailing dimensions beyond the 2nd. This results in a convolution operation with filter size 1 on all trailing dimensions. Any number of trailing dimensions is supported, so NINLayer can be used to implement 1D, 2D, 3D, ... convolutions.

Parameters:

incoming : a Layer instance or a tuple

The layer feeding into this layer, or the expected input shape

num_units : int

The number of units of the layer

untie_biases : bool

If False, the network has a single bias vector, as in a dense layer. If True, a separate bias vector is used for each trailing dimension beyond the 2nd.

W : Theano shared variable, numpy array or callable

An initializer for the weights of the layer. If a shared variable or a numpy array is provided, the shape should be (num_inputs, num_units), where num_inputs is the size of the second axis of the input. See lasagne.utils.create_param() for more information.

b : Theano shared variable, numpy array, callable or None

An initializer for the biases of the layer. If a shared variable or a numpy array is provided, the correct shape is determined by the untie_biases setting. If untie_biases is False, the shape should be (num_units,). If untie_biases is True, the shape should be (num_units, input_dim[2], ..., input_dim[-1]). If None is provided, the layer will have no biases. See lasagne.utils.create_param() for more information.

nonlinearity : callable or None

The nonlinearity that is applied to the layer activations. If None is provided, the layer will be linear.

References

[R14] Lin, Min, Qiang Chen, and Shuicheng Yan (2013): Network in network. arXiv preprint arXiv:1312.4400.

Examples

>>> from lasagne.layers import InputLayer, NINLayer
>>> l_in = InputLayer((100, 20, 10, 3))
>>> l1 = NINLayer(l_in, num_units=5)