lasagne.layers.dnn¶
- class lasagne.layers.dnn.Pool2DDNNLayer(incoming, pool_size, stride=None, pad=(0, 0), ignore_border=True, mode='max', **kwargs)[source]¶
2D pooling layer
Performs 2D mean- or max-pooling over the two trailing axes of a 4D input tensor. This is an alternative implementation which uses theano.sandbox.cuda.dnn.dnn_pool directly.
Parameters: incoming : a Layer instance or tuple
The layer feeding into this layer, or the expected input shape.
pool_size : integer or iterable
The length of the pooling region in each dimension. If an integer, it is promoted to a square pooling region. If an iterable, it should have two elements.
stride : integer, iterable or None
The strides between successive pooling regions in each dimension. If None then stride = pool_size.
pad : integer or iterable
Number of elements to be added on each side of the input in each dimension. Each value must be less than the corresponding stride.
ignore_border : bool (default: True)
This implementation never includes partial pooling regions, so this argument must always be set to True. It exists only to make sure the interface is compatible with lasagne.layers.MaxPool2DLayer.
mode : string
Pooling mode, one of ‘max’, ‘average_inc_pad’ or ‘average_exc_pad’. Defaults to ‘max’.
**kwargs
Any additional keyword arguments are passed to the Layer superclass.
Notes
The value used to pad the input is chosen to be less than the minimum of the input, so that the output of each pooling region always corresponds to some element in the unpadded input region.
This is a drop-in replacement for lasagne.layers.MaxPool2DLayer. Its interface is the same, except it does not support the ignore_border argument.
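Since partial pooling regions are never included, the output size in each dimension follows the usual floor arithmetic. The helper below is a plain-Python sketch of that arithmetic for illustration only; it is not part of Lasagne's API.

```python
def pool_output_length(input_length, pool_size, stride, pad):
    """Output length of one pooled dimension when partial pooling
    regions are never included (the ignore_border=True behaviour)."""
    return (input_length + 2 * pad - pool_size) // stride + 1

# A 32x32 feature map with 2x2 max-pooling and stride 2 halves each dimension:
print(pool_output_length(32, pool_size=2, stride=2, pad=0))  # -> 16
# With a 3x3 region and stride 2, a trailing partial region is dropped:
print(pool_output_length(7, pool_size=3, stride=2, pad=0))   # -> 3
```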
- class lasagne.layers.dnn.MaxPool2DDNNLayer(incoming, pool_size, stride=None, pad=(0, 0), ignore_border=True, **kwargs)[source]¶
2D max-pooling layer
Subclass of Pool2DDNNLayer fixing mode='max', provided for compatibility with other MaxPool2DLayer classes.
- class lasagne.layers.dnn.Conv2DDNNLayer(incoming, num_filters, filter_size, stride=(1, 1), pad=0, untie_biases=False, W=lasagne.init.GlorotUniform(), b=lasagne.init.Constant(0.), nonlinearity=lasagne.nonlinearities.rectify, flip_filters=False, **kwargs)[source]¶
2D convolutional layer
Performs a 2D convolution on its input and optionally adds a bias and applies an elementwise nonlinearity. This is an alternative implementation which uses theano.sandbox.cuda.dnn.dnn_conv directly.
Parameters: incoming : a Layer instance or a tuple
The layer feeding into this layer, or the expected input shape. The incoming data should be a 4D tensor with shape (batch_size, num_input_channels, input_rows, input_columns).
num_filters : int
The number of learnable convolutional filters this layer has.
filter_size : int or iterable of int
An integer or a 2-element tuple specifying the size of the filters.
stride : int or iterable of int
An integer or a 2-element tuple specifying the stride of the convolution operation.
pad : int, iterable of int, ‘full’, ‘same’ or ‘valid’ (default: 0)
By default, the convolution is only computed where the input and the filter fully overlap (a valid convolution). When stride=1, this yields an output that is smaller than the input by filter_size - 1. The pad argument allows you to implicitly pad the input with zeros, extending the output size.
A single integer results in symmetric zero-padding of the given size on all borders, a tuple of two integers allows different symmetric padding per dimension.
'full' pads with one less than the filter size on both sides. This is equivalent to computing the convolution wherever the input and the filter overlap by at least one position.
'same' pads with half the filter size on both sides (one less on the second side for an even filter size). When stride=1, this results in an output size equal to the input size.
'valid' is an alias for 0 (no padding / a valid convolution).
Note that 'full' and 'same' can be faster than equivalent integer values due to optimizations by Theano.
untie_biases : bool (default: False)
If False, the layer will have a bias parameter for each channel, which is shared across all positions in this channel. As a result, the b attribute will be a vector (1D).
If True, the layer will have separate bias parameters for each position in each channel. As a result, the b attribute will be a 3D tensor.
W : Theano shared variable, numpy array or callable
An initializer for the weights of the layer. This should initialize the layer weights to a 4D array with shape (num_filters, num_input_channels, filter_rows, filter_columns). See lasagne.utils.create_param() for more information.
b : Theano shared variable, numpy array, callable or None
An initializer for the biases of the layer. If None is provided, the layer will have no biases. This should initialize the layer biases to a 1D array with shape (num_filters,) if untie_biases is set to False. If it is set to True, its shape should be (num_filters, input_rows, input_columns) instead. See lasagne.utils.create_param() for more information.
nonlinearity : callable or None
The nonlinearity that is applied to the layer activations. If None is provided, the layer will be linear.
flip_filters : bool (default: False)
Whether to flip the filters and perform a convolution, or not to flip them and perform a correlation. Flipping adds a bit of overhead, so it is disabled by default. In most cases this makes no difference, because the filters are learnt. However, flip_filters should be set to True if weights are loaded that were learnt using a regular lasagne.layers.Conv2DLayer, for example.
**kwargs
Any additional keyword arguments are passed to the Layer superclass.
Notes
Unlike lasagne.layers.Conv2DLayer, this layer properly supports pad='same'. It is not emulated. This should result in better performance.
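The pad modes described above determine the output size in each spatial dimension. The following plain-Python sketch illustrates that arithmetic (the function name and implementation are for illustration only, not Lasagne's API):

```python
def conv_output_length(input_length, filter_size, stride=1, pad=0):
    """Output length of one convolved dimension for the pad modes
    described above: an integer, 'valid', 'full', or 'same'."""
    if pad == 'valid':
        pad = 0
    elif pad == 'full':
        # pad with filter_size - 1 on both sides
        pad = filter_size - 1
    elif pad == 'same':
        # output size equals ceil(input / stride); equal to the
        # input size when stride == 1
        return (input_length + stride - 1) // stride
    return (input_length + 2 * pad - filter_size) // stride + 1

print(conv_output_length(28, 5, pad=0))       # -> 24  ('valid': 28 - (5-1))
print(conv_output_length(28, 5, pad='full'))  # -> 32  (28 + (5-1))
print(conv_output_length(28, 5, pad='same'))  # -> 28  (same as input)
```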
Attributes
W : Theano shared variable
Variable representing the filter weights.
b : Theano shared variable
Variable representing the biases.
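The distinction behind flip_filters can be seen in one dimension: a correlation slides the kernel over the input as-is, while a true convolution flips the kernel first. The following minimal sketch (plain Python, not Lasagne code) shows the difference:

```python
def correlate1d(x, w):
    """Valid cross-correlation: slide w over x without flipping
    (what flip_filters=False computes)."""
    n = len(w)
    return [sum(x[i + j] * w[j] for j in range(n))
            for i in range(len(x) - n + 1)]

def convolve1d(x, w):
    """True convolution: flip the kernel, then correlate
    (what flip_filters=True computes)."""
    return correlate1d(x, w[::-1])

x = [1, 2, 3, 4]
w = [1, 0, -1]
print(correlate1d(x, w))  # -> [-2, -2]
print(convolve1d(x, w))   # -> [2, 2]
```

For learnt filters the two give equivalent models, but weights trained with one orientation must be used with the same orientation, which is why flip_filters matters when transferring weights from lasagne.layers.Conv2DLayer.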