Noise layers

class lasagne.layers.DropoutLayer(incoming, p=0.5, rescale=True, **kwargs)[source]

Dropout layer

Sets values to zero with probability p. See notes for disabling dropout during testing.

Parameters:

incoming : a Layer instance or a tuple

the layer feeding into this layer, or the expected input shape

p : float or scalar tensor

The probability of setting a value to zero

rescale : bool

If True, the input is rescaled with input / (1 - p) when deterministic is False.

Notes

The dropout layer is a regularizer that randomly sets input values to zero; see [R15], [R16] for why this might improve generalization. During training you should set deterministic to False, and during testing you should set deterministic to True.

If rescale is True, the input is scaled with input / (1 - p) when deterministic is False; see the references for further discussion. Note that, in contrast to the formulation in [R15], this implementation scales the input at training time, so no additional scaling is required at test time.
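A minimal usage sketch (the layer names, shapes and dropout probability below are illustrative, not part of the API): dropout is inserted between layers, and the same network yields a stochastic training expression and a deterministic test expression depending on the deterministic flag passed to get_output().

import lasagne

l_in = lasagne.layers.InputLayer(shape=(None, 100))
l_hidden = lasagne.layers.DenseLayer(l_in, num_units=200)
# drop (and rescale) activations of l_hidden with probability p=0.5
l_drop = lasagne.layers.DropoutLayer(l_hidden, p=0.5, rescale=True)
l_out = lasagne.layers.DenseLayer(l_drop, num_units=10,
                                  nonlinearity=lasagne.nonlinearities.softmax)

# training expression: dropout is active (deterministic defaults to False)
train_out = lasagne.layers.get_output(l_out)
# test expression: dropout and rescaling are disabled
test_out = lasagne.layers.get_output(l_out, deterministic=True)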

References

[R15] Hinton, G., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R. R. (2012): Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
[R16] Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. R. (2014): Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(1), 1929-1958.
get_output_for(input, deterministic=False, **kwargs)[source]
Parameters:

input : tensor

output from the previous layer

deterministic : bool

If True, dropout and rescaling are disabled; see notes
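get_output_for() is normally not called directly; lasagne.layers.get_output() forwards the deterministic keyword to it. A rough sketch of the effect (layer names and sizes are illustrative), assuming Theano is available:

import numpy as np
import theano
import theano.tensor as T
import lasagne

x = T.matrix('x')
l_in = lasagne.layers.InputLayer(shape=(None, 10), input_var=x)
l_drop = lasagne.layers.DropoutLayer(l_in, p=0.5)

f_train = theano.function([x], lasagne.layers.get_output(l_drop, deterministic=False))
f_test = theano.function([x], lasagne.layers.get_output(l_drop, deterministic=True))

data = np.ones((5, 10), dtype=theano.config.floatX)
out_train = f_train(data)  # about half the entries are zero, the rest are 1 / (1 - p) = 2
out_test = f_test(data)    # dropout and rescaling disabled: output equals the input
assert np.allclose(out_test, data)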

lasagne.layers.dropout[source]

alias of DropoutLayer
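The lowercase alias can be used interchangeably with the class; a small sketch (shape and probability are illustrative):

import lasagne

l_in = lasagne.layers.InputLayer(shape=(None, 50))
l_drop = lasagne.layers.dropout(l_in, p=0.5)  # same as DropoutLayer(l_in, p=0.5)
assert isinstance(l_drop, lasagne.layers.DropoutLayer)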

class lasagne.layers.GaussianNoiseLayer(incoming, sigma=0.1, **kwargs)[source]

Gaussian noise layer.

Add zero-mean Gaussian noise of given standard deviation to the input [R17].

Parameters:

incoming : a Layer instance or a tuple

the layer feeding into this layer, or the expected input shape

sigma : float or tensor scalar

Standard deviation of added Gaussian noise

Notes

The Gaussian noise layer is a regularizer. During training you should set deterministic to False, and during testing you should set deterministic to True.
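A minimal sketch of typical usage (layer names, shapes and sigma are illustrative): the noise layer is stacked like any other layer, and noise injection is switched off by passing deterministic=True to get_output().

import lasagne

l_in = lasagne.layers.InputLayer(shape=(None, 100))
l_hidden = lasagne.layers.DenseLayer(l_in, num_units=200)
# add zero-mean Gaussian noise with standard deviation 0.1 to the activations
l_noise = lasagne.layers.GaussianNoiseLayer(l_hidden, sigma=0.1)
l_out = lasagne.layers.DenseLayer(l_noise, num_units=10)

train_out = lasagne.layers.get_output(l_out)                     # noise applied
test_out = lasagne.layers.get_output(l_out, deterministic=True)  # noise disabled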

References

[R17] Jim, K.-C., Giles, C. L., & Horne, B. G. (1996): An analysis of noise in recurrent neural networks: convergence and generalization. IEEE Transactions on Neural Networks, 7(6), 1424-1438.
get_output_for(input, deterministic=False, **kwargs)[source]
Parameters:

input : tensor

output from the previous layer

deterministic : bool

If True, noise is disabled; see notes
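As with DropoutLayer, the deterministic keyword is forwarded by lasagne.layers.get_output(); with deterministic=True the layer passes its input through unchanged. A small sketch (shapes and sigma are illustrative), assuming Theano is available:

import numpy as np
import theano
import theano.tensor as T
import lasagne

x = T.matrix('x')
l_in = lasagne.layers.InputLayer(shape=(None, 20), input_var=x)
l_noise = lasagne.layers.GaussianNoiseLayer(l_in, sigma=0.1)

f_test = theano.function([x], lasagne.layers.get_output(l_noise, deterministic=True))

data = np.random.rand(4, 20).astype(theano.config.floatX)
assert np.allclose(f_test(data), data)  # noise disabled: identity mapping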