# lasagne.nonlinearities

Non-linear activation functions for artificial neurons.

| Function | Description |
| --- | --- |
| `sigmoid(x)` | Sigmoid activation function $$\varphi(x) = \frac{1}{1 + e^{-x}}$$ |
| `softmax(x)` | Softmax activation function $$\varphi(\mathbf{x})_j = \frac{e^{\mathbf{x}_j}}{\sum_{k=1}^K e^{\mathbf{x}_k}}$$ where $$K$$ is the total number of neurons in the layer |
| `tanh(x)` | Tanh activation function $$\varphi(x) = \tanh(x)$$ |
| `rectify(x)` | Rectify activation function $$\varphi(x) = \max(0, x)$$ |
| `LeakyRectify([leakiness])` | Leaky rectifier $$\varphi(x) = \max(\alpha \cdot x, x)$$ |
| `leaky_rectify(x)` | Instance of `LeakyRectify` with leakiness $$\alpha = 0.01$$ |
| `very_leaky_rectify(x)` | Instance of `LeakyRectify` with leakiness $$\alpha = 1/3$$ |
| `linear(x)` | Linear activation function $$\varphi(x) = x$$ |
| `identity(x)` | Linear activation function $$\varphi(x) = x$$ |

## Detailed description

lasagne.nonlinearities.sigmoid(x)

Sigmoid activation function $$\varphi(x) = \frac{1}{1 + e^{-x}}$$

Parameters: x : float32. The activation (the summed, weighted input of a neuron).

Returns: float32 in [0, 1]. The output of the sigmoid function applied to the activation.
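As a sketch of the underlying math (in NumPy, not the symbolic Theano expression Lasagne actually builds; `numpy_sigmoid` is a hypothetical helper, not part of the library):

```python
import numpy as np

def numpy_sigmoid(x):
    """Sigmoid phi(x) = 1 / (1 + exp(-x)), element-wise."""
    # Split on the sign of x so exp() never sees a large positive
    # argument; this avoids overflow for activations of large magnitude.
    out = np.empty_like(x, dtype=np.float64)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))
    ex = np.exp(x[~pos])
    out[~pos] = ex / (1.0 + ex)
    return out

y = numpy_sigmoid(np.array([-2.0, 0.0, 2.0]))
# All outputs lie in (0, 1), and sigmoid(0) == 0.5.
```

Note the symmetry $$\varphi(-x) = 1 - \varphi(x)$$, which the example values exhibit.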
lasagne.nonlinearities.softmax(x)

Softmax activation function $$\varphi(\mathbf{x})_j = \frac{e^{\mathbf{x}_j}}{\sum_{k=1}^K e^{\mathbf{x}_k}}$$ where $$K$$ is the total number of neurons in the layer. This activation function gets applied row-wise.

Parameters: x : float32. The activation (the summed, weighted input of a neuron).

Returns: float32 where the sum of each row is 1 and each single value is in [0, 1]. The output of the softmax function applied to the activation.
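The row-wise application can be sketched in NumPy as follows (`numpy_softmax` is a hypothetical illustration, not the library's Theano implementation):

```python
import numpy as np

def numpy_softmax(x):
    """Row-wise softmax: phi(x)_j = exp(x_j) / sum_k exp(x_k)."""
    # Subtracting the row maximum leaves the result unchanged
    # (it cancels in the ratio) but prevents overflow in exp().
    z = x - x.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

x = np.array([[1.0, 2.0, 3.0],
              [1.0, 1.0, 1.0]])
p = numpy_softmax(x)
# Each row sums to 1; a row of equal inputs yields a uniform distribution.
```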
lasagne.nonlinearities.tanh(x)

Tanh activation function $$\varphi(x) = \tanh(x)$$

Parameters: x : float32. The activation (the summed, weighted input of a neuron).

Returns: float32 in [-1, 1]. The output of the tanh function applied to the activation.
lasagne.nonlinearities.rectify(x)

Rectify activation function $$\varphi(x) = \max(0, x)$$

Parameters: x : float32. The activation (the summed, weighted input of a neuron).

Returns: float32. The output of the rectify function applied to the activation.
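A minimal NumPy sketch of the same formula (`numpy_rectify` is a hypothetical name for illustration):

```python
import numpy as np

def numpy_rectify(x):
    """Rectifier (ReLU): phi(x) = max(0, x), element-wise."""
    return np.maximum(0.0, x)

r = numpy_rectify(np.array([-1.5, 0.0, 2.0]))
# Negative inputs are clamped to 0; non-negative inputs pass through.
```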
class lasagne.nonlinearities.LeakyRectify(leakiness=0.01)

Leaky rectifier $$\varphi(x) = \max(\alpha \cdot x, x)$$

The leaky rectifier was introduced in [R34]. Compared to the standard rectifier rectify(), it has a nonzero gradient for negative input, which often helps convergence.

Parameters: leakiness : float. Slope for negative input, usually between 0 and 1. A leakiness of 0 will lead to the standard rectifier, a leakiness of 1 will lead to a linear activation function, and any value in between will give a leaky rectifier.

leaky_rectify
Instance with default leakiness of 0.01, as in [R34].
very_leaky_rectify
Instance with high leakiness of 1/3, as in [R35].

References

 [R34] Maas et al. (2013): Rectifier Nonlinearities Improve Neural Network Acoustic Models, http://web.stanford.edu/~awni/papers/relu_hybrid_icml2013_final.pdf
 [R35] Graham, Benjamin (2014): Spatially-sparse convolutional neural networks, http://arxiv.org/abs/1409.6070

Examples

In contrast to other activation functions in this module, this is a class that needs to be instantiated to obtain a callable:

>>> from lasagne.layers import InputLayer, DenseLayer
>>> l_in = InputLayer((None, 100))
>>> from lasagne.nonlinearities import LeakyRectify
>>> custom_rectify = LeakyRectify(0.1)
>>> l1 = DenseLayer(l_in, num_units=200, nonlinearity=custom_rectify)

Alternatively, you can use the provided instance for leakiness=0.01:

>>> from lasagne.nonlinearities import leaky_rectify
>>> l2 = DenseLayer(l_in, num_units=200, nonlinearity=leaky_rectify)

Or the one for a high leakiness of 1/3:

>>> from lasagne.nonlinearities import very_leaky_rectify
>>> l3 = DenseLayer(l_in, num_units=200, nonlinearity=very_leaky_rectify)

Methods

- `__call__(x)`: Apply the leaky rectify function to the activation `x`.
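The callable-class pattern can be sketched in plain NumPy as follows (`NumpyLeakyRectify` is a hypothetical stand-in for illustration, not the Theano-backed class above):

```python
import numpy as np

class NumpyLeakyRectify:
    """Callable mirroring phi(x) = max(leakiness * x, x)."""

    def __init__(self, leakiness=0.01):
        self.leakiness = leakiness

    def __call__(self, x):
        # For leakiness in [0, 1], the steeper branch x wins for x >= 0
        # and the shallow branch leakiness * x wins for x < 0.
        return np.maximum(self.leakiness * x, x)

custom = NumpyLeakyRectify(0.1)
out = custom(np.array([-10.0, 5.0]))
# Negative input is scaled by 0.1; positive input passes through.
```

Setting leakiness to 0 recovers the standard rectifier, and leakiness 1 recovers the identity, matching the parameter description above.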
lasagne.nonlinearities.leaky_rectify(x)

Instance of LeakyRectify with leakiness $$\alpha=0.01$$

lasagne.nonlinearities.very_leaky_rectify(x)

Instance of LeakyRectify with leakiness $$\alpha=1/3$$

lasagne.nonlinearities.linear(x)

Linear activation function $$\varphi(x) = x$$

Parameters: x : float32. The activation (the summed, weighted input of a neuron).

Returns: float32. The output of the identity function applied to the activation.