inox.nn.activation#
Activation functions
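A minimal usage sketch (assuming, as for other inox modules, that activation instances are callable on JAX arrays):

import jax.numpy as jnp
from inox.nn.activation import ReLU, GELU

x = jnp.array([-2.0, -0.5, 0.0, 1.5])

relu = ReLU()
gelu = GELU(approximate=True)

print(relu(x))  # negative entries are clipped to zero
print(gelu(x))  # a smooth, ReLU-like response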
Classes#
Identity | Creates an identity activation function.
Tanh | Creates a hyperbolic tangent activation function.
Sigmoid | Creates a sigmoid activation function.
SiLU | Creates a sigmoid linear unit (SiLU) activation function.
Softplus | Creates a softplus activation function.
Softmax | Creates a softmax activation function.
ReLU | Creates a rectified linear unit (ReLU) activation function.
LeakyReLU | Creates a leaky-ReLU activation function.
ELU | Creates an exponential linear unit (ELU) activation function.
CELU | Creates a continuously-differentiable ELU (CELU) activation function.
GELU | Creates a Gaussian error linear unit (GELU) activation function.
SELU | Creates a self-normalizing ELU (SELU) activation function.
Descriptions#
- class inox.nn.activation.Identity#
Creates an identity activation function.
\[y = x\]
- class inox.nn.activation.Tanh#
Creates a hyperbolic tangent activation function.
\[y = \tanh(x)\]
- class inox.nn.activation.Sigmoid#
Creates a sigmoid activation function.
\[y = \sigma(x) = \frac{1}{1 + \exp(-x)}\]
- class inox.nn.activation.SiLU#
Creates a sigmoid linear unit (SiLU) activation function.
\[y = x \sigma(x)\]
References
Gaussian Error Linear Units (Hendrycks et al., 2017)
Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning (Elfwing et al., 2017)
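For illustration, the formula can be reproduced directly with jax.numpy (a sketch, independent of the inox module):

import jax
import jax.numpy as jnp

x = jnp.linspace(-3.0, 3.0, 7)
y = x * jax.nn.sigmoid(x)               # y = x * sigma(x)
print(jnp.allclose(y, jax.nn.silu(x)))  # True: matches jax.nn.silu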
- class inox.nn.activation.Softplus#
Creates a softplus activation function.
\[y = \log(1 + \exp(x))\]
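A sketch of the formula with jax.numpy (softplus acts as a smooth approximation of ReLU):

import jax.numpy as jnp

x = jnp.array([-5.0, 0.0, 5.0])
y = jnp.log1p(jnp.exp(x))  # log(1 + exp(x))
print(y)                   # near 0 for very negative x, near x for large x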
- class inox.nn.activation.Softmax(axis=-1)#
Creates a softmax activation function.
\[y_i = \frac{\exp(x_i)}{\sum_j \exp(x_j)}\]
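The axis argument selects the dimension over which the exponentials are normalized. A sketch of the formula with jax.numpy (example values are hypothetical):

import jax.numpy as jnp

x = jnp.array([[1.0, 2.0, 3.0],
               [0.0, 0.0, 0.0]])

# Normalize along the last axis, as with the default Softmax(axis=-1).
y = jnp.exp(x) / jnp.sum(jnp.exp(x), axis=-1, keepdims=True)
print(jnp.sum(y, axis=-1))  # each row sums to 1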
- class inox.nn.activation.ReLU#
Creates a rectified linear unit (ReLU) activation function.
\[y = \max(x, 0)\]
- class inox.nn.activation.LeakyReLU(alpha=0.01)#
Creates a leaky-ReLU activation function.
\[\begin{split}y = \begin{cases} \alpha x & \text{if } x \leq 0 \\ x & \text{otherwise} \end{cases}\end{split}\]
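A sketch of the piecewise definition with jax.numpy, using the default alpha=0.01:

import jax.numpy as jnp

alpha = 0.01
x = jnp.array([-3.0, -1.0, 0.0, 2.0])
y = jnp.where(x <= 0, alpha * x, x)  # slope alpha on the negative side
print(y)                             # [-0.03 -0.01  0.    2.  ]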
- class inox.nn.activation.ELU(alpha=1.0)#
Creates an exponential linear unit (ELU) activation function.
\[\begin{split}y = \begin{cases} \alpha (\exp(x) - 1) & \text{if } x \leq 0 \\ x & \text{otherwise} \end{cases}\end{split}\]
References
Fast and Accurate Deep Network Learning by Exponential Linear Units (Clevert et al., 2015)
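A sketch of the formula with jax.numpy; for large negative inputs the output saturates at \(-\alpha\):

import jax.numpy as jnp

alpha = 1.0
x = jnp.array([-10.0, -1.0, 0.0, 3.0])
y = jnp.where(x <= 0, alpha * (jnp.exp(x) - 1.0), x)
print(y)  # approaches -alpha for very negative x, identity for positive x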
- class inox.nn.activation.CELU(alpha=1.0)#
Creates a continuously-differentiable ELU (CELU) activation function.
\[y = \max(x, 0) + \alpha \min(0, \exp(x / \alpha) - 1)\]
References
Continuously Differentiable Exponential Linear Units (Barron, 2017)
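A sketch of the formula with jax.numpy, checked against jax.nn.celu (not part of the inox API):

import jax
import jax.numpy as jnp

alpha = 1.0
x = jnp.array([-2.0, 0.0, 2.0])
y = jnp.maximum(x, 0.0) + alpha * jnp.minimum(0.0, jnp.exp(x / alpha) - 1.0)
print(jnp.allclose(y, jax.nn.celu(x, alpha=alpha)))  # True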
- class inox.nn.activation.GELU(approximate=True)#
Creates a Gaussian error linear unit (GELU) activation function.
\[y = \frac{x}{2} \left(1 + \mathrm{erf}\left(\frac{x}{\sqrt{2}}\right)\right)\]
When approximate=True, it is approximated as
\[y = \frac{x}{2} \left(1 + \tanh\left(\sqrt{\frac{2}{\pi}}(x + 0.044715 x^3)\right)\right)\]
References
Gaussian Error Linear Units (Hendrycks et al., 2017)
- Parameters:
approximate (bool) – Whether to use the approximate or exact formulation.
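To compare the two formulations, a sketch with jax (the erf-based exact form and the tanh approximation nearly coincide):

import jax.numpy as jnp
from jax.scipy.special import erf

x = jnp.linspace(-3.0, 3.0, 13)

exact = 0.5 * x * (1.0 + erf(x / jnp.sqrt(2.0)))
approx = 0.5 * x * (1.0 + jnp.tanh(jnp.sqrt(2.0 / jnp.pi) * (x + 0.044715 * x**3)))

print(jnp.max(jnp.abs(exact - approx)))  # small; the two curves are nearly identical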
- class inox.nn.activation.SELU#
Creates a self-normalizing ELU (SELU) activation function.
\[\begin{split}y = \lambda \begin{cases} \alpha (\exp(x) - 1) & \text{if } x \leq 0 \\ x & \text{otherwise} \end{cases}\end{split}\]
where \(\lambda \approx 1.0507\) and \(\alpha \approx 1.6732\).
References
Self-Normalizing Neural Networks (Klambauer et al., 2017)
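As an illustrative sketch (using jax.nn.selu, which applies the same constants): fed with standard normal inputs, the output remains approximately zero-mean and unit-variance, which is the self-normalizing property.

import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (10_000,))  # zero-mean, unit-variance inputs

y = jax.nn.selu(x)
print(y.mean(), y.std())  # both stay close to 0 and 1, respectively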