Some useful functions
import torch
import matplotlib.pyplot as plt
from nbdev.showdoc import *

stats[source]

stats(x)

Returns mean and std of a tensor

t = torch.randn((50,50))
t[:5,:5]
tensor([[ 0.9286, -0.2953, -1.2908,  0.8852, -0.1019],
        [-0.6417, -0.1461, -0.4287,  0.4329, -1.2059],
        [ 0.8204, -0.9540, -0.0658,  0.4746, -0.8641],
        [-1.4552, -1.7285, -0.8970, -1.5622, -1.2735],
        [ 0.3761,  0.8173,  0.5098, -0.0591,  0.4272]])
stats(t)
(tensor(0.0043), tensor(0.9670))
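A minimal sketch of what stats could look like, assuming it simply wraps the tensor's built-in mean and std methods:

def stats(x):
    "Returns mean and std of a tensor"
    # mean and (unbiased) standard deviation over all elements
    return x.mean(), x.std()

For a standard normal tensor like t, this returns values close to (0, 1), as in the output above.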

Cross Entropy Loss

The softmax of our activations is defined by:

$$\hbox{softmax(x)}_{i} = \frac{e^{x_{i}}}{e^{x_{0}} + e^{x_{1}} + \cdots + e^{x_{n-1}}}$$

or more concisely:

$$\hbox{softmax(x)}_{i} = \frac{e^{x_{i}}}{\sum_{0 \leq j \leq n-1} e^{x_{j}}}$$

where $n$ is the number of classes.

In practice, we will need the log of the softmax when we calculate the loss.
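Taking the log turns the division into a subtraction, since $\log(a/b) = \log a - \log b$:

$$\log\big(\hbox{softmax(x)}_{i}\big) = x_{i} - \log \sum_{0 \leq j \leq n-1} e^{x_{j}}$$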

log_softmax[source]

log_softmax(x)

log_softmax(t)
tensor([[-3.2889, -4.5127, -5.5082,  ..., -5.7149, -5.2011, -3.5205],
        [-4.7243, -4.2287, -4.5113,  ..., -2.9726, -3.0700, -2.8482],
        [-3.5969, -5.3712, -4.4830,  ..., -4.2475, -3.8662, -4.3414],
        ...,
        [-4.2794, -4.6401, -5.4084,  ..., -4.8664, -4.6423, -5.1971],
        [-5.8388, -3.2967, -4.3515,  ..., -5.8626, -6.9136, -4.9014],
        [-5.3133, -3.8779, -5.1986,  ..., -5.3160, -5.3171, -5.5674]])
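An illustrative implementation following the formula above (the actual source may differ, for instance by using the log-sum-exp trick for numerical stability):

def log_softmax(x):
    # x_i - log(sum_j exp(x_j)), with the sum taken over the last (class) dimension
    return x - x.exp().sum(-1, keepdim=True).log()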

accuracy[source]

accuracy(pred, y)

Accuracy metric
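A sketch of an accuracy metric consistent with this signature, assuming pred holds per-class scores and y holds target class indices:

def accuracy(pred, y):
    "Accuracy metric"
    # fraction of samples whose highest-scoring class matches the target
    return (pred.argmax(dim=-1) == y).float().mean()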

error[source]

error(pred, y)

Error metric
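error is then just the complement of accuracy; a possible sketch:

def error(pred, y):
    "Error metric"
    # fraction of misclassified samples
    return 1. - accuracy(pred, y)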

nll[source]

nll(pred, yb)

Negative Log Likelihood Loss function
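A sketch of negative log likelihood, assuming pred already holds log-probabilities (e.g. the output of log_softmax) and yb holds target class indices:

def nll(pred, yb):
    "Negative Log Likelihood Loss function"
    # pick each row's log-probability of its target class, negate, and average
    return -pred[torch.arange(yb.shape[0]), yb].mean()

Composing log_softmax and nll in this way gives the cross entropy loss that names this section.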

Plotting

plotdist[source]

plotdist(x, showsigmas=True)

Plot the distribution of x, optionally showing sigma markers (showsigmas)

plotdist(t)
plotdist(t,showsigmas=False)
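A rough sketch of what plotdist might do, assuming a histogram with vertical markers at the mean and at whole multiples of the standard deviation when showsigmas is set (the actual styling may differ); it reuses the plt import from the top of the page:

def plotdist(x, showsigmas=True):
    "Plot the distribution of x, optionally showing sigma markers"
    x = x.flatten()
    mean, std = x.mean(), x.std()
    plt.hist(x.numpy(), bins=50)
    if showsigmas:
        # mark the mean (solid) and +/- 1, 2, 3 sigmas (dashed)
        plt.axvline(mean.item(), color='red')
        for k in (1, 2, 3):
            plt.axvline((mean + k*std).item(), color='orange', linestyle='--')
            plt.axvline((mean - k*std).item(), color='orange', linestyle='--')
    plt.show()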