Binary cross-entropy and categorical cross-entropy are the two most common cross-entropy based loss functions available in deep learning frameworks like Keras. For a classification problem with N classes the cross-entropy is defined as:

CE = -Σ_i y_i · log(ŷ_i)
where y_i denotes whether the input belongs to the i-th class and ŷ_i is the predicted score for class i.
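To make the definition concrete, here is a small numeric check (the probabilities are made up for illustration):

```python
import numpy as np

y = np.array([0.0, 1.0, 0.0])      # one-hot ground truth: true class is the second one
y_hat = np.array([0.3, 0.6, 0.1])  # predicted probabilities (sum to 1)

# Cross-entropy: sum over classes of -y_i * log(y_hat_i)
ce = -np.sum(y * np.log(y_hat))
# For a one-hot target this reduces to -log of the true-class probability:
assert np.isclose(ce, -np.log(0.6))
```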
Categorical cross-entropy (CCE) is the cross-entropy computed on a vector ŷ produced by the softmax function. Softmax squashes the input vector into a vector that represents a valid probability distribution (i.e. its entries sum up to 1). CCE is suitable for multi-class problems, where a given input can belong to only one class (the classes are mutually exclusive). CCE can be implemented in the following way:
import numpy as np

def cce_loss(softmax_output, target):
    # Element-wise categorical cross-entropy: -y_i * log(y_hat_i)
    softmax_output = np.asarray(softmax_output, dtype=float)
    target = np.asarray(target, dtype=float)
    return -target * np.log(softmax_output)
>>> target = [0, 1, 0]
>>> softmax_output = [0.3, 0.6, 0.1]
>>> print(cce_loss(softmax_output, target))
[0.         0.51082562 0.        ]
From the above we see that only the probability of the true class contributes to the total loss. Model outputs for the other classes are influenced indirectly through the softmax activation, which works according to the "winner takes all" principle.
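For reference, the softmax activation mentioned above can be sketched as follows (the logit values are made up; the max-shift is a standard trick for numerical stability):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating.
    z = np.asarray(logits, dtype=float)
    e = np.exp(z - z.max())
    return e / e.sum()

scores = softmax([2.0, 1.0, 0.1])
# The outputs form a valid probability distribution: they sum to 1,
# and raising one logit necessarily pushes the other scores down.
assert np.isclose(scores.sum(), 1.0)
```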
The binary cross-entropy (BCE) considers each class score produced by the model independently, which makes this loss function suitable also for multi-label problems, where each input can belong to more than one class. Unlike CCE, BCE doesn't assume a specific activation function in the final network layer. For a problem with 3 output classes (A, B, C), binary cross-entropy considers three independent binary classification problems:
- class A vs. not class A
- class B vs. not class B
- class C vs. not class C
BCE is defined per class as:

BCE_i = -(y_i · log(ŷ_i) + (1 - y_i) · log(1 - ŷ_i))

and can be implemented in the following way:
import numpy as np

def bce_loss(output, target):
    # For binary targets, d = |output - target| equals 1 minus the score
    # assigned to the true label, so -log(1 - d) matches the per-class
    # BCE formula element-wise.
    output = np.asarray(output, dtype=float)
    target = np.asarray(target, dtype=float)
    d = np.abs(output - target)
    return -np.log(1 - d)
>>> target = [0, 1, 0]
>>> output = [0.3, 0.6, 0.1]
>>> print(bce_loss(output, target))
[0.35667494 0.51082562 0.10536052]
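Because each class is scored independently, the same function handles a multi-label target directly. A small sketch (the target and output values are made up; the scores could come from e.g. independent sigmoid outputs):

```python
import numpy as np

def bce_loss(output, target):
    # Same bce_loss as above, repeated so this snippet is self-contained.
    output = np.asarray(output, dtype=float)
    target = np.asarray(target, dtype=float)
    d = np.abs(output - target)
    return -np.log(1 - d)

# Hypothetical multi-label case: the input belongs to both class A and class C.
target = [1, 0, 1]
output = [0.8, 0.2, 0.7]
losses = bce_loss(output, target)
# Each entry is the BCE of one independent binary problem:
# [-log(0.8), -log(0.8), -log(0.7)]
```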