Table 1 Common loss functions applied in person ReID

Category: Metric learning

Contrastive loss [4]: $$\mathcal {L}_{\text {C}}= \frac {1}{2N} \sum \limits _{i=1}^{N} \left [ \left (1- y_{i}\right) d^{2} \left (\mathbf {f}_{1,i},\mathbf {f}_{2,i}\right) + y_{i} \max \left (0, m- d^{2} \left (\mathbf {f}_{1,i},\mathbf {f}_{2,i}\right)\right) \right ]$$
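As a concrete illustration, the contrastive loss above can be sketched in plain Python. This is a minimal sketch, not the authors' implementation; it follows the table's convention that $y_i=0$ marks a same-identity pair (pulled together) and $y_i=1$ a different-identity pair (pushed apart until the squared distance exceeds the margin $m$):

```python
def contrastive_loss(pairs, margin=1.0):
    """Contrastive loss over a list of ((f1, f2), y) pairs.

    Convention as in the formula: y = 0 for a same-identity pair,
    y = 1 for a different-identity pair.
    """
    total = 0.0
    for (f1, f2), y in pairs:
        # squared Euclidean distance d^2(f1, f2)
        d2 = sum((a - b) ** 2 for a, b in zip(f1, f2))
        total += (1 - y) * d2 + y * max(0.0, margin - d2)
    return total / (2 * len(pairs))
```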
Triplet [47]: $$\mathcal {L}_{\text {T}}= \frac {1}{N_T} \sum \limits _{\substack {\text {a,p,n}\\ {y_a=y_{p}{\neq }y_n}}} \left [d \left (\mathbf {f}_a,\mathbf {f}_{p}\right) -d \left (\mathbf {f}_a,\mathbf {f}_{n}\right)\right ]_{+}$$
Triplet loss with margin [7]: $$\mathcal {L}_{\text {TM}}= \frac {1}{N_T} \sum \limits _{\substack {\text {a,p,n}\\ {y_a=y_{p}{\neq }y_n}}} \left [m+d \left (\mathbf {f}_a,\mathbf {f}_{p}\right) -d \left (\mathbf {f}_a,\mathbf {f}_{n}\right)\right ]_{+}$$
Semi-hard triplet [41]: $$\mathcal {L}_{\text {TBH}}= \frac {1}{N_s} \sum \limits _{a=1}^{N_s}\left [m+ \max _{y_{p} =y_{a}} d \left (\mathbf {f}_{a},\mathbf {f}_{p}\right)- \min _{y_{n} \neq y_{a}} d \left (\mathbf {f}_{a},\mathbf {f}_{n}\right)\right ]_{+}$$
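The max/min selection in this loss is the mining step: within a batch, each anchor is compared against its farthest positive and its nearest negative. A minimal pure-Python sketch (batch layout and Euclidean distance are assumptions, not the cited papers' code):

```python
def batch_hard_triplet(features, labels, margin=0.3):
    """For each anchor, take the hardest (farthest) positive and the
    hardest (nearest) negative in the batch, then apply the hinge."""
    def dist(a, b):  # Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    total = 0.0
    for a, (fa, ya) in enumerate(zip(features, labels)):
        pos = [dist(fa, features[p]) for p in range(len(labels))
               if labels[p] == ya and p != a]
        neg = [dist(fa, features[n]) for n in range(len(labels))
               if labels[n] != ya]
        if not pos or not neg:
            continue  # anchor has no valid positive or negative in the batch
        total += max(0.0, margin + max(pos) - min(neg))
    return total / len(features)
```

With well-separated identities the hinge is inactive and the loss vanishes; only hard positives/negatives contribute gradients.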
Quadruplet [5]: $$\begin {aligned} \mathcal {L}_{\text {quad}}= &\frac {1}{N} \sum \limits _{\substack {\text {a,p,n} \\ {y_{a}=y_{p}{\neq }y_{n}}}} \left [m_{1}+d \left (\mathbf {f}_{a},\mathbf {f}_{p}\right) -d \left (\mathbf {f}_{a},\mathbf {f}_{n}\right)\right ]_{+}\\ &+ \frac {1}{N} \sum \limits _{\substack {\text {a,p,n,k} \\ {y_a=y_{p}{\neq }y_{n}{\neq }y_k}}} \left [m_2+d \left (\mathbf {f}_a,\mathbf {f}_{p}\right) -d \left (\mathbf {f}_n,\mathbf {f}_{k}\right)\right ]_{+} \end {aligned}$$
HAP2S [48]: $$\mathcal {L}_{\text {HAP2S}}= \frac {1}{N_{s}} \sum \limits _{a=1}^{N_{s}}\left [m+ \max _{y_{p} =y_{a}} d \left (\mathbf {f}_{a},\mathbf {S}_{p}\right)- \min _{y_{n} \neq y_{a}} d \left (\mathbf {f}_{a},\mathbf {S}_{n}\right)\right ]_{+}$$
Magnet [49]: $$\mathcal {L}_{\text {mag}}= \frac {1}{N} \sum \limits _{i=1}^{N} \left [ - \log \frac {e^{-\frac {1}{2{\sigma }^{2}} d\left (\mathbf {f}_{i}, {\mu }(\mathbf {f}_{i})\right)-m}}{ \sum _{k=1}^{C} e^{-\frac {1}{2{\sigma }^{2}} d\left (\mathbf {f}_{i}, {\mu }_{i}^{k} \right)}} \right ]_{+}$$
Category: Classification

Cross-entropy [2, 38, 50]: $$\mathcal {L}_{\text {CE}}= - \frac {1}{N} \sum \limits _{i=1}^{N} \log \frac {e^{\mathbf {W}_{y_{i}}^{T} \mathbf {f}_{i}}}{ \sum _{k=1}^{C} e^{\mathbf {W}_{k}^{T} \mathbf {f}_{i}}}$$
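For one sample the ID loss reduces to the negative log-softmax of the true-class score. A self-contained sketch (the logits are assumed to be the $C$ scores $\mathbf {W}_{k}^{T}\mathbf {f}_{i}$; this is an illustration, not any library's implementation):

```python
import math

def softmax_cross_entropy(logits, label):
    """-log softmax(logits)[label], computed with the usual
    max-shift so the exponentials cannot overflow."""
    m = max(logits)
    log_sum = math.log(sum(math.exp(z - m) for z in logits))
    return -(logits[label] - m) + log_sum
```

With uniform logits over $C$ classes the loss is $\log C$, the value at which an untrained classifier typically starts.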
Cosine softmax [49]: $$\mathcal {L}_{\text {CCE}}= - \frac {1}{N} \sum \limits _{i=1}^{N} \log \frac {e^{\kappa \, \tilde {\mathbf {W}}_{y_{i}}^{T} \tilde {\mathbf {f}}_{i}}}{ \sum _{k=1}^{C} e^{\kappa \, \tilde {\mathbf {W}}_{k}^{T} \tilde {\mathbf {f}}_{i}}}$$
Part-based cross-entropy [43-46]: $$\mathcal {L}_{\text {PCE}}= \sum \limits _{p=1}^{P} \mathcal {L}_{\text {CE}}^{p}$$
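The part-based formulation above is simply a sum of $P$ independent cross-entropy terms, one per part-level classifier. A sketch under that reading (the per-part logits are assumed inputs; in part-based models such as PCB each stripe has its own classifier head):

```python
import math

def part_based_ce(part_logits, label):
    """Sum of per-part cross-entropy losses L_CE^p over P classifiers.

    part_logits: list of P score vectors, one per body part.
    label: the shared identity index y for this image.
    """
    def ce(logits, y):  # standard stabilised -log softmax(logits)[y]
        m = max(logits)
        return -(logits[y] - m) + math.log(sum(math.exp(z - m) for z in logits))
    return sum(ce(logits, label) for logits in part_logits)
```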