
Table 6 Performance evaluation on the IJB-A dataset

From: Deep-learned faces: a survey

| Publication | 1:1 TAR @ FAR = 0.001 | 1:1 TAR @ FAR = 0.01 | 1:N TPIR @ FPIR = 0.01 | 1:N TPIR @ FPIR = 0.1 | 1:N Rank 1 | 1:N Rank 5 |
|---|---|---|---|---|---|---|
| VGGFace [37] | - | 0.461 ± 0.077 | 0.670 ± 0.031 | 0.913 ± 0.011 | - | - |
| Template [59] | 0.939 ± 0.013 | 0.979 ± 0.004 | 0.774 ± 0.049 | 0.882 ± 0.016 | 0.928 ± 0.010 | 0.977 ± 0.004 |
| NAN [58] | 0.941 ± 0.008 | 0.978 ± 0.003 | 0.817 ± 0.041 | 0.917 ± 0.009 | 0.958 ± 0.005 | 0.980 ± 0.005 |
| B-CNN [62] | - | - | 0.143 ± 0.027 | 0.341 ± 0.032 | 0.588 ± 0.020 | 0.796 ± 0.017 |
| DCNN_manual+metric [63] | - | 0.787 ± 0.043 | - | - | 0.852 ± 0.018 | 0.937 ± 0.010 |

  1. For 1:1 verification, the true accept rate (TAR) at fixed false accept rates (FAR) is reported. For 1:N identification, the true-positive identification rate (TPIR) at fixed false-positive identification rates (FPIR) and the rank-N accuracies are presented
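To make the verification metric concrete, the sketch below shows one common way to compute TAR at a fixed FAR from raw pair-comparison scores: choose the score threshold at which the desired fraction of impostor (different-identity) pairs is accepted, then measure the fraction of genuine (same-identity) pairs accepted at that threshold. The function name `tar_at_far` and the synthetic scores are illustrative assumptions, not code from the surveyed papers; the IJB-A protocol additionally averages each metric over its ten evaluation splits, which is what the ± values in the table reflect.

```python
import numpy as np

def tar_at_far(genuine_scores, impostor_scores, far_target):
    """True accept rate (TAR) at a given false accept rate (FAR).

    The threshold is set so that roughly `far_target` of impostor pairs
    score at or above it; TAR is the fraction of genuine pairs that
    clear the same threshold.
    """
    threshold = np.quantile(impostor_scores, 1.0 - far_target)
    return float(np.mean(np.asarray(genuine_scores) >= threshold))

# Toy usage with random similarity scores (illustration only).
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.15, 5_000)    # same-identity pair scores
impostor = rng.normal(0.3, 0.15, 50_000)  # different-identity pair scores
print(tar_at_far(genuine, impostor, 0.01))   # TAR @ FAR = 0.01
print(tar_at_far(genuine, impostor, 0.001))  # TAR @ FAR = 0.001
```

TPIR at a fixed FPIR is computed analogously for the open-set 1:N protocol, with probes whose identities are absent from the gallery playing the role of the impostor comparisons.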