
Table 2 Results for different settings of alignment and pruning on the two datasets (W: Watertight, P: Princeton), reported as Nearest Neighbor (NN), First Tier (FT), Second Tier (ST), and Discounted Cumulative Gain (DCG). The two rows shown in bold illustrate the performance of the best precision/runtime trade-off.

From: From 2D Silhouettes to 3D Object Retrieval: Contributions and Benchmarking

  

| Setting | Dataset | NN (%) | FT (%) | ST (%) | DCG (%) |
|---|---|---|---|---|---|
| Align (None), 3 Views, Prun () | W | 92.5 | 51.6 | 65.6 | 82.1 |
| | P | 60.4 | 30.5 | 41.8 | 60.1 |
| Align (NPCA), 3 Views, Prun () | W | 93.5 | 60.7 | 71.9 | 86 |
| | P | 62.7 | 37.1 | 49.2 | 64.1 |
| Align (PCA), 3 Views, Prun () | W | 94.7 | 61.5 | 72.8 | 86.5 |
| | P | 65.4 | 38.2 | 49.7 | 64.7 |
| Align (Our), 3 Views, Prun () | W | 95.2 | 62.7 | 73.7 | 86.9 |
| | P | 67.1 | 39.8 | 51 | 66.1 |
| Align (Our), 9 Views, Prun () | W | 95.2 | 65.3 | 75.6 | 88 |
| | P | 71.9 | 45.1 | 55.6 | 70.1 |
| Align (Our), 3 Views, Prun () | W | 89.5 | 57.8 | 72.3 | 83.9 |
| | P | 60.5 | 34.5 | 47.2 | 61.8 |
| Align (Our), 3 Views, Prun () | W | 95.5 | 62.8 | 73.7 | 86.9 |
| | P | 66.1 | 40.1 | 51 | 66 |