
Table 1 Performance comparison of different DCNN models on LEVIR data set

From: Performance analysis of different DCNN models in remote sensing image object detection

| Base network | mAP@.5 | mAP@[.5:.95] | Test time (inference/NMS/total) | Memory |
|---|---|---|---|---|
| VGG16 | 0.815 | 0.592 | 14.9 ms/1.2 ms/16.1 ms | 321.2 M |
| VGG19 | 0.798 | 0.576 | 17.7 ms/1.1 ms/18.8 ms | 363.7 M |
| InceptionV3 | 0.874 | 0.639 | 9.8 ms/1.1 ms/11.1 ms | 394.0 M |
| InceptionV4 | 0.728 | 0.502 | 15.9 ms/1.1 ms/17.0 ms | 543.0 M |
| ResNet50 | 0.830 | 0.600 | 9.5 ms/1.1 ms/10.6 ms | 398.2 M |
| ResNet101 | 0.795 | 0.566 | 12.9 ms/1.1 ms/14.0 ms | 558.2 M |
| ResNeXt50 | 0.782 | 0.557 | 13.8 ms/1.2 ms/15.0 ms | 401.7 M |
| ResNeXt101 | 0.797 | 0.568 | 34.7 ms/1.1 ms/35.8 ms | 559.5 M |
| SqueezeNet | 0.905 | 0.673 | 6.0 ms/1.0 ms/6.9 ms | 217.9 M |
| ShuffleNetV2 | 0.856 | 0.618 | 3.8 ms/1.1 ms/4.9 ms | 217.2 M |
| DarkNet53 | 0.868 | 0.539 | 11.7 ms/1.2 ms/12.9 ms | 532.7 M |
| MobileNetV2 | 0.873 | 0.634 | 4.0 ms/1.1 ms/5.1 ms | 217.9 M |
| MobileNetV3 | 0.869 | 0.633 | 4.9 ms/1.1 ms/6.0 ms | 217.9 M |
| SE-ResNet50 | 0.852 | 0.619 | 10.4 ms/1.4 ms/11.8 ms | 426.0 M |
| SK-ResNet50 | 0.823 | 0.592 | 9.2 ms/1.1 ms/10.3 ms | 260.1 M |
| CSPDarknet53 | 0.882 | 0.639 | 11.1 ms/1.2 ms/12.3 ms | 420.8 M |
| EfficientB0 | 0.757 | 0.537 | 6.4 ms/1.2 ms/7.6 ms | 241.1 M |
| EfficientB1 | 0.835 | 0.600 | 7.7 ms/1.0 ms/8.8 ms | 261.3 M |
| GhostNet | 0.809 | 0.579 | 4.6 ms/1.1 ms/5.7 ms | 229.4 M |
| Res2Net50 | 0.761 | 0.536 | 11.2 ms/1.3 ms/12.5 ms | 407.1 M |

  1. The best results are in bold; the second-best results are underlined
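
For reference, the mAP@[.5:.95] column follows the COCO convention of averaging AP over IoU thresholds from 0.50 to 0.95 in steps of 0.05, while mAP@.5 is the AP at a single IoU threshold of 0.5. The snippet below is a minimal sketch of that averaging step, assuming per-threshold AP values have already been produced by an existing detection evaluator; the function name and the example numbers are illustrative and are not taken from Table 1.

```python
import numpy as np

def map_50_95(ap_per_iou: dict) -> float:
    """Average AP over the COCO IoU thresholds 0.50, 0.55, ..., 0.95.

    `ap_per_iou` maps an IoU threshold to the AP measured at that
    threshold (values assumed to come from an existing evaluator).
    """
    thresholds = np.arange(0.5, 1.0, 0.05).round(2)
    return float(np.mean([ap_per_iou[t] for t in thresholds]))

# Illustrative values only: AP typically decays as the IoU threshold
# tightens, so mAP@[.5:.95] is always <= mAP@.5, as seen in Table 1.
example = {t: max(0.0, 0.90 - 0.8 * (t - 0.5))
           for t in np.arange(0.5, 1.0, 0.05).round(2)}
print(f"mAP@.5       = {example[0.5]:.3f}")
print(f"mAP@[.5:.95] = {map_50_95(example):.3f}")
```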