
Table 2 Main properties of different channel pruning techniques

From: Exploiting prunability for person re-identification

| Strategy | Method | Criterion |
|---|---|---|
| Prune in one step | L1 [33] | Weights: \(S_{j}=\sum_{k} \left | w_{k} \right |\) |
| | Redundant channels [67] | Weights: \(SIM_{C}(\mathbf {W}_{i},\mathbf {W}_{j}) = \frac {\mathbf {W}_{i} \boldsymbol {\cdot } \mathbf {W}_{j}}{\left \| \mathbf {W}_{i} \right \| \left \| \mathbf {W}_{j} \right \|}\) |
| | Entropy [34] | Feature maps: \(E_{j} = -\sum _{a=1}^{m} p_{a}\log (p_{a})\) |
| Prune iteratively | Taylor [32] | Feature maps: \(\left | \Delta C(\mathbf {H}_{i,j}) \right | = \left | \frac {\delta C}{\delta \mathbf {H}_{i,j}} \mathbf {H}_{i,j}\right |\) |
| | FPGM [68] | Weights: \(\mathbf {W}_{i,j^{\ast }} \in \underset {x \in \mathbb {R}^{n_{in} \times k \times k}}{\arg \min } \sum _{j^{\prime } \in [1, n_{out}]} \left \| x - \mathbf {W}_{i,j^{\prime }} \right \|_{2}\) |
| Prune iteratively with regularization | Play and Prune [69] | Weights: \(S_{j}=\sum_{k} \left | w_{k} \right |\) |
| | Auto-Balance [70] | Weights: \(S_{j}=\sum_{k} \left | w_{k} \right |\) |
| Prune iteratively, min. reconstruction error | ThiNet | Feature maps: \(\mathbf {H}_{i+1,j} = \sum _{c=1}^{C} \sum _{k_{1}=1}^{K} \sum _{k_{2}=1}^{K} \mathbf {W}_{i,c,k_{1},k_{2}} * \mathbf {H}_{i,c}\) |
| | Channel pruning [35] | Feature maps: \(\underset {\beta,\mathbf {W}}{\arg \min } \tfrac {1}{2N}\left \| \mathbf {H}_{i+1,j} - \sum _{c=1}^{n} \beta _{i,c} \mathbf {H}_{i,c} \mathbf {W}_{i,c} \right \|_{F}^{2} + \lambda \left \| \beta \right \|_{1}\) |
| Prune progressively | PSFP [36] | Weights: \(S_{j}=\big (\sum_{k} \left | w_{k} \right |^{2}\big)^{1/2}\) |
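To make the weight-based criteria concrete, here is a minimal sketch of two of the one-step measures from the table: the L1 score \(S_{j}=\sum_{k}|w_{k}|\) [33] and the channel-similarity measure \(SIM_{C}\) [67], applied to toy filters represented as flat lists of weights. The function names and toy values are illustrative, not from the paper.

```python
import math

def l1_score(filter_weights):
    # L1 criterion [33]: S_j = sum_k |w_k|.
    # Channels whose filters have a small L1 norm are pruned first.
    return sum(abs(w) for w in filter_weights)

def channel_similarity(w_i, w_j):
    # Redundancy criterion [67]: cosine similarity between two filters,
    # SIM_C = (W_i . W_j) / (||W_i|| * ||W_j||).
    # Near-identical filters (similarity close to 1) are redundant,
    # so one of the pair can be pruned.
    dot = sum(a * b for a, b in zip(w_i, w_j))
    norms = math.sqrt(sum(a * a for a in w_i)) * math.sqrt(sum(b * b for b in w_j))
    return dot / norms

# Toy layer with three filters; filters 0 and 2 are identical (redundant),
# and filter 1 has near-zero weights (low L1 importance).
filters = [[0.5, -0.2, 0.1], [0.01, 0.02, -0.01], [0.5, -0.2, 0.1]]
scores = [l1_score(f) for f in filters]
print(scores)                                   # filter 1 scores lowest
print(channel_similarity(filters[0], filters[2]))  # identical filters -> 1.0
```

Either score can then drive a one-step pruning pass: rank the channels, drop the lowest-ranked fraction, and fine-tune the remaining network once.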