
A multi-camera system for damage and tampering detection in a postal security framework

Abstract

In this paper, we describe a multi-camera system for parcel inspection which detects signs of damage and cues of tampering. The proposed system has been developed within the EU project SAFEPOST as part of a multi-sensor scanning modality, to enhance the safety and security of parcels travelling on the European postal supply chain. Our work addresses in particular the safety of valuable goods, whose presence on the postal supply chain is in steady growth. The method we propose is based on extracting 3D shape and appearance information, detecting signs of damage or tampering in real time, and storing the model for future comparative analysis when required by the system. We provide experimental evidence of the effectiveness of the method, both in laboratory and field tests.

1 Introduction

Over the last few years, statistics on e-commerce have shown its significance and steady growth: the 2015 Global B2C E-Commerce report (https://ec.europa.eu/futurium/en/content/global-e-commerce-turnover-grew-240-reach-1943bn-2014) confirms the booming of this market, with a +23.6% increase in transactions, a percentage which is expected to grow in the near future. This development has significantly stressed worldwide postal services, with a consequent increasing call for automation to support human operators. At the same time, the security of the postal supply chain has become a major concern in many countries [29]: postal systems are vulnerable not only to terrorist attacks but also to other types of threat, such as smuggling of goods and theft, just to name a few. An unsuitable strategy may cause delays and congestion, with a deterioration of the service experienced by customers and a consequent loss of market share.

The EU project SAFEPOST aims at establishing sustainable postal security solutions, focusing on innovative screening methods and advanced information processing.

One of the main screening solutions targeted by the project is the automatic analysis and storage of images coming from a screening/scanning process at various stages of the distribution process, in order to detect signs of damage and tampering. This paper describes the rationale of the approach we undertook and provides an account of the method we developed and implemented on a prototype scanning system.

According to the feedback obtained through a questionnaire involving 10 postal operators in Europe (see Fig. 1), there is significant interest in scanning the parcels travelling through Europe. Many operators have adopted image-based devices (all the interviewed operators use bar-code readers, primarily for track and trace; 6 of them have an x-ray machine; 5 of them have video cameras), but there is a lack of readily available automatic solutions with the exception of bar-code readers (in the comments, many operators noted they had not adopted any automatic scanning because they could not find products on the market). Half of the interviewed operators would alter the layout of the sorting centre to add new scanners (though they all mentioned “depending on the space availability”), and 4 of them would accept a slowdown in the process in exchange for a significant increase in security. Thus, the users’ requirements may be summarised as follows: the SAFEPOST scanning system (including the image recognition system described in this paper) should not significantly slow down the normal procedure and must fit in the existing plants with no need for reconfiguration.

Fig. 1. A summary of the questionnaire outcomes (see text)

In this respect, it is important to point out that different sorting centres have different layouts and functionalities. Therefore, the proposed architecture needs to be light and highly reconfigurable; at the same time, the system must work at least at the speed of the slowest portion of the processing chain, which is normally the in-feed ramp, running at about 0.5–1 m/s. As a further requirement, the cost impact of the devised solution needs to be limited, and we cannot intervene on the appearance of the parcels to be scanned. Indeed, lacking an appropriate European regulation imposing specific standards on packaged goods, the scanning solution must be able to analyse common packaging.

Therefore, following the requirements of the stakeholders, the SAFEPOST anti-damage system has been designed as an image recognition multi-camera system which analyses the exterior of a parcel from different viewpoints, estimating in real-time geometrical features (such as shape and size) and appearance features (such as brightness patterns) all of which can be used to detect anomalies, damages, and signs of tampering.

In order to balance efficiency and effectiveness, the solution we devise leverages the specificities of the application. In place of a full 3D reconstruction, we estimate the 3D shape of the parcel by fitting a 3D model with an effective real-time procedure. Then, we build a model of the parcel appearance: we project the 3D model on the available images of the parcel and then warp the detected sides on fronto-parallel views. The 3D shape and appearance models of a given parcel at a given time are collected in the SAFEPOST Common Security Space together with information from other sensors, ready to be used for future reference and for automatic comparison, should it be needed.

The proposed system is novel in the sense that, to the best of our knowledge, it is the first system performing an automatic analysis of the exterior of parcels in the postal domain.

The scanning prototype has been tested extensively both in a laboratory setting and in a functioning postal sorting centre (Correos Sorting Center, Zaragoza, Spain), and the obtained results make it suitable for real-time parcel scanning.

The remainder of the manuscript is organised as follows. Section 2 provides an account of related work and technologies. Section 3 delineates two use cases, which exemplify the use of our method. We then detail our architecture and software modules in Section 4. An extensive experimental analysis is provided in Section 5, while Section 6 is left to a final discussion.

2 Related work

In this section, we refer primarily to other machine vision systems available in the literature. Besides that, we also cover alternative sensing modalities and highlight why they have not been adopted even though they relate to our problem.

Machine vision systems have been used for decades as sensing modalities in quality control, and they are widely available on the market. The interested reader is referred, for instance, to [15]; other interesting readings on the topic are [7, 9, 26, 28]. Specifically referring to parcel inspection, it is worth mentioning the products commercialised by Vitronic (http://www.vitronic.com/), which include parcel logistics products for bar-code reading, OCR, and volume measurement. Their products do not include damage assessment; besides this, a comparison with our approach is not straightforward, since the company does not disclose technical details of the product. Cognex (http://www.cognex.com/), a leading industrial inspection company, proposes 3D vision products to compute 3D models of objects, which appear to be effective also on objects of small size. The methodologies are related to the ones presented in this paper, but the scope of the applications is quite different and difficult to compare. In this respect, it is worth observing how, in order to obtain effective systems, all methods need to be adjusted to the specificity of the objects to be analysed and to the characteristics and requirements of the specific supply chain.

A first important task in vision-based quality control systems is the localisation of the object of interest. Oftentimes, the pose of the object is controlled mechanically, and thus localisation on the image plane benefits from a prior provided by the system. In other applications, such as the postal one, objects are placed on a conveyor belt in unpredictable positions and poses. In this case, a standard procedure to localise parcels is to resort to change detection. There is a wide literature available on this topic (two recent surveys can be found in [3, 18]), reporting algorithms of different complexity. The choice of a specific algorithm mainly depends on whether the acquisition setting may be controlled.
Particularly important is the possibility of influencing the illumination and masking out clutter. After localisation, a pattern matching step is usually carried out. Often, this step relies on the availability of a reference database of images of known objects to perform object recognition or pattern matching. Image matching has been widely addressed by the computer vision community, with a variety of approaches available (see for instance [2, 4, 5, 14], or the comprehensive survey [27] for a more general view).

Besides machine vision, depending on the specific system requirements and the complexity of the task, alternative sensing modalities may be adopted. We mention X-ray sensors, although they are not appropriate for the specific problem we are considering, since they image the interior of an object. X-rays are used in a different but related application, which is luggage security control in airports. Here, it is worth observing that the majority of the analysis is performed manually, by human operators. SAFEPOST considered the use of X-rays as a way to inspect parcels automatically, looking for dangerous goods. There is a limited but promising literature of methods addressing object recognition in X-ray images [1, 8, 22], but the main limitations are the considerable cost of the machine and the significant amount of space it occupies, which is not always available in postal feeding lines. For this reason, in the SAFEPOST logic, X-ray scanning is considered a complementary feature which may be optionally installed in a postal centre, should it be requested by the postal operator.

Recent technological advances in sensors have had an impact on research and development in quality control and retail applications. In this respect, it is worth mentioning 3D scanners (http://www.3dsystems.com/shop/scanners, http://www.rapidform.com/3d-scanners/, http://www.kapricorp.net/postal-scanner-systems-14808-35.html, http://www.datalogic.com/eng/industries/trans-portation-logistics/postal-so-12.html), which have a very high potential for accuracy in the estimated point cloud, in particular if the speed requirements are not too stringent: current models perform on average at about 16 images per second, unless GPU-based systems are employed (see for instance [6]). However, their availability on the market is still very limited. It is also worth noticing that, from the technology advance point of view, most interest has been put on the development of portable 3D scanners: first, 3D scanners are still more expensive than traditional camera-based systems; second, they are hardly applicable if objects are moving. In this respect, a common approach is to decouple the acquisition, performed online, from the actual reconstruction, done offline (see, e.g. [21]); this approach cannot be adopted in our setting, since the output of parcel scanning must immediately trigger an action (e.g., checking the parcel).

RGB-D (Kinect-like) sensors are also a possibility worth considering today. Although they appeared on the market primarily in the entertainment/HCI fields, they are nowadays adopted in a variety of applications. Kinect Fusion obtains quite accurate 3D reconstructions of a fixed volume at a processing speed which is close to real time on a GPU [10]. Point cloud registration techniques may be employed to obtain multi-view reconstructions or reconstructions of larger areas [19, 20]. These sensors have potential for industrial quality control applications, although to date they are seldom employed. One of their limits is the fixed resolution (and fixed field of view), which may be appropriate for only a limited range of applications.

RFID technology is currently applied to the automation of groceries management in department stores (http://www.rfidarena.com/2013/4/11/grocery-industry-operations-are-facing-a-real-paradigm-shift.aspx). In spite of some clear advantages, RFID technology is still struggling against a lack of standards, a relatively high cost, and the fact that it impacts the production/distribution line considerably, since it requires a tag to be attached to each processed item [12]. In the context of anti-tampering ad hoc technologies, active packaging (or smart packaging) refers to the presence of active functions in the package [24], which may include the ability to sense or measure an attribute of the product, the package inner atmosphere, or the shipping environment. The measured quantity may then be communicated to a user. Diffused especially for food packaging, this class of intelligent packages may in principle be extended to different scenarios, although they lack appearance information.

As far as tampering prevention is concerned, the use of tamper-evident designs for packaging and labelling is often adopted, in combination with other tampering indicators, to improve the robustness of the strategy [11]. The National Security Agency, for instance, developed anti-tamper holographic and prism labels, to be applied on envelopes or packages, which are unlikely to be duplicated. It is common practice, however, to change the indicators regularly, as they may be subject to counterfeiting.

3 A use-case scenario

The imaging system of SAFEPOST enables direct use of parcel images, and it allows for the collection and storage of images for future reference. The main goal is to direct the attention of the postal operator towards items presenting some anomaly.

We start by providing two meaningful examples of the system use, which should help the reader in visualising possible usage modalities.

Use case I.

At the consolidation centre:

  • A parcel arrives in the scanning area.

  • A barcode reader records the parcel’s ID.

  • The image recognition system acquires a model of the parcel in real-time and automatically checks for any signs of anomaly on the viewed sides of the parcel.

  • A warning message is displayed on the local computer monitor.

  • After a careful visual inspection, the postal operator decides the anomaly could have been caused by someone handling the parcel with dirty hands; thus, the parcel does not need further analysis.

  • The information provided by the sensors is stored in the SAFEPOST Common Security Space and will be available for future reference.

Use case II.

At the Sender Operator:

  • The parcel is scanned by the image recognition system and by the barcode reader; no anomalies are detected.

  • Appearance and shapes models of the parcel are sent to the SAFEPOST Common Security Space together with the barcode ID.

At the Recipient Operator:

  • The parcel has reached the recipient centre; the image recognition system scans the parcel and the barcode reader reads its ID.

  • The SAFEPOST platform requests information (if available) on this specific parcel from previous scannings.

  • A comparative analysis of the old and the new models associated with the same barcode ID allows the software to detect changes that may indicate damage or tampering; an anomaly is detected and an alarm is sent to the operator.

  • In case of doubt, the human operator can also access previously stored images for a manual comparison.

4 Methods

In this section, we describe the designed setup and the software modules and summarise the overall procedure of the system.

4.1 The proposed architecture setup

The multi-camera system is meant to be installed on the in-feed of a postal consolidation centre. Parcels are fed on a conveyor belt, which is expected to run at about 0.5 m/s (carrying approximately one average-size parcel every second). The system hardware consists of a set of high-resolution video cameras (high-resolution quality-control colour CCD cameras, 1293×964, 14 bit/pixel, with megapixel varifocal lenses) and two illuminators mounted on an aluminum cage with 1 m sides. A sufficient number of cameras must be installed to allow for an effective analysis of the visible sides of the parcel. Four cameras are the minimum number to guarantee efficient results and robustness against different positions of the parcel, while with 3 cameras there could often be a non-visible side and the parcel would need careful positioning. Although it is not strictly necessary, we will assume that one of the cameras is mounted on top of the conveyor belt facing down (acquiring the top side of the parcel), while the others observe the lateral sides (see Figs. 2 and 3).

Fig. 2. A sketch of the reference setup we adopt in our installations: if possible, we put one camera on top of the conveyor belt and the other 3 cameras looking at the sides, as distant as possible from one another (see text)

Fig. 3. Examples of different setups we mounted and tested in different stages of the project. Above: laboratory setups; bottom: real-world installations

The system software includes a calibration module which allows us to estimate a common reference frame for all cameras and produce a metrically consistent 3D model; for this, we simply refer to [30]. In the following, we denote by M_j the projection matrix mapping a 3D point from the world reference system to the image plane Φ_j. Besides geometric calibration, the module also requires the region of interest (ROI) of each view to be configured manually, just once, at the start of the procedure. The ROIs are meant to exclude the areas outside the working environment, thus reducing the presence of clutter. They are particularly important as the prototype working environment is not enclosed by a curtain or a closed box, as it could be in a later engineering stage.
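The role of the projection matrix M_j can be illustrated in a few lines of code; the intrinsic and extrinsic values below are hypothetical placeholders, not the calibration output of the actual system:

```python
import numpy as np

# Hypothetical 3x4 projection matrix M_j for camera j, built from assumed
# calibration results (intrinsics K, rotation R, translation t): M_j = K [R | t].
K = np.array([[800.0,   0.0, 646.0],
              [  0.0, 800.0, 482.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])   # camera 2 m from the world origin
M_j = K @ np.hstack([R, t])

def project(M, X):
    """Map a 3D world point X (metres) to pixel coordinates on image plane j."""
    x = M @ np.append(X, 1.0)   # homogeneous projection
    return x[:2] / x[2]         # perspective division

# A point 0.1 m off the optical axis projects slightly right of the centre
u, v = project(M_j, np.array([0.1, 0.0, 0.0]))
```

This is the same mapping used later to project the vertices of the fitted parallelepiped on each image plane.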

Video streams from different cameras are acquired in a synchronous fashion, and processing is performed in real time. A video-based process aiming at locating the parcels to be scanned is run continuously, while the actual model acquisition is performed only when the parcel reaches an optimal position on the conveyor belt, as described in the following.

Figure 4 summarises the main components of the software. The cameras simultaneously acquire images of the parcel, which are segmented to localise the parcel silhouette on each image plane. As soon as all the obtained silhouettes fall inside the pre-defined ROIs, reconstruction takes place. The silhouettes of the parcel, in the form of binary maps, are passed to a parcel reconstruction module which produces a geometrical and appearance model of the parcel. The model is immediately used to detect anomalies (a deviation from a reference 3D model). It is then also stored in the SAFEPOST Common Security Space for future reference and comparisons, as a way to detect signs of damage or tampering along the parcel route from source to destination. The technical details of the main software modules are described in the remainder of the section.

Fig. 4. SAFEPOST image recognition workflow

4.2 The software modules

We now discuss our implementation of the functionalities mentioned in the previous section.

Parcel detection A change detection and background update model implementing an adaptive dictionary-based procedure [25] runs in real time. The algorithm follows a patch-based approach with a pixel-based refinement: it computes a dictionary of common background patches and is able to incorporate multiple background models, useful in particular to counteract the effects of set-up shakes and neon or repetitive illumination patterns.

For each video frame, it produces a binary map where changed pixels are marked in white (see Fig. 5). At each time instant t and for each map, we compute the main connected component (let us call it \(CC^{t}_{j}\)), where j identifies one of the M views. Given a time instant \(\hat t\) for which all the \(CC^{\hat {t}}_{j}\) are completely inside ROI_j (the predefined region of interest of view j), we consider that the optimal viewing position of the parcel has been reached, and parcel reconstruction can be carried out. Thus, differently from the computation of the connected components, which occurs at each time instant, parcel reconstruction is performed only when conditions are favourable.
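The triggering condition can be sketched as follows; this is a minimal illustration using SciPy's connected-component labelling, not the project's actual implementation:

```python
import numpy as np
from scipy import ndimage

def main_component(change_map):
    """Largest connected component CC_j^t of a binary change-detection map."""
    labels, n = ndimage.label(change_map)
    if n == 0:
        return None                         # no foreground: no parcel in view
    sizes = ndimage.sum(change_map, labels, range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

def inside_roi(cc, roi):
    """True when the component lies entirely inside the view's ROI mask."""
    return cc is not None and not np.any(cc & ~roi)

# Reconstruction triggers only when the condition holds for all M views:
# ready = all(inside_roi(main_component(m), r) for m, r in zip(maps, rois))
```

The per-view check is cheap, so it can run at frame rate while the belt moves.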

Fig. 5. Two sides of a parcel being scanned (left) and the associated binary maps (right)

We conclude by observing that the described procedure applies to a stand-alone, purely image-based installation. In an integrated installation, the system may receive feedback of an approaching parcel from other sensors installed earlier along the conveyor belt track. In this case, we can exploit the information redundancy to obtain a more accurate estimate of the parcel’s position and reduce the computation time required by the parcel detection module.

Parcel reconstruction At time \({\hat t}\), the M binary maps obtained by the change detection procedure are the input of the parcel reconstruction module. In the following, we consider the M main connected components and for clarity of the notation we omit the temporal index, referring to CC j for j=1,…,M.

To obtain an estimate of the parcel geometry and size in real time, we adopt a shape-based model where we fit a parallelepiped to the connected components. This assumption is quite restrictive in general, as parcels of more generic shapes are normally sent. However, when valuable or fragile goods are delivered, and more specifically within the e-commerce procedure, it is a generally valid assumption; indeed, in all these cases postal operators require very precise packaging standards. Besides that, the method we propose, loosely inspired by a method for automatic people counting [13], can easily be extended to other geometrical models should it be needed.

To obtain the best fit of a 3D model S to the scanned parcel, we maximise the overlap between the connected components CC_j and the projection \({\mathcal {P}}_{j}(S)\) of the 3D model on the corresponding image plane:

$$ C(S)=1 - \sum\limits_{j=1}^{M} \frac{\left|{\mathcal{P}}_{j}(S) \cap {CC}_{j}\right| }{ \left|{\mathcal{P}}_{j}(S) \cup {CC}_{j}\right| }, \qquad S^{*}=\arg\min_{S} C(S). $$
(1)

The minimisation is performed with Powell’s optimisation method [17]. A reasonably accurate initialisation of the 3D model S is computed as follows: we consider the top-view silhouette and fit a rectangle to it. The fitted rectangle is used to initialise the basis of the parallelepiped, while the height of the 3D model is initialised to a fixed value. The projections of the 3D model S on the image planes are computed implicitly, to reduce the computational time:

  • The vertices {V_i} of the current S are projected on each image plane Φ_j, j=1,…,M, via the j-th projection matrix M_j: \(v^{\,j}_{i} = M_{j} V_{i}\).

  • Then, for each image, we compute the convex hull [23] of the projected vertices \(v^{\,j}_{i}\) to obtain the region \({\mathcal {P}}_{j}(S)\) corresponding to the projection of the 3D shape on the image.
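To make the fitting step concrete, the sketch below implements the cost of Eq. 1 and minimises it with SciPy's Powell method on a toy single-view case; `project_model` is a stand-in for the vertex projection and convex-hull step described above, and the axis-aligned square "box" is purely illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def cost(params, components, project_model):
    """Eq. 1: one minus the summed intersection-over-union between each
    silhouette CC_j and the projection P_j(S) of the candidate model S.
    `project_model` is an assumed helper: model parameters -> one binary
    mask per view (projected vertices + 2D convex hull, as in the text)."""
    total = 0.0
    for cc, pj in zip(components, project_model(params)):
        union = np.logical_or(cc, pj).sum()
        if union:
            total += np.logical_and(cc, pj).sum() / union
    return 1.0 - total

# Toy single-view check: the "model" is an axis-aligned square of side s.
def square(s, size=32):
    m = np.zeros((size, size), bool)
    s = int(np.clip(round(s), 1, size - 5))
    m[4:4 + s, 4:4 + s] = True
    return m

observed = [square(10)]                 # observed parcel silhouette
fit = lambda p: cost(p, observed, lambda q: [square(q[0])])
res = minimize(fit, x0=[6.0], method="Powell")  # Powell, as in the paper
```

In the real system the parameter vector encodes the position, base sides, height, and orientation of the parallelepiped, and one mask per camera is produced.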

Figure 6 shows an example of the final estimated model reprojected on two different views.

Fig. 6. Two sides of a parcel being scanned (left) and the best 3D fitted model reprojected on the image plane (right)

When the optimisation converges to the minimum of Eq. 1, we also compute fronto-parallel views of each visible side of the parcel (that is, a view of a side of the parcel as if it were observed from the front), as in Fig. 7. The views are computed by considering the projection of the final 3D model vertices on each image plane and then warping the image portions corresponding to each projected side onto a rectangle of a fixed arbitrary size:

$$ I^{s}_{j} = {\mathcal{H}}({\mathcal{P}}_{j}(S)), \ \ \ j=1,\ldots,M, \ \ \ s=1, \ldots, \#VS $$
(2)

where #VS is the number of visible sides and the map \({\mathcal {H}}\) is the homography estimated from the four vertices of the projection and the corners of the fixed-size rectangle. Since each side can be viewed by more than one camera, we may obtain multiple (at most M) virtual views of a side; we then select the one with the highest image quality (estimated according to the quality metric proposed in [16]).

Fig. 7. An example of a parcel acquired from different viewpoints and the corresponding virtual images producing fronto-parallel views of each visible side
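The map H of Eq. 2 can be estimated from the four corner correspondences with a standard direct linear transform; the sketch below is a self-contained illustration (the corner coordinates are made up), whereas a production system would typically call an off-the-shelf routine such as OpenCV's getPerspectiveTransform:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography H mapping four source points (the
    projected corners of a parcel side) to four destination points (the
    corners of a fixed-size rectangle), via the standard DLT system."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, float))
    H = Vt[-1].reshape(3, 3)       # null-space vector of the 8x9 system
    return H / H[2, 2]

def warp_point(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# The four projected vertices of a side are mapped onto a 200x200 patch;
# sampling the source image through the inverse map yields the warped view.
corners = [(120.0, 80.0), (340.0, 95.0), (330.0, 300.0), (110.0, 280.0)]
rect = [(0.0, 0.0), (200.0, 0.0), (200.0, 200.0), (0.0, 200.0)]
H = homography(corners, rect)
```

Four point pairs determine the eight degrees of freedom of H exactly, which is why no least-squares refinement is needed here.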

The outputs of the parcel reconstruction phase are the shape model S (more specifically, (i) the size of the parallelepiped which best approximates the scanned parcel and (ii) the estimated volume) and the appearance model I_warp = {I^s}, a set of warped images of the fronto-parallel views of the visible sides s. The shape model is used on the fly to detect possible damages. Both models are then stored in the system database along with other information on the parcel, associated with its bar code.

Anomaly detection The first time a parcel is scanned by the system, we may observe signs of damage, usually caused by mis-handling.

We detect signs of breakage by estimating the deviation of the scanned object from an ideal parallelepiped shape. To do so, we rely on the estimated 3D model and on how well it overlaps with the actual parcel shape as detected on the image planes. The measure of shape quality can be derived directly from the minimisation procedure of Eq. 1: if the normalised overlap is below a given threshold δ, the parcel is reported as damaged. The threshold may be tuned in the calibration phase. Figure 8 shows two examples of parcels with two different levels of damage. On the top left, a small sign of tampering is ignored by the 3D model fit but stands out as a difference between the reprojected model and the image of the parcel (top right). On the second row of the figure, a severe damage produces an inconsistent 3D model which largely deviates from the actual area occupied by the parcel in the image.

Fig. 8. Examples of two different levels of damage in a parcel (left) and the corresponding 3D reconstructions (right), superimposed on a parcel image (see text)

Parcel matching This tool is particularly useful if multiple scanning systems are available along the parcel’s route. In this case, the model from a previous scanning can be compared with the current one to detect signs of tampering. We first compare the parcel size estimated in the two scanning sessions. If the sizes differ, the system signals a possible parcel substitution. Otherwise, we look for possible signs of tampering by comparing the images of corresponding sides. The comparison is performed via a rotation-invariant procedure which allows us to evaluate the appearance similarity of parcels regardless of their relative position. In this procedure, we represent each side of a parcel as a fixed-scale histogram of oriented gradients (HOG) [4] computed on the fronto-parallel virtual image of the side. Rotation invariance is addressed in a brute-force manner: let \(h^{p}_{t-1}\) be the HOG feature vector obtained from the virtual image of side p of a parcel acquired in a previous scanning and stored in the database, and let \(h^{p}_{t}\) be the corresponding feature vector from the new acquisition. We evaluate the similarity between the two as follows:

$$ \begin{array}{l} {Sim}_{tot}\left(h^{p}_{t-1},h^{p}_{t}\right) = \max_{r \in R}{Sim\left(h^{p}_{t-1},{\cal{T}}_{r}\left(h^{p}_{t}\right)\right)} \\ \text{with} \ \ \ R = \{ 0, 90, 180, 270\} \end{array} $$
(3)

where R summarises the possible rotations of the considered side. As a similarity measure, we choose histogram intersection. We report evidence of tampering if

$${Sim}_{tot}\left(h^{p}_{t-1},h^{p}_{t}\right)<\tau. $$

The threshold τ, which can be selected on a validation set, is a stable threshold influenced only by the illumination characteristics of the acquisition environment. Therefore, we observe it is sufficient to tune it at installation time, in the context of the calibration procedure. We conclude by observing that this procedure is not applied to sides that appear too small in all the acquired images. In practice, the most reliable results have been obtained from the analysis of the top side.
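The matching logic of Eq. 3 can be sketched as follows; for brevity, we replace the full HOG descriptor with a single global orientation histogram (an assumption made here only to keep the example self-contained), while the rotation loop and the histogram-intersection similarity mirror the procedure described above:

```python
import numpy as np

def orientation_hist(img, bins=9):
    """Stand-in descriptor: a global, L1-normalised histogram of gradient
    orientations (the system uses a full HOG [4]; only the matching logic
    below is the point of this sketch)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # unsigned orientations
    h, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    s = h.sum()
    return h / s if s else h

def hist_intersection(a, b):
    return np.minimum(a, b).sum()

def sim_tot(side_prev, side_curr):
    """Eq. 3: brute-force rotation invariance over R = {0, 90, 180, 270}."""
    h_prev = orientation_hist(side_prev)
    return max(hist_intersection(h_prev,
                                 orientation_hist(np.rot90(side_curr, k)))
               for k in range(4))

# tampered = sim_tot(stored_side, new_side) < tau   (tau set at installation)
```

Because the fronto-parallel views are rectangles, only the four right-angle rotations need to be tested, which keeps the brute-force search cheap.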

Figure 9 shows pairs of warped images going through the matching procedure. The first two rows report examples of correct positive matches, i.e. image pairs of objects without tampering, where pose changes may be noticed. The last two rows show examples of correct negative matches, i.e. image pairs of the same object after it underwent tampering; notice that in some cases the visual effect of tampering is very small.

Fig. 9. Comparisons between warped images I_warp corresponding to parcels acquired in different scanning sessions. First and second rows: positive comparisons where the parcel did not undergo any variation. Third and fourth rows: examples showing variations which were detected by our method

The overall damage and tampering detection procedure is summarised in Algorithm 1.

5 Experimental analysis, results and discussion

In this section, we first report the experimental performances of the system on a set of tests we performed in our laboratory set-up. Then, we summarise the results obtained during field tests carried out at the Correos (Spanish Post) Zaragoza sorting centre.

In what follows, we refer to:

  • true positives, or correctly detected damaged parcels

  • true negatives, or correctly recognised normal parcels

  • false alarms, or normal parcels erroneously declared as damaged

  • misses, or damaged/tampered parcels which have not been detected

5.1 Key Performance Indices

The Key Performance Indices (KPI) identified by the users group of the SAFEPOST project can be summarised as follows:

  • Keep up with a conveyor belt speed of at least 0.5 m/s (approximately 1 parcel per second).

  • Do not require major interventions on the sorting centre layout nor a specific training for the postal operators.

  • Maintain an overall anomaly detection accuracy above 90%.

The proposed method meets all the expected key performance indices: it processes parcels at a belt speed above 0.5 m/s, and it has been tested on a variety of configurations and environments, simply by either mounting the cameras on pre-existing aluminum bars or by wrapping an aluminum cage around the conveyor belt. To confirm the third KPI, in the following we report thorough quantitative results obtained in a laboratory set-up, as well as the overall results obtained during field-test sessions. In all the experiments, we do not apply any constraint on the environment or the ambient lighting; thus, illumination changes and shadows may be present, although they are attenuated by the illuminators.

A general comment applying to all the experiments discussed in the following is that postal parcels have a small intra-class variability. The main critical aspect we had to address is the tolerance of the algorithms to changes in parcel pose. For this reason, each parcel has been tested multiple times, positioning it on the conveyor belt in random poses.

5.2 Anomaly detection assessment

We start by evaluating the performance of our system in detecting anomalies. To this purpose, in a first experiment we consider a set of 50 normal parcels and 50 damaged parcels (anomalies or damages). All parcels have been chosen as typical e-commerce boxes of medium size (main side between 20 and 40 cm). We acquire the video data and then apply the anomaly detection procedure described in Section 4.2 for different thresholds δ, producing the ROC curve reported in Fig. 10 in blue (with *). The equal error rate is about 95%, which is above the required KPIs. Notice that we may increase the percentage of detected anomalies to 98% at the price of 15% false alarms.

Fig. 10 ROC analysis of damages detection. In blue, the single test; in red and green, the integration of two or three consecutive tests (see text)

We also conducted a further experiment, suggested by a postal operator in a personal communication: at each detection of a damage, we repeat the scan without requesting a manual check (this procedure is quite common in practice: the operator takes the parcel and puts it back under the scanner; in some sorting centres, there is an automatic loop in the feeding line for a second check). We declare an anomaly only if both scans agree on the anomaly detection output. The same procedure can be repeated three times. Interestingly, we obtain a considerable reduction of false alarms without decreasing the percentage of correctly detected damages.
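Under the simplifying assumption that consecutive scans fail independently, the effect of requiring agreement over k scans is easy to quantify; the single-scan rates below are illustrative values, not the measured ones:

```python
def combined_rates(detection_rate, false_alarm_rate, k):
    """An alarm is raised only if all k independent scans flag the parcel,
    so both rates are raised to the power k."""
    return detection_rate ** k, false_alarm_rate ** k

# Illustrative single-scan operating point: 95% detections, 15% false alarms.
d2, f2 = combined_rates(0.95, 0.15, 2)
# With two agreeing scans, false alarms shrink much faster (0.15^2)
# than detections (0.95^2).
```

In practice, errors on the same parcel are correlated across scans, which is consistent with the observation above that the detection rate does not drop as this independence model would predict.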

In the remainder of the lab experiments, we fixed the threshold δ at the value corresponding to the equal error rate, as it is in line with the required KPI.

A second experiment on anomaly detection is performed on a wider variety of parcels, considering different appearances and sizes; here, we also evaluate the influence of size and appearance on the overall performance. In this case, we perform about 200 experiments, half with good parcels and half with damaged parcels. We classify the parcels according to their size: small (main side smaller than 20 cm), medium (main side between 20 and 40 cm), and large (main side greater than 40 cm); we also associate an appearance attribute: opaque (standard, opaque colour) or shiny (shiny, often coloured, surface). Tables 1 and 2 report a summary of the results in terms of misses (missed damaged parcels) and false alarms (normal parcels declared as damaged). Although the overall performance is only slightly below what was obtained in the first experiment (and slightly below the KPI), we notice that the main cause lies in shiny parcels. This has been confirmed by a final statistical test, which highlights that the system feedback is independent of the parcel size (χ2=0.1442, p=0.930445) but dependent on its appearance (χ2=27.3977, p<10−6) at the significance level of 0.05. Indeed, shiny/reflective parcels are harder to reconstruct, due to failures in the change detection phase. As we will see later, in the field tests this effect has been attenuated, first by taking extra care in positioning the illuminators, and second because shiny parcels are very uncommon in practice.

Table 1 Percentage of parcels correctly associated with a NORMAL state
Table 2 Percentage of parcels correctly associated with a DAMAGE state
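An independence test of this kind can be reproduced with a standard chi-squared test on the contingency table of system outputs; the counts below are a hypothetical example chosen to show the mechanics, not the actual experimental counts:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of correct vs. wrong system outputs,
# split by parcel appearance (opaque vs. shiny).
#            correct  wrong
table = [[95,  5],    # opaque parcels
         [70, 30]]    # shiny parcels

chi2, p, dof, expected = chi2_contingency(table)
# A p-value below 0.05 rejects the independence hypothesis: the
# system output depends on appearance, as found in the paper.
```

The same test applied to counts split by size would, per the results above, fail to reject independence (large p-value), confirming that size does not affect the output.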

Moreover, we noticed a benefit in tuning the threshold δ in proportion to the size of the parcel (which is accurately estimated in the reconstruction phase).
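The size-proportional tuning can be sketched as follows; the linear scaling and the reference side `ref_side_cm` are assumptions for illustration, not the actual tuning rule used in the system:

```python
def size_adaptive_threshold(base_delta, main_side_cm, ref_side_cm=30.0):
    """Scale the damage threshold with the estimated parcel size,
    so that deviations are judged relative to the parcel dimensions."""
    return base_delta * (main_side_cm / ref_side_cm)

# A large box (60 cm) tolerates twice the absolute deviation
# of a reference medium box (30 cm) before raising an alarm.
delta_large = size_adaptive_threshold(1.0, 60.0)
```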

5.3 Tampering detection assessment

We now test the ability of our system to detect signs of tampering or parcel substitution. To address this problem, we assume that multiple scans of the same parcel have been performed as the parcel travelled from source to destination.

The experiment we report has been carried out on the same set of parcels described in the previous section. We performed a total of 105 acquisitions on normal and damaged parcels which had been previously scanned by the system. The results are summarised in Table 3. The table is organised as a confusion matrix: the rows report the ground truth labelling and the columns the estimated state of the parcel; the entry T(i,j) reports the percentage of parcels of type i which have been annotated as j. Notice that the table can be further analysed by observing that in the normal operation of a sorting centre there is only a binary state, normal/alarm; therefore, a tampering mistaken for a parcel substitution is not crucial. In this binary configuration, we observed 4% misses and 16% false alarms, which is in line with the KPIs.
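The binary normal/alarm reading of a multi-class confusion matrix can be obtained by collapsing the estimated states; the counts below are hypothetical, chosen only to be consistent with the binary rates reported above, and are not the published Table 3:

```python
import numpy as np

def to_binary(confusion, normal_idx=0):
    """Collapse a confusion matrix of counts (rows: ground truth,
    columns: estimates) into normal vs. alarm.
    Misses: true anomalies estimated as normal.
    False alarms: normal parcels estimated as anything else."""
    c = np.asarray(confusion, dtype=float)
    n = normal_idx
    miss_rate = c[1:, n].sum() / c[1:, :].sum()
    false_alarm_rate = (c[n, :].sum() - c[n, n]) / c[n, :].sum()
    return miss_rate, false_alarm_rate

# Hypothetical counts over states (normal, tampered, substituted):
conf = [[42, 5, 3],   # normal parcels
        [1, 20, 4],   # tampered parcels
        [1, 2, 27]]   # substituted parcels
miss, fa = to_binary(conf)
```

Note that a tampered parcel classified as substituted still raises an alarm, so it does not count as a miss under this collapse, which is exactly why that confusion is not crucial in operation.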

Table 3 Tampering detection confusion matrix; on the rows the ground truth labels, on the column the estimated ones (in purple the missed alarms and in yellow the false alarms)

5.4 Field tests

Our prototype has been transported and mounted at the Correos (Spanish Post) Zaragoza sorting centre, where it has been tested for a few days. Table 4 reports three sets of experiments carried out in three different circumstances by different operators: system tuning (carried out by the authors to validate the system on the field, after calibration—Fig. 3), KPI check (carried out by other partners of the SAFEPOST project), and demo testing (carried out publicly in front of about 80 people on a set of parcels selected by Correos). In the latter circumstance, we tuned the system to reduce the number of missed damaged/tampered goods, so as to simulate a typical working session at a sorting centre. The operators followed a standard procedure: parcels were taken from a basket, placed on the conveyor belt in a random position, and then passed under the bar code reader and all the sensors developed in the project. The estimate was immediately shown on screen, and all the estimates and images were also sent to the SAFEPOST Common Security Space. The positive feedback from the stakeholders and project partners and the satisfactory quantitative results speak in favour of the appropriateness of the devised prototype for the considered application.

Table 4 Field tests results

6 Conclusions

In this paper, we described the design, implementation, and testing of a prototype multi-camera system for parcel inspection, developed within the EU FP7 Project SAFEPOST as one of the main scanning modalities for an integrated security system for the international postal chain. The method we devised has been significantly influenced by the stakeholders' requests, which identified very specific KPIs: in order to control the design costs and to have a minimal impact on the layout of existing sorting centres, we relied on low-cost video-based technology and reached the required accuracy by carefully implementing simple but effective computer vision algorithms. The main objective of the prototype was to detect signs of tampering and damages. Damages affecting the parcel shape are identified by comparing the estimated parcel shape with a geometrical model of a parcel; signs of tampering are detected by automatically aligning virtual images of the parcel sides acquired at different times and comparing their appearance by means of a global HOG descriptor and geometrical information. We reported an exhaustive experimental analysis carried out in the lab, which shows the effectiveness of our solution. We also reported an account of field tests carried out at the Correos sorting centre in Zaragoza (Spain).

Notes

  1. VII FP SAFEPOST “Reuse and development of Security Knowledge assets for International Postal supply chains” http://www.safepostproject.eu/

  2. A new calibration may be needed after significant variations of the setup, which however are rather rare events.

References

  1. M Baştan, MR Yousefi, TM Breuel, Visual words on baggage x-ray images, in Proceedings of the 14th International Conference on Computer Analysis of Images and Patterns (CAIP'11), Part I (Springer-Verlag, Berlin, 2011), pp. 360–368.

  2. H Bay, T Tuytelaars, L Van Gool, SURF: speeded up robust features, in European Conference on Computer Vision (Springer, 2006), pp. 404–417.

  3. T Bouwmans, L Maddalena, A Petrosino, Scene background initialization: a taxonomy. Pattern Recogn. Lett. (2017).

  4. N Dalal, B Triggs, Histograms of oriented gradients for human detection, in Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) (IEEE Computer Society, Washington, 2005), pp. 886–893. https://doi.org/10.1109/CVPR.2005.177.

  5. E Delponte, N Noceti, F Odone, A Verri, Appearance-based 3D object recognition with time-invariant features, in 14th International Conference on Image Analysis and Processing (ICIAP 2007) (IEEE, 2007), pp. 467–474.

  6. H Gao, T Takaki, I Ishii, GPU-based real-time structured light 3D scanner at 500 fps, in SPIE Photonics Europe (International Society for Optics and Photonics, 2012), p. 84370J.

  7. H Golnabi, A Asadpour, Design and application of industrial machine vision systems. Robot. Comput. Integr. Manuf. 23(6), 630–637 (2007). https://doi.org/10.1016/j.rcim.2007.02.005.

  8. XP He, P Han, XG Lu, RB Wu, A new enhancement technique of x-ray carry-on luggage images based on DWT and fuzzy theory, in Proceedings of the 2008 International Conference on Computer Science and Information Technology (ICCSIT '08) (IEEE Computer Society, Washington, 2008), pp. 855–858. https://doi.org/10.1109/ICCSIT.2008.180.

  9. N Herakovic, M Simic, F Trdic, J Skvarc, A machine-vision system for automated quality control of welded rings. Mach. Vis. Appl. 22(6), 967–981 (2011). https://doi.org/10.1007/s00138-010-0293-9.

  10. S Izadi, D Kim, O Hilliges, D Molyneaux, R Newcombe, P Kohli, J Shotton, S Hodges, D Freeman, A Davison, A Fitzgibbon, KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera, in Proceedings of the ACM Symposium on User Interface Software and Technology (2011), pp. 559–568.

  11. R Johnston, Effective vulnerability assessment of tamper-indicating seals. J. Test. Eval. 4 (1997).

  12. M Kaur, M Sandhu, N Mohan, PS Sandhu, RFID technology principles, advantages, limitations and its applications. Int. J. Comput. Electr. Eng. 3(1) (2011).

  13. P Kilambi, E Ribnick, AJ Joshi, O Masoud, N Papanikolopoulos, Estimating pedestrian counts in groups. Comput. Vis. Image Underst. 110(1), 43–59 (2008). https://doi.org/10.1016/j.cviu.2007.02.003.

  14. DG Lowe, Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60(2), 91–110 (2004).

  15. EN Malamas, EG Petrakis, M Zervakis, L Petit, JD Legat, A survey on industrial vision systems, applications and tools. Image Vis. Comput. 21(2), 171–188 (2003).

  16. A Mittal, AK Moorthy, AC Bovik, No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process., 4695–4708 (2012).

  17. MJD Powell, An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput. J. 7, 155–162 (1964).

  18. RJ Radke, S Andra, O Al-Kofahi, B Roysam, Image change detection algorithms: a systematic survey. IEEE Trans. Image Process. 14(3), 294–307 (2005).

  19. H Roth, M Vona, Moving volume KinectFusion, in BMVC (2012).

  20. RB Rusu, S Cousins, 3D is here: Point Cloud Library (PCL), in IEEE International Conference on Robotics and Automation (2011).

  21. R Sagawa, R Furukawa, H Kawasaki, Dense 3D reconstruction from high frame-rate video using a static grid pattern. IEEE Trans. Pattern Anal. Mach. Intell. 36(9), 1733–1747 (2014).

  22. L Schmidt-Hackenberg, MR Yousefi, TM Breuel, Visual cortex inspired features for object detection in x-ray images, in ICPR (IEEE Computer Society, 2012), pp. 2573–2576.

  23. J Sklansky, Finding the convex hull of a simple polygon. Pattern Recogn. Lett. 1, 79–83 (1982).

  24. W Soroka, Illustrated Glossary of Packaging Terms (2008).

  25. A Staglianò, N Noceti, A Verri, F Odone, Online space-variant background modeling with sparse coding. IEEE Trans. Image Process. 24(8), 2415–2428 (2015).

  26. A Thomas, M Rodd, J Holt, C Neill, Real-time industrial visual inspection: a review. Real-Time Imaging 1(2), 139–158 (1995). https://doi.org/10.1006/rtim.1995.1014.

  27. T Tuytelaars, K Mikolajczyk, Local invariant feature detectors: a survey. Found. Trends Comput. Graph. Vis. 3(3), 177–280 (2008).

  28. P West, A roadmap for building a machine vision system. Autom. Vision Syst. (2006).

  29. CH Wong, China tightens security on delivery services in wake of deadly blasts (2015).

  30. Z Zhang, A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 22(11), 1330–1334 (2000). https://doi.org/10.1109/34.888718.


Acknowledgments

The authors thank Aytaç Kanaci for the contribution on the system design and Alberto Lovato for the considerable help provided in the entire process of design, implementation, and testing.

Funding

This work has been partially funded by the EU FP7 Project SAFEPOST N.285104, “Reuse and development of Security Knowledge assets for International Postal supply chains”, a 4-year Integration project addressing the FP7-SEC-2011.2.4- 1 International Postal Supply Chains.

Availability of data and materials

The data (images and videos) will be made available upon request.

Author information


Contributions

The authors equally contributed to the development of the work. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Nicoletta Noceti.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Noceti, N., Zini, L. & Odone, F. A multi-camera system for damage and tampering detection in a postal security framework. J Image Video Proc. 2018, 11 (2018). https://doi.org/10.1186/s13640-017-0242-x


Keywords