Open Access

Quality of Experience for Large Ultra-High-Resolution Tiled Displays with Synchronization Mismatch

EURASIP Journal on Image and Video Processing 2011, 2011:647591

https://doi.org/10.1155/2011/647591

Received: 11 November 2010

Accepted: 7 February 2011

Published: 8 March 2011

Abstract

This paper relates to the quality of experience when viewing images, video, or other content on large ultra-high-resolution displays made from individual display tiles. We define experiments to measure the vernier acuity associated with synchronization mismatch for moving images. The experiments are used to obtain the synchronization mismatch acuity threshold as a function of object velocity and as a function of occlusion or gap width. Our main motivation for measuring the synchronization mismatch vernier acuity is its relevance to tiled display systems, which create a single contiguous image using individual discrete panels arranged in a matrix, with each panel utilizing a distributed synchronization algorithm to display its part of the overall image. We also propose a subjective assessment method for perception evaluation of synchronization mismatch on large ultra-high-resolution tiled displays. For this, we design a synchronization mismatch measurement test video set covering various tile configurations and various interpanel synchronization mismatch values. The proposed method can evaluate tiled displays with or without tile bezels. The results from this work can help in the design of low-cost tiled display systems that utilize distributed synchronization mechanisms for contiguous or bezeled image display.

1. Introduction

Displays with large screen size and high resolution are increasingly becoming affordable and ubiquitous. Large displays are often used in niche markets such as public displays and digital signage. These include displays at public places such as airports, museums, hotels, stadiums, hospitals, and malls. Such displays are often created from individual small display tiles. Also, in universities, research institutes, and corporations, large wall-sized displays are often built from small individual display panels. Such large tiled displays are used for scientific and medical visualization applications [1]. Examples include the LambdaVision display at the University of Illinois at Chicago's Electronic Visualization Laboratory [2], the Stallion tiled display at the Texas Advanced Computing Center [3], and the Stanford School of Medicine tiled display [4]. Also, prior work exists on building projection-based tiled displays [5, 6].

The majority of current tiled display systems are driven by a cluster of computers. In a typical tiled display architecture, a set of "display nodes" (computers) drives individual tiles of the overall display. Often a single computer node drives two display tiles from a single graphics card utilizing two DVI connections. Depending upon the type of middleware used, the display nodes may show data that is rendered on one or more of the display nodes. As an example, such an architecture is enabled by the Chromium middleware [7]. In another architecture, a set of "rendering nodes", which may be separate from the "display nodes", additionally serves the role of rendering application data. The rendered application data is then transmitted over a high-speed network in compressed or uncompressed form to the display nodes. In this architecture, the display nodes and rendering nodes typically use a middleware such as the scalable adaptive graphics environment (SAGE), a specialized graphics streaming architecture and middleware. Figure 1 shows an example architecture of our SharpWall tiled display system, which we built using 20 Sharp Aquos LCD panels tiled together. The SharpWall, shown in Figure 2, measures 177 inches diagonally and has a very high resolution of 10 K × 4.5 K pixels.
Figure 1

SharpWall tiled display architecture.

Figure 2

SharpWall tiled display system.

In the typical tiled display architecture described above, rendering nodes send parts of the overall image to individual display nodes. The display nodes then utilize a distributed synchronization algorithm and individually display their parts of the image on their display tiles to provide the overall perception of a single continuous image. In some cases, the display nodes can utilize advanced graphics cards, such as the Nvidia Quadro family, and can utilize Genlock frame lock [8] to achieve better frame synchronization among them. However, these cards are expensive. Furthermore, Genlock frame lock techniques cannot completely solve the synchronization problem, as each display node receives its image parts over the network. Thus, the image part ready to be displayed by each node at a given refresh instance (e.g., at 60 Hz) may not belong to the same overall image, depending upon the network stream reception performance of each node.

As a result, in a typical tiled display system, some level of synchronization discrepancy may exist among individual display nodes. Thus, the two goals of this paper are the following:
  1. (i)

    define synchronization mismatch vernier acuity and define an experiment to obtain synchronization mismatch acuity threshold as a function of object velocity and as a function of occlusion (tile bezel width),

     
  2. (ii)

    define a method for perception evaluation of synchronization mismatch for ultra-high-resolution large displays, and perform a subjective quality evaluation using this method to determine the synchronization mismatches among individual tiles/panels that still result in acceptable picture quality for the overall tiled display.

     

As a result of a time offset, a moving object edge will be spatially offset. Since the spatial offsets vary across tiles due to the synchronization mismatches, the edges of moving objects will suffer spatial offsets across the tiles. This is analogous to vernier acuity psychophysical experiments, where a line is presented with a break due to an orthogonal spatial offset. The distortions in our application are more general, where the offset break direction depends on the motion direction relative to the edge orientation. Further, in our application, the offset disappears with a nonmoving image, since the tiles are then essentially synchronized. The magnitude of the spatial offset increases with both edge speed and synchronization temporal offset.
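The proportionality noted here (the spatial break grows with both edge speed and the synchronization temporal offset) can be made concrete with a small sketch; the function name and the example numbers are illustrative, not taken from the paper:

```python
def spatial_offset_px(velocity_px_per_s: float, sync_mismatch_ms: float) -> float:
    """Spatial break (in pixels) of a moving edge across two tiles
    whose displayed frames differ in time by sync_mismatch_ms."""
    return velocity_px_per_s * sync_mismatch_ms / 1000.0

# Example (illustrative): an edge moving at 192 px/s (3.2 pixels/frame
# at 60 fps) with one 120 Hz frame (8.33 ms) of mismatch breaks by
# roughly 1.6 pixels at the tile boundary.
offset = spatial_offset_px(192.0, 8.33)
```

A static image (zero velocity) gives a zero offset, which matches the observation above that the artifact disappears for nonmoving content.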

In the prior art, Westheimer and McKee [9] defined several experiments and measured spatial vernier acuity. Mostly, static images were used for these experiments, as the goal was only to obtain spatial vernier acuity thresholds. In one study [10], vernier acuity under retinal motion of up to 3.5 deg/sec (= 3.2 pixels/frame at 60 fps input for HD resolution) was studied, and only a slight loss of acuity was found. However, motion imagery is known to have much higher velocities. In our application, such higher velocities will cause larger spatial offsets for any synchronization delay, but acuity may decrease as velocity increases. So, it is unknown to us which effect will dominate and whether high levels of motion will make the synchronization artifacts more or less visible. Gorea and Hammett defined experiments to study spatiotemporal vernier acuity [11]. Their experiments determined the smallest instantaneous displacement discriminable from a continuous drift and the shortest motion stop discriminable from a continuous drift. Their study assumed perfectly synchronous and contiguous display of the object under consideration. In comparison with these prior works, we define experiments for measuring spatial, temporal, and synchronization mismatch vernier acuity.

With respect to our second goal, we are not aware of any existing method for interpanel synchronization mismatch perception evaluation. ITU-R Recommendation BT.500-11 [12] describes methods for the subjective assessment of the quality of television pictures. Also, ITU-R Recommendation BT.710-4 [13] describes methods for the subjective assessment of image quality in high-definition television. These methods [12, 13] describe subjective evaluation conducted on a single display screen. The main distinctions of our proposed subjective evaluation method are the following. We describe a method for creating a synchronization mismatch measurement test video set for a particular tile configuration and for various interpanel synchronization mismatch values. This test video set allows synchronization mismatch perception evaluation for a target tiled system to be performed using a typical single-screen display (e.g., a 46′′ LCD display); that is, the tiled display is simulated on a single display panel. The method supports perceptual evaluation corresponding to a tiled display with or without tile bezels. The method creates a higher-frame-rate video from an original video for playback using a typical video player. The method uses a display which can provide higher-frame-rate playback, such as a 120 Hz display, since 60 Hz (16.66 ms) is too close to the human visual system threshold.

In a typical tiled display system, each panel may have a bezel (mullion). Typically, tile bezels are black or dark in color, as seen in Figure 2. When displaying the overall image, the tile bezels behave as if occluding virtual pixels underneath them. Thus, the middleware removes those pixels which fall "underneath" the tile bezels. This helps humans perceive the overall displayed image as natural (as if seen through a French window): circles stay circles rather than becoming ellipses, and a human face retains its proportions. In this paper, the tile bezel width is treated as an occlusion and is referred to as the gap width.

The rest of this paper is organized as follows. Section 2 describes our proposed experiment for measuring synchronization mismatch vernier acuity of moving images. In Section 3, we describe our subjective method for synchronization mismatch perception evaluation. Section 4 provides the details about our subjective quality evaluation using the method described in Section 3. In Section 5, we provide the conclusions from our subjective quality evaluation.

2. Method for Synchronization Mismatch Vernier Acuity

In the prior art, Westheimer and McKee [9] performed several experiments to obtain thresholds for visual spatial localization differences using a constant-stimuli method. Those experiments obtained the following thresholds:
  1. (i)

    threshold for the detection of the direction of the vertical misalignment of two vertical lines as a function of the length of each line,

     
  2. (ii)

    threshold for the detection of the direction of the vertical misalignment of two vertical lines as a function of separation of the lower end of the top line and upper end of the bottom line (for four different lengths of line),

     
  3. (iii)

    threshold for the detection of the direction of the vertical misalignment of two short gaps each in one long horizontal line as a function of vertical separation of the lines,

     
  4. (iv)

    threshold for the detection of the direction of the vertical misalignment of a line with the point of a chevron pattern,

     
  5. (v)

    threshold for difference in the spatial interval between two vertical lines as a function of their separation,

     
  6. (vi)

    threshold of the distance discrimination for different configurations (bright lines, dark lines, bright edges, dark edges, bright edge and bright line), dots versus lines,

     
  7. (vii)

    threshold for detection of differences in width of bars (made from individual lines).

     

The work of Westheimer and McKee was focused on spatial vernier acuity only and utilized static images. Other important vernier acuity work from Klein and Levi [14] and Morgan and Regan [15] was only spatial.

Gorea and Hammett studied spatiotemporal vernier acuity [11]. Rather than a spatial vernier acuity experiment under motion, they sought to explore shearing distortions in the spatiotemporal (velocity) domain, where strictly spatial cues could not be used. They performed experiments which resulted in determination of
  1. (i)

    smallest instantaneous displacement (infinite velocity) discriminable from a continuous drift,

     
  2. (ii)

    shortest motion stop discriminable from a continuous drift.

     
They performed the following experiments.
  1. (i)

    Two Gaussian blobs of opposite polarity drifting at equal speeds in opposite directions disappear, and only one blob reappears at a variable spatial offset relative to true position. Observers decide positive or negative offset.

     
  2. (ii)

    Two Gaussian blobs drift at unequal speeds in opposite directions. Observers decide which blob pair has the higher speed.

     
  3. (iii)

    Two Gaussian blobs disappear after being flashed simultaneously followed by asynchronous reappearance at positions corresponding to different speeds. Observers decide which blob jump (reappearance position with respect to original position) has higher velocity.

     
In our application, the spatial cues due to the breakup of a moving edge across tiles are important. We also wanted to consider the possibility of purely temporal cues (flickering along a bezel edge) which could result from the synchronization mismatch. Thus, our scope is much wider than that of the Gorea and Hammett study. The main distinction of our work from the spatial vernier acuity [9] and spatiotemporal vernier acuity [11] work is that we focus on spatial and temporal vernier acuity (breaks and offsets across the spatial, temporal, and velocity dimensions), which relates to the spatiotemporal discrimination aspects of the visual system. We define spatiotemporal and synchronization mismatch vernier acuity as a task to measure the aspect of visual acuity that involves the ability to detect the alignment, or lack of alignment, of a moving object. Specifically, we display a synchronization mismatch between parts of the moving object (which in practice results from synchronization mismatch between adjacent display panels). Thus, we define the following experiment for measuring spatial, temporal, and synchronization mismatch vernier acuity, as shown in Figure 3.
  1. (i)

    An object (e.g., a Gaussian blob) moves as a single contiguous object with a constant velocity , disappears at at time . The object reappears at as two partial objects: at position at time and at position at time , with and moving with constant velocity and where . The objects and disappear at time and reappear at time at as single contiguous object moving at a constant velocity .

     
  2. (ii)

    Observer is shown the reference video which has no synchronization mismatch (i.e., = 0) followed by the test video with synchronization mismatch (i.e., > 0). Observer is then asked to rate the test video with respect to reference video (see Section 3.2 for double stimulus impairment scale method used).

     
  3. (iii)

    The control parameter is varied from 0 (perfect synchronization) to (maximum target synchronization mismatch value to be tested) to obtain the synchronization mismatch acuity threshold.

     
  4. (iv)

    The velocity is varied from to study the effect of amount of object motion on synchronization mismatch acuity threshold.

     
  5. (v)

    Additionally, in a set of experiments, a gap of width is placed at a fixed position (e.g., at the center of the frame). The gap width control parameter is varied in the range corresponding to minimum and maximum gap width. This allows studying synchronization mismatch acuity threshold as a function of occlusion (gap or bezel width).

     
Figure 3

Space-time-synchronization mismatch diagram for the experiment (a) with gap (which corresponds to tile bezel for a tiled display system), (b) without gap.

Figure 3 shows the space-time-synchronization-mismatch diagrams for our experiment with a gap (which corresponds to a tile bezel for a tiled display system) (Figure 3(a)) and without a gap (Figure 3(b)). The dark brown circle parts are seen by the observer. The light brown circle parts are shown for illustration. Figures 4(a)–4(c) show single-frame snapshots in time for the experiment with three different synchronization mismatch values ( ). The value in Figure 4(a) corresponds to no synchronization mismatch. Figures 5(a)–5(c) show single-frame snapshots in time for the experiment with three different gap widths ( ). Based on our experiments, we can find the following:
  1. (i)

    synchronization mismatch acuity threshold as a function of object velocity,

     
  2. (ii)

    synchronization mismatch acuity threshold as a function of occlusion (gap width).

     
Figure 4

Single-frame snapshots in time for the experiment with three different synchronization mismatch values ( ). The value in (a). The dark brown circle parts are seen by the observer. The light brown circle parts are shown for illustration. Object moves in the plane. The temporal axis shows snapshots in time with (a)–(c) all corresponding to time units. Frame unit is 8.33 ms (corresponding to 120 Hz).

Figure 5

Single-frame snapshots in time for the experiment with three different gap widths ( ). The dark brown circle parts are seen by the observer. The light brown circle parts are shown for illustration.
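The splitting of the moving object into two partial objects can be summarized by a small position model; this is a sketch under the assumption that the lagging part simply shows an older point on the same trajectory (function and variable names are illustrative, not from the paper):

```python
def partial_positions(x0: float, v: float, t: float, dt_sync: float):
    """Positions of the two partial objects at time t (seconds).
    One part follows the true trajectory; the other lags by the
    synchronization mismatch dt_sync (seconds), as if its tile
    were displaying an older frame."""
    x_lead = x0 + v * t
    x_lag = x0 + v * (t - dt_sync)  # lagging tile shows an older position
    return x_lead, x_lag

# With v = 240 px/s and dt_sync = 8.33 ms, the two parts are offset
# by v * dt_sync (about 2 px), independent of t.
lead, lag = partial_positions(0.0, 240.0, 0.5, 0.00833)
```

Setting dt_sync = 0 reproduces the reference condition of the experiment (a single contiguous object).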

3. Method for Synchronization Mismatch Perception Evaluation

In this section, we provide the details of our method for synchronization mismatch perception evaluation for practical video displayed across tiles. Our proposed method consists of the following steps:
  1. (i)

    create a synchronization mismatch measurement test video set for particular tile configuration for various interpanel synchronization mismatch values,

     
  2. (ii)

    use a subjective visual quality measurement method for evaluating the synchronization perception at different interpanel synchronization mismatch values.

     

3.1. Synchronization Mismatch Measurement Test Video Set Creation

To create a video set for synchronization mismatch measurement, the following steps are taken.
  1. (i)

    Start with a video with original frame rate of (e.g., ) frames per second. Let us assume the video has frames.

     
  2. (ii)

    Create a new synch measurement video with frames in it (e.g., ), where the additional frames are created as described below. Set the video frame rate of the synch measurement video to frames per second. Thus, the new video will be played at frame rate (e.g., at 120 frames per second assuming , ) when playing back with a typical video player.

     
  3. (iii)
    The new synch measurement video above is created as follows.
    1. (1)

      Consider a target tile/panel geometry we want to test. Consider the video position on the tiles. As an example consider a tile with % of video width in tile (0) and % of video width in tile (1).

       
    2. (2)

      Then, for each of the new video frames copy the left % portion of original video (Part in Figure 2) in the left % portion of new video with each frame repeated times to create total frames.

       
    3. (3)

      For each of the new video frames, copy the right % portion of original video (Part in Figure 2) in the right % portion of the new video with each frame repeated times, but with a copy offset with a value between .

       
    4. (4)

      Value for can be set based on the measurements obtained from individual nodes (i.e., from a recorded trace) or can be set in absolute terms, for example,  ms corresponding to synch discrepancies up to  ms.

       
     
  4. (iv)

    The above video creation steps are repeated for each value in as offset to generate new synch measurement videos.

     
  5. (v)

    The above process is repeated for various different tile configurations (e.g., for various values).

     
The pseudocode in Algorithm 1 shows the creation of a synch measurement video set for tile configuration and using the above method for the following parameters:
  1. (i)

    : frame rate of the original video,

     
  2. (ii)

    : frame rate scale factor for created synchronization mismatch video,

     
  3. (iii)

    : number of frames in the original video,

     
  4. (iv)

    : height in pixels of the original video,

     
  5. (v)

    : width in pixels of the original video,

     
  6. (vi)

    : lowest value for synchronization mismatch offset to be tested (in frame units at frame rate),

     
  7. (vii)

    : highest value for synchronization mismatch offset to be tested (in frame units at frame rate),

     
  8. (viii)

    : fraction of overall video width and height in each individual tile,

     
  9. (ix)

    : pixel data for original video frame at pixel location for color plane ,

     
  10. (x)

    : created output synchronization mismatch video set for tile configuration for various synchronization mismatch offsets ,

     
  11. (xi)

    : created output synchronization mismatch video set for tile configuration for various synchronization mismatch offsets .

     

Algorithm 1: Pseudocode for synchronization mismatch video set creation for and tile configuration with no bezels.

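The video set creation steps above reduce to a per-tile frame-index mapping. The following Python sketch illustrates that mapping for a two-tile (1 × 2) split; the function and parameter names are my own, and the clamping at sequence boundaries is an assumption (the exact pseudocode of Algorithm 1 may differ):

```python
def synch_video_frame_indices(n_frames: int, k: int, offset: int):
    """For a 1x2 tile split: output frame j of the synch measurement
    video (played at k times the original rate) shows original frame
    j // k on the left tile and frame (j - offset) // k on the right
    tile, clamped to the valid range. offset is the synchronization
    mismatch in high-rate frame units."""
    total = n_frames * k
    pairs = []
    for j in range(total):
        left = j // k
        right = min(max((j - offset) // k, 0), n_frames - 1)
        pairs.append((left, right))
    return pairs

# Example (illustrative): 60 fps source, k = 2 (120 Hz playback),
# one high-rate frame (8.33 ms) of mismatch between the two tiles.
pairs = synch_video_frame_indices(n_frames=4, k=2, offset=1)
```

Repeating this mapping for each offset value, and copying the corresponding left and right pixel regions per output frame, yields one measurement video per tested mismatch.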

Figure 6 shows the process for creation of synchronization mismatch video measurement set for a tile geometry. In this case, 4 measurement test videos are created as a part of the synchronization mismatch video set from one original video. The example in Figure 6 corresponds to , , , , and .
Figure 6

Process for creation of synchronization mismatch video measurement set (target configuration: tiled display).

Figure 7 shows the process for creation of synchronization mismatch video measurement set for a tile geometry. In this case, 4 measurement test videos are created as a part of the synchronization mismatch video set from one original video. The example in Figure 7 corresponds to , , , , and .
Figure 7

Process for the creation of the synchronization mismatch video measurement set (target configuration: tiled display).

Figure 8 shows the process for creation of synchronization mismatch video measurement set for a tile geometry. In this case, 4 measurement test videos are created as a part of the synchronization mismatch video set from one original video. The example in Figure 8 corresponds to , , , , , , and .
Figure 8

Process for the creation of the synchronization mismatch video measurement set (target configuration: tiled display).

3.2. Subjective Visual Quality Evaluation of Interpanel Synchronization Mismatch

Subjective visual quality evaluation can be conducted by playing back and evaluating the videos from the synchronization mismatch video measurement set created above, using the following procedure. Methods similar to ITU-R Recommendation BT.500-11 [12] can be used to play back the videos from the synchronization mismatch video measurement set and obtain their subjective quality evaluation. The following methods adapted from [12] can be used.

3.2.1. Double-Stimulus Impairment Scale (DSIS) Method

  1. (i)

    The original video will be used as unimpaired reference. Each of the videos in the synchronization mismatch video measurement set will be used sequentially as impaired videos.

     
  2. (ii)

    The subject will be shown the reference video followed by one of the impaired videos. The subject then uses a 5-point impairment scale (shown below) to assess the impaired video with respect to the reference video. The reference video is the original video which was used to create the synchronization mismatch video set.

     
  3. (iii)

    The above step is repeated for each of the videos in the synchronization mismatch set as an impairment video in random order.

     
  4. (iv)

    A 5-point impairment scale can be used:

    5: imperceptible,

    4: perceptible, but not annoying,

    3: slightly annoying,

    2: annoying,

    1: very annoying.

     

3.2.2. Double-Stimulus Continuous Quality-Scale (DSCQS) Method

  1. (i)

    In this case, the original video and each of the videos in the synchronization mismatch video measurement set are played back as a pair.

     
  2. (ii)

    Subjects can switch between the two videos in the pair and can also repeat video playbacks for those two videos any number of times. The subject does not know which is reference (original) video and which is impaired video.

     
  3. (iii)

    After the playback, the subject uses a continuous rating scale (to avoid quantization errors); the scale is divided into five equal lengths corresponding to the normal ITU-R five-point quality scale (excellent, good, fair, poor, bad).

     
  4. (iv)

    The above step is repeated for each of the videos in the synchronization mismatch video measurement set as an impairment video together with the reference (original) video in random order.

     

3.2.3. Single-Stimulus (SS) Method or Single-Stimulus with Multiple Repetition (SSMR) Method with Adjectival Categorical Judgment Method

  1. (i)

    In this case, the original video and each of the videos in the synchronization mismatch video measurement set are played back one at a time (single stimulus).

     
  2. (ii)

    After playback of each video, the subject provides a rating for it (SS method).

     
  3. (iii)

    If using SSMR method, the subject can repeat the same video multiple times.

     
  4. (iv)

    A 5-grade scale (e.g., ITU-R quality impairment scale) can be used to provide adjectival category judgment.

     
  5. (v)

    In other cases, instead of adjectival categorical judgment method, numerical categorical judgment method with 11-grade numerical categorical scale (SSNCS) [12] or noncategorical judgment method with numerical scale (e.g., 0–100) can be used.

     

For all the subjective quality tests, the viewing conditions are set as described in Annex 1 of [13]. The video can be played back on a single contiguous display (preferred) or on a tiled display. The video is played back locally or via a mechanism that introduces no additional jitter or synchronization mismatch.

The video is played back to simulate a system consisting of the following.
  1. (i)

    Tile with no bezels: in this case, the video from the synchronization mismatch measurement set created above is played back using a standard video player.

     
  2. (ii)
    Tile with bezels: one of the following two methods can be used to simulate bezels (mullions).
    1. (1)

      A black or bezel-colored bar pattern for the target tile configuration can be created and overlaid on top of the video player, covering parts of the video being played back to simulate the effect of tile bezels.

       
    2. (2)

      The synch mismatch video measurement set can be preprocessed to remove pixels corresponding to bezels.

       
     
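Method (1) above, overlaying a bezel-colored bar pattern, can be approximated per frame as in this sketch, assuming grayscale NumPy frames and a single vertical tile boundary (function and parameter names are illustrative):

```python
import numpy as np

def apply_bezel_mask(frame: np.ndarray, split_col: int, bezel_px: int) -> np.ndarray:
    """Simulate a vertical tile bezel: black out bezel_px columns
    centered on the tile boundary, as if the bezel occludes the
    pixels 'underneath' it."""
    out = frame.copy()
    half = bezel_px // 2
    out[:, split_col - half : split_col - half + bezel_px] = 0
    return out

# Example (illustrative): a 30-pixel bezel at the center of a
# 1080 x 1920 frame.
masked = apply_bezel_mask(np.ones((1080, 1920), dtype=np.uint8), 960, 30)
```

Method (2), preprocessing the video set to remove the occluded pixels, would instead drop those columns entirely so the remaining content keeps its natural proportions across the simulated tiles.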

4. Subjective Quality Evaluation Results

We conducted subjective tests using the synchronization mismatch perception video set generated using our proposed method. The test setup used a 120 Hz single-panel display, because a 60 Hz display does not reach the human visual system temporal cutoff frequency (especially in the parafovea and periphery). A computer runs a video player which decodes 120 frames per second video and displays it on the 120 Hz display. The viewing distance for a subject is set to 3 picture heights (the standard viewing distance for HD).

Two types of tests are done:
  1. (i)

    synchronization mismatch vernier acuity tests are done on a Gaussian blob sequence as described in Section 2,

     
  2. (ii)

    natural video content tests are done using typical natural video sequences as described in Table 1.

     

4.1. Natural Video Content Tests

The tests are conducted for , , and tile configurations with and without bezels. Three video sequences, as described in Table 1 below, are used. A sample frame from the "Fence" video sequence is shown in Figures 9(d)–9(f) (test condition: without bezels) and Figures 9(a)–9(c) (test condition: with bezels of width 30 pixels = 0.54 deg). Synchronization mismatch values of 8.33, 16.66, and 24.99 ms are tested. As a result, each video sequence set has nine tests. A total of twenty subjects, all imaging engineers, served as viewers for the subjective testing. The video sequences are each of 10 seconds duration. The DSIS method with a 5-point impairment scale is used for testing. The total time taken by a subject for the complete test set was approximately 30 minutes. Subjects were able to control the video playback (start/pause/stop).
Table 1

Video sequences used.

Sequence name | Sequence description                                                                                 | Type of motion
Diving        | Springboard diving sequence                                                                          | Camera panning motion
Basketball    | Basketball sequence                                                                                  | High motion
Fence         | From "digital video essentials" [16] test material; the sequence has a fence with regular structure  | Fixed camera with object motion, followed by camera and object motion

Figure 9

(a), (b), and (c): a sample frame from the video sequences diving, basketball, and fence used for testing (test condition: without bezels). (d), (e), and (f): a sample frame from the video sequences diving, basketball, and fence with tile mullions used for testing (test condition: with bezels).

Figures 10(a)–10(f) provide the subjective quality results for each video test sequence. Each plot shows the results for the , , and tile configurations for twenty subjects for various synchronization mismatch values. Each plot shows average DSIS scores with 95% confidence interval error bars.
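The per-condition averages and error bars plotted in Figure 10 can be computed as in this sketch; the normal-approximation confidence interval is an assumption on my part, as the paper does not state its CI formula, and the ratings in the example are hypothetical:

```python
import math

def dsis_mean_ci(scores):
    """Mean DSIS score with a 95% confidence interval half-width,
    using the normal approximation 1.96 * s / sqrt(n) on the
    sample standard deviation (an assumed formula)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    half = 1.96 * math.sqrt(var / n)
    return mean, half

# Twenty hypothetical ratings on the 5-point impairment scale:
mean, half = dsis_mean_ci(
    [5, 4, 4, 5, 3, 4, 4, 5, 4, 3, 4, 4, 5, 4, 4, 3, 4, 5, 4, 4]
)
```

One such (mean, half-width) pair would be computed per tile configuration and per synchronization mismatch value to produce each error bar.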
Figure 10

(a) Sequence diving, display with tile bezels (bezel width 0.54 deg). (b) Sequence basketball, display with tile bezels (bezel width 0.54 deg). (c) Sequence fence, display with tile bezels (bezel width 0.54 deg). (d) Sequence diving, display with no tile bezels. (e) Sequence basketball, display with no tile bezels. (f) Sequence fence, display with no tile bezels.

4.2. Synchronization Mismatch Vernier Acuity Tests

The previously described video experiments are important for assessing the consequences that will be visible in the actual application. However, it is hard to dissect the key perceptual components because of the unknown and multiple object velocities in the video. Therefore, more controlled tests with single velocities and directions are conducted here, as described in Section 2.
  1. (i)

    The control parameter is varied from 0 (perfect synchronization) to 24.99 ms in steps of 8.33 ms.

     
  2. (ii)

    The velocity is varied across 6, 12, and 24 deg/s to study the effect of the amount of object motion on synchronization mismatch acuity.

     
  3. (iii)

    The gap width control parameter (bezel width) is tested for the values of 0, 0.18, 0.36, 0.54, 0.72, and 0.90 deg. For a tiled display system, these correspond to bezel sizes of 0, 10, 20, 30, 40, and 50 pixels.

     
  4. (iv)

    Time was set equal to .

     
Figures 11(a)–11(f) provide the DSIS score results for various bezel width values. For each bezel width, the plots are shown for 3 different velocities (smooth motion, medium motion, and fast motion) and for various synchronization mismatch values. Each plot shows average DSIS scores with 95% confidence interval error bars.
Figure 11

(a) Synchronization mismatch DSIS scores (Bezel width = 0 deg). (b) Synchronization mismatch DSIS scores (Bezel width = 0.18 deg = 10 pixel). (c) Synchronization mismatch DSIS scores (Bezel width = 0.36 deg = 20 pixel). (d) Synchronization mismatch DSIS scores (Bezel width = 0.54 deg = 30 pixel). (e) Synchronization mismatch DSIS scores (Bezel width = 0.72 deg = 40 pixel). (f) Synchronization mismatch DSIS scores (Bezel width = 0.90 deg = 50 pixel).

5. Conclusion

We defined experiments to measure synchronization mismatch vernier acuity. The experimental results can be used to obtain the synchronization mismatch acuity threshold as a function of object velocity and as a function of occlusion (gap width).

Let us define the synchronization mismatch discomfort threshold (SMDT) as the synchronization mismatch value (in ms) above which the synchronization mismatch perception between tiles has a DSIS score at or below a certain value. As an example, in a stringent system, a DSIS score of 4 can be chosen to define this threshold. This is the value we use below to obtain example SMDT values.
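Under this definition, the SMDT can be read off a DSIS-versus-mismatch curve as the largest tested mismatch whose mean score is still at or above the chosen acceptability level. A minimal sketch of that reading, with hypothetical score data (the function name `smdt` and the sample scores are illustrative, not the paper's data):

```python
def smdt(mismatch_values_ms, dsis_scores, threshold=4.0):
    """Synchronization mismatch discomfort threshold (SMDT):
    the largest tested mismatch (in ms) whose mean DSIS score is
    still at or above the acceptability threshold (here 4.0).
    Returns 0.0 if even the smallest tested mismatch scores
    below the threshold."""
    acceptable = [m for m, s in zip(mismatch_values_ms, dsis_scores)
                  if s >= threshold]
    return max(acceptable) if acceptable else 0.0

# Hypothetical mean DSIS scores for the four tested mismatch values:
print(smdt([0.0, 8.33, 16.66, 24.99], [5.0, 4.6, 4.1, 3.2]))  # 16.66
```

With real data, this corresponds to scanning each curve in Figures 11(a)–11(f) for the point where it crosses the chosen DSIS score.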

From our natural video content subjective quality evaluation, we can make the following conclusions.
  1. (i)

    Synchronization mismatch discomfort threshold (in ms) is larger with bezels compared to without bezels.

    This can be observed by comparing the DSIS scores in Figures 10(a)–10(c) with those in Figures 10(d)–10(f), respectively. In general, viewers prefer a tiled display with smaller or no bezels. Our results show that a tiled display system with bezels can tolerate laxer synchronization among tiles than one without bezels. The lower distortion visibility in the bezel case corresponds well with the elevation of the standard spatial vernier acuity threshold as gap width increases.

     
  2. (ii)

    Synchronization mismatch discomfort threshold (in ms) decreases as the number of tiles increases. This can be observed by comparing the DSIS values in Figures 10(a)–10(f) for the case of tiles to that of or tiles.

     

It is unclear why increasing the number of tiles increases distortion visibility and annoyance. It could be due to the increased opportunities for mismatches along the moving edges, or to an overall stronger impression of independent tile images.

From synchronization mismatch vernier acuity tests, we can make the following conclusions.
  1. (i)

    Synchronization mismatch discomfort threshold (in ms) as a function of object velocity (motion).

    It can be observed that as the amount of motion increases (from slow motion = 6 deg/s to fast motion = 24 deg/s), the DSIS score increases for the same synchronization mismatch value. Thus, SMDT increases with object velocity, even though the larger motion causes a larger spatial offset, which would be more visible according to standard spatial vernier acuity.

     
  2. (ii)

    Synchronization mismatch discomfort threshold (in ms) as a function of gap (bezel) width.

    It can be observed that as the gap (bezel) width increases (from 0 to 0.9 deg), the DSIS score increases for the same synchronization mismatch value as well as for the same object velocity. Thus, SMDT increases with gap width.

     
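The counterintuitive velocity result can be made concrete by computing the spatial offset that a given mismatch produces on a moving edge: the offset is simply velocity multiplied by delay. A small sketch (function name is illustrative):

```python
def vernier_offset_deg(velocity_deg_s, mismatch_ms):
    """Spatial misalignment (in degrees of visual angle) that a
    synchronization mismatch produces on an edge moving across a
    tile boundary: offset = velocity * delay."""
    return velocity_deg_s * (mismatch_ms / 1000.0)

# Fast motion (24 deg/s) with a one-frame mismatch of 8.33 ms
# produces a ~0.2 deg offset, yet the DSIS scores show this is
# tolerated better than the smaller offset at 6 deg/s.
print(round(vernier_offset_deg(24, 8.33), 3))  # 0.2
print(round(vernier_offset_deg(6, 8.33), 3))   # 0.05
```

This quantifies why the finding runs against standard spatial vernier acuity: the faster object produces a fourfold larger instantaneous offset but a smaller perceived impairment.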
The plots shown in Figures 11(a)–11(f) can be used to arrive at specific values for the synchronization mismatch discomfort threshold (SMDT) based upon the choice of acceptable DSIS score. Figure 12 shows such an example plot of SMDT values in ms for different object motions. The SMDT values are derived from the results shown in Figures 11(a)–11(f). A value of is assigned in case the (i.e., just still above the visible threshold) for the smallest tested synchronization mismatch for the particular object motion. A value of is assigned in case for all the tested synchronization mismatch values.
Figure 12

Synchronization mismatch discomfort threshold in ms for a system with DSIS score 4 as constraint.

The results from our evaluation can help during design of a tiled display system to meet a certain acceptable synchronization mismatch tolerance at the design stage. As a future work, we are developing a quantitative model which can predict the DSIS scores as a function of intertile synchronization mismatch values for any tile configuration with or without bezels.

Declarations

Acknowledgment

Scott Daly is now with Dolby Laboratories. The above work was done while he was at Sharp Laboratories of America.

Authors’ Affiliations

(1)
Sharp Laboratories of America
(2)
Dolby Laboratories

References

  1. Gaba D, Stringer J, Kung SSY: Display walls in healthcare education: what, why, how. Stanford School of Medicine, http://summit.stanford.edu/pdfs/DisplayWallGIRfinal.pdf
  2. University of Illinois at Chicago's Electronic Visualization Laboratory's LambdaVision tiled display, http://www.evl.uic.edu/cavern/lambdavision
  3. Texas Advanced Computing Center's Stallion tiled display, http://services.tacc.utexas.edu/index.php/stallion-user-guide
  4. Stanford Display Wall, http://summit.stanford.edu//research/displaywall.html
  5. Roman P, Lazarov M, Majumder A: A scalable distributed paradigm for multi-user interaction with tiled rear projection display walls. IEEE Transactions on Visualization and Computer Graphics 2010, 16(6):1623-1632.
  6. Dominick S, Ruigang Y: Anywhere pixel router. Proceedings of the ACM/IEEE 5th International Workshop on Projector Camera Systems (PROCAMS '08), August 2008.
  7. Humphreys G, Houston M, Ng R, Frank R, Ahern S, Kirchner PD, Klosowski JT: Chromium: a stream-processing framework for interactive rendering on clusters. Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '02), July 2002, 693-702.
  8. NVIDIA Quadro G-Sync, http://www.nvidia.com/page/quadrofx_gsync.html
  9. Westheimer G, McKee SP: Spatial configurations for visual hyperacuity. Vision Research 1977, 17(8):941-947. doi:10.1016/0042-6989(77)90069-4
  10. Westheimer G, McKee SP: Visual acuity in the presence of retinal image motion. Journal of the Optical Society of America 1975, 65(7):847-850. doi:10.1364/JOSA.65.000847
  11. Gorea A, Hammett ST: Spatio-temporal vernier acuity. Spatial Vision 1998, 11(3):295-313. doi:10.1163/156856898X00040
  12. ITU-R Recommendation: Methodology for the subjective assessment of the quality of television pictures. ITU; 2002.
  13. ITU-R Recommendation: Subjective assessment methods for image quality in high-definition television. ITU; 1998.
  14. Klein SA, Levi DM: Hyperacuity thresholds of 1 sec: theoretical predictions and empirical validation. Journal of the Optical Society of America 1985, 2(7):1170-1190. doi:10.1364/JOSAA.2.001170
  15. Morgan MJ, Regan D: Opponent model for line interval discrimination: interval and vernier performance compared. Vision Research 1987, 27(1):107-118. doi:10.1016/0042-6989(87)90147-7
  16. Digital Video Essentials, http://www.jkpi.net/products_main.php

Copyright

© Sachin Deshpande and Scott Daly. 2011

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.