Wednesday, November 20, 2019

Acuity-Driven Gigapixel Visualization Methodology and Results


Acuity-Driven Gigapixel Visualization is a project that compensates for shortcomings of advanced visual display technology. The study highlights that such technology limits user navigation because the distance between the user and the display surface alternates between optimal and suboptimal. Based on tests run on the Reality Deck, a 1.5-gigapixel display with a 360° horizontal field of view and a 33′ × 19′ × 11′ workspace, the project presents an optimized visualization process for gigapixel data (Papadopoulos, 2013). The project also uses tracking data gathered from the study to formulate synthetic usage scenarios for evaluating the performance of the proposed system. The approach is termed "acuity-driven" because its optimizations are guided by analytically formulated visual acuity (Papadopoulos, 2013).


The study uses a shader that determines, from local data, the appropriate level of detail (LoD) for the virtual texture on the GPU. The virtual texture pipeline makes this determination from the spatial derivatives of the texture coordinates over the image plane. The study further proposes that a decrease in the visual angle subtended on the user's retina directly reduces the required texture-space resolution.
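The derivative-based selection described above can be sketched in a few lines. This is a minimal illustration of the standard GPU mip-level heuristic the paper builds on, not the paper's shader; the function name and the `acuity_scale` knob (which models coarsening the LoD when the viewer's retinal acuity cannot resolve finer detail) are assumptions for illustration.

```python
import math

def select_lod(du_dx, dv_dy, texture_size, acuity_scale=1.0):
    """Pick a mipmap level of detail from screen-space texture-coordinate
    derivatives (change in texture coordinates per screen pixel).

    acuity_scale > 1 coarsens the chosen LoD, standing in for the
    acuity-driven adjustment when the visual angle per texel drops
    below what the eye can resolve (illustrative parameter).
    """
    # Footprint of one screen pixel, measured in texels.
    footprint = max(abs(du_dx), abs(dv_dy)) * texture_size
    # Base LoD: log2 of the texel footprint, clamped at the finest level 0.
    lod = max(0.0, math.log2(max(footprint, 1e-9)))
    # Acuity-driven coarsening term.
    return lod + math.log2(acuity_scale)
```

For example, when one screen pixel covers exactly one texel the function returns level 0 (finest), while a doubled acuity scale shifts the selection one level coarser.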
The study further proposes a geometry tessellation based on the focus-plus-context (F + C) lens curvature and the viewer's proximity to the display. These factors enable an adaptive parametrization of the gigapixel data comprising both a view-based and a lens-based metric.
For implementation, the acuity-driven LoD selection is integrated into a gigapixel visualization pipeline based on virtual texturing. The tessellation scheme is also implementable on the GPU (Papadopoulos, 2013).
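The view-based part of the metric can be illustrated with the common one-arcminute acuity model for normal vision: the farther the viewer stands, the larger the smallest detail they can resolve, so fewer tessellated segments are needed. This is a sketch under that assumption; the function names and the subdivision cap are hypothetical, not the paper's formulation.

```python
import math

def resolvable_pixel_size(viewer_distance_m, acuity_arcmin=1.0):
    """Smallest feature size (metres) a viewer can resolve at a given
    distance, using the standard 1-arcminute model for 20/20 vision."""
    theta = math.radians(acuity_arcmin / 60.0)
    return 2.0 * viewer_distance_m * math.tan(theta / 2.0)

def tessellation_factor(viewer_distance_m, base_edge_m, max_factor=64):
    """Illustrative view-based metric: subdivide a mesh edge until each
    segment is at or below the resolvable size, capped at a GPU limit."""
    target = resolvable_pixel_size(viewer_distance_m)
    factor = base_edge_m / target
    return max(1, min(max_factor, round(factor)))
```

At one metre the resolvable size is roughly 0.3 mm, so a 1 cm edge is subdivided heavily; at ten metres the same edge needs only a few segments, which is the intuition behind proximity-driven tessellation.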



The algorithm produced a linear relationship between the resulting tessellation and proximity to the screen, and it accurately captured the F + C lens structure (Papadopoulos, 2013). The participants all had average vision, achieved where necessary through corrective glasses or contact lenses. This was confirmed by their ability to successfully pick out the survey target pictures after answering queries on demographic information. The head-tracking props the participants wore simplified comprehension of the data and of their reactions.

The Article’s Flaws

In its experimentation, the article reports that the participants, all graduate and undergraduate students, had an average age of 26 years (Papadopoulos, 2013). The study thus fails to examine children, whose visual ability is significant for how they conceive of the world, given that their retinas are immature and still developing. Moreover, elderly persons experience a continual reduction in visual ability. The study should therefore offer the flexibility to cover all generations, including those with visual impairment.

Well Presented Points

The post-hoc analysis of the positional tracking makes it possible to rate the participants' reactions and to readily compare the ADGV image quality against SVG. The analysis is quite detailed and gives a fine explanation of the results.

Project Revelation

The project presents a scheme that leverages technology to greatly improve visual capacity (Papadopoulos, 2013). The use of vertex shaders to displace the underlying mesh via OpenGL tessellation, which allows the F + C lens function to be precomputed and stored in a lookup texture, is an interesting concept (Lewis and SPIE, 2013). As such, the system supports distributed and synchronized execution (Porter, 2006).
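The idea of baking a lens function into a lookup texture can be sketched as follows. This is a hypothetical illustration of the precomputation step only: the Gaussian falloff, the parameter names, and the table resolution are assumptions, not the paper's exact lens; in practice the table would be uploaded as a texture and sampled by the vertex shader to displace mesh vertices cheaply.

```python
import math

def build_lens_lut(resolution=64, magnification=2.0, radius=0.5):
    """Precompute a focus-plus-context (F + C) lens magnification table,
    indexed by normalized distance from the lens center, so per-vertex
    evaluation reduces to a single table lookup at render time.

    The smooth Gaussian falloff used here is an illustrative choice:
    full magnification at the center, fading toward the rim.
    """
    lut = []
    for i in range(resolution):
        r = i / (resolution - 1)  # normalized distance from lens center
        falloff = math.exp(-(r / radius) ** 2)
        lut.append(1.0 + (magnification - 1.0) * falloff)
    return lut
```

The design choice mirrors the motivation in the text: evaluating the lens curve once on the CPU and reading it back from a texture keeps the per-vertex cost constant regardless of how complicated the lens function is.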

Significant Conclusions

The project aims to simplify the complex mesh by refining the image based on proximity to the object. This process distinguishes it from more basic mechanisms that use perceptual criteria to achieve the same end, and it thus invites comparison to prior work in the LoD field.

Future Focus

The project aims to focus on varied image data and a larger sample size. However, it should also aim to include greater variation in participant age to establish reaction levels (Lewis and SPIE, 2013). Not only would such a focus make it easier to comprehend the correlation between eye and head movement, but it would also make the analysis of reaction rates by age more comprehensive (Seymour and Britton, 1989).

  • Lewis, K. L., Fraunhofer-Gesellschaft, & SPIE (Society). (2013). Emerging technologies in security and defence; and quantum security II; and unmanned sensor systems X: 23–26 September 2013, Dresden, Germany.
  • Papadopoulos, C. (2013, December). Acuity-driven gigapixel visualization. IEEE Transactions on Visualization and Computer Graphics, 19(12).
  • Porter, C. (2006). Tessellation quilts: Sensational designs from simple, interlocking patterns. Newton Abbot: David & Charles.
  • Seymour, D., & Britton, J. (1989). Introduction to tessellations. Palo Alto, Calif: Dale Seymour Publications.
