VIOSO Anyblend VR&SIM


Dynamic eye-point correction


Emanuel
April 17, 2020

You can use 3D data to enable live eye-point correction. For this purpose, you need either a post-process shader in your application, or you need to connect Anyblend to a tracker.

The basic idea behind eye-point correction is to get rid of the distortion caused by a curved projection surface. Consider a simple cylindrical screen. It acts like a curved mirror through which we look at the virtual scene. We can warp the image so that it appears correct from one eye-point, the so-called “sweet spot”. If you look at the image that actually gets projected, you see why I say it works like a mirror: it looks like a scene reflected by a curved mirror.

Looking at the scene on the screen from the “sweet spot”, you see that all of the lines are straight, as they should be. Because the IG renders the scene from the same fixed eye-point, which corresponds to the real eye-point, we can warp our image to look correct. If you move away from the “sweet spot”, you notice lines getting bent and skewed, because the displayed perspective no longer matches your actual eye position.

Of course, we can simply tell the IG to render from a different position. This works well as long as the projection surface is flat: a flat mirror does not distort the image it reflects. But this changes dramatically as soon as the surface is curved.

For every dynamic eye-point setup there is a fixed render plane (a rectangle in 3D) that defines the view frustum towards the dynamic eye-point. The rendered image is mapped onto that render plane once the IG updates its projection-view matrix M.
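
A fixed render plane plus a moving eye-point is the classic off-axis (asymmetric) frustum construction, often called "generalized perspective projection". The following Python/NumPy sketch builds a combined projection-view matrix from an eye-point and three render-plane corners; the function name, corner convention, and OpenGL-style matrix layout are my own assumptions, not part of the Anyblend API.

```python
import numpy as np

def offaxis_projection(eye, pa, pb, pc, near=0.1, far=100.0):
    """Combined projection-view matrix for an arbitrary eye-point and a
    fixed render plane given by three corners: pa = lower-left,
    pb = lower-right, pc = upper-left."""
    eye, pa, pb, pc = (np.asarray(v, dtype=float) for v in (eye, pa, pb, pc))
    vr = (pb - pa) / np.linalg.norm(pb - pa)         # plane right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)         # plane up axis
    vn = np.cross(vr, vu)                            # plane normal, towards eye
    va, vb, vc = pa - eye, pb - eye, pc - eye        # eye -> corner vectors
    d = -np.dot(va, vn)                              # eye-to-plane distance
    l = np.dot(vr, va) * near / d                    # frustum extents at near plane
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    P = np.array([[2*near/(r-l), 0.0, (r+l)/(r-l), 0.0],
                  [0.0, 2*near/(t-b), (t+b)/(t-b), 0.0],
                  [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
                  [0.0, 0.0, -1.0, 0.0]])
    R = np.eye(4)                                    # rotate world into plane basis
    R[0, :3], R[1, :3], R[2, :3] = vr, vu, vn
    T = np.eye(4)                                    # move the eye to the origin
    T[:3, 3] = -eye
    return P @ R @ T
```

With this construction the render-plane corners always land on the image edges regardless of where the eye is; only the matrix changes per frame, the plane itself stays fixed.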

This gives us a correspondence between p and p’. If the screen and the render plane do not coincide (p’’ != p’), we have to answer another question: where would we expect to see p on the screen (p’’)?

If we take a look at the projection, we see a correspondence between p’’ and p’’’. This is exactly what the Calibrator tool gives us as a look-up map.

Now we can solve all of the problems:

p’’ = L(p’’’)            (1)
p’  = MT · p             (2)
p’  = MT · p’’           (3)

Substituting (1) into (3):

p’  = MT · L(p’’’)       (4)

Equation (2) is solved by the IG’s renderer: generating the image, it fills the render plane, so we know every content pixel.

Equation 4 can be solved by a simple pixel shader:

  • p’’’ is the relative texel coordinate, given by the texture coordinates of the rendered quad.
  • L() is a look-up texture giving us the 3D screen position for each projector pixel.
  • p’ is the resulting texture coordinate inside the content texture.
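
As a rough CPU illustration of that shader, equation (4) per pixel might look like the following Python/NumPy sketch. All names are hypothetical; a real implementation would be a GLSL/HLSL fragment shader sampling the look-up map and the content image as textures.

```python
import numpy as np

def warp_lookup(lookup_3d, view_proj, content):
    """CPU sketch of the pixel shader for equation (4).  For every
    projector pixel p''' we fetch the 3D screen point from the
    calibration look-up map L, project it with the dynamic
    projection-view matrix (MT), and use the resulting coordinate p'
    to sample the content texture.  (Nearest-neighbour sampling, no
    v-flip; a real shader would use filtered texture fetches.)"""
    h, w = lookup_3d.shape[:2]
    out = np.zeros((h, w) + content.shape[2:], dtype=content.dtype)
    ch, cw = content.shape[:2]
    for y in range(h):
        for x in range(w):
            p3 = np.append(lookup_3d[y, x], 1.0)     # L(p'''), homogeneous
            clip = view_proj @ p3                    # apply MT
            if clip[3] <= 0.0:
                continue                             # behind the eye: stays black
            u, v = (clip[:2] / clip[3] + 1.0) * 0.5  # NDC -> [0,1] tex coords
            if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:
                out[y, x] = content[min(int(v*ch), ch-1), min(int(u*cw), cw-1)]
    return out
```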

There are a few things to keep in mind:

  • The render plane must be big enough to cover the screen: from every possible dynamic eye-point, the whole projector image must be visible through that “window”. Best practice is to intersect it with the real screen and keep the projection “inside”.
  • Minimize the maximum distance between the render plane and the actual screen; the bigger the gap, the heavier the distortion. Best practice is to let each projector cover only a small arc, e.g. by using portrait mode or by using more render planes (one per projector).
  • We have to define the fixed render plane. A convenient way to do this is to define a fixed frustum and a distance that determines the view plane. The frustum settings can be calculated by the Calibrator from an eye-point and a scanned mapping.
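
The first rule above can be checked numerically: intersect the ray from each candidate eye-point through each screen point with the render-plane rectangle, and verify that every hit lies inside. A hedged Python/NumPy sketch (names and corner conventions are assumptions of mine, not from the Calibrator):

```python
import numpy as np

def covers(eye_points, screen_points, pa, pb, pc):
    """True if the render-plane rectangle (pa lower-left, pb lower-right,
    pc upper-left) acts as a full 'window' onto all screen points from
    all dynamic eye-points, i.e. every eye->screen-point ray intersects
    the rectangle in front of the eye."""
    vr, vu = pb - pa, pc - pa
    vn = np.cross(vr, vu)                     # plane normal
    for eye in eye_points:
        for s in screen_points:
            d = s - eye                       # ray direction
            denom = np.dot(d, vn)
            if abs(denom) < 1e-9:
                return False                  # ray parallel to the plane
            t = np.dot(pa - eye, vn) / denom
            if t <= 0.0:
                return False                  # plane is behind the eye
            hit = eye + t * d - pa            # intersection, plane-local
            u = np.dot(hit, vr) / np.dot(vr, vr)
            v = np.dot(hit, vu) / np.dot(vu, vu)
            if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
                return False                  # misses the rectangle
    return True
```

Sampling the tracked volume with a handful of extreme eye-points is usually enough to catch a render plane that was chosen too small.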

