You can use 3D data to enable live eye-point correction. For this purpose, you either need a post-process shader in your application, or you need to connect Anyblend to a tracker.
The basic idea behind eye-point correction is to get rid of the distortion caused by a curved projection surface. Consider a simple cylindrical screen. It works like a curved mirror through which we look at the virtual scene. We can warp the image so that it appears correct from one eye-point, the so-called “sweet spot”. If you look at the image that actually gets projected, you see why I call it a mirror: it looks like a scene reflected by a curved mirror.
Looking from the “sweet spot” onto the scene on the screen, you see that all of the lines are straight, as they should be. Because the IG renders the scene from that same fixed eye-point, which corresponds to the real eye-point, we can warp the image to look correct. If you move away from the “sweet spot”, you notice that lines become bent and skewed. This is because the displayed perspective no longer matches your actual eye position.
Of course, we can simply tell the IG to render from a different position. This works great as long as the projection surface is flat. A flat mirror does not distort the image it reflects, but this changes dramatically as soon as the surface is curved.
For every dynamic eye-point setup, there is a fixed render plane (a rectangle in 3D) which, together with the dynamic eye-point, defines the view frustum. Once the IG updates its projection-view matrix M accordingly, the rendered image maps exactly onto that render plane.
This gives us a correspondence between a scene point p and its image p’ on the render plane. If the screen does not coincide with the render plane (p’’ != p’), we have to answer another question: Where would I expect to see p on the screen (p’’)?
If we take a look at the projection itself, we see a correspondence between the screen point p’’ and the projector pixel p’’’. This is exactly what we get from the Calibrator tool as a look-up map.
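To make that look-up map more concrete, you can picture it as a table over the projector pixels that stores the scanned 3D screen point for each pixel, i.e. p’’ = L(p’’’). The C++ sketch below is a minimal, made-up in-memory representation; the actual Calibrator export format and API are not shown here.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical in-memory form of the Calibrator look-up map:
// one 3D screen point per projector pixel, i.e. p'' = L(p''').
struct Vec3 { float x, y, z; };

struct LookupMap {
    std::size_t width  = 0;   // projector resolution in x
    std::size_t height = 0;   // projector resolution in y
    std::vector<Vec3> points; // row-major, width * height entries

    // Nearest-neighbour sample of L() at projector pixel (px, py).
    Vec3 sample(std::size_t px, std::size_t py) const {
        return points[py * width + px];
    }
};
```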
Now we can solve all of the problems:
p’’ = L(p’’’)                      (1)
p’ = M * p                         (2)
p’ = M * p’’                       (3)
(1) -> (3):  p’ = M * L(p’’’)      (4)
Equation (2) is solved by the renderer of the IG: by generating the image, it fills the render plane, so we know the content of every pixel on it.
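For illustration, equation (2) boils down to an off-axis (“generalized perspective”) projection built from the dynamic eye-point and the corners of the fixed render plane. The C++ sketch below shows one common way to construct such a matrix M; the vector and matrix helpers and the function names are made up for this example and are not part of Anyblend or any particular IG.

```cpp
#include <cmath>
#include <array>

// Minimal 3D vector and 4x4 matrix helpers (row-major, column vectors).
struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a) {
    float len = std::sqrt(dot(a, a));
    return {a.x / len, a.y / len, a.z / len};
}

using Mat4 = std::array<float, 16>; // row-major

static Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
    return r;
}

// Projection-view matrix M for a dynamic eye-point looking through the fixed
// render plane given by its corners pa (lower left), pb (lower right) and
// pc (upper left). Follows the common "generalized perspective projection"
// construction; names and conventions are illustrative only.
Mat4 projectionViewForEye(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 eye,
                          float nearZ, float farZ) {
    // Orthonormal basis of the render plane.
    Vec3 vr = normalize(sub(pb, pa));   // right
    Vec3 vu = normalize(sub(pc, pa));   // up
    Vec3 vn = normalize(cross(vr, vu)); // normal, pointing towards the eye

    // Vectors from the eye to the plane corners and distance to the plane.
    Vec3 va = sub(pa, eye), vb = sub(pb, eye), vc = sub(pc, eye);
    float d = -dot(va, vn);

    // Off-axis frustum extents at the near plane.
    float l = dot(vr, va) * nearZ / d;
    float r = dot(vr, vb) * nearZ / d;
    float b = dot(vu, va) * nearZ / d;
    float t = dot(vu, vc) * nearZ / d;

    Mat4 P = {
        2 * nearZ / (r - l), 0, (r + l) / (r - l), 0,
        0, 2 * nearZ / (t - b), (t + b) / (t - b), 0,
        0, 0, -(farZ + nearZ) / (farZ - nearZ), -2 * farZ * nearZ / (farZ - nearZ),
        0, 0, -1, 0
    };
    // Rotate the world into the plane's basis, then move the eye to the origin.
    Mat4 R = { vr.x, vr.y, vr.z, 0,
               vu.x, vu.y, vu.z, 0,
               vn.x, vn.y, vn.z, 0,
               0,    0,    0,    1 };
    Mat4 T = { 1, 0, 0, -eye.x,
               0, 1, 0, -eye.y,
               0, 0, 1, -eye.z,
               0, 0, 0, 1 };
    return mul(P, mul(R, T));
}
```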
Equation (4) can be solved by a simple pixel shader (a sketch follows after this list):
- p’’’ is the relative texel coordinate, given by the texture coordinates of the rendered quad.
- L() is a lookup texture giving us the 3D position of the screen for that pixel.
- p’ is then the resulting texture coordinate inside the content texture.
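The per-fragment logic of that shader, written out as plain C++ for readability (the real implementation would live in GLSL or HLSL), could look roughly like the sketch below. Texture sampling is reduced to simple callable parameters, and all names are placeholders rather than an actual Anyblend API.

```cpp
#include <array>

// Plain-C++ sketch of the per-fragment work of the warping shader.
struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
using Mat4 = std::array<float, 16>; // row-major projection-view matrix M

static Vec4 mul(const Mat4& m, Vec4 v) {
    return { m[0]  * v.x + m[1]  * v.y + m[2]  * v.z + m[3]  * v.w,
             m[4]  * v.x + m[5]  * v.y + m[6]  * v.z + m[7]  * v.w,
             m[8]  * v.x + m[9]  * v.y + m[10] * v.z + m[11] * v.w,
             m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w };
}

// projPixel          : p''', texture coordinate of the full-screen quad
// lookupScreenPoint  : samples L(), i.e. the 3D screen point p'' for p'''
// sampleContent      : samples the rendered content texture at p'
template <class LookupFn, class ContentFn>
Vec3 warpFragment(Vec2 projPixel, const Mat4& M,
                  LookupFn lookupScreenPoint, ContentFn sampleContent) {
    Vec3 screenPoint = lookupScreenPoint(projPixel);        // p'' = L(p''')
    Vec4 clip = mul(M, {screenPoint.x, screenPoint.y,
                        screenPoint.z, 1.0f});              // M * p''
    // Perspective divide, then map [-1,1] to [0,1] texture space
    // (a y-flip may be needed depending on conventions).
    Vec2 pPrime = { (clip.x / clip.w) * 0.5f + 0.5f,
                    (clip.y / clip.w) * 0.5f + 0.5f };      // p'
    return sampleContent(pPrime);                           // output colour
}
```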
There are a few things to keep in mind:
- The render plane must be big enough to cover the screen. This means that from every possible dynamic eye-point, the whole projector image must be visible through that “window”. Best practice is to intersect the render plane with the real screen and keep the projection “inside”.
- Minimize the maximum distance between the render plane and the actual screen. In other words: the bigger the gap, the heavier the distortion. Best practice is to let each projector cover only a small arc, e.g. by using portrait mode or by using more projectors and render planes.
- We have to define the fixed render plane. A convenient way to do this is to define a fixed frustum plus a distance at which the view plane lies. The frustum setting can be derived by the Calibrator, calculated from an eye-point and a scanned mapping.
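One way to turn such a frustum-plus-distance definition into the actual render-plane rectangle is sketched below: given the reference eye-point, its view basis, the frustum angles, and a chosen distance, the four corners follow directly. The assumption that the frustum is given as signed view-space angles, as well as all the names, is illustrative and not the actual Calibrator output format.

```cpp
#include <cmath>

// Sketch: derive the fixed render-plane rectangle from a frustum definition
// and a distance. leftDeg/bottomDeg are typically negative (signed angles).
struct Vec3 { float x, y, z; };

struct RenderPlane { Vec3 lowerLeft, lowerRight, upperLeft, upperRight; };

RenderPlane planeFromFrustum(Vec3 eye, Vec3 right, Vec3 up, Vec3 forward,
                             float leftDeg, float rightDeg,
                             float bottomDeg, float topDeg, float distance) {
    const float deg2rad = 3.14159265f / 180.0f;
    // Signed extents of the plane at the given distance.
    float l = std::tan(leftDeg   * deg2rad) * distance;
    float r = std::tan(rightDeg  * deg2rad) * distance;
    float b = std::tan(bottomDeg * deg2rad) * distance;
    float t = std::tan(topDeg    * deg2rad) * distance;

    auto corner = [&](float h, float v) {
        return Vec3{ eye.x + forward.x * distance + right.x * h + up.x * v,
                     eye.y + forward.y * distance + right.y * h + up.y * v,
                     eye.z + forward.z * distance + right.z * h + up.z * v };
    };
    return { corner(l, b), corner(r, b), corner(l, t), corner(r, t) };
}
```

The resulting corners can then be fed into an off-axis projection construction like the one sketched after equation (2) to obtain M for any dynamic eye-point.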