The warper uses a pixel shader to deform the input image to the currently set render target.
By using the same view and projection matrices as the rendering process, we can map the provided image using the 3D coordinates of the actual screen for every projector pixel.
The sequence is always the same:
Set up your 3D environment. For each window, call
providing a Direct3D device or, for DX12, an ID3D12CommandQueue. In the case of OpenGL, set this to NULL, but make sure the window’s context is current on the same thread.
If you provided a path to an .ini file, it is loaded and the warper’s attributes are set accordingly. You may now change the warper’s attributes, or rely solely on the .ini file. Then call
This loads all mappings to the GPU, computes the lifetime transformations, and compiles the shader code.
In your render loop, call
to obtain the frustum to render your scene with.
If you only do 2D rendering (wallpaper), make sure you use a 2D mapping; you can then skip the step above.
After rendering is finished and the image output is ready in a texture or the current backbuffer, set the backbuffer as render target (in D3D12, you provide its GPU handle), then call
to warp the scene to the screen. There is no multi-threading inside the VIOSOWarpBlend module; make sure to use the same thread, or make your GL context current, before calling VWB_render.
This API can use all mappings. If you export 3D, you can specify the view parameters:
dir=[pitch, yaw, roll]; fov=[left, top, right, bottom], all positive, so 90°×85° would be [45, 42.5, 45, 42.5]; screen=distanceToViewPlane
or leave them unset to let the API calculate these values.
needs to be set to some sane value; it is not used for calculating the warped output. It makes sense to overwrite this with your own values after the warper has been created.
The eye-point correction algorithm works implicitly. Imagine a rectangular “window” placed virtually close to the screen. It must be positioned such that, from every possible (dynamic) eye point, the whole projection area of the regarded projector is seen through that window.