This chapter describes how to handle multi-camera setups with a marker-based approach. Requirements are:
- 3D Model of the projection screen.
- Markers in relation to the screen.
- Markers placed as physical references on the screen in the real world.
Step 1: 3D Model
Create/Use an existing CAD model
If you already have a model, strip everything that obstructs the actual screen. Keep it as simple as possible and use as few faces as possible. Use real-world units; we recommend mm.
The faces have to point towards the cameras, because we use face culling to reduce rendering load and obstruction.
For a multi-camera approach, the model needs texture coordinates, so apply a textured material to the surface of the screen. Create the material from one big texture with the dimensions of the screen; in the case of a cylindrical screen, use its height and arc length. Once the material is applied, it must be distributed uniformly, as illustrated by the sketch below.
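To make "distributed uniformly" concrete for a cylindrical screen, here is a small Python sketch. It is purely illustrative and not part of any VIOSO tool; cylinder_uv is a made-up helper, and the 2885 mm height and 200° arc are the example values used in Step 2. The idea is that u follows the arc and v follows the height, so one texture of arc length x height covers the screen evenly.

import math

def cylinder_uv(x, y, z, height_mm, arc_deg):
    # Map a vertex of a cylindrical screen to texture coordinates:
    # u runs over the full arc, v over the full screen height, so a texture
    # sized arc length x height is spread uniformly over the surface.
    theta = math.degrees(math.atan2(x, -z))   # 0 deg at the screen center line (-Z)
    u = (theta + arc_deg / 2.0) / arc_deg     # 0 at the left edge, 1 at the right edge
    v = y / height_mm                         # 0 at the bottom, 1 at the top
    return u, v

# example: the position of marker 2 (top left) of the 200 deg screen from Step 2
print(cylinder_uv(-3792.723228, 2885, 600.7083458, height_mm=2885, arc_deg=200))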
Export the result to either 3ds, dae, or obj.
Step 2: Markers
To align the cameras to the screen, we need at least 6 known features per camera. A feature is some unique place on (or off) the screen of which we have the 3D coordinates.
In the case of a cylindrical screen, we create them using trigonometric functions. Add all points to %ProgramData%\VIOSO\SPCalibrator\SPMarkerDef.ini (or the data directory you configured). On our 200° screen,
we use a 22° step. We go up first, then to the right, numbering the markers continuously. In our example, we use 5 cameras for 9 projectors, so the front camera needs additional points, rotated by 11°.
You have to be able to find the markers on the physical screen, so place some next to the screen at the bottom and use a laser cross to locate the upper edge.
<?xml version="1.0"?>
<VIOSO>
<File version="1.0.0" />
<Marker id=" 1" X="-3792.723228" Y="0" Z="600.7083458" />
<Marker id=" 2" X="-3792.723228" Y="2885" Z="600.7083458" />
<Marker id=" 3" X="-3741.581049" Y="0" Z="-863.8120487" />
<Marker id=" 4" X="-3741.581049" Y="2885" Z="-863.8120487" />
<Marker id=" 5" X="-3145.54385" Y="0" Z="-2202.533516" />
<Marker id=" 6" X="-3145.54385" Y="2885" Z="-2202.533516" />
<Marker id=" 7" X="-2091.413894" Y="0" Z="-3220.494981" />
<Marker id=" 8" X="-2091.413894" Y="2885" Z="-3220.494981" />
<Marker id=" 9" X="-1438.489319" Y="0" Z="-3560.386002" />
<Marker id="10" X="-1438.489319" Y="2885" Z="-3560.386002" />
<Marker id="11" X="-732.7065422" Y="0" Z="-3769.448384" />
<Marker id="12" X="-732.7065422" Y="2885" Z="-3769.448384" />
<Marker id="13" X="0" Y="0" Z="-3840" />
<Marker id="14" X="0" Y="2885" Z="-3840" />
<Marker id="15" X="732.7065422" Y="0" Z="-3769.448384" />
<Marker id="16" X="732.7065422" Y="2885" Z="-3769.448384" />
<Marker id="17" X="1438.489319" Y="0" Z="-3560.386002" />
<Marker id="18" X="1438.489319" Y="2885" Z="-3560.386002" />
<Marker id="19" X="2091.413894" Y="0" Z="-3220.494981" />
<Marker id="20" X="2091.413894" Y="2885" Z="-3220.494981" />
<Marker id="21" X="3145.54385" Y="0" Z="-2202.533516" />
<Marker id="22" X="3145.54385" Y="2885" Z="-2202.533516" />
<Marker id="23" X="3741.581049" Y="0" Z="-863.8120487" />
<Marker id="24" X="3741.581049" Y="2885" Z="-863.8120487" />
<Marker id="25" X="3792.723228" Y="0" Z="600.7083458" />
<Marker id="26" X="3792.723228" Y="2885" Z="600.7083458" />
</VIOSO>
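The coordinates above follow the cylinder geometry: X = R*sin(angle), Z = -R*cos(angle), with the marker height as Y. The following Python sketch (a hypothetical helper, not part of SPCalibrator) reproduces a marker list of this form, assuming a radius of 3840 mm, marker heights of 0 mm and 2885 mm, and the angles used above.

import math

def make_marker_list(radius_mm, angles_deg, heights_mm):
    # Cylindrical marker positions: X = R*sin(angle), Z = -R*cos(angle),
    # 0 deg pointing towards -Z. For each angle we emit the bottom marker
    # first, then the top one, and number the markers continuously.
    lines = ['<?xml version="1.0"?>', '<VIOSO>', '<File version="1.0.0" />']
    marker_id = 1
    for angle in angles_deg:                       # left to right
        x = radius_mm * math.sin(math.radians(angle))
        z = -radius_mm * math.cos(math.radians(angle))
        for y in heights_mm:                       # bottom, then top
            lines.append(f'<Marker id="{marker_id:2d}" X="{x:.6f}" Y="{y}" Z="{z:.6f}" />')
            marker_id += 1
    lines.append('</VIOSO>')
    return "\n".join(lines)

# 22 deg base step plus the front points rotated by 11 deg, as described above
angles = [-99, -77, -55, -33, -22, -11, 0, 11, 22, 33, 55, 77, 99]
print(make_marker_list(3840, angles, heights_mm=(0, 2885)))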
Write down the marker assignment for every camera (see the snippet after this list):
- Cam1 sees 6 markers, starting at 1.
- Cam2 sees 8 markers, starting at 5.
- Cam3 sees 10 markers, starting at 9.
- Cam4 sees 8 markers, starting at 15.
- Cam5 sees 6 markers, starting at 21.
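As a quick sanity check, the list above translates to the following marker ranges; note that adjacent cameras share some markers. This is just a hypothetical Python snippet to print the ranges, not a VIOSO configuration.

# marker assignment from the list above: camera -> (first marker id, count)
camera_markers = {
    "Cam1": (1, 6),
    "Cam2": (5, 8),
    "Cam3": (9, 10),
    "Cam4": (15, 8),
    "Cam5": (21, 6),
}
for cam, (start, count) in camera_markers.items():
    print(cam, "uses markers", start, "to", start + count - 1)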
Step 3: Place your cameras
In the data directory, you need SPDevices.ini, SPMarkerDef.ini, and SPContentSpaces.ini. In SPDevices.ini, you’ll find the camera intrinsics. These, in conjunction with the markers, allow us to find the pose of the camera.
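SPCalibrator computes this internally, but for intuition: estimating a camera pose from at least 6 known 3D marker positions and their 2D image positions is a classic perspective-n-point problem. The sketch below shows the idea with OpenCV's solvePnP; it is not VIOSO's implementation, and the image points and intrinsics are made-up placeholder values (the 3D points are markers 1 to 6 for Cam1, as listed above).

import numpy as np
import cv2

# 3D marker positions (mm) for Cam1, taken from SPMarkerDef.ini (markers 1-6)
object_points = np.array([
    [-3792.723228,    0,  600.7083458],
    [-3792.723228, 2885,  600.7083458],
    [-3741.581049,    0, -863.8120487],
    [-3741.581049, 2885, -863.8120487],
    [-3145.54385,     0, -2202.533516],
    [-3145.54385,  2885, -2202.533516],
], dtype=np.float64)

# where those markers appear in the camera image (placeholder pixel values)
image_points = np.array([
    [212.0, 655.0], [208.0, 118.0], [601.0, 702.0],
    [598.0,  95.0], [1002.0, 741.0], [1008.0, 80.0],
], dtype=np.float64)

# intrinsics of the kind stored in SPDevices.ini (placeholder focal length / principal point)
camera_matrix = np.array([[1450.0,    0.0, 960.0],
                          [   0.0, 1450.0, 540.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume lens distortion is already compensated

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
print("rotation (Rodrigues):", rvec.ravel(), "translation (mm):", tvec.ravel())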
Start SPCalibrator and go to Calibration->Intrinsics and poses and select “Manual camera pose” from the dropdown. Select the camera and your observer screen (which should not be a projector).
Now enter the start marker and the number of markers used for that camera. Press Next.
Now adjust the camera so it sees the markers and the laser line. Use the “Format” button and adjust the exposure. Press Next.
Now move the points roughly to their positions, then fine-position them. Use the mouse wheel to zoom and the right mouse button to pan. The point next to the mouse cursor is selected (encircled) and can also be moved with the arrow keys.
Once finished, press F2 to bring the UI to the front. Select the correct intrinsic parameter set from the dropdown and press “Calculate”. Save a screenshot for later use.
Press “Next” if you are satisfied (perform a sanity check on the values!).
Repeat for all cameras.
Step 4: Create a model content space
Go to Extras->Custom content spaces. Click New and name it “screen”. Set its type to “model” and use the native camera resolution as the pixel space.
Select the model file (screen_mgp.obj), which should be copied to the data directory. Currently, 0,0,0 is in the middle of the cylinder at the bottom (not the floor) level. Move the model to the eye-point by moving it down (-y). Check “invert Tv”. Press “Apply” and “Save”.
Now you have all cameras aligned to the screen.
Close SPCalibrator on master.
Step 5: Copy all settings to all clients
Copy SPDevices.ini, SPContentSpaces.ini, and the screen model file from the data directory over to all clients’ data directories. Make sure there are no .ini files except SPeASY.ini in the program directory.
Step 6: Run projector scans
Start VIOSO Calibrator on all clients. Make sure that network settings and firewall are configured.
Start a Multi-Client calibration on the master. Add all IPs to the listbox and click Next.
Make sure all cameras are the same brand/make and are set up identically, both physically and in their parameters.
Select the correct screen mode (flat or curved screen) and give it a convenient name. This file will be saved in each client’s data directory, since all clients hold their data themselves.
Check “no blending calculation” and click Next.
Now you are asked to perform a calibration on each client. Do not load or re-use an existing calibration. This is why we restarted all Calibrator instances on all clients.
Use the same camera settings for all scans (write them down). Perform white balance once and write down the values. Then disable white balancing and auto exposure before each scan.
Unfortunately, you must remote into the clients and set up the camera (the settings are saved, so once you load a successful scan, the system reuses these parameters).
You have to use the correct camera intrinsic parameters — this is where the camera position comes from.
Draw a mask slightly inside the screen where the projection is cut off by the canvas or obstruction.
If any of the scans fail, just continue and restart the process. You can then skip the good scans by re-using them (keep the existing calibration selected), or load them if you had to restart VIOSO Calibrator.
Step 7: Conversions
Now that we have a correspondence between camera(s) and screen, we need to calculate the 3D data that every projector pixel points to. This is done by going through the following steps:
Camera content space translation
On every client, go to Calibration->Conversion tasks, select the Compound, “Camera content space translation”, and your screen model. Then press Perform.
After that, start the preview of all Compounds on all clients.
Now we look at a back-projected rendering of the model, which is the virtual screen.
You will notice that where camera sections overlap, the images do not match perfectly. This comes from measurement and intrinsic errors summing up on the screen. It can be corrected using our warping tool. In this state, we transform the camera, just as when adapting the intrinsic parameters.
Add VC to display geometry
Once you are satisfied with the overall geometry, go to Calibration->Conversion tasks and select “Add VC to display geometry”. This connects the warping to the camera, so from now on no warping needs to be applied on recalibration.
Custom content space
The last step is to convert the projectors to their new content space. Now the 3D points and the texture coordinates are calculated for every pixel.
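For intuition on what this conversion produces: conceptually, every projector pixel corresponds to a camera ray that is intersected with the screen model, yielding a 3D point (and, via the texture mapping from Step 1, a texture coordinate). The Python sketch below shows the idea for an analytic cylinder only; the real conversion works against the exported screen model and is done entirely by SPCalibrator. The eye height in the example call is a placeholder value.

import math

def intersect_cylinder(origin, direction, radius_mm):
    # Intersect a ray with an infinite vertical cylinder around the Y axis.
    # The origin is assumed to be inside the cylinder, so we take the
    # positive root, i.e. the hit in front of the origin along the ray.
    ox, oy, oz = origin
    dx, dy, dz = direction
    a = dx * dx + dz * dz
    b = 2.0 * (ox * dx + oz * dz)
    c = ox * ox + oz * oz - radius_mm * radius_mm
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc < 0.0:
        return None
    t = (-b + math.sqrt(disc)) / (2.0 * a)
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# a ray from a placeholder eye height straight towards the front of the 3840 mm cylinder
print(intersect_cylinder((0.0, 1400.0, 0.0), (0.0, 0.0, -1.0), 3840.0))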
Step 8: Redo blending
Now you must do a recalibration. Click the “Calibrate” button and select multi-client calibration. If you run a stereoscopic setup, add only the IP addresses for one eye, as these are blended together; repeat for the other eye afterwards. Click “Next” and select all projectors. Keep all settings, except check “auto content space convert”, so you can rescan without doing the conversions again. Then click “Next”.
Now confirm every available calibration and wait (be patient!) for the blending to be calculated.
Step 9: Export
The last step is to export the mappings. Go to Files->Export warping and blending and select the Super Compound. Check “export to master”.
Display auto pose
To get the exact pose for every IG, you’ll need to calculate it from the collected data. Go to Calibration->Conversion tasks and select Display auto pose. This leads to a new pose, which you’ll find in Extras->Custom content spaces.
Export pose
You can export it by clicking “IG views…”. Just key in the main view as follows:
Eye: 0, 0, 0; Dir: 0, 0, 1; Up: 0, 1, 0.
Enter the location of the individual .ini and use the supplied template to generate the settings file for the plugin.