
VIOSO Anyblend VR&SIM 5


Observer Conversion for Static Eye-Point


Jerbi
June 14, 2021

In this article, we describe the steps to acquire virtual camera poses relative to each projector (fields of view and orientations) from a 3D calibration performed in Anyblend VR&SIM (or Integrate Plus).

This workflow is needed only for static eye-point integrations with simulators and rendering engines, where camera parameters must be configured manually for each view. For dynamic eye-point plugins, this is handled automatically by the API.

The general idea is to convert the single view generated by the calibration into multiple blended views, then configure your rendering application with the corresponding camera poses. This is described in detail below:
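To make the geometry concrete, here is a minimal, hypothetical sketch (not VIOSO code) of how a camera orientation and field of view could be derived for one projector from a static eye point and the 3D screen points it covers, using the coordinate convention of this article's examples (X right, Y forward, Z up):

```python
import math

def frustum_from_points(eye, screen_points):
    """Hypothetical sketch: derive a yaw/pitch orientation and symmetric
    horizontal/vertical FOVs (degrees) for a virtual camera at a static
    eye point, given 3D points covered by one projector on the screen.
    Convention assumed here: X right, Y forward, Z up."""
    # Direction from the eye to each covered screen point
    dirs = [tuple(p[i] - eye[i] for i in range(3)) for p in screen_points]
    # Yaw (around Z) and pitch (around X) of each direction
    yaws = [math.degrees(math.atan2(d[0], d[1])) for d in dirs]
    pitches = [math.degrees(math.atan2(d[2], math.hypot(d[0], d[1]))) for d in dirs]
    # Orient the camera at the angular center of the projector's coverage
    yaw = (max(yaws) + min(yaws)) / 2
    pitch = (max(pitches) + min(pitches)) / 2
    # Symmetric FOVs just spanning the coverage
    fov_h = max(yaws) - min(yaws)
    fov_v = max(pitches) - min(pitches)
    return yaw, pitch, fov_h, fov_v
```

For a projector covering a 1 m × 1 m patch centred 1 m straight in front of the eye, this yields zero yaw/pitch and roughly a 53° × 48° frustum.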

Table of Contents

  • 1. Perform a 3D calibration
  • 2. Apply an Auto-Frustum Conversion
  • 3. Adjust Frustum Values
  • 4. Apply Observer Conversions
  • 5. Check the result & Export

1. Perform a 3D calibration

Since calculating virtual camera poses relies on 3D coordinates in the content space, the calibration scan must be followed by a 3D alignment of the projection screen. This provides the data needed to automatically calculate a frustum for each projector space.

See the guide on how to perform a 3D calibration.

2. Apply an Auto-Frustum Conversion

In this step, the software automatically calculates poses and creates a content space for each projector:

  • First, make sure that no warping is applied to the grid and that your canvas is in full screen.
  • From the main menu, go to Calibration > Conversion Tasks:
    • Select your compound from the list of available calibrations
    • Select “auto frustum” as the conversion format
    • Specify the desired eye-point coordinates for the viewer
    • Click on Perform
  • Check the calculation results in Calibration > Content Spaces: you should see a new entry in the list for each projector.
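Conceptually, the result of this step is one frustum record per projector. A minimal illustrative representation (the class and field names below are assumptions for clarity, not VIOSO's actual data model) might look like:

```python
from dataclasses import dataclass

@dataclass
class FrustumSpace:
    """Illustrative record for one per-projector content space;
    names and fields are hypothetical, not VIOSO's actual schema."""
    name: str
    eye: tuple        # viewer eye point (x, y, z), as entered above
    rotation: tuple   # camera orientation in degrees (pitch, yaw, roll)
    fov: tuple        # (horizontal, vertical) field of view in degrees

# Example: a two-projector compound after the auto-frustum conversion
spaces = [
    FrustumSpace("Projector_1", (0, 0, 0), (0.0, -24.5, 0.0), (48.86, 30.5)),
    FrustumSpace("Projector_2", (0, 0, 0), (0.0, 24.5, 0.0), (48.86, 30.5)),
]
```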

3. Adjust Frustum Values

  • From the main menu, go to Calibration > Content Spaces:
    • Create a new frustum by clicking on the “New” button, name it “main”, set its type to “frustum”, and click on “Create Empty Definition”
    • Select the main frustum you just created and set its “LookAt” coordinates and “Up” vector. Example: for a screen calibrated with a camera facing the screen at 1 meter distance, LookAt = (0,1000,-1000) and Up = (0,1,0)
    • Click on “Apply”.
    • Select each of the projector frustums from the list and edit its Rotation and Field of View values to round them to integers, for example: if Y=48.86, round it to Y=49
    • Make sure you apply each edit, and finally click on Save to finish.
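The rounding recommended above simply makes the values easy to re-enter exactly in the render engine's camera setup. As a hypothetical helper (not part of the software), it amounts to:

```python
def rounded(frustum):
    """Round each rotation/FOV component to the nearest whole degree,
    as recommended in step 3, so the values can be typed exactly into
    the render engine. 'frustum' maps axis names to degree values."""
    return {axis: round(value) for axis, value in frustum.items()}

print(rounded({"X": 0.12, "Y": 48.86, "Z": -0.03}))  # {'X': 0, 'Y': 49, 'Z': 0}
```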

4. Apply Observer Conversions

  • From the main menu, go to Calibration > Conversion Tasks:
    • Select your compound from the list of available calibrations
    • Select “Observer conversion” as the conversion format
    • Select the first projector frustum from the list
    • Click on Perform
  • Repeat the same observer-conversion steps for all projector frustums
  • Save the calibration to a different file; you can name it, for example, “name_ObserverConv”

5. Check the result & Export

Finally, to check your conversion results, activate the compound, show a test image, and watch for issues such as:

  • “Uncovered corners” and “too much loss” are caused by viewport parameters that are too narrow or too wide. To fix this, go back to your initial scan result and redo step 3 (Adjust Frustum Values): slightly edit the fields of view, then re-convert and check again.
  • “Too much distortion” means your projectors are too wide to produce undistorted single views. This is not fixable in software.

Once you have optimized your result and made sure the views entirely cover the projection screen, proceed to export: File > Export Mapping, then uncheck the 3D box.
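Some engines accept the exported FOVs directly; others expect a raw projection matrix. As a hedged sketch (standard OpenGL-style symmetric perspective math, not a VIOSO API), the rounded FOVs can be turned into a 4×4 matrix like this:

```python
import math

def projection_from_fov(fov_h_deg, fov_v_deg, near=0.1, far=10000.0):
    """Build a symmetric OpenGL-style perspective projection matrix
    (row-major) from horizontal/vertical FOVs in degrees. Standard
    textbook math, shown for engines that take a raw 4x4 matrix."""
    t = near * math.tan(math.radians(fov_v_deg) / 2)  # top extent at near plane
    r = near * math.tan(math.radians(fov_h_deg) / 2)  # right extent at near plane
    return [
        [near / r, 0.0, 0.0, 0.0],
        [0.0, near / t, 0.0, 0.0],
        [0.0, 0.0, -(far + near) / (far - near), -2.0 * far * near / (far - near)],
        [0.0, 0.0, -1.0, 0.0],
    ]
```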


© 2020-now VIOSO GmbH. All Rights Reserved.
