Hi there! Here are some guidelines for creating comfortable, awesome-looking stereoscopic images for the Mira Prism.
Try not to place objects too far away in the scene. We advise keeping your objects within 1 - 10 feet of the user, i.e. within room scale. If you do want to place objects farther away, keep them within a similar range of depths. Always make sure the virtual dimensions of your scene match the real-world depths you intend, or you're going to have a bad time.
These rules are not absolute - depending on the context of your virtual scene, what you perceive may contradict this document. What seems to matter most is that your content exists within a similar range of depths. If a scene has many virtual objects, some staged very close to the camera and some very far away, the user will constantly shift convergence between near and far objects, which quickly becomes uncomfortable.
We render stereoscopic images with the two virtual cameras spaced by the user's IPD (interpupillary distance). Because of this, a large object far away is not the same as a small object close to the user: they may subtend the same visual angle, but their stereo disparity differs.
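A quick sketch of that point, assuming an average IPD of 63 mm (the value is an assumption, not a Mira constant): a 0.1 m object at 1 m and a 1.0 m object at 10 m subtend exactly the same visual angle, yet the vergence angle the eyes need differs by an order of magnitude.

```python
import math

IPD = 0.063  # assumed average interpupillary distance, in meters

def vergence_angle(distance_m: float) -> float:
    """Angle (radians) between the two eyes' lines of sight to a point
    straight ahead at the given distance."""
    return 2 * math.atan((IPD / 2) / distance_m)

def angular_size(width_m: float, distance_m: float) -> float:
    """Visual angle (radians) subtended by an object of the given width."""
    return 2 * math.atan((width_m / 2) / distance_m)

# A 0.1 m object at 1 m and a 1.0 m object at 10 m look the same size...
same_size = math.isclose(angular_size(0.1, 1.0), angular_size(1.0, 10.0))

# ...but the stereo cameras render them with very different vergence demands.
v_near = vergence_angle(1.0)   # large angle: eyes turn inward noticeably
v_far = vergence_angle(10.0)   # small angle: lines of sight nearly parallel
```

This is why scaling an object up and pushing it back is not a free trick in stereo: the monocular image is unchanged, but the disparity is not.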
Our goal is to keep all objects within the stereoscopic comfort zone of 1 - 10 feet (approximately 0.3 - 3 meters). Stay aware of your virtual unit size, and calculate the real-world distances and dimensions of your virtual objects.
The clearest, most readable distance from the camera rig is 0.608 meters, or approximately 2 feet. If you need the user to read text, place it around that distance.
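The comfort-zone check above is easy to automate. This is a minimal sketch, assuming your engine uses 1 virtual unit = 1 meter (`UNITS_PER_METER` is a placeholder you would set for your own scene scale):

```python
import math

# Stereoscopic comfort zone from the guideline above
COMFORT_NEAR_M = 0.3     # ~1 foot
COMFORT_FAR_M = 3.0      # ~10 feet
READABLE_TEXT_M = 0.608  # clearest distance for readable text

UNITS_PER_METER = 1.0    # hypothetical: your engine's virtual-unit scale

def distance_from_camera(obj_pos, cam_pos=(0.0, 0.0, 0.0)) -> float:
    """Euclidean distance in meters, after converting from virtual units."""
    return math.dist(obj_pos, cam_pos) / UNITS_PER_METER

def in_comfort_zone(obj_pos) -> bool:
    """True if the object sits inside the 0.3 - 3 m comfort zone."""
    return COMFORT_NEAR_M <= distance_from_camera(obj_pos) <= COMFORT_FAR_M

# e.g. a text label placed straight ahead at the readable distance
label_pos = (0.0, 0.0, READABLE_TEXT_M)
```

Running a check like this over your scene at build time catches objects that drifted out of range before a user ever feels the eye strain.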
- One of the strongest depth cues is occlusion: a closer object hides a farther one
- If you place virtual objects too far away, they will be stereoscopically behind real objects or walls, yet the display still draws them on top
- This gives the brain conflicting stereo information and makes it difficult for the user's eyes to converge
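One pragmatic way to avoid the "object behind the wall" conflict is to clamp content depth to the real room. This is only a sketch: `ROOM_DEPTH_M` is an assumed value standing in for whatever room measurement or estimate your app has.

```python
# Assumed distance from the user to the nearest real wall, in meters.
# In practice you would measure or estimate this per room.
ROOM_DEPTH_M = 4.0
MARGIN_M = 0.25  # small gap so the object clearly reads as inside the room

def clamp_depth(virtual_depth_m: float) -> float:
    """Clamp an object's forward distance so it never lands behind
    the real wall (which would be drawn on top of it anyway)."""
    return min(virtual_depth_m, ROOM_DEPTH_M - MARGIN_M)
```

Keeping virtual content in front of real surfaces keeps the occlusion cue and the stereo cue telling the same story.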
Vergence Accommodation Conflict
- We project the image onto different parts of the display so that your eyes converge at different depths, and you perceive the object at a depth that matches the virtual scene. See the gif below for a visual explanation
- While we can shift the images to manipulate convergence, the focal plane of the display is always fixed. Even when your eyes converge on a point placed 10 meters away, it will look blurry, because the focal plane of the virtual image is much closer. In the gif below, when the convergence distance does not match the focal plane, the image does not focus on the user's retina
- In VR, everything is judged relative to other virtual objects. In AR, you are comparing virtual images against the real world, and this mismatch between focus and convergence can be difficult for the user's brain to reconcile
(GIF: Vergence Accommodation Conflict)
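The image-shifting described above follows from similar triangles. A minimal sketch, with an assumed 63 mm IPD and an assumed fixed focal plane at 0.6 m (both are illustrative values, not Mira specs): the horizontal offset between the left- and right-eye images of a point at depth d is IPD * (1 - f / d), where f is the focal-plane distance.

```python
IPD = 0.063          # assumed average interpupillary distance, meters
FOCAL_PLANE_M = 0.6  # assumed fixed focal distance of the display, meters

def screen_disparity(depth_m: float) -> float:
    """Horizontal offset (meters, measured on the focal plane) between
    the left- and right-eye images of a point at the given virtual depth.
    Each eye at (+/- IPD/2, 0) casts a ray to the point (0, depth); the
    rays cross the focal plane at -/+ (IPD/2) * (1 - FOCAL_PLANE_M/depth)."""
    return IPD * (1 - FOCAL_PLANE_M / depth_m)

# A point at the focal plane needs no image shift (vergence matches focus),
# while a point at 10 m needs nearly the full IPD worth of shift -- yet the
# eyes must still focus at 0.6 m. That mismatch is the conflict.
at_focal_plane = screen_disparity(FOCAL_PLANE_M)  # 0.0
far_away = screen_disparity(10.0)                 # close to IPD
```

The farther a virtual point sits from the fixed focal plane, the larger the gap between where the eyes converge and where they must focus, which is exactly why the comfort zone above hugs the focal distance.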
Now get out there and create some awesome stereoscopic experiences!!!