The field of augmented reality, and specifically devices like the Microsoft HoloLens, offers many exciting possibilities for developers. For the first time we aren’t developing for a fixed-size 2D plane anymore, but instead an infinite canvas of possibilities surrounding the user on all sides. This opens up a lot of scope for interesting use-cases, but it also almost certainly means we will be learning how best to manage that space for a long time to come.
When developing for any new medium like this, it’s important to take into account not just the new possibilities but also any design constraints you need to work within.
One constraint of the HoloLens that has prompted a lot of discussion in the press is its field of view.
As humans we have a very wide FOV, roughly 180 degrees, but optically transparent AR devices like the HoloLens are currently far more limited. While Microsoft has not published an official spec sheet for the HoloLens, users have estimated its FOV at around 30 degrees.
Some people have used this value to write off the HoloLens completely – a view I firmly disagree with, and one that several hands-on impressions contradict.
But the FOV can’t be ignored completely either.
As a developer it is important to consider the FOV the user has, where their eyes will be, and to immerse them in a world that draws as little attention as possible to the edges of the augmented portion of their view.
Because of this I have written a little guide to things I feel are useful to consider, in regards to the visible space, when developing a HoloLens app.
Disclaimer: I do not own a HoloLens and, being based in Europe, it might be quite some time before I can develop with one. As such, all my advice is based purely on the hands-on impressions given by others, combined with my own knowledge of computer animation and game design. Use your own discretion as to how reliable it may be.
Draw the eye – consider where the eye is at all times.
A lot of our eye-movement is subconscious. Our eyes dart about while taking in the environment, often being automatically drawn to anything out of the ordinary.
To a large extent this fact can be used to precisely control where the user’s gaze will be, even if only for a brief moment.
For example: let’s say you’re designing a game and, as in Microsoft’s own demo, a monster is going to come crashing through a real-life wall. If this happens suddenly, without warning, the player could be looking at the wall off-center… or even in completely the wrong direction.
Instead, by having a bashing sound come from the wall and then letting a crack appear at the center of where it will break, you control the user’s eye. The noise draws the user’s attention in the general direction; the crack then draws their gaze to exactly where you want it.
Notice the crack that appears first?
Ideally, you then break the wall at the moment the user’s eyes are looking directly at the crack (which of course you can detect, as HMDs like the HoloLens track where the user is looking).
Because the user’s gaze is now centered as the creature emerges, you have ensured they see the maximum possible amount of it. If the creature is smaller than the FOV, they will even see it in its entirety, without any clipping that might spoil the illusion that it’s really in the room with them.
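The trigger condition – wait for the user’s gaze to settle on the crack before breaking the wall – boils down to a simple angular check. Here is a minimal sketch in plain Python; the function name and vector inputs are hypothetical, not a HoloLens API, and in practice you would feed it the head position and forward vector your engine provides each frame:

```python
import math

def is_gazing_at(head_pos, head_forward, target_pos, threshold_deg=5.0):
    """Return True if the target lies within threshold_deg of the
    user's head-gaze direction. All vectors are (x, y, z) tuples."""
    to_target = tuple(t - h for t, h in zip(target_pos, head_pos))
    mag = math.sqrt(sum(c * c for c in to_target))
    fmag = math.sqrt(sum(c * c for c in head_forward))
    if mag == 0 or fmag == 0:
        return False  # degenerate input: target at head, or zero forward vector
    dot = sum(a * b for a, b in zip(head_forward, to_target))
    # clamp to [-1, 1] before acos to dodge floating-point drift
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (mag * fmag)))))
    return angle <= threshold_deg

# Head at the origin looking down +z; crack 2 m ahead, slightly right:
# the offset is only ~3 degrees off-axis, so the check passes.
print(is_gazing_at((0, 0, 0), (0, 0, 1), (0.1, 0, 2.0)))
```

You would poll this each frame once the crack is visible, and fire the break animation the first frame it returns True (perhaps with a short timeout so the wall still breaks if the user stubbornly looks away).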
This technique can be applied to general interface elements too. If it’s important for a user to see a certain bit of information, a brief indication of where it will appear gives the user the fraction of a second needed to move their gaze towards it.
Remember also not to move things faster than the user’s ability to track them – the user will naturally keep interesting scene elements centered in their view as long as they don’t whizz off too quickly.
Consider dynamic sizing for GUI elements.
While it will not always be possible or appropriate, it is worth considering if scene elements can be sized to make them more likely to keep within the frame. GUI elements are a good candidate for this.
For example, consider a marker on the ground:
This doesn’t look too good when close up.
However, if you keep the marker scaled in display-space rather than world-space:
It still works well as a marker, without cluttering the field of view or being clipped at the edges.
Fully 2D labels, meanwhile, can simply be aligned to stay within the FOV.
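Display-space scaling of a world-anchored marker comes down to one bit of trigonometry: pick the visual angle you want the marker to subtend, then recompute its world size from its current distance every frame. A minimal sketch (the function name and the 2° figure are my own illustrative choices, not anything HoloLens-specific):

```python
import math

def world_scale_for_angle(distance_m, desired_deg):
    """World-space size (in metres) an object needs so that it subtends
    a constant visual angle of desired_deg at the given distance."""
    return 2.0 * distance_m * math.tan(math.radians(desired_deg) / 2.0)

# A ground marker meant to always fill about 2 degrees of the view:
for d in (0.5, 2.0, 10.0):
    print(f"{d:>5} m away -> scale to {world_scale_for_angle(d, 2.0):.3f} m")
```

Applied each frame with the head-to-marker distance, this keeps the marker visually constant: small in world units when the user stands over it, larger when viewed from across the room.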
Ensure the user can locate stuff
On a more practical level, in some situations it might be important for the user to be able to quickly locate virtual elements outside their immediate view.
As humans our wide FOV helps us detect movement or changes in the environment from a wide range of angles. The virtual overlay in current HMDs, however, is just focused on the area in front of the user.
For this reason, additional cues should be provided to help locate elements outside the field of view.
The most obvious possibility here is sound. Fortunately the HoloLens has good 3D spatial sound, letting users find elements by ear alone. We can hope the same will be true of future AR HMDs.
In addition to sound, carefully made targeting or “radar like” visual indicators could be used.
It might take some experimentation in order to achieve a good balance between minimal screen use and intuitive indication of where something is.
One way to do this would be a depth-aware target, with prongs indicating the direction of, and distance to, the object:
(crude mock up)
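The core of any such indicator is working out which edge of the view it should sit on and how far off-screen the target is. Here is a minimal sketch for the horizontal case only, using compass-style bearings; the function name, the 15° half-FOV, and the flat 2D treatment are all simplifying assumptions of mine, not measured HoloLens values:

```python
def edge_indicator(head_yaw_deg, target_bearing_deg, half_fov_deg=15.0):
    """If the target's bearing falls outside the horizontal FOV, return
    which edge ('left'/'right') the indicator belongs on and how many
    degrees off-screen the target is; return None if it's visible."""
    # signed angle from the view centre to the target, wrapped to [-180, 180)
    delta = (target_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    if abs(delta) <= half_fov_deg:
        return None  # already on screen, no indicator needed
    side = "right" if delta > 0 else "left"
    return side, abs(delta) - half_fov_deg

print(edge_indicator(0.0, 60.0))   # ('right', 45.0)
print(edge_indicator(0.0, 10.0))   # None
```

The returned off-screen angle could then drive the indicator itself – for instance, longer or more urgent prongs the further the user has to turn, with prong depth scaled by world distance to the object.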
Consider the path of motion.
If something is heading towards the user, it might work better to use a curved path towards the head, rather than a linear one:
This would help ensure the user sees it earlier, and it spends less time clipped at the edge of the user’s line of sight.
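One simple way to get such a curve is a quadratic Bézier, with the control point pulled out in front of the user so the object swings into the centre of view before closing in. A sketch, with the specific start/control/head coordinates purely illustrative:

```python
def quadratic_bezier(p0, p1, p2, t):
    """Point on a quadratic Bézier curve at parameter t in [0, 1].
    p0/p1/p2 are (x, y, z) tuples: start, control, and end points."""
    u = 1.0 - t
    return tuple(u * u * a + 2.0 * u * t * b + t * t * c
                 for a, b, c in zip(p0, p1, p2))

# Object starting off to the user's side, curving through a control
# point placed in front of the user (on the gaze axis) before arriving.
start   = (2.0, 0.0, -1.0)   # beside and slightly behind the user
control = (0.5, 0.0,  3.0)   # swung well out in front first
head    = (0.0, 0.0,  0.5)   # just short of the user's head

path = [quadratic_bezier(start, control, head, i / 10) for i in range(11)]
```

Sampling `t` each frame (or easing it for nicer pacing) moves the object along the curve; compared with a straight line from `start` to `head`, the midpoints sit much closer to the gaze axis, so the object enters the FOV sooner and stays there.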
Finally… Don’t worry!
Remember: content is king – if your application is immersive and offers a gripping experience the user will be paying less attention to things that don’t quite fit.
These techniques are just there to ease the transitional period while the user gets used to the experience, and to ensure things aren’t obviously out of place. Once the user is hooked on your experience, it largely doesn’t matter. As humans we quickly adapt and get used to new experiences – and as developers we should have fun making them!