After so much waiting and so much hype, consumers can finally get their hands on VR. You'll probably need a lot of patience while you wait for your order, and there may be plenty of complaining along the way, but at last you can actually buy it, and that is what matters.
This is part three of a month-long series on VR. Check out parts one and two.
There's nothing new about Virtual Reality. It isn't some obscure technology that has been hiding in dark basements all this time. I remember playing Heretic and Doom on VR systems in the '90s; but now, thanks to many factors, VR tech is (almost) reaching the point of being accessible to the common man.
Currently, there's a handful of companies trying to push their own take on the VR dream. Some even sit between the blurred lines of the industry, like Microsoft's HoloLens, which is more Augmented Reality than Virtual Reality but rides the momentum of the Not-Real-Reality zeitgeist. Most of these efforts will die, and that's OK. Everyone is pitching their own ideas, and in the end, the most convenient and popular will survive. Or maybe not (looking at you, VHS and Blu-ray).
How do we interact with VR?
Whoever wins the arms race, what we really expect to reap are behavior standards. That is what concerns us as experience architects. Right now, all VR efforts have pretty much only one behavior in common: you can turn your head and perceive the virtual or augmented environment around you. All the other basic elements of the experience, and how you interact (if you can) with that environment, vary. We're not talking about some basic button to push here; we're talking about technology that aims to compete with reality itself.
VR needs an equivalent of the Xerox mouse or Apple's touch gestures on the first iPhone: a way to interact with the world VR is promising that is easy to use and quick to learn. One might think that because VR aims to involve the entire self, body gestures are the obvious way to go. After all, finger gestures quickly became a standard, so this should be the next "step." But time has proven that it's no fun to waste energy waving your arms around to perform a simple action that could also be done with a small finger movement, like a tap or a mouse click.
Right now there are plenty of options vying for the title of standard, or at least best option: from complete stations that let users feel like they're walking in place, to motion sensors and simple hand controllers. All of these have been borrowed from the world of videogames, a world that has been dreaming of VR for decades.
Since the Nintendo Wii popularized the revolutionary "Wiimote," it has seemed possible to create decent body interaction for potential VR systems. But that's for games, and you can only play boxing like that for so long before you get tired. Some experiences require less physical immersion and more of a sensory experience.
If VR wants to appeal to any kind of audience and be a platform for a wide range of products, it needs to start simple before it gets complex. I bet somebody will want the complete "Lawnmower Man" battle station at home, but for now companies may need to aim low to hit high, and stop pretending to replace reality itself.
Still, VR remains a technology with the potential to be as immersive as anything anyone has ever experienced, so even simple interactions need to live up to the promise the device is making. We don't want to break immersion by using the same Xbox controller we use for games on a flat screen. We want interactions that can range from a jump to the movement of a finger, compact and precise enough that you don't have to move the kids out of their room to build a VR space where you can do all that.
Maybe Facebook and Oculus have a different vision of what the market wants than Valve and HTC do, but for now, what we need is to start seeing the results of all those ideas reach the public, and see how they (and we) react.
In these infant times of an old technology, we can't demand epiphanies about how things are going to be forever. We expect evolution, and as designers, it is our duty to give those ideas challenges and opportunities, to embrace chaos, and not be afraid to fail. It's time to aim for the moon; we'll reach for the stars next… from the comfort of our sofas… using our VR headsets.
Let us know what you think about User Experience in VR by tweeting @zemoga!
The future of connected devices is going to follow the same path as other innovations before it: expect it to get worse before it gets better. After that, wearables will be as ubiquitous as smart phones.
We dove into the subject at the Society of Digital Agencies' "Meaningful Connections" session in April and learned why you should jump on the connected devices bandwagon.
Creating the next big thing.
Right now, the market of connected devices is defined by experimentation. There are connected toothbrushes that tell you if you’re brushing your teeth correctly, a slew of watches and wristbands that monitor your heart rate and steps, and endless other digital objects that are connected either to your phone or to each other. Some of these are genuinely creative, but don’t fit in well enough to be widely used. It’s reminiscent of the early days of smartphones. Companies eventually found what worked, but not before unleashing abominations like the Nokia N-Gage, a cell phone that played video games and took calls (but wasn’t any good at either), to the public.
We’re in that awkward stage again. A lot of these devices are either telling us things we already know or things we don’t really need to know. We have smart watches that give us notifications on our wrist instead of our phone. The last thing anyone needs is more notifications.
The challenge is compounded by the fact that most software companies aren’t also hardware companies, and connected devices require hardware.
What can you expect right now?
At a time when the cell phone has already replaced the tradition of carrying a watch, wearables (especially smart watches) shouldn't be treated as just another shiny new gadget to discover. Smart watches are at their best as extensions of a phone rather than replacements, and that's OK. We don't want expensive but less powerful devices trying to compete with today's smartphones.
In the same way that we used to glance at our watches to tell time, wearables work as handy notification windows. They're primarily for those who don't want their hands and eyes locked on their cell phone screens all the time. Weather, incoming emails, text messages, music controls and even Facebook posts (believe me, you don't want to do that) are easily accessed on the smartwatch screen.
Is it worth it? It is, if you're still eager to wear a watch on your wrist. If you aren't, the rest isn't going to sell the experience for you. It's nice to glance at your Runtastic data while jogging, or dial a number during a busy commute without taking your phone out of your pocket; but it's not much use at the office or at home, where that extra reach isn't a problem. Smart watches need to keep paving their way to serve not as complex gimmicks but as fashion statements that capture data and connect in a useful way to the rest of our digital selves.
Finding Untapped Potential.
The future of connected devices is in providing value while seamlessly integrating into everyday life. We're seeing bits of promise in certain devices right now, and as tech companies, we've got to pick out the pieces that work and modify them until we get it right.
Here’s what we do know: we’ll no longer be developing just for phones. We’re going to have to make everything adapt to lots of different screens on lots of different devices. The design will need to be simple and intuitive enough to support watches, glasses, and more. Most importantly, any obstacles that get between the user and the information they want must be kept to a minimum.
Do you own any wearable tech? If not, why not? Let us know!