Jumpstart: Sleep Better. Live Better


As part of PSFK’s “The Good Data Contest,” Zemoga wanted to create something that went beyond today’s connected classrooms: an app that is fun for kids to use, while parents, teachers and administrators gain valuable information they can use to improve the education process. To that end, we created Jumpstart, a fully integrated, dynamic app that allows students, parents and administrators to work together to form a more comprehensive view of an individual child’s health.

Let us present to you, Jumpstart:


The Set Up:

It’s not enough for classrooms to simply be connected. As a parent, trusting in your child’s classroom experience is a requirement, but transparency into the events of the day is a luxury. Understanding the moments that matter and sharing simple insights with teachers lead to the small but significant day-to-day course corrections that improve your child’s experience. Today that kind of interaction and collaboration is infrequent, and its absence contributes to poor performance by the child and low satisfaction for everyone. This can change with Jumpstart.

Beyond the technical fabric that brings teachers, students and parents together, greater classroom connectivity needs to do something relevant for us. Imagine a system of engagement that knows the conditions of the day (weather and community medical alerts), catalogs the reporting by parents (your child’s sleeping, eating and illness), observes the moments that matter through the day (playground activity and energy levels) and infers from all those contextual cues the recommendations that can make the next day and the learning experience better for your child.

Machine learning, computer vision, always-on intelligent sensing, indoor/outdoor positioning, and many other innovations will be part of the roadmap for the connected cognitive classroom. However, the starting point for Jumpstart can be much simpler, correlating just a small set of initial indicators: sleep, diet, weather, public medical alerts and physical motion.

The Problem:

The country is currently focused on trying to improve school performance, with standardized testing under scrutiny along with the overall quality of classrooms. Blame for poor experiences and low results is usually spread across the board among students, teachers, administrators and government. One thing is abundantly clear: healthy and engaged kids perform better.

It’s hard to measure the level of “health” in a child, or the healthiness of interaction in a classroom. But we know one thing for sure: a child who gets proper rest performs better. The abilities to think, react, remember and process tasks are all stronger when a child gets the appropriate amount of sleep. Yet research in this area remains academic, and its techniques are rarely practical enough to implement in an ordinary classroom.

On top of that, there has been little or no path to making the insights from such studies “actionable.” Simply tracking and correlating the day-to-day experiences of a child will surface buried information that is helpful to parents and teachers, and encouraging to the child too. This is where Watson comes in.

It’s one thing to know sleep is tied to performance, but how do you inform parents and encourage better behavior?


The Scenario:

Imagine a connected school that gave teachers, administrators and government a better understanding of the “climate” of their students through observation. Imagine a connected school that gamified the experience for kids, helping them make better decisions about their sleep habits and performance. Imagine both teachers and parents having a better understanding of the unique drivers that will improve the experience of each individual child. Imagine the impact on a teacher of having the aggregate view that most of her class got five hours of sleep and that a flu bug is circulating: knowledge that could help her change tactics for the day, whether in how she engages the full class or in how a lesson is tailored for one individual child.

 

The Inputs:

Teachers:

The Jumpstart app for iPad serves to take attendance and to record the teacher’s perceived energy level for each student. In the near future, other inputs like grades can be plugged into the system to make it smarter.

Students:

A BLE-enabled device monitors sleep and activity, while an Apple Watch version of Jumpstart lets students make simple input decisions. At the same time, it is the basis of a simple gamified system that awards XP (experience points) based on the student’s behavior and lets him or her redeem points for practical rewards previously set by the parents. Over time this can expand to additional, specialized wearable devices offered at several different price points.
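To make the gamification concrete, here is a minimal sketch of what the XP ledger behind that system could look like. The event names and point values are hypothetical illustrations, not part of the actual spec (only the 3,000-XP walk appears later in this post):

```swift
// Hypothetical event types and point values, for illustration only.
enum JumpstartEvent {
    case walkedToSchool, healthyLunch, classParticipation, bedtimeMet
}

struct XPLedger {
    private(set) var balance = 0

    private let values: [JumpstartEvent: Int] = [
        .walkedToSchool: 3000,      // matches the scenario below
        .healthyLunch: 500,
        .classParticipation: 250,
        .bedtimeMet: 1000
    ]

    // Award XP when a tracked behavior is observed.
    mutating func award(_ event: JumpstartEvent) {
        balance += values[event] ?? 0
    }

    // Redeem XP against a reward whose cost the parents set in advance.
    mutating func redeem(cost: Int) -> Bool {
        guard balance >= cost else { return false }
        balance -= cost
        return true
    }
}
```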

Parents:

Parents interact with Jumpstart for iPhone to plug in kids’ bedtimes and a handful of simple daily inputs. It also works as a tool that provides quick snapshots of kids’ weekly performance, food recommendations, and so on.

CMS/Admin:

A secure interface built to manage local automation and workflows around all input sources, and to provide the front-end user interface to Watson’s analytics and feedback.

Additional data feeds directly into the CMS, depending on the specific school:

  • Health alert information
  • Weather (via public APIs)
  • School-wide activities
  • Testing days

Beacons:

Positioned in public areas like the hallways, gym and playground, beacons monitor overall movement and frequency of visits. When students are in the gym, are they sitting on the bleachers or actually being active? Are they going to the soda machine? If so, how often?
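On iOS, this kind of presence tracking maps naturally onto Core Location’s iBeacon region monitoring. A minimal sketch, assuming each school shares one beacon UUID and uses the major value to distinguish areas (the UUID and area scheme here are illustrative):

```swift
import CoreLocation

class BeaconLogger: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    // Example UUID; in practice each school would provision its own.
    private let schoolUUID = UUID(uuidString: "E2C56DB5-DFFB-48D2-B060-D0F5A71096E0")!

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization()
        // One region per monitored area (gym, cafeteria, hallway...).
        let gym = CLBeaconRegion(proximityUUID: schoolUUID, major: 1, identifier: "gym")
        manager.startMonitoring(for: gym)
    }

    // Each enter event is one "visit"; counting events per area and per day
    // yields the movement and frequency data described above.
    func locationManager(_ manager: CLLocationManager, didEnterRegion region: CLRegion) {
        print("Entered \(region.identifier)")
    }
}
```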

The Outcome:

All this data would get crunched and correlated by Watson, which would provide valuable insights about a child and recommendations for improving daily habits. Given the fuller view, a parent can recognize that the child needs to get to bed 20 minutes earlier or eat better; the teacher can encourage students to “look alive, let’s have fun and grab a healthy, energizing drink or some fruit” rather than nod off in Mrs. Anderson’s class. The administration can tune schedules to compensate for the perfect storm of exhausted kids, a flu outbreak and a week of standardized testing.
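As a toy illustration of the kind of relationship such a system would surface (this is not Watson’s API, just the underlying idea), here is a Pearson correlation between a made-up week of nightly sleep hours and next-day energy ratings:

```swift
// Pearson correlation coefficient between two equal-length series.
func pearson(_ xs: [Double], _ ys: [Double]) -> Double {
    let n = Double(xs.count)
    let mx = xs.reduce(0, +) / n
    let my = ys.reduce(0, +) / n
    var cov = 0.0, vx = 0.0, vy = 0.0
    for (x, y) in zip(xs, ys) {
        cov += (x - mx) * (y - my)
        vx  += (x - mx) * (x - mx)
        vy  += (y - my) * (y - my)
    }
    return cov / (vx.squareRoot() * vy.squareRoot())
}

// Made-up sample week: sleep hours vs. teacher-rated energy (1-5).
let sleepHours  = [8.5, 7.0, 6.0, 9.0, 5.5]
let energyScore = [4.0, 3.0, 2.0, 5.0, 2.0]
print(pearson(sleepHours, energyScore))   // ~0.98: a strong positive correlation
```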

Helping parents, teachers and students better understand their routines and behaviors is key to unlocking better performance. Encouraging balance through rest, education and play matters greatly, not just in school but in life. We see sleep as just the first step in unlocking the potential of each individual.

 

A DAY IN THE LIFE OF A JUMPSTART STUDENT

Sleep is important because it affects our ability to engage with the world, and nowhere is that more important than with school-aged children. With Jumpstart we want to empower kids to inform the world around them about how they’re feeling. By having Watson understand their sleep patterns and various other inputs, we will help create an environment in which a kid can better engage with others.

06:00:

Tom is woken up by his Apple Watch, which calculated the appropriate time to wake him. He inputs his general mood upon waking and begins his day. This info goes out to all other participants (parents and teachers) and shows how many hours of sleep Tom got, which helps align expectations. (We understand that no kid is fully awake at 6 a.m., but some days a kid is more eager to go to school than others. Capturing the kid’s general attitude and mood at the beginning of the day is important, especially as it relates to sleep.)

06:30:

As Tom is about to sit down and eat breakfast, his mother or father inputs their perceived energy level for their child. They can also add information such as what he ate and his perceived mood. It’s important to begin to understand: does he seem stressed? Upset? Excited? All this information goes to the CMS and Watson, and Chef Watson suggests healthy breakfast ideas based on previous data.

07:00:

Tom makes his way to school. He knows he can unlock XP for participating in Jumpstart, which gives him access to rewards predefined by his parents with the help of Watson. Tom racks up 3,000 XP from the walk.

08:30:

Tom makes it to his first class, history. His teacher uses the Jumpstart app to take attendance and input her perceived energy level for each student. She can also see what Tom and his parents entered that morning before class, which helps her adjust tactics if she knows Tom is feeling anxious or excited. Tom has been very participative today, so his teacher awards him some XP.

12:30:

Jumpstart has passive beacons throughout the school, which provide a high-level view of whether Tom skips lunch, only sits on the bleachers in the gym, or visits a soda machine four times in a day. Additionally, using the Passbook app on the Watch, Tom can pay for food at the vending machines or the school cafeteria. Thankfully, Tom got lunch today and went with the healthier option, which the system registered, unlocking more XP.

17:00:

Tom wraps up his day by trading in some XP for play time with his favorite video game, Destiny. As Tom prepares for bed, both he and his parents input their perceived energy levels. The app also informs Tom that if he gets to bed by 11 he can unlock more XP.


Potential Hurdles:

The largest hurdle in our minds is lack of participation. As with any good survey, the best way to ensure success is to gather as many inputs as possible. This is why we wanted multiple input sources: the actual student, teachers, parents and beacons. We think a device like the Apple Watch will help incentivize kids to participate, and we want to reward their efforts as well, which is why we included awarding XP they can spend on a gaming platform like Xbox. Leveraging the Apple Watch and an Xbox, in our minds, helps kids feel empowered, which is important. We want each child to know it’s THEIR participation that makes a difference. Letting them inform their world about how they’re feeling is crucial.

  • Lack of participation
  • Cost – Apple Watches can be expensive at scale

 

CONTACT: Chad Rodriguez. chad.rodriguez@zemoga.com

CONCEPT LEAD: Chad Rodriguez

TEAM MEMBERS: Juan Diego Velasco, Paul Magnone, Sebastian Zamora

 

Zemoga At AngelHack


On July 4th and 5th, I had the privilege of participating as a judge in the Third Annual Colombian AngelHack hackathon. It was part of the Eighth Global Series, celebrated across 95 cities in over 65 countries. The winners from each city go on to a 12-week accelerator program and a trip to the Global DemoDay in San Francisco to pitch their ideas to some of the best-known investors, startup accelerators and incubators in Silicon Valley.

A hackathon is, essentially, a 24-hour event where coders, engineers, designers, entrepreneurs and tech-minded people come together for intense coding. Ultimately, contestants transform a nascent idea into something useful that can “wow” the judges and attract investment.

The judges were a selection of recognized Colombian entrepreneurs, technology journalists, CEOs and technology directors. We were in charge of selecting the winner based on four criteria: product/solution, technical chops, execution and design.

Many of the teams already knew each other and came with a clear idea of what they wanted to present, while others were there to pitch their ideas to the other enthusiasts in hopes of recruiting them. Some of the projects included websites and mobile applications that attempted to disrupt online deliveries, a semantic search engine for products and services, a geolocation-based chat for tourists, and a to-do list engine aware of your friends’ pending tasks.

At the end of the second day, after many hours of deliberation, the winners were chosen. In third place was AyDoor, a delivery and services platform. Second place was taken by Poof, a semantic search engine for products and services. Finally, first place went to InfiniteLoops, a platform that automates and executes repetitive tasks and scripts for developers.


Overall, it was a great experience, and the amount of talent on display was really encouraging. Some of the major conclusions I took away from AngelHack:

– Work on something that does one thing really well. Show it off and focus on it in your demo. Filling out long forms and walking through lengthy user flows are demo killers.

– Merits matter. You have to plan, build, and rehearse a successful demo and presentation. You can slave away all night, but if you can’t clearly express your project you won’t have a chance of winning. Don’t waste time talking about pieces of the project that aren’t essential to the crowd’s understanding of what you have built. Get through your demo.

– Think your idea through. Most projects and teams tend to fade away after the hackathon weekend, and staying power is traditionally one of the criteria the judges weigh when voting. If you are really passionate about your idea, keep at it regardless of the event’s results.

– Tell a story. If you think you are solving a problem (big or small), tell everyone about the problem and how you solved it. People like stories. If you are solving an interesting problem and have a cool, working product, you have a higher chance of winning something.

– It may sound obvious, but setting clear objectives for your attendance at the hackathon ensures you get the most out of the short time.

– Have a plan B and even a plan C. This was repeated over and over to the participants, yet many relied solely on the success of their pitches and demos. Unfortunately for some, this wasn’t a successful strategy.

 

Being a judge was a privilege and an awesome learning experience. You get to hear really fresh ideas and approaches from technology leaders and extremely enthusiastic participants. The culture and products this event helps build are great for countries like Colombia. There is a good talent pool here, but not enough confidence or entrepreneurship to help all these talented people take their ideas further. Hopefully, more events like AngelHack will allow these remarkable individuals to bring their ideas, creativity and talents to the forefront of the tech industry.

 

-Carlos Arenas, Mobile & Application Development Director

 

Can We Trust the Natives?

When it comes to native advertising, everyone has an opinion. There are those who hold user experience in the highest regard. This crowd argues that native ads let users gain real value by consuming and interacting with unobtrusive content that interests them, unlike display ads, which can distract from the key reason a user visits a given site. The camp on the other side, however, argues that native ads can be misleading, since they tend to blur the lines between sponsored content and editorial content.

Either way, the core of the controversy seems to come down to one universal concept: trust.

Brands can attempt to build trust by offering up custom-tailored content to users. However, it can be argued that publishers violate that same sense of a user’s trust by subtly placing branded content within an editorial context, and by so doing, “tricking” users into reading their advertisers’ content.

Statistics supporting native are thrown around and touted all over the digital environment these days, citing higher engagement, brand lift and purchase intent, to name a few metrics. After a little Googling, one will find that the biggest lift for advertisers seems to come in the form of brand awareness, which can lead to the perception that the traditional goals of driving a user to some sort of conversion or transaction might become secondary to simply entertaining users and gaining their trust, and thus their loyalty. Whether this is a long-term benefit to brands is yet to be seen, but the signs are there, and they’re pointing to a new path through which brands can gain this loyalty from consumers.

Eventually, we will get to a point where it is clear to users that they are viewing branded content, but it won’t matter. If, in fact, the content is interesting enough for a user to spend precious time engaging with it, then it seems like a fair trade. The amount of creative freedom and innovation this new approach allows for is sure to bring digital advertising to new heights. The compromise will be the need for brands’ and consumers’ patience and tolerance while advertisers let the approach mature, gauging its overall effectiveness and, more importantly, consumers’ reaction to and acceptance of it.

Throughout the month of July, we will be examining the concept of native advertising and branded content from several different perspectives, as more and more brands begin to focus on creating branded content. We look forward to keeping an eye on this trend as it grows and transforms, moving toward a point where editorial and branded content are almost indistinguishable from one another in the eyes of the user. Z-Team out!

 

A.I….Oh My

Where does one begin with AI….

There are so many places it can go and so many different things it can affect: workforces, cars, web browsing and beyond.

For Zemoga it’s all really interesting stuff, and as usual our fascination with this technology stems both from personal curiosity and from how it applies to the products we build. The biggest thing for us is machine learning, especially because we work with a lot of retail, publishing and healthcare companies.

Removing friction is a pillar of the things we build. Every click and every movement through a site or app has to have purpose and direction. It’s about much more than making a product “pretty,” although that helps. The idea of machine learning, like what IBM is doing with Watson, is really interesting, not because it’ll destroy the world, but because it’ll help remove a few “clicks” in both my physical and online worlds.

That’s compelling.

Think about the routines of your day. We all have different ones, but imagine the little bits of incremental time consumed by your daily “clicks”: waking up, starting your coffee, jumping in your car or taking the subway, each with varying degrees of interaction.

You see machine learning in small ways today: your watch telling you to get up and move, or Google Now saying “time to leave in order to get to the restaurant before it closes.” These are all “small” things, but they provide bits of value that literally give you back time. Brands will get better at leveraging this tech because it can help them move away from being “salesy” with their customers. Let the AI figure out the appropriate moment when, quite practically and literally, you just need a new shirt. This allows the brand to offer genuine value and rethink what it means to be “loyal.”

This AI/machine learning tech is useful because we have actual needs. When a brand better understands those moments of actual need, it knows the RIGHT time to talk, which is important. Most advertising follows the “fishing with dynamite” model: the brand sits in a boat, lights a stick of dynamite, throws it into the water, the dynamite explodes and some fish float to the surface. That’s how brands currently communicate. They don’t know how to speak directly to you; they just aren’t good at it, so they have to, quite literally at times, disrupt your day to make sure you see them.

This is why certain types of AI are fascinating. We get hyper-focused on the car or the robot, but those are the larger, visible versions that will take time. The great first step is the behind-the-scenes AI for our everyday lives, simply removing friction and giving us back bits of time to focus on things that matter. It’s an exciting time for sure, and the space is changing quickly, but at the end of the day we all want more time, and this is why AI isn’t going anywhere.

 

The Future is Now: AI is Here

 

Artificial Intelligence: for some, those two words conjure up images of a dystopian future, a “robot apocalypse” set in the not-too-distant future, in a world where humans have failed and machines have prevailed.

For others, this is an opportunity for mankind to put decades of collective learning to the test, and to find out if humans can create something more intelligent and complex than ourselves.

Either way, there is no denying that this divisive issue consistently lands at the top of the “Hot Topics” list in the tech industry. It has been the topic of TEDx Talks, Hollywood films, and even Vanity Fair articles.

Whether it’s Elon Musk and Stephen Hawking asserting that hostile AI is going to destroy the world, or Ray Kurzweil speaking of the inevitable singularity… one thing is certain. This new technology, if approached in the right manner, has the potential to change the way we view the world and the information around us, and it will impact almost every industry imaginable, from agriculture and education to finance, automotive and even medicine.

The fact that great minds like these are having an extended dialogue on the topic signals the brave new world this form of technology is entering. One thing these thought leaders can all agree on is that we humans must tread carefully as we continue to develop and deploy game-changing technologies that could leave us in the dust once they realize they have the ability to do so. Broadly speaking, AI will solve some major problems, not the least of which could include the eradication of disease and poverty, while causing others that have yet to be seen.

So… what does this mean for the general public? Over the next few weeks, Zemoga will be taking a deeper look into this exciting but mysterious voyage we’ve embarked on, and we’ll offer up different insights on how AI and machine learning can be used for the greater good. We’ll explore IBM’s Watson and similar existing technologies, while looking into our crystal Z-ball to see what’s next. Be on the lookout! Follow us on Facebook and Twitter to keep up… or we will send our robots after you… they know where you live.

 

 

Will the real value of that Apple Watch app please stand up?

When I started to think about writing a piece on Apple Watch I was certain I would talk about the device from a rather skeptical angle. All the annoying buzz, added to the fact that I’ve not used a wristwatch in more than 15 years, had me very pessimistic about the success of this gadget. However, at some point last week, in the middle of the process of actually writing the article, I finally had the chance to really think it through and realize the potential of new, paradigm-breaking devices like this one.

The iPhone’s sidekick

Many out there are frustrated by the fact that Watch apps are not 100% standalone and depend (at least partially) on an iPhone. This is true because the technical architecture of this first generation of the Apple Watch is very limited in terms of how much load you can put on it. It also has to do with the purpose this kind of device is supposed to serve, from a strategic point of view.

It’s a companion device. It’s a peripheral to the CPU that is the iPhone. Think of it as a sidecar for a motorcycle, as a Robin for a Batman, as a Garfunkel for a Simon. [EDITOR’S NOTE: We have banned JDV from all future analogies]. You can either look at it as a very weak and limited companion or as a companion with special abilities.

Now, the question many are asking is: does the iPhone need a sidekick? Well, iPhones are great, and recent versions are capable of editing video, streaming audio and processing 3D graphics at unbelievable speeds. However, they’re sometimes too much when all you need to do is check the time, read that Twitter mention, check the weather, see the score of your team’s game or find out the name of that song Pandora’s playing and maybe save it as a favorite. Reaching into your pocket to pull out your phone and stare at its bright giant screen when you’re with people not only sounds like a lot of effort; more importantly, it has a social cost, one that is becoming more real with time. In the future you will want to remain connected, but you will definitely prefer to be more discreet, if possible.

Let’s build the f… out of that Watch app now (or maybe not)

Does all this mean that every product should have a Watch companion? No. As happened with the inclusion of widgets in the Notification Center in recent versions of iOS, every other app rushed to conquer that special place on their users’ screens, but few really had a reason to do it. For a product designer, this decision has to come from the users and what’s important to them. If that shiny feature you pushed onto your users’ watches has no value in that context, or is simply impossible to operate, those users will punish you by ignoring it, or worse.

Finding the value

We at Zemoga keep talking about user experiences and how they are larger than what happens within the limits of an app’s screen. Think of an experience as a huge flying spaghetti monster, with a bunch of tentacles that belong to the same creature but extend to reach and connect to other undetermined places. This sounds complex because user experiences are complex.


Say you have a pretty iPhone app on the App Store. It’s never just about the app living as a standalone thing. Just take two minutes to think outside that box and visualize the entire experience around your users’ interaction with that app:

  • What makes them download it?
  • What motivates them to keep checking it over and over?
  • How do they learn about new activity within the app?
  • How do they let the world know they’re using it?
  • In which contexts do they interact with it?
  • What are they thinking/feeling in those moments?
  • What else are they doing at the same time?
  • What did they do before interacting with your app?
  • What will they do afterwards?
  • Etc…

Asking yourself questions like these, talking with actual or potential users, and mapping the entire experience are all good exercises for understanding the user in a more holistic way and building an inventory of the many touchpoints where your product can provide value. Those touchpoints give you a better understanding of your users’ needs in different contexts and can be translated into opportunities to extend the functionality of your product to other devices like the Apple Watch.

It’s all about context (a practical example).

Let’s think of an airline and its iPhone app, which probably offers all the functionality you would expect to find in a decent airline’s app: the ability to book a flight, deals, information specific to your mileage program, an option to check in, information about your flight, access to your boarding passes, etc. It looks like a very solid app, covering all the basics and more.

Now let’s take a step back for a second and think about the experience of flying, which is full of touchpoints: moments in which the customer is interacting in any way with the airline and with the experience of flying with that airline. The moment you get to the terminal and check in, when you’re going through security, that block of time before you board the flight (maybe you go to the food court, maybe you ask an attendant for an upgrade, maybe you’re running from one terminal to another to catch a connecting flight), the moment you’re finally at the gate, the moment you’re about to board the plane, and so on.

All these touchpoints are opportunities for our airline to add value through different devices according to whatever makes most sense: could we say, for example, that right after you’ve gone through security, your phone (the one you used to purchase your tickets) takes a backseat and your watch becomes your go-to device from that moment on?

  • Could your watch give you a visual confirmation that your flight is on time and help you find the terminal and gate for your next plane?
  • Could the watch give you a nice alert when your group is ready to board?
  • Could its screen stay on to show a QR code the gate agent can scan to let you board the plane? (See the sketch below.)
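That last one is already within reach today. Here is a minimal sketch of the phone-side piece, using Core Image’s built-in CIQRCodeGenerator filter to render a boarding-pass payload as a QR image; the payload format is hypothetical, and on a first-generation Watch the paired iPhone would generate the image and hand it to the watch extension:

```swift
import UIKit
import CoreImage

func boardingPassQR(from payload: String) -> UIImage? {
    guard let data = payload.data(using: .utf8),
          let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    filter.setValue("M", forKey: "inputCorrectionLevel") // medium error correction
    guard let output = filter.outputImage else { return nil }
    // The native output is tiny; scale it up so it stays sharp on a small screen.
    let scaled = output.transformed(by: CGAffineTransform(scaleX: 10, y: 10))
    return UIImage(ciImage: scaled)
}

// Hypothetical payload: record locator plus flight and seat.
let qr = boardingPassQR(from: "ZMG123|FL456|SEAT12A")
```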

We’re covering pretty much the same functionality you already had in the airline’s app, but now some of it is distributed to the device that is most convenient for you in that context and at that moment. We’re improving your overall experience as a traveler, giving you extra reasons to choose this airline again next time. We’re creating loyalty through user experience.

Let’s recap.

There’s a huge opportunity in creating experiences that live on this new device, but not every single idea will succeed. Product designers need to be smart, think of the entire experience their product can provide and, as always, find smart ways to distribute that experience among different devices (desktop, laptop, TV, tablet, phone, and now watch), always weighing the relationship among task, device capabilities and context.

As for the things you want your users doing on a Watch, it just requires a bit of common sense: asking Shazam to find the name of the song that’s playing in that bar? Yes, please! Buying plane tickets and booking hotels for that multi-city trip around Asia? Hmmm, maybe not yet.

 

Apple Watch: It’s about time

It’s an interesting time we live in. A lot of the outlandish things from futuristic movies of the past are legitimately coming to fruition: the VR headsets of Demolition Man and The Lawnmower Man, AI like that of Terminator and Short Circuit, and the personalized tech of Dick Tracy and Star Trek in today’s smart devices. Yes, there is even a flying car now; granted, it’s not as cool as something from The Fifth Element, but hey, we’re getting there. It’s all very wild to think about.

Our culture’s current obsession is, of course, the smartwatch, and one watch in particular: the Apple Watch. If you don’t think this watch is a big deal, here’s a list of reviews of this product written just today:

The Verge

The Wall Street Journal

Business Insider

The New York Times

That’s just a few. I could probably add 10 more links. So what more can be said? There are literally thousands and thousands of words on this one little device. It’s also a super divisive topic. People either love this thing or hate it. It’s either “the future” or our “demise” as a people. Like any story, the truth probably lies somewhere in the middle.

Bottom line is this: This device is not going to change the world anytime soon.

The reason why: The world around it needs to catch up.

Most massive technological breakthroughs succeed not because of the “breakthrough” itself but because of the things surrounding it (the ecosystem) that allow the breakthrough to be meaningful. Think about it: the idea of a car is cool, but what makes it great and scalable? Roads. Microsoft created the tablet years ago, but it never took off. Why? Because a computer was still better, and America’s culture of “portability” didn’t exist 10+ years ago.

This is ultimately the Apple Watch’s fate. It appears to be quite a unique and valuable product, and that value will only increase as everything else around it catches up. My favorite article about this so far is one by M.G. Siegler comparing the Apple Watch to Disney’s Magic Band.

M.G. Siegler compared the Apple Watch to Disney’s Magic Band on Medium

 

The Disney Band is great because it’s such a passive device; it works without you having to really think. The Apple Watch needs to convince the world around it to become more passive too. You can see it starting to happen with things like “smart” locks, lights and hotel room doors. Those are all fun, but it’s not until things reach the Disney level, where I walk in and you know I’m there, where I can pick up some items, just walk out and be charged, that it becomes truly helpful.

That’s what this little device potentially means. It’s worth being slightly pragmatic at this point as well, because let’s be honest: a lot of those things can be accomplished with our current smartphones, so it’s often redundant to claim a watch can solve a problem we haven’t really been able to solve with our phones, which are actually much smarter than any of these watches.

Zemoga poked fun at the Apple Watch hype with iSquint.

As a digital agency, this is how we try to think about it as well. A good brand knows that a great website or app isn’t an island unto itself; it has to work well and make sense within a larger digital environment. The watch doesn’t just mean we can make better screen experiences for that device. It means we can create better physical-environment experiences. The websites and apps we develop can now, in time, have their reach extended passively into the physical world. It adds a new level of challenge to creating a great UX, because now it isn’t simply on a screen that you can walk away from.

So love it or hate it, the Apple Watch is here, and it’s not going anywhere. We are planning on ordering quite a few to begin thinking through what it all means, not just for the digital world, but for the physical world around us. Be sure to follow us this month as our team dives into the different aspects of this shiny new toy.

Let us know what you think! Tweet us at @zemoga or follow us on Facebook.

 

 

Virtual Reality: Made in milliseconds

by Camilo Soto

This is part four of a month-long series on VR. Check out parts one, two, and three.

There are tons of virtual reality headsets coming out in the next year. How do they trick your mind into thinking you’re looking at a 3D space? What are the specs you need to make sure you don’t get sick?

Made in milliseconds

3D is produced by showing each eye a slightly different image, simulating what your eyes would normally see and tricking your brain into thinking you’re present in a digitally produced environment. It sounds pretty simple, but to actually let a person see what they would naturally see in a different environment, the image they are looking at needs to be updated between 40 and 60 times per second, while keeping track of exactly where they are looking.

This means that, in a matter of milliseconds, the headset needs to figure out the user’s position and orientation and pass this information to the computer, which in turn needs to produce an image matching what the world would look like from that specific point in space.
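To put numbers on it: at 60 updates per second the whole pipeline has roughly 16.7 ms per frame (1000 ms ÷ 60). Here is a minimal sketch of one iteration of that loop, with hypothetical pose-source and renderer types standing in for real tracking and graphics APIs:

```swift
import simd

// Hypothetical types for illustration: a tracked head pose and a renderer.
struct HeadPose {
    var position: SIMD3<Float>
    var orientation: simd_quatf
}

protocol PoseSource { func currentPose() -> HeadPose }
protocol Renderer { func draw(leftEye: HeadPose, rightEye: HeadPose) }

// One frame: sample the sensors, derive a pose per eye, render both views.
// At 60 Hz, all of this must finish in about 16.7 ms.
func renderFrame(sensors: PoseSource, renderer: Renderer,
                 eyeSeparation: Float = 0.064) {   // ~64 mm interpupillary distance
    let head = sensors.currentPose()
    // Offset each eye by half the eye separation, rotated into world space.
    // The two slightly different viewpoints are what produce the 3D effect.
    let halfOffset = head.orientation.act(SIMD3<Float>(eyeSeparation / 2, 0, 0))
    let left  = HeadPose(position: head.position - halfOffset, orientation: head.orientation)
    let right = HeadPose(position: head.position + halfOffset, orientation: head.orientation)
    renderer.draw(leftEye: left, rightEye: right)
}
```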

Sony’s Project Morpheus

Advances made during the last decade in a number of fields have made it possible to produce a sense of “presence” that would have been impossible before. High-resolution, high-quality display technologies, pushed forward by the mobile phone market, have made it possible to build portable “retina”-quality screens that look great even when placed mere inches from your eyes.

Valve claims no one has gotten motion sickness while using the HTC Vive.

 

Sensor technologies such as accelerometers and gyroscopes, miniaturized and optimized for use in motion controllers and mobile devices, make it possible to track movement and orientation very accurately and very quickly. Last but not least, advances made in CGI for the entertainment industry make it possible to create very realistic real-time renderings of imaginary environments. Put these elements together and you have a portable headset capable of actually making a person feel like they’ve been transported to another world.
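As a small sketch of how those sensors combine: a gyroscope integrates angular velocity (fast, but it drifts), while an accelerometer gives an absolute gravity reference (drift-free, but noisy). A classic complementary filter blends the two; the axis convention and blend factor here are illustrative:

```swift
import Foundation
import simd

struct OrientationFilter {
    private(set) var pitch: Float = 0   // radians
    let alpha: Float = 0.98             // trust the gyro 98%, the accelerometer 2%

    mutating func update(gyroRate: Float,       // rad/s around the pitch axis
                         accel: SIMD3<Float>,   // measured gravity, device frame
                         dt: Float) {           // seconds since the last sample
        // Integrate the gyro: responsive, but errors accumulate over time.
        let gyroPitch = pitch + gyroRate * dt
        // Derive pitch from where gravity points: noisy, but never drifts.
        let accelPitch = atan2(-accel.x, (accel.y * accel.y + accel.z * accel.z).squareRoot())
        // Blend: the gyro dominates short-term, the accelerometer corrects long-term.
        pitch = alpha * gyroPitch + (1 - alpha) * accelPitch
    }
}
```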

In the VR arms race, developers are trying to differentiate their hardware by claiming it has the highest refresh rate, the highest resolution, or the lowest latency.

Limitations on interaction

VR has been pushed forward very strongly by the gaming industry, but its application in gaming is still somewhat blurry. The reason for this is that even though you may be capable of creating the illusion of being in a fantasy world, you still need to solve the issue of allowing the player to act upon it.

This is where controllers come in, but you can’t exactly let players move around using their full bodies when they’re blind to the physical world they’re moving in. We’ve all seen the YouTube videos of where that can lead. Solutions have been implemented using harnesses, rolling floors and dedicated rooms that track the user, but those are far from practical and ages away from becoming mainstream enough that making content for these platforms sounds like a good business proposition. There will always be the classic controllers gamers are used to, but can you really expect them to get through a long play session without being able to see the controller they’re using?


That leaves us with a technology that allows people to immerse themselves in an environment, but with very little capability to do much more than spectate. That sounds an awful lot like another huge branch of entertainment, where you look but don’t touch: cinema. Therefore, some of the first popular experiences in VR are bound to be experiences where you’re just an audience, immersed in a world in which things are happening, but unable to act on them.

Initially, VR’s strength will be experiences that let users navigate environments that would otherwise be unreachable, allowing them to effect changes that would be impossible in real life. You could, for instance, let people walk through their new home before it’s built, changing the furniture, the paint or the time of day. The big question right now is whether the technology will see wide adoption for household use, or whether it will be the type of thing you run into at the mall as a curiosity.

The bottom line is that VR is just a platform, a new display technology, and the most defining factor in its success and meaningfulness will be the content and experiences available on it.

Have thoughts on VR? Have thoughts on our thoughts? Tweet at us @zemoga! Also check us out on Instagram and Facebook!

 

Virtual Reality: Let’s not replace reality… yet

This is part three of a month-long series on VR. Check out parts one and two.

There’s nothing new about virtual reality. It’s not an obscure technology that has been hiding in dark basements all this time; I remember playing Heretic and Doom on VR systems in the ’90s. But now, thanks to many factors, VR tech is (almost) reaching the level of being accessible to the common man.

Currently, there’s a fistful of companies trying to push their own tech toward the VR dream. Some even sit between the blurred lines of the industry, like Microsoft’s HoloLens, which is more augmented reality than virtual reality but rides the momentum of the not-quite-real-reality zeitgeist. Most of these efforts will die, and that’s OK. Everyone’s pitching their own ideas, and in the end the most convenient and popular will survive, or maybe not (looking at you, VHS and Blu-ray).

Nintendo’s Virtual Boy was released in 1995, and was discontinued six months later.

How do we interact with VR?

Whoever wins the arms race, what we really expect to reap are behavior standards; that is what concerns us as experience architects. Right now, all the VR efforts have pretty much only one behavior in common: you can turn your head and perceive the virtual or augmented environment around you. All the other basic elements of the experience, and how you interact with that environment (if you can), vary. We are not talking about some basic button to push here; we are talking about stuff that aims to compete with reality itself.

LeapMotion attaches to Oculus Rift and tracks your hand movements.

VR needs an equivalent of the Xerox mouse or Apple’s touch gestures on the first iPhone: a way to interact with the world VR is promising that is easy to use and quick to learn. One might think that because VR aims to involve the entire self, body gestures are the obvious way to go; after all, finger gestures quickly became a standard, so this should be the next “step.” But time has proven that it’s not really cool to waste your energy moving your arms around to do a simple action that can also be done with a small finger movement, like a tap or a mouse click.

Right now there are a lot of options competing for the title of standard, or at least best option, from complete stations that let users feel like they’re walking while staying in one place, to motion sensors and simple hand controllers. All of these have been borrowed from the world of videogames, a world that has been dreaming of VR for decades.

Since the Nintendo Wii popularized the revolutionary “Wiimote,” it has seemed possible to create decent body interaction for potential VR systems. But that’s for games, and you can only play boxing like that for so long before you get tired. Some experiences require less physical immersion and more of a sensory experience.

Start Simple

If VR wants to appeal to any kind of public and be a platform for a wide range of products, it needs to start simple before it gets complex. I bet somebody will want the complete “Lawnmower Man” battle station at home, but maybe companies need to aim low to hit high for now and stop pretending to replace reality itself.


Still, VR is a technology with the potential to be as immersive as anything anyone has ever experienced, so even simple interactions need to live up to the promise of the device. We don’t want to break immersion by using the same Xbox controller we use for games on a flat screen. We want interactions that can range from a jump to the movement of a finger, compact and precise enough that you don’t have to move the kids out of their room to build a VR space where you can do all that.

Maybe Facebook, with Oculus, has a different vision of what the market wants than Valve and HTC do, but for now what we need is to start seeing the results of all those ideas in front of the public, and to see how they (and we) react.

In these infant times of an old technology, we can’t demand epiphanies about how things are going to be forever. We expect evolution, and as designers it is our duty to give those ideas challenges and opportunities, to embrace chaos and to not be afraid to fail. It’s time to aim for the moon; we will reach for the stars next… from the comfort of our sofas… using our VR headsets.

Let us know what you think about User Experience in VR by tweeting @zemoga!