Where’s the Fun in Native Advertising?

Native advertising, like most of the buzzwords we hear and say every day, everywhere, is a concept defined in so many different ways that it’s just confusing. The “official” definition implies that every ad that fits naturally and feels like the other content on the page falls into this category. The meaning of the word “naturally” is very subjective, though. It’s as simple as this: the same ad in the same placement can feel more natural to some users than to others.

The Many Meanings of Native Advertising

The definition is so broad that it includes things like sponsored posts on Facebook and promoted posts on Twitter. The playbook created by the IAB even mentions the inline ads you find between two paragraphs of an article you’re trying to read on your phone, and it covers those recommendation widgets that are confusing to some users and just disappointing to others. It even goes as far as to include paid search units.

I wouldn’t say those ads I’ve run into while scrolling through my timeline feel natural, although Facebook is doing a better job than Twitter at that. I can say I don’t trust those external articles contained in the recommendation widgets at the end of an article you just read on ESPN. I trust ESPN as a content creator, but those “sponsored articles” are selected not by ESPN but by the third party that created the widget (Outbrain, Taboola, Earnify, etc.). In the same way, most of the time when searching for something on Google, I instinctively prefer links that appear organically and ignore those that are clearly paid and promoted.

How Is This Better Than Standard Banner Ads?


One thing you can say about the examples I mentioned above is that all of those ads are displayed based on a certain context. Granted, that context can be more or less accurate depending on the kind of data it is built upon. But searching for “Cartagena Hotels” on Google, for example, shows me paid search results related to Cartagena, Colombia, and not Cartagena, Spain. Google seems to figure out the context based on the search term and my location, and then displays paid search results accordingly, which is very smart. Recommendation widgets define the context by assuming you are interested in the topic you were reading about and then display article recommendations (from other publishers) related to that topic. I’m not going to go case by case, but one starts to see a common element in all these different ad solutions.

The common element is context. These ads try to address the consumer more accurately by using the data at hand, so you could definitely say they are getting smarter. The problem is that while they are smarter, they are still not adding much value to the consumer’s experience. They are just a better-informed door-to-door salesman. Almost equally annoying.
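To make the “Cartagena Hotels” idea concrete, here is a toy sketch of context-based targeting. Everything in it is made up for illustration; real ad systems combine far richer signals than a query string and a country code.

```python
# Toy sketch: combine the search term with the user's location to pick
# which regional ad inventory applies. Data and names are hypothetical.

AD_INVENTORY = {
    ("cartagena hotels", "CO"): ["Hotels in Cartagena, Colombia"],
    ("cartagena hotels", "ES"): ["Hotels in Cartagena, Spain"],
}

def pick_ads(query: str, user_country: str) -> list:
    """Resolve an ambiguous query using the user's location as context."""
    return AD_INVENTORY.get((query.lower(), user_country), [])

# A searcher located in Colombia gets the Colombian results:
print(pick_ads("Cartagena Hotels", "CO"))  # ['Hotels in Cartagena, Colombia']
```

The point is only that two signals together disambiguate what neither could alone, which is why context-driven ads feel smarter than generic banners.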

Adding Meaning to Context

Context alone is cool and it does increase the chances of getting to the right audiences in the right moments. However, the concept of native advertising covers other approaches that are pushing to make native ads more meaningful and valuable.

BuzzFeed’s revenue model is based almost entirely on sponsored articles. Ranging from 15 Of The Best Bands To Come From College Campuses (sponsored by Spotify) to the much less subtle 12 Ways Nutella® Makes You Smile (sponsored by Nutella), BuzzFeed has created what would seem to be the perfect win-win scenario: readers get content that is entertaining, and brands capitalize by raising awareness and sponsoring content that is aligned with their attributes. However, the area of the article’s layout that discloses the sponsorship is so small and subtle that a large group of users don’t see it, and the piece passes as regular content rather than sponsored content, which raises all sorts of concerns.

Adding Extra Value Through Custom Native Ads

Referring back to the IAB and their Native Advertising Playbook: “In the world of native advertising execution, there is no limit to the possibilities when an advertiser and publisher work together on custom units.” That’s probably the most valuable bit of the entire document, and it makes one thing clear: the key to success here is collaboration. Advertisers and their agencies can (and should) be as creative as they want. Publishers are always open to trying new things as long as they are well paid. They kind of need to be open, to be honest. Custom native ads are probably the most expensive of the group, but they’re definitely the most impactful.


An example of this is the partnership between Netflix and The New York Times. The newspaper produced an in-depth piece on the conditions women face in American prisons, and Netflix sponsored it by connecting the topic of the article with the theme of its series “Orange is the New Black”. The beauty of it is that it doesn’t feel like a regular article in the newspaper. The design was 100% custom and incorporated exclusive illustrations, on top of the quality of the text produced by the newspaper. The strategy was not to pass as a regular article and confuse readers; it was to reinforce the presence of the product by attaching it to a special article on a related topic.


Custom content also means each case is different. This makes things more exciting for everyone involved: brand, publisher and agency. We personally had a lot of fun helping Rolling Stone and Wenner Media add custom sponsored content to the products they offer their super cool clients. In the case of Indian Motorcycles, the tie between motorcycles and rock music was reinforced by a custom article: a highly visual list of the 40 most groundbreaking albums of all time. All the content was created by RS, and the brand was inserted into the story in the most organic way possible. The brand fit very naturally in the story, and both the design and the content of the article were in line with the magazine’s expected quality.

Especially when it comes to sponsored content, ethics are a big deal. The more honest the brand messages feel, and the less the publisher’s principles get compromised, the better for everyone in the end. Publishers need to establish rules and filters and take care of their reputation. When it comes to deciding how far to go for any given brand, there is a line that each publisher will have to draw according to its ethical values.


Zemoga At AngelHack


On July 4th and 5th, I had the privilege of participating as a judge in the Third Annual Colombian AngelHack hackathon. It was part of the Eighth Global Series, celebrated in 95 cities across more than 65 countries. The winners from each city will go through a 12-week accelerator program and earn a trip to the Global DemoDay in San Francisco, CA, to pitch their ideas to some of the best-known investors, startup accelerators and incubators in Silicon Valley.

A hackathon is, essentially, a 24-hour event where coders, engineers, designers, entrepreneurs and tech-minded people come together to do intense coding. Ultimately, the contestants transform a nascent idea into something useful that can “wow” the judges and attract investment.

The judges were a selection of recognized Colombian entrepreneurs, technology journalists, CEOs and technology directors. We were in charge of selecting the winner based on four criteria: product/solution, technical chops, execution and design.

Many of the teams already knew each other and came with a clear idea of what they wanted to present, while others were there to pitch their ideas to the other enthusiasts in hopes of recruiting them. Some of the projects included websites and mobile applications that attempted to disrupt online deliveries, a semantic search engine for products and services, a geolocation-based chat for tourists, and a to-do list engine that was aware of your friends’ pending tasks.

At the end of the second day, after many hours of deliberation, the winners were chosen. In third place was AyDoor, a delivery and services platform. Second place was taken by Poof, a semantic search engine for products and services. Finally, first place was awarded to InfiniteLoops, a platform to automate and execute repetitive tasks and scripts for developers.


Overall, it was a great experience and the amount of talent is really encouraging. Some of the major conclusions I took away from AngelHack were:

– Work on something that does one thing really well. Show it off and focus on it for your demo. Filling out long forms and following long user flows are demo killers.

– Merits matter. You have to build, rehearse, and plan a successful demo and presentation. You can slave away all night, but if you can’t clearly express your project, you won’t have a chance of winning. Make sure you don’t waste time talking about pieces of the project that aren’t essential to the crowd’s understanding of what you have built. Get through your demo.

– Think your idea through. Most projects and teams tend to fade away after the hackathon weekend, and long-term viability is traditionally one of the criteria the judges weigh when voting. If you are really passionate about your idea, keep at it regardless of the event’s results.

– Tell a story. If you think you are solving a problem (big or small), tell everyone about the problem and how you solved it. People like stories. If you are solving an interesting problem and have a cool/working product you have a higher chance of winning something.

– It may sound obvious, but setting clear objectives for your attendance at the hackathon ensures you get the most out of the short time.

– Have a plan B and even C. This was repeated over and over to the participants yet many relied solely on the success of their pitches and demo. Unfortunately, for some, this wasn’t a successful strategy.


Being a judge is an awesome learning experience. You get to hear really fresh ideas and approaches from technology leaders and extremely enthusiastic participants. The culture and products this event helps build are great for countries like Colombia. There is a good talent pool here, but not enough confidence or entrepreneurship to help all these talented people take their ideas further. Hopefully, more events like AngelHack will allow these remarkable individuals to bring their ideas, creativity and talents to the forefront of the tech industry.


-Carlos Arenas, Mobile & Application Development Director


Can We Trust the Natives?

When it comes to native advertising, everyone has an opinion. There are those who hold user experience in the highest regard. This crowd argues that native ads allow users to gain true value by consuming and interacting with unobtrusive content that interests them, unlike display ads, which can distract from the key reason a user visits any given site. The camp on the other side, however, argues that native ads can be misleading, since they tend to blur the lines between sponsored content and editorial content.

Either way, the core of the controversy seems to come down to one universal concept. Trust.

Brands can attempt to build trust by offering up custom-tailored content to users. However, it can be argued that publishers violate that same sense of a user’s trust by subtly placing branded content within an editorial context, and by so doing, “tricking” users into reading their advertisers’ content.

Statistics supporting native are thrown around and touted all over the digital environment these days: higher engagement, brand lift and purchase intent, just to name a few metrics. After a little Googling, one finds that the biggest lift for advertisers seems to come in the form of brand awareness. This can lead to the perception that the traditional goals of driving a user to some sort of conversion or transaction might become secondary to simply entertaining users and gaining their trust, and thus their loyalty. Whether this is a long-term benefit to brands remains to be seen, but the signs are there, and they point to a new path through which brands can earn that loyalty from consumers.

Eventually, we will get to a point where it is clear to users that they are viewing branded content, but it won’t matter. If, in fact, the content is interesting enough for a user to spend precious time engaging with it, then it seems like a fair trade. The amount of creative freedom and innovation that this new approach allows is sure to bring digital advertising to new heights. The compromise will be the need for brands’ and consumers’ patience and tolerance while advertisers let this approach mature, gauging its overall effectiveness and, more importantly, its reception and acceptance by consumers.

Throughout the month of July, we will be examining the concept of native advertising and branded content from several different perspectives as more and more brands begin to focus on creating branded content. We look forward to keeping an eye on this trend as it grows and transforms, moving toward a point where editorial and branded content are almost indistinguishable from one another in the eyes of the user. Z-Team out!


A.I….Oh My

Where does one begin with AI…

There are so many places it can go and so many different things it can affect, whether it’s workforces, cars, web browsing or beyond.

For us at Zemoga it’s all really interesting stuff, and as usual our fascination with this technology stems both from personal curiosity and from how it applies to the products we build. The biggest thing for us is machine learning, especially because we work with a lot of retail, publishing and healthcare companies.

Removing friction is a pillar of the things we build. Every click and every movement through a site or app has to have purpose and direction. It’s so much more than just making a product “pretty”, although that helps. The idea of machine learning, like what IBM is doing with Watson, is really interesting. Not because it’ll destroy the world, but because it’ll help remove a few “clicks” in both my physical and online worlds.

That’s compelling.

Think about the routines of your day. We all have different ones, but imagine the little bits of incremental time consumed by your daily “clicks”: waking up, starting your coffee, jumping in your car or taking the subway, etc. Each has varying degrees of interaction.

You see machine learning in small ways today: your watch telling you to get up and move, or Google Now saying, “time to leave in order to get to a restaurant before it closes”. These are all “small” things, but they provide bits of value that literally give you back time. Brands will get better at leveraging this tech because it can help them move away from being “salesy” with their customers. Let the AI figure out the appropriate times when, quite practically and literally, you just need a new shirt. This allows the brand to offer genuine value and rethink what it means to be “loyal”.

This AI/machine learning tech is useful because there are actual needs that we have. When a brand better understands those moments of actual need, it knows the right time to talk, which is important. Most advertising follows the “fishing with dynamite” model: the brand sits in a boat, lights a stick of dynamite, throws it into the water, the dynamite explodes, and some fish float to the surface. That’s how brands currently communicate. They don’t know how to speak directly to you; they just aren’t good at it, so they have to disrupt your day, sometimes quite literally, to make sure you see them.

This is why certain types of AI are fascinating. We get hyper-focused on the car or the robot, but those are the bigger, more visible versions that will take time. The great first step is the behind-the-scenes AI for our everyday lives, simply removing friction and giving us back bits of time to focus on things that matter. It’s an exciting time for sure, and the space is changing quickly, but at the end of the day we all want more time, and that is why AI isn’t going anywhere.


Virtual Reality: Let’s not replace reality… yet

This is part three of a month-long series on VR. Check out parts one and two.

There’s nothing new about Virtual Reality. It’s not some obscure technology that has been hidden in dark basements all this time; I remember playing Heretic and Doom on VR systems in the ’90s. But now, thanks to many factors, VR tech is (almost) reaching the level of being accessible to the common man.

Currently, there’s a fistful of companies trying to push their own take on the VR dream. Some are even located between the blurred lines of the industry, like Microsoft’s HoloLens, which is more Augmented Reality than Virtual Reality but rides the momentum of the Not-Real-Reality zeitgeist. Most of these efforts will die, and that’s OK. Everyone is pitching their own ideas, and in the end the most convenient and popular will survive, or maybe not (looking at you, VHS and Blu-Ray).

Nintendo's Virtual Boy was released in 1995, and was discontinued six months later.


How do we interact with VR?

Whoever wins the arms race, what we really expect to reap are behavior standards. That is what concerns us as experience architects. Right now, all VR efforts have pretty much only one behavior in common: you can turn your head and perceive the virtual or augmented environment around you. All the other basic elements of the experience, and how you interact (if you can) with this environment, vary. We are not talking about some basic button to push here; we are talking about stuff that aims to compete with reality itself.

LeapMotion attaches to Oculus Rift and tracks your hand movements.


VR needs an equivalent of the Xerox mouse or Apple’s touch gestures on the first iPhone: an easy way to interact with the world VR is promising, one that is easy to use and quick to learn. One would think that because VR aims to involve the entire self, body gestures are the obvious way to go; after all, finger gestures quickly became a standard, so this should be the next “step”. But time has proven that it’s not really cool to waste your energy waving your arms around to perform a simple action that can also be done with a simple finger movement, like when we tap or click a mouse.

Right now there are a lot of options vying for the title of standard, or at least of best option, from complete stations that let users feel like they’re walking in place to motion sensors and simple hand controls. All of them have been borrowed from the world of videogames, a world that has been dreaming of VR for decades.

Since the Nintendo Wii popularized the revolutionary “Wiimote”, it has seemed possible to create decent body interaction for potential VR systems. But that’s for games, and you can only play boxing like that for so long before you get tired. Some experiences require less physical immersion and more of a sensorial experience.

Start Simple

If VR wants to appeal to any kind of audience and be a platform for a wide range of products, it needs to start simple before it gets complex. I bet somebody will want the complete “Lawnmower Man” battle station at home, but maybe companies need to aim low to hit high for now, and stop pretending to replace reality itself.


Still, VR is a technology with the potential to be as immersive as anything anyone has ever experienced, so even simple interactions need to live up to the potential the device is promising. We don’t want to break immersion by using the same Xbox controller we use for games on a flat screen. We want interactions that can range from a jump to the movement of a finger, compact and precise enough that you don’t have to take the kids out of their room to build a VR space where you can do all that.

Maybe Facebook, with Oculus, has a different vision of what the market wants than Valve and HTC do, but for now, what we need is to start seeing the results of all those ideas in front of the public and see how they (and we) react.

In these infant times of an old technology, we can’t demand epiphanies about how things are going to be forever. We expect evolution, and as designers, it is our duty to give those ideas challenges and opportunities, to embrace chaos, and to not be afraid to fail. It’s time to aim for the moon; we will reach for the stars next… from the comfort of our sofas… using our VR headsets.

Let us know what you think about User Experience in VR by tweeting @zemoga!



NRF15: Tuesday Recap

Tuesday, Tuesday.

Day 3 of NRF is up and running, and today we really tried to find some cool things on the floor. Although we do find supply chain software sexy, you might not. So here are some of the highlights from the show. Oddly enough, the most interesting tech came from some of the bigger players.



These guys are always finding themselves in the guts of some really cool things. They’re the backbone of much of the technology we use today. Although I still believe my G4 PowerBook, which ran on IBM, was the best Apple laptop of all time (back then everything really did “just work”), I love my latest 15” MacBook Pro.

The Dove experience was interesting. I’ll be honest: the display did a pretty poor job of taking the customer on the journey. You could tell the purpose was to help people better understand the product. Again, the biggest issue was figuring out how to interact with it. Did I swipe something, scan it, tap it? What do I do?! Once I figured it out, it didn’t do much besides show the product I scanned or swiped. It also had a lift sensor on one side, so that when you picked up a product, it put it up on the screen. The issue was that the products didn’t match what came up on the screen, but “A” for effort.


There was also a NE-YO experience that was kind of “meh.” Again, I didn’t really know what the stuff was doing. There was already music playing, with a “virtual NE-YO” dancing. I’d push buttons on a laptop that would add new sounds and mixes to the song. I’ve been playing music my whole life, and the funny thing was that most of the samples you could drop in didn’t really seem to fit the song. As I fumbled with it, even virtual NE-YO got mad at me: “Come on bro, you can do better than that!” I took that in stride, considering NE-YO probably has no idea how to even write a song. I gave up. I think the tech could very easily be applied to an “endless aisle” feature where a shopper could simply scroll through many options of the same product.


Imagine a shopper at Foot Locker looking at shoes: the Nikes come in about 30 colors, but Foot Locker really only wants to stock about 5. Presto! You could use these screens to browse all the beautiful options (or just pull out your smartphone and go to NIKE.com). I digress.

What was cool about their booth was a plug-in box for a POS that grabbed real-time purchase decisions. So, if you’re McDonald’s, you could plug this thing into your terminal and it would just start capturing real-time data that you could put on a dashboard. What’s great is that this box is POS-system agnostic. It can hook up to anything, which is super helpful to the retailer. The simplest things always have the most impact when it comes to technology.



These guys are the backbone of so many industries, but their Hybris system is pretty cool. They had a great dressing-room technology incorporating RFID. RFID was at one point the bane of my existence. It used to be very expensive and clunky. I used to lead innovation for a very high-end luxury retailer, and we incorporated RFID into the dressing room almost 7 years ago, only to watch it fail miserably. Time seems to cure all things, as the cost of the technology has now dropped significantly and it easily sticks right on a price tag.

This speaks to the notion that being first and buying the kitchen sink doesn’t always work. The technology was interesting back then, but it needed time to evolve and for the things supporting it to catch up. Bluetooth is very much the same: it wasn’t until BLE came along and the software caught up that it became a viable option via beacons. Being good at retail strategy is seeing the curve ahead. It’s understanding that something is great technology, but not right now.


Wine Me

They also had a sensor panel where a user could go up to an iPad kiosk, input a few of their wine preferences, and the appropriate bottle on the wall would light up. Once someone grabbed a bottle, the panel would register it being picked up, and put back if the customer decided to pass. The retailer would get real-time data on all of this, which is always helpful. Obviously the technology could be applied to a variety of products (Dove and Intel could use Hybris’ help).
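As a rough sketch of how such a sensor wall could tie preferences, lighting and pick-up events together: the names and data below are entirely hypothetical (the demo was SAP’s, and its internals weren’t shown); a real system would talk to actual shelf sensors and an analytics backend.

```python
# Hypothetical sketch of the wine-wall flow: match preferences to a
# bottle, "light it up", and log pick-up / put-back events in real time.
from dataclasses import dataclass, field

@dataclass
class Bottle:
    sku: str
    traits: set  # e.g. {"red", "dry", "oak"}

@dataclass
class WineWall:
    bottles: list
    events: list = field(default_factory=list)  # real-time feed for the retailer

    def match_preferences(self, prefs: set) -> Bottle:
        """Pick the bottle whose traits best overlap the shopper's choices."""
        best = max(self.bottles, key=lambda b: len(b.traits & prefs))
        self.events.append(("light_up", best.sku))
        return best

    def on_shelf_event(self, sku: str, picked_up: bool) -> None:
        """Record pick-up / put-back so the retailer sees consideration data."""
        self.events.append(("picked_up" if picked_up else "put_back", sku))

wall = WineWall([Bottle("A1", {"red", "dry"}), Bottle("B2", {"white", "sweet"})])
choice = wall.match_preferences({"red", "dry", "oak"})
wall.on_shelf_event(choice.sku, picked_up=True)
wall.on_shelf_event(choice.sku, picked_up=False)  # shopper passed on it
```

The interesting design point is the last event: a put-back is a recorded “almost-sale”, which is exactly the kind of consideration data a shelf of dumb bottles could never give a retailer.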



To be honest, there was nothing super interesting there. We just went because they were showing off some Xbox One games, and who doesn’t love video games?


Thanks for following along. We’ll have an overall recap tomorrow about leveraging these strategies and how it relates to your roadmaps and startups.

Remember to follow us @zemoga on Twitter, and like us on Facebook!