We spend a lot of time talking about change at Signal Theory. We’ve seen it firsthand as people have shifted their media consumption habits from television to desktops to smartphones – with advertising dollars quickly being redirected and fragmented in new creative ways. We need to understand how, where and why people interact to better help our clients, and we believe that the future of human interaction lies in the underpinnings of augmented reality.
Age of disruption
Modern technological advancements are disrupting entire industries with what was once known as the “direct-to-consumer economy,” but it’s clear we can just call it “the economy” at this point. The way we order food, read the news, talk to mom, share memories and even end relationships has changed four or five times in the last 100 years alone.
However, humans have stayed relatively static for 40,000 years.
We all run the same ancient, organic operating system with the same flaws and biases as every human around us – no matter how smart, aware and capable we individually might think we are. To add further complexity, the human operating system isn’t being upgraded as the data piles up – which hasn’t been much of a problem until recently.
When you perform a simple search for baby names, the results Google returns are based on a few things: your search query, of course, but also your location, your previous search history and – if you’re logged in – your demographics (female, 36, Colorado, etc.).
These extra data points give more context to your search and produce better results from Google. For instance, if you’re 65 and searching “baby names,” it’s probably for a different reason than a 25-year-old, so Google might surface results geared toward grandparents. As the mountain of data in the world grows exponentially, context becomes ever more critical for sifting through it.
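To make the idea concrete, here is a toy sketch of context-aware ranking – not Google’s actual algorithm, just a hypothetical illustration where age and location signals re-order the same set of results. All field names, tags and weights are invented:

```python
# Hypothetical sketch: the same query, re-ranked by contextual signals.

def rank_results(query, results, context):
    """Score each result by how well its tags match the searcher's context."""
    def score(result):
        # Base relevance: does the title match the query at all?
        s = 1.0 if query in result["title"].lower() else 0.0
        # Boost results whose intended audience matches the searcher's age group.
        if context.get("age", 0) >= 60 and "grandparent" in result["tags"]:
            s += 2.0
        if context.get("age", 99) < 40 and "expecting-parent" in result["tags"]:
            s += 2.0
        # Mild boost for locally relevant results.
        if context.get("location") in result["tags"]:
            s += 0.5
        return s
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Baby names for your grandchild", "tags": ["grandparent"]},
    {"title": "Top baby names this year", "tags": ["expecting-parent"]},
]

# A 65-year-old and a 25-year-old see the same results in different orders.
older = rank_results("baby names", results, {"age": 65, "location": "Colorado"})
younger = rank_results("baby names", results, {"age": 25, "location": "Colorado"})
print(older[0]["title"])    # grandparent-oriented result ranks first
print(younger[0]["title"])  # expecting-parent result ranks first
```

The point isn’t the specific rules – it’s that each extra signal narrows the space of plausible intents behind an otherwise identical query.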
This is where augmented reality will revolutionize the interface as we know it.
In comes augmented reality
Augmented reality, or “AR,” has often been thought of as “putting a virtual object on your floor” or even boiled down to something like Instagram filters, but the real secret sauce behind the power of AR is unlocking the context of our daily lives. The more understanding we have about our environment, the better the digital experience will be for all.
Let’s take our search example above. Those extra contextual bits about age and location are helpful, but that’s just the starting point of what context can really be with the introduction of AR and machine learning.
For instance, let’s include the context that you are standing in your kitchen. The AR might display what food items are in your fridge and what dishes you can prepare. Now include the time of day so the AR can better understand the types of dishes you might be craving. Next, include the context of why you entered the kitchen: You wanted a quick bite while watching a football game at 8:30 p.m. In a perfect world, you should see some options for late-night snacks that are commonly consumed during these types of sporting events because you’ve used several contextual clues to help decipher your needs.
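The kitchen scenario above can be sketched the same way – a hypothetical filter that combines location (what’s on hand), time of day and current activity to narrow the suggestions. The pantry contents, snack list and rules here are all invented for illustration:

```python
# Hypothetical sketch of combining contextual signals into a suggestion.
from datetime import time

PANTRY = {"chips", "salsa", "frozen pizza", "oatmeal", "yogurt"}

SNACKS = [
    {"name": "chips and salsa", "needs": {"chips", "salsa"},
     "occasions": {"game"}, "after": time(17, 0)},
    {"name": "frozen pizza", "needs": {"frozen pizza"},
     "occasions": {"game"}, "after": time(17, 0)},
    {"name": "oatmeal", "needs": {"oatmeal"},
     "occasions": {"breakfast"}, "after": time(6, 0)},
]

def suggest(context):
    """Return snacks that fit what's on hand, the hour, and the activity."""
    now, activity = context["time"], context["activity"]
    return [s["name"] for s in SNACKS
            if s["needs"] <= PANTRY           # ingredients available
            and now >= s["after"]             # appropriate for this time of day
            and activity in s["occasions"]]   # matches what you're doing

# 8:30 p.m., watching a football game:
print(suggest({"time": time(20, 30), "activity": "game"}))
# → game-time snacks only; oatmeal is filtered out
```

Each contextual clue – location, time, activity – prunes the options, the same way your brain does without your noticing.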
Your brain works this same way!
Your brain – the original machine
In every situation, your brain is always using context to actively provide you information on the side (i.e., your “mind’s eye”) – like watching a movie and thinking about the actor and another movie you recently saw him in. Or seeing a white sedan and remembering the car you had in high school and the time you drove home in a snowstorm listening to a specific song on the radio. Life is just context.
AR’s ultimate goal is to give data more visual context and make it more meaningful for your daily life. Imagine having an unobtrusive, superpowered mind’s eye that could perform “Google searches” for you without your ever needing a keyboard, for example – showing movie trivia in a way as familiar as recalling a memory.
These latest advancements aren’t focused on fixing the flaws of being human but enhancing the very things that make us human – things like sight, sound, touch and proprioception. As a firm focused squarely on understanding the intersection of human behavior and interactions, we couldn’t be more excited to be right at the tip of the spear.