So why is a computer being able to see so darn important? I’ll tell you why: so advertisers can know exactly who you are, wherever you are, and send a highly targeted product advertisement your way through some pretty clever channels.
Let’s imagine a far-future scenario, say 16 months from now, where you are walking down the street and an HD digital display, which is actually the glass storefront you happen to be passing, calls out to you specifically: “Hey Jeremy! I love the Ralph Lauren jacket you’re wearing. We just got in the 2017 bold cut of that jacket in dark green. Take a look.” Using augmented reality, the display shows you what you would look like by overlaying the new jacket on top of the old one while keeping the street view behind you. Then it adds, “If you come in and buy one right now, we’ll give you a 10% discount.” You say to yourself, damn, I look great in this jacket, and walk right on in. SUCKER!!!!
Honestly, scenarios like this are in development right now. None are as cohesive as the picture I am trying to paint, but all the separate pieces are being built as we speak. And don’t think for one second advertisers won’t be doing their very best to get you to spend those dollars by using advanced technology, your aesthetics, and your ego against you.
Vision is a primary sense, and we live in a visually centric world. For machines to relate to humans and provide the support we need, it is critical that they can observe and interact with the visual environment. Getting our devices to understand what they are seeing is a huge challenge, and seeing the world isn’t as simple as building an algorithm to parse through data.
The reason vision is so complex is that it requires experience and an understanding of real situations in order to respond accordingly. Machine vision, a rapidly growing branch of AI (artificial intelligence) that aims to give machines sight like our own, has made massive strides over the past few years thanks to researchers applying neural networks to help machines identify and understand images from the real world.
Starting back in 2012, computers began to recognize images on the web, including faces, but the field has since moved into autonomous drones and object-recognition systems. Robots and drones face a myriad of obstacles that may be out of the norm, and figuring out how to overcome these difficulties is a priority for those looking to really bank on the AI revolution.
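To make the neural-network idea concrete, here is a minimal sketch of how an image classifier produces a label. Everything here is illustrative: the "image" is random pixels, the weights are untrained random numbers, and the class labels are made up. A real system would learn those weights from millions of labeled photos.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Turn raw scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy "image": an 8x8 grayscale patch, flattened to 64 pixel values.
image = rng.random((8, 8)).flatten()

# One hidden layer with random, untrained weights (illustration only).
W1 = rng.standard_normal((64, 16)) * 0.1
W2 = rng.standard_normal((16, 3)) * 0.1
labels = ["jacket", "face", "drone"]  # hypothetical classes

hidden = np.maximum(0, image @ W1)   # ReLU activation
probs = softmax(hidden @ W2)         # one probability per class

print(labels[int(np.argmax(probs))], probs.round(3))
```

The takeaway is the shape of the pipeline, not the numbers: pixels go in, and a probability for each category comes out. Training adjusts the weight matrices until the highest probability lands on the right label.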
Ambarish Mitra of Blippar is building a next-generation app that acts like a Wikipedia of the physical world. By just pointing the app at everyday objects, it identifies and categorizes nearly everything in the frame. The app launches in two weeks in an effort to become a visual browser.
Blippar’s app currently lets you photograph a product and instantly get information like price, nutrition facts, and where you can buy it. Soon the app will let a user simply point the camera at a wide range of items and identify them in real time.
The critical challenge is making augmented-reality apps work every time; as Mitra explains, people use Google because it works every time.
Now, since we are speaking about Google: they have just promoted their D‑Wave 2X quantum computer, which they operate alongside NASA at the U.S. space agency’s Ames Research Center in California. This machine works with quantum bits, or qubits, instead of conventional bits. The superposition of these qubits lets the machine explore huge numbers of computations simultaneously, making a quantum computer highly desirable for Big Data number crunching.
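Superposition is easier to grasp with a toy simulation. The sketch below is a generic gate-model illustration in plain Python, not a model of D‑Wave’s specific hardware: it puts a single qubit into an equal superposition and shows why the state space grows exponentially with qubit count.

```python
import numpy as np

# One qubit starts in the |0> state, written as the vector [1, 0].
state = np.array([1.0, 0.0])

# A Hadamard gate puts it into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
state = H @ state

# Measurement probabilities are the squared amplitudes: a 50/50 split.
print(np.abs(state) ** 2)

# Simulating n qubits classically takes 2**n amplitudes -- this
# exponential state space is what makes quantum hardware attractive
# for heavy number crunching.
n = 20
print(f"{n} qubits span {2 ** n:,} amplitudes")
```

Each extra qubit doubles the number of amplitudes a classical simulator must track, which is the intuition behind the "huge computations simultaneously" claim.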
Well, I just read in a recent VentureBeat article that this kind of processing “could lead to speed-ups for things like image recognition, which is in place inside of many Google services”.
Now think about what could happen if you took a machine-vision app like Blippar and combined it with the processing power of Google’s D‑Wave processor. A good number of companies are already building systems that replicate the way the brain works, so with the continued adoption of technologies like neural networks and specialized machine-vision hardware, we are rapidly closing the gap between human and machine vision.
This could be the cream of the advertising crop for sure. Any camera or screen could be capable of picking you out of the crowd and telling whether your Fitbit is a cheap knockoff or not. What’s worse, it could tell when you are exhausted and push Red Bull or 5‑hour Energy at you everywhere you go.
Get used to it: soon everyone and everything will be looking at you.
Just in case you are curious, D‑Wave has also sold quantum computers to Lockheed Martin and the Los Alamos National Laboratory.