So why is a computer being able to see so darn important? I'll tell you why: so advertisers can know exactly who you are, wherever you are, and send a highly targeted product advertisement at you through some pretty clever channels.
Let's imagine a far-future scenario, like 16 months from now, where you are walking down the street and an HD digital display screen, which is in actuality the glass of a storefront you happen to be passing, calls out to you specifically: "Hey Jeremy! I love the Ralph Lauren jacket you're wearing. We just got in the 2017 bold cut of that jacket in dark green. Take a look." The display screen proceeds to show you what you would look like by overlaying the new jacket on top of the old one, while maintaining the street view behind you, using augmented reality. The display then says, "If you come in and buy one right now, we'll offer you a 10 percent discount." You say to yourself, damn, I look great in this jacket, and walk right on in. SUCKER!!!!
Honestly, scenarios like this are in development right now. They're not as cohesive as the picture I am trying to paint, but all the separate pieces are being made as we speak. And don't think for one second advertisers won't be doing their very best to get you to spend those dollars by using advanced technology, your aesthetics, and your ego against you.
Vision is a primary sense; we live in a visual-centric world. In order for machines to relate to humans and provide the support we need, it is critical that they can observe and interact in the visual environment. Getting our devices to understand what they are seeing is a huge challenge, and seeing the world isn't as simple as building an algorithm to parse through data.
The reason vision is so complex is that it requires experience and an understanding of real situations that allow us to respond accordingly. Machine vision, a rapidly growing branch of AI (artificial intelligence) that aims to give machines sight like our own, has made massive strides over the past few years thanks to researchers applying neural networks to help machines identify and understand images from the real world.
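To make "applying neural networks to images" a little more concrete, here is a deliberately tiny sketch: a single artificial neuron learns to label 4-pixel "images" as bright or dark. Real machine vision systems stack millions of these neurons into deep networks over real photos, but the core training loop, guess, measure the error, nudge the weights, is the same idea. Everything here (the data, the pixel values, the learning rate) is invented for illustration.

```python
import math
import random

def sigmoid(z):
    # Squashes any number into the range (0, 1), read as "probability of bright".
    return 1.0 / (1.0 + math.exp(-z))

def train(images, labels, epochs=500, lr=0.5):
    # One neuron: a weight per pixel plus a bias, adjusted by gradient descent.
    random.seed(0)
    weights = [random.uniform(-0.1, 0.1) for _ in images[0]]
    bias = 0.0
    for _ in range(epochs):
        for pixels, label in zip(images, labels):
            pred = sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)
            err = pred - label  # how wrong the guess was
            weights = [w - lr * err * p for w, p in zip(weights, pixels)]
            bias -= lr * err
    return weights, bias

def predict(weights, bias, pixels):
    score = sigmoid(sum(w * p for w, p in zip(weights, pixels)) + bias)
    return 1 if score > 0.5 else 0

# Four-pixel "images": values near 1 are bright, values near 0 are dark.
images = [[0.9, 0.8, 0.95, 0.85],  # bright
          [0.1, 0.2, 0.05, 0.15],  # dark
          [0.8, 0.9, 0.70, 0.95],  # bright
          [0.2, 0.1, 0.15, 0.05]]  # dark
labels = [1, 0, 1, 0]

w, b = train(images, labels)
print(predict(w, b, [0.9, 0.85, 0.9, 0.8]))  # unseen bright image -> 1
print(predict(w, b, [0.1, 0.05, 0.2, 0.1]))  # unseen dark image -> 0
```

The interesting part is that nobody hand-wrote a rule like "bright means pixels above 0.5"; the neuron discovered its own decision rule from labeled examples, which is exactly why this approach scales to faces and jackets where no human could write the rules by hand.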
Back in 2012, computers began to recognize images on the web, including faces, but the field has now moved into the playing field of autonomous drones and object recognition systems. Robots and drones face a myriad of obstacles that may be out of the norm, and figuring out how to overcome these difficulties is a priority for those looking to really bank on the AI revolution.
Ambarish Mitra of Blippar is building a next-generation app that acts like a Wikipedia of the physical world. By just pointing the app at everyday objects, it identifies and categorizes nearly everything in the frame. The app launches in two weeks in an effort to become a visual browser.
Blippar's app currently lets you photograph a product and instantly get information like price, nutrition facts, and where you can buy it. Soon the app will let a user just point the camera at a wide range of items and identify them in real time.
The critical challenge is making augmented reality apps work every time; as Mitra explains, the reason people use Google is that it works every time.
Now, since we are speaking about Google: they have just promoted their D-Wave 2X quantum computer, which they operate alongside NASA at the U.S. space agency's Ames Research Center in California. This Google machine works with quantum bits, or qubits, instead of more conventional bits. The superposition of these qubits enables the machine to make huge computations simultaneously, making a quantum computer highly desirable for big-data number crunching.
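The "huge computations simultaneously" claim comes down to a counting argument: a register of n qubits is described by 2**n amplitudes at once, so the state space explodes exponentially. Here is a toy classical simulation of that idea, putting qubits into a uniform superposition. To be clear, this is just an illustration of superposition, not a model of how D-Wave's machine (a quantum annealer) actually operates.

```python
import math

def uniform_superposition(n):
    """Classically simulate n qubits, starting in |00...0>, placed into an
    equal superposition over all 2**n basis states (e.g. by a Hadamard
    gate on every qubit). Returns the list of 2**n amplitudes."""
    dim = 2 ** n
    amp = 1.0 / math.sqrt(dim)  # equal weight on every basis state
    return [amp] * dim

state = uniform_superposition(3)              # 3 qubits -> 8 amplitudes
print(len(state))                             # 8
print(round(sum(a * a for a in state), 6))    # probabilities sum to 1.0
```

Note the catch this toy makes visible: simulating the state classically costs memory that doubles with every added qubit (30 qubits is already a billion amplitudes), which is precisely why doing it on quantum hardware instead is attractive for the kind of number crunching the article mentions.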
I just read in a recent VentureBeat article that this kind of processing "could lead to speed-ups for things like image recognition, which is in place inside of many Google services."
Now think about what could happen if you took a machine vision app like Blippar and combined it with the processing power of Google's D-Wave processor. A good deal of companies are already building systems that replicate the way a brain works, so with the continued adoption of technologies like neural networks and specialized machine vision hardware, we are rapidly closing the gap between human and machine vision.
This could be the cream of the advertising crop for sure. Any camera or screen could be capable of picking you out of the crowd and telling whether your Fitbit is a cheap knockoff or not. What's worse, it would be able to tell if you were exhausted and keep pushing Red Bulls or 5-hour Energy drinks everywhere you go.
Get used to it: soon everyone and everything will be looking at you.
Just in case you are curious, D-Wave has also sold quantum computers to Lockheed Martin and the Los Alamos National Laboratory.