Machine learning and VR

AI and machine learning have been a hot topic for years, but their intangible nature has let most of us keep a detached wariness, naive to what they might actually become.

Recently, AI assistants such as Siri, Google Now, Cortana, and Amazon Echo have become ubiquitous: cloud-connected partners that process our speech and perform tasks for us. But that is a passive, tightly controlled implementation of a concept that can go much further. For example, a machine can learn to identify objects that you sketch.

IBM's Watson is an artificial intelligence backed by years of R&D, a supercomputer, and the power of the cloud. IBM held the Watson Image Recognition Hackathon at SVVR to facilitate getting Watson into developers' hands, and some developers used Watson's Unity SDK to create VR games.
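
For a sense of what the Watson side of that loop looks like, here's a minimal sketch using the Watson Visual Recognition service through its Python SDK (the hackathon teams used the Unity SDK, but the flow is the same): train a custom classifier on a handful of example images, then classify a new one against it. The credentials, file names, and the "key" class are hypothetical, and exact parameter names vary between SDK versions.

```python
import json
from watson_developer_cloud import VisualRecognitionV3  # IBM Watson Python SDK

# Hypothetical credentials; in practice these come from your IBM Cloud service instance.
visual_recognition = VisualRecognitionV3('2016-05-20', api_key='YOUR_API_KEY')

# Train a custom classifier from zipped folders of example images:
# "key" sketches as positive examples, random doodles as negatives.
with open('key_sketches.zip', 'rb') as positives, open('other_doodles.zip', 'rb') as negatives:
    classifier = visual_recognition.create_classifier(
        'sketches',
        key_positive_examples=positives,
        negative_examples=negatives)

# Later, classify a sketch the player drew.
# (Older SDK versions return plain dicts; newer ones wrap the result.)
with open('player_sketch.png', 'rb') as image:
    result = visual_recognition.classify(
        images_file=image,
        classifier_ids=[classifier['classifier_id']])

print(json.dumps(result, indent=2))
```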

One use was to teach Watson what a sketch of a key looks like: when the player drew a key symbol in the air with the HTC Vive controllers, a key materialized for them to use. This mechanic makes for a novel interaction in an adventure game.
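
The VR half of that mechanic is mostly about turning a 3D controller stroke into a flat image the classifier can read. Here's a rough sketch of that step in plain Python with hypothetical data (the actual hack was built in Unity with the Watson SDK): project the controller's path onto a 2D plane, rasterize it into a small grayscale bitmap, and ship that bitmap off to the classifier.

```python
import numpy as np
from PIL import Image

def stroke_to_image(points, size=128):
    """Flatten a 3D controller stroke into a small grayscale bitmap.

    points: (x, y, z) positions sampled from the controller while the
            trigger is held. We assume the player draws roughly facing
            forward, so we simply drop the z axis; a real implementation
            would project onto a best-fit plane.
    """
    pts = np.asarray(points, dtype=float)[:, :2]        # drop depth
    pts -= pts.min(axis=0)                               # shift stroke to the origin
    scale = (size - 9) / max(pts.max(), 1e-6)            # fit into the bitmap with a margin
    pts = (pts * scale + 4).astype(int)

    canvas = np.full((size, size), 255, dtype=np.uint8)  # white background
    for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:]):    # connect consecutive samples
        n = max(abs(x1 - x0), abs(y1 - y0), 1)
        for t in np.linspace(0, 1, n + 1):               # naive line rasterization
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            canvas[size - 1 - y, x] = 0                  # flip y so "up" stays up in the image

    return Image.fromarray(canvas)

# Hypothetical stroke: a rough rectangle drawn in the air.
stroke = [(0.0, 0.0, 0.5), (0.3, 0.0, 0.5), (0.3, 0.2, 0.5), (0.0, 0.2, 0.5), (0.0, 0.0, 0.5)]
stroke_to_image(stroke).save('player_sketch.png')  # this file is what gets sent to the classifier
```

The resulting PNG is exactly the kind of crude-but-legible input the classifier can match against its trained "key" examples.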

Here is NVIDIA's full write-up of the hackathon and its winner, "Watson and Waffles".

With the Seattle VR Hackathon only a few days away (go to it, seriously), I can only hope we'll see some cool innovation there at the intersection of VR and another emerging technology like computer vision or machine learning.

Though this sort of cloud-based image recognition is wicked cool, it really paves the way for something much more interesting: if a machine can decode my meaning and hand me something useful from a crude gesture captured by primitive spatial input for my hands, how will it interact with me once it has a more complete dataset of my body language?