
Google just launched a bizarre AI experiment called 'Move Mirror'

It's kind of like a reverse image search, but it uses machine learning to match your poses with images

Jackson Ryan Former Science Editor

Here's somewhere you probably didn't expect machine learning to go, but here we are anyway.

Google revealed "Move Mirror" in a blog post on July 19. It's a machine learning experiment that matches your poses with images of other people in the same pose.

The reason for its existence? Fun, mainly. Google also wanted to "make machine learning more accessible to coders and makers" while inspiring them to take the tech and run with it for their own applications.  

The "mirror" uses an open-souce "pose estimation model" from Google called PoseNet, which can detect body poses, and TensorFlow.js, a library for in-browser machine learning framework. 

In finding a matching image, the experiment uses your "pose information" -- the location of 17 different body parts including your shoulders, ankles and hips. According to Google's explainer, it doesn't take any individual characteristics into account, such as gender, height or body type. 
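One simple way to compare two such sets of keypoints is to normalize them for position and scale and then measure the distance between them. The sketch below illustrates that idea; it is an assumption for illustration, not necessarily the exact matching metric Move Mirror uses.

```typescript
// Illustrative sketch of comparing two poses by their keypoint coordinates.
type Keypoint = { x: number; y: number };

// Normalize keypoints so matching ignores where the person stands and how
// large they appear in the frame (position and scale, not body type).
function normalize(keypoints: Keypoint[]): Keypoint[] {
  const xs = keypoints.map(k => k.x);
  const ys = keypoints.map(k => k.y);
  const minX = Math.min(...xs), minY = Math.min(...ys);
  const scale = Math.max(Math.max(...xs) - minX, Math.max(...ys) - minY) || 1;
  return keypoints.map(k => ({ x: (k.x - minX) / scale, y: (k.y - minY) / scale }));
}

// Euclidean distance between two normalized poses; smaller means more similar.
function poseDistance(a: Keypoint[], b: Keypoint[]): number {
  const na = normalize(a), nb = normalize(b);
  let sum = 0;
  for (let i = 0; i < na.length; i++) {
    sum += (na[i].x - nb[i].x) ** 2 + (na[i].y - nb[i].y) ** 2;
  }
  return Math.sqrt(sum);
}
```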

I gave it a real challenge by dancing like a goofball and it seemed to keep throwing up a young lady in a white dress. 

No hardcore dancing in the living room.

Jackson Ryan/CNET

Using computers to detect poses isn't new, of course -- motion capture technology has been used for decades to capture real human movements for blockbusters. Video games have used it too; just look at Microsoft's 3D imaging device, the Kinect. But those methods require expensive hardware. The triumph here is that it all happens in the browser, with just a webcam.

Google does not send any of your images to its servers; all the image recognition happens locally, in the browser. The technology also doesn't recognize who is in the image because there is "no personal identifiable information associated to pose estimation."

If you're interested in the incredible amount of work that went into building Move Mirror, the TensorFlow team has an extensive rundown on its blog of the challenges and programming hurdles it overcame.

You can try it for yourself, provided you have a webcam, at Google's experiments page.
