Move Mirror is a machine learning experiment that lets you explore pictures in a fun new way, just by moving around. Move in front of your webcam and Move Mirror will match your real-time movements to hundreds of images of people doing similar poses around the world. It’s kind of like a magical mirror that reflects your moves with images of all kinds of human movement—from sports and dance to martial arts, acting, and beyond.
Move Mirror relies solely on pose information to find a matching image. By pose, we mean the relative locations of 17 body parts, such as the right shoulder, left ankle, right hip, and nose. Move Mirror does not take into account race, gender, height, body type, or any other individual characteristic.
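To give a sense of how pose-only matching can work: a common approach is to flatten each pose's keypoint coordinates into a vector and compare vectors with cosine similarity. The sketch below is illustrative, not Move Mirror's exact implementation; it assumes each pose is already a flat, normalized array of (x, y) coordinates for the 17 keypoints.

```javascript
// Cosine similarity between two pose vectors.
// Each pose is a flat array [x0, y0, x1, y1, ...] of keypoint coordinates,
// assumed to be normalized (e.g. scaled to the bounding box).
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// One convenient conversion from similarity to a distance:
// identical poses give 0, dissimilar poses give larger values.
function poseDistance(a, b) {
  return Math.sqrt(2 * (1 - cosineSimilarity(a, b)));
}
```

Matching then amounts to computing this distance between the live pose and each stored pose and keeping the smallest; note that nothing in the comparison involves appearance, only joint positions.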
We wanted to make machine learning more accessible to coders and makers, and inspire them to experiment with this technology. With Move Mirror, we are showing how computer vision techniques like pose estimation can be used in fun new ways.
No. Your images are not being sent to any Google servers while you’re interacting with Move Mirror. All the image recognition is happening locally in your browser.
This experiment works best when there is only one person in the webcam view. Make sure your entire body is in the frame under good lighting conditions. Most of all, make sure to have fun and move around a lot to find the best images!
Move Mirror works best on newer computers with faster GPUs. If you have an older computer you can still try Move Mirror; it just might feel a little slower and less accurate.
Move Mirror uses a pose estimation model called PoseNet, which runs in the browser on TensorFlow.js, a library that allows web developers to train and run machine learning models in the browser. If you’re a coder or a maker curious to play with pose estimation, we encourage you to check out these two open-source libraries as well as our Move Mirror blog post with technical tips and tricks to inspire your own projects.
PoseNet is a new open-source pose estimation model from Google that allows users to use their webcam to detect their body pose (i.e. where an elbow or an ankle appears in an image). To be clear, this technology is not recognizing who is in an image — there is no personally identifiable information associated with pose estimation. The algorithm is simply estimating where key body joints are.
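Concretely, a PoseNet result is a confidence score plus an array of 17 keypoints, each with a part name, its own score, and pixel coordinates. The sample values below are made up for illustration (a real pose would come from the PoseNet library after loading the model and passing it a video frame), and the `toVector` helper is a hypothetical name:

```javascript
// Illustrative shape of a PoseNet result (sample values are invented).
const pose = {
  score: 0.92,
  keypoints: [
    { part: 'nose',          score: 0.99, position: { x: 301, y: 110 } },
    { part: 'leftShoulder',  score: 0.95, position: { x: 340, y: 180 } },
    { part: 'rightShoulder', score: 0.94, position: { x: 260, y: 182 } },
    // ...14 more keypoints (eyes, ears, elbows, wrists, hips, knees, ankles)
  ],
};

// Keep only confidently detected joints, then flatten to [x0, y0, x1, y1, ...]
// so the pose can be compared to other poses with a vector distance measure.
function toVector(pose, minScore = 0.5) {
  return pose.keypoints
    .filter((k) => k.score >= minScore)
    .flatMap((k) => [k.position.x, k.position.y]);
}
```

Note that the data contains only joint names, scores, and coordinates — nothing about who is in the frame, which is why pose estimation involves no personally identifiable information.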
This experiment was a collaborative effort by the PAIR, Research, and Creative Lab teams at Google and friends at Use All Five.