Move Mirror

You move and 80,000 images move with you

What is this?

Move Mirror is a machine learning experiment that lets you explore pictures in a fun new way, just by moving around. Move in front of your webcam and Move Mirror will match your real-time movements to hundreds of images of people around the world doing similar poses. It’s kind of like a magical mirror that reflects your moves with images of all kinds of human movement, from sports and dance to martial arts, acting, and beyond.

How are the images matched?

Move Mirror relies solely on pose information to find a matching image. By pose, we mean the relative locations of 17 body parts, such as the right shoulder, the left ankle, the right hip, and the nose. Move Mirror does not take into account race, gender, height, body type, or any other individual characteristic.
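
To make "matching on pose alone" concrete, here is a simplified sketch of one way to compare two poses (an illustration of the idea, not necessarily Move Mirror's exact production algorithm): normalize each set of keypoints to its bounding box, flatten to a vector, and measure cosine similarity. Identical body configurations at different positions and scales then score as close matches, and no appearance information is involved.

```javascript
// Each pose is an array of [x, y] keypoints (Move Mirror uses 17; three
// suffice for this toy example). Normalizing to the pose's bounding box
// removes absolute position and scale, so only the relative arrangement
// of body parts matters.
function normalize(keypoints) {
  const xs = keypoints.map(([x]) => x);
  const ys = keypoints.map(([, y]) => y);
  const minX = Math.min(...xs), minY = Math.min(...ys);
  const w = Math.max(...xs) - minX || 1; // guard against zero width
  const h = Math.max(...ys) - minY || 1; // guard against zero height
  // Flatten into one vector: [x0, y0, x1, y1, ...]
  return keypoints.flatMap(([x, y]) => [(x - minX) / w, (y - minY) / h]);
}

// Standard cosine similarity between two equal-length vectors: 1 means
// identical direction, values near 0 mean very different poses.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// The same "triangle" of keypoints, shifted and scaled, still matches:
const poseA = [[100, 50], [90, 80], [110, 80]];
const poseB = [[200, 150], [180, 210], [220, 210]];
console.log(cosineSimilarity(normalize(poseA), normalize(poseB)).toFixed(3)); // "1.000"
```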

Why did Google make this?

We wanted to make machine learning more accessible to coders and makers, and inspire them to experiment with this technology. With Move Mirror, we are showing how computer vision techniques like pose estimation can be used in fun new ways.

When I turn on my webcam, are my images being sent to Google servers?

No. Your images are not being sent to any Google servers while you’re interacting with Move Mirror. All the image recognition is happening locally in your browser.

Any tips I should keep in mind?

This experiment works best when there is only one person in the webcam view. Make sure your entire body is in the frame under good lighting conditions. Most of all, make sure to have fun and move around a lot to find the best images!

Why is the experience slow on my computer?

Move Mirror works best on newer computers with faster GPUs. If you have an older computer you can still try Move Mirror; it might just feel a little slower and less accurate.

How do I learn more about machine learning?

Here’s an intro-level video explainer. This site lets you interact with neural networks in more detail. And this free online course lets you dive in even deeper.

How was this built?

Move Mirror was built using a pose estimation model called PoseNet, which runs in the browser with TensorFlow.js, a library that allows web developers to train and run machine learning models in the browser. If you’re a coder or a maker curious to play with pose estimation, we encourage you to check out these two open-source libraries, as well as our Move Mirror blog post with technical tips and tricks to inspire your own projects.
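
As a sketch of the pipeline, the snippet below shows the kind of object PoseNet hands back to the page, and a hypothetical helper (`poseToVector`, our name, not part of either library) that flattens it into a plain numeric vector suitable for matching. The browser-side calls appear as comments; check the `@tensorflow-models/posenet` documentation for the current API.

```javascript
// In the browser, PoseNet runs on each webcam frame roughly like this
// (see the @tensorflow-models/posenet package docs for the current API):
//   const net = await posenet.load();
//   const pose = await net.estimateSinglePose(videoElement, { flipHorizontal: true });
//
// The result is a plain object: an overall score plus 17 named keypoints.
// Here is a mock of that shape, abbreviated to two keypoints:
const mockPose = {
  score: 0.9,
  keypoints: [
    { part: 'nose', score: 0.99, position: { x: 301, y: 120 } },
    { part: 'leftShoulder', score: 0.95, position: { x: 280, y: 180 } },
  ],
};

// Hypothetical helper: flatten keypoint positions into [x0, y0, x1, y1, ...]
// so the live pose can be compared against precomputed vectors for the image set.
function poseToVector(pose) {
  return pose.keypoints.flatMap(k => [k.position.x, k.position.y]);
}

console.log(poseToVector(mockPose)); // [ 301, 120, 280, 180 ]
```

Because everything above is plain JavaScript objects and arrays, the whole loop (estimate pose, vectorize, search for the nearest stored vector) can run locally in the browser with no images leaving the machine.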

What is PoseNet?

PoseNet is an open-source pose estimation model from Google that uses a webcam feed to detect body pose (i.e. where an elbow or an ankle appears in an image). To be clear, this technology does not recognize who is in an image: no personally identifiable information is associated with pose estimation. The algorithm simply estimates where key body joints are.
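
Each estimated keypoint also carries a confidence score, so applications typically discard joints the model is unsure about, such as an ankle hidden outside the frame. A minimal illustration (the 0.5 threshold is an arbitrary choice for this sketch, not a PoseNet default):

```javascript
// Per-keypoint confidence scores from a pose estimate (positions omitted):
const keypoints = [
  { part: 'nose', score: 0.98 },
  { part: 'leftElbow', score: 0.91 },
  { part: 'rightAnkle', score: 0.12 }, // likely occluded or out of frame
];

// Keep only the joints the model is reasonably confident about:
const confident = keypoints.filter(k => k.score >= 0.5).map(k => k.part);
console.log(confident); // [ 'nose', 'leftElbow' ]
```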

Who made this?

This experiment was a collaborative effort by the PAIR, Research, and Creative Lab teams at Google, with friends at Use All Five.