October 2018 | By Jay Alan Zimmerman
Experience music visually.
This is a tool for visualizing music. You can turn on your mic to sing or play sounds. You can also drop in your own audio or video file. Some modes – like Hilbert Scope and Spectrogram – show the subtle textures of sound. Others show the paths and shapes of different melodies.
Use Basic Mode to visualize monophonic music such as a human voice. Use Piano Mode to visualize polyphonic music such as piano recordings. You can also use Piano Mode to visualize a live performance using a MIDI keyboard.
The pitch detection is based on this code. The Hilbert scope is based on this code. The piano transcription is built with Onsets and Frames, a machine learning model made by the Magenta team at Google.
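To give a sense of what a Hilbert scope does, here is a minimal sketch (not the experiment's actual implementation): the signal is paired with its Hilbert transform to form the analytic signal, and the real and imaginary parts become the X and Y coordinates of the curve drawn on screen. The signal and sample rate below are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative input: a one-second 440 Hz sine at an assumed 8 kHz sample rate.
sr = 8000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)

# The analytic signal combines the input with its Hilbert transform.
analytic = hilbert(signal)

# A Hilbert scope plots these two coordinates against each other.
x, y = analytic.real, analytic.imag

# For a pure tone the curve is a circle: the instantaneous
# amplitude (the distance from the origin) stays constant.
radius = np.abs(analytic)
```

Richer sounds bend this circle into more intricate shapes, which is why the mode reveals the "texture" of a sound rather than just its pitch.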
This experiment was made by Jay Alan Zimmerman, Yotam Mann, Claire Kearney-Volpe, Luisa Pereira, Kyle Phillips, and Google Creative Lab. Music performed by Jacquelyn Briggs (voice), Sam Posner (saxophone), Matt Lewcowicz (guitar), Melissa Tong (violin), and Jonathan Singer (tabla). Special thanks to Hanna Ehrenberg, Julia Silvestri, and our friends at Henry Viscardi School at The Viscardi Center, Tech Kids Unlimited, and ADAPT Community Network.