This experiment lets you play with new sounds created with machine learning. It's built using NSynth, a neural network trained on over 300,000 instrument sounds.
Traditionally, a computer turns sound waves directly into numbers. To mix a bass and flute, you would add the numbers at each point along each sound wave. And that would sound like this – a bass playing with a flute:
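If you're curious what that looks like in code, here's a minimal sketch in Python, assuming the two recordings have already been loaded as equal-length NumPy arrays of samples between -1.0 and 1.0 (all names here are made up for illustration):

```python
import numpy as np

def mix_waveforms(bass: np.ndarray, flute: np.ndarray) -> np.ndarray:
    """Traditional mixing: add the numbers at each point along the
    two sound waves, then halve the result to avoid clipping."""
    n = min(len(bass), len(flute))   # trim to the shorter clip
    return (bass[:n] + flute[:n]) / 2.0

# Stand-in signals: one second at 16 kHz, a low "bass" tone and a high "flute" tone.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
bass = 0.5 * np.sin(2 * np.pi * 110 * t)
flute = 0.5 * np.sin(2 * np.pi * 880 * t)
both = mix_waveforms(bass, flute)   # sounds like two instruments playing at once
```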
NSynth also turns sound into numbers. But these numbers don't represent the sound wave itself. They represent a more abstract description of the sound. When you mix a bass and flute using these numbers, it sounds more like a new, hybrid bass-flute sound:
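Here's a sketch of the difference, again in Python. The `encode` and `decode` arguments are placeholders standing in for NSynth's encoder and decoder (this is not the real Magenta API, just the shape of the idea):

```python
def mix_embeddings(encode, decode, bass, flute, mix=0.5):
    """NSynth-style mixing: blend the abstract descriptions of the two
    sounds, then synthesize a single new sound from the blend.
    `mix` slides from pure bass (0.0) to pure flute (1.0)."""
    z_bass = encode(bass)                           # abstract description of the bass
    z_flute = encode(flute)                         # abstract description of the flute
    z_blend = (1.0 - mix) * z_bass + mix * z_flute  # blend descriptions, not waveforms
    return decode(z_blend)                          # one new, hybrid instrument
```

In the experiment, moving the slider corresponds roughly to changing `mix`.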
You also get interesting results playing non-instrument sounds through NSynth, like a cow's moo. Since NSynth was trained only on instruments, it tries to recreate the sound using only what it knows about how instruments behave:
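In the same placeholder terms, the moo example is just a round trip with no blending at all:

```python
def recreate(encode, decode, moo):
    """The encoder can only describe what it knows (instruments), so the
    decoder plays the moo back as if an instrument had produced it."""
    return decode(encode(moo))
```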
This experiment lets anyone explore these sounds and make music with them. Just move the slider and tap the keys. You can also use your computer keys or a MIDI keyboard. The samples aren't always perfectly in tune and they do sound a bit unusual – but we think that's what makes them fun. We will be open-sourcing the front-end code and sounds here, so stay tuned.
Built by the Magenta and Creative Lab teams at Google. NSynth is built with TensorFlow and a WaveNet-style autoencoder, and the experiment's front end uses Tone.js. Learn about NSynth here.