Oleg Zabluda's blog
Tuesday, May 16, 2017
NSynth: Neural Audio Synthesis
We are proud to announce NSynth (Neural Synthesizer), [...] Unlike a traditional synthesizer, which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples. Learning directly from data, NSynth gives artists intuitive control over timbre and dynamics, and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.

The acoustic qualities of the learned instrument depend on both the model used and the available training data, so we are delighted to release improvements to both:

A dataset of musical notes an order of magnitude larger than other publicly available corpora.

A novel WaveNet-style autoencoder model that learns codes that meaningfully represent the space of instrument sounds.
The NSynth dataset is a large collection of annotated musical notes sampled from individual instruments across a range of pitches and velocities. With ~300k notes from ~1000 instruments, it is an order of magnitude larger than comparable public datasets.
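Because each note in the dataset is annotated with properties such as pitch, velocity, and instrument family, it is straightforward to slice the corpus programmatically. Below is a minimal sketch of filtering such annotations; the field names (`pitch`, `velocity`, `instrument_family_str`) follow the public NSynth release, but treat the exact schema and the example note IDs as assumptions for illustration.

```python
# Sketch: filtering NSynth-style note annotations by instrument family
# and velocity. Field names and note IDs are illustrative assumptions.
sample_metadata = {
    "guitar_acoustic_010-060-075": {
        "pitch": 60, "velocity": 75, "instrument_family_str": "guitar",
    },
    "keyboard_electronic_001-048-127": {
        "pitch": 48, "velocity": 127, "instrument_family_str": "keyboard",
    },
}

def notes_matching(metadata, family, min_velocity=0):
    """Return note IDs for one instrument family at or above a velocity."""
    return [
        note_id
        for note_id, ann in metadata.items()
        if ann["instrument_family_str"] == family
        and ann["velocity"] >= min_velocity
    ]

print(notes_matching(sample_metadata, "guitar"))
```

With annotations in this shape, selecting training or interpolation material (say, all loud keyboard notes) reduces to a one-line query.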

