Oleg Zabluda's blog
Thursday, September 08, 2016
 
WaveNet: A Generative Model for Raw Audio
"""
generating speech with computers — a process usually referred to as speech synthesis or text-to-speech (TTS) — is still largely based on so-called concatenative TTS, where a very large database of short speech fragments is recorded from a single speaker and then recombined to form complete utterances. This makes it difficult to modify the voice (for example, switching to a different speaker, or altering the emphasis or emotion of their speech) without recording a whole new database.

This has led to a great demand for parametric TTS, where all the information required to generate the data is stored in the parameters of the model, and the contents and characteristics of the speech can be controlled via the inputs to the model. So far, however, parametric TTS has tended to sound less natural than concatenative, at least for syllabic languages such as English. Existing parametric models typically generate audio signals by passing their outputs through signal processing algorithms known as vocoders.

WaveNet changes this paradigm by directly modelling the raw waveform of the audio signal, one sample at a time. As well as yielding more natural-sounding speech, using raw waveforms means that WaveNet can model any kind of audio, including music.

Researchers usually avoid modelling raw audio because it ticks so quickly: typically 16,000 samples per second or more, with important structure at many time-scales. Building a completely autoregressive model, in which the prediction for every one of those samples is influenced by all previous ones (in statistics-speak, each predictive distribution is conditioned on all previous observations), is clearly a challenging task.

However, our PixelRNN and PixelCNN models, published earlier this year, showed that it was possible to generate complex natural images not only one pixel at a time, but one colour-channel at a time, requiring thousands of predictions per image. This inspired us to adapt our two-dimensional PixelNets to a one-dimensional WaveNet.

[An animation in the original post] shows how a WaveNet is structured. It is a fully convolutional neural network, where the convolutional layers have various dilation factors that allow its receptive field to grow exponentially with depth and cover thousands of timesteps.
"""
https://deepmind.com/blog/wavenet-generative-model-raw-audio/
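The exponential receptive-field growth from dilation can be sketched numerically. The kernel size of 2 and the doubling dilation schedule below are illustrative choices for the sketch, not configuration taken from the paper:

```python
# Sketch: how stacked dilated causal convolutions grow the receptive field.
# Kernel size 2 with dilations doubling per layer is an illustrative choice;
# the exact WaveNet configuration is described in the paper itself.

def receptive_field(kernel_size, dilations):
    """Number of past input samples that influence one output sample."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

dilations = [2 ** i for i in range(10)]  # 1, 2, 4, ..., 512
rf = receptive_field(2, dilations)
print(rf)  # → 1024: ten layers already cover 1,024 timesteps
```

At 16,000 samples per second, those 1,024 samples span only 64 ms, which is why a deep stack (or several stacked blocks) is needed to cover the longer-range structure in speech.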



 
ARM adds 2048-bit vectors to v8A with SVE
"""
ARM unveiled their SVE extensions for supercomputing. [...] Scalable Vector Extension and it does indeed scale from 128 to 2048 bits in 128b chunks. It is an optional ISA extension for ARMv8-A/AArch64 for use in supercomputing, not consumer or media type work. While it may fit some of those workloads, it is not NEON v2; it is separate and distinct by design. It also isn't fully finalized and public; that release is expected in late 2016 or early 2017, with silicon bearing SVE not expected until 2019 or 2020.
[...]
The nice thing about SVE is that it is vector-length agnostic: your hardware can range from 128-2048b, the code can be written for 128-2048b vector units, and they don't have to match. If your vectors are 2048b wide and the hardware is only 128b wide, the code will automatically run in 16 passes. If the code uses 128b-wide vectors and the hardware is 2048b wide, 15/16ths of the hardware will be powered down.
[...]
Given that the marquee customer is Fujitsu and their Post-K supercomputer [to replace Sparc] which will use a 512b wide SVE pipe, you can be pretty sure their data will come in 512b increments. Others making SVE enabled silicon for similar projects can pick the physical widths to suit their projects.

One thing SVE won't do is pack unfilled vector units with multiple disparate instructions automatically. If you have a 512b SVE unit and four independent 128b vectors, the hardware will not automagically run them in one cycle. If you have a compiler that can pack this type of work together beforehand, you win, but the hardware won't do it for you. This, plus the ISA itself, is why SVE isn't really suited for consumer or image processing work.
[...]
scatter/gather, per-lane predication, and predicate-based loop control.
"""
http://semiaccurate.com/2016/09/07/arm-adds-2048-bit-vectors-v8a-sve/
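The vector-length-agnostic loop with per-lane predication can be illustrated with a plain scalar simulation. The lane counts and the `whilelt`-style predicate below model SVE behavior in Python; they are not real SVE intrinsics:

```python
# Simulation of an SVE-style vector-length-agnostic loop: the same loop body
# processes an array correctly whether the "hardware" vector is 128b or 2048b
# wide (4 or 64 32-bit lanes). The predicate masks off lanes past the end of
# the array, mimicking SVE's whilelt-driven per-lane predication.

def vla_add(a, b, lanes):
    """Add two equal-length lists in chunks of `lanes` elements, predicated."""
    out = [0] * len(a)
    i = 0
    while i < len(a):
        # Predicate: True for lanes still inside the array (whilelt-style).
        pred = [i + lane < len(a) for lane in range(lanes)]
        for lane in range(lanes):
            if pred[lane]:
                out[i + lane] = a[i + lane] + b[i + lane]
        i += lanes
    return out

a = list(range(10))
b = [100] * 10
# Identical results on simulated 128-bit (4-lane) and 2048-bit (64-lane) units.
assert vla_add(a, b, 4) == vla_add(a, b, 64) == [x + 100 for x in a]
```

The point of the sketch: the loop never hard-codes the vector width, so the same "binary" runs on any hardware width, with the final partial chunk handled by the predicate rather than by scalar cleanup code.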



 
"""
"""
Astronomers have created the most detailed computer simulation to date of our Milky Way galaxy's formation [...] Previous simulations predicted that thousands of [...] satellite [...] dwarf galaxies should exist. However, only about 30 of the small galaxies have ever been observed. Astronomers have been tinkering with the simulations, trying to understand this "missing satellites" problem, to no avail.

Now, with the new simulation—which used a network of thousands of computers running in parallel for 700,000 central processing unit (CPU) hours—Caltech astronomers have created a galaxy that looks like the one we live in today, with the correct, smaller number of dwarf galaxies.
[...]
One of the main updates to the new simulation relates to how supernovae, [...] winds, which reach speeds up to thousands of kilometers per second, "can blow gas and stars out of a small galaxy," [...] Previous simulations that were producing thousands of dwarf galaxies weren't taking the full effects of supernovae into account.

"We had thought before that perhaps our understanding of dark matter was incorrect in these simulations, but these new results show we don't have to tinker with dark matter," says Wetzel. "When we more precisely model supernovae, we get the right answer."
"""
http://www.caltech.edu/news/recreating-our-galaxy-supercomputer-51995

Supercomputers Solve Case of Missing Galaxies
https://www.youtube.com/watch?v=b0R-2mM0Ghs

"Reconciling Dwarf Galaxies with ΛCDM Cosmology: Simulating A Realistic Population of Satellites Around a Milky Way-Mass Galaxy," (2016)
https://www.sciencedaily.com/releases/2016/09/160907135150.htm



 
Intel XSAVE, XSAVEC
http://events.linuxfoundation.org/sites/events/files/slides/LinuxCon_NA_2014.pdf



