Oleg Zabluda's blog
Monday, August 07, 2017
 
Accelerating Deep Learning Research with the Tensor2Tensor Library
"""
Tensor2Tensor (T2T) is an open-source system for training deep learning models in TensorFlow. T2T facilitates the creation of state-of-the-art models for a wide variety of ML applications, such as translation, parsing, image captioning, and more, enabling the exploration of various ideas much faster than previously possible. This release also includes a library of datasets and models, including the best models from a few recent papers (Attention Is All You Need, Depthwise Separable Convolutions for Neural Machine Translation and One Model to Learn Them All) to help kick-start your own DL research.
[...]
The T2T library is built with familiar TensorFlow tools and defines multiple pieces needed in a deep learning system: datasets, model architectures, optimizers, learning rate decay schemes, hyperparameters, and so on.
[...]
This means that T2T is flexible, with training no longer pinned to a specific model or dataset. It is so easy that even architectures like the famous LSTM sequence-to-sequence model can be defined in a few dozen lines of code. One can also train a single model on multiple tasks from different domains. Taken to the limit, you can even train a single model on all datasets concurrently, and we are happy to report that our MultiModel, trained like this and included in T2T, yields good results on many tasks even when training jointly on ImageNet (image classification), MS COCO (image captioning), WSJ (speech recognition), WMT (translation) and the Penn Treebank parsing corpus. It is the first time a single model has been demonstrated to be able to perform all these tasks at once.
[...]
Built-in Best Practices

With this initial release, we also provide scripts to generate a number of datasets widely used in the research community[1], a handful of models[2], a number of hyperparameter configurations, and a well-performing implementation of other important tricks of the trade. While it is hard to list them all, if you decide to run your model with T2T you’ll get for free the correct padding of sequences and the corresponding cross-entropy loss, well-tuned parameters for the Adam optimizer, adaptive batching, synchronous distributed training, well-tuned data augmentation for images, label smoothing, and a number of hyperparameter configurations that worked very well for us, including the ones mentioned above that achieve state-of-the-art results on translation and may help you get good results too.
"""
https://research.googleblog.com/2017/06/accelerating-deep-learning-research.html
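
The quoted passage says T2T defines the pieces of a deep learning system (datasets, models, hyperparameter sets, and so on); these are wired together through a registry. Below is a minimal sketch based on the T2T documentation of the time (registry.register_model, registry.register_hparams, T2TModel.body); exact module paths, shapes, and attribute names vary by version, so treat it as illustrative rather than authoritative.

import tensorflow as tf

from tensor2tensor.layers import common_hparams
from tensor2tensor.utils import registry
from tensor2tensor.utils import t2t_model


@registry.register_hparams
def my_tiny_hparams():
  # Start from a stock hyperparameter set and tweak one field.
  hparams = common_hparams.basic_params1()
  hparams.hidden_size = 128
  return hparams


@registry.register_model
class MyTinyModel(t2t_model.T2TModel):
  def body(self, features):
    # "inputs" arrives already embedded; T2T wraps this body with the
    # input/output modalities and attaches the loss automatically.
    return tf.layers.dense(features["inputs"], self._hparams.hidden_size)

Registered components are then selectable by name on the command line (T2T snake-cases class names), e.g. t2t-trainer --model=my_tiny_model --hparams_set=my_tiny_hparams.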
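
To give a sense of scale for the claim that an LSTM sequence-to-sequence model fits in a few dozen lines, here is a generic encoder-decoder sketch in plain Keras. This is not T2T's own definition, and the vocabulary and hidden sizes are made up for the example.

import tensorflow as tf

VOCAB_SRC, VOCAB_TGT, HIDDEN = 8000, 8000, 512

# Encoder: embed source tokens and keep only the final LSTM state.
enc_in = tf.keras.Input(shape=(None,), dtype="int32")
enc_emb = tf.keras.layers.Embedding(VOCAB_SRC, HIDDEN)(enc_in)
_, state_h, state_c = tf.keras.layers.LSTM(HIDDEN, return_state=True)(enc_emb)

# Decoder: embed shifted target tokens, start from the encoder's state.
dec_in = tf.keras.Input(shape=(None,), dtype="int32")
dec_emb = tf.keras.layers.Embedding(VOCAB_TGT, HIDDEN)(dec_in)
dec_out = tf.keras.layers.LSTM(HIDDEN, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c])
logits = tf.keras.layers.Dense(VOCAB_TGT)(dec_out)

model = tf.keras.Model([enc_in, dec_in], logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))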
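
Two of the "tricks of the trade" called out in the Built-in Best Practices section, label smoothing and padding-aware cross-entropy, are compact enough to sketch. This is the standard formulation (mix the one-hot target with a uniform distribution over the vocabulary, then mask padded positions), not necessarily T2T's exact implementation; the pad_id convention is an assumption.

import tensorflow as tf


def smoothed_padded_xent(logits, targets, vocab_size,
                         smoothing=0.1, pad_id=0):
  # logits:  [batch, length, vocab_size] floats.
  # targets: [batch, length] ints, with pad_id marking padding.
  confidence = 1.0 - smoothing
  # Soft target = (1 - eps) * one_hot + eps * uniform over the vocab.
  soft_targets = tf.one_hot(
      targets, vocab_size,
      on_value=confidence + smoothing / vocab_size,
      off_value=smoothing / vocab_size)
  xent = -tf.reduce_sum(soft_targets * tf.nn.log_softmax(logits), axis=-1)
  # Zero out padded positions and average over the real tokens only.
  mask = tf.cast(tf.not_equal(targets, pad_id), xent.dtype)
  return tf.reduce_sum(xent * mask) / tf.reduce_sum(mask)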

 
Here's How Much One of Tesla's Amazing Solar Roofs Actually Costs
https://www.inverse.com/article/35106-tesla-solar-roof-actual-price

 
A Brief Open Source History of the Syrian Barrel Bomb
https://www.bellingcat.com/news/mena/2015/07/08/a-brief-open-source-history-of-the-syrian-barrel-bomb/

