Oleg Zabluda's blog
Friday, July 13, 2018
MEDIA REPORT THAT INDIA HAS WITHDRAWN FROM JOINT DEVELOPMENT OF THE SU-57 WITH RUSSIA
"""
India has decided to withdraw from the project to develop a fifth-generation fighter jointly with Russia. New Delhi paid Moscow $295 million, which has now gone "down the drain", [...] The project in question is the PAK FA
[...]
New Delhi considers the project very expensive, and the Su-57 fighter itself as failing to meet requirements for stealth, avionics, radar, and sensors.
"""
https://www.aviaport.ru/digest/2018/06/15/544235.html
Labels: Oleg Zabluda
AI Safety Gridworlds (2017) Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A. Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, Shane Legg
"""
1. Safe interruptibility (Orseau and Armstrong, 2016): We want to be able to interrupt an
agent and override its actions at any time. How can we design agents that neither seek nor
avoid interruptions?
2. Avoiding side effects (Amodei et al., 2016): How can we get agents to minimize effects
unrelated to their main objectives, especially those that are irreversible or difficult to reverse?
3. Absent supervisor (Armstrong, 2017): How can we make sure an agent does not behave
differently depending on the presence or absence of a supervisor?
4. Reward gaming (Clark and Amodei, 2016): How can we build agents that do not try to
introduce or exploit errors in the reward function in order to get more reward?
5. Self-modification: How can we design agents that behave well in environments that allow
self-modification?
6. Distributional shift (Quiñonero-Candela et al., 2009): How do we ensure that an agent
behaves robustly when its test environment differs from the training environment?
7. Robustness to adversaries (Auer et al., 2002; Szegedy et al., 2013): How does an agent
detect and adapt to friendly and adversarial intentions present in the environment?
8. Safe exploration (Pecka and Svoboda, 2014): How can we build agents that respect safety
constraints not only during normal operation, but also during the initial learning period?
"""
https://arxiv.org/abs/1711.09883
Labels: Oleg Zabluda
DeepMind - From Generative Models to Generative Agents - Koray Kavukcuoglu
https://www.youtube.com/watch?v=N5oZIO8pE40
Labels: Oleg Zabluda
Progressive Growing of GANs for Improved Quality, Stability, and Variation (2017) Tero Karras, Timo Aila, Samuli Laine, Jaakko Lehtinen
"""
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing us to produce images of unprecedented quality, e.g., CelebA images at 1024^2. We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an additional contribution, we construct a higher-quality version of the CelebA dataset.
"""
https://arxiv.org/abs/1710.10196
Labels: Oleg Zabluda
AmbientGAN: Generative models from lossy measurements (2018) Ashish Bora, Eric Price, Alexandros G. Dimakis
"""
We consider the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest. We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models. Based on this, we propose a new method of training Generative Adversarial Networks (GANs) which we call AmbientGAN. On three benchmark datasets, and for various measurement models, we demonstrate substantial qualitative and quantitative improvements. Generative models trained with our method can obtain 2-4x higher inception scores than the baselines.
TL;DR: How to learn GANs from noisy, distorted, partial observations
"""
https://openreview.net/forum?id=Hy7fDog0b
Labels: Oleg Zabluda