Oleg Zabluda's blog
Friday, September 30, 2016
US Army introduces the new superalloy Mondaloy 200 for Rocket Engines (US)
"""
The sub-scale preburner test campaign accomplished the first demonstrations of several key rocket engine technologies, including the first use of Mondaloy 200 superalloy in a rocket engine environment and the first operation of a diluent type preburner. Demonstration of Mondaloy 200, which was co-developed by Aerojet Rocketdyne and the AFRL Materials Directorate, was a critical step to proving the unique combination of high-strength and burn resistance necessary for hardware survival in the harsh ORSC rocket environment. “These tests are a significant milestone for our program, but also just the beginning of an effort to develop and transition the tools, components and knowledge needed for our customer and the U.S. rocket industry,” said Dr. Shawn Phillips, chief of the AFRL Rocket Propulsion Division.
"""
http://www.france-metallurgie.com/us-army-introduces-the-new-superalloy-mondaloy-200-for-rocket-engines-us/
Hydrocarbon Boost (HCB)
"""
The three critical components are the ox-rich preburner (ORPB), turbopump assembly (TPA), and the thrust chamber assembly (TCA). The ORPB is a full flow combustion device that operates at a low mixture ratio, close to stoichiometric. Downstream LOX diluent is injected into the hot gas to produce uniform temperature gas to the turbine. [...] incorporates Mondaloy 200™ to provide the high strength ox-resistant material.
"""
http://www.rocket.com/hydrocarbon-boost-hcb
"""
[Monica] “Jacinto has developed and patented Mondaloy 100 and 200, which are burn resistant alloys for gaseous oxygen environment applications that greatly reduce the weight of the components over conventional materials used on previous engine development programs. Its properties allow space vehicles to be made thinner and lighter and remove the need for protective coatings. As a result, the vehicles have increased safety and reliability, and decreased cost.”
"""
https://forum.nasaspaceflight.com/index.php?topic=34330.0
"""
Burn-resistant metal alloys that also have a high tensile strength are described. The alloys generally include about 55 to about 75 weight percent nickel, about 12 to about 17 weight percent cobalt, about 4 to about 16 weight percent chromium, about 1 to about 4 weight percent aluminum, and about 1 to about 4 weight percent titanium.
"""
https://www.google.com/patents/US20030053926
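The claimed composition windows from the patent abstract can be expressed as a quick range check. A minimal sketch — the ranges are taken from the quoted abstract, but the helper function and the sample composition are illustrative assumptions, not actual Mondaloy data:

```python
# Weight-percent ranges for the claimed elements, as quoted from the
# patent abstract (US20030053926). Balance and minor elements omitted.
RANGES = {
    "Ni": (55.0, 75.0),
    "Co": (12.0, 17.0),
    "Cr": (4.0, 16.0),
    "Al": (1.0, 4.0),
    "Ti": (1.0, 4.0),
}

def within_claimed_ranges(composition: dict) -> bool:
    """True if every claimed element falls inside its patent range."""
    return all(lo <= composition.get(el, 0.0) <= hi
               for el, (lo, hi) in RANGES.items())

# Hypothetical composition for illustration only:
sample = {"Ni": 62.0, "Co": 15.0, "Cr": 10.0, "Al": 2.5, "Ti": 2.5}
print(within_claimed_ranges(sample))  # True
```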
https://www.google.com/patents/US20040208777
Tribute to Dallis Hardwick (1950-2014)
"""
I decided to take a poll among my project team members and a few others within the company who had a vested interest. The overwhelming response was for the alloy name to be a combination of the names Dallis and Monica. I suggested to Dallis it should be Dalmonoy and she responded, "That does not sound as good as Mondaloy - let's name it that". It stuck and the two variants became Mondaloy 100 and Mondaloy 200. I think this was just Dallis being humble. I suspect she was as uncomfortable as I was to have an alloy named after oneself.
"""
http://www.materials.unsw.edu.au/newsletter/2014/dec/tribute-dallis-hardwick-1950-2014
Labels: Oleg Zabluda
"""
"""
The relation of motor maximum torque (τmax) to motor mass is shown in Fig. 1. Two distinct regimes of maximum torque scaling with mass are apparent. Group a motors (RC servos) have maximum torque outputs that scale isometrically with motor mass (Γa ∝ m^1.00, R² = 0.74). Motors in this group include a rotary electric motor plus a gearbox. Group b motors (Maxon) have maximum torque outputs that scale allometrically to motor mass (Γb ∝ m^1.27, R² = 0.96). Motors in this group consist of only rotary electric motors, without a gearbox.
"""
http://www.sciencedirect.com/science/article/pii/S1877050911005989
Labels: Oleg Zabluda
Fox Village in Zao Japan!
https://www.youtube.com/watch?v=92wtDKCtOiU
Channel:
https://www.youtube.com/channel/UC4yqcgz49APdbgj0OMv7jpA
Labels: Oleg Zabluda
"""
"""
Ukraine has published 25 archival KGB documents that attest to the measures taken by the Soviet security services in connection with commemorations of the Jews killed by the Nazis at Babyn Yar during World War II.
The documents, which date from 1966–1978, show that the Soviet security services regarded honoring the memory of the Jewish victims of the Holocaust as manifestations of "Zionism," "Jewish extremism," and "provocative extremist actions."
The KGB archives were first opened in 2007, three years after the Orange Revolution. In 2010, when Viktor Yanukovych became president, they were closed again. In early 2015, the Security Service of Ukraine transferred the archives to the Ukrainian Institute of National Memory.
[...]
In special reports addressed to the First Secretary of the Central Committee of the Communist Party of Ukraine, the KGB chairman reported on planned Jewish commemorations and on KGB measures for their operational surveillance and, frequently, for disrupting them and detaining participants.
"""
http://www.svoboda.org/a/28018223.html
Labels: Oleg Zabluda
Internal email: Microsoft forms new 5,000-person AI division; top exec Qi Lu leaving after bike injury
"""
the full text of Nadella’s memo to employees about the changes.
"""
http://www.geekwire.com/2016/internal-email-microsoft-forms-new-5000-person-ai-division-top-exec-qi-lu-leaving-bike-injury/
Labels: Oleg Zabluda
Variable Rate Image Compression with Recurrent Neural Networks (2015) George Toderici et al
"""
standard autoencoders operate under a number of hard constraints that have so far made them infeasible as a drop-in replacement for standard image codecs. Some of these constraints are that variable rate encoding is typically not possible (one network is trained per compression rate); the visual quality of the output is hard to ensure; and they’re typically trained for a particular scale, being able to capture redundancy only at that scale.
We explore several different ways in which neural network-driven image compression can improve compression rates while allowing similar flexibility to modern codecs. To achieve this flexibility, the network architectures we discuss must meet all of the following requirements: (1) the compression rate should be capable of being restricted to a prior bit budget; (2) the compressor should be able to encode simpler patches more cheaply
[...]
A typical compressing autoencoder has three parts: (1) an encoder which consumes an input (e.g., a fixed-dimension image or patch) and transforms it into (2) a bottleneck representing the compressed data, which can then be transformed by (3) a decoder into something resembling the original input. These three elements are trained end-to-end, but during deployment the encoder and decoder are normally used independently. The bottleneck is often simply a flat neural net layer, which allows the compression rate and visual fidelity of the encoded images to be controlled by adjusting the number of nodes in this layer before training. For some types of autoencoder, encoding the bottleneck as a simple bit vector can be beneficial (Krizhevsky & Hinton, 2011). In neural net-based classification tasks, images are repeatedly downsampled through convolution and pooling operations, and the entire output of the network might be contained in just a single node. In the decoder half of an autoencoder, however, the network must proceed in the opposite direction and convert a short bit vector into a much larger image or image patch. When this upsampling process is spatially-aware, resembling a “backward convolution,” it is commonly referred to as deconvolution (Long et al., 2014).
[...]
To make it possible to transmit incremental information, the design should take into account the fact that image decoding will be progressive. With this design goal in mind, we can consider architectures that are built on top of residuals with the goal of minimizing the residual error in the reconstruction as additional information becomes available to the decoder [...] a varying number of bits per patch by allowing a varying number of iterations of the encoder.
[...]
In our networks, we employ a binarization technique first proposed by Williams (1992), and similar to Krizhevsky & Hinton (2011) and Courbariaux et al. (2015).
[...]
The binarization process consists of two parts. The first part consists of generating the required number of outputs (equal to the desired number of output bits) in the continuous interval [−1, 1]. The second part involves taking this real-valued representation as input and producing a discrete output in the set {−1, 1} for each value. For the first step in the binarization process, we use a fully-connected layer with tanh activations. For the second part, following Raiko et al. (2015) [OZ: stochastic rounding]
[OZ: See very interesting correspondence between...]
3.3 FEED-FORWARD FULLY-CONNECTED RESIDUAL ENCODER
3.4 LSTM-BASED COMPRESSION
"""
https://arxiv.org/abs/1511.06085
Labels: Oleg Zabluda