Oleg Zabluda's blog
Monday, September 26, 2016
 
Oculus Rift inventor Palmer Luckey is funding Trump’s racist meme machine
Admits involvement with pro-Trump nonprofit, deletes Reddit account.
"""
The stream of racist, sexist, and economically illiterate memes appearing in support of Donald Trump during this year's interminable American presidential election is being bankrolled in part by the 24-year-old inventor of Oculus Rift.

Palmer Luckey, who came into a personal fortune worth $700 million (£535 million) when his VR headset firm was bought out by Facebook, has admitted to backing an unofficial pro-Trump political non-profit called Nimble America that is powering the tsunami of white supremacist and other racist image macros that have plagued Reddit.

According to research by the Daily Beast, Luckey is intimately involved with the group. He has also been active in r/The_Donald, the Reddit community that acts as something of a ground zero for endless election-related memes.
"""
http://arstechnica.com/tech-policy/2016/09/how-your-oculus-rift-is-secretly-funding-donald-trumps-racist-meme-wars/

 
Self-driving car alliances:

Fiat Chrysler, Google (May 2016)

GM, Lyft (Jan 2016)
http://www.reuters.com/article/us-gm-lyft-investment-idUSKBN0UI1A820160104

BMW, Mobileye, Intel (Jun 2016)
http://www.wsj.com/articles/bmw-intel-mobileye-link-up-in-self-driving-tech-alliance-1467379145

Volvo, Uber+Otto (Aug 2016)
http://www.reuters.com/article/us-volvo-uber-idUSKCN10T12B

Delphi, Mobileye (Aug 2016)
http://www.wsj.com/articles/delphi-mobileye-join-forces-to-develop-self-drive-system-1471924804

Baidu, Nvidia, Udacity, Mercedes-Benz, Uber+Otto
http://www.nasdaq.com/article/nvidia-baidu-team-up-to-develop-selfdriving-car-20160901-00093
https://blogs.nvidia.com/blog/2016/09/23/udacity-nvidia-teach-self-driving-car/

Volkswagen, Gett (Aug 2016)
http://www.reuters.com/article/us-volkswagen-strategy-idUSKCN10U130

Renault-Nissan (Infiniti, Dacia, Datsun, Venucia, Lada, Mitsubishi), Microsoft (Sep 2016)
http://www.pcmag.com/news/348180/microsoft-azure-to-keep-renault-nissan-self-driving-cars-con
http://www.autoblog.com/2016/04/26/ford-volvo-google-uber-lyft-self-driving-cars

 
Self-Driving Cars Were Just Around the Corner—in 1960
"""
on a test track in Princeton, N.J. Cars drove themselves around the track, using sensors on their front bumpers to detect an electrical cable embedded in the road. The cable carried signals warning of obstructions ahead (like road work or a stalled vehicle), and the car could autonomously apply its brakes or switch lanes as necessary.
[...]
According to the RCA vision, it would be just a decade or two until all highway driving was autonomous, with human drivers taking over only when their exit approached.
"""
http://spectrum.ieee.org/geek-life/history/selfdriving-cars-were-just-around-the-cornerin-1960
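The guidance scheme quoted above, in which the car senses a buried cable and corrects its course, can be illustrated with a toy proportional-control loop. All names, gains, and dynamics here are hypothetical, not RCA's actual design:

```python
# Toy proportional controller, not RCA's actual design: the car repeatedly
# senses its lateral offset from the buried guide cable and steers to re-center.
def simulate_cable_following(initial_offset_m, steps=50, dt=0.1, gain=1.5):
    """Return the lateral offset from the cable at each time step."""
    offset = initial_offset_m
    history = [offset]
    for _ in range(steps):
        correction = -gain * offset   # steer against the sensed offset
        offset += correction * dt     # simplified lateral dynamics
        history.append(offset)
    return history

trajectory = simulate_cable_following(0.8)  # start 0.8 m off the cable center
```

With these made-up gains the offset decays geometrically toward zero, which is the qualitative behavior the excerpt describes: the cable supplies the reference, and the car closes the loop.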

"""
Zworykin:

Well, the system was very simple. [..] When the automobile rolled over the cable it changed the cable impedance so that it produced a signal. That indicated speed and location and gave information for traffic lights and counting automobiles, and so forth.
"""
http://ethw.org/Oral-History:Vladimir_Zworykin

Patent: Automatic control system for vehicles, Vladimir K Zworykin et al
https://www.google.com/patents/US2847080
http://spectrum.ieee.org/geek-life/history/selfdriving-cars-were-just-around-the-cornerin-1960

 
"""
The ventral stream begins with V1, goes through visual area V2, then through visual area V4, and to the inferior temporal cortex (IT cortex).
[...]
Primary visual cortex (V1)

The primary visual cortex is the best-studied visual area in the brain. In all mammals studied, it is located in the posterior pole of the occipital cortex (the occipital cortex is responsible for processing visual stimuli). It is the simplest, earliest cortical visual area. It is highly specialized for processing information about static and moving objects and is excellent in pattern recognition.
[...]
The primary visual cortex is divided into six functionally distinct layers, labeled 1 through 6. Layer 4, which receives most visual input from the lateral geniculate nucleus (LGN), is further divided into 4 layers, labeled 4A, 4B, 4Cα, and 4Cβ. Sublamina 4Cα receives most magnocellular input from the LGN, while layer 4Cβ receives input from parvocellular pathways.

The occipital cortex where the visual cortex resides is the smallest of the four cortices of the human brain, which also includes the temporal cortex, parietal cortex, and frontal cortex. The average number of neurons in the adult human primary visual cortex, in each hemisphere, has been estimated at around 140 million.
[...]
V1 has a very well-defined map of the spatial information in vision. [...] retinotopic mapping is a transformation of the visual image from retina to V1. The correspondence between a given location in V1 and in the subjective visual field is very precise: even the blind spots are mapped into V1. In terms of evolution, this correspondence is very basic and found in most animals that possess a V1. In humans and animals with a fovea in the retina, a large portion of V1 is mapped to the small, central portion of visual field, [...] neurons in V1 have the smallest receptive field size of any visual cortex microscopic regions.

The tuning properties of V1 neurons (what the neurons respond to) differ greatly over time. Early in time (40 ms and further) individual V1 neurons have strong tuning to a small set of stimuli. That is, the neuronal responses can discriminate small changes in visual orientations, spatial frequencies and colors. Furthermore, individual V1 neurons in humans and animals with binocular vision have ocular dominance, namely tuning to one of the two eyes. In V1, and primary sensory cortex in general, neurons with similar tuning properties tend to cluster together as cortical columns. David Hubel and Torsten Wiesel proposed the classic ice-cube organization model of cortical columns for two tuning properties: ocular dominance and orientation. However, this model cannot accommodate the color, spatial frequency and many other features to which neurons are tuned.
[...]
Later in time (after 100 ms), neurons in V1 are also sensitive to the more global organisation of the scene (Lamme & Roelfsema, 2000).[13] These response properties probably stem from recurrent feedback processing (the influence of higher-tier cortical areas on lower-tier cortical areas) and lateral connections from pyramidal neurons (Hupe et al. 1998). While feedforward connections are mainly driving, feedback connections are mostly modulatory in their effects (Angelucci et al., 2003; Hupe et al., 2001). Evidence shows that feedback originating in higher-level areas such as V4, IT, or MT, with bigger and more complex receptive fields, can modify and shape V1 responses, accounting for contextual or extra-classical receptive field effects (Guo et al., 2007; Huang et al., 2007; Sillito et al., 2006).
[...]
Visual area V2, or secondary visual cortex, [...] receives strong feedforward connections from V1 (direct and via the pulvinar) and sends strong connections to V3, V4, and V5. It also sends strong feedback connections to V1.
[...]
V2 has many properties in common with V1: Cells are tuned to simple properties such as orientation, spatial frequency, and colour. The responses of many V2 neurons are also modulated by more complex properties, such as the orientation of illusory contours,[20] binocular disparity,[21] and whether the stimulus is part of the figure or the ground.[22] Recent research has shown that V2 cells show a small amount of attentional modulation (more than V1, less than V4), are tuned for moderately complex patterns, and may be driven by multiple orientations at different subregions within a single receptive field.
[...]
V2 cells also respond to various complex shape characteristics, such as the orientation of illusory contours[28] and whether the stimulus is part of the figure or the ground.[29] Anatomical studies implicate layer 3 of area V2 in visual-information processing. In contrast to layer 3, layer 6 of the visual cortex is composed of many types of neurons, and their response to visual stimuli is more complex.

In a recent study, the Layer 6 cells of the V2 cortex were found to play a very important role in the storage of Object Recognition Memory as well as the conversion of short-term object memories into long-term memories.[30]
[...]
V4 is the third cortical area in the ventral stream, receiving strong feedforward input from V2 and sending strong connections to the PIT. It also receives direct inputs from V1, especially for central space.
[...]
V4 is the first area in the ventral stream to show strong attentional modulation. Most studies indicate that selective attention can change firing rates in V4 by about 20%. A seminal paper by Moran and Desimone characterizing these effects was the first paper to find attention effects anywhere in the visual cortex.[34][35]

Like V2, V4 is tuned for orientation, spatial frequency, and color. Unlike V2, V4 is tuned for object features of intermediate complexity, like simple geometric shapes, although no one has developed a full parametric description of the tuning space for V4. Visual area V4 is not tuned for complex objects such as faces, as areas in the inferotemporal cortex are.
"""
https://en.wikipedia.org/wiki/Visual_cortex
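The orientation tuning of V1 simple cells described in the excerpt is classically modeled with Gabor filters, following Hubel and Wiesel's findings. A minimal NumPy sketch with arbitrary filter parameters, showing that a Gabor "receptive field" responds most strongly to a grating that matches its preferred orientation:

```python
import numpy as np

def gabor(size, theta, freq, sigma):
    """2-D Gabor filter, a standard model of a V1 simple-cell receptive field."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rotated = x * np.cos(theta) + y * np.sin(theta)  # axis of preferred orientation
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # Gaussian envelope
    carrier = np.cos(2 * np.pi * freq * rotated)        # sinusoidal carrier
    return envelope * carrier

def grating(size, theta, freq):
    """Full-field sinusoidal grating at a given orientation and spatial frequency."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return np.cos(2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)))

filt = gabor(size=31, theta=0.0, freq=0.1, sigma=5.0)
# Response (dot product) is largest when the grating matches the filter's orientation
responses = {deg: abs(np.sum(filt * grating(31, np.radians(deg), 0.1)))
             for deg in (0, 45, 90)}
```

The response falls off as the grating rotates away from the filter's orientation, which is the tuning-curve behavior the excerpt attributes to individual V1 neurons.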

 
Faciotopy—A face-feature map with face-like topology in the human occipital face area (2015) Linda Henriksson et al
"""
The occipital face area (OFA) and fusiform face area (FFA) are brain regions thought to be specialized for face perception. However, their intrinsic functional organization and status as cortical areas with well-defined boundaries remains unclear. Here we test these regions for “faciotopy”, a particular hypothesis about their intrinsic functional organisation. A faciotopic area would contain a face-feature map on the cortical surface, where cortical patches represent face features and neighbouring patches represent features that are physically neighbouring in a face. The faciotopy hypothesis is motivated by the idea that face regions might develop from a retinotopic protomap and acquire their selectivity for face features through natural visual experience. Faces have a prototypical configuration of features, are usually perceived in a canonical upright orientation, and are frequently fixated in particular locations. To test the faciotopy hypothesis, we presented images of isolated face features at fixation to subjects during functional magnetic resonance imaging. The responses in V1 were best explained by low-level image properties of the stimuli. OFA, and to a lesser degree FFA, showed evidence for faciotopic organization. When a single patch of cortex was estimated for each face feature, the cortical distances between the feature patches reflected the physical distance between the features in a face. Faciotopy would be the first example, to our knowledge, of a cortical map reflecting the topology, not of a part of the organism itself (its retina in retinotopy, its body in somatotopy), but of an external object of particular perceptual significance.
"""
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4643680/
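The test described in the abstract, checking whether cortical distances between feature patches reflect physical distances between features in a face, boils down to correlating two sets of pairwise distances. A minimal sketch with made-up coordinates (the actual study estimated patch locations from fMRI responses):

```python
import numpy as np
from itertools import combinations

def pairwise_distances(coords):
    """Euclidean distances between all pairs of labeled points."""
    return [float(np.linalg.norm(np.subtract(coords[a], coords[b])))
            for a, b in combinations(sorted(coords), 2)]

# Made-up 2-D positions of four face features in an upright face...
face_positions = {
    "left_eye": (-1.0, 1.0), "right_eye": (1.0, 1.0),
    "nose": (0.0, 0.0), "mouth": (0.0, -1.0),
}
# ...and made-up flattened cortical-surface positions of their estimated patches.
cortical_patches = {
    "left_eye": (-0.9, 1.2), "right_eye": (1.1, 0.9),
    "nose": (0.1, -0.1), "mouth": (-0.1, -1.1),
}

face_d = pairwise_distances(face_positions)
cortex_d = pairwise_distances(cortical_patches)
# Faciotopy predicts a positive correlation between the two distance profiles
r = float(np.corrcoef(face_d, cortex_d)[0, 1])
```

Here the synthetic cortical layout mirrors the face layout, so the correlation is strongly positive; in the paper, OFA (and to a lesser degree FFA) showed this kind of correspondence while V1 did not.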

 
The Dark Ages of the Universe (2006) Abraham Loeb
http://www.scientificamerican.com/article/the-dark-ages-of-the-univ-2006-11/

Free pdf here:
https://www.cfa.harvard.edu/~loeb/sciam.pdf


Powered by Blogger