Oleg Zabluda's blog
Saturday, June 09, 2018
 
"""
"""
200 petaflops. [...] The system and its twin at Lawrence Livermore National Laboratory in California were funded in 2014 as part of a $325 million Department of Energy program called Coral [...] 4,608 interconnected computer nodes [...] liquid-cooled [...] the size of two tennis courts [...] 15 megawatts,
[...]
Each node has two IBM Power9 chips running at 3.1 GHz, [...] 22 processing cores [...] Connected to each pair of Power9 chips are six Nvidia Tesla V100 graphics chips. [...] Each node has 1.6 terabytes of memory [...] 2.2 terabytes per second on a 250-petabyte storage system.
"""
https://www.cnet.com/news/ibms-world-class-summit-supercomputer-gooses-speed-with-ai-abilities/
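
Sanity check (mine, not from the article): multiplying the quoted node count by the GPUs per node, and assuming ~7.8 TFLOPS FP64 peak per V100 (SXM2 variant, not stated in the quote), lands close to the quoted 200 petaflops. A minimal Python sketch:

# Back-of-the-envelope check of the quoted 200 petaflops from the per-node figures.
# NOTE: the ~7.8 TFLOPS FP64 peak per V100 (SXM2) is my assumption, not from the article.

NODES = 4608           # interconnected computer nodes
GPUS_PER_NODE = 6      # Nvidia Tesla V100s per node
TFLOPS_PER_GPU = 7.8   # assumed FP64 peak of one V100 (SXM2)

total_gpus = NODES * GPUS_PER_NODE                    # 27,648 GPUs
gpu_peak_pflops = total_gpus * TFLOPS_PER_GPU / 1000  # TFLOPS -> PFLOPS

print(f"{total_gpus} GPUs -> ~{gpu_peak_pflops:.0f} PFLOPS FP64 peak from GPUs alone")
# 27648 GPUs -> ~216 PFLOPS, in the ballpark of the quoted 200 petaflops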
