Supercomputing Government United States Hardware

US DOE Sets Sights On 300 Petaflop Supercomputer

dcblogs writes: U.S. officials on Friday announced plans to spend $325 million on two new supercomputers, one of which may eventually be built to support speeds of up to 300 petaflops. The U.S. Department of Energy, the major funder of supercomputers used for scientific research, wants to have the two systems – each with a base speed of 150 petaflops – possibly running by 2017. Going beyond the base speed to reach 300 petaflops will take additional government approvals. If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems. How adequate this planned investment will look three years from now is an open question. Lawmakers weren't reading from the same script as U.S. Energy Secretary Ernest Moniz when it came to assessing the U.S.'s place in the supercomputing world. Moniz said the awards "will ensure the United States retains global leadership in supercomputing." But Rep. Chuck Fleischmann (R-Tenn.) put U.S. leadership in the past tense. "Supercomputing is one of those things that we can step up and lead the world again," he said.

Comments Filter:
  • Reminds me a little of Soviet-era "build the biggest thing you can" projects. I could see it if they have a particular problem that needs either faster turnaround or higher resolution than currently available performance provides (weather forecasting comes to mind). But building big just to build big? The interesting part of high-performance computing is all in the architecture and the software that makes use of it. This strikes me as a little wasteful.

    • Re:Ehhh Meh (Score:5, Informative)

      by Macman408 ( 1308925 ) on Saturday November 15, 2014 @08:51PM (#48394539)

      There are plenty of things that can use all the computing power you can throw at it these days. As you mentioned, weather forecasting - though more generally, climate science. Somebody from one of the National Labs mentioned at a college recruiting event that they use their supercomputer for (among other things) making sure that our aging nukes don't explode while just sitting in storage. There are thousands of applications, from particle physics to molecular dynamics to protein folding to drug discovery... Almost any branch of science you can find has some problem that a supercomputer can help solve.

      Additionally, it's worth noting that these generally aren't monolithic systems; they can be split into different chunks. One project might need the whole machine to do its computations, but the next job to run after it might only need a quarter - and so four different projects can use the one supercomputer at once. It's not like the smaller computing problems end up wasting the huge size of the supercomputer. After all, many of these installations spend more in electricity bills over the 3- or 5-year lifetime of the computer than they do to install the computer in the first place, so they need to use it efficiently, 24/7.
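
      As a rough illustration of how one machine's ranks can be carved into independent chunks (in practice the batch scheduler does this partitioning, not user code), here is a minimal MPI sketch in C that splits one allocation into four independent groups, each with its own communicator; the "project" labels are made up:

      /* split.c -- illustrative only: divide one MPI job into four
       * independent groups, each working on its own "project".
       * Build/run (assuming an MPI installation with mpicc/mpirun):
       *   mpicc split.c -o split && mpirun -np 8 ./split
       */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);

          int world_rank, world_size;
          MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
          MPI_Comm_size(MPI_COMM_WORLD, &world_size);

          /* Assign each rank to one of four groups ("projects"). */
          int color = world_rank % 4;

          MPI_Comm project_comm;
          MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &project_comm);

          int proj_rank, proj_size;
          MPI_Comm_rank(project_comm, &proj_rank);
          MPI_Comm_size(project_comm, &proj_size);

          printf("world rank %d/%d -> project %d, local rank %d/%d\n",
                 world_rank, world_size, color, proj_rank, proj_size);

          MPI_Comm_free(&project_comm);
          MPI_Finalize();
          return 0;
      }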

      • There are plenty of things that can use all the computing power you can throw at it these days. As you mentioned, weather forecasting - though more generally, climate science. Somebody from one of the National Labs mentioned at a college recruiting event that they use their supercomputer for (among other things) making sure that our aging nukes don't explode while just sitting in storage. There are thousands of applications, from particle physics to molecular dynamics to protein folding to drug discovery... Almost any branch of science you can find has some problem that a supercomputer can help solve.

        True enough; the rub is that developing solutions that use supercomputing resources effectively is as big a problem as the problem itself. More than likely you are reading this on a multiprocessor with a vector acceleration unit that has more potential compute power than any supercomputer from 15 or more years ago. The question is just what your utilization is and where the speedup from all those extra compute resources actually shows up.

      • There are plenty of things that can use all the computing power you can throw at it these days. As you mentioned, weather forecasting - though more generally, climate science. Somebody from one of the National Labs mentioned at a college recruiting event that they use their supercomputer for (among other things) making sure that our aging nukes don't explode while just sitting in storage. There are thousands of applications, from particle physics to molecular dynamics to protein folding to drug discovery... Almost any branch of science you can find has some problem that a supercomputer can help solve.

        Additionally, it's worth noting that these generally aren't monolithic systems; they can be split into different chunks. One project might need the whole machine to do its computations, but the next job to run after it might only need a quarter - and so four different projects can use the one supercomputer at once. It's not like the smaller computing problems end up wasting the huge size of the supercomputer. After all, many of these installations spend more in electricity bills over the 3- or 5-year lifetime of the computer than they do to install the computer in the first place, so they need to use it efficiently, 24/7.

        You forgot encryption key recovery. Got an encrypted file you want to read? Let's use this beast to determine the encryption key and read the xxx contents.

    • As supercomputers grow larger, the pool of problems that benefit by using them gets smaller.

      • by mikael ( 484 )

        The number of floating-point operations per second (FLOPS) a next-generation game console performs outranks that of early supercomputers like the Cray.

        Cray-2 = 1.9 GFLOPS
        http://www.dcemu.co.uk/vbullet... [dcemu.co.uk]
        Dreamcast | CPU: 1.4 GFLOPS | GPU: 0.1 GFLOPS | Combined: 1.5 GFLOPS
        PS2 | CPU: 6 GFLOPS | GPU: 0 GFLOPS | Combined: 6 GFLOPS
        Xbox | CPU: 1.5 GFLOPS | GPU: 5.8 GFLOPS | Combined: 7.3 GFLOPS
        Wii | CPU: 60 GFLOPS | GPU: 1 GFLOPS | Combined: 61 GFLOPS
        Xbox360 | CPU: 115 GFLOPS | GPU: 240 GFLOPS | Combined: 355 GFLOPS
        PS3 | CP

        • Re:Ehhh Meh (Score:4, Interesting)

          by fahrbot-bot ( 874524 ) on Sunday November 16, 2014 @12:27AM (#48395277)

          The number of floating-point operations per second (FLOPS) a next-generation game console performs outranks that of early supercomputers like the Cray.

          Sure, but do they have the system capability / bandwidth to actually do anything with those numbers and is their raw speed offset by not being vector processors like the Cray 2 (process an entire array of data in 1 instruction)? I'm not a hardware geek, but was an administrator for the Cray 2 at the NASA Langley Research Center back in the mid 1980s and, among other things, wrote a proof-of-concept program in C to perform Fast Fourier transforms on wind tunnel data in near real time - probably would have been faster had I been a FORTRAN geek - and the system could pump through quite a bit of data - at least for the 80s.

          And the Cray 2 was way prettier than a PS3/4 or Xbox, though the Fluorinert immersion used for cooling was a bit cumbersome and expensive :-)
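
          For flavor, here is a minimal modern C sketch of that kind of job, computing a real-to-complex FFT over one block of samples with the FFTW library (a present-day library and a made-up signal, obviously not the original 1980s NASA code):

          /* fft_demo.c -- tiny FFTW example, a stand-in for the wind-tunnel
           * FFT work described above. Build (assuming FFTW is installed):
           *   cc fft_demo.c -lfftw3 -lm -o fft_demo
           */
          #include <fftw3.h>
          #include <math.h>
          #include <stdio.h>

          #ifndef M_PI
          #define M_PI 3.14159265358979323846
          #endif

          int main(void)
          {
              const int n = 1024;                 /* samples in one block */
              double *in = fftw_malloc(sizeof(double) * n);
              fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * (n / 2 + 1));

              /* Made-up "wind tunnel" signal: two tones plus a DC offset. */
              for (int i = 0; i < n; i++)
                  in[i] = 1.0 + sin(2 * M_PI * 50 * i / n)
                              + 0.5 * sin(2 * M_PI * 120 * i / n);

              fftw_plan plan = fftw_plan_dft_r2c_1d(n, in, out, FFTW_ESTIMATE);
              fftw_execute(plan);

              /* Print the magnitude of the first few frequency bins. */
              for (int k = 0; k < 8; k++)
                  printf("bin %d: %g\n", k,
                         sqrt(out[k][0] * out[k][0] + out[k][1] * out[k][1]));

              fftw_destroy_plan(plan);
              fftw_free(in);
              fftw_free(out);
              return 0;
          }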

          • Sure, but do they have the system capability / bandwidth to actually do anything with those numbers and is their raw speed offset by not being vector processors like the Cray 2 (process an entire array of data in 1 instruction)?

            Nope. The vector unit, with its crazy chaining and whole-array computations initiated by a single instruction, was the trick required to get the CRAY to be as fast as it was. With all those tricks, the CRAY-2 peaked at about 2 GFLOPS or so. Bear in mind the relative of Vector process

        • Not sure what point you're trying to make here, but newer supercomputers are very different from those early supercomputers, in far more ways than one. The parallelism is much higher (supercomputers now have millions of nodes, with exascale computers expected to have tens of millions or more), for instance. It's extremely hard to program for them. Interconnects have not been improving very much and so data flow between cores has to be managed carefully.

          • By 'nodes' I mean 'cores'. Typo.

          • by dbIII ( 701233 )
            Rubbish - geophysics alone is full of embarrassingly parallel problems. For instance, apply filter X to 10**8 traces. See also anything involving DNA, or, to get far simpler, even types of finite element analysis that work with multiple passes. Give the nodes their job, then do something with all the bits they independently produce for the next step - there are plenty of tasks that don't require constant interconnection.
            • With tens of millions of nodes data logistics pretty much always is a problem, even for supposedly embarrassingly parallel problems. Either the nodes communicate with only a few neighbours, in which case you have to carefully design the layout of the computations to make sure every node can communicate efficiently with its neighbours, and there probably is also some kind of global clock that has to be maintained. Alternatively you have some kind of farmer-worker setup where each worker node is happily chomp

              • The example above is applying the same transformation to a very large number of datasets, and then after some hours or days each node writes out what it has done to some shared storage. In that case the "extremely hard to program" thing does not exist, since a shell script or queueing system does the job - which is why there is a class of problems known as "embarrassingly parallel". It's not "millions of nodes", but it could be, since the problem can be neatly divided into millions of independent parts that
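
                A minimal MPI sketch in C of that embarrassingly parallel pattern: each rank applies the same (hypothetical) filter to its own slice of traces and reports its own results, with no inter-rank communication at all. The trace count, filter and data here are made up:

                /* ep_filter.c -- sketch of the embarrassingly parallel pattern above:
                 * each MPI rank filters its own slice of traces independently.
                 * Build/run: mpicc ep_filter.c -o ep_filter && mpirun -np 4 ./ep_filter
                 */
                #include <mpi.h>
                #include <stdio.h>

                #define TRACES_TOTAL 1000000   /* stand-in for the 10**8 traces above */
                #define SAMPLES      512

                /* Hypothetical per-trace filter: a 3-point moving average. */
                static void filter_trace(double *t, int n)
                {
                    for (int i = 1; i < n - 1; i++)
                        t[i] = (t[i - 1] + t[i] + t[i + 1]) / 3.0;
                }

                int main(int argc, char **argv)
                {
                    MPI_Init(&argc, &argv);
                    int rank, size;
                    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                    MPI_Comm_size(MPI_COMM_WORLD, &size);

                    /* Static decomposition: rank r owns traces [lo, hi). */
                    long lo = (long)TRACES_TOTAL * rank / size;
                    long hi = (long)TRACES_TOTAL * (rank + 1) / size;

                    double trace[SAMPLES];
                    for (long t = lo; t < hi; t++) {
                        /* In real life: read trace t from shared storage. */
                        for (int i = 0; i < SAMPLES; i++)
                            trace[i] = (double)((t + i) % 7);
                        filter_trace(trace, SAMPLES);
                        /* In real life: write the filtered trace back out. */
                    }

                    printf("rank %d filtered traces %ld..%ld\n", rank, lo, hi - 1);
                    MPI_Finalize();
                    return 0;
                }
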
          • by mikael ( 484 )

            I was trying to explain that there is a vast number of applications using classic supercomputer-type technology, ranging from academic research down to multiplayer games. A modern game console uses multiple cores, vector processors, vector chaining, kernels (if you consider vertex, fragment, and geometry shaders as kernels), and client-server communication to update players' moves. Even geometry data is streamed across the network, as some MMORPG game worlds are so vast that all the data couldn't be stored on one d

      • by dbIII ( 701233 )
        True - problems of modelling molecular interaction now benefit so that is a "smaller" problem :)
    • by mikael ( 484 )

      Supercomputers are designed to be unlimited in scalability (super-scalar). Everything is duplicated, from the cores on a single chip die to the boards, racks, rack-frames, aisles of rack-frames and interconnect fabric. The only limits to the size of a supercomputer are financial: component cost, office space lease and electricity bills. Usually, it's the last one that's the problem. The slowest processing nodes can be pulled out and replaced with more powerful ones as time goes by.

      • Supercomputers are designed to be unlimited in scalability (super-scalar). Everything is duplicated, from the cores on a single chip die to the boards, racks, rack-frames, aisles of rack-frames and interconnect fabric. The only limits to the size of a supercomputer are financial: component cost, office space lease and electricity bills. Usually, it's the last one that's the problem. The slowest processing nodes can be pulled out and replaced with more powerful ones as time goes by.

        That's meaningless if your software doesn't scale or has serial bottlenecks.

        • by mikael ( 484 )

          That's why many simulations are still written in Fortran - the compilers were optimized to handle multi-dimensional grid arrays, which is what fluid dynamics and other solvers use.

          • Really?
            I always thought it was the incredible abundance of numeric and simulation libraries for Fortran and the incredible amount of testing they have undergone; also there is the inertia of so many scientists and engineers learning Fortran as their first language or just knowing the language.

            • ...also there is the inertia of so many scientists and engineers...

              Sounds like words of a youngster who doesn't know that newer isn't always better.

            • by sjames ( 1099 )

              FORTRAN is an excellent language for that sort of thing even though the standards people seem hell bent on screwing that up lately.

              C is great for many things but it's too easy to have bugs that crash it in hard to diagnose ways. Interpreted languages have their place too, but not when absolutely maximum performance is a requirement.

              • FORTRAN is an excellent language for that sort of thing even though the standards people seem hell bent on screwing that up lately.

                C is great for many things but it's too easy to have bugs that crash it in hard to diagnose ways. Interpreted languages have their place too, but not when absolutely maximum performance is a requirement.

                There's so much in this that it's almost impossible to reply to.

                Fortran is certainly a good language for numerical codes of any type. I wasn't aware this was in dispute.

                C is not the only other choice for a compiled language.

                Is there an inherent performance increase for precompiled code vs interpreted or just-in-time compiled code on massively parallel systems? Dunno. I'd pass that off to someone doing their doctoral thesis, and they'd still be likely to get a wrong/incomplete answer. On the other hand if we want "Leadership in s

                • by sjames ( 1099 )

                  You seemed to feel that inertia was a big factor.

                  No JIT will be as fast as compiling once in advance for exactly the hardware it will run on. Especially given the chance to do time (and correctness) trials with various optimizations first. Interestingly, JIT and scripted languages make a lot more sense for small to medium clusters, particularly if they would see idle time anyway. In those, the pressure to get value from every cycle tends to be a bit lower such that saving development time and debugging effo

                  • You seemed to feel that inertia was a big factor.

                    Certainly do; I haven't seen anyone make a plausible case that it isn't.

                    No JIT will be as fast as compiling once in advance for exactly the hardware it will run on

                    You really don't understand the nature of the question, do you? Because if you did, you would never give such a blanket statement as an answer.

                    • by sjames ( 1099 )

                      And so I claimed that it actually is a good language to use, not just there by inertia.

                      In what way is JIT going to run faster than a binary pre-compiled after careful (and automated) profiling and tuning? JIT's advantage is in cases where the end user can't do a custom compile.

                    • No JIT will be as fast as compiling once in advance for exactly the hardware it will run on

                      vs

                      In what way is JIT going to run faster than a binary pre-compiled after careful (and automated) profiling and tuning? JIT's advantage is in cases where the end user can't do a custom compile.

                      You're babbling.

                    • by sjames ( 1099 )

                      More likely you can't answer the question.

                    • More likely you can't answer the question.

                      I believe I initially claimed that the answer to that question would likely be someone's thesis topic.

                      Troll harder

                    • by sjames ( 1099 )

                      No troll here. Just someone who apparently is a lot more likely to write that paper one day than you are. If you think I'm talking gibberish, it's because the argument is over your head. Had you been more polite about it, I might be more polite here.

                    • No troll here. Just someone who apparently is a lot more likely to write that paper one day than you are. If you think I'm talking gibberish, it's because the argument is over your head. Had you been more polite about it, I might be more polite here.

                      LOL I'll worry about it when you can go a few paragraphs without contradicting yourself.

            • by mikael ( 484 )

              Those numeric and simulation libraries were optimized in conjunction with the Fortran compiler to take advantage of the hardware. The most obvious example: having fixed-size multi-dimensional arrays as global variables. For regular grids, the compiler can then decide which way to slice that data block up so that every processing node gets assigned a chunk of data. Since each function is not more than a few loop counters reading previous and current state for each grid cell, those get optimized into paral

              • Those numeric and simulation libraries were optimized in conjunction with the Fortran compiler to take advantage of the hardware.

                So you are trying to say that having millions of lines of code already in place that do things like finite element analysis has nothing to do with it?

              • The most obvious example: having fixed-size multi-dimensional arrays as global variables.

                You mean like these?
                http://www.phy.ornl.gov/csep/p... [ornl.gov]

                Fortran 90 has three varieties of dynamic arrays. All three allow array creation at run time with sizes determined by computed (or input) values. These three varieties of dynamic arrays are:

                Oh, I wouldn't hold my breath on the compiler parallelizing those; it has to be able to determine it's safe to do so, and more often than not a programmer will have to tell it to with a doall.
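
                For comparison, the C/OpenMP spelling of that kind of directive is a pragma: the programmer, not the compiler, asserts that the loop iterations are independent (the Fortran equivalent would be an OpenMP parallel do). A toy sketch over a fixed-size global grid, assuming an OpenMP-capable compiler, just to show the shape of it:

                /* grid_relax.c -- toy directive-parallelized grid sweep, a C/OpenMP
                 * analogue of the "doall"-style parallel loop discussed above.
                 * Build: cc -fopenmp grid_relax.c -o grid_relax
                 */
                #include <omp.h>
                #include <stdio.h>

                #define NX 1024
                #define NY 1024

                static double cur[NX][NY], nxt[NX][NY];   /* fixed-size global grids */

                int main(void)
                {
                    /* The pragma is the programmer telling the compiler the outer
                     * loop's iterations are independent and safe to run in parallel. */
                    #pragma omp parallel for
                    for (int i = 1; i < NX - 1; i++)
                        for (int j = 1; j < NY - 1; j++)
                            nxt[i][j] = 0.25 * (cur[i - 1][j] + cur[i + 1][j]
                                              + cur[i][j - 1] + cur[i][j + 1]);

                    printf("swept %dx%d grid with up to %d threads\n",
                           NX, NY, omp_get_max_threads());
                    return 0;
                }
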

    • Soviet era? Oh no, my friend, it goes back way further than that. Russia is all about "build the biggest you can". The Tsar Cannon comes to mind: https://en.wikipedia.org/wiki/... [wikipedia.org] and this https://en.wikipedia.org/wiki/... [wikipedia.org]
    • I think something like this would be a case of "if you build it, they will come".

  • by DumbSwede ( 521261 ) <slashdotbin@hotmail.com> on Saturday November 15, 2014 @08:42PM (#48394497) Homepage Journal

    I remember back in the '80s all the excitement about building faster and faster supercomputers to solve all sorts of grand challenge problems, and how a teraflop would be just about nirvana for science. Around 2000 the teraflop came and went, and then the petaflop became the new nirvana where we would be able to solve grand challenge problems. Now the exaflop is the new nirvana that will solve grand challenge science problems once again. Seems raw computing power hasn't given us the progress in science we predicted. Sure it's been used for stuff, but it hasn't helped us crack nuclear fusion for instance, one of its often hyped goals.

    Where's the score card on how much progress has been made because of super computing? I know drug design is one very useful application, but what are other areas that have been transformed?

    • by Anonymous Coward

      Faster machines just run shitty code faster. Without theory it's all a waste.

    • A large proportion of the science that has been done with supercomputers is about nuclear weapons and is thus classified. There's no real way for us to know if supercomputers have helped in that direction or not. Presumably they have, otherwise LLNL wouldn't be getting the latest shiniest toy every few years (they often get the very first make of a new supercomputer that is developed). Or they haven't and it's all a big waste of money.

      • by jimhill ( 7277 )

        Got it in two.

      • by Anonymous Coward

        They've done public research on supernova simulations. Short-term weather forecasts have gained from higher resolution grids.

    • The singularity, where supercomputers can advance scientific knowledge unaided by humans, is still some way off. However, you are mistaken if you believe there have not been huge advances in scientific knowledge in the last 20 years, or if you believe the rapid pace of advancement would have been possible without the computing power that has become available to support that effort. In earth sciences, medicine, high energy physics, astronomy, meteorology and many other scientific areas, the simulation and in
    • Sure it's been used for stuff, but it hasn't helped us crack nuclear fusion for instance, one of its often hyped goals.

      At some point they'll need a nuclear fusion reactor to power the "next big thing in supercomputers", so someone working at some non-cutting-edge compute facility will figure it out so that they can get the grant money for that "next big thing in supercomputers." It's all about funding :-)

    • by chalker ( 718945 ) on Saturday November 15, 2014 @09:30PM (#48394693) Homepage

      There are countless problems solved only as a result of supercomputers. Setting aside for a minute the minority of problems that are classified (e.g. nuclear stockpile stewardship, etc), supercomputers benefit both academia and industry alike. You'll be hard pressed to find a Fortune 500 company that doesn't have at least one if not multiple supercomputers in house.

      For example, here is a list of case studies of specific manufacturing problems that have been solved http://www.compete.org/publica... [compete.org] which include things as mundane as shipping pallets, golf clubs, and washing machines.

      The organization I work for, the Ohio Supercomputer Center, annually publishes a research report listing primarily academic projects that benefit from our supercomputers: https://www.osc.edu/sites/osc.... [osc.edu] These range from periodontal disease and photovoltaic cells to forest management and welding.

      TL;DR: "HPC Matters" in many ways. Here's some short blinky flashy videos: http://www.youtube.com/channel... [youtube.com]

      • by chalker ( 718945 )

        P.S. - OSC is going to be doing a reddit AMA on Monday at 7:30PM Eastern. Feel free to hop on and ask us some questions!

        “We will be answering questions about running a Supercomputer Center, High Performance Computing (HPC) and anything else. Our current systems have a total performance of 358 TeraFLOPS, and consist of 18,000 CPUs, 73 TB of RAM and 4 PB of storage, all connected to a 100 Gbps statewide network (yes, it will run Crysis, just barely;). We will be holding the AMA in conjunction with th

      • > Setting aside for a minute the minority of problems that are classified (e.g. nuclear stockpile stewardship, etc)

        Nuclear simulations aren't a 'minority'. Both of the US' top supercomputers (Titan and Sequoia) are at DOE facilities (ORNL and LLNL). Most of the time on Sequoia is reserved for nuclear simulations. Titan does more varied stuff but nuclear still takes up a sizable share of its time.

      • How should one go about getting a job programming a large supercomputer?
        • by JanneM ( 7445 ) on Sunday November 16, 2014 @03:18AM (#48395631) Homepage

          How should one go about getting a job programming a large supercomputer?

          Become a researcher in a field that makes use of lots of computing power, then specialize in the math modeling and simulation subfields. Surprisingly often it's quite easy to get time on a system if you apply as a post-doc or even a grad student. Becoming part of a research group that develops simulation tools for others to use can be an especially good way.

          Or, get an advanced degree in numerical analysis or similar and get hired by a manufacturer or an organization that builds or runs supercomputers. On one hand that'd give you a much more permanent job, and you'd be mostly doing coding, not working on your research; on the other hand it's probably a lot harder to get.

          But ultimately, why would you want this? They're not especially magical machines. Especially today, when they're usually Linux based, and the system developers do all they can to make it look and act like a regular Linux system.

          If you want to experience what it's like, try this: Install a 4-5 year old version of Red Hat on a workstation. Install OpenMP and OpenMPI, and make sure all your code uses either or both. Install an oddball C/C++ compiler. Access your workstation only via SSH, not directly. And add a job queue system that will semi-randomly let your app run after anything from a few seconds to several hours.
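
          If you want a toy code to push through such a queue, a minimal hybrid MPI + OpenMP "hello" in C looks something like this (assuming mpicc and an OpenMP-capable compiler; flags may differ on your system):

          /* hybrid_hello.c -- tiny MPI + OpenMP "hello", the kind of toy job
           * you might submit through the queue in the setup described above.
           * Build/run:
           *   mpicc -fopenmp hybrid_hello.c -o hybrid_hello
           *   mpirun -np 2 ./hybrid_hello
           */
          #include <mpi.h>
          #include <omp.h>
          #include <stdio.h>

          int main(int argc, char **argv)
          {
              MPI_Init(&argc, &argv);
              int rank;
              MPI_Comm_rank(MPI_COMM_WORLD, &rank);

              /* Each MPI rank spawns a team of OpenMP threads. */
              #pragma omp parallel
              {
                  printf("rank %d, thread %d of %d\n",
                         rank, omp_get_thread_num(), omp_get_num_threads());
              }

              MPI_Finalize();
              return 0;
          }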

    • by Kjella ( 173770 )

      I think it's a bit like in IT, nobody notices when it just works. More and more bad designs die on the drawing board because we run detailed simulations. For example if you buy a car today I expect the deformation zones have gone through plenty of simulated crashes. Perhaps you've even stepped that up another notch and let the computer try to design what the optimal deformation zone looks like within certain requirements. Thousands of adjustments times thousands of scenarios at different angles, speeds and

    • by Jeremi ( 14640 )

      Sure it's been used for stuff, but it hasn't helped us crack nuclear fusion for instance, one of its often hyped goals.

      Speak for yourself [wired.com], bucko. ;)

    • by Orp ( 6583 )

      Take a look, there is some neat stuff going on with Blue Waters: https://bluewaters.ncsa.illino... [illinois.edu]

      Most science is not breakthroughs; it's usually slow progress, with many failed experiments.

      These computers are facilitating a lot of good science, and increases like this in our computational infrastructure for research are great news. I do wonder how they are going to power this beast and what kind of hardware it will be made of. 300 PFLOPS is pretty unreal with today's technology.

    • It's gone where all CPU gains have gone in the past 15 years - into sloppy development and rushed schedules. I was just browsing the reviews of the new version of Google Maps today and users are complaining that it is slow, slow, slow. Who cares about efficient programming done right when you can just sit back and wait for Moore's Law to catch up?
  • had to adjust mine.
  • The subject implies that the NSA publicizes the capabilities of their rigs. I would be willing to bet they have near the computing power of China all by themselves.

  • For 20+ years, HPC systems have relied on the same conservative design of compute separated from storage, connected by Infiniband. Hadoop kind of shook up the HPC world with its introduction of data locality, especially as scientific use cases have involved larger data sets that distributed data storage is well-suited for. The HPC world has been wondering aloud how best and when to start incorporating local data storage for each node. Summit introduces some modest 800GB non-volatile storage per node for cac

  • I hate this attitude that if you don't have the top spot, you are crap. The notion that the US somehow lost something by not having the first spot on the Top500 list is just silly.

    I mean, for one thing, the Chinese computer is more specialized than the big US supercomputers. It gets its performance using Intel Xeon Phi GPGPU-type processors. Nothing wrong with that, but they are vector processors hanging off the PCIe bus. They work a lot like graphics cards. There are problems that they are very fast at,

  • If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems

    It's kind of hard to regain something you didn't truly lose to China.

    • by gweihir ( 88907 )

      Nonsense. The US is a technology backwater now. Even if it "regains" the larger number, it does not have the people to actually use this infrastructure efficiently, making it a meaningless stunt.

  • "Supercomputing is one of those things that we can step up and lead the world again,"
    Here's a thought: 100% of the chip-making companies that make chips that are actually good/fast are American companies. So just don't sell in bulk to any other country for supercomputer use. Win by cutting off the supply.
    • by gweihir ( 88907 )

      AMD produces CPUs in Dresden, memory and chipsets are produced entirely outside the US, ARM is British, the CPUs for China's supercomputer are made there, etc. These are global companies, sometimes non-US domestic ones, but never only US companies. Your mindless patriotism blinds you to reality.

      The result of your proposed move would be that the US would not get components, not the other way round.

  • A factor of 10 is pretty meaningless in supercomputing. Software quality makes much more of a difference. Of course, politicians are not mentally equipped to understand that and instead want "the larger number" like the most stupid noob PC buyer.

  • So, they make this announcement right before the new Top500 list is unveiled at the Supercomputing conference... Which clearly means that once again there will be no US system in the Top 1 position, right?
