Supercomputing

Warning At SC13 That Supercomputing Will Plateau Without a Disruptive Technology

dcblogs writes "At this year's supercomputing conference, SC13, there is worry that supercomputing faces a performance plateau unless a disruptive processing tech emerges. 'We have reached the end of the technological era' of CMOS, said William Gropp, chairman of the SC13 conference and a computer science professor at the University of Illinois at Urbana-Champaign. Gropp likened the supercomputer development terrain today to the advent of CMOS, the foundation of today's standard semiconductor technology. The arrival of CMOS was disruptive, but it fostered an expansive age of computing. The problem is 'we don't have a technology that is ready to be adopted as a replacement for CMOS,' said Gropp. 'We don't have anything at the level of maturity that allows you to bet your company on.' Peter Beckman, a top computer scientist at the Department of Energy's Argonne National Laboratory, and head of an international exascale software effort, said large supercomputer system prices have topped off at about $100 million 'so performance gains are not going to come from getting more expensive machines, because these are already incredibly expensive and powerful. So unless the technology really has some breakthroughs, we are imagining a slowing down.'" Although carbon nanotube based processors are showing promise (Stanford project page; the group is at SC13 giving a talk about their MIPS CNT processor).
  • by Anonymous Coward

    Coding to make the best use of resources.

    Moving to clockless.

    Minimal use processors (custom ASIC).

    Live with it. Sometimes you may have to wait Seven and a Half (what? not till next week?) Million Years for your answer. It may be a tricky problem.

    • Moving to clockless.

      Chuck Moore-style? [greenarraychips.com]

      Minimal use processors

      That doesn't make sense. Or rather, it makes multiple possible senses at once. Could you elaborate on what, in particular, you have in mind?

      • by Anonymous Coward

        That doesn't make sense. Or rather, it makes multiple possible senses at once. Could you elaborate on what, in particular, you have in mind?

        I believe he was referring to building the processor for the task - getting rid of unnecessary gates, prioritizing certain operations over others, etc., based on the research being done. An example is the custom machines available for mining bitcoins now - prioritize running integer hashes and get rid of all the junk you don't need (high memory capacity and floating-point units, to name a couple).
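        For illustration, here is a rough sketch of why a mining ASIC can drop everything but integer logic: the inner loop is nothing more than repeated integer hashing. (A toy Python approximation of a bitcoin-style double SHA-256 search; the header bytes and difficulty below are made up for illustration.)

        import hashlib

        # Toy bitcoin-style mining loop: pure integer/bit manipulation, no floating
        # point and almost no memory, which is why a stripped-down ASIC handles it
        # so well. The "header" and difficulty are made-up placeholders.
        header = b"example block header"
        target_prefix = b"\x00"  # toy difficulty: digest must start with a zero byte

        nonce = 0
        while True:
            digest = hashlib.sha256(
                hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
            ).digest()
            if digest.startswith(target_prefix):
                print(f"found nonce {nonce}: {digest.hex()}")
                break
            nonce += 1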

    • Clockless isn't a very good idea; designing such a large asynchronous system with current CMOS technology is going to end in a big disaster.
    • Considering how disruptive computer-powered AI is becoming, a temporary slowdown can be good for us... to give us time to adapt, put controls in place, and decide on a future without the need for human labor.
      • by geekoid ( 135745 )

        Yeah, you try to create some laws to govern a speculative future.
        Could you imagine writing regulation for the internet in 1950?
        You can't regulate until after something is in the wild; otherwise it will fail horribly.

        • I meant a cultural adaptation. Things are changing very, very, very quickly now, even within a single generation.
          • by strack ( 1051390 )
            I like how you used the word 'cultural' in place of 'all this newfangled tech is scaring the olds, slow down a bit dagnammit'
  • MIPS CNT... (Score:5, Funny)

    by motd2k ( 1675286 ) on Wednesday November 20, 2013 @01:51PM (#45474121)
    MIPS CNT... how do you pronounce that?
  • So what? (Score:2, Interesting)

    by Animats ( 122034 )

    So what? Much of supercomputing is a tax-supported boondoggle. There are few supercomputers in the private sector. Many things that used to require supercomputers, from rocket flight planning to mould design, can now be done on desktops. Most US nuclear weapons were designed on machines with less than 1 MIPS.

    Supercomputers have higher cost/MIPS than larger desktop machines. If you need a cluster, Amazon and others will rent you time on theirs. If you're sharing a supercomputer, and not using hours or da

    • Re:So what? (Score:5, Insightful)

      by Anonymous Coward on Wednesday November 20, 2013 @02:14PM (#45474377)

      There are actually a half-decent number of 'supercomputers' - depending on how you define that term - in the private sector. From 'simple' ones that do rendering for animation companies to ones that model airflow for vehicles to ones that crunch financial numbers to... well, lots of things, really. Are they as large as the biggest national facilities? Of course not - that's where the next generation of business-focused systems gets designed and tested, and where models and methods get developed.

      It is indeed the case that far simpler systems ran early nuclear weapon design, yes, but that's like saying far simpler desktops had 'car racing games' -- when, in reality, the quality of those applications has changed incredibly. Try playing an old racing game on a C64 vs. a new one now and you'd probably not get that much out of the old one. Try doing useful, region-specific climate models with an old system and you're not going to get much out of it. Put a newer model with much higher resolution, better subgrid models and physics options, and the ability to accurately and quickly do ensemble runs for a sensitivity analysis and, well, you're in much better territory scientifically.

      So, in answer to "So what?", I say: "Without improvements in our tools (supercomputers), our progress in multiple scientific -and business- endeavors slows down. That's a pretty big thing."

      • I'd argue that most scientific progress doesn't depend on supercomputers, and anything we know we can use supercomputers for, we can do with current computers; it will just take longer. Aside from the science of making more powerful computers, I suppose. Protein folding, for example, could go faster, but it's already going.

        This is not to say I think we should be content with the computers we have now; I'm just saying it doesn't seem too catastrophic for science. And business seems to make money no
    • "If you need a cluster, Amazon and others will rent you time on theirs."

      You come from the planet where all algorithms parallelize neatly, eh? I've heard that they've cured the common cold and the second law of thermodynamics there, too...
      • You come from the planet where all algorithms parallelize neatly, eh? I've heard that they've cured the common cold and the second law of thermodynamics there, too...

        Because supercomputers are not massively parallel computers ... Oh wait....

        • Re:So what? (Score:4, Informative)

          by fuzzyfuzzyfungus ( 1223518 ) on Wednesday November 20, 2013 @04:19PM (#45475473) Journal
          They have no choice in the matter, since nobody makes 500GHz CPUs; but there is a reason why (many, not all) 'supercomputers' lay out a considerable amount of their budget for very fast, very low-latency interconnects (Myrinet, InfiniBand, sometimes proprietary fabrics for single-system-image stuff), rather than just going GigE or 10GigE and calling it a day, like your generic datacenter-of-whitebox-1Us does.

          There are problems where chatter between nodes is low, and separate system images are acceptable, and blessed are they, for they shall be cheap; but people don't buy the super fancy interconnects just for the prestige value.
    • Re:So what? (Score:5, Interesting)

      by Kjella ( 173770 ) on Wednesday November 20, 2013 @02:39PM (#45474585) Homepage

      Of course these people are talking about supercomputers and the relevance to supercomputers, but you have to be pretty daft not to see the implications for everything else. In recent years almost all the improvements have been in power states and frequency/voltage scaling; if you're doing something at 100% CPU load (and it isn't a corner case that benefits from a new instruction), the power efficiency has been almost unchanged. Top-of-the-line graphics cards have gone constantly upwards and are pushing 250-300W; even Intel's got Xeons pushing 150W, not to mention AMD's 220W beast, though that's a special oddity. The point is that we need more power to do more, and for hardware running 24x7 that's a non-trivial part of the cost that's not going down.

      We know CMOS scaling is coming to an end, maybe not at 14nm or 10nm but at the end of this decade we're approaching the size of silicon atoms and lattices. There's no way we can sustain the current rate of scaling in the 2020s. And it wouldn't be the end of the world; computers would go roughly the same speed they did ten or twenty years earlier, much like cars and jet planes do today. Your phone would never become as fast as your computer, which would never become as fast as a supercomputer again. We could get smarter at using that power, of course, but fundamentally hard problems that require a lot of processing power would go nowhere, and it won't be terahertz processors, terabytes of RAM, and petabytes of storage for the average man. It was a good run while it lasted.

      • We know CMOS scaling is coming to an end, maybe not at 14nm or 10nm but at the end of this decade we're approaching the size of silicon atoms and lattices.

        I have heard that statement made many times since about the mid-80s or at the very latest, early '90s -- not the exact size, but the prediction of the imminent end to CMOS scaling. Perhaps it is true now, as we approach single molecule transistors.

        • by lgw ( 121541 )

          Yes, the difference now is that we're reaching the limits of physics, and even with something better than CMOS there's not much headroom. There's only so much state you can represent with one atom, and we're not that far off.

          I think the progress we'll see in the coming decades will be very minor in speed of traditional computers, significant in power consumption, and huge in areas like quantum computing, which are not incremental refinements of what we're so good at today.

          Our tools are nearly as fast as they reasonab

          • Exactly. Thanks to atomic uncertainty, we're rapidly approaching the point where CPUs are going to need 3 or more pipelines executing the same instructions in parallel, just so we can compare the results and decide which result is the most likely to be the RIGHT one.

            We're ALREADY at that point with flash memory. Unlike SRAM, which is unambiguously 0 or 1, SLC flash is like a leaky bucket that starts out full (1), gets instantly drained to represent 0, and otherwise leaks over time, but still counts as '1' a

      • by geekoid ( 135745 )

        "... current rate of scaling in the 1980s err 1990 err 2000, definitely 2000 err 2010.. I know; definitely 2020.

        • by Kjella ( 173770 )

          A single silicon lattice is about 0.55nm across, so at 32nm, like we had at the start of this decade, you're talking about 58 lattices wide. At 5nm (what Intel's roadmap predicts for 2019) you're down to 9 wide; keep that up to 2030 and you're down to 1.5 lattices wide. I guess the theoretical limit is a single lattice, but then you need perfect purity and perfect alignment of every atom of that processor; true nanotechnology, in other words. We will probably run into problems earlier with quantum effects an
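          To make the arithmetic explicit, here is a quick sketch using the 0.55 nm lattice figure and node sizes quoted above (the 0.8 nm entry is a hypothetical ~2030 node, not a roadmap value):

          lattice_nm = 0.55  # approximate silicon lattice constant quoted above

          # Feature width expressed in lattice cells for past, current, and
          # hypothetical future process nodes (0.8 nm is a made-up ~2030 value).
          for node_nm in (32, 14, 10, 5, 0.8):
              print(f"{node_nm:>4} nm  ~  {node_nm / lattice_nm:.1f} lattice cells wide")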

    • I somewhat agree with this. For the applications that do need supercomputers, they should really work on increasing the levels of parallelism within them. After that, just throw more CPUs at the problem. Indeed, that's the way Intel managed to wipe RISC out of the market.

      Also, as others pointed out, improve the other bottlenecks that exist there - the interconnects and that sort of thing. We don't need to move away from CMOS to solve a problem facing a fringe section of the market.

    • Re: (Score:2, Interesting)

      by Anonymous Coward

      Actually, the sort-of sad reality is that, outside the top few supercomputers in the world, the "top500"-type lists are completely bogus because they don't include commercial efforts that don't care to register. Those public top-cluster lists are basically where tax-supported boondoggles show off, but outside the top 5-10 entries (which are usually uniquely powerful in the world), the rest of the list is bullshit. There are *lots* (I'd guess thousands) of clusters out there that would easily make the top-2

  • by UnknownSoldier ( 67820 ) on Wednesday November 20, 2013 @01:57PM (#45474185)

    We've had Silicon Germanium cpus that can scale to 1000+ GHz for years. Graphene is also another interesting possibility.

    The question is: "At what price can you make the power affordable?"

    For 99% of people, computers are good enough. For the other 1% they never will be.

    • Yeah, SOS-CMOS like SOG-CMOS or SOD-CMOS. You can't have a data core without SOD-CMOS.
    • by Anonymous Coward

      We've had Silicon Germanium cpus that can scale to 1000+ GHz for years.

      Not really. We've had transistors that can get almost that fast... no one builds a CPU with those, for good reasons. It's not a question of cost.

    • The problem is heat. Simple as that. Currently there are no technologies more power-efficient than CMOS. Therefore there are no technologies that can produce more powerful computers than CMOS. If a significantly more power-efficient technology is found, the semiconductor manufacturers will absolutely attempt to use it.
    • Do they also scale thermally? It is ultimately a problem of computations per joule, not a problem of computations per second. Supercomputers already have to use parallel algorithms, so building faster ones is about how much computing power you can squeeze into a cubic meter without the machine catching fire. That's actually the other reason why CMOS is being used, and not, e.g., ECL. ;-)
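      As a rough illustration of the computations-per-joule constraint (the efficiency figures below are hypothetical examples, not numbers from the thread):

      target_flops = 1e18  # a 1 exaflop/s machine

      # Power draw at a few assumed energy efficiencies (GFLOPS per watt).
      for gflops_per_watt in (2, 10, 50):
          megawatts = target_flops / (gflops_per_watt * 1e9) / 1e6
          print(f"{gflops_per_watt:>3} GFLOPS/W  ->  {megawatts:.0f} MW")

      The machine itself barely changes; the electricity bill is what decides whether it gets built.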
      • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday November 20, 2013 @02:57PM (#45474747) Journal
        Even if you are willing to burn nigh unlimited power, thermals can still be a problem (barring some genuinely exotic approaches to cooling), because ye olde speed of light says that density is the only way to beat latency. There are, of course, ways to suck at latency even more than the speed of light demands; but there are no ways to suck less.

        If your problem is absolutely beautifully parallel (and, while we're dreaming, doesn't even cache-miss very often), horrible thermals would be a problem that could be solved by money: build a bigger datacenter and buy more power. If there's a lot of chatter between CPUs, or between CPUs and RAM, distance starts to hurt. If memory serves, 850nm light over 62.5 micrometer fiber is almost 5 nanoseconds/meter. That won't hurt your BattleField4 multiplayer performance; but when even a cheap, nasty, consumer grade CPU is 3GHz, there go 15 clocks for every meter, even assuming everything else is perfect. Copper is worse, some fiber might be better.
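        Spelling out that arithmetic (a small sketch using the ~5 ns/m and 3 GHz figures from the paragraph above):

        ns_per_meter = 5.0  # ~5 ns of propagation delay per meter of fiber
        clock_ghz = 3.0     # a 3 GHz CPU executes 3 cycles per nanosecond

        cycles_per_meter = ns_per_meter * clock_ghz
        print(f"{cycles_per_meter:.0f} clock cycles lost per meter of cable, one way")  # ~15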

        Obviously, problems that can be solved by money are still problems, so they are a concern; but problems that physics tells us are insoluble are even less fun.
        • Re: (Score:2, Interesting)

          by lgw ( 121541 )

          Light-carrying fiber is slower than copper (about 5 ns/m vs. 4 ns/m for copper) - it sort of has to be, as the higher refractive index goes hand-in-hand with the need for total internal reflection at the boundary of the clear plastic. Optical helps with bandwidth per strand, not with latency.

          I think the next decade of advances will be very much about power efficiency, and very little about clock rate on high-end CPUs. That will benefit both mobile and supercomputers, as both are power-constrained (supercomputers by the hea

    • No, they can't. We've known that for some time...and this is why [wikipedia.org].
    • by mlts ( 1038732 ) *

      I'd say computers are good enough for today's tasks... but what about tomorrow's?

      With the advent of harder-hitting ransomware, we might need to move to better snapshotting/backup systems to preserve documents against malicious overwrites, which are made worse by SSDs (TRIM zeroes out the data; no recovery, no way).

      Network bandwidth is also changing. LANs are gaining bandwidth, while WANs are stagnant. So caching, CDN services, and such will need to improve. WAN bandwidth isn't gaining anything but mo

      • Arguably, WAN bandwidth (except wireless, where the physics are genuinely nasty) is mostly a political problem with a few technical standards committees grafted on, rather than a technical problem.

        Even without much infrastructure improvement, merely scaring a cable company can, like magic, suddenly cause speeds to increase to whatever DOCSIS level the local hardware has been upgraded to, even as fees drop. Really scaring them can achieve yet better results, again without even driving them into insolvency
  • that this is not a complete sentence:
    "Although carbon nanotube based processors are showing promise [...]."

    Go, speed-editor, go! :)
  • by schlachter ( 862210 ) on Wednesday November 20, 2013 @02:06PM (#45474275)

    My intuition tells me that disruptive technologies are precisely that because people don't anticipate them coming along, nor do they anticipate the changes that will follow their introduction. Not that people can't see disruptive tech ramping up, but often they don't.

    • It seems to me that any technological advancement that humans have "invented" is merely a fabrication of context, in order to do what Nature is already doing. Perhaps the next "supercomputers" will not be what we think of as computers, but more like biological structures that are able to process things without using mathematics, or bits at all. As if all aspects of mathematics were inherently built into the structure itself.

      Just a thought.
    • No, they're disruptive because they change what is technically possible. The ability to directly manipulate ambient energy would greatly change ... everything. I've got piles and piles of things we can do with quantum tunneling junctions, when they're refined enough--currently you get a slab with 1% of the area functional (it works, but it's overly expensive to manufacture and too large).

      Anticipating a new advance to produce multi-thousand-GHz processors for 15 years won't make them disruptive. We'll

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday November 20, 2013 @03:24PM (#45474973) Journal

      My intuition tells me that disruptive technologies are precisely that because people don't anticipate them coming along, nor do they anticipate the changes that will follow their introduction. Not that people can't see disruptive tech ramping up, but often they don't.

      Arguably, there are at least two senses of 'disruptive' at play when people talk about 'disruptive technology'.

      There's the business sense, where a technology is 'disruptive' because it turns a (usually pre-existing, even considered banal or cheap and inferior) technology into a viable, then superior, competitor to a nicer but far more expensive product put out by the fat, lazy incumbent. This comment, and probably yours, was typed on one of those (or, really, a collection of those).

      Then there's the engineering/applied science sense, where it is quite clear to everybody that "If we could only fabricate silicon photonics/achieve stable entanglement of N QBits/grow a single-walled carbon nanotube as long as we want/synthesize a non-precious-metal substitute for platinum catalysts/whatever, we could change the world!"; but nobody knows how to do that yet.

      Unlike the business case (where the implications of 'surprisingly adequate computers get unbelievably fucking crazy cheap' were largely unexplored, and before it happened people would have looked at you like you were nuts if you told them that, in the year 2013, we would have no space colonies, people would still live in mud huts and fight bush wars with slightly-post-WWII small arms, but people with inadequate food and no electricity would have cell phones), the technology case is generally fairly well planned out: practically every vendor in the silicon compute or interconnect space has a plan for, say, what the silicon-photonics-interconnect architecture of the future would look like, but no silicon photonics interconnects exist yet; we have no quantum computers of useful size, but computer scientists have already studied the algorithms we might run on them if we had them. Application awaits some breakthrough in the lab that hasn't come yet.

      (Optical fiber is probably a decent example of a tech/engineering 'disruptive technology' that has already happened. Microwave waveguides, because those can be tacked together with sheet metal and a bit of effort, were old news, and the logic and desirability of applying the same approach to smaller wavelengths was clear; but until somebody hit on a way to make cheap, high-purity glass fiber, that was irrelevant. Once they did, the microwave-based infrastructure fell apart pretty quickly; but until they did, no amount of knowing that 'if we had optical fiber, we could shove 1000 links into that one damn waveguide!' made much difference.)

  • by tlambert ( 566799 ) on Wednesday November 20, 2013 @02:31PM (#45474517)

    Didn't that boat sail with the Cray Y-MP?

    All our really big supercomputers today are adding a bunch of individual not-even-Krypto-the-wonderdog CPUs together, and then calling it a supercomputer. Have we reached the limits in that scaling? No.

    We have reached the limits in the ability to solve big problems that aren't parallelizable, due to the inability to produce single-CPU machines in the supercomputer performance range, but like I said, that boat sailed years ago.

    This looks like a funding fishing expedition for the carbon nanotube processor research that was highlighted at the conference.

    • by timeOday ( 582209 ) on Wednesday November 20, 2013 @02:51PM (#45474677)

      All our really big supercomputers today are adding a bunch of individual not-even-Krypto-the-wonderdog CPUs together, and then calling it a supercomputer. Have we reached the limits in that scaling? No.

      This is wrong on both counts. First, the CPUs built into supercomputers today are as good as anybody knows how to make one. True, they're not exotic, in that you can also buy one yourself for $700 on Newegg. But they represent billions of dollars in design and are produced only in multi-billion-dollar fabs. There is no respect in which they are not light-years more advanced than any custom silicon Cray ever put out.

      Second, you are wrong that we are not reaching the limits of scaling these types of machines. Performance does not scale infinitely on realistic workloads. And budgets and power supply certainly do not scale infinitely.

      • First, the CPUs built into supercomputers today are as good as anybody knows how to make one.

        Well, that's wrong... we just aren't commercially manufacturing the ones we know how to make already.

        There is no respect in which they are not light-years more advanced than any custom silicon Cray ever put out.

        That's true... but only because you intentionally limited us to Si as the substrate. GaAs transistors have a switching speed around 250GHz, which is about 60 times what we get with absurdly cooled and over-clocked silicon.

        • Well, that's wrong... we just aren't commercially manufacturing the ones we know how to make already.

          What is missing?

        • The attempts to make large GaAs chips and supercomputers failed spectacularly, for good reason: even at the slow speeds of the early 1990s the stuff had to sit in a bucket of coolant. That bad choice of GaAs made the Cray-3 fail.

      • Actually, the cost to fab custom chips is a huge impediment to getting faster (at least faster on Linpack) supercomputers. Both of the Japanese entries that have grabbed the top spot in the past 10 years (the Earth Simulator and the K computer) were actually custom jobs that added in extra vector CPUs. These machines were very fast but also very expensive to make because they had such small runs of CPUs. The K computer was slightly better in this regard as it uses a bunch of SPARC CPUs with basically an extra vector u
      • And budgets and power supply certainly do not scale infinitely.

        Unless you are the NSA. ;)

    • by Anonymous Coward on Wednesday November 20, 2013 @02:53PM (#45474695)

      The problem is that there are many interesting problems which don't parallelize *well*. I emphasize *well* because many of these problems do parallelize; it's just that the scaling falls off by an amount that matters more the more thousands of processors you add. For these sorts of problems (of which there are many important ones), you can take Latest_Processor_X and use it efficiently in a cluster of, say, 1,000 nodes, but probably not 100,000. At some point the latency and communication and whatnot just take over the equation. Maybe for a given problem of this sort you can solve it in 10 days on 10,000 nodes, but the runtime only drops to 8 days on 100,000 nodes. It just doesn't make fiscal sense to scale beyond a certain limit in these cases. For these sorts of problems, single-processor speed still matters, because they can't be infinitely scaled by throwing more processors at the problem, but they can be infinitely scaled (well, within information-theoretic bounds dealing with entropy and heat density) by faster single CPUs (which are still clustered to the degree it makes sense).

      CMOS basically ran out of real steam on this front several years ago. It's just been taking a while for everyone to soak up the "easy" optimizations that were lying around elsewhere to keep making gains. Now we're really starting to feel the brick wall...
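      A toy strong-scaling model of the effect described above (the constants are made up to roughly reproduce the 10-days/8-days example, not measured from any real code):

      # Total runtime = a fixed, non-scaling part (serial work plus communication
      # overhead) plus a part that divides across nodes. Constants are illustrative.
      def runtime_days(nodes, fixed=7.5, parallel_node_days=25_000):
          return fixed + parallel_node_days / nodes

      for nodes in (1_000, 10_000, 100_000):
          print(f"{nodes:>7} nodes -> {runtime_days(nodes):5.1f} days")

      Going from 10,000 to 100,000 nodes buys barely two days in this model, which is why the extra 90,000 nodes rarely make fiscal sense.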

  • A bit of humor in one of the linked articles?

    To eliminate the wire-like or metallic nanotubes, the Stanford team switched off all the good CNTs. Then they pumped the semiconductor circuit full of electricity. All of that electricity concentrated in the metallic nanotubes, which grew so hot that they burned up and literally vaporized into tiny puffs of carbon dioxide. This sophisticated technique was able to eliminate virtually all of the metallic CNTs in the circuit at once.

    Bypassing the misaligned nanotubes required even greater subtlety.

    ......

  • by Orp ( 6583 ) on Wednesday November 20, 2013 @03:03PM (#45474803) Homepage
    ... are the biggest problems from where I'm sitting here in the convention center in Denver.

    In short, there will need to be a serious collaborative effort between vendors and the scientists (most of whom are not computer scientists) in taking advantage of new technologies. GPUs, Intel MIC, etc. are all great only if you can write code that can exploit these accelerators. When you consider that the vast majority of parallel science codes are MPI only, this is a real problem. It is very much a nontrivial (if even possible) problem to tweak these legacy codes effectively.

    Cray holds workshops where scientists can learn about these new topologies and some of the programming tricks to use them. But that is only a tiny step towards effectively utilizing them. I'm not picking on Cray; they're doing what they can do. But I would posit that before the next supercomputer is designed, it should be designed with input from the scientists who will be using it. There are scarce few people with both the deep physics background and the computer science background to do the heavy lifting.

    In my opinion we may need to start from the ground up with many codes. But it is a Herculean effort. Why would I want to discard my two million lines of MPI-only F95 code that only ten years ago was serial F77? The current code works "well enough" to get science done.

    The power problem - that is outside of my domain. I wish the hardware manufacturers all the luck in the world. It is a very real problem. There will be a limit to the amount of power any future supercomputer is allowed to consume.

    Finally, compilers will not save us. They can only do so much. They can't write better code or redesign it. Code translators hold promise, but those are very complex.
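    For readers outside HPC, "MPI-only" code has roughly the shape below: every rank runs the same program on its own piece of the domain and exchanges data explicitly, and none of it touches an accelerator without restructuring. (A minimal sketch using the mpi4py and NumPy libraries; it is not taken from any real application.)

    from mpi4py import MPI
    import numpy as np

    # Flat-MPI pattern: one process per core, explicit message passing, no GPU offload.
    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank owns a chunk of the domain and does its local number crunching.
    local = np.full(1_000_000, float(rank))
    local_sum = local.sum()

    # Explicit communication: combine the partial results from every rank.
    total = comm.allreduce(local_sum, op=MPI.SUM)

    if rank == 0:
        print(f"global sum across {size} ranks: {total}")

    Launch with something like "mpirun -n 4 python demo.py". Porting this style of code to GPUs means rethinking the data layout and the communication pattern, which is exactly the effort described above.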
    • "Why would I want to discard my two million lines of MPI-only F95 code that only ten years ago was serial F77? The current code works "well enough" to get science done."

      Out of genuine curiosity (I'm not nearly familiar enough with either the economics or the cultural factors involved), would the hardware vendors, rather than the scientists (who are scientists, not computer scientists, and just want to get their jobs done, not become programmers, so aren't strongly motivated to change), be in a position t
      • by jabuzz ( 182671 )

        Interesting thought. I guess the answer is that for the small percentage of HPC users that code stuff, they need to keep updating the code as time goes by. So they might not want to learn CUDA/OpenCL etc.

        On the other hand in my experience most HPC users are using a preexisting application to do something like CFD, molecular dynamics etc. For these there are open source applications like OpenFOAM, NAMD etc. that it would make sense for Nvidia to throw engineering effort at to improve the GPU acceleration.

        The

      • by jd ( 1658 )

        A surprising amount is FOSS. I routinely get screamed at by irate scientists for listing their stuff on Freshm...freecode.

  • was going to be gallium arsenide, but it never made it to market.

  • by Theovon ( 109752 ) on Wednesday November 20, 2013 @03:19PM (#45474905)

    There are plenty of algorithms that benefit from supercomputers. But it turns out that a lot of the justification for funding supercomputer research has been based on bad math. Check out this paper:

    http://www.cs.binghamton.edu/~pmadden/pubs/dispelling-ieeedt-2013.pdf

    It turns out that a lot of money has been spent to fund supercomputing research, but the researchers receiving that money were demonstrating the need for this research based on the wrong algorithms. This paper points out several highly parallelizable O(n-squared) algorithms that researchers have used. It seems that these people lack an understanding of basic computational complexity, because there are O(n log n) approaches to the same problems that can run much more quickly, using a lot less energy, on a single-processor desktop computer. But they’re not sexy because they’re not parallelizable.

    Perhaps some honest mistakes have been made, but it trends towards dishonesty as long as these researchers continue to use provably wrong methods.
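    The gap the paper is talking about is easy to quantify (a rough back-of-the-envelope comparison; constant factors are ignored, which is the usual caveat):

    import math

    n = 10**6                       # a million-element problem instance
    n_squared_ops = n * n           # work for the parallel O(n^2) approach
    n_log_n_ops = n * math.log2(n)  # work for the serial O(n log n) approach

    print(f"O(n^2):     {n_squared_ops:.1e} operations")
    print(f"O(n log n): {n_log_n_ops:.1e} operations")
    print(f"ratio:      {n_squared_ops / n_log_n_ops:,.0f}x")

    At a ratio of roughly 50,000x, the O(n log n) code on one desktop core beats the O(n^2) code even when the latter is spread over tens of thousands of processors.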

  • by Tim12s ( 209786 ) on Wednesday November 20, 2013 @03:59PM (#45475271)

    The next step has already started.

    Micronizing truly massive supercomputers is the next step for "applied sciences". We've gotten used to measuring data centres in power; I reckon it will be computing power per cubic foot or something like that. It'll start with drones, then it will move to shipping and long-haul robotics. After that it'll move to mining applications. I'm not talking about automation but rather truly autonomous applications that require massive computation for collision avoidance and programmed execution.

    At this point it'll be a race to redo the industrial age, albeit with micronized robotics. Again, already started with 3D printing.

    Hopefully by then someone figures out how to get off this rock.

  • That means there are hard limits to the technology the NSA is using against us.

  • Tianhe-2? (Score:4, Informative)

    by therealobsideus ( 1610557 ) on Wednesday November 20, 2013 @04:46PM (#45475863)
    Totally off topic, but I ended up getting drunk with a bunch of people who are here in town for SC13 last night. Those boys can drink. But I'm surprised that there wasn't more talk about Tianhe-2 there, and how China is going to kick the US off the top 25 in international supercomputing.
  • by Anonymous Coward

    Yes, Moore's Law is just about over. Fortunately all signs point towards graphene transistors actually being workable within a decade. We can make them have a bandgap, we can produce ever larger crystals of pure graphene, we can isolate it from the environment to avoid contamination. We can, in labs, do everything needed to make graphene transistors already. Combining everything effectively and commercially may take a while, but it'll happen and by 2023 you'll be running your Google Glass v10's CPU at sever

  • D-Wave scaled up superconducting foundry output for their quantum chip [wavewatching.net]; I see no reason not to leverage this for conventional superconducting chips.

  • Intel's latest creations are basically x86-themed Transputers, which everyone (other than Intel) has been quite aware was inevitable. The only possible rival was processor-in-memory, but the research there has been dead for too long.

    Interconnects are the challenge, but Infiniband is fast and the only reason Lightfleet never got their system working is because they hired managers. I could match Infiniband speeds inside of a month, using a wireless optical interconnect, with a budget similar to the one they s

  • 50 years ago, the state of the art cost a billion dollars, equivalent to about US$25 billion today. So, at around $100 million, today's machines cost 0.004, or 1/250, of what they formerly did. WTG, techies!

  • I'm not saying that lagging software is a problem: it's not. The problem is that there are so few real needs that justify the top, say, 10 computers. Most of the top500 are large not because they need to be - that is, not because they'll be running one large job - but rather because it makes you look cool if you have a big computer/cock.

    Most science is done at very modest (relative to top-of-the-list) sizes: say, under a few hundred cores. OK, maybe a few thousand. These days, a thousand cores will take less th

  • 3D chips, memristors, spintronics. I am surprised these are not mentioned prominently in this thread. I was hoping to hear about the latest advances in these areas from people in the industry.

    3D chips. As materials science and manufacturing precision advance, we will soon have multi-layered chips (starting with the few layers Samsung already has, but going up to thousands) or even fully 3D chips with efficient heat dissipation. This would put the components closer together and streamline the close-range interconnects.
