1.21 PetaFLOPS (RPeak) Supercomputer Created With EC2

An anonymous reader writes "In honor of Doc Brown: Great Scott! Ars has an interesting article about a 1.21 PetaFLOPS (RPeak) supercomputer created on Amazon EC2 Spot Instances. According to HPC software company Cycle Computing's blog, it ran Professor Mark Thompson's research to find new, more efficient materials for solar cells. As Professor Thompson puts it: 'If the 20th century was the century of silicon materials, the 21st will be all organic. The question is how to find the right material without spending the entire 21st century looking for it.' El Reg points out this 'virty super's' low cost. Will the cloud democratize access to HPC for research?"

  • by serviscope_minor ( 664417 ) on Wednesday November 13, 2013 @02:32PM (#45415115) Journal

    1.21 PetaFLOPS (RPeak)

    Getting a high RPeak is simply a matter of getting access to enough computers. They could be connected by TCP/IP over pigeons or PPP over two tin cans and a piece of wet string.

    Basically getting a high RPeak on EC2 requires the following procedure:
    1. Pay a fuck load of money
    2. Create new instance.
    3. Goto 2.

    Basically this article translates to "Amazon has a lot of computers and this guy rented out a bunch of them at once".

    Which I'm sure is good for his research, which must be of the very parallelizable type. I have done such stuff too in the past and it's nice when you have it.

    • by fuzzyfuzzyfungus ( 1223518 ) on Wednesday November 13, 2013 @03:14PM (#45415555) Journal
      The one (slightly) novel aspect of this, presumably also made possible because the workload parallelized well, is the use of Spot Instances [amazon.com]. As the name suggests, these aren't Amazon's standard fixed-price instances, but rather instances whose price changes according to demand.

      You make a bid (specifying maximum price/hour, number and type of instances, availability zones, etc.). If the spot price falls at or below your maximum, your instance starts running. Should it exceed your maximum, your instance gets terminated. Using these things obviously requires a tolerance for server outages far above even the shoddiest physical systems; but if you can divide your problem space into relatively small, discrete chunks, and get the results off the individual servers once computed, you won't lose more than a single chunk per shutdown, and spot instances can be crazy cheap, depending on demand at the time. My impression is that Amazon offers them whenever they don't have enough reserved instances to fill a given area, and will pretty much keep offering them as long as they pay better than they cost in additional electricity and cooling, so if you are willing to bottom feed, and potentially wait, there are some bargains to be had. (A sketch of this bid workflow appears at the end of this comment thread.)
      • ... If the spot price falls at or below your maximum, your instance starts running. Should it exceed your maximum, your instance gets terminated. Using these things obviously requires a tolerance for server outages far above even the shoddiest physical systems; but if you can divide your problem space into relatively small, discrete, chunks, and get the results off the individual servers once computed, you won't lose more than a single chunk per shutdown...

        And how is this different from SETI@Home, other t

        • Architecturally, it really isn't. The main difference is just that, unlike the heyday of SETI@Home (which, in part, was greatly aided by the dearth of portables and the relatively lousy system idle powersave modes of the time), you can rent time on other people's computers with such low friction that humans needn't be involved (and, indeed, the intention is that they aren't, except at high levels), and that Amazon has a specific pricing mechanism for varying the price of machine time, in quite fine incremen
      • The one (slightly) novel aspect of this, presumably also made possible because the workload parallelized well, is the use of Spot Instances [amazon.com]. As the name suggests, these aren't Amazon's standard fixed-price instances; but are rather instances whose price changes according to demand.

        Even that isn't novel. Quoting some work done last year "Running a 10,000-node Grid Engine Cluster in Amazon EC2" [scalablelogic.com]: "Also, we mainly requested for spot instances because ..."

        Doesn't make it less interesting for me though.

        • Interesting, I didn't know about that one, though it certainly makes sense to use spot instances for a compute problem loosely-coupled enough that EC2 wouldn't be a total joke.
    • Basically this article translates to "Amazon has a lot of computers and this guy rented out a bunch of them at once".

      No, the article translates to "if you've got embarrassingly parallel workloads you can use EC2 to churn through them without a massive infrastructure outlay of your own". Amazon isn't just renting out the actual CPUs but the power, HVAC, storage, and networking to go along with it. Infrastructure and maintenance are a huge cost of HPC and put it out of reach for many smaller projects.

      You're enti

      • The lede buried in the reporting is that for $33,000 a professor was able to take off-the-shelf software and run it on a 1.21 petaflop parallel cluster. That's high-teraflop to petaflop computing at relatively small research grant prices. I think that's the interesting fact out of this story.

        Well, moderately so, except that there are quite a few supercomputers out there at academic institutions which rent themselves out to academic users. I wonder how much they cost by comparison. These days they are general

        • There are several potential problems with renting time on another university's cluster. For one there may simply be a lot of bureaucratic steps involved in renting out resources from another university. The second is that some cluster you don't own might not support your particular software/platform/project.

          One attractive aspect of cloud services is the customer gets to load on whatever wonky configuration they want into a virtualized instance. Using someone else's cluster may not provide that sort of fle

          • For one there may simply be a lot of bureaucratic steps involved in renting out resources from another university.

            True if it's another university's cluster. However, there are quite a few academic supercomputer centres which are built specifically to rent out space to universities. They're often not even associated with universities at all.

            The second is that some cluster you don't own might not support your particular software/platform/project.

            Indeed, especially if the cluster is something exotic. If it's

    • But how is the EC2 network set up? How do you make sure you have the right balance between N/S and E/W traffic?
    • I raised the point some time back that perhaps the various providers could lend instance idle-time to various distributed computing projects as, say, a tax deduction. At least a half-step closer, although you have a good point about usability.
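
As a rough illustration of the spot-bid workflow described in the comment above, here is a minimal sketch using Python and the boto3 library. The AMI ID, instance type, key name, instance count, and bid price are all placeholders, not values from the article or from Cycle Computing's actual setup.

```python
# Minimal sketch of requesting EC2 Spot Instances with boto3.
# All identifiers and prices below are placeholders, not values from the article.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Submit a one-time bid: the request is fulfilled only while the current
# spot price is at or below SpotPrice; if the price later rises above it,
# the launched instances are terminated.
response = ec2.request_spot_instances(
    SpotPrice="0.10",               # maximum USD per instance-hour we will pay
    InstanceCount=100,              # how many worker instances to ask for
    Type="one-time",
    LaunchSpecification={
        "ImageId": "ami-00000000",  # placeholder AMI with the solver pre-installed
        "InstanceType": "c3.xlarge",
        "KeyName": "my-keypair",
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])
```

Because any of these instances can disappear when the spot price moves, the workload has to be chunked so that a terminated worker only costs one chunk, exactly as the comment above describes.
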
  • by Squiddie ( 1942230 ) on Wednesday November 13, 2013 @02:33PM (#45415129)
    But can it run Crysis?
  • FTA (Score:5, Insightful)

    by Saethan ( 2725367 ) on Wednesday November 13, 2013 @02:37PM (#45415169)
    FTA:

    Megarun's compute resources cost $33,000 through the use of Amazon's low-cost spot instances, we're told, compared with the millions and millions of dollars you'd have to spend to buy an on-premises rig.

    Running somebody else's machines for 18 hours costs less than buying a machine that powerful for yourself to run 24/7...

    NEWS AT 11!

    • by AvitarX ( 172628 )

      That's what I thought. It is great that it is possible to run a simulation on a five-figure budget, but if it's something that gets heavy use, having your own is better. I predict this will help Cray (contrary to the article's implication): with companies able to start using big power and see where it takes them without dropping the capital expense, they will then be able to move to a more constant use of such resources, with lower marginal cost, by bringing it in house.

      • It wouldn't surprise me if organizational dynamics come into the picture as well. If researcher X can purchase consumables and services related to his work up to X dollars on his own (subject only to oversight after the fact if somebody raises an eyebrow) and up to Y dollars with a sign-off from the lab head or somebody, but would need 6 signatures, university-level approval for the facilities repurposing, and who knows what else, he has a pretty strong incentive to just pay Amazon to do it, even if getting
    • I hate how the press sensationalizes renting out server space by calling it "the cloud". Even marketing geared towards IT professionals does it, and everyone speaks of having someone host your files and calls it "the cloud" as if it just happened a few years ago. It's almost marketed as some mysterious magical force that just puts everything into play. I hate it. Get off my lawn.
      • Also, why is it special that he rented Amazon's computing time? If he had rented computing time on a University supercomputer, or a cluster owned by another private corporation, would it have made a sensationalist headline? Had a University donated the time to him, would it have been news? This is nothing but astroturfing for Amazon's proprietary service, and has no place here.
  • El Reg (Score:2, Insightful)

    by spike hay ( 534165 )

    How about we not use the anti-science mouthbreathers at the Register as a source.

  • HPC? (Score:5, Insightful)

    by NothingMore ( 943591 ) on Wednesday November 13, 2013 @02:44PM (#45415243)
    "Supercomputing applications tend to require cores to work in concert with each other, which is why IBM, Cray, and other companies have built incredibly fast interconnects. Cycle's work with the Amazon cloud has focused on HPC workloads without that requirement." While this is cool, Can you really call something like this an HPC system if you are picking work loads that require little cross node communication? The requirement of cross node communication is pretty much the whole reason large scale HPC machines like ORNL's Titan exist at all. Wouldn't this system be classified closer to HTC because it is targeting workloads that are similar to those which would be able to run on HTC Condor pools?
    • If we follow the article's reasoning, then SETI@home was one massive supercomputer, not 10,000 individual computers working on parts of a common task.
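
For what "workloads that require little cross-node communication" look like in practice, here is a hedged sketch of an embarrassingly parallel parameter sweep. score_candidate() is a stand-in for one independent simulation, not the materials-science code used in the actual run.

```python
# Sketch of an embarrassingly parallel sweep: every candidate is scored
# independently, so the work can be cut into chunks and farmed out to any
# number of machines (or spot instances) with no inter-node communication.
# score_candidate() is a placeholder, not the code from the article.
from concurrent.futures import ProcessPoolExecutor

def score_candidate(params):
    # Stand-in for one expensive, independent simulation of a compound.
    a, b = params
    return a * a + b * b

def chunks(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

if __name__ == "__main__":
    candidates = [(a, b) for a in range(100) for b in range(100)]
    results = {}
    # Each chunk could just as easily go to a separate machine; losing one
    # worker loses only that chunk, which can simply be re-queued.
    with ProcessPoolExecutor() as pool:
        for batch in chunks(candidates, 500):
            for params, score in zip(batch, pool.map(score_candidate, batch)):
                results[params] = score
    print(len(results), "candidates scored")
```
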

  • Good but not great (Score:5, Insightful)

    by Enry ( 630 ) <enry@@@wayga...net> on Wednesday November 13, 2013 @03:01PM (#45415427) Journal

    So this ran for 18 hours at about $1800/hour. That gives you just under $44,000 per day, or $16 million for a year.

    Give me $16 million a year and I can build you a very kick-butt cluster - the one I'm just finishing up is 5000 cores at about $3 million.

    EC2 is great if your needs are small and intermittent. But if you're part of a larger organization that has continual HPC needs, you're going to be better off building it yourself for a while.

    • by cdrudge ( 68377 )

      Give me $16 million a year and I can build you a very kick-butt cluster - the one I'm just finishing up is 5000 cores at about $3 million.

      Presuming costs scale approximately linearly, $16m would net you 26-27k cores. They hit 6x that at peak. I didn't see them mention what they sustained over the long haul or averaged, but it looks like it was well above your scaled core numbers. (A back-of-envelope version of this arithmetic appears at the end of this thread.)

    • So this ran for 18 hours at about $1800/hour. That gives you just under $44,000 per day, or $16 million for a year.

      Give me $16 million a year and I can build you a very kick-butt cluster - the one I'm just finishing up is 5000 cores at about $3 million.

      EC2 is great if your needs are small and intermittent. But if you're part of a larger organization that has continual HPC needs, you're going to be better off building it yourself for a while.

      People need to stop thinking of "cloud" as some kind of magic fairy land. It's just a bunch of servers and software that cost the same to purchase as anywhere else. Plus they have to make a profit. So of course you can build it cheaper yourself, if all you are comparing is bare hardware.
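
To make the rent-versus-buy arithmetic in this thread explicit, here is a back-of-envelope sketch using only the figures quoted by the commenters above ($33,000 for the 18-hour spot run; roughly $3 million for a 5,000-core cluster). Power, cooling, staffing, depreciation, and utilization are deliberately ignored, so treat it as an illustration of the scaling, not a real cost model.

```python
# Back-of-envelope rent-vs-buy comparison using figures quoted in this thread.
spot_run_cost = 33_000            # USD for the 18-hour spot run
run_hours = 18

hourly = spot_run_cost / run_hours            # roughly $1,833 per hour
yearly_if_continuous = hourly * 24 * 365      # roughly $16M if run around the clock

owned_cluster_cost = 3_000_000    # USD for a ~5,000-core cluster (commenter's figure)
owned_cores = 5_000
cores_for_same_spend = owned_cores * yearly_if_continuous / owned_cluster_cost

print(f"${hourly:,.0f}/hour, ${yearly_if_continuous:,.0f}/year if run continuously")
print(f"~{cores_for_same_spend:,.0f} owned cores for the same annual spend")
```

That lines up with the 26-27k cores mentioned above, against a peak of roughly six times as many cores during the spot run, which is the crux of the disagreement: intermittent bursts favor renting, sustained use favors owning.
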

  • 1) Did they FIND any exceptional and useful photovoltaic behavior in the compounds tested?

    2) How much will this sort of crunch make up of the revenue lost to the rest of the world's migration away from US-based cloud services, in the wake of Snowden's revelations?

    • 1) Did they FIND any exceptional and useful photovoltaic behavior in the compounds tested?

      The NSA certainly knows, and can tell any company they like.

      2) How much will this sort of crunch make up of the revenue lost to the rest of the world's migration away from US-based cloud services, in the wake of Snowden's revelations?

      If you are concerned about your competitors knowing about your internal research... I guess my answer to your first question also answers this one.

      • 1) Did they FIND any exceptional and useful photovoltaic behavior in the compounds tested?

        The NSA certainly knows, and can tell any company they like.

        Good point.

        Government signals intelligence has a long track record of being used for industrial espionage, leaking both sales and tech info from foreign competitors to the country's own companies.

        Examples include China's military leaking Cisco (and apparently other companies') tech to Huawei, the US bugging Toyota and Nissan for the benefit of US auto companies

  • No relation in any way to Doc Brown ... must be another troll posting articles! 1.21 gigawatts.
  • While this is a nice use of Amazon's EC2 to build a high-throughput system, it doesn't translate as nicely to what most high-performance computing users need: high network bandwidth, low latency between nodes, and large, fast shared filesystems on which to store and retrieve the massive amounts of data being used or generated. The cloud created here is only useful to the subset of researchers who don't need those things. I'd have a hard time calling this High Performance Computing.

    Look at XSEDE's HPC resources [xsede.org]

    • by Yohahn ( 8680 ) on Wednesday November 13, 2013 @03:57PM (#45415995)

      The problem is that in a number of cases a researcher could easily use HTC, but they follow the fashion of HPC, using more specialized resources than necessary.
      Don't get me wrong, there are a number of cases where HPC makes sense, but usually what you need is either a large amount of memory or a large number of processors.
      HPC only makes sense where you need both.

      • by dlapine ( 131282 )

        Sure, that's why I said that this is an advance. If you don't need HPC resources, this can work really well. But you have to educate scientists and researchers on the difference, and this article doesn't do that well enough.

      • No, the distinguishing feature of HPC is primarily access to a large set of cores with a fast interconnect. Generally heterogeneous, with a flat, high-bisection fabric. Lots of memory is definitely not necessary; nor are features like SSDs or GPUs.

        • by Yohahn ( 8680 )

          I was stating that in a number of cases you don't need HPC; you need the high memory instead of the interconnect, because basically researchers write programs that just use the interconnect to provide a large memory.

      • "Following the fashion of HPC" is a bit harsh. It depends on whether the research group gets money (which they could spend on exactly the sort of compute that would suit them) or in-kind funding with grants of time at an existing large HPC site, and how much data they expect to produce, and where/how long they intend to store it. For instance, Australian university researchers had to pay ISP traffic charges on top of Amazon's own charges to download data from Amazon until November of 2012, when AARNET peer

  • Now all they need is a flux capacitor, and then they can... oh wait...
  • ...will find this the sort of thing they like. For people/groups who have SETI@home or Folding@home style workloads - the type that the HPC community call "embarrassingly parallel" - and some money, this is useful. But it's sad that there is no mention made in the article of Condor [wikipedia.org] - a job manager for loosely coupled machines that has been doing the same kind of thing since the '80s - essentially, since there has been a network between a few sometimes-idle computers in a CS department. Cycle Computing it

  • Amazon makes a killing renting computers. Certain kinds of enterprises really want to pay extra for the privilege of outsourcing some of their IT to Amazon - sometimes it really makes sense and sometimes they're just fooling themselves.

    People who do HPC usually do a lot of HPC, and so owning/operating the hardware is a simple matter of not handing that fat profit to Amazon. Most HPC takes place in consortia or other arrangements where a large cluster can be scheduled to efficiently interleave bursty usage
