Supercomputing | Australia | Hardware | IT

Slimming Down a Supercomputer

1sockchuck writes "Happy Feet animator Dr. D Studios has packed a large amount of supercomputing power into a smaller package in its new render farm in Sydney, Australia. The digital production shop has consolidated the 150 blade chassis used in the 2007 dancing penguin feature into just 24 chassis, entirely housed in a hot-aisle containment pod. The Dr. D render farm has moved from its previous home at Equinix to the E3 Pegasus data center in Sydney. ITNews has a video and photos of the E3 facility."
  • He has burning feet? :p Had to say it. It's interesting to see the reduction in space.
  • In slow motion, please.

  • Priceless (Score:5, Funny)

    by syousef ( 465911 ) on Sunday March 28, 2010 @03:26AM (#31645770) Journal

    Cost of real estate in prime metropolitan area - $15 million
    Cost of state-of-the-art server racks - $30 million
    Cost of flying in a cooler the size of a small bus on a 747 - $2 million
    Cost of seeing data center employee's face when they realise they're on call 24/7 for no extra cash - Priceless.

    • Value of free publicity for Happy Feet and Hewlett Packard from the advertorial: $50,000

    • A380 actually. :P

    • Cost of seeing data center employee's face when they realise they're on call 24/7 for no extra cash - Priceless.

      Well, if he agreed to give away work for free because he thinks he’s worth that little, then that’s his own damn fault.
      People who don’t learn to say no will obviously get walked all over. Your boss is only a client of yours. You can always get another client. Either he offers a good deal, or he can GTFO.

  • It was a billion times more entertaining than Happy Feet.

    • Happy Feet was a fun movie. How dare you?

      • by M8e ( 1008767 )

        Happy Feet was at least more entertaining than The Lord of the Rings. They had singing, dancing and walking in Happy Feet. The walking parts were also better, as penguins do that in a more entertaining way.

        • We here in Europe don’t get what you Americans like about all the singing in movies and shows. Always the pointless singing, while the rest of us just collectively cringe. It ruins the whole movie for us.
          Not judging here. Do whatever makes you happy. :)
          But we don’t get it, and can’t stand such movies.

      • Indeed. It was an insult to pedophiles everywhere.

  • and I'm getting a kick out of these "claims" of supercomputer "prowess"...
  • But it's in Flash. And I didn't have the patience to wait for the clouds and animation to finish.

    http://www.e3networks.com.au/ [e3networks.com.au]

    Who is this supposed to be targeting? You have to be a class A moron to build a data centre website using flash on the landing page.

    • by deniable ( 76198 ) on Sunday March 28, 2010 @04:07AM (#31645904)
      It's targeted at managers with money. Need I say more?
    • Re: (Score:1, Interesting)

      by Anonymous Coward

      Slightly off-topic, but...

      1. The funny thing about their site design is that about 90% of it could have easily been done with mouseovers and no flash.

      2. None of the text can be highlighted. Let's say that they were the solution for my business, and I just needed to e-mail someone in management a snippet about their site. Too bad. No copy and paste.

      It feels like it's 2001 or 2002 again.

    • by MrMr ( 219533 )
      class B morons, obviously.
    • by Macka ( 9388 )

      You don't wait for it to finish; there is no finish. You just click on the background and it loads the rest of the site.

      Flash isn't just used for the landing page: it's the whole site. Every scrap of it is Flash. I feel sick!

  • by hallux.sinister ( 1633067 ) on Sunday March 28, 2010 @04:05AM (#31645890)
    You all do realize that electrons spin backwards there, right?
    • Re: (Score:2, Funny)

      by M8e ( 1008767 )

      Not only that, they are also upside down.

    • by Nkwe ( 604125 )

      You all do realize that electrons spin backwards there, right?

      Only when you are not watching.

    • by Lorens ( 597774 )

      You all do realize that electrons spin backwards there, right?

      Moderation +2, 100% Informative

      Only on Slashdot.

    • by CODiNE ( 27417 )

      Dude you could at least CITE it. Hello!
      http://en.wikipedia.org/wiki/Coriolis_effect [wikipedia.org]

      • Sorry, forgot. Interesting Wiki Article though. Some of it is beyond me, although it may also be that it's closing in on two in the morning, several days past my bedtime.
        • by CODiNE ( 27417 )

          I was joking to make your joke about electrons going backwards seem more real. But nobody modded me funny cuz they thought I was serious. No deadpan humor on the net. It's all about the voice.

    • Re: (Score:1, Funny)

      by Anonymous Coward

      It's ok, they just flip the servers upside-down.

  • by slincolne ( 1111555 ) on Sunday March 28, 2010 @04:06AM (#31645898)
    Physical space is the least interesting point of this article. Other things would be:

    What racks are they using (at least 42RU in height)?

    How do they get power into these (4 chassis, each with 6 x 15 A power inlets)?

    Are they using top-of-rack switches, or is there more equipment?

    Are they using liquid-cooled doors - if so, whose?

    I once tried to get answers from HP on how to power their equipment at this density - they didn't have a clue. It's worth remembering that each of these chassis has six power supplies, each rated at up to 2.2 kW. Even allowing for a 2N configuration, that's a massive amount of power, and a lot of cables.
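
    As a back-of-the-envelope sketch of what that adds up to (the 6 x 2.2 kW PSUs and 4 chassis per rack are the figures above; the 240 V supply voltage and the simple 2N halving are my assumptions, not anything from HP or the article):

      # Rough worst-case power estimate for a rack of four blade chassis,
      # using the figures quoted in the comment above.
      PSUS_PER_CHASSIS = 6
      PSU_KW = 2.2               # nameplate rating per power supply
      CHASSIS_PER_RACK = 4       # as in the comment above
      VOLTAGE = 240              # assumed supply voltage

      nameplate_kw = PSUS_PER_CHASSIS * PSU_KW        # 13.2 kW per chassis
      usable_kw = nameplate_kw / 2                    # ~6.6 kW per chassis with 2N redundancy
      rack_kw = usable_kw * CHASSIS_PER_RACK          # ~26.4 kW per rack, worst case
      total_amps = rack_kw * 1000 / VOLTAGE           # ~110 A at 240 V
      cords = CHASSIS_PER_RACK * PSUS_PER_CHASSIS     # 24 power cords per rack

      print(f"{rack_kw:.1f} kW per rack, ~{total_amps:.0f} A total, {cords} power cords")

    Even the 2N-derated number is a lot of power and cabling to get into one rack, which is presumably the point of the question.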

    • by imevil ( 260579 ) on Sunday March 28, 2010 @06:42AM (#31646290)

      TFA says they use 48RU racks, and each cabinet draws 14.4 kW (60 A), which in my opinion is not that impressive: you just need three phases at 20 A, 240 V.

      As for cooling, you can easily get away with no water cooling if your hot-aisle containment is well done. From the pics it is just Dell 1U servers, and if you fill one 48U rack with those you do get to 14.4 kW. But not all racks are for number-crunching; you have racks for storage, control and network, and those draw less than 8 kW.

      The problem is not powering those things but cooling them. With good hot-aisle or cold-aisle containment you can go up to 15 kW/rack, but depending on the air volume, you're quickly screwed if the cooling fails.
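
      Those figures are internally consistent; a quick sanity check (the 300 W per server number is just the quoted rack budget divided by 48U, not something from the article):

        # Sanity check of the per-rack numbers above: 60 A at 240 V, or 3 x 20 A phases.
        VOLTAGE = 240
        rack_kw_single = 60 * VOLTAGE / 1000        # 14.4 kW from a single 60 A feed
        rack_kw_3phase = 3 * 20 * VOLTAGE / 1000    # 14.4 kW from three 20 A phases
        watts_per_1u = 14_400 / 48                  # 300 W per server if the rack is full of 1U boxes
        print(rack_kw_single, rack_kw_3phase, watts_per_1u)   # 14.4 14.4 300.0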

      • Re: (Score:3, Interesting)

        by dbIII ( 701233 )
        Dell servers? Australia is the land Dell forgot, so that's the last thing you want. From Australia you end up talking to Dell people on three different continents to get even the smallest problem solved, and the timezone difference as a barrier to communication stretches things out to weeks that should be solved in a couple of days. Plus there is far better gear from whitebox suppliers using SuperMicro boards, so why use Dell in the first place? Dell can't do two boards with 8 cores on each in 1U and I've got some of those a couple of years old now.
        • by Anpheus ( 908711 )

          We just got a few 1U dual socket quad core servers from Dell, so I don't know why you're saying they can't do it.

          • by dbIII ( 701233 )
            No, you've misunderstood - two servers in 1U with a power supply between the two boards. Others have had them for a couple of years.
            • by Anpheus ( 908711 )

              Oh, they just released those. They have a 2U box with 4 servers sharing two PSUs now.

              PowerEdge C6100 I think. But you're right, I remember looking at HP and the others and they did have them but they weren't the right price for us, as we didn't need density.

        • Dell can't do two boards with 8 cores on each in 1U

          There comes a point where you're cramming too much heat into a case and the whole system rapidly becomes unstable.

          Shoving 16 cores into a single 1U case without doing the numbers blows right past any sane level of risk.

          Great, you can get that many in one U... Dell just doesn't want to deal with supporting such hardware and all the heat issues that come with it.

          There's more to a data center than how much you can stuff into the racks; it actually has to work.

          • by dbIII ( 701233 )
            They've worked well for at least two years and other places had them before me, so how's that for "any sane risk"?
            "Doing the numbers" is called design - in those cases enough airflow does the job.
            There are of course denser setups than that anyway, but that changes the price category - while the two-servers-in-1U boxes cost less than 2 x 1U Dell servers of equivalent specs. If you don't need the extra drive bays it's not worth going for Dell, especially if you are in a country where their support is lacking.
        • by drsmithy ( 35869 )

          Dell can't do two boards with 8 cores on each in 1U and I've got some of those a couple of years old now.

          They're called blades.

          The density isn't quite as high, but since you'll nearly always run out of power or cooling long before you run out of rack space, even with 1U boxes, there's not a lot of benefit from increasing density much past even a simple 1U pizza box. The benefits of blades are more in the management centralisation and reduced cabling, which you don't get in those servers you're talking about.

          • by dbIII ( 701233 )
            Those particular boxes ended up being cheaper than 2 x 1U nodes with the same processing power, and five of them are cheaper than 10 equivalent blades plus a chassis.
            My main point is that Dell lags well over a year behind many of the other vendors and often costs more, so if they won't give you support where you are there is no reason to go with them.
    • They're likely not using top-of-rack switches, since you can pack a nutty amount of bandwidth into relatively few links with 10Gb switches a la Cisco 3120s. I would be unsurprised if they had a Nexus 7000 in the middle of it all.

      The article does mention that they're using HP blade servers, not Dells as another commenter posted. In the video they showed a BL490c G6 blade, which is a dual-socket Nehalem blade at 16 per chassis. For cooling they were using water-cooled APC pods. The power isn't really the limiting factor here.
    • by Jaime2 ( 824950 )
      HP blade chassis are easy to power. They are designed to run three power supplies to the left and three to the right. Just run two PDUs on each side of the rack (30 to 50 amps each, depending on what servers you run). Twenty-four power cords will supply about 100 devices (64 servers, 32 switches, and 8 management units). The system is designed so you will never need all six power supplies running at full tilt, since that wouldn't be fault tolerant. You can also get away with as few as four network cables for the whole chassis.
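
      The counts above work out if you assume four c-Class enclosures per rack, each with 16 half-height blades, 8 interconnect bays, 2 management modules and 6 power supplies (that per-enclosure layout is my assumption about a standard c7000-style configuration, not something stated in the article):

        # Device and power-cord tally for four blade enclosures in one rack.
        CHASSIS = 4
        servers  = CHASSIS * 16     # 64 half-height blades
        switches = CHASSIS * 8      # 32 interconnect modules
        mgmt     = CHASSIS * 2      # 8 management units
        cords    = CHASSIS * 6      # 24 power cords, one per PSU
        print(servers + switches + mgmt, "devices fed by", cords, "cords")   # 104 devices fed by 24 cords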
  • Somehow I don't believe the banks would be happy if we asked them to participate in "The Biggest Loser"
  • The news story is that computers are faster and have more memory than they did 3 years ago, so they need fewer of them. They bought APC enclosed systems to avoid having an open-air hot aisle (of course, that means they paid a non-trivial amount for that).

    • I am surprised they need fewer of them, instead of making something just as big and several times faster. I guess faster computers just aren't needed for render farms any more. With 6000 cores, you could render the movie in real time (24 fps) if each core were allowed 4 minutes per frame.
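
      For what it's worth, the "4 minutes per frame" figure is just the following arithmetic (the ~100-minute runtime is my assumption; actual per-frame render times for a feature like this are far longer, as the reply below points out):

        # Arithmetic behind "6000 cores could render in real time at ~4 minutes per frame".
        cores = 6000
        fps = 24
        budget_per_frame_s = cores / fps            # 250 s, i.e. ~4.2 minutes per core per frame
        frames = 100 * 60 * fps                     # ~144,000 frames in a ~100-minute feature
        wall_clock_min = frames * budget_per_frame_s / cores / 60   # 100 min: real time by construction
        print(budget_per_frame_s / 60, frames, wall_clock_min)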
      • by glwtta ( 532858 )
        With 6000 cores, you could render the movie in real time (24 fps) if each core were allowed 4 minutes per frame.

        I may be wrong, but I believe it takes way more than that to render a frame.
  • Yay for Moore's Law.
  • The power cables are so easily accessible on the roof?
  • I'm really thinking that this article is leaving some very important details out... It's really strange that a money-making data center would have physical space as its primary limiting factor. Things like power, cooling, network, etc. are usually far more important than square feet of tile, especially when anyone with any experience in data centers isn't going to put it in a high-value real estate market; it's going to be out in some industrial/commercial zone in the burbs where land/power/water are cheaper.

    • by dstates ( 629350 )
      Like the fact that the power consumption density is now so high that they need to go to a rack system with water cooling. Back to the good old days of the IBM 360.
  • Is it really necessary to pack this so tightly? Doesn't the cooling cost overhead outweigh the benefit?
  • A render farm is really nothing more than a bunch of slave processors, each one rendering a separate frame. There is basically NO internode communication. A supercomputer, on the other hand, has extensive internode communication, which is why the switching fabric is so fundamentally important. So do not confuse a farm (web farm or render farm) with a supercomputer.
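    To illustrate: a farm just hands each node an independent frame, something like the toy sketch below (pure illustration with a hypothetical render_frame(), nothing to do with Dr. D's actual pipeline), so there is essentially no internode traffic and the interconnect barely matters:

      # Toy example of an embarrassingly parallel render farm: each frame is an
      # independent job, so workers never need to communicate with each other.
      from multiprocessing import Pool

      def render_frame(frame_number):
          # Stand-in for invoking a real renderer on one frame.
          return f"frame_{frame_number:06d}.exr"

      if __name__ == "__main__":
          with Pool(processes=8) as pool:                    # one process per "node"
              frames = pool.map(render_frame, range(240))    # 10 seconds of film at 24 fps
          print(len(frames), "frames rendered with zero inter-worker communication")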
  • If they waited another 2 years they could pack the same processing power into a desktop PC.

    Why are we posting stories about companies who are just upgrading the old PCs they use for their rendering farm?

    What's next? Google server farm updates? Are you going to start posting when Red Hat upgrades its FTP servers to faster hardware just because it's cheaper than replacing the old?

    I mean seriously, all they did was upgrade, and... it wasn't even a big upgrade. I've made bigger purchases than that over the phone.

  • by Ken_g6 ( 775014 )

    No GPUs?

  • Now let's see if they could put that technology to good use by creating a good film.
