
China Bumps US Out of First Place For Fastest Supercomputer

samzenpus posted about a year ago | from the king-of-the-hill dept.

Supercomputing 125

An anonymous reader writes "China's Tianhe-2 is the world's fastest supercomputer, according to the latest semiannual Top 500 list of the 500 most powerful computer systems in the world. Developed by China's National University of Defense Technology, the system appeared two years ahead of schedule and will be deployed at the National Supercomputer Center in Guangzhou, China, before the end of the year."


Clueless (0)

Anonymous Coward | about a year ago | (#44032263)

I'd normally expect this type of feat to be one of those clueless 1upsman type of scenarios except for the fact that every other lab has probably been hacked/infiltrated to copy all other research available.

Re:Clueless (3, Informative)

Anonymous Coward | about a year ago | (#44032435)

The article writer is clueless too. From TFA:

In all, Tianhe-2, which translates as Milky Way-2, operates at 33.86 petaflop per second

First of all, it's PetaFLOPS. It's not a plural, so there is no PetaFLOP. FLOPS = FLoating-point Operations Per Second, so saying "PetaFLOP per second" is saying "Peta-FLoating-point Operations Per per second"

Re:Clueless (2)

K. S. Kyosuke (729550) | about a year ago | (#44032653)

That's what the author gets for mistyping at ten writing flops per minute.

Re:Clueless (0)

Anonymous Coward | about a year ago | (#44032677)

The author misuses it multiple times. It's not a mistype, it's lack of knowledge.

Re:Clueless (1)

Sique (173459) | about a year ago | (#44033675)

If the author who compiles the list of the fastest computers in the world, and who co-developed Linpack, likes to write "petaflop/s" (see his blog entry in the second link), and if the author who writes the article in Nature World News, writes that as "petaflop per second", then who are you to argue?

Re:Clueless (1)

Kjella (173770) | about a year ago | (#44033967)

If the author who compiles the list of the fastest computers in the world, and who co-developed Linpack, likes to write "petaflop/s" (see his blog entry in the second link), and if the author who writes the article in Nature World News, writes that as "petaflop per second", then who are you to argue?

Like lack of qualifications has ever stopped any /.er from arguing they know best anyway. This place is pretty much the definition of the Internet peanut gallery.

Re:Clueless (0)

Anonymous Coward | about a year ago | (#44034181)

The person is a writer. That doesn't mean they actually know shit about the topic.

Re:Clueless (1)

Sique (173459) | about a year ago | (#44034691)

The person is a programmer. He programs Linpack, you know, the program that, as a result of a run, puts out a figure giving floating-point operations per second, derived from the number of floating point operations processed during the run and the time consumed. And he maintains this list: Top 500 [top500.org], which is the result of running Linpack on very large systems. And this list gives the computing power in TFlop/s. And not in TFLOPS.

And even if you stand on your head, this guy surely has seen a TFlop/s much earlier than you. And he probably gave the TFlop/s and the Petaflop/s their names - as his program is the tool to actually figure out whether a computer is able to put out a TFlop/s or a PFlop/s.

Re:Clueless (1)

Anonymous Coward | about a year ago | (#44032865)

No. It can be PetaFLOPS, or it can be written PetaFLOP/second or petaflop per second. Same way it can be kph or km/hr or kilometer/hour. Saying FLOP per second is like saying "FLoating OPerations per second". Yeah, that's right: not everyone uses the exact same interpretation you do. It's an abbreviation: you can use one of a number of them. And given petaflop/second is the abbreviation used by the guys who made the top 500 list for supercomputers (who, coincidentally, are the same people who wrote TFA), I'd say that's a valid usage, maybe even the preferred one in this context (given they are the industry standard experts on supercomputer speed measurement).

Re:Clueless (-1)

Anonymous Coward | about a year ago | (#44033439)

Wrong. FLOPS and MIPS go hand in hand. FLOPS = FLoating-point Operations Per Second, nothing else. MIPS = Million Instructions Per Second, nothing else. Stop trying to rewrite history to suit your delusional fairy tale world.

Re:Clueless (1)

Anonymous Coward | about a year ago | (#44036275)

FLOPS [wikipedia.org]

Re:Clueless (0)

Anonymous Coward | about a year ago | (#44033011)

Actually, it's Peta-FLoating-point-OPerations-per-Second; OPS was common jargon long before FLOPS was.

Re:Clueless (0)

Anonymous Coward | about a year ago | (#44033405)

No it isn't. Go learn computers.

Re:Clueless (0)

Anonymous Coward | about a year ago | (#44035767)

I remember when a kiloFLOP/s was an important unit, so I probably learned computers before you were even born.

Re:Clueless (2, Funny)

Anonymous Coward | about a year ago | (#44033903)

Wait. I thought PETAFlops was a measure of how many times PETA have launched an idiotic campaign. As in, "I read that PETA is campaigning that we should call fishes 'sea kittens'. That's 7 PETAFlops so far this year".

Re:Clueless (5, Informative)

Entropius (188861) | about a year ago | (#44033971)

As a computational physicist:

"flop" is sometimes used to mean "floating point operation", when you're talking about the compute cost of an algorithm. For instance:

"The Wilson dslash operation requires 1,320 flops per site" or "The comm/compute balance of this operation is 3.2 bytes per flop".

So saying "ten flops per second" is fine -- "flops" is the plural of "flop".

Yes, "flops" is also acronymized as "... per second", and while that's the most common use it's not exclusive.
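The flop-as-unit versus flops-as-rate distinction can be made concrete with a toy calculation (the 1,320-flop dslash cost is from the comment above; the lattice size and the Tianhe-2 Linpack rate are filled in for illustration):

```python
# A "flop" is a unit of work (one floating-point operation);
# FLOPS is a rate (floating-point operations per second).

flops_per_site = 1320            # Wilson dslash cost, per the comment
sites = 32 ** 4                  # hypothetical lattice volume
total_work = flops_per_site * sites      # total flops (plural of flop)

linpack_rate = 33.86e15          # Tianhe-2: 33.86 petaflops (a rate)
seconds = total_work / linpack_rate      # time = work / rate

print(f"{total_work:.3e} flops of work, {seconds:.1e} s at peak rate")
```

Both usages fall out naturally: `total_work` is a count of flops, `linpack_rate` is flops per second, and dividing one by the other gives a time.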

Re:Clueless (0)

Anonymous Coward | about a year ago | (#44033999)

When you say FLOPS it means FLoating-point Operations Per Second. The S is part of the acronym.

Re:Clueless (1)

RicktheBrick (588466) | about a year ago | (#44035557)

How power efficient is this computer? Let's say it can do a gigaflop per watt. 33.86 petaflops is about 34 million gigaflops, so this computer should require about 34 million watts of power. A large nuclear power plant produces over a gigawatt, so one plant could power around 30 of these supercomputers. Even so, I would think that they would do some planning at the power plant when this computer powers up, since it will take a significant fraction of its capacity.
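The back-of-envelope estimate above takes only a few lines (the gigaflop-per-watt efficiency is the comment's assumption, not a measured value, and the gigawatt plant output is a typical round figure):

```python
# Back-of-envelope power estimate; the gigaflop-per-watt efficiency
# is an assumption, not a measured value.
linpack_rate = 33.86e15      # flops: Tianhe-2's Linpack result
flops_per_watt = 1e9         # assumed efficiency: 1 gigaflop per watt
power_watts = linpack_rate / flops_per_watt

plant_watts = 1e9            # a large nuclear plant: roughly a gigawatt
machines_per_plant = plant_watts / power_watts

print(f"{power_watts / 1e6:.1f} MW per machine")
print(f"~{machines_per_plant:.0f} machines per plant")
```

For reference, Tianhe-2's reported draw was in this ballpark (under 20 MW), so the assumed efficiency is a bit pessimistic but the right order of magnitude.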

Re:Clueless (2)

raddan (519638) | about a year ago | (#44035571)

As a computer scientist:

We rarely refer to the cost of an algorithm in terms of flops, since it is bound to change with 1) software implementation details, 2) hardware implementation details, and 3) input data dependencies (for algorithms with dynamical properties). Instead, we describe algorithms in "Big O" notation, which is a convention for describing the theoretical worst-case performance of an algorithm in terms of n, the size of the input. Constant factors are ignored. This theoretical performance figure allows apples-to-apples comparisons between algorithms. Of course, in practice, constant factors need to be considered for many specific scenarios.

"flops" are more commonly used when talking about machine performance, and that's why they're expressed as a rate. You care about the rate of the machine, since that often directly translates into performance. Computer architects also measure integer operations per second, which is in many ways more important for general-purpose computing. Flops are really only of interest nowadays for people doing scientific computing now that graphics-related floating point things have been offloaded to GPUs.

If you want to be pedantic, computers are, of course, hardware implementations of a Turing machine. But it's silly to talk about them using Big O notation, since the "algorithm" for (sequential) machines is mostly the same regardless of what machine you're talking about. The constant factors here are the most important thing, since these things correspond to gate delay, propagation delay, clock speed, DRAM speed, etc.
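The "constant factors are ignored" point can be illustrated with two invented cost functions: an algorithm with a large constant but a better Big O class eventually beats one with a small constant and a worse class.

```python
import math

# Two invented cost functions for the same problem:
# A: 100 * n * log2(n) operations (big constant, better Big O class)
# B: 2 * n**2 operations          (small constant, worse Big O class)
def cost_a(n):
    return 100 * n * math.log2(n)

def cost_b(n):
    return 2 * n ** 2

# Constant factors decide small inputs; the Big O class decides large ones.
print(cost_a(32) > cost_b(32))        # True: B is cheaper at n = 32
print(cost_a(10**6) < cost_b(10**6))  # True: A is cheaper at n = 10^6
```

This is exactly why Big O is used for apples-to-apples algorithm comparisons while constant factors still matter for any fixed, practical input size.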

Re:Clueless (0)

Anonymous Coward | about a year ago | (#44035745)

We rarely refer to the cost of an algorithm in terms of flops

Yes, the costs of an algorithm are done in terms of scaling parameters, e.g. big O notation. The costs of an implementation however are expressed in operations or floating point operations, depending on if there is a floating point bottleneck. Regardless of how much it is machine dependent, there are some of us that need to optimize parts of tight loops for large scale computation, and it pretty much comes down to either sequencing of memory accesses or the number of operations.

Re:Clueless (1)

gmhowell (26755) | about a year ago | (#44035137)

I wonder if this new supercomputer can crack the PIN number I use at the ATM machine.

Re:Clueless (-1)

Anonymous Coward | about a year ago | (#44032617)

The article writer is clueless too. From TFA:

In all, Tianhe-2, which translates as Milky Way-2, operates at 33.86 petaflop per second

First of all, it's PetaFLOPS. It's not a plural, so there is no PetaFLOP. FLOPS = FLoating-point Operations Per Second, so saying "PetaFLOP per second" is saying "Peta-FLoating-point Operations Per per second".

Re:Clueless (1)

Sique (173459) | about a year ago | (#44033905)

There is a PetaFLOP. It's one quadrillion floating point operations. If you calculate the computational cost of an algorithm given some input data, you often arrive at some number like 20 MegaFLOP or 40 PetaFLOP. And if you want to know how much processing time this will take, you need to know how many floating point operations a given system can process per unit time, and you get FLOP per second or FLOP/s. There is nothing wrong with that, even if you like to abbreviate it as FLOPS.

first (-1)

Anonymous Coward | about a year ago | (#44032273)

fipo!

BUT... (0)

Anonymous Coward | about a year ago | (#44032279)

But the story comes from China so you have to imagine at least some of it is B.S. and since it's made in China, it will probably crash at least three times a week.

Re:BUT... (4, Funny)

ArcadeMan (2766669) | about a year ago | (#44032339)

it will probably crash at least three times a week

So we know it runs Microsoft Windows. U.S.A.! U.S.A.!

Re:BUT... (3, Funny)

rullywowr (1831632) | about a year ago | (#44032683)

The second place computer was from Redmond, Washington and almost clinched the title but it had to be connected to the internet at least once every 24 hours, had a camera connected 24/7, and was not able to share its programs with other computers.

Take THAT pee pee in your coke, Americans! (-1)

Anonymous Coward | about a year ago | (#44032295)

Now me put economic dominance down your throat!

Supercomputers are pretty useless (-1, Troll)

gweihir (88907) | about a year ago | (#44032329)

So this is just a "mine is bigger" with any real-world impact. True, there are some things supercomputers can do well, but the same effect can be reached with distributed computing, which, in addition, makes the individual CPUs useful for a range of other things. Basically, building supercomputers is pretty stupid and a waste of money, time and effort.

Re:Supercomputers are pretty useless (1)

Redeye Carci (2932323) | about a year ago | (#44032461)

Our aircraft carriers are longer than yours! On a more serious note, the largest calculation that I can find (in terms of # of cores utilized) was a fluid dynamics calculation with a million cores on Sequoia. From my own experience we usually utilize 4-100 cores, favoring throughput over the speed of a single job - if it takes a month to do then so be it.

Re:Supercomputers are pretty useless (4, Interesting)

Anonymous Coward | about a year ago | (#44032469)

Your information is out of date. Most supercomputers in the last decade have been distributed memory machines, so 'distributed computing' is what this is already. Also, as someone that's using a machine somewhat further down the list (in the 30s), if you have a big supercomputer that you feel is a waste, can you give me an account? Because my job (in fluid dynamics simulations) is basically dependent on their existence, and I've got applications for the biggest machine I can get my hands on.

Re:Supercomputers are pretty useless (1)

Redeye Carci (2932323) | about a year ago | (#44032489)

Out of curiosity how many cores do you use on a typical job?

Re:Supercomputers are pretty useless (1)

Anonymous Coward | about a year ago | (#44032769)

Fluid dynamics is one of those "as many as you'll give me" kinds of problems. So if he's currently on a machine around #30 on the list, it'll be a number near 80,000.

Re:Supercomputers are pretty useless (1)

manicb (1633645) | about a year ago | (#44032999)

Unfortunately the commercial fluid dynamics codes often have quite restrictive licenses where you pay for a certain number of cores. I've seen academic HPC queues full of 8-core jobs with hundreds of cores available, because that was all they could justify a license for. It's an absurdly artificial restriction (a bit like limiting the number of tracks in cut-down music software), but Ansys are fairly unrepentant at the moment.

Re:Supercomputers are pretty useless (0)

Anonymous Coward | about a year ago | (#44033693)

There are packages such as OpenFoam - http://www.openfoam.org/docs/user/damBreak.php#x7-610002.3.11 - which are free. I haven't used it, but one should be able to distribute/map jobs across a large grid and collate/reduce their results together. If done correctly, a large enough supercomputer can definitely yield perfect results. The point with these kinds of systems is that the tail end is larger - diminishing returns the more computers you put in.

Re:Supercomputers are pretty useless (3, Interesting)

clong83 (1468431) | about a year ago | (#44035717)

Not the poster upthread, but as someone else who runs fluids codes on big machines, I will chime in:
A lot of the guys on the big NICS machines aren't using ANSYS. They're using their own research codes that are tailored for parallel performance and/or to solve specific and difficult problems that commercial codes don't do well, like fluid-structure interaction. I know there are guys that depend on licensing somehow or another and this is artificially limiting. But I never really understood it. If all you want is a basic, parallel fluids solver, there are some open-source options. Probably won't scale well, but it sure beats spending half your lab budget to get only 8 processors.

Even if you have your own in-house solver, you will of course run into problems with latency as you scale up. I usually run on around 100-200 processors, depending on the problem. I would love to use more, but the communication costs start to take over. Some guys can run on 10,000-100,000 processors. Not sure what they are doing, but I'm guessing whatever they are computing requires very little communication between nodes, or has been optimized to an extreme degree. Hard to imagine those guys are running a normal fluids solver with an unstructured grid. That'd be a huge waste.

And I agree with whoever said that if someone knows of a big wasted supercomputer with idle time on it, please advertise it here! All the ones I've ever seen are more-or-less utilized to their full extent.
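The communication-costs-take-over effect can be sketched with a toy strong-scaling model (both constants below are invented, purely for illustration):

```python
import math

# Toy strong-scaling model: per-step wall time on P processors is the
# serial compute time divided by P, plus a synchronization cost that
# grows with P. Both constants are invented.
COMPUTE = 1.0      # serial compute time per step (arbitrary units)
LATENCY = 1e-4     # assumed cost per synchronization stage

def step_time(p):
    return COMPUTE / p + LATENCY * math.log2(p)

# Speedup climbs, peaks, then falls once communication dominates.
for p in (100, 1000, 10_000, 100_000):
    print(p, round(step_time(1) / step_time(p)))
```

With these numbers the speedup peaks somewhere in the tens of thousands of processors and then declines, which is the same qualitative behavior described above: past some point, adding processors makes each step slower.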

Re:Supercomputers are pretty useless (1)

Anonymous Coward | about a year ago | (#44032789)

Other AC from the fluid dynamics field, but in academia I typically see up to 32 nodes for 'normal' PhD's working on small local clusters, and up to around 512, 1024 for groups that do more fundamental research. The cluster is then usually in some centralized location and you have to get funding to use it. In Industry, I see most (R&D) groups working with small clusters of up to 32-64 nodes.

Re:Supercomputers are pretty useless (0)

Anonymous Coward | about a year ago | (#44036073)

Yes, the one who posted about supercomputers being useless, is clueless. I have a piddling 4 core processor with two threads per core, so 8 threads of execution. There are hundreds of jobs I can toss at it that will leave me wishing I had something faster. I have software that will easily utilize 64 cores (and it probes my cpu and re-adjusts to 8 threads). That same software can be used to run simulations, including CFD using Navier-Stokes algorithms. Time. It is able to do it all in time. But not real time! Give me a supercomputer, or at least 100 more cores, and then maybe.

Re:Supercomputers are pretty useless (5, Interesting)

Nite_Hawk (1304) | about a year ago | (#44032525)

I'll bite. You seem to think that distributed computing, however you are defining that, is a better solution. I am going to assume your primary objection then is using infiniband (or some other low latency interconnect such as Numalink or Gemini). What then, would you propose to do with the class of problems that rely on extremely low latency transmission of data between nodes?

Re:Supercomputers are pretty useless (2, Funny)

Anonymous Coward | about a year ago | (#44032937)

You seem to think

Aha, I've identified the error in your logic!

Re:Supercomputers are pretty useless (0)

Anonymous Coward | about a year ago | (#44032537)

The only thing supercomputers save on, that regular distributed computing may not, is the cost of expensive network switches. Logically, supercomputers are inherently distributed in a torus configuration. Nobody uses a full-fledged shared RAM model; instead they rely on talking to spatially nearby nodes (as connected). Please read before you make these useless comments yourself.

Re:Supercomputers are pretty useless (2)

K. S. Kyosuke (729550) | about a year ago | (#44032697)

Logically, supercomputers are inherently distributed in a torus configuration.

Why a torus? Why not a hypercube or a fat tree?

Re:Supercomputers are pretty useless (2, Funny)

Anonymous Coward | about a year ago | (#44033217)

You get stuck in a five hour meeting and see if you can visualize anything other than a doughnut afterward.

Re:Supercomputers are pretty useless (4, Informative)

Cassini2 (956052) | about a year ago | (#44033253)

If you use a hyper-cube, then the processors on the outside edges have no one to talk to. For a single dimension example, imagine a series of processors where every processor in a line has two communication links, one to talk to its neighbour on the left, and one to talk to its neighbour on the right. This is great for all the processors in the middle of the arrangement. However, in a one-dimensional straight-line arrangement, the processors on the end are either missing a left (or a right) neighbour. The solution to this problem is to connect the processors on the ends to each other, making the line a circle or ring.

A one-dimensional hypercube is a line. In supercomputing, it is often desirable to avoid any topology where there is a flat (non-connected) surface on the side of the cube. Connecting the opposite edges of the cube to each other results in the torus topology in higher dimensions, and the ring topology in 1-D. For a picture of this effect, see the torus interconnect article on wikipedia [wikipedia.org].

While it is theoretically preferable to have really high-order interconnects, in practice wiring considerations limit the maximum number of interconnects. As such, most practical torus architectures are limited in the number of neighbours they can support.

FYI: The tree architecture is avoided in supercomputing for a different reason. Typically, each node has the fastest interconnect that can be provided, as interconnect speed affects system speed for many algorithms. Imagine if each leaf at the bottom of the tree needs 1X bandwidth. Then the parent node one level up needs 2X bandwidth. The next parent node up requires 4X bandwidth, and so on. With tens of thousands of nodes in the supercomputer, it quickly becomes impossible to fabricate interconnects fast enough for the parent nodes of the tree.

A practical application of the tree problem occurs on small Ethernet clusters. It is easy to make a 16-node 10Gb Ethernet cluster, because standard switches are readily available. As the system approaches hundreds of nodes, it becomes difficult to find fast enough switches. Even if the data communication speed to each node is reduced to 1Gb, for sufficiently large numbers of nodes, the backplane switches will be overwhelmed.
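The wraparound idea above can be sketched directly; the neighbor function below is a hypothetical illustration (node coordinates and torus dimensions are arbitrary):

```python
# Neighbors of a node in a d-dimensional torus: one hop along each axis
# in each direction, with wraparound so edge nodes keep a full set of
# links instead of dead-ending at a flat surface.
def torus_neighbors(coord, dims):
    out = []
    for axis, size in enumerate(dims):
        for step in (-1, 1):
            n = list(coord)
            n[axis] = (n[axis] + step) % size  # wraparound closes the ring
            out.append(tuple(n))
    return out

# A 1-D torus of 4 nodes is a ring: node 0 links to nodes 3 and 1.
print(torus_neighbors((0,), (4,)))                  # [(3,), (1,)]
# In a 3-D torus even a "corner" node has the full 6 neighbors.
print(len(torus_neighbors((0, 0, 0), (4, 4, 4))))   # 6
```

The modulo is the whole trick: it is exactly the "connect the processors on the ends to each other" step described above, generalized to any number of dimensions.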

Re:Supercomputers are pretty useless (0)

Anonymous Coward | about a year ago | (#44033455)

You're referring to a "mesh" network. (Using the standard supercomputing lingo.) Typically, a "hypercube" refers to a network with 2^n nodes for some n (the dimension of the hypercube). There are no edges in the hypercube because the topology is vertex (and edge) symmetric.

The tree problem you're talking about has been solved ... enter "fat trees".

Re:Supercomputers are pretty useless (1)

Anonymous Coward | about a year ago | (#44033637)

A mesh requires more dense interconnects. A torus does not, and is meant to connect nodes spatially (data wise). Fat trees (or in general trees) have more hops to go when tackling spatial data. On the other hand, you can certainly assign a hierarchy on a Torus interconnect. :-) That's the reason people stick to Torus when building supercomputers. The fancier recent ones I've seen have 5 dimensional torus interconnects.

Re:Supercomputers are pretty useless (1, Insightful)

the gnat (153162) | about a year ago | (#44032573)

there are some things supercomputers can do well, but the same effect can be reached with distributed computing, which, in addition, makes the individual CPUs useful for a range of other things. Basically, building supercomputers is pretty stupid and a waste of money, time and effort.

That's a bit of an overstatement. There are plenty of simulations that really do benefit from a monolithic supercomputer rather than a distributed system, such as protein dynamics, global climate, etc. And the level of detail which can be attained (without approximations which diminish accuracy) increases with the size of computer.

I do think however that it's reasonable to question what the real-world impact of such systems is, and whether there are better approaches. My field is life sciences, where the applications are indeed limited. In the molecular dynamics field, for instance, specialized hardware [wikipedia.org] is potentially superior for both performance and efficiency (although this has some tradeoffs too). For genomics a supercomputer is completely unnecessary, and cloud computing is quite adequate. Ditto for most other analyses of experimental data, protein design, and so on.

Furthermore, the economic impact of supercomputer simulations tends to be greatly overstated. A common example is studies of drug binding to proteins - supercomputer centers love to put out press releases about how "new simulations tell us how to cure cancer/AIDS/Alzheimer's". But anyone familiar with pharmaceutical development will tell you that lack of supercomputers is by far the least of the problems faced by the field. Simulations aren't a magical substitute for actual benchwork, unfortunately - and clinical studies are vastly more expensive than supercomputers.

The main reason why having the biggest supercomputer is a status symbol is that it's traditionally tied to nuclear weapons research, and therefore the importance to the country in general is inflated by the politicians, the media, and of course the people who build and use supercomputers. A secondary reason is that it indicates the overall level of technical competence of a country, although as noted China is still using Intel CPUs. (This is not a trend specific to supercomputing; the Beijing Genomics Institute famously uses equipment entirely designed and built in the US and UK for sequencing.)

Re:Supercomputers are pretty useless (4, Informative)

Ambitwistor (1041236) | about a year ago | (#44032579)

True, there are some things supercomputers can do well, but the same effect can be reached with distributed computing, which, in addition, makes the individual CPUs useful for a range of other things. Basically, building supercomputers is pretty stupid and a waste of money, time and effort.

People don't build supercomputers for no reason, especially when HPC eats up a large part of their budget.

The main application of supercomputers is numerically solving partial differential equations on large meshes. If you try that with a distributed setup, the latency will kill you: the processors have to talk constantly to exchange information across the domain.

As someone pointed out, modern supercomputers are like distributed computing, often with commodity processors. They look like (and are) giant racks of processors. But they have very fast, low-latency interconnects.
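A toy surface-to-volume model shows why latency bites on mesh PDE solves: each processor's compute work scales with its block's volume, while the data it must exchange scales with the block's surface. The cubic-block decomposition below is an illustrative assumption.

```python
# Halo-exchange toy model for a cubic subdomain of n points per side:
# compute scales with the block's volume, communication with its surface.
def comm_to_compute(n):
    volume = n ** 3          # interior points updated each step
    surface = 6 * n ** 2     # halo points exchanged each step
    return surface / volume  # shrinks as blocks get bigger

print(comm_to_compute(10))    # small blocks: communication-heavy
print(comm_to_compute(100))   # large blocks: compute-dominated
```

Splitting a fixed mesh across more processors shrinks each block, pushing the ratio up, which is why fast low-latency interconnects matter more the further you scale.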

Re:Supercomputers are pretty useless (1)

manicb (1633645) | about a year ago | (#44032921)

This mostly agrees with my experience. Here's some data: This [ed.ac.uk] is a breakdown of the codes used on HECToR, the main UK academic cluster. It is dominated by chemistry; generally in chemistry the main computational challenge is in performing very large matrix diagonalisations to solve approximations of quantum mechanical systems. Clearly generous allocation and effective sharing of memory is critical for this kind of task.

Who cares who is first place (0)

Anonymous Coward | about a year ago | (#44032341)

It's dick-waving, nothing more.

http://i.imgur.com/XH0CoHU.gif [imgur.com]

Re:Who cares who is first place (2, Insightful)

Anonymous Coward | about a year ago | (#44032565)

That is, hands down, the best thing I've seen all day.

Re:Who cares who is first place (2)

ranton (36917) | about a year ago | (#44032625)

While I completely agree that being in 1st place doesn't mean much, taking a look at the entire top 500 does give a good measure of which countries are spending the most on R&D. I do think it is a little shameful that a country with half of our GDP has the fastest supercomputer, but it is still commendable that the USA has about half of the top 500 supercomputers with only 20% of the world's GDP.

Re:Who cares who is first place (0)

Anonymous Coward | about a year ago | (#44034461)

That's the funny thing about a command economy. The government can just come along and say "build the biggest supercomputer on earth" and it happens.

Re:Who cares who is first place (0)

Anonymous Coward | about a year ago | (#44032705)

That is the funniest animated gif I have seen in YEARS. No joke!

If only I had modpoints Mr. AC.

We must close the supercomputer gap! (5, Funny)

Anonymous Coward | about a year ago | (#44032377)

Quickly before they sap and impurify all of our precious bodily fluids!

Re:We must close the supercomputer gap! (0)

Anonymous Coward | about a year ago | (#44032557)

Before they simulate sapping all of our precious bodily fluids.

Re:We must close the supercomputer gap! (0)

Anonymous Coward | about a year ago | (#44032847)

The US is full of idiots n dope smokers.

Re:We must close the supercomputer gap! (1)

AmiMoJo (196126) | about a year ago | (#44033523)

What does it feel like when you look over to the next stall and realise the guy's dick is a few millimetres longer than yours?

Re:We must close the supercomputer gap! (0)

Anonymous Coward | about a year ago | (#44033733)

Any true American would chastise himself for using SI units instead of Freedom Units.

Re:We must close the supercomputer gap! (0)

Anonymous Coward | about a year ago | (#44033537)

In the years after WW2, the West began to believe the Soviet lies about their economic performance. The result? The disastrous attempts to clone this in the form of bureaucratic socialism. This was the era of the quote you gave, a quote implying that those opposing an utterly false system were crackpots. Senators with knowledge (from the then-classified, now declassified and public, FBI files) of the extent of Soviet infiltration attempted to prove it with an entirely clean (unclassified) set of evidence, and had their names successfully smeared by these same infiltrators, such that their names go down in history as shorthand for evil acts.

What will be the result if we start believing that China truly has the answers? You can see the wheels turning already. From de-facto-nationalization of entire sectors across the western world, to increased calls for state control of the internet.

Two years ahead of schedule? (1)

Anonymous Coward | about a year ago | (#44032383)

Someone bumped up their schedule to put some pressure on the US. These machines, in the US and China and other nations, typically perform one news-worthy article of empathy-worthy "Science!" like modeling the beating of a human heart (awww) or predicting climate change, then spend the rest of their lives breaking codes for the national spy agencies. Several of the top computers, like Kraken, Jaguar, and Titan, were/are NSA cryptography machines.

There's never enough computing power for the amount of encrypted data that any first-world spy agency collects. Rumor had it a few years back that the NSA even had a deal with Pixar to use their rendering farm when not actively engaged in movie-making.

Re:Two years ahead of schedule? (2, Informative)

Anonymous Coward | about a year ago | (#44033091)

rest of their lives breaking codes for the national spy agencies. Several of the top computers, like Kraken, Jaguar, and Titan, were/are NSA cryptography machines.

The NSA has their own computers, why would they need to use the rather publicly known ones, and compete with other users for time? Do you assume those computers only do one piece of science because you only read about it in the news/PR, or did you actually bother to look at the research papers and groups using these computers on a daily basis? I know people on research groups that use those computers. What they have to sometimes compete with is not the NSA, but nuclear stewardship programs. Other than that, it is other sciences groups getting time and/or slices of the machine.

hack the planet (0)

zeroryoko1974 (2634611) | about a year ago | (#44032385)

Supercomputer so they can hack all computers on the planet at one time. The design was probably stolen too.


not faster than (0)

Anonymous Coward | about a year ago | (#44032505)

My Samsung Chromebook. I can have Twitter and Facebook open all at once.

Cluttered mess (1)

virgnarus (1949790) | about a year ago | (#44032521)

I love being pounded by not one, but two autoplaying video streams that evade my Adblock Plus. Doesn't help that the rest of the site is a nightmare to look at. At least present us with a site with far fewer elements to deal with.

Re:Cluttered mess (0)

Anonymous Coward | about a year ago | (#44032857)

I love being pounded by not one, but two

Not that there's anything wrong with that...

Re:Cluttered mess (0)

Anonymous Coward | about a year ago | (#44033805)

What filter list are you using? I'm using the "Fanboy+Easylist-Merged Ultimate List" and didn't see any ads on either of the pages linked in the summary.

OF COURSE IT IS LOADED WITH WAREZ !! (0)

Anonymous Coward | about a year ago | (#44032543)

Because that is what Chinese do !!

It runs benchmarks real fast (1)

stox (131684) | about a year ago | (#44032549)

Have the Chinese done anything of interest with their supercomputers yet?

Re:It runs benchmarks real fast (1)

the gnat (153162) | about a year ago | (#44032647)

Have the Chinese done anything of interest with their supercomputers yet?

Not in the area of biology/biochemistry, as far as I know. Basically all of the high-performance codes used for that purpose are written in the usual handful of countries (US/EU/Japan) and/or work just as well on distributed systems, and all of the really cutting-edge work I've seen has been done in the same countries. The big advantage that the Chinese have is cheaper labor (although getting steadily less so) and large amounts of money to throw around (without any accountability), but I haven't seen any results that couldn't have been obtained just as easily by Western nations. (Whereas I've seen many cases where the reverse is true, because the Western world still has technology and expertise in many fields far beyond anything in China.)

Re:It runs benchmarks real fast (1)

unixisc (2429386) | about a year ago | (#44032801)

Probably for simulating nukes and other such military applications, so that after their economic dominance of the world is complete, they can militarily start to do what Japan did in the 1930s & 40s. No better time than now, when their cash is at a peak.

Re:It runs benchmarks real fast (1)

jeffmeden (135043) | about a year ago | (#44032941)

"Of interest"??? How about boosting the stock price of Intel significantly...

"Tianhe-2 (also known as the Milky Way-2) consists of 16 000 nodes. Inside each node, two Intel Xeon IvyBridge processors and three Xeon Phi processors run the show, adding up to a total of 3.12 million computing cores."

As many cores as 800,000 desktops (a rough comparison but eh) should keep them happy considering everyone is buying (non-Intel) tablets these days.
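The 3.12 million figure checks out arithmetically. A quick back-of-envelope sketch, assuming (these per-chip core counts are not stated in the comment above) 12-core Ivy Bridge Xeons and 57-core Xeon Phi accelerators:

```python
# Hypothetical sanity check on the quoted core count for Tianhe-2.
# Assumed parts per node (not given in the quote above):
#   2x Intel Xeon "IvyBridge" CPUs at 12 cores each
#   3x Intel Xeon Phi accelerators at 57 cores each
nodes = 16_000
cpu_cores_per_node = 2 * 12    # 24 CPU cores
phi_cores_per_node = 3 * 57    # 171 accelerator cores
total_cores = nodes * (cpu_cores_per_node + phi_cores_per_node)
print(total_cores)             # 3,120,000 -- matches the quoted 3.12 million
```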

Overwhelmingly Linux (95%) (4, Informative)

PastTense (150947) | about a year ago | (#44032575)

It's interesting to browse this website:
http://www.top500.org/ [top500.org]
And look at the Statistics section, such as Operating System Family
http://www.top500.org/statistics/list/ [top500.org]
Operating System Family | Count | System Share (%) | Rmax (GFlops) | Rpeak (GFlops) | Cores
Linux 476 95.2 217,913,963 318,748,391 18,700,112
Unix 16 3.2 3,949,373 4,923,380 181,120
Mixed 4 0.8 1,184,521 1,420,492 417,792
Windows 3 0.6 465,600 628,129 46,092
BSD Based 1 0.2 122,400 131,072 1,280
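The share column is just each OS family's count over the 500 listed systems; a quick check of the figures above:

```python
# Recompute the "System Share (%)" column from the counts in the table.
counts = {"Linux": 476, "Unix": 16, "Mixed": 4, "Windows": 3, "BSD Based": 1}
assert sum(counts.values()) == 500          # the list covers exactly 500 systems
for name, n in counts.items():
    print(name, round(n / 500 * 100, 1))    # Linux -> 95.2, Unix -> 3.2, ...
```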

Re:Overwhelmingly Linux (95%) (0)

Anonymous Coward | about a year ago | (#44032951)

Well, it's not too surprising. Linux started beating Unix in 2003ish. No other OS was ever in the running for supercomputing.

Re:Overwhelmingly Linux (95%) (0)

Anonymous Coward | about a year ago | (#44033471)

It's interesting to browse this website:
http://www.top500.org/ [top500.org]
And look at the Statistics section, such as Operating System Family
http://www.top500.org/statistics/list/ [top500.org]
Operating System Family | Count | System Share (%) | Rmax (GFlops) | Rpeak (GFlops) | Cores
Linux 476 95.2 217,913,963 318,748,391 18,700,112
Unix 16 3.2 3,949,373 4,923,380 181,120
Mixed 4 0.8 1,184,521 1,420,492 417,792
Windows 3 0.6 465,600 628,129 46,092
BSD Based 1 0.2 122,400 131,072 1,280

Linux has owned the big-machine (Server, HPC) and small-machine (embedded, phone) markets forever. It's the desktop where Linux can't get traction.

Grand Slam (1)

Camael (1048726) | about a year ago | (#44036397)

Also from the list [wikipedia.org].

All of the top 10 supercomputers are running Linux. Overwhelming dominance indeed.

mod doWn (-1)

Anonymous Coward | about a year ago | (#44032597)

On a0n en3eavour

fdasfdsa (0)

Anonymous Coward | about a year ago | (#44032615)

firstfirst frist

Coincidence? (3, Informative)

Steve_Ussler (2941703) | about a year ago | (#44032667)

China Bumps US Out of First Place For Fastest Supercomptuer. Posted samzenpus on Monday June 17, 2013 @02:42PM 20 minutes after: Book Review: The Chinese Information War Posted by samzenpus on Monday June 17, 2013 @02:22PM You do the math....

Re:Coincidence? (0)

Anonymous Coward | about a year ago | (#44035667)

02:22 to 02:42 is indeed 20 minutes. I can do math. You can do math. What are we supposed to conclude?

This word you keep using, it does not mean what... (3, Insightful)

jeffmeden (135043) | about a year ago | (#44032721)

"China Bumps US Out of First Place For Fastest Supercomptuer"

Fastest supercomputer, that 1) Runs Linpack and 2) is publicly-acknowledged. There are plenty of similar supercomputers that don't meet one or both of those criteria, and are therefore omitted. The Top500 is FAR from a comprehensive list of supercomputers, but twice a year we see a flurry of stories presuming that it is.

Re:This word you keep using, it does not mean what (0)

Anonymous Coward | about a year ago | (#44033147)

USA ! , USA! - number 2 - sore loser

Re:This word you keep using, it does not mean what (1)

Anonymous Coward | about a year ago | (#44033151)

Yeah, the NSA surely has a better one. Which starts to analyse this post in just ten seconds.

Re:This word you keep using, it does not mean what (0)

Anonymous Coward | about a year ago | (#44033207)

The Top500 is FAR from a comprehensive list of supercomputers, but twice a year we see a flurry of stories presuming that it is.

Can you cite a better list?

Re:This word you keep using, it does not mean what (2)

MiniMike (234881) | about a year ago | (#44033485)

Here is a list of the top 5 supercomputers run by the NSA (partially redacted):
1- XXXXX_XXXXXXX_XXXXXX_XXXX
2- XXXXXXXXXXXXXinator
3- XXXXXXXXOfTheXXXXX
4- PinkiePie15
5- XXX_XXXXXX_XXXXXXX

Is that better?

Re:This word you keep using, it does not mean what (1)

jeffmeden (135043) | about a year ago | (#44033663)

The Top500 is FAR from a comprehensive list of supercomputers, but twice a year we see a flurry of stories presuming that it is.

Can you cite a better list?

We could just list off sites drawing the most power, and probably stand a better chance at pegging most of the private/secret data centers used for supercomputing. The very nature of what they are doing really defies attempts to list, because the power of a system that big exists in more dimensions than the *FLOPS that the almighty Linpack measures. I have nothing against the orgs on the Top500 list or even Top500 itself, but to anyone interested in such things, Top500 is *not* all-encompassing and you would do well to understand what else is out there (on a project by project basis).

Re:This word you keep using, it does not mean what (1)

drinkypoo (153816) | about a year ago | (#44033973)

Can you cite a better list?

We could just list off sites drawing the most power, and probably stand a better chance at pegging most of the private/secret data centers used for supercomputing.

So your short answer is no then?

Top500 is *not* all-encompassing and you would do well to understand what else is out there (on a project by project basis).

So, can you cite a better list?

Re:This word you keep using, it does not mean what (0)

Anonymous Coward | about a year ago | (#44033225)

Get over it. American companies use this all the time. Stop crying about it and build something better. Oh wait, you're a little dweeb, you don't actually build anything yourself and get your kicks out of thinking you're part of other peoples' efforts. Loooooser, loooooser!

33.86 petaflops? Impressive! (2, Funny)

Anonymous Coward | about a year ago | (#44032839)

That's almost enough to run Vista

AMD (0)

Tim12s (209786) | about a year ago | (#44032845)

I suppose that once AMD finally complete their converged CPU/GPU/Memory strategy we'll see a couple of these pop up with devastating effect.

After PS4XBONE they need to go through another 2 generations before it could be considered mature and they will have the ecosystem to add incremental enhancements to the platform that would take others ages. The slowest part of the whole equation will be Windows.

Amazing (1)

Reliable Windmill (2932227) | about a year ago | (#44032979)

That is amazing. And 2 years ahead of schedule! The Chinese are at the absolute forefront of technological innovation this decade.

Re:Amazing (0)

Anonymous Coward | about a year ago | (#44033229)

And all powered by Intel processing...which as far as I know were primarily engineered and manufactured outside of China.

Of course the way things are going in 2-5 years version 3 might be built on their own MIPS processors and still top everything out there, at that point I'd say they would be at the absolute forefront.

spell (0)

Anonymous Coward | about a year ago | (#44033249)

you might want to change the spelling of "computer" if we will ever get it right!!!

Spelling Nazi (1)

AnalogDiehard (199128) | about a year ago | (#44033601)

China Bumps US Out of First Place For Fastest Supercomptuer

Well, China would have it easy when the article submitter misspells COMPUTER...

Re:Spelling Nazi (1)

NoMaster (142776) | about a year ago | (#44034315)

But that's how it was spelled on the front of the "Instruction Manuel"...

Can it be used to break publicly used cryptography (1)

Rubinhood (977039) | about a year ago | (#44033845)

Like the subject says - is this something the Chinese government might be able to use to break TOR or SSL or any other encryption which is commonly used by political dissidents, freedom fighters, or even foreign military contractors etc.?

I'm curious e.g. how long it would take to break a standard 128-bit SSL session that they find potentially interesting?

Re:Can it be used to break publicly used cryptogra (0)

Anonymous Coward | about a year ago | (#44034815)

It would take forever; no supercomputer can brute-force modern cryptography. Quantum computing will stand the best chance, not because it's "fast" but because it can attack the math differently.

Right! (0)

Anonymous Coward | about a year ago | (#44035747)

"Developed by China's National University of Defense Technology, the system appeared two years ahead of schedule and will be deployed at the National Supercomputer Center in Guangzho, China, before the end of the year."

In a sucker's words, more like. In reality, it's probably launching DDOS on American interests right now:

more here [slashdot.org]

"It's a dense, well-researched overview of China's cold-war like cyberwar tactics against the US to regain its past historical glory and world dominance."

& why they need to be f*&%ed in the ass yesterday if they keep it up.

Misspelled city name (1)

beefsack (1172479) | about a year ago | (#44036111)

It's spelled Guangzhou, and Tianhe also happens to be the name of one of the central districts in the city, though I'm not sure if the computer is actually located in that district.