Intel Announces Xeon E5 and Knights Corner HPC Chip
MojoKid writes "At the supercomputing conference SC2011 yesterday, Intel announced its new Xeon E5 processors and demoed their new Knights Corner Many Integrated Core (MIC) solution. The new Xeons won't be broadly available until the first half of 2012, but Intel has been shipping the new chips to a small number of cloud and HPC customers since September. The new E5 family is based on the same core as the Core i7-3960X Intel launched Monday. The E5, while important to Intel's overall server lineup, isn't as interesting as the public debut of Knights Corner. Recall that Intel's canceled GPU (codenamed Larrabee) found new life as the prototype device for future HPC accelerators and complementary products. According to Intel, Knights Corner packs 50 x86 processor cores into a single die built on 22nm technology. The chip is capable of delivering up to 1TFlop of sustained performance in double-precision floating point code and operates at 1–1.2GHz. NVIDIA's current high-end M2090 Tesla GPU, in contrast, is capable of just 665 DP GFlops."
Huh? (Score:2)
Re: (Score:1)
Summary: Faster chips out. You can't get them. Also a 50 core chip was released.
Re: (Score:3)
Re: (Score:2)
You mean the fundamentals of parallel processor programming. It's not exactly a widely held skill yet.
Re: (Score:2)
Not exactly. You haven't needed to treat your data as though it was a texture on GPUs for a couple of generations now, and getting decent performance out of Knights Corner means writing incredibly-wide SIMD code very similar to what GPUs use - except that GPUs have some decent tools for making the porting process easier, and I'm not sure if Knights Corner does. (For example, if you have a loop over a bunch of elements that does the same operations on each but does a different number of passes for different e
Re: (Score:2)
Re: (Score:2)
Re: (Score:1)
Just to clarify, the 50-core beast hasn't been released as yet.
They gave a demo of what is most likely a prototype chip.
Re: (Score:2)
If you made your question clearer, you'd probably get a more useful answer.
Re: (Score:2)
An interjection followed by two statements does not a question make. ;-)
Re: (Score:1)
huh?
Re: (Score:2)
Huh?
A question asked directly does, though.
Re: (Score:2)
Well, to continue the pedantry ... in and of itself, "Huh?" is merely an interjection [englishclub.com].
Re: (Score:2)
http://www.google.com/search?q=define+huh%3F&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a [google.com]
I think the use of a ? pretty clearly moves it from mere interjection to inquiry / expression of confusion. If he had titled his post Huh. instead of Huh? I'd agree with you.
Little Intel has growed up (Score:1)
When they said nobody needed multicore processors, I heard the echoes of "640K should be enough for anyone" and "There is no reason for any individual to have a computer in his home." Now they're trying to see how many they can jam on one die. 50 is a pretty odd number, though. You usually see things in powers of 2 (2, 4, 8, 16). Perhaps they needed space on the die for Mickey or an etched portrait of Jobs.
Re: (Score:1)
When they said nobody needed multicore processors
[citation needed]
Re: (Score:2)
Re: (Score:1)
Re:Little Intel has growed up (Score:5, Informative)
Because computers count in binary, which is powers of two. And, I'll assume you meant cores.
Historically such things have been powers of two to make the addressing simpler, without extra magic or control lines left over. So 1, 2, 4, 8, 16, 32 and 64 all make sense in terms of being expressible in a fixed number of bits; 50 seems like a fairly arbitrary choice to some of us. Since 50 requires an unusual wiring arrangement anyway, it might as well be 37 or 51, because it's not a number that 'naturally' lends itself to computers. The device is likely wired in such a way that it could count to 64 ... or they're doing things in a slightly odd way.
Anyway, that's why some of us find it to be a little odd. It's also why the hard-drive makers deciding "1 GIG" is "1,000,000,000 bytes" is irksome ... with all of those extra powers of two, it should be "1,073,741,824 bytes". Which means you lose about 74MB per "GIG" ... so my 2TB drive isn't.
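The decimal-vs-binary gap described above is easy to check (a minimal Python sketch, not tied to any particular drive):

```python
# Sketch of the decimal (SI) vs binary (IEC) size-prefix gap.
GB = 10**9    # drive-maker "gigabyte"
GiB = 2**30   # binary gigabyte: 1,073,741,824 bytes

print(GiB - GB)                 # 73741824 bytes lost per "GIG"
drive = 2 * 10**12              # a "2TB" drive as sold
print(round(drive / 2**40, 2))  # 1.82 -- TiB as the OS reports it
```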
Re: (Score:1)
Indeed, cores. And I still don't see any reason, and AMD has 3 core processors. I can have 3G of memory. I can have 9G of memory. Binary numbers are not pervasive by mandate in all areas of computing.
Though I do agree base-10 usage for hard drives is ridiculous.
Re: (Score:3)
Well, in fairness, on the memory side, you do that with some combination of memory modules which are addressable by powers of two. (eg. 2GB + 1GB, or 4GB + 4GB + 1GB), each of which is discrete from the others. I don't believe you can buy a 3GB or 9GB memory module.
Nope, absolutely not. Not saying that ... ju
Re: (Score:2)
Certain models of Xeon processor have three memory controllers. Which, when configuring for maximum memory bandwidth, leads to memory being measured in terms of three times powers of two (3 x 2^30.)
Re: (Score:2)
Well, in fairness, on the memory side, you do that with some combination of memory modules which are addressable by powers of two. (eg. 2GB + 1GB, or 4GB + 4GB + 1GB), each of which is discrete from the others. I don't believe you can buy a 3GB or 9GB memory module.
However certain intel processors do use interleaved triple channel memory so there must be a division by 3 going on in the memory addressing system somewhere.
Re: (Score:2)
Or it's 64 cores with an average usable yield of 50 "good" ones.
Re: (Score:2)
With cores it's a little bit different than RAM in that you're physically limited by how many you can squeeze in a certain size.
So the addressing may allow for 16, 32, 64, whatever cores, but physically you may not quite be able to fit 16 in that space, so the design might max out at 14; then, with a few dead cores here and there, they end up selling 8s, 10s and 12s, with the rare perfect 14s going to special customers.
Now with the 50 cores, you might actually have 53 or so WORKING cores
Re: (Score:2)
The SI prefixes are specifically base-10 units, and have been since the 1800s, first with the metric system and later the SI system. The fact that computer scientists and programmers misused the units and disregarded an established standard of communication and data encapsulation, and the fact that people STILL do it, is what's vexing, not the fact that the storage manufacturers have taken the proper approach.
Re:Little Intel has growed up (Score:4, Insightful)
So, are you always an asshole, or just on Slashdot?
Re: (Score:1)
Re: (Score:3)
Well, since I own 3 iPods and an iPad ... you'd think I'd be the one being accused of being an asshole by that logic.
I'm going to go with self-righteous prick who feels entitled to be an ass on the internet because he's got a 5-digit Slashdot ID and therefore considers himself to be l337.
Re: (Score:2)
Let's say you've set aside 6 bits in every data structure that deals with core administration. You can grow to 2^6, or 64 cores without re-architecting your data structures.
As long as we are using binary in computers, making everything 2^N will make the most efficient use of space.
Of course, space isn't always the limiting factor, so sometimes for cost or speed reasons, we see objects that number 2^N-M.
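The bit-budget argument above can be made concrete (a hypothetical Python sketch; the helper names and field widths are invented for illustration):

```python
# Sketch: how many cores fit in an N-bit core-ID field, and what a
# 2^N - M configuration looks like (64 addressable slots, 50 populated).
def max_cores(id_bits: int) -> int:
    """Number of distinct core IDs representable in id_bits bits."""
    return 2 ** id_bits

def id_bits_needed(cores: int) -> int:
    """Smallest ID field that can address `cores` cores."""
    return max(1, (cores - 1).bit_length())

print(max_cores(6))          # 64 slots from a 6-bit field
print(id_bits_needed(50))    # 6 -- 50 cores still need the full 6 bits
print(max_cores(6) - 50)     # 14 addressable-but-unpopulated slots
```

In other words, 50 cores already pay for the 6-bit field; only cost, yield, or heat keeps the other 14 slots empty.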
Re: (Score:1)
Re: (Score:2)
It's not storing 6 bits in a data structure. It's running traces (if that's even what they're called in IC design) throughout the die connecting these things together. At that level adding two extra traces to carry those two bits is an expense you might want to forgo. However once you've got six wires/bits out there, the only reasons I can think of to not use 64 whatevers is the previously mentioned heat management and die yield issues.
Re:Little Intel has growed up (Score:4, Interesting)
I wonder if Intel is taking a page from IBM's playbook.
Upper-end POWER7 CPUs have the ability to have half their cores turned off. The cores that remain on can then use the disabled neighbors' caches and run at a higher clock speed. This switch actually speeds up some tasks that can't be evenly broken up into balanced threads.
I can see Intel doing this where some cores are disabled due to manufacturing defects (which happen to all dies), and having the operable cores use nearby caching which would otherwise go to waste.
Re: (Score:3, Insightful)
Odds are they have it laid out in a 5x10 grid, or a 5x5 grid front/back.
Just because it's a computer doesn't mean it's bound by the power of two. Boards are rectangular. Chips laid out aren't necessarily in binary distribution.
Re: (Score:1)
Re: (Score:1)
Tilera's 100-core processor is built like this. It's a 10x10 grid of cores.
Re: (Score:2)
I'm guessing 5x10; if you look at the die shot of their Intel Core i7-3960X [anandtech.com], the cores are about twice as wide as they are high.
Re: (Score:2)
Re:Little Intel has growed up (Score:4, Informative)
Your average consumer doesn't need 50 cores.
Sure they do. What do you think a GPU is? History has shown over and over that we can never have enough computing power. Now that we're at the physical limits of clock speeds, parallelism is going mainstream.
Re: (Score:1)
Now that we're at the physical limits of clock speeds,
Since when? You can easily overclock most modern chips to 4GHz, and with enough cooling to 5 or 6+ GHz. The Sandy Bridge i7 chips, for example, have been overclocked past 6GHz. So exactly what supposed "physical limit" do you mean?
Re: (Score:3, Interesting)
Below this point you have the problem of energy efficiency, i.e. what's the point of spending more energy on cooling than on actually powering the thing?
Intel's 3D transistors are HUGE because of this; they can push higher clock speeds more easily.
Re: (Score:1)
I agree generally, like AMD's Bulldozer hitting 8GHz on a single core before running into the limits of physics (even with extreme cooling). I'm assuming nobody will ever be able to get more than 1 or 2 cores active (out of 8) while getting to 8GHz on that architecture.
But these days, the chips run in multiple clock domains. I believe the Intel chips are separated into a base clock, L3 clock, core clocks, RAM clock, and bus clocks. The architectures are moving ever toward asynchronous operation in order to pa
Re: (Score:1)
Overclockers are up to 8.4 GHz now, with AMD chips.
Re: (Score:2)
Amazing what a liquid nitrogen jacket with a liquid helium center can do when overclocking.
Re: (Score:2)
Re: (Score:1)
Since when?
Since the point where we could no longer handle the power and heat dissipation requirements economically. Engineering is about tradeoffs. Until we get better materials, multicore is more cost-effective than pushing the clock beyond a reasonable cost envelope.
Re: (Score:2)
"Your average consumer doesn't need 50 cores [yet]"
Games are getting pretty good at using my 1536 core GPU, which is just a co-processor
Re: (Score:2)
Your average consumer doesn't need the 80386. There's hardly any software compiled to take advantage of its features anyway. I can see maybe someone using them for servers, but that's a pretty small niche.
Re: (Score:2)
I remember almost exactly that quote in PC Magazine back in the day. I think at the time it was the 80486, but same thing. They probably said the same thing about the '386 too.
Of course, I have a quad-core machine sitting on my desk at home with 8GB of RAM, and running at a clock speed two orders of magn
Re: (Score:2)
"I still remember the first time I saw a PC with a 1GB hard-drive ... a bunch of us stood around it thinking "WTF will we ever do with that much disk space?"."
Now we're like "Damn 2GB texture pack."
Re: (Score:2)
Ah, the disaster that is the move from real to protected mode.
Summary: The first fiasco was that in 1982 MS ignored the announcement of the 286 and proceeded to develop a real-mode multitasking version of DOS; only around 1985, when IBM refused to license it, did they realize it was a mistake. And while the resulting OS/2 1.x sucked and lost its chance to Windows 3.x (the two were incompatible, though both were designed for 16-bit protected mode), the second fiasco was when MS broke the JDA with
Re: (Score:2)
Still bitter?
Re: (Score:3)
Yea, I know it is too late. The good news is that the x64 transition went much better.
Re: (Score:2)
I'm guessing there would have to be glue logic to get all these processors to share the memory space as well as read/write access. From the promotional pictures of other multi-core chip dies, each core is usually surrounded by a band of interface logic as well as a hefty block of cache memory. That seems to be the biggest change in the evolution of CPUs. It seems easier to just create larger caches or more cores than anything low-level.
Maybe they accept one or more non-functional cores in exchange fo
Re: (Score:2)
Re: (Score:2)
except for that pesky 8-bit alpha channel, which clearly isn't used.
Re:Little Intel has growed up (Score:4, Informative)
Once it became clear that that particular plan wasn't a happening thing, and that AMD was delivering serious server parts at knockdown prices, and Nvidia was doing interesting things with GPUs, and ARM licensees were pumping out increasingly zippy low-end chips, they stopped fucking around. These days they'll still charge as hard as they can for the features provided; but their hopes of sandbagging x86s in order to sell IA64s are dead.
Re: (Score:2)
Who said nobody needed multicore processors? That seems like a pretty unlikely claim, particularly from Intel, who were very much into selling multi-CPU systems to the high end long before multicore became the norm. I had a dual-socket Pentium II consumer-grade system ages ago. That we were headed to multicore was obvious even then.
Re: (Score:1)
Reminds me that I have a dual Deschutes 350 in the attic somewhere. Served me faithfully from 1998 to 2004. If it weren't for the 128MB of memory and the price of electricity, I might still have it do..... uhm, something. Trouble is, it's still hard to do multithreading, and our programming languages are still inherently single-threaded, maybe with some thread primitives glued on.
How can that be? (Score:4, Insightful)
Re: (Score:3)
Maybe, but probably not. The key to high-performance computing with parallel workloads like this is not just raw processing power but memory bandwidth. The Nvidia Tesla M2090 mentioned in TFS has a peak memory bandwidth of 177GB/s, with memory and controllers specially designed for raw throughput. Conventional CPUs with the fastest DDR3 available reach only a small fraction of that. A teraflop of sustained DP performance is going to be completely useless without the memory bandwidth to back it up.
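To see why bandwidth dominates, consider what it would take to feed 1 TFlop on a low-intensity kernel (a back-of-the-envelope Python sketch; the 0.25 flops/byte figure is a rough assumption for a streaming dot-product-style kernel with one operand cached):

```python
# Sketch: DRAM bandwidth needed to sustain a given flop rate at a given
# arithmetic intensity (flops per byte of memory traffic).
def bandwidth_needed_gb_s(tflops: float, flops_per_byte: float) -> float:
    """GB/s required to keep the FP units fed."""
    return tflops * 1e12 / flops_per_byte / 1e9

# Assumed ~0.25 flops/byte: 2 flops (mul+add) per 8-byte streamed double.
print(bandwidth_needed_gb_s(1.0, 0.25))  # 4000.0 GB/s -- far beyond 2011 DRAM

# Conversely, the M2090's 177 GB/s caps such a kernel well below peak:
print(177e9 * 0.25 / 1e9)  # 44.25 GFlops
```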
Re: (Score:1)
Depends on how much cache is on the chip, and how big the problem being solved is. GPUs have a lot of FP units, but they have such a tiny amount of cache that they basically have to transfer ~everything they operate on over the memory bus. On a CPU, your dataset can be several MB and still fit on-chip, but of course you have fewer FP units. The algorithm I designed for my Ph.D. operates on the same few megabytes of data many times, and it ended up being about equally fast on both architectures, so I'm h
Re: (Score:2)
On a CPU, your dataset can be several MB and still fit on-chip...
Clearly you've never dealt with any HPC programming before. In the vast majority of massively parallel computation problems, the kind which are solved by these kinds of chips, the data sets are also necessarily large: hundreds of megabytes or gigabytes of data. The algorithms that allow massively parallel computation will compute a single step of an algorithm on a large number of elements.
Consider the scenario the GP was referring to, massively parallel dot product, for matrix operations or other algorithms used
Re: (Score:1)
Well, I suppose either my application (face recognition for hundreds of users) is under the threshold for your definition of HPC, or it's a notable exception. Our algorithm consists primarily of repeated BLAS level 1 and 2 operations on chunks of data that fit in CPU cache, but not GPU cache. Essentially, it's low arithmetic intensity operations performed repeatedly (hundreds of times) on gallery image sets that take up a couple megs at a time (and there are a couple hundred of those that can be computed
Re: (Score:2)
Your application is one in which GPUs normally excel, so I have to say that yours must be very badly written.
By treating it as streams of texture sets, rather than just working chunk by chunk, you improve performance. That way, you can just set up different Streaming Processors in a chain to perform the various steps. When programmed in that way, my dual quad core Xeon is outperformed by an old GTS 250.
When you program a GPU, even with CUDA or OpenCL, a DSP programming mindset is more appropriate than a ge
Re: (Score:1)
I would not be so quick to accuse people of writing poor code when you know very little about the problem they're working on. And remember, ~most code runs faster on CPUs. If you read some of Vasily Volkov's papers (he's the guy who wrote the early versions of CUBLAS), it is very clear that you might as well not bother with the GPU if you're mostly doing blas level 1-2 stuff, since the arithmetic intensity isn't high enough. For our application we had some specific operations we could combine and tricks
Re: (Score:2)
Oh, it was an academic project. No wonder then.
Basically, everything you've said so far is that you treated the GPU like a slot-in general-purpose processor, which it's not. Take a look at what's been done in the INDUSTRY, not academia, to see how effective GPUs are at image recognition and processing.
Games push boundaries that academia has yet to reach.
In special effects, multi-object motion path detection, tracking and compensation is done on GPU's nowadays, because a cheap GPU can do it more efficientl
Re: (Score:1)
You win!
Re: (Score:2)
A 50-core chip at 1GHz is going to need to perform 20 double-precision floating-point ops per cycle per core to achieve 1TFlop performance. OK, so 1.2GHz cuts that down to about 16.7 flops/clock.
By your math it means that each core has a 1024-bit wide vector unit. And that means 64-bit FP, not 80-bit. Not impossible, but perhaps unlikely to ever run at theoretical max across all cores in anything but the most carefully crafted case.
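The arithmetic in this sub-thread is easy to check (a minimal Python sketch):

```python
# Sketch: required per-core throughput for 1 TFlop sustained over 50 cores.
cores = 50
target_flops = 1.0e12  # 1 TFlop double precision

for ghz in (1.0, 1.2):
    per_core_per_clock = target_flops / (cores * ghz * 1e9)
    print(f"{ghz} GHz -> {per_core_per_clock:.1f} flops/core/clock")
# 1.0 GHz -> 20.0 flops/core/clock
# 1.2 GHz -> 16.7 flops/core/clock

# A 512-bit vector unit holds 8 doubles; with fused multiply-add
# (2 ops per lane per clock) that is exactly 16 flops/core/clock,
# so no 1024-bit unit is needed if FMA is available.
lanes = 512 // 64
print(lanes * 2)  # 16
```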
Re: (Score:2)
You seem to be forgetting about SIMD and vectorization. If you pack multiple operations into one wide instruction, a core can do much more per cycle than a typical 32- or 64-bit scalar unit. That is often how early benchmarks are set up to give the highest possible data-throughput numbers.
Re: (Score:2)
Dot product or vector multiply-add IS an SIMD instruction. I chose it because it does the most FLOPs of any instruction I'm aware of. If it can retire 2 of those per cycle then the FPU will have the claimed performance. Then I questioned the memory performance. And after recalling my own efforts to optimize the cache behavior of matrix operations I'm convinced they can do it with not too much cache per core.
Re: (Score:2)
Re: (Score:2)
The vector unit must be FMA capable just like Larrabee, hence the doubling of FLOPS/cycle.
Re: (Score:2)
there are lots of useful computations that are more flops-intensive (relative to memory footprint) than dot-products. matmul, fft, almost anything montecarlo, etc.
Re: (Score:2)
matmul IS dot products. FFT is dot products too. Most anything DSP is dot products. I chose dot product because it is the instruction that does the most floating point operations.
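The flop counts behind this claim are straightforward (a tiny Python sketch; the helper names are just for illustration):

```python
# Sketch: a length-n dot product does n multiplies and n-1 adds (~2n flops);
# an n x n matmul is n*n dot products, so ~2*n**3 flops over only ~3*n**2
# values of data -- which is why matmul reuses cache so well.
def dot_flops(n: int) -> int:
    return n + (n - 1)

def matmul_flops(n: int) -> int:
    return n * n * dot_flops(n)

print(dot_flops(1000))     # 1999
print(matmul_flops(1000))  # 1999000000
```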
Re: (Score:2)
OK, so 1.2GHz cuts that down to 16flops/clock. Since when can anything Intel Architecture achieve that many flops per cycle?
Since LRBni and its 512-bit vectors. A double-precision FMA gets you 16 ops in a clock.
But can they really keep the FP unit running continuously at that rate? On all 50 cores?
Easily. HPC codes regularly keep thousands of cores busy.
Re: (Score:2)
It has 512-bit AVX-like registers. You can do a lot of FP per clock with SIMD like that. But like you said (vector-scalar multiply-adds), they probably have multi-operand instructions to allow fused math.
Not all that exciting (Score:1)
Re: (Score:3)
How about a consumer version? (Score:2)
Wonder if they'll produce a consumer version.
I use an ATI card as my main video card, and wouldn't mind sticking a physics card in the other PCI-E slot. The thing is that if I put in an Nvidia card, it won't work as a physics card: Nvidia has written its drivers so that if your primary video card is non-Nvidia, you're not allowed to use their card just for physics.
So my hope is that if Intel puts out a consumer version then either I'll be able to buy an Intel board ju
Re: (Score:2)
"It means very little to most of us."
Just like your comment.
Intel's side entry into the GPU market (Score:2)
We may yet see high-end Intel discrete graphics cards in the future.
Knights Corner sounds like it is basically a high-end GPU without the actual graphics output. This lets Intel position it as a professional product for HPC and supercomputing, and squeeze out as much profit as possible from the early models. Then, once the R&D cost has been amortized and the fab technology is advanced further, they can add a HDMI output, dedicated RAM, and glue logic, and write appropriate drivers to make it a full-fled
Re: (Score:2)
Re: (Score:2)
To me it sounds like much more. The "cores" on a GPU are not equivalent to CPU cores [langly.org], whereas on Knights Corner you get 50 actual x86 cores. It is sure to be much more general-purpose. From the article: "Unlike other co-processors, the MIC is fully accessible and programmable as though it were a fully functional HPC node." It sounds like a cluster on a chip. I am curious about the memory model.
Re: (Score:1)
I'm curious about the memory model too. I'm pretty certain that bit about "cluster on a chip" is just marketing hyperbole, and it's actually still a shared-memory system running one instance of the Linux kernel. They're not going to make you run 50 Linux kernel instances and communicate between them using network sockets.
Re: (Score:2)
They're essentially using x86 cores for a vaguely GPU-style wide SIMD unit, from what I can tell. AMD's next generation of GPUs appear to be heading towards a similar destination from the opposite direction - they're adding a non-vector core to each 16-wide block of vector cores for control code that can't easily be vectorized.
MIC presentations at SC11 (Score:4, Informative)
I'm at SC11 right now and just attended Intel's MIC presentation. The scaling looks fantastic according to various codes that they compiled to run on it, but what was notably absent was performance relative to traditional x86 chips. The final presenter even said that now that the technology has been demonstrated to work (with minimal porting effort required), the next step will be to optimize and improve performance. The takeaway is that relative to Intel's other chips, MIC performance wasn't impressive enough to include in the presentation. That's fine in my book because it's an ambitious project, but it sounds like there is still some work to do.
Remember ASCI Red?? (Score:1)
Just shows you the progress in CPU power: ASCI Red was the first supercomputer to go over 1TFlop, and it was massive; now we get this from just one chip!
Re: (Score:2)
And the massive computers are going 100 Petaflops [theregister.co.uk] (that's 100,000 Teraflops).
Imagine... (Score:1)
Re: (Score:2)
Occam?
Intel sort comparison paper between MIC and GPU (Score:1)
HPC is Much More Than Multi-Core Processors (Score:1)
Re: (Score:3)
4chan is still down? Maybe we should lend them a hand.
Re: (Score:1)
Pfffft! My prosthetic horse cock penis can deliver 500 OPH (orgasms per hour)
"DP" is double precision in this case, not the other one;)