
New Processor Will Feature 100 Cores

This topic is archived.
The Straight Story | Mon Oct-26-09 02:46 PM
Original message
New Processor Will Feature 100 Cores


Forget dual-core and quad-core processors: A semiconductor company promises to pack 100 cores into a processor that can be used in applications that require hefty computing punch, like video conferencing, wireless base stations and networking. By comparison, Intel’s latest chips are expected to have just eight cores.

“This is a general-purpose chip that can run off-the-shelf programs almost unmodified,” says Anant Agarwal, chief technical officer of Tilera, the company that is making the 100-core chip. “And we can do that while offering at least four times the compute performance of an Intel Nehalem-EX, while burning a third of the power of a Nehalem.”

The 100-core processor, fabricated using 40-nanometer technology, is expected to be available early next year.

In a bid to beat Moore’s law (which states that the number of transistors on a chip doubles every two years), chip makers are trying to either increase clock speed or add more cores to a processor. But cranking up the clock speed has its limitations, says Will Strauss, principal analyst with research and consulting firm Forward Concepts.

“You can’t just keep increasing the clock speed so the only way to expand processor power is to increase the number of cores, which is what everyone is trying to do now,” he says. “It’s the direction of the future.”

http://www.wired.com/gadgetlab/2009/10/tilera-100-cores/

In fact, Intel’s research labs are already working on a similar idea. Last year, Intel showed a prototype of an 80-core processor. The company has promised to bring that to consumers in about five years.
Recursion | Mon Oct-26-09 02:51 PM
Response to Original message
1. Current chips are fast enough -- improve the compilers
80 cores, 100 cores, 1000 cores: won't mean a dang thing if your compilers can't schedule well, or if your I/O model is stupid (which it inevitably is). CPUs haven't been the choke point on speed in about a decade and a half.
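A minimal sketch of that claim (the file name and buffer size are arbitrary): the per-byte "work" below is a single add, so the wall time is essentially the disk's, and more cores would not move the number.

// A deliberately I/O-bound job: checksum a file.
// Build (assumed toolchain): g++ -O2 -o iosum iosum.cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }

    std::vector<unsigned char> buf(1 << 20);   // 1 MiB read buffer
    std::uint64_t sum = 0, bytes = 0;
    auto t0 = std::chrono::steady_clock::now();
    std::size_t n;
    while ((n = std::fread(buf.data(), 1, buf.size(), f)) > 0) {
        bytes += n;
        for (std::size_t i = 0; i < n; ++i) sum += buf[i];   // the only "compute"
    }
    double secs = std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
    std::printf("%llu bytes, checksum %llu, %.1f MB/s\n",
                (unsigned long long)bytes, (unsigned long long)sum, bytes / secs / 1e6);
    std::fclose(f);
}

Run it twice on a large file: the second, page-cache-warm pass goes many times faster with the identical CPU work, which is the point: the processor was never the limit.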
 
phantom power | Mon Oct-26-09 02:56 PM
Response to Reply #1
2. CPU cache memory misses are one of our biggest bottlenecks.
A CPU with a big honking cache, or faster RAM->cache transfer, would speed up a lot of our stuff.
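A rough sketch of what misses cost (array size is arbitrary): both loops below do the same 16M additions, but one walks memory sequentially while the other strides 16 KiB between accesses and misses the cache on nearly every load.

// Same arithmetic, two memory-access orders.
// Build (assumed toolchain): g++ -O2 -o cache cache.cpp
#include <chrono>
#include <cstdio>
#include <vector>

constexpr int ROWS = 4096, COLS = 4096;

static double time_sum(const std::vector<int>& m, bool row_major) {
    auto t0 = std::chrono::steady_clock::now();
    long long sum = 0;
    if (row_major) {            // sequential walk: prefetcher-friendly
        for (int r = 0; r < ROWS; ++r)
            for (int c = 0; c < COLS; ++c) sum += m[r * COLS + c];
    } else {                    // 16 KiB stride: a cache miss per access
        for (int c = 0; c < COLS; ++c)
            for (int r = 0; r < ROWS; ++r) sum += m[r * COLS + c];
    }
    volatile long long sink = sum; (void)sink;  // keep the loop from being optimized away
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::vector<int> m(ROWS * COLS, 1);
    std::printf("row-major:    %.3f s\n", time_sum(m, true));
    std::printf("column-major: %.3f s\n", time_sum(m, false));
}

Same instruction count either way; the severalfold gap in wall time is pure miss penalty.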

To say nothing of faster disk. And by the way, it would be nice if Windows would ever see fit to optimize its damned disk I/O. We routinely measure 4x disk-access improvements just by building on Linux: same machine, same disk.

If Microsoft had any pride, they might be ashamed of that.
 
Swamp Rat | Mon Oct-26-09 03:00 PM
Response to Reply #2
4. 2.6.31.5
:)



 
Recursion | Mon Oct-26-09 03:08 PM
Response to Reply #4
5. 4.6
:)

 
Statistical | Mon Oct-26-09 03:10 PM
Response to Reply #2
7. We routinely see 80 brazzilion times disk-access improvement just by compiling in windows.
Edited on Mon Oct-26-09 03:12 PM by Statistical
Hell even windows 98 is about 984023840934832904823904x faster than Linux.

(Wow nonsensical statements are fun)
 
cliffordu | Mon Oct-26-09 03:37 PM
Response to Reply #7
11. Well of course. If you completely ignore the BSOD during compilation,
then it's faster than all get out.
 
Realityhack | Mon Oct-26-09 03:00 PM
Response to Reply #1
3. Depends upon the application.
I do not think that is true as a universal statement.
 
Recursion | Mon Oct-26-09 03:09 PM
Response to Reply #3
6. Sorry, EE grad student moment there. I was assuming embedded systems
For most embedded applications, the problem for a while has been getting data into and out of the processor, not the processing itself.
 
Realityhack | Mon Oct-26-09 03:21 PM
Response to Reply #6
8. I understand. I just didn't want others to think it was universally true. n/t
 
Recursion | Mon Oct-26-09 09:05 PM
Response to Reply #8
14. Then again, memristors may change everything
Flash memory at twice RAM speeds... that could bring processing back to being the bottleneck.
 
boppers | Mon Oct-26-09 03:25 PM
Response to Reply #1
9. Improve the compilers? The compilers are fine.
Compilers, after all, are simple translators. That's like blaming a poor racing performance on a car's engine... when the driver and crew have decided that the vehicle cockpit needs a mini-bar, a jacuzzi, a bed, a trophy room, and seating for 15. Simple programs get slowed down with insane amounts of bloat, add-ons, and widgets, just to, oh... type up a document or render a web page. (Classic example: a talking paperclip? Really?)

A friend of mine wrote this back in 1983:
http://www.pbm.com/~lindahl/real.programmers.html

...and it's even truer today. Languages have changed, but there's still a massive gap between programmers who know how to write efficient code and "programmers" who think their code just needs a faster CPU, or a better compiler, or a more efficient library, or better caching, or <insert somebody else's problem here>, rather than simply writing good code to start with.
 
Recursion | Mon Oct-26-09 03:29 PM
Response to Reply #9
10. We're talking specifically about parallelism here
And that is an issue with the compilers. If you can't properly schedule execution, more cores can end up slowing down execution time.
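One concrete way that happens is false sharing; here's a small sketch (the 64-byte line size and thread count are assumptions about typical x86 hardware). Four threads each own a separate counter, but the counters sit on one cache line, so every increment bounces the line between cores.

// "More cores, slower code" via false sharing.
// Build (assumed toolchain): g++ -O2 -std=c++17 -pthread -o fs fs.cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int  kThreads = 4;
constexpr long kIters   = 20000000;

struct Packed { std::atomic<long> v{0}; };              // all counters on one line
struct Padded { alignas(64) std::atomic<long> v{0}; };  // one cache line per counter

template <typename Slot>
static double run() {
    std::vector<Slot> slots(kThreads);
    auto t0 = std::chrono::steady_clock::now();
    std::vector<std::thread> threads;
    for (int i = 0; i < kThreads; ++i)
        threads.emplace_back([&slots, i] {
            for (long n = 0; n < kIters; ++n)
                slots[i].v.fetch_add(1, std::memory_order_relaxed);
        });
    for (auto& t : threads) t.join();
    return std::chrono::duration<double>(std::chrono::steady_clock::now() - t0).count();
}

int main() {
    std::printf("shared cache line: %.2f s\n", run<Packed>());
    std::printf("padded to 64 B:    %.2f s\n", run<Padded>());
}

The two runs execute the same instructions on the same number of cores; only the data layout differs, and the padded version is typically several times faster.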
 
boppers | Mon Oct-26-09 03:57 PM
Response to Reply #10
12. Parallelism has been fairly well understood at the compiler level for many years now.
The Burroughs D825 came out in 1962. Multics came out in 1969.

It's possible to write code for multi-core (or multi-node) properly, and it's also possible to write code that processes/threads/scales poorly.

At one extreme, there's Google's system, which has over a million cores, with dedicated nodes that do nothing but scheduling and load distribution between the other nodes. At the other extreme, there are lots of programs that were *never* written with multiple processes/threads in mind, and thus sit on a single CPU while the other CPUs sit idle.

To paraphrase your statement:
If a programmer can't (or doesn't) properly plan execution, more cores can end up slowing down execution time.
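And planning it properly is not exotic. A sketch of the deliberate version (array size and thread count are arbitrary): give each thread one contiguous chunk and a private accumulator, and combine once at the end, so the threads never contend.

// One chunk per thread, one shared write per thread.
// Build (assumed toolchain): g++ -O2 -pthread -o psum psum.cpp
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

static long long parallel_sum(const std::vector<int>& data, unsigned nthreads) {
    std::vector<long long> partial(nthreads, 0);   // private slot per thread
    std::vector<std::thread> threads;
    const std::size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        const std::size_t lo = t * chunk;
        const std::size_t hi = (t + 1 == nthreads) ? data.size() : lo + chunk;
        threads.emplace_back([&data, &partial, t, lo, hi] {
            long long s = 0;                       // accumulate locally...
            for (std::size_t i = lo; i < hi; ++i) s += data[i];
            partial[t] = s;                        // ...then one shared write
        });
    }
    for (auto& th : threads) th.join();
    return std::accumulate(partial.begin(), partial.end(), 0LL);
}

int main() {
    std::vector<int> data(50000000, 1);            // ~200 MB of ones
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                             // fallback if unknown
    std::printf("sum = %lld on %u threads\n", parallel_sum(data, n), n);
}

Because the split is decided up front, this scales nearly linearly until memory bandwidth, not the CPUs, becomes the ceiling.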
 
Statistical | Mon Oct-26-09 06:01 PM
Response to Reply #12
13. However, parallel code is much harder to get right, and much, much easier
to introduce bugs in, than a single-threaded (or lightly multithreaded) app.

One only needs to look at benchmarks of games and other apps on single, dual, triple, and quad core.

The tools for multicore optimization on the x86/x64 platform have been rather lacking. This is changing slowly, but generally speaking I would rather have a 12 GHz single core (w/ hyperthreading) than a 3 GHz quad core.

Of course NetBurst failed and we are never getting 12 GHz, so we need to bite the bullet, but maximizing performance in a multicore environment is no easy task.
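And the bugs in question need only a few lines. A minimal sketch of the classic lost-update race (iteration counts are arbitrary):

// Two threads, one unsynchronized counter.
// Build (assumed toolchain): g++ -pthread -o race race.cpp
#include <cstdio>
#include <thread>

long long counter = 0;           // shared, unsynchronized: this is the bug

static void bump() {
    for (int i = 0; i < 1000000; ++i)
        ++counter;               // data race: separate load, add, store
}

int main() {
    std::thread a(bump), b(bump);
    a.join();
    b.join();
    // Usually prints less than 2000000, and a different number each run.
    std::printf("expected 2000000, got %lld\n", counter);
    // The fix: make counter a std::atomic<long long>, or guard it with a mutex.
}

It passes every single-threaded test you could write; only under concurrency does the lost update show up, and never the same way twice.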
 
boppers | Tue Oct-27-09 03:40 AM
Response to Reply #13
15. I agree, to an extent.
In a "mechanical engine RPM" metaphor, we have an entire generation of programmers who kept hoping that faster engines would be a solution, so a 12Grpm, single cylinder, vehicle engine would be hypothetically better than a six-cylinder, 3Grpm, vehicle engine. (We know how that turned out, both for the good and bad).

These engine engineers would have to learn about fuel distribution, EFI, balancing, firing sequence... a whole host of things in order to make better engines. Yes, it's much harder.

That's why it's a job, and why software is "engineering", and not "I get paid mad amounts of money to type up object templates that reduce a CPU to massive inefficiency".
 
Statistical | Tue Oct-27-09 08:33 AM
Response to Reply #15
16. Of course.
Edited on Tue Oct-27-09 08:35 AM by Statistical
No excuses; however, it is the simple reality that writing effective, efficient multithreaded code often requires substantially more time to plan, code, and test. Given that in software time = product cost, companies are often reluctant to explore multithreaded programming unless there is no other choice.

For a long time (60 MHz to 3000 MHz) there was the "easy way out"; however, we will never see that kind of frequency scaling again, so the industry will have to accept either a) writing more efficient code or b) developing multithreaded solutions. That doesn't mean the transition will be easy.

Recently there has been a lot of development in tools to assist in the creation and testing of multithreaded apps, so I am optimistic.

Intel Parallel Studio
http://software.intel.com/en-us/intel-parallel-studio-home/

Microsoft Parallel Tools for Visual Studio 2010
http://msdn.microsoft.com/en-us/concurrency/default.aspx
 
boppers | Wed Oct-28-09 12:56 AM
Response to Original message
17. Shameless bump for DU's resident geeks.
I'm doing minor parallelism with 16-127 processes, on systems of up to 9 machines with 8 cores each (72 CPU cores). What are others doing?
 