
Moore's Law has a shelf life: Chip makers plan for a post-silicon world

According to experts, the chips that power data center servers will continue to get smaller and faster, and Moore's Law -- a doubling of performance roughly every two years -- will continue unabated for at least the next 10 years. New features such as security and wireless communications will also be bundled into microprocessors, ultimately easing the jobs of the people in the data center who deploy and maintain the servers.

First isolated as an element in 1824, silicon has been an essential semiconductor building block virtually since the first integrated circuit was built at Texas Instruments in 1958.

After 2010, though, things could get very interesting.

"Everyone in the field is pretty comfortable that Moore's Law and silicon will dominate through the end of the decade," says Nathan Brookwood, a semiconductor analyst at Insight 64, an independent consultancy in Saratoga, Calif. "After that, though, people are a little worried about whether silicon will be able to go on indefinitely."

Some believe that silicon eventually will start to run out of steam; after all, one can cram only so many things into a tiny space before reaching a point of diminishing returns. Experts have been debating for years when that limit will be reached, and plenty of research is going on in chip design and fabrication to help overcome the obstacles. Nanotechnology, quantum computing and other technologies have come to the fore as possible silicon replacements, Brookwood says.

In the meantime, semiconductor makers are trying to do all they can to keep silicon alive. Although research in new areas is ongoing, to actually mass-produce anything other than silicon will cost billions of dollars in new semiconductor manufacturing and design equipment. As the economy continues to spiral downward, these are not costs that chip makers are eager to bear.

One method of prolonging silicon's life is to put two or more processors on a single piece of silicon. Called chip multiprocessing, this is expensive because all the components in each chip must be replicated. A less expensive approach -- one already being used by Intel and about to be adopted by Sun and others -- is called simultaneous multithreading. With this technique, some parts of the chip are replicated but the device can switch among multiple threads while sharing a lot of the underlying chip resources. Thus one chip can do the work of two or more, cutting down the number of chips needed to do the same amount of work.
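The practical upshot for server software is that a single chip presents itself to the operating system as two or more logical processors. As a rough, hedged illustration -- not tied to any particular chip or vendor -- a Java program can ask the runtime how many logical processors the OS reports and size its worker threads to match:

```java
// Minimal sketch: discover how many logical processors the OS reports and
// start one worker per hardware thread. On a chip with simultaneous
// multithreading, this count includes the extra hardware threads, so a
// single physical chip can show up as two or more processors.
public class WorkerSizing {
    public static void main(String[] args) {
        int logicalProcessors = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical processors reported: " + logicalProcessors);

        Thread[] workers = new Thread[logicalProcessors];
        for (int i = 0; i < workers.length; i++) {
            final int id = i;
            workers[i] = new Thread(() -> {
                // Placeholder for real server work (request handling, etc.).
                System.out.println("Worker " + id + " started");
            });
            workers[i].start();
        }
    }
}
```

Sizing worker pools this way is one of the simpler ways server software can take advantage of the extra hardware threads without knowing which logical processors share a physical core.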

Other approaches are being tried, too. In April, IBM and Sony announced a deal to jointly develop chips based on silicon-on-insulator and other materials. Silicon-on-insulator places a layer of insulation between the transistor and its silicon substrate, reducing parasitic capacitance and improving switching speed by 20 to 35 percent. Another up-and-coming approach is the "system on a chip," which integrates the central processing unit, communications features and memory onto a single piece of silicon.

In the meantime, though, data center staffers will continue to see familiar, if welcome, improvements in chip and server technologies.

Mike Splain, chief technology officer for Sun's processor and network products group, expects several things to happen with high-end servers during the next few years. One is a migration from software-based to hardware-based error recovery, so that recovery is more automatic and fully guaranteed.

Another trend will be the use of doubled-up processor cores -- using multiple threads in one CPU. "You won't get exactly a doubling every two years, but it will be some factor of that," Splain says. "So you might see a true doubling every three to four years" in the highest-end machines. He also expects the machines to become much smaller because of this doubling-up, so customers "will get more space back in their data centers."

For its part, Intel has committed to extending its NetBurst architecture, now used in its Pentium 4 and Xeon chips, to reach 10 GHz, up from 2.8 GHz today, according to Mike Graf, product line manager for the Itanium processor family. Intel is positioning Itanium as the highest-end chip in its lineup, making it the basis for machines with 32 or more processors. Distributed databases, scientific and technical computing and other high-performance niches are its target markets.

Intel is currently shipping its second-generation Itanium. At the Intel Developer Forum in September, the company laid out plans for what's ahead. By summer 2003, the "Madison" generation of the chip will debut with twice the on-chip cache (6MB versus the current 3MB), and those chips will be 30 to 50 percent faster than the current generation. Itanium will also gain multithreading and hyperthreading, which let the processor execute multiple threads in parallel; Intel says this improves transaction rates by up to 30 percent over systems without hyperthreading. Both features already appear in the Xeon family and will now move to Itanium.
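Those transaction-rate gains come from keeping otherwise idle execution units busy with a second thread, which means the workload must be broken into independent tasks that can overlap. A hedged sketch of that pattern follows; handleTransaction() is a hypothetical placeholder, not any real Intel or benchmark API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Illustrative only: on a hyperthreaded processor, two of these tasks can
// share one core's execution resources instead of running strictly in turn.
public class ParallelTransactions {

    // Hypothetical stand-in for real transaction work.
    static void handleTransaction(int id) {
        // ... parse the request, hit the database, build a response ...
    }

    public static void main(String[] args) throws InterruptedException {
        int hardwareThreads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(hardwareThreads);

        for (int i = 0; i < 1000; i++) {
            final int id = i;
            pool.submit(() -> handleTransaction(id));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```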

Within about two years, the company will debut a chip with 1 billion transistors on a single processor, Graf says. Today's top-of-the-line processor has about 221 million transistors, and the Madison generation will sport around a half-billion.

Perhaps just as important, future generations of Itanium -- and there are five in development -- will be compatible at both the hardware and software levels with the existing chip set. This should translate into fewer installation problems with device drivers and other issues down the road.

The goal is to provide IT shops, especially in these troubled economic times, the means for "doing more with less," Graf says.

All told, Intel will "take technology traditionally in the high end of the market and bring it into the mainstream," says Tony Massimini, chief of technology at Semico Research Corp. in Phoenix, an independent consultancy specializing in semiconductor research. "They will keep pumping out chips in high volume and low price, and this will look very attractive" to IT shops, he says.

Although Massimini doesn't expect anything radically different for server chips in the next few years, he says Intel will be "pushing wireless" features a great deal. "With greater wireless connectivity for notebooks and desktops, that will put a load on the data center guys" to support those features from the server side, he adds.

According to Massimini, at its recent developer forum Intel "alluded" to the idea of improving security by embedding some features into its chips, through something code-named La Grande, although the company didn't provide a timeframe for doing so. Intel's Graf wouldn't disclose any information about the project, but said security is an issue the company is aware of and working on.
