
A change of mentality on transistors
[September 28, 2011]

Mobile phones, computers and graphics cards get faster for one simple reason: as process geometries improve, semiconductors become twice as transistor-dense, and therefore offer twice the digital signal processing capability, every eighteen months.



However, there may be problems on the horizon.

As transistor sizes continue to shrink, the ability of engineers to design chips that use such high numbers of transistors is not increasing at anything like the same speed. The so-called 'design gap' leaves designers far behind the capabilities of the hardware available to them.


Also, the march of semiconductor progress is about to slam up against some fundamental physical limitations of its medium.

Obviously, electron-based ICs cannot, even in principle, get smaller than the diameter of an atom (0.1-0.5nm). In reality, each would have to be constructed from basic nodes much larger than this (around 1nm) in order to work. At these smallest geometries there are also serious issues with current leakage.

Then there is the related issue of the failure of Dennard scaling. Simply put, Dennard scaling describes the fact that halving the size of a transistor also halves the power it consumes, so that a chip's power density stays roughly constant as transistors shrink.

However, as we move to ever more advanced process nodes, Dennard scaling breaks down as leakage effects come to dominate power consumption.

This means that the power density of chips increases: if you double the number of transistors in a given area but don't at least halve the power dissipation of each of those transistors, you get a net increase in power density.
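As a minimal sketch of the arithmetic, using the textbook constant-field (Dennard) scaling relations rather than figures from any particular process: scaling linear dimensions down by a factor $\kappa > 1$ gives

\[
V \to \frac{V}{\kappa}, \quad I \to \frac{I}{\kappa}
\;\Longrightarrow\;
P = VI \to \frac{P}{\kappa^{2}}, \quad A \to \frac{A}{\kappa^{2}}, \quad
\frac{P}{A} \;\text{constant},
\]

where $P$ is the power per transistor and $A$ its area. Once leakage stops the supply voltage $V$ from scaling with geometry, $P$ no longer falls as fast as $A$, and the power density $P/A$ rises at every new node.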

When a chip's peak power density exceeds the capabilities of cooling technologies, we see the phenomenon of 'dark silicon': areas of silicon that must remain unutilised at any given time in order to maintain acceptable thermal dissipation.

There are also economic problems in continuing to reduce transistor sizes. These include the increasing cost of mask sets and the greater problems caused by physical flaws in silicon wafers as process geometries shrink, leading to lower yields.

We've seen these issues coming. Occasionally an architectural improvement or blue-sky technological development comes along that keeps the wolf from the door.

In a sense, the rise of multicore over the last few years has been a reaction to scaling issues, and such parallel processing has proven immensely useful in DSPs.

However, in keeping with Amdahl's Law, only limited returns can be had from multicore solutions for most applications, and it is essentially the equivalent of throwing more chips at the problem. Using numerous discrete chips, whether conventionally packaged or stacked, introduces serious cost, power and heat-dissipation problems.
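To see why the returns are limited, Amdahl's Law gives the speedup $S$ available from $N$ cores when a fraction $p$ of a workload can be parallelised:

\[
S(N) = \frac{1}{(1-p) + p/N},
\qquad
\lim_{N \to \infty} S(N) = \frac{1}{1-p}.
\]

Even a workload that is 95% parallelisable ($p = 0.95$) can therefore never run more than 20 times faster, no matter how many cores are added.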

Thus we're reaching a plateau. We can go further with our existing technologies for a while, but physics and economics will soon make gains of the kind Moore's Law has delivered over the last 40 years impractical.

The nearest 'game-changing' solutions we can see (photonics, quantum computing, and so on) may be decades away.

I've been promoting the notion that what we really need in this situation is a more general shift in architectural thinking.

Today, if you ask chip designers how to extract maximum capability from a chip, they will generally see how much they can force a fixed-architecture chip to do with brute computational force. This only exacerbates the problems outlined above.

This all points, in my opinion, towards chips whose internal structure can be reconfigured to perform different tasks, but with the efficiency of hard-wired architectures.

My own company, for example, has designed a response to this problem in the form of a dynamically reconfigurable logic (DRL) IP system known as ART2.

With this architecture, parts of a chip can redefine their function on a clock-cycle-by-clock-cycle basis. The result is more efficient use of a given number of transistors.
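ART2's internals are not described here, so the following is only a toy illustration of the general idea of cycle-by-cycle reconfiguration; the names and behaviour are hypothetical and do not describe ART2 itself:

# Toy Python model of a dynamically reconfigurable logic block.
# Illustrative only: every name here is hypothetical, not ART2's design.

class ReconfigurableUnit:
    """A block whose function is selected by a configuration word
    loaded at the start of each clock cycle."""

    OPS = {
        0: lambda a, b: a + b,   # this cycle, behave as an adder
        1: lambda a, b: a * b,   # this cycle, behave as a multiplier
        2: lambda a, b: a - b,   # this cycle, behave as a subtractor
    }

    def cycle(self, config, a, b):
        # In hardware the configuration word would steer datapath
        # routing; in this toy model we simply dispatch on it.
        return self.OPS[config](a, b)

# One object (one set of 'transistors') computes a multiply-accumulate
# over two cycles, instead of dedicating separate hardware to each step.
unit = ReconfigurableUnit()
acc = unit.cycle(1, 3, 4)    # cycle 1: configured as a multiplier -> 12
acc = unit.cycle(0, acc, 5)  # cycle 2: reconfigured as an adder   -> 17
print(acc)                   # 17

The point is simply that one set of gates does different jobs on successive cycles, which is what lets a given transistor budget go further.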

I suggest this approach offers a stopgap for the next 20 years or so, until the next fundamental change in the way we physically create logic circuits arrives and we move beyond the simple 'throw more transistors at it' mentality.

www.akya.co.uk

(c) 2011 Reed Business Information - UK. All Rights Reserved.
