Arm Shows Backside Power Delivery as Path to Further Moore’s Law (ieee.org)
140 points by magoghm on Dec 20, 2019 | hide | past | favorite | 57 comments


This is a super interesting result. As others have pointed out, two-sided dies have been tried in the past, but they were expensive, and the backside components couldn't be very interesting because you are limited in what you can do. For example, ion implantation can shoot through the place you're trying to hit and damage a component on the 'other' side. That's not a problem when you are laying down the first set of devices on raw silicon, but a big problem if you've got structures underneath you need to protect.

Whereas just doing power delivery on the back side is a much simpler proposition, and you could likely add simple transistors to let you switch finer-grained power domains on and off for even better power and heat management.


Was seconds away from posting this too. Basically, we have a situation where logic voltage is approaching silicon's threshold voltage of ~0.7 V.

With logic operating at such low voltages, the current you have to supply to the die is very high even in "low power" chips, and you begin suffering significant copper losses over single millimetres.

If you can put a DC-DC converter on the chip itself, that problem is solved.
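To see why those copper losses bite, here is a toy calculation (all numbers invented for illustration, not taken from any real chip) comparing the same chip power delivered at a typical board voltage versus directly at core voltage, through the same milliohm-scale path:

```python
# Illustrative only: why low logic voltage plus high current makes
# even milliohm-scale copper resistance painful.

def copper_loss(v_supply, power_w, r_path_ohm):
    """Return (current_A, ir_drop_V, i2r_loss_W) for a simple series path."""
    i = power_w / v_supply           # I = P / V
    drop = i * r_path_ohm            # IR drop across the delivery path
    loss = i ** 2 * r_path_ohm       # I^2 R dissipated in the copper
    return i, drop, loss

# The same hypothetical 80 W chip, fed at 12 V vs at 0.8 V, through 1 mOhm:
for v in (12.0, 0.8):
    i, drop, loss = copper_loss(v, 80.0, 1e-3)
    print(f"{v:>4} V: {i:6.1f} A, drop {drop * 1000:6.1f} mV, loss {loss:5.2f} W")
```

At 12 V the drop is a few millivolts; at 0.8 V the same path carries 100 A, drops ~100 mV (an eighth of the supply), and burns ~10 W in the copper, which is why converting down as close to the die as possible is attractive.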


That idea has been proposed before, and we even looked into it. The problem is that the efficiency and current per mm² of on-die DC-DC converters are generally way too low. Converters using inductors do better, but getting high-Q inductors on a chip in a small area is difficult.


First, hi!

Well, are we talking about a DC-DC converter made alongside the logic, or one on a separate analog process?

With the second, chip-scale bucks have certainly been made.

GaN grows on Si nicely, and ferrite-on-Si is also a thing.


Hey!

GaN on Si could work, as could ferrite-based inductors, but at least for the foreseeable future the incentive doesn't seem strong enough to make it a wide-scale thing. Heterogeneous integration (chiplets, flip-chip bonding, etc.) might be the route if GaN converters get integrated.


Haswell had an on-die, integrated voltage regulator. I guess it didn't work as well as expected because they removed it in Broadwell.


It was removed in Skylake, not Broadwell. Ice Lake reintroduces the FIVR again. The inductors are now in the package substrate (rather than discrete inductors on the package or the weird 3DL module in Broadwell-Y) and efficiency at low power is improved. The latter is obviously extremely important for mobile workloads.


Do you see integrated super caps or batteries into the package or die at some point?


Not on die, but on a separate module on the substrate. Very different thing.

They still use FIVRs on mobile chips; they've just managed to put the inductors in the substrate now.


It's on die. On the Haswells with two dies, the second one was eDRAM.

https://www.psma.com/sites/default/files/uploads/tech-forums...


No, no, no. Haswell's inductors are off-die; that's what that very paper says.

What Intel said was that they had on-die inductors ready in the lab, but they never stated whether those would go into production.


Inductors are not the important part; it's all about the MOSFETs.


Backside processing is standard in the CMOS image sensor industry. It turns out all the metal on the front side makes it very difficult to get light down to the silicon where the actual photon-to-electron transduction takes place. Backside illumination (BSI) has been standard for quite a few years.

Wafer thinning is used extensively in stacked-die products which have become common in recent years.

This looks like an incremental change, applying techniques from image-sensor and memory technology to standard logic silicon. Although I'm not sure why they chose ruthenium.


BSI is very simple compared to what they are proposing here. Backside illumination does not require that any features actually be built into the backside. It basically just requires that the backside be polished very nicely.

Here we are talking about building elements on the backside and then connecting them to the front side. It is much trickier.


That's not entirely true. BSI always had some metallization on the backside for aperture, power and packaging. For example, see this: https://image-sensors-world.blogspot.com/2018/03/techinsight...

Perhaps what they're proposing is more sophisticated, but definitely not something entirely new.


Copper has long been popular for metallization, but it is running into problems at finer linewidths. Cobalt and ruthenium are two popular alternatives being researched. I don't follow the industry closely enough to know if anyone is using them in volume manufacturing yet.

"Imec Presents Copper, Cobalt and Ruthenium Interconnect Results"

https://semiwiki.com/semiconductor-services/ic-knowledge/756...

Cobalt needs a barrier to prevent mixing under some circumstances where it contacts copper. Ruthenium doesn't need any barriers but chemical mechanical polishing (CMP) processes for it still have problems.


This is crazy:

> The only trade-off is the complexity of manufacturing the backside network, Prasad noted. To make it, the frontside of the wafer must be fully processed, including the construction of the buried power rails. The wafer is then flipped over, and the silicon is removed down to a mere 500 nanometers thickness. Then vertical connections less than 1 micrometer across, called micro-through-silicon vias (microTSVs), were built to contact the buried power rails.


Making double-sided chips, and TSVs on them, is nothing new. It's been tried in the past and didn't take off.

Even ignoring the cost of the TSVs, a double-sided wafer costs twice as much as a normal one, while you can squeeze at most 20-30% more onto it, given that the backside will only be usable for lower-tier logic or an analogue process.


> Even without costs of TSVs, a double sided wafer costs twice the normal one

If you ignore the TSVs, the back of the wafer has zero circuitry constructed on it.


Well, as I understand the idea, they do want to put a DC-DC converter on the back side.


They may want to, but they can trivially choose not to if it makes things too expensive. So 'double cost' will not be a problem.


At most this looks to provide a linear performance improvement per iteration (if it's not a one-off improvement, which it looks to be).

Moore's law implies exponential improvement, up to the physical limits we hit over the last decade, on each process shrink (this is also explicitly defined in Moore's paper), i.e. Dennard scaling. This is possible because, roughly speaking, when you shrink the process you reduce latency (allowing a faster clock with the same design, or a new larger design, i.e. double the transistors with the same maximum straight-line latency) without increasing power consumption or absolute heat-dissipation requirements. It's almost "free" if you can keep shrinking, which is why you get exponential improvement if you can keep halving.

Media have completely watered down the term into a flowery synonym for "somehow improve".
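The gap between compounding and one-off gains is easy to put in numbers. A toy sketch (the base count and percentage are arbitrary, chosen only to make the contrast visible):

```python
# Toy sketch: compounding process shrinks vs a single one-off gain.

def transistor_count(initial, doublings):
    """Moore's-law-style growth: the count doubles each period."""
    return initial * 2 ** doublings

base = 1_000_000
print(transistor_count(base, 10))  # ten doublings: 1024x the transistors
print(int(base * 1.15))            # a single 15% improvement: 1.15x, once
```

A one-off trick like moving the rails to the backside shifts the curve up once; only repeated shrinks keep the exponential going.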


Moore's Law and its many relatives are the result of two phenomena working in parallel.

1. Improvements tend to be of the form "X% better by the way we are measuring it."

2. Improvements tend to come at a consistent rate.

This means that the overall trend includes many one-off improvements which do not look related to anything else. And indeed they are not, except that business pressures say, "We have to figure out how to do X by time Y, because that is what we expect our competitors to do," and therefore create an environment where people are constantly looking at the next barrier to improving at the expected rate, and finding improvements at about that same rate.

The Innovator's Dilemma has a good deal to say about the ubiquity of exponential improvements in various fields of technology. Moore's Law is by far the most famous, but is also not unique. Whenever an industry has agreed on a clear metric for "better" and is in a race to deliver it, they tend to get exponential rates of improvement until they either hit a physical limit, or sufficiently overdeliver customer needs that nobody cares any more.

Historically, at different times we have had exponential improvement in everything from producing batteries more cheaply, to scooping more dirt with a backhoe, to increasing the range of steamships.


The article it refers to as prior art, https://spectrum.ieee.org/nanoclast/semiconductors/devices/b... claims that moving the rails to the back side saves about 15% of the surface area of the chip. That's nothing to sneeze at.


Let's not split hairs: Moore's law is the doubling of transistor count in the same area every 18 months.


Which recently we have only managed by shrinking the wires, packing the transistors tighter, increasing the die size, changing the transistor shape, or doing anything but actually shrinking the transistors.


You don't want to shrink the transistors because that increases static leakage, other things being equal. With dark silicon becoming more important, you actually want bigger devices that can minimize that, and use charge-recovery logic to conserve dynamic power as well. Only stuff that's very infrequently powered up can be built naïvely, the way it would be absent dark silicon constraints.


You can shrink even further, no question, and we may go there once the HfO process advances to the point where you can bury everything in it.

There are many ways forward; I'd say even too many of them. Too many people are trumpeting "the death of silicon" or CMOS, and too many people are focusing on "revolutionary" solutions.


Doesn't look like this is ARM's project, but imec's. They just used an ARM for their demonstrator. Could just as well have been a MIPS, except imec is probably well committed to ARM designs.


When did ARM become Arm? It's like that on their own site, too.



This part, though:

"ARM stands for Advanced RISC Machines, the name given to the company when it was spun out of Acorn Computers back in the day. It was abbreviated to ARM when the firm went public in 1998."

It was always abbreviated to ARM, ever since the 1980s. The whole reason for "Advanced RISC Machines" was to match the acronym they already had.


Right, but I think that 1998 is when it (officially/branding-wise) went from being an acronym with an expansion to being just three letters with no official 'long form'. (Compare the way IBM these days is just IBM, not International Business Machines.)


> Compare the way IBM these days is just IBM, not International Business Machines.

This is the second time I've seen someone make this claim online and I wonder where it comes from. IBM's name is still officially "International Business Machines Corporation (IBM Corp.)" according to this legal statement:

https://www.ibm.com/privacy/us/en/.

As for how it's referred to in practice, I've always heard it called IBM for as long as I can remember (at least back to the 70's) and I think that's been the case for much longer.


Oops, my mistake. I did try to check my belief on the IBM website by looking for an 'about the company' section but it was so reader-hostile I gave up and assumed that IBM was the name they were going by these days. Arm is definitely Arm, though, not anything-risc-machines.


Because the original was actually Acorn RISC Machines. They might have as well gone on to become "Acorn", well in line with Apple, which they power nowadays to a large degree, and which they might power to a much larger degree in the near future.


> They might have as well gone on to become "Acorn"

They were Acorn at that point. That's why the CPU was called Acorn RISC Machine (singular). There was never any company called "Acorn RISC Machines".


Good to know, thanks!


The UK's style rules differ from the United States': they don't fully capitalize acronyms, e.g. NASA becomes Nasa.


From wikipedia (https://en.wikipedia.org/wiki/Capitalization_in_English#Capi...):

>Generally acronyms and initialisms are capitalized, e.g., "NASA" or "SOS." Sometimes a minor word such as a preposition is not within the acronym, such as "WoW" for "World of Warcraft". In British English, only the initial letter of an acronym is capitalized if the acronym is read as a word, e.g., "Unesco."


Genuine question: do normal people pronounce ARM as "arm" (like the limb, /ɑːrm/ or /ɑːm/), or do they pronounce it as A.R.M., each letter individually?

I've always said A.R.M. (each individual letter) when talking about ARM processors, and it's never even occurred to me to pronounce it like "arm".

That being said... I do pronounce RISC-V as /rɪsk-viː/ (or, as I believe was the intention, /rɪsk-faiv/, depending on who I am talking to), and that seemed entirely normal as opposed to pronouncing each letter of the acronym...

So maybe I'm just weird here...


The company and CPU name has always been pronounced 'arm' like the body part.


My general rule is to pronounce any three-letter capitalised term as individual characters, so A/R/M and S/Q/L.

Since RISC-V is longer, I pronounce it the same as you, /rɪsk-viː/, like NASA and NATO.


Yeah, it's pronounced like the limb, and always has been, at least in English-speaking countries. Hence the StrongARM pun.


In English, I usually just say it as the body part. In French, however, I would pronounce every letter separately.


Some additional trivia: in Portuguese, you capitalize every letter of any acronym of length <= 3, or of any acronym where you pronounce each letter separately, like HTML. That said, for foreign names like NASA you follow the original country's rules, so we'd never write Nasa.


> UK's grammar rules differ from the United States- they don't capitalize acronyms.

They don't call themselves the Uk, do they?


That's because UK isn't pronounced as a word, it's pronounced letter by letter, "yoo kay", so both letters are capitalised. If it were pronounced as a word ("yuck", perhaps), only the first letter would be capitalised.


An acronym is pronounced as a word. You mean initialism.


I can't say that I've ever noticed that to be generally true, but I'll certainly be looking out for it now.


The only time Brits capitalise the entire acronym is if we are worried about confusing Americans.


Which is always, is it not?


Almost never, actually.


I don't think it is /generally/ true - it's certainly in a bunch of style guides (e.g. The Guardian says Nato) but not in others (e.g. Plain English Campaign says NATO). I'd be amazed if there was an actual grammatical "rule" about this, mind.


Ruthenium is only $8K per kg.


That sounds sarcastic? But a kg is a ridiculous number of chips.

Let's look at a very big chip, 3 cm on a side. Once you thin it to 500 nm, the volume is 0.45 cubic millimetres. Since ruthenium has a density of 12.2 g/cm³, it would take 5.5 milligrams to make a version of the chip that was solid ruthenium all the way through. That's 4.4 cents' worth.
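That back-of-the-envelope estimate checks out; a quick sketch reproducing it, using exactly the figures above (30 mm × 30 mm die thinned to 500 nm, solid ruthenium at 12.2 g/cm³, $8,000/kg):

```python
# Reproduce the back-of-the-envelope ruthenium cost estimate.

side_mm = 30.0                        # 3 cm die edge
thickness_mm = 500e-6                 # 500 nm expressed in mm
volume_mm3 = side_mm * side_mm * thickness_mm       # 0.45 mm^3

density_g_per_mm3 = 12.2 / 1000.0     # 12.2 g/cm^3 -> g/mm^3
mass_mg = volume_mm3 * density_g_per_mm3 * 1000.0   # ~5.5 mg

price_usd_per_mg = 8000.0 / 1e6       # $8,000/kg -> $/mg
cost_usd = mass_mg * price_usd_per_mg
print(f"{volume_mm3:.2f} mm^3 -> {mass_mg:.2f} mg -> ${cost_usd:.4f}")
```

And a real backside power network is a thin patterned layer, not a solid slab, so the actual material cost per chip would be far below even these few cents.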


> Backside Power Delivery

So they're saying some stimulation applied in a proper place gets things going faster?



