This is a super interesting result. As others have pointed out, two-sided dies have been tried in the past, but it was expensive and the backside components couldn't be very interesting because you're limited in what you can do (for example, ion implantation can shoot through the place you're trying to hit and damage the component on the 'other' side; not a problem when you're laying down the first set of devices on raw silicon, a big problem if you've got structures underneath you need to protect).
Doing just power delivery on the back side, by contrast, is a much simpler proposition, and you could likely add simple transistors to switch finer-grained power domains on and off for even better power and heat management.
Was seconds away from posting this too. Basically, we have a situation where logic voltage is approaching silicon's threshold voltage of ~0.7 V.
With logic operating at such low voltage, the current you have to supply to the die is very high even in "low power" chips, and you begin suffering significant copper losses over single millimetres.
If you can put a DC-DC converter on the chip itself, that problem is solved.
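For a sense of scale, here's a back-of-the-envelope sketch (all numbers are hypothetical, chosen just for illustration): the I²R loss in the delivery path falls with the square of the supply voltage, which is the whole appeal of converting down as close to the die as possible.

    # Illustrative I^2 * R copper loss in the power delivery path.
    # Both the power draw and the path resistance are made-up numbers.

    def copper_loss(power_w, supply_v, path_resistance_ohm):
        # P_loss = I^2 * R, with I = P / V
        current_a = power_w / supply_v
        return current_a ** 2 * path_resistance_ohm

    P = 100.0   # chip power draw in watts (hypothetical)
    R = 0.5e-3  # delivery-path resistance in ohms (hypothetical 0.5 mOhm)

    for v in (1.0, 12.0, 48.0):
        loss = copper_loss(P, v, R)
        print(f"{v:4.0f} V supply: {P / v:6.2f} A, {loss:7.4f} W lost in copper")

At 1 V the same 100 W means 100 A and 5 W burned in half a milliohm; at 48 V it's about 2 A and a couple of milliwatts.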
That idea has been proposed before, and we even looked into it. The problem is that on-die DC-DC converter efficiency and current per mm² are generally way too low. Converters using inductors do better, but getting high-Q inductors on a chip in a small area is difficult.
GaN-on-Si could work, as could ferrite-based inductors, but at least for the foreseeable future the incentive doesn't seem strong enough to make it a wide-scale thing. Heterogeneous integration (chiplets, flip-chip bonding, etc.) might be the route if GaN converters do get integrated.
It was removed in Skylake, not Broadwell. Ice Lake reintroduces the FIVR. The inductors are now in the package substrate (rather than discrete inductors on the package, or the weird 3DL module in Broadwell-Y), and efficiency at low power is improved. The latter is obviously extremely important for mobile workloads.
Backside processing is standard in the CMOS image sensor industry. Turns out all the metal on the front side makes it very difficult to get light down to the silicon where the actual photon-electron transduction takes place. Backside illumination (BSI) has been standard for quite a few years.
Wafer thinning is used extensively in stacked-die products which have become common in recent years.
This looks like an incremental change, applying tech from the image sensor/memory world to standard logic silicon. I'm not sure why they chose ruthenium, though.
BSI is very simple compared to what they are proposing here. It doesn't require that any features actually be built into the backside; it basically just requires that the backside be polished very nicely.
Here we are talking about building elements on the backside and then connecting them to the front side. It is much trickier.
Copper has long been popular for metallization, but it is running into problems at finer linewidths. Cobalt and ruthenium are two popular alternatives being researched. I don't follow the industry closely enough to know if anyone is using them in volume manufacturing yet.
"Imec Presents Copper, Cobalt and Ruthenium Interconnect Results"
Cobalt needs a barrier to prevent mixing under some circumstances where it contacts copper. Ruthenium doesn't need any barriers but chemical mechanical polishing (CMP) processes for it still have problems.
> The only trade-off is the complexity of manufacturing the backside network, Prasad noted. To make it, the frontside of the wafer must be fully processed, including the construction of the buried power rails. The wafer is then flipped over, and the silicon is removed down to a mere 500 nanometers thickness. Then vertical connections less than 1 micrometer across, called micro-through-silicon vias (microTSVs), were built to contact the buried power rails.
Making double-sided chips, with TSVs on them, is nothing new. It's been tried in the past and didn't take off.
Even setting aside the cost of the TSVs, a double-sided wafer costs twice as much as a normal one, while you can squeeze at most 20-30% more onto it, given that the backside will only be usable for lower-tier logic or an analogue process.
At most this looks to provide a linear performance improvement on each iteration (if it's not a one-off improvement, which it looks to be).
Moore's law implies exponential improvement on each process shrink, up to the physical limits we hit over the last decade (this is explicitly defined in Moore's paper), i.e. Dennard scaling. Roughly speaking, when you shrink the process you reduce latency (allowing a faster clock with the same design, or a new larger design, i.e. double the transistors with the same maximum straight-line latency) without increasing power consumption or absolute heat dissipation requirements. It's almost "free" if you can keep shrinking, which is why you get exponential improvement if you can keep halving.
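A minimal sketch of that Dennard-scaling arithmetic (idealized, ignoring leakage): shrink dimensions and voltage by a factor s, and power per transistor falls as s² while density rises as 1/s², so power density stays flat even as the clock speeds up.

    # Idealized Dennard scaling: shrink dimensions and voltage by s (< 1).
    # Dynamic power per transistor ~ C * V^2 * f; C ~ s, V ~ s, f ~ 1/s.

    s = 0.7  # classic ~0.7x linear shrink per node

    capacitance = s       # relative to the previous node
    voltage     = s
    frequency   = 1 / s

    power_per_transistor = capacitance * voltage**2 * frequency  # = s^2
    transistors_per_area = 1 / s**2                              # density ~doubles
    power_density        = power_per_transistor * transistors_per_area

    print(f"power per transistor: {power_per_transistor:.2f}x")  # ~0.49x
    print(f"density:              {transistors_per_area:.2f}x")  # ~2.04x
    print(f"power density:        {power_density:.2f}x (flat)")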
Media have completely watered down the term into a flowery synonym for "somehow improve".
Moore's Law and its many relatives are the result of two phenomena working in parallel.
1. Improvements tend to be of the form "X% better by the way we are measuring it."
2. Improvements tend to come at a consistent rate.
This means that the overall trend includes many one-off improvements which do not look related to anything else. And indeed they are not, except that business pressures say, "We have to figure out how to do X by time Y, because that is what we expect our competitors to do," and therefore create an environment where people are constantly looking at the next barrier to improving at the expected rate, and finding improvements at about that same rate.
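The compounding is easy to see numerically; a fixed percentage gain delivered at a fixed cadence is exponential by definition (the 40%-per-generation figure below is hypothetical):

    # A consistent "X% better per release" compounds exponentially.
    # Hypothetical: 40% improvement per generation, 10 generations.

    rate = 0.40
    metric = 1.0
    for gen in range(1, 11):
        metric *= 1 + rate
        print(f"gen {gen:2d}: {metric:6.1f}x the original")
    # After 10 generations: 1.4**10 ~ 28.9x, even though each step
    # was just another one-off "40% better" improvement.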
The Innovator's Dilemma has a good deal to say about the ubiquity of exponential improvements in various fields of technology. Moore's Law is by far the most famous, but is also not unique. Whenever an industry has agreed on a clear metric for "better" and is in a race to deliver it, they tend to get exponential rates of improvement until they either hit a physical limit, or sufficiently overdeliver customer needs that nobody cares any more.
Historically, at different times, we have had exponential improvement in everything from producing batteries more cheaply to scooping more dirt with a backhoe to increasing the range of steamships.
Which we recently have only managed by shrinking the wires, packing the transistors tighter, increasing the die size, changing the transistor shape, or doing anything but actually shrinking the transistors.
You don't want to shrink the transistors because that increases static leakage, other things being equal. With dark silicon becoming more important, you actually want bigger devices that can minimize that, and use charge-recovery logic to conserve dynamic power as well. Only stuff that's very infrequently powered up can be built naïvely, the way it would be absent dark silicon constraints.
You can shrink it even further, no question, and we may well go there once the HfO process advances to the point where you can bury everything in it.
There are many ways forward; I'd say even too many of them. Too many people are trumpeting "the death of silicon" or CMOS, and too many people are focusing on "revolutionary" solutions.
Doesn't look like this is ARM's project, but imec's. They just used an ARM core for their demonstrator. It could just as well have been a MIPS, except imec is probably well committed to ARM designs.
"ARM stands for Advanced RISC Machines, the name given to the company when it was spun out of Acorn Computers back in the day. It was abbreviated to ARM when the firm went public in 1998."
It was always abbreviated to ARM, ever since the 1980s. The whole reason for "Advanced RISC Machines" was to match the acronym they already had.
Right, but I think that 1998 is when it (officially/branding-wise) went from being an acronym with an expansion to being just three letters with no official 'long form'. (Compare the way IBM these days is just IBM, not International Business Machines.)
> Compare the way IBM these days is just IBM, not International Business Machines.
This is the second time I've seen someone make this claim online and I wonder where it comes from. IBM's name is still officially "International Business Machines Corporation (IBM Corp.)" according to this legal statement:
As for how it's referred to in practice, I've always heard it called IBM for as long as I can remember (at least back to the '70s), and I think that's been the case for much longer.
Oops, my mistake. I did try to check my belief on the IBM website by looking for an 'about the company' section but it was so reader-hostile I gave up and assumed that IBM was the name they were going by these days. Arm is definitely Arm, though, not anything-risc-machines.
Because the original actually stood for Acorn RISC Machine. They might just as well have gone on to become "Acorn", well in line with Apple, which they power nowadays to a large degree, and which they might power to a much larger degree in the near future.
>Generally acronyms and initialisms are capitalized, e.g., "NASA" or "SOS." Sometimes a minor word such as a preposition is not within the acronym, such as "WoW" for "World of Warcraft". In British English, only the initial letter of an acronym is capitalized if the acronym is read as a word, e.g., "Unesco."
Genuine question: do normal people pronounce ARM as "Arm" (like the limb), as in /ɑːrm/ or /ɑːm/, or do they pronounce it like A.R.M, each letter individually?
I've always said ARM (as in each individual letter) when talking about ARM processors, and it's never even occurred to me to pronounce it like Arm.
That being said... I do pronounce RISC-V as /rɪsk-viː/ (or, as I believe was the intention, /rɪsk-faɪv/, depending on who I am talking to), and that seemed entirely normal, as opposed to pronouncing each letter of the acronym...
Some additional trivia: in Portuguese, you capitalize every letter of any acronym of length <= 3, or of any acronym where you pronounce each letter separately, like HTML. That said, for foreign names like NASA you follow the original country's conventions, so we'd never write Nasa.
That's because UK isn't pronounced as a word, it's pronounced letter by letter: yoo kay, so both letters are capitalised. If it were pronounced as a word (yuck, perhaps), only the first letter would be capitalised.
I don't think it is /generally/ true - it's certainly in a bunch of style guides (e.g. The Guardian says Nato) but not in others (e.g. Plain English Campaign says NATO). I'd be amazed if there was an actual grammatical "rule" about this, mind.
That sounds sarcastic? But a kg is a ridiculous number of chips.
Let's look at a very big chip, 3 cm on a side. Once you thin it to 500 nm, the volume is 0.45 cubic millimeters. Since ruthenium has a density of 12.2 g/cm³, it would take 5.5 milligrams to make a version of the chip that was solid ruthenium all the way through. That's about 4.4 cents' worth.
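Checking that arithmetic (the ruthenium price of roughly $8/gram is an assumption, back-solved from the 4.4-cent figure):

    # Back-of-the-envelope from the parent comment.
    # Assumed ruthenium price of ~$8/gram (implied by the 4.4 cent figure).

    side_mm      = 30.0      # 3 cm die edge
    thickness_mm = 500e-6    # 500 nm after thinning
    density      = 12.2      # g/cm^3, which equals mg/mm^3
    price_per_g  = 8.0       # USD, assumed

    volume_mm3 = side_mm * side_mm * thickness_mm   # 0.45 mm^3
    mass_mg    = volume_mm3 * density               # ~5.5 mg
    cost_usd   = mass_mg / 1000 * price_per_g       # ~$0.044

    print(f"volume: {volume_mm3:.2f} mm^3, mass: {mass_mg:.2f} mg, "
          f"cost: {cost_usd * 100:.1f} cents")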