
Intel Can Now Mesh Different Process Nodes on the Same Chip

One constant of CPU manufacturing for decades has been that different components on the same die must share a common process node. It’s certainly possible to build a package that combines, say, a 14nm CPU with a large pool of cache built at 22nm, or to place a CPU built on one process node alongside a GPU built on a different node, but not on the same physical piece of silicon. Intel has used both approaches in the past. At Hot Chips last week, however, the chip manufacturer showed off something different: a new packaging solution that offers an alternative to expensive 2.5D interposers (used by both AMD and Nvidia for various high-end GPUs). Intel has discussed its embedded multi-die interconnect bridge (EMIB) before, but it revealed additional details at Hot Chips this year.

Here’s how Intel describes EMIB on its site: “We sought a solution that is practical to design, reliable across any die, and simple to implement in a design. The result is the Embedded Multi-die Interconnect Bridge, affectionately abbreviated to EMIB. There can be many embedded bridges in a single substrate, providing extremely high I/O and well controlled electrical interconnect paths between multiple die, as needed. Because the chips do not have to be connected to the package through a silicon interposer with TSVs, there is nothing to potentially degrade their performance. We use micro-bumps for high density signals, and coarser pitch, standard flip chip bumps for direct power and ground connections from chip to package.”

There are theoretical advantages to using EMIB as opposed to a silicon interposer. To function effectively, a silicon interposer has to span the combined footprint of all the dies it connects. Aligning this layer and manufacturing it with the requisite number of through-silicon vias (TSVs) has historically been difficult: the interposer itself is a simple sheet of silicon, but wiring it up properly and connecting it to all the requisite devices is not trivial. The diagram below shows a 2.5D interposer at the top and Intel’s EMIB solution at the bottom:

Intel’s goal is to move from a traditional monolithic CPU design to an approach that lets it mesh components built on different process nodes into the same physical chip. Certain components, like modems, don’t require, or necessarily benefit from, smaller process nodes. In other cases, Intel could reserve 10nm for the hardware most likely to benefit from it, while other components make the transition to the new node over time. One reason Santa Clara wants to build in this stepwise fashion is that it makes it easier to deploy critical components on new nodes first, with other hardware adopting the technology when it makes sense to do so.

EMIB could also bypass some of the limitations of interposers, allowing for larger chip sizes without the reticle limits that prevent interposers from growing above a certain size. Interposer designs also require that all chip-to-chip communication use the interposer layer, while EMIB allows for more flexibility. Intel is initially deploying the technology in its FPGAs, but is clearly evaluating its use in other, upcoming products. Interposers and HBM2 have, thus far, been stuck at the top end of the GPU market with limited utility to other types of products. If Intel’s EMIB delivers the power consumption improvements and other benefits it promises, we could see this tech move over to Core in the not-too-distant future.
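To make the reticle limit concrete, here’s a back-of-the-envelope sketch in Python. The ~858mm² figure is the standard 26mm × 33mm lithography reticle field; the die sizes below are purely illustrative assumptions, not Intel’s (or anyone’s) actual numbers:

```python
# Rough sketch: a 2.5D silicon interposer must span all the dies it
# connects, so its own area is bounded by the lithography reticle field.
RETICLE_MM2 = 26 * 33  # standard 26mm x 33mm reticle field, 858 mm^2

# Illustrative (assumed) die footprints: one large GPU plus four HBM2 stacks
gpu_mm2 = 600
hbm2_stack_mm2 = 92
dies_mm2 = gpu_mm2 + 4 * hbm2_stack_mm2  # 968 mm^2 of silicon to cover

# With an interposer, the whole assembly must fit within one reticle field
fits_on_interposer = dies_mm2 <= RETICLE_MM2
print(fits_on_interposer)  # False: this layout exceeds the reticle limit
```

EMIB sidesteps this constraint because each bridge is a small sliver of silicon embedded in the organic package substrate, spanning only the edge where two adjacent dies meet, so total package size isn’t tied to the reticle field.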

extremetech.com

August 28, 2017
