A few years ago, multiple high-end companies were backing the idea of moving from 300mm to 450mm silicon wafers. Moving to larger wafers has historically been a critical way for foundries and fabs to cut per-chip costs and improve throughput. Companies like Intel led the charge on wafer size, but there has always been a very long tail: some 23 firms currently have 300mm fabs in production, while 58 companies still operate 200mm fabs. 450mm wafers were meant to extend the cost savings of 300mm wafers even further, but high costs and uncertain rollouts appear to have doomed the endeavor.
Several years ago, Intel, Samsung, GlobalFoundries, TSMC, and IBM collectively launched the Global 450 Consortium (G450C) in partnership with The Colleges of Nanoscale Science and Engineering (CNSE) at SUNY Polytechnic Institute (SUNY Poly). This collaboration was a $4.8 billion endeavor (over five years) to develop tools, work with suppliers on 450mm ecosystem development, and create appropriate infrastructure for the future deployment of 450mm wafers. This was no small task — larger wafers mean different tools, and tool costs are a substantial reason why process nodes are becoming more expensive over time. EUV lithography tools, for example, are significantly more expensive than traditional 193nm ArF systems. According to a recent report in the Times-Union, two of the five companies involved in the G450C are pulling out after the end of the five-year program.
A report from IC Insights published in October 2016 sheds some additional light on this situation. While 300mm wafers account for more total production capacity than 200mm wafers, they are also limited to specific areas of the market. DRAM, NAND flash, image sensors, power management devices, CPUs, GPUs, and other high-volume products are typically built on 300mm wafers, while 200mm wafers are used for smaller runs, where lower total volumes are expected. We've discussed before how TSMC makes a significant percentage of its revenue from nodes that haven't been cutting-edge in a decade or more. Many of these older nodes are paired with older equipment to keep them cost-effective and minimize the purchase of new hardware.
There are several intrinsic advantages to using larger wafers. If a foundry can keep its wafers-per-hour production rate on 450mm wafers close to its 300mm rate, it can produce vastly more chips per hour. This helps reduce costs, provided that the semiconductor economy is healthy and the foundry utilization rate is high. Being able to build more chips per hour can also allow older lines to be shut down, saving on factory costs. Larger wafers also waste proportionally less area around the edge of the wafer, which matters most for large-die processors and reduces overall waste as a result. The graph above shows Intel's estimated wafer costs over time, and illustrates why the company was anxious to move to 450mm wafers over the long term. If the shift had been successful, Intel's long-term roadmap for Atom processors might have taken a different path, since 450mm wafers would have allowed the company to reduce its per-die Atom cost and likely compete more effectively against TSMC, Samsung, and GlobalFoundries (at least, until those firms rolled out their own 450mm lines).
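The edge-waste point above can be made concrete with a back-of-the-envelope calculation. A commonly cited approximation for gross dies per wafer is pi*(d/2)^2/S minus an edge-loss correction of pi*d/sqrt(2*S), where d is the wafer diameter and S the die area. The sketch below uses that formula with a hypothetical 100mm² die; the exact figures are illustrative, not from the article:

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Approximate gross dies per wafer.

    Uses the common estimate:
        dies = pi * (d/2)^2 / S  -  pi * d / sqrt(2 * S)
    The first term is wafer area divided by die area; the second
    subtracts partial dies lost around the circular edge.
    """
    d = wafer_diameter_mm
    s = die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

# Hypothetical 100 mm^2 die on both wafer sizes.
dies_300 = dies_per_wafer(300, 100)
dies_450 = dies_per_wafer(450, 100)
print(dies_300, dies_450, round(dies_450 / dies_300, 2))
```

Note that the ratio comes out slightly above the raw 2.25x area ratio of a 450mm wafer to a 300mm wafer: because edge loss is proportionally smaller on the bigger wafer, the die-count gain exceeds the area gain, and the effect grows with die size.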
With multiple companies pulling out of G450C and no clear roadmap for the technology, it seems safe to conclude that 450mm wafers are pretty much dead. Pilot programs have ended, firms are still focused on ramping up 300mm production at various foundries, and no semiconductor firm we're aware of is still championing 450mm wafer research or deployment. High tool costs and the expense of replacing existing equipment appear to have undermined any argument for superior long-term cost savings or improved wafer utilization.