How Apple Makes the AI Chip Powering the iPhone's Fancy Tricks

A few years ago—the company won’t say exactly when—some engineers at Apple began to think the iPhone’s camera could be made smarter using newly powerful machine-learning algorithms known as neural networks. Before long, they were talking with a lean vice president named Tim Millet.

Millet leads a team of chip architects, who got to work. When the iPhone X was unveiled last fall, Apple’s camera team had added a slick new portrait mode that can digitally adjust the lighting on subjects’ faces and artfully blur the background. It took advantage of a new module added to the iPhone’s main chip called the neural engine, customized to run machine-learning code. The same specialized new silicon enabled the iPhone X’s novel face-recognition unlock system, Face ID. “We couldn’t have done that properly without the neural engine,” says Millet.

That iPhone engineers could tap in-house chip chops to help create an innovative feature like Face ID shows the benefits of Apple’s unconventional hardware strategy. Most computer and gadget makers buy the chips at the heart of their devices from semiconductor manufacturers such as Intel, Qualcomm, or Samsung. By contrast, every iPhone since 2010’s iPhone 4 has been powered by an Apple-designed chip, made to order by an outside firm.

Millet says the approach stems in part from former CEO Steve Jobs’ feeling that off-the-shelf chips were constraining the dreams of his gadget designers. “It’s about owning the pieces that are critical and letting nothing get in your way,” Millet says. “The experiences we deliver through the phone are critically dependent on the chip.”

Apple’s ascent as a chip designer was evident last week when the company unveiled three new iPhones at Steve Jobs Theater, the subterranean auditorium at the company’s new headquarters in Cupertino, California.

All three devices pack a new chip designed by Millet’s team called the A12 Bionic. It’s made with more advanced chip technology than the equivalent chip in any mobile device on the market. The A12’s transistors have features as small as 7 nanometers, notably smaller than the 10 nanometer transistors in last year’s iPhone. That helped Apple pack in 6.9 billion transistors, up from last year’s 4.3 billion.

The luxury of more transistors opened new creative avenues for Millet’s team, he says. How they were deployed suggests Apple’s priorities for the iPhone. Engineers made the chip’s graphics processor significantly more powerful; they also created a much larger neural engine to soup up artificial intelligence functions. Last year’s neural engine was capable of 600 billion operations per second; the new design can perform 5 trillion.

Millet says those upgrades contributed to the improved portrait mode that lets users adjust the depth of field after a photo has been taken, as well as more accurate and realistic augmented-reality experiences. The neural engine is also now open for use by outside developers, in tandem with recent software updates aimed at inspiring new apps built around artificial intelligence. “This is an emerging class of applications that’s super important,” Millet says.
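For developers, that access runs through Apple’s Core ML and Vision frameworks rather than the silicon itself; the operating system decides whether a given model executes on the neural engine, the GPU, or the CPU. The sketch below is a minimal, hypothetical Swift example of that workflow, not Apple’s own code: “FlowerClassifier” stands in for whatever trained .mlmodel file an app might bundle, with Xcode generating the wrapper class.

```swift
import UIKit
import CoreML
import Vision

// Hypothetical sketch: running an on-device image classifier through
// Core ML and Vision, which can schedule the work onto the neural
// engine when the hardware supports it. "FlowerClassifier" is a
// placeholder for any .mlmodel compiled into the app.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: FlowerClassifier().model) else {
        return
    }

    // The completion handler receives ranked labels with confidence scores.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let best = (request.results as? [VNClassificationObservation])?.first else {
            return
        }
        print("\(best.identifier): \(best.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```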

Google thinks so, too. The company included a specialized image processor tuned to run neural networks in the Pixel 2 phones it released last October. Google designed the component for its devices in collaboration with Intel, but bought an off-the-shelf main chip from Qualcomm.

Apple’s freedom to co-evolve its silicon with its software, hardware, and corporate strategy is particularly valuable now that sales of iPhones have plateaued. The company needs new features to prod iPhone owners to upgrade and to support the price increases that have kept revenues growing. “They can provide a smoother experience and put new features in earlier,” says Patrick Moorhead, a semiconductor analyst with Moor Insights & Strategy. “That helps them command a premium price.” Samsung also makes both smartphones and smartphone processors, but those units of the business aren’t as tightly intertwined as Apple’s, and the South Korean company also sells its processors to other device makers.

Apple won’t say who is making its new A12 chips. Semiconductor industry wonks say it’s Taiwan’s TSMC. Apple’s chief operating officer was quoted at a TSMC event last October saying TSMC was the sole supplier of new iPhone and iPad chips, praising it for producing half a billion Apple chips in under a year.

To ensure access to TSMC’s 7-nanometer technology, Moorhead says, Apple would have made significant financial commitments. “They’re making capital expenditures to buy fab space and get first in line,” he says. Apple says it expects to spend $17 billion in capital expenditures in the year ending September 29. That’s more than eight times as much as it spent in 2010, when Jobs announced the first Apple-designed chip, inside the iPhone 4.

Apple is the world’s most valuable company, but its chip designers will soon face an even greater power—the laws of physics. The semiconductor industry is reasonably confident it can deliver 5 nanometer transistors around 2020. How to shrink them further is less clear. Moore’s Law, the name given to the decades-long trend of transistors getting exponentially smaller, has slowed and may be over.

Apple’s strategy could become even more important if, or when, transistors stop shrinking. Chip design would then be the primary way to get more out of silicon, and Apple’s end-to-end control of the iPhone should provide more flexibility.

Millet declines to answer questions about his team’s plans, although he notes that the team is already looking far beyond the A12 announced this week. “It takes us roughly a couple of years to build a chip from beginning to end,” Millet says. Somewhere inside Apple's glassy ring-shaped headquarters, the hardware that will power next year’s new iPhone features is already taking shape.
