Google is slowly building its reputation in custom hardware development. While a design engineer may not be able to buy a standalone IC from Google, the company has built its own cloud compute and server infrastructure largely in-house with many ASICs in its repertoire. 

Google’s cloud infrastructure is largely built in-house. Image used courtesy of Google

The company made hardware headlines this week when it announced that Uri Frank, a long-time silicon veteran from Intel, has assumed the role of VP of engineering for server chip design. The new team, headed by Frank, will be headquartered in Israel.

What exactly is Google’s history in custom hardware? And what does this new hire tell us about Google’s plans for future integration? 

A Brief History of Google Hardware

One of Google’s first big achievements in custom hardware was its Tensor Processing Unit (TPU), introduced in 2016.

The TPU is an ASIC designed specifically for neural network acceleration and is deployed in Google’s data centers, powering well-known applications such as Street View, RankBrain, and AlphaGo. At the time of its release, the TPU provided an order-of-magnitude improvement in performance per watt for machine learning (ML) applications, according to Google.
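
To make the acceleration target more concrete, the minimal sketch below uses JAX, a public Python library from Google that compiles numerical code through XLA for whichever accelerator is available (TPU, GPU, or CPU). It is only an illustration of the kind of matrix-heavy workload TPUs are built for; the function, shapes, and data here are invented for the example and are not Google-internal code.

```python
# Minimal sketch (illustrative only) of how user code can target an
# accelerator such as a Cloud TPU through the public JAX library.
import jax
import jax.numpy as jnp

# Lists whatever accelerators the runtime sees; on a Cloud TPU VM these
# would be TPU cores, on a laptop this falls back to CPU.
print(jax.devices())

@jax.jit  # XLA compiles this function for the available backend
def dense_layer(x, w, b):
    # A single matrix multiply plus bias and ReLU: the kind of operation
    # the TPU's matrix unit is designed to accelerate.
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
kx, kw = jax.random.split(key)
x = jax.random.normal(kx, (128, 256))   # batch of activations
w = jax.random.normal(kw, (256, 512))   # layer weights
b = jnp.zeros((512,))

y = dense_layer(x, w, b)
print(y.shape)  # (128, 512)
```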

Google’s TPU board. Image used courtesy of Google

Another hardware endeavor for Google was OpenTitan, an open-source silicon root-of-trust project built on Google’s Titan chip.

As explained by AAC contributor Cabe Atwell, OpenTitan was the first open-source project of its kind, aiming to create secure ICs for data center applications. Because the design is open, a wide community can audit it, which may help identify and iron out security vulnerabilities in OpenTitan ICs.
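
For readers new to the term, a silicon root of trust anchors a device's boot chain: immutable hardware verifies the next stage of firmware before letting it run. The Python sketch below is purely conceptual and is not OpenTitan code; the key, firmware bytes, and HMAC scheme are stand-ins (production designs use asymmetric signatures verified by logic and keys baked into the chip).

```python
# Conceptual sketch of the verified-boot check a silicon root of trust
# performs. NOT OpenTitan code: key material and firmware are hypothetical.
import hashlib
import hmac

# In real silicon, the verification key (or its hash) is provisioned into
# one-time-programmable memory at manufacturing time.
DEVICE_KEY = b"example-key-provisioned-at-manufacture"

def sign_firmware(image: bytes) -> bytes:
    # Stand-in for the vendor signing step (real schemes use asymmetric
    # signatures such as RSA or ECDSA rather than an HMAC).
    return hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()

def verify_and_boot(image: bytes, signature: bytes) -> bool:
    # The root of trust recomputes the signature over the firmware image
    # and only transfers control if it matches.
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    if hmac.compare_digest(expected, signature):
        print("Signature valid: handing off to next boot stage")
        return True
    print("Signature mismatch: refusing to boot")
    return False

firmware = b"next-stage bootloader bytes"
good_sig = sign_firmware(firmware)

verify_and_boot(firmware, good_sig)                # boots
verify_and_boot(firmware + b"tampered", good_sig)  # refuses
```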

Last year also brought reports that Google was collaborating with Samsung on a chip codenamed “Whitechapel,” said to be an 8-core Arm processor built on a 5 nm process, intended to eventually power Pixel smartphones and perhaps even Chromebooks.

“The SoC Is the New Motherboard” 

In Google’s recent announcement of Uri Frank’s new position, the tech giant also offered a window into the direction of its custom hardware efforts.

Uri Frank, Google’s new VP of engineering for server chip design. Image used courtesy of Intel and Times of Israel

“To date, the motherboard has been our integration point, where we compose CPUs, networking, storage devices, custom accelerators, memory, all from different vendors, into an optimized system,” the press release reads.

“But that’s no longer sufficient: to gain higher performance and to use less power, our workloads demand even deeper integration into the underlying hardware.” To meet these demands, Google has decided that a higher level of integration, specifically in the form of SoCs, is the future of its hardware.

By integrating functions on a single die rather than connecting individual ICs via traces on a board, SoCs can deliver significant improvements in latency, bandwidth, power, and cost, in large part by eliminating off-chip parasitics. It’s for these reasons that Google is banking on integration, with peripheral and interconnected ICs brought together on one SoC. For Google, this means “the SoC is the new motherboard.”

Google Joins the Trend of In-house Processors

Google isn’t the first tech giant to take the plunge into in-house processors. In the past few years, Amazon has released its own custom Arm-based processors. Facebook has also developed application-specific hardware to bolster its AI technology. More recently, Apple stepped away from Intel silicon altogether to make way for its own in-house processors for Macs.

Now, with Google’s selection of Frank—a CPU design expert—the company plans to build a chip design team in Israel, where Google has found past success in innovating products like Waze, Call Screen, and Velostrata’s cloud migration tools. 

This post was first published on: All About Circuits