This week Google partially lifted the curtain of secrecy surrounding the homegrown network architecture it has built over the past decade to handle the massive volume of Internet traffic flowing through the search giant's servers. To divulge the details, Google turned to an adjunct CSE professor. Amin Vahdat, who began advising Google while still teaching at UC San Diego and leading the university's Center for Networked Systems (CNS), is now a full-time Google Fellow and Technical Lead for Networking at the company, and he remains an adjunct member of the CSE faculty. [Vahdat is pictured below during the 2013 CNS Research Review.]
Vahdat gave a presentation at the 2015 Open Networking Summit on June 17, "revealing for the first time the details of five generations of our in-house network technology," according to Google. While careful not to divulge too many proprietary details, he presented a first look into Google's datacenter network design and implementation, focusing on the data, control and management plane principles underpinning five generations of the company's network architecture.

Vahdat told the conference that around 2005, the hardware Google required to build a network of the size and speed it needed simply did not exist. So instead of buying networking equipment from companies such as Cisco Systems, Google designed its own and had it made to order in Asia and elsewhere. Today, he said, Google designs 100 percent of the networking hardware used inside its datacenters. As a result, the company has been able to boost the capacity of a single datacenter network more than 100-fold in 10 years. The current generation of cluster switches, called Jupiter, delivers about 40 terabits per second of bandwidth, the equivalent of 40 million home Internet connections. That capability, Vahdat said, is critical to meeting Google's bandwidth and scale demands, which are growing exponentially, doubling approximately every year.
Timed to coincide with his talk, Vahdat posted an article on the Google Cloud Platform blog. "Our datacenter networks are shared infrastructure," he wrote. "This means that the same networks that power all of Google's internal infrastructure and services also power Google Cloud Platform. We are most excited about opening this capability up to developers across the world so that the next great Internet service or platform can leverage world-class network infrastructure without having to invent it."
The hallmark of Google's network approach was to move complexity out of the hardware and into the software, the technique now known as software-defined networking, which allowed the company to build complex networks on top of relatively cheap and abundant microchips. "Taken together, our network control stack has more in common with Google's distributed computing architectures than traditional router-centric Internet protocols," added Vahdat. "Some might even say that we've been deploying and enjoying the benefits of software-defined networking at Google for a decade... these systems come from our early work in datacenter networking." While Vahdat was speaking about Google's early work specifically, his own early work in CSE and the Center for Networked Systems pointed to the importance of software-defined networking for datacenters, and he put theory into practice when given the opportunity to help create what may be the largest computer network in the world, giving CSE some bragging rights by association.
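The contrast Vahdat draws between router-centric protocols and a centralized control stack is easiest to see in miniature. The sketch below is a generic illustration of the software-defined networking idea, not Google's actual control plane; the topology, names, and functions are all hypothetical. A logically centralized controller with a global view of a small Clos-style fabric computes shortest-path routes and emits the simple forwarding tables each switch would carry, the kind of route computation that traditional networks instead perform through distributed protocols running on every router.

```python
from collections import deque

# Toy software-defined-networking sketch (hypothetical, not Google's design):
# switches hold only simple forwarding tables, while a logically centralized
# controller with a global view of the topology computes all routes.

def shortest_next_hops(topology, source):
    """BFS over an adjacency map {switch: [neighbors]}; returns, for each
    reachable destination, the first hop to take from `source`."""
    next_hop = {}
    visited = {source}
    queue = deque((neighbor, neighbor) for neighbor in topology[source])
    while queue:
        node, first_hop = queue.popleft()
        if node in visited:
            continue
        visited.add(node)
        next_hop[node] = first_hop  # BFS order guarantees a shortest path
        for neighbor in topology[node]:
            queue.append((neighbor, first_hop))
    return next_hop

def build_forwarding_tables(topology):
    """The controller's job: one forwarding table per switch, mapping each
    destination switch to the neighbor to forward on."""
    return {switch: shortest_next_hops(topology, switch) for switch in topology}

if __name__ == "__main__":
    # A tiny two-stage Clos-like fabric: two spines, three leaves.
    fabric = {
        "spine1": ["leaf1", "leaf2", "leaf3"],
        "spine2": ["leaf1", "leaf2", "leaf3"],
        "leaf1": ["spine1", "spine2"],
        "leaf2": ["spine1", "spine2"],
        "leaf3": ["spine1", "spine2"],
    }
    tables = build_forwarding_tables(fabric)
    print("leaf1 -> leaf3 via", tables["leaf1"]["leaf3"])
```

The point of the sketch is the division of labor: all path computation lives in one ordinary program that could run on commodity servers, while the switches reduce to table lookups, which is what makes building fabrics from cheap, abundant chips practical.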
Read Amin Vahdat's post on the Google Cloud Platform blog.
Learn more about the 2015 Open Networking Summit.