Disaggregating Data Centers

Apr 1, 2015
George Porter

CSE Prof. George Porter is back from the big annual Optical Fiber Communication Conference (OFC) and expo, which took place in Los Angeles this year. The associate director of UC San Diego’s Center for Networked Systems (CNS) and two colleagues from the ECE department – Shaya Fainman and George Papen – were on the organizing/steering committee of a day-long industry workshop held on March 22. The topic: “Photonics for Disaggregated Data Centers.” The workshop – funded by NSF's Center for Integrated Access Networks (CIAN) and the Optical Society's Industry Development Associates trade group – explored the intersection of two relatively new research areas: data center networking and disaggregated server design.

Large-scale Internet data centers host tens or hundreds of thousands of servers, powering services such as search, social networking, streaming video, online shopping and healthcare. The cost and energy demands of such facilities depend heavily on how efficiently the servers can work together, which in turn depends on the quality of the network interconnecting them. “As servers get faster and faster, the demands placed on the data center network get increasingly hard to meet,” says Porter. “Industry is increasingly moving to fiber optics and photonics as a technology that can meet these incredible bandwidth requirements. A major topic of our workshop involved understanding how to develop next-generation photonics to power the requirements of data center networks.”

The workshop also grappled with the networking requirements for building disaggregated data centers – a design that, according to Porter, “re-evaluates” the entire concept of what a server is. “Today a server represents a fixed combination of compute, memory, storage, and IO,” he explains. “As we look at the requirements for next-generation data centers, we see that the applications running in them have very dynamic requirements.” Porter cites the case of Facebook, which may need servers with very large memories to analyze billion-node graphs, or, alternatively, servers with substantial network IO to power a caching layer. “Rather than build these as separate systems, with a disaggregated design the individual components making up a server can be put directly on the network itself,” notes Porter. “Then a ‘server’ is simply a temporary binding of these resources to work together for a specific purpose. When those requirements change, different combinations of resources can be formed. This vision is very powerful, but puts incredible strain on the network.”
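To make the idea of “temporary binding” concrete, here is a minimal sketch – hypothetical names and numbers, not anything presented at the workshop – of how a scheduler in a disaggregated data center might claim free network-attached resources as a logical server and return them to the pool when requirements change:

```python
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Free network-attached resources, tracked per type (illustrative figures only)."""
    free: dict = field(default_factory=lambda: {
        "cpu_cores": 512, "memory_gb": 8192, "storage_tb": 200, "nic_gbps": 4000})

    def bind_server(self, **needs):
        """Temporarily bind resources into a logical 'server'; fail if any type is short."""
        if any(self.free.get(r, 0) < amt for r, amt in needs.items()):
            return None  # not enough of some resource; caller can retry later
        for r, amt in needs.items():
            self.free[r] -= amt
        return dict(needs)  # the binding is just a record of what was claimed

    def release(self, binding):
        """Dissolve a logical server, returning its resources for re-binding."""
        for r, amt in binding.items():
            self.free[r] += amt

pool = ResourcePool()
# A memory-heavy "server" for graph analytics, then a NIC-heavy one for a caching layer:
graph_node = pool.bind_server(cpu_cores=32, memory_gb=2048)
cache_node = pool.bind_server(cpu_cores=8, nic_gbps=400)
pool.release(graph_node)  # requirements changed; the same resources can now be re-bound
```

In a real system each of these bookkeeping operations corresponds to traffic between components that used to share a motherboard, which is why, as Porter notes, the design places such strain on the network.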

The workshop also pushed attendees to come up with metrics for the optical components that will be needed to power such a design. Participants in the discussion included representatives from Facebook, Intel, IBM and Corning, as well as VMware, Infinera, Mellanox and Samtec.

Read more about the CIAN-sponsored workshop.