12.10.13

 

NEWS HIGHLIGHTS                                                         Tuesday, December 10, 2013

 

Bullet Trains and Express Lanes  

 

Center for Networked Systems (CNS) research scientist George Porter has co-authored two papers with CSE students and colleagues to be presented Dec. 11 at the 9th ACM International Conference on Emerging Networking Experiments and Technologies (CoNEXT) in Santa Barbara, Calif. 

 

According to "Bullet Trains: A Study of NIC Burst Behavior at Microsecond Timescales," a lot is known at a macro level about the behavior of traffic in data center networks, including the 'burstiness' of TCP, variability based on destination, and the overall size of network flows. However, according to lead author and CSE graduate student Rishi Kapoor, Porter, and CSE professors Geoff Voelker and Alex Snoeren, "little information is available on the behavior of data center traffic at packet-level timescales," that is, at timescales below 100 microseconds. Some 30 years ago, an MIT study compared packets of data to train cars: sent from a source to the same destination back-to-back, like cars pulled by a locomotive. In the context of data centers, however, the UC San Diego researchers concluded that those trains are more aptly termed "bullet trains" when viewed at microsecond timescales. Porter and his colleagues examined the various sources of traffic bursts and measured the traffic from different sources along the network stack, as well as the burstiness of different data-center workloads and the burst behavior of bandwidth-intensive applications such as data sorting (MapReduce) and distributed file systems (NFS and Hadoop's HDFS). "Our analysis showed that network traffic exhibits large bursts at sub-100 microsecond timescales," said Porter. "Regardless of application behavior at the higher layer, packets come out of a 10 Gigabit-per-second server in bursts due to batching." The larger the burst, he added, the greater the likelihood of packets being dropped.

 

The researchers focused primarily on the network interface controller (NIC) layer, because the controller is directly implicated in the burst behavior that most affects network performance. While it would be ideal if packets within a single flow were uniformly paced, real life turns out to be more complex, primarily because packets are batched differently across the network stack in order to achieve link rates of 10Gbps or higher. For their paper, Porter and his co-authors studied the burst behavior of traffic emanating from a 10Gbps end-host across a variety of data center applications. "We found that at 10- to 100-microsecond timescales, the traffic exhibits large bursts, tens of packets in length," said Porter. "We also found that this level of burstiness was largely outside of application control, and independent of the high-level behavior of applications."
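
To make the measurement concrete: at 10Gbps, a 1,500-byte packet occupies the wire for only about 1.2 microseconds, so bursts can be identified from inter-arrival gaps in a packet trace. The following Python sketch is ours, for illustration only - not the authors' measurement code - and the gap threshold and trace format are assumptions:

    # Illustrative sketch (not the authors' tooling): split a timestamped
    # packet trace into bursts, treating consecutive packets separated by
    # less than a gap threshold as part of the same burst.

    def burst_lengths(timestamps_us, gap_threshold_us=2.0):
        """timestamps_us: sorted packet arrival times in microseconds.
        Returns a list of burst lengths, in packets."""
        if not timestamps_us:
            return []
        lengths, current = [], 1
        for prev, cur in zip(timestamps_us, timestamps_us[1:]):
            if cur - prev < gap_threshold_us:
                current += 1          # still within the same burst
            else:
                lengths.append(current)
                current = 1           # gap seen: a new burst begins
        lengths.append(current)
        return lengths

    # Toy trace: three near back-to-back packets, a pause, then two more.
    print(burst_lengths([0.0, 1.2, 2.4, 150.0, 151.3]))  # -> [3, 2]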

 

In the second study to be presented at CoNEXT, "FasTrak: Enabling Express Lanes in Multi-Tenant Data Centers," lead graduate student Radhika Niranjan Mysore, Porter, and their co-author CSE Prof. Amin Vahdat (on leave at Google) explore an issue specifically facing operators of cloud services such as Amazon EC2, Microsoft Azure and Google Compute Engine. These so-called multi-tenant data centers may host tens of thousands of customers. No customer wants their data or service to leak into those of other customers in the cloud, so to ensure network isolation, cloud operators typically rely on virtual machines (VMs) along with network-level rules and policies that hypervisors enforce on every packet entering or leaving the host. As a result, however, VMs carry innate costs in the form of latency (delays) and the increased cost of processing packets in the hypervisor, which affect both the provider and the tenant. The researchers' solution, FasTrak, keeps the functionality but curbs the cost of rule processing by offloading some of the virtualization functionality from the hypervisor software to the network switch hardware through so-called "express lanes." Because space on a switch is limited - not enough to hold all the rules a server requires - the researchers determined the subset of data flows that would benefit most from being offloaded via express lanes to hardware. The result: roughly a 2x improvement in latency (i.e., 50 percent less delay, or time to finish), combined with a 21 percent drop in server load. According to the study's conclusion, FasTrak's actual benefits are workload dependent, but "services that should benefit the most are those with substantial communication requirements and some communication locality."
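
At its core this is a selection problem: switch rule tables hold far fewer entries than a busy server needs, so some policy must decide which flows earn a hardware "express lane." Here is a toy sketch of that idea - our simplification, not the paper's actual algorithm, with made-up flow names - ranking flows by recent traffic and pinning only the heaviest into the limited hardware table:

    # Toy sketch of FasTrak-style offload selection (our simplification,
    # not the paper's algorithm): with room for only `capacity` rules in
    # switch hardware, pin the flows carrying the most traffic and leave
    # the rest on the hypervisor's software path.

    def choose_express_lanes(flow_bytes, capacity):
        """flow_bytes: dict mapping a flow id to bytes sent recently.
        Returns the set of flow ids to offload to switch hardware."""
        ranked = sorted(flow_bytes, key=flow_bytes.get, reverse=True)
        return set(ranked[:capacity])

    flows = {"vm1->vm2": 9200000, "vm1->db": 310000, "vm3->vm9": 5400000}
    print(choose_express_lanes(flows, capacity=2))
    # -> {'vm1->vm2', 'vm3->vm9'}: the two heaviest flows win hardware rules.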

 


Fat Trees vs. Aspen Trees 

 

Recent CSE alumna Meg Walraed-Sullivan (Ph.D. '12) is now a postdoctoral researcher in the Distributed Systems group at Microsoft Research in Redmond, Washington. The group investigates the scalability, security, fault tolerance, manageability, and performance of distributed systems. While in CSE, Walraed-Sullivan worked on data-center communication challenges such as (a) enabling scalable communication via strategic label assignment, and (b) exploring the relationship between the fault tolerance and scalability properties of hierarchical topologies. Walraed-Sullivan is first author of a paper at CoNEXT in Santa Barbara, Dec. 9-12. On Dec. 10, she will present "Aspen Trees: Balancing Data Center Fault Tolerance, Scalability and Cost," co-authored with her Ph.D. advisors, Prof. Amin Vahdat (on leave at Google) and Prof. Keith Marzullo (recently on leave at NSF).

 

The paper flows from Walraed-Sullivan's dissertation, which introduced a new class of network topologies called 'Aspen trees,' named after aspen trees in nature, which share a common root system. Large-scale data center infrastructures typically use a multi-rooted 'fat tree' topology, which provides diverse yet short paths between end hosts. A drawback of this topology is that a single link failure can disconnect a portion of the network's hosts for a substantial period of time, while updated routing information propagates to every switch in the tree. According to an advance copy of the CoNEXT paper, this shortcoming makes the fat tree less suited for use in data centers that require the highest levels of availability. Aspen tree topologies, by contrast, can provide the high throughput and path multiplicity of current data center network topologies while also allowing a network operator to select a particular point on the spectrum of scalability, network size, and fault tolerance - affording data center operators the ability to react to failures locally.

 

Walraed-Sullivan and her co-authors also outline a corresponding failure-notification protocol, ANP, whose "notifications require less processing time, travel shorter distances, and are sent to fewer switches, significantly reducing re-convergence time and control overhead in the wake of a link failure or recovery." The paper concludes that "Aspen trees provide decreased convergence times to improve a data center's availability, at the expense of scalability (e.g., reduced host count) or financial cost (e.g., increased network size)." The paper to be presented at CoNEXT details a thorough exploration of the tradeoffs among fault tolerance, scalability and network cost for data centers using an Aspen tree topology.
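
To see the scalability side of that tradeoff in rough numbers, consider a toy model - our assumptions for illustration, not the paper's exact formulas. A standard 3-tier fat tree built from k-port switches attaches about k^3/4 hosts; an Aspen-style tree spends some switch ports on duplicate uplinks so that failures can be handled locally, and in this cartoon each factor of redundancy added at a tier divides the host count by that factor:

    # Toy model (our assumptions, not the paper's exact formulas): trading
    # host count for local fault tolerance in a 3-tier, k-port-switch tree.

    def fat_tree_hosts(k):
        return (k ** 3) // 4           # standard 3-tier fat-tree host count

    def aspen_hosts(k, redundancy_per_tier):
        hosts = fat_tree_hosts(k)
        for r in redundancy_per_tier:  # e.g. [2] = doubled links at one tier
            hosts //= r
        return hosts

    print(fat_tree_hosts(16))      # 1024 hosts, but global re-convergence
    print(aspen_hosts(16, [2]))    # 512 hosts, with local failure reaction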

 

 

'A Novel Marriage of Techniques'
 


On Dec. 5, CSE Prof. Victor Vianu was in Switzerland and gave a talk at the École Polytechnique Fédérale de Lausanne (EPFL). The topic: automatic verification of the increasingly common workflows centered around data. Vianu drew on joint work with fellow CSE Prof. Alin Deutsch, Microsoft program manager and CSE alumnus Elio Damaggio (MS '08, Ph.D. '11), former CSE visiting scholar Fabio Patrizi (now a professor at the University of Rome-La Sapienza), and Richard Hull of IBM Research. Tools have been developed for high-level specification of such workflows and other data-driven applications. "Such specification tools not only allow fast prototyping and improved programmer productivity but, as a side effect, provide convenient targets for automatic verification," said Vianu in the abstract for his talk, pointing to a notable example: IBM's business artifact framework, which has been successfully deployed in practice.

 

Vianu presented a formal model of data-centric workflows based on business artifacts, and results on automatic verification of such processes. "Artifacts are tuples of relevant values, equipped with local state relations and accessing an underlying database," he said. "They evolve under the action of services specified by pre- and post-conditions that correspond to workflow tasks. The verification problem consists in statically checking whether all runs of an artifact system satisfy desirable properties, expressed in an extension of linear-time temporal logic." In his talk at EPFL, Vianu exhibited several classes of specifications and properties that can be automatically verified. Deeming the results thus far "quite encouraging," he said they suggest that, unlike arbitrary software systems, significant classes of data-centric workflows may be amenable to fully automatic verification. Doing so, Vianu concluded, requires a "novel marriage of techniques" from the database and computer-aided verification areas.
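
For a flavor of what such a specification looks like, here is a minimal sketch - our toy example, not Vianu's formal model or any deployed tool. Services are (precondition, postcondition) pairs over an artifact's state, and a bounded search over all runs checks a simple temporal property, here that an order is always paid for before it ships:

    # Minimal sketch (our toy example, not the formal model or tooling):
    # an "artifact" is a tuple of values; each service is a pair of
    # (precondition, postcondition) functions. We enumerate all runs up
    # to a depth bound and check a temporal property over every run.

    SERVICES = {
        "receive": (lambda s: s["status"] == "new",
                    lambda s: dict(s, status="ordered")),
        "pay":     (lambda s: s["status"] == "ordered" and not s["paid"],
                    lambda s: dict(s, paid=True)),
        "ship":    (lambda s: s["status"] == "ordered",
                    lambda s: dict(s, status="shipped")),
    }

    def runs(state, depth=4):
        """Yield every run (list of states) reachable within `depth` steps."""
        yield [state]
        if depth:
            for pre, post in SERVICES.values():
                if pre(state):
                    for tail in runs(post(state), depth - 1):
                        yield [state] + tail

    def paid_before_shipped(run):
        return all(s["paid"] for s in run if s["status"] == "shipped")

    initial = {"status": "new", "paid": False}
    print(all(paid_before_shipped(r) for r in runs(initial)))
    # -> False: some run fires "ship" before "pay", violating the property.
    #    This is the kind of design bug static verification aims to catch.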

 

UPCOMING EVENTS

 


December 10 - 3-5pm - Room 1202, CSE Building

Teams of students will showcase their quarter-long Microsoft Kinect-based and week-long Google Glass applications at the final demos for Prof. Nadir Weibel's CSE 118 course on Applications in Ubiquitous Computing. Students, staff, faculty and visitors will be invited to test the augmented-reality apps.

 


December 11 - 11am-Noon - Room 6504, Jacobs Hall

Prof. Ephi Zehavi, vice dean of engineering at Israel's Bar-Ilan University, will speak "On Cooperative Radio Resource Allocation Techniques Based on Game Theory." Zehavi will address the multi-carrier allocation problem in modern communication channels, outlining several cooperative approaches and solutions for sharing frequencies, including a new way of solving distributed stochastic optimization by exploiting the properties of the communication channel.

 

Looking ahead, mark your calendar for CSE Day 2014 on January 23. The CSE Distinguished Lecture Series continues with talks by the University of Michigan's HV Jagadish (Jan. 22), ETH Zurich's Donald Kossmann (Jan. 27), and UC Berkeley's Ravi Ramamoorthi (Feb. 3). Also set aside April 23-24 for the Spring 2014 Center for Networked Systems (CNS) Research Review.

 

 

FACULTY GPS                                                             


Paris Au Revoir... Prof. Victor Vianu is finishing up his Fall 2013 sabbatical at INRIA in Paris, and is taking advantage of his (relative) proximity to give talks in Switzerland and Italy. On Dec. 5, he spoke at EPFL in Lausanne on "Automatic Verification of Data-Centric Workflows." Vianu will make a separate trip to talk on the same subject at Politecnico di Milano, in Milan. And yes, writes Vianu from Paris, "I will be back [in San Diego] in early January."

 

Final Mission... On Dec. 9, Prof. Larry Smarr flies to Florida for meetings at the Kennedy Space Center. This is Smarr's final year on the NASA Advisory Council. Prior to its meetings Dec. 11-12, he will chair his final meeting of the Council's Information Technology Infrastructure Committee. This will include, says Smarr, "briefing the NASA Administrator on cyberinfrastructure required within NASA to support Big Data."

 

Toronto Redux... Over the Dec. 7-8 weekend, CSE Research Scientist Nadir Weibel made a quick trip to Toronto and back to meet with fellow members of the program committee for the ACM CHI Conference on Human Factors in Computing Systems.


The Roads (Well) Taken... On Tuesday, Dec. 10, CNS research scientist George Porter will chair a session called "The Roads Taken, in a Data Center" on day two of the 9th ACM International Conference on Emerging Networking Experiments and Technologies (CoNEXT) in Santa Barbara. The following day, Porter has two papers in a session on "Trains, Lanes and Autobalancing" (see news article on "Bullet Trains and Express Lanes").

 

Have a notice about upcoming travel to conferences, etc., for the Faculty GPS column in our weekly CSE Newsletter? Be sure to let us know! Email Doug Ramsey at dramsey@ucsd.edu