Keynote Speakers:

  • Mahesh Balakrishnan, Facebook Inc.
  • Biography:

    Mahesh Balakrishnan leads the Delos project at Facebook. Prior to Facebook, he was at Yale University, VMware Research, and Microsoft Research Silicon Valley. His work has received best paper awards at OSDI and ASPLOS.

  • Title: Virtual Consensus in the Delos Storage System
  • Abstract:

    Delos is a storage system at the bottom of the Facebook stack, operating as the backing store for the Twine scheduler and processing more than 2 billion transactions per day. Delos has a shared log design (based on Corfu), separating the consensus protocol from the database via a shared log API. In this talk, I’ll describe the two ways in which Delos advances the state of the art for replicated systems. First, we virtualize consensus by virtualizing the shared log: as a result, the system can switch between entirely different log implementations (i.e., consensus protocols) without downtime. Second, we virtualize the replicated state machine above the shared log, splitting the logic of the database into separate, stackable layers called log-structured protocols. Virtual consensus in Delos enables safe upgrades to the consensus protocol as well as the database above it, without downtime or complex migration logic.
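
    To make the layering concrete, here is a minimal Python sketch, assuming a hypothetical shared-log interface and a toy key-value table on top; it is not Delos's actual API. The point it illustrates is that the database programs only against append/read/check_tail, so the consensus protocol implementing the log can be swapped out without changing the layers above it.

      # Hypothetical sketch of a shared-log abstraction (not Delos's real API).
      from abc import ABC, abstractmethod

      class SharedLog(ABC):
          """A totally ordered, durable log; consensus lives behind this interface."""

          @abstractmethod
          def append(self, entry: bytes) -> int:
              """Durably append an entry and return its log position."""

          @abstractmethod
          def read(self, position: int) -> bytes:
              """Return the entry stored at the given position."""

          @abstractmethod
          def check_tail(self) -> int:
              """Return the position one past the last appended entry."""

      class ReplicatedTable:
          """A toy state machine layered above the log, in the spirit of a
          stackable log-structured protocol (illustrative only)."""

          def __init__(self, log: SharedLog):
              self.log = log
              self.state: dict[str, str] = {}
              self.applied = 0

          def put(self, key: str, value: str) -> None:
              # Writes go through the log, never directly to local state.
              self.log.append(f"{key}={value}".encode())

          def get(self, key: str):
              self._sync()
              return self.state.get(key)

          def _sync(self) -> None:
              # Replay entries appended since the last sync, in log order.
              tail = self.log.check_tail()
              while self.applied < tail:
                  k, v = self.log.read(self.applied).decode().split("=", 1)
                  self.state[k] = v
                  self.applied += 1

    Because nothing above the SharedLog interface knows which consensus protocol implements it, the log implementation can in principle be switched underneath a running database, which is the property the abstract refers to as virtual consensus.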

  • Murat Demirbas, Amazon Web Services and State University of New York at Buffalo, USA
  • Biography:

    Murat Demirbas is a Principal Applied Scientist at AWS and a Professor of Computer Science & Engineering at the University at Buffalo, SUNY. Murat received his Ph.D. from The Ohio State University in 2004 and did a postdoc at the Theory of Distributed Systems Group at MIT in 2005. He developed many influential protocols and systems, including hybrid logical clocks, retrospective monitoring tools, and planetary-scale Paxos protocols. Murat received a National Science Foundation CAREER award in 2008, the University at Buffalo Exceptional Scholars Young Investigator Award in 2010, and the School of Engineering and Applied Sciences Senior Researcher of the Year Award in 2016. He maintains a popular blog on distributed systems at http://muratbuffalo.blogspot.com.

  • Title: One size does not fit all for specification and verification of distributed systems
  • Abstract:

    Developers have varying needs for specification and verification throughout the lifecycle of distributed systems design, implementation, and continuous operation. One use case is as a thinking tool for exploring the protocol design space. This requires quick and succinct prototyping and checking of a specification. Another is for gaining confidence in the design. This requires close alignment of the specification to the design of the system, including separate modeling of components, and involves systematic exploration of the design for bugs. Yet another is for providing continued assurance and boosting development/deployment velocity. This requires checking the conformance of the implementation to the design, to prevent regression and violation of the specification by the implementation.

    These different needs impose different tradeoffs and compromises, making it impossible for a single solution to address them all. I will discuss these tradeoffs and present examples of suitable solutions for these needs.

  • Ahmed Ali-Eldin Hassan, Chalmers University of Technology, Sweden
  • Biography:

    Ahmed Ali-Eldin is a tenure-track assistant professor at Chalmers University of Technology. The world celebrated his move to Chalmers by locking down 8 days before he traveled to sign his contract. He has been to his Chalmers office three times to this day. Before 2020, he spent three great years as a postdoc at UMass Amherst, working with Prashant Shenoy. Being fond of polar bears (and very bad at animal geography), he spent five years getting his PhD from Umeå University in Northern Sweden. While he did not see any bears (polar or not), he ended up seeing lots of aurora borealis. Ahmed's research interests in distributed systems started with P2P systems, shifting focus to clouds, then edge, then serverless, and finally machine learning systems. His PhD work on autoscaling was the seed for a startup, Elastisys AB, which now focuses on securing Kubernetes. He collaborates extensively with Swedish industry and many academics around the world, runs the Chalmers Girl's-Code-Club, and (co-)supervises three PhD students and postdocs. His work is funded by the NSF, WASP, Ericsson Research, CHAIR, and Chalmers.

  • Title: How to (Not) Build Edge Applications
  • Abstract:

    In this talk, I will introduce some recent results we obtained on the performance of edge applications. I will start by talking about how edge applications can, in some cases, deliver much worse latency than simply running in the cloud. I will show both analytically (using queuing models) and experimentally how and when these cases occur. I will conclude by discussing how our results can be used by system designers to design better edge systems.
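
    As a rough illustration of the kind of analytical comparison the abstract mentions, the Python sketch below models each site as an M/M/1 queue plus a fixed network round trip; the capacities and round-trip times are assumed numbers, not results from the talk. The edge wins at low load, but once its smaller capacity starts to saturate, the more distant but better-provisioned cloud gives lower total latency.

      # Toy M/M/1 comparison of edge vs. cloud response time (illustrative only).
      def response_time(rtt_s: float, service_rate: float, arrival_rate: float) -> float:
          """Mean response time: network round trip + M/M/1 queueing delay 1/(mu - lambda)."""
          if arrival_rate >= service_rate:
              return float("inf")  # the queue is unstable at this load
          return rtt_s + 1.0 / (service_rate - arrival_rate)

      EDGE_RTT, EDGE_MU = 0.005, 50.0     # 5 ms away, 50 req/s capacity (assumed)
      CLOUD_RTT, CLOUD_MU = 0.050, 500.0  # 50 ms away, 500 req/s capacity (assumed)

      for lam in (10, 40, 45, 49):
          edge = response_time(EDGE_RTT, EDGE_MU, lam)
          cloud = response_time(CLOUD_RTT, CLOUD_MU, lam)
          winner = "edge" if edge < cloud else "cloud"
          print(f"arrival {lam:2d} req/s: edge {edge:.3f}s  cloud {cloud:.3f}s  -> {winner}")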

  • Kate Keahey, Argonne National Lab, USA
  • Biography:

    Kate Keahey is one of the pioneers of infrastructure cloud computing. She created the Nimbus project, recognized as the first open source Infrastructure-as-a-Service implementation, and continues to work on research aligning cloud computing concepts with the needs of scientific datacenters and applications. To facilitate such research for the community at large, Kate leads the Chameleon project, providing a deeply reconfigurable, large-scale, and open experimental platform for Computer Science research. To foster the recognition of contributions to science made by software projects, Kate co-founded and serves as co-Editor-in-Chief of the SoftwareX journal, a new format designed to publish software contributions. Kate is a Scientist at Argonne National Laboratory and a Senior Fellow at the Computation Institute at the University of Chicago.

  • Title: Experimenting in the Edge to Cloud Continuum
  • Abstract:

    The increasing popularity of IoT devices allows us to communicate better, interact better, and ultimately build a new type of scientific instrument that will allow us to explore our environment in ways that we could only dream about even just a few years ago. This disruptive opportunity, however, raises its own set of challenges: how should we manage the massive amounts of data such instruments will eventually produce? What types of environments will be most suited to developing their full potential? How should we implement communication between them most effectively?

    In a new, dynamically developing research area, such questions are too often approached only theoretically, for lack of a scientific instrument that keeps pace with the emergent requirements of science and allows researchers to deploy, measure, and analyze relevant scientific hypotheses. In this talk, I will describe Chameleon, an experimental testbed for computer science systems research. Originally created to allow exploration of cloud computing topics such as the design of new virtualization solutions, operating systems, or power management, it has expanded to support a wide variety of experiments in programmable networking and, most recently, edge computing. In doing so, by adapting mainstream open source infrastructure (OpenStack), we created a platform that provides a solid and adaptable framework for resource management supporting a variety of edge-to-cloud interactions.