Keynote Speakers:

  • Ken Birman, Cornell, USA
  • Prerecorded version of the talk.

  • Biography:

    Ken Birman is the N. Rama Rao Professor of Computer Science at Cornell University. His work seeks a balance between protocols with strong guarantees and practical utility/efficiency. His earliest distributed systems work included the Isis Toolkit, which introduced the virtual synchrony membership management model and implemented one of the first Paxos protocols to gain wide acceptance. Isis plays many roles in the French air traffic control system (active since 1995), and was also used in the New York Stock Exchange trading floor, the Swiss Stock Exchange, and many other critical systems. Later work explored gossip protocols and reliability and consistency in cloud computing infrastructures. Ken is a Fellow of the ACM and IEEE, received the IEEE Kanai Award and the IEEE TCDP Award for contributions to distributed computing, and was recognized with a Cisco “computing visionary” award.

  • Title: Cascade: An Edge Computing Platform for Real-time Machine Intelligence
  • Abstract:

    Cascade is an open-source distributed computing platform that hosts AI or ML software close to cameras, other sensors, and actuators in settings that demand ultra-low latency and very high data rates, but where fault tolerance and strong consistency are also needed. The computing model centers on ML pipelines: cooperative graphical structures in which multiple independently created ML models jointly solve problems, and distributed AI solutions that gain parallelism by running on multiple nodes. While supporting standard ML and AI platforms unchanged, Cascade also offers a fast path that maps data movement and computation to accelerators such as RDMA, NVMe memory, and GPUs.

    Cascade is built on Derecho [S. Jha; ACM TOCS 2019], the world’s fastest data replication framework. Derecho introduces a new virtually synchronous atomic multicast and Paxos that map exceptionally well to RDMA or fast TCP, with performance 100x to 15,000x faster than prior Paxos solutions. This extreme speed reflects a novel way of matching Paxos to asynchronous high-speed networking. Derecho is also provably optimal, and its proofs have been machine-verified. The mapping to the network is also ideally efficient. In some sense, the protocol is thus an “ultimate” expression of Paxos.
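    The virtual-synchrony model mentioned above (introduced by Isis and refined by Derecho) can be summarized in a few lines: messages are delivered in a total order within an agreed membership "view", and a view change installs a new membership only after all members agree on the same cut of delivered messages. The sketch below is purely illustrative, a single-process simulation with assumed class and method names; it is not the Isis or Derecho API, and it omits the actual agreement protocol.

    ```python
    # Illustrative sketch of the virtual-synchrony idea (NOT the Isis/Derecho API).
    class VirtualSynchronyGroup:
        def __init__(self, members):
            self.view = 1                 # current view (epoch) number
            self.members = set(members)
            self.log = []                 # totally ordered delivery log

        def multicast(self, msg):
            # In a real system, an atomic multicast or Paxos-style agreement
            # orders msg; total order is trivial here because we simulate
            # one group in one process.
            self.log.append((self.view, msg))

        def view_change(self, new_members):
            # Virtual synchrony: pending messages of the old view are
            # delivered first, then the new view is installed, so every
            # surviving member agrees on the same cut.
            self.view += 1
            self.members = set(new_members)

    g = VirtualSynchronyGroup({"A", "B", "C"})
    g.multicast("update-1")
    g.view_change({"A", "B"})             # C fails; survivors agree on the cut
    g.multicast("update-2")
    print(g.log)                          # [(1, 'update-1'), (2, 'update-2')]
    ```

    The point of the model is the pairing in the log: every delivery is tagged with the view in which it occurred, which is what lets all surviving replicas reason identically about what was delivered before a failure.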

    The talk will be a quick tour of the whole project, and (to the extent possible in one hour) will touch on all three aspects: Cascade as perceived by developers, Derecho, and the techniques Cascade introduces to support its fast path.

  • Roger Wattenhofer, ETH Zurich, Switzerland
  • Slides from the talk.

  • Biography:

    Roger Wattenhofer is a full professor at the Information Technology and Electrical Engineering Department, ETH Zurich, Switzerland. He received his doctorate in Computer Science from ETH Zurich. He also worked for multiple years at Microsoft Research in Redmond, Washington, at Brown University in Providence, Rhode Island, and at Macquarie University in Sydney, Australia. Roger Wattenhofer’s research interests include a variety of algorithmic and systems aspects of computer science and information technology, e.g., distributed systems, positioning systems, wireless networks, mobile systems, social networks, financial networks, and deep neural networks. He publishes in different communities: distributed computing (e.g., PODC, SPAA, DISC), networking and systems (e.g., SIGCOMM, SenSys, IPSN, OSDI, MobiCom), algorithmic theory (e.g., STOC, FOCS, SODA, ICALP), and more recently also machine learning (e.g., ICML, NeurIPS, ICLR, ACL, AAAI). His work has received multiple awards, e.g., the Prize for Innovation in Distributed Computing for his work in Distributed Approximation. He published the book “Blockchain Science: Distributed Ledger Technology”, which has been translated into Chinese, Korean, and Vietnamese.

  • Title: Graph Neural Networks as Application of Distributed Algorithms
  • Abstract:

    At first sight, distributed computing and machine learning are two distant areas of computer science. However, there are many connections, for instance in the area of graphs, which are the focus of my talk. Distributed computing has studied distributed graph algorithms for many decades; meanwhile, in machine learning, graph neural networks are picking up steam. When it comes to dealing with graphical inputs, one can almost claim that graph neural networks are an application of distributed algorithms. I will introduce central concepts in learning such as underreaching and oversquashing, phenomena that the distributed computing community has known for decades through the LOCAL and CONGEST models. In addition, I will present some algorithmic insights, and a software framework that helps with explaining learning. Generally speaking, I would like to present a path to learning for those who are familiar with distributed message-passing algorithms. This talk is based on a number of papers recently published at learning conferences such as ICML and NeurIPS, co-authored by Pál András Papp and Karolis Martinkus.
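    The analogy the abstract draws can be made concrete in a few lines: one synchronous round of a distributed message-passing algorithm (in the LOCAL-model style) has the same shape as one layer of a message-passing graph neural network, and the round/layer count bounds how far information can travel, which is exactly the "underreaching" phenomenon. The graph, node states, and max-aggregation below are illustrative assumptions, not material from the talk.

    ```python
    # One synchronous message-passing round: every node sends its state to
    # all neighbors, then combines its own state with what it received.
    # A GNN layer has the same structure, with a learned combine function.
    def message_passing_round(adj, state, combine):
        return {
            v: combine(state[v], [state[u] for u in adj[v]])
            for v in adj
        }

    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}    # a path graph 0-1-2-3
    state = {0: 5, 1: 1, 2: 7, 3: 2}                # initial node values

    # Max-aggregation: after k rounds each node knows the maximum value
    # within distance k -- the same radius limit that causes underreaching
    # in a k-layer GNN.
    combine = lambda own, msgs: max([own] + msgs)
    for _ in range(2):                              # 2 rounds = 2 GNN layers
        state = message_passing_round(adj, state, combine)

    print(state)                                    # {0: 7, 1: 7, 2: 7, 3: 7}
    ```

    With only one round instead of two, node 0 would still hold 5, since the maximum (7, at node 2) lies at distance 2: depth limits reach in the same way for the distributed algorithm and for the network.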

Invited Papers:

  • Cascade: An Edge Computing Platform for Real-time Machine Intelligence, W. Song, Y. Yang, T. Liu, A. Merlina, T. Garrett, R. Vitenberg, L. Rosa, A. Awatramani, Z. Wang, K. Birman.
  • DARTS: Distributed IoT Architecture for Real-Time, Resilient and AI-Compressed Workflows, R. Gupta, B. Chen, S. Liu, T. Wang, K. Nahrstedt, T. Abdelzaher, S. Sandha, M. Srivastava, A. Souza, P. Shenoy, J. Smith, M. Wigness, N. Suri. Prerecorded version of the talk.
  • Drone-Truck Cooperated Delivery Under Time Varying Dynamics, A. Khanda, F. Corò, S. Das. Slides from the talk.
  • A Roadmap To Post-Moore Era for Distributed Systems, V. De Maio, A. Aral, I. Brandic.
  • Towards an Approximation-Aware Computational Workflow Framework for Accelerating Large-Scale Discovery Tasks, M. Johnston, V. Vassiliadis.

Regular Papers:

  • QUANTAS: Quantitative User-friendly Adaptable Networked Things Abstract Simulator, J. Oglio, K. Hood, M. Nesterenko, S. Tixeuil.
  • Exploring the use of Strongly Consistent Distributed Shared Memory in 3D NVEs, T. Hadjistasi, N. Nicolaou, E. Stavrakis.
  • A Closer Look at Detectable Objects for Persistent Memory, M. Moridi, E. Wang, A. Cui, W. Golab.
  • Colder than the warm start and warmer than the cold start! Experience the spawn start in FaaS providers, S. Ristov, C. Hollaus, M. Hautz.

Short Research Statements:

  • Research Summary: Deterministic, Explainable and Efficient Stream Processing, D. Palyvos-Giannas, M. Papatriantafilou, V. Gulisano.