IEEE CLOUD 2023
Symposium on Distributed Computing Continuum (DCC)
Integrating HPC, Cloud and the Edge

Program

Thursday 7/6
14:00 - 15:10
Location: Soldier Field
CLD_SYM_1

Session Chair: Constantinos Evangelinos
Programming Dynamic and Intelligent Workflows for the Continuum
Daniele Lezzi

Accelerating Scientific Applications on a Hybrid Classic/Quantum Continuum
Ivona Brandic

Leveraging the Computing Continuum for Urgent Science
Daniel Balouek-Thomert

Thursday 7/6
15:25 - 16:35
Location: Soldier Field
CLD_SYM_2

Session Chair: Pete Beckman
Cyber in the City: How Distributed Edge Computing is Enabling the Chicago Micronet
Christina Negri and Scott Collis

Increasing the Level of Autonomy of Agricultural Robotics
Katherine Rose Driggs-Campbell

Teach Me Where I Am
Nicola Ferrier

Friday 7/7
14:00 - 15:10
Location: Soldier Field
CLD_SYM_3

Session Chair: Constantinos Evangelinos
Computing at the Extreme Edge
Shadi Noghabi

AI and Computing for Climate
Hendrik Hamann

Simplifying Extensible Infrastructure in the Cloud with Open Tooling
Tom Downes

Friday 7/7
15:25 - 16:35
Location: Soldier Field
CLD_SYM_4

Session Chair: Pete Beckman
Panel Discussion: How Does AI Transform the Computing Continuum?

Panelists:
Geoffrey Fox
Ivona Brandic
Katherine Rose Driggs-Campbell
Shadi Noghabi
Tom Downes

Computing is pervasive; we live in a hyperconnected computing continuum, with more Internet of Things (IoT) devices than humans. Massive artificial intelligence (AI) models with over 100 trillion parameters have been built on powerful supercomputers connected to the cloud. In this new end-to-end connected world, sensors, instruments, and IoT devices have incorporated edge computing, starting the data analysis and AI pipeline in situ, where the data is generated. The worlds of commercial cloud providers and high-performance computing (HPC) centers have intersected: scientists are now running massive Earth system models in the cloud and on traditional HPC centers run by universities and scientific laboratories. From sensors and AI at the edge to cloud and HPC centers, this computing continuum grows more capable every year.

Traditional differences in software and hardware environments across this continuum are falling as workflows shift to a “cloud native” approach of containerizing and orchestrating services. The availability of diverse hardware resources distributed across commercial clouds and university and laboratory computing centers has created a computing space that is increasingly home to heterogeneous workflows. From smart cameras analyzing traffic flows in situ at the edge, to gesture and audio recognition in everyday devices around our homes, the computing continuum is opening new doors for novel AI techniques and creating new programming and execution/operational challenges.

In this symposium, we aim to explore questions such as:

  • What diverse requirements and stresses do complex, interdisciplinary application workflows impose on the supporting hardware and software infrastructure?
  • How do we write programs for a computing continuum that spans lightweight sensors and massive cloud systems?
  • How can we improve the efficiency and utilization of our systems while offering better performance to individual application workflows, ideally in a manner that is as transparent as possible to users?
  • What kinds of new applications and science are enabled by real-time data from AI-enabled sensors at the edge connected to large-scale cloud platforms?
  • How will massive AI models be woven into the computing continuum?
  • Do we need new techniques for workflow resilience and recovery, especially for applications with real-time requirements due to experimental data constraints?
  • How do we intelligently distribute work to the optimal set of compute/storage/networking resources to meet both utilization and user QoS targets?

Symposium Chairs

Pete Beckman, Argonne National Laboratory
Constantinos Evangelinos, IBM Research, Cambridge