IEEE CLOUD 2023
Symposium on Distributed Computing Continuum (DCC): Integrating HPC, Cloud and the Edge

Program

Thursday 7/6, 14:00 - 15:10, Location: Soldier Field
Session CLD_SYM_1, Session Chair: Constantinos Evangelinos
- Programming Dynamic and Intelligent Workflows for the Continuum, Daniele Lezzi
- Accelerating Scientific Applications on a Hybrid Classic/Quantum Continuum, Ivona Brandic
- Leveraging the Computing Continuum for Urgent Science, Daniel Balouek-Thomert

Thursday 7/6, 15:25 - 16:35, Location: Soldier Field
Session CLD_SYM_2, Session Chair: Pete Beckman
- Cyber in the City: How Distributed Edge Computing is Enabling the Chicago Micronet, Christina Negri and Scott Collis
- Increasing the Level of Autonomy of Agricultural Robotics, Katherine Rose Driggs-Campbell
- Teach Me Where I Am, Nicola Ferrier

Friday 7/7, 14:00 - 15:10, Location: Soldier Field
Session CLD_SYM_3, Session Chair: Constantinos Evangelinos
- Computing at the Extreme Edge, Shadi Noghabi
- AI and Computing for Climate, Hendrik Harman
- Simplifying Extensible Infrastructure in the Cloud with Open Tooling, Tom Downes

Friday 7/7, 15:25 - 16:35, Location: Soldier Field
Session CLD_SYM_4, Session Chair: Pete Beckman
- Panel Discussion: How Does AI Transform the Computing Continuum?
  Panelists: Geoffrey Fox, Ivona Brandic, Katherine Rose Driggs-Campbell, Shadi Noghabi, Tom Downes

Computing is pervasive; we live in a hyperconnected computing continuum, with more Internet of Things (IoT) devices than humans. Massive artificial intelligence (AI) models with over 100 trillion parameters have been built on powerful supercomputers connected to the cloud. In this new end-to-end connected world, sensors, instruments, and IoT devices have incorporated edge computing, starting the data analysis and AI pipeline in situ, where the data is generated. The worlds of commercial cloud providers and high-performance computing (HPC) centers have intersected: scientists now run massive Earth system models both in the cloud and at traditional HPC centers operated by universities and scientific laboratories. From sensors and AI at the edge to cloud and HPC centers, this computing continuum grows more capable every year. Traditional differences in software and hardware environments across the continuum are falling as workflows shift to a "cloud native" approach of containerizing and orchestrating services. The availability of diverse hardware resources, distributed across the cloud and university and laboratory computing centers, has created a computing space that is increasingly home to heterogeneous workflows. From smart cameras analyzing traffic flows in situ at the edge, to gesture and audio recognition in everyday devices around our homes, the computing continuum is opening new doors for novel AI techniques while creating new programming and execution/operational challenges.

In this symposium, we aim to explore answers to questions such as:
- What diverse requirements and stresses do complex interdisciplinary application workflows impose on the supporting hardware and software infrastructure?
- How do we write programs for a computing continuum that spans lightweight sensors and massive cloud systems?
- How can we improve the efficiency/utilization of our systems while at the same time offering better performance to individual application workflows, ideally in a manner that is as transparent as possible to the users?
- What kinds of new applications and science are enabled by real-time data from AI-enabled sensors at the edge connected to large-scale cloud platforms?
- How will massive AI models be woven into the computing continuum?
- Do we need new techniques for workflow resilience and recovery, especially for applications that have real-time requirements due to experimental data constraints?
- How do we intelligently distribute work to the optimal set of compute/storage/networking resources to meet utilization as well as user QoS targets?

Symposium Chairs
Pete Beckman, Argonne National Laboratory
Constantinos Evangelinos, IBM Research, Cambridge