(Professor at the Department of Computer Science of ETH Zurich)
Title: Generalization versus Specialization in cloud computing infrastructures
Cloud computing represents a fundamental change in the business model
behind IT: a shift from manufacturing of software and hardware products
towards packaging infrastructure, processing, and storage as services.
Cloud data centers, given their intended use for general purpose
computing, would seem to push towards homogeneity in architectures and
platforms. Modern applications and use cases, from scientific computing
to big data, push in exactly the opposite direction: an increase in
specialization as a way to efficiently meet demanding requirements.
In this talk I will illustrate both trends and argue that, contradictory
as they seem to be, there are many opportunities in combining them.
Doing so requires work in two areas. One is to find better ways to
extend the performance and efficiency advantages of specialization to
general-purpose settings. The other is to develop the software and
hardware layers needed to allow generalized use of specialized systems.
Taken together, these efforts define an exciting research and
development landscape that I will outline in the conclusion of the talk.
Gustavo Alonso is a professor at the Department of Computer Science of
ETH Zurich (ETHZ) in Switzerland, where he is a member of the Systems
Group. Gustavo has an M.S. and a Ph.D. in Computer Science from UC Santa
Barbara. Before joining ETH, he was at the IBM Almaden Research Center.
His research interests encompass almost all aspects of systems, from
design to run time. His applications of interest are distributed systems
and databases, with an emphasis on system architecture. Current research
is related to multi-core architectures, large clusters, FPGAs, and big
data, mainly working on adapting traditional system software (OS,
database, middleware) to modern hardware platforms.
Gustavo is a Fellow of the ACM and of the IEEE. He has received numerous
awards, most recently the FCCM 2013 Best Paper Award, the AOSD
2012 Most Influential Paper Award, the VLDB 2010 Ten Year Best Paper
Award, and the 2009 ICDCS Best Paper Award. He was the Chair of ACM
EuroSys (the European Chapter of SIGOPS), and PC Chair of a number of
conferences in several areas, including Middleware (2004), VLDB
(2006), Business Process Management (2007), ICDE (2008), VLDB
Experimental and Analysis Track (2012), ICDCS (2014), EDBT (2015), VLDB
Industrial Track (2016).
(General Manager, Amazon Web Services)
Title: Processing Big Data in Motion
Streaming analytics is about identifying and responding to events happening in your business, in your service or application, and with your customers in near real time. Sensors, mobile and IoT devices, social networks, and online transactions all generate data that can be monitored constantly to enable a business to detect and act on events and insights before they lose their value. The need for large-scale, real-time stream processing of big data in motion is more evident than ever before, but the potential remains largely untapped by most firms. It is not the size but rather the speed at which this data must be processed that presents the greatest technical challenges. Streaming analytics systems enable businesses to inspect, correlate, and analyze data in real time to extract insights in the same manner that traditional analytics tools have allowed them to do with data at rest. In this talk I will draw upon our experience with the Amazon Kinesis data streaming services to highlight use cases, discuss technical challenges and approaches, and look ahead to the future of stream data processing and the role of cloud computing.
Roger Barga is General Manager and Director of Development at Amazon Web Services, responsible for the Kinesis data streaming services, including Kinesis Streams, Kinesis Firehose, and Kinesis Analytics. Before joining Amazon, Roger was in the Cloud Machine Learning group at Microsoft, where he was responsible for product management of the Azure Machine Learning service. His experience and research interests include data storage and management, data analytics and machine learning, distributed systems, and building scalable cloud services, with an emphasis on stream data processing and predictive analytics. Roger is also an Affiliate Professor at the University of Washington, where he lectures in the Data Science and Machine Learning programs. Roger holds a PhD in Computer Science, an M.Sc. in Computer Science with an emphasis on Machine Learning, and a B.Sc. in Mathematics and Computer Science. He holds over 30 patents, has published over 100 peer-reviewed technical papers and book chapters, and has authored a book on predictive analytics.
(Professor of Communications Systems at the University of Cambridge)
Title: What could possibly go wrong?
There are many more things with moving parts in the world than computers. These are the objects being connected: initially artefacts, but also the natural world. They are connected both by being sensed and via actuators. For a true Internet of Things to emerge, with all its potential value for innovation and efficiency, the sensors and actuators must actually be reachable from anywhere, anytime, just like computers on today's Internet. And they must be locally and remotely programmable. Of course, there must be mechanisms to implement policies about access and use. However, these policies are complex, since they don't merely reflect informational rules but also rules about the physical world: a car may be restricted to certain speeds in certain areas, but also to different speeds and areas at different times, depending on the driver.
Unfortunately, in the rush to instrument and control the world of things, the complexity of the world seems to have been forgotten. Worse, the typical system software being deployed in many places does not reflect the last few decades' evolution of safety and security work that has gone into the implementation of operating systems and protocols. All too often, we hear that another system uses an embedded OS with no isolation or a protocol stack with known vulnerabilities, or is shipped with default access-control credentials to millions of customers.
This is not good enough.
In this talk, I will cover some of the work we've been doing in the Microsoft-sponsored project at Cambridge and QMUL on the technical and legal challenges now facing our community.
Jon Crowcroft has been the Marconi Professor of Communications Systems
in the Computer Laboratory since October 2001. He has worked in the
area of Internet support for multimedia communications for over 30
years. Three main topics of interest have been scalable multicast
routing, practical approaches to traffic management, and the design of
deployable end-to-end protocols. Current active research areas are
Opportunistic Communications, Social Networks, and techniques and
algorithms to scale infrastructure-free mobile systems. He leans
towards a "build and learn" paradigm for research.
He graduated in Physics from Trinity College, University of Cambridge
in 1979, and gained an MSc in Computing in 1981 and a PhD in 1993, both
from UCL. He is a Fellow of the Royal Society, a Fellow of the ACM, a
Fellow of the British Computer Society, a Fellow of the IET and the
Royal Academy of Engineering, and a Fellow of the IEEE.
He likes teaching, and has published several books based on his teaching.
IC2E Invited Talk:
(Professor of Computer Science at the University of Chicago)
Title: The Discovery Cloud: Accelerating and Democratizing Research on a Global Scale
Modern science and engineering require increasingly sophisticated information technology (IT) for data analysis, simulation, and related tasks. Yet the small to medium laboratories (SMLs) in which the majority of research advances occur increasingly lack the human and financial capital needed to acquire and operate such IT. New methods are needed to provide all researchers with access to state-of-the-art scientific capabilities, regardless of their location and budget. Industry has demonstrated the value of cloud-hosted software and platform-as-a-service approaches; small businesses that outsource their IT to third-party providers slash costs and accelerate innovation. However, few business cloud services are transferable to science. We thus propose the Discovery Cloud, an ecosystem of new, community-produced services to which SMLs can outsource common activities, from data management and analysis to collaboration and experiment automation. We explain the need for a Discovery Platform to streamline the creation and operation of new and interoperable services, and a Discovery Exchange to facilitate the use and sustainability of Discovery Cloud services. We report on our experiences building early elements of the Discovery Platform in the form of Globus services, and on the experiences of those who have applied those services in innovative applications.
Ian Foster is a Professor of Computer Science at the University of Chicago, a Distinguished Fellow at Argonne National Laboratory, and Director of the Computation Institute. He is also a Fellow of the American Association for the Advancement of Science, the Association for Computing Machinery, and the British Computer Society. His awards include the British Computer Society's Lovelace Medal; honorary doctorates from the University of Canterbury, New Zealand, and CINVESTAV, Mexico; and the IEEE Tsutomu Kanai Award.