DC Track

Prof. Hagit Attiya
Technion, Israel

Prof. Philippas Tsigas
Chalmers University of Technology, Sweden

Prof. Roger Wattenhofer
ETH Zurich, Switzerland

IT Track

Prof. Matthew E. Taylor
Univ. of Alberta, Canada

Prof. Michael Cashmore
University of Strathclyde, Glasgow, UK

Prof. U. Deva Priyakumar
IIIT Hyderabad, India

Invited Talks

Graph Neural Networks

Prof. Roger Wattenhofer

ETH Zurich, Switzerland

Abstract:

At first sight, Distributed Computing and Machine Learning are just two classic areas in Computer Science. However, there are many connections. In my talk, I will focus on graphs. Distributed Computing has studied distributed graph algorithms for many decades. Meanwhile, in Machine Learning, Graph Neural Networks are picking up steam. We are going to discuss what Distributed Computing and Machine Learning can teach each other when it comes to dealing with graph inputs. In the main part of the talk, I will present DropGNN, our new Distributed Computing-inspired approach for handling Graph Neural Networks.

DropGNN is joint work with Pál András Papp, Karolis Martinkus, and Lukas Faber, published recently at NeurIPS 2021.
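The core idea behind DropGNN can be sketched in a few lines: run a message-passing layer several times, each time dropping every node independently with some probability, and aggregate the results across runs. The snippet below is a minimal illustrative sketch of this dropout-and-aggregate idea, not the authors' implementation; the toy graph, the mean-aggregation layer, and all parameter values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 4-node toy graph (adjacency matrix) with random node features,
# and the weight matrix of a single hypothetical GNN layer.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))   # node features
W = rng.normal(size=(3, 3))   # layer weights

def mp_layer(A, X, W, keep):
    """One mean-aggregation message-passing step over the kept nodes."""
    Ak = A * np.outer(keep, keep)                  # remove edges touching dropped nodes
    deg = np.maximum(Ak.sum(axis=1, keepdims=True), 1.0)
    return np.tanh((Ak @ X) / deg @ W)

p, runs = 0.2, 8
outs = []
for _ in range(runs):
    keep = (rng.random(4) >= p).astype(float)      # each node survives w.p. 1 - p
    outs.append(mp_layer(A, X, W, keep))

H = np.mean(outs, axis=0)                          # aggregate over dropout runs
print(H.shape)                                     # (4, 3)
```

Each run sees a slightly different graph, so nodes whose unperturbed neighborhoods look identical can become distinguishable once their dropout-perturbed neighborhoods are averaged.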

Biography:
Roger Wattenhofer is a full professor at the Information Technology and Electrical Engineering Department, ETH Zurich, Switzerland. He received his doctorate in Computer Science from ETH Zurich. He also worked for several years at Microsoft Research in Redmond, Washington; at Brown University in Providence, Rhode Island; and at Macquarie University in Sydney, Australia. Roger Wattenhofer’s research interests include a variety of algorithmic and systems aspects of computer science and information technology, e.g., distributed systems, positioning systems, wireless networks, mobile systems, social networks, financial networks, and deep neural networks. He publishes in different communities: distributed computing (e.g., PODC, SPAA, DISC), networking and systems (e.g., SIGCOMM, SenSys, IPSN, OSDI, MobiCom), algorithmic theory (e.g., STOC, FOCS, SODA, ICALP), and more recently also machine learning (e.g., NeurIPS, ICLR, ACL, AAAI). His work has received multiple awards, e.g., the Prize for Innovation in Distributed Computing for his work on Distributed Approximation. He published the book “Blockchain Science: Distributed Ledger Technology”, which has been translated into Chinese, Korean, and Vietnamese.

Preserving Hyperproperties when Using Concurrent Objects

Prof. Hagit Attiya

Technion, Israel

Abstract:

Linearizability, a consistency condition for concurrent objects, is known to preserve trace properties. This suffices for modular usage of concurrent objects in applications, deriving their safety properties from the abstract objects they implement. However, other desirable properties, like average complexity and information leakage, are not trace properties. These *hyperproperties* are not preserved by linearizable concurrent objects, especially when randomization is used. This talk will discuss formal ways to specify concurrent objects that preserve hyperproperties, and their relation to verification methods like forward / backward simulation. We will show that certain concurrent objects cannot satisfy such specifications, and describe ways to mitigate these limitations.

Biography:
Hagit Attiya is a professor of Computer Science at the Technion, Israel Institute of Technology, where she holds the Harry W. Labov and Charlotte Ullman Labov Academic Chair. She is the editor-in-chief of Springer’s journal Distributed Computing. She won the 2011 Edsger W. Dijkstra Prize in Distributed Computing and is a Fellow of the ACM. Attiya received all her academic degrees, in Computer Science, from the Hebrew University of Jerusalem, and was a post-doctoral fellow at MIT.

Role of AI in the COVID-19 Pandemic

Prof. U. Deva Priyakumar

IIIT Hyderabad, India

Abstract:

The clinical course of coronavirus disease 2019 (COVID-19) infection is highly variable, with the vast majority recovering uneventfully but a small fraction progressing to severe disease and death. Appropriate and timely supportive care can reduce mortality, and it is critical to evolve better patient risk stratification based on simple clinical data, so as to perform effective triage when the healthcare infrastructure is strained. It is also important to understand how new mutations of the SARS-CoV-2 virus influence disease severity and mortality. In parallel, efforts towards the development of drugs and vaccines are essential for a return to normalcy. In this talk, I will discuss how modern machine learning methods have played a decisive role in different aspects of the response to COVID-19. Specific examples of the use of ML methods for risk stratification, mortality prediction, host prediction, mutation-disease severity association prediction, and drug discovery will be presented.

Biography:
Deva is currently a Professor at the International Institute of Information Technology, Hyderabad, where he heads the Center for Computational Natural Sciences and Bioinformatics. He is currently the Academic Head of IHub-Data, a Technology Innovation Hub on data-driven technologies. His research interests include using computational chemistry tools to investigate chemical and biological systems and processes, and applying modern artificial intelligence/machine learning techniques to molecular/drug design and healthcare. He has been the recipient of awards such as the Chemical Research Society of India Medal, the Indian National Science Academy Young Scientist Medal, the JSPS Invitation Fellowship, and the Innovative Young Biotechnologist Award.

Convergence, Consistency and Adaptiveness in Parallel Stochastic Gradient Descent

Prof. Philippas Tsigas

Chalmers University of Technology, Sweden

Abstract:

Stochastic Gradient Descent (SGD) is the backbone of the majority of learning algorithms used in industry and research. SGD is an iterative optimization procedure that typically requires several passes through the entire dataset in order to converge to a solution of sufficient quality. The SGD process is not trivial to parallelize, since each iteration requires the result of the previous one. Asynchronous parallel variants of SGD have received particular interest in the recent literature due to their improved scalability, which stems from reduced coordination and, consequently, no waiting time. However, asynchrony implies inherent challenges in understanding the execution and convergence criteria of SGD. In this talk, I will describe some of our recent work on parallel SGD that studies, from several perspectives, the impact of synchronization, consistency, staleness, and parallel-aware adaptiveness on overall convergence.
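The staleness that asynchrony introduces can be made concrete with a tiny simulation. The sketch below (an illustrative toy, not from the talk) applies each SGD update using a parameter snapshot that is `tau` steps old, mimicking workers that read the model, compute a gradient without coordination, and apply it later; the objective, learning rate, and delay are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([3.0, -2.0])

def grad(w, noise=0.1):
    """Stochastic gradient of f(w) = ||w - target||^2 / 2."""
    return (w - target) + noise * rng.normal(size=w.shape)

w = np.zeros(2)
history = [w.copy()]
tau, lr, steps = 4, 0.05, 400
for t in range(steps):
    stale = history[max(0, len(history) - 1 - tau)]   # tau-step-old snapshot
    w = w - lr * grad(stale)                          # apply a stale gradient
    history.append(w.copy())

print(np.round(w, 2))   # approximately [ 3. -2.] despite the staleness
```

With a small enough step size the iterates still converge despite the delay; increasing `tau` or `lr` in this toy quickly shows how staleness degrades, and can eventually destroy, convergence.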

Biography:

Philippas Tsigas is currently a Professor at Chalmers University of Technology, Sweden. Prof. Tsigas received the B.Sc. degree in mathematics and the Ph.D. degree in computer engineering and informatics from the University of Patras, Greece. He has also held positions at the National Research Institute for Mathematics and Computer Science (CWI), Amsterdam, The Netherlands; the Max Planck Institute for Computer Science, Saarbrücken, Germany; and Uppsala University, Sweden. His research interests include concurrent data structures and algorithms for multiprocessor and many-core systems, power-aware computing, fault-tolerant computing, autonomic computing, and scalable data streaming. He is a co-recipient of best paper awards at IEEE IPDPS 2003 and 2021, ACM DEBS 2017, and ACM SNS 2012. He publishes in conferences that include ACM CHI, IEEE InfoVis, ACM ITiCSE, ACM PODC, DISC, OPODIS, ACM SPAA, IEEE IPDPS, ACM DEBS, and ESA.

Reinforcement Learning and Human-Agent Teaming: Challenges and Opportunities

Prof. Matthew E. Taylor

Univ. of Alberta, Canada

Abstract:

Reinforcement learning has had many successes, from games to stock trading and helicopter tricks. However, significant amounts of time and/or data can be required to reach acceptable performance. If agents or robots are to be deployed in real-world environments, it is critical that our algorithms take advantage of existing programs, controllers, and know-how. This talk will discuss a selection of work that lets reinforcement learning agents not only learn from the environment, but also leverage demonstrations, feedback, and advice from existing imperfect knowledge sources, including humans. Furthermore, we will describe how humans and RL systems can work together to achieve results that neither could achieve on its own, and what this means for the future of automated intelligence.

Biography:

Matt received his Ph.D. in artificial intelligence in 2008. After multiple positions in academia and industry, he is now an Associate Professor of Computing Science at the University of Alberta, where he directs the Intelligent Robot Learning Lab and is a Fellow and Fellow-in-Residence at the Alberta Machine Intelligence Institute (Amii). His current research interests include fundamental improvements to reinforcement learning, applying reinforcement learning to real-world problems, and human-AI interaction.

Some advanced topics in Explainable AI Planning (XAIP)

Prof. Michael Cashmore

University of Strathclyde, Glasgow, UK

Abstract:

Automated planning can handle increasingly complex applications, but can produce unsatisfactory results when the goal and metric provided in its model do not match the actual expectations and preferences of those using the tool. This can be ameliorated by including methods for explainable planning (XAIP), which reveal the reasons for the automated planner’s decisions and provide more in-depth interaction with the planner. This talk describes two recent pieces of work in XAIP. First, plan exploration through iterative model restriction, in which contrastive questions are used to build a tree of solutions to a planning problem; through this dialogue with the system, the user better understands the underlying problem and the choices made by the automated planner. Second, strong controllability analysis of probabilistic temporal networks through solving a joint chance-constrained optimisation problem; the result of the analysis is a Pareto-optimal front that illustrates the trade-offs between cost and risk for a given plan.

Biography:

Dr Michael Cashmore is a Chancellor’s Fellow (Lecturer) in the Department of Computer Science and Informatics at the University of Strathclyde, Glasgow. He received his doctorate from Strathclyde before working for several years at King’s College London in the Human-AI-Teaming group. His research focuses on domain-independent task planning for autonomous systems, with an emphasis on integrated planning and execution, particularly for autonomous robotics. He currently leads the Strathclyde Center for Doctoral Training in Explainable AI for Industrial Decision Support, exploring the application of AI alongside humans across a variety of domains, such as telecommunications, satellite mission planning, nuclear engineering, and health.