Greetings,
This is a friendly reminder of the upcoming
CS Colloquia at ETH Zurich
=========================================================================
When: Tuesday, 5 March, 08:30h
Where: ETH Zürich, HG E 3
Speaker: Nicolas Flammarion
Title: Optimization and Monte Carlo Sampling in Large Scale Machine Learning
Abstract: Optimization algorithms and Monte Carlo sampling algorithms have provided the computational foundations for the rapid growth in applications of machine learning in recent years. In this talk we will see how the interplay between gradients, stochasticity, and acceleration can be exploited to obtain fast and efficient statistical procedures.
To begin, we present the first algorithm that achieves jointly optimal prediction rates in terms of both computational speed and statistical efficiency. This new algorithm is based on combining averaging and acceleration.
Second, we show how sampling algorithms can be seen as optimization algorithms in the space of measures. Using this perspective, we interpret the underdamped Langevin algorithm as performing accelerated gradient descent on the KL divergence. This perspective allows us to obtain faster convergence rates for non-convex sampling problems.
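For readers unfamiliar with the underdamped Langevin algorithm mentioned in the abstract, a minimal sketch follows. This is a generic Euler-Maruyama discretization for sampling from a density proportional to exp(-U(x)); it is not the speaker's specific method or analysis, and the step size, friction coefficient, and target (a standard Gaussian) are illustrative choices.

```python
import numpy as np

def underdamped_langevin(grad_U, x0, steps=20000, step=1e-2, gamma=2.0, seed=0):
    """Euler-Maruyama discretization of underdamped Langevin dynamics.

    Simulates the SDE
        dx = v dt
        dv = (-gamma * v - grad_U(x)) dt + sqrt(2 * gamma) dW,
    whose stationary x-marginal is proportional to exp(-U(x)).
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    trajectory = []
    for _ in range(steps):
        # Velocity update: friction, potential gradient, and Gaussian noise.
        v += step * (-gamma * v - grad_U(x)) \
             + np.sqrt(2.0 * gamma * step) * rng.standard_normal(x.shape)
        # Position update driven by the velocity.
        x += step * v
        trajectory.append(x.copy())
    return np.array(trajectory)

# Illustrative target: standard Gaussian, U(x) = x^2 / 2, so grad_U(x) = x.
samples = underdamped_langevin(lambda x: x, x0=[3.0])
```

After discarding an initial burn-in, the empirical mean and variance of the chain should be close to those of the standard Gaussian target.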
Short Bio: Nicolas Flammarion is a postdoctoral fellow in computer science at UC Berkeley, hosted by Michael I. Jordan. He received his PhD in 2017 from École Normale Supérieure in Paris, where he was advised by Alexandre d'Aspremont and Francis Bach. In 2018 he received the prize of the Fondation Mathématique Jacques Hadamard for the best PhD thesis in the field of optimization. His research focuses primarily on learning problems at the interface of machine learning and optimization.
=========================================================================
When: Tuesday, 5 March, 09:30h
Where: ETH Zürich, HG E 3
Speaker: Avishay Tal
Title: Sparse Polynomial Approximations and Their Applications to Quantum Advantage, Parallel Computation, and Pseudorandomness
Abstract: This talk is motivated by three (seemingly unrelated) questions:
1. For which tasks do quantum algorithms outperform classical computation?
2. Does parallel computing always offer a speed-up, or are some tasks inherently sequential?
3. Do randomized algorithms have deterministic counterparts with similar memory footprint?
We make progress on all three questions by exploiting a common phenomenon at the core of their analysis: in all cases, the studied computational devices can be well-approximated by sparse multivariate polynomials. As an application towards the first question above, we show that relative to an oracle, quantum algorithms can efficiently solve tasks that are infeasible for the polynomial hierarchy (which captures P, NP, coNP, and their generalizations). This settles an open problem raised by Bernstein and Vazirani in 1993.
Looking forward, we conjecture that several other computational devices can be well-approximated by sparse multivariate polynomials.
Proving our conjecture would resolve several major open questions in computational complexity that have remained open since the 1980s.
Short Bio: Avishay Tal is a Motwani Postdoctoral Research Fellow at Stanford University, hosted by Omer Reingold. Prior to that, he was a postdoctoral researcher at the Institute for Advanced Study, hosted by Avi Wigderson. He obtained his Ph.D. in 2015 from the Weizmann Institute of Science, under the guidance of Ran Raz. His research interests include computational complexity theory, analysis of Boolean functions, quantum computing, pseudorandomness, and learning theory.
=========================================================================
When: Tuesday, 5 March, 10:30h
Where: ETH Zürich, HG E 3
Speaker: Vasiliki Kalavri
Title: Long-running data stream analytics at scale
Abstract: While the early wave of big data technologies focused largely on accommodating the scale and complexity of available information, emerging data-driven applications additionally require low-latency and continuously updated results. Stream processing renounces the traditional view of static data inputs and instead advocates for long-running analysis of possibly unbounded event streams.
In this talk, I discuss fundamental research challenges in building next-generation streaming systems capable of automatic re-configuration without downtime. I will share recent work on an automatic scaling controller which makes fast and accurate scaling decisions via lightweight runtime instrumentation and a comprehensive dataflow performance model. I will conclude with open problems and future directions in automatic, long-running operation of streaming computations and large-scale continuous data analytics in general.
Short Bio: Vasiliki (Vasia) Kalavri is a postdoctoral fellow at the Systems Group, ETH Zurich, working on distributed stream processing and large-scale graph analytics. She has a joint PhD degree from KTH, Stockholm, and UCLouvain, Belgium, where she was an EMJD-DC fellow. Her PhD thesis received the IBM Innovation Award 2017. Vasia is a committer and PMC member of Apache Flink.
=========================================================================
When: Tuesday, 5 March, 11:30h
Where: ETH Zürich, HG E 3
Speaker: Malte Schwarzkopf
Title: New Abstractions for High-Performance Datacenter Applications
Abstract: Developing high-performance datacenter applications is complex and time-consuming today, as developers must understand and correctly implement subtle interactions between different backend systems. I describe an approach that redesigns core datacenter systems around new abstractions to substantially reduce their complexity while improving performance. This saves expensive developer time, uses datacenter servers more efficiently, and can enable new, previously impossible systems and applications.
I illustrate the impact of such redesigns with the example of Noria, a new system that recasts web application backends (i.e., databases and caches) as a streaming dataflow computation based on a new abstraction of partial state. Noria's _partially-stateful dataflow_ brings classic databases' familiar query flexibility to scalable dataflow systems, simplifying applications and improving the backend's efficiency. For example, Noria increases the request load handled by a single server by 5-70x compared to state-of-the-art backends.
Additional new abstractions from my research increase the efficiency of other datacenter systems (e.g., cluster schedulers), or enable new kinds of systems that, for example, may help protect user data against exposure through application bugs.
Short Bio: Malte Schwarzkopf is a postdoc at MIT CSAIL, where he is a member of the Parallel and Distributed Operating Systems (PDOS) group.
In his research, Malte designs and builds systems that aim to be both efficient and easy to use, and some of these systems have already impacted industry practice. Malte received both his B.A. and Ph.D. from the University of Cambridge, and his research has won an NSDI Best Paper Award and a EuroSys Best Student Paper Award.
=========================================================================
ETH Zürich | D-INFK
Universitätstrasse 6 | 8092 Zürich | Switzerland