BEMACS Talks
Since its inception, this program has invited outstanding guest speakers from academia and the business world to provide insight on various topics related to Data Science. Starting this year, these talks have been divided into two separate branches: BEMACS Lectures and BEMACS Networking.
BEMACS Lectures
This series features presentations by distinguished academics from around the world, who describe their research and its impact on science and society in an accessible way. BEMACS Lecturers include Sir David Spiegelhalter (former Winton Professor of the Public Understanding of Risk at Cambridge), who talks about “trustworthiness” in the context of Data Science, Alessio Figalli (Fields Medal 2018, ETH Zurich), who shows applications of optimal transport from nature to Machine Learning, and David C. Parkes (Harvard), who discusses Deep Learning for economic discovery.
Upcoming BEMACS Lecture
Sabina Leonelli Professor of Philosophy and History of Science and Technology, Technical University of Munich
Misinformation is widely regarded as a fundamental problem for democratic societies, insofar as it undermines the very idea of rational deliberation based on shared interpretations of the available evidence. The critical impact of misinformation depends on its dissemination through digital technologies and media, and the extent to which it comes to permeate AI-powered inferential processes. I will discuss the link between the spread of misinformation and the quest for “convenient” AI to automate processes of research and validation, noting the challenges raised by a technological system that privileges productivity and efficiency over reliability, and the threat this poses in turn to expectations that AI will generate trustworthy knowledge. The quest for convenient technological solutions is ever more alluring in a world that often feels hard for humans to navigate, and yet it does not necessarily result in the creation of reliable knowledge – and can in fact instigate rushed or misleading interpretations of available evidence. As an alternative to convenient models of AI-powered research, I propose a framework for understanding and developing digital technologies to facilitate research, which I label "environmental intelligence" (EI). This framework is grounded in a deep appreciation for: (1) the planetary ecology within which technology needs to operate, (2) the need to support such an ecology in the long term and (3) the recognition that misinformation can be the outcome of misguided interpretation as easily as the spread of false facts. In closing, I will discuss how such a framework can spur innovation that serves the needs of a plural, unequal and divided democratic society, while also tackling the existential threat posed by the climate emergency.
Past BEMACS Lectures
Constantinos Daskalakis Avanessians Professor of Electrical Engineering and Computer Science, MIT
31 May 2024
Deep Learning has delivered significant advances in learning challenges across various data modalities including speech, image and language, much of that progress being fueled by the success of gradient descent-based optimization methods in computing good solutions to non-convex optimization problems. From playing complex board games at super-human level to robustifying Deep Neural Net-based classifiers against adversarial attacks, training deep generative models, and training DNN-based models to interact with each other and with humans, many outstanding challenges in Deep Learning now lie at its interface with Game Theory. Since these applications involve multiple agents, the role of single-objective optimization is played by equilibrium computation. On this front, however, the Deep Learning toolkit has been less successful. Gradient-descent based methods struggle to converge, let alone lead to good solutions, and due to non-convexities, standard game-theoretic equilibrium concepts fail to exist. So how does one train DNNs to be strategic? And what is even the goal of the training? We shed light on these challenges through a combination of learning-theoretic, complexity-theoretic, game-theoretic and topological techniques, presenting obstacles and opportunities for Deep Learning and Game Theory going forward.
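The convergence difficulty mentioned in the abstract can be seen in a minimal toy example (not from the talk itself): on the bilinear two-player zero-sum game f(x, y) = x·y, whose unique equilibrium is (0, 0), simultaneous gradient descent-ascent spirals outward rather than converging. The step size and iteration count below are illustrative choices.

```python
# Toy illustration: simultaneous gradient descent-ascent on f(x, y) = x * y.
# The min player updates x by gradient descent, the max player updates y by
# gradient ascent. Each joint step multiplies the distance from the
# equilibrium (0, 0) by sqrt(1 + eta^2), so the iterates diverge.
eta = 0.1          # step size (illustrative)
x, y = 1.0, 1.0    # initial strategies
for _ in range(100):
    gx, gy = y, x                        # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy    # simultaneous updates
dist = (x * x + y * y) ** 0.5            # distance from the equilibrium
# dist has grown from sqrt(2) at the start, i.e. no convergence
```

This is the standard cautionary example for why single-objective optimization intuitions fail at equilibrium computation: the vector field of the game rotates around the equilibrium, and naive simultaneous gradient play follows an outward spiral.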
David Spiegelhalter Emeritus Professor of Statistics, Statistical Laboratory, University of Cambridge
19 April 2023
The Covid pandemic has emphasized the key role that statistics play in understanding what is going on in the world. But how do we decide whether to trust the numbers that get put in front of us? Are they being used to inform or persuade? Using a wide range of examples, the lecture looks at the way that statistics can be used to try to persuade audiences to think or act in a certain way, contrasts this with efforts to make communication 'trustworthy', and lists the questions that you should ask whenever you are presented with claims based on numbers. Similar ideas can be used to judge claims made both about algorithms, and by algorithms, whether they are called 'AI' or not. It is argued that 'trustworthiness' should be a vital concern of data science.
Art B. Owen Max H. Stein Professor of Statistics, Stanford University
February 2022
Companies may offer incentives to their best customers and philanthropists may offer scholarships to the strongest students. They can evaluate the impact of these treatments later using a regression discontinuity analysis. Unfortunately, regression discontinuity analyses have high variance. It is possible to get much more statistical efficiency using a tie-breaker design that works by triage: top subjects get the treatment, bottom subjects do not, and those in between have their treatment randomized. Statistical efficiency increases monotonically with the amount of randomization, causing an exploration versus exploitation tradeoff. This holds in a simple two line regression model and also with nonparametric kernel regression based methods. The conclusion is that when it is possible (and ethical) to randomize for a group of subjects, it is wise to do so. Some new work studies optimal design of tie-breakers when the treatment is rare as well as extensions to multiple regression.
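The triage rule described in the abstract can be sketched in a few lines. This is a minimal illustration, not material from the talk: the cutoff values, sample data, and function name are all invented for the example.

```python
# Sketch of a tie-breaker design: subjects above a high cutoff are always
# treated, subjects below a low cutoff never are, and the middle band has
# its treatment randomized. Cutoffs and data are illustrative.
import random

def tie_breaker_assign(scores, low_cut, high_cut, rng):
    """Return a 0/1 treatment indicator for each score via triage."""
    assignment = []
    for s in scores:
        if s >= high_cut:                 # top subjects: always treated
            assignment.append(1)
        elif s < low_cut:                 # bottom subjects: never treated
            assignment.append(0)
        else:                             # middle band: randomized
            assignment.append(rng.randint(0, 1))
    return assignment

rng = random.Random(0)
scores = [rng.uniform(0, 100) for _ in range(20)]
treat = tie_breaker_assign(scores, low_cut=40, high_cut=60, rng=rng)
```

Widening the randomized middle band corresponds to the "more randomization" end of the tradeoff the abstract describes: it raises the statistical efficiency of the later regression analysis, at the cost of sometimes withholding treatment from strong subjects.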
Watch it on YouTube
David C. Parkes George F. Colony Professor of Computer Science, Harvard University
28 October 2021
This talk introduces differentiable representations of economic rules and shows how, together with suitable objectives, architectures, and constraints, they allow the methods of deep learning to discover incentive-aligned mechanisms that approximately optimize various kinds of desiderata. This approach is applied to the design of auctions and two-sided matching mechanisms.
Watch it on YouTube
Alessio Figalli FIM Director and Chaired Professor, ETH Zurich
2 March 2021
We are in a golden age for mathematics. The technology we have now would not exist if there had not been research in 'pure' mathematics. This fact is now clear and many companies are investing more and more in 'fundamental' research. On the other hand, the challenges of today's society are an inexhaustible source of mathematical problems.
In order to exploit the information hidden in data, it is necessary to use quantitative modeling extensively, drawing on disciplines such as mathematics, statistics and computer science, which merge to dig deep into problems and find solutions that are not only robust but often surprising and elegant.
Watch it on YouTube
BEMACS Networking
This is a new series of presentations by leading professionals from the Data Science industry, who present the kind of work their teams carry out and discuss current and future priorities and opportunities.