Riccardo Zecchina, the Man Who Teaches Machines How to Learn
Machine learning is changing our lives. The ability of deep artificial neural networks to learn efficiently is inspiring researchers to think about artificial intelligence in an unprecedented way, not to mention spurring competition among tech giants and the launch of countless startups across the globe. Many agree, though, that deep learning is still a largely empirical field, and that there is an urgent, fundamental need for continued progress in algorithm design and for in-depth theoretical analysis. That is one of the reasons why Riccardo Zecchina's work is relevant. A new arrival at the Department of Decision Sciences, Zecchina works at the intersection of computer science, information theory, statistical physics and computational biology. "Life sciences and social sciences," he says, "are undergoing a major revolution."
Information explosion
The data explosion is setting new challenges and inspiring science to ask new questions. How can we extract significant information efficiently from data? How can we learn and generalize optimally from examples? How can we reconstruct causal models? Computers are now able to recognize objects in cluttered scenes, to process speech and answer questions, to extract relevant features from massive datasets and to play games that require sophisticated forms of strategy. In many applications, artificial intelligence (AI) is reaching abilities comparable to those of human beings, if not better. "In spite of all the hype of the last decades," Zecchina says, "all these data-driven studies and applications were still impossible only ten years ago. The real progress has been triggered by the combined development of novel technologies for data production and acquisition, of more powerful computing platforms and of novel machine learning algorithms. The main tools of AI today are deep artificial neural networks inspired by human neural systems."
Optimization problems
Riccardo Zecchina has made fundamental contributions to the development of basic conceptual and algorithmic schemes for large-scale optimization, scenarios where one needs to solve constraint satisfaction problems with millions, or even tens of millions, of variables. These solutions are now starting to be used in machine learning. "The distinguishing feature of my research activity has been the identification of algorithmic counterparts of the advanced analytical techniques developed in the statistical physics of complex systems. This has led to novel distributed algorithms that have pushed forward the boundaries of optimization and inference problems typically considered intractable." These results have earned Zecchina several international recognitions, the most important being an ERC Advanced Grant for Optimization and inference algorithms from the theory of disordered systems and the 2016 Lars Onsager Prize of the American Physical Society.
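Zecchina's distributed message-passing algorithms, such as survey propagation (cited under "Find out more"), are beyond a short sketch, but a toy example makes the setting concrete. The Python sketch below generates a random 3-SAT instance, the canonical constraint satisfaction problem studied in those papers, and attacks it with WalkSAT, a classic local-search baseline co-developed by Bart Selman, a co-author of the 1999 Nature paper listed below. It is an illustration under assumed parameters, not code from Zecchina's work; all sizes and function names are invented for the example.

import random

def random_ksat(n_vars, n_clauses, k=3, rng=random):
    # A clause is a list of signed literals: +3 means x3, -3 means NOT x3.
    clauses = []
    for _ in range(n_clauses):
        variables = rng.sample(range(1, n_vars + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

def walksat(clauses, n_vars, max_flips=20_000, p=0.5, rng=random):
    # WalkSAT (Selman et al.): repeatedly pick an unsatisfied clause and
    # flip one of its variables, mixing random and greedy moves.
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    def n_unsat():
        return sum(not any(sat(l) for l in c) for c in clauses)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign          # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:
            lit = rng.choice(clause)   # random-walk move
        else:
            # Greedy move: among this clause's variables, flip the one
            # whose flip leaves the fewest unsatisfied clauses overall.
            def cost(l):
                assign[abs(l)] = not assign[abs(l)]
                c = n_unsat()
                assign[abs(l)] = not assign[abs(l)]
                return c
            lit = min(clause, key=cost)
        assign[abs(lit)] = not assign[abs(lit)]
    return None                    # flip budget exhausted

clauses = random_ksat(n_vars=100, n_clauses=350)   # ratio 3.5: satisfiable regime
print("solved" if walksat(clauses, n_vars=100) else "gave up")

At clause-to-variable ratios near 4.27, random 3-SAT undergoes the satisfiability phase transition characterized in the 1999 Nature paper below, and simple local search of this kind begins to fail; that hard regime is precisely where the statistical-physics message-passing methods come into their own.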
Machine learning
For several years now, Riccardo Zecchina has been studying full time the inverse problems of machine learning and data science, which we could roughly define as strategies for inferring models from data. One of the latest results, obtained together with Carlo Baldassi, a new assistant professor at the Department of Decision Sciences, has provided basic analytical and algorithmic insight into the origin of the success of deep learning in large-scale networks. "The hope is that the work done by the Bocconi group will help bring together experts from different disciplines to attack fundamental problems in data science. One key problem that needs to be addressed by future machine learning is unsupervised learning: the capability of modeling the environment and making predictions by observing unlabeled data and acting in it."
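To make the notion of an inverse problem concrete, here is a minimal Python sketch in the spirit of the inverse Ising review cited below (an illustration, not code from the papers). It draws spin configurations from a small Ising model by Gibbs sampling, then reconstructs the couplings from the data alone via the naive mean-field formula, which takes the coupling matrix to be minus the inverse of the connected correlation matrix, off the diagonal. The system size, coupling strength and sample count are assumptions chosen for the example.

import numpy as np

rng = np.random.default_rng(0)

N = 10                                     # number of spins (illustrative)
J_true = rng.normal(0.0, 0.3 / np.sqrt(N), (N, N))
J_true = np.triu(J_true, 1)
J_true = J_true + J_true.T                 # symmetric couplings, zero diagonal

def gibbs_samples(J, n_samples, burn=1000, thin=10):
    # Draw configurations s in {-1,+1}^N from the Ising model with
    # couplings J, one single-spin heat-bath update at a time.
    s = rng.choice([-1, 1], size=N)
    out = []
    for t in range(burn + n_samples * thin):
        i = rng.integers(N)
        h = J[i] @ s                       # local field on spin i (J[i,i] = 0)
        s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * h)) else -1
        if t >= burn and (t - burn) % thin == 0:
            out.append(s.copy())
    return np.array(out)

S = gibbs_samples(J_true, n_samples=5000)

# Inverse step: estimate the couplings from the samples alone.
# Naive mean-field inversion: J ~ -(C^-1) off the diagonal, where C is
# the connected correlation matrix of the observed configurations.
m = S.mean(axis=0)
C = (S - m).T @ (S - m) / len(S)
J_est = -np.linalg.inv(C)
np.fill_diagonal(J_est, 0.0)

print("mean absolute reconstruction error:", np.abs(J_est - J_true).mean())

Naive mean-field is only the simplest rung of the hierarchy of inversion techniques surveyed in the review; for strongly coupled or undersampled systems one would move to pseudo-likelihood or message-passing estimators.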
Find out more
R. Monasson, R. Zecchina, S. Kirkpatrick, B. Selman, L.Troyansky, Determining computational complexity from characteristic 'phase transitions', Nature 400, 1999.
M. Mezard, G. Parisi, R. Zecchina, Analytic and Algorithmic Solution of Random Satisfiability Problems, Science 297, 2002.
A. Braunstein, M. Mezard, R. Zecchina, Survey Propagation: an algorithm for satisfiability, Random Structures and Algorithms 27, 2005.
C. Baldassi, A. Ingrosso, C. Lucibello, L. Saglietti, R. Zecchina, Subdominant dense clusters allow for simple learning and high computational performance in neural networks with discrete synapses, Physical Review Letters 115, 2015.
C. Baldassi, A. Ingrosso, C. Lucibello, L. Saglietti, R. Zecchina, Local entropy as a measure for sampling solutions in constraint satisfaction problems, Journal of Statistical Mechanics: Theory and Experiment, 2016.
C. Baldassi, C. Borgs, J.T. Chayes, A. Ingrosso, C. Lucibello, L. Saglietti, R. Zecchina, Unreasonable effectiveness of learning neural networks: From accessible states and robust ensembles to basic algorithmic schemes, Proceedings of the National Academy of Sciences 113, 2016.
H.C. Nguyen, R. Zecchina, J. Berg, Inverse statistical problems: from the inverse Ising problem to data science, Advances in Physics 66, 2017.