
Not just investment advice

by Claudio Tebaldi, director of the Algorand Fintech Lab, Università Bocconi
Thanks to foundation models, that is, base models pre-trained on large datasets, it is increasingly possible to tailor financial proposals to the characteristics of individual investors. But that is not enough: to be credible, proposals must be explainable, and therefore transparent.

There is a large and established literature on the difficulties that non-professional investors encounter in orienting their financial decisions. Financial literacy initiatives, aimed at disseminating knowledge gained at the academic level so that it reaches even the most distracted investors and the segments of the population most vulnerable to the cost of unaware financial decisions, are by now numerous and institutionalized.

However, the experience gained in these contexts shows that a gap still needs to be filled between the context in which quantitative allocation criteria are formulated in the scientific field and the heterogeneity of the problems and constraints that shape the choices of actual investors.

Consider, for example, commercial robo-advisory services, which automate financial advice for clients: it has been verified that the allocation rules they implement are often determined by a priori investment rules, which respond poorly to the individual characteristics of investors and to the differing contexts in which their choices are made.
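To make the point concrete, here is a minimal sketch, in Python and with entirely hypothetical rules and parameters, of the difference between an a priori allocation rule and one that also conditions on the investor's own characteristics:

```python
def fixed_rule_allocation(age: int) -> dict:
    """Classic '100 minus age' heuristic: the same rule for everyone."""
    equity = (100 - age) / 100
    return {"equity": equity, "bonds": 1 - equity}


def personalized_allocation(age: int, risk_tolerance: float,
                            liquidity_need: float) -> dict:
    """Toy rule that also reacts to risk tolerance (0..1) and to the
    share of wealth the investor may need on short notice (0..1)."""
    equity = (100 - age) / 100 * risk_tolerance * (1 - liquidity_need)
    cash = liquidity_need
    return {"equity": equity, "bonds": 1 - equity - cash, "cash": cash}


print(fixed_rule_allocation(40))
print(personalized_allocation(40, risk_tolerance=0.6, liquidity_need=0.2))
```

The first rule returns the same portfolio for every forty-year-old; the second reacts to risk tolerance and liquidity needs, precisely the kind of heterogeneity that a priori rules miss.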

The possibility of covering this last mile, aligning the actual practice of financial decisions with the prescriptions produced by research, seems within reach for a particular class of learning models: the so-called foundation models, base models that are pre-trained on large datasets. Their use as decision-support tools is a recent development and promises substantial advantages, first among them the possibility of making choices by interacting with the decision-maker, who thereby actively contributes to building the training dataset.
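That feedback loop can be sketched in a few lines; everything below is hypothetical, but it captures the mechanism: each decision, whether the proposal is accepted or adjusted, becomes a new observation for future training:

```python
# Each interaction is recorded as (investor profile, final decision):
# the proposal if it was accepted, the investor's own adjustment otherwise.
training_data: list[tuple[dict, dict]] = []

def record_interaction(profile: dict, proposal: dict, accepted: bool,
                       adjustment: dict | None = None) -> None:
    final_choice = proposal if accepted else adjustment
    if final_choice is not None:
        training_data.append((profile, final_choice))

record_interaction({"age": 40, "risk_tolerance": 0.6},
                   {"equity": 0.5, "bonds": 0.5},
                   accepted=False,
                   adjustment={"equity": 0.3, "bonds": 0.7})
print(training_data)  # one new observation for the next training round
```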

The basic idea is quite simple and in fact follows the logic researchers have pursued to overcome limitations of the same nature. In the scientific field, work has long been carried out on anonymized, homogeneous and internationally harmonized datasets that collect the variables necessary to reconstruct the balance sheets of individuals and households, and that associate them with portfolio allocations and investment choices. This data collection expands the information assets that can be employed to zero in on the factors that underlie individual choices, thus helping to define investment selection criteria that are more consistent with observed investor profiles.
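As a stylized illustration of this workflow, with synthetic data and hypothetical variable names standing in for the harmonized survey variables, one can fit a model that maps household characteristics to the observed equity share and read off the estimated factors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical harmonized survey variables.
X = np.column_stack([
    rng.integers(25, 75, n),    # age
    rng.normal(11.0, 1.0, n),   # log net wealth
    rng.normal(10.0, 0.5, n),   # log income
    rng.integers(0, 4, n),      # number of dependents
])
# Synthetic "observed" equity share, only so the example runs end to end.
y = np.clip(0.9 - 0.008 * X[:, 0] + 0.03 * (X[:, 1] - 11.0)
            + rng.normal(0.0, 0.05, n), 0.0, 1.0)

model = LinearRegression().fit(X, y)
for name, coef in zip(["age", "log_wealth", "log_income", "dependents"],
                      model.coef_):
    print(f"{name:>11}: {coef:+.4f}")
```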

Similarly, pre-trained models can generate scenarios and propose choices according to the criteria that are most likely given the information available in the datasets that collect investor information. The potential advantage is the possibility of actively implementing rationality prescriptions and formulating investment proposals that are better suited to the specific market conditions, constraints and preferences of investors.

The main obstacle to the systematic use of this approach lies in the well-known difficulty of verifying the correctness of self-generated output. The systematic use of artificial intelligence models must always contend with the need to operate according to adequate criteria of verifiability and correctness for the content generated. The scientific challenge is therefore to formulate, together with the financial decision proposals, an adequate explanation of the reasons that motivate them, according to the criteria of so-called XAI, eXplainable Artificial Intelligence. After all, the credibility of an advisor, human or cybernetic, is still built on the same principle: transparency.
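In practice, the XAI requirement means that a proposal should never travel alone. Below is a minimal sketch, with assumed weights and hypothetical feature names, of a recommender that returns each proposal together with the additive contributions that motivate it:

```python
import numpy as np

FEATURES = ["age", "risk_tolerance", "horizon_years"]
WEIGHTS = np.array([-0.005, 0.40, 0.01])  # assumed, for illustration only
BASELINE = 0.45                           # baseline equity share

def propose_with_explanation(profile: dict) -> tuple[float, list[str]]:
    """Return a proposed equity share together with the per-feature
    contributions that motivate it (an additive explanation)."""
    x = np.array([profile[f] for f in FEATURES])
    contributions = WEIGHTS * x
    equity_share = float(np.clip(BASELINE + contributions.sum(), 0.0, 1.0))
    explanation = [f"{f} = {v:g} shifts the equity share by {c:+.3f}"
                   for f, v, c in zip(FEATURES, x, contributions)]
    return equity_share, explanation

share, reasons = propose_with_explanation(
    {"age": 50, "risk_tolerance": 0.7, "horizon_years": 15})
print(f"Proposed equity share: {share:.0%}")
print(*reasons, sep="\n")
```

An additive decomposition like this is the simplest form of explanation; richer model classes require dedicated attribution methods, but the contract is the same: the proposal and its reasons arrive together.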