In September the Joint Committee of the European Supervisory Authorities (ESAs) published a report setting out the conclusions of a survey, conducted over the past two years, on the evolution of 'automation in financial advice' in the securities, banking and insurance sectors.

The ESAs noted that, whilst the phenomenon of automation in financial advice seems to be growing slowly, the overall number of firms and customers involved remains quite limited. They resolved to keep the market under review, with any further formal monitoring exercise deferred until "the development of the market and market risks warrant this work." The main barriers to the growth of automation identified by the various national regulators contributing to the report were both cultural/psychological (including consumers' lack of digital and financial literacy) and regulatory (including the challenge of applying the overlay of existing regulation to new products).

However, despite this recent "wait and see" conclusion, a great deal of paddling is going on below the waterline. Regulators and policy makers are engaging with the challenges presented by artificial intelligence and the use of "Big Data". A review of recent statements concerning (i) Robo-advice and (ii) algorithmic decision-making in products and business models reveals some emerging themes.

Robo-advice

In a 2015 discussion paper, the ESAs identified the following potential risks of automated advice to consumers:

  • having limited access to information and/or limited ability to process that information;
  • flaws in the functioning of the tool due to errors, hacking or manipulation of the algorithm; and
  • legal disputes arising due to an unclear allocation of liability.

In their 2018 Report the ESAs indicated that they still considered these risks live. However, as the current Robo-advice market is limited both in volume and in the nature of the products it covers, they proposed to take a 'wait-and-see' approach. As the market develops and matures, however, it seems certain that the ESAs will return to consider these risks.

National regulators are also now beginning to grapple with such issues.

In the UK, the Financial Conduct Authority's (FCA) Advice Unit was established in 2016 with the twin goals of:

  • providing regulatory feedback to firms that feel they face roadblocks to developing an automated model, helping to clarify perceived ambiguities in the rules via an informal steer; and
  • developing general tools that all firms providing advice to consumers can access.

Several cohorts of firms have now been enrolled with the Advice Unit and, drawing on its experiences, the FCA has developed further guidance and signposts to help such firms understand how the regulatory framework applies in the context of their business models.

In the Netherlands, the AFM has been one of the first national regulators to comment, publishing its view on Robo-advice, which sets out its expectations regarding the quality of the services and the duty of care to customers.

What comes through clearly from all of the national and supra-national feedback to date is that the regulatory requirements for suitability and quality of advice will not be applied differently for automated and person-to-person advice.

For Robo-advisors, as noted by the ESAs, it can be "challenging to correctly profile clients and understand their objectives when there is no face-to-face interaction; and ... ensure that the customer understands both the advice and the features (such as the costs) of the product." There are some extremely interesting recommendations in the AFM's guidance on this point, concerning the use of behavioural tools and the cross-checking of data inputs to identify doubts and inconsistencies "remotely", in the absence of personal human contact. However, person-to-person advisors will also need to take note of emerging best practice in technology-driven business. Where the technology exists to identify inconsistencies between the risk tolerance indications given by clients and their responses to other questions, "I am only a human" may not suffice if latent issues and misunderstandings go unidentified.
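To make the point concrete, the sketch below shows one hypothetical form such cross-checking could take: a rule-based comparison of a client's stated risk tolerance against their other questionnaire answers, flagging conflicts for follow-up rather than proceeding straight to advice. The field names, thresholds and rules are illustrative assumptions, not the AFM's or any firm's actual methodology.

```python
# Illustrative sketch only: a hypothetical consistency check of the kind
# the AFM guidance contemplates. All field names, rules and thresholds
# are assumptions made for this example.

RISK_SCALE = {"low": 1, "medium": 2, "high": 3}

def flag_inconsistencies(answers: dict) -> list[str]:
    """Cross-check a client's stated risk tolerance against other inputs
    and return human-readable flags to resolve before advice is given."""
    flags = []
    stated = RISK_SCALE[answers["stated_risk_tolerance"]]

    # A client claiming a high risk tolerance but unwilling to accept
    # losses is a classic inconsistency worth querying.
    if stated >= 3 and answers["max_acceptable_loss_pct"] < 10:
        flags.append("High stated risk tolerance vs. low acceptable loss")

    # A short horizon sits uneasily with an aggressive stated appetite.
    if stated >= 3 and answers["investment_horizon_years"] < 3:
        flags.append("High stated risk tolerance vs. short horizon")

    # A thin financial buffer suggests limited capacity for loss,
    # whatever the client says about appetite.
    if stated >= 2 and answers["emergency_savings_months"] < 3:
        flags.append("Stated risk appetite vs. thin financial buffer")

    return flags

# Example: a tool should pause and re-query, not proceed to advice.
client = {
    "stated_risk_tolerance": "high",
    "max_acceptable_loss_pct": 5,
    "investment_horizon_years": 2,
    "emergency_savings_months": 1,
}
for flag in flag_inconsistencies(client):
    print(flag)
```

In practice such rules would be far richer and calibrated to the firm's own client base; the design point is simply that each flag triggers a dialogue with the client rather than an automated override.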

Algorithmic decision making

In a very thoughtful speech this summer, Charles Randell, Chairman of the FCA, considered the related topic of algorithmic decision-making. He posited one key challenge:

"How do we make sure that our financial future is not ..........governed by rules set by unaccountable data scientists, and implemented by firms who may not even understand how these algorithms reach their decisions?"

He noted that the advance of:

  • "Big Data", or, more prosaically, the exponential growth in our ability to collect and store data and the ever-increasing amounts of collectable data we produce;
  • the vast improvements in processing power which allow the analysis or "mining" of these Big Data sets for patterns; and
  • the increasing understanding and use of behavioural science, such as the influences on decision-making

creates risks that automated and algorithmic decision-making and sophisticated customer profiling could exacerbate social exclusion and worsen access to financial services for certain groups.

This particular risk was also flagged in March by the European Banking Authority (EBA) in its FinTech Roadmap (in which the EBA set out its priorities for 2018/2019 and the establishment of a "FinTech Knowledge Hub" to enhance knowledge sharing on regulatory and supervisory approaches). Amongst the projected action points for 2019, the EBA announced its intention to consider financial exclusion in the context of big data algorithms.

In July, the European Insurance and Occupational Pensions Authority (EIOPA) launched a European Union-wide thematic review on Big Data. The stated purpose of this review is to gather empirical evidence on the use of Big Data by insurance undertakings and intermediaries along the whole insurance value chain, i.e. in pricing and underwriting, product development and claims management, as well as in sales and marketing. The review will seek to analyse the potential benefits and risks for the industry and consumers, considering data quality issues as well as impacts on financial inclusion and the fair treatment of consumers through consumer profiling.

Emerging Themes

Looking at these various sources, a number of consistent themes are emerging. Regulators are looking:

  • for firms to adopt clear ethical principles to encompass the design, operation and outcomes of automated decision-making. Ethical decisions will be needed to determine what data may be appropriately used and for what purpose and, in addition, firms will need a strong focus on checking, and understanding, outcomes to enable them to identify unintended consequences (a simple illustration of such an outcome check appears after this list). As Charles Randell has noted, "people, not machines, have to make the judgment as to whether ... outcomes are ethically acceptable - and ensure that they don't just automate and intensify unacceptable human biases that created the data of the past";
  • to ensure accountability of firms and also of appropriate individuals - "the computer did it" is not going to gain any sympathy in regulatory circles. Firms will need to consider where responsibilities and reporting lines will lie and what skillsets they will need for effective supervision;
  • to see effective communication with customers on data use in a manner which is "digestible"; and
  • carefully at the implications of "individualised" pricing and risk assessment where consumers could be disadvantaged (such as by dual pricing or by losing the benefit of "pooling of risk").
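By way of illustration only, the sketch below shows one hypothetical form the outcome-checking mentioned above could take: comparing the approval rates of an automated decision process across customer groups and flagging material disparities for human review. The group labels, data and the 80% ratio threshold are assumptions made for the example, not a regulatory standard.

```python
# Illustrative sketch only: a hypothetical outcome check on automated
# decisions. Groups, data and the 0.8 threshold are assumed for the
# example; any real check would reflect a firm's own ethical principles.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(decisions, ratio_threshold=0.8):
    """Flag groups whose approval rate falls well below the best-served
    group's rate: a prompt for human review, not an automated verdict."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < ratio_threshold * best}

# Example: group_b is approved half the time, group_a 80% of the time.
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 50 + [("group_b", False)] * 50)
print(flag_disparities(decisions))  # {'group_b': 0.5} -> escalate
```

The design point, echoing Randell, is that the check only surfaces a disparity; people, not the machine, must then judge whether the outcome is ethically acceptable.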

In order to grapple fully with such issues, firms will need to look beyond purely technical and computing skills. It may well be that the future will be bright for the philosophers, statisticians and social scientists of tomorrow.