
AI: Invisible or not…

As AI becomes increasingly used in the advice process


At our Wealth Ops: Live! event in October, we had a fascinating discussion with Intelliflo CEO Nick Eatock and financial planner Channelle Pattinson regarding the future impact of AI on wealth operations and customer servicing, writes Carl Woodward, joint founder and director of Simplify Consulting.

During the discussion, a question was asked about the ethics of using AI and how transparent organisations should be in declaring to their customers that some form of non-human intelligence has intervened in a process.

As we see more and more cases of AI being used – in the advice process, in webchats, in understanding customer behaviour patterns and in speaking with clients in contact centres – the ethical questions around transparency must be taken seriously by companies.

Trust

Why are the role of AI and the risks associated with its use so much more relevant in financial services, and in particular financial advice, than in other industries?

The advice process requires a degree of trust. It is a long-term relationship that demands both parties are clear on a set of outcomes that can materially impact people’s lives and livelihoods.

If AI is to play a role, then that role is in a world where trust sits at the heart of the relationships that are formed. These aren’t transactional interactions; they are far more expansive and demand greater longevity.

If those human relationships are to be replaced by AI, then it is paramount that an organisation can build trust through evidence and transparency.

Our whitepaper – Making Waves: Exploring the Future of Wealth Operations – drew on research among more than 250 wealth customers, exploring how they are engaged and how their data is used. Overwhelmingly, there was little support for the virtues of chatbots and little interest in customer data being used more widely to link life events or promote targeted selling (much of which already exists today in the form of internet cookies).

With a degree of distrust in place, it is reasonable to consider that as adoption of AI capabilities broadens, the question of transparency becomes more important.

What does transparency mean in practice?

1. Being able to explain to a customer if and when AI has been used to inform an outcome;
2. Being able to explain to a customer how AI has been used, the data and insight it has leveraged and the process it has followed to support that outcome; and
3. Being able to demonstrate fairness, consistency and appropriateness in how AI has reached a conclusion and whether a human would have arrived at a different outcome. Avoiding any accusation of bias in the AI design is fundamental here.
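
To make this concrete, here is a minimal sketch – in Python, with entirely hypothetical names; no particular provider, product or industry standard is implied – of the kind of record a firm might keep for each AI-assisted decision so that the three points above can be answered later:

```python
# Hypothetical sketch: one way a provider might record each AI-assisted
# decision so it can later be explained to a customer. All field names
# are illustrative assumptions, not an industry standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    customer_id: str
    process: str               # e.g. "suitability_check", "webchat_triage"
    model_version: str         # which model/configuration produced the outcome
    inputs_used: list[str]     # data points the model was given (point 2)
    outcome: str               # the conclusion the AI reached (point 1)
    rationale: str             # plain-language explanation for the customer
    human_reviewed: bool       # whether a person checked the result (point 3)
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
```

Whatever form it takes, capturing something like this at the point of decision is far easier than trying to reconstruct it after a customer complains.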

Demonstrating use of AI

If AI is adopted to help accelerate outcomes, how do companies ensure that, when a customer asks for clarification and evidence of the process that has been followed, it can be replicated and demonstrated clearly and efficiently?

Providers seeking to offer full transparency must consider the unintended consequences and build in routines, procedures and processes that enable the inner workings to be explained in a customer-centric way. This is crucial to building trust. Customers will want to know that AI has been used in a safe and consistent way, and providers must be able to prove that on demand.
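
Continuing the hypothetical sketch above, "proving it on demand" might amount to little more than replaying those stored records in plain language – again, purely illustrative:

```python
# Hypothetical "on demand" lookup: turn stored decision records into a
# customer-facing explanation. Builds on the AIDecisionRecord sketch above.
def explain_ai_usage(records: list[AIDecisionRecord], customer_id: str) -> str:
    relevant = [r for r in records if r.customer_id == customer_id]
    if not relevant:
        return "No AI was used in decisions on your account."
    lines = []
    for r in relevant:
        lines.append(
            f"On {r.timestamp:%d %b %Y}, AI ({r.model_version}) was used in "
            f"'{r.process}' using: {', '.join(r.inputs_used)}. "
            f"Outcome: {r.outcome}. Reason: {r.rationale}. "
            f"Human reviewed: {'yes' if r.human_reviewed else 'no'}."
        )
    return "\n".join(lines)
```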

This all assumes that providers elect to be transparent around the use of AI at all. There may be a nuance to this and not every provider will reach the same conclusion for every use. However, awareness of potential consumer dissatisfaction or mistrust is important, especially in the context of technological innovation.

For example, advisers today are not often asked by their clients to evidence their qualifications to demonstrate competency, and are rarely required to reveal their historic recommendations to show that their investment selections have delivered the returns expected. Customers rarely ask because human interaction typically starts from a position of trust, often established via a personal recommendation or referral.

When a machine is brought into the equation, for some clients the barriers immediately go up and trust becomes a key area of focus. The faceless appearance of AI means it cannot defend its decisions and outcomes, and human intervention will often be required to justify its usage.

Today we see examples of chatbot and webchat interactions being presented as avatars, but we also see examples where there is no attempt to disclose that the interaction is with a computer. The conversation often ultimately gives it away; but the question is, does it matter? Does a customer care how their question is answered, and by whom, if it is answered quickly and efficiently?

As AI ‘learns’ and builds a repository of activity, history and information it can refer to, it will broaden its reach and become an increasing part of day-to-day living – whether the consumer wants it or not, or even knows about it. Providers must tread a fine ethical line and make decisions about transparency consistently, in the interests of serving one of their most important assets: their customers.

This article was written for International Adviser by Carl Woodward, joint founder and director of Simplify Consulting.
