This tweet from IFA Claire Walsh is something I commonly hear from planners.
Could a computer ever help someone understand their needs as well as a human adviser?
Chris Budd says no. I disagree, although it will take some time and has little to do with artificial intelligence.
If you subscribe to bouffant-haired Frenchman Jacques Lacan’s theory of how and why therapy works, and apply this to financial planning and business coaching, it becomes clear what is required.
Lacan, rogue analyst and one of the most influential thinkers of the 20th century, uses ‘person-who-knows’ to describe the relationship of the analyst to the analysand (or client). Put (very) simply, for the process of analysis to work, the client must assume that the analyst (or adviser) ‘knows’ the answer. This allows the client to explore and uncover the answers for themselves. For the client to understand themselves, they must first think that the person they sit opposite already has the answers but refuses to share them just yet.
Sounds weird, but the skills and techniques for establishing this position of authority are very similar in good business coaching and lifestyle planning: use of silence, refusal to validate emotions and opinions through non-verbal gestures, subtraction of the coach’s personal values from the discussion, and so on. The planner is the ‘person-who-knows’, and given the right amount of space, the client will keep talking, way beyond initial stock responses, until they come up with what sounds like an authentic answer.
The accuracy of the answer is, I would argue, impossible to measure by either a human or a computer. It is entirely subjective. We can only measure its efficacy by the client’s commitment to it as truth.
Could a machine ever replicate this? The questions designed to disturb respondents out of standard responses (“What would you do if you had 5 years left to live?” etc) could be fired out by a computer. Maybe you’d even be more honest with a computer than a real person. Psychometric testing already mandates no human involvement in the questioning. At the moment, however, would you really believe that a computer knows the real answer?
I’ve seen rooms full of people duped by online personality profiling tools which purport to analyse a person’s social media outputs and assess their personality. A couple of quick examples and the room is awed. 10 minutes playing with it afterwards and the superficiality of the tool is easily exposed. They are marketing gimmicks. However, the growth of genuine artificial intelligence is likely to be exponential in the years to come.
Crucially, if we accept Lacan’s model, the ability of a robot (or human) to coach has little to do with its own analytical or deductive powers. At no point does the coach know the client’s true desires. Success lies in the ability to make the client believe that they do. This perceived omniscience then allows the client to generate and believe in their own answers.
It’s a Wizard of Oz machine.
The success of the machine known as robo-advice is harder for us inside the financial services industry to believe in: we peeled back the curtain long ago and saw an actuary on a bicycle working for a life company. We don’t believe.
That doesn’t make it impossible. But humans and robos alike have yet to realise that the answer doesn’t exist in their algorithms, but in the relationship between an investor and a person-(or computer-)who-knows.