Exo Investing Blog

Empathy within AI: Yolt’s Unthink Tank review

Last week, Nick and I had the privilege of heading across to Central London’s Somerset House for Yolt’s latest “Unthink Tank.” This time, technologist and expert on all things AI Pete Trainor (@petetrainor) was speaking on the future of AI in fintech, and he managed to squeeze in a lesson on its past, too.

Pete has been speaking around the world on topics surrounding AI research and development, and more specifically on the psychological effect it has on its audience. More than that, he’s joined the dots between how we build our AI programs and who, or what, they truly benefit.

From the start, there was a sense of relief for us both as part of the Exo Investing team, and I’ll explain why. When you’re made fully aware of the way our data is repackaged, traded and used to develop any number of AI programs, you begin to appreciate the importance of channelling this software into improving our relationship with the world around us, our social connections, and even our mental wellbeing. Our data has been harnessed for everything from training driverless cars to making sure Barbie dolls remember our children’s favourite foods. Our digital footprints begin before we’ve even learnt to walk.

So, the reason we were relieved was simple: our algorithms lead to something that is actioned by Exo, unlike a credit and loans service or a budgeting app. We don’t raise the same ethical concerns about what is and isn’t being revealed to the customer, or about if, or when, an intervention should happen. Banks, for example, could dig into individual spending habits and consider flagging certain issues, which brings us onto Pete’s later point around the ethics of surveillance. Luckily, we benefit from using sophisticated AI purely to build unique and robust investment portfolios for our clients. When you consider that this is just one small dimension of what AI can do, you begin to realise how pervasive it could be, and the responsibility developers need to take when building it as a result.

Pete covered the history of AI too, and it’s remarkable just how long AI has been around, despite the stereotypical perceptions often drifting into the sci-fi genre. AI as a functional tool first arrived in the 1940s with the earliest, primitive computers. Work continued into the 1960s, with the central motivation being to develop something like an electronic brain. By 1997, AI had beaten a chess champion at his own game. And Moore’s law, the main historical projection for computing advancement, ended in 2016.

Today, we see AI in a place where it can enter a process known as “deep learning.” Deep learning models can now challenge human accuracy in image recognition and recreational competition. The beauty of AI is that it’s never one-dimensional or uniformly applied. At Exo, for example, the exact preferences and situation of each investor allow us to produce diverse portfolios that are as unique as the investors themselves, while we manage risk at an unprecedented level. This multi-layered, advanced processing has enormous potential, with programs like Corti (“AI that saves lives”) and Pete’s own project, US AI, changing the face of how we look out for each other in the modern world.

The challenges of AI as a concept, as Pete pointed out in the sobering finale of his talk, truly arise in the world of “support vs. surveillance”: the idea of monitoring an at-risk individual’s purchases, posts and locations in the name of their wellbeing. In Pete’s words: “Of the 100 people in this room, is the privacy of 99 worth the life of 1?”

You might immediately answer, “yes, of course.” Pete used a powerful, personal story to make the point. People’s habits, including those enacted on their phones, can be incredibly insightful indicators of the risk they might pose to themselves, for example. Where someone’s crippling debts might otherwise remain hidden from their family and friends until it’s too late, could their data be used, hypothetically, to help support services intervene and save their lives before an issue manifests and grows? The arguments are hugely compelling.

It’s only when you look at the other side of the coin that the ethical quandaries become clear. Consider, for a moment, the feeling you’d have if someone contacted you, concerned by your day-to-day spending and online activity, because their computer had told them you required intervention. It all has a whiff of Big Brother about it, along with a sense that building these advanced AIs could produce consequences never originally intended. Judging by the expressions around the room, the dilemma wasn’t lost on anyone.

If Pete’s talk taught Nick and me anything, it was that there is still so much more AI can do to benefit people in a positive, and material, way. Its future is very much in our hands. To quote my favourite superhero movie uncle: “With great power comes great responsibility.”

And that’s me, signing off,

Tom (Your friendly neighbourhood Content Writer)