Could AI 'trading bots' transform the world of investing?



Search for "AI investing" online, and you'll be flooded with endless offers to let artificial intelligence manage your money.

I recently spent half an hour finding out what so-called AI "trading bots" could apparently do with my investments.

Many prominently suggest that they can give me lucrative returns. Yet as every reputable financial firm warns - your capital may be at risk.

Or, to put it more simply - you could lose your money, whether it is a human or a computer making stock market decisions on your behalf.

Yet such has been the hype around AI over the past few years that almost one in three investors would be happy to let a trading bot make all the decisions for them, according to one 2023 survey in the US.


John Allan says investors should be more cautious about using AI. He is head of innovation and operations for the UK's Investment Association, the trade body for UK investment managers.

"Investment is something that's very serious, it affects people and their long-term life objectives," he says. "So being swayed by the latest craze might not be sensible.

"I think at the very least, we need to wait until AI has proved itself over the very long term, before we can judge its effectiveness. And in the meantime, there will be a significant role for human investment professionals still to play."

John Allan warns that AI-powered investment is still in its infancy

Given that AI-powered trading bots may end up putting some highly trained but expensive human investment managers out of work, you might expect Mr Allan to say this. But such AI trading is indeed new, and it does have issues and uncertainties.

Firstly, AI is not a crystal ball: it cannot see into the future any more than a human can. And if you look back over the past 25 years, there have been unforeseen events that have tripped up the stock markets, such as 9/11, the 2007-2008 credit crisis, and the coronavirus pandemic.

Secondly, AI systems are only as good as the data and software their human programmers use to create them. To explain this issue we need a little history lesson.

Investment banks have actually been using basic or "weak AI" to guide their market choices since the early 1980s. That basic AI could study financial data, learn from it, and make autonomous decisions that - hopefully - got ever more accurate. These weak AI systems did not predict 9/11, or even the credit crisis.
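
To give a rough sense of what such "weak AI" amounts to, the sketch below is a toy, invented example: it fits a simple statistical model to past price movements and turns the prediction into a buy-or-hold decision. The prices and the model here are made up purely for illustration - no real bank's system works quite this simply.

```python
# A toy, hypothetical example of "weak AI" trading logic: fit a simple
# statistical model to past returns and turn its prediction into a
# buy/hold decision. All prices here are synthetic, invented data.
import numpy as np

rng = np.random.default_rng(seed=1)

# Synthetic daily closing prices (a random walk with a slight drift).
prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, size=500))
returns = np.diff(prices) / prices[:-1]

# Build a dataset: the previous two days' returns as inputs,
# the next day's return as the value to predict.
X = np.column_stack([returns[1:-1], returns[:-2]])
y = returns[2:]

# "Learn" from the data with ordinary least squares.
design = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

# Use the fitted model to predict tomorrow's return from the last two days.
latest = np.array([1.0, returns[-1], returns[-2]])
predicted_return = latest @ coef

# The "autonomous decision": buy if the model expects a rise, hold otherwise.
decision = "BUY" if predicted_return > 0 else "HOLD"
print(f"Predicted next-day return: {predicted_return:+.4%} -> {decision}")
```

A system like this can keep refitting itself as new data arrives - but, as the examples above show, it only ever learns from the past it has been given.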

Fast-forward to today, and when we talk about AI we often mean something called "generative AI". This is far more powerful AI, which can create something new and then learn from that.

When applied to investment, generative AI can absorb masses of data and make its own decisions. But it can also work out better ways to study the data and develop its own computer code.

Yet if this AI was originally fed bad data by the human programmers, then its decisions may simply get worse and worse the more code it creates.


Elise Gourier, an associate professor in finance at the ESSEC Business School in Paris, is an expert in the study of AI going wrong. She cites Amazon's recruitment efforts in 2018 as a prime example.

"Amazon was one of the first companies to get caught out," she says. "What happened was that they developed this AI tool to recruit people.

"So, they're getting thousands of CVs, and they thought we're just going to automate the whole process. And basically, the AI tool was reading the CVs for them and telling them who to hire.

"The problem was that the AI tool was trained on its employees, and its employees are mainly men, and so, as a result of that, basically what the algorithm was doing was filtering out all the women."

Generative AI can also simply go wrong and produce incorrect information - something termed a "hallucination" - says Prof Sandra Wachter, a senior research fellow in AI at Oxford University.

"Generative AI is prone to bias and inaccuracies, it can spit out wrong information or completely fabricate facts. Without vigorous oversights it is hard to spot these flaws and hallucinations."

Prof Wachter also warns that automated AI systems can be at risk of data leakage or something called "model inversion attacks". The latter - in simple terms - is when hackers ask the AI a series of specific questions in the hope that it reveals its underlying code and data.
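
Real attacks of this kind are considerably more sophisticated, but the toy, invented sketch below shows the underlying idea: by sending a model a series of carefully chosen queries and looking only at its answers, an attacker can reconstruct details that were meant to stay hidden - here, the internal weights of a simple scoring model.

```python
# A toy, invented sketch of the broader idea behind such attacks:
# carefully chosen queries can leak a model's internal details. The
# "secret" model here is a simple linear scorer; by querying it at a
# handful of chosen inputs, an attacker recovers its weights exactly.
# Real model inversion attacks go further and target training data.
import numpy as np

# The secret model (the attacker cannot see these numbers directly).
secret_weights = np.array([0.7, -1.3, 2.1])
secret_bias = 0.4

def query(x):
    """Black-box access: the attacker only sees the returned score."""
    return float(np.dot(secret_weights, x) + secret_bias)

# The attack: query the model at the zero vector and at each unit vector.
bias_guess = query(np.zeros(3))
weight_guess = np.array([query(np.eye(3)[i]) - bias_guess for i in range(3)])

print("Recovered bias:   ", bias_guess)      # 0.4
print("Recovered weights:", weight_guess)    # [ 0.7 -1.3  2.1]
```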
