
Artificial Intelligence: The Future of FinTech?

Humans are curious creatures. We study our environment, consider ourselves in relation to our surroundings, and, uniquely among living things, even ponder our ability to think. Whether we realise it or not, we regard sentience as the crowning jewel of our species.

It’s no great wonder that the idea of smart machines captures our imagination. The thought of giving ‘Artificial Intelligence’ (AI) to another being, especially one that isn’t ‘alive’ by conventional standards, fills us with a profound combination of fear and intrigue. We fear the loss of our own uniqueness, yet we marvel at the thought of gaining mastery over our greatest gift. 
While the subject of AI needs no marketing to captivate us, recent breakthroughs such as the defeat of a world-class Go player by DeepMind’s AlphaGo have also served to increase focus on the application of this technology to our daily lives. Commentators have indulged in wild speculation, suggesting that our trains may soon be driverless, our supermarket tills unstaffed, and our co-workers robotic.
The financial services industry has not been immune to this latest surge of interest, but it may be wise to take a breather before giving your human fund manager the boot. Are we really on the cusp of robo-managed funds and wholly non-human markets?
Artificial What?
For all the noise, it seems that few mainstream commentators have a clear idea of what AI actually is, let alone how such cutting-edge research can be applied in the context of financial services. There are several causes of this ambiguity, chief among which, according to the experts, is a tendency to place humanity at the centre of any definition related to intelligence.
Dr Tom Doris, CEO of OTAS Technologies, says that the “tongue in cheek definition of AI… [is] that it’s AI if it hasn’t been solved yet and it sounds cool enough to get research funding”. 
Josh Sutton, global head of the AI practice at Publicis Sapient, adds, “There are a lot of people that say AI and they mean a lot [of] different things.”
Dr Hasan Amjad of Cantab Capital Partners agrees with both of the above assessments, noting that AI is “a really badly defined concept”, because the definitional “goalposts keep moving…When a computer couldn’t beat a human at chess, we considered chess to be AI; now that computers can do it, it isn’t AI”. 
Building the Buzz
Definitional problems are also compounded by the fact that AI is too media-friendly for its own good. The nitty-gritty science behind building smarter machines is attached to a compelling brand, says Doris, adding that “it always has been… [in fact] it became sufficiently well-known that it was a brand that it then fell out of fashion”. 
Amjad talks about AI’s last wax and wane, noting that “in the ‘70s…there was enormous hype around…[the idea] that AI is here, and we’re soon going to have robotic butlers, machines that can essentially do most of what humans do”. When the hype didn’t deliver, Amjad notes that the resulting fatigue led to “what computer scientists know as the AI Winter…when funding for AI all but dried up.”
While Sutton correctly highlights that hype is good for visibility and funding, the buzz around AI means that the term is applied to everything from basic algorithms to more complex projects such as AlphaGo. Pinning down a one-line definition of such a vast area of study is never going to be possible, but how can we introduce some order to the unfeasibly broad range of definitions currently in play? 
One source of partial clarity is revealed by Doris, who introduces a conceptual spectrum ranging from ‘weak’ to ‘strong’ AI. “The definition of strong AI,” says Doris, “is that you will eventually have an intelligence which is comparable in its capabilities and very nature to a human or a sentient, thinking entity”. 
“I don’t think too many people within the domain really subscribe to it, it’s more [for] science fiction writers,” he adds. 
At the other end of the spectrum, ‘weak AI’ (and by extension machine learning) rests on the belief that “we can teach machines to parrot some things that look like they’re intelligent, but in fact… there’s no understanding going on there”.
Reining in the Debate
The road to defining AI is riddled with philosophical rabbit holes, and it’s perhaps unsurprising that the experts generally eschew both the hype and the more sci-fi-oriented debates surrounding the topic. In Doris’ view, “the weak AI is what we’re seeing coming through”, while former nuclear physicist Henri Waelbroeck of Portware says that the boundary between “machine learning and artificial intelligence is really a matter of semantics”.
There are, in fact, very good reasons to take a conservative view on the future application of AI in financial services. As Amjad points out, some of the most impressive applications of AI have been in relation to chess and Go, where professional games have an exceptionally “high signal-to-noise ratio”. Financial data, particularly in fragmented markets, “has a horribly bad signal-to-noise ratio…It really takes quite a lot to find the signal in the noise,” stresses Amjad. 
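To make the contrast concrete, here is a toy illustration of what a low signal-to-noise ratio looks like in practice. The numbers and the mean-over-volatility measure below are invented for the example; they are not drawn from any of the firms quoted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy daily returns: a tiny persistent drift (the 'signal') buried in
# much larger random fluctuations (the 'noise').
drift = 0.0002   # 2 basis points per day
noise = 0.01     # 1% daily volatility
returns = drift + noise * rng.standard_normal(2500)  # roughly ten years of trading days

# A crude signal-to-noise ratio: mean return over its standard deviation.
snr = returns.mean() / returns.std()
print(f"Per-day signal-to-noise ratio: {snr:.3f}")  # around 0.02

# A chess or Go engine's evaluation, by contrast, is almost all signal:
# the position fully determines the score, so the equivalent ratio is
# close to one.
```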
Bridging Theory and Practice
AI is clearly an elusive beast, and it’s also obvious that we should be sceptical about the possibility of meeting a full-blown robot banker in the near future, but is there anything at all ‘AI-like’ going on in financial services?
Plenty, as it turns out. Waelbroeck, for instance, claims that Portware has “created an environment where predictive agents have access [to] all other agents in the environment”.
“One of the goals of this environment,” he says, is “to enable it to grow over time… [as] each agent benefits from the predictions of other agents”, leading to the creation of a “user-sourced artificial intelligence where you’re not limited by the thinking of one person, or [even] a group of people… the intelligence in the system is the result of the collective effort of all the quants participating in developing the machine by introducing new agents.”
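Portware has not published the internals of this environment, but the general pattern Waelbroeck describes, agents whose inputs include the predictions other agents have already published, can be sketched in a few lines. Everything below, including the agent names, is hypothetical and purely illustrative.

```python
# Hypothetical sketch of an environment in which each predictive agent
# can read the predictions already published by other agents.
# An illustration of the general pattern only, not Portware's system.

class Environment:
    def __init__(self):
        self.predictions = {}  # agent name -> latest published prediction

    def publish(self, name, value):
        self.predictions[name] = value

    def peers(self, name):
        return {k: v for k, v in self.predictions.items() if k != name}


class MomentumAgent:
    name = "momentum"

    def predict(self, env, prices):
        signal = 1.0 if prices[-1] > prices[0] else -1.0
        env.publish(self.name, signal)
        return signal


class ConsensusAgent:
    """Blends its own view with whatever other agents have published."""
    name = "consensus"

    def predict(self, env, prices):
        own_view = 0.5 if prices[-1] > sum(prices) / len(prices) else -0.5
        peer_views = list(env.peers(self.name).values())
        blended = (own_view + sum(peer_views)) / (1 + len(peer_views))
        env.publish(self.name, blended)
        return blended


env = Environment()
prices = [1.10, 1.11, 1.12, 1.13]
print(MomentumAgent().predict(env, prices))   # 1.0
print(ConsensusAgent().predict(env, prices))  # 0.75, nudged by its peer
```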
Amjad also highlights how AI-like technologies are being used in financial services to do things that have always been done, albeit in a qualitatively new way. When asked to describe the ‘bleeding edge’ of AI-type technology in financial services, Amjad points out that “people are using machine learning simply as a classifier for discovering what sort of market regime you’re in… this has been going on for a while”. “Why is it bleeding edge?” he continues, pre-empting the obvious follow-up, “Well, it’s bleeding edge, because it’s beginning to use machine learning techniques, like recurrent neural networks, that it wasn’t possible to use before.”
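Neither Amjad nor his peers describe their models in detail, but the regime-classification idea can be illustrated with off-the-shelf tools. The sketch below fits a two-state Gaussian mixture to a rolling-volatility feature of synthetic returns; a production system might use recurrent neural networks, as Amjad notes, and far richer data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # simple stand-in for fancier models

rng = np.random.default_rng(1)

# Synthetic returns: a calm regime followed by a turbulent one.
calm = rng.normal(0.0003, 0.005, 500)
turbulent = rng.normal(-0.0005, 0.02, 500)
returns = np.concatenate([calm, turbulent])

# Feature: rolling 20-day volatility (the kind of input a regime
# classifier might look at; real systems would use many more).
window = 20
vol = np.array([returns[i - window:i].std() for i in range(window, len(returns))])

# Fit a two-regime mixture model and label each day with a regime.
model = GaussianMixture(n_components=2, random_state=0).fit(vol.reshape(-1, 1))
labels = model.predict(vol.reshape(-1, 1))

print("Regime label for an early (calm) day:   ", labels[10])
print("Regime label for a late (turbulent) day:", labels[-10])
```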
In some ways, taking a look at what is currently being done with AI-type technology in financial services reveals that the reality is more satisfying than fiction, if not quite as sexy. As Doris points out, “the fact that we have algos that can get within a basis point or two of VWAP very, very consistently is absolutely a form of weak AI, and is a great example of taking something that humans occasionally might have some edge on…but for the vast majority of cases, you’re better off assigning the rule-based behaviour to a machine that never gets tired.”
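The ‘rule-based behaviour’ Doris describes can be made concrete with a deliberately simplified sketch: slicing a parent order across the day in proportion to an expected volume profile, which is the basic logic behind a VWAP algorithm. The profile and order size below are invented, and real execution algos layer on limit-price logic, randomisation and real-time volume forecasts.

```python
# A deliberately simplified VWAP-style schedule: slice a parent order
# in proportion to an assumed intraday volume profile.

expected_volume_profile = [0.12, 0.10, 0.08, 0.07, 0.07, 0.08, 0.09, 0.11, 0.13, 0.15]
parent_order_qty = 100_000  # shares to buy over the day (hypothetical)

total = sum(expected_volume_profile)
schedule = [round(parent_order_qty * share / total) for share in expected_volume_profile]

for bucket, qty in enumerate(schedule, start=1):
    print(f"Time bucket {bucket:2d}: send child order for {qty} shares")

# Trading in line with the market's own volume is what keeps the
# achieved price within a basis point or two of VWAP on average.
```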
Stirring the Pot
Given the ever-increasing importance of regulation in the financial sector, it would be remiss to neglect the view that regulators might take of the technology behind increasingly smart trading platforms. Interestingly, expert opinions on this issue diverge considerably.
Sapient’s Sutton, for instance, doesn’t think that a strict regulatory framework will envelop AI, on the basis that it “isn’t really possible” to tell traders that they can’t use better tools. Where Sutton does see potential for regulation-driven change, however, is in relation to information. “There’s a lot of data available right now that might provide an unfair trading advantage,” he says, continuing by predicting that “as we track more and more data, and more and more data is unstructured [so] you need an AI to turn it into something [useful]”, we may see stricter controls over access to data. 
Waelbroeck echoes Sutton’s view that pervasive regulation of AI is unlikely, but for very different reasons. In Waelbroeck’s eyes, the kinds of deep learning-based networks being created by Portware are simply a reflection of the market itself. “The market pulls together a bunch of systems that are operating using the outputs of everyone else as inputs,” he observes, going on to claim that “the whole market structure… is really designed as an AI machine, and all we’re doing is developing that same concept within the confines of a cloud-based system.”
Other parties aren’t so sure that regulators will overlook AI. Doris argues that the buzz around new technology in financial services may create a roadblock to effective rules, but not to rules outright: “it definitely doesn’t bode well for regulation, because what often happens…is that the hype builds and builds, and then the regulator feels compelled to do something, to say something, and comes out with a bad law very quickly.”
A Realistic Look at the Future
In the sense that progress in the development of AI is inextricably linked to unpredictable jumps in computing power, it isn’t really possible to say where and when the next leap forward will occur. That doesn’t mean, however, that the relatively sober mathematicians and scientists who actually work on applying AI to financial problems have no views on the future of their craft. 
Amjad states that in coming years, “it would be nice if your system could not only say ‘here’s what I think we should do’, but also say, ‘and here’s why I think we should do it’, providing a chain of reasoning that your human can look at”. 
Interestingly, this type of system would meet Sutton’s “litmus test” for true AI, which is: “can the platform give you [the user] a logic trail in English that explains its answer…not that it tells you, ‘I came up with this because I’ve seen it a million times before’, but [rather], ‘I think that USD/EUR is going to go up for the following reasons’.”
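A rough way to picture such a logic trail is with nothing more exotic than a linear score whose weighted inputs are read back as reasons. The currency pair, features and weights below are made up for illustration and are not anyone’s actual model.

```python
# Hypothetical illustration of a prediction accompanied by its reasons:
# a linear score whose largest weighted inputs are read back in English.
# Features, weights and values are invented for the example.

features = {
    "rate_differential": 0.4,   # standardised feature values
    "risk_sentiment": -0.1,
    "recent_momentum": 0.6,
}
weights = {
    "rate_differential": 0.8,
    "risk_sentiment": 0.5,
    "recent_momentum": 0.3,
}

contributions = {name: weights[name] * value for name, value in features.items()}
score = sum(contributions.values())
direction = "up" if score > 0 else "down"

print(f"I think the pair is going to go {direction}, because:")
for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    verdict = "supports" if contrib > 0 else "argues against"
    print(f"  - {name} {verdict} the move (contribution {contrib:+.2f})")
```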
Waelbroeck also looks to a financial services future that is defined by what Amjad terms the “collaboration of man and machine”. Waelbroeck highlights that, moving forward, “one of the great challenges with AI is how you communicate with the trader”: taking an idea that originates from a human and, “instead of just applying it mechanically as is done in most algos today, [producing] an intelligent response that enables the trader to decide”.
Should this boundary-pushing feat be achieved, Waelbroeck says, the AI agent becomes an “assistant to the trader”, not the full-blown autonomous entity that is so often conjured by the popular imagination.
Whither FX?
Financial services and AI have met, and will continue to meet, then, but what about AI and the FX market? While there are already ‘weak AI’ HFTs operating in the FX space, Waelbroeck sees a time coming in which more sophisticated deep learning-based systems such as the Portware Brain will also make their way to FX. Not only could this happen, says Waelbroeck, but “it almost has to be done…if you want an intelligent system to operate optimally, you want it to have access to all asset classes”.
That isn’t to say, of course, that moving existing AI-related technologies to FX will be easy. Waelbroeck himself notes that “access to data is a little bit more difficult in FX”. 
Things become even more complicated when true ‘bleeding edge’ AI tech, such as ‘sentiment extractors’ that use Natural Language Processing to detect people’s feelings about a company, is imagined in an FX context. “I think [applying sentiment extraction to FX is] going to be pretty tough,” says Doris, “it strikes me that FX is just so different to talking about a company or an earnings forecast”.
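For readers unfamiliar with the technique, sentiment extraction in its simplest form can be sketched with a small word list; production systems use trained NLP models and far larger vocabularies, and the toy scorer below is not the approach of any firm quoted here.

```python
# Minimal lexicon-based sentiment scoring, purely to illustrate the idea
# behind 'sentiment extractors'. The word lists are toy examples.

POSITIVE = {"beat", "upgrade", "strong", "growth", "record"}
NEGATIVE = {"miss", "downgrade", "weak", "loss", "probe"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: share of positive minus negative words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

headline = "Company reports strong growth despite regulatory probe"
print(sentiment_score(headline))  # ~0.33: two positive hits, one negative
```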
Will FX and the overall financial sector ever be entirely dominated by AI? Not tomorrow, no. Perhaps not even in a decade. Talk to the experts, however, and for all the academic conservatism that comes with truly rational minds, it’s hard not to come away with a sense that major changes are afoot in the long run.
The potential of AI, both in financial services and in everyday life, is staggering. At the same time, it’s important to be realistic about the obstacles that lie in the way of a ‘strong AI’ future. Amjad, recounting a quote from Walt Kelly’s Pogo used at a recent conference, describes the current state of AI in financial services best: “Gentlemen, we’re surrounded by insurmountable opportunities.”

Galen Stops
