Regular readers will know I have what I believe to be a healthy level of scepticism over the use of AI and machine learning in trading. Generally speaking, I feel these techniques are extremely helpful in supporting the trading process, but I am as yet unconvinced of their value in the actual decision-making process.
What would give me more confidence is a fuller embrace of adversarial AI, for only by imbuing an algo with a certain amount of cynicism will we empower it to trade effectively in markets. As I have pointed out before in these pages, it is actually quite easy to spoof an algo.
If you want proof, look no further than the number of spoofing cases that are being brought – and largely settled – by the authorities, especially in the US. At the end of the day it is likely we will look back upon this period in which spoofers seemed to run wild and see it as a passing phase, much like the collusion claims against FX traders. In both cases, people were exploiting what they saw as both the strength of enhanced technology and the weakness of market and business oversight, but eventually the oversight caught up.
But while I think we have turned the corner on collusion and the inappropriate sharing of information, I am less sure on spoofing, especially in more opaque markets. Even in exchange world, where everything is meant to be perfect, there are still accusations of spoofing and manipulation, because traders desperate to make money step further into grey areas as that desperation grows – and if there is one easy way to make money in markets at the moment, it is spoofing the machines, thanks to the limited use of adversarial AI.
This is, of course, a much bigger issue than financial markets alone – for those of you who were at Forex Network London, who can forget the truly eye-opening presentation by Cristóbal Conde that kicked the event off, about how to mislead technology?
I want to focus, however, on financial markets, and I am sure there are firms looking at greater use of adversarial AI in their businesses. They should be, for while the relevant authorities seem adept at catching spoofers, they do so long after the event and, for various reasons, the traders on the other side of the spoof receive no financial recourse.
It’s great that this behaviour can be identified, but less so that it takes so long and that the financial damage has already been done, and that is why firms have to take responsibility – that word again – for protecting themselves. It’s not as though this is a radical departure or something excitingly new: adversarial AI is already used in security systems in other walks of life, so financial markets participants need to investigate bringing it into the market environment to help them spot when they are being spoofed.
As is often the case when clever ideas are being discussed, adversarial AI has some good, dramatic descriptions for what it protects against, like data “poisoning” and “evasion attacks”, but what we are talking about here is actually the ability to spot a rogue bid or offer down the stack. I am not sure whether we want to go this far, but is it wrong to suggest that one day it will actually exploit the spoof prices? After all, what will stop a spoofer quicker than anything? Getting hit on their spoof bids and offers – or at least having them highlighted to everyone.
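To make the idea concrete, here is a minimal sketch of what "spotting a rogue bid or offer down the stack" might look like. The approach – flagging any resting order whose size is a large multiple of the typical size at nearby levels – and the five-times-median threshold are purely illustrative assumptions, not a calibrated surveillance rule:

```python
# Illustrative sketch: flag orders down the stack whose size is wildly
# out of line with the rest of the book. Thresholds are assumptions.
from statistics import median

def flag_rogue_orders(book_side, size_multiple=5.0):
    """book_side: list of (price, size) tuples for one side of the book.
    Returns prices whose size exceeds size_multiple x the median size."""
    if len(book_side) < 3:
        return []  # too little depth to judge what "typical" means
    typical = median(size for _, size in book_side)
    return [price for price, size in book_side
            if size > size_multiple * typical]

# Example: a bid stack with one suspiciously large order at 1.0845
bids = [(1.0850, 2.0), (1.0849, 1.5), (1.0848, 2.5),
        (1.0847, 1.0), (1.0846, 2.0), (1.0845, 50.0)]
print(flag_rogue_orders(bids))  # the 50-lot at 1.0845 stands out
```

A real system would of course look at far more than raw size – order lifetime, cancellation behaviour, correlation with activity on the other side of the book – but the principle is the same: teach the machine what a suspicious quote looks like.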
The ideal solution, of course, has nothing to do with players having to protect themselves or hitting the spoofers in the market – that should just be a fallback – for in reality what we need is the oversight function using these techniques. Spoofing is yet another case – and there will be many more, of course – of technology outstripping the oversight capability. At the moment surveillance is historical and logical, but markets are often anything but logical – especially in the short term – and as such the surveillance technology needs to be wired a little differently. It needs the degree of scepticism a trader has, so it can see these rogue bids and offers immediately and investigate them. It can only do that if the activity is flagged, and that is where adversarial AI comes in – it is meant to spot things seeking to harm the ecosystem and protect against them.
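What real-time flagging might mean in practice can be sketched with a toy example: watch the order event stream and raise an alert when a participant's behaviour starts to look like placing-and-pulling at scale. The event format, the 90% cancel-ratio threshold and the 20-order minimum are all my own illustrative assumptions:

```python
# Toy real-time surveillance sketch: flag a trader whose cancel-to-place
# ratio becomes extreme. Thresholds are illustrative assumptions only.
from collections import defaultdict

class SpoofFlagger:
    def __init__(self, min_orders=20, cancel_ratio=0.9):
        self.min_orders = min_orders
        self.cancel_ratio = cancel_ratio
        self.placed = defaultdict(int)
        self.cancelled = defaultdict(int)

    def on_event(self, trader, action):
        """Feed one order event ("place", "cancel" or "fill"); return the
        trader id if they now look suspicious, else None."""
        if action == "place":
            self.placed[trader] += 1
        elif action == "cancel":
            self.cancelled[trader] += 1
        p, c = self.placed[trader], self.cancelled[trader]
        if p >= self.min_orders and c / p >= self.cancel_ratio:
            return trader
        return None

flagger = SpoofFlagger()
alerts = set()
# Trader "A" places 25 orders and cancels 24; "B" places and gets filled.
for _ in range(25):
    flagger.on_event("A", "place")
for _ in range(24):
    hit = flagger.on_event("A", "cancel")
    if hit:
        alerts.add(hit)
for _ in range(10):
    flagger.on_event("B", "place")
    flagger.on_event("B", "fill")
print(alerts)  # {'A'}
```

The point of the sketch is the timing: trader "A" is flagged while still cancelling, not in a historical report months later – which is exactly the shift from after-the-fact enforcement to live detection argued for above.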
It’s all very well fining spoofers millions of dollars after the event – and we undoubtedly need to ensure such sanctions are widely publicised as part of a deterrence programme. For me, however, the ultimate deterrent will be the understanding that there are plenty of people out there who can spot, identify and report on such activity within seconds or minutes of it happening.
There is an innate arrogance in traders that sometimes makes them think they are untouchable – and don’t we have enough examples of that at the moment? – but if they are given pause for thought, it will stop a lot of people making a mistake that could ruin their lives. I am not sure that an investigation and publicity surrounding events from years before does the job, but adversarial AI, bringing this process into the sphere of real time? That’s a different matter and, I think, a very effective deterrent.