P&L Talk Series – Fluent Trade Technologies

Colin Lambert: Much has been said over the years about the technology imbalance between smaller, non-bank firms and the major FX banks in terms of their ability to evolve their technology. Is this imbalance being addressed?

David Faulkner: Some larger banks are now geared towards embracing the latest technology, adapting well to ensure they operate at similar microsecond speeds. We are also starting to see more interest in the unbundling of commoditised processes within a trading technology stack, which allows a bank to focus on its IP.

Some institutions, typically around the top 20 in FX, have been happy to outsource their connectivity – they have been in competition with the non-bank firms long enough to realise the benefits of doing so – but for those outside the top 20 things have moved more slowly. Institutions in the next tier down are now starting to think more about outsourcing, however.

The challenge here is that too many banks remain siloed and the silos don't interact, which means a complex and multi-layered technology stack. This in turn makes interaction so much harder. The silos need to be broken down if the components are to interact properly, because updating them will become increasingly hard to manage the more the silos develop.

Sell-side tech structures can be convoluted and overly complex: there are overlapping operations and business lines that run their own programmes, so updating a tech stack can be hard to do, and even harder to manage.

CL: What do these firms need to do to change things?

DF: The number one challenge now is how they migrate from those multiple tech stacks into a single infrastructure. Even if they do that, though, do they simply replicate the problem? Our view is that unbundling the stack is key.

Updating the tech stack can be complex – does the institution need to upgrade every component? Normally banks won't, but actually getting to the components they do need to upgrade can be difficult if the infrastructure isn't built well.

It is important to have one tech stack, but the unbundling of components is equally complex, especially now that the tech stack has last look in production. The secret is creating a tech framework, not a stack, where every piece is interchangeable, so when banks need to upgrade they only need to unplug and replace one component.
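
To make that concrete, here is a minimal sketch of what a componentised pricing framework could look like: each stage sits behind a common interface, so one piece can be unplugged and replaced without touching the rest. The class names, the fixed half-spread and the skew stub are illustrative assumptions, not Fluent's actual design.

```python
# A minimal sketch (not Fluent's actual design) of a componentised pricing
# framework: each stage implements a common interface, so one component can
# be swapped without rebuilding the stack.
from abc import ABC, abstractmethod


class PricingComponent(ABC):
    """A single, swappable stage in the pricing pipeline."""

    @abstractmethod
    def process(self, tick: dict) -> dict:
        ...


class SpreadComponent(PricingComponent):
    def process(self, tick: dict) -> dict:
        # Hypothetical mechanical stage: apply a fixed half-spread.
        half_spread = 0.00005
        tick["bid"] = tick["mid"] - half_spread
        tick["ask"] = tick["mid"] + half_spread
        return tick


class SkewComponent(PricingComponent):
    def process(self, tick: dict) -> dict:
        # The IP lives here: a bank's own skew logic would replace this stub.
        tick["bid"] += tick.get("skew", 0.0)
        tick["ask"] += tick.get("skew", 0.0)
        return tick


class PricingPipeline:
    def __init__(self, components: list[PricingComponent]):
        self.components = components

    def price(self, tick: dict) -> dict:
        for component in self.components:
            tick = component.process(tick)
        return tick

    def replace(self, index: int, component: PricingComponent) -> None:
        # Upgrading means unplugging one component, not revamping the stack.
        self.components[index] = component


pipeline = PricingPipeline([SpreadComponent(), SkewComponent()])
print(pipeline.price({"mid": 1.10000, "skew": 0.00002}))
```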

CL: …and this will help them in the markets?

DF: Trading can't be decoupled from market data, and delays in market data can be caused by overly complex technology stacks. This complexity could be costing banks millions of dollars a year in revenue.

The client experience can also be greatly improved, with reduced hold times, improved rejection rates and fewer missed fills. Such improvements to the ecosystem are also well received by regulators, so technology improvements are key.

CL: You mentioned focusing on banks’ IP, can you expand?

DF: For the larger FX banks there is a growing acceptance of the need to at least look at the whole framework and decide how much of the tech involved is really IP to the firm concerned. When you break it down into small components, you can probably say that four out of 10 are mechanical while the rest are IP.

The key is to focus on those components that are important to your business. Probably the primary factor should be the market data, which should be fast and reliable. We have possibly hit the latency floor in FX markets now because it is at a level with which most are comfortable: we are consuming the right data in single-digit microseconds, whereas not too long ago it was in the hundreds of milliseconds.

Moving forward, the cost-benefit of reducing microseconds further probably doesn't add up. Banks are also recognising that they can outsource the work to get them to low-microsecond throughput, and that is where we are starting to see more and more interest.

A bank needs to be a distributor of data across its global architecture, but this really is just a question of pipes and unifying the message into the system, so there is no real IP involved. This is the sort of area we think can be decoupled to firms like Fluent.

As an example, a skew is IP – it is what can win you a deal – while price delivery is mechanical and therefore commoditised. By outsourcing this function we believe banks can achieve faster, cheaper and smarter pricing compared to what they currently have in place.

CL: That means winning people over to an argument they may not like though…

DF: One of the bigger challenges remains the mindset of the stakeholders within the institutions: they need to truly focus on their core IP rather than, for example, the processing and distribution of data – they need to focus on their price, which is their IP.

The challenge is coming to terms with the idea of focusing their budgets and IP efforts on what they are good at, what is unique to them, and outsourcing the rest of the process.

By outsourcing the price engine and distribution processes, we are talking about getting rid of a major headache when it comes to optimising performance. This could be a huge step forward for many firms, allowing them to optimise their performance in certain crucial areas.

We don't do everything; we work with people to help them better understand how they are operating and how they can do it more efficiently. We are effectively an extension of the firm's IT workforce. These firms may have 500-plus FIX connections, and they don't need to manage them – it's a pain and they are not that good at it. By outsourcing they can radically cut the number of services they are managing and can ultimately do more for less.

CL: There is still the fact that banks typically have clients with vastly different tech specs – how does this solution help them meet that challenge? What about the slow clients that can only receive updates every 250 milliseconds, for example?

DF: Again it comes down to understanding the mechanical operations and that different clients will often require unique approaches. In reality, however, clients with different tech capabilities can be addressed if they are viewed as components. At the moment, banks have so many different models for different clients, and that is what is clogging up the system: when one needs to be changed, they all do.

Our software allows a bank to set thresholds for slowing the price feed down per client, and again it is componentised, so when a client gets a tech upgrade, the whole stack at the bank does not need to be revamped. We can facilitate and unbundle the price rather than change the entire price engine, so a bank can support more client types without the pain of an onerous and expensive updating cycle.
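
A minimal sketch of what such per-client thresholds could look like, assuming a simple time-based conflation model: each client is configured with a minimum interval between price updates, and slower clients simply receive the latest price less often. The class, client names and intervals here are illustrative, not Fluent's implementation.

```python
# Sketch of per-client price-feed throttling via time-based conflation.
import time


class ClientThrottle:
    def __init__(self, min_interval_ms: float):
        self.min_interval = min_interval_ms / 1000.0
        self.last_sent = 0.0
        self.pending = None  # latest price, conflated while throttled

    def on_price(self, price: float):
        """Return the price to send now, or None if the client is throttled."""
        now = time.monotonic()
        self.pending = price  # newer prices overwrite older ones
        if now - self.last_sent >= self.min_interval:
            self.last_sent = now
            out, self.pending = self.pending, None
            return out
        return None


# Per-client thresholds: a fast client vs. the 250 ms client mentioned above.
throttles = {
    "fast_client": ClientThrottle(min_interval_ms=1),
    "slow_client": ClientThrottle(min_interval_ms=250),
}

for client, throttle in throttles.items():
    update = throttle.on_price(1.10001)
    if update is not None:
        print(f"send {update} to {client}")
```

The point of the design is that the throttle is itself a component: when a client upgrades, only its threshold changes, not the price engine behind it.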

By breaking the process down, updating is significantly easier, making the bank quicker to respond to changes such as advances in microwave connectivity or new places to co-locate.

The result is a more effective, adaptable tech stack that can consume vast amounts of market data from the business and across the market. If applied properly, it means happier clients, because the bank is pricing more accurately and thus there will be better fill rates.

It is also important that a service provider such as a bank can show they treat their clients fairly, and having adaptable technology that meets the needs of individual client types is important – there are regulatory implications in how you create and manage your tech stack.

CL: How so?

DF: Firms need to monitor their business effectively in real time, which means they need to synchronise their data distribution globally to ensure no clients are disadvantaged. If the entire business can access the same data and price at the same time, the risk engine can be moved from post-trade into the execution path, and that is really valuable.

Historically a lot of these checks were done at the prime broker, which was a hub-based solution. But the benefit of moving the risk engine into the execution path is that if a venue or algo provider can embed a fully synchronised risk check, it can be plugged into lots of information sources, which helps them recalibrate the price faster and add risk management features like price limits.
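
As a rough illustration, a pre-trade risk check in the execution path means every order passes the same synchronised checks before it reaches a venue. The order fields, limit bands and size cap below are illustrative assumptions, not a description of any particular firm's controls.

```python
# Sketch of a pre-trade risk check sitting in the execution path.
from dataclasses import dataclass


@dataclass
class Order:
    symbol: str
    side: str      # "buy" or "sell"
    price: float
    quantity: int


class ExecutionPathRiskEngine:
    def __init__(self, price_limits: dict, max_quantity: int):
        self.price_limits = price_limits  # symbol -> (min, max) sanity band
        self.max_quantity = max_quantity

    def check(self, order: Order) -> bool:
        lo, hi = self.price_limits[order.symbol]
        if not lo <= order.price <= hi:
            return False  # price limit breached: reject before the venue
        if order.quantity > self.max_quantity:
            return False  # size limit breached
        return True


engine = ExecutionPathRiskEngine({"EURUSD": (1.05, 1.15)}, max_quantity=5_000_000)
order = Order("EURUSD", "buy", 1.1001, 1_000_000)
print("route to venue" if engine.check(order) else "reject pre-trade")
```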

Banks need to ensure that their risk engines are talking to each other and cannot counter-trade within a certain window, which may look suspicious to a regulator even though it is innocent. Regulators like the fact that by fully synchronising these activities globally, banks are reducing the opportunity for disruptive behaviour and keeping their own house in order.
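
A minimal sketch of the kind of counter-trade check described here: record recent fills globally and flag any opposite-side trade in the same instrument inside a configurable window. The 100 ms window is an illustrative assumption, not a regulatory figure.

```python
# Sketch of a globally synchronised counter-trade monitor.
from collections import deque


class CounterTradeMonitor:
    def __init__(self, window_ms: float):
        self.window = window_ms
        self.recent = deque()  # (timestamp_ms, symbol, side)

    def on_trade(self, ts_ms: float, symbol: str, side: str) -> bool:
        """Return True if this trade counter-trades a recent fill."""
        # Drop fills that have aged out of the window.
        while self.recent and ts_ms - self.recent[0][0] > self.window:
            self.recent.popleft()
        flagged = any(s == symbol and d != side for _, s, d in self.recent)
        self.recent.append((ts_ms, symbol, side))
        return flagged


monitor = CounterTradeMonitor(window_ms=100)
monitor.on_trade(0.0, "EURUSD", "buy")
print(monitor.on_trade(40.0, "EURUSD", "sell"))  # True: flagged for review
```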

CL: Is this part of a wider realisation of certain aspects of operational risk?

DF: Markets are finally accepting that risk controls are not just about credit and market risk; there is also a risk in how your algos are behaving. They need to play by the rules and banks need to prove this to the regulators.

Every month we hear of a trading firm that has suffered an algo glitch – bugs in the system happen. By improving controls and being able to update the components of the risk engine in an easier, quicker fashion, firms are improving their risk management frameworks. Having these controls, and having the right behaviours coded in, is vitally important if banks have third parties using their algos or if they sponsor market access in any way.

There is also the fact that controls at ECNs vary vastly from venue to venue and can be overridden, which means the banks are leaving it to a third party to control one risk element. This way they can control it themselves.

Going forward, we may see more firms bring these controls in-house or, potentially, the development of a global utility that is open to everyone, including the regulators. One way to do this is by having the risk engine in the execution path, so that everyone using an algo is subject to those risk controls.

There are countless rules you can impose on different algo strategies, but the key is what happens when you need to change something. If you are still vertically integrated, the work involved in changing will be laborious and expensive.

CL: So the key is software, not hardware. FPGA is not the solution?

DF: FPGA gets you there, but it costs you – and not only money. It is expensive, but you also lose nuance with FPGA: it processes quickly but lacks flexibility, increases development time tenfold and is not fully transparent for institutions or regulators. The answer is in reengineering and rethinking how the tech stack works by swapping hardware for software.

Our value proposition is that we are not selling a product as such, we are offering a piece of software that fuels the unbundling. We have no idea how our customers are trading with our software, because they retain the IP. By outsourcing the commoditised process to us, banks are able to keep up with the evolving technology landscape.