The FICC Markets Standards Board (FMSB) has published the first in a series of papers called Spotlight Reviews, which looks at emerging themes and challenges in algorithmic trading and machine learning. The series will consider issues of FICC market structure and the impact of regulatory and technological change on the fairness and effectiveness of wholesale markets. FMSB members are currently working on a Statement of Good Practice on Algorithmic Trading and FMSB says this topic “is certain to remain an important focus for transparency, fairness and effectiveness of trading practices in the coming years”.

In the March 14 edition of Profit & Loss’ podcast In the FICC of It, FMSB chair Mark Yallop discussed the challenges of managing in a machine learning environment. This paper goes into the subject in greater depth, focusing especially on model risk management in market making, the adoption of new machine learning techniques and the increased use of execution algorithms.

While central banks and other regulators have issued guidelines on the controls for algorithmic trading, focusing primarily on the documentation and controls expected for the development, testing and deployment of algorithms, the application of model risk management to algorithmic trading is an area that has received less attention. “Nevertheless, the materiality of algorithmic model risks warrants a specialised practitioner-led approach,” the paper states, adding that while the need for quality and consistent data is self-evident, “Perhaps less obvious is the need to manage for increased model risk.”

The paper observes that while progress towards the increasing use of self-learning machines will be incremental and take place over an extended period, in the near term machine learning in wholesale FICC markets looks likely to remain restricted to specific, minor functions, forming a relatively small part of the overall trading and reporting process, with tight controls in place.

It adds that, as in other businesses where machine learning is being adopted, there are nascent concerns about the conduct risks around unintended design flaws, implementation and use. “There is also increasing discussion within the industry about practices that can mitigate any market abuse or stability risks that may emerge.”

Unsurprisingly from a standard-setting body in the private sector, the paper highlights the benefits of practitioner-led best practices. It states, “There are likely to be benefits from creating global best practices for model risks which are not fully covered by existing regulations. FMSB has a role to play in areas like this, where there may be knowledge gaps between the private sector and regulators and where there is scope for market participants to work together to address the issues rather than in isolation. We propose that market practitioners, given their deep domain expertise, are in a better position to provide solutions that are proactive on managing risks.”

Model Risk

The paper outlines eight factors for consideration when looking at model risk management. It starts off with a look at the current regulatory scene, namely regulators’ view of past incidents and the focus on conduct when strategies are being coded or the machine learning process initiated. “The current regulatory guidelines, which are principally focused on operational and conduct risks, may mitigate some risks from models through the consolidated approach to documentation, testing, controls and performance analysis at a trading algorithm level,” the paper says. “For instance, a lack of model robustness may lead to unexpected P&L losses but these would be bounded by a number of risk controls at an algorithm level. These include continuous validation in the form of P&L checks covering volatility/skew of returns and significant financial losses, position limits, price/spread limits. As a result, even though some models in algorithmic trading strategies may be highly complex, residual algorithmic model risk does not necessarily have to be high.”
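
To make that concrete, the sketch below is a purely illustrative and deliberately simplified rendering of such algorithm-level controls; the limit names, thresholds and structure are hypothetical and are not taken from the FMSB paper.

```python
# Illustrative sketch (not from the FMSB paper): algorithm-level controls that
# bound losses from model misspecification. All values and names are hypothetical.
from dataclasses import dataclass


@dataclass
class AlgoRiskLimits:
    max_position: float = 50_000_000      # absolute position cap (hypothetical)
    max_daily_loss: float = 250_000       # hard stop on realised P&L
    max_pnl_volatility: float = 40_000    # std-dev of recent P&L increments
    max_spread_bps: float = 5.0           # widest quoted spread allowed


def continuous_validation(position: float,
                          daily_pnl: float,
                          pnl_increments: list[float],
                          quoted_spread_bps: float,
                          limits: AlgoRiskLimits) -> list[str]:
    """Return the list of limit breaches; an empty list means the algo may keep quoting."""
    breaches = []
    if abs(position) > limits.max_position:
        breaches.append("position limit")
    if daily_pnl < -limits.max_daily_loss:
        breaches.append("daily loss limit")
    if len(pnl_increments) > 1:
        mean = sum(pnl_increments) / len(pnl_increments)
        var = sum((x - mean) ** 2 for x in pnl_increments) / (len(pnl_increments) - 1)
        if var ** 0.5 > limits.max_pnl_volatility:
            breaches.append("P&L volatility limit")
    if quoted_spread_bps > limits.max_spread_bps:
        breaches.append("spread limit")
    return breaches


if __name__ == "__main__":
    limits = AlgoRiskLimits()
    print(continuous_validation(position=10_000_000, daily_pnl=-300_000,
                                pnl_increments=[5_000, -12_000, 80_000],
                                quoted_spread_bps=2.0, limits=limits))
    # -> ['daily loss limit', 'P&L volatility limit'] with these hypothetical numbers
```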

Secondly, it adds that model validation should consider factors such as the complexity, appropriateness of the methodologies, input data quality, controls around model assumptions and implementation. Execution controls, back testing, sensitivity analysis, erroneous data handling measures, and clear documentation are some of the key mitigants. Of note, the paper stresses how risks can be greater in less liquid asset classes where pricing is less transparent, saying that the liquidity of the product should be considered when judgements about model risk are being made. At the same time, expectations around pricing precision should also be considered. For instance, in data-rich, heavily-traded instruments these expectations can be extremely high, while in data-light, infrequently-traded instruments pricing precision may have a larger allowable error term.
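
One of those mitigants, sensitivity analysis, can be illustrated with a minimal one-at-a-time bump test; the pricing function, bump sizes and the idea of a liquidity-dependent tolerance below are hypothetical stand-ins rather than anything specified in the paper.

```python
# A minimal sensitivity-analysis sketch, assuming a generic pricing model
# price(mid, vol, skew). Function names, bump sizes and tolerances are hypothetical.
from typing import Callable, Dict


def sensitivity_report(price: Callable[..., float],
                       base_inputs: Dict[str, float],
                       bumps: Dict[str, float]) -> Dict[str, float]:
    """One-at-a-time input bumps: how much does the model price move per bump?"""
    base = price(**base_inputs)
    report = {}
    for name, bump in bumps.items():
        bumped = dict(base_inputs, **{name: base_inputs[name] + bump})
        report[name] = price(**bumped) - base
    return report


def toy_price(mid: float, vol: float, skew: float) -> float:
    # Stand-in for a proprietary pricing model.
    return mid * (1 + 0.1 * vol + 0.02 * skew)


if __name__ == "__main__":
    base = {"mid": 100.0, "vol": 0.2, "skew": -0.5}
    bumps = {"mid": 0.5, "vol": 0.05, "skew": 0.1}
    print(sensitivity_report(toy_price, base, bumps))
    # A liquid, data-rich instrument might warrant a tight tolerance on each
    # sensitivity; a data-light instrument a wider allowable error term.
```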

The third factor highlighted by the paper says that any approach leveraging existing model risk validation processes may need adjusting. “The risk associated with misspecification in any single model may be mitigated by bounds placed on how any model output data is used by the overall trading strategy,” it states. “This combined with the dynamic feedback in a live electronic trading ecosystem means that residual model risk can be low in algorithmic trading. Consequently, less weighting can be placed on the accuracy of a model’s estimates or predictions and more on the implementation testing, back testing and controls that minimise the conduct and operational risks.

“The number of individual models deployed in an algorithmic trading system is much larger than traditional areas so documentation and model risk ratings, while still key, will need to be scalable to be effective,” it continues. “Moreover, the depth and frequency of model validation deployed should reflect the complexity and potential impact of individual models.”

The fourth factor is the critical role of data inputs, namely the importance of quality, consistent data to mitigate operational risks. The paper also highlights the role played by CLOBs (central limit order books) in the price discovery and data process, observing that there is a “dependence” on these venues, but that when a lack of depth or market structure issues drive price changes on these platforms that are not in line with fundamentals, there is a risk of models following them ‘blindly’ as a key data input. In less liquid markets, without a CLOB, the paper notes how recent data “may become irrelevant” and post-trade data may not give an accurate picture of liquidity.
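
A hedged sketch of what such input-data safeguards might look like in practice appears below; the staleness and deviation thresholds, and the notion of a fundamentals-based reference price, are illustrative assumptions rather than recommendations from the review.

```python
# Illustrative input-data checks for CLOB-derived feeds: stale quotes and moves
# far from a reference anchor are flagged rather than followed 'blindly'.
# All thresholds are hypothetical.
import time
from typing import Optional


def validate_tick(price: float,
                  ts: float,
                  reference_price: float,
                  max_age_s: float = 2.0,
                  max_dev_bps: float = 25.0,
                  now: Optional[float] = None) -> list[str]:
    """Return reasons to quarantine a market-data tick before it feeds a model."""
    issues = []
    now = time.time() if now is None else now
    if now - ts > max_age_s:
        issues.append("stale quote")
    dev_bps = abs(price - reference_price) / reference_price * 1e4
    if dev_bps > max_dev_bps:
        issues.append("off-market vs reference")   # e.g. thin CLOB depth moving the price
    return issues


if __name__ == "__main__":
    t0 = 1_700_000_000.0
    print(validate_tick(price=1.1050, ts=t0 - 5.0, reference_price=1.1000, now=t0))
    # -> ['stale quote', 'off-market vs reference'] with these hypothetical limits
```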

The fifth factor observes how it can be difficult to benchmark models because peer group comparison is hindered – naturally – by the proprietary nature of each firm’s algorithms. Rather obviously, the paper states, “Where peer group benchmarking is not appropriate, performance monitoring is critical.”
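
As an illustration of what such performance monitoring could involve, the following sketch tracks an algorithm against its own expectations rather than against peers; the metrics (slippage versus model-expected price, fill ratio) and thresholds are hypothetical.

```python
# Where peer benchmarking is not possible, ongoing monitoring of the algo against
# its own expectations is the alternative. A minimal sketch with hypothetical
# metrics and thresholds.
from collections import deque
from statistics import mean


class PerformanceMonitor:
    def __init__(self, window: int = 500, max_avg_slippage_bps: float = 1.5,
                 min_fill_ratio: float = 0.85):
        self.slippage = deque(maxlen=window)   # realised vs model-expected price, bps
        self.fills = deque(maxlen=window)      # 1 = filled as intended, 0 = not
        self.max_avg_slippage_bps = max_avg_slippage_bps
        self.min_fill_ratio = min_fill_ratio

    def record(self, slippage_bps: float, filled: bool) -> None:
        self.slippage.append(slippage_bps)
        self.fills.append(1 if filled else 0)

    def alerts(self) -> list[str]:
        out = []
        if self.slippage and mean(self.slippage) > self.max_avg_slippage_bps:
            out.append("average slippage above expectation")
        if self.fills and mean(self.fills) < self.min_fill_ratio:
            out.append("fill ratio below expectation")
        return out


if __name__ == "__main__":
    m = PerformanceMonitor(window=5)
    for s, f in [(0.4, True), (2.8, True), (3.1, False), (2.5, True), (0.9, False)]:
        m.record(s, f)
    print(m.alerts())   # both alerts fire with these hypothetical numbers
```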

This theme is expanded in point six, which stresses the importance of having a rigorous model validation and performance monitoring process. “With the drive for improved efficiency across the whole financial services sector it is natural for there to be a drive to re-use as many components of existing models as possible in new products and geographies,” the paper states. “The question of whether a particular model is appropriate for use in a specific market, asset class or venue is not a new one, but likely to be more common than ever in future.

“Core to model assessment is the testing of model robustness and reliability to ensure safe and sound implementation,” it continues. “However, [regulation] SR 11-7 allows firms to take materiality of model risk into consideration when devising an approach to model risk management in order to meet supervisory expectations. Given the differences between pricing or risk and algorithmic trading models, different model validation approaches may need to be developed, where the control framework should be considered in deciding the model risk rating and any subsequent validation and testing requirements.”
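
One way to make model risk ratings scalable in the manner described, shown purely as a sketch with a hypothetical scoring scheme not drawn from SR 11-7 or the paper, is to tier validation depth by complexity and impact, net of the surrounding control framework:

```python
# A sketch of a scalable model risk rating: validation depth is tiered by
# complexity, potential impact and the strength of surrounding controls.
# The scoring scheme and tiers are hypothetical.
def model_risk_tier(complexity: int, impact: int, controls_strength: int) -> str:
    """All inputs scored 1 (low) to 3 (high)."""
    score = complexity + impact - (controls_strength - 1)   # strong controls reduce residual risk
    if score >= 5:
        return "high: full independent validation, frequent review"
    if score >= 3:
        return "medium: targeted validation, annual review"
    return "low: automated checks and periodic monitoring"


if __name__ == "__main__":
    # A complex model whose output is tightly bounded by algorithm-level controls.
    print(model_risk_tier(complexity=3, impact=2, controls_strength=3))   # -> medium tier
```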

The seventh factor to consider looks at stress testing, specifically the need to include negative stress testing, which seeks to determine the conditions under which the model assumptions break down. “Where model risks are found, controls should be put in place,” the paper asserts. “Limitations to data inputs can add to the uncertainty of results, and the real world is generally more unpredictable and complex than models. Another unintended risk that is extremely hard to capture is that of similarities, and resulting interdependencies between, the algorithmic models of different firms.

“Capturing the unintended consequences of algorithms and modelling components not performing in line with their intended aims is especially important,” it continues. “The behaviour of individual algorithms and modelling components may be as expected, but the combination of models up to the trading algorithm level may not be as expected. Unfortunately, it is very difficult to develop testing to demonstrate this, even with extremely clear guidelines on the aims of specific algorithmic components.”
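
A minimal sketch of negative stress testing, assuming a toy market-making model and a hypothetical grid of stressed conditions, might scan for the points at which the model’s output stops being usable:

```python
# 'Negative' stress testing: scan stressed conditions to find where the model's
# assumptions break down. The model, grid and breach rule are all hypothetical.
from itertools import product


def toy_quote_width_bps(vol: float, depth: float) -> float:
    # Stand-in market-making model: quote wider when volatility rises and depth thins.
    return 0.5 + 10.0 * vol / max(depth, 1e-6)


def find_breakdowns(max_width_bps: float = 20.0) -> list[tuple[float, float]]:
    """Return (vol, depth) combinations where the model produces unusable quotes."""
    vols = [0.05, 0.10, 0.20, 0.40, 0.80]    # stressed volatility levels
    depths = [10.0, 5.0, 1.0, 0.5, 0.1]      # stressed order-book depth
    breakdowns = []
    for vol, depth in product(vols, depths):
        if toy_quote_width_bps(vol, depth) > max_width_bps:
            breakdowns.append((vol, depth))   # assumption of continuous quoting fails here
    return breakdowns


if __name__ == "__main__":
    print(find_breakdowns())
    # Each pair marks a stressed condition where controls (e.g. pulling quotes),
    # rather than the model itself, must carry the risk.
```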

The final aspect to be taken into account is the need for a “robust” second line of defence. While acknowledging it may be difficult for the second line of defence to possess the quantitative trading expertise to fully challenge the first line, the paper says it is crucial that the second line has enough product and technical knowledge to validate and test models properly. It adds that this is a particular challenge for smaller institutions.

Growing Pains

While the paper acknowledges that FX markets are more advanced in the use of algorithms than many fixed income markets, it observes that as the use of these products grows, it will inevitably extend into less liquid markets with sporadic data sources. In these markets, the paper notes, data sets from more liquid proxy markets can be used, as can unstructured, or alternative, data. It warns, however, “Producing and maintaining such parallel, engineered or unstructured data itself carries serious and practical data governance challenges for firms attempting to use such strategies.”

Equally, having highlighted the negative aspects of a CLOB in markets, the paper observes that these markets provide an “important” source of hedging liquidity in times of market dislocation, and points to events surrounding the Swiss National Bank’s de-pegging in January 2015 as evidence of how dealers switch off streams on disclosed channels when risk limits are hit, in favour of exiting that risk on a public platform.

The lack of a CLOB, the paper observes, “…inevitably increases the tail risk associated with liquidity shocks or sudden gapping in prices in these markets.”

It also argues that the importance of public reference prices goes beyond the question of liquidity in times of stress. “It also directly affects the question of fairness,” the paper states. “Established manipulative techniques are easier to perpetrate in conditions where public reference prices are harder to establish, as may be the case in these less liquid products. A key goal of algorithmic governance needs to be ensuring that algorithms that go to market are fair in terms of not creating market abuse and market stability risks.”

As algorithms move into these less liquid markets, the paper argues, the associated risks will be greater, including the likelihood of ‘gap’ pricing driven by idiosyncratic events. It notes that hold times in liquid markets like foreign exchange are typically sub-second to minutes, but for other FICC markets these times may be days or even weeks – although they will, inevitably, decrease.

“Whether it be longer hold times in less liquid markets or scope for greater losses from leverage in derivatives products, the market risk associated with algorithmic trading is likely to increase in coming years, as product coverage grows,” FMSB says. “In some instances, sporadic liquidity in one product may be compensated by hedging strategies in adjacent products, with associated basis risk.

“Markets in less liquid products are also likely to be much more concentrated as there are unlikely to be more than one or two non-bank market makers who are willing to extend liquidity in all market conditions,” it adds. “Model validation in such cases is even more important and needs to take account of the tail risks of potentially disappearing liquidity.”

Machine Learning

The paper observes that machine learning in markets currently supports operations rather than replacing them; that current algo strategies are still built around relatively transparent, rules-based deterministic models; and that very limited risk capital is being deployed using machine learning algorithms alone as the basis for the whole market making process. Even so, it discusses the challenges of tracing how decisions are being made.

“[It is] very difficult to prevent in advance, or to correct afterwards, undesirable model outcomes,” the paper says. “For example, the machine may discover complex, non-linear ‘hidden’ correlations that it is difficult or impossible for the programmer to anticipate or discover. Further, it is impossible to predict how a machine, trained on known historical data but ‘making its own decisions’ will react when it is live in the market with a much larger dataset and it encounters events that have not been seen before in the data that was used to train it.”

The issue of bias is also discussed; notably, the paper suggests that machine learning is all about discrimination, and that unpredictable discrimination can occur during the optimisation process, when an enormously wide range of factors is analysed. “These biases could include unexpected or unfair changes in pricing or liquidity to certain types of market users, or even to individual customers, as a result of factors that are impossible to uncover because they lie effectively undiscoverable in the heart of the optimisation engine,” it asserts.

Another type of bias may also occur, it adds, the risk that a machine optimising on its own will ‘discover’ that unethical, manipulative trading practices are more profitable than ethical trading. “Indeed, this is virtually a certain outcome, if the machine does not have an ‘ethical governor’ that tests the optimisation process against ethical benchmarks and rejects trading tactics that fall short of these standards,” the paper states.
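
The paper does not specify how an ‘ethical governor’ would be built; as one speculative sketch, candidate tactics proposed by an optimiser could be screened against conduct-style heuristics before deployment, with all metrics and thresholds below being hypothetical.

```python
# Purely illustrative sketch of one possible shape for an 'ethical governor':
# tactics proposed by an optimiser are screened against abuse-style heuristics
# before deployment. All metrics and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class TacticStats:
    name: str
    expected_pnl: float
    order_to_trade_ratio: float     # orders submitted per executed trade
    cancel_within_100ms: float      # share of orders cancelled almost immediately
    share_of_displayed_volume: float


def ethical_governor(candidates: list[TacticStats],
                     max_otr: float = 20.0,
                     max_fast_cancel: float = 0.5,
                     max_display_share: float = 0.25) -> list[TacticStats]:
    """Keep only tactics whose footprint stays inside the (hypothetical) conduct benchmarks."""
    approved = []
    for t in candidates:
        if (t.order_to_trade_ratio <= max_otr
                and t.cancel_within_100ms <= max_fast_cancel
                and t.share_of_displayed_volume <= max_display_share):
            approved.append(t)
    return approved


if __name__ == "__main__":
    candidates = [
        TacticStats("steady_quoting", 900.0, 8.0, 0.1, 0.05),
        TacticStats("layer_and_cancel", 2_400.0, 60.0, 0.9, 0.4),  # more profitable, but rejected
    ]
    print([t.name for t in ethical_governor(candidates)])   # -> ['steady_quoting']
```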

A final challenge around machine learning is one raised by traders for some time now – the increased risk of crowded trades. The paper notes that network effects can create winner-takes-all market structures and the way in which machine learning models improve by accessing more data is likely to create data network effects, which may in turn create barriers to entry for new firms. “Unless they are carefully managed, concentrated market structures may disadvantage market users by unfair rationing of liquidity, skewed pricing, and other non-price based discriminatory barriers,” the paper states. “As algorithms optimise big data from new sources, they may inadvertently increase, or create new correlations between macroeconomic or other input variables. Hungry algorithms will over time arbitrage the profit potential in these correlations – a machine learning version of the ‘crowded trade’ phenomenon – but in doing so they may make markets more fragile to unforeseen shocks and more interconnected, as multiple users depend on a limited number of underlying data relationships.”

The paper concludes by making the case for guidelines that make the traditional model validation process more suitable for algorithmic trading, stating these could have “significant benefits” in terms of efficiency and appropriateness, as well as reducing the risks of market abuse and potential threats to market stability.

“Such standards could ensure firms use appropriate data inputs and have controls over the appropriate use of model type and assumptions,” it says. “They could also create a common understanding of how best to test whether models and model components are robust in all market conditions, through appropriate stress testing. Where there are existing model risk teams, ensuring there is a suitable level of integration with algorithmic trading oversight committees, so that there is a consolidated approach to governance frameworks, would avoid duplication from the second line of defence.”

The paper refers to the “considerable debate” about how more complex machine learning techniques should be governed, noting that many market practitioners believe that existing governance arrangements with a tighter ‘sandbox’ in terms of controls and limits are appropriate. Others believe, however, that machine learning can create new market fairness and stability threats that require a new distinct governance framework. “It is too early in the evolution and usage of these new techniques to be definitive either way but there are likely to be new model risks especially related to the more limited transparency,” the paper states, adding, “The use of execution algorithms must be properly aligned with asset managers’ specific execution policies and strategy. It is important to ensure clarity about when it is appropriate for execution algorithms to direct flow to an in-house principal desk, and controls over the sharing of potentially inappropriate pre-trade information are also issues.

“In summary, the increasing usage of algorithmic trading and the growing complexity of models makes the topics and emerging themes discussed in this Spotlight Review extremely important,” the paper concludes. “Areas of such rapid technology change are also often best addressed by market practitioners with deep domain expertise who can develop solutions that are clear, practical and proactive in managing risks.”

“This Spotlight Review, the first that FMSB has published, looks at the very important issue of algorithmic trading and machine learning,” says Yallop. “This is a space that is developing fast and creating exciting opportunities in markets, but also an emerging area of risk and vulnerability. We are very grateful for the insights and support provided by FMSB members and other industry experts in producing this document. We hope it will create further discussion on the nascent challenges market participants face and also inform potential topics for FMSB’s future work.”

Ciara Quinlan, global head of principal electronic trading, FX, rates and credit at UBS, adds, “As the adoption of algorithmic trading expands into new products and new machine learning technologies emerge, model risk is likely to become increasingly relevant. This review discusses these important themes and makes a credible case for industry-led best practices in this area.”

Meanwhile, Mark Meredith, head of FX e-trading and algorithmic trading at Citi, says, “Citi welcomes FMSB’s work in this paper on emerging themes in algorithmic trading and the scope for best practices in this area.”

FMSB says the next publication will cover the role of data management in the financial system.

Colin Lambert
