LMAX Exchange Challenges Conventional Wisdom on TCA

A new white paper released today by LMAX Exchange seeks to
offer more in-depth analysis of the cost of trading in FX markets and calls for
the creation of “robust, commonly-agreed” transaction cost analysis (TCA)
metrics that compare and contrast the differences between executing on firm and
last look liquidity.

The paper, TCA and Fair Execution: The Metrics that the FX Industry Must Use, proposes a blueprint for clients to better discern and compare the costs of trading across firm and last look venues, and argues that existing TCA metrics fail to capture the nuances and value of firm liquidity.

The paper sets out the analysis from a buy side perspective and focuses on three questions: Do the commonly used TCA metrics accurately measure execution costs on both last look and firm liquidity? What are the metrics that measure all the underlying processes of trading on firm liquidity, and thus should be used to properly assess the cost of trading in the FX marketplace? How does the total cost of execution compare between last look and firm liquidity?

The analysis examines in detail fill ratio, price variation and hold time, as well as touching on market impact and the resulting pre-trade information leakage. “Bid-offer spread comparisons will be explored in future publications,” the paper states.

Unsurprisingly for a venue that promotes its firm, no last look liquidity, LMAX says the findings demonstrate that applying the ‘standard’ execution quality metrics developed for last look liquidity does not provide the full picture of execution costs and, more importantly, misses quantifiable positives of trading on firm liquidity.

The main positive it argues the existing TCA metrics miss is price improvement (it also highlights the benefits of consistent execution outcomes); however, the analysis also offers explicit costs relating to trading on last look liquidity. Specifically, it suggests trading costs on last look liquidity are between $2.25 and $48.86 per million, although after nuancing the analysis LMAX settles on an estimated cost of $25 per million.

The analysis uses data from a third party aggregator and compares six liquidity providers’ performance (three bank and three non-bank), alongside that of LMAX Exchange. More than seven million orders are included in the data set, which runs from January 1 to December 31, 2016 and incorporates the UK referendum on EU membership, the US election and the sterling flash crash of October 7, 2016.

The paper proposes five key metrics for assessing execution quality: fill ratio; price variation (more commonly known as slippage or price improvement); hold time (the last look window); bid-offer spread; and market impact – although the paper does note that the interpretation of market impact is “highly subjective”.

Although it proposes five metrics, the data is analysed using only the first three. LMAX says that it will release more in-depth analysis on bid-offer spreads in due course, and it downplays the market impact angle because, “The nature of the trading conducted through the [aggregator] also means that it does not provide a useful source for comparing market impact when using different liquidity types.”

Rejects

When
studying the impact of rejects, the LMAX paper calculates the fill ratio by the
simple method of dividing the number of order fills by the number of order submissions,
less errors. While the authors of the paper – LMAX’s CTO Dr Andrew Phillips;
Andrew Stewart, director of research and strategy; and Dr Sam Adams, head of
software – considered deeper analysis of the reasons for rejects, they say this
was discarded because of the lack of standardised error messages across venues,
some of which are “quite ambiguous”. The only exception to this was “client
errors”, over which the LPs or venue have no control, and it is this data set
that is excluded from the fill ratio calculation.
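
As a rough illustration of that calculation – this sketch is not taken from the paper, and the field names are hypothetical – the fill ratio can be computed by excluding client errors and dividing fills by the remaining submissions:

# A minimal sketch of the fill ratio described above: fills divided by
# submissions, after excluding client errors over which the LP or venue
# has no control. The 'status' labels are hypothetical.
def fill_ratio(orders):
    considered = [o for o in orders if o["status"] != "client_error"]
    if not considered:
        return None
    fills = sum(1 for o in considered if o["status"] == "filled")
    return fills / len(considered)

# Example: 9,994 fills and 6 liquidity-based rejects gives 99.94%
sample = [{"status": "filled"}] * 9994 + [{"status": "rejected"}] * 6
print(f"{fill_ratio(sample):.2%}")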

The analysis looks at reject rates for market and limit orders. On market orders, the fill ratio from “Non-Bank 2” was 99.98%, on LMAX it was 99.94% and on “Bank 1” it was 99.94%. Although “Bank 2” managed a fill ratio of 99.63%, there was then a gap to “Bank 3” (98.27%), “Non-Bank 3” (98.18%) and “Non-Bank 1” (96.95%).

The paper
says that deeper analysis of the rejects on LMAX Exchange found that they were
all “liquidity-based rejects related to times when market conditions did not
permit orderly execution”, which suggests this could have been during the
sterling flash crash.

The picture was radically different when looking at fill ratios for limit orders, although the paper argues that LMAX Exchange’s apparent under-performance on this criterion can be explained by participants not allowing slippage on their limit orders. Of the rejects on LMAX, the paper finds that just 1.4% were due to the customer’s “fill or kill” criteria not being met (generally insufficient liquidity at the price), while the remainder was due to a limit price miss.

It adds that another reason for the apparent under-performance is the “jitter” on the LMAX Exchange venue – put simply, the venue updates prices faster than many customers can handle, and as such many limit orders arrive after the price has updated.

To surmount this obstacle, the paper suggests that participants need to be able to handle a higher rate of market data, co-locate with higher bandwidth and ensure they have a sufficiently low tick-to-trade latency.

Slippage

To calculate the price variation component, the paper simply uses the logged market price at order entry and the actual fill price. By removing all currency pairs that traded fewer than 100,000 times in 2016, the data covers 91% of all successful trades on LMAX Exchange that year.
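
For illustration only – the paper does not publish its code, and the per-million normalisation used here is an assumption – the signed difference between entry price and fill price can be expressed along the following lines:

# A minimal sketch of the price variation measure described above: the signed
# difference between the logged market price at order entry and the actual
# fill price, expressed here in USD per million units of base currency for a
# USD-quoted pair (the paper's exact normalisation is not stated).
def price_variation_per_million(side, entry_price, fill_price):
    # A buyer is improved when filled below the entry price; a seller when above.
    signed = (entry_price - fill_price) if side == "buy" else (fill_price - entry_price)
    return signed * 1_000_000

# Example: buying EUR/USD logged at 1.10000 but filled at 1.10005
print(round(price_variation_per_million("buy", 1.10000, 1.10005), 2))  # -50.0, i.e. $50/million of slippage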

The data highlights the different uses of price variation across the six LPs, with Bank 3 providing 100% of fills at the expected price, while one non-bank LP appears to pass on all price changes, showing 19.40% of orders subject to negative slippage and 10.56% to price improvement.

Of potentially more interest is the ratio of slippage to improvement, with Bank 1 standing out (in a negative sense) by recording a 10.38 ratio – meaning the client is negatively slipped roughly 10 times more often than the price is improved.

The authors note that some of the skew towards slippage reflects the natural bias towards trading in the direction of the market, i.e. buying in a rising market and selling when it is falling.

They also highlight that other venues offer price improvement, but say data from these sources was unavailable.

Last Look

The paper analyses the impact of hold time – or the last look window – by breaking down execution latency into three components: systematic – the physical round trip time plus the time it takes to check risk controls such as limits; the tail – occasions when a surge in message traffic can cause bottlenecks; and discretionary – the time an LP adds to analyse market conditions.

The authors assign a systematic latency of three milliseconds and compare the hold times of fills and rejects, generally finding that rejects come back much more quickly than fills.
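
That decomposition can be pictured with a simple sketch. The split between tail and discretionary latency below is a purely illustrative heuristic and not the authors’ method, although the three-millisecond systematic figure is the paper’s own:

# A minimal, assumption-laden sketch of splitting an observed hold time into
# the three components described above, given the paper's assigned 3 ms of
# systematic latency. The percentile-based tail/discretionary split is
# illustrative only.
import statistics

SYSTEMATIC_MS = 3.0  # round trip plus risk-control checks, per the paper

def decompose_hold_times(hold_times_ms):
    base = statistics.median(hold_times_ms)                        # typical hold time
    p99 = sorted(hold_times_ms)[int(0.99 * (len(hold_times_ms) - 1))]
    return {
        "systematic_ms": SYSTEMATIC_MS,
        "discretionary_ms": max(base - SYSTEMATIC_MS, 0.0),        # deliberate hold beyond systematic
        "tail_ms": max(p99 - base, 0.0),                           # congestion-driven excess
    }

# Example: an LP holding most orders around 103 ms, with one slow outlier
print(decompose_hold_times([103, 104, 102, 103, 105, 250]))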

They conclude that there are indications of arbitrary changes to discretionary latency as a result of market conditions, and highlight a change in discretionary latency after the UK referendum result. The analysis finds that two out of the three non-bank LPs favour selective application of last look times, while two of the three banks prefer a simple base hold time and a longer tail.

The Numbers

The key argument of the LMAX paper is that current TCA metrics do not sufficiently include the benefits of price improvement, and to make its case it provides price improvement data across three customer segments – brokers, aggregator users and institutions.

The data
show that in Q4 (the authors stress they are not cherry picking the data) the
average broker client on LMAX achieved $47.71 per million price improvement,
while institutional clients scored $3.44 and aggregator users $2.25 per million.

Interestingly, the authors also claim that if a client applied a price discretion of 0.3 pips, the average fill ratio on the aggregator would climb from 92.37% to 98.91%, institutional users would see fills rise from 81.95% to 97.77% and broker clients would see a climb from 89.15% to 98%, all while still delivering price improvement.

On the
other side of the balance sheet, the paper analyses the costs of rejections. “Taking
the 100ms hold time, where estimates of cost range from $17.80-$28.60/million
and applying a simple average we might assume that a trader receiving a 5%
reject rate and 100ms hold time is experiencing a cost between $0.89 and
$1.43/million in aggregate over their total trading volume,” the authors
observe. “The real experience is likely to be significantly worse. The costs
identified above show the outcome of a calculation for every order placed, but
we expect rejections to be higher during times of higher market volatility. A
superficially satisfactory fill ratio may mask a small number of extremely
expensive rejections.”
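
The arithmetic in that passage is simple to reproduce; the sketch below uses only the figures quoted – a 5% reject rate and the $17.80-$28.60/million cost range at a 100ms hold time:

# Reproducing the quoted calculation: the estimated cost per million of
# rejected volume, spread over total trading volume via the reject rate.
def aggregate_reject_cost(reject_rate, cost_per_rejected_million):
    return reject_rate * cost_per_rejected_million

for cost in (17.80, 28.60):  # the paper's low and high estimates at 100 ms
    print(f"${aggregate_reject_cost(0.05, cost):.2f}/million")  # $0.89 and $1.43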

Using only the unfilled orders and the cost relative to the order limit price puts this cost of hold time at 100ms in the range of $297-$336/million. “We should approach
this value with a healthy degree of scepticism as it relies on the order limit
prices being realistic, however the value is within the range of the extremes
of price swings observed over a 5ms time period during a major market event,”
the paper states. “Although the sample size is much smaller we can also use
unfilled orders for the trading account used by the TPA as another data point –
this returns values between $83-$86/million.”

Although the paper offers evidence that the aggregator users’ flow is “benign”, it notes that 11.2% of rejected orders are rejected more than once. Approximately 60% of the last look costs are incurred in the first 10 ms, according to the analysis.

While the
paper does recognise that its data set is limited, it without doubt contributes
to the ongoing debate in the FX industry around execution quality. It also
states, “We would welcome a serious and critical collaboration by any
interested or independent parties, and would value access to other trade
databases with varying types of flow to the highly uncorrelated flow seen in
the TPA data set we use here.”

As David
Mercer, CEO of LMAX Exchange, observes in his introduction to the paper, “Our
intention with this white paper is to contribute to an industry-wide debate on
how to conduct TCA in a way that benefits the customer, provides a fair
comparison for liquidity providers, and creates genuine transparency: one that
enables choice and aids quality decision making.

“Ultimately,
this is about helping the industry move towards a position where traders are in
control of their trading costs and can set their liquidity strategy accordingly
which, in turn, should give true market makers an advantage through transparent
information. While there may be disagreements within the industry over the
veracity of specific trading practices, we should all be able to agree that
traders need better tools to understand the true cost of trading.

“Only with
these can they make informed decisions about their execution and liquidity
strategies; and only then can we take the step the whole industry needs towards
restoring trust and confidence in our market,” he adds.

Colin_lambert@profit-loss.com

Twitter: @lamboPnL

Twitter: @Profit_and_Loss

Colin Lambert
