Data and algorithms are an integral part of modern trading. Shutterstock

Explainer: the good, the bad, and the ugly of algorithmic trading

Algorithms are taking a lot of flak from those in financial circles. They’ve been blamed for a recent flash crash in the British pound and the greatest fall in the Dow in decades. They’ve been called a cancer and linked to insider trading.

Government agencies are taking notice and are investigating ways to regulate algorithms. But the story is not simple, and telling the “good” algorithms from the “bad” isn’t either. Before we start regulating we need a clearer picture of what’s going on.

The ins and outs of trading algorithms

Taken in the widest sense, algorithms are responsible for the vast majority of activity on modern stock markets. Apart from the “mum and dad” investors, whose transactions account for about 15 to 20% of Australian share trades, almost every trade on the stock markets is initiated or managed by an algorithm.

There are many different types of algorithms at play, with different intentions and impacts.

Institutional investors such as super funds and insurance companies rely on execution algorithms to transact their orders. These slice a large order into many small pieces, gradually and strategically submitting them to the market. The intention is to minimise transaction costs and to receive a good price – if the whole order were submitted in one go, it could move the market price against the investor.
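To make the slicing idea concrete, here is a minimal sketch of a time-weighted (TWAP-style) execution schedule. It is illustrative only: real execution algorithms are far more sophisticated, and the function names and parameters below are assumptions rather than anything used in practice.

```python
# Minimal sketch of a TWAP-style execution schedule (illustrative assumptions only).
import time

def slice_order(total_quantity: int, num_slices: int) -> list[int]:
    """Split a large parent order into roughly equal child orders."""
    base = total_quantity // num_slices
    slices = [base] * num_slices
    for i in range(total_quantity - base * num_slices):
        slices[i] += 1            # spread any remainder over the first slices
    return slices

def execute_twap(total_quantity, num_slices, interval_seconds, submit_order):
    """Submit child orders gradually to limit market impact."""
    for child_qty in slice_order(total_quantity, num_slices):
        submit_order(child_qty)   # e.g. send a small limit order to the exchange
        time.sleep(interval_seconds)

# Example: work a 100,000-share order in 20 slices, one every three minutes.
# execute_twap(100_000, 20, 180, submit_order=print)
```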

Human market makers used to provide quotes to buy or sell a given stock and were responsible for maintaining an orderly market. They have been replaced by algorithms that automatically post and adjust quotes in response to changing market conditions.

Algorithms drove the human market makers out of business by being smarter and faster. Most market-making algorithms, however, have no obligation to maintain an orderly market. When the market gets shaky, algorithms can (and do) pull out, which is where the potential for “flash crashes” – a sudden drop and rapid recovery in a securities market – appears.
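The sketch below shows, in stylised form, why that happens: a simple quoting rule posts prices around a reference value, widens its spread as volatility rises, and stops quoting altogether beyond some threshold. The numbers and the cut-off rule are invented for illustration; they do not describe any real market maker.

```python
# Stylised market-making quote logic (all thresholds are illustrative assumptions).

def make_quotes(mid_price: float, volatility: float,
                base_spread: float = 0.02, vol_limit: float = 0.05):
    """Return (bid, ask) quotes, or None when the market looks too risky to quote."""
    if volatility > vol_limit:
        return None                                        # pull quotes entirely
    spread = base_spread * (1 + volatility / vol_limit)    # widen as risk rises
    return mid_price - spread / 2, mid_price + spread / 2

print(make_quotes(50.00, 0.01))   # calm market: tight two-sided quotes
print(make_quotes(50.00, 0.10))   # shaky market: None - liquidity disappears
```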

Further concerns about algorithmic trading are focused on another kind – proprietary trading algorithms. Hedge funds, investment banks and trading firms use these to profit from momentary price differentials, by trading on statistical patterns or exploiting speed advantages.

Rather than merely optimising a buy or sell decision of a human trader to minimise transaction costs, proprietary algorithms themselves are responsible for the choice of what to buy or sell, seeking to profit from their decisions. These algorithms have the potential to trigger flash crashes.
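As a very rough illustration of “trading on statistical patterns”, the sketch below flags a trade when the price gap between two historically related instruments drifts unusually far from its recent average – a bare-bones mean-reversion signal. The window length and threshold are arbitrary assumptions, not a real strategy.

```python
# Bare-bones mean-reversion signal on the spread between two related instruments.
# Window length and entry threshold are arbitrary, for illustration only.
from statistics import mean, stdev

def spread_signal(prices_a: list[float], prices_b: list[float],
                  window: int = 20, entry_z: float = 2.0) -> str:
    spreads = [a - b for a, b in zip(prices_a, prices_b)][-window:]
    mu, sigma = mean(spreads), stdev(spreads)
    z = (spreads[-1] - mu) / sigma if sigma else 0.0
    if z > entry_z:
        return "sell A / buy B"   # spread unusually wide: bet that it narrows
    if z < -entry_z:
        return "buy A / sell B"   # spread unusually narrow: bet that it widens
    return "no trade"
```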

Fast vs. slow algorithms

Proprietary algorithmic traders are often further divided into “slow” and “fast” (the latter also referred to as “high-frequency” or “low-latency”).

Many traditional portfolio managers use mathematical models to inform their trading. Nowadays such strategies are often implemented using algorithms, drawing on large datasets. Although these algorithms are often faster than human portfolio managers, they are “slow” in comparison to other algorithmic traders.

High-frequency algorithmic trading (HFT) is on the other end of the spectrum, where speed is fundamental to the strategy. These algorithms operate at the microsecond scale, making decisions and racing each other to the market using an array of different strategies. Winning this race can be highly profitable – fast traders can exploit slower traders that are yet to receive, digest or act on new information.

Proponents of HFT argue that it increases efficiency and liquidity, because market prices reflect new information more quickly and fast market makers are better at managing risk. Many institutional investors, on the other hand, argue that HFTs are predatory and parasitic in nature. According to these detractors, HFTs actually reduce the effective liquidity of the stock market and increase transaction costs, profiting at the expense of institutional investors such as superannuation funds.

The effects of algorithms are complicated

A recent study by Talis Putnins from the University of Technology Sydney (UTS) and Joseph Barbara from the Australian Securities and Investments Commission (ASIC) investigated some of these concerns. Using ASIC’s unique regulatory data, the study analysed institutional investors’ transaction costs and quantified the impact of proprietary algorithmic traders on them, finding considerable diversity across those traders.

While some algorithms are harmful to institutional investors, causing higher transaction costs, others have the opposite effect. Algorithms that are harmful, as a group, increase the cost of executing large institutional orders by around 0.1%. This ends up costing around A$437 million per year for all large institutional orders in the S&P/ASX 200 stocks.
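As a back-of-the-envelope check (not a figure from the study itself), those two numbers together imply that the large institutional orders in question total on the order of A$437 billion a year:

```python
# Rough order-of-magnitude check of the figures above (not taken from the study).
harmful_cost_rate = 0.001           # ~0.1% extra cost on affected orders
annual_cost_aud = 437_000_000       # ~A$437 million per year
implied_order_value = annual_cost_aud / harmful_cost_rate
print(f"Implied institutional order flow: about A${implied_order_value / 1e9:.0f} billion per year")
```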

But these effects are offset by a group of traders that significantly decrease those costs by approximately the same amount. The beneficial algorithms provide liquidity to institutional investors by taking the other side of their trades.

They do so not out of the goodness of their little algorithmic hearts, but because they earn a “fee” for the service (for example, the difference between the prices at which they buy and sell). What makes these algorithms beneficial to institutions is that the “fee” they charge is lower than the one institutions would face if these algorithmic traders were not present and they instead had to trade with less competitive or less efficient liquidity providers, such as humans. Algorithms can provide liquidity more cheaply because of their technology and the increased competition among them.
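To put invented numbers on that “fee”: suppose an algorithmic liquidity provider quotes a stock at $10.00 bid / $10.02 ask, while a less efficient provider would quote $9.98 / $10.04. An institution selling through the algorithm receives $10.00 a share rather than $9.98 – the provider still earns its 2-cent spread, but the institution’s cost is lower than it would otherwise have been.

```python
# Invented quotes to illustrate the "fee" (the bid-ask spread) a liquidity provider earns.
algo_bid, algo_ask = 10.00, 10.02     # competitive algorithmic quotes
human_bid, human_ask = 9.98, 10.04    # wider quotes from a less efficient provider

shares = 50_000
saving_when_selling = (algo_bid - human_bid) * shares
print(f"Selling {shares:,} shares via the tighter quotes saves about A${saving_when_selling:,.0f}")
```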

What distinguishes the algorithms is that the beneficial ones trade against institutional investors (serving as their counterparties), whereas the harmful ones trade in the same direction as the institutions, competing with them to buy or sell. In doing so, the beneficial algorithms reduce the market impact of institutional trading, allowing institutions to get into or out of positions at more favourable prices.

The study also found that high-frequency algorithms are not more likely to harm institutional investors than slower algorithms. This suggests institutional investor concerns about HFT may be misdirected.

We shouldn’t stamp out the ‘good’ algorithms

ASIC is now using the tools developed in the Putnins and Barbara study to detect harmful algorithms in its surveillance activities. These are identified by looking for statistical patterns in the trading activity of individual algorithmic traders and the variation in institutional transaction costs. The result is an estimated “toxicity” score for every algorithmic trader, with the highest-scoring traders attracting the spotlight.
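The study’s method isn’t spelled out here, but the general idea can be sketched as relating each trader’s activity to the transaction costs institutions incur around it. The fragment below is only an illustrative stand-in for that idea – the simple correlation, the variable names and the made-up data are assumptions, not ASIC’s actual methodology.

```python
# Purely illustrative "toxicity"-style score: how strongly a trader's participation
# moves with institutional transaction costs. Not the method used by ASIC or the study.
from statistics import correlation   # Python 3.10+

def toxicity_score(trader_participation: list[float],
                   institutional_costs: list[float]) -> float:
    """Higher score = the trader's activity tends to coincide with higher costs."""
    return correlation(trader_participation, institutional_costs)

# Hypothetical daily series: one trader's share of volume, and the average execution
# cost (in basis points) paid by institutions on the same days.
participation = [0.05, 0.20, 0.12, 0.30, 0.08]
costs_bps = [3.1, 6.4, 4.0, 7.2, 3.5]
print(f"toxicity score: {toxicity_score(participation, costs_bps):.2f}")
```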

So, we know the effect of algorithms is complicated, and we can start to tell the harmful apart from the beneficial. Regulators need to be mindful of this diversity and avoid blanket regulations that affect all algorithmic traders, including the good guys. Instead, they should opt for more targeted measures and sharper surveillance tools that place true misconduct in the cross-hairs.
