The explosive issue of banker bonuses refuses to go away – and not just because of the Bank of England’s controversial new “clawback” measures, which will defer bonuses for three to five years. President Obama recently played his part in relighting the fuse when he used a radio interview to claim bonuses still encourage Wall Street traders to “take big risks” that might imperil economic stability. As he put it:
Right now, if you’re in one of the big banks, the profit centre is the trading desk. You can generate a huge amount of bonuses by making some big bets. You might end up leaving the shop, but in the meantime everybody else is left holding the bag.
The remarks provoked a swift and outraged reaction from financial executives, who dismissed them as anything from “just political” to “just weird”. Trader pay has already undergone extensive reforms, they insisted, and the ability to make “risky bets” has been considerably reduced as a consequence.
Opinion on that score may well remain split, especially following the revelation that in 2013 the average Wall Street worker took home a bonus of US$164,530 – the highest figure since the chaos of 2008 and the third highest on record.
But one thing is certain: there is fresh impetus to the debate over risk and reward and whether human fallibility – as opposed to regulatory shortcomings or casino economics – should be our foremost concern in attempting to prevent another global financial crisis.
Rewind to the mid-2000s and the early days of the US sub-prime housing market disaster. It was during this extraordinary period of mismanagement and myopia that the lack of responsible risk controls in the world of finance reached its mind-boggling zenith. Individually and collectively, staff were pursuing their own destabilising agendas.
Take Clayton Holdings, the largest residential loan due diligence and securitisation surveillance company in the West. A review of more than 900,000 mortgages issued from January 2006 to June 2007 showed that just 54% of loans met the lenders’ underwriting standards – a fact that was revealed only years later, during testimony to the government’s financial crisis commission.
How did this happen? Turns out, it wasn’t a unique event. There are numerous examples of similarly sustained recklessness, by no means all of them confined to the global financial crisis. In 1997, NatWest Markets, the bank’s corporate and investment arm, revealed it had uncovered a £50m loss – a figure that escalated to £90.5m after further investigations into the actions of just two employees. In early 2008 it emerged that a relatively junior futures trader with Société Générale, Jérôme Kerviel, had fraudulently lost a staggering €4.9bn.
What unites all of these tales of woe is a phenomenon that has come to be known as “people risk”. It’s a topic capable of both terrifying and enthralling the financial sector – an industry that tends to be risk-averse in public and rash in private. Nobody is keen to admit to this failing, but pretty much everybody is keen to hear how it might be addressed.
The central argument is that incidents such as those mentioned above can almost invariably be traced back to internal hierarchies that allow major errors of judgement to go undetected, ignored or suppressed until calamity strikes. Funnily enough, finance academics all but ignore this possibility and leave applied psychologists and sociologists to come up with theories. Ethical climate theory, pro-social behaviour and the theory of planned behaviour are most commonly employed to explain how to deal with such risks; yet they are rarely applied in the finance sector, where a culture of secrecy hampers the access to staff that such research requires.
With solutions less than abundant, massive miscalculations, significant losses and even serious fraud are nigh on inevitable. An appropriate work climate and a well-motivated ethical workforce may help detect serious risk, but a demotivated and opportunistic workforce will at best look away and at worst exploit the flaws in the system.
This, to varying degrees, is what happened in the sub-prime market, at NatWest and at Société Générale. It’s what Obama implies could still be happening now. In other words, the seemingly uninformed man in the street might not be a million miles from the truth when he laments that his money was lost by an incompetent, greedy or dishonest banker.
One reason why such deviance is able to flourish is that three key strands of business bureaucracy exist in a state of permanent antagonism. Upper management either blithely assumes that safe policies are being implemented or is happy to collude with risky practices that deliver short-term profits. Risk assessors are left to contend with the contradictory demands of executives who maintain that greater performance can go hand-in-hand with greater caution. Frontline staff don’t understand – or don’t care to understand – the implications of policies that stop them maximising their earning potential through targets and bonuses.
This can lead to a perfect storm. People act subjectively out of malice, avarice or ignorance. The organisational disconnect between top and bottom is exploited as a matter of course. Whistleblowers are more likely to be sacked than supported, while the most cavalier risk-takers are lauded right up until the point where their actions finally backfire in spectacular and excruciatingly conspicuous fashion.
Maybe Obama did stray into the realms of politically charged hyperbole in his “holding the bag” outburst, but the subsequent gainsaying shouldn’t blind us to the fact that people risk is still an enormous and habitually overlooked threat. And it just so happens that bonuses – that most endlessly divisive of mechanisms – could prove to be a key component of a meaningful response.
After all, recent history has shown they have tended to serve as a reward for risky behaviour. So how about nurturing a culture in which they acknowledge responsible behaviour? How about using them as an incentive for compliance rather than to promote subversion?
Of course we shouldn’t pretend an effective remedy along these lines will be easily achieved. But neither should we be content to suppose the problem, like the fierce arguments that surround it, will simply vanish and be conveniently forgotten; and, perhaps above all, neither should we dare imagine for an instant that doing nothing will result in anything other than further harsh lessons.