Systematic risk management
As the casual reader of this blog (or my book) will be aware, I like to delegate my trading to systems, since humans aren't very good at it (well, I'm not). This is quite a popular thing to do; many systematic investment funds are out there competing for your money, from simple passive tracking funds like ETFs to complex quantitative hedge funds. Yet most of these employ people to do their risk management. Yes - the same humans who I think aren't very good at trading.
As I stated in a post from a couple of years ago, this doesn't make a lot of sense. Is risk management really one of those tasks that humans can do better than computers? Doesn't it make more sense to remove human emotions and biases from anything that can affect the performance of your trading system?
In this post I argue that risk management for trading systems should be done systematically with minimal human intervention. Ideally this should be done inside an automated trading system model.
For risk management inside the model, I'm using the fancy word endogenous. It's also fine to do risk management outside the model, which would of course be exogenous. However even this should be done in a systematic, process driven, way using a pre-determined set of rules.
A systematic risk management process means people have less opportunity to screw up the system by meddling. Automated risk management also means less work. This makes sense for individual traders like myself, who can't / don't employ their own risk manager (I guess we're our own risk managers - with all the conflicts of interest that involves).
This is the second in a series of articles on risk management. The first (which is rather old, and wasn't originally intended to be part of a series) is here. The final article (now written, and here) will be about endogenous risk management, explain the simple approach I use in my own trading system, and show an implementation of this in pysystemtrade.
What is risk management?
Let's go back to first principles. According to Wikipedia:
"Risk management is the identification, evaluation, and prioritization of risks (defined in ISO 31000 as the effect of uncertainty on objectives) followed by coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events[1] or to maximize the realization of opportunities. Risk management's objective is to assure uncertainty does not deflect the endeavour from the business goals."
This slightly overstates what risk management can achieve. Uncertainty is almost always part of business, and is a core part of the business of investing and trading. It's often impossible to minimise or control the probability of something happening, if that something is an external market event like a recession.
Still, if I pick out the juicy parts of this, I get:
- Identification, assessment and prioritization of risks
- Monitoring of risks
- Minimize and control the impact of unfortunate events
Which I'm going to morph into a six step process:

- Identify some important risks.
- Work out a way to measure them
- Set levels at which action should be taken, and specify an action to take.
- Monitor the risk measurements
- Take action if (when) the measurements exceed critical levels
- When (if) the situation has returned to normal, reverse the action
I would argue that only steps 1, 2 and 3 are difficult to systematise. Steps 4 to 6 should be absolutely systematic, and if feasible automated, occurring within the trading system.
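To make steps 4 to 6 concrete, here is a minimal sketch (in Python, with entirely made-up names and thresholds) of the monitor / act / reverse loop:

```python
# A minimal sketch of steps 4-6: monitor a risk measurement against a
# pre-set critical level, act when it is breached, reverse when normal.
def risk_action(measurement: float, critical_level: float,
                currently_derisked: bool) -> str:
    if measurement > critical_level:
        return "derisk"      # step 5: take the pre-agreed action
    if currently_derisked:
        return "re-risk"     # step 6: situation back to normal
    return "no action"       # step 4: keep monitoring

print(risk_action(0.9, 0.5, False))  # -> derisk
print(risk_action(0.3, 0.5, True))   # -> re-risk
print(risk_action(0.3, 0.5, False))  # -> no action
```

The point is not the code itself, which is trivial, but that the action and the level were decided in advance, with no room for discretion at the moment of crisis.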
Types of risk
It's very easy to forget that there are many types of risk beyond the usual: "the price will fall when we are long and we will lose our shirts". This is known as market risk, and whilst it's the most high profile flavour there are others. Pick up any MBA finance textbook and you'll find a list like this:
- Market risk. You make a trade which goes against you. We quantify this risk using a model.
- Credit / counterparty risk. You do a trade with a guy and then they refuse to pay up when you win.
- Liquidity risk. You buy something but can't sell it when you need to.
- Funding risk. You borrow money to buy something, and the borrowing gets withdrawn forcing you to sell your position.
- (Valuation) Model risk. You traded something valued with a model that turned out to be wrong. Might be hard to distinguish from market risk (eg option smile: is the Black-Scholes model wrong, or is it just that the correct price of OTM vol is higher?).
- (Market) Model risk. You trade something assuming a particular risk model which turns out to be incorrect. Might be hard to distinguish from market and pricing model risk ("is this loss a 6 sigma event, or was our measurement of sigma wrong?"). I'll discuss this more later.
- Operational / IT / Legal risk. You do a trade and your back office / tech team / lawyers screw it up.
- Reputational risk. You do a trade and everyone hates you.
Looking at these, it's obvious that a number of them are hard to systematise, and almost impossible to automate. I would say that operational / IT and legal risks are very difficult to quantify or systematise beyond a pseudo objective exercise like a risk register. It's also difficult for computers to spontaneously examine the weaknesses of valuation models; artificial intelligence isn't quite there yet. Finally, reputation: computers don't care if you hate them or not.
It's possible to quantify liquidity, at least in open and transparent futures markets (it's harder in multiple venue equity markets, and in OTC markets like spot FX and interest rate swaps). It's very easy to program up an automated trading system which, for instance, won't trade more than 1% of the current open interest in a given futures delivery month. However that is beyond the scope of this post.
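As a sketch of the sort of liquidity check I mean (the function name and structure are illustrative, not pysystemtrade code; only the 1% figure comes from the text):

```python
# Hypothetical sketch: cap any order at a fixed fraction of the open
# interest in the relevant futures delivery month.
MAX_FRACTION_OF_OPEN_INTEREST = 0.01  # don't trade more than 1% of OI

def cap_order_size(desired_contracts: float, open_interest: int) -> float:
    """Limit an order to a fraction of the market's open interest."""
    max_contracts = MAX_FRACTION_OF_OPEN_INTEREST * open_interest
    if abs(desired_contracts) <= max_contracts:
        return desired_contracts
    # preserve the sign (long/short), shrink the size
    return max_contracts if desired_contracts > 0 else -max_contracts

print(cap_order_size(500, 20_000))   # capped at 200 contracts
print(cap_order_size(-50, 20_000))   # small enough: unchanged
```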
In contrast it's not ideal to rely on quantitative measures of credit risk, which tend to lag reality somewhat and can also be completely divorced from it (for example, consider the AAA rating of the "best" tranche of nearly every mortgage backed security issued in the years up to 2007). A computer will only find out that its margin funding has been abruptly cut when it finds it can't do any more trading. Humans are better at picking up and interpreting whispers of possible financial disaster or funding trouble.
This leaves us with market risk - what most people think of as financial risk. But also market model risk (a mouthful I know, and I'm open to using a better name). As you'll see I think that endogenous risk management can deal pretty well with both of these types of risk. The rest are better left to humans. So later in the post I'll outline when I think it's acceptable for humans to override trading systems.
What does good and bad risk management look like?
There isn't much evidence around of what good risk management looks like. Good risk management is like plumbing - you don't notice it's there until it goes wrong, and you've suddenly got "human excrement"* everywhere.

*Well, my children may read this blog. Feel free to substitute a different expression here.
There are lots of stories about bad risk management. Where do we start... Possibly here is a good place: https://en.wikipedia.org/wiki/List_of_trading_losses.
*Nick Leeson. Bad risk management in action, early 90's style. Source: Daily Mail*
Generally traders are given a small number of risk management parameters they need to fit within.
For example, my first job in finance was working as a trader for Barclays Capital. My trading mandate included a maximum possible loss (a mere million quid if I remember correctly), as well as limits on the greeks of my position (I was trading options). I also had a limit on everyone's favourite "single figure" risk measure, VAR.
Bad traders will either wilfully, or through ignorance, bend these limits as much as possible. For example, if I go back to the list of trading losses above, it's topped by this guy:
*Howie. The 9 billion dollar man. Not in a good way. Source: wallstreetonparade.com*
Howie successfully called the sub prime mortgage debt collapse. He bet on a bunch of mortgage related derivative crap falling. But to offset the negative carry of this trade (which caused a lot of pain to other people doing the same trade) he bought a bunch of higher rated mortgage related derivatives. For boring technical reasons he had to buy a lot more of the highly rated stuff.
On paper - and probably according to Morgan's internal models - this trade had minimal risk. It was assumed that the worst that could happen would be that house prices stayed up, and both the long and short side would stay high. Hopefully though Howie would get it right - the crap would fall, and the good stuff would hold its value.
However it turned out that the good stuff wasn't that good either; the losses on the long position ended up dwarfing the gains on the short position. The risk model was wrong.
(The risk management team did [eventually] warn about this, but Howie successfully argued that the default rate they were using to model the scenario would never happen. It did.)
Risk management embodied by trading systems
From the above discussion we can derive my first principle of risk management:
Good traders do their own risk management
(and by trader here I mean anyone responsible for making investment decisions, so it includes fund managers of all flavours, plus people who think of themselves as investors rather than traders).
Good traders will take their given risk limits as a starting point. They will understand that all risk measurements are wrong. They will think about what could go wrong if the risk model being used was incorrect. They will consider risks that aren't covered by the model.
Similarly, good trading systems already do quite a lot of risk management. This isn't something we need to add; it's already naturally embodied within the system itself.
For example, in my book I explain how a trading system should have a predetermined long term target risk, and then how each position should be sized to achieve a particular target risk according to its perceived profitability (the forecast) and the estimated risk for each block of the instrument you're trading (like a futures contract), using estimates of return volatility. I also talk about how you should use estimates of correlations of forecasts and returns to achieve the correct long run risk.
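As a rough sketch of that kind of volatility-targeted position sizing (heavily simplified from the book's framework; all names and numbers here are illustrative, and I'm ignoring forecast capping, correlations and FX):

```python
def position_size(capital: float,
                  target_risk_annual: float,
                  forecast: float,
                  instrument_vol_annual: float,
                  block_value: float) -> float:
    """
    Size a position so that, at an average forecast (10), it delivers
    the target annualised risk on our capital. A sketch only.
    """
    cash_vol_target = capital * target_risk_annual       # currency risk we want
    vol_per_block = block_value * instrument_vol_annual  # currency risk per contract
    average_position = cash_vol_target / vol_per_block
    return average_position * (forecast / 10.0)

# 100k capital, 20% annual risk target, an average forecast of +10, an
# instrument with 16% annualised vol and 1,000 of exposure per block
print(position_size(100_000, 0.20, 10, 0.16, 1_000))  # 125.0 contracts
```

Note how the position automatically shrinks when estimated volatility rises: the risk management is baked into the sizing rule itself.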
Trading systems that include trend following rules also automatically manage the risk of a position turning against them. You can do a similar thing by using stop loss rules. I also explain how a trading system should automatically reduce your risk when you lose money (and there is more on that subject here).
All this is stuff that feels very much like risk management. To be precise, it's the well known market risk that we're managing here. But it isn't the whole story - we're missing out market model risk. To understand the difference I first need to explain my philosophy of risk in a little detail.
The different types of risk
I classify risk into two types - the risk encompassed by our model of market returns; and the part that isn't. To see this a little more clearly, have a look at a picture I like to call the "Rumsfeld quadrant".
The top left is stuff we know. That means there isn't any risk. Perhaps the world of pure arbitrage belongs here, if it exists. The bottom left is stuff we don't know we know. That's philosophy, not risk management.
The interesting stuff happens on the right. In green on the top right we have known-unknowns. It's the area of quantifiable market risk. To quantify risk we need to have a market risk model.
The bottom right red section is the domain of the black swan. It's the area that lies outside of our market risk model. It's where we'll end up if our model of market risk is bad. There are several ways that can happen:
- We have the wrong model. So for example before Black-Scholes people used to price options in fairly arbitrary ways.
- We have an incomplete model. Eg Black-Scholes assumes a lognormal distribution. Stock returns are anything but lognormal, with tails fatter than a cat that has got a really fat tail.
- The underlying parameters of our market have changed. For example implied volatility may have dramatically increased.
- Our estimate of the parameters may be wrong. For example if we're trying to measure implied vol from illiquid options with large bid-ask spreads. More prosaically, we can't measure the current actual volatility directly, only estimate it from returns.
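On that last point, here's a minimal illustration that we only ever estimate volatility from returns, for example with an exponentially weighted moving average (the span and the annualisation factor of 256 trading days are arbitrary choices, not recommendations):

```python
import numpy as np

def ewma_vol(returns: np.ndarray, span: int = 35) -> float:
    """Annualised EWMA volatility estimate from daily returns (a sketch)."""
    lam = 2.0 / (span + 1.0)
    var = returns[0] ** 2
    for r in returns[1:]:
        var = (1 - lam) * var + lam * r ** 2
    return float(np.sqrt(var * 256))  # ~256 trading days a year

# Simulate returns with a known "true" daily vol of 1% (~16% annualised)
rng = np.random.default_rng(0)
daily = rng.normal(0, 0.01, size=500)
print(round(ewma_vol(daily), 3))  # a noisy estimate somewhere near 0.16
```

Even with the "true" vol held fixed, the estimate bounces around it; in real markets the true vol is moving as well, so estimate error and parameter change are tangled together.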
An important point is that it's very hard to tell apart (a) an extreme movement within a market risk model that is correct, from (b) a movement that isn't actually that extreme, it's just that your model is wrong. In simple terms: is the 6 sigma event (which should happen once every 500 million days) really a 6 sigma event?
Or is it really a 2 sigma event, and your volatility estimate is out by a factor of 3? Or has the unobservable "true" vol changed by a factor of 3? Or does your model not account for fat tails, because 6 sigma events actually happen 1% of the time? You usually need a lot of data to make a Bayesian judgement about which is more likely. Even then it's a moving target, because the underlying parameters will always be changing.
This also applies to distinguishing different types of market model risk. You probably can't tell the difference between a two state market with high and low volatility (changing parameter values), and a market which has a single state but a fat tailed distribution of returns (incomplete model); and arguably it doesn't matter.
What people like to do, especially quants with PhDs trapped in risk management jobs, is make their market models more complicated to "solve" this problem. Consider:
On the left we can see that less than half of the world has been explained by green, modelled, market risk. This is because we have the simplest possible multiple asset risk model - a set of Gaussian distributions with fixed standard deviation and correlations. There is a large red area where we have the risk that this model is wrong. It's a large area because our model is rubbish. We have a lot of market model risk.
However - importantly - we know the model is rubbish. We know it has weaknesses. We can probably articulate intuitively, and in some detail, what those weaknesses are.
On the right is the quant approach. A much more sophisticated risk model is used. The upside of this is that there will be fewer risks that aren't captured by the model. But this is no magic bullet. There are some downsides to greater complexity. One problem is that with more parameters the model is harder to estimate, and estimates of things like higher order moments or state transition probabilities can be very sensitive to outliers.
More significantly however, I think these complicated models give you a false sense of security. To anyone who doesn't believe me I have just two words to say: Gaussian Copula. Whilst I can articulate very easily what is wrong with a simple risk model, it is much harder to think of what could go wrong with a far weirder set of equations.
(There is an analogy here with valuation model risk. Many traders prefer to use Black-Scholes option pricers and adjust the volatility input to account for smile effects, rather than use a more complicated option pricer that captures this effect directly.)
So my second principle of risk management is:
Complicated risk model = a bad thing
What I prefer to do is use a simple model of returns as part of my trading system. Then I deal with market model risk systematically: either endogenously in the system, or exogenously.
Risk management inside the system (endogenous)
The disadvantage of simpler models is their simplicity. But because they're simple, it's also easy to write down what their flaws are. And what can be written down easily can, and should, be added to a trading system as an endogenous risk management layer.
Let's take an example. We know that the model of fixed Gaussian volatility is naive (and I'm being polite). Check this out (ignore the headline, which is irrelevant and for which there is no evidence):
*S&P 500 vol over time. Source: Seeking Alpha*
Now I could deal with this problem by using a model with multiple states, or something with fatter tails. However that's complicated (=bad).
If I were to pinpoint exactly what worries me here, it's this: increasing position size when vol is really low, like in 2006, because I know it will probably go up abruptly. There are far worse examples of this: EURCHF before January 2015, front Eurodollar and other STIR contracts, CDS spreads before 2007...
I can very easily write down a simple method for dealing with this, using the 6 step process from before:
- We don't want to increase positions when vol is very low.
- We decide to measure this by looking at realised vol versus historical vol
- We decide that we'll not increase leverage if vol is in the lowest 5% of values seen in the last couple of years
- We monitor the current estimated vol, and the 5% quantile of the distribution of vol over the last 500 business days.
- If estimated vol drops below the 5% quantile, use that instead of the lower estimated vol. This will cut the size of our positions.
- When the vol recovers, use the higher estimated vol.
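Steps 5 and 6 above amount to flooring the vol estimate at a quantile of its own history. A minimal sketch (the 5% quantile and 500 day window are the numbers from the text; the function names are made up):

```python
import numpy as np

def floored_vol(vol_history: np.ndarray, current_vol: float,
                quantile: float = 0.05) -> float:
    """
    Use the 5% quantile of vol over the lookback window as a floor:
    if current vol is below it, size positions off the floor instead.
    """
    vol_floor = np.quantile(vol_history, quantile)
    return max(current_vol, vol_floor)

# Last 500 business days of (hypothetical) annualised vol estimates
history = np.linspace(0.10, 0.30, 500)
print(floored_vol(history, 0.05))  # below the floor -> floored at ~0.11
print(floored_vol(history, 0.25))  # above the floor -> unchanged
```

Because positions are sized inversely to vol, flooring the vol estimate caps leverage exactly when the naive model would be at its most dangerous.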
It's easy to imagine how we could come up with other simple ways to limit our exposure to events like correlation shocks, or unusually concentrated positions. The final post of this mini series will explain how my own trading system does its own endogenous risk management, including some new (not yet written) code for pysystemtrade.
Systematic risk management outside the system (exogenous)
There is a second category of risk management issues. This is mostly stuff that could, in principle, be implemented automatically within a trading system. But it would be more trouble than it's worth, or pose practical difficulties. Instead we develop a systematic process which is followed independently. The important point here is that once the system is in place there should be no room for human discretion here.
An example of something that would fit nicely into an exogenous risk management framework would be something like this, following the 6 step programme I outlined earlier:
- We have a large client that doesn't want to lose more than half their initial trading capital - if they do they will withdraw the rest of their money and decimate our business.
- We decide to measure this using the daily drawdown level
- We decide that we'll cut our trading system risk by 25% if the drawdown is greater than 30%, by half at 35%, by three quarters at 40% and completely at 45% (allowing some room for overshoot).
- We monitor the daily drawdown level
- If it exceeds the level above we cut the risk capital available to the trading system appropriately
- When the capital recovers, regear the system upwards
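The degearing schedule in step 3 can be written down as a trivial lookup. A sketch (thresholds as in the example above):

```python
def risk_multiplier(drawdown: float) -> float:
    """
    Degearing schedule from the example: cut system risk by 25% at a
    30% drawdown, 50% at 35%, 75% at 40%, and entirely at 45%.
    Returns the fraction of normal risk capital to run.
    """
    schedule = [(0.45, 0.00), (0.40, 0.25), (0.35, 0.50), (0.30, 0.75)]
    for trigger, multiplier in schedule:
        if drawdown >= trigger:
            return multiplier
    return 1.0

print(risk_multiplier(0.10))  # 1.0  - full risk
print(risk_multiplier(0.32))  # 0.75 - cut by a quarter
print(risk_multiplier(0.47))  # 0.0  - flat
```

The point of writing it down like this is that there is nothing left to argue about when the drawdown actually happens.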
[I note in passing that:
Firstly this will probably result in your client making lower profits than they would have done otherwise, see here.
Secondly this might seem a bit weird - why doesn't your client just stump up only half of the money? But this is actually how my previous employers managed the risk of structured guaranteed products that were sold to clients with a guarantee (in fact some of the capital was used to buy a zero coupon bond). These are out of fashion now, because much lower interest rates make the price of the zero coupon bonds far too rich to make the structure work.
Finally for the terminally geeky, this is effectively the same as buying a rather disjointed synthetic put option on the performance of your own fund]
Although this example can, and perhaps should, be automated it lies outside the trading system proper. The trading system proper just knows it has a certain amount of trading capital to play with; with adjustments made automatically for gains or losses. It doesn't know or care about the fact we have to degear this specific account in an unusual way.
In the next post I'll explain in more detail how to construct a systematic exogenous risk management process using a concept I call the risk envelope. In this process we measure various characteristics of a system's backtested performance, and use this information to determine degearing points for different unexpected events that lie outside of what we saw in the backtest.
For now let me give you another slightly different example - implied volatility. Related to the discussion above there are often situations when implied vol can be used to give a better estimate of future vol than realised vol alone. An example would be before a big event, like an election or non farm payroll, when realised vol is often subdued whilst implied vols are very rich.
Ideally you'd do this endogenously: build an automated system which captured and calculated the options implied vol surface and tied this in with realised vol information based on daily returns (you could also throw in recent intraday data). But this is a lot of work, and very painful.
(Just to name a few problems; stale and non synchronous quotes, wide spreads on the prices of OTM options give you very wide estimates of implied vol, non continuous strikes, changing underlying mean the ATM strike is always moving....)
Instead a better exogenous system is to build something that monitors implied vol levels, and then cut positions by a prescribed amount when they exceed realised vol by a given proportion (thus accounting for the persistent premium of implied over realised vol). Some human intervention in the process will prevent screwups caused by bad option prices.
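A bare-bones sketch of such a monitor (the thresholds and the "normal premium" adjustment are purely illustrative, and in practice a human would sanity-check the vol inputs first):

```python
def derisk_fraction(implied_vol: float, realised_vol: float,
                    normal_premium: float = 1.1,
                    trigger_ratio: float = 1.5) -> float:
    """
    Fraction by which to cut positions when implied vol exceeds realised
    vol by more than its usual premium. Illustrative numbers only.
    """
    # adjust for the persistent premium of implied over realised vol
    ratio = implied_vol / (realised_vol * normal_premium)
    if ratio <= trigger_ratio:
        return 0.0  # nothing unusual: no action
    # cut proportionally, capped at a full de-risk
    return min(1.0, ratio - trigger_ratio)

print(derisk_fraction(0.15, 0.14))  # calm markets -> 0.0, no action
print(derisk_fraction(0.40, 0.12))  # implied blowing out -> fully de-risk
```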
Discretionary overrides
Ideally all risk managers at systematic funds could now be fired, or at least redeployed to more useful jobs.
*Risk manager working on new career. Source: wikipedia*
But is it realistic to do all risk management purely systematically, either inside or outside a system? No. Firstly we still need someone to do this stuff...
- Identify some important risks.
- Work out a way to measure them
- Set levels at which action should be taken, and specify an action to take.
Secondly there are a bunch of situations in which I think it is okay to override the trading system, due to circumstances which the trading system (or predetermined exogenous process) just won't know about.
I've already touched on this in the discussion related to types of risk earlier, where I noted that humans are better at dealing with hard to quantify more subjective risks. Here are some specific scenarios from my own experience. As with systematic risk management the appropriate response should be to proportionally de-risk the position until the problem goes away or is solved.
Garbage out – parameter and coding errors
If an automated system does not behave according to its algorithm there must be a coding bug or incorrect parameter. If it isn't automated then it's probably a fat finger error on a calculator or a formula error on a spreadsheet. This clearly calls for a de-risking unless it is absolutely clear that the positions are of the correct sign and smaller than the system actually desires. The same goes for incorrect data; we need to check against what the position would have been with the right data.
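A simple reconciliation check along these lines might look like the following sketch (names and structure are hypothetical): a position only passes if it has the correct sign and is no larger than the system actually wants.

```python
def check_positions(live: dict, expected: dict, tolerance: float = 0.0) -> list:
    """
    Compare live broker positions against what the system says they
    should be; return the instruments that need investigating.
    """
    problems = []
    for instrument in set(live) | set(expected):
        have = live.get(instrument, 0)
        want = expected.get(instrument, 0)
        same_sign = (have >= 0) == (want >= 0)
        # OK only if sign matches and we hold no more than the system wants
        if not (same_sign and abs(have) <= abs(want) + tolerance):
            problems.append(instrument)
    return sorted(problems)

live = {"SP500": 2, "GOLD": -1, "BUND": 3}
expected = {"SP500": 2, "GOLD": 1, "BUND": 5}
print(check_positions(live, expected))  # ['GOLD'] - the sign is wrong
```

Anything flagged gets de-risked first and investigated second, per the rule above.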
Liquidity and market failure
No trading system can cope if it cannot actually trade. If a country is likely to introduce capital controls, if there is going to be widespread market disruption because of an event or if people just stop trading then it would be foolish to carry on holding positions.
Of course this assumes such events are predictable in advance. I was managing a system trading Euroyen interest rate futures just before the 2011 Japanese earthquake. The market stopped functioning almost overnight.
A more pleasant experience was when the liquidity in certain Credit Default Swap indices drained away after 2008. The change was sufficiently slow to allow positions to be gradually derisked in line with lower volumes.
Denial of service – dealing with interruptions
A harder set of problems to deal with are interruptions to service. For example hardware failure, data feed problems, internet connectivity breaking or problems with the broker. Any of these might mean we cannot trade at all, or are trading with out of date information. Clearly a comparison of likely down time to average holding period would be important.
With medium term trading, and a holding period of a few weeks, a one or two day outage should not unduly concern an individual investor, although they should keep a closer eye on the markets in that period. For longer periods it would be safest to shut down all positions, balancing the costs of doing this against possible risks.
What's next
As I said, I'll be doing another post on this subject. The final post will explain how I use endogenous risk management within my own trading system.



