
When endogenous risk management isn't enough: a simple risk overlay


"How does your chance control paintings?"

... Is a question I'm regularly requested.

In fact this is actually a difficult question. If you were to look at my open source python backtesting project, pysystemtrade, you would struggle to point at a piece of code and say "Behold! Right there, that's the risk management part alright!". The reason is that the risk management in my trading system is endogenous (from the Greek, meaning 'word used to mean internally or inside by people trying to sound clever'). Risk management is something it just does without even trying.

For example, if volatility rises, then positions will be cut. If it starts to lose money on a particular position, the position will be cut. If the amount of capital deployed reduces, the position will be cut. Many of these things look like deliberate risk management, or perhaps the term 'position management' is more appropriate. But they are just a consequence of the simple building blocks that the system is built upon: inverse vol position scaling, a preponderance of trend following rules, and liberal use of the Kelly criterion.

However, these simple building blocks make some heroic assumptions. In particular they assume that asset returns follow a joint Gaussian return distribution, where co-movements are linear, and both volatility and correlations are perfectly predictable from historic data. The system also does its risk management on a long run average basis.

The consequences of this are, to use technical language for a moment: sometimes things could get a bit scary. This post explains how, and introduces a simple risk overlay to make things slightly less scary. Essentially this overlay sits slightly outside the main system (although it runs as part of the same code base), tweaking positions when certain risk limits are hit.

This is an overlay I have already implemented in my existing trading system, where it has worked well for over 6 years. Although this new code is designed for pysystemtrade, I will make the python code as stand-alone as possible so you can adapt it for your own use if you wish. It will work equally well in other trading systems, although it will be most useful in a system that works more like mine.

This is a continuation of a series I began some years ago, but only got around to writing a couple of posts for. In the spirit of tidiness, here are the first two posts:

  • Why black box hedge fund managers should have lazy risk managers
  • Systematic risk management

You do not have to read the first two posts to understand this one, but it would help, especially if you do not know exactly what I mean by endogenous risk management.

Parts of this post will be easier to follow if you've read my first book, Systematic Trading.

Realised risk

Let's start by measuring the actual risk we realised. This should average 25%, or whatever is defined in system.config.percentage_vol_target.

(I have grabbed a backtest to do this which basically mirrors my live system with a subset of instruments, but you can play along with a different pysystemtrade backtest if you wish, or any series of daily returns you happen to have.)

# assuming we already have a pysystemtrade system object...
returns = system.accounts.portfolio().as_percent()
returns = returns["1997-01-01":]
annualised_std = returns.std()*16

(I'm only showing data since 1997, because I'm in the process of cleaning up my price data, which still has some spikes in it that aren't real.)

The average standard deviation comes out at 23.7%, which is a fraction below the target of 25%, but more importantly how does this vary over time? Let's plot the daily returns:

With 23.8% annual risk, which equates to around 1.5% a day, we would expect to see around two thirds (68%) of our returns coming in at between -1.5% and +1.5% (if our mean was zero; the mean is actually 0.08%, so the returns will come in between -1.42% and +1.58%, which is not very different).
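To make the arithmetic explicit, annualisation uses the square root of time rule with roughly 256 business days in a year, so daily vol is annual vol divided by 16. A minimal sketch, using the figures quoted above:

# sqrt(256 business days) = 16, so daily vol = annual vol / 16
annual_vol = 0.238             # 23.8% annualised standard deviation
daily_vol = annual_vol / 16    # roughly 1.5% a day
daily_mean = 0.0008            # 0.08% a day

# roughly two thirds (68%) of daily returns should fall within one
# standard deviation either side of the mean
lower_band = daily_mean - daily_vol    # roughly -1.4%
upper_band = daily_mean + daily_vol    # roughly +1.6%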

To see this more clearly, let's look at the rolling standard deviation of returns over the last 125 business days (about 6 months), and multiply by 16 to annualise:

				
roll_std_returns = returns.rolling(125).std()*16

You can see clearly that the 6 month rolling risk is quite variable, dipping down to 10% at some points, and up above 40% in the halcyon days of the early 2000s.

There are two explanations for this:

  1. We are very bad at predicting our risk
  2. We are allowing our expected risk to vary a lot

... or possibly a little bit of both.

It's very easy to check whether our expected risk should vary a lot, by measuring our expected risk (assuming, of course, a joint Gaussian risk model).

The code for this is a bit lengthy, so rather than cut and paste it I've dropped it into its very own little gist. The complicated part is converting everything into notional exposure as a percentage of capital, which allows us to use percentage returns. Incidentally, I use a 30 day span for standard deviations, and 120 days for correlations. These give a reasonably accurate estimate, but using different values wouldn't make a big difference.
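If you don't want to open the gist, here is a minimal sketch of the kind of calculation it performs, assuming you already have a dataframe of daily percentage returns per instrument and, for each day, the positions expressed as a proportion of capital; the function and variable names here are illustrative rather than the ones in the gist:

import numpy as np

# rolling estimates: 30 day span for standard deviations, 120 days for correlations
rolling_std = instrument_returns.ewm(span=30).std()
rolling_corr = instrument_returns.ewm(span=120).corr()

def expected_annual_risk(weights, std_dev, corr_matrix):
    # weights: positions as a proportion of capital (numpy array, one per instrument)
    # std_dev: daily percentage vol per instrument; corr_matrix: instrument correlations
    sigma = corr_matrix * np.outer(std_dev, std_dev)                # covariance matrix S
    daily_portfolio_std = (weights.dot(sigma).dot(weights)) ** 0.5  # sqrt(w S w')
    return daily_portfolio_std * 16.0                               # annualise: multiply by 16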

OK, so our expected risk is expected to vary (and I will explain why below). Another way of thinking about the problem is to see how well we did at matching expected and realised risk (basically, how good is our simple Gaussian normal risk model at forecasting risk). The plot below shows the realised returns, with one and two standard deviation bands from the expected risk distribution (ignoring the tiny mean).

threshold = pd.concat([risk_series*100/16.0, -risk_series*100/16.0], axis=1)

thresh_and_returns = pd.concat([returns, threshold], axis=1)

thresh_and_returns.columns = ['returns', '+1std', '-1std']

We ought to see about two thirds of the returns fall within the bands, and indeed they do. Also, the bands should widen when the returns do. This happens e.g. in 2004. The risk model is not perfect, because there are some large outliers that we wouldn't expect to get with Gaussian normal returns (although some of these may be due to bad data). But it's doing a reasonable job of forecasting future risk.

Mostly our realised risk is varying because our expected risk is varying. So let's find out why.

Why does expected risk vary?

It is worth briefly revisiting the calculations used to work out a position size. The position measured as a percentage of capital is a product of several different numbers, but it simplifies to this:

position as % of capital = (instrument forecast / average forecast) * (target risk / instrument risk) * instrument weight * IDM

Where the IDM (Instrument Diversification Multiplier) is the factor applied to positions to account for the correlation between trading subsystems (i.e. the trading strategies we run for each instrument and the returns they produce, not the underlying instrument returns).

And the portfolio risk calculation is wSw' , where w are the weights (basically position as % of capital) and S is the covariance matrix composed of instrument standard deviations and the correlation between instrument returns (different from that used for IDM).

In a very handwaving manner, it can be shown that the current expected portfolio risk will then be equal to:

Expected risk = target risk * (relative forecast strength) * (relative correlation factor)

Relative forecast strength is a measure of how strong forecasts are relative to the average; it is equal to the absolute forecasts for each instrument, weighted by instrument weight and divided by the average forecast (set to 10 in pysystemtrade).

All other things being equal, if your forecasts are all 20, and the average is 10, then your expected portfolio risk would be twice the average risk, or roughly twice the target risk (50% in the example I've been using).

Importantly, we want risk to vary according to forecast strength. Otherwise we'd have exactly the same risk on even if our forecasts were all +0.001, as if they were +20 (the maximum allowed under forecast capping).

(There is a school of thought which says that we want risk to remain constant, which is how a lot of long:short hedge funds construct portfolios, but that is another blog post.)

The relative correlation factor (RCF) is a bit more complicated. It is equal to the ratio between the IDM (which accounts for the average correlation across subsystem returns), and the IDM that would be appropriate given the current set of positions and current correlation between instrument returns.

So for example, if you normally trade two subsystems (say US 10 year and S&P 500) with a correlation between subsystems of zero, then your IDM will be equal to the square root of 2: 1.414.

Now imagine that for some reason your system has a long average sized position in US 10 years, and a short average sized position in S&P 500 futures, and also that the correlation between these two instruments is -1. A quick calculation shows that the expected risk here will be 2.82 times the average. If the correlation was zero, then the expected risk would be twice the average; and if the correlation was +1 then the expected risk would be zero. The relevant RCF would be 2.82/1.41, 2/1.41, and zero.

Similarly, if the current position was long an average sized position in both instruments, then with a correlation of +1 the risk would be 2.8 times the average, with a correlation of zero it would be twice, and again with a correlation of -1 it would be zero. The relevant RCFs are again 2.82/1.41, 2/1.41 and 0. Notice the symmetry here - we'll use this result later.

Clearly the RCF can vary quite a lot depending on what the current positions are, and the current correlation matrix. You might argue that positions and correlations of this kind are unlikely given the average correlation between subsystems is zero. They are unlikely, but they aren't impossible. In particular, correlations do vary especially in the kind of crisis we've just seen.

The RCF is more of an annoyance in terms of expected risk; we wouldn't necessarily want our risk to be a lot higher just because the positions we happen to have on are especially toxic given today's correlation factor.

Let's plot the relative forecast strength against our expected risk to see if we can decompose how much of our risk is coming from these two components: relative forecast (which we like!), and relative correlation (which we don't like!).

def forecast__strength_for_system(system):
    list_of_instruments = system.get_instrument_list()
    forecasts = [system.combForecast.get_combined_forecast(instrument_code)
                 for instrument_code in list_of_instruments]
    forecasts = pd.concat(forecasts, axis=1)
    forecasts.columns = list_of_instruments
    forecasts = forecasts / system.config.average_absolute_forecast
    instrument_weights = system.portfolio.get_instrument_weights()
    weighted_forecast = instrument_weights.ffill() * forecasts.abs().ffill()
    forecast_strength = weighted_forecast.sum(axis=1)
    return forecast_strength

risk_vs_average = 100*risk_series / system.config.percentage_vol_target
forecast_strength = forecast__strength_for_system(system)
to_plot = pd.concat([risk_vs_average, forecast_strength], axis=1)
to_plot.columns = ['Expected risk', 'Forecast strength']

(I've put everything in terms relative to its expected long run average so we can plot them together.)

So expected risk does indeed follow forecast strength pretty well. For example, in late 2018:

... forecast strength goes up, and expected risk follows it. However this isn't always the case. Strikingly, in the past couple of months expected risk has exploded while forecast strength has been falling. This is because the relative correlation factor has increased dramatically, most likely as correlations have got really weird in the current crisis.

Overview of the risk overlay

Now we have a better understanding of what is driving our expected risk, it's time to introduce the risk overlay. The risk overlay calculates a risk position multiplier, which is between 0 and 1. When this multiplier is one we make no changes to the positions calculated by our system. If it was 0.5, then we'd reduce our positions by half. And so on.

So the overlay acts across the entire portfolio, reducing risk proportionally on all positions at the same time.
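In code terms the application of the multiplier is trivial; a minimal sketch with illustrative names:

# the same multiplier, between 0 and 1, is applied to every instrument
final_positions = {instrument: position * risk_multiplier
                   for instrument, position in original_positions.items()}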

The risk overlay has three components, designed to deal with the following issues:

- Expected risk that is too high

- Weird correlation shocks combined with extreme positions

- Jumpy volatility (non stationary and non Gaussian vol)

Each component calculates its own risk multiplier, and then we take the lowest (most conservative) value.

That's it. I could easily make this a lot more complicated, but I wanted to keep the overlay pretty simple. It's also easy to apply this overlay to other strategies, as long as you know your portfolio weights and can estimate a covariance matrix (I'm assuming anyone who has read this far can do both of those things, or knows a person that can). You don't need the same concept of a 'forecast' for example, since forecast calculations don't enter into these.

Let's dive into the individual components.

Maximum expected risk

This component assumes that Gaussian risk is a good enough model for expected risk, and it also assumes that we don't want too much of it. Specifically the risk multiplier looks like this:

risk multiplier = min(1, 2*target risk / current expected risk)

So if the current expected risk is more than twice the long run target, we'll start reducing our positions. The choice of '2' is arbitrary, and down to personal preference. However, since the combined forecast for an instrument is capped at twice the average forecast, allowing the expected risk to be double the average seems to make sense.

From the discussion above, we'll be doing that if (a) we have very strong relative forecasts, or (b) the current correlation factor is particularly nasty. I could have made this more complex to specifically target the correlation factor, but this is simple enough to understand and explain, and works nicely on any kind of strategy with a long run risk target but varying expected risk.

How often will this kick in? We've already calculated expected risk vs target earlier:

risk_vs_average = 100*risk_series / system.config.percentage_vol_target

So now plotting the series:

There are a few times when risk goes over 2, including in recent weeks. Here is the risk multiplier:

risk_multiplier = 2/risk_vs_average
risk_multiplier[risk_multiplier>1.0] = 1.0

Notice the sharp drop at the end, when expected risk balloons in the COVID-19 crisis.

Correlation risk

The maximum expected risk measure assumes that Gaussian risk is sufficient, and that we can forecast its components (correlation, and standard deviation). Now let's relax that assumption. Correlation risk is the risk that instrument correlations will do scary unusual things, that happen to be bad for my positions. If this has already happened (i.e. we have a correlation factor problem) then it will be dealt with in the expected risk calculation, which uses recent historic returns to calculate the instrument correlation. But what if it is about to happen?

There is a very simple way of dealing with this, which is that we replace the estimated correlation matrix with the worst possible correlation matrix. Then we re-estimate our expected risk, and plug it into a risk multiplier formula. Because we're shocking the correlations to the extreme, we allow expected risk to be 4 times larger than our target.

(There is no justification for this number 4, it's calibrated to target a particular point on the realised distribution of the estimate of relative risk. I talk a bit about calibration at the end of the post)

Specifically the risk multiplier looks like this:

risk multiplier = min(1, 4*target risk / current expected risk with worst possible correlation)

What is the worst possible correlation matrix? Simply, it's a matrix where all the correlations are 1. But that's only bad if all of our positions are long, right? If we had offsetting long/short positions, it would help us. You're right, which is why we also use the absolute weights when calculating the expected risk, not the normal signed weights. Note that a correlation of 1 if your weights are all long is equivalent to a correlation of -1 if your weights were long/short (we already saw this in the calculations above).

Here's a horribly hacky way (ugly! slow!) to calculate this risk multiplier (there is a better implementation in pysystemtrade, of which more later). In the gist above, replace this function with this code:



def calc_risk_for_date(rolling_corr, rolling_std, index_date,
                       value_of_positions_proportion_capital, list_of_instruments):
    std_dev = rolling_std.loc[index_date].values
    std_dev[np.isnan(std_dev)] = 0.0

    ## Use absolute weights rather than signed
    weights = value_of_positions_proportion_capital.abs().loc[index_date].values
    weights[np.isnan(weights)] = 0.0

    cmatrix = get_corr_matrix_for_date(rolling_corr, index_date, list_of_instruments)

    # replace correlation matrix with all ones (the worst possible correlations)
    # yeah this is ugly and slow, but makes the point clearer
    cmatrix[:] = 1.0

    sigma = sigma_from_corr_and_std(std_dev, cmatrix)
    portfolio_variance = weights.dot(sigma).dot(weights.transpose())
    portfolio_std = portfolio_variance**.5
    annualised_portfolio_std = portfolio_std*16.0

    return annualised_portfolio_std

Then we just recalculate everything: expected risk, and expected vs average:

risk_series_for_correlation = get_expected_risk_for_system(system)
risk_vs_average_for_correlation = 100*risk_series_for_correlation / system.config.percentage_vol_target

Let's plot it

Now for the risk multiplier:

			
risk_multiplier_for_correlation = 4/risk_vs_average_for_correlation

risk_multiplier_for_correlation[risk_multiplier_for_correlation>1.0]=1.0

This is a bit more active than the expected risk filter. Interestingly, it also shows a recent application in March 2020.

Incidentally, because of the way the system scaling works this is effectively the following constraint:

IDM*sqrt[ Sum_i(k_i^2) + 2*Sum_{i<j} abs(k_i*k_j) ] <= 4

Where k_i = [instrument weight * forecast / average forecast] for instrument i

Proof of the above result, well for 2 assets anyway. Feel free to do this properly with matrices and stuff.
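Since the proof itself isn't reproduced here, a sketch of the two asset case using the definitions above: the position in instrument i as a % of capital is k_i * IDM * (target risk / instrument risk), so the standalone risk of that position is k_i * IDM * target risk. With the worst case correlation of +1 and absolute weights, portfolio risk is simply the sum of the absolute standalone risks:

expected risk = IDM * target risk * (abs(k_1) + abs(k_2)) = IDM * target risk * sqrt[ k_1^2 + k_2^2 + 2*abs(k_1*k_2) ]

Requiring that this is no more than 4 * target risk gives the constraint IDM*sqrt[ k_1^2 + k_2^2 + 2*abs(k_1*k_2) ] <= 4.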

So it will only go into effect when we have a lot of large forecasts kicking off at the same time. No other inputs are relevant.

Standard deviation risk

Now let's deal with standard deviation risk. Specifically, we're concerned with a situation where we're estimating a standard deviation that is relatively low, but there's a good chance it will get a lot higher. This could be because risk is Gaussian, but varies, or because risk is non Gaussian. We don't care what the cause is (and in fact it's impossible to distinguish these two explanations). We just want to deal with it.

To do this we use our standard estimate of portfolio risk, but replace our standard deviation estimates with '99vol'. This rather catchily named value* is the 99th percentile of the standard deviation estimate distribution, measured over the last 10 years. It's the standard deviation we'll get 1% of the time.

* "I've got 99 problems, but vol ain't one of them..." Sorry couldn't resist.

(Incidentally, if current vol is above the 99% point I still use the 99% point in this calculation.  In this case expected risk is likely to be very high anyway)

Once the new risk estimate has been calculated, I apply a multiplier if this comes out more than 6 times the target risk (again, no deep underlying logic for this, just a calibration)

Specifically the risk multiplier looks like this:

risk multiplier = min(1, 6*target risk / current expected risk with 99vol)

Note: Relationship to VAR. Yes, this is a bit like a 99% VAR. I prefer not to use VAR, since it confounds standard deviations and correlations.

Here's the hacky way of calculating it. Using the original gist (without the hacked function above), add one line to this other function:



def get_expected_risk_for_system(system):
    value_of_positions_proportion_capital = get_positions_as_proportion_of_capital(system)
    instrument_returns = get_instrument_returns(system)
    instrument_returns = instrument_returns.ffill().reindex(value_of_positions_proportion_capital.index)

    rolling_std = instrument_returns.ewm(span=30).std()
    rolling_corr = instrument_returns.ewm(span=120).corr()

    # new line: replace the current vol estimate with the 99th percentile of
    # vol estimates over the last 2500 business days (roughly 10 years)
    rolling_std = rolling_std.ffill().rolling(2500, min_periods=10).quantile(.99)

    list_of_instruments = system.get_instrument_list()
    expected_risk = calc_expected_risk_over_time(rolling_corr, rolling_std,
                                                 value_of_positions_proportion_capital,
                                                 list_of_instruments)

    return expected_risk

New risk series:

And the risk multiplier:

Putting them together

That's the most conservative multiplier, going back to 1997.
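For reference, here is a minimal sketch of how the three series calculated above can be combined; the name for the standard deviation multiplier is illustrative, since it wasn't given a variable name earlier:

# take the lowest (most conservative) of the three multipliers on each day
all_multipliers = pd.concat([risk_multiplier,
                             risk_multiplier_for_correlation,
                             risk_multiplier_for_stdev], axis=1)
joint_risk_multiplier = all_multipliers.min(axis=1)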

The results aren't too dramatic: they shouldn't be. This is a risk overlay, to deal with corner cases and potential black swans. The vast bulk of the risk management load is being carried by the core system.

Pysystemtrade implementation

Now to implement the overlay into pysystemtrade. First of all we need some configuration options: as they'd appear in your .yaml file. Here are the defaults:

risk_overlay:
  max_risk_fraction_normal_risk: 2.0
  max_risk_fraction_correlation_risk: 4.0
  max_risk_fraction_stdev_risk: 6.0

Next you need to override the portfolio stage class with an inherited class which includes risk scaling:



## run inside pysystemtrade
import matplotlib
matplotlib.use("TkAgg")

from systems.provided.futures_chapter15.basesystem import *
from systems.futures.risk_overlay import portfoliosRiskOverlay

## use your own config here
config = Config("private.legacy_system.legacy_config_all_markets.yaml")

data = csvFuturesSimData()
system = System([Account(), portfoliosRiskOverlay(), PositionSizing(),
                 FuturesRawData(), ForecastCombine(), ForecastScaleCap(),
                 Rules()], data, config)
system.set_logging_level("on")

There are various new methods in the portfolio stage, such as:

system.portfolio.get_risk_multiplier()

Incidentally, for efficiency the calculations work a bit differently in pysystemtrade; I use weekly returns for correlations, and I only calculate a covariance matrix on a monthly basis (though I do use today's position weights, so the risk multiplier is calculated on a daily basis).

A quick test

I ran a backtest with, and without, the risk overlay to see what it looks like. Firstly here's the whole account curve:

The blue line is with the overlay, the orange line is without. This isn't unexpected; the overlay can only ever reduce risk, and so it will make less in returns unless it is lucky enough to do so only when the system is losing money. Broadly speaking the risk overlay knocks about 3% annually off both the returns and the risk.

The Sharpe Ratios are pretty close though: 0.940 with the overlay and 0.956 without it. More interestingly, the overlay reduces the positive skew of the system somewhat (and this holds at all frequencies - read this to see why that's important). One argument for not applying any kind of risk control to trend following is that we lose the positive skew (see here for a relevant discussion).

The kurtosis does fall, however, suggesting we are doing a good job of 'tidying up the tails'. Other measures of 'left tailedness', like the 1% quantile point, are also improved. Drawdowns are a little shallower.

If the performance penalty is too great then you can change the calibration of the risk overlay. Try not to tweak these for performance though, as that is implicit fitting. Instead target something like the following (a sketch of how to measure these follows the list):

  • a distributional point of the estimated risk relative to the target risk (e.g. the 95% point),
  • the amount of time you want the filter switched on for on average (e.g. 1% of the time),
  • the average value of the risk multiplier including when it is switched off (e.g. 0.99),
  • or a minimum correlation between the returns of the system with and without the overlay (e.g. 0.98)
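Here is a minimal sketch of how those targets could be measured, assuming the joint multiplier series from above and daily return series for the system with and without the overlay (the return series names are illustrative, not pysystemtrade methods):

# how often the overlay is actually binding, and how hard it bites
proportion_of_time_on = (joint_risk_multiplier < 1.0).mean()    # e.g. aim for ~1% of the time
average_multiplier = joint_risk_multiplier.mean()               # e.g. aim for ~0.99

# similarity between the system with and without the overlay
curve_correlation = returns_with_overlay.corr(returns_without_overlay)   # e.g. aim for >0.98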

Summary

This has been an interesting journey which has hopefully given some more intuition about how the risk in CTA type strategies works. I've also introduced a simple risk overlay that can be used in a number of different strategies.

As usual questions are welcome in the comment box below.
