A little demonstration of portfolio optimisation
I've had a request for the code used to do the optimisations in chapter 4 of my book "Systematic Trading" (the 'one period' and 'bootstrapping' methods; there isn't much point in including code for the 'handcrafted' method as it's supposed to avoid programming).
Although this post will make more sense if you've read the book, it can also be read independently as I'll be dropping short explanations in as we go. Hopefully it will whet your appetite!
You can get the code you need from here:
https://github.com/robcarver17/systematictradingexamples/blob/master/optimisation.py
The code also includes a function for generating "expanding window", "rolling window" and "in sample" back test time periods, which could be useful for general fitting.
The problem
The problem we are trying to solve here is "What portfolio weights should we have held in the past (between 2000 and mid 2015) given the returns of 3 assets: S&P 500 equity index, NASDAQ equity index and a US 20 year bond*?"
* You can think of this as a synthetic constant maturity bond, or what you'd get if you held the 20 year US bond future and also earned interest on the cash you saved from getting exposure via a derivative.
Some of the issues I will explore in this post are:
- This is a backtest that we're running here - a historical simulation. So how do we deal with the fact that 10 years ago we wouldn't have had data from 2005 to 2015? How much data should we use to fit?
- These assets have quite different volatility. How can we express our portfolio weights in a way which accounts for this?
- Standard portfolio optimisation techniques produce very unstable and extreme weights. Should we use them, or another method like bootstrapping which takes account of the noise in the data?
- Most of the instability in weights comes from having slightly different estimates of the mean return. Should we just assume all assets have the same mean return?
In sample
Let's begin by doing some simple in sample testing. Here we cheat, and assume we have all the data at the start.
I'm going to do the most 'vanilla' optimisation possible:
opt_and_plot(data, "in_sample", "one_period", equalisemeans=False, equalisevols=False)

Let's deal with the first problem - different volatility. In my book I use the technique of volatility normalisation to make sure that the assets we are optimising weights for have the same expected risk. That isn't the case here: bonds are much less volatile than stocks. To compensate for this they get a much bigger weight.
We can change the optimisation function so it does a type of normalisation; measure the standard deviation of returns in the dataset and change all the returns so they have some arbitrary annualised risk (20% by default). This has the effect of turning the covariance matrix into a correlation matrix.
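That normalisation step can be sketched as follows. This is a minimal illustration; the function name, the 252-day annualisation and the 20% default are my assumptions, not the actual code in the repository:

```python
import numpy as np
import pandas as pd

def equalise_vols(returns, target_ann_vol=0.20, periods_per_year=252):
    # Measure each asset's annualised standard deviation...
    ann_vols = returns.std() * np.sqrt(periods_per_year)
    # ...and rescale every return series to the same arbitrary risk level
    return returns * (target_ann_vol / ann_vols)
```

After this rescaling every asset has the same measured risk, so the covariance matrix of the scaled returns is (up to a constant) the correlation matrix of the originals.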
opt_and_plot(data, "in_sample", "one_period", equalisemeans=False, equalisevols=True)

However it's still a pretty extreme portfolio. Poor NASDAQ doesn't get a look in. A very simple way of dealing with this is to throw away the information we have about expected mean returns, and assume all assets have the same mean return (notice that as we have equalised volatility this is the same as assuming the same Sharpe ratio for all assets; and indeed this is what the code actually does).
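Throwing away the means can be sketched like this (a hypothetical helper of my own, not the repository's code): shift every return series so all assets share the average mean, leaving the deviations around the mean, and hence the volatilities, untouched.

```python
import numpy as np
import pandas as pd

def equalise_means(returns):
    # Replace each asset's mean with the cross-asset average mean,
    # without changing the variation around that mean.
    avg_mean = returns.mean().mean()
    return returns - returns.mean() + avg_mean
```

With vols already equalised, identical means imply identical Sharpe ratios, so only the correlations can now differentiate the assets.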
opt_and_plot(data, "in_sample", "one_period", equalisemeans=True, equalisevols=True)

However what if our assets do have different expected returns, and in a statistically significant way? A better way of doing the optimisation is not to throw away the means, but to use bootstrapping. With bootstrapping we pull returns out of our data at random (500 times in this example); do an optimisation on each sample of returns, and then take an average of the weights from each sample.
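A stripped-down version of the idea looks like this. This is my own sketch using scipy's generic optimiser; the function names and the optimiser details are assumptions, not the code in the linked repository:

```python
import numpy as np
from scipy.optimize import minimize

def max_sharpe_weights(returns):
    # Long-only weights summing to one that maximise the Sharpe ratio
    # of a (days x assets) matrix of returns.
    mus = returns.mean(axis=0)
    sigma = np.cov(returns, rowvar=False)
    n = returns.shape[1]
    neg_sharpe = lambda w: -(w @ mus) / np.sqrt(w @ sigma @ w)
    return minimize(neg_sharpe, np.full(n, 1.0 / n),
                    bounds=[(0.0, 1.0)] * n,
                    constraints=[{"type": "eq",
                                  "fun": lambda w: w.sum() - 1.0}]).x

def bootstrap_weights(returns, monte_carlo=500, monte_length=250, seed=1):
    # Draw rows at random with replacement, optimise on each sample,
    # then average the weights across all the samples.
    rng = np.random.default_rng(seed)
    samples = [max_sharpe_weights(
                   returns[rng.integers(0, len(returns), monte_length)])
               for _ in range(monte_carlo)]
    return np.mean(samples, axis=0)
```

Because each individual optimisation sees a slightly different history, assets whose apparent edge is just noise end up with moderate average weights rather than the all-or-nothing allocations of a single optimisation.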
opt_and_plot(data, "in_sample", "bootstrap", equalisemeans=False, equalisevols=True, monte_carlo=500)

Looking at the actual weights they are similar to the previous example with no means, although NASDAQ (which did really badly in this sample) is slightly downweighted. In this case using the distribution of average returns (and correlations, for what it's worth) hasn't changed our minds very much. There isn't a statistically significant difference in the returns of these three assets over this period.
Rolling window
Let's stop cheating and run our optimisation in such a way that we only know the returns of the past. A common method to do this is 'walk forward testing', or what I call 'a rolling window'. In each year that we're testing for we use the returns of the last N years to do our optimisation.
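The windowing logic can be sketched as follows (a hypothetical helper for illustration, not the window-generating function in the linked code):

```python
def rolling_fit_windows(years, rollyears):
    # For each test year, return the up-to-rollyears years that
    # precede it - the only data we're allowed to fit on.
    return [(year, years[max(0, i - rollyears):i])
            for i, year in enumerate(years)]
```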
To begin with let's use 'one period' optimisation with a lookback of a single year.
opt_and_plot(data, "rolling", "one_period", rollyears=1, equalisemeans=False, equalisevols=True)

Now let's lengthen the lookback to five years:
opt_and_plot(data, "rolling", "one_period", rollyears=5, equalisemeans=False, equalisevols=True)

I won't show the results for bootstrapping with a rolling window; this is left as an exercise for the reader.
Expanding window
It's my preference to use an expanding window (sometimes called anchored fitting). Here we use all the data that we have available as we step through each year. So our window gets bigger over time.
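The anchored version is even simpler to sketch (again a hypothetical helper, not the linked code):

```python
def expanding_fit_windows(years):
    # Each test year is fitted on every year that came before it,
    # so the fitting window grows as we step forward through time.
    return [(year, years[:i]) for i, year in enumerate(years)]
```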
opt_and_plot(data, "expanding", "one_period", equalisemeans=False, equalisevols=True)

Let's go back to the bootstrapped method. This is my own personal favourite optimisation method:
opt_and_plot(data, "expanding", "bootstrap", equalisemeans=False, equalisevols=True)

* I'm using 250 days - about a year - of data in each bootstrap sample (you can change this with the monte_length parameter). With the underlying sample also only a year long this is pushing things to their limit - I normally suggest you use a window size around 10% of the total data. If you must optimise with only a year of data then you should probably use samples of around 25 business days. However my simple code doesn't support a varying window size; though it would be easy to apply the 10% guideline, e.g. by adding monte_length=int(0.1*len(returns_to_bs.index)) to the start of the function bootstrap_portfolio.
Just to reinforce the point that these are 'risk weightings', here is the same optimisation done with the actual 'cash' weights and no normalisation of volatility:
opt_and_plot(data, "expanding", "bootstrap", equalisemeans=False, equalisevols=False)

Conclusion
I hope this has been useful both to those who have bought my book, and those who haven't yet bought it (I'm feeling optimistic!). If there is any python code that I've used to write the book you would like to see, let me know.