
Skew and expected returns


Some bloke* once said: "The most overlooked characteristic of a strategy is the expected skew of its returns, i.e. how symmetrical they are."

* It was me. "Systematic Trading", page 40.

Skew then is an important concept, and one which I find myself thinking about a lot. So I've decided to write a series of posts about skew, of which this is the first.

In fact I've already written a substantive post on trend following and skew, so this post is sort of the prequel to that. This then is technically the second post in the series, but really it's the first, because you should read this one first. Don't tell me you're confused; I know for a fact that everyone reading this is fine with the Star Wars films coming out in the order 4, 5, 6, 1, 2, 3, 7, 8, 9.

In this post I'll discuss two things. Firstly, I will (briefly) discuss the difficulties of measuring skew: yes, it's my old favourite subject, sampling variation. Secondly, I'll talk (at considerable length) about how skew can predict expected returns, by answering the following questions:

  • Do futures with negative skew generally have higher returns than those with positive skew? (strategic allocation)
  • Does this effect still hold when we adjust returns for risk (using standard deviations only)? (risk adjusted returns)
  • Are these rewards compensation for taking on more risk, in the form of negative skew?
  • Does an asset that currently has negative skew outperform one that currently has positive skew? (time series and forecasting)
  • Does an asset with lower skew than normal perform better than average (normalised time series)?
  • Do these effects hold within asset classes? (relative value)

Some of these are well known results, others might be novel (I haven't checked - this isn't an academic paper!). In particular, this is the canonical paper on skew for futures: but it focuses on commodity futures. There has also been research on single equities which might be relevant (or it might not: "Aggregate stock market returns display negative skewness. Firm-level stock returns display positive skewness." from here).

This is the sort of 'pre-backtest' analysis that you should do with a new trading strategy idea, to get a feel for it. What we need to be wary of is implicit fitting: deciding to pursue certain variations and not others having seen the entire sample. I will touch on this again later.

I will assume you are already familiar with the basics of skew. If you're not, then you can (i) read "Systematic Trading" (again), (ii) read the first part of this post, (iii) use the magical power of the internet, or, if you're desperate, (iv) read a book on statistics.

Variance in skew sample estimates

Really quick reminder: the variance of a sample estimate tells us how confident we can be in a particular estimate of some property. The degree of confidence depends on how much data we have (more data: more confident), the amount of variability in the data (e.g. for sample means, less volatility: more confident), and the estimator we are using (estimates of standard deviation: low variance).

We can estimate sampling distributions in two ways: using closed form formulae, or using a Monte Carlo estimator.

Closed form formulae are available for things like mean, standard deviation, Sharpe ratio and correlation estimates; but they usually assume i.i.d. returns (Gaussian and serially uncorrelated). For example, the formula for the variance of the mean estimate is sigma^2 / N, where sigma is the sample standard deviation estimate and N is the number of data points.
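As a quick sanity check, here's a minimal sketch (standalone, using simulated Gaussian returns rather than the futures data we'll load below; all the specific numbers are arbitrary) showing that the closed form and a Monte Carlo bootstrap agree on the standard error of the mean:

import numpy as np

np.random.seed(42)
returns = np.random.normal(0.0004, 0.01, 2500) # roughly ten years of daily returns

# closed form: variance of the mean estimate is sigma^2 / N
closed_form_se = returns.std() / np.sqrt(len(returns))

# monte carlo: bootstrap the distribution of mean estimates
bootstrapped_means = [np.random.choice(returns, len(returns), replace=True).mean()
                      for _ in range(1000)]
monte_carlo_se = np.std(bootstrapped_means)

print(closed_form_se, monte_carlo_se) # these should be very close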

What is the closed form formula for skew? Well, assuming Gaussian returns the formula is as follows:

Skew = 0

Obviously that's not much use! To get a closed form we'd need to assume our returns had some other distribution. The closed forms tend to be pretty horrific, and the distributions aren't usually much use if there are outliers (something which, as we shall see, has a fair old impact on the variance). So let's stick with using Monte Carlo.

Obviously to do this we're going to need some data. Let's turn to my sadly neglected open source project, pysystemtrade.

import numpy as np
from systems.provided.futures_chapter15.estimatedsystem import *

system = futures_system()
del(system.config.instruments) # so we can get results for everything
instrument_codes = system.get_instrument_list()

percentage_returns = dict()

for code in instrument_codes:
    denom_price = system.rawdata.daily_denominator_price(code)
    instr_prices = system.rawdata.get_daily_prices(code)

    num_returns = instr_prices.diff()
    perc_returns = num_returns / denom_price.ffill()

    # there are some false outliers in the data, let's remove them
    vol_norm_returns = system.rawdata.norm_returns(code)
    perc_returns[abs(vol_norm_returns) > 10] = np.nan

    percentage_returns[code] = perc_returns
We'll use this data throughout the rest of the post; if you want to analyse your own data then feel free to substitute it in here.

Pandas has a way of measuring skew:

percentage_returns["VIX"].Skew() 0.32896199946754984

We're ignoring for now the question of whether we should use daily, weekly, or whatever returns to define skew.
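For what it's worth, checking the difference with pandas is trivial; a rough sketch (note that summing daily percentage returns within each week is only an approximation to compounding):

# skew of daily returns
percentage_returns["VIX"].skew()

# skew of (approximate) weekly returns
percentage_returns["VIX"].resample("W").sum().skew()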

However, none of this captures the uncertainty in the estimate. Let's write a quick function to get that information:
import random

def resampled_skew_estimator(data, monte_carlo_count=500):
    """
    Get a distribution of skew estimates

    :param data: a return time series
    :param monte_carlo_count: number of goes we monte carlo for
    :return: list
    """
    skew_estimate_distribution = []
    for _notUsed in range(monte_carlo_count):
        resample_index = [int(random.uniform(0, len(data))) for _alsoNotUsed in range(len(data))]
        resampled_data = data[resample_index]
        sample_skew_estimate = resampled_data.skew()
        skew_estimate_distribution.append(sample_skew_estimate)

    return skew_estimate_distribution

Now I can plot the distribution of the skew estimate for an arbitrary market:

import matplotlib.pyplot as pyplot

data = percentage_returns['VIX']
x = resampled_skew_estimator(data, 1000)
pyplot.hist(x, bins=30)

Boy... that's quite a spread. It's plausible that the skew of VIX (one of the most positively skewed assets in my dataset) could be zero. It's equally possible that it could be around 0.6. Clearly we should be quite careful about interpreting small differences in skew as anything significant.
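To put rough numbers on that spread, we can take percentiles of the bootstrapped distribution (a crude bootstrap confidence interval, nothing more sophisticated):

x = resampled_skew_estimator(percentage_returns['VIX'], 1000)

# crude 95% interval for the skew estimate
lower, upper = np.percentile(x, [2.5, 97.5])
print(lower, upper) # for VIX this spans roughly zero to 0.6, per the histogram above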

Let's look at the distribution across all of our different futures instruments:

# do a boxplot for everything
import pandas as pd

df_skew_distribution = dict()
for code in instrument_codes:
    print(code)
    x = resampled_skew_estimator(percentage_returns[code], 1000)
    y = pd.Series(x)
    df_skew_distribution[code] = y

df_skew_distribution = pd.DataFrame(df_skew_distribution)
# order the boxplot columns from most negative to most positive average skew
df_skew_distribution = df_skew_distribution.reindex(df_skew_distribution.mean().sort_values().index, axis=1)

df_skew_distribution.boxplot()
pyplot.xticks(rotation=90)

It looks like:

  • Most assets are negatively skewed
  • Positively skewed assets are kind of logical: V2X, VIX (vol), JPY (safe haven currency), US 10 year, Eurodollar (bank accounts backed by the US government).
  • The most negatively skewed assets include stock markets and carry currencies (like MXP, AUD) but also some commodities
  • Several pinches of salt should be used here, as very few assets have statistically significant skew in either direction.
  • Assets with more extreme skew (negative or positive) have wider confidence intervals
  • Positively skewed assets have a positively skewed estimate for skew; and vice versa for negative skew
  • There are some particularly fat tailed assets whose confidence intervals are especially wide: Corn, V2X, Eurodollar, US 2 year.

Bear in mind that not all instruments have the same length of data, and in particular many don't include 2008.
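If you want to check this for your own data, here's a quick sketch to see how much history each instrument has (and in particular whether it starts before 2008):

data_starts = pd.Series({code: percentage_returns[code].dropna().index[0]
                         for code in instrument_codes}).sort_values()
print(data_starts) # instruments starting after 2008 won't include the crisis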

Do assets with negative skew usually have higher returns than those with positive skew?

Let's find out.

# average return vs skew
avg_returns = [percentage_returns[code].mean() for code in instrument_codes]
skew_list = [percentage_returns[code].skew() for code in instrument_codes]

fig, ax = pyplot.subplots()
ax.scatter(skew_list, avg_returns, marker="")
for i, txt in enumerate(instrument_codes):
    ax.annotate(txt, (skew_list[i], avg_returns[i]))

Ignoring the two vol markets, it looks like there might be a weak relationship there. But there is large uncertainty in the return estimates. Let's bootstrap the distribution of mean estimates, and plot them with the most negative skew on the left and the most positive skew on the right:

def resampled_mean_estimator(data, monte_carlo_count=500):
    """
    Get a distribution of mean estimates

    :param data: a return time series
    :param monte_carlo_count: number of goes we monte carlo for
    :return: list
    """
    mean_estimate_distribution = []
    for _notUsed in range(monte_carlo_count):
        resample_index = [int(random.uniform(0, len(data))) for _alsoNotUsed in range(len(data))]
        resampled_data = data[resample_index]
        sample_mean_estimate = resampled_data.mean()
        mean_estimate_distribution.append(sample_mean_estimate)

    return mean_estimate_distribution

df_mean_distribution = dict()
for code in instrument_codes:
    print(code)
    x = resampled_mean_estimator(percentage_returns[code], 1000)
    y = pd.Series(x)
    df_mean_distribution[code] = y

df_mean_distribution = pd.DataFrame(df_mean_distribution)
# order columns as in the skew boxplot: most negative skew first
df_mean_distribution = df_mean_distribution[df_skew_distribution.columns]

df_mean_distribution.boxplot()
pyplot.xticks(rotation=90)

Again, apart from the vol markets, it's hard to see much there. Let's lump together all the instruments with above average skew (high skew), and those with below average skew (low skew):

skew_by_code = df_skew_distribution.mean()
avg_skew = np.mean(skew_by_code.values)
low_skew_codes = list(skew_by_code[skew_by_code < avg_skew].index)
high_skew_codes = list(skew_by_code[skew_by_code >= avg_skew].index)

def resampled_mean_estimator_multiple_codes(percentage_returns, code_list, monte_carlo_count=500):
    """
    :param percentage_returns: dict of returns
    :param code_list: list of str, a subset of percentage_returns.keys()
    :param monte_carlo_count: how many times
    :return: list of mean estimates
    """
    mean_estimate_distribution = []
    for _notUsed in range(monte_carlo_count):
        # randomly choose a code
        code = code_list[int(random.uniform(0, len(code_list)))]
        data = percentage_returns[code]
        resample_index = [int(random.uniform(0, len(data))) for _alsoNotUsed in range(len(data))]
        resampled_data = data[resample_index]
        sample_mean_estimate = resampled_data.mean()
        mean_estimate_distribution.append(sample_mean_estimate)

    return mean_estimate_distribution

df_mean_distribution_multiple = dict()
df_mean_distribution_multiple['High skew'] = resampled_mean_estimator_multiple_codes(percentage_returns, high_skew_codes, 1000)
df_mean_distribution_multiple['Low skew'] = resampled_mean_estimator_multiple_codes(percentage_returns, low_skew_codes, 1000)

df_mean_distribution_multiple = pd.DataFrame(df_mean_distribution_multiple)
df_mean_distribution_multiple.boxplot()

Incidentally, I've truncated the plots here because there is a huge tail of negative returns for high skew: basically the vol markets. The means and medians are instructive. Multiplied by 250 to annualise, the mean return is -6.6% for high skew and 1.8% for low skew. Without that long tail having such an impact, the medians are much closer: 0.9% for high skew and 2.2% for low skew.

If I take out the vol markets I get means of 0.6% and 1.7%, and medians of 1.2% and 2.3%. The medians are barely affected, but the ridiculously low mean return caused by the vol markets is taken out.

So: there is something there, of the order of a 1.0% advantage in extra annual returns for owning markets with lower than average skew. If you're an investor with a high tolerance for risk who can't use leverage, you can stop reading now.

Does this effect still hold when we adjust returns for risk (using standard deviations only)?

Excellent question. Let's find out.

# sharpe ratio vs skew
# 16 is approximately sqrt(256 business days): annualises a daily Sharpe Ratio
sharpe_ratios = [16.0 * percentage_returns[code].mean() / percentage_returns[code].std() for code in instrument_codes]
skew_list = [percentage_returns[code].skew() for code in instrument_codes]

fig, ax = pyplot.subplots()
ax.scatter(skew_list, sharpe_ratios, marker="")
for i, txt in enumerate(instrument_codes):
    ax.annotate(txt, (skew_list[i], sharpe_ratios[i]))

Hard to see any relationship here, although the two vol markets still stand out as outliers.

Let's skip straight to the high skew/low skew plot, this time for Sharpe Ratios:

def resampled_SR_estimator_multiple_codes(percentage_returns, code_list, monte_carlo_count=500, avoiding_vol=False):
    """
    :param percentage_returns: dict of returns
    :param code_list: list of str, a subset of percentage_returns.keys()
    :param monte_carlo_count: how many times
    :param avoiding_vol: if True, exclude the vol markets
    :return: list of SR estimates
    """
    SR_estimate_distribution = []
    for _notUsed in range(monte_carlo_count):
        # randomly choose a code; if avoiding vol, redraw until it isn't a vol market
        if avoiding_vol:
            code = "VIX"
            while code in ["VIX", "V2X"]:
                code = code_list[int(random.uniform(0, len(code_list)))]
        else:
            code = code_list[int(random.uniform(0, len(code_list)))]
        data = percentage_returns[code]
        resample_index = [int(random.uniform(0, len(data))) for _alsoNotUsed in range(len(data))]
        resampled_data = data[resample_index]
        SR_estimate = 16.0 * resampled_data.mean() / resampled_data.std()
        SR_estimate_distribution.append(SR_estimate)

    return SR_estimate_distribution

df_SR_distribution_multiple = dict()
df_SR_distribution_multiple['High skew'] = resampled_SR_estimator_multiple_codes(percentage_returns, high_skew_codes, 1000)
df_SR_distribution_multiple['Low skew'] = resampled_SR_estimator_multiple_codes(percentage_returns, low_skew_codes, 1000)

df_SR_distribution_multiple = pd.DataFrame(df_SR_distribution_multiple)
df_SR_distribution_multiple.boxplot()

Hard to see what difference, if any, there is. The summary statistics are even more telling:

Mean: High skew 0.22, Low skew 0.26

Median: High skew 0.22, Low skew 0.20

Once we adjust for risk, or at least risk as measured by the second moment of the distribution, uglier skew (the third moment) doesn't seem to be rewarded with an improved return.

Things get even more interesting if we remove the vol markets again:

Mean: High skew 0.37, Low skew 0.24

Median: High skew 0.29, Low skew 0.17

A complete reversal! Probably not that significant, but a surprising turn of events nonetheless.

Does an asset that currently has negative skew outperform one that currently has positive skew? (time series and forecasting)

Average skew and average returns aren't that important or interesting; but it would be cool if we could use the current level of skew to predict risk adjusted returns in the following period.

An open question is: what is the current level of skew? Should we use skew defined over the last week? The last month? The last year? I'm going to check all of these, since I'm a big fan of time diversification for trading signals.

I'm going to get the distribution of risk adjusted returns (no need for bootstrapping) for the following N days, where skew over the preceding N days has been higher or lower than average. Then I do a t-test to see whether the realised Sharpe Ratio is statistically significantly higher in a low skew versus a high skew environment.

from scipy import stats

all_SR_list = []
all_tstats = []
all_frequencies = ["7D", "14D", "1M", "3M", "6M", "12M"]

for freqtouse in all_frequencies:
    all_results = []
    for instrument in instrument_codes:
        # we're going to do rolling returns
        perc_returns = percentage_returns[instrument]
        start_date = perc_returns.index[0]
        end_date = perc_returns.index[-1]

        periodstarts = list(pd.date_range(start_date, end_date, freq=freqtouse)) + [end_date]

        for periodidx in range(len(periodstarts) - 2):
            # avoid snooping: skew is measured up to the day before the return period starts
            p_start = periodstarts[periodidx] + pd.DateOffset(-1)
            p_end = periodstarts[periodidx + 1] + pd.DateOffset(-1)
            s_start = periodstarts[periodidx + 1]
            s_end = periodstarts[periodidx + 2]

            period_skew = perc_returns[p_start:p_end].skew()
            subsequent_return = perc_returns[s_start:s_end].mean()
            subsequent_vol = perc_returns[s_start:s_end].std()
            subsequent_SR = 16 * (subsequent_return / subsequent_vol)

            if np.isnan(subsequent_SR) or np.isnan(period_skew):
                continue
            else:
                all_results.append([period_skew, subsequent_SR])

    all_results = pd.DataFrame(all_results, columns=['x', 'y'])
    avg_skew = all_results.x.median()

    subsequent_sr_distribution = dict()
    subsequent_sr_distribution['High_skew'] = all_results[all_results.x >= avg_skew].y
    subsequent_sr_distribution['Low_skew'] = all_results[all_results.x < avg_skew].y

    subsequent_sr_distribution = pd.DataFrame(subsequent_sr_distribution)

    med_SR = subsequent_sr_distribution.median()
    tstat = stats.ttest_ind(subsequent_sr_distribution.High_skew,
                            subsequent_sr_distribution.Low_skew,
                            nan_policy="omit").statistic

    all_SR_list.append(med_SR)
    all_tstats.append(tstat)

all_tstats = pd.Series(all_tstats, index=all_frequencies)
all_tstats.plot()
Here are the T-statistics:

Large negative numbers mean a bigger difference in performance. It looks like we get bigger effects measuring skew over at least the last month or so.

How much is this worth to us? Here are the conditional median returns:

all_SR_list = pd.DataFrame(all_SR_list, index=all_frequencies)
all_SR_list.plot()

Nice. An extra 0.1 to 0.4 Sharpe Ratio units. One striking thing about this graph is that, when measuring skew over the last 3 to 12 months, assets with higher than average skew make essentially no money.

Should we use 'lower than average skew' or 'negative skew' as our cutoff / demeaning point?

Up to now we've been using the median skew as our cutoff (which in a continuous trading system would be our demeaning point, i.e. we'd have positive forecasts for skew below the median, and negative forecasts for skew above it). This cutoff hasn't quite been zero, since on average more assets have negative skew. But is there something special about using a cutoff of zero?
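As an aside, here's a minimal sketch of what that kind of continuous forecast might look like (the function name, lookback and scaling are all hypothetical placeholders; the actual trading rule is the subject of the next post):

def skew_forecast(perc_returns, demean_point, lookback="180D", scalar=1.0):
    # hypothetical sketch: skew below the demeaning point gives a positive
    # (long) forecast, skew above it a negative one
    recent_skew = perc_returns.rolling(lookback).skew()
    return -scalar * (recent_skew - demean_point)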

Whether a cutoff of zero makes any difference is easy to test:

#avg_skew = all_results.x.median()
# instead:
avg_skew = 0

With a less symmetric split we would generally expect to get higher statistical significance (because the 'high skew' group is a bit smaller now), but the results are almost the same. Personally, I'm going to stick to using the median as my cutoff, since it will make my trading system more symmetric.

Does an asset with lower skew than normal perform better than average (normalised time series)?

The results immediately above can be summarised as:

- Assets which currently have more negative skew than average (measured across all assets over all time) perform better in the following period.

This confounds three possible effects:

- Assets with on average (over all time) more negative skew perform better on average (the first thing we checked - and on a risk adjusted basis the effect is pretty weak and mostly confined to the vol markets)

- Assets which have currently more negative skew than their own average perform better

- Assets which currently have more negative skew than the current average perform better than other assets

Let's check two and three.

First let's check the second effect, which can be rephrased as: is skew, demeaned by the average for an asset, predictive of future performance for that asset?

I'm going to use the average skew over the last 10 years to demean each asset.

Code similar to above, except:

perc_returns = percentage_returns[instrument]
all_skew = perc_returns.rolling("3650D").skew()
...
			
period_skew = perc_returns[p_start:p_end].skew()
avg_skew = all_skew[:p_end][-1]
period_skew = period_skew - avg_skew
subsequent_return = perc_returns[s_start:s_end].mean()

Similar to before, but not quite as strong. The weaker effect at one month has vanished. Here are the Sharpe Ratios:

The 'skew bonus' has reduced somewhat, to around 0.2 SR units for the last 365 days of returns.

Now let's check the third effect, which can be rephrased as: is skew, demeaned by the average of current skews across all assets, predictive of future performance for that asset?

Code modifications:

all_SR_list = []
all_tstats = []
all_frequencies = ["7D", "14D", "30D", "90D", "180D", "365D"]
...

for freqtouse in all_frequencies:
    all_results = []

    # relative value skews need an average across all instruments
    skew_df = {}
    for instrument in instrument_codes:
        # rolling skew over period
        instrument_skew = percentage_returns[instrument].rolling(freqtouse).skew()
        skew_df[instrument] = instrument_skew

    skew_df_all = pd.DataFrame(skew_df)
    skew_df_median = skew_df_all.median(axis=1)

    for instrument in instrument_codes:
...

period_skew = perc_returns[p_start:p_end].skew()
avg_skew = skew_df_median[:p_end][-1]
period_skew = period_skew - avg_skew
subsequent_return = perc_returns[s_start:s_end].mean()

...

Plots, as before:

To summarise then:

- Using recent skew is very predictive of future returns, provided 'recent' means using at least 1 month of returns, and ideally more. The effect is strongest if we use the last 6 months or so of returns.

- Some, but not all, of this effect persists if we normalise skew by the long term average for an asset. So, for example, even for assets which usually have positive skew, you are better off investing in them when their skew is lower than normal.

- Some, but not all, of this effect persists if we normalise skew by the current average level of skew. So, for example, even in times when skew generally is negative (2008 anyone?), it's better to invest in the assets with the most negative skew.

We could formally decompose the above results with, for example, a regression; but I'm more of a fan of using simple trading signals which are linearly weighted, with weights conditional on the correlations between signals.
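To make that concrete, here's a rough sketch of the kind of linear combination I mean (the instrument, lookbacks and weights are purely illustrative placeholders; in practice the weights would be set using the correlations between the signals, and the forecasts properly scaled):

perc_returns = percentage_returns["SP500"] # hypothetical example instrument

# two of the skew signals discussed above, as simple sketches
f_recent = -perc_returns.rolling("180D").skew() # recent skew, sign flipped
f_relative = -(perc_returns.rolling("180D").skew()
               - perc_returns.rolling("3650D").skew()) # versus own long run history

# illustrative fixed weights
combined_forecast = 0.5 * f_recent + 0.5 * f_relative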

Do these effects hold within asset classes? (relative value)

Rather than normalising skew by the current average across all assets, maybe it would be better to consider the average for that asset class. So we'd be comparing S&P 500 current skew with Eurostoxx, VIX with V2X, and so on.

all_SR_list = []
all_tstats = []
all_frequencies = ["7D", "14D", "30D", "90D", "180D", "365D"]
asset_classes = list(system.data.get_instrument_asset_classes().unique())

for freqtouse in all_frequencies:
    all_results = []

    # relative value skews need an average within each asset class
    skew_df_median_by_asset_class = {}
    for asset in asset_classes:
        skew_df = {}
        for instrument in system.data.all_instruments_in_asset_class(asset):
            # rolling skew over period
            instrument_skew = percentage_returns[instrument].rolling(freqtouse).skew()
            skew_df[instrument] = instrument_skew

        skew_df_all = pd.DataFrame(skew_df)
        skew_df_median = skew_df_all.median(axis=1)
        # will happen if there's only one instrument in an asset class
        skew_df_median[skew_df_median == 0] = np.nan

        skew_df_median_by_asset_class[asset] = skew_df_median

    for instrument in instrument_codes:
        # we're going to do rolling returns
        asset_class = system.data.asset_class_for_instrument(instrument)
        perc_returns = percentage_returns[instrument]
...
			
period_skew = perc_returns[p_start:p_end].skew()
avg_skew = skew_df_median_by_asset_class[asset_class][:p_end][-1]
period_skew = period_skew - avg_skew
subsequent_return = perc_returns[s_start:s_end].mean()

That's interesting: the effect is looking a lot weaker except for the longer horizons. The worse t-stats could be explained by the fact that we have less data (long periods when only one asset is in an asset class and we can't calculate this measure), but the relatively small gap between Sharpe Ratios isn't affected by this.

So almost all of the skew effect is happening at the asset class level. Within asset classes, for futures at least, if you normalise skew by asset class level skew you get a non-significant 0.1 SR units or so of benefit, and then only at fairly slow time frequencies.

Summary

This has been a long post. And it's been quite a heavy, graph and python laden, post. Let's have a quick recap:

  • Most assets have negative skew
  • There is quite a lot of sampling uncertainty around skew, which is worse for assets with outliers (high kurtosis) and extreme absolute skew
  • Assets which on average have lower (more negative) skew will outperform in the long run.
  • This effect is much smaller when we look at risk adjusted returns (Sharpe Ratios), and is driven mainly by the vol markets (VIX, V2X)
  • Assets with lower skew right now will outperform those with higher skew right now. This is true for skew measuring and forecasting periods of at least 1 month, and is strongest around the 6 month period. In the latter case an average improvement of 0.4 SR units can be expected.
  • This effect is the same regardless of whether skew is compared to the median or compared to zero.
  • This effect mostly persists even if we demean skew by the rolling average of skew for a given asset over the last 10 years: time series relative value
  • This effect mostly persists if we demean skew by the average of current skews across all assets: cross sectional relative value
  • But this effect mostly vanishes if the relative value measure is taken versus the average for the relevant asset class.

This is all very interesting, but so far it has mostly been comparisons; it still isn't a trading strategy. So in the next post I will consider the implementation of these ideas as a suite of trading strategies:

  • Skew measured over the last N days, relative to a long term average across all assets
  • Skew measured over the last N days, relative to a long term average for this asset
  • Skew measured over the last N days, relative to the current average for all assets
  • Skew measured over the last N days, relative to the current average for the relevant asset class

Where N will be in the ballpark of 10 ... 250 business days.
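For what it's worth, a hypothetical geometric spacing of candidate N values over that range might look like this (the spacing choice is mine, purely for illustration):

candidate_N = np.geomspace(10, 250, 6).round().astype(int)
# array([ 10,  19,  36,  69, 131, 250])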

For now it's important to bear in mind that I must not discard any of the above ideas because of likely poor performance - that would be implicit fitting, since I've already seen the results for this sample. That applies in particular to:

  • Lower values of N, e.g. 2 weeks (though some might be removed because their trading costs are too high)
  • Asset class relative value
