Some reflections on QuantCon 2017
As you may know if you've been following any of my numerous social media accounts, I spent the weekend in New York at QuantCon, a conference organised by Quantopian, who provide a cloud platform for backtesting systematic trading strategies in Python.
Quantopian had kindly invited me to come and speak, and you can find the slides of my presentation here. A video of the talk will also be available in a couple of weeks to attendees and live feed subscribers. If you didn't attend, this will cost you $199, less a discount using the code CarverQuantCon2017. (That's for the whole thing, not just my presentation! I should also emphasise that I don't get any of this money, so please don't think I'm trying to flog you anything here.)
Is a bit less than $200 worth it? Well, read the rest of this post for a flavour of the quality of the conference. If you're willing to wait a few months, then I believe the videos will probably become publicly available at some point (this is what happened last year).
The whole event was very interesting and thought provoking; and I thought it might be worth recording some of the more interesting thoughts that I had. I won't bother with the less interesting thoughts like "Boy, it's much hotter here than I'd expected it to be" and "Why can't they make US dollars of different denominations more easily distinguishable from each other?!".
Machine learning (etc, etc) is very much a thing
Cards on the table - I'm not super keen on machine learning (ML), AI (Artificial Intelligence), NN (Neural Networks), and DL (Deep Learning) (or any mention of Big Data, or people calling me a Data Scientist behind my back - or to my face for that matter). Part of that bias is because of ignorance - it's a subject I barely understand, and part is my natural suspicion of anything which has been massively over hyped.
But it's certainly the case that all these things are very much in vogue right now, to the point where at the conference I was told it's almost impossible to get a quant job unless you profess expertise in this subject (since I have none, I'd be stuck with a McJob if I tried to break into the industry now); and universities are renaming courses on statistics "machine learning"... even though the content is barely changed. And at QuantCon there was a cornucopia of presentations on these kinds of subjects. Mostly I managed to avoid them. But the first keynote was about ML, as was the closing keynote, which was purportedly about portfolio optimisation (by the way it was excellent, and I'll return to that later), so I didn't manage to avoid it completely.
I also spent quite a bit of time during the 'off line' part of the conference talking to people from the ML / NN / DL / AI side of the fence. Most of them were clever, nice and fascinating, which was somewhat disconcerting (I felt like a heretic who'd met some guys from the Spanish Inquisition at a party, and found that they were all really nice people who just happened to have jobs that involved torturing human beings). Still, it's fair to say we had some very interesting, though very civilised, debates.
Most of these guys, for example, were very open about the fact that financial price forecasting is a much harder problem than forecasting likely credit card defaults or recognising pictures of cats on the internet (an example that Dr Ernie Chan was particularly fond of using in his excellent talk, which I'll return to later. I guess he likes cats. Or watches a lot of YouTube).
Also, this cartoon:
[Cartoon. Source: https://xkcd.com/1831/. This is uncannily similar to what DJ Trump recently said about healthcare reform.]
The problem I have here is that "machine learning" is a super vague term which nobody can agree on a definition for. If for example I run the most simple kind of optimisation where I do a grid search over possible parameters and pick the best, is that machine learning? The machine has "learnt" what the best parameters are. Or I could use linear regression (200+ years old) to "learn" the best parameters. Or to be a bit fancier, if I use a Markov process (~100 years old) and update my state probabilities in some rolling out of sample Bayesian way, isn't that what an ML guy would call reinforcement learning?
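To see how arbitrary the label is, here is the most basic "learning" loop imaginable: a grid search over a trading rule parameter. Everything in this sketch is invented for illustration (the toy price series, the moving-average rule, the Sharpe-ratio objective); the only point is that the machine "learns" the best parameter by trying them all.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy price series with a small upward drift
prices = np.cumsum(rng.normal(0.02, 1.0, 1000)) + 100

def sharpe_for_window(prices, n):
    """In-sample daily Sharpe of a simple moving-average trend rule."""
    ma = np.convolve(prices, np.ones(n) / n, mode='valid')
    # Long when yesterday's price is above its moving average, flat otherwise
    pos = (prices[n - 1:-1] > ma[:-1]).astype(float)
    rets = pos * np.diff(prices[n - 1:])
    return rets.mean() / rets.std()

# Is this "machine learning"? The machine tries every parameter and keeps
# the one with the best in-sample objective.
windows = range(5, 100, 5)
best = max(windows, key=lambda n: sharpe_for_window(prices, n))
print(best)
```

Call it a grid search, call it hyperparameter tuning: the distinction is mostly branding.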
It strikes me as quite arbitrary whether a particular method is machine learning or considered to be "old school" statistics. Indeed, have a look at this list of ML techniques that Google just found for me, here:
- Linear Regression
- Logistic Regression
- Decision Tree
- SVM
- Naive Bayes
- KNN
- K-Means
- Random Forest
- Dimensionality Reduction Algorithms
- Gradient Boost & Adaboost
Some of these machine learning techniques don't seem to be very fancy at all. Linear and logistic regression are machine learning? And also Principal Components Analysis? (which apparently is now a "dimensionality reduction algorithm". Which is like calling a street cleaner a "refuse clearance operative")
Heck, I've been using clustering algorithms like K-means for donkey's years, mainly in portfolio construction (of which more later in the post). But apparently that's also now "machine learning".
Perhaps the only important distinction then is between unsupervised and supervised machine learning. It strikes me as fundamentally different to classical techniques when you let the machine go and do its learning, drawing purely from the data to determine what the model should look like. It also strikes me as potentially dangerous. As I said in my own talk, I wouldn't trust a new employee with no experience in the financial markets to do their fitting without supervision. I certainly wouldn't trust a machine.
Still, this might be the only way of discovering a genuinely novel and highly non linear pattern in some rich financial data. Which is why I personally think high frequency trading is one of the more likely applications for these techniques (I particularly enjoyed Domeyard's Christina Qi's presentation on this subject, which most of us only know about through books like Flash Boys).
I think it's fair to say that I am a bit more well disposed towards those on the other side of the fence than I was before the conference. But don't expect me to start using neural networks anytime soon.
... but "Classical" statistics are still important
One of my favourite talks that I've already mentioned was by Dr Ernie Chan, who talked about using some fairly well known techniques to ~~identify pictures of cats on YouTube~~ enhance the statistical significance of backtests (with a specific example of a multi factor equity regression).
[Photo. Source: https://twitter.com/saeedamenfx]
Although I didn't personally learn anything new in this talk I found it extremely interesting and useful in reminding everyone about the core issues in financial analysis. Fancy ML algorithms can't help solve the fundamental problem that we usually have insufficient data, and what we have has a pretty low ratio of signal to noise. Indeed most of these fancy methods need a shed load of data to work, especially if you run them on an expanding or rolling out of sample basis as I would strongly suggest. There are plenty of sensible "old school" methods that can help with this conundrum, and Ernie did a great job of providing an overview of them.
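For readers unfamiliar with the expanding-window idea mentioned above, here is a minimal sketch (my own toy example, not anything from Ernie's talk): at every step the parameter is estimated only from data already seen, and then used to forecast the next observation.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(0.5, 2.0, 600)    # toy series whose mean we want to "learn"

# Expanding-window out-of-sample fit: at each step the parameter (here
# simply the mean) is estimated only from past data, then used to forecast
forecasts = np.full_like(y, np.nan)
min_history = 50                  # don't trust estimates from tiny samples
for t in range(min_history, len(y)):
    forecasts[t] = y[:t].mean()   # refit on all data up to, not including, t

oos_errors = y[min_history:] - forecasts[min_history:]
print(round(oos_errors.mean(), 3))
```

The same skeleton works for any estimator; the data-hunger problem is visible in how noisy the early estimates are.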
Another talk I went to was about detecting structural breaks in relative value fixed income trading, which was presented by Edith Mandel of Greenwich Street Advisors. Although I didn't actually agree with the approach being used, this stuff is important. Fundamentally this business is about trying to use the past to predict the future. It's really important to have good robust tests to distinguish when this is no longer working, so we know that the world has fundamentally changed and it isn't just bad luck. Again this is something that classical statistical techniques like Markov chains are very much capable of doing.
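As a flavour of what a classical structural break test looks like, here is a plain CUSUM sketch (to be clear: this is my own illustrative example, not the method presented in the talk, and the return numbers are invented). A persistent downward drift in the cumulative sum of out-of-sample returns, relative to the in-sample mean, suggests the edge is gone rather than that we're just unlucky.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated strategy returns: a real edge for 300 days, then it disappears
in_sample = rng.normal(0.001, 0.01, 300)
out_sample = rng.normal(-0.001, 0.01, 200)

# CUSUM of out-of-sample returns relative to the in-sample mean estimate
mean_est = in_sample.mean()
cusum = np.cumsum(out_sample - mean_est)

# Crude volatility-scaled trigger; in practice the threshold would be set
# to trade off false alarms against detection speed
threshold = -5 * in_sample.std()
breached = cusum < threshold
break_day = int(np.argmax(breached)) if breached.any() else None
print(break_day)
```

A two-state Markov regime-switching model, as alluded to above, is a more principled version of the same idea.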
It's all about the portfolio construction, baby
As some of you know, I'm currently putting the final touches to a modest volume on the ever fascinating subject of portfolio construction. So it's something I'm particularly interested in at the moment. There were stacks of talks on this subject at QuantCon, but I only managed to attend two in person.
Firstly, the final keynote talk, which was very well received, was on "Building Diversified Portfolios that Outperform Out-of-Sample", or to be more specific Hierarchical Risk Parity (HRP), by Dr. Marcos López de Prado:
[Photo. Source: https://twitter.com/quantopian. As you can see Dr. Marcos is both intelligent, and also rather good looking (at least as far as I, a heterosexual man, can tell).]
HRP is basically a combination of a clustering method to group assets and risk parity (essentially holding positions inversely scaled to a volatility estimate). So in some ways it is not hugely dissimilar to an automated version of the "handcrafted" method I describe in my first book. Although it smells a lot like this is machine learning I really enjoyed this presentation, and if you can't use handcrafting because it isn't sophisticated enough then HRP is an excellent alternative.
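To give a flavour of the "cluster, then risk-weight" idea, here is a deliberately simplified sketch. This is not López de Prado's full algorithm (which uses quasi-diagonalisation and recursive bisection); the toy data, the two-cluster choice, and the equal risk budget across clusters are all my own inventions for illustration. The correlation-based distance metric is the one from his paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy daily returns: two "equity-like" and two "bond-like" assets
rng = np.random.default_rng(0)
base_eq = rng.normal(0, 0.01, 500)
base_bd = rng.normal(0, 0.005, 500)
returns = np.column_stack([
    base_eq + rng.normal(0, 0.004, 500),
    base_eq + rng.normal(0, 0.004, 500),
    base_bd + rng.normal(0, 0.002, 500),
    base_bd + rng.normal(0, 0.002, 500),
])

corr = np.corrcoef(returns, rowvar=False)
# Distance metric from the HRP paper: sqrt(0.5 * (1 - correlation))
dist = np.sqrt(0.5 * (1.0 - corr))
iu = np.triu_indices_from(dist, k=1)          # condensed form for scipy
link = linkage(dist[iu], method='single')
clusters = fcluster(link, t=2, criterion='maxclust')

# Inverse-volatility weights within each cluster, equal budget across them
vols = returns.std(axis=0)
weights = np.zeros(len(vols))
n_clusters = clusters.max()
for c in range(1, n_clusters + 1):
    members = np.where(clusters == c)[0]
    w_in = (1 / vols[members]) / (1 / vols[members]).sum()
    weights[members] = w_in / n_clusters

print(weights.round(3), weights.sum())
```

The grouping step is doing the same job as handcrafting does manually: making sure the bond-like assets don't get swamped by the correlated equity-like block.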
There were also some interesting points raised in the presentation (and Q&A, and the bar afterwards) more generally about testing portfolio construction methods. Firstly Dr Marcos is a big fan (as am I) of using random data to test things. I note in passing that you can also use bootstrapping of real data to get an idea of whether one technique is just lucky, or genuinely better.
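A minimal sketch of the bootstrapping idea mentioned above (the asset return distribution and the two candidate weight vectors are made up for illustration): resample days with replacement and count how often one set of weights actually beats the other. A win rate near 50% suggests any observed advantage is mostly luck.

```python
import numpy as np

rng = np.random.default_rng(3)
# Invented returns for three assets with invented means, vols, correlations
returns = rng.multivariate_normal(
    mean=[0.0004, 0.0003, 0.0002],
    cov=[[1e-4, 5e-5, 0], [5e-5, 1e-4, 0], [0, 0, 4e-5]],
    size=1000)

def portfolio_sharpe(rets, w):
    p = rets @ w
    return p.mean() / p.std()

w_equal = np.array([1 / 3, 1 / 3, 1 / 3])
w_tilted = np.array([0.5, 0.25, 0.25])   # some other candidate weights

# Bootstrap: resample days with replacement, compare the two weightings
wins = 0
n_boot = 500
for _ in range(n_boot):
    sample = returns[rng.integers(0, len(returns), len(returns))]
    wins += portfolio_sharpe(sample, w_tilted) > portfolio_sharpe(sample, w_equal)
print(wins / n_boot)   # near 0.5 => the difference is mostly luck
```

Random data answers "could this method work at all?"; bootstrapping real data answers "is this method actually better on the data we have?".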
Secondly one of the few criticisms I heard was that Dr Marcos chose an easy target - naive Markowitz - to benchmark his approach against. Bear in mind that (a) nobody uses naive Markowitz, and (b) there are plenty of alternatives which would provide a sterner test. Future QuantCon presenters on this subject should beware - this is not an easy audience to please! In fairness other techniques are used as benchmarks in the actual research paper.
If you want to know more about HRP there is more detail here.
I also found a hidden gem in one of the more obscure conference rooms, this talk by Dr. Alec (Anatoly) Schmidt on "Using Partial Correlations for Increasing Diversity of Mean-variance Portfolio".
[Photo. Source: https://twitter.com/quantopian]
That is more interesting than it sounds - I believe this relatively simple technique could be something genuinely special and novel which will allow us to get bad old Markowitz to do a better job with relatively little work, and without introducing the biases of techniques like shrinkage, or causing the problems with constraints like bootstrapping does. I plan to do some of my own research on this topic in the near future, so watch this space. Until then amuse yourself with the paper from SSRN.
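For the curious: a partial correlation measures the relationship between two assets after stripping out the influence of all the others. The standard way to compute it is from the precision (inverse covariance) matrix; I don't know the exact construction Schmidt uses in the paper, so treat this as background illustration on invented data.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 4))
X[:, 1] += 0.8 * X[:, 0]   # assets 0 and 1 directly related
X[:, 2] += 0.5 * X[:, 1]   # asset 2 related to asset 0 only *via* asset 1

cov = np.cov(X, rowvar=False)
precision = np.linalg.inv(cov)

# Standard identity: partial correlation of i and j, controlling for all
# other variables, is -P_ij / sqrt(P_ii * P_jj) for precision matrix P
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

corr = np.corrcoef(X, rowvar=False)
print(round(corr[0, 2], 2), round(partial_corr[0, 2], 2))
```

Here assets 0 and 2 look meaningfully correlated, but their partial correlation is near zero, because the link runs entirely through asset 1; that distinction is what makes the technique interesting for diversification.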
Dude, QuantCon is awesome
Finance and trading conferences have a generally bad reputation, which they mostly deserve. "Retail" end conferences are normally free or very cheap, but mostly consist of a bunch of snake oil salesmen. "Professional" conferences are normally very pricey (though nobody there is buying their ticket with their own money), and mostly consist of a bunch of better dressed, and slightly more sophisticated, snake oil salespeople.
QuantCon is different. Snake oil sales people wouldn't last 5 minutes in front of the audience at this conference, even if they'd somehow managed to get booked to speak. This was probably the single biggest concentration of collective IQ under one roof in finance conference history (both speakers and attendees). The talks I went to were technically sound, and almost without exception presented by engaging speakers.
Perhaps the only downside of QuantCon is that the sheer quantity and variety of talks makes decisions difficult, and results in a huge amount of regret at not being able to go to a talk because something only slightly better is happening in the next room. Still, I know that I will have offended many other speakers by (a) not going to their talk, and (b) not writing about it here.
So I feel obligated to mention this other review of the event from Saeed Amen, and this one from Andreas Clenow, who are amongst the speakers whose presentations I sadly missed.
PS If you're wondering whether I am getting paid by QuantCon to write this, the answer is zero. Regular readers will know me well enough that I do not shill for anybody; the only thing I have to gain from posting this is an invite to next year's conference!



