ECRI recession call "Downfall" parody

h/t bonddad blog

If you’ve spent any time in the financial blogosphere, you’ve definitely come across a critique of ECRI’s recession call from last year, which has been repeatedly revised and pushed forward in time. It looks like they were wrong and have yet to admit defeat. This Downfall parody is hilarious. I’ve seen them before, but I didn’t know there were so many.


VIX closes up 34%, what next?

What seemed like a normal day quickly escalated into a relatively big sell-off (compared to recent historical volatility). The SPX was down 1.83% and the VIX had a huge pop of 34%.

Source: Stockcharts.com

I thought I would see what history says about this type of event and the returns going forward. Let’s see what the average S&P 500 return is after a 34% single-day spike in the VIX.

There have been only 5 instances since 1996, an extremely small sample, but it looks like we’re in for choppy weather going forward. I’m not surprised given the deteriorating internals we’ve been witnessing in the sector rotation. Of course, this study should be aggregated with other events that have triggered in the recent past; still, it gives us one of many reference points to guide us.

Update: See also:
What Follows Huge 1-Day VIX Spikes
$VIX Explodes: What Happens After 1 Day $VIX Gains of 30+%?

R code (h/t SIT):

# Load Systematic Investor Toolbox (SIT)
setInternet2(TRUE)  # Windows-only legacy call; harmless to skip on other platforms
con = gzcon(url('https://github.com/systematicinvestor/SIT/raw/master/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')  # only quantmod is needed for this script; SIT supplies mlag()

# cbind matrices with different row counts, padding the shorter ones with NA rows
cbind.fill <- function(...){
  nm <- list(...)
  nm <- lapply(nm, as.matrix)
  n <- max(sapply(nm, nrow))
  do.call(cbind, lapply(nm, function(x)
    rbind(x, matrix(NA, n - nrow(x), ncol(x)))))
}

# Set variables
forward.len = 50  # number of trading days to track after each signal

# Note: despite the variable names, these are the S&P 500 index (^GSPC) and the VIX
data.SPY = getSymbols('^GSPC', src = 'yahoo', from = '1996-01-01', auto.assign = FALSE)
data.VIX = getSymbols('^VIX', src = 'yahoo', from = '1996-01-01', auto.assign = FALSE)

# daily close-to-close simple returns (mlag is SIT's one-period lag)
VIX.ret = Cl(data.VIX)/mlag(Cl(data.VIX)) - 1
SPY.ret = Cl(data.SPY)/mlag(Cl(data.SPY)) - 1

VIX.ret = na.omit(VIX.ret)
VIX.ret = merge(VIX.ret, SPY.ret)  # columns: VIX.Close, GSPC.Close
results = matrix(NA, nr = forward.len, nc = 1)  # one column of forward S&P 500 returns per signal
datelist = matrix(NA, nr = 1, nc = 2)           # signal dates and next-day closes, kept for reference

# collect the next forward.len days of S&P 500 returns after each +34% VIX day
for (i in 2:(nrow(VIX.ret)-1)) {
  if (VIX.ret[i, "VIX.Close"] > 0.34) {
    results = cbind.fill(results, matrix(coredata(VIX.ret[(i+1):min((i + forward.len), nrow(VIX.ret)), "GSPC.Close"]), nr = forward.len))
    datelist = rbind(datelist, c(index(VIX.ret[i]), Cl(data.SPY[i+1])))
  }
}

# average the forward return paths across signals and compound into an equity curve
results.avg = apply(results, 1, mean, na.rm = TRUE)
results.avg.equity = cumprod(1 + results.avg)
plot(results.avg.equity, type = 'l', col = 'red', ylim = c(0.97, 1.05), axes = FALSE, ann = FALSE, lwd = 3)

title(xlab = "# of days trade is held")
title(ylab = "Avg equity (day 0 = 1)")
title(main = "S&P 500 average return after VIX is up 34%")
axis(1, lwd = 3)
axis(2, lwd = 3)

# Benchmark: average S&P 500 forward returns over all days in the sample
SPYonly = matrix(NA, nr = forward.len, nc = 1)

for (i in 1:(nrow(VIX.ret)-1)) {
  SPYonly = cbind.fill(SPYonly, matrix(coredata(VIX.ret[(i+1):min((i + forward.len), nrow(VIX.ret)), "GSPC.Close"]), nr = forward.len))
}

SPYonly.avg = apply(SPYonly, 1, mean, na.rm = TRUE)
SPYonly.avg.equity = cumprod(1 + SPYonly.avg)

lines(SPYonly.avg.equity, type = 'l', col = 1, lwd = 3)
legend(1, 1.04, c("S&P 500 average return", "VIX up 34%"), cex = 0.7, col = 1:2, lwd = 3)

Aronson/Masters release much-anticipated trading signal machine learning tool

Check out their website for the free software.

The January edition of Technically Speaking, the Market Technicians Association’s newsletter, had an article describing the software:

David is addressing that by creating a unique trading platform called Trading System Synthesis & Boosting (TSSB). This software, with documentation, will soon be available for free. Readers will be notified when the software is released. Among the features that will be offered are:

-The ability to rank a large list of indicators vs. a target, get a chi-square statistic and a level of significance (p-value) for the indicators that is adjusted for data mining bias. As David’s book points out, an ordinary p-value does not work. He describes the problem, “If I test 1000 indicators versus a given target variable, even if none of them have any predictive value, 5% or 50 of the indicators will appear to be statistically significant at the 0.05 level of significance. However, when we correct for the fact that we have tested (data mined) 1000 indicators, a correct p-value will reveal the fact that none are significant.”

-Provide non-redundant predictive screening (NRPS), a feature that takes a list of indicators and finds the best one for predicting a given target variable that adds the most information to (is least redundant of) the first one selected. This process continues until adding a new indicator does not produce a statistically significant increase in information. This will be done by a specialized Monte Carlo Permutation test that is robust to (corrected for) the data mining going on. This is crucial for traders to understand because without this test adding another indicator will always appear to increase statistical significance according to a conventional significance test.

-Plot the predictive power of an indicator over time. It fluctuates. There will also be data presented to break down the predictive information into two parts, (1) the ability to predict the sign of the target variable (positive vs. negative) and (2) the ability to predict the magnitude of the target. For example, there may be an indicator that is unable to predict if a future move will be up or down but is very good at predicting if the move will be large or small (irrespective of its algebraic sign).
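
To make the data-mining bias concrete, here is a minimal R simulation of the exact scenario David describes (my own illustration, not TSSB code): 1,000 pure-noise indicators tested against a random target, with a simple Bonferroni adjustment standing in for TSSB’s more sophisticated Monte Carlo correction.

# 1,000 worthless "indicators" tested against a random target: at the 0.05
# level roughly 50 look significant by chance; a p-value adjusted for the
# number of tests (Bonferroni, used here as a simple stand-in) shows none are.
set.seed(42)
n.obs = 500
n.ind = 1000
target = rnorm(n.obs)

p.values = replicate(n.ind, cor.test(rnorm(n.obs), target)$p.value)

sum(p.values < 0.05)                          # ~50 false positives
sum(p.adjust(p.values, 'bonferroni') < 0.05)  # 0 after the correction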

Looks impressive; I can’t wait to try it out. See their site for a thorough description of all the features. Great work, guys. h/t Mebane Faber for the update.

Notes and Quotes from Jaffray Woodriff’s Hedge Fund Market Wizards Interview

Jack Schwager has written a great series of Market Wizards books, interviewing fund managers and traders who are all successful with very different styles. In his most recent book, Hedge Fund Market Wizards, he interviews quant manager Jaffray Woodriff. Woodriff is head of Quantitative Investment Management (QIM), a successful CTA with a different approach from the rest. As the book explains:

The majority of futures traders, called CTAs, use trend-following methodologies. These programs seek to identify trends and then take a position in the direction of the trend until a trade liquidation or reversal signal is received. A smaller number of systematic CTAs will use countertrend (also called mean reversion) methodologies. As the name implies, these types of systems will seek to take positions opposite to an ongoing trend when system algorithms signal that the trend is overextended. There is a third category of systematic approaches whose signals do not seek to profit from either continuations or reversals of trend. These types of systems are designed to identify patterns that suggest a greater probability for either higher or lower prices over the near term. Woodriff is among the small minority of CTAs who employ such pattern-recognition approaches, and he does so using his own unique methodology. He is one of the most successful practitioners of systematic trading of any kind.

Source: Hedge Fund Market Wizards

Several posts have been written investigating Woodriff’s methods, specifically what’s been labeled the Internal Bar Strength (IBS) indicator; a minimal sketch of the calculation follows the links. You can check out some of the discussion at the following posts:

Doing the Jaffray Woodriff Thing (Kinda), Part 1

http://stats.stackexchange.com/questions/31513/new-revolutionary-way-of-data-mining

http://www.activetradermag.com/index.php/c/Trading_Strategies/d/The_low-close_edge

http://adaptivetrader.wordpress.com/2012/12/28/cumulative-ibs-indicator/

http://dontfearthebear.com/2013/01/04/using-strength-of-recent-closes-to-time-broad-asset-classes/

http://intelligenttradingtech.blogspot.tw/2013/01/ibs-reversion-edge-with-quantshare.html
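
For reference, the indicator itself is simple: IBS = (Close - Low) / (High - Low), i.e., where the close sits within the day’s range. A minimal sketch of the calculation (the ticker and date range are just placeholders):

# Internal Bar Strength: 0 = close at the low of the day, 1 = close at the high
require(quantmod)
prices = getSymbols('SPY', src = 'yahoo', from = '2010-01-01', auto.assign = FALSE)
IBS = (Cl(prices) - Lo(prices)) / (Hi(prices) - Lo(prices))
plot(IBS, main = 'SPY Internal Bar Strength')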

I wanted to highlight some of my favorite quotes and takeaways from the interview. Most have to do with the issue of robustness in trading system design. All the following quotes are from Hedge Fund Market Wizards. Here we go:

I discovered that it was much better to use multiple models than a single best model

Here Woodriff highlights the diversification benefit of running multiple trading systems instead of a single best one. Of course, we know that if the systems are relatively uncorrelated, combining their equity curves yields a single system with better risk-adjusted returns, as the toy example below shows.
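
This is a toy illustration of my own, not from the book: two uncorrelated return streams with identical means and volatilities combine into a portfolio with the same return but roughly 1/sqrt(2) of the volatility, and therefore a higher Sharpe ratio.

# Two uncorrelated "systems" with the same edge; the 50/50 combination keeps
# the mean return but cuts volatility by ~1/sqrt(2), raising the Sharpe ratio
set.seed(1)
sys1 = rnorm(2520, mean = 0.0004, sd = 0.01)  # ~10 years of daily returns
sys2 = rnorm(2520, mean = 0.0004, sd = 0.01)  # independent of sys1
combo = (sys1 + sys2) / 2

sharpe = function(x) sqrt(252) * mean(x) / sd(x)
c(sharpe(sys1), sharpe(sys2), sharpe(combo))  # combo is ~sqrt(2) higher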

I found that using the same models across multiple markets provided a far more robust approach. So the big change that occurred during this period was moving from separate models for each market to common models applied across all markets.

This is a common technique for increasing robustness. If a system is optimized for a single market, it can easily end up over-optimized. If the rules have to work on several different markets and still yield good results, they are more likely to hold up in live trading.

Are all your secondary variables derived just from daily open, high, low, and close price data? Absolutely. That is all I am using.

I was a little surprised that all he uses are OHLC bars. I know several successful traders who use only these data points, so I know how much information they convey. Perhaps we should limit our dataset and simply refine our existing research methods rather than search for more information in our quest for the ultimate system.

Sometimes we give a little more weight to more recent data, but it is amazing how valuable older data still is. The stationarity of the patterns we have uncovered is amazing to me, as I would have expected predictive patterns in markets to change more over the longer term.

Woodriff uses data going back several decades in his analysis. I know people who trade intraday and use only the last 3-10 years, so it all depends on the timeframe you are trading. For daily bars I’d say more is better, but you also have to consider the quality of the data you have.

combined different secondary variables into trend-neutral models. They were neither trying to project a continuation of the trend or a reversal of the trend. They were only trying to predict the probable market direction over the next 24 hours.

Here Woodriff talks about his direction-recognition approach, which stands in contrast to the typical trend-following or mean-reversion CTA strategies.

I don’t do that. I read all of that just to get to the point that I do what I am not supposed to do, which is a really interesting observation because I am supposed to fail. According to almost everyone, you have to approach systematic trading (and predictive modeling in general) from the framework of “Here is a valid hypothesis that makes sense within the context of the markets.” Instead, I blindly search through the data. It’s nice that people want hypotheses that make sense. But I thought that was very limiting. I want to be able to search the rest of the stuff. I want to automate that process. If you set the problem up really well with cross validation, then overfitting is a problem that can be overcome. I hypothesized that there are patterns that work, and I would rather have the computer test trillions of patterns than just a few hundred that I had thought of.

This is really interesting. For years I’ve read that you need to start with a valid market or economic hypothesis for a trading system in order for it to be successful. Woodriff’s approach is the opposite: he data-mines ‘blindly’ but uses robust techniques to avoid over-fitting the data. It does seem very limiting to test only the hypotheses you can come up with yourself versus looking at all other possible combinations. This will require more analysis….

A lot of people think they are okay because they use in-sample data for training and out-of-sample data for testing. Then they sort the models based on how they performed on the in-sample data and choose the best ones to test on the out-of-sample data. The human tendency is to take the models that continue to do well in the out-of-sample data and choose those models for trading. That type of process simply turns the out-of-sample data into part of the training data because it cherry-picks the models that did best in the out-of-sample period. It is one of the most common errors people make and one of the reasons why data mining as it is typically applied yields terrible results.

Again, Woodriff goes against conventional wisdom and pokes holes in the most common method of robustness testing, the out-of-sample test. He mentions several times in the interview that cross-validation is a better alternative (something also highlighted by fellow market wizard and CTA Bill Eckhardt). A toy simulation of the selection error he describes follows.
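
This sketch is my own, not Woodriff’s actual procedure: generate hundreds of worthless models, cherry-pick the best out-of-sample performer, and watch its “edge” vanish on genuinely fresh data.

# 500 random "models" with zero true edge: selecting the best out-of-sample
# performer silently turns the OOS data into training data
set.seed(7)
n.models = 500
oos   = matrix(rnorm(n.models * 252), nr = n.models)  # one year of OOS daily returns per model
fresh = matrix(rnorm(n.models * 252), nr = n.models)  # truly unseen data

best = which.max(rowMeans(oos))
mean(oos[best, ])    # looks like a real edge, but only because we selected on it
mean(fresh[best, ])  # ~0 on fresh data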

You can look for patterns where, on average, all the models out-of-sample continue to do well. You know you are doing well if the average for the out-of-sample models is a significant percentage of the in-sample score. Generally speaking, you are really getting somewhere if the out-of-sample results are more than 50 percent of the in-sample.

Here he essentially describes the cross-validation process and puts some concrete thresholds on the results he looks for in the test; perhaps I will try this out. A minimal version of the check is sketched below.
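
This is just my reading of his rule of thumb, not an exact recipe:

# keep a model family only if the average out-of-sample score retains more
# than 50% of the average in-sample score
keep.models = function(is.scores, oos.scores) mean(oos.scores) / mean(is.scores) > 0.5
keep.models(is.scores = c(0.9, 1.1, 1.0), oos.scores = c(0.6, 0.5, 0.7))  # TRUE: worth keeping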

This was a great interview with a successful quant. I recommend purchasing the Hedge Fund Market Wizards book and reading the full interview as well as the interviews of traders with other approaches. It always helps to get insights from the best….

Secular Market Cycles and other Macro thoughts….

There have been several articles about the secular bear market as of late:

Is Bull Sprint Becoming a Marathon?

Is the Secular Bear Market Coming to an End?

Secular Bull and Bear Markets

The Next Secular Bull Market is Still a Few Years Away

The Secular Bear Market in Stocks is Over

For those of you who don’t know, secular market cycles of 10-25 years exist in the stock market, where stocks alternate between churning in a trading range and trending upwards. The current secular bear market started in 2000 and is now 13 years old. Unfortunately, these cycles are only easily identifiable in hindsight.

Source: Monthly Chart Portfolio, Merrill Lynch Market Analysis, November 4, 2011

I thought we could review a few macro charts given the recent focus on long-term market cycles. Also, for those of you who missed it, check out my earlier chart reviews here and here. Personally, I’m reminded of a 2009 Mark Hulbert article regarding Ned Davis’ indicators for secular bear market lows:

Secular bear, cyclical bull

Davis was able to identify seven dimensions that he could use to compare the March 9 low to those past secular lows:
• “Economically, the debt structure should be deflated.” Bearish. This is the most negative of any of Davis’ seven dimensions, since by no means is the debt structure deflated. On the contrary, Davis calculates that the total credit-market debt load right now is nearly four times the size of gross domestic product, and that it takes more than $6 of new debt for our country to produce just $1 of GDP growth. That’s almost double the amount of debt required in the 1990s.
• “There should be a large pent-up demand for goods and services.” Bearish. Davis acknowledges that there has been improvement along this dimension from where things stood at the beginning of the bear market. But he is particularly worried by the ratio of total Personal Consumption Expenditures to Non-Residential Fixed Investment, which currently stands at a record high. At the secular bear market low in 1982, in contrast, this ratio was at a record low.
• “Fundamentally, stocks should be clearly cheap based upon time-tested, absolute valuation measures.” Neutral. Though the stock market “got undervalued at the March lows,” it never became “dirt cheap.”
• “Technically, major investor groups should have below-average stock holdings and large cash reserves.” Neutral. While foreign investors have record-low stock holdings, according to Davis, household holdings — while low — are not nearly as low as they were at prior secular bear market lows. And institutional investors’ stock holdings “are only down to an average weighting historically.”
• “A fully oversold longer-term market condition in terms of normal trend growth and in terms of time.” Neutral. Davis believes that, though many of the excesses of the real-estate bubble have been worked off, some still exist. That’s particularly a problem, he says, given that the stock market bubble of the late 1990s never completely deflated either. “As we saw in Japan after 1990, a double-bubble in stocks and real estate leaves it difficult to put ‘humpty dumpty’ together again.”
The bottom line? Only one of the seven foundations of a secular bull market is in place. Three more are neutral, and the remaining three are bearish.

The overriding theme of his article was that there has to be pent-up demand for stocks and for goods/services, and that corrections occur in terms of both price and time. Let’s look at some charts and see if this is the case:


Source: Gluskin Sheff

The debt structure is still inflated from an overall perspective, although it is slowly deleveraging. As we have shown before on this blog, private sector debt is contracting while the government levers up to offset it, in this balance sheet recession.


Source: Gluskin Sheff

Drilling down into the private sector, household balance sheets are still being repaired and debt ratios are reverting to the mean. This process still has room to run and could take a while. It hasn’t even begun in Canada (where I live).


Source: Fusion

How much equity are investors holding? Relative to the past, the current allocation is neutral to bullish. Unfortunately, the series only has a 26-year history.



Source: Chris Puplava

A look at demographic trends versus normalized equity valuations shows that we might see more downside over the next couple of years, but then a tailwind for quite some time after that. I’m not sure about using this chart as a timing tool, but it’s definitely interesting.


Source: Chris Puplava

With all the talk of a ‘Great Rotation’ from bonds to stocks, I had to highlight a couple of bond-related charts. Firstly, the ratio of stocks to bonds is forming a technical pattern that looks as if it is breaking out of its range to the upside, indicating long-term out-performance of stocks relative to bonds. Watch for follow-through…


Source: Fusion

Secondly, from a technical perspective, we can’t call the bull market in bonds over until at least the 3.75% level is taken out on the 10Y yield, and even then we need to see follow-through. I believe it was the summer of 2007 when this downtrend line was last tested and several technicians pointed to a ‘secular’ breakout and reversal of trend for bonds, only for bonds to rally significantly for the next 6 years.

As you can see, there are many factors to consider when trying to figure out whether we are still in a secular bear market. I personally think we have a few more years of churn ahead of us, but I look forward to a return of the go-go years at some point, since I’ve never experienced them personally….