John Hawver

The Curve vs. The VIX: A Chart Deception

Everyone has their pet peeves. I’ll admit, being a little OCD, I have a few. One of mine is charts with two y-axes. These charts are overused to “prove” points in investment research, and I find they easily mislead both professionals on Wall St. and regular investors on Main St.

Let’s take an example and dissect it. Here’s a chart I recently saw in a blog post by Morgan Stanley Research. The chart “suggests” that, based on the shape of the yield curve, investors should “expect” higher volatility in the equity markets, as measured by the VIX Index, at “any moment” and in a “sustained” manner. Clearly, this was written by a derivatives strategist who would like to sell you some options to protect your portfolio.

Source: ZeroHedge

So, let’s replicate the work, go through the results line by line, and see if you buy the sales pitch. The code to replicate my work is included at the bottom of the post. In it, the first thing I do is set up some helper functions and fetch the data from FRED, the St. Louis Federal Reserve’s data service. If you’re not familiar with FRED, it has a wealth of free data.

The second thing I do is transform the data just as Morgan Stanley (MS) does in their chart. Now, you should be asking: why are we transforming the data at all? I can’t tell you for certain what MS was thinking, but it looks to me like the transform was done just to make the pretty lines on the chart match up.

MS transforms the VIX Index by taking a 4-month moving average of the data and then taking the log of the result. To be fair, perhaps the log was taken to make the VIX more normally distributed for use in a linear regression. But by taking the moving average, MS introduces autocorrelation, which violates the independent-errors assumption of linear regression. So it’s a wash; back to my pretty-picture hypothesis.
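To see why the smoothing matters, here’s a minimal sketch on toy white-noise data (not the FRED series; the 84-day window is my stand-in for roughly four months of trading days), showing that a moving average manufactures autocorrelation where there was none:

```r
# Toy demonstration: a moving average induces strong autocorrelation.
# The data and the 84-day window are illustrative assumptions.
set.seed(42)
x  <- rnorm(1000)                                   # white noise: no serial dependence
ma <- stats::filter(x, rep(1 / 84, 84), sides = 1)  # trailing moving average
ma <- as.numeric(ma[!is.na(ma)])

lag1 <- function(v) cor(v[-1], v[-length(v)])       # lag-1 autocorrelation
lag1(x)   # near 0
lag1(ma)  # near 1 -- the smoothing did this, not the data
```

Any regression run on the smoothed series inherits this serial dependence, which is why the textbook standard errors below deserve some skepticism.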

For the yield curve, MS inverts it and lags the data by 36 months. So far, all of these numbers seem pretty arbitrary. Next, I plot the transformed series to see if they match the original chart. They do.
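The lagging is done with the fLag helper defined in the code below; a quick toy check of what it does:

```r
# fLag pushes data forward in time: the value at t reappears at t + lagN,
# with NAs padding the front (same definition as in the code appendix).
fLag <- function(vec, lagN) { c(rep(NA, lagN), vec[1:(length(vec) - lagN)]) }

fLag(1:5, 2)  # NA NA 1 2 3
```

So a 36-month lag on daily data becomes fLag(crv, 720), i.e., roughly 20 trading days per month.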

Now, let’s test Morgan Stanley’s thesis: does the (inverted) yield curve predict future moves in the VIX Index? To do this, I fit a simple linear regression and get these results:


Residuals:
     Min       1Q   Median       3Q      Max
-0.55075 -0.18032 -0.01486  0.16751  0.81296

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) 3.170003   0.005544  571.82   <2e-16 ***
X           0.154897   0.002596   59.67   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.2502 on 6977 degrees of freedom
Multiple R-squared:  0.3379,  Adjusted R-squared:  0.3378
F-statistic:  3560 on 1 and 6977 DF,  p-value: < 2.2e-16

For non-statisticians, let’s unpack this. There IS a relationship between the variables: the coefficients are highly statistically significant, as indicated by the p-values (lower is better). The Adjusted R-squared tells us that this relationship explains about 33% of the variability in the (transformed) VIX Index. The fitted equation is:

Vix_transformed = 3.17 + 0.155 * curve_transformed_lagged
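Since the left-hand side is the log of a smoothed VIX, undoing the transform is just an exponential. A hypothetical back-of-envelope helper (the coefficients come from the regression above; any curve value you feed it here is illustrative, since the real input comes from the FRED series):

```r
# Back-transform the fitted equation: exp() undoes the log.
# Coefficients are from the regression output; the input is made up.
predict_vix <- function(curve_inverted_lagged) {
  exp(3.17 + 0.155 * curve_inverted_lagged)
}

predict_vix(0)  # baseline: exp(3.17), about 23.8
```

The actual prediction in the next paragraph uses the real lagged curve level, not zero.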

Great. So MS stretched some of the linear regression assumptions but DID come up with something that moderately predicts the future. I’m not sure it implies a sustained rise is imminent, however, so let’s use our new model to make a prediction. Doing that, we find that, given the current level of the curve, the model (which, remember, explains only about a third of the variance) predicts the VIX Index reaches a whopping 19.5, roughly a 10% rise from where we are today. Scary. We should definitely run out and buy some options for portfolio protection.

You can see how transforming (manipulating) the data and presenting it on a two-y-axis chart exaggerates the true relationship. These charts make great sales tools, and they are designed to get long-term investors into trades that the real underlying data doesn’t justify.
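This is easy to demonstrate in the abstract: with two independent y-axes, the chart effectively rescales each series to fill the panel, so almost any two trending series can be made to “line up.” A minimal sketch on random-walk toy data (nothing from the MS chart):

```r
# Two unrelated random walks, each rescaled to [0, 1] -- which is what a
# dual-axis chart implicitly does when each axis is fit to its own series.
set.seed(7)
rescale <- function(v) (v - min(v)) / (max(v) - min(v))
a <- rescale(cumsum(rnorm(250)))
b <- rescale(cumsum(rnorm(250)))

range(a)  # 0 to 1
range(b)  # 0 to 1 -- same visual span, regardless of the true units
```

Plotted together, the two walks would appear to track each other even though they share nothing but the axis trick.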

Hope that helps.


#### Code to Replicate ####

# Setup
# packages assumed throughout: quantmod (FRED download), xts/zoo (rollapply), ggplot2
library(quantmod)
library(xts)
library(zoo)
library(ggplot2)
# helper function

fLead <- function(vec, leadN) { c(vec[(leadN + 1):(length(vec)) ], rep(NA, leadN)) } # pulls data from t+h to t

fLag <- function(vec, lagN) { c(rep(NA, lagN), vec[1:(length(vec) - lagN)]) } # pushes data from t to t+h

fggplotRegression <- function(fit) {
  ggplot(fit$model, aes_string(x = names(fit$model)[2], y = names(fit$model)[1])) +
    geom_point() +
    stat_smooth(method = "lm", col = "blue") +
    labs(title = paste("Adj R2 =", signif(summary(fit)$adj.r.squared, 5),
                       " Intercept =", signif(fit$coef[[1]], 5),
                       " Slope =", signif(fit$coef[[2]], 5),
                       " P =", signif(summary(fit)$coef[2, 4], 5)))
}

# Get data

crv <- getSymbols('T10Y3M', src = 'FRED', auto.assign = F); names(crv) <- 'CRV'

vix <- getSymbols('VIXCLS', src = 'FRED', auto.assign = F); names(vix) <- 'VIX'

# MS Transform... vix(log, 4m avg)... crv(leading 36m, inverted)

vix_tfm <- rollapply(log(na.omit(vix)), 120, mean)

crv_tfm <- xts(-fLag(coredata(crv), 720), order.by = index(crv))

crv_prd <- xts(-fLead(coredata(crv), 720), order.by = index(crv))

# plot these (a minimal dual-axis replication; colors and labels are my choices)
plot(as.zoo(vix_tfm), col = "blue", xlab = "", ylab = "log VIX (smoothed)")
par(new = TRUE)
plot(as.zoo(crv_tfm), col = "red", axes = FALSE, xlab = "", ylab = "")
axis(side = 4) # right-hand axis for the inverted, lagged curve

# Let's regress these and see if there's a relationship; merge on dates so everything lines up

all_data <- na.omit(merge(vix_tfm, crv_tfm))

X <- all_data$crv_tfm

Y <- all_data$VIX

mod <- lm(Y ~ X)


# Plot the regression using the helper above
fggplotRegression(mod)
# predict the future, per the MS model

# build newdata with the variable name the model expects ('X'); otherwise
# predict() silently falls back to the in-sample X from the global environment
y_hat <- predict(mod, newdata = data.frame(X = as.numeric(na.omit(crv_prd))))

exp(tail(y_hat, 1)) # predicted VIX level (undo the log transform)

log(exp(tail(y_hat, 1)) / 17.5) # implied log change vs. a spot VIX around 17.5
