Pattern matching Cryptocurrencies

Bitcoin, Ethereum, and some other cryptocurrencies are in the spotlight again thanks to their most recent price acceleration.

Source: CEOTechnician

Ethereum is up multiples since January. I thought we could take a look at importing Ethereum price data into R and then see whether we can draw any parallels between Ethereum and Bitcoin using the pattern-matching algorithm we’ve looked at here before.

But first we will load Bitcoin data into R, since it can be easily accessed through the Quandl API.

library(Quandl)
library(SIT)
library(quantmod)
library(ggplot2)

BTCUSD = Quandl("BCHAIN/MKPRU",type="xts")
head(BTCUSD)
colnames(BTCUSD)[1] =  'avg_price'
BTCUSD.Avg = BTCUSD[,1]

# note: par(mfrow=...) has no effect on ggplot2 output, so we draw the plots separately
ggplot(data=fortify.zoo(BTCUSD), aes(x=Index, y=avg_price)) +
  geom_line() + scale_y_log10() + ggtitle("Bitcoin in USD (Log10)")


ggplot(data=fortify.zoo(BTCUSD), aes(x=Index, y=avg_price)) +
  geom_line() + ggtitle("Bitcoin in USD")

[Figure: Bitcoin in USD, linear scale]

As you can see, Bitcoin is undergoing another run-up in prices. Previous runs are difficult to make out at this scale, so we will also plot Bitcoin prices on a log scale to better show the percentage gains.

[Figure: Bitcoin in USD, log scale]

The price acceleration looks tamer in this view relative to previous runs, but the market capitalization of Bitcoin has grown much larger than when it had its initial run.

Let’s take a look at Ethereum now. We will pull in data via the Poloniex API, parse the JSON response, and convert the timestamps. Finally we will look at the linear and log price charts.

url_m = 'https://poloniex.com/public?command=returnChartData&currencyPair=USDT_ETH&start=1435699200&end=9999999999&period=86400'

library(jsonlite)
mkt_data <- fromJSON(url_m)


# convert the Unix timestamps in column 1 to dates to index the price series
ETH <- as.xts(mkt_data[,5], order.by=as.Date(as.POSIXct(mkt_data[,1], origin="1970-01-01")))
colnames(ETH) <- 'ETH'
head(ETH)

ggplot(data=fortify.zoo(ETH), aes(x=Index, y=ETH)) +
  geom_line() + scale_y_log10() + ggtitle("Ethereum in USD (Log10)")

ggplot(data=fortify.zoo(ETH), aes(x=Index, y=ETH)) +
  geom_line() + ggtitle("Ethereum in USD")

[Figure: Ethereum in USD]

What a run!

Let’s take a look at the Bitcoin data again and see what the closest matches are for the most recent run-up. We will use code similar to that of a previous post.

[Figure: Bitcoin pattern matches]

Using a 63-day window for pattern matching with dynamic time warping, we get these results. The prices highlighted in blue are the most recent prices we are matching, and the prices in red are the 10 closest matches.
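Since the SIT helpers do the heavy lifting in the posts referenced, here is a minimal self-contained sketch of the idea in base R: slide a 63-day window over history, compute a bare-bones DTW distance to the normalized recent window, and keep the 10 closest starting points. The price series is simulated; the window length (63) and match count (10) follow the text.

```r
# Bare-bones dynamic time warping distance (no windowing constraints)
dtw.dist <- function(x, y) {
  n <- length(x); m <- length(y)
  D <- matrix(Inf, n + 1, m + 1); D[1, 1] <- 0
  for (i in 1:n) for (j in 1:m)
    D[i + 1, j + 1] <- abs(x[i] - y[j]) + min(D[i, j], D[i, j + 1], D[i + 1, j])
  D[n + 1, m + 1]
}

set.seed(1)
price   <- cumprod(1 + rnorm(500, 0.001, 0.03))    # stand-in for the BTC series
n.query <- 63
query   <- as.numeric(scale(tail(price, n.query))) # normalized recent window

starts  <- 1:(length(price) - 2 * n.query)         # exclude the query window itself
d       <- sapply(starts, function(i)
  dtw.dist(query, as.numeric(scale(price[i:(i + n.query - 1)]))))

best <- starts[order(d)[1:10]]                     # start indices of the 10 closest matches
```

From each start index in `best`, the forward week/month/quarter returns can then be tabulated as in the matching table above.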

[Figure: 10 closest matches (red) to the most recent 63-day window (blue)]

It looks as though most of the upside has already been had, as the forward week, month, and quarter returns are all negative. That said, one match (Match 5) went on to gain 1,044.7%.

In the future, we may look at pattern matching log prices, or cross-market pattern matching, for example Ethereum’s recent price run on Bitcoin’s history.

*Not investment advice

If you are trading this parabolic spike, be careful and always use stops. I found this idealized image of stop placement fascinating.

[Figure: idealized stop placement]

Source: “Stop Techniques – The Implications From Inconsistent Forecasting Skills”, Nordea

S&P500 tracing out crash pattern

By crash pattern I’m referring to Didier Sornette’s Log-Periodic Power Law (LPPL). This is a price trend that exhibits super-exponential price acceleration with log-periodic oscillations and mean-reverting residuals. This type of pattern does not always lead to a crash, but it has led to changes in regime in price action. I noticed the pattern forming visually, and then John Hussman confirmed a fit to the model on his Twitter feed last week. It is an interesting event to consider, although traders should always default to the price action at hand, as patterns and divergences can persist longer than we can remain solvent if we ignore the reality of price action.

A 1590; B -260; Tc 2013.272; beta = 0.55; C = 0.28; omega = 10.2; phi (phase) 2.3. Classic uncorrected, diagonal, high frequency ramp at end
Not convinced that markets obey math, but increasingly shallow corrections at accelerated frequency suggest euphoria
Geek’s Note – Log-periodic Sornette-type bubble in S&P 500 with Tc = 2013.27 just reached its finite-time singularity. Interesting to watch.
Sornette bubbles – increasing volatility at 10-minute intervals is indicative of log-periodic fluctuations => singularity: buying every dip

Source: John Hussman
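For reference, the parameters quoted in the tweet correspond to the standard LPPL functional form. The R sketch below plugs in those quoted values; how exactly they map into this particular parameterization is my assumption, so treat the curve as illustrative only.

```r
# Log-Periodic Power Law (Sornette), standard form:
#   p(t) = A + B*(tc - t)^beta * (1 + C*cos(omega*log(tc - t) + phi))
# Parameter values are the ones quoted in the tweet; their mapping into
# this parameterization is an assumption.
lppl <- function(t, A, B, tc, beta, C, omega, phi) {
  dt <- tc - t
  A + B * dt^beta * (1 + C * cos(omega * log(dt) + phi))
}

t <- seq(2011, 2013.25, by = 1/252)  # decimal years, stopping just short of tc
p <- lppl(t, A = 1590, B = -260, tc = 2013.272, beta = 0.55,
          C = 0.28, omega = 10.2, phi = 2.3)
plot(t, p, type = "l", main = "LPPL with the quoted parameters (illustrative)")
```

Note how, with B negative, the price ramps toward A (here 1590, roughly the S&P level at the time) with oscillations that become faster and shallower as t approaches the critical time tc.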

This pattern got media coverage a couple of years ago when it predicted a drop in the Chinese equity market. I thought the following quote helped explain what the pattern is trying to model in terms of underlying agent dynamics.


Chinese Equity Bubble: Ready To Burst

Patterns emerging from complexity

Due to their very nature, financial markets consist of interacting players that are connected in a network structure. These interacting players, often referred to as interacting agents, are continuously influencing each other. In the scientific literature it is said that such a system is subject to non-linear dynamics. Modeling such a system in full detail is practically impossible. That is why the long-term behavior of the global economy or the weather is quite hard to predict.

Recent research, however, has provided new tools to analyze complex non-linear systems without having to go through the simulation of all underlying interactions. When interacting agents are playing in a hierarchical network structure very specific emerging patterns arise. Let us clarify this with an example. After a concert the audience expresses its appreciation with applause. In the beginning, everybody is handclapping according to their own rhythm. The sound is like random noise. There is no imminence of collective behavior. This can be compared to financial markets operating in a steady-state where prices follow a random walk. All of a sudden something curious happens. All randomness disappears; the audience organizes itself in a synchronized regular beat, each pair of hands is clapping in unison. There is no master of ceremony at play. This collective behaviour emanates endogenously. It is a pattern arising from the underlying interactions. This can be compared to a crash. There is a steady build-up of tension in the system (like with an earthquake or a sand pile) and without any exogenous trigger a massive failure of the system occurs. There is no need for big news events for a crash to happen.

I don’t want to claim this pattern has magical powers in predicting the market, but it is interesting to watch. Is it nothing more than an acceleration in trend? Is this the same as a rising wedge formation?

In my experience, rising wedge formations have an equal tendency to break upwards or downwards, making the pattern equally bullish and bearish. However, that is when looking at individual stocks. In an index, this level of exponential acceleration is unlikely. We shall see what happens….

Update: John Hussman wrote his weekly market commentary and provided some great quotes regarding the LPPL:

It’s important to begin this section clearly: I don’t believe that markets obey math. Markets are complex, adaptive, behavioral systems that reflect the combined behavior and feedback between an enormous number of participants. At the same time, I strongly believe that the results of those interactions often take on observable patterns, and part of the job of investors is to recognize and understand those patterns.

Another pattern that we’ve trained ourselves to identify, with some concern, is an emerging tendency toward increasingly immediate attempts by investors to buy every dip in the market. This tendency reflects a broadening consensus among investors that there is no direction other than up, and that any correction, however small, is a buying opportunity. As investors clamor to buy ever smaller dips at increasing frequency, the slope of the market’s advance becomes diagonal or parabolic. This is one of the warning signs of a bubble. It does not require much of a “catalyst” for these bubbles to burst, other than the retreat of some investors from the unanimous consensus that buying every dip is an act of genius.

Back in July 2008, I observed this dynamic in the parabolic ramp of oil prices, writing “Geek’s Rule o’ Thumb: When you have to fit a sixth-order polynomial to capture price history because exponential growth is too conservative, you’re probably close to a peak” (see The Outlook for Inflation and the Likelihood of $60 Oil).

Indeed, the closest way to describe the price dynamics of oil at the time was to think in terms of a “log-periodic bubble” as described by Didier Sornette. The essential feature here isn’t precision in the fit between the log-periodic wave and the actual price, but rather the tendency of prices to experience a series of increasingly frequent but shallower dips, ending in a nearly uncorrected upward ramp in which virtually every dip is purchased as soon as it emerges. Again, I don’t believe that markets follow math, and Sornette’s approach shouldn’t be taken as implying such precision. For my part, the key feature of log-periodic bubbles is the tendency toward those increasingly frequent and shallow corrections, as investors buy dips with accelerating urgency, ending in a diagonal or parabolic ramp that I’ve identified with the yellow oval. That uncorrected binge at the end of mature, overbought, overbullish advances is a hallmark of bubbles.

Source: Hussman Funds

John Hussman goes on to provide some great examples of the LPPL with charts. You should definitely check out his article.

From Financial Turbulence to Correlation Surprise

Systematic Investor did a great post using the Mahalanobis distance to calculate a measure of financial turbulence, based on the paper Skulls, Financial Turbulence, and Risk Management by Mark Kritzman and Yuanzhen Li.

According to Wikipedia:

In statistics, Mahalanobis distance is a distance measure introduced by P. C. Mahalanobis in 1936. It is based on correlations between variables by which different patterns can be identified and analyzed. It gauges similarity of an unknown sample set to a known one. It differs from Euclidean distance in that it takes into account the correlations of the data set and is scale-invariant. In other words, it is a multivariate effect size.
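Base R computes this directly via mahalanobis(). A quick toy example of the "decoupling of correlated assets" idea: for two strongly correlated variables, a point that moves against the correlation is far more unusual than an equally (Euclidean-)distant point that moves with it. The data here are simulated.

```r
set.seed(42)
x <- rnorm(1000)
X <- cbind(a = x + rnorm(1000, sd = 0.3),   # two strongly correlated variables
           b = x + rnorm(1000, sd = 0.3))

mu <- colMeans(X); S <- cov(X)
with_corr    <- mahalanobis(c(2,  2), mu, S)  # large move, but with the correlation
against_corr <- mahalanobis(c(2, -2), mu, S)  # same size move, but the pair decouples
against_corr > with_corr                      # TRUE: decoupling is far more unusual
```

A Euclidean distance would score both points identically; the Mahalanobis distance flags the decoupling, which is exactly what the turbulence measure exploits.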

Another useful turbulence measure can be calculated by decomposing the Mahalanobis distance into both a correlation part and magnitude part. This concept was the basis of the paper Correlation Surprise by Will Kinlaw and David Turkington. The authors go on to explain:

Kritzman and Li (2010) introduced what is perhaps the first measure to capture the degree of multivariate asset price “unusualness” through time. Their financial turbulence score spikes when asset prices “behave in an uncharacteristic fashion, including extreme price moves, decoupling of correlated assets, and convergence of uncorrelated assets.” We extend Kritzman and Li’s study by disentangling the volatility and correlation components of turbulence to derive a measure of correlation surprise.

Systematic Investor created the turbulence indicator for G10 currencies, so I’ll borrow that base code to get us started and make a few modifications along the way. Going back to the Correlation Surprise paper, the authors describe how to create the indicators. I’ll also highlight the specific R code that does each calculation:

To review, we compute the following quantities to calculate correlation surprise:
1. Magnitude surprise: a “correlation-blind” turbulence score in which all off-diagonals in the covariance matrix are set to zero.

magnitude[i] = mahalanobis(ret[i,], colMeans(temp), diag(cov(temp))*diag(n))

2. Turbulence score: the degree of statistical unusualness across assets on a given day, as given in Equation 1.

turbulence[i] = mahalanobis(ret[i,], colMeans(temp), cov(temp))

3. Correlation surprise: the ratio of turbulence to magnitude surprise, using the above quantities (2) and (1), respectively.

correlation = turbulence / magnitude
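Putting the three quantities together on simulated returns (the data and the one-year window here are illustrative):

```r
set.seed(1)
ret  <- matrix(rnorm(252 * 4, 0, 0.01), ncol = 4)  # one year of returns, 4 assets
temp <- ret[1:251, ]                                # trailing estimation window
n    <- ncol(ret)

# full covariance vs. diagonal-only (off-diagonals zeroed) covariance
turbulence  <- mahalanobis(ret[252,], colMeans(temp), cov(temp))
magnitude   <- mahalanobis(ret[252,], colMeans(temp), diag(cov(temp)) * diag(n))
correlation <- turbulence / magnitude               # correlation surprise
```

Note that diag(cov(temp)) * diag(n) builds the diagonal matrix of variances, which is what "setting all off-diagonals to zero" means in practice.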

The full code is below:

###############################################################################
# Load Systematic Investor Toolbox (SIT)
# http://systematicinvestor.wordpress.com/systematic-investor-toolbox/
###############################################################################
setInternet2(TRUE)  # Windows-only (removed in recent versions of R); skip on other platforms
con = gzcon(url('http://www.systematicportfolio.com/sit.gz', 'rb'))
source(con)
close(con)

#*****************************************************************
# Load historical data
#******************************************************************
load.packages('quantmod')

fx = get.G10()
nperiods = nrow(fx)
n = ncol(fx)

#*****************************************************************
# Rolling estimate of the Correlation Surprise for G10 Currencies
#******************************************************************
turbulence = fx[,1] * NA
magnitude = fx[,1] * NA
correlation = fx[,1] * NA
ret = coredata(fx / mlag(fx) - 1)

look.back = 252

for( i in (look.back+1) : nperiods ) {
  temp = ret[(i - look.back + 1):(i-1), ]

  # Mahalanobis distance of today's returns: full covariance (turbulence)
  # vs. diagonal-only covariance (magnitude surprise)
  turbulence[i] = mahalanobis(ret[i,], colMeans(temp), cov(temp))
  magnitude[i] = mahalanobis(ret[i,], colMeans(temp), diag(cov(temp))*diag(n))

  if( i %% 200 == 0) cat(i, 'out of', nperiods, '\n')
}

correlation = turbulence / magnitude

Next, we’ll create some charts to visualize a 20-day moving average of the indicators.

layout(c(1,2))
plota(EMA(correlation, 20), type = 'l',col = 'red',main='Correlation Surprise')
plota(EMA(magnitude, 20), type = 'l',col = 'blue', main='Magnitude Surprise')

Perhaps in a future post we’ll look at backtesting this analysis technique to determine its merit in trading.

Visualizing DeMark indicators in Matlab

The intrigue of a friend and the MTA article by James Brodie led me to a closer examination of the profitability of DeMark counts.

I can’t believe how easy it has gotten to load data from Yahoo into Matlab and run tests. Choosing from daily, weekly, or monthly data sources allows for a multiple-time-frame look at the profitability of DeMark. I won’t reveal any secrets (stats) in this post, but will simply post the visualizations I was able to create and some thoughts on further testing. Below you can see the TDST (Tom DeMark Setup Trend) initiation point (for a buy this would be C > C[-4] AND C[-1] < C[-4]). It certainly marks some interesting points in the market.
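The Matlab stats stay private, but the core setup count is easy to sketch. Here is a minimal version in R (used for consistency with the earlier sections): a buy setup counts consecutive closes below the close four bars earlier, completing at nine. This is a simplified sketch, not DeMark's full rule set.

```r
# Count consecutive bars with Close < Close[-4]; a run of 9 completes
# a TD buy setup. Simplified sketch only, not DeMark's full rules.
td.setup.count <- function(close, lag = 4) {
  cond  <- c(rep(FALSE, lag), close[-(1:lag)] < head(close, -lag))
  count <- integer(length(close))
  for (i in 2:length(close))
    count[i] <- if (cond[i]) count[i - 1] + 1 else 0
  count
}

set.seed(7)
px     <- 100 + cumsum(rnorm(100))  # stand-in price series
counts <- td.setup.count(px)
which(counts == 9)                  # bars completing a 9-count buy setup
```

Storing the full `counts` vector (rather than only flagging 9s) is what would make the PercentRank idea below testable.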

Here we can picture the full TD Sequential setup with its countup:

[Figure: TD Sequential setup and countup]

Source: Impressive Signals from DeMark

In my own analysis you can visually see where the initiation and profit taking points are for the Weekly and Monthly time frames:

Weekly DeMark:
[Figure: weekly chart with DeMark counts]

Monthly DeMark:
[Figure: monthly chart with DeMark counts]

Some additional thoughts:
– How sensitive is the whole system to the bar count in the TD Price Flip?
– Should we store all counts and use PercentRank to optimize the target profit-taking level instead of relying on 9’s and 13’s?
– Test different thresholds for the Close > Close[-4] rule for the count. Why 4 bars? Why not greater than previous lows or lower than previous highs?
– Instead of Daily/Weekly/Monthly multiple-time-frame analysis, why not use Kase synthetic rolling bars to test various lookbacks?

Unfortunately we only have so much time to test, but it would be interesting to see the results from tests like these.