Looks like Didier Sornette has a new pre-print out on the arXiv. I’ve only had a minute or two to scan the paper, but it looks like they’ve slightly modified their JLS model to fit the repo market and measure the “bubbliness” of leverage. They claim this gives them some successful predictions, and they make sure the reader connects it to the recent chatter at the Federal Reserve and in Dodd-Frank about “detecting” bubbles or crises.

Abstract: Leverage is strongly related to liquidity in a market and lack of liquidity is considered a cause and/or consequence of the recent financial crisis. A repurchase agreement is a financial instrument where a security is sold simultaneously with an agreement to buy it back at a later date. The size of the repurchase agreement (repo) market is a very important element in calculating the overall leverage in a financial market. Therefore, studying the behavior of the repo market size can help to understand a process that can contribute to the birth of a financial crisis. We hypothesize that herding behavior among large investors led to massive over-leveraging through the use of repos, resulting in a bubble (built up over the previous years) and subsequent crash in this market in early 2008. We use the Johansen-Ledoit-Sornette (JLS) model of rational expectation bubbles and behavioral finance to study the dynamics of the repo market that led to the crash. The JLS model qualifies a bubble by the presence of characteristic patterns in the price dynamics, called log-periodic power law (LPPL) behavior. We show that there was significant LPPL behavior in the market before that crash and that the range of times predicted by the model for the end of the bubble is consistent with the observations.

Citation: W. Yan, R. Woodard, D. Sornette. Leverage Bubble. arXiv:1011.0458.
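
For readers who haven’t seen the LPPL signature before: as I recall from the earlier JLS papers (I haven’t checked this pre-print’s exact parameterization, which fits the repo market size rather than a price), the canonical form models the log-price as

$$ \ln p(t) = A + B (t_c - t)^m + C (t_c - t)^m \cos\big( \omega \ln(t_c - t) - \phi \big), $$

where t_c is the critical time marking the most probable end of the bubble, 0 < m < 1, and B < 0 during a bubble. The oscillations, whose frequency in ln(t_c - t) is set by omega, accelerate as t approaches t_c; that is the “log-periodic” part of the signature, and the (t_c - t)^m envelope is the “power law” part.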

I also noticed that two of the EPS figures didn’t make it through arXiv’s compilation, so I’ve uploaded them here.

I regularly publish papers on arXiv.org, an open-access archive for research in physics, math, computer science, and (recently) quantitative finance.  I also subscribe to digest updates on recently published research.

Edit: It’s unclear whether the in-sample issue actually affects the prediction or whether the normalization is only used to compare OF and GPOMS. Though the regressions are written using X_t and not Z_{X_t}, Figure 3 and its accompanying interpretation clearly compare the z-scores of $DJIA and their chosen signals.
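
To make the normalization concern concrete, here is a minimal sketch of the difference between a centered (symmetric) window z-score, which leaks future observations into each point, and a causal (trailing) alternative. This is my own illustration in pandas, not the authors’ code, and the window length k is just a placeholder:

```python
import pandas as pd

def symmetric_zscore(x: pd.Series, k: int) -> pd.Series:
    """Z-score each point against a window of k days on BOTH sides of t.

    This mirrors the paper's Z_{X_t} normalization as I read it, and it
    uses future data relative to t (the in-sample issue noted above).
    """
    window = 2 * k + 1
    mean = x.rolling(window, center=True).mean()
    std = x.rolling(window, center=True).std()
    return (x - mean) / std

def causal_zscore(x: pd.Series, k: int) -> pd.Series:
    """Z-score each point against only the trailing k+1 days up to and including t."""
    mean = x.rolling(k + 1).mean()
    std = x.rolling(k + 1).std()
    return (x - mean) / std

# Usage with a hypothetical series: mood = pd.Series(..., index=dates)
# z_sym = symmetric_zscore(mood, k=2)     # what the paper appears to do
# z_causal = causal_zscore(mood, k=2)     # what a real-time forecast could use
```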

I noticed an interesting paper hit the digest tonight: Twitter mood predicts the stock market.  Though I haven’t read it in detail, the paper suggests that sentiment analysis of Twitter feeds can be used to improve the prediction of market direction.  My quick scan found the analysis to be mostly out-of-sample, though it appears that the OpinionFinder (OF) and Google-Profile of Mood States (GPOMS) data are normalized with symmetric windows that do incorporate in-sample data.  However, the size of the reported improvement suggests to me that the sentiment analysis might still improve prediction even once this issue is corrected.  Below is the abstract and full citation:

“Behavioral economics tells us that emotions can profoundly affect individual behavior and decision-making. Does this also apply to societies at large, i.e., can societies experience mood states that affect their collective decision making? By extension is the public mood correlated or even predictive of economic indicators? Here we investigate whether measurements of collective mood states derived from large-scale Twitter feeds are correlated to the value of the Dow Jones Industrial Average (DJIA) over time. We analyze the text content of daily Twitter feeds by two mood tracking tools, namely OpinionFinder that measures positive vs. negative mood and Google-Profile of Mood States (GPOMS) that measures mood in terms of 6 dimensions (Calm, Alert, Sure, Vital, Kind, and Happy). We cross-validate the resulting mood time series by comparing their ability to detect the public’s response to the presidential election and Thanksgiving day in 2008. A Granger causality analysis and a Self-Organizing Fuzzy Neural Network are then used to investigate the hypothesis that public mood states, as measured by the OpinionFinder and GPOMS mood time series, are predictive of changes in DJIA closing values. Our results indicate that the accuracy of DJIA predictions can be significantly improved by the inclusion of specific public mood dimensions but not others. We find an accuracy of 87.6% in predicting the daily up and down changes in the closing values of the DJIA and a reduction of the Mean Average Percentage Error by more than 6%.”

Johan Bollen, Huina Mao, Xiao-Jun Zeng. Twitter mood predicts the stock market. arXiv:1010.3003.
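
For anyone who wants to poke at the Granger-causality piece of the abstract themselves, here is a minimal sketch under my own assumptions: statsmodels for the test, DJIA log-returns as a stand-in for the paper’s daily changes, and hypothetical djia and mood_z series. The SOFNN prediction step is not reproduced.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def mood_granger_pvalues(djia: pd.Series, mood_z: pd.Series, max_lag: int = 7) -> dict:
    """P-values for 'mood_z Granger-causes DJIA log-returns' at lags 1..max_lag.

    grangercausalitytests expects a two-column array and tests whether the
    second column helps predict the first.
    """
    returns = np.log(djia).diff()   # stand-in for the paper's daily DJIA changes
    data = pd.concat([returns, mood_z], axis=1).dropna().to_numpy()
    results = grangercausalitytests(data, maxlag=max_lag)
    return {lag: res[0]["ssr_ftest"][1] for lag, res in results.items()}

# Usage with hypothetical, date-aligned series:
# djia = pd.Series(..., index=dates); calm_z = pd.Series(..., index=dates)
# print(mood_granger_pvalues(djia, calm_z))   # e.g. test the "Calm" dimension
```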