Abstract: We introduce a multivariate GARCH-Copula model to describe the joint dynamics of overnight and daytime returns for multiple assets. The conditional mean and variance of individual overnight and daytime returns depend on their previous realizations through a variant of the GARCH specification, and two Student’s t copulas describe the joint distributions of the two return types respectively. We employ both constant and time-varying correlation matrices for the copulas; in the time-varying case the dependence structure of both returns depends on its previous values through a DCC specification. We estimate the model by a two-step procedure, where marginal distributions are estimated in the first step and copulas in the second. We apply our model to overnight and daytime returns of SPDR ETFs of nine major sectors and briefly illustrate its use in risk management and asset allocation. Our empirical results show higher means, lower variances, fatter tails and lower correlations for overnight returns than for daytime returns. Daytime returns are significantly negatively correlated with previous overnight returns. Moreover, daytime returns depend on previous overnight returns in both the conditional variance and the correlation matrix (through a DCC specification). Most of our empirical findings are consistent with the asymmetric information argument in the market microstructure literature. With respect to econometric modelling, our results show that a DCC specification for the correlation matrices of the t copulas significantly improves the fit of the data and enables the model to account for a time-varying dependence structure.

L. Kang, S. Babbs. Modelling Overnight and Daytime Returns Using a Multivariate GARCH-Copula Model. http://ssrn.com/abstract=1710799.
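
For readers who haven’t worked with DCC models, the time-varying correlation matrix that drives the copulas follows a recursion of roughly the form below. This is a textbook DCC sketch, not necessarily the authors’ exact specification; u_{t-1} denotes the standardized residuals and Q-bar their unconditional correlation.

```latex
% Generic DCC recursion for the copula correlation matrix (textbook form,
% not necessarily the paper's exact specification).
Q_t = (1 - \alpha - \beta)\,\bar{Q} + \alpha\, u_{t-1} u_{t-1}^{\top} + \beta\, Q_{t-1},
\qquad
R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2}
```

Here the non-negative parameters α and β satisfy α + β < 1, and R_t is the correlation matrix supplied to the Student’s t copula at time t.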

Here’s a paper out of the CabDyn group at Oxford from D. Fenn and M. Porter. Mason is also one of the leading researchers in network science, and their group has entered into a number of joint Ph.D./post-doctoral hires with the business school there. The result is a large number of interesting papers.  This particular paper investigates correlation matrices through RMT, which is exactly what my Quantitative Finance paper and recent working paper address.  Though they don’t examine their calculations in an applied context, the results provide an additional view into recent correlation dynamics.  Abstract and download below:

Abstract: We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We then characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007–2008 credit and liquidity crisis.

D. J. Fenn, M. A. Porter, S. Williams, M. McDonald, N. F. Johnson, N. S. Jones. Temporal Evolution of Financial Market Correlations. http://arxiv.org/abs/1011.3225.
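
If you want to reproduce the basic exercise, the core step is comparing the eigenvalues of the empirical correlation matrix against the Marchenko–Pastur upper edge for uncorrelated data of the same dimensions. Here’s a minimal Python sketch (the function and the returns array are my own placeholders, not the authors’ code):

```python
import numpy as np

def significant_components(returns):
    """Compare correlation-matrix eigenvalues with the Marchenko-Pastur
    upper edge for uncorrelated data of the same dimensions.

    `returns` is a T x N array of asset returns (a hypothetical input)."""
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)   # N x N correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)     # eigenvalues in ascending order
    lambda_max = (1.0 + np.sqrt(N / T)) ** 2    # Marchenko-Pastur upper edge (unit variance)
    deviating = eigvals[eigvals > lambda_max]   # structure beyond random-matrix noise
    return eigvals, eigvecs, lambda_max, deviating
```

Eigenvalues above the bound flag the handful of components that carry genuine correlation structure, which is essentially the part of the spectrum the paper goes on to analyze over time.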

Tonight, I’ll be clearing out a backlog of papers built up over the last few days. Here’s the first paper (not including mine) – Network-Based Modeling and Analysis of Systemic Risks in Banking Systems.  The paper is interesting for two reasons.  First, they model a banking system with an explicit network of bank-to-bank relationships and correlated bank balance sheets.  Second, they simulate their model to compare the ability of bank risk measures to predict contagion.  I’m not sure their LASER algorithm is actually any different from the Kleinberg HITS method applied to an edge-weighted network, but it does outperform the standard accounting measures in their simulations.  Abstract and download below:

Abstract: Preventing financial crises has become the concern of average citizens all over the world and the aspiration of academics from disciplines outside finance. In many ways, better management of financial risks can be achieved by more effective use of information in financial institutions. In this paper, we develop a network-based framework for modeling and analyzing systemic risks in banking systems by viewing the interactive relationships among banks as a financial network. Our research method integrates business intelligence (BI) and simulation techniques, leading to three main research contributions. First, by observing techniques such as the HITS algorithm used in estimating the relative importance of web pages, we discover a network-based analytical principle called the Correlative Rank-In-Network Principle (CRINP), which can guide an analytical process for estimating the relative importance of nodes in many types of networks beyond web pages. Second, based on the CRINP principle, we develop a novel risk estimation algorithm for understanding relative financial risks in a banking network, called Link-Aware Systemic Estimation of Risks (LASER), for the purpose of reducing systemic risk. To validate the LASER approach, we evaluate its merits by comparing it with conventional approaches such as the Capital Asset Ratio and Loan to Asset Ratio, and by simulating the effect of capital injection guided by the LASER algorithm. The simulation results show that LASER significantly outperforms the two conventional approaches in both predicting and preventing possible contagious bank failures. Third, we develop a novel method for effectively modeling one major source of bank systemic risk – correlated financial asset portfolios – as banking network links. Another innovative aspect of our research is that the simulation of systemic risk scenarios is based on real-world data from Call Reports in the U.S. In those scenarios, we observe that the U.S. banking system can sustain mild simulated economic shocks until the magnitude of the shock reaches a threshold. We suggest our framework can provide researchers with new methods and insights for developing theories about bank systemic risk. The BI algorithm, LASER, offers financial regulators and other stakeholders a set of effective tools for identifying systemic risk in the banking system and supporting decision making in systemic risk mitigation.

D. Hu, J. L. Zhao, Z. Hua, M. C. S. Wong. Network-Based Modeling and Analysis of Systemic Risks in Banking Systems. http://ssrn.com/abstract=1702467.
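
Since the comparison to HITS comes up above, here’s a minimal sketch of Kleinberg’s HITS iteration on an edge-weighted network, which is the baseline I have in mind. The interbank-exposure reading of the weights is my own illustration; the paper’s LASER algorithm may well differ in its details.

```python
import numpy as np

def weighted_hits(W, iters=100, tol=1e-9):
    """Kleinberg's HITS power iteration on an edge-weighted adjacency matrix W,
    where W[i, j] > 0 if node i points to node j (e.g. bank i's exposure to
    bank j -- a hypothetical encoding). Returns hub and authority scores."""
    n = W.shape[0]
    hubs = np.ones(n) / np.sqrt(n)
    auths = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        new_auths = W.T @ hubs                     # authorities: weighted in-links from good hubs
        new_auths /= np.linalg.norm(new_auths)
        new_hubs = W @ new_auths                   # hubs: weighted out-links to good authorities
        new_hubs /= np.linalg.norm(new_hubs)
        if np.allclose(new_hubs, hubs, atol=tol) and np.allclose(new_auths, auths, atol=tol):
            hubs, auths = new_hubs, new_auths
            break
        hubs, auths = new_hubs, new_auths
    return hubs, auths
```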

I’ve been offline for a few days wrapping up some contracts and academic work, but I wanted to highlight an exciting paper that Dan and I have been working on – Measuring the Complexity of Law: The United States Code.  This law review article is a thorough description of our method for measuring legal complexity and is the counterpart to A Mathematical Approach to the Study of the United States Code, recently published in Physica A.  Given the recent chatter about possible tax reform and simplification, Tax Code complexity may be a popular topic in the near future.  The paper isn’t public yet, but you can read the abstract below:

Abstract: The complexity of law is an issue relevant to all who study legal systems. In addressing this issue, scholars have taken approaches ranging from descriptive accounts to theoretical models. Preeminent examples of this literature include Long & Swingen (1987), McCaffery (1990), Schuck (1992), White (1992), Kaplow (1995), Epstein (1997), Kades (1997), Wright (2000), Holz (2007) and Bourcier & Mazzega (2007). Despite the significant contributions made by these and many other authors, a review of the literature demonstrates an overall lack of empirical scholarship.

In this paper, we address this empirical gap by focusing on the United States Code (“Code”). Though only a small portion of existing law, the Code is an important and representative document, familiar to both legal scholars and average citizens. In published form, the Code contains hundreds of thousands of provisions and tens of millions of words; it is clearly complex. Measuring this complexity, however, is not a trivial task. To do so, we borrow concepts and tools from a range of disciplines, including computer science, linguistics, physics, and psychology.

Our goals are two-fold. First, we design a conceptual framework capable of measuring the complexity of legal systems. Our conceptual framework is anchored to a model of the Code as the object of a knowledge acquisition protocol. Knowledge acquisition, a field at the intersection of psychology and computer science, studies the protocols individuals use to acquire, store, and analyze information. We develop a protocol for the Code and find that its structure, language, and interdependence primarily determine its complexity.

Second, having developed this conceptual framework, we empirically measure the structure, language, and interdependence of the Code’s forty-nine active Titles. We combine these measurements to calculate a composite measure that scores the relative complexity of these Titles. This composite measure simultaneously takes into account contributions made by the structure, language, and interdependence of each Title through the use of weighted ranks. Weighted ranks are commonly used to pool or score objects with multidimensional or nonlinear attributes. Furthermore, our weighted rank framework is flexible, intuitive, and entirely transparent, allowing other researchers to quickly replicate or extend our work. Using this framework, we provide simple examples of empirically supported claims about the relative complexity of Titles.

In sum, this paper posits the first conceptually rigorous and empirical framework for addressing the complexity of law. We identify structure, language, and interdependence as the conceptual contributors to complexity. We then measure these contributions across all forty-nine active Titles in the Code and obtain relative complexity rankings. Our analysis suggests several additional domains of application, such as contracts, treaties, administrative regulations, municipal codes, and state law.

D. M. Katz, M. J. Bommarito II. Measuring the Complexity of Law: The United States Code.
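
The weighted-rank pooling described in the abstract is easy to sketch. The toy version below uses equal placeholder weights and generic argument names; it illustrates the idea rather than reproducing the paper’s actual measurements:

```python
import numpy as np

def composite_rank(structure, language, interdependence, weights=(1/3, 1/3, 1/3)):
    """Weighted-rank composite across three component measures, with one raw
    measurement per Title (higher = more complex). The equal weights are
    placeholders, not the values used in the paper."""
    measures = [np.asarray(structure), np.asarray(language), np.asarray(interdependence)]
    ranks = [m.argsort().argsort() + 1 for m in measures]   # rank 1 = least complex on that measure
    return sum(w * r for w, r in zip(weights, ranks))       # higher composite = more complex Title
```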

Here’s one of those papers that you’d always meant to write.  In this case, I think I even suggested it on the blog once – if you have to use some parametric VaR/ES method, why not replace the 2-moment normal characterization of returns with its generalization, the 4-moment Johnson characterization?

Abstract: The Cornish-Fisher and Gram-Charlier expansions are tools often used to compute value at risk (VaR) in the context of skewed and leptokurtic return distributions. These approximations use the first four moments of the unknown target distribution to compute approximate quantile and distribution functions. A drawback of these approaches is the limited set of skewness and kurtosis pairs for which valid approximations are possible. We examine an alternative to these approaches with the use of the Johnson (1949) system of distributions, which also uses the first four moments as main inputs but is capable of accommodating all possible skewness and kurtosis pairs. Formulas for the expected shortfall are derived. The performance of the Cornish-Fisher, Gram-Charlier and Johnson approaches for computing value at risk and expected shortfall is compared and documented. The results reveal that the Johnson approach yields smaller approximation errors than the Cornish-Fisher and Gram-Charlier approaches when used with exact or estimated moments.

J.-G. Simonato. The performance of Johnson distributions for computing value at risk and expected shortfall. http://ssrn.com/abstract=1706409.
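
For context, the Cornish-Fisher benchmark the paper compares against maps the first four moments into an adjusted quantile. Here’s a minimal sketch of that benchmark using the standard textbook expansion (the Johnson-system alternative additionally requires matching the four moments to Johnson parameters, which I omit):

```python
from scipy.stats import norm

def cornish_fisher_var(mu, sigma, skew, excess_kurt, alpha=0.01):
    """Value at risk from the first four moments via the Cornish-Fisher
    expansion (textbook formula), reported as a positive loss number."""
    z = norm.ppf(alpha)                                   # Gaussian quantile, e.g. about -2.33 at 1%
    z_cf = (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * excess_kurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)          # skewness/kurtosis adjustment
    return -(mu + sigma * z_cf)
```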

Here’s a new paper from the Fed that tries to determine the actual effect of asset purchases on the yield curve.  Abstract below!

Abstract: Using a panel of daily CUSIP-level data, we study the effects of the Federal Reserve’s program to purchase $300 billion of U.S. Treasury coupon securities announced and implemented during 2009. This program represented an unprecedented intervention in the Treasury market and thus allows us to shed light on the price elasticities and substitutability of Treasuries, preferred-habitat theories of the term structure, and the ability of large-scale asset purchases to reduce overall yields and improve market functioning. We find that each purchase operation, on average, caused a decline in yields in the sector purchased of 3.5 basis points on the days when these purchases occurred (the “flow effect” of the program). In addition, the program as a whole resulted in a persistent downward shift in the yield curve of as much as 50 basis points (the “stock effect”), with the largest impact in the 10- to 15-year sector. The coefficient patterns generally support a view of segmentation or imperfect substitution within the Treasury market.

S. D’Amico, T. B. King. Flow and Stock Effects of Large-Scale Treasury Purchases. FEDS Working Paper No. 2010-52. http://ssrn.com/abstract=1702314.
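
The “flow effect” is essentially a panel regression of security-level yield changes on operation days against the amount of each security purchased. A deliberately stylized sketch, with hypothetical column names and a far simpler specification than the paper’s:

```python
import pandas as pd
import statsmodels.formula.api as smf

def flow_effect(panel: pd.DataFrame):
    """Stylized flow-effect regression on a CUSIP-by-day panel with columns
    cusip, yield_change_bp and amount_purchased (all hypothetical names)."""
    model = smf.ols("yield_change_bp ~ amount_purchased + C(cusip)", data=panel)
    return model.fit()   # coefficient on amount_purchased ~ per-operation yield impact
```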

Last week, I posted a zoomable visualization of the weekly market and sector performance and correlation.  People seem to find this image both useful and “cool,” so here is this week’s edition, with takeaways below:

  • Green, green, green (on the diagonal).  Other than healthcare (XLV), every sector was up at least 1%, and most were up well over 3%.
  • More green (off the diagonal).  Most sectors were strongly correlated with one another, with the exception of financials (XLF) and healthcare (XLV).  Healthcare, as noted above, significantly underperformed the market, trailing it by 2.5%.  The story with financials is the opposite – financials were up a whopping 6.8% this week, putting them over 3% ahead of the market.
  • Correlation was strongest between energy (XLE) and materials (XLB) at 99.5% and weakest between financials (XLF) and healthcare (XLV) at -21.8%; a sketch of the underlying calculation follows below.
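
For anyone curious, the numbers behind the figure are just weekly performance for each ETF and pairwise correlations of daily returns. A minimal sketch of that calculation (the price DataFrame is a hypothetical input, not the exact code behind the figure):

```python
import numpy as np
import pandas as pd

def weekly_summary(prices: pd.DataFrame):
    """Weekly performance and pairwise correlations of daily returns.
    `prices` holds one week of daily closes for the nine sector ETFs
    (columns such as XLB, XLE, XLF, ..., XLY) -- a hypothetical input."""
    daily_returns = np.log(prices / prices.shift(1)).dropna()
    weekly_performance = prices.iloc[-1] / prices.iloc[0] - 1.0  # the diagonal of the figure
    corr = daily_returns.corr()                                  # the off-diagonal cells
    return weekly_performance, corr
```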

By the way, this figure is produced with Python and cairo.  The code is fairly ugly and long, so I probably won’t release it unless there’s some demand.
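
That said, the basic drawing loop is short enough to sketch. Below is a bare-bones pycairo heatmap, not the actual script behind the figure (which also handles zooming and layout):

```python
import cairo

def draw_corr_heatmap(corr, labels, path="corr.png", cell=40):
    """Render an n x n correlation matrix as colored cells with pycairo:
    positive values shade toward green, negative toward red."""
    n = len(labels)
    size = cell * (n + 1)
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, size, size)
    ctx = cairo.Context(surface)
    ctx.set_source_rgb(1, 1, 1)                      # white background
    ctx.paint()
    for i in range(n):
        for j in range(n):
            c = max(-1.0, min(1.0, corr[i][j]))
            if c >= 0:
                ctx.set_source_rgb(1 - c, 1, 1 - c)  # green for positive correlation
            else:
                ctx.set_source_rgb(1, 1 + c, 1 + c)  # red for negative correlation
            ctx.rectangle(cell * (j + 1), cell * (i + 1), cell, cell)
            ctx.fill()
    ctx.set_source_rgb(0, 0, 0)
    ctx.set_font_size(12)
    for k, label in enumerate(labels):
        ctx.move_to(cell * (k + 1) + 4, cell - 8)    # column labels
        ctx.show_text(label)
        ctx.move_to(4, cell * (k + 1) + cell / 2)    # row labels
        ctx.show_text(label)
    surface.write_to_png(path)
```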

Data has driven finance forward in ways that theory alone could not have.  In recent years, a similar trend has developed in the study of law, especially within fields such as “law and economics” or “empirical legal studies.”  I find this “data-driven” approach to law fascinating (having been introduced to it by Dan), and have been active in a number of projects on the topic.

The paper below, which is forthcoming in the Virginia Tax Review (one of the top three tax reviews), is an excellent example of this trend. In it, we examine a number of aspects of the population of the Tax Court’s written decisions. Obtaining and analyzing these decisions required a significant degree of technical sophistication, and interpreting the results in their legal context has provided a number of insights. The abstract is below:

Abstract: What can empirical data tell us about the jurisprudence of United States Tax Court? Which sections of the Internal Revenue Code are most frequently cited and has recent tax legislation sparked change in the Tax Court’s decisions? This article presents an analysis of the citation practices of the United States Tax Court between 1990 and 2008. While previous citation studies focus on case-to-case citations, we modify this approach to focus on statutory citations, which better capture the nature of tax jurisprudence. By applying techniques from computer science, we collect and analyze more than 11,000 decisions and 244,000 statutory citations authored by the United States Tax Court between 1990 and 2008. Our approach includes both a static and longitudinal analysis of the most cited Internal Revenue Code sections. In addition, we carry out a network analysis of these case-to-statute citations to uncover patterns in citation practices, concept relationships, and legislative acts. This article answers the call for greater empiricism in tax scholarship and paves the way for future research on Tax Court jurisprudence.

M. J. Bommarito II, D. M. Katz, J. Isaacs-See. An Empirical Study of the Population of United States Tax Court Written Decisions (1990-2008). Forthcoming, Virginia Tax Review, 2010. http://ssrn.com/abstract=1441007.
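
For readers curious about the mechanics, the data-collection step boils down to extracting statutory references from decision texts and tallying them. A toy sketch, with an illustrative regular expression and inputs that stand in for the paper’s actual pipeline:

```python
import re
from collections import Counter

# Illustrative pattern for Internal Revenue Code section references;
# the extraction used in the paper is considerably more involved.
SECTION_RE = re.compile(r"(?:section|§)\s*(\d+[A-Za-z]?)", re.IGNORECASE)

def most_cited_sections(decision_texts, top_n=10):
    """Tally statutory citations across a collection of decision texts
    (a hypothetical list of strings) and return the most-cited sections."""
    counts = Counter()
    for text in decision_texts:
        counts.update(SECTION_RE.findall(text))
    return counts.most_common(top_n)
```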