Instead of posting papers separately, I’ve decided to transition to a weekly reading list format.  I’ll update this post over the course of the week, but here’s the initial list:

Tonight, I’ll be clearing out a backlog of papers built up over the last few days. Here’s the first paper (not including mine) – Network-Based Modeling and Analysis of Systemic Risks in Banking Systems. The paper is interesting for two reasons. First, they model a banking system with an explicit network of bank-to-bank relationships and correlated bank balance sheets. Second, they simulate their model to compare the ability of bank risk measures to predict contagion. I’m not sure their LASER algorithm is actually any different from the Kleinberg HITS method applied to an edge-weighted network (see the sketch after the citation below), but it does outperform the standard accounting measures in their simulations. Abstract and download below:

Abstract: Preventing financial crisis has become the concerns of average citizens all over the world and the aspirations of academics from disciplines outside finance. In many ways, better management of financial risks can be achieved by more effective use of information in financial institutions. In this paper, we developed a network-based framework for modeling and analyzing systemic risks in banking systems by viewing the interactive relationships among banks as a financial network. Our research method integrates business intelligence (BI) and simulation techniques, leading to three main research contributions in this paper. First, by observing techniques such as the HITS algorithm used in estimating relative importance of web pages, we discover a network-based analytical principle called the Correlative Rank-In-Network Principle (CRINP), which can guide an analytical process for estimating relative importance of nodes in many types of networks beyond web pages. Second, based on the CRINP principle, we develop a novel risk estimation algorithm for understanding relative financial risks in a banking network called Link-Aware Systemic Estimation of Risks (LASER) for purposes of reducing systemic risks. To validate the LASER approach, we evaluate the merits of the LASER by comparing it with conventional approaches such as Capital Asset Ratio and Loan to Asset Ratio as well as simulating the effect of capital injection guided by the LASER algorithm. The simulation results show that LASER significantly outperforms the two conventional approaches in both predicting and preventing possible contagious bank failures. Third, we developed a novel method for effectively modeling one major source of bank systemic risk – correlated financial asset portfolios – as banking network links. Another innovative aspect of our research is the simulation of systemic risk scenarios is based on real-world data from Call Reports in the U.S. In those scenarios, we observe that the U.S. banking system can sustain mild simulated economic shocks until the magnitude of the shock reaches a threshold. We suggest our framework can provide researchers new methods and insights in developing theories about bank systemic risk. The BI algorithm – LASER, offers financial regulators and other stakeholders a set of effective tools for identifying systemic risk in the banking system and supporting decision making in systemic risk mitigation.

D. Hu, J. L. Zhao, Z. Hua, M. C. S. Wong. Network-Based Modeling and Analysis of Systemic Risks in Banking Systems. http://ssrn.com/abstract=1702467.
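
As an aside, here’s roughly what I mean by “HITS on an edge-weighted network.” Below is a minimal sketch of Kleinberg’s hub/authority iteration with edge weights – the toy exposure matrix is made up, and this is emphatically not the paper’s LASER algorithm, just the baseline I’d want to compare it against:

```python
import numpy as np

def weighted_hits(W, n_iter=100, tol=1e-8):
    """Kleinberg's HITS on a weighted adjacency matrix W, where W[i, j] is the
    weight of the edge i -> j.  Returns (hub, authority) scores, each summing to 1."""
    n = W.shape[0]
    hub = np.ones(n) / n
    auth = np.ones(n) / n
    for _ in range(n_iter):
        new_auth = W.T @ hub            # authority: weighted sum of hubs pointing in
        new_auth /= new_auth.sum()
        new_hub = W @ new_auth          # hub: weighted sum of authorities pointed to
        new_hub /= new_hub.sum()
        done = (np.abs(new_auth - auth).sum() < tol and
                np.abs(new_hub - hub).sum() < tol)
        hub, auth = new_hub, new_auth
        if done:
            break
    return hub, auth

# Toy interbank exposure matrix: W[i, j] = amount bank i has lent to bank j.
W = np.array([[0.0, 5.0, 1.0],
              [2.0, 0.0, 4.0],
              [0.5, 3.0, 0.0]])
hub, auth = weighted_hits(W)
print("hub (lender) scores:        ", hub)
print("authority (borrower) scores:", auth)
```

If LASER reduces to something like this iteration on the interbank network, then the real contribution is probably in how the links (interbank lending plus correlated portfolios) are constructed rather than in the ranking step itself.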

I’ve been offline for a few days wrapping up some contracts and academic work, but I wanted to highlight an exciting paper that Dan and I have been working on – Measuring the Complexity of Law: The United States Code. This law review article is a thorough description of our method for measuring legal complexity and is the counterpart to A Mathematical Approach to the Study of the United States Code, recently published in Physica A. Given the recent chatter on possible tax reform and simplification, Tax Code complexity may be a popular topic in the near future. The paper isn’t public yet, but you can read the abstract below:

Abstract: The complexity of law is an issue relevant to all who study legal systems. In addressing this issue, scholars have taken approaches ranging from descriptive accounts to theoretical models. Preeminent examples of this literature include Long & Swingen (1987), McCaffery (1990), Schuck (1992), White (1992), Kaplow (1995), Epstein (1997), Kades (1997), Wright (2000), Holz (2007) and Bourcier & Mazzega (2007). Despite the significant contributions made by these and many other authors, a review of the literature demonstrates an overall lack of empirical scholarship.

In this paper, we address this empirical gap by focusing on the United States Code (“Code”). Though only a small portion of existing law, the Code is an important and representative document, familiar to both legal scholars and average citizens. In published form, the Code contains hundreds of thousands of provisions and tens of millions of words; it is clearly complex. Measuring this complexity, however, is not a trivial task. To do so, we borrow concepts and tools from a range of disciplines, including computer science, linguistics, physics, and psychology.

Our goals are two-fold. First, we design a conceptual framework capable of measuring the complexity of legal systems. Our conceptual framework is anchored to a model of the Code as the object of a knowledge acquisition protocol. Knowledge acquisition, a field at the intersection of psychology and computer science, studies the protocols individuals use to acquire, store, and analyze information. We develop a protocol for the Code and find that its structure, language, and interdependence primarily determine its complexity.

Second, having developed this conceptual framework, we empirically measure the structure, language, and interdependence of the Code’s forty-nine active Titles. We combine these measurements to calculate a composite measure that scores the relative complexity of these Titles. This composite measure simultaneously takes into account contributions made by the structure, language, and interdependence of each Title through the use of weighted ranks. Weighted ranks are commonly used to pool or score objects with multidimensional or nonlinear attributes. Furthermore, our weighted rank framework is flexible, intuitive, and entirely transparent, allowing other researchers to quickly replicate or extend our work. Using this framework, we provide simple examples of empirically supported claims about the relative complexity of Titles.

In sum, this paper posits the first conceptually rigorous and empirical framework for addressing the complexity of law. We identify structure, language, and interdependence as the conceptual contributors to complexity. We then measure these contributions across all forty-nine active Titles in the Code and obtain relative complexity rankings. Our analysis suggests several additional domains of application, such as contracts, treaties, administrative regulations, municipal codes, and state law.

D. M. Katz, M. J. Bommarito II. Measuring the Complexity of Law: The United States Code.
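
Since the paper isn’t public yet, here’s a purely hypothetical sketch of how a weighted-rank composite over structure, language, and interdependence measurements might be put together. The Titles, measurement values, and equal weights below are placeholders for illustration, not our actual data or weighting scheme:

```python
import numpy as np

# Hypothetical per-Title measurements on [0, 1] -- not the paper's actual data or scaling.
titles = ["Title 11", "Title 26", "Title 42"]
structure = np.array([0.40, 0.90, 0.70])        # e.g., size/depth of the section hierarchy
language = np.array([0.50, 0.80, 0.60])         # e.g., word count and word entropy
interdependence = np.array([0.30, 0.95, 0.80])  # e.g., cross-reference (citation) degree

def ranks(x):
    """Rank the values of x; 1 = least complex on that dimension."""
    return np.argsort(np.argsort(x)) + 1

weights = {"structure": 1 / 3, "language": 1 / 3, "interdependence": 1 / 3}  # assumed equal

composite = (weights["structure"] * ranks(structure)
             + weights["language"] * ranks(language)
             + weights["interdependence"] * ranks(interdependence))

for title, score in sorted(zip(titles, composite), key=lambda pair: -pair[1]):
    print(f"{title}: composite weighted rank = {score:.2f}")
```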

I’m going to assume that you’ve heard that the number is $600B in expansion, putting the total amount of purchases, including reinvestment, at just shy of $1T. Here are some of the more interesting excerpts from the official statement, with my emphasis added in bold:

Purchases associated with balance sheet expansion and those associated with principal reinvestments will be consolidated into one set of operations to be announced under the current monthly cycle. On or around the eighth business day of each month, the Desk will publish a tentative schedule of purchase operations expected to take place through the middle of the following month, as well as the anticipated total amount of purchases to be conducted over that period. The schedule will include a list of operation dates, settlement dates, security types to be purchased (nominal coupons or TIPS), the maturity date range of eligible issues, and an expected range for the size of each operation.

The Desk expects to conduct the November 4 and November 8 purchase operations that were announced on October 13, and it plans to publish its first consolidated monthly schedule on November 10 at 2:00 p.m.

Purchases will be conducted with the Federal Reserve’s primary dealers through a series of competitive auctions operated through the Desk’s FedTrade system. Consistent with current practices, the results of each operation will be published on the Federal Reserve Bank of New York’s website shortly after each purchase operation has concluded. In order to ensure the transparency of our purchase operations, the Desk will also begin to publish information on the prices paid in individual operations at the end of each monthly calendar period, coinciding with the release of the next period’s schedule.

Note that this means much more out-of-sample prediction may be possible for POMO in the future, due both to better prospective data release (the consolidated monthly schedules) and to greater detail in the released training data (prices paid in individual operations).

Looks like Didier Sornette has a new pre-print out on the arXiv. I’ve only had a minute or two to scan the paper, but it looks like they’ve slightly modified their JLS model to fit the repo market and measure the “bubbliness” of leverage. They claim this allows them to make some successful predictions, and they make sure the reader connects this to the recent chatter at the Federal Reserve and in Dodd-Frank about “detecting” bubbles or crises.

Abstract: Leverage is strongly related to liquidity in a market and lack of liquidity is considered a cause and/or consequence of the recent financial crisis. A repurchase agreement is a financial instrument where a security is sold simultaneously with an agreement to buy it back at a later date. Repurchase agreements (repos) market size is a very important element in calculating the overall leverage in a financial market. Therefore, studying the behavior of repos market size can help to understand a process that can contribute to the birth of a financial crisis. We hypothesize that herding behavior among large investors led to massive over-leveraging through the use of repos, resulting in a bubble (built up over the previous years) and subsequent crash in this market in early 2008. We use the Johansen-Ledoit-Sornette (JLS) model of rational expectation bubbles and behavioral finance to study the dynamics of the repo market that led to the crash. The JLS model qualifies a bubble by the presence of characteristic patterns in the price dynamics, called log-periodic power law (LPPL) behavior. We show that there was significant LPPL behavior in the market before that crash and that the predicted range of times predicted by the model for the end of the bubble is consistent with the observations.

Citation: W. Yan, R. Woodard, D. Sornette. Leverage Bubble. arXiv:1011.0458.

I also noticed that two of the EPS figures didn’t make it through arXiv’s compilation, so I’ve uploaded them here.
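
For readers who haven’t run into the JLS model before, the log-periodic power law (LPPL) that the paper fits to the repo market size has the following basic functional form. This is a minimal sketch with illustrative parameters, not the values fitted in the paper:

```python
import numpy as np

def lppl(t, tc, A, B, C, m, omega, phi):
    """Johansen-Ledoit-Sornette log-periodic power law (LPPL):
    ln p(t) = A + B*(tc - t)**m + C*(tc - t)**m * cos(omega * ln(tc - t) + phi),
    defined for t < tc, where tc is the critical time (the predicted end of the bubble)."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)

# Illustrative parameters only -- not the values fitted in the paper.
t = np.linspace(0.0, 0.99, 500)   # time axis, with critical time tc = 1.0
log_size = lppl(t, tc=1.0, A=10.0, B=-2.0, C=0.1, m=0.5, omega=8.0, phi=0.0)
```

In practice the parameters are estimated by nonlinear least squares (e.g., scipy.optimize.curve_fit), and the quantity of interest is tc, the predicted end of the bubble.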

I’m attaching a copy of the bargaining platform that was circulated by the leaders of the University of Michigan graduate student union. I don’t want to go into too much detail on my opinion of my fellow doctoral students, but I’d like to highlight how out of touch with reality these bargaining demands are in light of the current Michigan labor market:

  • Full child-care subsidy.
  • 3%, 3%, and 6% year-on-year wage increases for 2010, 2011, and 2012.
  • No cap on mental health care visits.
  • Two pairs of glasses per year.
  • No removal of student instructors for lack of English language proficiency.
  • 401K with employer matching.

In case you didn’t know, graduate student instructors and research assistants (myself included) already get the following compensation:

  • Full tuition waivers, worth between $20K (in-state) and $40K (out-of-state) per year, after tax.
  • Stipends ranging from $1,000 to $2,000 per month, some of which are tax-free.
  • Healthcare benefits that exceed average private sector benefits.
  • Access to a wide range of other University services.

That’s right, the demands above are in addition to this compensation that we already receive.

Go ahead and read the document itself below.  Make sure to soak these demands in while looking at the unemployment rate and per-capita income for Michigan.

GEO_UM_2010

Since I’m sick of hearing ZeroHedge purposefully misstate the empirical relationship between POMO and the equity market, I decided to put up the little figure below. It shows the performance of the S&P 500 (SPY) in solid black compared to two POMO strategies in dashed black and red (close-close and open-close, respectively).

Note that holding the market only on POMO days has not returned more than the buy-and-hold S&P 500 strategy year-to-date. The S&P 500 has returned 3.62% YTD (close-close, not including dividends, which puts the buy-and-hold strategy even further ahead), whereas the open-close and close-close strategies have returned -2.63% and 0.79%, respectively. These strategies do not even outperform the S&P 500 on a risk-adjusted (Sharpe) basis. Furthermore, none of the regressions that were significant (p=0.05) in the 2005-2010 dataset are significant (p=0.1) in the ten months of data so far this year. In other words, though a relationship between the accepted-submitted proportion and return magnitude exists in the dataset as a whole, this relationship appears to have disappeared on the daily timescale. Sorry, Tyler(s).
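
For anyone who wants to reproduce the comparison, here’s a minimal sketch under obvious assumptions: the file names below are placeholders, the NY Fed publishes the list of POMO operation dates, and SPY opens and closes are available from any data vendor.

```python
import pandas as pd

# Hypothetical file names -- you need daily SPY opens/closes and the list of
# POMO operation dates (the NY Fed publishes the latter on its website).
spy = pd.read_csv("spy.csv", index_col=0, parse_dates=True)          # columns: Open, Close
pomo_dates = pd.read_csv("pomo_dates.csv", parse_dates=[0]).iloc[:, 0]

close_close = spy["Close"].pct_change().fillna(0.0)   # daily close-to-close return
open_close = spy["Close"] / spy["Open"] - 1.0         # intraday open-to-close return
on_pomo = spy.index.isin(pomo_dates)                  # True on POMO operation days

# Cumulative returns: buy-and-hold vs. holding the market only on POMO days.
buy_hold = (1.0 + close_close).cumprod() - 1.0
pomo_cc = (1.0 + close_close.where(on_pomo, 0.0)).cumprod() - 1.0
pomo_oc = (1.0 + open_close.where(on_pomo, 0.0)).cumprod() - 1.0

print(f"Buy-and-hold YTD:     {buy_hold.iloc[-1]:.2%}")
print(f"POMO close-close YTD: {pomo_cc.iloc[-1]:.2%}")
print(f"POMO open-close YTD:  {pomo_oc.iloc[-1]:.2%}")
```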

Since October has apparently been National Bash “Nobelist” Paul Krugman Month and I only have one more day left to get in on the action, here are my two cents on his column today, Accounting Identities.

OK, so here’s the bit:

To avoid all this, we’d need policies to encourage more spending. Fiscal stimulus on the part of financially strong governments would do it; quantitative easing can help, but only to the extent that it encourages spending by the financially sound, and it’s a little unclear what the process there is supposed to be.

Oh, and widespread debt forgiveness (or inflating away some of the debt) would solve the problem.

But what we actually have is a climate in which it’s considered sensible to demand fiscal austerity from everyone; to reject unconventional monetary policy as unsound; and of course to denounce any help for debtors as morally reprehensible. So we’re in a world in which Very Serious People demand that debtors spend less than their income, but that nobody else spend more than their income.

My understanding of this passage is that Krugman is arguing that we probably can’t avoid fundamental national accounting identities with austerity (UK) or indirect measures (QE1/2). However, his flippant suggestion in the second paragraph above is that forgiving debt (of consumer debtors, I assume) would solve the problem by freeing up these actors to spend. Many countries around the world, ours included, have demonstrated that property rights and contract enforcement are sometimes “flexible” in times of crisis. Debt forgiveness, or a debt-to-equity swap more generally, can be a reasonable tool if the rules for these credit events are determined a priori in a way that creditors can model.

Yet for someone like Krugman to suggest widespread debt forgiveness as an ex-post government policy seems like an incredible affront to property rights and contract enforcement, the two basic legal principles that have been empirically demonstrated to produce real growth around the world (on both sides of the autocratic/democratic scale, by the way). Krugman may have named his column “The Conscience of a Liberal,” but I’d love to see what he thinks wealth demographics would look like after consumer credit markets disappear.