Journal of Investment Strategies

Welcome to the second issue of The Journal of Investment Strategies. When we launched the journal in December 2011, the Editorial Board set out to create an outlet for diverse ideas ranging from methodology to practical applications, from portfolio management frameworks to specific strategy design, from the microstructure of markets to global macro views, and from risk management to principal investments, providing us with a window into cutting-edge research conducted both in academia and in the industry.

The second issue of the journal will not disappoint readers who are looking for all of these ideas. We selected four very diverse papers for this issue, each of which offers a unique insight into a particular topic of significant interest for investment managers, strategists and research analysts. I am certain that you will find them illuminating and useful, just as I did.

In the first paper of the issue, "Downside risk properties of foreign exchange and equity investment strategies", Jacob Gyntelberg and Andreas Schrimpf give an overview of widely used foreign exchange (FX) and equity strategies, focusing in particular on their tail risk characteristics. They begin with a classification of FX strategies, highlighting in particular the carry, momentum and yield-curve slope (or term-spread) strategies. There are, of course, many other FX strategies that are not included in the authors' study, particularly those that work with economic observables (such as purchasing power parity-based strategies) or those that combine cross-asset-class signals (such as combinations of FX and rate-derivatives markets, in addition to the FX spot and yield curves). Nevertheless, the three highlighted models are arguably the easiest to construct and are among the most broadly followed strategies (perhaps the latter has something to do with the former). For comparison, Gyntelberg and Schrimpf also consider some of the well-known systematic US equity-style strategies, specifically the Fama-French-Carhart long-short SMB (size), HML (value) and UMD (momentum) portfolio strategies, as well as a long-only equity portfolio invested in the broad US market (CRSP index). The main finding of the paper is that both the standard FX strategies and the long-short equity strategies carry significant unmitigated tail risks, and that those tail risks are large compared with the expected returns. This will not surprise many practitioners who have lived through periods of large drawdowns in these strategies. What is illuminating, however, is the thorough analysis of the strategy risk metrics (including volatility, value-at-risk and expected shortfall) and the contrast between the downside-sensitive and the tail-agnostic risk measures presented in the paper. I particularly like the analysis of the time it takes to recover from a drawdown event (defined here as a one-month tail event), which makes it clear that certain strategies can wipe out as much as a couple of years' worth of returns in a single bad month. Such analysis is, I believe, very useful for asset allocators, who might find it difficult to predict the return of these typical FX and equity strategies over the next couple of years but who would feel more comfortable answering the narrower question: "Given the market conditions, how frequently might we see market disruptions over the next few years?" If the answer to this question is once or twice a year, and if we believe that the analysis in the Gyntelberg and Schrimpf paper will still hold on average, then we know that those market disruption events will wipe out all of the returns that we hope to make over the next few years, perhaps resulting in a loss. This, in a nutshell, is the most relevant insight for any asset allocator, and it might help them to avoid potentially costly mistakes.
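
To make the recovery-time arithmetic concrete, here is a minimal sketch of the kind of tail-risk bookkeeping discussed above. It is my own illustration rather than the authors' methodology; the 95% confidence level and the synthetic monthly return series are purely illustrative assumptions.

```python
import numpy as np

def tail_risk_summary(monthly_returns, alpha=0.95):
    """Illustrative tail-risk metrics for a strategy's monthly returns.

    Not the authors' methodology -- just a generic sketch of the quantities
    discussed in the text: volatility, value-at-risk, expected shortfall,
    and how many months of average return a one-month tail loss wipes out.
    """
    r = np.asarray(monthly_returns, dtype=float)
    mean, vol = r.mean(), r.std(ddof=1)
    var = -np.quantile(r, 1.0 - alpha)           # loss exceeded with probability 1 - alpha
    tail = r[r <= -var]
    es = -tail.mean() if tail.size else var      # average loss beyond the VaR threshold
    months_to_recover = es / mean if mean > 0 else np.inf
    return {"mean": mean, "vol": vol, "VaR": var,
            "ES": es, "months_to_recover": months_to_recover}

# Purely synthetic example: a carry-like series with occasional crash months.
rng = np.random.default_rng(0)
r = 0.006 + 0.02 * rng.standard_normal(240)
r[rng.random(240) < 0.05] -= 0.08
print(tail_risk_summary(r))
```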

In the second paper, "Gauge invariance, geometry and arbitrage", Samuel Vázquez and Simone Farinelli present a very general framework for detecting arbitrage in financial markets and show a few examples of its practical application. Using analogies with quantum field theory in physics, they argue that the well-known numeraire invariance of financial models can be reinterpreted for the language of gauge invariance. Changing the unit of measurement (such as using British pounds instead of US dollars) should obviously not introduce any arbitrage if it did not already exist. On
On the other hand, this process would not hide arbitrage if it existed across certain assets. This simple observation is sufficient to make fairly far-reaching conclusions, as the authors of the paper show. The key is to note that the invariance with respect to the measuring unit can be recast as a gauge-transformation invariance, and to explicitly construct the so-called gauge connection, which is a differential geometric concept widely used in physics to describe similar transformation-invariant systems. The second crucial step is to note that, in analogy with differential geometric and physical settings, one can construct the curvature of the gauge connection, which is itself a gauge-invariant metric. Because it is gauge invariant (ie, independent of transformations of the numeraire), this metric unambiguously measures the presence of a nontrivial dependence of the value of the asset (or portfolio of assets) on the manner in which it is assembled: in other words, a potential arbitrage opportunity. For the case of underlying asset prices following a general Ito process, the authors derive the stochastic versions of the gauge connection and gauge curvature, and show that the latter leads to a modification of the Black-Scholes equation that only disappears if the curvature is precisely zero, ie, if the system is free of arbitrage. Remarkably, the modification of the Black-Scholes equation is nonlinear and does not reduce to a simple difference of drifts, as one would have expected based on conventional thinking. Perhaps the runaway solutions of this equation could even explain some bubble-like behavior when arbitrage is present? The authors must be commended for taking such a high-level topic and bringing it down to some very specific examples of practical application. They work through several examples, from an illustration of how simple volatility arbitrage manifests itself in their framework to a series of statistical arbitrage examples dealing with simulated and real financial time series. These latter examples show how the notion of arbitrage can be made quite precise, even when it applies to the notoriously noisy setting of trading across correlated equity indexes. It is quite telling that the analysis reveals much greater levels of "apparent arbitrage" in time series of equity indexes, which themselves are not tradable assets, and it is illuminating that the arbitrage almost disappears once the authors turn to equity index futures. They hint, however, that, even in the tradable futures market, it is possible to detect an arbitrage using their methods, although one must take more care in suppressing the noise in the estimation of the model parameters. The statistical arbitrage examples presented in the paper highlight the robust estimation of high-frequency correlations between asset returns as a key input to the arbitrage detection algorithm. It appears, at least to the casual reader, that this is the same type of statistic that one would normally need to estimate when following more conventional approaches to finding statistical arbitrage relations, such as cointegration and robust regression. I would hazard a guess that the positive arbitrage metric of Vázquez and Farinelli will, in this particular case, be equivalent to the presence of a set of cointegrating vectors (perhaps of the same dimensionality as the "null space" of gauge transformations described by the authors) among the asset price series.
This would sit well with the intuition that arbitrage means the presence of small transient price discrepancies between otherwise essentially similar assets. I would note, however, that the geometric framework presented in this paper seems substantially richer than a mere generalization of such "linear" statistical relationships, and in other applications it may lead to arbitrage metrics that are not detectable by previously known methods.
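
For readers who would like a concrete, if much simplified, picture of "curvature as arbitrage", the following sketch is my own toy illustration rather than the construction in the paper: on a triangle of FX rates, the log of the product of the rates around the loop plays the role of a discrete curvature. It vanishes exactly when no triangular arbitrage exists, and it is unchanged when the quoting unit of any one currency is rescaled, which is the gauge invariance discussed above.

```python
import numpy as np

def loop_curvature(eur_usd, usd_jpy, jpy_eur):
    """Discrete 'curvature' on a currency triangle (toy illustration only).

    log(eur_usd * usd_jpy * jpy_eur) is zero iff converting
    EUR -> USD -> JPY -> EUR returns exactly one unit, ie there is no
    triangular arbitrage. Rescaling the unit of any one currency multiplies
    one rate and divides another, so the loop product -- the gauge-invariant
    quantity -- is unchanged.
    """
    return np.log(eur_usd * usd_jpy * jpy_eur)

# A slightly inconsistent triangle: nonzero curvature ~ arbitrage (in log terms).
print(loop_curvature(1.10, 150.0, 1.0 / 164.5))   # ~ log(1.10 * 150 / 164.5) > 0
```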

In the third paper, "Advances in cointegration and subset correlation hedging methods", Marcos M. López de Prado and David Leinweber thoroughly analyze and classify the well-known methods for estimation of hedge ratios, presenting their strong and weak points, and propose a couple of novel methods that outperform the known ones from an efficiency and stability standpoint. Having formalized the hedging problem, they give a useful taxonomy of the hedging methods, dividing them into single-period ones, ie, those based on the analysis of hedging error distribution at the end of a single time period, usually obtained under assumptions of independent and identically distributed returns for the portfolio and the hedge, and multiperiod ones, ie, those based on an analysis of the dynamics of the hedging error and of the distribution of the cumulative error over multiple periods, allowing for serial correlations in the returns of assets and hedging errors. Among the single-period methods, they point out the notoriously unstable character of the most widely used approaches: namely, the ordinary least-squares regression of differences and its analytical counterpart, the minimum variance portfolio method. The authors argue that, within this class, the principal component analysis (PCA) method is often preferable due to greater robustness, but they also point out that it often lacks interpretability and suffers from instability of eigenvectors. (One application where this is not the case and the method is not only quantitatively efficient but also qualitatively clear and intuitive is the hedging of portfolios of related bonds of different maturity, where the PCA method actually reveals the hidden structure of the yield-curve movements.) The PCA method and the subsequent analysis of equal risk contribution and maximumdiversification ratio approaches to the hedging problem naturally lead the authors to highlight the importance of identifying and hedging the most important risk contributions, instead of simply having the objective of reducing the hedging error. They then propose a new method that tackles the shortcomings of PCA and the other methods and that explicitly defines the objective function in a manner that leads to well-behaved, balanced portfolios that do not contain any subsets that could have large residual correlations with the error of the hedge. This is the minimax subset correlation method, which, as the authors demonstrate on specific examples, is indeed superior to previously known methods, and achieves the error reduction in a clear and robust fashion. Among the multiperiod methods, most importantly they present the error correction method and the Box-Tiao canonical decomposition method, both of which are variants of the cointegration analysis. In particular, the authors generalize the Box-Tiao method to a multidimensional setting, which is directly applicable to multiasset portfolios. This analysis naturally leads the authors to the alternative specification of the objective function for solving the multiperiod hedging problem. Explicitly estimating the Dickey-Fuller statistic, which tests for the presence of a unit root in the hedging residual, allows to define the minimization of this statistic (and therefore minimization of the probability of having a randomly diverging hedging error) as a new, global objective function. The corresponding Dickey-Fuller optimal hedge ratios are shown to be more stable than the closely related error correction method results. 
The authors conclude that, among all the presented methods, three particularly stand out as being both efficient in reducing the hedging error and stable over time as the estimation sample changes. These are the single-period minimax subset correlation method and the multiperiod Box-Tiao canonical decomposition and Dickey-Fuller optimal methods. Obviously, since this is a research paper rather than an exhaustive review, the authors necessarily leave out some known methods. Two that I would highlight are the single-period method based on random matrix analysis of multiasset correlations, which adds an important stability layer over PCA, and the multiperiod methods based on the dynamic conditional correlations model, which explicitly describe the changing betas over time while still retaining a high degree of stability of behavior. I am certain that the paper by López de Prado and Leinweber will become a go-to reference for many practitioners, who will rely on their analysis in order to decide on the choice of hedging methods. (I know this will be the case for me!) The most important insight that readers will take from this paper, however, is that one must clearly define the desired characteristics of the hedge and associate them with an appropriate choice of objective function in order to obtain well-hedged portfolios.
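
To make the contrast between objective functions concrete, here is a small sketch of two of the ideas discussed above, applied to a synthetic cointegrated pair. It is my own illustration, not the authors' code: the "Dickey-Fuller optimal" ratio is approximated by a crude grid search, and a no-constant lag-one regression t-statistic stands in for the full Dickey-Fuller test used in the paper.

```python
import numpy as np

def ols_hedge_ratio(dp, dh):
    """Single-period hedge ratio: OLS of portfolio differences on hedge differences."""
    return np.dot(dh, dp) / np.dot(dh, dh)

def df_tstat(e):
    """t-statistic of phi in  delta e_t = phi * e_{t-1} + eps  (no-constant DF-style regression)."""
    de, lag = np.diff(e), e[:-1]
    phi = np.dot(lag, de) / np.dot(lag, lag)
    resid = de - phi * lag
    se = np.sqrt(resid.var(ddof=1) / np.dot(lag, lag))
    return phi / se

def df_optimal_hedge_ratio(p, h, grid=np.linspace(0.0, 2.0, 201)):
    """Grid-search the hedge ratio whose residual p - b*h looks most mean-reverting."""
    return min(grid, key=lambda b: df_tstat(p - b * h))

# Synthetic cointegrated pair: p tracks h plus a mean-reverting spread.
rng = np.random.default_rng(1)
h = np.cumsum(rng.standard_normal(1000))
spread = np.zeros(1000)
for t in range(1, 1000):
    spread[t] = 0.9 * spread[t - 1] + rng.standard_normal()
p = 1.3 * h + spread

print("OLS on differences :", ols_hedge_ratio(np.diff(p), np.diff(h)))
print("DF-optimal (grid)  :", df_optimal_hedge_ratio(p, h))
```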

In his discussion paper "Understanding risk-based portfolios" in the Investment Strategy Forum, Ryan Taliaferro presents a review of the increasingly popular alternative equity strategies that use the risk characteristics of stocks to form portfolios while essentially shunning the task of predicting returns. That such an approach can indeed lead to a meaningful portfolio choice should not surprise readers: after all, much of finance theory presents returns as compensation for risk and, therefore, as somehow dependent on risk. So it would seem that choosing stocks based on risk should be broadly in line with choosing them based on returns, assuming that the risk-return relationships hold over the time frame in which the portfolios are evaluated. What is much more surprising, however, is the evidence cited by Taliaferro that it is not the higher-risk stocks that win the race to be in investors' portfolios; on the contrary, it is the lower-risk ones. Basically, the argument is that the empirical evidence shows little correlation between the cross-section of stock risks and their prospective returns, so taking additional stock risk is not actually compensated. Hence, portfolios that are explicitly constructed to minimize volatility by choosing low-risk stocks will have higher risk-adjusted performance. Taliaferro shows that this has indeed been the case, and that these "minimum variance" (MV) portfolios have substantially outperformed, on a risk-adjusted basis, the other choices that are sometimes followed: equal risk contribution (ERC) portfolios and maximum diversification (MD) portfolios. While all three approaches are risk based, it appears that the MV methodology produces the clearest improvement in risk-adjusted returns, particularly over the past few years. Of course, these past years have included the 2008 crisis and the 2010-11 market perturbations, and perhaps this favors the MV portfolios over the ERC and MD ones. Remarkably, though, the MV portfolio does not seem to lose any ground compared with ERC and MD even during the strong market rebound in 2009. This convinces me that the effect is real and that there is much to be gained from using good risk estimates for picking stocks.
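
As a minimal illustration of the point that risk-based construction needs no return forecast, the textbook unconstrained minimum variance weights are proportional to the inverse covariance matrix applied to a vector of ones, so only a covariance estimate is required. The sketch below shows this generic formulation; it is not Taliaferro's specific methodology, and the three-stock covariance matrix is an illustrative assumption. Practical MV portfolios add long-only and concentration constraints and a regularized covariance estimator.

```python
import numpy as np

def min_variance_weights(cov):
    """Unconstrained, fully invested minimum variance weights: w ~ inv(Sigma) @ 1.

    A textbook sketch of the 'no return forecast needed' point in the text;
    no expected-return input appears anywhere in the calculation.
    """
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # solve Sigma w = 1 instead of inverting explicitly
    return w / w.sum()               # normalize to fully invested weights

# Illustrative 3-stock covariance: the lowest-volatility name gets the largest weight.
cov = np.array([[0.04, 0.01, 0.01],
                [0.01, 0.09, 0.02],
                [0.01, 0.02, 0.16]])
print(min_variance_weights(cov))
```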

On behalf of the Editorial Board, I would like to thank our contributing authors for their excellent papers, and our readers for their keen interest and feedback. I look forward to receiving more insightful contributions and to continuing to share them with an eager worldwide audience.
