2014, arXiv (Cornell University)
We provide a mathematical definition of fragility and antifragility as negative or positive sensitivity to a semi-measure of dispersion and volatility (a variant of negative or positive "vega") and examine the link to nonlinear effects. We integrate model error (and biases) into the fragile or antifragile context. Unlike risk, which is linked to psychological notions such as subjective preferences (hence cannot apply to a coffee cup), we offer a measure that is universal and concerns any object that has a probability distribution (whether such distribution is known or, critically, unknown). We propose a detection of fragility, robustness, and antifragility using a single "fast-and-frugal", model-free, probability-free heuristic that also picks up exposure to model error. The heuristic lends itself to immediate implementation, and uncovers hidden risks related to company size, forecasting problems, and bank tail exposures (it explains the forecasting biases). While simple to implement, it outperforms stress testing and other methods such as Value-at-Risk.
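As a rough illustration of fragility as negative sensitivity of the left tail to a rise in dispersion, the sketch below bumps the volatility of a shock and compares the deterioration of a tail semi-measure for a linear versus a concave exposure. The payoff functions and parameters are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def tail_mean(x, level=0.01):
    """Mean outcome over the worst `level` fraction of cases (a left-tail semi-measure)."""
    return np.sort(x)[: max(1, int(level * len(x)))].mean()

def dispersion_sensitivity(payoff, sigma, bump=0.10, n=200_000):
    """Compare the left tail of payoff(shock) when the shock's volatility is
    sigma versus sigma * (1 + bump). A tail that deteriorates faster than
    proportionally signals fragility (negative 'vega' in the abstract's sense)."""
    base = tail_mean(payoff(rng.normal(0.0, sigma, n)))
    bumped = tail_mean(payoff(rng.normal(0.0, sigma * (1 + bump), n)))
    return bumped - base

linear = lambda x: x                # tail worsens roughly in proportion to the bump
concave = lambda x: x - 5.0 * x**2  # accelerating harm: the fragile exposure

print(dispersion_sensitivity(linear, 0.02))   # small deterioration
print(dispersion_sensitivity(concave, 0.02))  # larger deterioration
```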
Risk analysis : an official publication of the Society for Risk Analysis, 2014
Nassim Taleb's antifragile concept has attracted considerable interest in the media and on the Internet recently. For Taleb, the antifragile concept is a blueprint for living in a black swan world (where surprising extreme events may occur), the key being to love variation and uncertainty to some degree, and thus also errors. The antonym of "fragile" is not robustness or resilience, but "please mishandle" or "please handle carelessly," using an example from Taleb when referring to sending a package full of glasses by post. In this article, we perform a detailed analysis of this concept, with a special focus on how the antifragile concept relates to common ideas and principles of risk management. The article argues that Taleb's antifragile concept adds an important contribution to the current practice of risk analysis by its focus on the dynamic aspects of risk and performance, and the necessity of some variation, uncertainties, and risk to ac...
Journal of Derivatives, 2015
Traditional risk modeling using Value-at-Risk (VaR) is widely viewed as ill equipped for dealing with tail risks. As a result, scenario-based portfolio stress testing is increasingly being promoted as central to the risk management process. A recent innovation in portfolio stress testing endorsed by regulators, called reverse stress testing, is intended to identify economic scenarios that will threaten a financial firm's viability, but do so without injecting the manager's cognitive biases into stress scenario specification. While the idea is intuitively appealing, no template has been provided to operationalize the idea. Some first steps in developing reverse stress testing approaches have begun to appear in the literature. Complexity and computational intensity appear to be important issues. A more subtle issue appearing in this emerging research is the relationship among the concepts of likelihood, plausibility, and representativeness. In this paper, we propose a novel method for reverse stress testing. The process starts with a multivariate normal distribution and uses Principal Components Analysis (PCA) along with Gram-Schmidt orthogonalization to determine scenarios leading to a specified loss level. The approach is computationally efficient. The method includes the maximum likelihood scenario, maximizes (a definition of) representativeness of the scenarios chosen, and measures the plausibility of each scenario. In addition, empirical results for sample portfolios show this method can provide new information beyond VaR and standard stress testing analyses.
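The paper's full procedure uses PCA and Gram-Schmidt orthogonalization to build a family of representative scenarios; the minimal sketch below recovers only the maximum likelihood scenario it includes, under the assumptions of multivariate normal factor shocks and a linearly approximated portfolio P&L. The covariance matrix, exposures, and loss level are illustrative.

```python
import numpy as np

def max_likelihood_scenario(cov, weights, target_loss):
    """Most likely factor scenario (smallest Mahalanobis distance) producing
    `target_loss` when P&L is approximated as weights @ x and the factor
    shocks are x ~ N(0, cov)."""
    sigma_w = cov @ weights
    scenario = -target_loss * sigma_w / (weights @ sigma_w)
    plausibility = np.sqrt(scenario @ np.linalg.solve(cov, scenario))
    return scenario, plausibility  # the distance gauges how (im)plausible the scenario is

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])      # illustrative two-factor covariance
w = np.array([1.0, 0.5])            # illustrative factor exposures
x_star, dist = max_likelihood_scenario(cov, w, target_loss=0.25)
print(x_star, dist)                 # check: w @ x_star == -0.25
```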
IMF Working Papers, 2012
This paper presents a simple heuristic measure of tail risk, which is applied to individual bank stress tests and to public debt. Stress testing can be seen as a first order test of the level of potential negative outcomes in response to tail shocks. However, the results of stress testing can be misleading in the presence of model error and the uncertainty attending parameters and their estimation. The heuristic can be seen as a second order stress test to detect nonlinearities in the tails that can lead to fragility, i.e., provide additional information on the robustness of stress tests. It also shows how the measure can be used to assess the robustness of public debt forecasts, an important issue in many countries. The heuristic measure outlined here can be used in a variety of situations to ascertain an ordinal ranking of fragility to tail risks. JEL Classification Numbers: G10, G20, G21
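A minimal sketch of such a second-order stress test: perturb the baseline shock up and down and compare the average stressed loss with the baseline loss, flagging convex (accelerating) losses in the tail. The loss functions and shock sizes below are assumed for illustration, not taken from the paper.

```python
def fragility_heuristic(loss_fn, shock, delta):
    """Second-order stress test: compare the average loss under shocks
    (shock - delta) and (shock + delta) with the loss at `shock`. A positive
    gap means losses accelerate in the tail (fragility); roughly zero means
    the exposure is locally robust to the shock size."""
    outer = 0.5 * (loss_fn(shock - delta) + loss_fn(shock + delta))
    return outer - loss_fn(shock)

# Illustrative loss functions, assumed for the example
linear_losses = lambda s: 10.0 * s
convex_losses = lambda s: 10.0 * s + 40.0 * s**2

print(fragility_heuristic(linear_losses, shock=0.10, delta=0.05))  # ~0: robust
print(fragility_heuristic(convex_losses, shock=0.10, delta=0.05))  # >0: fragile
```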
SSRN Electronic Journal
In this study we empirically explore the capacity of historical VaR to correctly predict the future risk of a financial institution. We observe that rolling samples are better able to capture the dynamics of future risks. We thus introduce another risk measure, the Sample Quantile Process, which is a generalization of the VaR calculated on a rolling sample, and study its behavior as a predictor by varying its parameters. Moreover, we study the behavior of future risk as a function of past volatility. We show that if past volatility is low, the historical computation of the risk measure underestimates the future risk, while in periods of high volatility, the risk measure overestimates the risk, confirming that the current way financial institutions measure their risk is highly procyclical.
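The Sample Quantile Process generalizes historical VaR computed on a rolling sample by varying its parameters; the sketch below shows only the simplest ingredient, a rolling empirical quantile, on simulated returns with a volatility regime change. Window length, quantile level, and data are illustrative.

```python
import numpy as np

def rolling_var(returns, window=250, level=0.01):
    """Historical VaR recomputed on a rolling sample: the empirical `level`
    quantile of the last `window` returns, reported as a positive loss."""
    returns = np.asarray(returns)
    out = np.full(len(returns), np.nan)
    for t in range(window, len(returns)):
        out[t] = -np.quantile(returns[t - window:t], level)
    return out

# Simulated returns with a shift from a calm to a turbulent regime
rng = np.random.default_rng(1)
rets = np.concatenate([rng.normal(0, 0.01, 500), rng.normal(0, 0.03, 500)])
var_series = rolling_var(rets)
print(np.nanmean(var_series[:500]), np.nanmean(var_series[500:]))  # the measure lags the regime change
```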
Annals of Finance, 2006
Oriol Aspachs-Bracons is a PhD student in Economics at the London School of Economics and a member of the FMG. He would like to acknowledge the Fundacion Rafael Del Pino for their financial support. Charles A.E. Goodhart is Norman Sosnow Professor of Banking and Finance at the London School of Economics. He is also the Deputy Director of the Financial Markets Group Research Centre, and an advisor to the Governor at the Bank of England. Dimitrios P. Tsomocos is University Lecturer in Management Science (Finance), Said Business School and Fellow of St. Edmund Hall, University of Oxford. He is also a senior research associate at the Financial Markets Group and a consultant at the Bank of England. Lea Zicchino is an Economist at the Bank of England, Financial Industry and Regulation Division. She has a PhD in Finance and Economics (Financial structure and economic activity under asymmetric information) from Columbia Business School, New York. Any opinions expressed here are those of the authors and not necessarily those of the FMG. The research findings reported in this paper are the result of the independent research of the authors and do not necessarily reflect the views of the LSE.
Economic Theory, 2006
This paper sets out a tractable model which illuminates problems relating to individual bank behaviour, to possible contagious inter-relationships between banks, and to the appropriate design of prudential requirements and incentives to limit 'excessive' risk-taking. Our model is rich enough to include heterogeneous agents, endogenous default, and multiple commodity, credit, and deposit markets. Yet, it is simple enough to be effectively computable and can therefore be used as a practical framework to analyse financial fragility. Financial fragility in our model emerges naturally as an equilibrium phenomenon. Among other results, a non-trivial quantity theory of money is derived, liquidity and default premia co-determine interest rates, and both regulatory and monetary policies have non-neutral effects. The model also indicates how monetary policy may affect financial fragility, thus highlighting the trade-off between financial stability and economic efficiency.
SSRN Electronic Journal, 2004
The Value-at-Risk (VaR) measure is based on only the second moment of a rates of return distribution. It is an insufficient risk performance measure, since it ignores both the higher moments of the pricing distributions, like skewness and kurtosis, and all the fractional moments resulting from the long-term dependencies (long memory) of dynamic market pricing. Not coincidentally, the VaR methodology also devotes insufficient attention to the truly extreme financial events, i.e., those events that are catastrophic and that are clustering because of this long memory. Since the usual stationarity and i.i.d. assumptions of classical asset returns theory are not satisfied in reality, more attention should be paid to the measurement of the degree of dependence to determine the true risks to which any investment portfolio is exposed: the return distributions are time-varying, and skewness and kurtosis occur and change over time. Conventional mean-variance diversification does not apply when the tails of the return distributions are too fat, i.e., when many more than normal extreme events occur. Regrettably, Extreme Value Theory is also not empirically valid, because it is based on the uncorroborated i.i.d. assumption. Acknowledgement: The author recently published a provocative Letter-to-the-Editor regarding the counterintuitive effects of long memory on portfolio diversification in Financial Engineering News, January/February 2005. This article explains why long memory is such a crucial phenomenon and why portfolio managers ignore it at their peril.
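A minimal diagnostic in the spirit of this critique (not the author's analysis): sample skewness, excess kurtosis, and the autocorrelation of absolute returns as a rough long-memory signature, computed here on simulated heavy-tailed data.

```python
import numpy as np

def tail_diagnostics(returns, lags=(1, 5, 20, 60)):
    """Features the abstract says VaR ignores: sample skewness, excess kurtosis,
    and the autocorrelation of absolute returns (slow decay on real data is a
    rough long-memory signature)."""
    r = np.asarray(returns)
    z = (r - r.mean()) / r.std()
    skew = float(np.mean(z**3))
    excess_kurtosis = float(np.mean(z**4) - 3.0)
    a = np.abs(r) - np.abs(r).mean()
    abs_acf = {k: float(np.corrcoef(a[:-k], a[k:])[0, 1]) for k in lags}
    return skew, excess_kurtosis, abs_acf

rng = np.random.default_rng(2)
print(tail_diagnostics(rng.standard_t(df=3, size=5000) * 0.01))
```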
Journal of Statistical Mechanics: Theory and Experiment, 2008
We study the feasibility and noise sensitivity of portfolio optimization under some downside risk measures (Value-at-Risk, Expected Shortfall, and semivariance) when they are estimated by fitting a parametric distribution on a finite sample of asset returns. We find that the existence of the optimum is a probabilistic issue, depending on the particular random sample, in all three cases. At a critical combination of the parameters of these problems we find an algorithmic phase transition, separating the phase where the optimization is feasible from the one where it is not. This transition is similar to the one discovered earlier for Expected Shortfall based on historical time series. We employ the replica method to compute the phase diagram, as well as to obtain the critical exponent of the estimation error that diverges at the critical point. The analytical results are corroborated by Monte Carlo simulations.
International Journal of Central Banking, 2002
Mathematics, 2025
This study explores smoothing techniques to refine financial risk tolerance (FRT) data for the improved prediction of financial market indicators, including the Volatility Index and the S&P 500 ETF. Raw FRT data often contain noise and volatility, obscuring their relationship with market dynamics. Seven smoothing methods were applied to derive smoothed mean and standard deviation values, including exponential smoothing, ARIMA, and Kalman filtering. Machine learning models, including support vector machines and neural networks, were used to assess predictive performance. The results demonstrate that smoothed FRT data significantly enhance prediction accuracy, with the smoothed standard deviation offering a more explicit representation of investor risk tolerance fluctuations. These findings highlight the value of smoothing techniques in behavioral finance, providing more reliable insights into market volatility and investor behavior. Smoothed FRT data hold potential for portfolio optimization, risk assessment, and financial decision-making, paving the way for more robust applications in financial modeling.
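Of the seven smoothing methods listed, simple exponential smoothing is the easiest to sketch; the snippet below applies it to a simulated noisy risk-tolerance series. The smoothing constant and data are illustrative, and the study additionally uses ARIMA and Kalman filtering.

```python
import numpy as np

def exponential_smoothing(x, alpha=0.2):
    """Simple exponential smoothing: each value is a weighted average of the
    current observation and the previous smoothed value."""
    x = np.asarray(x, dtype=float)
    s = np.empty_like(x)
    s[0] = x[0]
    for t in range(1, len(x)):
        s[t] = alpha * x[t] + (1 - alpha) * s[t - 1]
    return s

# Simulated noisy risk-tolerance scores
rng = np.random.default_rng(3)
raw = 50 + np.cumsum(rng.normal(0, 0.5, 120)) + rng.normal(0, 3, 120)
smoothed = exponential_smoothing(raw)
print(np.std(np.diff(raw)), np.std(np.diff(smoothed)))  # smoothing damps the period-to-period noise
```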
SSRN Electronic Journal, 2000
Under the new capital accord, stress tests are to be included in market risk regulatory capital calculations. This development necessitates a coherent and objective framework for stress testing portfolios exposed to market risk. Following recent criticism of stress testing methods, our tests are conducted in the context of risk models, building on the VaR literature. First, to identify the most suitable risk models for stress testing, we apply an extensive backtesting procedure that focuses on extreme market movements. We consider eight possible risk models including both conditional and unconditional models and four possible return distributions (normal, Student's t, empirical and normal mixture) applied to three heavily traded currency pairs using a sample of daily data spanning more than 20 years. Finding that risk models accommodating both volatility clustering and heavy tails are the most accurate predictors of extreme returns, we develop a corresponding model-based stress testing methodology. Our results are compared with traditional stress tests and we assess the implications for capital adequacy. On the basis of our results we conclude that the new recommendations for market risk regulatory capital calculation will have little impact on current levels of foreign exchange regulatory capital.
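A toy, unconditional version of the backtesting idea (the paper backtests eight conditional and unconditional risk models on twenty years of FX data): compare 99% VaR exceedance rates for a normal and a Student-t fit on heavy-tailed simulated returns.

```python
import numpy as np
from scipy import stats

def exceedance_rate(returns, var_forecast):
    """Fraction of observations whose loss exceeds the VaR forecast."""
    return float(np.mean(returns < -var_forecast))

# Heavy-tailed simulated FX-like returns
rets = stats.t.rvs(df=4, scale=0.006, size=5000, random_state=4)

normal_var = -stats.norm.ppf(0.01, loc=rets.mean(), scale=rets.std())
df_, loc_, scale_ = stats.t.fit(rets)
t_var = -stats.t.ppf(0.01, df_, loc=loc_, scale=scale_)

print(exceedance_rate(rets, normal_var))  # typically above the nominal 1%
print(exceedance_rate(rets, t_var))       # close to 1%: heavy tails matter
```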
Fractile Graphical Analysis was proposed by Prashanta Chandra Mahalanobis (Mahalanobis, 1960) in a series of papers and seminars as a method for comparing two distributions controlling for the rank of a covariate through fractile groups. Mahalanobis used a heuristic method of approximating the standard error of the dependent variable using fractile graphs from two independently selected "interpenetrating subsamples." We revisit the technique of fractile graphical analysis with some historical perspectives. We propose a new non-parametric regression method called Fractile Regression, where we condition on the ranks of the covariate, and compare it with existing regression techniques. We apply this method to predict mutual fund inflow distributions after conditioning on returns and to wage distributions after conditioning on educational qualifications. Finally, we investigate large and finite sample properties of fractile regression coefficients both analytically and through Monte Carlo simulations.
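The basic step behind fractile grouping can be sketched as follows: sort observations by the covariate, split them into fractile groups, and summarize the response within each group. This is only the grouping device; the paper's fractile regression and its interpenetrating-subsample standard errors go well beyond it, and the data below are simulated for illustration.

```python
import numpy as np

def fractile_means(x, y, n_groups=10):
    """Split observations into fractile (rank-based) groups of the covariate x
    and return the mean of y within each group."""
    order = np.argsort(x)
    groups = np.array_split(order, n_groups)
    return np.array([y[g].mean() for g in groups])

# A nonlinear relation recovered without assuming a functional form
rng = np.random.default_rng(5)
x = rng.lognormal(size=2000)
y = np.log1p(x) + rng.normal(0, 0.3, size=2000)
print(fractile_means(x, y))  # group means rise nonlinearly across the fractiles
```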
2001
"The possibility that market participants are developing a degree of complacency or a feeling that (risk management) technology has inoculated them against market turbulence is admittedly somewhat disquieting. Such complacency is not justified. In estimating necessary levels of risk capital, the primary concern should be to address those disturbances that occasionally do stress institutional solvency-the negative tail of the loss distribution that is so central to modern risk management. As such, the incorporation of stress scenarios into formal risk modeling would seem to be of first-order importance. However, the incipient art of stress testing has yet to find formalization and uniformity across banks and securities dealers. At present, most banks pick a small number of ad hoc scenarios as their stress tests. And although the results of the stress tests may be given to management, they are, to my knowledge, never entered into the formal risk modeling process." -Alan Greenspan [2000] W hile VaR models have proven themselves to be very useful risk management tools, recent financial debacles have also highlighted their limitations-in particular, their excessive dependency on history or unrealistic statistical assumptions. The natural response to these limitations is for firms to resort to stress tests to complement the results of their VaR analyses. Stress tests are exercises to determine the losses that might occur under unlikely but plausible circumstances, and there has been a dramatic increase in the importance given to stress testing since the east Asian crisis and the LTCM affair. Indeed, many firms and regulators now regard stress tests as no less important than VaR methods for assessing a firm's risk exposure.
2016
Author(s): Khanom, Najrin | Advisor(s): Chauvet, Marcelle; Ullah, Aman | Abstract: The theme of this dissertation is the risk and return modeling of financial time series. The dissertation is broadly divided into three chapters; the first chapter focuses on measuring risks and uncertainty in the U.S. stock market; the second on measuring risks of individual financial assets; and the last chapter on predicting stock return. The first chapter studies the movement of the S&P 500 index driven by uncertainty and fear that cannot be explained by economic fundamentals. A new measure of uncertainty is introduced, using the tone of news media coverage on the equity market and the economy; aggregate holding of safe financial assets; and volatility in S&P 500 options trading. Major contributions of this chapter include uncovering a significant non-linear relationship between uncertainty and changes in the business cycle. An increase in uncertainty is found to be associated with drastic but sho...
Relatórios COPPEAD, 2001
Accurate forecasting of risk is the key to successful risk management techniques. Given the fat-tailed characteristic of financial returns, the assumption that these returns can be modeled with the thin-tailed Gaussian distribution is inappropriate. In this paper a more accurate VaR estimate is tested using the "stable" or "α-stable" distribution, which allows for varying degrees of tail heaviness and varying degrees of skewness. Stable VaR measures are estimated and forecasted using the main Latin American stock market indexes. The results show that the stable modeling provides conservative 99% VaR estimates, while the normal VaR modeling significantly underestimates 99% VaR. The 95% VaR stable and normal estimates, using a window length of 50 observations, are satisfactory. However, increasing the window length to 125 and 250 observations worsens the stable and the normal VaR measurements.
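For intuition on why a stable model yields larger 99% VaR than a normal one, the sketch below compares the two after matching their interquartile ranges; the stable parameters are assumed for illustration rather than estimated from Latin American index returns as in the paper.

```python
from scipy import stats

# Assumed alpha-stable parameters (the paper estimates them from index returns);
# alpha < 2 means heavier-than-normal tails.
alpha, beta, scale = 1.7, 0.0, 0.01

# Calibrate a normal model to the same interquartile range, then compare 99% VaR.
stable_iqr = (stats.levy_stable.ppf(0.75, alpha, beta, scale=scale)
              - stats.levy_stable.ppf(0.25, alpha, beta, scale=scale))
sigma = stable_iqr / (stats.norm.ppf(0.75) - stats.norm.ppf(0.25))

stable_var = -stats.levy_stable.ppf(0.01, alpha, beta, scale=scale)
normal_var = -stats.norm.ppf(0.01, scale=sigma)
print(stable_var, normal_var)  # the stable model implies a materially larger 99% VaR
```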
2010
Portfolio risk estimation requires appropriate modeling of fat-tails and asymmetries in dependence in combination with a true downside risk measure. In this survey, we discuss computational aspects of a Monte-Carlo based framework for risk estimation and risk capital allocation. We review different probabilistic approaches focusing on practical aspects of statistical estimation and scenario generation. We discuss value-at-risk and conditional value-at-risk and comment on the implications of using a fat-tailed framework for the reliability of risk estimates.
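A minimal Monte-Carlo estimate of VaR and CVaR (conditional value-at-risk) on a fat-tailed scenario set; the Student-t P&L below stands in for the richer scenario-generation approaches the survey discusses.

```python
import numpy as np

def var_cvar(losses, level=0.99):
    """Monte-Carlo VaR and CVaR (expected shortfall) at the given confidence level."""
    losses = np.asarray(losses)
    var = np.quantile(losses, level)
    cvar = losses[losses >= var].mean()
    return var, cvar

# Fat-tailed scenario set: Student-t P&L, negated so that losses are positive
rng = np.random.default_rng(7)
losses = -rng.standard_t(df=3, size=100_000) * 0.01
print(var_cvar(losses))  # CVaR exceeds VaR, and more so the fatter the tail
```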
Physica A: Statistical Mechanics and its Applications, 2001
We analyze the performance of RiskMetrics, a widely used methodology for measuring market risk. Based on the assumption of normally distributed returns, the RiskMetrics model completely ignores the presence of fat tails in the distribution function, which is an important feature of financial data. Nevertheless, it has commonly been found that RiskMetrics performs satisfactorily, and therefore the technique has become widely used in the financial industry. We find, however, that the success of RiskMetrics is an artifact of the choice of the risk measure. First, the outstanding performance of volatility estimates is basically due to the choice of a very short (one-period ahead) forecasting horizon. Second, the satisfactory performance in obtaining Value-at-Risk by simply multiplying volatility with a constant factor is mainly due to the choice of the particular significance level.
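A bare-bones version of the mechanism the abstract describes: an EWMA volatility estimate (the RiskMetrics decay factor 0.94) multiplied by a normal quantile to give a one-day-ahead VaR. Data and initialization are illustrative, not part of the original study.

```python
import numpy as np
from scipy.stats import norm

def riskmetrics_var(returns, lam=0.94, level=0.01):
    """One-day-ahead RiskMetrics-style VaR: an EWMA variance estimate (decay
    factor lam) scaled by a normal quantile, i.e. volatility times a constant."""
    returns = np.asarray(returns)
    ewma_var = float(np.var(returns[:20]))   # crude initialization of the variance
    for r in returns:
        ewma_var = lam * ewma_var + (1 - lam) * r**2
    return -norm.ppf(level) * np.sqrt(ewma_var)

rng = np.random.default_rng(8)
rets = rng.standard_t(df=4, size=1000) * 0.008   # heavy tails the normal assumption ignores
print(riskmetrics_var(rets))
```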
2019
Analysing the world banking sector, we realize that traditional risk measurement methodologies no longer reflect the actual scenario with uncertainty and leave out events that can change the dynamics of markets. Considering this, regulators and financial institutions have begun to search for more realistic models. The aim is to include external influences and interdependencies between agents, to describe and measure the operationalization of these complex systems and their risks in a more coherent and credible way. Within this context, X-Events are more frequent than assumed and, with uncertainties and constant changes, the concept of antifragility starts to gain great prominence in comparison to other methodologies of risk management. It is very useful to analyse whether a system succumbs (fragile), resists (robust) or benefits (antifragile) from disorder and stress. Thus, this work proposes the creation of the Banking Antifragility Index (BAI), which is based on the calculation of a ...
Journal of Financial Econometrics, 2022
We thank our discussants for their excellent discussions of our article, and for pointing out the possibility of extending our work in several research directions. The constructive discussions have raised some common issues (such as the Gaussianity assumption and large cross-sections), which we opt to address first in the rejoinder, before providing our thoughts on the remaining comments. Overall, we concur with the thoughtful comments by our discussants, while we provide additional interpretations in this rejoinder.
Economic Modelling, 2015
Variance and downside risk are different proxies of risk in portfolio management. This study tests the mean-variance and downside risk frameworks in relation to portfolio management. The sample is drawn from a highly volatile market, the Karachi Stock Exchange, Pakistan. Factors affecting portfolio optimization, such as the appropriate portfolio size, the portfolio sorting procedure, the butterfly effect in the choice of algorithms, and the endogeneity problem, are discussed, and solutions to them are incorporated to make the study robust. Results show that the downside risk framework performs better than the Markowitz mean-variance framework. Moreover, this difference is significant when the asset returns are more skewed. Results suggest the use of downside risk in place of variance as a measure of risk for investment decisions.
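To see why the two risk proxies can disagree, the sketch below compares variance and below-target semivariance for a symmetric and a negatively skewed simulated return series; distributions and parameters are illustrative, not the study's Karachi Stock Exchange data.

```python
import numpy as np

def semivariance(returns, target=0.0):
    """Below-target semivariance: mean squared shortfall below `target`."""
    downside = np.minimum(np.asarray(returns) - target, 0.0)
    return float(np.mean(downside**2))

rng = np.random.default_rng(9)
symmetric = rng.normal(0.0, 0.02, 50_000)
skewed = -(rng.lognormal(mean=-4.2, sigma=0.9, size=50_000) - np.exp(-4.2 + 0.9**2 / 2))

# Both series have (approximately) zero mean; the skewed one concentrates its risk
# on the downside, which semivariance picks up while variance treats them more alike.
for r in (symmetric, skewed):
    print(r.var(), semivariance(r), semivariance(r) / r.var())
```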