Lambda Value at Risk and Regulatory Capital: A Dynamic Approach to Tail Risk

This paper presents the first methodological proposal for the estimation of the Lambda VaR. Our approach is dynamic and calibrated to extreme market scenarios, addressing the need of regulators and financial institutions for more sensitive risk measures. We also propose a simple backtesting methodology by extending the VaR hypothesis-testing framework. Hence, we test our Lambda VaR proposals under the extreme downward scenarios of the financial crisis and different assumptions on the profit-and-loss distribution. The findings show that our Lambda VaR estimations are able to capture tail risk and react to market fluctuations significantly faster than the VaR and the expected shortfall. The backtesting exercise displays a higher level of accuracy for our Lambda VaR estimations.


Introduction
The global financial crisis has made risk measurement and its backtesting a primary concern for regulators and financial institutions. Over the past two decades, the value at risk (VaR) has become the most popular method to assess the risk exposure of financial investments. One of the reasons for its widespread use is that the Basel Committee on Banking Supervision (1996a) has suggested that banks use an internal VaR model for the calculation of their regulatory capital. Thus, authorities around the world have endorsed VaR as the best practice or as a regulatory standard.
Despite its popularity, VaR has been extensively criticized by academics. For instance, Artzner et al. (1997) and Artzner et al. (1999) have underlined some of the theoretical shortcomings of VaR as a risk measure. Specifically, VaR might penalize diversification since it is not subadditive, and it does not capture tail risk because it does not consider losses greater than the VaR amount. In addition, the recent global financial crisis has highlighted the lack of sensitivity of the VaR. Financial risk managers have pointed to the difficulty of rapidly adjusting its confidence levels. As a consequence, VaR may lead to under-forecasting of risk estimates before a crisis or even over-forecasting of them post-crisis. Therefore, both regulators and financial institutions have recently increased their interest in more sensitive risk measures and their backtesting. In particular, the Minimum Capital Requirements for Market Risk by the Basel Committee (2016) has proposed a move to the expected shortfall (ES), also known as conditional value at risk (CVaR), which was introduced by Artzner et al. (1997). This risk measure, defined as the expected value of losses exceeding the VaR, can solve some of the issues with the VaR and also has sounder theoretical properties (i.e., it fulfills subadditivity). However, many studies have pointed out the challenges of delivering robust forecasting and backtesting of the ES (see Gneiting, 2011 and Embrechts and Hofert, 2014). Additional concerns about ES backtesting are expressed by the Basel Committee (2016), which requires that the regulatory backtesting framework still be based on the VaR.
A recent paper by Frittelli et al. (2014) has proposed a new risk measure, the Lambda value at risk (Lambda VaR or ΛVaR), as a generalization of the VaR. The novelty of the ΛVaR is that it considers a function Λ, which depends on the profits and losses (P&L) of the risk factors, instead of a constant confidence level λ.
By construction, the ΛVaR assigns more risk to heavy-tailed P&L distributions and less in the opposite case. Thus, the ΛVaR should be able to discriminate the risk among assets with the same VaR at level λ but different tail behaviour of the P&L distribution. However, this theoretical paper gives no explanation of how the ΛVaR should be computed. The function Λ can be either increasing or decreasing, but the authors did not propose any particular shape of the Λ function or a method for its estimation.
The objective of this study is twofold: first, to provide a methodological proposal for the estimation of the ΛVaR and, second, to test its effectiveness as a regulatory alternative to the VaR. Our methodological approach makes the ΛVaR able to incorporate actual market conditions, allowing institutions to reserve more capital in crisis periods and less in normal market situations. We base the computation of the Λ function on order statistics of the historical distribution function of some selected market benchmarks. The parameters are recalculated for each out-of-sample period, allowing the ΛVaR to capture market changes and assess the different reactivity of the assets to market variations. Thus, our ΛVaR estimations are able to discriminate the risk among assets with different tail behaviour with respect to the market. In addition, the ΛVaR can be specified differently according to the particular risk profile. We call this method of estimation the "dynamic benchmark approach". We also propose a simple backtesting methodology by extending the VaR hypothesis-testing framework of Kupiec (1995) to the ΛVaR in order to have an initial evaluation of the ΛVaR's accuracy.
We test the ΛVaR's effectiveness as a regulatory alternative to the VaR under the extreme downward scenarios of the financial crisis and different assumptions on the P&L distribution. We compare these estimates with those of the VaR and the ES, highlighting the different levels of reactivity to adverse changes in financial markets.
Hence, we perform a backtesting exercise in which we compare the accuracy of the VaR and ΛVaR estimations.
The remainder of the paper is organized as follows. Section 2 describes the new risk measure, the ΛVaR, from a theoretical point of view, together with our proposal of estimation. Here, the backtesting methodology is also illustrated. Section 3 shows the results of the empirical test, consisting of the computation, backtesting, and comparison of the risk measures. Section 4 concludes.

Current risk measures and ΛVaR
Because of its simple formulation and interpretation, the VaR is the most popular tool for measuring financial risk. Let X be the random variable that models asset returns (i.e., profit and loss, P&L) and F(x) = P(X ≤ x) its cumulative distribution function. We denote by P the set of all the distributions. The VaR of a financial asset at the confidence level λ, where 0 < λ < 1, is defined as the λ-right quantile of its P&L distribution over a certain holding period. Formally:

VaR_λ(X) = −inf{x ∈ R : F(x) > λ}.

In other words, VaR_λ represents the maximum loss x that may occur such that the probability of losing more than the amount x is lower than λ over a certain time horizon. The main advantage of the VaR is that a single number immediately provides an idea of the amount of capital that should be allocated to cover the risk of a financial asset. On the other hand, the VaR has many critics. Academics have pointed out that the VaR might penalize diversification because of its lack of subadditivity; that is, the risk of a portfolio in terms of the VaR may be larger than the sum of the risks of its components. In addition, practitioners have noticed its lack of sensitivity, especially during changes in the economic cycle. It seems to be difficult to rapidly decrease the confidence level when a crisis period occurs and to increase it post-crisis. Moreover, the VaR does not allow practitioners to discriminate the risk of financial positions having the same λ-right quantile but a different tail thickness, thereby failing to capture extreme events.
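Under the historical-simulation approach used later in the paper, this definition can be evaluated directly on the empirical distribution of past returns. The following is a minimal sketch (the function name and toy data are ours, not the paper's):

```python
import numpy as np

def historical_var(returns, lam=0.01):
    """Historical VaR at level lam: the negative of the lam-right
    quantile of the empirical P&L distribution,
    VaR_lam = -inf{x : F(x) > lam}."""
    xs = np.sort(np.asarray(returns, dtype=float))
    n = len(xs)
    k = int(np.floor(lam * n))   # F(xs[k]) = (k + 1)/n > lam, F(xs[k-1]) <= lam
    return -float(xs[k])         # smallest x with F(x) > lam, sign-flipped

# toy heavy-tailed daily P&L over one 250-day rolling window
rng = np.random.default_rng(0)
pnl = rng.standard_t(df=4, size=250) * 0.01
var_1pct = historical_var(pnl, 0.01)
```

Reported as a positive number, `var_1pct` is the capital figure the text refers to.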
The experiences from the global financial crisis have raised additional doubts about the accuracy of internal VaR models. These serious concerns have prompted the recent response by the Basel Committee (2016) to move to another risk measure known as the expected shortfall (ES), which was introduced by Artzner et al. (1997). Formally, the ES of an asset X at confidence level λ, where 0 < λ < 1, is given by:

ES_λ(X) = (1/λ) ∫₀^λ VaR_u(X) du.

By definition, this risk measure is able to capture tail risk. In addition, it does not discourage diversification since it satisfies the subadditivity property. However, several studies have found that the biggest issue associated with the ES is the difficulty of achieving robust estimation and backtesting (see Gneiting, 2011 and Embrechts and Hofert, 2014).
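On an empirical distribution, the integral above reduces to the average loss over the worst λ-fraction of outcomes. A minimal historical-simulation sketch (names ours):

```python
import numpy as np

def historical_es(returns, lam=0.025):
    """Historical expected shortfall at level lam: the average loss over
    the worst lam-fraction of outcomes, approximating
    ES_lam = (1/lam) * integral_0^lam VaR_u du."""
    xs = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(lam * len(xs))))  # number of tail observations
    return -float(xs[:k].mean())
```

Because the ES averages over the whole λ-tail, it always dominates the VaR at the same level on the same sample.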
The new risk measure introduced by Frittelli et al. (2014), the ΛVaR, may be a valid alternative. The ΛVaR is a generalization of the VaR and is based on the idea that the confidence level can change and adjust according to the risk factor's P&L.
Specifically, it considers a function Λ instead of a constant confidence level λ. Formally, given a monotone and right-continuous function Λ : R → [λ_m, λ_M] with 0 < λ_m ≤ λ_M < 1, the ΛVaR of the asset return X is a generalized quantile represented by the map ΛVaR : P → R defined as follows:

ΛVaR(F) = −inf{x ∈ R : F(x) > Λ(x)}.

Intuitively, the ΛVaR of the financial position X is given by the smallest intersection between F and Λ if both are continuous. The function Λ plays a key role in the definition of the ΛVaR and adds flexibility.
From a theoretical point of view, no particular properties are required of Λ, which can be either increasing or decreasing. In addition, the ΛVaR satisfies the mathematical properties of interest from a risk management point of view (Frittelli et al., 2014; Burzoni et al., 2017).
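On an empirical P&L distribution, the defining infimum can be found by scanning the sorted returns for the first point where the empirical cdf exceeds Λ. A minimal sketch under the historical-simulation assumption (function and variable names are ours):

```python
import numpy as np

def lambda_var(returns, Lam):
    """Empirical LambdaVaR, -inf{x : F(x) > Lam(x)}, with F the
    empirical cdf of the observed returns.  Lam maps a P&L level
    to a probability in [lam_m, lam_M]."""
    xs = np.sort(np.asarray(returns, dtype=float))
    n = len(xs)
    for i, x in enumerate(xs):
        if (i + 1) / n > Lam(x):   # F(x_(i)) = (i + 1) / n
            return -float(x)
    return -float(xs[-1])          # F reaches 1, so this is only a safeguard
```

With a constant function Λ ≡ λ, this reduces to the ordinary historical VaR at level λ.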
In Subsection 2.2, we propose a methodology to compute the ΛVaR; we explain our choices of Λ, the assumptions behind them, and the empirical implications. See Figure 1 for an illustration.

Proposal of ΛVaR estimation: a dynamic benchmark approach
This section contains a guide to the estimation of the ΛVaR and our methodological proposal. As discussed in Section 2.1, the flexibility of the ΛVaR stems from the possibility of choosing the function Λ instead of fixing a confidence level λ. The estimation of the ΛVaR consists of four steps: 1. choosing the Λ range of values [λ_m, λ_M]; 2. deciding the Λ direction (increasing or decreasing); 3. choosing the Λ functional shape; 4. estimating the Λ parameters.
Concerning the choice of the Λ range of values, we first fix the minimum λ_m close to 0, specifically at 0.001; we have verified computationally that a further reduction of λ_m would increase the capital requirement without any benefit in terms of a reduction in the number of violations. However, the main issue is the choice of the maximum, λ_M, which we call the ΛVaR confidence level. This choice reflects the bank's risk aversion profile. Regarding the second step, we suggest that the decision on the Λ direction be taken according to expectations about the economic cycle. In the case of a bearish market trend, an increasing function Λ should make it easier to detect downside scenarios and reduce the number of violations between the realized P&L and the ΛVaR estimations. On the other hand, a decreasing Λ may be more convenient in periods of expected growth, allowing a reduction of the capital set aside, which may boost investments.
The third step consists of examining different functional forms of a continuous Λ. In the increasing case, an immediate and at the same time sensible specification is a Λ function obtained by linear interpolation. Formally, we divide the real line of the P&L and the probabilities into n + 1 intervals, where n is the number of data points used for the interpolation. Let us denote by π_i and λ_i, where i = 1, 2, ..., n, the extremes of the interpolating intervals on the P&L and probability axes, respectively.
For any P&L amount x < π_1 we fix Λ(x) = λ_1 = λ_m, for x ≥ π_n we fix Λ(x) = λ_n = λ_M, and for π_i ≤ x < π_{i+1}, with i = 1, ..., n − 1, we suggest that:

Λ(x) = λ_i + [(λ_{i+1} − λ_i)/(π_{i+1} − π_i)] (x − π_i).

In the decreasing case, for x < π_1 we fix Λ(x) = λ_M, for x ≥ π_n we fix Λ(x) = λ_m, and for π_i ≤ x < π_{i+1} we fix Λ as follows:

Λ(x) = λ_{n−i+1} + [(λ_{n−i} − λ_{n−i+1})/(π_{i+1} − π_i)] (x − π_i).

The final step of the Λ calibration consists of the estimation of λ_i and π_i. Here, the financial manager's aversion or propensity towards risk plays the crucial role. In the increasing case, the more Λ is shifted to the right of the P&L axis, the larger the ΛVaR absolute value and, thus, the larger the capital allocated as risk protection. In the decreasing case the situation is reversed; financial managers adopting a prudential approach may choose a Λ shifted further to the left. These considerations have a direct impact on the choice of the points π_i. In particular, if the objective is to strengthen the capital requirement, the points π_i will be arranged more to the right, in the increasing case, or more to the left, in the decreasing case; on the contrary, if the objective is to release capital, the points π_i will be arranged more to the left, in the increasing case, or more to the right, in the decreasing case.
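The piecewise-linear interpolation above, flat outside the node range, can be sketched with `np.interp`, which extrapolates flat on both sides exactly as prescribed (names ours; the decreasing case is handled here by reversing the probability levels):

```python
import numpy as np

def make_lambda(pi, levels, increasing=True):
    """Piecewise-linear Lambda through the interpolation nodes.
    `pi` are the P&L-axis nodes pi_1 <= ... <= pi_n and `levels` the
    probability levels lam_1 <= ... <= lam_n (lam_1 = lam_m,
    lam_n = lam_M).  np.interp returns the first/last level outside
    [pi[0], pi[-1]]; the decreasing case uses the levels reversed."""
    xs = np.asarray(pi, dtype=float)
    ys = np.asarray(list(levels) if increasing else list(levels)[::-1],
                    dtype=float)
    return lambda x: float(np.interp(x, xs, ys))
```

The returned callable can be passed directly to any routine that evaluates Λ at a P&L level.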
On the other hand, considering another scale that thickens the points close to the maximum λ_M (or the minimum λ_m) would increase (or decrease) the concavity of the Λ function between π_1 and π_n, which would have an impact on the capital requirement depending on the Λ direction. One could choose to modify the capital requirement by varying either the vertical coordinates (on the probability axis) or the horizontal coordinates (on the P&L axis) of the Λ function, or even both. We prefer to maintain a neutral approach on the vertical coordinates of Λ and provide different ΛVaR specifications by varying only the horizontal coordinates, as described in the next paragraph. However, there are no restrictions on this point and other solutions can be explored, although the number of ad hoc choices in the model should remain limited.
Finally, we estimate the points π_i using the following approach, which we call the dynamic benchmark approach. We calibrate Λ on statistics of the tail of the historical distribution of selected benchmarks. The idea is to compare the tails of an asset's P&L distribution with a function Λ that directly depends on the tails of the market's historical distribution. In such a way, we make the capital requirement decision depend on the behavior of the risk factor returns in comparison with market returns. With this choice, we expect the ΛVaR to incorporate recent market changes and the particular asset reactions faster than other risk measures. This approach is dynamic since the Λ function is continuously recalculated by using the same rolling window as the risk measure and maintained constant throughout the out-of-sample period. However, the Λ function must be the same for every risk factor, so the calibration cannot depend on specific features of the assets under analysis.
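The rolling recalculation can be sketched as a simple loop over trailing windows of benchmark returns (names ours; for brevity only the first node, the overall minimum, is refreshed, but the remaining nodes would be updated in the same loop):

```python
import numpy as np

def rolling_pi1(benchmark_matrix, window=250):
    """Sketch of the dynamic recalibration: at each out-of-sample day t,
    the Lambda parameters are re-estimated on the trailing `window` days
    of benchmark returns.  `benchmark_matrix` is a 2-D array
    (days x number of benchmarks); returns one pi_1 per day."""
    T = len(benchmark_matrix)
    return [float(benchmark_matrix[t - window:t].min())
            for t in range(window, T)]
```

Each recalibrated Λ is then held fixed over its out-of-sample period, matching the rolling window used for the risk measures themselves.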
We set the points π_i on the basis of n order statistics of the benchmarks' historical P&L distributions.
We propose to take four points π_i, so n = 4. In our opinion, this number of points represents a good trade-off between fitting accuracy and the parametric complexity of the function. However, this choice does not substantially affect the results. We fix π_1 equal to the smallest order statistic; that is, the minimum of all the benchmark returns, π_1 = min(r_min1, ..., r_minj, ..., r_minB), where r_minj is the minimum return of the j-th benchmark and B is the total number of benchmarks. We fix π_2, π_3, and π_4 equal to the maximum, mean, and minimum of their historical λ%-VaR, respectively. The choice of the confidence level λ for the benchmarks' VaR depends on the risk aversion profile. In the case of an increasing Λ, a 5%-VaR represents a more prudential choice than a 1%-VaR, since the 5%-VaR order statistics shift the Λ function more to the right. The converse holds in the case of a decreasing Λ. The rolling window used for computing the 1% and 5% VaR of the benchmarks should be the same as that used for computing the VaR and ΛVaR of the risk factors.
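A sketch of this node calibration follows (names ours). It assumes the empirical λ-quantile stands in for the historical λ%-VaR, and places the VaR figures, read as positive losses, back on the P&L axis with a minus sign so that the four nodes come out in ascending order:

```python
import numpy as np

def calibrate_nodes(benchmark_returns, lam=0.05):
    """Nodes pi_1..pi_4 of the dynamic benchmark approach:
    pi_1 = minimum over all benchmark returns; pi_2, pi_3, pi_4 =
    the negatives of the maximum, mean and minimum of the
    benchmarks' historical lam-VaRs (positive loss figures)."""
    vars_ = [-float(np.quantile(r, lam)) for r in benchmark_returns]
    pi1 = min(float(np.min(r)) for r in benchmark_returns)
    pi2, pi3, pi4 = -max(vars_), -float(np.mean(vars_)), -min(vars_)
    return [pi1, pi2, pi3, pi4]
```

The same nodes are reused for every risk factor, so the calibration depends only on the benchmarks, as required.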
For instance, if we test the ΛVaR on equity markets, good benchmark candidates are the equity indexes with the highest transaction volumes that represent the markets in which the bank's trading activity is concentrated. In our empirical test, we selected the S&P500 (US), the FTSE 100 (UK), and the EURO STOXX 50 (Eurozone). Alternatively, the selection of these benchmarks, as well as the confidence level for their VaR computation, can be set externally by the regulator. Figure 1 shows two examples of the ΛVaR estimations for the increasing and decreasing cases.
In conclusion, more advanced and sophisticated Λ estimations may be considered provided that the mathematical properties of the ΛVaR are preserved. However, increasing the complexity of Λ is not consistent with the purpose of this study. Our benchmark approach is easy to compute and allows for a better understanding of the new risk measure.

Figure 1. Increasing and decreasing ΛVaR. The ΛVaR coincides with the smallest intersection between the P&L distribution and the Λ function. The ΛVaR is able to capture different tail behaviors of return distributions better than the VaR. For instance, in the top figure, Total has thicker tails than Microsoft. They have the same 2% VaR (≈ 0.075), but Total's 2% ΛVaR (≈ 0.1) is higher than Microsoft's 2% ΛVaR (≈ 0.0875). In the bottom figure, the same happens for Telefonica and Unilever.

Backtesting method
The reliability and accuracy of a risk measure depend on its ability to predict and cover future unexpected losses. For this reason, the risk measure should be backtested with appropriate methods. According to the Basel Committee on Banking Supervision (1996b), the backtesting, that is, the statistical procedure of comparing realized profits and losses y with forecast risk measures x, is essential in the validation process of risk management internal models (see Jorion, 2007).
The Basel Committee on Banking Supervision (1996b) has set up a regulatory backtesting framework for internal VaR models in order to monitor the frequency of exceptions; this is known as the traffic light approach. This procedure is carried out by comparing the last 250 daily 1% VaR estimates with the corresponding daily P&L outcomes. The accuracy of the model is then evaluated by counting the number of exceptions during this period. Many alternative proposals have been introduced in the literature for the VaR (we refer to Campbell, 2005, Christoffersen, 2010, and Berkowitz et al., 2011 for a detailed review). On the other hand, the backtesting of the ΛVaR has been studied only recently by Corbetta and Peri (2017).
In this paper, we propose to use for the backtesting of the risk measures the unilateral hypothesis test by Kupiec (1995), which is the first method introduced in the literature for the VaR and the most widely known in practice. This is also one of the simplest tests and allows for an intuitive interpretation and comparison of the VaR and ΛVaR backtesting performances. Kupiec's test, known as the unconditional coverage (UC) or proportion of failures (POF) test, measures whether the number n of exceptions y < x over a specific number of observations N in the backtesting window is consistent with the confidence level λ. The VaR model should be accepted if the frequency of exceptions over the specified time interval, λ̂ = n/N, does not significantly differ from the confidence level λ. Hence, the null and the alternative hypotheses for the POF test are given by:

H₀: λ̂ = λ versus H₁: λ̂ > λ.

Under H₀, where the VaR is considered to be "correct", the number of exceptions over the selected time period follows a binomial distribution. Thus, the POF test is conducted by the following log-likelihood ratio:

LR_POF = −2 ln[(1 − λ)^(N−n) λ^n / ((1 − λ̂)^(N−n) λ̂^n)].

Asymptotically, as the number of observations N goes to infinity, the test statistic is distributed as a χ² with 1 degree of freedom. If the LR_POF statistic exceeds the critical value of the χ²₁, the model should be rejected. This critical value depends on the test confidence level. However, the choice of the confidence level is based on the balance of two types of errors: a type I error is rejecting a correct model and a type II error is accepting an incorrect model. Increasing the significance level implies larger type I errors but smaller type II errors and vice versa. Best practice suggests the use of a confidence level at least equal to 5% to control the type II errors, which can be very costly.
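The POF statistic can be computed directly from the exception count; a minimal sketch (names ours, with the 5% χ²₁ critical value 3.841 hard-coded and the one-sided rejection rule applied):

```python
import math

def kupiec_pof(n_exc, N, lam, crit=3.841):
    """Kupiec's proportion-of-failures test.  Returns the LR statistic
    and the unilateral decision: reject only if the observed exception
    rate exceeds lam AND the LR exceeds the chi^2(1) critical value."""
    lam_hat = n_exc / N

    def loglik(p):
        # binomial log-likelihood of n_exc exceptions in N trials,
        # written to survive the degenerate p = 0 case (n_exc = 0)
        out = 0.0
        if N - n_exc:
            out += (N - n_exc) * math.log(1 - p)
        if n_exc:
            out += n_exc * math.log(p)
        return out

    lr = -2.0 * (loglik(lam) - loglik(lam_hat))
    return lr, (lam_hat > lam) and (lr > crit)
```

For example, 10 exceptions in 250 days against a 1% VaR clearly fail the test, while 2 exceptions pass.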
We extend the VaR backtesting framework to the ΛVaR while maintaining the same structure and fundamental meaning. Being a generalized quantile, the ΛVaR has a confidence level that changes according to the Λ function. A good candidate for the ΛVaR confidence level is the maximum of the Λ function, max(Λ).
Hence, we propose adjusting the POF test by considering max(Λ), under the null hypothesis, instead of λ. In particular, the null and the alternative hypotheses for the ΛVaR test become:

H₀: λ̂ = max(Λ) versus H₁: λ̂ > max(Λ).

This is still a unilateral hypothesis test with the same critical region as the VaR test (see Casella and Berger, 2002). Hence, it can be conducted by using the same log-likelihood ratio and critical value as the VaR test. The risk model is validated if the relative number of exceptions does not exceed the target level of exceptions given by max(Λ). This adjustment provides information about the ΛVaR's accuracy and an immediate interpretation. Indeed, it allows one to verify whether the coverage objective given by the Λ maximum has been reached. However, it has some limits, which have been highlighted by Corbetta and Peri (2017) during the review process of this paper. On the other hand, the objective of the current paper is not to provide a complete framework for the backtesting of the ΛVaR.
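The adjusted test can be sketched end-to-end (names ours): exceptions are counted as days on which the realized P&L falls below the negative of the (positive) ΛVaR forecast, and the Kupiec log-likelihood ratio is then evaluated with max(Λ) in place of λ:

```python
import math

def pof_lambda_var(pnl, lvar, lam_max, crit=3.841):
    """Adjusted POF test for the LambdaVaR.  `pnl` are realized daily
    P&Ls, `lvar` the matching positive LambdaVaR forecasts; under H0
    the target exception rate is max(Lambda).  Returns the exception
    count, the LR statistic, and the unilateral decision."""
    n = sum(y < -x for y, x in zip(pnl, lvar))   # exception: loss beyond forecast
    N = len(pnl)
    lam_hat = n / N

    def loglik(p):
        out = 0.0
        if N - n:
            out += (N - n) * math.log(1 - p)
        if n:
            out += n * math.log(p)
        return out

    lr = -2.0 * (loglik(lam_max) - loglik(lam_hat))
    return n, lr, (lam_hat > lam_max) and (lr > crit)
```

The model is validated whenever the exception frequency stays at or below the target coverage max(Λ).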

Empirical analysis
In this section, we test our methodology to compute the ΛVaR, the so-called dynamic benchmark approach. The descriptive statistics of the data set are reported in Table 3 of Appendix A together with a brief discussion.

Risk measures computation, backtesting, and comparison
The first step of our analysis is to test our ΛVaR specifications and compare their forecasts with those of the risk measures proposed by current regulation, the VaR and the ES. The aim is to evaluate the ability of the new risk measure to incorporate extreme downward scenarios and cover the risk of the trading book.
For a more complete analysis, we compare all the ΛVaR models in the previous section with three different VaR models, one for each confidence level of 1%, 2%, and 3%. We add to the analysis the computation of the ES at a 2.5% confidence level, as recently suggested by the Basel Committee (2016). The second objective of this analysis is to gauge the ΛVaR's accuracy in comparison to the VaR. Hence, we conduct a backtesting exercise for the VaR and ΛVaR models. We compare the realized ex-post daily P&L with the daily VaR/ΛVaR estimates over a time period of 1 year. In particular, we split the analysis into six different 2-year rolling windows (250 days for the risk measure computation and 1 year for the backtesting). We perform the backtesting method described in Section 2.3 for all of the 12 stocks and the ΛVaR and VaR models previously specified. Figure 2 exhibits the first fundamental result of this analysis: the ΛVaR is the most prudential approach and has the highest reactivity to market conditions. This figure displays a comparison between the realized out-of-sample P&L and the risk measures under historical simulation; specifically, the 1% VaR, 1% ΛVaR, and 2.5% ES for a significant sample of the selected equities during the different phases of the recent financial crisis. During the 2008 crisis year, the ΛVaR is more conservative and reacts to adverse market conditions faster than the other risk measures. During the first year after the crisis, the ΛVaR maintains the most prudential approach and guarantees the highest reactivity to unexpected downturns (e.g., Volkswagen).
However, in the more stable periods before 2007, the behavior of the ΛVaR is in line with that of the 1% VaR and 2.5% ES in preserving the highest risk protection. The backtesting exercise shows the second fundamental result of our analysis: the ΛVaR has the highest accuracy. We discuss the backtesting results under the assumption of historical simulation of the P&L distributions. In order to provide an overview of model accuracy and show the robustness of the backtesting results, we aggregate the POF test outcomes and violations at the level of the VaR as well as the increasing and decreasing ΛVaR models. To summarize, this empirical application highlights that our ΛVaR models are able to capture downside risks and react to adverse market conditions faster than the VaR and ES models, while maintaining a behavior in line with the other risk measures in more stable periods. In addition, the ΛVaR models have a significantly higher level of accuracy than the VaR models. Specifically, the increasing ΛVaR models perform better in crisis periods.

Conclusions
The global financial crisis has made risk measurement and its backtesting a primary concern for regulators and financial institutions. Several issues concerning the VaR and doubts raised about the ES lead us to examine alternative risk measures. A good candidate to overcome these issues seems to be the ΛVaR. This study presents the first methodological proposal for the estimation of the ΛVaR.
We estimate the ΛV aR based on order statistics of the distribution of some selected market benchmarks.
This approach allows the ΛVaR to discriminate the risk among assets with different tail behaviour and to capture their specific reactions to market fluctuations. In addition, the parameters of the ΛVaR are constantly recalculated to incorporate changing market conditions and can be specified according to different risk attitudes. We also propose a first backtesting methodology by extending the VaR hypothesis-testing framework.
We test our approach under different assumptions on the P&L distribution and during different phases of the global financial crisis. We experimented with different estimations of the ΛVaR using several confidence levels. The first finding shows the significant ability of the ΛVaR estimates to capture extreme downward scenarios and react to financial market changes faster than the VaR and ES. The results of the backtesting exercise display the significantly higher accuracy of our ΛVaR specifications during different phases of the global financial crisis, even when the confidence level is increased up to 1.5%. The results are confirmed under different assumptions on the P&L distributions.
Finally, this study sheds some light on the importance of incorporating recent market trends in the risk measure used for assessing the bank capital requirement. This may lead to a prompt adjustment of the bank's capital to unexpected downturns and assure, overall, a higher stability of the financial system. Hence, the paper provides some insights for future Basel Committee reviews of the role of internal models in determining the bank capital requirement. To this end, future research will focus on the backtesting of the ΛVaR, the computation of the ΛVaR for other risk factors, and the final aggregation of the ΛVaRs.
A. Descriptive statistics of the data set

Table 3 shows that the return distributions under analysis exhibit heavy tails. This is also confirmed by the JB test, which rejects the normality assumption at the 5% significance level for most of the stocks and time windows under investigation.

Table 3. Annual descriptive statistics for the equities and indexes in each year under analysis. The dataset includes 12 stocks belonging to the S&P500, the FTSE 100, and the EURO STOXX 50. The dataset contains six 1-year windows from January 2006 to December 2011. For each stock and index we report the minimum daily return, the maximum daily return, the annual mean (the average daily return is annualized), the annual standard deviation (the daily standard deviation is annualized), the skewness, the kurtosis, the Jarque-Bera (JB) test statistic, and its null hypothesis outcome h (h = A if H₀ is accepted and h = R otherwise).

B. Violations and Kupiec's POF test