Journal of Risk

Risk.net

A numerical approach to the risk capital allocation problem

Henryk Gzyl and Silvia Mayoral

  • We provide a numerical procedure to allocate capital within ranges provided by experts.
  • We provide a numerical procedure to find a distortion function from a collection of risk prices.
  • This yields a risk measure consistent with a collection of stand-alone risk prices.
  • That risk measure can then be used to compute the price of other risks.

Capital risk allocation is an important problem in corporate, financial and insurance risk management. There are two theoretical aspects to this problem. The first aspect consists of choosing a risk measure, and the second consists of choosing a capital allocation rule subordinated to the risk measure. When these two complementary aspects are settled, the problem is reduced to a computational problem. No universal prescription for choosing a risk measure exists, let alone a subordinated capital allocation rule. However, expert opinion may be expressed as a range of capital to be allocated to each risk. For example, the managers of each line of activity of a corporation or bank may have a way to estimate minimum and maximum losses with high confidence. In that case, one is confronted with two interesting numerical problems. The first is to devise a method to allocate risk capital within the bounds provided by the experts. The second is to use the upper bounds as stand-alone risk prices to determine a risk measure, which can then be used to compute the prices of other risks. The risk measure could also be used to compute a range for the capital allocations of a new collection of risks, to which our numerical procedure to allocate risk capital can be applied. The aim of this paper is to use a model-free, nonparametric approach based on the method of maximum entropy in the mean to solve both problems and to examine how they are intertwined. In this way we circumvent the problem of choosing between risk valuation and risk pricing methods, and between risk capital allocation rules consistent with them. Also, the maximum-entropy-based methodology provides us with a method to determine coherent risk measures consistent with the bounds provided by expert opinion or management. The risk measure in turn provides a method to determine either risk prices or bounds to be used for capital allocation for a different collection of risks.

1 Introduction

Risk capital allocation is a ubiquitous problem in risk management. It can be stated as follows: consider a collection of risks $\{X_i : i=1,\dots,N\}$, modeled by continuous, positive random variables and adding up to a total risk $X=\sum_{i=1}^N X_i$. As sample space we may consider $([0,\infty)^N,\mathcal{B},F_{\boldsymbol{X}})$, where $\mathcal{B}$ stands for the Borel subsets of $[0,\infty)^N$ and $F_{\boldsymbol{X}}$ stands for the joint density of the collection of risks. We do not make use of these (generally) unknown distributions because the maxentropic method relies either on expert opinion or on bounds for the stand-alone risks.

Given a total risk capital $K$ we want to determine the capital amounts $\{K_i : i=1,\dots,N\}$ to cover each risk in such a way that $\sum_{i=1}^N K_i=K$. There are several issues involved. First, how to characterize a capital allocation rule theoretically; that is, what are the basic principles or rules that guide a capital allocation? Second, how do we actually carry out the capital allocation consistently with those principles? Third, if no preferred or prescribed risk measures or capital allocation rules exist, what tools are available to the risk manager to solve the problem numerically?

As to the first two issues, deciding which risk measure and which capital allocation rule to use is usually left up to the risk manager. The answer depends on the business sector in which the risk allocation process arises. For example, the banking sector uses value-at-risk (VaR) or tail VaR (TVaR) as required by regulatory agencies. A manufacturing corporation may use them to estimate costs or losses.

It is probably in the insurance industry where the need to price risks has existed the longest and where the variety of proposals is largest. We refer the reader to Gerber (1979), Bühlmann (1984), Goovaerts et al (1984), Müller (1987), Wang (1996), Kamps (1998), Young (2004), Denuit et al (2006), Kaas et al (2008) and the many references cited therein.

The relationships between risk pricing measures, distortion risk measures and capital allocation rules have been explored in many publications. As a sample, consider Tasche (2004), Panjer and Jing (2001), Hesselhager and Anderson (2002), Kalkbrener (2003), Venter (2004), Chorafas (2004), Furman and Zitikis (2008, 2009) and Karabey (2012). For risk allocation in insurance companies, consider Myers and Read (2001) and Urban et al (2003), as well as Gründl and Schmeiser (2007) and Zhou et al (2018).

A quick summary of the procedure described in the references just cited goes as follows. Denote by $Y$ the generic random variable describing a risk and by $\rho(Y)$ the risk measure. Let $\{X_i : i=1,\dots,N\}$ be the individual risks and $X=\sum_{i=1}^N X_i$ be the aggregate risk. Then any capital allocation rule assigning $\Lambda(X_i,X)$ to risk $X_i$ has to satisfy $\sum_{i=1}^N \Lambda(X_i,X)=K$ (the total available capital) and, for any possible risk, $\Lambda(Y,Y)=\rho(Y)$ as a stand-alone risk.

We use two typical risk measures. The symbol $\mathrm{VaR}_\alpha(X)$ stands for the $\alpha$-quantile of $X$, and is called the value-at-risk of $X$ with confidence $\alpha$. By $\mathrm{TVaR}_\alpha(X)$ we mean $E[X \mid X\ge \mathrm{VaR}_\alpha(X)]$. Since we are considering risks modeled by continuous, positive random variables with strictly positive densities, the TVaR coincides with the expected shortfall or conditional value-at-risk. We use TVaR alongside VaR because, despite being a widely used risk measure, VaR is not coherent, whereas TVaR is. For more details see Föllmer and Schied (2016).

As mentioned at the outset, here we propose a numerical procedure that on the one hand allows us to solve the capital allocation problem without making use of any risk measure, and on the other allows us to infer a risk measure from a collection of risk prices. The risk measure thus obtained can then be used to price any other collection of risks and to solve the capital allocation problem for these risks.

1.1 Problem statements

To be more precise, let us describe the two problems that we are concerned with, and how they are related to each other.

1.1.1 Problem 1: how to assign risk capital when bounds are specified

As we said above, there are a variety of methods to address the capital allocation problem. However, a difficulty arises when the corporation does not have an established risk pricing methodology and decides to use scenario analysis or to consult different experts. Based on expert opinion, the risk manager ends up with a range $[L_i,U_i]$ for the capital to be allocated to the $i$th risk. For example, as the lower bound for the capital to be allocated to the $i$th risk, the manager is advised to use $L_i=E[X_i]$ (the simplest actuarial predictor of the loss), and as the upper bound the manager is advised to use $U_i=\mathrm{VaR}_{0.99}(X_i)$ or perhaps $U_i=\mathrm{TVaR}_{0.975}(X_i)$. Thus, all that the risk analyst is provided with is a range $[L_i,U_i]$ for the risk capital $K_i$ to be allocated to $X_i$.

Using this information, the management of the company decides on a total risk capital $K$ such that $\sum_i L_i<K<\sum_i U_i$. The lower bound is clear: the risk capital should not be less than the expected aggregate loss $\sum_i E[X_i]=\sum_i L_i$. Since $U_i-L_i$ can be interpreted as the unexpected loss incurred by the $i$th risk, and the risk manager is optimistic, they assign $K<\sum_i U_i$ to cover potential losses. With this, the risk capital allocation problem to be solved can be stated as follows:

  solve $\sum_{i=1}^N K_i=K$ subject to $L_i<K_i<U_i$ for each $i=1,\dots,N$.   (1.1)

Observe that (1.1) is an ill-posed linear algebraic problem, consisting of one equation in many unknowns with a convex constraint on the solutions. One of the aims of this work is to solve this problem directly, using the method of maximum entropy in the mean (MEM).

1.1.2 Problem 2: how to determine the risk measure from expert opinion

As a numerical procedure for capital allocation, solving (1.1) is sufficient. However, the method of MEM also allows us to determine a risk measure that may be used to value other risks or to provide bounds for a new collection of risks, for which the capital allocation problem can then be solved.

The idea is as follows. The capital allocation process must be such that the allocated risk capital is smaller than the stand-alone risk price, and since the upper bounds $\{U_i : i=1,\dots,N\}$ for the individual risks can be taken to be the stand-alone risk prices, we set out to determine a risk measure $\rho$ such that $\rho(X_i)=U_i$ for $i=1,\dots,N$.

The intersection of the class of distorted risk measures and the class of spectral risk measures is interesting because such measures are coherent and can be determined from a collection of risk prices. The inverse problem may be stated as follows:

  determine $\phi(u)$ such that $\int_0^1 \phi(u)\,\mathrm{VaR}_u(X_i)\,\mathrm{d}u=U_i$, $i=1,\dots,N$.   (1.2)

To lead to coherent risk measures the function $\phi$ has to be positive, increasing and subject to the integral constraint $\int_0^1 \phi(u)\,\mathrm{d}u=1$. For more details see Appendix 1 online, or see Kusuoka (2001), Bellini and Caperdoni (2017) or Föllmer and Schied (2016). Clearly, (1.2) is an ill-posed linear problem subject to convex constraints. After discretization this infinite-dimensional problem becomes finite dimensional, but it remains ill-posed.

The maxentropic methodology that we use to solve this second problem was originally used in Gzyl and Mayoral (2008). For us the interest lies in the intertwining of the two problems. Once Problem 2 is solved for a class of risks, the $\rho$ so determined can be used to determine the range for the capital allocation of any other collection of risks, and the procedure developed to solve Problem 1 can then be brought in to solve the capital allocation problem for the new collection of risks.

The remainder of the paper is organized as follows. First, in Section 2 we explain the method of MEM, which will be used to solve both problem (1.1) and the discretized version of problem (1.2). Then, in Section 3 we adapt the notation in Section 2 to describe the solution to (1.1). In Section 4 we explain how to transform (1.2) into an algebraic problem and then how to adapt the results in Section 2 to solve it.

In Section 5 we shall consider some explicit numerical examples. In particular, in the second half of the example in Section 5.2.2 we consider the intertwining of the two maxentropic approaches as explained two paragraphs above. We conclude with some remarks in Section 6. So as not to interrupt the description of the maxentropic methodology, in Appendix 1 online we briefly recall the notion of distorted risk measure as used in this paper, in Appendix 2 online we explain the numerical instability that arises when the datum is a boundary point, and in Appendix 3 online we provide a short refresher for readers unfamiliar with the definition of entropy.

2 Method of maximum entropy in the mean

In this section we rapidly review the basic formalism of the MEM. This is a model-free, nonparametric procedure to solve the inverse problems (1.1) and (1.2) stated above. Here we first present the method in context-free notation, and in the next two sections we particularize appropriately. The problem is

  solve the linear system $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{y}$ for $\boldsymbol{x}\in\mathcal{K}\subset\mathbb{R}^n$,   (2.1)

where $\boldsymbol{y}\in\mathbb{R}^d$ and $\boldsymbol{A}$ is a $d\times n$ matrix, and the set $\mathcal{K}$ is required to be convex in its ambient space and to have nonempty interior. As a matter of fact, we shall eventually take $\mathcal{K}=\prod_{i=1}^N[L_i,U_i]$ in Section 3 and $\mathcal{K}=[0,\infty)^n$ in Section 4. Also, since the problems to be solved are different, the data vector $\boldsymbol{y}$ will be different in Sections 3 and 4.

The idea behind MEM can be summed up as follows: it is a procedure to transform an algebraic problem with convex constraints on the solution into a convex problem consisting of the determination of a probability that satisfies integral constraints. For that, we use the constraint set $\mathcal{K}$ to define a measurable space $(\mathcal{K},\mathcal{B}(\mathcal{K}),Q)$, where $\mathcal{B}(\mathcal{K})$ denotes the Borel subsets of $\mathcal{K}$, and $Q$ is any ($\sigma$-finite) measure such that the convex hull $\mathrm{con}(\mathrm{supp}(Q))$ generated by its support equals $\mathcal{K}$. Think of the identity mapping $\boldsymbol{X}(\xi)=\xi$ on $\mathcal{K}$ as a $\mathcal{K}$-valued random variable. With this auxiliary setup, instead of solving (2.1) we use the standard maximum entropy (SME) procedure to find a probability $P\ll Q$ such that $\boldsymbol{y}=\boldsymbol{A}E_P[\boldsymbol{X}]$. If such a probability can be found, the constraint upon $\boldsymbol{x}=E_P[\boldsymbol{X}]$ is automatically satisfied. Which $Q$ to choose depends on the setup. The main criterion is that it lead to a simple way to compute the function $Z(\boldsymbol{\lambda})$ defined in (2.4).

The SME procedure goes as follows. On the class of densities (absolutely continuous with respect to $Q$) we define the entropy function (see Appendix 3 online) by

  $S(p)=-\int_{\mathcal{K}} p(\xi)\ln p(\xi)\,\mathrm{d}Q(\xi),$

or $-\infty$ if the integral of $|\ln p(\xi)|$ is not convergent. The choice of sign is conventional: we want to maximize entropy to go along with tradition. The entropy function is strictly concave due to the strict concavity of the logarithm. To find a $p^*(\xi)$ such that $\mathrm{d}P^*=p^*\,\mathrm{d}Q$ satisfies (2.1), we have to solve the problem

  find $p^*$ at which $\sup\{S(p) : \boldsymbol{A}E_P[\boldsymbol{X}]=\boldsymbol{y}\}$ is achieved.   (2.2)

The representation of the solution to (2.2) is well known. See Jaynes (1957) for the original formulation and Borwein and Lewis (2000) for proper mathematical details. It is given by

  $p^*(\xi)=\dfrac{\mathrm{e}^{-\langle\boldsymbol{\lambda}^*,\boldsymbol{A}\xi\rangle}}{Z(\boldsymbol{\lambda}^*)},$   (2.3)

where the normalization factor is clearly given by

  $Z(\boldsymbol{\lambda})=\int_{\mathcal{K}} \mathrm{e}^{-\langle\boldsymbol{\lambda},\boldsymbol{A}\xi\rangle}\,\mathrm{d}Q(\xi),$   (2.4)

computed at $\boldsymbol{\lambda}^*$. Note that $\ln Z(\boldsymbol{\lambda})$ is a cumulant generating function, and we could think of (2.3) as an exponential tilting of $Q$. The systematic way to determine the $\boldsymbol{\lambda}^*$ appearing in (2.3) is by minimizing the convex function (dual entropy) defined by

  $\Sigma(\boldsymbol{\lambda},\boldsymbol{y})=\ln Z(\boldsymbol{\lambda})+\langle\boldsymbol{\lambda},\boldsymbol{y}\rangle$   (2.5)

over $\{\boldsymbol{\lambda}\in\mathbb{R}^d : Z(\boldsymbol{\lambda})<\infty\}$, which in all cases of interest to us will be $\mathbb{R}^d$ itself. In the numerical examples considered below, the minimization is carried out using a gradient method with a self-adapting step proposed by Barzilai and Borwein (1988). Furthermore, $S(p^*)=\Sigma(\boldsymbol{\lambda}^*,\boldsymbol{y})$. Once the $\boldsymbol{\lambda}^*$ that minimizes the right-hand side of (2.5) has been found, the solution to (2.1) is given by

  $x_j^*=\int_{\mathcal{K}} \xi_j\,p^*(\xi)\,\mathrm{d}Q(\xi),\qquad j=1,\dots,n.$   (2.6)

At this point we comment on the role of the reference measure $Q$. It is needed to define an entropy, but any two equivalent reference measures lead to entropies that differ by a term linear in $p$. The choice of the reference measure is dictated by the simplicity of the dual entropy (2.5) for the purpose of numerical minimization. There is no financial (or physical) interpretation whatsoever of the reference measure $Q$. In particular, the actual statistical nature of the risks to which we want to assign capital is not related in any way to the choice of the auxiliary setup. All possible ways of mapping the linear inverse problem onto an entropy maximization problem are equally valid. Apart from these remarks, what is important in (2.6) is that the measure $p^*(\xi)\,\mathrm{d}Q(\xi)$ yields an expected value of $\xi$ that satisfies the constraints in (2.2).
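To make the abstract setup concrete, here is a minimal toy instance of (2.1)–(2.6) (our illustration, not an example from the paper): one equation $x_1+x_2=y$ with $\mathcal{K}=[0,1]^2$ and $Q$ taken to be Lebesgue measure on $\mathcal{K}$. Then $Z(\lambda)=((1-\mathrm{e}^{-\lambda})/\lambda)^2$, the dual (2.5) is one dimensional, and its first-order condition can be solved by bisection.

```python
import math

# Toy instance of MEM (2.1)-(2.6) (our illustration): solve x1 + x2 = y with
# x in [0,1]^2, taking Q = Lebesgue measure on [0,1]^2.  Then
#   Z(lam) = ((1 - exp(-lam))/lam)^2,
# and each coordinate of the maxent solution (2.6) is
#   coord(lam) = 1/lam - exp(-lam)/(1 - exp(-lam)) = 1/lam - 1/(exp(lam) - 1).
def coord(lam):
    if abs(lam) < 1e-6:                 # removable singularity: coord -> 1/2
        return 0.5
    return 1.0 / lam - 1.0 / math.expm1(lam)

def solve_toy(y, lo=-60.0, hi=60.0):
    # First-order condition of the dual (2.5): 2*coord(lam) = y.
    # coord is decreasing in lam, so bisection applies.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 2.0 * coord(mid) > y:
            lo = mid
        else:
            hi = mid
    return coord(0.5 * (lo + hi))
```

By symmetry both coordinates of the maxent solution coincide, so each equals $y/2$; the point of the sketch is that this answer emerges from the tilted measure (2.3) for any admissible choice of $Q$.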

How to adapt the above setup to solve problems (1.1) or (1.2) is made explicit in Sections 3 and 4. Once the constraint sets are described, we make a choice of $Q$ that leads to an easily computable $Z(\boldsymbol{\lambda})$. We direct the reader to McLeish and Reesor (2003) for information on the relationship between MEM and the transformation of probability laws under distortion.

3 Application of maximum entropy in the mean to determine capital allocation

Problem (1.1) consists of determining numbers Ki such that

  $\sum_{i=1}^N K_i=K$ under the constraint $L_i\le K_i\le U_i$ for each $i=1,\dots,N$.

Compared with the notation in Section 2, here $n=N$ and $d=1$. In this case $\boldsymbol{A}=\boldsymbol{1}^t$ is an $N$-dimensional row vector with all components equal to 1, and the right-hand side of (2.1) becomes just a real number $y=K$. The constraint set for this situation is $\mathcal{K}=\prod_{i=1}^N[L_i,U_i]$. As the measure $Q$ on $\mathcal{K}$ such that the convex hull of its support is $\mathcal{K}$ we consider

  $\mathrm{d}Q(\xi)=\prod_{i=1}^N\bigl(\epsilon_{L_i}(\mathrm{d}\xi_i)+\epsilon_{U_i}(\mathrm{d}\xi_i)\bigr).$

Here $\epsilon_a(\mathrm{d}\xi)$ denotes the measure that puts a unit mass at the point $a$. The intuition behind the choice is that any point in an interval $[L,U]$ is a convex combination of its end points.

With this choice of Q the computation is simple:

  $Z(\lambda)=\int_{\mathcal{K}} \mathrm{e}^{-\lambda\langle\boldsymbol{1},\xi\rangle}\,\mathrm{d}Q(\xi)=\prod_{i=1}^N\bigl(\mathrm{e}^{-\lambda L_i}+\mathrm{e}^{-\lambda U_i}\bigr).$

In order to complete the procedure, we must minimize (2.5), which in this case is

  $\Sigma(\lambda,K)=\sum_{i=1}^N\ln\bigl(\mathrm{e}^{-\lambda L_i}+\mathrm{e}^{-\lambda U_i}\bigr)+\lambda K.$   (3.1)

Once the minimizer λ* is determined, an application of (2.6) to compute the solution to (1.1) yields

  $K_i^*=L_i\,\dfrac{\mathrm{e}^{-\lambda^* L_i}}{\mathrm{e}^{-\lambda^* L_i}+\mathrm{e}^{-\lambda^* U_i}}+U_i\,\dfrac{\mathrm{e}^{-\lambda^* U_i}}{\mathrm{e}^{-\lambda^* L_i}+\mathrm{e}^{-\lambda^* U_i}}.$   (3.2)

Observe that

  (1) the weights in the convex combination depend only on the available information;

  (2) when all risks fall in the same range, say $[L,U]$, all allocated capital amounts are the same and equal to $K/N$, because $\lambda^*$ is determined so that the total constraint is satisfied;

  (3) if business units have very small unexpected risks, ie, if $L_i\approx U_i$, then the allocated capital is $K_i^*\approx L_i$; and

  (4) (3.2) allows the analysis of the sample dependence of the solution on the estimates of the lower and upper limits of the range of the capital.

To analyze the relationship between λ and the total capital K to be allocated, notice that the first-order condition that determines λ* is

  $\dfrac{\mathrm{d}\Sigma(\lambda,K)}{\mathrm{d}\lambda}=0=-\sum_{i=1}^N\Bigl(L_i\,\dfrac{\mathrm{e}^{-\lambda^* L_i}}{\mathrm{e}^{-\lambda^* L_i}+\mathrm{e}^{-\lambda^* U_i}}+U_i\,\dfrac{\mathrm{e}^{-\lambda^* U_i}}{\mathrm{e}^{-\lambda^* L_i}+\mathrm{e}^{-\lambda^* U_i}}\Bigr)+K.$

If we now differentiate the last identity with respect to K, and isolate dλ*/dK, we obtain

  $\dfrac{\mathrm{d}\lambda^*}{\mathrm{d}K}=-\dfrac{1}{C(\lambda^*)},$

where C(λ*)>0 is given by

  $C(\lambda^*)=\sum_{i=1}^N\Biggl[\dfrac{L_i^2\,\mathrm{e}^{-\lambda^* L_i}+U_i^2\,\mathrm{e}^{-\lambda^* U_i}}{\mathrm{e}^{-\lambda^* L_i}+\mathrm{e}^{-\lambda^* U_i}}-\Bigl(\dfrac{L_i\,\mathrm{e}^{-\lambda^* L_i}+U_i\,\mathrm{e}^{-\lambda^* U_i}}{\mathrm{e}^{-\lambda^* L_i}+\mathrm{e}^{-\lambda^* U_i}}\Bigr)^2\Biggr].$

This is the variance of $\langle\boldsymbol{1},\xi\rangle$ under the tilted measure $p^*\,\mathrm{d}Q$, and therefore it is positive. To conclude, we have the following.

Proposition 3.1.

With the notation introduced above, a glance at (3.2) shows that

  (1) if $\lambda^*\to-\infty$, then $K_i^*\to U_i$ and therefore $\sum_{i=1}^N K_i^*\to\sum_{i=1}^N U_i$; and

  (2) if $\lambda^*\to\infty$, then $K_i^*\to L_i$ and therefore $\sum_{i=1}^N K_i^*\to\sum_{i=1}^N L_i$.

Again, see Appendix 2 online for details about the numerical instability issue.
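The whole of this section can be condensed into a few lines of code. The following sketch (ours; it uses bisection on the monotone first-order condition of (3.1) rather than the Barzilai–Borwein scheme mentioned in Section 2) evaluates the allocation (3.2):

```python
import math

# Sketch of the Section 3 allocation (our code; bisection on the first-order
# condition of (3.1) replaces the Barzilai-Borwein gradient scheme).
# Requires sum(L) < K < sum(U).
def allocate(L, U, K, lo=-80.0, hi=80.0, tol=1e-12):
    def Ki(lam, Li, Ui):
        # formula (3.2), rewritten with t = exp(-lam*(Ui - Li)) for stability
        z = -lam * (Ui - Li)
        if z > 700.0:            # exp would overflow: all weight on Ui
            return Ui
        t = math.exp(z)
        return (Li + Ui * t) / (1.0 + t)

    def total(lam):
        return sum(Ki(lam, a, b) for a, b in zip(L, U))

    # total(lam) decreases from sum(U) to sum(L) as lam runs over the reals
    # (its derivative is -C(lam) < 0), so bisection finds the multiplier
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > K:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return [Ki(lam, a, b) for a, b in zip(L, U)]
```

For example, with the $L$ row and the VaR$_{0.95}$ row of Table 1(a) as bounds, any $K$ between 26.91 and 31.56 is feasible; choosing $K$ at the midpoint of $[\sum L_i,\sum U_i]$ returns the midpoint allocations $(L_i+U_i)/2$, since then $\lambda^*=0$.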

4 Application of maximum entropy in the mean to determine the distortion measure

We shall adapt the notation introduced in Section 2 to deal with problem (1.2); that is, given the prices of a collection of risks, we want to determine a risk pricing measure based on a distortion function, which reproduces the given risk prices. The distortion function thus obtained can then be used to allocate capital to a new collection of risks.

Observe that (1.2) is an integral equation with a discrete right-hand side, which we solve numerically using MEM. In order to proceed, the first step consists of discretizing the problem. For this we shall closely follow Gzyl and Mayoral (2008). To simplify notation in (1.2), set $q_i(u)=\mathrm{VaR}_u(X_i)$ and write it as

  $\rho_\phi(X_i)=\int_0^1 q_i(u)\,\phi(u)\,\mathrm{d}u=U_i,\qquad i=1,\dots,N+1,$

where, to accommodate the condition $\int_0^1\phi(u)\,\mathrm{d}u=1$, we add $X_{N+1}$ such that $q_{N+1}(u)=1$ and $U_{N+1}=1$. Thus, in this case we shall consider $d=N+1$ and $\boldsymbol{y}^t=(U_1,\dots,U_N,1)$. Also, we have a methodological constraint imposed upon $\phi$: we know that it is increasing.

To proceed to the discretization stage, we consider a partition of $[0,1]$ at the points $u_j=j/n$. The choice of $n$ depends on the known variability of the $q_i(u)$ in $[0,1]$. Define the $(N+1)\times n$ matrix $\boldsymbol{B}$ by setting $B_{i,j}=q_i(u_j)/n$ for $i=1,\dots,N+1$ and $j=1,\dots,n$. Set $\phi(a_j)=\phi_j$, where $a_j=\tfrac{1}{2}(u_j+u_{j-1})$ and $u_0=0$. With all this, the discretized version of problem (1.2) can be restated as follows. Solve

  $\boldsymbol{B}\boldsymbol{\phi}=\boldsymbol{y},\qquad \boldsymbol{\phi}\in\mathcal{K},$   (4.1)

where the constraint set $\mathcal{K}\subset\mathbb{R}^n$ is a convex set defined in this case by

  $\mathcal{K}=\{(\phi_1,\dots,\phi_n) : 0\le\phi_1<\dots<\phi_j<\phi_{j+1}<\dots<\phi_n\}.$

To simplify the description of the constraints, we set $\phi_1=x_1$, $\phi_2=x_1+x_2$, $\dots$, $\phi_n=x_1+\dots+x_n$, or $\boldsymbol{\phi}=\boldsymbol{T}\boldsymbol{x}$, where $\boldsymbol{T}$ is the obvious lower triangular matrix describing the change of coordinates. Setting $\boldsymbol{A}=\boldsymbol{B}\boldsymbol{T}$, we can restate our discretized problem as

  $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{y},\qquad \boldsymbol{x}\in\mathcal{K},$   (4.2)

where now the convex constraint set is $\mathcal{K}=[0,\infty)^n$, ie, the positive orthant in $\mathbb{R}^n$. Clearly, once the vector $\boldsymbol{x}$ is at hand, $\boldsymbol{\phi}$ is easily recovered. To apply the maximum entropy procedure, we begin by specifying a reference measure $Q$ on $\mathcal{K}$. In the notation of Section 2 we choose

  $\mathrm{d}Q(\xi)=\prod_{j=1}^n\Bigl(\sum_{k=0}^\infty \dfrac{1}{k!}\,\epsilon_k(\mathrm{d}\xi_j)\Bigr).$

That is, we choose a product of unnormalized Poisson measures of parameter 1 on $\mathcal{K}$. This choice makes the computation of $Z(\boldsymbol{\lambda})$ very simple. Clearly, the convex hull of the support of $Q$ is $\mathcal{K}$. Note that in this case, for $\boldsymbol{\lambda}\in\mathbb{R}^d$, the function $Z(\boldsymbol{\lambda})$ is given by

  $Z(\boldsymbol{\lambda})=\int_{\mathcal{K}} \mathrm{e}^{-\langle\boldsymbol{A}^t\boldsymbol{\lambda},\,\xi\rangle}\,\mathrm{d}Q(\xi)=\prod_{j=1}^n \exp\bigl(\mathrm{e}^{-(\boldsymbol{A}^t\boldsymbol{\lambda})_j}\bigr).$

This time we need to minimize

  $\Sigma(\boldsymbol{\lambda},\boldsymbol{y})=\sum_{j=1}^n \mathrm{e}^{-(\boldsymbol{A}^t\boldsymbol{\lambda})_j}+\langle\boldsymbol{\lambda},\boldsymbol{y}\rangle$   (4.3)

with respect to $\boldsymbol{\lambda}\in\mathbb{R}^{N+1}$. Once the vector $\boldsymbol{\lambda}^*$ is at hand, we have

  $x_j^*=\mathrm{e}^{-(\boldsymbol{A}^t\boldsymbol{\lambda}^*)_j}\qquad\text{for } j=1,\dots,n,$   (4.4)

from which $\phi_j$ can be obtained as indicated above. To conclude we mention that, in order to avoid overflow or underflow when dealing with exponentials, it is convenient to replace the problem $\boldsymbol{A}\boldsymbol{x}=\boldsymbol{y}$ with the problem

  $\Bigl(\dfrac{1}{K}\boldsymbol{A}\Bigr)\boldsymbol{x}=\dfrac{1}{K}\boldsymbol{y}$

for some appropriate scaling factor K.
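Putting the pieces of this section together, a compact sketch of a solver for (4.2)–(4.4) might look as follows (our illustration: the convex dual is minimized here with a safeguarded Newton iteration instead of the Barzilai–Borwein gradient scheme used in the paper, and the matrix $\boldsymbol{B}$ and data $\boldsymbol{y}$ are the objects defined above):

```python
import numpy as np

# Sketch of a maxentropic solver for the discretized problem (4.2)-(4.4).
# B is the (N+1) x n matrix B[i, j] = q_i(a_j)/n (last row enforces the
# normalization of phi) and y = (U_1, ..., U_N, 1).  Writing phi = T x with
# T lower triangular of ones turns monotonicity of phi into x >= 0, and MEM
# reduces to minimizing the convex dual
#     Sigma(lam) = sum_j exp(-(A^t lam)_j) + <lam, y>,   A = B T.
# The paper minimizes such duals with a Barzilai-Borwein gradient scheme;
# for this small sketch a safeguarded Newton iteration is used instead.
def mem_distortion(B, y, iters=80):
    d, n = B.shape
    T = np.tril(np.ones((n, n)))
    A = B @ T
    lam = np.zeros(d)

    def sigma(l):
        with np.errstate(over="ignore"):
            return np.sum(np.exp(-(A.T @ l))) + l @ y

    for _ in range(iters):
        x = np.exp(-(A.T @ lam))
        g = y - A @ x                     # gradient of Sigma at lam
        if np.linalg.norm(g) < 1e-12:
            break
        H = A @ (x[:, None] * A.T)        # Hessian  A diag(x) A^t  (pos. def.)
        step = np.linalg.solve(H, g)
        t = 1.0
        while sigma(lam - t * step) > sigma(lam) and t > 1e-12:
            t *= 0.5                      # backtrack to keep Sigma decreasing
        lam = lam - t * step

    x = np.exp(-(A.T @ lam))              # (4.4)
    return T @ x                          # distortion density phi on the grid
```

By construction the returned $\phi$ is nonnegative and increasing, and the fitted prices $\boldsymbol{B}\boldsymbol{\phi}$ reproduce $\boldsymbol{y}$ whenever an increasing solution exists.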

5 Numerical results

We shall first consider an example of capital allocation and then three examples of distortion function determination, including examples of how the two procedures tie up. We suppose that assets are distributed according to either a lognormal, a Gamma or a Pareto density. The first two are quite common in the insurance industry; see Kiche et al (2019) and García et al (2014). Consider also the work by Park and Kim (2016), Jakata and Chikobvu (2019), Lee and Kim (2019) and Charpentier and Flachaire (2019), in which the Pareto (standard or generalized) distribution is used in risk management and in applications of extreme value theory for estimating large losses and tail risk. Finally, consider the application of distorted risk measures to systemic risk developed by Dhaene et al (2019).

The parameters were chosen arbitrarily. The determination of the upper bounds is such that they are larger than the corresponding lower bounds. They reflect a possible choice by a risk manager. The different cases considered in Section 5.2 are chosen to illustrate the potential of the MEM to reproduce prices of statistically diverse risks by means of one single distorted risk measure.

Keep in mind that the examples are only chosen to appear realistic, and they do not correspond to any real bank or corporation.

5.1 Problem 1: capital allocation

For the first example we shall consider a situation in which the stand-alone risks are known and modeled by lognormal random variables with parameters $(\mu,\sigma)$, that is, $X=\exp(\mu+\sigma\zeta)$, where $\zeta\sim N(0,1)$. We suppose that, to determine a range for the capital allocation problem, the management considers $L=E[X]$ as the lower bound for all risks, and either $U=\mathrm{VaR}_{0.95}(X)$, $U=\mathrm{VaR}_{0.99}(X)$, $U=\mathrm{TVaR}_{0.95}(X)$ or $U=\mathrm{TVaR}_{0.975}(X)$ as the upper bound of the acceptable risk. For lognormal random variables, the values of $U$ are obtained from their analytical expressions: namely,

  either $U=\mathrm{VaR}_\alpha(X)=\mathrm{e}^{\mu+z_\alpha\sigma}$ or $U=\mathrm{TVaR}_\alpha(X)=\Bigl[\dfrac{\Phi(\sigma-z_\alpha)}{1-\alpha}\Bigr]\mathrm{e}^{\mu+\sigma^2/2}.$

Here $z_\alpha$ denotes the $\alpha$-quantile of $N(0,1)$ and $\Phi$ denotes the distribution function of $N(0,1)$. We remark that the choice of variables to model the input is arbitrary. Also, the risk measures chosen to estimate the upper bound are standard in the financial industry and easy to estimate from empirical data.
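The closed forms above are straightforward to evaluate; a sketch using only the Python standard library:

```python
import math
from statistics import NormalDist

# Closed-form bounds for a lognormal risk X = exp(mu + sigma*Z), Z ~ N(0,1),
# of the kind used to build Table 1 (a sketch; the parameter values used
# below are illustrative choices taken from panel (a) of that table).
def lognormal_bounds(mu, sigma, alpha):
    nd = NormalDist()
    z = nd.inv_cdf(alpha)                             # z_alpha
    mean = math.exp(mu + 0.5 * sigma ** 2)            # L = E[X]
    var_a = math.exp(mu + z * sigma)                  # VaR_alpha(X)
    tvar_a = nd.cdf(sigma - z) / (1.0 - alpha) * mean # TVaR_alpha(X)
    return mean, var_a, tvar_a
```

For $\mu=1.0$, $\sigma=0.1$ this gives $E[X]\approx 2.73$ and $\mathrm{VaR}_{0.95}\approx 3.20$, matching the first column of Table 1(a).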

The input information for the numerical work is summarized in Table 1, in which we display the parameters of the distribution plus the lower and upper bounds of the different risks. In order to study the effect of the coefficient of variation (CV), we organize the data as follows. In panel (a) we fixed σ=0.1 (which in this case means fixing the CV) and let μ vary as indicated, and in panel (b) we fixed μ and let the σ (or CV) vary. This was to give us an idea of what affects the bounds of the risks.

Table 1: Input data for the capital allocation problem.

(a) σ=0.10

               X1     X2     X3     X4      Σ
  μ           1.00   1.50   2.00   2.50
  L           2.73   4.50   7.43  12.24  26.91
  VaR0.95     3.20   5.28   8.71  14.36  31.56
  VaR0.99     3.43   5.66   9.32  15.37  33.78
  TVaR0.95    3.50   5.77   9.51  15.57  34.44
  TVaR0.975   3.59   5.93   9.77  16.11  35.40
  CV          0.10   0.10   0.10   0.10

(b) μ=1.00

               X1     X2     X3     X4      Σ
  σ           0.30   0.50   0.70   1.00
  L           2.84   3.08   3.47   4.48  11.03
  VaR0.95     4.45   6.17   8.86  14.08  33.32
  VaR0.99     5.46   8.70  13.85  27.84  55.85
  TVaR0.95    5.64   8.81  13.30  23.26  51.01
  TVaR0.975   6.12  10.07  16.02  30.21  62.43
  CV          0.31   0.53   0.80   1.31

In both panels, the columns labeled Σ contain the sum of the lower or upper bounds of the risk capital. Even though the computations were carried out at high precision, we only report figures to two decimal places so that the table fits on the page. The row labeled L contains the mean of each risk, and the rows labeled VaR and TVaR list possible upper bounds for each risk at the specified confidence levels. Knowledge of the sums of the lower and upper bounds allows management to choose the total capital to be allocated as a value between them. To solve the capital allocation problem numerically, for each risk level we considered two possible values of the total capital to be allocated, lying between the sum of the lower bounds and the sum of the upper bounds. The results obtained are shown in Table 2.

Table 2: Allocated risk capitals.

(a) σ=0.10 (constant CV)

  Bound         K      X1    X2     X3     X4
  VaR0.95     29.50   2.98  4.92   8.13  13.47
  VaR0.95     31.00   3.05  5.11   8.55  14.29
  VaR0.99     29.50   3.05  5.00   8.17  13.27
  VaR0.99     31.00   3.10  5.14   8.53  14.23
  TVaR0.95    32.00   3.16  5.26   8.79  14.79
  TVaR0.95    34.00   3.32  5.61   9.42  15.66
  TVaR0.975   32.00   3.19  5.29   8.80  14.72
  TVaR0.975   34.00   3.28  5.53   9.36  15.83

(b) μ=1.00 (varying CV)

  Bound         K      X1    X2     X3     X4
  VaR0.95     31.00   3.88  5.43   7.90  13.79
  VaR0.95     32.00   3.98  5.70   8.30  14.03
  VaR0.99     31.00   4.11  5.71   8.05  13.12
  VaR0.99     32.00   4.12  5.76   8.21  13.91
  TVaR0.95    49.00   4.84  8.03  12.91  23.22
  TVaR0.95    50.00   5.07  8.46  13.21  23.26
  TVaR0.975   49.00   4.63  7.25  11.86  25.26
  TVaR0.975   50.00   4.65  7.33  12.09  25.93

The results in Table 2 are organized as follows. In each panel, the lower bound for each risk is the value of L listed in Table 1. The first four rows of each panel correspond to a total risk capital larger than the sum of the lower bounds but smaller than the upper bound obtained by summing the individual values of the VaR, and the last four rows correspond to a total risk capital larger than the sum of the lower bounds but smaller than the sum of the individual values of the TVaR. The two panels display the effect of a constant or a variable CV. They also show the effect of varying the total risk capital, listed in the column labeled K in each panel of Table 2.

We add that when the total capital lies exactly at the midpoint between its minimum and maximum, the resulting allocation to each risk is also at the midpoint between $L_i$ and $U_i$, as must be clear after a glance at (3.2). Also, the method becomes unstable when the total capital to be allocated is chosen to coincide with either of its extreme values: when the constraint is at the boundary of the allowed values, the minimization of (2.5) becomes numerically unstable. This is explained in Appendix 2 online.

5.2 Problem 2: determining the distortion function from given risk prices

Recall that the aim of this section is to guess how experts price risk. As mentioned at the beginning of Section 1.1.2, the upper bound of the range for the capital allocation problem may be considered to be its stand-alone risk price. All that the analyst knows is the statistical distribution of the different risks, and the risk manager assumes that the expert uses some distortion function to price the risks. The problem is to determine that distortion function.

There are two important reasons behind this essential assumption. On the theoretical side, the distorted risk measures are a large and flexible class of coherent measures, and on the practical side, they lead to a problem that can be solved by the MEM methodology.

Once the distortion function is obtained, we can use it to price other risks. Moreover, we can use the resulting prices to determine ranges for the capital to be assigned to a new collection of risks, using the methodology proposed in Section 3 and illustrated in the previous example.

For the computations that we describe below, we shall consider the risks to be distributed according to

  (a) generalized Pareto with density

      $f(x;k,\sigma,\theta)=\dfrac{1}{\sigma}\Bigl(1+\dfrac{k(x-\theta)}{\sigma}\Bigr)^{-(1+1/k)};$

  (b) Gamma with density

      $f(x;a,b)=\dfrac{1}{b^a\Gamma(a)}\,x^{a-1}\mathrm{e}^{-x/b};$ and

  (c) lognormal with density

      $f(x;\mu,\sigma)=\dfrac{1}{x\sigma\sqrt{2\pi}}\exp\Bigl(-\dfrac{(\ln x-\mu)^2}{2\sigma^2}\Bigr).$

Before describing the numerical examples, we mention that in all the numerical experiments below we used partitions of $[0,1]$ of size $n\in\{20,50,100,200\}$. All solutions look alike and we report only the cases $n=50$ and $n=100$. Recall that for this we apply MEM to solve ill-posed problems consisting of a handful of equations (one for each of the four or six risks, plus the normalization constraint) in 50 or 100 unknowns. In all the plots displayed in this section, the dashed line shows the true distortion function and the dotted line shows the reconstructed (estimated) distortion function. Of course, in practice the original function is not known; the procedure is carried out here to illustrate the performance of MEM.

5.2.1 Different risk distributions

For the first numerical example we consider six risks, characterized by the distributions specified in Table 3, and, as has been said, the analyst wants to determine the distortion function used by the expert to value the risks. The relationship between the distortion function and the risk prices used as input for MEM was explained in Section 4, and a few remarks about distortion functions are presented in Appendix 1 online.

Table 3: Risk densities and their parameters.

                 k     σ     θ    CV
  Pareto 1     0.25  1.00  1.00  1.52
  Pareto 2     0.25  2.00  20    3.04

                 a     b    CV
  Gamma 1      1.50  2.50  0.82
  Gamma 2      1.00  2.00  1.00

                 μ     σ    CV
  Lognormal 1  0.50  0.40  0.42
  Lognormal 2  1.00  0.70  0.80

We suppose the expert used a proportional hazard (PH) distortion function (g(u)=u1/γ) with γ=1.5 to compute the risk prices listed in the first row of Table 4. In the second row we list the prices computed with the distortion function obtained by applying MEM.

Table 4: Given and determined prices.

                     X1      X2      X3      X4      X5      X6
  True price       2.5953  5.2107  4.7650  2.4171  2.0309  4.0483
  Estimated price  2.5929  5.2114  4.7647  2.4181  2.0314  4.0483

As seen in Table 4, the agreement between the input data and the predicted prices is good up to the third decimal place. In Figure 1 we display the original (dashed line) and the numerically reconstructed distortion functions (dotted line) for two different mesh sizes.
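As an aside (our illustration, using an exponential risk that is not among the paper's examples), the PH distortion admits a closed-form check: for $X\sim\mathrm{Exp}(\theta)$ the PH-distorted price $\int_0^\infty g(S(x))\,\mathrm{d}x$ equals $\gamma\theta$ exactly, and the spectral form (1.2) with $\phi(u)=g'(1-u)=(1/\gamma)(1-u)^{1/\gamma-1}$ gives the same number:

```python
import math

# Illustrative check (the exponential risk is not one of the paper's
# examples): under the PH distortion g(u) = u**(1/gamma), the price of an
# Exp(theta) risk is exactly gamma*theta.  We evaluate it two ways:
#   (i)  tail integral   rho = int_0^inf g(S(x)) dx,  S(x) = exp(-x/theta);
#   (ii) spectral form   rho = int_0^1 phi(u) VaR_u du,  phi(u) = g'(1-u).
def ph_price_tail(theta, gamma, n=200_000, xmax_mult=60.0):
    xmax = xmax_mult * gamma * theta
    h = xmax / n
    # midpoint rule; g(S(x)) = exp(-x/(gamma*theta)) for the exponential
    return h * sum(math.exp(-(j + 0.5) * h / (gamma * theta)) for j in range(n))

def ph_price_spectral(theta, gamma, n=200_000):
    total = 0.0
    for j in range(n):
        u = (j + 0.5) / n
        phi = (1.0 / gamma) * (1.0 - u) ** (1.0 / gamma - 1.0)
        q = -theta * math.log(1.0 - u)   # VaR_u of Exp(theta)
        total += phi * q
    return total / n
```

The spectral integrand has an integrable singularity at $u=1$, so the midpoint rule converges slowly there; the tail-integral form is smooth and converges quickly.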

Figure 1: True and estimated distortion functions. (a) n=50. (b) n=100.

Now, once we have the distortion function, we can price other risks. To recall, the interpretation is the following: we use the distortion function just computed to compare the risk prices of a new set of risks with the prices that the hired expert provides us with (supposing, of course, that they use the same methodology as before). So, let us consider the risks described in Table 5.

Table 5: Risk densities and their parameters.

                 k     σ     θ    CV
  Pareto 1     0.30  1.30  2.60  1.93
  Pareto 2     0.30  2.50  5.00  3.70

                 a     b    CV
  Gamma 1      1.25  2.25  0.89
  Gamma 2      1.75  2.75  0.76

                 μ     σ    CV
  Lognormal 1  0.70  0.60  0.66
  Lognormal 2  1.00  0.70  1.12

Given these data, we compute the risk prices using the exact (original) distortion function (that is, the one that the expert would use) and the distortion function determined numerically from the previous data set. In the first row of Table 6 we list the prices that the expert would provide us with, and in the second we list the prices computed with the maxentropic distortion function determined previously. We emphasize that this is an indirect performance test: we know the true prices because we know the distortion function that the expert uses.

Table 6: A robustness test: given and determined prices.

                     X1      X2      X3      X4      X5      X6       Σ
  True price       5.2027  9.2242  3.5862  5.6455  2.8779  5.7918  32.3283
  Estimated price  5.2234  9.2281  3.5667  5.6383  2.8749  5.7981  32.3295

Keeping in mind that we are using six values of an integral to reconstruct at least 50 points of a density, the agreement is quite good.
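The reconstruction step behind this comparison is the generalized moment problem: recover a density on [0,1] from a handful of its generalized moments. The sketch below is a hedged illustration, assuming power moments as data and a plain Newton iteration on the convex dual (the paper cites the Barzilai–Borwein gradient scheme instead, and its constraint functions are determined by the risk distributions, not monomials). The test data are the first three power moments of the PH distortion density g'(u) = (2/3)u^{-1/3} for γ = 1.5, which are 2/(3j+2).

```python
import numpy as np

def maxent_density(features, moments, n=400, iters=60, ridge=1e-10):
    """Maximum entropy reconstruction of a density rho on [0,1] from
    moments m_j = int f_j(u) rho(u) du.  The maxent solution has the form
    rho(u) = exp(-sum_j lam_j f_j(u)) / Z; the multipliers lam solve a
    smooth convex dual problem, handled here by Newton iteration."""
    u = (np.arange(n) + 0.5) / n                 # midpoint grid on [0,1]
    F = np.vstack([f(u) for f in features])      # feature values, shape (m, n)
    m = np.asarray(moments, dtype=float)
    lam = np.zeros(len(m))
    for _ in range(iters):
        a = -(lam @ F)
        w = np.exp(a - a.max())                  # shift-stable exponentials
        rho = w / w.mean()                       # normalized: integral = 1
        Ef = (F * rho).mean(axis=1)              # current generalized moments
        C = (F * rho) @ F.T / n - np.outer(Ef, Ef)  # dual Hessian = Cov(f)
        lam -= np.linalg.solve(C + ridge * np.eye(len(m)), m - Ef)
    return u, rho, Ef

# Moments of g'(u) = (2/3) u**(-1/3) (PH distortion, gamma = 1.5)
# against u, u**2, u**3 are 2/(3j+2): 2/5, 1/4, 2/11.
u, rho, Ef = maxent_density([lambda x: x, lambda x: x**2, lambda x: x**3],
                            [2/5, 1/4, 2/11])
```

The reconstructed distortion is then the cumulative integral of rho; with only three moments the fit is approximate, which mirrors the "six integrals for fifty points" remark above.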

To close the circle, we apply the technique proposed in the first example to solve the capital allocation problem for the new collection of risks. That is, we take their means as lower bounds for the capital allocation, and their risk prices as upper bounds (from Table 6), thought of as stand-alone risk prices. For values of the total risk capital between the sums of the lower and upper bounds, we compute the allocated capital per risk. The results are displayed in Table 7.
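The allocation step can be sketched as follows. Under maximum entropy in the mean with, say, a uniform reference measure on each interval [aᵢ, bᵢ] (an assumption made here for illustration; the paper's reference measure may differ), each allocation is xᵢ(λ) = d/dλ ln ∫_{aᵢ}^{bᵢ} e^{λx} dx, and the single multiplier λ is fixed by the budget constraint Σᵢ xᵢ(λ) = K.

```python
import math

def allocate(lower, upper, K, tol=1e-10):
    """Allocate total capital K across intervals [lower_i, upper_i] by
    bisection on the multiplier lam: each x_i(lam) is the mean of the
    exponentially tilted uniform law on [a_i, b_i], which increases
    monotonically from a_i to b_i as lam runs over the reals."""
    def x_i(lam, a, b):
        t = lam * (b - a)
        if abs(t) < 1e-12:
            return 0.5 * (a + b)          # lam = 0 gives the midpoint
        e = math.exp(min(t, 700.0))       # overflow guard
        return (b * e - a) / (e - 1.0) - 1.0 / lam

    def total(lam):
        return sum(x_i(lam, a, b) for a, b in zip(lower, upper))

    lo, hi = -1.0, 1.0                    # bracket the budget equation
    while total(lo) > K:
        lo *= 2.0
    while total(hi) < K:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) < K else (lo, mid)
    lam = 0.5 * (lo + hi)
    return [x_i(lam, a, b) for a, b in zip(lower, upper)]

# Bounds from Table 7 (means and stand-alone prices), total capital 31
lower = [4.4571, 8.5714, 2.8125, 4.8125, 2.4109, 4.5042]
upper = [5.2234, 9.2281, 3.5667, 5.6383, 2.8749, 5.7981]
x = allocate(lower, upper, 31.0)
```

Each xᵢ stays strictly inside its interval and the allocations add up to the budget; the numbers need not coincide with Table 7 unless the reference measure matches the paper's choice.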

Table 7: Capital allocation bounds and the capital allocated for three values of the total risk capital.
  Pareto 1 Pareto 2 Gamma 1 Gamma 2 Lognormal 1 Lognormal 2 Total
Lower bound 4.4571 8.5714 2.8125 4.8125 2.4109 4.5042 27.5686
Upper bound 5.2234 9.2281 3.5667 5.6383 2.8749 5.7981 32.3295
Allocated capital (total 31) 4.9939 9.0144 3.3388 5.4023 2.7014 5.5493 31
Allocated capital (total 31.5) 5.0763 9.0798 3.4193 5.4936 2.7380 5.6931 31.5
Allocated capital (total 32) 5.1674 9.1613 3.5096 5.5881 2.7930 5.7806 32
Table 8: Risk densities and their parameters.
Distribution ξ σ μ CV
Pareto 1 0.25 0.30 0.30 0.46
Pareto 2 0.25 0.30 2.50 0.76
Pareto 3 0.50 0.75 0.75 0.87
Pareto 4 0.20 21.00 1.00 1.16
Table 9: A consistency test: original versus reconstructed prices (case 1).
True price (Pareto) (PH) 0.791 1.4789 2.1313 2.6408
Estimated price (Pareto) (PH) 0.7916 1.4796 2.13 2.6413
True price (Pareto) (Wang) 1.2555 2.0055 2.8704 3.7334
Estimated price (Pareto) (Wang) 1.2551 2.0066 2.8684 3.7345
Table 10: Risk densities and their parameters (case 1).
Distribution ξ σ μ CV
Pareto 1 0.15 1.00 1.00 0.91
Pareto 2 0.40 0.50 0.50 2.60
Pareto 3 0.20 2.00 1.50 1.16
Pareto 4 0.30 1.75 1.50 4.46

5.2.2 All risks are from the same family of distributions

To better examine the roles of the statistical nature of the risks and of the distortion function, we consider three sets of risks, with Pareto, Gamma and lognormal densities, and suppose we have two experts, who price using two distortion functions: a proportional hazard distortion and a Wang distortion. The proportional hazard distortion function is as above, and the Wang distortion is g(u) = Φ(Φ^{-1}(u) + λ) with λ = 0.05, where Φ denotes the standard normal distribution function. See Wang (2000) for more on this class of distortion functions. We shall use these as data to determine the distortions and examine how they price related risks.
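A useful check on the Wang distortion is a known closed form (Wang 2000): applying g(u) = Φ(Φ^{-1}(u) + λ) to the survival function of a lognormal(μ, σ) risk yields the survival function of a lognormal(μ + λσ, σ) risk, so the distorted price is exp(μ + λσ + σ²/2). The sketch below is our own illustration of this fact, not code from the paper.

```python
import math
from statistics import NormalDist

N = NormalDist()  # standard normal: N.cdf is Phi, N.inv_cdf is Phi^{-1}

def wang_price(survival, lam, upper, n=50_000):
    """Distorted price rho_g(X): midpoint-rule integral over [0, upper]
    of g(S(x)) with the Wang distortion g(u) = Phi(Phi^{-1}(u) + lam)."""
    h = upper / n
    total = 0.0
    for k in range(n):
        s = survival((k + 0.5) * h)
        if s >= 1.0:
            total += h                    # g(1) = 1
        elif s > 0.0:
            total += N.cdf(N.inv_cdf(s) + lam) * h
    return total

# Lognormal(0, 0.5) risk with lam = 0.05: closed-form price exp(0.025 + 0.125)
mu, sigma, lam = 0.0, 0.5, 0.05
price = wang_price(lambda x: 1.0 - N.cdf((math.log(x) - mu) / sigma), lam, 60.0)
print(abs(price - math.exp(mu + lam * sigma + sigma**2 / 2)) < 1e-3)  # True
```

The same quadrature applies verbatim to the Gamma and Pareto risks of this section, for which no closed form is available.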

To examine the effect of the statistical nature of the risks, we shall suppose that all risks in the data set have the same CVs, but that the new risks to be priced have different CVs. In all cases we use a partition of size n=100 to discretize [0,1], and we split the example into cases according to the nature of the risk.

For typographical reasons we organize the description into two cases: first we consider the Pareto case in detail, and then the Gamma and lognormal cases. The descriptions of the tables and plots for each case are analogous.

Case 1 (Pareto risks) The Pareto risks have the heaviest tails. The first data set is described in Table 8.

In the first and third rows of Table 9, we list the prices of the risks that are used as inputs for the application of MEM to determine the distortion functions. The distortion functions are shown in Figure 2.

Figure 2: True and estimated distortion functions. (a) Pareto risk and PH distortion. (b) Pareto risk and Wang distortion.

This time the reconstructed distortion functions (dotted lines) seem to track the true distortion (dashed lines) more closely. This may be because the distributions of all the risks belong to the same family. The consistency test this time yields the results displayed in Table 9.

In the even rows of Table 9 we list the prices determined using the reconstructed distortion function. The first consistency test is passed reasonably well. To carry out the second, we again used the original distortion function to compute the true prices and then we used the reconstructed function to compute the predicted prices. In Table 10 we display the data about the new risks to be priced, and in Table 11 we display two sets of prices.

To finish the list of tables for case 1, in Table 11 we display the predicted prices for a new collection of risks, along with the prices of the same risks calculated using the original (true) distortion function.

Table 11: A robustness test: original versus predicted prices (case 1).
True price (Pareto) (PH) 2.4837 1.5951 2.7469 4.5039
Estimated price (Pareto) (PH) 2.4908 1.5977 2.7547 4.4930
True price (Pareto) (Wang) 3.6440 2.6218 3.2840 6.7449
Estimated price (Pareto) (Wang) 3.6410 2.6070 3.2922 6.7125

Case 2 (Gamma and lognormal risks) The data for these examples are given in Table 12.

Table 12: Risk densities and their parameters (case 2).
Distribution α β CV
Gamma 1 1.25 1.50 0.46
Gamma 2 1.50 1.75 0.76
Gamma 3 2.75 2.00 0.87
Gamma 4 2.25 2.50 1.16
    μ σ CV
Lognormal 1 0.50 0.44 0.476
Lognormal 2 1.00 0.68 0.76
Lognormal 3 1.35 0.75 0.87
Lognormal 4 1.50 0.93 1.16
Table 13: A consistency test: original versus reconstructed prices (case 2).
True price (Gamma) (PH) 10.9353 4.4893 4.4139 3.0414
Estimated price (Gamma) (PH) 10.9364 4.4881 4.4123 3.0430
True price (Gamma) (Wang) 12.3043 6.8233 6.1334 4.5165
Estimated price (Gamma) (Wang) 12.3043 6.8233 6.1334 4.5165
True price (Lognormal) (PH) 2.0670 3.9276 5.1457 10.3495
Estimated price (Lognormal) (PH) 2.0689 3.9271 5.1453 10.3496
True price (Lognormal) (Wang) 2.4675 6.1616 7.9963 14.9900
Estimated price (Lognormal) (Wang) 2.4662 6.1619 7.9968 14.9898
Table 14: Risk densities and their parameters (case 2).
Distribution α β CV
Gamma 1 1.10 1.75 1.00
Gamma 2 1.20 2.00 0.92
Gamma 3 1.50 2.25 0.82
Gamma 4 2.00 2.50 0.71
    μ σ CV
Lognormal 1 0.70 0.30 0.31
Lognormal 2 1.20 0.70 0.80
Lognormal 3 1.40 0.90 1.12
Lognormal 4 1.60 1.10 1.53
Table 15: A robustness test: original versus predicted prices (case 2).
True price (Gamma) (PH) 2.3637 2.9110 4.0784 5.901
Estimated price (Gamma) (PH) 2.3505 2.9138 4.0853 5.8917
True price (Gamma) (Wang) 3.1863 4.8174 6.2996 8.6080
Estimated price (Gamma) (Wang) 3.1835 4.8043 6.2970 8.6115
True price (Lognormal) (PH) 2.260 5.3935 7.5330 11.4069
Estimated price (Lognormal) (PH) 2.2670 5.3595 7.4887 11.3954
True price (Lognormal) (Wang) 2.4307 7.5638 14.1933 23.2203
Estimated price (Lognormal) (Wang) 2.4298 7.5450 14.1145 22.9311
Figure 3: True and estimated distortion functions. (a) Gamma risk and PH distortion. (b) Gamma risk and Wang distortion. (c) Lognormal risk and PH distortion. (d) Lognormal risk and Wang distortion.

These yield the reconstructed distortion functions displayed in Figure 3. The dashed line corresponds to the original distortion and the dotted line to the reconstructed distortion.

The first consistency test yields the results in Table 13. Again, the odd rows denote the prices used as data and the even rows denote the predicted prices, and again, this measures the quality of the reconstruction.

To double-check the applicability of the reconstructed distortion functions, we consider the new data set with variable CVs described in Table 14.

The second consistency test applied to this new data set yields the results in Table 15, where the description of the entries is the same as before: the odd rows contain the original (true) prices and the even rows contain the predicted prices.

Before concluding this section, we mention again that the test of robustness of the reconstructed distortion function, in which the prices of new risks are computed with the reconstructed density, cannot be performed in real life because the true prices are actually unknown.

6 Conclusion

The relevance of any numerical approach to solving the capital allocation problem is that it fills an epistemological and methodological gap: in principle, there are no prescribed rules for choosing a risk measure or a capital allocation rule subordinate to it. Nor is there an a priori way to compare the results of different choices of risk measure, or of the capital allocation principles subordinated to them.

The maxentropic methodology that we propose here can be used to solve two intertwined ill-posed inverse problems. Mathematically, one of them consists of determining a sequence of numbers in preassigned ranges that add up to a given number. The other is a generalized moment problem, which consists of determining a function on [0,1] from the knowledge of its integrals against a few given functions.

These correspond to two problems in risk management. The first is the capital allocation problem; the second consists of determining a distortion function from given prices of risk. The distortion function obtained by solving the second problem can be used to determine allocation intervals for different collections of risks in the capital allocation problem.

The relevance of our approach is that the methodology we propose to solve both inverse problems is model independent; therefore, it does not involve calibration of parameters and it is robust, ie, continuous with respect to the data. Another interesting feature of our approach is that it does not impose theoretical constraints on the capital allocation problem.

To conclude, we emphasize once more that the quality of the solution to Problem 2 can only be tested as we tested it: comparing the input risk prices (which we know as part of the numerical modeling) with the risk price determined by the reconstructed distortion function.

Declaration of interest

The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.

References

  • Barzilai, J., and Borwein, J. M. (1988). Two-point step size gradient methods. IMA Journal of Numerical Analysis 8, 141–148 (https://doi.org/10.1093/imanum/8.1.141).
  • Bellini, F., and Caperdoni, C. (2007). Coherent distortion risk measures and higher-order stochastic dominances. North American Actuarial Journal 11, 35–42 (https://doi.org/10.1080/10920277.2007.10597446).
  • Borwein, J. M., and Lewis, A. S. (2000). Convex Analysis and Nonlinear Optimization. CMS Books in Mathematics. Springer (https://doi.org/10.1007/978-1-4757-9859-3).
  • Bühlmann, H. (1984). The general economic premium principle. ASTIN Bulletin 14, 13–21 (https://doi.org/10.1017/S0515036100004773).
  • Charpentier, A., and Flachaire, E. (2019). Pareto models for risk management. Preprint (arXiv:1912.11736v1).
  • Chorafas, D. N. (2004). Economic Capital Allocation with Basel II. Elsevier/Butterworth-Heinemann, Burlington, MA (https://doi.org/10.1016/B978-075066182-9.50010-0).
  • Denuit, M., Dhaene, J., Goovaerts, M., Kaas, R., and Laeven, R. (2006). Risk measurement with equivalent utility principles. Statistics and Decisions 24, 1–25 (https://doi.org/10.1524/stnd.2006.24.1.1).
  • Dhaene, J., Laeven, R. J. A., and Zhang, Y. (2019). Systemic risk: conditional distortion risk measures. Preprint (arXiv:1901.04689v2).
  • Föllmer, H., and Schied, A. (2016). Stochastic Finance. De Gruyter, Berlin (https://doi.org/10.1515/9783110463453).
  • Furman, E., and Zitikis, R. (2008). Weighted risk capital allocations. Insurance: Mathematics and Economics 43, 263–269 (https://doi.org/10.1016/j.insmatheco.2008.07.003).
  • Furman, E., and Zitikis, R. (2009). Weighted pricing functionals with applications to insurance: an overview. North American Actuarial Journal 13, 483–495 (https://doi.org/10.1080/10920277.2009.10597570).
  • Gerber, H. U. (1979). An Introduction to Mathematical Risk Theory. University of Pennsylvania Press, Philadelphia, PA.
  • Goovaerts, M., de Vylder, F., and Hazendonck, J. (1984). Insurance Premiums: Theory and Practice. North-Holland, Amsterdam.
  • Gründl, H., and Schmeiser, H. (2007). Capital allocation for insurance companies: what good is it? Journal of Risk and Insurance 74, 301–317 (https://doi.org/10.1111/j.1539-6975.2007.00214.x).
  • Gzyl, H., and Mayoral, S. (2008). Determination of risk pricing measures from market prices of risk. Working Paper 03/07, School of Economics and Business Administration, University of Navarra, Spain.
  • Hesselager, O., and Anderson, U. (2002). Risk sharing and capital allocation. Working Paper, Tryg Insurance, Denmark. URL: http://www.soa.org/globalassets/assets/library/research/actuarial-research-clearing-house/2000-09/2003/arch-1/arch03v37n1-16.pdf.
  • Jakata, O., and Chikobvu, D. (2019). Modeling extreme risk of the South African Financial Index (JS80) using generalized Pareto distribution. Journal of Economic and Financial Sciences 12(1), paper 407 (https://doi.org/10.4102/jef.v12i1.407).
  • Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review 106, 620–630 (https://doi.org/10.1103/PhysRev.106.620).
  • Kaas, R., Goovaerts, M., Dhaene, J., and Denuit, M. (2008). Modern Actuarial Risk Theory: Using R. Springer (https://doi.org/10.1007/978-3-540-70998-5).
  • Kalkbrener, M. (2005). An axiomatic approach to capital allocation. Mathematical Finance 15(3), 425–437 (https://doi.org/10.1111/j.1467-9965.2005.00227.x).
  • Kamps, U. (1998). On a class of premium principles including the Esscher principle. Scandinavian Actuarial Journal 1, 75–80 (https://doi.org/10.1080/03461238.1998.10413993).
  • Karabey, U. (2012). Risk capital allocation and risk quantification in insurance companies. PhD Thesis, Heriot-Watt University, Edinburgh.
  • Kiche, J., Ngesa, O., and Orwa, G. (2019). On generalized Gamma distribution and its application to survival data. International Journal of Statistics and Probability 8, 85–102 (https://doi.org/10.5539/ijsp.v8n5p85).
  • Kusuoka, S. (2001). On law invariant coherent risk measures. In Advances in Mathematical Economics, Kusuoka, S., and Maruyama, T. (eds), Volume 3, pp. 88–95. Springer (https://doi.org/10.1007/978-4-431-67891-5_4).
  • Lee, S., and Kim, J. H. T. (2019). Exponentiated generalized Pareto distribution: properties and applications towards extreme value theory. Communications in Statistics: Theory and Methods 48, 2014–2038 (https://doi.org/10.1080/03610926.2018.1441418).
  • McLeish, D., and Reesor, R. M. (2003). Risk, entropy and the transformation of distributions. North American Actuarial Journal 7, 128–144.
  • Müller, H. H. (1987). Economic premium principles in insurance and the capital asset pricing model. ASTIN Bulletin 17, 141–150 (https://doi.org/10.2143/AST.17.2.2014969).
  • Myers, S. C., and Read, J. A. (2001). Capital allocation for insurance companies. Journal of Risk and Insurance 68, 545–580 (https://doi.org/10.2307/2691539).
  • Panjer, H., and Jing, J. (2001). Solvency and capital allocation. Report 01-14, Institute of Insurance and Pension Research, University of Waterloo.
  • Park, M. H., and Kim, J. H. T. (2016). Estimating extreme tail risk measures with generalized Pareto distribution. Computational Statistics and Data Analysis 98, 91–104.
  • Tasche, D. (2004). Allocating portfolio economic capital to sub-portfolios. In Economic Capital: A Practitioner's Guide, Dev, A. (ed), pp. 275–302. Risk Books, London.
  • Urban, M., Dietrich, J., Klüppelberg, C., and Stölling, R. (2003). Allocation of risk capital to insurance companies. Blätter der DGVFM 26, 389–406 (https://doi.org/10.1007/BF02808388).
  • Venter, G. G. (2004). Capital allocation survey with commentary. North American Actuarial Journal 8(2), 96–107 (https://doi.org/10.1080/10920277.2004.10596139).
  • Wang, S. S. (1996). Premium calculation by transforming the layer premium density. ASTIN Bulletin 26, 71–92 (https://doi.org/10.2143/AST.26.1.563234).
  • Wang, S. S. (2000). A class of distortion operators for pricing financial and insurance risks. Journal of Risk and Insurance 67, 15–36 (https://doi.org/10.2307/253675).
  • Young, V. R. (2004). Premium principles. In Encyclopedia of Actuarial Science, Teugels, J. L., and Sundt, B. (eds). Wiley (https://doi.org/10.1002/9780470012505.tap027).
  • Zhou, M., Dhaene, J., and Yao, J. (2018). An approximation method for risk aggregations and capital allocation based on additive risk factor models. Insurance: Mathematics and Economics 79, 92–100 (https://doi.org/10.1016/j.insmatheco.2018.01.002).
