Value at Risk
ABSTRACT
Risk management has become increasingly important for financial institutions and regulators after the several financial crises we have faced, especially the one underway now. Value at Risk (VaR) has been a simple and widely accepted tool for measuring and managing risk over the last 15 years, since JP Morgan published its RiskMetrics, but recently more and more analysts have doubted its usefulness and efficiency during financial crises.
In this study we employ four widely used approaches to estimate VaR for three different financial assets at three different confidence levels and test their performance. The approaches used are the Historical Simulation approach, the Moving Average approach, the GARCH Normal approach and the GARCH Student t approach; the three financial assets are the S&P 500, Brent oil and the United States three-month Treasury bill; and the three confidence levels are 95%, 99% and 99.9%. This paper has two main purposes: the first is to test the performance of the four approaches to see which is superior to the others; the second is to analyze the results and try to answer whether VaR can measure and manage risk effectively, especially during a financial crisis. The data collected are the daily returns of the three financial assets from 1st June 1989 to 29th May 2009. The results show that the GARCH Student t approach is superior to the other three approaches in most cases; it is the only approach that did not underestimate risk, and in some cases it even overestimated risk. We conclude that VaR can measure and manage risk in any time period if we employ the proper approach for the proper financial asset at the proper confidence level.
Introduction
In light of the recent financial crisis, risk management has drawn very high attention from regulators and financial institutions. Both are reviewing the tools used to measure and manage risk and adopting stricter measures to control it. Value at Risk (VaR) is a simple and widely used tool for measuring and managing risk; it has been popular over the last 15 years, since JP Morgan published its RiskMetrics, but recently more and more analysts have doubted its usefulness and efficiency during financial crises. In this study we try to answer this question by testing the performance of VaR during different time periods.
In finance, the concept of risk refers to the volatility of unexpected outcomes. It includes business risk, strategic risk and financial risk. Business risk and strategic risk relate to a company's product markets or its economic and political environment; financial risk is the risk associated with financial market activities. Financial risk can be further divided into several sub-categories. First is market risk, the risk due to changes in market prices. Second is credit risk, the risk that counterparties are unable to fulfill their contractual obligations. Third is liquidity risk, the risk of being unable to meet payment obligations. Fourth is operational risk, which stems from internal staff or system failures or from external events. Fifth is legal risk, the risk arising from unlawful transactions. This paper focuses only on financial risk, and more specifically on how this type of risk can be captured by the four most commonly used methods of estimating Value at Risk (VaR), applied to three financial assets with different characteristics and at different confidence levels.
Historical Simulation Approach
Unlike parametric VaR models, the historical simulation (HS) model does not make a specific assumption about the distribution of asset returns; it is a nonparametric approach. The VaR number from historical simulation is easy to understand, so it is more readily accepted by management and the trading community. The approach assumes that the current positions will replay the record of history, and it is relatively easy to implement. In the simplest case, historical simulation applies the current portfolio weights to a time series of historical asset returns (Jorion, 1995).
There are several advantages to historical simulation. First, it is simple to implement when historical data on risk factors have already been collected in-house for daily marking to market. Second, it accounts for the fat tails present in the historical data. Third, it allows a choice of horizon for measuring VaR. It is also intuitive: users can go back in time and explain the circumstances behind the VaR measure (Best, 1998).
On the other hand, the historical simulation approach has a number of drawbacks. As the value of the portfolio changes, the percentage value changes no longer refer to the original portfolio value. One problem is that extreme percentiles are difficult to estimate precisely without a large sample of historical data. Another is that asset prices often exhibit trending behavior. One solution to the trend problem is to impose symmetry on the portfolio value distribution by taking the negatives of the profits and losses used in standard historical simulation, which doubles the data used in computing the percentiles and eliminates the trend (Holton, 1998).
The Variance-Covariance Approach
The variance-covariance approach is the simplest of the VaR methods in terms of the calculation required. Global banks normally use it to aggregate data from a large number of trading activities. It is widely used by banks with comparatively low levels of trading activity, and it was also the first VaR model to be provided in off-the-shelf computer packages. The approach is based on the assumption that financial-asset returns, and hence portfolio profits and losses, are normally distributed (Cassidy & Gizycki, 1997).
Define Rt as the matrix of market returns at time t and let Σt represent the variance-covariance matrix. A standard assumption of the variance-covariance model is that returns have zero mean, which matches standard market practice. On this point, Jackson (1997) notes that the estimation error associated with poorly determined mean estimates may reduce the efficiency of the variance-covariance matrix estimate. Because we are not considering complex derivatives, the return on a portfolio of foreign-exchange positions can be expressed as a linear combination of exchange-rate returns; the change in portfolio value is then explained through its sensitivity to each risk factor.
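For the linear portfolio just described, the variance-covariance (delta-normal) VaR can be sketched as follows. This is an illustrative sketch, not the paper's own implementation; the weights and covariance matrix in the usage below are hypothetical.

```python
import numpy as np
from statistics import NormalDist

def variance_covariance_var(weights, cov, confidence=0.99, value=1.0):
    """Delta-normal VaR of a linear portfolio: with zero-mean normal
    returns, portfolio volatility is sqrt(w' Sigma w) and
    VaR = z_alpha * sigma_p * portfolio value."""
    w = np.asarray(weights, dtype=float)
    sigma_p = np.sqrt(w @ np.asarray(cov, dtype=float) @ w)
    z = NormalDist().inv_cdf(confidence)   # e.g. about 2.33 at 99%
    return z * sigma_p * value
```

Note that for two uncorrelated assets the portfolio VaR is below the sum of the individual VaRs, reflecting diversification.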
The Fixed-weight Specification
The fixed-weight approach assumes that return variances and covariances are constant over the period. Hence it predicts that future variances and covariances are equal to the sample variances and covariances calculated over a fixed-length data history.
If return variances and covariances are constant, the unbiased and efficient estimator of the population variance-covariance matrix uses all available data, weighting each observation equally. One variant of the fixed-weight approach is the random-walk model, which restricts the past data period to just one observation (i.e. T = 1); it assumes that Σt follows a random walk and is motivated by much empirical work with asset returns suggesting that relatively old data should be ignored (Engel and Gizycki, 1998).
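The fixed-weight estimator can be sketched as below, an illustration under the zero-mean assumption stated earlier; the window length is a parameter of the user's choosing.

```python
import numpy as np

def fixed_weight_cov(returns, window):
    """Fixed-weight (equally weighted) estimate of the
    variance-covariance matrix: each of the last `window`
    observations gets the same weight, and means are taken as
    zero in line with standard market practice.
    `returns` is a T x N array of asset returns."""
    R = np.asarray(returns, dtype=float)[-window:]  # last `window` rows
    return R.T @ R / len(R)                         # zero-mean second moments
```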
Multivariate GARCH
Bollerslev (1986) described the generalized autoregressive conditional heteroscedasticity (GARCH) model, which captures volatility clustering. These models apply both autoregressive and moving-average behavior to variances and covariances.
In the multivariate case it is necessary to impose restrictions before engaging in estimation, because as the number of risk factors increases the calculation rapidly becomes intractable.
Monte Carlo Simulation
The Monte Carlo simulation method is a parametric approach in which positions can be priced using full valuation, generating random movements in risk factors from estimated parametric distributions. The Monte Carlo simulation approach proceeds in two steps.
First, the risk manager specifies a parametric stochastic process for all risk factors. Second, different price paths are simulated for all the risk factors. At each horizon considered, the portfolio is marked to market using full valuation as in the historical simulation method, that is, V*k = V(S*i,k). Hence the Monte Carlo method is similar to the historical simulation approach, except that the hypothetical changes in prices ΔSi for asset i are created by random draws from a prespecified stochastic process instead of being sampled from historical data (Sorvon, 1995).
Monte Carlo methods introduce an explicit statistical approach and apply mathematical techniques to generate a large number of possible portfolio-return outcomes. The approach takes into account events that could plausibly occur but were, in fact, not observed over the historical period. One of its main advantages is that it evaluates a richer set of events than is contained within past history. To implement the Monte Carlo method, a statistical model of the asset returns must be selected. We use the Monte Carlo method with two statistical models: a simple normal distribution and a mixture of normal distributions.
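The two-step procedure above, specialized to jointly normal risk factors and a linear portfolio, can be sketched as follows; the portfolio and parameters are illustrative assumptions, not the paper's own positions.

```python
import numpy as np

def monte_carlo_var(weights, cov, confidence=0.99, n_paths=100_000, seed=0):
    """Monte Carlo VaR under jointly normal, zero-mean risk-factor
    returns: draw scenarios from N(0, Sigma), revalue the (linear)
    portfolio on each path, and take the loss percentile."""
    rng = np.random.default_rng(seed)
    scenarios = rng.multivariate_normal(np.zeros(len(weights)),
                                        np.asarray(cov, dtype=float),
                                        n_paths)
    pnl = scenarios @ np.asarray(weights, dtype=float)  # P&L per path
    return -np.percentile(pnl, 100 * (1 - confidence))
```

For a linear portfolio this converges to the variance-covariance answer as the number of paths grows, since the same distributional assumptions are used.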
Monte Carlo Methods Using Normally-Distributed Asset Returns
The first implementation of the Monte Carlo approach applies the assumption that asset returns are normally distributed. The variance-covariance matrix is estimated using the fixed-weight variance-covariance approach, and the VaR estimate is given by the appropriate percentile of the resulting changes in portfolio value. The results should be close to those obtained from the fixed-weight variance-covariance approach, because this method uses the same distributional assumptions as the variance-covariance method.
Zangari (1996) proposed a Monte Carlo approach that makes use of a mixture of normal distributions in order to replicate the fat-tailed nature of asset returns. The assumption implies that an asset-return realization is drawn from one of two distributions: one with probability p and the other with probability (1-p). The parameters of the mixture of normals are estimated under the restriction that both distributions have zero means.
Unfortunately, Hamilton (1991) showed that this likelihood function does not have a global maximum: when one of the observations is exactly zero, the likelihood is infinite. Although Hamilton provided Bayesian solutions to this problem, our approach was to restart the estimation procedure from various starting values. Once the parameters have been estimated, the standard Monte Carlo model is used to obtain the VaR. In the mixed distribution, observations are simulated by drawing a proportion p of the observations from the first distribution and (1-p) from the second.
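Sampling from the zero-mean mixture can be sketched as below; the mixing probability and the two volatilities are illustrative assumptions. A large second volatility with a small weight (1-p) fattens the tails relative to a single normal.

```python
import numpy as np

def mixture_normal_sample(p, sigma1, sigma2, size, seed=0):
    """Draw from a two-component zero-mean normal mixture: with
    probability p the draw comes from N(0, sigma1^2), otherwise
    from N(0, sigma2^2)."""
    rng = np.random.default_rng(seed)
    pick = rng.random(size) < p              # True -> first component
    sig = np.where(pick, sigma1, sigma2)
    return rng.normal(0.0, 1.0, size) * sig
```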
The problem of non-linearity is often addressed by approximation, using a second-order Taylor series expansion. This approach brings two main problems. First, the Taylor series cannot capture all non-linearities well enough, especially for the relatively large stock-price movements that matter in a risk management setting. Second, the normality of portfolio returns is lost, and it is this normality that makes the delta model computationally efficient and easy to implement. Comparing three approximations with a full valuation model with respect to accuracy and computational time, Pritsker (1997) finds that in 25% of cases a Monte Carlo simulation using the second-order Taylor series underestimated the true VaR by an average of 10%. Assuming no limit on computational time, a full valuation model considers all non-linear relationships; such a model implements a VaR computation based on a Monte Carlo simulation, staying within the Black-Scholes framework of constant volatility and stock-price movements.
Methodology
Analytical Approach
This study takes an analytical approach, seeking to prove its results by assuming an independent reality. Arbnor & Bjerke (1994) point out that the characteristic of this approach is its cyclic nature: it can begin and end with facts, and these facts can lead to the start of a new cycle. Applied to this study, this means selecting a good model to describe the objective reality, or testing whether a model fits the objective reality well. In addition, the approach has a quantitative character and involves some complicated mathematical computation applied to the model.
Quantitative Approach
We apply the approaches we employ to estimate VaR to a large amount of empirical data, meaning that the results come from extensive testing and analysis of historical data. This shows that we are taking a quantitative approach. According to Arbnor & Bjerke (1994), the quantitative approach is much clearer about the variables and covers a far greater amount of historical data than the qualitative approach. It also assumes that the theoretical concepts can be measured. A large amount of empirical data is collected and tested to measure whether the approaches can estimate VaR precisely.
Deductive Approach
The deductive approach begins with a general concept, given rule or existing theory and then moves on to a more specific conclusion. Woolfolk (2001, p. 286) describes this approach as "drawing conclusions by applying rules or principles; logically moving from a general rule or principle to a specific solution".
In this study, we test the performance of four commonly used VaR approaches on three underlying assets with different characteristics, at different confidence levels. The purpose is to examine the models, not to create a new model for estimating VaR.
The final conclusion might support some approaches for some specific underlying assets and weaken the case for other approaches on other underlying assets at different confidence levels.
Reliability
All the empirical data used in this study can be checked in public sources, and a certain number of previous studies relate to the approaches used here to estimate VaR. Anyone can check whether the results are reliable by reproducing them and comparing them with those shown in this study; if they are not the same, the study is not reliable.
Validity
It is very important to show validity when justifying an approach or a model. If the results cannot tell the truth about reality, the results produced by the approach or model are not valid and are not meaningful. In other words, the degree of validity depends on how close we get to a true picture of the reality of a given situation.
To better show validity, it is vital to know the relation between the theory and the data. If the data continuously fit the theory, that indicates strong validity of the theory or model employed for the study; this is confirmed by Holme & Solvang (1991). In this paper, different approaches are chosen to estimate VaR based on three different assets and empirical time-series data at three confidence levels. The validity will be enhanced if the data continuously fit the approaches or models.
Estimation of VaR
In this study, four approaches are employed to estimate VaR for three different underlying assets. Ideally, the estimated VaR fits the future returns, but in practice an approach might overestimate or underestimate VaR relative to the actual returns. In the banking industry, for example, if VaR is overestimated, banks hold excessive capital to cover losses under the regulation of the Basel II accord, while if VaR is underestimated, it might fail to cover unexpected losses. This is why some American banks went bankrupt during the recent financial crisis.
The four approaches are the Historical Simulation approach, the Moving Average approach, the GARCH Normal approach and the GARCH Student t approach. The underlying assets analyzed are Brent oil, the S&P 500 and the United States three-month Treasury bill.
When using the parametric approaches to estimate VaR, we do question whether the returns of the underlying assets fit our distributional assumptions, such as the normal distribution for the moving average and GARCH normal approaches. According to Jorion (2007), economic time series are rarely normally distributed, so these parametric approaches will perform less efficiently the further the underlying assets are from normality.
Historical Simulation Approach
Estimating VaR with the historical simulation approach involves no complicated mathematical calculation, but it requires a lot of historical data. As presented in Chapter 2, the right window size is critical: if the empirical window is too short, the VaR may vary highly, while a longer window would produce a better estimate, although the older empirical data may have low relevance for future returns.
The first task in this approach is to select an empirical window length for forecasting future returns. We select a moving window of the previous 2000 observations, about eight calendar years. The window length is chosen based on the total sample size, which is more than 5000 observations for each of the three underlying assets, and on the confidence levels of 95%, 99% and 99.9% used in this study. This window length should produce better performance at the higher confidence levels.
We use the PERCENTILE function in Excel to calculate the n-percent percentile of the time-series data. The value returned by the percentile function is usually not an exact value in the data set; Excel calculates the desired value between the two closest values by linear interpolation. The results are shown in a later chapter.
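As an illustration, the rolling percentile step can be sketched in Python; numpy's default percentile method uses the same linear interpolation between the two closest order statistics as Excel's PERCENTILE, and the window and confidence values below are those of this study.

```python
import numpy as np

def historical_var(returns, confidence=0.99, window=2000):
    """Rolling historical-simulation VaR: for each day, take the
    (1 - confidence) percentile of the previous `window` returns
    and report it as a positive loss figure."""
    r = np.asarray(returns, dtype=float)
    var = []
    for t in range(window, len(r)):
        past = r[t - window:t]
        var.append(-np.percentile(past, 100 * (1 - confidence)))
    return np.array(var)
```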
Moving Average Approach
The first task of this approach, as with historical simulation, is choosing a window size. In this study we choose 45 days, i.e. nine calendar weeks, for calculating the standard deviation.
It is easy to use the STDEV function in Excel to calculate the standard deviation over a moving 45-day window, and then to apply the result to the parametric VaR formula to obtain the VaR value. The results are shown in a later chapter.
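The same two steps can be sketched as follows; the sample standard deviation with ddof=1 matches Excel's STDEV, and the zero-mean normal VaR formula is the parametric formula referred to above.

```python
import numpy as np
from statistics import NormalDist

def moving_average_var(returns, confidence=0.99, window=45):
    """Parametric VaR with a moving-window volatility: sigma_t is
    the sample standard deviation of the last `window` returns,
    and VaR_t = z_alpha * sigma_t under a zero-mean normal."""
    z = NormalDist().inv_cdf(confidence)   # about 2.33 at 99%
    r = np.asarray(returns, dtype=float)
    var = []
    for t in range(window, len(r)):
        sigma = r[t - window:t].std(ddof=1)  # STDEV uses ddof=1
        var.append(z * sigma)
    return np.array(var)
```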
GARCH Normal Approach
As with the moving average approach, we need to calculate a standard deviation before the final VaR, but before doing that we have to estimate the parameters ω, α and β. The parameters are estimated by maximum likelihood estimation. This is a challenging job because the previous literature in this field only describes the MLE function and does not show how to implement it.
In this study, we estimate the parameters in EVIEWS. The next question is how to decide the moving window size. As the MLE function also assumes that returns are normally distributed, the smaller the window size, the larger the risk that values far from normality distort the estimates. We first estimated the parameters with window sizes of 1000, 2000, 3000, 4000 and 5000 observations respectively, and found that the values for a window of 3000 were closest to those reported by Jorion (2007) for similar financial assets, so we take 3000 as the window size for estimating the three parameters. The small difference between our estimates and the values given by Jorion (2007) arises because we use a different time period and the underlying assets are not exactly the same; we believe the results are reliable. The parameter values and EVIEWS output are shown in Appendix 1.
After obtaining the values of the parameters ω, α and β, we input them into the GARCH(1,1) formula to calculate the volatility; the VaR is then estimated from that volatility, and the results are shown in the results and analysis chapter.
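A minimal sketch of the GARCH(1,1) recursion and the normal VaR step follows; the parameters are assumed to come from an MLE fit (EViews in this study) and are simply passed in, and the values in the test below are illustrative, not this study's estimates.

```python
import numpy as np
from statistics import NormalDist

def garch_normal_var(returns, omega, alpha, beta, confidence=0.99):
    """GARCH(1,1) volatility recursion followed by a normal VaR:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    started at the unconditional variance omega / (1 - alpha - beta)."""
    z = NormalDist().inv_cdf(confidence)
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r))
    sigma2[0] = omega / (1 - alpha - beta)   # unconditional variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1]**2 + beta * sigma2[t - 1]
    return z * np.sqrt(sigma2)               # one-day VaR series
```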
GARCH student t Approach
Under this approach, we first estimate the parameters ω, α and β, the same job as in the GARCH normal approach; we did it in EVIEWS and the results are shown in Appendix 1.
Once we have the parameter values, we use them to estimate the volatility and then calculate VaR with the critical value of the Student t distribution (Jorion, 2007). All these calculations are done in Excel and the VaR results are shown in a later chapter.
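Given the volatility series from the GARCH recursion, the Student t step can be sketched as follows. The degrees-of-freedom value is an illustrative assumption, and the sqrt((df-2)/df) rescaling is the unit-variance adjustment of the t quantile described by Jorion (2007), so that sigma remains the return volatility.

```python
import numpy as np
from scipy.stats import t as student_t

def garch_t_var(sigma, df, confidence=0.99):
    """VaR from a GARCH volatility series under Student t
    innovations: rescale the t quantile to unit variance,
    then multiply by the volatility."""
    q = student_t.ppf(confidence, df) * np.sqrt((df - 2) / df)
    return q * np.asarray(sigma, dtype=float)
```

For low degrees of freedom the multiplier exceeds the normal critical value at 99%, which is why this approach is more conservative in the tails.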
Skewness
Skewness indicates how a distribution looks compared with a normal distribution. A normal distribution is symmetric around its mean, and its skewness is 0. As mentioned before, financial asset returns are not exactly normally distributed; they might have positive or negative skewness. The graph below shows negative and positive skew.
Negative skew means a longer left tail, with most of the distribution concentrated to the right of the mean; positive skew is the mirror case. Skewness matters because the moving average and GARCH approaches assume the underlying assets are normally distributed; if the underlying assets are heavily skewed, their VaR estimates will be less accurate. The two approaches might underestimate or overestimate the VaR value according to the skew of the underlying asset's returns.
Kurtosis
A symmetric distribution with excess kurtosis of 0 is the case of the normal distribution. A high kurtosis indicates that the underlying asset's distribution contains more extreme values than a normal distribution. Positive excess kurtosis is called leptokurtic and negative excess kurtosis is called platykurtic. VaR values at high confidence levels will be underestimated by normal-based approaches when the excess kurtosis is positive, since the fat tails hold more probability than the normal assumption allows.
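The two shape statistics discussed above can be computed as the standardized third and fourth sample moments; this is a generic sketch, not tied to the study's Excel workbook.

```python
import numpy as np

def skew_kurtosis(returns):
    """Sample skewness and excess kurtosis of a return series
    (both 0 for a normal distribution)."""
    r = np.asarray(returns, dtype=float)
    d = r - r.mean()
    s = d.std()
    skew = (d**3).mean() / s**3
    excess_kurt = (d**4).mean() / s**4 - 3.0   # excess over the normal's 3
    return skew, excess_kurt
```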
The Source of Data
In this study, we use daily empirical data for three financial assets; these are time-series data. The selected time period runs from 1st June 1989 to 29th May 2009 for all three assets. The first two thousand observations (from 1989 to 1997) are used as the historical data for forecasting the future; the rest of the data are classified into two periods. Period 1, from 1997 to 2009 (more than 3000 observations), represents a span including both normal times and the financial crisis, while period 2, from 2008 to 2009 (about 355 observations), represents the financial crisis period. Dividing the data into these two periods serves the purpose of this study.
The data we collect are the historical daily prices of three assets: the S&P 500 index, Brent oil and the US three-month Treasury bill. The data are easy to obtain from public sources: S&P 500 data from Yahoo Finance (http://finance.yahoo.com), Brent oil data from the Energy Information Administration (http://www.eia.doe.gov) and US three-month Treasury bill data from the Federal Reserve Bank of St. Louis (http://research.stlouisfed.org).
The S&P 500 is a value-weighted index consisting of 500 large-cap US companies; it can be viewed as a well-diversified portfolio, so its volatility is not as high as that of Brent oil or the US three-month Treasury bill. Figure 3.6a shows the daily returns of the S&P 500. From table 3.5 we see that the average daily volatility of the S&P 500 is 1.38% and the annual volatility 21.74%, indicating fairly stable price changes. The skewness of the S&P 500 is -0.15 and the average daily log price change is 0.94%, showing that the distribution of the S&P 500 matches the normal distribution well, kurtosis aside.
The kurtosis is 7.33, showing that the distribution is narrower than the normal distribution and has fatter tails. This kurtosis lies between those of Brent oil and the US three-month Treasury bill; combined with the skewness values, it can be concluded that Brent oil is the asset that best matches the normal distribution, followed by the S&P 500 and then the US three-month Treasury bill. This can be seen by comparing the histograms in figures 3.6b, 3.7b and 3.8c. We can therefore expect the S&P 500 to perform between Brent oil and the US three-month Treasury bill under the moving average and GARCH normal approaches. The high kurtosis makes this asset perform less effectively under parametric approaches that assume normally distributed returns.
Brent Oil
Brent oil prices are the second most volatile among the three underlying assets, as can be seen by comparison with the other two assets in table 3.5. The daily volatility of Brent oil is 2.71% and the annual volatility 43.02%, showing that it is much more volatile than the S&P 500.
The kurtosis of 4.46 indicates that the distribution is a little narrower and has slightly fatter tails than the normal, while the skewness of -0.18 shows that it is relatively symmetric. Combining the kurtosis and skewness values indicates that Brent oil is the asset that best fits the normal distribution, as can be seen in figure 3.7b. Brent oil is therefore expected to perform better under the parametric approaches than the other two assets. The daily log price change of 1.94% shows that this is a high-volatility asset; it might perform less well under the nonparametric approach, which is not good at handling assets with high volatility.
US Three Month Treasury Bill
The US three-month Treasury bill is the most volatile of the three assets employed in this study: its daily volatility is 8.74% and its average log price change 2.32%, while its skewness is -3.27 and its kurtosis a very high 360.84. These values suggest that the return distribution of this asset fits the normal distribution poorly, as shown in figure 3.8c.
From figure 3.8b, the US three-month Treasury bill was not a highly volatile asset before 2007, but since 2007 its volatility has risen sharply and become very high, as can be seen in figure 3.8a. If the chosen time period had ended before 2007, its volatility would have been lower than Brent oil's, but such a period does not match the purpose of this study, which focuses on the period including the recent financial crisis. Based on the above characteristics, this asset can be expected to perform the worst under both nonparametric and parametric approaches.
Autocorrelation
For time-series data it is important to check for autocorrelation, also called serial correlation. Autocorrelation means the data are correlated with themselves over time; it can be measured by a one-lag Durbin-Watson test in a regression. The existence of autocorrelation would indicate that the fitted model describes the time series poorly, in that today's price cannot be described as a linear function of yesterday's price alone. Tsay (2002) states that when time-series data are autocorrelated, factors other than historical prices affect today's price, and results will be less effective if we use these approaches to forecast future prices.
The null hypothesis of no autocorrelation is rejected if the DW value falls outside the interval bounded by DL and DU. Table 3.9 shows that all three assets fall within the interval; therefore the null hypothesis is not rejected, and there is no evidence of autocorrelation in these three time series.
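The DW statistic itself is simple to compute from the regression residuals, as this generic sketch shows; values near 2 indicate no first-order autocorrelation, near 0 positive and near 4 negative autocorrelation.

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic for one-lag serial correlation:
    DW = sum((e_t - e_{t-1})^2) / sum(e_t^2)."""
    e = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(e)**2) / np.sum(e**2)
```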
Backtesting
The best outcome is a number of exceptions equal to the number of observations times one minus the selected confidence level; the regions show the acceptable interval for VaR exceptions. The closer the number of exceptions is to this best value, the better the approach performs. If the exceptions far exceed the region, the approach badly underestimates future risk; if they fall far below it, the approach overestimates future risk.
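The exception count at the heart of this backtest can be sketched as follows, assuming a positive one-day VaR series aligned with the realized returns (as produced by the approaches above).

```python
import numpy as np

def count_exceptions(returns, var, confidence=0.99):
    """Count VaR exceptions: days when the realized loss exceeds
    the VaR forecast. The expected (best) count is
    len(returns) * (1 - confidence)."""
    r = np.asarray(returns, dtype=float)
    v = np.asarray(var, dtype=float)
    exceptions = int(np.sum(-r > v))         # loss = -return
    expected = len(r) * (1 - confidence)
    return exceptions, expected
```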
Data Result & Analysis
Backtesting Results of Christoffersen
The backtesting results of the Christoffersen test, based on the three underlying assets and four approaches, are shown below. The results are analyzed and discussed by asset, by approach and by time period at each confidence level. The summary Christoffersen results for the four approaches are shown in Appendix 2. We chose two periods of data to test VaR performance. Period 1, from April 1997 to May 2009 (more than 3000 observations), represents a time period that includes both normal economic times and the financial crisis. Period 2 represents the recent financial crisis, from 2008 to 2009 (355 observations); it was selected in order to find out how VaR performs there compared to period 1. One might ask whether period 2 makes sense with about 355 observations at a confidence level of 99.9%. It is true that 355 observations is not a good sample size for a 99.9% confidence level, but our purpose here is to see whether the approaches underestimate risk in the financial crisis, which can be indicated by exception counts above the region; we are not focusing on whether the approaches overestimate risk at the 99.9% level because the sample is too small.
Historical Simulation Approach
For the S&P 500, this approach produces bad results in period 1 and terrible results in period 2; it cannot estimate the riskiness of this asset properly. At the 99% and 99.9% confidence levels, the number of exceptions in period 1 is twice the region's maximum limit, while in period 2 the figures are even worse. This indicates that the approach estimates VaR poorly in both periods for the S&P 500. Note that because both periods include the recent financial crisis, the results are affected by the approach's weakness at predicting risk during extreme periods. For example, at the 99.9% confidence level, if the period excludes the financial crisis (period 1 minus period 2), the result is within the region, meaning the approach works at 99.9% confidence.
For Brent oil, the approach also produces bad results, though better than for the S&P 500; it still underestimates risk for this asset. The number of exceptions is slightly over the region at the 95% and 99% confidence levels and within the region at 99.9% in period 1. In period 2, it works poorly at the 95% and 99% levels, while at 99.9% no clear conclusion can be drawn because the number of exceptions within the region and the sample size are too small. If period 2 is excluded, the results are within the region at the 99% and 99.9% confidence levels, meaning the approach can produce acceptable results for this asset during the normal period before the recent financial crisis at those levels.
With respect to the US three-month Treasury bill, this approach performs the worst, meaning it is not appropriate for estimating the risk of this asset. The figures are unacceptable in both periods at all confidence levels. The reason is that this asset is the most volatile of the three, with a daily volatility of 8.74% and a very high kurtosis. Because historical simulation weights all returns equally, it takes time to react to extreme fluctuations in returns. Another reason for these bad results might be the choice of window length: 2000 empirical returns may carry too much old information that is not relevant for estimating the future returns of this highly volatile asset, especially for estimating risk since 2007.
In summary, this approach performs very poorly at the 95% confidence level and poorly at the 99% confidence level for all three assets; for the US three-month treasury bill in particular, the figures show that it underestimates risk substantially in every time period. At the 99.9% confidence level it works to some extent for Brent oil, and also for the S&P 500 in the normal period (the period excluding period 2). In this test, therefore, the historical simulation approach performs very poorly overall.
This approach assumes that the future is identical to the past, but as more and more uncertainty affects markets, future volatility need not resemble past volatility, which is why the approach produces very bad results for the three assets. Of all the approaches in this study, historical simulation performs worst. This does not mean the approach is invalid for estimating VaR: it can still be used at higher confidence levels with suitable assets, such as Brent oil, but not with highly volatile assets like the US three-month treasury bill.
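As described above, historical simulation weights all past returns equally: the VaR at a given confidence level is simply the empirical loss quantile of the most recent returns. A minimal sketch of this idea, using the 2000-observation window from this study (everything else, including the simulated data, is illustrative):

```python
import numpy as np

def historical_var(returns, confidence=0.99, window=2000):
    """Historical-simulation VaR: the empirical loss quantile of the
    most recent `window` returns, all weighted equally."""
    sample = np.asarray(returns, dtype=float)[-window:]
    # The (1 - confidence) return quantile, flipped to a positive loss.
    return -np.quantile(sample, 1.0 - confidence)

# Illustration on simulated returns with 1% daily volatility.
rng = np.random.default_rng(0)
var99 = historical_var(rng.normal(0.0, 0.01, size=2500), confidence=0.99)
```

Because every observation in the window carries the same weight, a burst of extreme returns shifts the quantile only gradually, which is the slow reaction to the 2007-2009 turbulence noted above.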
Moving Average Approach
This approach assumes that the returns of the underlying assets follow a normal distribution, which is unrealistic: the skewness and kurtosis figures in chapter three show that the returns of most financial assets, including the three studied here, are not normally distributed. Its performance therefore depends on how closely each asset's return distribution resembles the normal distribution.
For the S&P 500, this approach performs poorly in both period 1 and period 2; across all three confidence levels, its results for this asset are the worst among the three assets. The reason may be the window size: a 45-day window suits the more volatile Brent oil and US three-month treasury bill better than the S&P 500, and a different window size might have produced better results.
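Under the normality assumption, the moving-average VaR is just a one-sided normal quantile times a rolling-window volatility estimate. A minimal sketch with the 45-day window used in this study (the zero-mean variance estimate and the hard-coded quantiles are simplifications of this sketch, not taken from the thesis):

```python
import numpy as np

# One-sided standard-normal quantiles for the study's confidence levels.
Z = {0.95: 1.645, 0.99: 2.326, 0.999: 3.090}

def moving_average_var(returns, confidence=0.95, window=45):
    """Equally weighted moving-average volatility plugged into the
    normal quantile; VaR returned as a positive loss number."""
    sample = np.asarray(returns, dtype=float)[-window:]
    sigma = np.sqrt(np.mean(sample ** 2))  # zero-mean simplification
    return Z[confidence] * sigma

# Constant 1% daily moves give sigma = 0.01, so 95% VaR is about 1.645%.
var95 = moving_average_var([0.01] * 60, confidence=0.95)
```

A shorter window makes the estimate react faster but noisier; a longer one smooths it, which is why the same 45-day choice can suit one asset and not another.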
For Brent oil, this approach produces very good results in period 2 and works well at the 95% confidence level in period 1. This indicates that the approach can perform better at a low confidence level for assets like Brent oil, and can even work during the extreme time period for this asset. The results are good because the characteristics of this asset's returns resemble those of the normal distribution.
Regarding the US three-month treasury bill, this approach performs well only at the 95% confidence level, in both periods. This again indicates that the approach can produce better results at a low confidence level, even for an asset like the US three-month treasury bill with very high skewness and kurtosis; at higher confidence levels it performs badly in every time period. Another reason the approach cannot produce good results for this asset is volatility clustering: the daily return graph in chapter three shows that the US three-month treasury bill exhibits severe clustering of extreme values during the recent financial crisis, and the moving average approach does not take this clustering into account.
Overall, among the approaches in this study, the moving average approach performs well at a low confidence level and is most appropriate for assets like Brent oil whose return distribution resembles the normal distribution, though it performs poorly at higher confidence levels. Compared with the GARCH approaches, it is less efficient for estimating VaR.
GARCH Normal Approach
The GARCH normal approach also works under the assumption of a normal distribution, so, as with the moving average approach, its results may be less accurate when the distribution of the assets deviates from normality. Unlike the moving average approach, however, the GARCH model takes the clustering phenomenon into consideration, which should enable it to produce better results than the moving average model.
The above two tables show that this approach performs very well in period 1 at the 95% confidence level, with the number of exceptions close to the best target number for all three assets. At higher confidence levels it produces bad results in both period 1 and period 2, with the exception of Brent oil. The results again demonstrate that parametric approaches under the normality assumption perform better when the distribution of the underlying asset's returns resembles the normal distribution.
Although the GARCH model can deal with clustering, the results of the GARCH normal approach are not greatly superior to those of the moving average approach. Goorbergh & Vlaar claim that volatility clustering is the most important characteristic when calculating VaR. However, judging by the results of the moving average and GARCH normal approaches under the Christoffersen test, it did not make a big difference here. We agree that volatility clustering is an important characteristic when estimating VaR, but not the most important one; the most important is the distribution of the underlying assets. Because both the moving average approach and the GARCH normal approach assume a normal distribution, neither produces fully acceptable results when the underlying assets are not actually normally distributed. The results of these two approaches support this statement.
Another factor affecting this approach is the estimation of the parameters by maximum likelihood. The values of the GARCH(1,1) parameters ω, α and β affect the VaR estimates. In this study we used 3000 observations to estimate the parameters in EVIEWS, which might not represent their true values precisely. Moreover, because the likelihood function is built under the assumption of a normal distribution, this may make the approach less efficient.
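Concretely, with fitted parameters ω, α and β in hand, the GARCH(1,1) recursion updates the conditional variance day by day, and the next-day VaR follows from the normal quantile. A minimal sketch of this step (the parameter values below are illustrative, not those estimated in EVIEWS):

```python
import numpy as np

def garch_normal_var(returns, omega, alpha, beta, z=2.326):
    """Run the GARCH(1,1) recursion
        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}
    over the sample and return the next-day 99% normal VaR,
    z * sigma, as a positive loss number."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.var(r)  # initialize at the unconditional sample variance
    for ret in r:
        sigma2 = omega + alpha * ret ** 2 + beta * sigma2
    return z * np.sqrt(sigma2)

# Illustrative parameter values; on an all-zero sample the variance
# converges toward omega / (1 - beta).
var99 = garch_normal_var([0.0] * 1000, omega=1e-6, alpha=0.1, beta=0.85)
```

Unlike historical simulation, the forecast responds immediately to yesterday's squared return through the α term, which is how the model captures volatility clustering.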
Overall, among the four approaches, GARCH normal produces better results than the historical simulation and moving average approaches at the 95% and 99% confidence levels, in both period 1 and period 2, but at the highest confidence level it performs poorly. Compared with the historical simulation and moving average approaches, GARCH normal deals better with the volatility clustering phenomenon by estimating volatility in a more sophisticated way, making the estimates more precise. Under the normality assumption, however, its performance remains limited: it still underestimates risk at the highest confidence level and during the extreme time period. The GARCH Student-t approach handles this problem much better.
GARCH with a Student-t Distribution
Unlike the two parametric approaches above, this approach assumes a Student-t distribution, which gives the underlying assets heavy tails. This is more realistic for financial assets and enables the GARCH model to produce better results. In addition, this approach also takes the volatility clustering phenomenon into account.
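The only mechanical change from the GARCH normal approach is the quantile: the raw Student-t quantile is rescaled by sqrt((ν − 2)/ν) so that the standardized innovation has unit variance, and then multiplied by the conditional volatility. A minimal sketch (ν = 5 and the t-table quantile 3.365 are illustrative choices, not values fitted in this study):

```python
import math

def garch_t_var(sigma, t_quantile, nu):
    """Student-t VaR: rescale the raw t quantile by sqrt((nu - 2) / nu)
    so the innovation has unit variance, then scale by the GARCH
    conditional volatility sigma (VaR as a positive loss number)."""
    return t_quantile * math.sqrt((nu - 2) / nu) * sigma

# The 99% one-sided t quantile with nu = 5 is about 3.365 (t-table).
var_t = garch_t_var(sigma=0.01, t_quantile=3.365, nu=5)
var_normal = 2.326 * 0.01  # the 99% normal VaR at the same volatility
```

Even after rescaling, the t-based quantile exceeds the normal one at this tail probability, which is why the approach holds more capital at the same confidence level and is the only one in this study that never underestimates risk.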
It can be seen from the
above two tables that GARCH with the Student-t approach produces better results for all the assets at all confidence levels in both periods. The number of exceptions is always below the maximum value of the regions, implying that the approach does not underestimate future risk at any of the three confidence levels, for any of the underlying assets, in either time period. Among the four approaches it therefore produces the most powerful VaR estimates. As with the GARCH normal approach, the estimation of the parameters ω, α and β affects the accuracy of the VaR estimates. Nevertheless, the exception counts in the above two tables show that this approach captures risk completely and may sometimes overestimate it.
In period 1, the number of exceptions at the 95% and 99% confidence levels is below the minimum value of the regions. This is rejected by the Christoffersen test because risk is largely overestimated: the approach is too conservative at these two confidence levels in period 1. At the 99.9% confidence level, by contrast, it performs quite well, with the number of exceptions close to the best expected value. The approach is therefore too conservative in period 1 at the 95% and 99% confidence levels, and quite acceptable at the 99.9% confidence level.
In period 2 it performs better than in period 1, showing reasonable performance at the 95% and 99.9% confidence levels for all three assets, which implies it does quite well during the financial crisis. It is, however, too conservative at the 99% confidence level for Brent oil and the US three-month treasury bill during the extreme time period.
The only difference between this approach and the GARCH normal approach is the assumed distribution of the assets, yet it produces completely different results. This again demonstrates that the distribution is the most important characteristic affecting VaR estimation. Because the three underlying assets are not in reality normally distributed, the moving average and GARCH normal approaches produce poor results. The three assets may not exactly follow a Student-t distribution either: their tails are heavier than the normal distribution's but not as heavy as the Student-t's, which is why the GARCH Student-t approach overestimates risk at the 95% and 99% confidence levels in period 1.
Overall, among the four approaches, this approach produces the best VaR estimates. It does, however, overestimate risk at the lower confidence levels in period 1, meaning it is too conservative there. Firms do not welcome such a characteristic, because they do not want to hold capital reserves against risk that does not actually exist. On the other hand, the approach performs very well at the highest confidence level in every time period, and especially during the financial crisis.
Validity
We want to stress once again the validity of the findings, since validity is important for any study, and we argue that the approaches applied and the results obtained here are highly valid. We consider two aspects: surface validity and internal validity.
Regarding surface validity, our conclusions may conflict with previous studies, or may not have been examined under the same conditions before, because we estimate VaR with four approaches for three underlying assets with different characteristics, at three confidence levels, over very recent time periods (period 1 runs from 1 June 1989 to 29 May 2009, period 2 from 1 January 2008 to 29 May 2009). In addition, the approaches span parametric and nonparametric techniques, with both normal and Student-t distributional assumptions for the parametric ones.
Previous studies show that the historical simulation approach, though simple, can still produce relatively good results; it performed poorly in this study because of the time period and assets we chose. Including the recent financial crisis and a highly volatile asset like the US three-month treasury bill affected its results. In addition, some studies find that the moving average approach outperforms the GARCH normal approach, while others find the reverse; in this study, the exception counts indicate that GARCH normal performs better than the moving average approach. It is important to know the characteristics of the underlying assets: for Brent oil, both the moving average and GARCH normal approaches do a good job in period 1 at the 95% confidence level, but for the US three-month treasury bill the results differ greatly between the parametric and nonparametric approaches.
Regarding internal validity, although there are some divergences between the results we expected and those we obtained, the results are quite good in general. We expected the historical simulation approach to perform very poorly for the highly volatile US three-month treasury bill; Brent oil performs best of the three assets under the parametric approaches with the normality assumption; and the GARCH Student-t approach produces results superior to the other approaches in this study.
The general degree of validity of this study can therefore be seen as relatively high: even though some results differ slightly from what we expected, the overall results reflect the facts well. The nonparametric approach is not useful when the sample includes an extreme time period, though it may perform well at a high confidence level in a normal period. The parametric approach does not perform very well under the normality assumption, because the normal distribution is unrealistic; under the Student-t distribution it performs well, not only capturing risk completely but sometimes overestimating it.