Measuring systematic and specific risk: A mean-entropy approach

Imen Mahmouda* Kamel Naouib

a*Faculty of Economic Sciences and Management of Tunis, Tunisia. Corresponding author's email address: mahmoudimen0806@gmail.com

bBusiness School of Tunis, Tunisia. email address: kamelnaoui@gmail.com

ABSTRACT

Our main objective in this paper is to revisit Markowitz's (1952) mean-variance approach by applying Shannon entropy as an alternative measure of financial risk. We study 33 randomly selected stocks of the Tunis Stock Exchange, representing the daily values of the Tunindex over a period of 8 years. The results indicate that entropy behaves in a similar way to standard deviation, as it decreases as the number of stocks held in a portfolio increases. Likewise, Sharpe's single-index model is reinterpreted under entropy theory, where total risk is divided into systematic and non-systematic risk. Standard measures such as standard deviation or beta appear inadequate to assess risk and uncertainty; entropy therefore offers an attractive alternative for identifying investment-related risk.

Keywords: Uncertainty, entropy, asset pricing, CAPM

ARTICLE HISTORY: Received: 15-Nov-2016, Accepted: 03-Mar-2017, Online available: 18-Mar-2017

Contribution/ Originality

The current paper proposes an alternative approach to portfolio selection. Entropy is one of the main concepts of information theory, and market participants react to information when taking decisions. Entropy is believed to contribute significantly to improving the assessment of systematic and specific risks and thus asset pricing.

1. INTRODUCTION

Markowitz (1952) proposed variance as a risk measure in his mean-variance model, with a view to determining an optimal portfolio by minimizing variance for a given expected return or maximizing return for a given level of variance. His model has been widely applied by academics and practitioners alike to study portfolio selection. Sharpe (1964) simplified this approach and divided total risk into systematic and non-systematic risk. As a rule of thumb, systematic risk in financial theory is measured by Beta in the CAPM. Nevertheless, these models show their limits when returns are not normally distributed, and classic asset pricing models and traditional risk measures then become poorly suited to how financial markets function in reality. Indeed, stock market crashes provide an example of the inability of classic models to describe real market behavior. As the assumptions underlying these models are restrictive in nature, the models fail to capture all relevant information and are thus unable to quantify uncertainty over future movements.

In recent years, there has been a growing common interest across disciplines in describing natural and social phenomena in rich and realistic terms. Among these disciplines, a striking bond has emerged between physics and financial theory. With respect to uncertainty, what financial theory aims to study can be addressed using statistical measures applied in physics and information theory. One of these measures is Shannon entropy. An advantage of this measure is its generality, as it captures higher-order moments of a probability distribution.

For Shannon and Weaver (1949), the quantity needed to represent "information" fits exactly the thermodynamic notion of entropy. Along similar lines, a new research trend has focused on this approach to better understand the dynamics, organization and functioning of markets.

The use of entropy as a measure of uncertainty in finance looks promising and is evolving from both theoretical and empirical standpoints. This diversity of measures reflects one thing: uncertainty lies at the core of econophysics. While economists define, and at the same time exclude, uncertainty using a single model, econophysicists approach the concept in an epistemological fashion and study its different dimensions. Each of these entropies defines a specific statistical estimator that can be used as a real-valued function in a study. With this diversity, econophysicists provide a panoply of operational instruments to deal with uncertain situations. Accordingly, financial theory can draw on this work on uncertainty and use statistical measures proper to physics and information theory, such as Shannon entropy. In what follows, we report on an empirical application of Shannon's entropy to data collected from the Tunis Stock Exchange.

Our main purpose is to measure total risk, both diversifiable and non-diversifiable, in portfolio management. To this end, we develop a new risk measure, inspired by entropy theory, which extrapolates the classic decision model to a real situation.

Shannon entropy is a mathematical function that quantifies the information contained in an information source. It can be viewed as a measure of the uncertainty of a random event or, more specifically, of its distribution: the information provided by each new event is a function of the uncertainty of that event. Accordingly, we use the degree of uncertainty as our risk measure for the selected stocks traded on the Tunisian stock market. Initially, Rudolf Clausius (1867)¹ introduced entropy into thermodynamics to measure the ratio of heat transferred through a reversible process in an isolated system. Later, around 1900, it was used by Boltzmann and Gibbs² in statistical physics, where entropy was interpreted as a measure of uncertainty about a system after its components (pressure, temperature and volume) had been parameterized in detail. By the mid-20th century, the concept had spread to disciplines such as engineering and mathematics thanks to the work of Shannon (1948) in engineering and communication and of Kolmogorov in probability theory and the theory of dynamical systems. However, it was not until recently that the concept was applied to the study of financial phenomena, in particular the functioning of financial markets.

Information contained in a supply system functions as a stochastic cybernetic system in which the message may be considered a random variable. As such, entropy serves to quantify the expected value of the information contained in a message; in other words, it measures the quantity of information. Indeed, information theory uses entropy to compare the uncertainty and inaccuracy of a message against a reference message, and it is to similar ends that entropy is used in financial theory. Researchers such as Georgescu-Roegen (1971), Mayumi (1997), Gulko (1999) and Dionisio (2001) used entropy to measure information because they considered it able to capture the real value of the information circulating in financial markets. It is worth noting that this approach was initially borrowed from statistical physics to quantify breaks and uncertainty in dynamic systems. In financial theory and in many related areas (market efficiency, asset pricing and portfolio management), entropy has been widely regarded as the approach to use.

Pursuing these aspirations, Gulko (1999) first introduced Entropy Pricing Theory (EPT) to study financial time series, showing that the Maximum Entropy Principle (MEP), also known as informational efficiency, can make the Efficient Market Hypothesis operational and testable.

Similarly, Philippatos and Wilson (1972) were the first two researchers to apply entropy to portfolio selection. They proposed the mean-entropy approach in order to maximize expected portfolio returns and minimize portfolio entropy. They compared their entropy-based approach to traditional methods of constructing all possible efficient portfolios on a random sample of 50 stocks over a period of 14 years. They found that entropy provided more general results and outperformed standard deviation; in particular, mean-entropy portfolios were consistent with Markowitz's model and Sharpe's single-index model. Although their study has several limitations, their results contributed greatly to the study of portfolio selection. Kirchner and Zunckel (2011) consider entropy the best tool for identifying the decrease in risk brought about by diversification. Moreover, Dionisio et al. (2006) indicate that entropy is able to capture the diversification effect and, as such, is a more general measure of uncertainty than variance, because it uses more information on the probability distribution. Dionisio et al. (2007) confirmed this by comparing mutual information and conditional entropy with systematic and specific risk as estimated under the CAPM.

On a larger scale, Zhou et al. (2013) reviewed entropy-related concepts and principles that have been applied to portfolio selection over a longer period. They found that entropy is uniquely suited to measuring risk and describing distributions, so applying entropy to finance is highly rewarding. Along these lines, Ormos and Zibriczky (2014) used entropy to measure financial risk. They found that entropy explains portfolio and equity premiums in simple terms and, at the same time, has higher explanatory power than the Beta of the CAPM. Computing entropy to estimate risk, the authors found that it decreases, like standard deviation, with the number of stocks in a portfolio and that efficient portfolios are hyperbolically distributed with respect to the expected returns of the 150 randomly selected stocks over a period of 27 years.

In view of the above, there seems to be wide consensus that entropy may be a good risk measure, although its application appears difficult. Bearing this in mind, our purpose in this study is twofold. First, we aim to show that an entropy-based risk measure is more accurate. Second, such a measure may improve asset pricing, as it does not assume a specific distribution.

In this study, we explore Shannon entropy as a risk measure in order to assess the degree of uncertainty in stocks traded on the Tunis Stock Exchange. We then compare this measure to standard risk measures estimated under the CAPM.

This paper is organized as follows. In section 2, we present the theoretical background of entropy, its mathematical specification, and its use as a risk measure in portfolio management. In section 3, we describe our methodology and our uncertainty measure. In section 4, we present the results obtained on the Tunisian stock market, showing the similarities between standard risk measures and information theory risk measures (entropy, conditional entropy and mutual information). Finally, section 5 concludes the paper.

2. ENTROPY AS A RISK MEASURE

Measuring risk is about examining the probability of an event, its occurrence and its estimation. Frank Knight distinguished between risk and uncertainty, arguing that our knowledge is often too incomplete to determine the probability of all possible events. According to Knight (1921), risk denotes "measurable uncertainty", i.e. uncertainty whose probability can be objectively quantified, whereas uncertainty denotes "true uncertainty", for which "there is no valid ground of whatever nature to compute its objective probability" (Knight, 1921); uncertainty prevails when an objective quantification of probabilities is impossible. As for risk in decision-making, we believe that two main factors determine the decision process: the uncertainty of the outcomes of an uncertain event and the expected utility of undertaking a particular action. The higher the uncertainty, the higher the risk and the expected utility. This conception of risk motivated us to develop an entropy measure to examine the decision process. Our hypothesis posits that the uncertainty of observations may be interpreted as a risk; for this reason we apply entropy as a risk measure.

Generally, variance is the main measure of risk and uncertainty in financial markets. Some authors, such as Maasoumi (1993) and Soofi (1997), argue that such measures may fail to capture uncertainty in some specific events because they require a symmetric probability distribution and ignore the possibility of extreme phenomena in the tails. Therefore, for asymmetric or non-normal distributions, another uncertainty measure is needed, one that is more dynamic and general than variance and does not assume a specific distribution. As entropy is known to account for diversity, several authors have attempted to apply it to portfolio selection theory, the belief being that it can serve as an alternative measure of dispersion. Entropy measures how far the density p_X(x) deviates from the uniform distribution, i.e. it assesses uncertainty in terms of the "utility" of p_X(x) relative to the uniform distribution, whereas variance measures the distances of the outcomes of a probability distribution from the mean.

Ebrahimi et al. (1999) examined the roles of entropy and variance by ordering univariate distributions. They concluded that there is no general relationship between these measures in terms of ordering distributions: both reflect concentration, but their respective measures of concentration differ. Variance measures the concentration of the density around the mean, whereas entropy measures the diffuseness of the density irrespective of the location of the concentration (Maasoumi, 1993).

They also found that, under certain conditions, the entropy and variance orderings are identical under transformations of continuous variables, and showed that entropy depends on more distribution parameters than variance does. Using a Legendre series expansion, they showed that entropy is related to the higher-order moments of a distribution and may therefore, in contrast to variance, better characterize p_X(x), since it uses more information about the probability distribution. Maasoumi and Racine (2002) indicate that when the observed probability distribution is not perfectly known, entropy is an alternative measure for assessing uncertainty, predictability and goodness of fit.

As regards mathematical properties, entropy H(X) is non-negative in the discrete case and is invariant under one-to-one transformations of X, whereas variance is not. In the continuous case, neither property holds: the entropy of a continuous random variable X is not invariant under one-to-one transformations and can take any value in ]-∞, +∞[ (Shannon, 1948).

Likewise, Gulko (1999) believes that entropy determines the movement and uncertainty of a given security, as it is able to capture complexity of systems without a need for rigid hypotheses that might bias the obtained results.

2.1. Modeling entropy

Let X be a discrete random variable and let X1, X2, ..., X_N be a random sample of X of size N. For these random variables, we consider a set of events with probabilities p1, p2, ..., p_N.

For this set of events, entropy is given by the following equation:

H = -k ∑_{i=1}^{N} p_i log p_i ...................... (1)

where H denotes entropy, p_i is the probability that event i occurs and k is a constant. We use the logarithm of p_i to base 2 (see note 3 in the Appendix), with p_i = P(X = x_i).

According to Feldman and Crutchfield (1998), H(X) is a measure of the total uncertainty of the probability distribution of an event X.

As an uncertainty measure, the properties of entropy are well established in the literature (Shannon and Weaver, 1949). If the probability of an event is less than 1, its logarithm is negative and entropy has a positive sign.

If the system generates a single certain event, there is no uncertainty and entropy is null. Similarly, when the number of equally likely events doubles, entropy increases by one unit. Entropy reaches its maximum value when all events have the same probability of occurrence. For a continuous random variable, entropy is measured from the probability density function of X as follows:

H = -k ∫_{-∞}^{+∞} p(x) log p(x) dx ...................... (2)
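As an illustration of equation (1), the sketch below estimates the entropy (in bits) of a return series by discretizing the observations into histogram bins and using the relative frequencies as the probabilities p_i. It is a minimal Python sketch with simulated data, not the procedure used in the study; the number of bins and the simulated parameters are our own assumptions.

import numpy as np

def shannon_entropy(returns, bins=30):
    # Estimate Shannon entropy (bits) of a return series, following Eq. (1) with k = 1:
    # discretize the continuous returns into histogram bins and use the relative
    # frequencies as the probabilities p_i.
    counts, _ = np.histogram(returns, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # 0 * log(0) is treated as 0
    return -np.sum(p * np.log2(p))

# Illustration with simulated daily returns (hypothetical parameters, not study data)
rng = np.random.default_rng(0)
r = rng.normal(0.0002, 0.006, 2087)       # roughly the sample size used per stock
print(f"H(X) = {shannon_entropy(r):.4f} bits")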

2.2. Joint-entropy and conditional entropy

Let H(X) and H(Y) be the entropies of the random variables X and Y. We denote by H(X, Y) the joint entropy of (X, Y), by H(Y | X) the conditional entropy of Y given X, and by I(X, Y) the mutual information between X and Y.

The joint entropy of the two random variables X and Y is:

H(X, Y) = -k ∑_x ∑_y p(x, y) log₂ p(x, y) ...................... (3)

Joint entropy measures the uncertainty associated with a joint distribution. In the same way, conditional entropy is given by:

H(X | Y) = -k ∑_x ∑_y p(x, y) log₂ p(x | y) ...................... (4)

Since p(x, y) = p(x) p(y | x), joint entropy obeys the following chain rule:

H(X, Y) = H(X) + H(Y | X) = H(Y) + H(X | Y) ...................... (5)
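To make equations (3)-(5) concrete, the following sketch estimates H(X), H(Y) and H(X, Y) from histograms (with k = 1 and base-2 logarithms) and recovers the conditional entropy through the chain rule. It is an illustrative sketch on simulated data; the bin count is an arbitrary choice, not the estimator used by the authors.

import numpy as np

def entropy(x, bins=30):
    # Marginal entropy H(X) in bits, estimated from a histogram (Eq. (1), k = 1)
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def joint_entropy(x, y, bins=30):
    # Joint entropy H(X, Y) in bits, estimated from a 2-D histogram (Eq. (3), k = 1)
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
x = rng.normal(size=2087)
y = 0.5 * x + rng.normal(size=2087)        # two dependent series

H_x, H_y, H_xy = entropy(x), entropy(y), joint_entropy(x, y)
H_y_given_x = H_xy - H_x                   # conditional entropy from Eq. (5)
print(H_x, H_y, H_xy, H_y_given_x)
print(H_xy <= H_x + H_y)                   # sub-additivity: equality only under independence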

2.3. Mutual information as a measure of dependence

In its normalized form, mutual information takes the value 0 in case of no dependence and 1 in case of total dependence. This measure is seen as one of the most practical tools to assess the dependence between two random variables X and Y. Accordingly, Granger et al. (2004) and Dionisio et al. (2006) posit that a good dependence measure should fulfil the following conditions:

  1. It should jointly define continuous and discrete variables.
  2. It should be standardized to zero if X and Y are independent. Generally, it ranges between -1 and 1.
  3. It should equal 1 if there is an exact, possibly non-linear, relationship between the variables.
  4. It should be similar or simply related to a linear correlation coefficient for the case of a bivariate normal distribution.
  5. It should play the role of a true measure of "distance" and not simply "deviation".
  6. It should be an invariant measure under continuous and strictly increasing transformations.

Then, mutual information may be given by the following:

I(X, Y) = H(X) + H(Y) - H(X, Y) ...................... (6)

We can then write:

H(Y | X) = H(X, Y) - H(X) ...................... (7)

Mutual information measures the extent to which two variables are associated. Shannon (1948) defines it as follows:

I(X, Y) = H(X) - H(X | Y)
        = H(Y) - H(Y | X)
        = H(X) + H(Y) - H(X, Y) ...................... (10)

Mutual information is a non-negative measure (Kullback, 1959), equal to zero if and only if X and Y are statistically independent.

Since H(X) ≥ H(X | Y), we have I(X, Y) ≥ 0.

Mutual information between the two random variables X and Y may thus be considered a measure of the dependence between these variables or, put differently, of the correlation between X and Y. It does not, however, tell us whether X determines Y or vice versa.
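The sketch below illustrates this point: mutual information, estimated from histograms as I(X, Y) = H(X) + H(Y) - H(X, Y), detects a purely non-linear dependence that the linear correlation coefficient misses. The data are simulated and the bin count is an assumption; this is not the estimator configuration used by the authors.

import numpy as np

def hist_entropy(counts):
    # Entropy in bits from histogram counts (marginal or joint)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y, bins=30):
    # I(X, Y) = H(X) + H(Y) - H(X, Y), all terms estimated from histograms
    h_x = hist_entropy(np.histogram(x, bins=bins)[0])
    h_y = hist_entropy(np.histogram(y, bins=bins)[0])
    h_xy = hist_entropy(np.histogram2d(x, y, bins=bins)[0])
    return h_x + h_y - h_xy

rng = np.random.default_rng(2)
x = rng.normal(size=5000)
y = x**2 + 0.1 * rng.normal(size=5000)            # dependence is purely non-linear

print("Pearson correlation:", np.corrcoef(x, y)[0, 1])   # close to zero
print("Mutual information :", mutual_information(x, y))  # clearly positive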

Mutual information between two variables reflects the reduction in uncertainty about one variable obtained from knowledge of the other: if knowing Y reduces our uncertainty about X, we can conclude that Y informs X (Dionisio et al., 2006). As shown by Garner and McGill (1956), Philippatos and Wilson (1972), and others, these information theory measures behave much like standard statistical methods such as regression analysis.

Dionisio et al. (2006) have shown the following similarities between regression analysis and information theory measures.

Regression analysis | Information theory | Similarity
Explained sum of squares: ESS = β² Σ (x_i - x̄)² | Mutual information: I(Y_i, X_i) = H(Y_i) - H(Y_i | X_i) | Both reflect the variation of the dependent variable explained by the independent variable.
Residual sum of squares | Conditional entropy: H(Y_i | X_i) = H(X_i, Y_i) - H(X_i) | Variation of the dependent variable not explained by the independent variable.
Variance of the dependent variable: Σ (Y_i - Ȳ)² | Total entropy: H(Y_i) | Total dispersion of the dependent variable.

The similarity between these measures holds only if all the assumptions of the regression analysis are satisfied. Information theory measures, by contrast, do not assume linearity, homoscedasticity, stationarity or normally distributed errors, which makes them much more general (Dionisio et al., 2007).

3. METHODOLOGY

Nowadays, there is growing interest in exploring financial markets in terms of their correlations, power-law distributions, the unpredictability of their time series and the random processes that may govern their behavior. The aim is to understand and explain social and natural phenomena in rich and realistic terms. Analyzing uncertainty, a crucial step towards understanding financial phenomena, can draw on statistical measures often used in physics and information theory, one of which is Shannon entropy. An advantage of this measure is that entropy is more general than variance, because it captures the higher-order moments of a probability distribution.

In this study, we assess the relevance of entropy for measuring uncertainty in portfolio management and compare its performance to the most popular risk measure used in financial analysis (variance). We use the information theory measures described above, namely entropy, joint entropy, conditional entropy and mutual information, to assess the dependence between each of the studied stocks and the stock market index.

3.1. The sample

Our sample initially consists of 40 stocks of different Tunisian firms, chosen at random and belonging to different sectors. The sample comprises the daily closing prices of the firms listed on the Tunis Stock Exchange. After filtering, 33 of the initial 40 stocks were retained; the eliminated firms are those with irregular trading patterns or whose official listing date falls outside the sample period. The data retained for the analysis amounts to 90% of the initial sample. Our sample covers the period from 02/01/2006 to 31/12/2013, totaling 2087 observations per stock.

The Tunindex is used as the reference market index, as it best represents the Tunisian stock market.

3.2. The model to be tested

The CAPM, a reference model for pricing stocks and other assets, has long been considered the unique pricing model. It was introduced by Treynor (1961), Sharpe (1964) and Lintner (1965). Under the CAPM, systematic risk is measured by Beta: this coefficient is assumed to measure the sensitivity of the return of an asset (or a portfolio) to the risk premium, i.e. to systematic risk. The CAPM divides the risk of a portfolio or an asset into systematic and specific risk, corresponding to the explained and residual sums of squares of the regression analysis:

σ_i² = β_i²σ_m² + σ_εi² ...................... (11)

where σ_m² is the variance of the reference market index (Tunindex) and σ_εi² is the residual (specific) variance, which may be reduced through diversification.

In order to estimate the Beta of the CAPM, we use the market model given by:

R_it = R_ft + β_i [R_mt - R_ft] + ε_it ...................... (12)
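As a sketch of how the market model (12) and the risk decomposition (11) can be estimated, the code below runs an OLS regression of a stock's excess returns on the market's excess returns and splits total variance into β_i²σ_m² and σ_εi². The returns are simulated and the risk-free rate is set to zero for simplicity; this is an illustration, not the estimation actually run in Eviews for the study.

import numpy as np

def market_model(stock_ret, market_ret, rf=0.0):
    # OLS estimates of alpha and beta in Eq. (12), plus the risk split of Eq. (11)
    y = stock_ret - rf                          # excess return of the stock
    x = market_ret - rf                         # excess return of the market index
    X = np.column_stack([np.ones_like(x), x])
    (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - alpha - beta * x
    systematic = beta**2 * np.var(x)            # beta_i^2 * sigma_m^2
    specific = np.var(resid)                    # sigma_eps_i^2
    return alpha, beta, systematic, specific

rng = np.random.default_rng(3)
m = rng.normal(0.0002, 0.0026, 2087)                # stand-in for Tunindex returns
s = 0.0001 + 0.8 * m + rng.normal(0, 0.005, 2087)   # stand-in for a stock's returns
alpha, beta, sys_risk, spec_risk = market_model(s, m)
print(round(beta, 3), sys_risk, spec_risk)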

Like variance, entropy of an asset may be divided as follows:

H(X) = I(X, Tunindex) + H(X | Tunindex) ...................... (13)

where X represents the stock and Tunindex the market index. The first term measures the level of association or dependence between the asset and the market index, and the second measures the asset's residual uncertainty. Using the properties of entropy, we can thus distinguish between the common (systematic) uncertainty I(X, Tunindex) and the residual uncertainty H(X | Tunindex).
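The sketch below illustrates the decomposition in equation (13): the total entropy of a stock's returns is split into the mutual information with the index (common uncertainty) and the conditional entropy (residual uncertainty), all estimated from histograms. The data are simulated and the bin count and parameters are our assumptions.

import numpy as np

def hist_entropy(counts):
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_decomposition(stock_ret, index_ret, bins=30):
    # H(X) = I(X, Tunindex) + H(X | Tunindex), as in Eq. (13)
    h_x = hist_entropy(np.histogram(stock_ret, bins=bins)[0])
    h_m = hist_entropy(np.histogram(index_ret, bins=bins)[0])
    h_xm = hist_entropy(np.histogram2d(stock_ret, index_ret, bins=bins)[0])
    mutual_info = h_x + h_m - h_xm              # common (systematic) uncertainty
    conditional = h_x - mutual_info             # residual uncertainty H(X | Tunindex)
    return h_x, mutual_info, conditional

rng = np.random.default_rng(4)
idx = rng.normal(0.0002, 0.0026, 2087)              # stand-in for the Tunindex
stk = 0.9 * idx + rng.normal(0, 0.005, 2087)        # stand-in for a stock
total, common, residual = entropy_decomposition(stk, idx)
print(round(total, 3), round(common, 3), round(residual, 3))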

3.3. Entropy and the principle of portfolio diversification

We now examine the sensitivity of entropy to the diversification effect; put differently, we study whether entropy captures the reduction in risk that diversification brings about in the selected sample. It is important, however, to keep in mind the respective properties of variance (standard deviation) and entropy as uncertainty measures. The standard deviation is a convex function, which obeys Jensen's inequality E[σ(X)] ≥ σ[E(X)].

This property makes it possible to use variance to measure the risk of portfolio returns, as it accounts for the diversification effect.

Entropy, on the other hand, is a concave function that reaches its maximum for the uniform distribution. One might therefore think that entropy cannot capture the diversification effect. However, entropy is not a function of the values taken by the variable but of the probabilities themselves, and the property H(X, Y) ≤ H(X) + H(Y) gives grounds for optimism in that respect.

Accordingly, we proceed as in Elton and Gruber (1995) and Dionisio et al. (2006), who showed that diversification reduces specific risk (as measured by standard deviation). We assigned the selected stocks to portfolios, with an amount of 1/N invested in each asset, where N is the number of stocks in the portfolio.
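The sketch below reproduces the spirit of this exercise on simulated data: equally weighted portfolios of increasing size are formed, and both the standard deviation and the entropy of portfolio returns are reported. Entropy is estimated on a fixed grid of bins so that the scale of dispersion matters; the one-factor structure, the bin grid and the parameters are assumptions made only for illustration.

import numpy as np

def entropy_bits(x, edges):
    # Shannon entropy (bits) of x discretized on a fixed grid of bin edges; a fixed
    # grid is used so that a tighter distribution occupies fewer bins and entropy falls
    counts, _ = np.histogram(x, bins=edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(5)
n_obs, n_stocks = 2087, 33
market = rng.normal(0, 0.0026, n_obs)                         # common (market) factor
rets = 0.8 * market[:, None] + rng.normal(0, 0.006, (n_obs, n_stocks))

edges = np.linspace(-0.03, 0.03, 61)                          # same grid for every portfolio
for n in (1, 2, 5, 10, 20, 33):
    port = rets[:, :n].mean(axis=1)                           # 1/N invested in each stock
    print(n, round(port.std(), 5), round(entropy_bits(port, edges), 3))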

4. THE RESULTS AND THEIR INTERPRETATION

To test our model, we proceeded in three steps: analysis of the residuals, tests of model stability, and analysis of the significance of the coefficients.

4.1. Descriptive statistics

Table 1: Descriptive statistics of stock returns and Tunindex

Stock Mean Median Maximum Minimum Standard Deviation Skewness Kurtosis Jarque-Bera P (Jarque-Bera)
Tunisie Placement 0.0001 0.0000 0.0822 -0.0530 0.0049 2.6736 59.3140 278121.2 0.0000
Tunindex 0.0002 6.00E-05 0.0170 -0.0217 0.0026 -0.6723 14.7038 12062.91 0.0000
Amen_Bq 0.0002 0.0000 0.0378 -0.0354 0.0066 0.3511 7.1874 1566.866 0.0000
Assad 8.52E-05 0.0000 0.0377 -0.0947 0.0066 -1.3949 27.9532 54796.29 0.0000
Astree 8.68E-05 0.0000 0.2039 -0.2079 0.0084 -0.5118 351.8444 10577187 0.0000
Atb 6.63E-05 0.0000 0.0295 -0.0637 0.0056 -0.4121 15.1851 12964.19 0.0000
Atl 5.36E-05 0.0000 0.0296 -0.1395 0.0071 -3.8507 79.4077 512586.5 0.0000
Attijari 0.000151 0.0000 0.0634 -0.0594 0.0060 0.3362 17.6789 18767.11 0.0000
Bh 3.78E-05 0.0000 0.0310 -0.0545 0.0059 -0.1375 9.7410 3956.200 0.0000
Bna 9.66E-05 0.0000 0.0423 -0.0491 0.0064 0.2819 9.8444 4099.351 0.0000
Bte -7.57E-07 0.0000 0.0538 -0.0369 0.0040 0.5853 30.4521 65621.06 0.0000
Cil 9.57E-05 0.0000 0.0763 -0.1305 0.0074 -3.9416 79.0066 507519.6 0.0000
Electrostar -5.16E-05 0.0000 0.0561 -0.0576 0.0090 0.3308 6.5781 1150.840 0.0000
Gif_Filter -0.000149 0.0000 0.1008 -0.0969 0.0092 0.0248 22.2434 32186.24 0.0000
Icf 0.000286 0.0000 0.0701 -0.0442 0.0067 0.8278 17.8967 19526.01 0.0000
Al kimia 4.90E-05 0.0000 0.0816 -0.0997 0.0082 -0.4774 31.5529 70939.60 0.0000
Air-Liquide 0.000103 0.0000 0.0256 -0.0600 0.0059 -1.4799 17.6786 19488.55 0.0000
Magasin Generale 0.000425 0.0000 0.0443 -0.0545 0.0071 0.1542 9.2087 3358.714 0.0000
Siame 1.73E-05 0.0000 0.1471 -0.1526 0.0088 1.0394 97.5732 777765.5 0.0000
Simpar 0.000274 0.0000 0.0573 -0.0570 0.0074 -0.0469 9.5470 3726.279 0.0000
Siphat -0.000220 0.0000 0.0444 -0.0541 0.0073 -0.1800 8.4622 2604.481 0.0000
Sitex 2.94E-05 0.0000 0.0749 -0.1038 0.0077 -1.7954 53.1976 220132.9 0.0000
Somocer -0.000108 0.0000 0.0560 -0.0588 0.0091 0.3081 6.6070 1163.819 0.0000
Sotetel -0.000263 0.0000 0.0473 -0.0902 0.0086 0.0425 10.2237 4536.097 0.0000
Sotrapil -0.000239 0.0000 0.0446 -0.0430 0.0077 0.2289 5.3130 483.2035 0.0000
Sotumag 0.000115 0.0000 0.1413 -0.1411 0.0085 1.0563 85.2266 588049.3 0.0000
Sotuver 0.000334 0.0000 0.0423 -0.1343 0.0086 -1.6156 34.5811 87595.26 0.0000
Star 0.000628 0.0000 0.0650 -0.0852 0.0081 -0.1458 20.0041 25138.47 0.0000
Stb 1.53E-05 0.0000 0.0255 -0.0467 0.0069 0.1099 6.4546 1041.484 0.0000
Steg 1.67E-05 0.0000 0.1128 -0.0921 0.0075 -1.0826 60.9732 292525.1 0.0000
Tuninvest 0.000131 0.0000 0.0568 -0.1254 0.0083 -2.6314 42.3247 136817.8 0.0000
Tunisair -0.000115 0.0000 0.0621 -0.0588 0.0075 0.5352 10.1199 4505.592 0.0000
Tunisie Lait 5.07E-06 0.0000 0.0666 -0.0786 0.0080 -0.2942 19.2801 23066.65 0.0000
Ubci 6.32E-05 0.0000 0.0447 -0.0822 0.0070 -0.5826 20.2172 25882.90 0.0000

As shown in Table 1 above, the "Star" stock scores the highest mean return (0.00062), followed by "Magasin Generale" (0.00042) and "Sotuver" (0.00033). Meanwhile, the "Gif-Filter" and "Somocer" stocks carry the highest risk, with a standard deviation of about 0.0092, followed by "Electrostar" (0.0090).

Analysis of normality on the stock return series shows that skewness differs from zero for all stocks, indicating non-linearity in stock returns (see note 4 in the Appendix). Skewness is negative for 17 stocks and for the Tunindex: the distributions of these return series are skewed to the left, implying that the distribution of losses (downside risk) is heavier than that of profits (upside risk), so returns are likely to react more strongly to a negative shock than to a positive one. The remaining 16 stocks record a positive skewness coefficient, meaning that the thicker part of the distribution lies on the right side. Stocks such as "Amen Banque", "Bna" and "Bte" are thus positively skewed, so the probability of obtaining large positive returns is higher than that of large negative returns. Moreover, investors reduce their risk aversion when they believe that their wealth follows a distribution skewed to the right. Such skewness manifests itself when the instantaneous variance (volatility) of the return series is lower after an increase than after a decrease in returns. In the same line of thinking, kurtosis is clearly greater than 3 (excess kurtosis).

High kurtosis indicates that the distribution is more peaked around the mean, i.e. a leptokurtic distribution with "fat tails". Such a distribution produces larger losses and profits than a normal distribution, so the higher the kurtosis, the higher the probability of significant losses or profits. We therefore conclude that the returns are not normally distributed.

To validate this conclusion, we use the Jarque-Bera test, which combines skewness and kurtosis, together with its associated probability. The reported probability is that of wrongly rejecting the null hypothesis of normally distributed data when it is true; in our study it is 0.0000 for every series. Therefore, at a significance level of α = 5%, we reject the null hypothesis that the returns are normally distributed.
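For reference, the Jarque-Bera statistic combines skewness and excess kurtosis as JB = n/6 [S² + (K - 3)²/4] and is compared with a χ² distribution with 2 degrees of freedom. The sketch below computes it by hand on a simulated heavy-tailed series; it is illustrative only and is not the Eviews output reported above.

import numpy as np
from scipy.stats import chi2

def jarque_bera(x):
    # JB = n/6 * (S^2 + (K - 3)^2 / 4), compared with a chi-square(2) under normality
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    s2 = np.mean(d**2)
    skew = np.mean(d**3) / s2**1.5
    kurt = np.mean(d**4) / s2**2                  # equals 3 for a normal distribution
    jb = n / 6.0 * (skew**2 + (kurt - 3.0)**2 / 4.0)
    return jb, chi2.sf(jb, df=2)                  # statistic and p-value

rng = np.random.default_rng(6)
heavy_tailed = rng.standard_t(df=3, size=2087) * 0.006   # fat-tailed simulated returns
print(jarque_bera(heavy_tailed))                  # large JB, p-value near 0: reject normality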

4.2. Tests on residuals: White's heteroscedasticity test (no cross terms)

The TR² test probabilities for the stocks Air_Liquide, Amen_Banque, Atb, Bh, Bna, Magasin Generale, Siphat, Sitex, Somocer, Sotrapil, Stb, Tunisair and Tunisie Lait are lower than 0.05, so for these stocks we accept H1, i.e. the errors are heteroscedastic. For the stocks Electrostar, Al_Kimia, Assad, Astree, Atl, Attijari_Banque, Bte, Cil, Gif_Filter, Icf, Tunisie Placement, Siame, Simpar, Sotetel, Star, Steg, Sotumag, Sotuver, Tuninvest and Ubci, the TR² probabilities are higher than 0.05, so we accept H0, i.e. the errors are homoscedastic.

4.3. Model stability tests: CUSUM stability test (Appendix 1)

As the short-term model is dynamic, the test can be applied to the long-term model. If the curve does not cross the bounds (dotted lines), the model is said to be stable; it is said to be unstable when the curve crosses them. The results obtained with Eviews show that for the stocks Air Liquide, Al_Kimia, Amen_Banque, Bna, Atb, Atl, Bte, Cil, Electrostar, Gif_Filter, Magasin_Generale, Tunisie Placement, Siphat, Somocer, Sitex, Sotetel, Siame, Star, Steg, Stb, Sotumag, Tunisair, Tunisie_Lait, Tuninvest, Ubci and Simpar, the CUSUM statistics remain within the thresholds; we therefore reject the structural change hypothesis and conclude that our model is stable. However, for the models with the dependent variables Astree, Assad, Attijari_Banque, Bh, Icf, Sotrapil and Sotuver, we accept the structural change hypothesis and conclude that these models are unstable.

4.4. Model estimation: Interpreting significance of coefficients

α and β were estimated for each stock. The market model is generally estimated by OLS. However, most models are found to be non-significant, and the tests indicate that the residuals are not white noise, so OLS is not appropriate in this case. Moreover, we also computed systematic risk, β_i²σ_m², and specific risk, σ_εi², in order to compare the two approaches.

The results obtained from the Ljung-Box and Jarque-Bera tests on the residuals and from the CUSUM stability test show that the residuals from the CAPM linear market model of returns exhibit high levels of non-linearity; they display autocorrelation, heteroscedasticity and instability. This seems to indicate that a linear analysis is not enough to assess risk and uncertainty.

Table 2: CAPM-based analysis of Betas and stocks' systematic and specific risks

Stock Alpha Beta Standard deviation Systematic risk (β_i²σ_m²) Specific risk (σ_εi²)

Tunindex 0.0002 0.2646 0.0066 0.0477 4.2541
Amen Banque -0.0002 0.9529 0.0059 0.6189 2.8296
Bh -0.0001 1.0110 0.5479 0.6966 3.3366
Bna -0.0002 0.9825 0.0069 0.6578 4.0872
Stb -0.0003 0.5953 0.0075 0.2415 5.4168
Tunisair 0.0000 0.4871 0.0085 0.1617 6.9995
Sotumag -0.0001 0.4590 0.0090 0.0144 7.9843
Electrostar 0.0003 0.5953 0.0071 0.2415 4.7961
Magazin Genrale -0.0001 0.9991 0.0066 0.6803 3.6649
Assad -0.0004 0.6337 0.0077 0.2737 5.7041
Sotrapil -0.0001 1.0399 0.0056 0.7370 2.4375
Atb 0.0000 0.1548 0.0040 0.0163 1.5504
Bte 0.0000 0.0206 0.0075 0.0003 5.6506
Steg -0.0002 0.0892 0.0073 0.0054 5.3846
Siphat -0.0002 0.2635 0.0091 0.0473 8.3122
Somocer 0.0000 0.0683 0.0070 0.0032 4.8612
Ubci 0.0002 0.6431 0.0086 0.2819 7.0518
Sotuver 0.0000 0.1175 0.0077 0.0094 5.8968
Sitex 0.0001 -0.0667 0.0082 0.0030 6.7927
Al Kimia 0.0112 0.1652 0.0084 0.0186 7.0368
Astree -0.0001 0.8626 0.0071 0.5071 4.5691
Atl -0.0003 0.7560 0.0092 0.3895 8.1101
Gif 0.0003 0.1272 0.0067 0.0110 4.4897
Icf 0.0002 -0.0465 0.0049 0.0015 2.4427
Placement -0.0001 0.4941 0.0088 0.1664 7.5393
Siame 0.0000 0.2014 0.0080 0.0276 6.4108
Tunisie 0.0000 0.4606 0.0083 0.1446 6.8235
Tuninvest 0.0001 0.1729 0.0059 0.0204 3.4750
Airliquid 0.0001 0.6958 0.0074 0.3300 5.0928
Simpar 0.0005 0.8622 0.0081 0.5066 6.0455
Star -0.0001 0.8746 0.0074 0.5213 4.9989
Cil -0.0001 1.1346 0.0060 0.8774 2.6966
Attijari Bq -0.0005 0.9739 0.0086 0.6465 6.8089
Sotetel 0.0002 0.2646 0.0066 0.0477 4.2541

Table 3: Information theory-based analysis of stocks' systematic and specific risks

Stock Total risk (entropy) Systematic risk (mutual information) Joint entropy Specific risk (conditional entropy)
Tunindex 0.7578
Amen banque 1.3417 0.1544 1.9451 1.1873
Bh 1.1871 0.2126 1.7323 0.9745
Bna 1.2871 0.2007 1.8442 1.0864
Stb 1.3789 0.2112 1.9255 1.1677
Tunisair 1.4518 0.1704 2.0392 1.2814
Sotumag 1.3441 0.1202 1.9817 1.2239
Electrostar 1.5634 0.0852 2.236 1.4782
Magazin genrale 1.3853 0.1626 1.9805 1.2227
Assad 1.3297 0.205 1.8825 1.1247
Sotrapil 1.4827 0.1643 2.0762 1.3184
Atb 1.219 0.2593 1.7175 0.9597
Bte 0.7652 0.0073 1.5157 0.7579
Steg 0.7965 0.0015 1.5528 0.795
Siphat 1.1782 0.0672 1.8688 1.111
Somocer 1.6147 0.1565 2.216 1.4582
Ubci 1.0292 0.0088 1.7782 1.0204
Sotuver 1.4756 0.1744 2.059 1.3012
Sitex 0.5312 0.00085 1.2881 0.5303
Al kimia 0.8052 0.00085 1.5621 0.8043
Astree 0.5606 0.00062 1.3177 0.5599
Atl 1.3322 0.1925 1.8975 1.1397
Gif 1.474 0.1646 2.0672 1.3094
Icf 0.977 0.0639 1.6709 0.9131
Placement 0.3804 0.0015 1.1367 0.3789
Siame 1.3236 0.1521 1.9293 1.1715
Tunisie 0.8055 0.0005 1.5627 0.8049
Tuninvest 1.2874 0.0733 1.9719 1.2141
Airliquid 0.9074 0.074 1.5912 0.8334
Simpar 1.3798 0.1347 2.0029 1.2451
Star 1.4891 0.1693 2.0776 1.3198
Cil 1.3033 0.1568 1.9043 1.1465
Attijari bq 1.2612 0.2684 1.7506 0.9928
Sotetel 1.5325 0.1827 2.1076 1.3498

We computed the mutual information between each stock and the Tunindex, I(X, Tunindex), and the conditional entropy, H(X | Tunindex).

Figure 1: Comparison between systematic risk β_i²σ_m² and mutual information I(X, Tunindex)

As shown in Figures 1 and 2, there is a positive relationship between systematic risk and mutual information and between specific risk and conditional entropy. Our results are in line with those reported by Dionisio et al. (2007).

Figure 2: Comparison between specific risk σ_εi² and conditional entropy H(X | Tunindex)

Despite the significant and strong relationship between variance and the information theory measures, we thought it fit to compare only the measures that are directly comparable. Figures 1 and 2 indicate that entropy behaves similarly, but not identically, to standard deviation, such that it can be used as a good risk measure. As such, it can measure risk accurately by combining the advantages of the Beta of the CAPM and of standard deviation.

Indeed, entropy can quantify systematic and specific risks jointly. We also found that, for some stocks, the relationship with the market index Tunindex deviates strongly from the global linear analysis. According to the standard deviation, the riskiest stocks are Somocer and Gif with a value of about 0.0092; according to entropy, the riskiest stock is Somocer with a value of 1.6147, while Electrostar comes second with an entropy of 1.5634.

To identify the possible reasons behind these differences, we ran a number of tests on the residuals from the linear estimation of the market model, namely the Jarque-Bera test, the Engle test and the CUSUM stability test. The results show that the residuals from the linear market model of returns display high levels of non-linearity, implying the presence of autocorrelation, non-normality, heteroscedasticity and instability in the model. This amounts to saying that a linear analysis is not enough to assess risk and uncertainty.

If the residuals are white noise and the linear correlation coefficient (R²) is high, Beta is a good measure of systematic risk. However, in the presence of non-linearity and irregular residuals, a simple linear regression is inadequate to detect the relationship between an asset and the market index Tunindex. Since entropy detects both linear and non-linear dependences without specifying any particular dependence model, mutual information and conditional entropy may better inform investors and be more adequate measures of risk.

Figure 3: Risk (entropy) and diversification

Accordingly, the mean-variance approach can be recast as a mean-entropy approach to asset selection. Likewise, Sharpe's single-index model was reinterpreted under entropy theory, with total risk divided into systematic and non-systematic risk. We also examined the effect of diversification on the selected stocks.

In this study, we found that entropy, like standard deviation, tends to diminish as more assets are included in a portfolio. This finding is similar to that reported by Wagner and Low, for whom risk fades away with diversification. We may therefore conclude that entropy is sensitive to diversification, corroborating the results of Dionisio et al. (2006) and Ormos and Zibriczky (2014).

These results lead us to conclude that entropy detects the diversification effect and is a more general measure of uncertainty than variance, as it uses more information on the probability distribution. This finding is consistent with the results of Dionisio et al. (2006, 2007) and Ormos and Zibriczky (2014), who showed the ability of entropy to measure risk in portfolio management.

Moreover, these results can be explained by the fact that, when the number of assets in a portfolio increases, the number of the portfolio's possible configurations gradually diminishes and the uncertainty in the portfolio tends to decrease as well. Mutual information and conditional entropy outperform the linear market model of the CAPM in measuring systematic and specific risks.

Furthermore, the main differences found between entropy and standard deviation relate to the returns, which show high levels of kurtosis, skewness, autocorrelation and heteroscedasticity. Entropy therefore appears sensitive to higher-order moments, providing more information about returns and their probability distribution.

5. CONCLUSION

Risk measurement was born alongside financial theory. Although the development of financial theory is attributed to the work of Bachelier (1900), it was Markowitz (1952) and Sharpe (1964) who first conceptualized risk and developed the CAPM and Arbitrage Pricing Theory (APT).

In their modeling, risk is computed as a function of the correlation between a portfolio's assets. These models rest on several fundamental assumptions, among which are the normality of stock return distributions and the preferences of agents. The first essential hypothesis pertains to investors' problem of choosing assets under risk; it is equally difficult to explain the behavior of risk-averse agents (Friedman and Savage, 1948). The second hypothesis expresses expected utility as an exact function of the expectation and variance of the returns distribution, which is the theoretical foundation of the mean-variance approach.

Nevertheless, normality of returns is clearly rejected in the theoretical and empirical literature (Engle, 1982; Mandelbrot, 1971). The leptokurtic character of observed distributions needs to be taken seriously in the analysis and modeling of financial risk.

This leads us to reject the existence of a linear relationship between systematic risk and returns and to abandon the CAPM as a risk measure. These shortcomings ushered in a great concern with measuring risk and with finding new alternatives to the CAPM (Fama and French, 1996; Hwang and Satchell, 1999; Harvey and Siddique, 2000; Pedersen and Satchell, 1998).

Our aim in this study was to extend risk measurement and decision-making under entropy theory to portfolio theory (a normative portfolio selection model) and to Sharpe's single-index model. This combination helped us develop an asset pricing model.

The main objective of modern finance is the pricing of assets, and the choice of a pricing model depends on the available information. In this regard, as entropy is one of the major concepts of information theory and as market agents react to information when taking decisions, we believe that entropy should find its way into financial analysis.

Moreover, as classic asset pricing models fail to account for the fat tails and skewness of almost all asset return distributions, entropy may be used as a more general measure of uncertainty than variance, providing more information about an asset and its probability distribution. Its use in examining risk contributes greatly to improving risk and portfolio management.

The method we proposed in this study may be used reliably and efficiently in practice. Entropy may therefore be preferred to variance when return distributions are not Gaussian. The use of power laws in this study is particularly relevant to the analysis of uncertainty and asset values, as they do not require normality. Interestingly, the work of Markowitz was revisited under such an understanding of uncertainty: the mean-variance approach can be recast as a mean-entropy approach to asset selection, and Sharpe's single-index model was reinterpreted under entropy theory, with total risk divided into systematic and non-systematic risk. Entropy is believed to contribute significantly to improving the assessment of systematic and specific risks and thus asset pricing.

In conclusion, researchers in finance and risk management have developed this coherent index, which takes into account the reality and needs of firms, to measure volatility. However, the use of entropy is complex and requires a robust algorithm for risk assessment and portfolio selection. The implications of these results are numerous for researchers who use risk measures such as VaR or who are involved in modeling the volatility of returns.

Funding: This study received no specific financial support.
Competing Interests: The authors declared that they have no conflict of interests.
Contributors/Acknowledgement: All authors participated equally in the design and estimation of the current research.
Views and opinions expressed in this study are the views and opinions of the authors, Asian Journal of Empirical Research shall not be responsible or answerable for any loss, damage or liability etc. caused in relation to/arising out of the use of the content.

References

Bachelier, L. (1900). Theory of speculation. In: The random character of stock market prices. MIT Press, Cambridge, 17-75.

Clausius, R., & Hirst, T. A. (1867). The mechanical theory of heat: with its application to the steam engine and to the physical properties of bodies. J. Van Voorst, London, UK.

Dionisio, A., Menezes, R., & Mendes, D. A. (2006). An econophysics approach to analyse uncertainty in financial markets: An application to the Portuguese stock market. The European Physical Journal B - Condensed Matter and Complex Systems, 50(1), 161-164.

Dionisio, A. (2001). Entropy analysis as uncertainty measure and information value in Portuguese stock market. Evora University.

Dionisio, A., Menezes, R., & Mendes, D. A. (2006). Entropy and uncertainty analysis in financial markets. Papers 0709.0668.

Dionisio, A., Menezes, R., & Mendes, D. A. (2007). Utility function estimation: The entropy approach. Papers 0709.0591.

Ebrahimi, N., Maasoumi, E., & Soofi, E. (1999). Ordering univariate distributions by entropy and variance. Journal of Econometrics, 90(2), 317-336.

Elton, E. J., & Gruber, M. J. (1995). Modern portfolio theory and investment analysis (5th ed.). John Wiley & Sons, United States.

Engle, R. F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50, 987-1007.

Fama, E. F., & French, K. R. (1996). Multifactor explanations of asset pricing anomalies. Journal of Finance, 51(1), 55-84.

Feldman, D. P., & Crutchfield, J. P. (1998). Statistical measures of complexity: Why?. Physics Letters A, 238(4-5), 244-252.

Friedman, M., & Savage, L. J. (1948). The utility analysis of choices involving risk. Journal of Political Economy, 56(4), 279-304.

Garner, W. R., & McGill, W. J. (1956). The relation between information and variance analysis. Psychometrika, 21(3), 219-228.

Georgescu-Roegen, N. (1971). The entropy law and the economic process. Harvard University Press, Cambridge, MA.

Gibbs, J. W. (1902). Elementary principles in statistical mechanics, developed with especial reference to the rational foundation of thermodynamics. C. Scribner's Sons, New York (reprinted by Yale University Press, New Haven, 1948, and Ox Bow Press, Woodbridge, Connecticut, 1981).

Granger, C., Maasoumi, E., & Racine, J. (2004). A dependence metric for possibly nonlinear processes. Journal of Time Series Analysis, 25, 649-669.

Gulko, L. (1999). The entropic market hypothesis. International Journal of Theoretical and Applied Finance, 2(3), 293-329.

Harvey, C., & Siddique, S. (2000). Conditional skewness in asset pricing tests. Journal of Finance, 55, 1263-1295.

Hwang, S., & Satchell, S. (1999). Modelling emerging market risk premia using higher moments. International Journal of Finance and Economics, 4(4), 271-296.

Kirchner, U., & Zunckel, C. (2011). Measuring portfolio diversification. arXiv preprint arXiv:1102.4722.

Knight, F. H. (1921). Risk, uncertainty, and profit. Hart, Schaffner & Marx; Houghton Mifflin Company, Boston, MA.

Kullback, S. (1959). Information theory and statistics. Wiley, New York.

Lintner, J. (1965). The valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets. Review of Economics and Statistics, 47, 13-37.

Maasoumi, E., & Racine, J. S. (2002). Entropy and predictability of stock market returns. Journal of Econometrics, 107(2), 291-312.

Maasoumi, E. (1993). A compendium to information theory in economics and econometrics. Econometric Reviews, 12(2), 137-181.

Mandelbrot, B. B. (1971). When can price be arbitraged efficiently? A limit to the validity of the random walk and martingale models. Review of Economics and Statistics, 53, 225-236.

Markowitz, H. (1952). Portfolio selection. The Journal of Finance, 7(1), 77-91.

Mayumi, K. (1997). Information, pseudo measures and entropy: An elaboration on Nicholas Georgescu-Roegen's critique. Ecological Economics, 22(3), 249-259.

Ormos, M., & Zibriczky, D. (2014). Entropy-based financial asset pricing. PLoS ONE, 9(12), e0115742. doi:10.1371/journal.pone.0115742.

Pedersen, C., & Satchell, S. (1998). An extended family of financial risk measures. Geneva Papers on Risk and Insurance Theory, 23, 89-117.

Philippatos, G. C., & Wilson, C. (1972). Entropy, market risk and the selection of efficient portfolios. Applied Economics, 4, 209-220.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379-423.

Shannon, C., & Weaver, W. (1949). The mathematical theory of communication. The University of Illinois Press, Urbana, IL.

Sharpe, W. F. (1964). Capital asset prices: A theory of market equilibrium under conditions of risk. Journal of Finance, 19(3), 425-442.

Soofi, E. (1997). Information theoretic regression methods: Applying maximum entropy to econometric problems. In Fomby, T., & Carter Hill, R. (Eds.), Vol. 12. JAI Press Inc., London.

Treynor, J. L. (1961). Market value, time, and risk. Unpublished manuscript.

Zhou, R., Cai, R., & Tong, G. (2013). Applications of entropy in finance: A review. Entropy, 15, 4909-4931.

Appendix


  1. Clausius (1867) introduced entropy as a state function in the second principle of thermodynamics.
  2. Boltzmann (1900) and Gibbs (1902) developed statistical physics.
  3. Our results are measured in bits.
  4. Non-linearity originates from the presence of either an ARCH effect or long memory.