Business forecasting entered a new era around 1965, when the ready availability of substantial computing power allowed number-crunchers -- armed with techniques derived from "operations research" work in World War II -- to produce projections of sales and earnings that were not simply a "point" forecast but included some analysis of the factors that could vary, and could therefore be expressed as a probabilistic forecast.
The most common technique to achieve this, Monte Carlo simulation, set up probability distributions for all the factors on which next year's operations were thought to depend and then conducted a series of "simulations" whereby random numbers were drawn for each of the varying factors.
The percentile corresponding to the random number was taken for each factor, and the resulting point estimates for the factors were combined to give a calculation of next year's results, given that particular set of random numbers. Repeat 50 or 1,000 times, and you get a probability distribution for next year's sales and profits. The same technique could be and was used in the public sector for economic forecasting.
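The procedure just described can be sketched in a few lines of code. Everything here is illustrative: the factor names, means and standard deviations are invented for the example, and the factors are drawn independently of one another -- the very assumption the rest of this column calls into question.

```python
import random

# Hypothetical illustration: three factors assumed (for now) to be
# independent drivers of next year's sales. Numbers are invented.
FACTORS = {
    "market_growth": (0.03, 0.02),   # (mean, standard deviation)
    "price_change":  (0.00, 0.05),
    "share_shift":   (0.01, 0.03),
}

def simulate_once(baseline_sales=100.0):
    """One trial: draw each factor independently and combine them."""
    result = baseline_sales
    for mean, sd in FACTORS.values():
        result *= 1.0 + random.gauss(mean, sd)
    return result

def monte_carlo(trials=1000, seed=42):
    """Repeat the trial many times and read off percentiles."""
    random.seed(seed)
    outcomes = sorted(simulate_once() for _ in range(trials))
    pct = lambda p: outcomes[int(p * trials)]
    return pct(0.01), pct(0.50), pct(0.99)

low, median, high = monte_carlo()
print(f"1%: {low:.1f}  median: {median:.1f}  99%: {high:.1f}")
```

The sorted list of simulated outcomes is the probability distribution the column describes; the 1st percentile is the "worst outcome you could realistically expect" that the analysts quoted to their clients.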
After a few years, Monte Carlo simulation fell somewhat into disfavor because observers noticed a strange fact: the actual results achieved by the company being modeled were quite often far worse than even the "1 percent confidence limit" that the Monte Carlo experts told clients was the worst outcome they could realistically expect -- and this happened far more often than the simulations said it should.
Naturally, management, and the analysts themselves, demanded an explanation. One possibility was that the probabilities assessed for the various outside factors were too optimistic -- optimism is a natural state of mankind, after all, particularly well-remunerated, youthful mankind. However, this appeared not to be the case; it was not that particular factors came out at levels far outside the range postulated, though there was some of that. It was that a whole range of factors would come out far to the negative side of their postulated outcome, thus repeatedly producing a result that was theoretically almost impossible.
In the field of risk management, which came into being in the late 1970s when large amounts of computing capability became available on desktops (first through terminals and later through PCs), the same problem applied. The Chicago Board Options Exchange came into existence in the same year as the Black-Scholes model for valuing options -- 1973. It is impossible to imagine one flourishing without the other; for options to be actively traded, it is essential that traders have a seemingly rigorous, even if imperfect, methodology for valuing the portfolio of positions they have acquired.
However, in practice, Black-Scholes, and models derived from it, have proved to suffer from a problem similar to Monte Carlo simulation: things go wrong far more often than they should. It seems that every time the market moves in an unexpected way, some portfolio of options or futures is exposed as a disaster. Enron is only the largest example.
Of course, a lot of this is due to outright fraud, or simply to wishful thinking and accounting manipulation by options traders and their superiors seeking to maximize their bonuses or stock options. But even so, there have been a number of cases where no significant malfeasance seems to have occurred and yet where disaster has resulted -- it beggars belief to imagine that Enron's principal competitors, Williams and Dynegy, were also run by fraudsters, and even in Enron itself there seems to be evidence that much of the energy trading operation was legitimate if ill-conceived.
The problem with both Monte Carlo simulation and risk management equations such as Black-Scholes is that the various factors considered are not in fact "probabilistically independent" of each other. Monte Carlo simulation, Black-Scholes, and other probabilistic models all rely heavily on the multiplication rule of classical probability theory -- the tradition in which the Rev. Thomas Bayes worked -- which states that the probability of several events all happening is the product of their individual probabilities, PROVIDED that the events concerned are probabilistically independent.
The Rev. Thomas Bayes (c. 1701-61) was a nonconformist (that is, non-Anglican) minister, whose first work was "Divine Benevolence" and who entered mathematics with "An Introduction to the Doctrine of Fluxions" in 1736, a defense of Newtonian calculus against the criticism of the Anglican bishop and philosopher George Berkeley. Bayes' magnum opus, published posthumously in 1764, was "An Essay towards solving a Problem in the Doctrine of Chances." It focused primarily on the effect that prior information has on probability, and studied the problem by considering random drawings of white and black balls from large urns. One building block of this work was the multiplication rule for independent events: the probability of randomly drawing two black balls in succession (with replacement) from an urn containing white and black balls is the square of the probability of drawing one such black ball.
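The urn result is easy to verify numerically. The sketch below uses invented numbers (30 black balls out of 100) and checks the squaring rule by exhaustively enumerating all ordered pairs of independent draws.

```python
from fractions import Fraction

# Illustrative urn: 30 black and 70 white balls; draws are made with
# replacement, so the two draws are genuinely independent.
p_black = Fraction(30, 100)

# Multiplication rule for independent events: square the single-draw odds.
p_two_black = p_black * p_black   # 9/100

# Exhaustive check over every ordered pair of draws:
balls = ["B"] * 30 + ["W"] * 70
pairs = [(a, b) for a in balls for b in balls]
count = sum(1 for a, b in pairs if a == b == "B")
assert Fraction(count, len(pairs)) == p_two_black
print(p_two_black)
```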
It's an elegant mathematical concept, and a major intellectual leap forward for 1764 (it is also interesting that the field was called a "doctrine" -- indicating an intellectual rigidity on the subject that modern probabilists have largely shared).
There are three problems with this rule. First, it never occurred to Bayes that it might be used for business forecasting, or indeed for options valuation -- though it is interesting to imagine what the directors of the South Sea Company, the great stock market bubble of 1720, might have done with the idea!
Second, it assumes an ideal of "probabilistic independence" that almost never exists in the gritty reality of business life. Techniques like Monte Carlo simulation and the Black-Scholes option valuation model that rest on this rule assume the problem away -- and thereby produce erroneous results.
Third, Bayes and the French philosopher Blaise Pascal (1623-62), who laid much of the foundation of probability theory, did not distinguish adequately between the random and the unknown. One can speculate that there was a theological reason for this: God, in whom both Bayes and Pascal devoutly believed, knows equally what next year's gross domestic product growth will be, what the weather will be next week, and what color is the ball you will next draw out of the urn. To Him, who has perfect foreknowledge of the future, the whole of existence is deterministic.
To mere mortals trying to determine next year's marketing budget, or value their option portfolio, this is not the case. Drawing a ball out of an urn is random. Nobody makes forecasts (other than probabilistic ones) of which ball you will draw next. The weather, more than about 72 hours ahead, is also effectively random -- only the Farmer's Almanac makes predictions about how hot next summer will be, and its accuracy is not great.
But next year's GDP growth is mostly not random at all, it is unknown. It depends in mysterious but largely deterministic ways on forces that are already in place. Economists have been attempting to predict the behavior of the economy since the invention of econometrics, and they have achieved a considerably better than random (though by no means perfect) ability to do so.
Factors that are not random, and are heavily interdependent, have no obligation whatever to obey laws of probability such as the multiplication rule. And in fact, they don't. There is a much higher chance that several non-random things will go wrong at once than the independence assumption would predict, and hence a Monte Carlo simulation or a risk management valuation model built on that assumption will be seriously in error in predicting outcomes when such factors are involved.
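A small simulation makes the point concrete. Assume, purely for illustration, two factors modeled as standard normal variables: if they are independent, the chance that both land below their 10th percentile is 1 in 100; give them a correlation of 0.8 and the joint failure becomes several times more likely than the product rule suggests.

```python
import math
import random

def joint_downside(rho, trials=100_000, seed=1):
    """Estimate P(both factors fall below their 10th percentile) when the
    factors are standard normals with correlation rho."""
    random.seed(seed)
    z10 = -1.2816  # 10th percentile of the standard normal distribution
    hits = 0
    for _ in range(trials):
        x = random.gauss(0, 1)
        eps = random.gauss(0, 1)
        # Standard construction of a draw correlated with x:
        y = rho * x + math.sqrt(1 - rho * rho) * eps
        if x < z10 and y < z10:
            hits += 1
    return hits / trials

independent = joint_downside(0.0)  # close to 0.01, as the product rule says
correlated = joint_downside(0.8)   # several times larger
print(independent, correlated)
```

With correlation, "everything going wrong at once" stops being a 1-in-10,000 curiosity and becomes an ordinary event -- which is the observed pattern the forecasters could not explain.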
Mathematically, there is a solution to this, on which I have done considerable work, some of it published ("The use of fuzzy logic in business decision making," Derivatives Quarterly, Summer 1998). That is to use the relatively new technique of fuzzy logic, the mathematics of partial set membership. Under fuzzy logic, an unknown factor such as next year's GDP growth is examined according to one's "belief" that it might be a member of various sets: say, the "recession" set (less than zero), the "stagnation" set (0 to 1.5 percent), the "moderate growth" set (1.5 to 3 percent), the "healthy growth" set (3 to 5 percent) and the "roaring boom" set (more than 5 percent). For each set, you can establish your belief that next year's GDP growth might be a member of that set. Other factors can be categorized in a similar manner.
Then a fuzzy logic model can be constructed in the same way as a Monte Carlo model, with one important difference: the belief in an intersection of two sets is the lower of the two beliefs, not the product as in probability. If you have four 1-in-10 probabilities, the probability of all four occurring is 1 in 10,000 -- so you can ignore it. If you have four 1-in-10 beliefs, the belief in all four is still 1 in 10 -- so you absolutely must not ignore it.
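The arithmetic in the preceding paragraph looks like this in code. (The min operator used for the fuzzy intersection is the standard choice in fuzzy logic, though other combination rules exist.)

```python
# Four adverse factors, each assessed at 1 in 10 -- as probabilities in
# one reading, as fuzzy beliefs in the other.
factors = [0.1, 0.1, 0.1, 0.1]

# Probability with assumed independence: multiply.
prob_all = 1.0
for p in factors:
    prob_all *= p

# Fuzzy logic (min rule): the belief in the intersection is the
# minimum of the individual beliefs.
belief_all = min(factors)

print(prob_all)    # about 0.0001 -- 1 in 10,000, negligible
print(belief_all)  # 0.1 -- still 1 in 10, must not be ignored
```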
The same type of analysis can be used for options valuation; it shows that Black-Scholes seriously undervalues options that are a long way "out of the money."
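For readers who want to see the scale of the issue, here is the textbook Black-Scholes formula for a European call, with invented parameters. The deep out-of-the-money price comes out at a tiny fraction of a cent per share, because the lognormal distribution assigns almost no weight to a 50 percent move in three months; if real markets have fatter tails than the lognormal, as this column argues, that is exactly where the model understates value.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Textbook Black-Scholes price of a European call: spot S, strike K,
    time to expiry T (years), risk-free rate r, volatility sigma."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Illustrative numbers: an at-the-money call vs one 50 percent out of
# the money, both three months from expiry at 20 percent volatility.
atm = black_scholes_call(S=100, K=100, T=0.25, r=0.05, sigma=0.20)
otm = black_scholes_call(S=100, K=150, T=0.25, r=0.05, sigma=0.20)
print(round(atm, 2), round(otm, 6))
```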
In reality, of course, some factors are unknown while others are random, so a truly sophisticated analysis would use both fuzzy logic and probability. The two are not at all incompatible; they are simply applicable to different things.
In the real world, fuzzy logic is not being used by mainstream analysts, who instead are wedded to probability-based "risk management" techniques that are deeply flawed. What are the consequences of this?
-- Companies that engage in risk management are understating the cost of hedging and overstating profits; this is of course in the interests of both traders and executives, but investors should beware.
-- Risk management does not reduce risk; it increases it, because the spurious air of security it provides allows traders and risk managers to take on larger and larger positions in the belief that they are being adequately managed.
-- Earnings of companies that engage in risk management are not only overstated, they are of low quality, and subject to huge unexpected fluctuations. Since so many companies engage in this practice, the appropriate price-earnings ratio for the stock market as a whole is much LOWER than the historical average, not higher as was the case through most of the 1990s and is still the case.
It is a technical and arcane mathematical problem. But it has had and will continue to have very real and unpleasant effects on our economy and our lives.
(The Bear's Lair is a weekly column that is intended to appear each Monday, an appropriately gloomy day of the week. Its rationale is that, in the long '90s boom, the proportion of "sell" recommendations put out by Wall Street houses declined from 9 percent of all research reports to 1 percent and has only modestly rebounded since. Accordingly, investors have an excess of positive information and very little negative information. The column thus takes the ursine view of life and the market, in the hope that it may be usefully different from what investors see elsewhere.)