“The Fundamental Error of Elliott Wave Theory”

by Orvin Five


The underlying premise of Elliott Wave theory is that human behavior—as manifested in the movement of the stock market—can be modeled as a series of discrete patterns that recur predictably. This premise rests on a mistaken analogy between stock price movements and other phenomena found in nature. For centuries, people noticed predictable patterns in snail shells, planetary orbits, ocean tides, and many other processes and objects. Often, these patterns were so consistent that they could be mapped to mathematical functions with a high degree of accuracy. Crucially, it was not necessary to understand why a particular mathematical function was able to predict a natural pattern. In other words, the underlying inputs (causal variables) of the pattern could remain unknown for centuries even though the mathematical model continued to forecast the pattern accurately. As an example, people were able to predict the movements of celestial bodies long before they had identified and understood the main underlying input (gravity) responsible for the pattern.

If you want to understand why Elliott Wave theory is flawed, it is important to distinguish between two very different categories of knowledge (or ignorance) about an underlying input. The first, which I will call the “Input Identity” or “Input Existence,” is simply the recognition that the input exists at all. The second, which I call the “Input Degree,” is the quantitative (or at least ordinal, qualitative) value of the underlying input once its existence/identity has already been determined.

Input Existence and Input Degree are two very different things. To illustrate with our example of celestial motion: early astronomers had (1) not yet identified that a force later to be called “gravity” existed at all (i.e. they lacked knowledge of the Input Existence), and (2) no means of assigning any value (Input Degree) to this input, even had they identified its existence. Thus, gravity was an “unknown unknown”.

Once both the gravitational force and the tools for calculating it were discovered, the patterns of celestial bodies suddenly made more sense. As mentioned above, the history of science is full of such examples of “ex-post discoveries” of underlying inputs that retroactively explained empirically accurate (but theretofore mysterious) mathematical models of natural patterns. What’s important to note here is that the explanatory variables for these patterns were most often (1) relatively few in number and (2) not considered to be exhaustive or capable of making the mathematical model perfect. In the case of the movements of celestial bodies, we know that the basic gravitational calculation of masses and distances won’t be enough to predict a planet’s movement with 100% accuracy, since there are relativistic and other effects at play as well (including even changes in mass due to atmospheric dissipation, etc.). But what’s important is that the errors are small—in other words, our knowledge of the Input Existence and Input Degree for celestial motion is robust enough to facilitate launching probes to other planets.

To quickly recapitulate: (1) an apparent pattern in nature is observed, (2) a mathematical model is developed which robustly expresses the pattern and has predictive power, (3) the mysterious underlying causal inputs of the natural phenomenon are finally discovered/identified, (4) a means of properly calculating the values of the underlying inputs is finally discovered/identified, and (5) the pattern is now robustly (though perhaps not 100%) understood.

This brings us to the fundamental error of Elliott Wave theory. Looking at a stock index chart, we see an apparent series of loosely identifiable patterns, some of which seem to repeat sporadically. We then ask ourselves if it is possible that these patterns can be modeled mathematically in the same way that other natural phenomena have been. For this to be true, we would have to first (1) identify the main underlying inputs to the model, and (2) assign such inputs their respective degrees of value. Even though we might miss a few inputs, our model should still be robust enough if we catch the main ones.

So, is it possible? The answer is no.

We can state the reason as follows: Stock prices do not move as a result of unknown unknowns that have yet to be BOTH identified and THEN valued. We’re not looking for something mysterious and new; we in fact ALREADY KNOW the handful of main Input Identities that robustly explain stock price movements. We also already know that these inputs (often correlated) are themselves dependent on countless other, secondary inputs that we cannot accurately value, even if we occasionally can estimate some of them correctly. And here is our theoretical lynchpin: we ALREADY KNOW, a priori, that no single theory or mathematical model can accurately predict the values (Input Degrees) of certain of these secondary inputs.

Or more verbosely and accurately, we already know that no single theory or mathematical model can accurately predict the values of the members in any arbitrary subset of these secondary inputs, as long as such arbitrary subset contains only members that are themselves substantially independent of each other. For the purpose of our analysis, we will focus on subsets of variables consisting of only two members (i.e. pairs of secondary inputs).

To clarify the above more concretely, let’s start by looking at an extremely important underlying “main input” that should enter into any mathematical model purported to predict the price movement of the S&P 500: next year’s earnings. Massively important as this input is, its exact value alone would not allow you to predict the price direction of the stock market, although knowing it would give you a major advantage and probably make any model much more accurate. If you were to couple this input with precise knowledge of the U.S. dollar’s value relative to other currencies at the end of next year, your model would become even more robust. There are probably just a dozen or fewer such “most major” inputs that, when put together, would produce a mathematical model that was strikingly predictive if the actual (quantitative or ordinal qualitative) values of such inputs were known with certainty. The fact that many of these inputs (economic, behavioral, social, etc.) are correlated to various degrees does not change matters. Nor does it matter that these dozen inputs could be “swapped” with a dozen other similar inputs to yield a similar result (i.e. tell me next year’s Nasdaq earnings instead, and my S&P stock price model will still work robustly, due to correlation). As stated above, what’s important is that these dozen (or so) interchangeable “main inputs”, or “input categories”, are themselves functions of countless (effectively billions of) other, secondary inputs that no one mathematical model (including EWT) can predict.
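To make this structure concrete, here is a minimal, purely illustrative sketch in Python (every name and number below is invented for illustration, not real data): a synthetic “index” is driven by a dozen main inputs, each of which is itself an aggregate of many secondary inputs. If the main inputs’ true values were somehow known, even a simple model of the index would indeed be strikingly predictive:

    import numpy as np

    rng = np.random.default_rng(0)
    n_years, n_main, n_secondary = 200, 12, 1000

    # Each "main input" is a weighted aggregate of many secondary inputs
    # (all synthetic, for illustration only).
    secondary = rng.normal(size=(n_years, n_secondary))
    mixing = rng.normal(size=(n_secondary, n_main)) / np.sqrt(n_secondary)
    main_inputs = secondary @ mixing

    # The index is a function of the main inputs plus a small residual.
    weights = rng.normal(size=n_main)
    index = main_inputs @ weights + 0.1 * rng.normal(size=n_years)

    # Knowing the main inputs' true values makes a simple model strikingly
    # predictive: the unexplained variance is tiny.
    coef, *_ = np.linalg.lstsq(main_inputs, index, rcond=None)
    resid = index - main_inputs @ coef
    print(1 - resid.var() / index.var())   # R-squared close to 1.0

The difficulty, of course, is that in reality the main inputs’ values are exactly what we cannot know in advance, because they depend on the countless secondary inputs.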

But how can we claim that we know this a priori? First, we can show (in practice, not just in theory) that within the set of billions of secondary inputs, there are countless pairs of any two such inputs that cannot be expressed as predictive functions of each other no matter how hard we try. Usually, basic common sense alone will identify such pairs of inputs (for example, “average summer temperature in North America” and “probability that the Chinese minister of finance unexpectedly dies” are two inputs that have minuscule mathematical interdependence, while both influence the S&P 500 to at least some very small degree). By definition, if the two secondary inputs making up any particular pair showed a strong enough degree of correlation (or indeed any strong enough mathematical relationship), then either one of the inputs would become redundant, because it would not add much explanatory power to a mathematical model that already contains the other input in the pair. But we find instead (in practice) that when we do properly estimate secondary inputs individually, our overall predictions meaningfully improve in increments.

In addition, we also know that the values of our countless “non-interdependent secondary inputs” are not totally random. If they were totally random, then we would have to treat them as errors (or potentially disregard them) within our main mathematical model. Note that I use the word “totally” because all of these inputs reflect at least some degree of effective randomness (even prior to being deconstructed down to the physical level of quanta, if that were even possible).

Now—how do we know that these secondary inputs are not totally random? Because IT IS POSSIBLE to sometimes predict, very accurately, the values of some of these inputs using direct, observational methods—i.e. imperfect but reliable fundamental analysis. For example, a diligent analyst rolls up his sleeves, pores over tons of data, and then accurately (or at least robustly) predicts next year’s corn harvest. At the same time, he also correctly predicts some changes to certain of next year’s tax rates, after reading 21 articles on the subject. He looks at the data historically and finds that the corn harvests and the tax rates show no meaningful correlation or other mathematical relationship to each other.

He plugs both of these values into separate areas of a larger mathematical model that he uses to predict agricultural sector profits for next year. He then plugs this figure into an even bigger model and uses it to predict the value of the S&P 500 slightly more accurately than he would have otherwise.
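As a toy illustration of the analyst’s situation (all numbers invented, with “corn” and “tax” standing in for his two researched inputs): two mutually independent secondary inputs can each add incremental explanatory power to a model of the index, exactly as described above.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    corn = rng.normal(size=n)   # individually predictable by field research
    tax = rng.normal(size=n)    # individually predictable by reading legislation
    index = 0.3 * corn + 0.2 * tax + rng.normal(size=n)   # toy market

    # The two inputs show no meaningful relationship to each other...
    print(np.corrcoef(corn, tax)[0, 1])   # close to zero

    def r_squared(X, y):
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return 1 - ((y - X @ coef) ** 2).sum() / ((y - y.mean()) ** 2).sum()

    # ...yet each correctly estimated input improves the overall prediction
    # in modest but real increments.
    print(r_squared(corn[:, None], index))                  # corn alone
    print(r_squared(np.column_stack([corn, tax]), index))   # corn + tax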

Note that our analyst has arrived at a value for each input “deterministically” from even more basic constituent data, and in total mathematical isolation from the other input in the pair. Now, for Elliott Wave theory to make any sense, one would have to take the absurd position that these two inputs are in fact mathematically related at an even higher functional level (already known to the EWT practitioner) than what the analyst understands. Perhaps you might refer to it as holistic or biblical? With all due respect, I believe that it would be the Elliott Wave theoretician who should have the responsibility of proving this “already known” mathematical interdependence to the analyst, and not the other way around. It would need to be proven for each and every pair (or at least a heck of a lot of them) for the main mathematical model to be valid.

What our analyst has done is use two secondary inputs, each of which somehow fulfills the seemingly “miraculous” criteria of (1) being individually predictable by fundamental analysis of more basic data, (2) not mathematically interdependent with its counterpart, and (3) partially explanatory for the future value of the market. There is nothing odd about this unless you look at the problem with deterministic preconceptions. Now—the fact that ALL THREE of these criteria have been fulfilled proves that the two inputs cannot both be variables within any deterministic mathematical function whose final output value (the market) we already know. If they were, you would be able to use such a mathematical function in reverse to robustly predict either of the inputs themselves in terms of the other. Try using Elliott Waves to predict next year’s tax rates in terms of corn harvests, and you’ll see what I mean. At the same time, nobody could argue that tax rates have absolutely no influence on the S&P 500. You could repeat this argument using thousands of other examples. Every time, you would find that no “global” mathematical function (such as Elliott Wave theory) would be able to robustly predict your independent variables in reverse. By their absolute (rather than just marginal) nature, these thousands of predictive errors do not “cancel out” each other in the aggregate to yield a convergent result. Elliott Wave theory just can’t work, by definition.
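Here is the “in reverse” test as a sketch, again with invented numbers. If the market really were a deterministic function of the pair, then knowing the market and one input would pin down the other; in practice the retro-prediction is mostly noise:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 500
    corn = rng.normal(size=n)   # invented data for illustration
    tax = rng.normal(size=n)
    index = 0.3 * corn + 0.2 * tax + rng.normal(size=n)   # toy market

    # Pretend the relation is deterministic and solve it for the tax input.
    implied_tax = (index - 0.3 * corn) / 0.2

    # The reverse prediction barely resembles the true input: the unmodeled
    # residual swamps the signal.
    print(np.corrcoef(implied_tax, tax)[0, 1])   # weak correlation (~0.2)

The weak correlation is not a defect of the toy model; it is the signature of an input pair that no global function can bind together.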

At this point in the argument, an Elliott Wave adherent may cry foul. How could you use Elliott Waves “in reverse” to predict the value of just one variable in a massive function, when you don’t know all of the other variables (and their relative influences) within the function as well? My answer to this question is that you can’t have it both ways. If you accept the idea that fundamental analysis works at all (which I do), then you also accept the idea that sufficiently robust (though hardly perfect) mathematical models of stock index movements can be constructed using values determined by rigorous, research-based estimates of future inputs (otherwise there would be no point to fundamental analysis). Now take your pick of these fundamental models. One by one, strip away the rigorously, independently determined value of each variable and replace it with the value (of that same specific variable) implied by plugging the Elliott Wave prediction for the final output (i.e. the future value of the stock market) into the model while keeping the other variables the same. Soon, the model will begin to spit out nonsensical retro-predictions of crop harvests, tax rates, Alt-A mortgage defaults, or any other variable you care to name that might legitimately contribute to stock market movements. I bring this up because many Elliott Wave adherents claim to jointly use fundamental analysis, as if the two methods were complementary. Ironically, they are in conflict. According to the logic behind the above refutation of Elliott Wave theory, every input variable correctly determined by fundamental analysis constitutes yet another reason why the remaining, undetermined input variables are not part of a deterministic mathematical function.
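To see what this stripping procedure looks like, here is a deliberately crude sketch (every coefficient and figure is hypothetical): posit a fundamental model, hold the researched variables fixed, plug in a wave-count target for the index, and back out the value that the target implies for the remaining variable.

    # A toy fundamental model of the index (hypothetical coefficients):
    #   index = 800 + 2.0*earnings + 0.5*corn_harvest + 1.0*tax_rate
    earnings, corn_harvest, tax_rate = 210.0, 14.2, 0.21   # researched values

    def model(e, c, t):
        return 800.0 + 2.0 * e + 0.5 * c + 1.0 * t

    print(model(earnings, corn_harvest, tax_rate))   # the model's own forecast

    ew_target = 1300.0   # a hypothetical Elliott Wave target for the index

    # Hold earnings and tax at their researched values; solve for the corn
    # harvest that the wave-count target implies.
    implied_corn = (ew_target - 800.0 - 2.0 * earnings - 1.0 * tax_rate) / 0.5
    print(implied_corn)   # ~159.6 vs. the researched 14.2: a nonsensical harvest

Even a modest disagreement at the level of the index translates into a wildly implausible retro-prediction for the input, which is the conflict between the two methods in miniature.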

As you can see, the relationships of underlying inputs to stock values are totally different from the relationships of underlying causes to natural patterns that intrigued R.N. Elliott. When 19th century chemists could not understand why the elements of the periodic table displayed predictable, recurring patterns, it was the concept of electrons/orbitals that finally explained it all (just as gravity did for astronomers). For stock traders, there is no analogy—no all-encompassing force, particle, or wave that will one day be identified, valued and then plugged into an equation to predict the future. Rather, there are dozens of major factors that have ALREADY BEEN IDENTIFIED, yet whose values are known to be unpredictable in the aggregate—even if we can occasionally predict some of them individually by rigorous analysis of their constituent elements.

Proponents of Elliott Wave methods are likely to contend that my theoretical arguments are not relevant if the EW theory yields practical trading results. I will not dwell on the many arguments against the notion that an individual trader’s success necessarily proves the perpetual validity of the underlying method that he/she uses. But I will point out that it’s impossible to either prove or disprove the Elliott Wave theory’s validity by reference to its practical results. For this praxeological reason, a purely theoretical analysis is obligatory.

Finally, I should point out that, in practice, most Elliott Wave practitioners admit that their system is only meant to be a “general guide” with many possible outcomes (i.e. additional randomness and variability is introduced). That is, there is an “ideal” wave pattern according to the theory, but in practice there can be deviations from this pattern. Nothing about this admission refutes the above arguments, nor does it strengthen the underlying predictive power of Elliott Waves.

-Orvin Five