> So...How much will this do at forecasting stock prices? =)
Probably quite poorly (due to stocks appearing "random" at scale), especially for indexes, which are a sum of their parts.
On the other hand, this would probably be quite useful for things that have non-random trends (like the Global Energy Forecasting Competition: http://www.drhongtao.com/gefcom)
It would probably perform pretty poorly, as others have suggested. This is mainly because stock prices by themselves are a pretty non-stationary dataset. Most of these probabilistic models are poorly equipped to make accurate predictions for non-stationary data, since the trends are hard to distinguish from noise.
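To make "non-stationary" concrete, here's a minimal sketch using statsmodels' augmented Dickey-Fuller test on a simulated random-walk price path (a common toy model for log prices; no real market data is bundled here, so the series is synthetic): the price level fails the stationarity test while its differences (returns) pass.

```python
# Sketch: a random walk (toy model for log prices) is non-stationary,
# but its first differences (returns) are stationary.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
log_prices = np.cumsum(rng.normal(0.0005, 0.01, size=2000))  # simulated log-price path
returns = np.diff(log_prices)                                # simulated daily log returns

for name, series in [("log prices", log_prices), ("returns", returns)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF p-value = {pvalue:.3f}")

# Typical output: log prices fail to reject non-stationarity (p >> 0.05),
# while returns reject it easily (p ~ 0). Models that assume a stable
# distribution fare much better on the differenced series.
```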
Faced with phenomena I view as self-affine, other students take an extremely different tack. Most economists, scientists and engineers from diverse fields begin by subdividing time into alternating periods of quiescence and activity. Examples are provided by the following contrasts: between turbulent flow and its laminar inserts, between error-prone periods in communication and error-free periods, and between periods of orderly and agitated ("quiet" and "turbulent") Stock Market activity. Such subdivisions must be natural to human thinking, since they are widely accepted with no obvious mutual consultation. Rene Descartes endorsed them by recommending that every difficulty be decomposed into parts to be handled separately. Such subdivisions were very successful in the past, but this does not guarantee their continuing success. Past investigations only tackled variability and randomness that are mild, hence, local. In every field where variability/randomness is wild, my view is that such subdivisions are powerless. They can only hide the important facts, and cannot provide understanding. My alternative is to move to the above-mentioned apparatus centered on scaling.
-Mandelbrot, in the foreword to Multifractals and 1/f Noise.
It's worth noting that Mandelbrot was apparently a large influence on E. Fama, who proposed the efficient market hypothesis in the first place.
Buying the S&P 500 in 1950 and holding 67 years does.
One sample tells you nothing about randomness. What if you buy in August 1929? What if you hold for a more realistic 20 or 30 years from peak earning years to retirement?
Annual Total Return: 9.1%
Annual Real Total Return: 5.9%
Bought in January 1987, held for a realistic 30 years:
Annual Total Return: 9.8%
Annual Real Total Return: 7.0%
There's always going to be some deviation, but over any given multi-decade holding period, you will generally end up with a predictable 5-9% annualized (inflation-adjusted) return. That is more than zero. My point stands: long-term investment in the S&P 500 can be reasonably expected to gain value faster than inflation.
If you're interested, here's a simulator that looks at historic market data. You'll note that even the lowest possible percentile of 30-year holding periods will still yield a 3.43% inflation-adjusted total return: https://dqydj.com/sp-500-historical-return-calculator-popout...
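For the curious, here's a rough sketch of the computation behind that kind of calculator. It assumes you already have a monthly inflation-adjusted total-return index as a NumPy array (the `real_tr` name and the helper below are mine, not the calculator's; such a series can be built from Robert Shiller's public S&P data):

```python
# Sketch: annualized real return of every possible 30-year holding period,
# given `real_tr`, a monthly inflation-adjusted total-return index.
import numpy as np

def rolling_cagr(real_tr: np.ndarray, years: int = 30) -> np.ndarray:
    """Annualized real return of every `years`-long window in a monthly index."""
    months = years * 12
    start = real_tr[:-months]   # index level at the start of each window
    end = real_tr[months:]      # index level `years` later
    return (end / start) ** (1.0 / years) - 1.0

# cagrs = rolling_cagr(real_tr)
# print(f"worst: {cagrs.min():.2%}, median: {np.median(cagrs):.2%}, best: {cagrs.max():.2%}")
```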
Let's buy in August 1929 at 5338.69, and sell 20 years later, in August 1949, at 1822.87 (inflation-adjusted). Congratulations, you lost two thirds of your money.
Sell 30 years later instead? August 1959, at 5525.23. Wow, after 30 years you're up almost 3.5%!
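Sanity-checking that arithmetic with the quoted (inflation-adjusted) levels:

```python
# Checking the figures above: buy at the August 1929 peak, sell 20 or 30 years later.
buy = 5338.69        # August 1929
sell_20y = 1822.87   # August 1949
sell_30y = 5525.23   # August 1959

print(f"20-year total real return:  {sell_20y / buy - 1:.1%}")              # ~ -65.9%
print(f"30-year total real return:  {sell_30y / buy - 1:.1%}")              # ~ +3.5%
print(f"30-year annualized:         {(sell_30y / buy) ** (1/30) - 1:.2%}")  # ~ +0.11%/yr
```

Note the 3.5% is the total over 30 years, which works out to roughly 0.11% annualized, barely above zero.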
> the stock market is a random walk with a meager trend upwards that doesn't beat inflation + trading costs.
That assumes the efficient-market hypothesis holds, but it has yet to be thoroughly proven or disproven... (and funds like Medallion would strongly suggest otherwise for the medium term: https://www.bloomberg.com/news/articles/2016-11-21/how-renai...)
It doesn't assume the efficient-market hypothesis: empirical studies of returns support random returns without imposing a model (non-parametric tests).
That's not to say returns are actually random, but over any given time range, they appear to be.
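One concrete example of the kind of non-parametric test meant here is the Wald-Wolfowitz runs test on the signs of daily returns, which makes no distributional assumptions. A sketch (the `runs_test` helper and its `returns` input are mine for illustration; the z-statistic uses the standard runs-test normal approximation):

```python
# Sketch: Wald-Wolfowitz runs test. Under the random-ordering null, up/down
# days are exchangeable, so the number of sign "runs" has a known distribution.
import numpy as np

def runs_test(returns: np.ndarray) -> float:
    """z-statistic for the runs test on return signs (split at the median)."""
    signs = returns > np.median(returns)
    n1 = int(signs.sum())
    n2 = len(signs) - n1
    runs = 1 + int(np.sum(signs[1:] != signs[:-1]))   # 1 + number of sign changes
    mean = 2.0 * n1 * n2 / (n1 + n2) + 1              # expected runs under the null
    var = (mean - 1) * (mean - 2) / (n1 + n2 - 1)     # variance under the null
    return (runs - mean) / np.sqrt(var)

# |z| > 1.96 would reject "random ordering" at the 5% level; on real index
# returns the statistic is usually unremarkable, which is the point above.
```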
Very cool though --- I would be interested to dive into the methods they've implemented sometime in the near future!