We are told time and time again that you can’t predict the market, and time and time again people try to predict it anyway. Many model the market using stochastic principles, and then use that model to predict the market. I find this completely amusing, since stochastics is about multiple possible destinies arising from a single context.

TraderFeed, a favorite blog of mine, had the following to say (Are We Making a Bottom):

There is both the sense that we could go much lower in a washout (a “Black Monday” scenario) and that we could be seeing an important bottom in the making.

Fair enough, and a good point: we might be at an inflection point.


To give a bit of perspective on this one-sidedness, we’ve only had 75 other occasions since 1960 (!) in which 70% or more of the volume has been in declining stocks over a two-week interval. That is out of almost 12,000 trading days. Stated otherwise, the current market is in the top 1% of all market occasions since 1960 for bearish concentration of volume.

This is where I say, so what! That was then and this is now. In fact, if I thought this through completely, I would say, “stay out of the market…” Though his statistics seem to say the following:

If we look across all 75 instances, the market was up 41 times and down 34 after a five day period for an average gain of .87%. When we look three weeks out, however, the market was up only 36 times and down 39 times, for a subnormal gain of only .08%.

This statistic tells me that we are in a crapshoot and it could go either way. When you are up almost as many times as you are down, I get the feeling that whatever you do will be both right and wrong. You could play this market by creating a straddle, but with an average gain of only 0.87% I would be tempted to believe that you could not recover the option premium.
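To make that straddle objection concrete, here is a minimal sketch of the arithmetic, using hypothetical numbers (the spot level and premiums are made up; only the 0.87% average move comes from the TraderFeed statistics). A straddle buys both a call and a put, so the market has to move by more than the combined premium before you break even.

```python
# Hypothetical illustration: the average move is too small to cover
# the cost of a straddle (a call plus a put at the same strike).
spot = 100.0          # hypothetical index level
call_premium = 1.2    # hypothetical premiums, in index points
put_premium = 1.2

total_premium = call_premium + put_premium    # 2.4 index points
breakeven_move = total_premium / spot * 100   # % move needed either way

avg_move = 0.87  # average five-day gain across the 75 instances, in %

print(f"Breakeven move: {breakeven_move:.2f}%")
print(f"Average observed move: {avg_move:.2f}%")
print(f"Covers the premium on average: {avg_move > breakeven_move}")
```

With these numbers the market would need to move about 2.4% either way just to pay for the options, while the historical average move was under 1%.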

In the end TraderFeed says the following:

That tells me that the current weakness offers risk as well as reward for shorter timeframe traders, but also is a heads up for investors seeking value.

Which reminds me of a horoscope… worded in such a way that everybody sees something they want. Look, I am not harping on TraderFeed. In fact, I think TraderFeed is very diplomatically saying, “Beats the crap out of me what the market will do next.”

TraderFeed is not the only one trying to make predictions. Neural Market Trends referenced an article on Ugly, which referenced an article on New Scientist. You should read what each party has to say on the topic, and you will see that each has their own take on the same subject matter.

What I see is an attempt to use AI to make predictions or find patterns where, as I have written before, none exist. But wait, I want to prove to you that you can’t make predictions EVEN if you have 100% solid evidence of where to make trades.

Consider the following image:

This is my profit level for an average day of the algorithmic trading system doing its thing. On this day I happened to make a daily profit of 0.85%. I have days where I do much better, and days where I do much worse. As I wrote, this is an average day.

Look at the signals that indicate whether I should buy or sell.

When the signals are above, I buy; below, I sell. Now compare the profitability of my trading system and the signals. Notice a pattern? The pattern, and it is a solid visual pattern, is this: if the two signals diverge, stop trading! Whenever my profitability drops, notice how the signals diverge. It’s not like this only once or twice; whenever the signals diverge, STOP trading!

From a visual perspective it should be pretty easy to spot this pattern and stop trading, right? WRONG! I have tried many statistical analyses (e.g. t-test, F-test, etc.) and they prove nothing. My signals end up looking like the following images:
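For readers unfamiliar with these tests, here is the kind of thing that was tried, on hypothetical data: compare the gap between the two signals during profitable stretches against the gap during losing stretches, and ask whether the difference is statistically significant. The numbers below are made up for illustration.

```python
from scipy.stats import ttest_ind

# Hypothetical signal gaps, split by whether the system was making money.
gap_profitable = [0.10, 0.20, 0.15, 0.05, 0.12]
gap_losing = [0.30, 0.10, 0.40, 0.05, 0.25]

# Two-sample t-test: is the mean gap different between the two regimes?
stat, pvalue = ttest_ind(gap_profitable, gap_losing)
print(f"t = {stat:.3f}, p = {pvalue:.3f}")
```

On noisy data like this, the p-value comes nowhere near any useful significance level, which is what “they prove nothing” means: the eye sees the divergence, the test does not.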

All of the filters and statistics, when applied, result in me making quite a bit less money than simply taking my lumps. So why, if my pattern is 100% rock solid and apparent, can I not make my trading system more profitable?

The answer is that you cannot predict the market!

When I presented my graphs to a person who studied statistics, he had the following to say:

Statistics are great for knowing what has happened in the past. For example, you can ask, “How many kids fell down the stairs in the past year?” But they are horrible at predicting the future. Statistics cannot tell you what the problem with the stairs is, and they cannot predict how many kids will fall down the stairs. It could happen that you do nothing and fewer kids fall. Statistics will only tell you that something changed and that fewer kids are now falling down the stairs.

When I look at your graphs, I think that you can’t automate it and will have to rely on visual means.

What he was saying is that when you apply statistics to predict, the noise in the data gets in the way of figuring out what is relevant and what is not. The filtered signals are not wrong, but they are being influenced by data that does not interest you. I could filter out the noise, but then I would be doing exactly what you should not do, which is over-fitting the data to create the required signals. The problem is knowing what is relevant data and what is noise.

Does this mean I need to take my lumps? Yeah, it does…