
On Thu, Apr 23, 2009 at 12:18:00AM +0200, Wolfgang Denk wrote:
> Dear Jean-Christophe PLAGNIOL-VILLARD,
>
> In message <20090422212816.GA18705@game.jcrosoft.org> you wrote:
> > > Who needs this, and why and when, and why didn't we need it in the past?
> > a lot of the actual timers are not correct, and we have problems with
> > network timeouts, for example.
>
> Hm... how much precision do we actually need?
Well, I already complained about all such testing on IRC yesterday, so I'm
not going to repeat myself... And if I understood Jean-Christophe correctly,
he cares about "real world" verification that the timer code is written the
right way.
> > So we need to know the precision of the timer to know the impact on all
> > the timer-dependent parts of U-Boot, such as timeouts or the bit-banging
> > stack. So when you have to respect some delay to init some chip or other,
> > you want to know the delay you will actually get. This will save you a
> > lot of pain during development.
> In my experience, no part of the code actually cares about the precision
> of the timers, especially not when implementing delay loops or timeouts
> using udelay(), which always includes static overhead. For example, the
> following two snippets of code are equivalent only in theory:
>
> 	for (i = 0; i < 100; ++i)
> 		udelay (10000);
>
> versus
>
> 	for (i = 0; i < 1000; ++i)
> 		for (j = 0; j < 1000; ++j)
> 			udelay (1);
>
> But - is this really a problem? I am not aware of any place in the code
> where a tolerance of +/- 10%, or maybe even more, would matter.
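
Indeed - and that static overhead is easy to see on real hardware. A quick
sketch of such a measurement (just an illustration, assuming get_timer()
counts milliseconds, as it does on most boards):

	/* Run 1e6 minimal delays and compare against the ideal 1000 ms;
	 * the surplus is roughly 1e6 times the per-call overhead. */
	int i;
	ulong start = get_timer(0);

	for (i = 0; i < 1000000; ++i)
		udelay(1);
	printf("1000000 x udelay(1) took %lu ms (ideal: 1000 ms)\n",
	       get_timer(start));

On a board where each udelay() call costs even a few microseconds of setup,
the second of the two loops above can run several times longer than the
first.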
Well, a more interesting case to test is:

	reset_timer();
	while (get_timer(0) < 100000)
		udelay(10000);

to prove that get_timer() has no bad interference with udelay(). The
proposed method also doesn't verify another corner case - timer
{under,over}flow.
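
For the overflow part, the usual wrap-safe idiom is to remember a base and
compare only elapsed time; a minimal sketch (TIMEOUT_MS and chip_ready()
are made-up names here, just to show the shape):

	/* get_timer(base) returns the time elapsed since base; with a
	 * free-running unsigned counter the subtraction stays correct
	 * across a wrap, so the loop cannot get stuck on overflow. */
	ulong start = get_timer(0);

	while (get_timer(start) < TIMEOUT_MS) {
		if (chip_ready())	/* hypothetical ready poll */
			break;
		udelay(100);
	}

A test that only reads absolute timer values never exercises this path,
which is why I'd like to see the wrap case covered explicitly.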
> Note: when you are implementing a bit-banging protocol that requires
> precise timings and run into problems, then this is not a problem with
> U-Boot timer accuracy, but with incorrect system design on your system.
Seconded, same point made on IRC.
Best regards,
	ladis