
Dear Simon Glass,
In message <BANLkTim-JZyMmzw_tWHhHQQYoVmK9mZyVw@mail.gmail.com> you wrote:
> Sure, if you are tracking the timer, and wait for it to increment, and then wait for it to increment a second time, you can be confident that the time between the first and second increments is 10ms.
OK. Good.
> But in general it is possible that your first read of the timer happens at 9.999ms after the timer last incremented, just because you are unlucky. Then perhaps two successive reads of the timer only 1us apart will see a difference of 10ms in time.
Agreed.
> I believe this resolution problem could/should be solved by a function which returns a time not less than n ms in the future. It would work by returning something like current_time_ms + (delay_ms + resolution_ms * 2 - 1) / resolution_ms * resolution_ms, where resolution_ms is 10 in this case. I think someone else mentioned that too.
> When the timer reaches that time then it is guaranteed that at least delay_ms ms has passed, although it might be up to delay_ms + 10.
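For illustration, such a function might be sketched like this; timer_get_future_ms() is a made-up name here, not an existing API, and get_timer() with its 1 ms unit is assumed as the time base:

    /*
     * Sketch only - hypothetical helper, not existing code:
     * return a get_timer() value that lies at least delay_ms
     * milliseconds in the future, for a timer that increments
     * in steps of resolution_ms milliseconds.
     */
    static u32 timer_get_future_ms(u32 delay_ms, u32 resolution_ms)
    {
        return get_timer(0) +
            (delay_ms + resolution_ms * 2 - 1) /
            resolution_ms * resolution_ms;
    }

A caller would then poll until get_timer(0) has reached the returned value.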
We (well, Detlev and I) discussed this. We think it is important to realize (and to document) that the timing information provided by get_timer() is inherently subject to the principles of interval arithmetic, with an implementation-dependent interval width.
So far, most (all?) of the code has ignored this property, or silently assumed that the interval width was equal to or better than 1 millisecond, which is the timing unit used by get_timer().
I think it is important to consider the (most important) use cases of get_timer(): 1) [longish] timeouts, and 2) other timing computations. For completeness, I also include a third situation here:
0) [short] timeouts:
Short timeouts are all timeouts shorter than one millisecond. It is obvious that get_timer() cannot be used for these; instead, we use udelay() based delay loops.
Note that this method also can be used for longer timeouts, especially when we really have to wait for some event to happen, i. e. when we cannot do "something useful" while waiting.
A typical implementation to wait for some event with a timeout of 5 seconds might look like this:
    int loops = 10;		/* timeout after 10 * 500 milliseconds */

    while (test_for_event() == 0) {
        if (--loops <= 0)
            handle_timeout();

        udelay(500000);		/* delay 500 milliseconds */
    }
Note that this implementation has the disadvantage that it may introduce an unnecessary delay of up to 500 milliseconds if the event happens shortly after the call to test_for_event(). It is therefore usually more efficient to use a large number of short loops instead - due to the loop overhead these are less accurate, but for timeouts this usually does not matter, and at least they insert shorter delays. Example:
    int loops = 50000;		/* timeout after 50,000 * 100 microseconds */

    while (test_for_event() == 0) {
        if (--loops <= 0)
            handle_timeout();

        udelay(100);
    }
Here we lose a maximum of 100 us in the delay call.
The inherent disadvantage of such delay based timeout loops is their low accuracy: depending on the loop overhead (for example, the time spent in the test_for_event() call) and on the size of the argument to the udelay() call, the total execution time of the loop is always bigger than the assumed duration, especially for short delays. Usually this is not a problem in such contexts; also, it is possible to trade accuracy for additional delays after the event happens - see the first code example above.
1) [longish] timeouts:
Timeouts on the order of several milliseconds or longer are frequently implemented using get_timer(). The above 5 second timeout could be implemented like this:
    u32 start, now;
    u32 timeout = 5 * CONFIG_SYS_HZ;	/* timeout after 5 seconds */

    start = get_timer(0);

    while (test_for_event() == 0) {
        now = get_timer(0);

        if ((now - start) > timeout)
            handle_timeout();

        udelay(100);
    }
Here the inaccuracy caused by the loop overhead (the short delay may result in a large number of calls to test_for_event()) is eliminated, because the time reference is independent of the delay.
2) Other timing computations:
These are usually used to measure the time needed to perform some specific actions, for example like this:
u32 start, now;
start = get_timer(0);
do_something_complicated();
now = get_timer(0);
printf("execution time: %ld millisec\n", now - start);
Neither 1) nor 2) takes into account that both the "start" and "now" time stamps depend on the resolution of the underlying, implementation-dependent timer. For long delays this is not critical, but for short delays (a few milliseconds) this introduces large errors, and it can even be fatal.
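To make this concrete with an assumed example: with a 10 ms interval width (as on NIOS2, see below) and a timeout of 5 ms, "start" may be read as 990 one microsecond before the counter steps to 1000, and "now" as 1000 one microsecond after the step. Then now - start = 10 > 5, and the loop from 1) signals a timeout although only two microseconds have actually passed.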
One solution to the problem could be to take the interval length into consideration. However, it seems unwise to make such a low-level, implementation-specific detail visible to normal "application" code.
Instead, we suggest introducing a new function delta_timer() which hides this implementation detail from the user. delta_timer() will compute the difference between two time stamps from get_timer(), and will return the number of milliseconds that are guaranteed to have passed AT LEAST between these two moments:
    /*
     * Everybody uses a 1 millisecond interval,
     * except NIOS2, which uses 10 milliseconds.
     */
    #ifdef CONFIG_NIOS2
    static inline u32 timer_interval_size(void) { return 10; }
    #else
    static inline u32 timer_interval_size(void) { return  1; }
    #endif
    u32 delta_timer(u32 from, u32 to)
    {
        u32 delta = to - from;

        if (delta < timer_interval_size())
            return 0;

        return delta - timer_interval_size();
    }
We could also design a more complicated API like this one, but I doubt this is needed:
    /*
     * round - used to control rounding:
     *   <0 : round down, return time that has passed AT LEAST
     *   =0 : don't round, provide raw time difference
     *   >0 : round up, return time that has passed AT MOST
     */
    u32 delta_timer(u32 from, u32 to, int round)
    {
        u32 delta = to - from;

        if (round == 0)		/* raw result, no rounding */
            return delta;

        if (round > 0)		/* round up */
            return delta + timer_interval_size();

        /* round down */
        if (delta < timer_interval_size())
            return 0;

        return delta - timer_interval_size();
    }
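With this variant, the measurement from case 2) could for example report a guaranteed upper bound by rounding up - a sketch based on the API above:

    u32 start, now;

    start = get_timer(0);
    do_something_complicated();
    now = get_timer(0);

    /* round > 0: print the time that has passed AT MOST */
    printf("execution time: at most %u millisec\n",
        delta_timer(start, now, 1));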
So our timeout from case 1) above would now be written like this:
    u32 start, now;
    u32 timeout = 5 * CONFIG_SYS_HZ;	/* timeout after 5 seconds */

    start = get_timer(0);

    while (test_for_event() == 0) {
        now = get_timer(0);

        if (delta_timer(start, now) > timeout)
            handle_timeout();

        udelay(100);
    }
and would be guaranteed never to terminate early, because delta_timer() returns only the time that is known for sure to have passed.
Comments?
Best regards,
Wolfgang Denk