
On Mon, May 30, 2011 at 3:57 AM, Wolfgang Denk wd@denx.de wrote:
Dear Simon Glass,
In message BANLkTim-JZyMmzw_tWHhHQQYoVmK9mZyVw@mail.gmail.com you wrote:
Sure if you are tracking the timer, and wait for it to increment, and then wait for it to increment a second time, you can be confident that the time between the first and second increments is 10ms.
OK. Good.
But in general it is possible that your first read of the timer happens at 9.999ms after the timer last incremented, just because you are unlucky. Then perhaps two successive reads of the timer only 1us apart will see a difference of 10ms in time.
Agreed.
I believe this resolution problem could/should be solved by a function which returns a time not less than n ms in the future. It would work by returning something like current_time_ms + (delay_ms + resolution_ms * 2 - 1) / resolution_ms * resolution_ms, where resolution_ms is 10 in this case. I think someone else mentioned that too.
When the timer reaches that time then it is guaranteed that at least delay_ms ms has passed, although it might be up to delay_ms + 10.
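(For concreteness, a minimal sketch of such a function, using the rounding expression above; the name future_time_ms() and the explicit resolution parameter are only illustrative:)

/*
 * Sketch only: return a get_timer() value that lies at least delay_ms
 * milliseconds in the future, given a timer that advances in steps of
 * resolution_ms.  The extra interval covers a first read that happens
 * just before the timer increments.
 */
static u32 future_time_ms(u32 delay_ms, u32 resolution_ms)
{
        return get_timer(0) +
                (delay_ms + resolution_ms * 2 - 1) / resolution_ms * resolution_ms;
}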
We (well, Detlev and I) discussed this. We think it is important to realize (and to document) that the timing information provided by get_timer() is inherently subject to the principles of interval arithmetic, with an implementation-dependent interval width.
So far, most (all?) of the code has ignored this property, or silently assumed that the interval width was equal to or better than 1 millisecond, which is the timing unit used by get_timer().
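As an illustration (a sketch only, not code taken from any particular driver), this is the kind of loop that silently makes that assumption:

u32 start = get_timer(0);

/*
 * Silently assumes 1 ms resolution: on a system with a 10 ms timer
 * interval the first read can land just before an increment, so the
 * apparent elapsed time may run almost 10 ms ahead of the real
 * elapsed time and the loop can give up nearly a full interval early.
 */
while (!test_for_event()) {
        if (get_timer(start) >= 100)    /* intended 100 ms timeout */
                return -1;              /* may trigger up to ~10 ms early */
        udelay(100);
}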
I think it is important to realize the (most important) use cases of get_timer(): 1) [longish] timeouts and 2) other timing computations. For completeness, I also include a third situation here: 0) [short] timeouts:
[lots of horrible things that I believe we all want to deprecate]
Instead, we suggest introducing a new function delta_timer() which hides this implementation detail from the user. delta_timer() will compute the difference between two time stamps from get_timer(), and will return the number of milliseconds that are guaranteed to have passed AT LEAST between these two moments:
/*
 * Everybody uses a 1 millisecond timer interval, except NIOS2,
 * which uses 10 milliseconds.
 */
#ifdef CONFIG_NIOS2
static inline u32 timer_interval_size(void) { return 10; }
#else
static inline u32 timer_interval_size(void) { return 1; }
#endif

u32 delta_timer(u32 from, u32 to)
{
        u32 delta = to - from;

        if (delta < timer_interval_size())
                return 0;

        return delta - timer_interval_size();
}
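For example, with a 10 ms interval a raw difference of to - from == 20 can correspond to a real elapsed time anywhere between just over 10 ms and just under 30 ms, so delta_timer() returns 10, i.e. the guaranteed lower bound.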
Hi Wolfgang,
I think this is a big step forward from what we might call the 'manually coded' loops.
We could also design a more complicated API like this one, but I doubt this is needed:
I doubt it too.
[snip]
So our timeout from case 1) above would now be written like this:
u32 start, now;
u32 timeout = 5 * CONFIG_SYS_HZ;        /* timeout after 5 seconds */

start = get_timer(0);

while (test_for_event() == 0) {
        now = get_timer(0);

        if (delta_timer(start, now) > timeout)
                handle_timeout();

        udelay(100);
}
and would be guaranteed never to terminate early.
Comments?
Great!
I do think it would be nice to put a time_ prefix before all the time functions, but this is a pretty minor point.
See my other message about computing a future time. But the less ad-hoc time calculation we can leave to callers of get_timer(), the better. I think these constructs are actually a sign of an API that is too low-level. There is a certain minimalist purity and simplicity to get_timer(), but the ugly constructs that people build on top of it need to be considered and brought into the equation too.
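For example (purely a sketch; time_wait_for() is an invented name, not an existing function), the whole timeout pattern could be wrapped so callers never do the arithmetic themselves:

/*
 * Sketch only: poll condition() until it returns non-zero, or until at
 * least timeout_ms milliseconds are guaranteed to have passed.
 * Returns 0 on success, -1 on timeout.  Builds on the proposed
 * delta_timer() above.
 */
static int time_wait_for(int (*condition)(void), u32 timeout_ms)
{
        u32 start = get_timer(0);

        while (!condition()) {
                if (delta_timer(start, get_timer(0)) > timeout_ms)
                        return -1;
                udelay(100);
        }
        return 0;
}

A call like time_wait_for(test_for_event, 5 * CONFIG_SYS_HZ) would then replace the open-coded loop above.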
Regards, Simon
Best regards,
Wolfgang Denk
--
DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
"Beware of bugs in the above code; I have only proved it correct,
not tried it." - Donald Knuth