
Dear Wolfgang Denk,

> In message <4CC62B6C.30601@emk-elektronik.de> you wrote:
>> In such cases I prefer to use:
>>
>>     uint64_t etime;
>>     ...
>>     etime = get_ticks() + get_tbclk(); /* 1 second */
>>     do {
>>         whatever;
>>         udelay(xx);
>>     } while (condition && get_ticks() <= etime);
>>
>> That is far more accurate than calling udelay() 100000 times.
> It may be more accurate, but it may also be HORRIBLY WRONG!!
>
> Do NOT do that!! NEVER implement such a delay loop as
>
>     end = time() + delay;
>     while (time() < end)
>         ...
>
> It fails in case the timer wraps around.
>
> Assume 32 bit counters, a start time of 0xFFFFFFF0, and a delay of
> 0x20. The code computes end = 0x10, the while condition is
> immediately false, and you get no delay at all, which most probably
> triggers a false error condition.
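
To illustrate, here is a minimal stand-alone sketch of that failure
(hypothetical values in a plain C program, not U-Boot code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t now   = 0xFFFFFFF0;  /* current 32-bit counter reading */
        uint32_t delay = 0x20;        /* desired delay in ticks */
        uint32_t end   = now + delay; /* overflows: end = 0x00000010 */

        /* The broken pattern's loop condition, tested once: */
        if (now < end)
            printf("would wait as intended\n");
        else
            printf("end = 0x%08X already \"passed\" - no delay at all\n",
                   (unsigned)end);
        return 0;
    }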
I used and assumed a 64 bit counter, which will not wrap around while
our civilization still exists...

If get_ticks() is only 32 bits' worth, both methods will misbehave at
a 32-bit wrap-around.
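
A sketch of that failure mode, assuming (hypothetically) that the
hardware counter is only 32 bits wide and get_ticks() returns it
zero-extended in a uint64_t:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t start = 0x00000000FFFFFFF0ULL; /* just before the wrap */
        uint64_t now   = 0x0000000000000010ULL; /* counter wrapped to 0x10 */

        /* 64-bit subtraction does not wrap at 2^32, so instead of the
         * expected 0x20 ticks we get a huge bogus elapsed time: */
        printf("elapsed = 0x%016llX\n",
               (unsigned long long)(now - start)); /* 0xFFFFFFFF00000020 */
        return 0;
    }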
> A correct implementation of such a timeout should always look like
> this:
>
>     start = time();
>     while ((time() - start) < delay)
>         ...
>
> This works much better (assuming unsigned arithmetic).
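
For illustration, a minimal sketch of why the subtraction form
survives a wrap (hypothetical values, shown in 32 bits for brevity):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t start = 0xFFFFFFF0;
        uint32_t now   = 0x00000010; /* counter has wrapped meanwhile */
        uint32_t delay = 0x20;

        /* Unsigned subtraction wraps modulo 2^32, so the elapsed time
         * comes out right even across the wrap: */
        uint32_t elapsed = now - start; /* = 0x20 */

        printf("elapsed = 0x%08X, timed out: %s\n",
               (unsigned)elapsed, (elapsed < delay) ? "no" : "yes");
        return 0;
    }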
True, provided the underlying timer is really 64 bits; otherwise this
fails, too...

It would be best to assign get_ticks() to a 32-bit unsigned variable
and to use 32-bit variables for start and delay as well.
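
A sketch of what I mean, reusing the loop from above (whatever(),
condition, and xx are placeholders carried over from the earlier
example, not real U-Boot symbols):

    #include <stdint.h>

    extern uint64_t get_ticks(void);
    extern uint64_t get_tbclk(void);
    extern unsigned long xx;
    extern int condition;
    extern void whatever(void);
    extern void udelay(unsigned long usec);

    /* hypothetical helper, not an existing U-Boot function */
    void wait_with_timeout(void)
    {
        /* Truncating to 32 bits makes the subtraction below wrap
         * modulo 2^32 whether the underlying counter is 32 or 64
         * bits wide: */
        uint32_t start = (uint32_t)get_ticks();
        uint32_t delay = (uint32_t)get_tbclk(); /* 1 second */

        do {
            whatever();
            udelay(xx);
        } while (condition && ((uint32_t)get_ticks() - start) < delay);
    }

This stays correct across a 32-bit wrap as long as the timeout itself
is shorter than 2^32 ticks.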
The original udelay() implementation in AT91 would have failed at a
32-bit wrap-around, too (fixed in mainline). I hope the other
implementations are done right as well...
Best regards,
Reinhard