
Dear Reinhard Meyer,
In message <4D3C9BFC.1010907@emk-elektronik.de> you wrote:
>> get_timer() returns a monotonically increasing time stamp with a
>> resolution of milliseconds.  After reaching ULONG_MAX the timer
>> wraps around to 0.
>
> Exactly that wrap makes the situation so complicated, since the simple
>
> 	u32 get_timer(void) { return (ticks * 1000ULL) / tickspersec; }
>
> won't do that wrap.
Do you have a better suggestion?
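
For the sake of discussion, here is a rough sketch of the kind of
implementation I mean (only a sketch: it assumes a 64-bit tick count
maintained in software, here called get_ticks(), and a tick rate
variable tickspersec).  The point is that truncating the 64-bit
millisecond value to 32 bits gives exactly the wrap at ULONG_MAX:

	extern unsigned long long get_ticks(void); /* 64-bit tick count, kept in software */
	extern u32 tickspersec;                    /* tick rate; a variable on AT91 */

	u32 get_timer(void)
	{
		/*
		 * Scale in 64 bits; the truncation of the result to
		 * 32 bits is exactly the required wrap at ULONG_MAX.
		 * The 64-bit intermediate overflows only after an
		 * extremely long uptime.
		 */
		return (u32)((get_ticks() * 1000ULL) / tickspersec);
	}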
>> The get_timer() implementation may be interrupt based and is only
>> available after relocation.
>
> Currently it is used before relocation in some places; I think I have
> seen it in NAND drivers...  That would have to be changed then.
Indeed. It is unreliable or even broken now.
> This is already implemented in AT91 in a functionally very similar
> way (apart from the factoring and the get_timer(void) change); the
> only (academic) hitch is that it will burp a few billion years after
> each reset :)
>
> What bothers me is the need for a 64-bit mul/div in each loop
> iteration; on CPUs without hardware support for that, this might slow
> down data transfer loops of the style
>
> 	u32 start_time = get_timer();
> 	do {
> 		if ("data_ready")
> 			/* transfer a byte */
> 		if (get_timer() - start_time > timeout)
> 			/* fail and exit loop */
> 	} while (--"bytestodo" > 0);
>
> since get_timer() will look something like
>
> 	return (tick * 1000ULL) / tickspersec;
>
> As I stated before, tickspersec is a variable on AT91, for example,
> so the expression cannot be optimized by the compiler.
I don't think this is the only way to implement this. How does Linux derive time info from jiffies?
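
As far as I can tell, the non-trivial cases of jiffies_to_msecs() in
Linux avoid the division by multiplying with a precomputed, scaled
reciprocal and shifting.  Something similar should be possible here
even with tickspersec being a runtime variable.  A rough sketch
(timer_init_scale and ms_mul are invented names; it assumes the 64-bit
tick count from above and a tick rate above 1000 Hz):

	extern unsigned long long get_ticks(void); /* 64-bit tick count, as above */

	static u32 ms_mul;	/* ~ (1000 << 32) / tickspersec, computed once */

	/* call once at timer init, when tickspersec is known */
	void timer_init_scale(u32 tickspersec)
	{
		/* the only 64-bit division, done once instead of per call */
		ms_mul = (u32)((1000ULL << 32) / tickspersec);
	}

	u32 get_timer(void)
	{
		u64 ticks = get_ticks();
		u32 hi = (u32)(ticks >> 32);
		u32 lo = (u32)ticks;

		/*
		 * (ticks * ms_mul) >> 32, computed without a 64x64
		 * multiply or any division: the high word contributes
		 * hi * ms_mul directly, the low word contributes the
		 * upper half of lo * ms_mul.  Truncation to u32 keeps
		 * the wrap at ULONG_MAX.
		 */
		return hi * ms_mul + (u32)(((u64)lo * ms_mul) >> 32);
	}

A 32x32->64 multiply is a single instruction even on CPUs without a
hardware divider (umull on ARM), so the per-iteration cost in loops
like the one above stays small.  The truncated reciprocal makes the
result drift slightly from the exact division over long uptimes, but
for relative timeouts in the millisecond range that should not matter.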
Best regards,
Wolfgang Denk