
On Thu, May 26, 2011 at 3:44 PM, Graeme Russ <graeme.russ@gmail.com> wrote:
On Fri, May 27, 2011 at 3:28 AM, Wolfgang Denk <wd@denx.de> wrote:
Dear Simon Glass,
In message <BANLkTikWwuymrJtMEHBZkvNgNBK1e=RdWA@mail.gmail.com> you wrote:
Can we have a microsecond one also, please? Some sort of microsecond timer would be useful.
I guess you cannot, at least not in general. In the worst case that would mean we have to process 1e6 interrupts per second, which leaves little time for anything useful.
Sorry Wolfgang, I don't really understand this. We would only process when we read it, and then hopefully with only a simple multiply or shift once compiler optimizations kick in. Probably I am just missing what you are saying.
If we implemented a sync_us_timer(), we could either:
a) Never kick it using an interrupt at all (only kick it in udelay())
b) Kick it from a much slower interrupt (1 ms+ period)
Remember, the kicking of the sync function does not need to correlate to the incrementing of the tick counter - only to the roll-over period of the tick counter.
For a 64-bit sub-microsecond tick counter, interrupts will probably never be needed (unless the tick frequency is ludicrous - even a nanosecond tick counter takes over 213,000 days, roughly 584 years, to wrap), so in this case sync_us_timer() would be fine.
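To make the roll-over handling concrete, here is a minimal sketch (illustrative names, not the actual U-Boot API; the hardware register is modeled as a plain variable) of a sync function that extends a free-running 32-bit counter to 64 bits. Unsigned subtraction makes it correct provided it is called at least once per hardware wrap period - which is exactly why the kick only needs to track the roll-over period, not every tick:

```c
#include <stdint.h>

/* Hypothetical free-running 32-bit hardware tick register (assumption:
 * counts up and wraps at 2^32).  Modeled as a variable for illustration. */
static uint32_t hw_tick;

static uint64_t tick64;    /* extended 64-bit tick count */
static uint32_t last_tick; /* hardware value seen at the last sync */

/* Fold the 32-bit hardware counter into a 64-bit count.  Correct as
 * long as this runs at least once per wrap period of hw_tick. */
uint64_t sync_us_timer(void)
{
	uint32_t now = hw_tick;

	/* unsigned 32-bit subtraction absorbs a single wrap correctly */
	tick64 += (uint32_t)(now - last_tick);
	last_tick = now;
	return tick64;
}
```

Calling this from udelay() (option a) or from a slow periodic interrupt (option b) both satisfy the once-per-wrap requirement.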
Hi Graeme,
Well yes, I feel that you either worry about rollover or you use 64-bits. Since you are using 64-bits it shouldn't matter.
I hope we can avoid integer division in the microsecond case. Someone stated that time delays are the main use for the timer, but some of us have performance-monitoring plans.
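One way to avoid per-read division (a sketch borrowing the mult/shift idea used by Linux clocksources; the names and the shift width are made up for illustration) is to pay a single division at init time and then convert ticks to microseconds with just a multiply and a shift:

```c
#include <stdint.h>

/* Illustrative names, not actual U-Boot API. */
static uint32_t us_mult;
#define US_SHIFT 24

/* One division here, at init time only. */
void timer_init_scale(uint32_t tick_hz)
{
	/* us_mult = 1e6 * 2^US_SHIFT / tick_hz */
	us_mult = (uint32_t)((1000000ULL << US_SHIFT) / tick_hz);
}

/* Per-read cost: one 64-bit multiply and one shift, no division.
 * Note: the intermediate product can overflow 64 bits for very large
 * tick counts; a real implementation would split the multiply. */
uint64_t ticks_to_us(uint64_t ticks)
{
	return (ticks * us_mult) >> US_SHIFT;
}
```

For tick rates that divide 1 MHz (or vice versa) the conversion is exact; otherwise there is a small truncation error bounded by the shift width.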
Re the atomicity of handling 64-bit numbers, how about just disabling/enabling interrupts around the access? I think 64 bits is overkill, but at least it is simple, and I prefer a u64 to a struct { u32 lo, hi; }.
Regards, Simon
Regards,
Graeme