
On 5/27/2011 8:13 AM, Simon Glass wrote:
On Fri, May 27, 2011 at 8:00 AM, J. William Campbell <jwilliamcampbell@comcast.net> wrote: [snip]
Hi All,

A more precise statement of the problem is that all timer delays may be shortened by up to the timer resolution. If you have a 1 ms timeout implemented as base = get_timer(0); do { ... } while (get_timer(base) < 1);, the actual delay may be anywhere between 0 and 1 ms. The problem arises when some piece of common code uses a delay of, say, 8 ms, expecting the actual delay to be between 7 and 8 ms. If the resolution is 10 ms, the delay will be between 0 and 10 ms, with 0 being particularly bad.

This can be fixed in get_timer(), making the 8 ms delay a minimum of 10 ms at the expense of it sometimes becoming up to 20 ms. Since these delays are used mostly for error conditions, making them longer will probably be OK, and it doesn't require changing any of the common code. It probably will not make things slower either, because the error timeouts should not normally be reached.

The reset of the hardware timer would cause all "short" delays to become 10 ms. This reset approach is bad in that it prevents proper nesting of timing loops. However, in this case it isn't so bad, in that the nested loops are just extended, not shortened. Note that if the reset only resets the HARDWARE interrupt generator, not the actual timestamp itself, we are just extending all existing timeouts by 0 to 10 ms. So this just lengthens all pending timeouts. The other fix is, in my opinion, nicer, because it affects nested loops less: if the inner loop is executed 100 times, the reset approach extends the outer loop's timeout by up to 1000 ms.
Best Regards, Bill Campbell
Hi Bill,
Yes, I agree that this is ugly - I didn't realize that this is what reset_timer() does. But I think these 10 ms platforms will have to live with the fact that timeouts will be 0-10 ms longer than hoped. Perhaps reset_timer() should become a non-standard, deprecated board-level function. Really, if you have a 10 ms timer and are asking for a 10 ms timeout, you are being a bit hopeful.
Hi All, Yes, but the person writing the driver was writing "common" code. He probably didn't even know there was a timer whose resolution was not 1 ms.
But perhaps this argues for a function to check timeouts - at the moment get_timer() returns the time since an event and it is used at the start of the loop and the end. Perhaps we should have:
	#define TIMEOUT_MS 2000

	stop_time = get_future_time(TIMEOUT_MS); /* returns current time + TIMEOUT_MS + (resolution of timer) */
	while ((long)get_timer(stop_time) < 0) { /* I would much prefer while (!timed_out(stop_time)) */
		/* wait for something */
	}
Regards, Simon
In the existing system, you can get the same result by running the while loop with a condition of (get_timer(base) < TIMEOUT_MS + TIMER_RESOLUTION). We could just make TIMER_RESOLUTION a mandatory define for all U-Boot builds. Then common code would be wrong only if it omitted TIMER_RESOLUTION from the comparison. For all I know, there may be such a define already. Anybody know of one?
Best Regards, Bill Campbell