
On Tuesday, May 05, 2015 at 11:46:56 PM, Stephen Warren wrote:
On 05/04/2015 02:54 PM, Marek Vasut wrote:
Switch to the generic timer implementation from lib/time.c. This also fixes a signed overflow which was in the __udelay() implementation.
Can you explain that a bit more?
-void __udelay(unsigned long usec)
-{
-	ulong endtime;
-	signed long diff;
-
-	endtime = get_timer_us(0) + usec;
-
-	do {
-		ulong now = get_timer_us(0);
-		diff = endtime - now;
-	} while (diff >= 0);
-}
I believe that since endtime and now hold microseconds, there shouldn't be any overflow as long as the microsecond difference fits into 31 bits, i.e. as long as usec is less than ~36 minutes. I doubt anything is calling __udelay() with that large a value. Perhaps the issue this patch fixes is in get_timer_us(0) instead, or something else changed as a side effect?
The generic implementation caters for the full 32-bit range, that's all. Since the argument of this function is unsigned, it can overflow if you pass an argument wider than 31 bits. OK like that?
Best regards, Marek Vasut