
In message 87A697E50899F541B77272C331CEB744D506@exchange.Q-DOMAIN you wrote:
> Ok, so just to clarify:
> If I run the following code on a 1 GHz machine vs. a 300 MHz machine,
> and assuming that read_reg() always returns 0:
> 	tmo = get_timer (0) + 1 * CFG_HZ;
> 	while ((read_reg(0xa6) == 0) && (get_timer (0) < tmo))
> 		/*NOP*/;
> then the timeout should occur much sooner on the 1 GHz machine than on
> the 300 MHz machine, the reason being that the timer will decrement
> much faster on the gig machine (assuming the timer is fed the same
> system clock as the CPU).
No, this is not correct. The timeout will always occur after one
second. get_timer() returns the time (minus the offset passed as the
argument) in milliseconds.
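
For illustration, here is a minimal sketch of the same wait written
against that interface. It assumes CFG_HZ is 1000 (one timer tick per
millisecond); read_reg() and the 0xa6 offset are taken from your
example, and the error message is made up:

	/*
	 * Sketch only: assumes get_timer(base) returns the number of
	 * milliseconds elapsed since "base" and CFG_HZ == 1000.
	 */
	ulong start = get_timer (0);		/* remember start time */

	while (read_reg(0xa6) == 0) {
		if (get_timer (start) >= 1 * CFG_HZ) {	/* 1000 ms gone? */
			puts ("timeout waiting for register 0xa6\n");
			break;
		}
	}

The loop gives up after one second of wall clock time, no matter how
many iterations a 300 MHz or a 1 GHz CPU squeezes into that second.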
> Is this a correct analysis or am I missing something? If true, then the
Yes, you are missing that get_timer() is counting in milliseconds.
> timeout has to account for the clock of the timer (rate of timer
No, it does not.
Best regards,
Wolfgang Denk