
Hi,
OK, so just to clarify:
If I run the following code on a 1 GHz machine vs. a 300 MHz machine, assuming that read_reg() always returns 0:
    tmo = get_timer(0) + 1 * CFG_HZ;
    while ((read_reg(0xa6) == 0) && (get_timer(0) < tmo))
            /* NOP */;
then the timeout should occur much sooner on the 1 GHz machine than on the 300 MHz machine, because the timer counts much faster on the 1 GHz machine (assuming the timer is fed the same clock as the CPU).
Is this analysis correct, or am I missing something? If it is, then the timeout has to account for the timer's input clock (the rate at which it counts), which will not necessarily be CFG_HZ. In that case I'd need to set tmo to something like:
    tmo = get_timer(0) + 1 * clock_speed;   /* clock_speed is the timer clock in MHz */
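To put numbers on it (assuming the timer ticks at the CPU clock): 1 * CFG_HZ = 1000 raw ticks is 1 us at 1 GHz but about 3.3 us at 300 MHz, so the same nominal timeout would expire at very different wall-clock times. Spelled out as a sketch, with clock_speed_hz as a hypothetical variable holding the timer input clock in Hz:

    tmo = get_timer(0) + 1 * clock_speed_hz;    /* one second's worth of raw ticks */
    while ((read_reg(0xa6) == 0) && (get_timer(0) < tmo))
            /* NOP */;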
Any input appreciated.
Regards, Umar
-----Original Message-----
From: wd@denx.de [mailto:wd@denx.de]
Sent: Wednesday, June 20, 2007 12:27 AM
To: Umar Quershey
Cc: u-boot-users@lists.sourceforge.net
Subject: Re: [U-Boot-Users] CFG_HZ value
In message 87A697E50899F541B77272C331CEB744D502@exchange.Q-DOMAIN you wrote:
> How should one define CFG_HZ? It seems that it's to be set to 1000.
Correct. Please consider CFG_HZ as a constant which has to be set to 1000 on all systems.
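In a board configuration header this is just a one-liner, for example:

    #define CFG_HZ  1000    /* ticks per second; the same on every board */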
> However my system clock speed is not this. What are the rules for
It has nothing to do with any clock speed, just as the time unit (seconds) on your digital watch has nothing to do with the actual frequency of the quartz crystal somewhere inside it.
> Should I replace the CFG_HZ with my real system clock speed? BTW, I
No. CFG_HZ is always 1000.
> have to read a register at boot time to determine what speed I am
> running at. I do this early on and save a variable 'clock_speed' that
> other functions can use to determine the system clock speed.
With correctly implemented timer functions it just works.
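For example, a minimal sketch of such an implementation (read_timer_ticks() and timer_clk_hz are placeholders for your board's free-running hardware counter and its input clock, not real U-Boot symbols):

    #define CFG_HZ 1000                     /* always 1000 */

    /* Placeholders: a real board reads its free-running counter
     * register and detects the timer input clock at boot. */
    static unsigned long timer_clk_hz = 300000000;   /* e.g. 300 MHz */

    static unsigned long long read_timer_ticks(void)
    {
            return 0;   /* real code: read the hardware counter */
    }

    unsigned long get_timer(unsigned long base)
    {
            /* Scale raw hardware ticks down to CFG_HZ units
             * (milliseconds), so "1 * CFG_HZ" means one second
             * regardless of the CPU or timer clock. */
            return (unsigned long)(read_timer_ticks()
                                   / (timer_clk_hz / CFG_HZ)) - base;
    }

With that, a loop like yours times out after one second on the 1 GHz machine and on the 300 MHz machine alike.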
> ------_=_NextPart_001_01C7B2DA.91121226
> Content-Type: text/html;
Please never post HTML here.
Best regards,
Wolfgang Denk