
Dear Jean-Christophe PLAGNIOL-VILLARD,
In message <20090502200026.GL25959@game.jcrosoft.org> you wrote:
If I understand you correctly, your argument goes that board_init() needs delays (like udelay()), delays need timers, and timers need interrupts, so we must initialize first interrupts, then timers, and only then can we run board_init()? Is this your argument?
First, if you wish to do a delay before the timer is ready, you will have to do a loop delay, which is CPU speed dependent, so in the end the code is not maintainable.
I disagree. See for example the PowerPC code, where we can use the always-present Time Base Register to provide reliable delays without using counted loops. I think similar features are available on other processors as well. And even if we have to initialize some general purpose hardware timer, that does not mean that we have to initialize full time services or even interrupts.
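Just to illustrate what I mean (a rough sketch only, NOT actual U-Boot code; read_counter() and COUNTER_HZ are placeholders for whatever free-running counter and clock rate the CPU provides):

	/*
	 * Busy-wait by polling a free-running counter. No interrupts
	 * and no timer services are needed - only some counter that
	 * is always running, like the PowerPC Time Base Register.
	 * read_counter() and COUNTER_HZ are placeholders here.
	 */
	void udelay(unsigned long usec)
	{
		unsigned long long start = read_counter();
		unsigned long long ticks =
			((unsigned long long)usec * COUNTER_HZ) / 1000000ULL;

		while ((read_counter() - start) < ticks)
			;	/* unsigned subtraction handles counter wrap */
	}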
But then I ask: why would udelay() need timers and interrupts? This does not fit into the design philosophy of U-Boot, which attempts to bring up a board at least to a state where we have serial console output with as few requirements as possible. Your change breaks this, because now we have to initialize timers and interrupts (which are not exactly a trivial thing to set up or debug if they aren't working correctly) BEFORE we have console output.
On ARM we have two kinds of "serial" port: the physical one, which needs to be initialized before use, and the DCC, which can be used right from the start of the CPU. So to debug the timer and earlier init you can use that one.
I'm referring to the standard serial port only.
But your argument was that board_init() would need delays (like udelay()) which would not work without timers and interrupts.
My changes do not break it; previously the timer was initialized by interrupts_init. The only change is that board_init will be done after the timer.
Yes, and I don't want to change this without real need. And I do not see any such need yet - the existing code is working, isn't it?
[I ignore the case of CONFIG_USE_IRQ here, because only 4 boards actually use this feature, and they could probably be changed to do without, too.]
IIRC at least the S3C must use interrupts for the timer.
Maybe. But as mentioned before - full-fledged timer services and simple delays are two different things.
So while I really appreciate your attempts to clean up the timer code on ARM, the resulting consequences are expensive, and I am not yet convinced that the advantages of the new code outweigh this disadvantage; especially, I am not convinced that this is really necessary and unavoidable.
boards actually init the timer themselves in the board_init to have a correct delay
Hm... you said "at least the S3C must use interrupts for the timer" - but as far as I can see there is no timer init code anywhere in the board specific code for smdk2400, smdk2410, or smdk6400.
Even for AT91 it is only done for a single board:
-> g timer board/atmel/*/*
board/atmel/at91cap9adk/at91cap9adk.c: extern void timer_init(void);
board/atmel/at91cap9adk/at91cap9adk.c: timer_init();
board/atmel/at91rm9200dk/flash.c: /* arm simple, non interrupt dependent timer */
board/atmel/at91rm9200dk/flash.c: reset_timer_masked ();
board/atmel/at91rm9200dk/flash.c: if (get_timer_masked () > CONFIG_SYS_FLASH_ERASE_TOUT) {
board/atmel/at91rm9200dk/flash.c: /* arm simple, non interrupt dependent timer */
board/atmel/at91rm9200dk/flash.c: reset_timer_masked ();
board/atmel/at91rm9200dk/flash.c: if (get_timer_masked () > CONFIG_SYS_FLASH_ERASE_TOUT) {
board/atmel/atstk1000/flash.c: start_time = get_timer(0);
-> ls board/atmel/*/Makefile | wc -l
9
One out of 9...
Can we not do delays without interrupts?
No, not on all SoCs.
I don't buy that. Do you have an example?
And do we need full-blown timer services for delays?
Yes, as we need to init the timer, which is in nearly all cases SoC dependent.
Please re-read my question. I accept that we may need a simple, free-running timer which we can poll for a delay loop. But we do not need full-blown timer services just to implement delays.
[Keep in mind that a delay is usually used to implement a timeout in an error branch; that means it does not matter if it does not have 10e-6 precision or better.]
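The typical pattern is something like this (again just a sketch; flash_ready() is a placeholder, and get_timer() and CONFIG_SYS_FLASH_ERASE_TOUT are used in the usual U-Boot style, as in the flash code quoted above):

	/*
	 * Typical timeout in an error branch: poll the hardware and
	 * give up after a generous timeout. Whether get_timer() is
	 * off by a few milliseconds is completely irrelevant here.
	 * flash_ready() stands in for the real status check.
	 */
	static int wait_for_flash(void)
	{
		ulong start = get_timer(0);

		while (!flash_ready()) {
			if (get_timer(start) > CONFIG_SYS_FLASH_ERASE_TOUT)
				return -1;	/* timeout - error case only */
		}
		return 0;
	}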
Not only; it's used for slow clock, for chip reset timing, etc...
What is "slow clock"?
And if you reset a chip, you don't really care if the delay loop is 10 or 12 milliseconds, or do you?
But the precision is just that: if you need to wait 10us you will not wait less.
If you have to wait 10us or more, then you will probably set a timeout of 20us anyway.
And if you want to wait 100ms you will not have to wait 1s.
Indeed. But usually you don't care whether it's 100, 101 or 110 ms.
Another example: for SPI transfers you will have to use some delays; the more precise your timer is, the higher the rate your transfers will work at. It's the same for NAND, etc...
Come on - SPI transfers or NAND flash controllers use their own clocks; they do NOT depend on the accuracy of udelay() [and if there should be a port that does depend on udelay(), then it is seriously broken and urgently needs to be fixed by the board maintainer].
But I'm repeating myself.
Best regards,
Wolfgang Denk