
Hello Simon,
On 01.08.2013 16:22, Simon Glass wrote:
Hi,
On Thu, Aug 1, 2013 at 2:38 AM, Heiko Schocher <hs@denx.de> wrote:
Hello Albert,
On 01.08.2013 08:53, Albert ARIBAUD wrote:
Hi Heiko,
On Thu, 01 Aug 2013 08:02:42 +0200, Heiko Schocher <hs@denx.de> wrote:
I suppose you could. It seems conceptually /far/ simpler to just scan the DT once up-front rather than having to defer all this stuff until it is needed.
On the other hand, we fight for every ms of boot time ... and here we want to scan a complete DT with maybe a lot of nodes we do not want to use?
Scanning all of the DT seems to imply it has no strict or standard ordering. Could we mandate, suggest, or make it so that all entries in the DT needed at _f time are put first, and maybe even place an "end of _f" custom marker in the DT to delimit them? (I assume that, for the sake of
I do not know if this is possible, as I think the DT used in U-Boot should be the same as the one used in Linux ... or?
Postel-ism, anything in the DT which is not understood is skipped, so users of the DT other than us would not even be annoyed by such a marker)
This way, we'd avoid wasting time scanning most of the DT in this case.
Hmm.. why should we introduce such things, instead of scanning the node only if we need it?
We have "only" the problem, that we could not write to data at this moment ... but this problem should be solved in a seperate topic. I2C is usable before relocation, the problem is in conjunction with dt, that we can not save for example the base address of the controller, which we get from the DT ... If I understand it correct!
So when using the DT we need an option to have some (small) RAM into which we can write parameters parsed from the DT ...
I think all subsystems used before relocation have this problem (for example: environment on a SPI flash?).
I think using a small RAM is a good approach. At least it is better than pretending there is no RAM at all. We currently have no facility to allocate RAM before relocation other than to use the .data section. We could use global_data, but that won't scale well once many drivers add their own stuff to it. Samsung's driver uses .data; I don't think it is a big deal.
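For what it's worth, the .data approach looks roughly like this in a driver (only a sketch with made-up names, not the actual Samsung or tegra code, and it of course only helps on platforms where the pre-relocation image runs from writable memory; CONFIG_SYS_MAX_I2C_BUS just stands for however many controllers the platform has):

#include <common.h>
#include <fdtdec.h>

/* Decoded controller info, forced into .data so it is writable and
 * persistent before relocation (BSS must not be touched that early).
 */
static struct i2c_bus_info {
	fdt_addr_t	base;		/* controller base address from the DT */
	int		decoded;	/* node already parsed? */
} bus_info[CONFIG_SYS_MAX_I2C_BUS] __attribute__((section(".data")));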
One option I should mention is to decode the I2C FDT nodes each time they are needed prior to relocation (i.e. onto the stack), use the result for the transaction, and throw it away. This is quite painful IMO since it involves putting calls in the driver to check whether we are pre-reloc, and stack space is used every time. On tegra20 this would be 130 bytes or so. I mention it because the console worked this way for a while (decoding the console again on every byte).
Options as I see it:
- just fudge it for now and use .data (deal with it later with driver model)
- change the meaning of board_init_f() such that memory may be available
(e.g. if run from SPL)
- decode the FDT nodes every time we need them (ick)
Why? After relocation it is needed exactly once. You can do something like this in the i2c_init function:
+	bus = tegra_i2c_get_bus(adap);
+	if (bus)
+		return 0;
If you get a bus from tegra_i2c_get_bus(adap), it is already "fdt initialized".
If you do not get a bus ... init it ... only once.
The problem before relocation could be solved with .data or later with the driver model ...
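Just to make that concrete, the pattern I mean looks about like this (only a sketch; the helper names and the fdt-decode call are placeholders, not the real driver code):

static int tegra_i2c_init_bus(struct i2c_adapter *adap)
{
	struct i2c_bus *bus;

	bus = tegra_i2c_get_bus(adap);
	if (bus)
		return 0;	/* already "fdt initialized", nothing to do */

	/* first call for this adapter: decode the FDT node exactly once
	 * (tegra_i2c_decode_fdt() is a made-up name for that step)
	 */
	return tegra_i2c_decode_fdt(adap);
}

So the FDT is parsed once per bus, at the time the bus is really used.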
- adjust the ordering so that I2C normally happens post reloc except for
specific platforms with a CONFIG defined (Heiko, the difference as I understand it is that with your patch CONFIG_HARD_I2C or CONFIG_SOFT_I2C are now defined, and so I2C happens prior to reloc now)
Hmm.. BTW: CONFIG_SOFT_I2C is no longer in the code :-)
And there are a lot of boards that use I2C before relocation; that is not a new feature.
The problem pops up because in arch/arm/lib/board.c@init_func_i2c() the new code calls i2c_init_all() instead of i2c_init() ... i2c_init_all() was introduced to init the SPD EEPROM bus and to call i2c_init_board() before doing so ...
So I think we can call i2c_init() here again ... but i2c_init() also calls i2c_init_bus(), and that calls the i2c_init from the tegra driver, which sets the speed and returns 0 ... which is not really clean; it just works because tegra_i2c_set_bus_speed() checks if the bus is valid and, if not, simply returns ...
But i2c_init_board() is not called this early, which is the cause of our problem with the tegra driver and the new framework, as i2c_init_board() was historically introduced to unblock blocked I2C buses (correct me if I am wrong).
My preferred way is still:
- replace i2c_init_all() with i2c_init() in arch/arm/lib/board.c@init_func_i2c() to get tegra working again. In the long term, when all I2C drivers use the new framework, init_func_i2c() will no longer be necessary :-) The goal is to change the I2C API so that all I2C functions pass "bus" as a parameter, and i2c_set_bus_num() becomes a static function in drivers/i2c/i2c_core.c. Then there is no longer any need for storing the old bus, setting the new bus, doing the I2C work and setting the old bus back; that sequence is spread over the whole U-Boot code (see the sketches below).
- decode the fdt node in tegra_i2c_init, as I think that is the right place for it. This is more or less optional, but it would prevent such "ordering" problems again in the future ...
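To make the first point concrete, init_func_i2c() would again look more or less like the old code (written from memory, so treat it as a sketch; CONFIG_SYS_I2C_SPEED and CONFIG_SYS_I2C_SLAVE are the usual board config defaults):

static int init_func_i2c(void)
{
	puts("I2C:   ");
	i2c_init(CONFIG_SYS_I2C_SPEED, CONFIG_SYS_I2C_SLAVE);
	puts("ready\n");
	return 0;
}

And this is the bus save/restore sequence I want to get rid of; variations of it are found all over the tree (pattern only, not a quote of any particular caller):

	unsigned int old_bus = i2c_get_bus_num();

	i2c_set_bus_num(bus);
	i2c_read(chip, addr, alen, buf, len);
	i2c_set_bus_num(old_bus);

Once the I2C functions take the bus (adapter) as a parameter, this dance is not needed anymore.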
As Wolfgang said: "Agreed - actually we're entering an area here that smells pretty strong like device model reorganization :-)"
BTW: How is this problem solved with the device model approach?
It's more that we still need to solve it, probably with a limited malloc() pre-reloc.
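Just thinking aloud what such a limited pre-reloc malloc() could look like: a tiny bump allocator over a small static buffer, no free(). Purely illustrative, the names are made up and this is not existing code:

#include <common.h>

static char pre_reloc_heap[256]
	__attribute__((section(".data"), aligned(8)));
static size_t pre_reloc_used __attribute__((section(".data")));

static void *early_malloc(size_t size)
{
	void *p;

	size = (size + 7) & ~(size_t)7;		/* keep 8-byte alignment */
	if (pre_reloc_used + size > sizeof(pre_reloc_heap))
		return NULL;

	p = pre_reloc_heap + pre_reloc_used;
	pre_reloc_used += size;
	return p;
}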
bye, Heiko