
Wolfgang Denk wrote:
> To: "David Müller (ELSOFT AG)"
> In message <407FCCDA.6040508@elsoft.ch> you wrote:
>> What was the reason to make a new command "nand read.jffs2s" to handle bad block skipping in the read case?
>> Why not just integrate the bad block skipping into the standard "read.jffs2" command?
>
> Good questions.
>
>> IMHO the "original functionality" is broken. The "nand write" command skips over bad blocks by default, while "nand read" and (even worse) "nandboot" don't.
>
> I agree. Can you please submit a patch?
I implemented the current nand read.jffs2 bad block handling so that both the amount of NAND read and the amount of memory written are always the amount requested. If I ask for 2 MiB and there are bad blocks, there would otherwise be some unknown amount of memory at the end holding random data, and I didn't see a good way to tell the jffs2 code to ignore it. Initializing all the memory to 0xff to account for that would have slowed my boot process too much.
The read.jffs2 command could instead skip bad blocks and pad the data with 0xff at the end, so that the total number of bytes read is still always what was requested. This change to the "original functionality" wouldn't break any code (like mine :-) ) that depends on all of the destination memory being defined. I should have done it that way originally, but I didn't think of it.
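Just to make that idea concrete, here is a rough, self-contained sketch in plain C. It simulates the flash with an in-memory array; BLOCK_SIZE, NUM_BLOCKS, block_is_bad() and read_skip_and_pad() are made up for illustration and are not the real U-Boot code. The point is only that the requested flash range is consumed as-is, good blocks are copied contiguously, and the tail of the destination is padded with 0xff so all of the requested bytes are defined:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 0x4000      /* pretend erase-block size (16 KiB) */
#define NUM_BLOCKS 8           /* pretend device: 8 blocks          */

/* Simulated flash and bad-block table -- stand-ins for the real driver. */
static unsigned char flash[NUM_BLOCKS][BLOCK_SIZE];
static const int bad_block[NUM_BLOCKS] = { 0, 0, 1, 0, 0, 1, 0, 0 };

static int block_is_bad(int blk)       /* hypothetical helper */
{
    return bad_block[blk];
}

/*
 * Read 'len' bytes from the requested flash range starting at 'start_blk',
 * skipping bad blocks on the flash side, then pad whatever is missing with
 * 0xff so that exactly 'len' destination bytes are always defined.
 */
static size_t read_skip_and_pad(unsigned char *dst, int start_blk, size_t len)
{
    int nblocks = (int)((len + BLOCK_SIZE - 1) / BLOCK_SIZE);
    size_t done = 0;
    int blk;

    for (blk = start_blk; blk < start_blk + nblocks && blk < NUM_BLOCKS; blk++) {
        if (block_is_bad(blk))               /* skipped: contributes no data */
            continue;
        size_t chunk = (len - done < BLOCK_SIZE) ? len - done : BLOCK_SIZE;
        memcpy(dst + done, flash[blk], chunk);
        done += chunk;
    }

    memset(dst + done, 0xff, len - done);    /* pad the tail with 0xff */
    return done;                             /* real (non-pad) bytes copied */
}

int main(void)
{
    size_t want = 4 * BLOCK_SIZE;
    unsigned char *dst = malloc(want);

    if (!dst)
        return 1;
    memset(flash, 0xab, sizeof(flash));      /* fake flash contents */
    size_t good = read_skip_and_pad(dst, 0, want);

    printf("requested %zu, copied %zu, padded %zu bytes\n",
           want, good, want - good);
    free(dst);
    return 0;
}

With block 2 marked bad, a 4-block request still fills all 4 blocks' worth of memory: 3 blocks of real data followed by one block of 0xff padding.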
It seems to me that the current functionality is fine for jffs2, and that the jffs2s version is needed when you are not actually using jffs2. I think we should reserve read.jffs2 for jffs2 use; then it can be optimized to read only the data jffs2 really needs. If it is used for other purposes, those optimizations would break the other uses.
If what is really needed (and it isn't, for jffs2) is what nandboot should do (but doesn't), namely to read enough blocks to get a certain amount of _good_ data, then there should be a new command, something like "nand read.good". This would need a new option flag for nand_rw().
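A rough sketch of what that "read.good" behaviour might look like, reusing the simulated flash[], bad_block[], block_is_bad(), BLOCK_SIZE and NUM_BLOCKS from the sketch above (read_good() is made up for illustration and is not the real nand_rw() interface):

/*
 * Hypothetical "nand read.good" behaviour: keep consuming flash blocks,
 * skipping bad ones, until 'len' bytes of good data have been gathered
 * or the device runs out.  Unlike the padding variant above, the flash
 * range consumed grows with every bad block encountered.
 */
static size_t read_good(unsigned char *dst, int start_blk, size_t len)
{
    size_t done = 0;
    int blk = start_blk;

    while (done < len && blk < NUM_BLOCKS) {
        if (!block_is_bad(blk)) {
            size_t chunk = (len - done < BLOCK_SIZE) ? len - done : BLOCK_SIZE;
            memcpy(dst + done, flash[blk], chunk);
            done += chunk;
        }
        blk++;                     /* a bad block costs flash range, not data */
    }
    return done;                   /* equals 'len' unless the device ended */
}

The difference from the padding variant is that the flash end offset is not known in advance here, which is why it would need its own option flag in nand_rw() rather than being folded into the existing jffs2 path.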
Dave Ellis