
On 12/02/14 21:47, Scott Wood wrote:
On Wed, 2014-02-12 at 21:04 +0000, Gray Remlin wrote:
Example taken from include/configs/sheevaplug.h
#ifdef CONFIG_CMD_NAND
#define CONFIG_ENV_IS_IN_NAND	1
#define CONFIG_ENV_SECT_SIZE	0x20000	/* 128K */
#else
#define CONFIG_ENV_IS_NOWHERE	1	/* if env in SDRAM */
#endif
/*
 * max 4k env size is enough, but in case of nand
 * it has to be rounded to sector size
 */
#define CONFIG_ENV_SIZE		0x20000	/* 128k */
#define CONFIG_ENV_ADDR		0x60000
#define CONFIG_ENV_OFFSET	0x60000	/* env starts here */
In the above configuration CONFIG_ENV_SIZE == CONFIG_ENV_SECT_SIZE, that is, the erase block size.
If there is just one bad block in the NAND region mapped for the environment, does this mean that 'saveenv' will fail?
If so, shouldn't CONFIG_ENV_SIZE be set to less than CONFIG_ENV_SECT_SIZE (but remain a multiple of the write sector size) to allow for bad block skipping?
I am tired and have a headache that won't go away, so please don't scold me too harshly if I am being stupid....
Reducing CONFIG_ENV_SIZE would speed up I/O and CRC calculation, but it would not help with bad block skipping, because the granularity of skipping is the 128k block, not the 4k page.
Obvious, I looked at the 'writeenv' function numerous times and missed that every time!
What you want is CONFIG_ENV_RANGE.
Re-reading the source (hopefully correctly) this morning, CONFIG_ENV_RANGE has no effect on the bad-block granularity, so it does not help in this instance, as I only have 128K (one erase block) to play with.
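For readers following along: CONFIG_ENV_RANGE reserves a region larger than CONFIG_ENV_SIZE so the NAND environment driver has spare erase blocks to skip into. The fragment below is illustrative only (these values are not from sheevaplug.h) and assumes the board can spare the extra flash:

	#define CONFIG_ENV_SIZE		0x20000			/* one 128K erase block */
	#define CONFIG_ENV_OFFSET	0x60000			/* env starts here */
	#define CONFIG_ENV_RANGE	(4 * CONFIG_ENV_SIZE)	/* tolerate up to 3 bad blocks */

As noted above, this only helps when the range spans more than one erase block; with a single block reserved, a bad block in it is fatal regardless.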
It seems to me that the current BBT method does not scale well, and smacks of a legacy implementation from when flash was commonly far smaller than the multi-gigabyte parts used today. It's just crazy that the loss of a couple of bits results in the loss of 128K.
Implementing a dynamic BBT with 'virtual partitioning', permitting finer granularity on smaller virtual partitions without imposing unnecessary overhead on larger ones, would break everything, so I won't hold my breath.
-Scott
Thank you for your reply Scott