[U-Boot] NAND bad block Query

Example taken from include/configs/sheevaplug.h
#ifdef CONFIG_CMD_NAND
#define CONFIG_ENV_IS_IN_NAND	1
#define CONFIG_ENV_SECT_SIZE	0x20000		/* 128K */
#else
#define CONFIG_ENV_IS_NOWHERE	1		/* if env in SDRAM */
#endif
/*
 * max 4k env size is enough, but in case of nand
 * it has to be rounded to sector size
 */
#define CONFIG_ENV_SIZE		0x20000		/* 128k */
#define CONFIG_ENV_ADDR		0x60000
#define CONFIG_ENV_OFFSET	0x60000		/* env starts here */
In the above configuration, CONFIG_ENV_SIZE == CONFIG_ENV_SECT_SIZE, that is, the erase block size.
If there is just one bad block in the NAND mapped for the environment, does this mean that 'saveenv' will fail?
If so, shouldn't CONFIG_ENV_SIZE be set to less than CONFIG_ENV_SECT_SIZE (but still be a multiple of the write sector size) to allow for bad block skipping?
I am tired and have a headache that won't go away, please don't scold me too harshly if I am being stupid....

On Wed, 2014-02-12 at 21:04 +0000, Gray Remlin wrote:
Example taken from include/configs/sheevaplug.h
#ifdef CONFIG_CMD_NAND
#define CONFIG_ENV_IS_IN_NAND	1
#define CONFIG_ENV_SECT_SIZE	0x20000		/* 128K */
#else
#define CONFIG_ENV_IS_NOWHERE	1		/* if env in SDRAM */
#endif
/*
 * max 4k env size is enough, but in case of nand
 * it has to be rounded to sector size
 */
#define CONFIG_ENV_SIZE		0x20000		/* 128k */
#define CONFIG_ENV_ADDR		0x60000
#define CONFIG_ENV_OFFSET	0x60000		/* env starts here */
In the above configuration, CONFIG_ENV_SIZE == CONFIG_ENV_SECT_SIZE, that is, the erase block size.
If there is just one bad block in the NAND mapped for the environment, does this mean that 'saveenv' will fail?
If so, shouldn't CONFIG_ENV_SIZE be set to less than CONFIG_ENV_SECT_SIZE (but still be a multiple of the write sector size) to allow for bad block skipping?
I am tired and have a headache that won't go away, please don't scold me too harshly if I am being stupid....
Reducing CONFIG_ENV_SIZE would speed up I/O and CRC calculation, but it would not help with bad block skipping, because the granularity of skipping is the 128k block, not the 4k page.
What you want is CONFIG_ENV_RANGE.
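For illustration, a configuration along these lines (the 0x80000 range is an assumed example, not taken from sheevaplug.h) reserves four erase blocks so that saveenv can skip a bad one within the range; CONFIG_ENV_RANGE must be at least CONFIG_ENV_SIZE and a multiple of the erase block size:

#define CONFIG_ENV_IS_IN_NAND	1
#define CONFIG_ENV_SECT_SIZE	0x20000		/* 128K erase block */
#define CONFIG_ENV_SIZE		0x20000		/* one block of env data */
#define CONFIG_ENV_OFFSET	0x60000		/* env starts here */
#define CONFIG_ENV_RANGE	0x80000		/* 4 blocks, leaves room to skip bad ones */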
-Scott

On 12/02/14 21:47, Scott Wood wrote:
On Wed, 2014-02-12 at 21:04 +0000, Gray Remlin wrote:
Example taken from include/configs/sheevaplug.h
#ifdef CONFIG_CMD_NAND
#define CONFIG_ENV_IS_IN_NAND	1
#define CONFIG_ENV_SECT_SIZE	0x20000		/* 128K */
#else
#define CONFIG_ENV_IS_NOWHERE	1		/* if env in SDRAM */
#endif
/*
 * max 4k env size is enough, but in case of nand
 * it has to be rounded to sector size
 */
#define CONFIG_ENV_SIZE		0x20000		/* 128k */
#define CONFIG_ENV_ADDR		0x60000
#define CONFIG_ENV_OFFSET	0x60000		/* env starts here */
In the above configuration, CONFIG_ENV_SIZE == CONFIG_ENV_SECT_SIZE, that is, the erase block size.
If there is just one bad block in the NAND mapped for the environment, does this mean that 'saveenv' will fail?
If so, shouldn't CONFIG_ENV_SIZE be set to less than CONFIG_ENV_SECT_SIZE (but still be a multiple of the write sector size) to allow for bad block skipping?
I am tired and have a headache that won't go away, please don't scold me too harshly if I am being stupid....
Reducing CONFIG_ENV_SIZE would speed up I/O and CRC calculation, but it would not help with bad block skipping, because the granularity of skipping is the 128k block, not the 4k page.
Obvious in hindsight: I looked at the 'writeenv' function numerous times and missed that every time!
What you want is CONFIG_ENV_RANGE.
Re-reading the source (hopefully correctly) this morning, CONFIG_ENV_RANGE has no effect on the bad block size, so it does not help in this instance, as I only have 128K (one erase block) to play with.
It seems to me that the current BBT method does not scale well and smacks of a legacy implementation from when flash was perhaps commonly a lot smaller than the GB-plus parts used today. It's just crazy that the loss of a couple of bits results in the loss of 128K.
Implementing a dynamic BBT that supports 'virtual partitioning', permitting finer granularity for smaller virtual partitions without imposing unnecessary overhead on larger virtual partitions, would break everything, so I won't hold my breath.
-Scott
Thank you for your reply, Scott.

On Thu, 2014-02-13 at 14:59 +0000, Gray Remlin wrote:
On 12/02/14 21:47, Scott Wood wrote:
On Wed, 2014-02-12 at 21:04 +0000, Gray Remlin wrote:
Example taken from include/configs/sheevaplug.h
#ifdef CONFIG_CMD_NAND
#define CONFIG_ENV_IS_IN_NAND	1
#define CONFIG_ENV_SECT_SIZE	0x20000		/* 128K */
#else
#define CONFIG_ENV_IS_NOWHERE	1		/* if env in SDRAM */
#endif
/*
 * max 4k env size is enough, but in case of nand
 * it has to be rounded to sector size
 */
#define CONFIG_ENV_SIZE		0x20000		/* 128k */
#define CONFIG_ENV_ADDR		0x60000
#define CONFIG_ENV_OFFSET	0x60000		/* env starts here */
In the above configuration, CONFIG_ENV_SIZE == CONFIG_ENV_SECT_SIZE, that is, the erase block size.
If there is just one bad block in the NAND mapped for the environment, does this mean that 'saveenv' will fail?
If so, shouldn't CONFIG_ENV_SIZE be set to less than CONFIG_ENV_SECT_SIZE (but still be a multiple of the write sector size) to allow for bad block skipping?
I am tired and have a headache that won't go away, please don't scold me too harshly if I am being stupid....
Reducing CONFIG_ENV_SIZE would speed up I/O and CRC calculation, but it would not help with bad block skipping, because the granularity of skipping is the 128k block, not the 4k page.
Obvious in hindsight: I looked at the 'writeenv' function numerous times and missed that every time!
What you want is CONFIG_ENV_RANGE.
Re-reading the source (hopefully correctly) this morning, CONFIG_ENV_RANGE has no effect on the bad block size, so it does not help in this instance, as I only have 128K (one erase block) to play with.
Another option is to use CONFIG_ENV_OFFSET_OOB, which stores the offset location in the OOB of known-good block zero, so you can choose any good block when writing a NAND image for a particular device instance. This only helps with factory-marked bad blocks, though.
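A rough sketch of that option, assuming it still takes this form on the board in question (surrounding defines vary per board):

#define CONFIG_ENV_IS_IN_NAND	1
#define CONFIG_ENV_OFFSET_OOB	1	/* env offset is read from block 0's OOB */

The chosen good block is then recorded once from the U-Boot prompt, e.g. with 'nand env.oob set <offset>' where that subcommand is built in.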
The best option is probably env_ubi, if you are already using ubi elsewhere (or can convert) and don't need to access the environment before relocation.
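Sketched with made-up partition and volume names (the option names follow U-Boot's env-in-UBI support; CONFIG_ENV_SIZE stays the logical size of the environment image):

#define CONFIG_ENV_IS_IN_UBI	1
#define CONFIG_ENV_UBI_PART	"ubi-part"	/* example MTD partition UBI is attached to */
#define CONFIG_ENV_UBI_VOLUME	"env"		/* example volume holding the environment */
#define CONFIG_ENV_SIZE		0x20000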
It seems to me that the current BBT method does not scale well and smacks of a legacy implementation from when flash was perhaps commonly a lot smaller than the GB-plus parts used today. It's just crazy that the loss of a couple of bits results in the loss of 128K.
The size of NAND chips has grown a lot faster than the size of erase blocks, so it actually used to be worse.
Implementing a dynamic BBT that supports 'virtual partitioning', permitting finer granularity for smaller virtual partitions without imposing unnecessary overhead on larger virtual partitions, would break everything, so I won't hold my breath.
It's based on factory bad block markers, which are done on a per-erase-block basis (since otherwise you'd lose the marker when erasing good portions of the page). Perhaps something finer grained could be done for subsequently detected "bad pages" to be tracked in a BBT elsewhere, but yeah, I wouldn't hold my breath either. :-)
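To make the per-erase-block granularity concrete, a minimal C sketch of a factory bad-block scan follows; read_oob() is a hypothetical helper, and the marker position assumes a common large-page layout (byte 0 of the OOB of a block's first page):

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define ERASE_BLOCK_SIZE	0x20000	/* 128K, as in the config above */

/* Hypothetical helper: read 'len' OOB bytes of the page at 'offset'. */
extern int read_oob(uint64_t offset, uint8_t *oob, size_t len);

/* A block is treated as factory-bad if the marker byte in the OOB of
 * its first page is not 0xFF.  Real scans often also check the second
 * or last page of the block, depending on the manufacturer. */
static bool block_is_bad(uint64_t block_offset)
{
	uint8_t oob[64];

	if (read_oob(block_offset, oob, sizeof(oob)) < 0)
		return true;	/* unreadable: play safe, treat as bad */

	return oob[0] != 0xFF;
}

Anything finer grained would have to live in a table stored elsewhere, since the OOB marker itself is wiped whenever its block is erased.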
-Scott
participants (2)
- Gray Remlin
- Scott Wood