[PATCH 0/7] Correct confusing lmb error message

I ran into a very confusing error message about overlapping memory. It turns out the message is not correct: the real problem is that the internal lmb table has overflowed. This series refactors the lmb code a little so that the real message can be displayed.
It also tidies up the code a little.
Simon Glass (7):
  lmb: Drop surplus brackets and fix code style
  lmb: Tidy up structure access
  lmb: Avoid long for loop counters and function returns
  lmb: Change functions returning long to return int
  lmb: Tidy up lmb_overlaps_region()
  lmb: Document and tidy lmb_add_region_flags()
  fs: Fix a confusing error about overlapping regions
 fs/fs.c        |   7 +-
 include/lmb.h  |  29 ++++++--
 lib/lmb.c      | 183 ++++++++++++++++++++++++++++++-------------------
 test/lib/lmb.c |  42 ++++++------
 4 files changed, 163 insertions(+), 98 deletions(-)

Drop unnecessary brackets, which can cause confusion about precedence. Also use a blank line before the final return and fix some overly long lines.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 lib/lmb.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/lib/lmb.c b/lib/lmb.c
index b2c233edb64e..ae1969893f00 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -60,7 +60,7 @@ static long lmb_addrs_overlap(phys_addr_t base1, phys_size_t size1,
 	const phys_addr_t base1_end = base1 + size1 - 1;
 	const phys_addr_t base2_end = base2 + size2 - 1;

-	return ((base1 <= base2_end) && (base2 <= base1_end));
+	return base1 <= base2_end && base2 <= base1_end;
 }

 static long lmb_addrs_adjacent(phys_addr_t base1, phys_size_t size1,
@@ -278,7 +278,7 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 		}
 	}

-	if ((i < rgn->cnt - 1) && lmb_regions_adjacent(rgn, i, i + 1)) {
+	if (i < rgn->cnt - 1 && lmb_regions_adjacent(rgn, i, i + 1)) {
 		if (rgn->region[i].flags == rgn->region[i + 1].flags) {
 			lmb_coalesce_regions(rgn, i, i + 1);
 			coalesced++;
@@ -375,6 +375,7 @@ long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 	 * beginging of the hole and add the region after hole.
 	 */
 	rgn->region[i].size = base - rgn->region[i].base;
+
 	return lmb_add_region_flags(rgn, end + 1, rgnend - end,
 				    rgn->region[i].flags);
 }
@@ -404,7 +405,7 @@ static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base,
 			break;
 	}

-	return (i < rgn->cnt) ? i : -1;
+	return i < rgn->cnt ? i : -1;
 }

 phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align)
@@ -412,7 +413,8 @@ phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align)
 {
 	return lmb_alloc_base(lmb, size, align, LMB_ALLOC_ANYWHERE);
 }

-phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr)
+phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
+			   phys_addr_t max_addr)
 {
 	phys_addr_t alloc;

@@ -430,7 +432,8 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)
 	return addr & ~(size - 1);
 }

-phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr)
+phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
+			     phys_addr_t max_addr)
 {
 	long i, rgn;
 	phys_addr_t base = 0;
@@ -468,6 +471,7 @@ phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phy
 			base = lmb_align_down(res_base - size, align);
 		}
 	}
+
 	return 0;
 }

@@ -494,6 +498,7 @@ phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 			return base;
 		}
 	}
+
 	return 0;
 }

@@ -521,6 +526,7 @@ phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr)
 		return lmb->memory.region[lmb->memory.cnt - 1].base +
 		       lmb->memory.region[lmb->memory.cnt - 1].size - addr;
 	}
+
 	return 0;
 }

@@ -534,6 +540,7 @@ int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags)
 		if ((addr >= lmb->reserved.region[i].base) && (addr <= upper))
 			return (lmb->reserved.region[i].flags & flags) == flags;
 	}
+
 	return 0;
 }

In some cases it helps to define a local variable pointing to the structure being accessed. This avoids lots of repeated code.
There is no need to individually assign each struct member, so use a structure assignment instead.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 lib/lmb.c | 54 +++++++++++++++++++++++++-----------------------------
 1 file changed, 25 insertions(+), 29 deletions(-)

diff --git a/lib/lmb.c b/lib/lmb.c
index ae1969893f00..8b9a611c5216 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -23,20 +23,19 @@ DECLARE_GLOBAL_DATA_PTR;

 static void lmb_dump_region(struct lmb_region *rgn, char *name)
 {
-	unsigned long long base, size, end;
-	enum lmb_flags flags;
 	int i;

 	printf(" %s.cnt = 0x%lx / max = 0x%lx\n", name, rgn->cnt, rgn->max);

 	for (i = 0; i < rgn->cnt; i++) {
-		base = rgn->region[i].base;
-		size = rgn->region[i].size;
-		end = base + size - 1;
-		flags = rgn->region[i].flags;
+		struct lmb_property *prop = &rgn->region[i];
+		unsigned long long end;
+
+		end = prop->base + prop->size - 1;

 		printf(" %s[%d]\t[0x%llx-0x%llx], 0x%08llx bytes flags: %x\n",
-		       name, i, base, end, size, flags);
+		       name, i, (unsigned long long)prop->base, end,
+		       (unsigned long long)prop->size, prop->flags);
 	}
 }

@@ -89,11 +88,8 @@ static void lmb_remove_region(struct lmb_region *rgn, unsigned long r)
 {
 	unsigned long i;

-	for (i = r; i < rgn->cnt - 1; i++) {
-		rgn->region[i].base = rgn->region[i + 1].base;
-		rgn->region[i].size = rgn->region[i + 1].size;
-		rgn->region[i].flags = rgn->region[i + 1].flags;
-	}
+	for (i = r; i < rgn->cnt - 1; i++)
+		rgn->region[i] = rgn->region[i + 1];
 	rgn->cnt--;
 }

@@ -122,6 +118,7 @@ void lmb_init(struct lmb *lmb)

 void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align)
 {
+	struct bd_info *bd = gd->bd;
 	ulong bank_end;
 	int bank;

@@ -135,12 +132,10 @@ void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align)
 	/* adjust sp by 4K to be safe */
 	sp -= align;
 	for (bank = 0; bank < CONFIG_NR_DRAM_BANKS; bank++) {
-		if (!gd->bd->bi_dram[bank].size ||
-		    sp < gd->bd->bi_dram[bank].start)
+		if (!bd->bi_dram[bank].size || sp < bd->bi_dram[bank].start)
 			continue;
 		/* Watch out for RAM at end of address space! */
-		bank_end = gd->bd->bi_dram[bank].start +
-			   gd->bd->bi_dram[bank].size - 1;
+		bank_end = bd->bi_dram[bank].start + bd->bi_dram[bank].size - 1;
 		if (sp > bank_end)
 			continue;
 		if (bank_end > end)
@@ -244,9 +239,11 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,

 	/* First try and coalesce this LMB with another. */
 	for (i = 0; i < rgn->cnt; i++) {
-		phys_addr_t rgnbase = rgn->region[i].base;
-		phys_size_t rgnsize = rgn->region[i].size;
-		phys_size_t rgnflags = rgn->region[i].flags;
+		struct lmb_property *prop = &rgn->region[i];
+
+		phys_addr_t rgnbase = prop->base;
+		phys_size_t rgnsize = prop->size;
+		phys_size_t rgnflags = prop->flags;
 		phys_addr_t end = base + size - 1;
 		phys_addr_t rgnend = rgnbase + rgnsize - 1;

@@ -262,14 +259,14 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 		if (adjacent > 0) {
 			if (flags != rgnflags)
 				break;
-			rgn->region[i].base -= size;
-			rgn->region[i].size += size;
+			prop->base -= size;
+			prop->size += size;
 			coalesced++;
 			break;
 		} else if (adjacent < 0) {
 			if (flags != rgnflags)
 				break;
-			rgn->region[i].size += size;
+			prop->size += size;
 			coalesced++;
 			break;
 		} else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) {
@@ -293,9 +290,7 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 	/* Couldn't coalesce the LMB, so add it to the sorted table. */
 	for (i = rgn->cnt-1; i >= 0; i--) {
 		if (base < rgn->region[i].base) {
-			rgn->region[i + 1].base = rgn->region[i].base;
-			rgn->region[i + 1].size = rgn->region[i].size;
-			rgn->region[i + 1].flags = rgn->region[i].flags;
+			rgn->region[i + 1] = rgn->region[i];
 		} else {
 			rgn->region[i + 1].base = base;
 			rgn->region[i + 1].size = size;
@@ -535,10 +530,11 @@ int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags)
 	int i;

 	for (i = 0; i < lmb->reserved.cnt; i++) {
-		phys_addr_t upper = lmb->reserved.region[i].base +
-			lmb->reserved.region[i].size - 1;
-		if ((addr >= lmb->reserved.region[i].base) && (addr <= upper))
-			return (lmb->reserved.region[i].flags & flags) == flags;
+		struct lmb_property *prop = &lmb->reserved.region[i];
+		phys_addr_t upper = prop->base + prop->size - 1;
+
+		if (addr >= prop->base && addr <= upper)
+			return (prop->flags & flags) == flags;
 	}

 	return 0;
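
As a standalone illustration of the two points in the commit message above (a local pointer to the element being accessed, and whole-struct assignment rather than copying members one at a time), here is a minimal sketch; the names are hypothetical and this is not code from the patch itself:

struct point {
	int x;
	int y;
	int flags;
};

/* Remove element 'from' by shifting the later entries down by one */
static void shift_down(struct point *arr, int count, int from)
{
	int i;

	for (i = from; i < count - 1; i++) {
		struct point *p = &arr[i];	/* avoids repeating arr[i] */

		*p = arr[i + 1];	/* struct assignment copies every member */
	}
}

One advantage of the whole-struct assignment is that any member added later is copied automatically, rather than being silently missed in a hand-written member-by-member copy.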

On 23.08.23 15:41, Simon Glass wrote:
In some cases it helps to define a local variable pointing to the structure being accessed. This avoids lots of repeated code.
There is no need to individually assign each struct member, so use a structure assignment instead.
Signed-off-by: Simon Glass sjg@chromium.org
 lib/lmb.c | 54 +++++++++++++++++++++++++-----------------------------
 1 file changed, 25 insertions(+), 29 deletions(-)

[snip]

@@ -244,9 +239,11 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 	/* First try and coalesce this LMB with another. */
 	for (i = 0; i < rgn->cnt; i++) {
-		phys_addr_t rgnbase = rgn->region[i].base;
-		phys_size_t rgnsize = rgn->region[i].size;
-		phys_size_t rgnflags = rgn->region[i].flags;
+		struct lmb_property *prop = &rgn->region[i];
Why call a region prop? Can't we call it "region" or "rgn_ptr" or if you want to allude to the position in the array "pos"? This would avoid confusion.
Best regards
Heinrich
[snip]

Hi Heinrich,
On Wed, 23 Aug 2023 at 09:44, Heinrich Schuchardt xypron.glpk@gmx.de wrote:
On 23.08.23 15:41, Simon Glass wrote:
In some cases it helps to define a local variable pointing to the structure being accessed. This avoids lots of repeated code.
There is no need to individually assign each struct member, so use a structure assignment instead.
Signed-off-by: Simon Glass sjg@chromium.org
 lib/lmb.c | 54 +++++++++++++++++++++++++-----------------------------
 1 file changed, 25 insertions(+), 29 deletions(-)

[snip]

@@ -244,9 +239,11 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 	/* First try and coalesce this LMB with another. */
 	for (i = 0; i < rgn->cnt; i++) {
-		phys_addr_t rgnbase = rgn->region[i].base;
-		phys_size_t rgnsize = rgn->region[i].size;
-		phys_size_t rgnflags = rgn->region[i].flags;
+		struct lmb_property *prop = &rgn->region[i];
Why call a region prop? Can't we call it "region" or "rgn_ptr" or if you want to allude to the position in the array "pos"? This would avoid confusion.
We always have rgn so that would be hopelessly confusing.
I am using prop just because that is the struct name.
I did not invent the confusion:
struct lmb_region {
	unsigned long cnt;
	unsigned long max;
	struct lmb_property region[CONFIG_LMB_MAX_REGIONS];
};
There is a region within a region, and also:
struct lmb {
	struct lmb_region memory;
	struct lmb_region reserved;
	struct lmb_property memory_regions[CONFIG_LMB_MEMORY_REGIONS];
	struct lmb_property reserved_regions[CONFIG_LMB_RESERVED_REGIONS];
};
I did consider trying to rename one of them...but 'region' is used extensively in the C code to mean the inner region. We could perhaps rename the outer 'region' to an 'area'?
Regards, Simon

Use int for loop counters, since that is more common. Do the same with the return values of some internal functions.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 lib/lmb.c | 36 +++++++++++++++++-------------------
 1 file changed, 17 insertions(+), 19 deletions(-)

diff --git a/lib/lmb.c b/lib/lmb.c
index 8b9a611c5216..2f7b9f7f15fb 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -53,8 +53,8 @@ void lmb_dump_all(struct lmb *lmb)
 #endif
 }

-static long lmb_addrs_overlap(phys_addr_t base1, phys_size_t size1,
-			      phys_addr_t base2, phys_size_t size2)
+static int lmb_addrs_overlap(phys_addr_t base1, phys_size_t size1,
+			     phys_addr_t base2, phys_size_t size2)
 {
 	const phys_addr_t base1_end = base1 + size1 - 1;
 	const phys_addr_t base2_end = base2 + size2 - 1;
@@ -62,8 +62,8 @@ static long lmb_addrs_overlap(phys_addr_t base1, phys_size_t size1,
 	return base1 <= base2_end && base2 <= base1_end;
 }

-static long lmb_addrs_adjacent(phys_addr_t base1, phys_size_t size1,
-			       phys_addr_t base2, phys_size_t size2)
+static int lmb_addrs_adjacent(phys_addr_t base1, phys_size_t size1,
+			      phys_addr_t base2, phys_size_t size2)
 {
 	if (base2 == base1 + size1)
 		return 1;
@@ -73,8 +73,7 @@ static long lmb_addrs_adjacent(phys_addr_t base1, phys_size_t size1,
 	return 0;
 }

-static long lmb_regions_adjacent(struct lmb_region *rgn, unsigned long r1,
-				 unsigned long r2)
+static int lmb_regions_adjacent(struct lmb_region *rgn, ulong r1, ulong r2)
 {
 	phys_addr_t base1 = rgn->region[r1].base;
 	phys_size_t size1 = rgn->region[r1].size;
@@ -86,7 +85,7 @@ static long lmb_regions_adjacent(struct lmb_region *rgn, unsigned long r1,

 static void lmb_remove_region(struct lmb_region *rgn, unsigned long r)
 {
-	unsigned long i;
+	uint i;

 	for (i = r; i < rgn->cnt - 1; i++)
 		rgn->region[i] = rgn->region[i + 1];
@@ -223,11 +222,11 @@ void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
 }

 /* This routine called with relocation disabled. */
-static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
-				 phys_size_t size, enum lmb_flags flags)
+static int lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
+				phys_size_t size, enum lmb_flags flags)
 {
-	unsigned long coalesced = 0;
-	long adjacent, i;
+	uint coalesced = 0;
+	int adjacent, i;

 	if (rgn->cnt == 0) {
 		rgn->region[0].base = base;
@@ -310,8 +309,8 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 	return 0;
 }

-static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base,
-			   phys_size_t size)
+static int lmb_add_region(struct lmb_region *rgn, phys_addr_t base,
+			  phys_size_t size)
 {
 	return lmb_add_region_flags(rgn, base, size, LMB_NONE);
 }
@@ -388,10 +387,10 @@ long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 	return lmb_reserve_flags(lmb, base, size, LMB_NONE);
 }

-static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base,
-				phys_size_t size)
+static int lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base,
+			       phys_size_t size)
 {
-	unsigned long i;
+	uint i;

 	for (i = 0; i < rgn->cnt; i++) {
 		phys_addr_t rgnbase = rgn->region[i].base;
@@ -430,7 +429,7 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)

 phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
 			     phys_addr_t max_addr)
 {
-	long i, rgn;
+	int i, rgn;
 	phys_addr_t base = 0;
 	phys_addr_t res_base;

@@ -500,8 +499,7 @@ phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 /* Return number of bytes from a given address that are free */
 phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr)
 {
-	int i;
-	long rgn;
+	int i, rgn;

 	/* check if the requested address is in the memory regions */
 	rgn = lmb_overlaps_region(&lmb->memory, addr, 1);

There is no need to return long when an int is enough to hold an error code or a region position; using long is just confusing.
Update the return-value types to keep to this rule.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 include/lmb.h | 10 +++++-----
 lib/lmb.c     | 10 +++++-----
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/lmb.h b/include/lmb.h
index 231b68b27d91..79a3a12155b4 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -96,8 +96,8 @@ void lmb_init(struct lmb *lmb);
 void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob);
 void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
 				phys_size_t size, void *fdt_blob);
-long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size);
-long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size);
+int lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size);
+int lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size);
 /**
  * lmb_reserve_flags - Reserve one region with a specific flags bitfield.
  *
@@ -107,8 +107,8 @@ long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size);
  * @flags: flags for the memory region
  * Return: 0 if OK, > 0 for coalesced region or a negative error code.
  */
-long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base,
-		       phys_size_t size, enum lmb_flags flags);
+int lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_size_t size,
+		      enum lmb_flags flags);
 phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align);
 phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
 			   phys_addr_t max_addr);
@@ -141,7 +141,7 @@ int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr);
  */
 int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags);

-long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size);
+int lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size);

 void lmb_dump_all(struct lmb *lmb);
 void lmb_dump_all_force(struct lmb *lmb);
diff --git a/lib/lmb.c b/lib/lmb.c
index 2f7b9f7f15fb..bf30e0dc90b2 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -316,14 +316,14 @@ static int lmb_add_region(struct lmb_region *rgn, phys_addr_t base,
 }

 /* This routine may be called with relocation disabled. */
-long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+int lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 {
 	struct lmb_region *_rgn = &(lmb->memory);

 	return lmb_add_region(_rgn, base, size);
 }

-long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+int lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 {
 	struct lmb_region *rgn = &(lmb->reserved);
 	phys_addr_t rgnbegin, rgnend;
@@ -374,15 +374,15 @@ long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 				    rgn->region[i].flags);
 }

-long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_size_t size,
-		       enum lmb_flags flags)
+int lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_size_t size,
+		      enum lmb_flags flags)
 {
 	struct lmb_region *_rgn = &(lmb->reserved);

 	return lmb_add_region_flags(_rgn, base, size, flags);
 }

-long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+int lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 {
 	return lmb_reserve_flags(lmb, base, size, LMB_NONE);
 }

Add a comment to lmb_overlaps_region() defining what it returns. Return a specific error code when no overlap is found.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 lib/lmb.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/lib/lmb.c b/lib/lmb.c
index bf30e0dc90b2..5f2ea45c3ba9 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -387,6 +387,13 @@ int lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 	return lmb_reserve_flags(lmb, base, size, LMB_NONE);
 }

+/**
+ * lmb_overlaps_region() - Check if a region overlaps a given base/size
+ *
+ * @base: base address of the memory region
+ * @size: size of the memory region
+ * Returns: Region number of overlapping region, if found, else -ENOENT
+ */
 static int lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base,
 			       phys_size_t size)
 {
@@ -396,10 +403,10 @@ static int lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base,
 		phys_addr_t rgnbase = rgn->region[i].base;
 		phys_size_t rgnsize = rgn->region[i].size;
 		if (lmb_addrs_overlap(base, size, rgnbase, rgnsize))
-			break;
+			return i;
 	}

-	return i < rgn->cnt ? i : -1;
+	return -ENOENT;
 }

 phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align)

Update lmb_add_region_flags() to return an error code, so callers can tell when it has simply run out of space in its region list. Make lmb_addrs_overlap() return a bool.
Add a few function comments while we are here.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 lib/lmb.c | 47 +++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 41 insertions(+), 6 deletions(-)

diff --git a/lib/lmb.c b/lib/lmb.c
index 5f2ea45c3ba9..340681961826 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -53,8 +53,17 @@ void lmb_dump_all(struct lmb *lmb)
 #endif
 }

-static int lmb_addrs_overlap(phys_addr_t base1, phys_size_t size1,
-			     phys_addr_t base2, phys_size_t size2)
+/**
+ * lmb_addrs_overlap() - Compare two address ranges for overlap
+ *
+ * @base1: base address of region 1
+ * @size1: size of the region 1
+ * @base2: base address of region 2
+ * @size2: size of the region 2
+ * Returns: true if the regions overlap, else false
+ */
+static bool lmb_addrs_overlap(phys_addr_t base1, phys_size_t size1,
+			      phys_addr_t base2, phys_size_t size2)
 {
 	const phys_addr_t base1_end = base1 + size1 - 1;
 	const phys_addr_t base2_end = base2 + size2 - 1;
@@ -62,6 +71,17 @@ static int lmb_addrs_overlap(phys_addr_t base1, phys_size_t size1,
 	return base1 <= base2_end && base2 <= base1_end;
 }

+/**
+ * lmb_addrs_adjacent() - Compare two address ranges for adjacency
+ *
+ * @base1: base address of region 1
+ * @size1: size of the region 1
+ * @base2: base address of region 2
+ * @size2: size of the region 2
+ * Returns: 1 if region 2 starts at the end of region 1,
+ *	-1 if region 1 starts at the end of region 2,
+ *	else 0
+ */
 static int lmb_addrs_adjacent(phys_addr_t base1, phys_size_t size1,
 			      phys_addr_t base2, phys_size_t size2)
 {
@@ -221,7 +241,22 @@ void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
 	lmb_reserve_common(lmb, fdt_blob);
 }

-/* This routine called with relocation disabled. */
+/**
+ * lmb_add_region_flags() - Add a region or coalesce with another similar one
+ *
+ * This will coalesce with an existing region so long as @flags is the same.
+ *
+ * This routine is called with relocation disabled.
+ *
+ * @rgn: Region to process
+ * @base: base address of the memory region to add
+ * @size: size of the memory region to add
+ * @flags: flags for this new memory region
+ * Returns: 0 if OK, -EBUSY if an existing enveloping region has different
+ * flags, -EPERM if there is an existing non-adjacent region, -ENOSPC if there
+ * is no more room in the list of regions, else the region number that was
+ * coalesced with this one
+ */
 static int lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 				phys_size_t size, enum lmb_flags flags)
 {
@@ -251,7 +286,7 @@ static int lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 				/* Already have this region, so we're done */
 				return 0;
 			else
-				return -1; /* regions with new flags */
+				return -EBUSY; /* regions with new flags */
 		}

 		adjacent = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
@@ -270,7 +305,7 @@ static int lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 			break;
 		} else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) {
 			/* regions overlap */
-			return -1;
+			return -EPERM;
 		}
 	}

@@ -284,7 +319,7 @@ static int lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 	if (coalesced)
 		return coalesced;
 	if (rgn->cnt >= rgn->max)
-		return -1;
+		return -ENOSPC;

 	/* Couldn't coalesce the LMB, so add it to the sorted table. */
 	for (i = rgn->cnt-1; i >= 0; i--) {

When lmb runs out of space in its internal tables, it gives errors on every fs load operation. This is horribly confusing, as the poor user tries different memory regions one at a time.
Use the updated API to check the error code and print a helpful message. Also, allow the operation to proceed.
Update the tests accordingly.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 fs/fs.c        |  7 ++++++-
 include/lmb.h  | 19 ++++++++++++++++++-
 lib/lmb.c      | 20 +++++++++++---------
 test/lib/lmb.c | 42 ++++++++++++++++++++----------------------
 4 files changed, 55 insertions(+), 33 deletions(-)

diff --git a/fs/fs.c b/fs/fs.c
index 2b815b1db0fe..1a84cb410bf9 100644
--- a/fs/fs.c
+++ b/fs/fs.c
@@ -569,8 +569,13 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset,
 	lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
 	lmb_dump_all(&lmb);

-	if (lmb_alloc_addr(&lmb, addr, read_len) == addr)
+	ret = lmb_alloc_addr(&lmb, addr, read_len, NULL);
+	if (!ret)
 		return 0;
+	if (ret == -E2BIG) {
+		log_warning("Reservation tables exhausted: see CONFIG_LMB_USE_MAX_REGIONS\n");
+		return 0;
+	}

 	log_err("** Reading file would overwrite reserved memory **\n");
 	return -ENOSPC;
diff --git a/include/lmb.h b/include/lmb.h
index 79a3a12155b4..2060280aff54 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -114,7 +114,24 @@ phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
 			   phys_addr_t max_addr);
 phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
 			     phys_addr_t max_addr);
-phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size);
+
+/**
+ * lmb_alloc_addr() - Allocate memory within a region
+ *
+ * Try to allocate a specific address range: must be in defined memory but not
+ * reserved
+ *
+ * @lmb: the logical memory block struct
+ * @base: base address of the memory region
+ * @size: size of the memory region
+ * @addrp: if non-NULL, returns the allocated address, on success
+ * Return: 0 if OK, -EPERM if the memory is already allocated, -E2BIG if
+ * there is not enough room in the reservation tables, so
+ * CONFIG_LMB_USE_MAX_REGIONS should be increased
+ */
+int lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size,
+		   phys_addr_t *addrp);
+
 phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);

 /**
diff --git a/lib/lmb.c b/lib/lmb.c
index 340681961826..6bb8fa3829d3 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -253,7 +253,7 @@ void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
  * @size: size of the memory region to add
  * @flags: flags for this new memory region
  * Returns: 0 if OK, -EBUSY if an existing enveloping region has different
- * flags, -EPERM if there is an existing non-adjacent region, -ENOSPC if there
+ * flags, -EPERM if there is an existing non-adjacent region, -E2BIG if there
  * is no more room in the list of regions, else the region number that was
  * coalesced with this one
  */
@@ -319,7 +319,7 @@ static int lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 	if (coalesced)
 		return coalesced;
 	if (rgn->cnt >= rgn->max)
-		return -ENOSPC;
+		return -E2BIG;

 	/* Couldn't coalesce the LMB, so add it to the sorted table. */
 	for (i = rgn->cnt-1; i >= 0; i--) {
@@ -511,11 +511,8 @@ phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
 	return 0;
 }

-/*
- * Try to allocate a specific address range: must be in defined memory but not
- * reserved
- */
-phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+int lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size,
+		   phys_addr_t *addrp)
 {
 	long rgn;

@@ -529,9 +526,14 @@ phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 		if (lmb_addrs_overlap(lmb->memory.region[rgn].base,
 				      lmb->memory.region[rgn].size,
 				      base + size - 1, 1)) {
+			int ret;
+
 			/* ok, reserve the memory */
-			if (lmb_reserve(lmb, base, size) >= 0)
-				return base;
+			ret = lmb_reserve(lmb, base, size);
+			if (ret < 0)
+				return ret;
+			if (addrp)
+				*addrp = base;
 		}
 	}

diff --git a/test/lib/lmb.c b/test/lib/lmb.c
index 162887503588..9ab5fe593ebd 100644
--- a/test/lib/lmb.c
+++ b/test/lib/lmb.c
@@ -497,22 +497,22 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 		   alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);

 	/* allocate blocks */
-	a = lmb_alloc_addr(&lmb, ram, alloc_addr_a - ram);
+	ut_assertok(lmb_alloc_addr(&lmb, ram, alloc_addr_a - ram, &a));
 	ut_asserteq(a, ram);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, ram, 0x8010000,
 		   alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
-	b = lmb_alloc_addr(&lmb, alloc_addr_a + 0x10000,
-			   alloc_addr_b - alloc_addr_a - 0x10000);
+	ut_assertok(lmb_alloc_addr(&lmb, alloc_addr_a + 0x10000,
+				   alloc_addr_b - alloc_addr_a - 0x10000, &b));
 	ut_asserteq(b, alloc_addr_a + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x10010000,
 		   alloc_addr_c, 0x10000, 0, 0);
-	c = lmb_alloc_addr(&lmb, alloc_addr_b + 0x10000,
-			   alloc_addr_c - alloc_addr_b - 0x10000);
+	ut_assertok(lmb_alloc_addr(&lmb, alloc_addr_b + 0x10000,
+				   alloc_addr_c - alloc_addr_b - 0x10000, &c));
 	ut_asserteq(c, alloc_addr_b + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
-	d = lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000,
-			   ram_end - alloc_addr_c - 0x10000);
+	ut_assertok(lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000,
+				   ram_end - alloc_addr_c - 0x10000, &d));
 	ut_asserteq(d, alloc_addr_c + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
@@ -528,7 +528,7 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)

 	/* allocate at 3 points in free range */

-	d = lmb_alloc_addr(&lmb, ram_end - 4, 4);
+	ut_assertok(lmb_alloc_addr(&lmb, ram_end - 4, 4, &d));
 	ut_asserteq(d, ram_end - 4);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000,
 		   d, 4, 0, 0);
@@ -537,7 +537,7 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);

-	d = lmb_alloc_addr(&lmb, ram_end - 128, 4);
+	ut_assertok(lmb_alloc_addr(&lmb, ram_end - 128, 4, &d));
 	ut_asserteq(d, ram_end - 128);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000,
 		   d, 4, 0, 0);
@@ -546,7 +546,7 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);

-	d = lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000, 4);
+	ut_assertok(lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000, 4, &d));
 	ut_asserteq(d, alloc_addr_c + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010004, 0, 0, 0, 0);
@@ -560,19 +560,19 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + 0x8000000, 0x10010000,
 		   0, 0, 0, 0);
-	d = lmb_alloc_addr(&lmb, ram, 4);
+	ut_assertok(lmb_alloc_addr(&lmb, ram, 4, &d));
 	ut_asserteq(d, ram);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, d, 4,
 		   ram + 0x8000000, 0x10010000, 0, 0);

 	/* check that allocating outside memory fails */
 	if (ram_end != 0) {
-		ret = lmb_alloc_addr(&lmb, ram_end, 1);
-		ut_asserteq(ret, 0);
+		ut_assertok(lmb_alloc_addr(&lmb, ram_end, 1, &a));
+		ut_asserteq(0x40000000, a);
 	}
 	if (ram != 0) {
-		ret = lmb_alloc_addr(&lmb, ram - 1, 1);
-		ut_asserteq(ret, 0);
+		ut_assertok(lmb_alloc_addr(&lmb, ram - 1, 1, &a));
+		ut_asserteq(ram, a);
 	}

 	return 0;
@@ -588,7 +588,7 @@ static int lib_test_lmb_alloc_addr(struct unit_test_state *uts)
 		return ret;

 	/* simulate 512 MiB RAM beginning at 1.5GiB */
-	return test_alloc_addr(uts, 0xE0000000);
+	return test_alloc_addr(uts, 0xe0000000);
 }

 DM_TEST(lib_test_lmb_alloc_addr, UT_TESTF_SCAN_PDATA | UT_TESTF_SCAN_FDT);
@@ -699,8 +699,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)

 	/* error for the (CONFIG_LMB_MAX_REGIONS + 1) memory regions */
 	offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * ram_size;
-	ret = lmb_add(&lmb, offset, ram_size);
-	ut_asserteq(ret, -1);
+	ut_asserteq(-E2BIG, lmb_add(&lmb, offset, ram_size));

 	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
 	ut_asserteq(lmb.reserved.cnt, 0);
@@ -717,8 +716,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)

 	/* error for the 9th reserved blocks */
 	offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * blk_size;
-	ret = lmb_reserve(&lmb, offset, blk_size);
-	ut_asserteq(ret, -1);
+	ut_asserteq(-E2BIG, lmb_reserve(&lmb, offset, blk_size));

 	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
 	ut_asserteq(lmb.reserved.cnt, CONFIG_LMB_MAX_REGIONS);
@@ -762,8 +760,8 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 		   0, 0, 0, 0);

 	/* reserve again, new flag */
-	ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NONE);
-	ut_asserteq(ret, -1);
+	ut_asserteq(-EBUSY,
+		    lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NONE));
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
 		   0, 0, 0, 0);

On Wed, Aug 23, 2023 at 07:42:03AM -0600, Simon Glass wrote:
When lmb runs out of space in its internal tables, it gives errors on every fs load operation. This is horribly confusing, as the poor user tries different memory regions one at a time.
Use the updated API to check the error code and print a helpful message. Also, allow the operation to proceed.
Update the tests accordingly.
Signed-off-by: Simon Glass sjg@chromium.org
[snip]
+	if (ret == -E2BIG) {
+		log_warning("Reservation tables exhausted: see CONFIG_LMB_USE_MAX_REGIONS\n");
+		return 0;
+	}
This isn't the right option. Everyone has CONFIG_LMB_USE_MAX_REGIONS set. You would want to increase CONFIG_LMB_MAX_REGIONS.
But it sounds like what you've found and fixed is an underlying problem as to why 16 regions isn't enough in some common cases now? So we could possibly avoid the string size growth here and have a comment that also matches up with the help under LMB_MAX_REGIONS?

Hi Tom,
On Wed, 23 Aug 2023 at 09:14, Tom Rini trini@konsulko.com wrote:
On Wed, Aug 23, 2023 at 07:42:03AM -0600, Simon Glass wrote:
When lmb runs out of space in its internal tables, it gives errors on every fs load operation. This is horribly confusing, as the poor user tries different memory regions one at a time.
Use the updated API to check the error code and print a helpful message. Also, allow the operation to proceed.
Update the tests accordingly.
Signed-off-by: Simon Glass sjg@chromium.org
[snip]
+	if (ret == -E2BIG) {
+		log_warning("Reservation tables exhausted: see CONFIG_LMB_USE_MAX_REGIONS\n");
+		return 0;
+	}
This isn't the right option. Everyone has CONFIG_LMB_USE_MAX_REGIONS set. You would want to increase CONFIG_LMB_MAX_REGIONS.
But it sounds like what you've found and fixed is an underlying problem as to why 16 regions isn't enough in some common cases now? So we could
I don't think I have fixed anything. But I'll send v2 and perhaps it will be clearer what is going on here.
possibly avoid the string size growth here and have a comment that also matches up with the help under LMB_MAX_REGIONS?
I don't know, sorry. The size of struct lmb on 64-bit sandbox is about 400 bytes. There seems to be a lot of code to save not much memory.
Regards, Simon
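
As a rough plausibility check of that ~400-byte figure, here is a minimal standalone sketch; the 8-entry table size, the 64-bit typedefs and the two-table layout are assumptions for illustration, not taken from the actual U-Boot headers or any particular defconfig:

#include <stdio.h>

typedef unsigned long long phys_addr_t;	/* assuming 64-bit sandbox */
typedef unsigned long long phys_size_t;

#define LMB_MAX_REGIONS	8	/* assumed value of CONFIG_LMB_MAX_REGIONS */

struct lmb_property {
	phys_addr_t base;
	phys_size_t size;
	unsigned int flags;
};

struct lmb_region {
	unsigned long cnt;
	unsigned long max;
	struct lmb_property region[LMB_MAX_REGIONS];
};

struct lmb {
	struct lmb_region memory;
	struct lmb_region reserved;
};

int main(void)
{
	/* With the assumptions above this prints 416 on a typical LP64 machine */
	printf("sizeof(struct lmb) = %zu\n", sizeof(struct lmb));

	return 0;
}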

On Thu, Aug 31, 2023 at 06:15:19PM -0600, Simon Glass wrote:
Hi Tom,
On Wed, 23 Aug 2023 at 09:14, Tom Rini trini@konsulko.com wrote:
On Wed, Aug 23, 2023 at 07:42:03AM -0600, Simon Glass wrote:
When lmb runs out of space in its internal tables, it gives errors on every fs load operation. This is horribly confusing, as the poor user tries different memory regions one at a time.
Use the updated API to check the error code and print a helpful message. Also, allow the operation to proceed.
Update the tests accordingly.
Signed-off-by: Simon Glass sjg@chromium.org
[snip]
+	if (ret == -E2BIG) {
+		log_warning("Reservation tables exhausted: see CONFIG_LMB_USE_MAX_REGIONS\n");
+		return 0;
+	}
This isn't the right option. Everyone has CONFIG_LMB_USE_MAX_REGIONS set. You would want to increase CONFIG_LMB_MAX_REGIONS.
But it sounds like what you've found and fixed is an underlying problem as to why 16 regions isn't enough in some common cases now? So we could
I don't think I have fixed anything. But I'll send v2 and perhaps it will be clearer what is going on here.
possibly avoid the string size growth here and have a comment that also matches up with the help under LMB_MAX_REGIONS?
I don't know, sorry. The size of struct(lmb) on 64-bit sandbox is about 400 bytes. There seems to be a lot of code to save not much memory.
What do you mean here? The alternative is not unlimited ranges but instead to define the limit of memory regions and limit of reserved ranges.

Hi Tom,
On Sun, 3 Sept 2023 at 10:42, Tom Rini trini@konsulko.com wrote:
On Thu, Aug 31, 2023 at 06:15:19PM -0600, Simon Glass wrote:
Hi Tom,
On Wed, 23 Aug 2023 at 09:14, Tom Rini trini@konsulko.com wrote:
On Wed, Aug 23, 2023 at 07:42:03AM -0600, Simon Glass wrote:
When lmb runs out of space in its internal tables, it gives errors on every fs load operation. This is horribly confusing, as the poor user tries different memory regions one at a time.
Use the updated API to check the error code and print a helpful message. Also, allow the operation to proceed.
Update the tests accordingly.
Signed-off-by: Simon Glass sjg@chromium.org
[snip]
+	if (ret == -E2BIG) {
+		log_warning("Reservation tables exhausted: see CONFIG_LMB_USE_MAX_REGIONS\n");
+		return 0;
+	}
This isn't the right option. Everyone has CONFIG_LMB_USE_MAX_REGIONS set. You would want to increase CONFIG_LMB_MAX_REGIONS.
But it sounds like what you've found and fixed is an underlying problem as to why 16 regions isn't enough in some common cases now? So we could
I don't think I have fixed anything. But I'll send v2 and perhaps it will be clearer what is going on here.
possibly avoid the string size growth here and have a comment that also matches up with the help under LMB_MAX_REGIONS?
I don't know, sorry. The size of struct(lmb) on 64-bit sandbox is about 400 bytes. There seems to be a lot of code to save not much memory.
What do you mean here? The alternative is not unlimited ranges but instead to define the limit of memory regions and limit of reserved ranges.
A better alternative would be not to limit the number of regions, IMO, i.e. have an array of regions that can grow as needed.
Regards, Simon
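
Purely as an illustration of that "array that can grow as needed" idea, here is a minimal sketch; the names (region_list, region_list_add) are hypothetical and this is not the arraylist code Simon mentions later in the thread:

#include <stdlib.h>

struct region {
	unsigned long long base;
	unsigned long long size;
	unsigned int flags;
};

struct region_list {
	struct region *item;	/* heap-allocated, grows on demand */
	unsigned int cnt;
	unsigned int max;
};

/* Append a region, doubling the allocation when the list is full */
static int region_list_add(struct region_list *lst, unsigned long long base,
			   unsigned long long size, unsigned int flags)
{
	if (lst->cnt == lst->max) {
		unsigned int new_max = lst->max ? lst->max * 2 : 8;
		struct region *tmp = realloc(lst->item, new_max * sizeof(*tmp));

		if (!tmp)
			return -1;	/* fails only when out of memory */
		lst->item = tmp;
		lst->max = new_max;
	}

	lst->item[lst->cnt].base = base;
	lst->item[lst->cnt].size = size;
	lst->item[lst->cnt].flags = flags;
	lst->cnt++;

	return 0;
}

In U-Boot this would also have to cope with lmb being used before relocation, where heap allocation may not be available, which is part of the trade-off being discussed here.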

On Sun, Sep 03, 2023 at 12:06:13PM -0600, Simon Glass wrote:
Hi Tom,
On Sun, 3 Sept 2023 at 10:42, Tom Rini trini@konsulko.com wrote:
On Thu, Aug 31, 2023 at 06:15:19PM -0600, Simon Glass wrote:
Hi Tom,
On Wed, 23 Aug 2023 at 09:14, Tom Rini trini@konsulko.com wrote:
On Wed, Aug 23, 2023 at 07:42:03AM -0600, Simon Glass wrote:
When lmb runs out of space in its internal tables, it gives errors on every fs load operation. This is horribly confusing, as the poor user tries different memory regions one at a time.
Use the updated API to check the error code and print a helpful message. Also, allow the operation to proceed.
Update the tests accordingly.
Signed-off-by: Simon Glass sjg@chromium.org
[snip]
+	if (ret == -E2BIG) {
+		log_warning("Reservation tables exhausted: see CONFIG_LMB_USE_MAX_REGIONS\n");
+		return 0;
+	}
This isn't the right option. Everyone has CONFIG_LMB_USE_MAX_REGIONS set. You would want to increase CONFIG_LMB_MAX_REGIONS.
But it sounds like what you've found and fixed is an underlying problem as to why 16 regions isn't enough in some common cases now? So we could
I don't think I have fixed anything. But I'll send v2 and perhaps it will be clearer what is going on here.
possibly avoid the string size growth here and have a comment that also matches up with the help under LMB_MAX_REGIONS?
I don't know, sorry. The size of struct(lmb) on 64-bit sandbox is about 400 bytes. There seems to be a lot of code to save not much memory.
What do you mean here? The alternative is not unlimited ranges but instead to define the limit of memory regions and limit of reserved ranges.
A better alternative would be not to limit the number of regions, IMO, i.e. have an array of regions that can grow as needed.
That's not something that we have in the code today, however.

Hi Tom,
On Sun, 3 Sept 2023 at 13:25, Tom Rini trini@konsulko.com wrote:
[snip]
A better alternative would be not to limit the number of regions, IMO, i.e. have an array of regions that can grow as needed.
That's not something that we have in the code today, however.
No, I do have an arraylist thing that I plan to upstream, which could handle that.
But for this series, what do you think of the memregion idea? I am still unsure of the saming we should use - see Heinrich's comments too.
Regards, Simon

On Tue, Sep 05, 2023 at 12:16:56PM -0600, Simon Glass wrote:
[snip]
No, I do have an arraylist thing that I plan to upstream, which could handle that.
But for this series, what do you think of the memregion idea? I am still unsure of the naming we should use - see Heinrich's comments too.
My biggest worry here is that we're papering over a real issue, and should either dig at that and see what's going on (should something be coalescing adjacent allocations? Were many platforms just right at the previous limit?) or just bump the default from 16 to "64 if EFI_LOADER" and then see if anything else really needs tweaking / cleaning up in the code itself. I know Heinrich has previously said the LMB system could be better (or something to that effect, I'm going from memory, sorry if I'm mis-stating things) and I don't object to replacing what we have with something more robust/smarter/etc.

Hi Tom,
On Wed, 6 Sept 2023 at 11:58, Tom Rini trini@konsulko.com wrote:
On Tue, Sep 05, 2023 at 12:16:56PM -0600, Simon Glass wrote:
Hi Tom,
On Sun, 3 Sept 2023 at 13:25, Tom Rini trini@konsulko.com wrote:
On Sun, Sep 03, 2023 at 12:06:13PM -0600, Simon Glass wrote:
Hi Tom,
On Sun, 3 Sept 2023 at 10:42, Tom Rini trini@konsulko.com wrote:
On Thu, Aug 31, 2023 at 06:15:19PM -0600, Simon Glass wrote:
Hi Tom,
On Wed, 23 Aug 2023 at 09:14, Tom Rini trini@konsulko.com wrote: > > On Wed, Aug 23, 2023 at 07:42:03AM -0600, Simon Glass wrote: > > > When lmb runs out of space in its internal tables, it gives
errors on
> > every fs load operation. This is horribly confusing, as the
poor user
> > tries different memory regions one at a time. > > > > Use the updated API to check the error code and print a helpful
message.
> > Also, allow the operation to proceed. > > > > Update the tests accordingly. > > > > Signed-off-by: Simon Glass sjg@chromium.org > [snip] > > + if (ret == -E2BIG) { > > + log_warning("Reservation tables exhausted: see
CONFIG_LMB_USE_MAX_REGIONS\n");
> > + return 0; > > + } > > This isn't the right option. Everyone has
CONFIG_LMB_USE_MAX_REGIONS
> set. You would want to increase CONFIG_LMB_MAX_REGIONS. > > But it sounds like what you've found and fixed is an underlying
problem
> as to why 16 regions isn't enough in some common cases now? So
we could
I don't think I have fixed anything. But I'll send v2 and perhaps it will be clearer what is going on here.
> possibly avoid the string size growth here and have a comment
that also
> matches up with the help under LMB_MAX_REGIONS?
I don't know, sorry. The size of struct(lmb) on 64-bit sandbox is about 400 bytes. There seems to be a lot of code to save not much memory.
What do you mean here? The alternative is not unlimited ranges but instead to define the limit of memory regions and limit of reserved ranges.
A better alternative would be not to limit the number of regions, IMO, i.e. have an array of regions that can grow as needed.
That's not something that we have in the code today, however.
No, I do have an arraylist thing that I plan to upstream, which could handle that.
But for this series, what do you think of the memregion idea? I am still unsure of the saming we should use - see Heinrich's comments too.
My biggest worry here is that we're papering over a real issue, and should either dig at that and see what's going on (should something be coalescing adjacent allocations? Were many platforms just right at the previous limit?) or just bump the default from 16 to "64 if EFI_LOADER" and then see if anything else really needs tweaking / cleaning up in the code itself. I know Heinrich has previously said the LMB system could be better (or something to that effect, I'm going from memory, sorry if I'm mis-stating things) and I don't object to replacing what we have with something more robust/smarter/etc.
I am not sure...my series was designed to rename the code to reduce confusion, and print a useful message when we run out of regions. It does not paper over the problem, but actually makes it more visible.
Regards, Simon

On Thu, Sep 07, 2023 at 06:23:06AM -0600, Simon Glass wrote:
[snip]
I am not sure...my series was designed to rename the code to reduce confusion, and print a useful message when we run out of regions. It does not paper over the problem, but actually makes it more visible.
Well, "papering over" maybe wasn't the best choice of words on my part. But since the series of events was something like: - We nee to use LMB to cover my U-Boot regions to address a handful of CVEs over arbitrarily overwriting U-Boot at run-time. - AFAICT no platforms suddenly ran out of LMB reservation space, but maybe I'm wrong? - Someone noted that the EFI subsystem was exposing the same kind of issue. - Heinrich adds logic for EFI_LOADER (iirc) to also add allocations there to LMB - Assorted platforms start changing the max allocation to 64 to fix the problems that get reported sometimes by booting. - Heinrich notes that our memory reservation system (LMB) could be better designed than it is today. - And, iirc, EFI_LOADER or similar has something maybe we can leverage?
Which brings me to what I was trying to say. I'm not sure it's worth cleaning up the code a little, or refactoring/renaming things within it without both: - Understanding why EFI_LOADER being enabled causes us to run out of allocations and so if the answer is "increase the default" OR "fix some underlying issue such as coalescing being too strict or broken". - Understanding if there's a better memory reservation system we can use instead.

Hi Tom,
On Thu, 7 Sept 2023 at 13:40, Tom Rini trini@konsulko.com wrote:
[snip]
Well, "papering over" maybe wasn't the best choice of words on my part. But since the series of events was something like:
- We need to use LMB to cover my U-Boot regions to address a handful of CVEs over arbitrarily overwriting U-Boot at run-time.
- AFAICT no platforms suddenly ran out of LMB reservation space, but maybe I'm wrong?
- Someone noted that the EFI subsystem was exposing the same kind of issue.
- Heinrich adds logic for EFI_LOADER (iirc) to also add allocations there to LMB
- Assorted platforms start changing the max allocation to 64 to fix the problems that get reported sometimes by booting.
- Heinrich notes that our memory reservation system (LMB) could be better designed than it is today.
- And, iirc, EFI_LOADER or similar has something maybe we can leverage?
Which brings me to what I was trying to say. I'm not sure it's worth cleaning up the code a little, or refactoring/renaming things within it without both:
- Understanding why EFI_LOADER being enabled causes us to run out of allocations and so if the answer is "increase the default" OR "fix some underlying issue such as coalescing being too strict or broken".
- Understanding if there's a better memory reservation system we can use instead.
Perhaps Heinrich has some thoughts on that and/or the memregion question. EFI loves memory regions so perhaps it needs more?
But it should report a useful error, not the silly one it shows today.
Regards, Simon
participants (3):
- Heinrich Schuchardt
- Simon Glass
- Tom Rini