[PATCH 00/40] Make LMB memory map global and persistent

This is a follow-up to an earlier RFC series [1] for making the LMB and EFI memory allocations work together. This is a non-RFC version containing only the LMB part of the patches, which makes the LMB memory map global and persistent.
This is part one of a set of patches which aim to have the LMB and EFI memory allocations work together. This requires making the LMB memory map global and persistent, instead of having local, caller-specific maps. This is done keeping in mind how platforms use LMB memory, where the same memory region can be used to load multiple different images. What is not allowed is overwriting memory that has been allocated by the other module, currently the EFI memory module. This is achieved by introducing a new flag, LMB_NOOVERWRITE, which marks memory that cannot be re-requested once allocated.
The data structures (alloced lists) required for maintaining the LMB map are initialised during board init. The LMB module is enabled by default for the main U-Boot image, while it needs to be enabled explicitly for SPL. The allocation APIs have been tweaked to take a flags argument, which passes any flags that might be associated with an LMB memory region. This avoids having a separate set of APIs for the EFI memory module, thus saving on space.
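The LMB_NOOVERWRITE semantics described above can be modelled in isolation roughly as follows. This is an illustrative toy, not the actual lib/lmb.c implementation: all names (`region_reserve`, `REGION_NOOVERWRITE`, etc.) are made up, and the real code uses sorted region lists rather than a flat slot array. The key behaviour it mirrors is that a plain reservation of the same range may succeed again, while a NOOVERWRITE region fails any further request:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flag mirroring LMB_NOOVERWRITE semantics */
enum region_flags {
	REGION_NONE        = 0,
	REGION_NOOVERWRITE = 1 << 0,	/* cannot be re-requested once allocated */
};

struct region {
	unsigned long base;
	unsigned long size;
	int flags;
	bool used;
};

static struct region map[8];

/* Reserve [base, base + size); return 0 on success, -1 on conflict */
static int region_reserve(unsigned long base, unsigned long size, int flags)
{
	int i, free_slot = -1;

	for (i = 0; i < 8; i++) {
		if (!map[i].used) {
			if (free_slot < 0)
				free_slot = i;
			continue;
		}
		/* does the request overlap an existing region? */
		if (base < map[i].base + map[i].size &&
		    map[i].base < base + size) {
			/* a NOOVERWRITE region can never be re-requested */
			if (map[i].flags & REGION_NOOVERWRITE)
				return -1;
			/* re-requesting the same plain region is allowed */
			if (map[i].base == base && map[i].size == size &&
			    map[i].flags == flags)
				return 0;
			return -1;
		}
	}
	if (free_slot < 0)
		return -1;
	map[free_slot] = (struct region){ base, size, flags, true };
	return 0;
}
```

This matches the cover letter's description of reusable load regions versus EFI-owned memory that must not be overwritten.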
The tests have been tweaked where needed because of these changes.
The second part of the patches, to be sent subsequently, will have the EFI allocations work through the LMB APIs.
[1] - https://lore.kernel.org/u-boot/20240704073544.670249-1-sughosh.ganu@linaro.o...
Changes since rfc:
* Squash patches 9 - 11, 13 from the rfc v2 series into a single patch to make it bisectable.
* Add a function comment for lmb_add_region_flags().
* Change the wording of a comment in lmb_merge_overlap_regions() as per review comment from Simon Glass.
* Replace conditional compilation of lmb code to an if (CONFIG_IS_ENABLED()) in store_block() and tftp_init_load_addr().
* Add a function for reserving common areas in SPL, lmb_reserve_common_spl()
* Enable SPL config for sandbox_noinst
* Change the lmb_mem_regions_init() function to have it called from the lmb tests as well.
Simon Glass (4):
  malloc: Support testing with realloc()
  lib: Handle a special case with str_to_list()
  alist: Add support for an allocated pointer list
  lib: Convert str_to_list() to use alist
Sughosh Ganu (36):
  spl: alloc: call full alloc functions if malloc pool is available
  alist: add a couple of helper functions
  lmb: ut: disable unit tests for lmb changes
  lmb: remove the unused lmb_is_reserved() function
  lmb: staticize __lmb_alloc_base()
  lmb: use the BIT macro for lmb flags
  lmb: make LMB memory map persistent and global
  lmb: allow for resizing lmb regions
  lmb: remove config symbols used for lmb region count
  test: lmb: remove the test for max regions
  lmb: config: add lmb config symbols for SPL
  lmb: allow lmb module to be used in SPL
  lmb: introduce a function to add memory to the lmb memory map
  lmb: remove the lmb_init_and_reserve() function
  lmb: reserve common areas during board init
  lmb: remove lmb_init_and_reserve_range() function
  lmb: bootm: remove superfluous lmb stub functions
  lmb: init: initialise the lmb data structures during board init
  lmb: add a flags parameter to the API's
  lmb: add a common implementation of arch_lmb_reserve()
  sandbox: move the TCG event log to the start of ram memory
  spl: call spl_board_init() at the end of the spl init sequence
  spl: sandbox: initialise the ram banksize in spl
  lmb: config: make lmb config symbol def_bool
  sandbox: spl: enable lmb config for SPL
  sandbox: iommu: remove lmb allocation in the driver
  zynq: lmb: do not add to lmb map before relocation
  stm32mp: do not add lmb memory before relocation
  test: cedit: use allocated address for reading file
  test: lmb: tweak the tests for the persistent lmb memory map
  test: lmb: run lmb tests only manually
  test: lmb: add a separate class of unit tests for lmb
  test: lmb: invoke the LMB unit tests from a separate script
  test: bdinfo: dump the global LMB memory map
  sandbox: adjust load address of couple of tests
  lmb: ut: re-enable unit tests
 arch/arc/lib/cache.c                        |  14 -
 arch/arm/lib/stack.c                        |  14 -
 arch/arm/mach-apple/board.c                 |  20 +-
 arch/arm/mach-snapdragon/board.c            |  17 +-
 arch/arm/mach-stm32mp/dram_init.c           |  13 +-
 arch/arm/mach-stm32mp/stm32mp1/cpu.c        |   7 +-
 arch/m68k/lib/bootm.c                       |  20 +-
 arch/microblaze/lib/bootm.c                 |  14 -
 arch/mips/lib/bootm.c                       |  22 +-
 arch/nios2/lib/bootm.c                      |  13 -
 arch/powerpc/cpu/mpc85xx/mp.c               |   4 +-
 arch/powerpc/include/asm/mp.h               |   4 +-
 arch/powerpc/lib/bootm.c                    |  25 +-
 arch/riscv/lib/bootm.c                      |  13 -
 arch/sandbox/cpu/spl.c                      |  13 +-
 arch/sandbox/dts/test.dts                   |   2 +-
 arch/sh/lib/bootm.c                         |  13 -
 arch/x86/lib/bootm.c                        |  18 -
 arch/xtensa/lib/bootm.c                     |  13 -
 board/xilinx/common/board.c                 |  33 -
 boot/bootm.c                                |  39 +-
 boot/bootm_os.c                             |   5 +-
 boot/image-board.c                          |  47 +-
 boot/image-fdt.c                            |  37 +-
 cmd/bdinfo.c                                |   5 +-
 cmd/booti.c                                 |   2 +-
 cmd/bootz.c                                 |   2 +-
 cmd/elf.c                                   |   2 +-
 cmd/load.c                                  |   7 +-
 common/board_r.c                            |   4 +
 common/dlmalloc.c                           |   4 +
 common/malloc_simple.c                      |  13 +
 common/spl/spl.c                            |   9 +-
 configs/a3y17lte_defconfig                  |   1 -
 configs/a5y17lte_defconfig                  |   1 -
 configs/a7y17lte_defconfig                  |   1 -
 configs/apple_m1_defconfig                  |   1 -
 configs/mt7981_emmc_rfb_defconfig           |   1 -
 configs/mt7981_rfb_defconfig                |   1 -
 configs/mt7981_sd_rfb_defconfig             |   1 -
 configs/mt7986_rfb_defconfig                |   1 -
 configs/mt7986a_bpir3_emmc_defconfig        |   1 -
 configs/mt7986a_bpir3_sd_defconfig          |   1 -
 configs/mt7988_rfb_defconfig                |   1 -
 configs/mt7988_sd_rfb_defconfig             |   1 -
 configs/qcom_defconfig                      |   1 -
 configs/sandbox_noinst_defconfig            |   1 +
 configs/sandbox_spl_defconfig               |   1 +
 configs/sandbox_vpl_defconfig               |   1 -
 configs/stm32mp13_defconfig                 |   3 -
 configs/stm32mp15_basic_defconfig           |   3 -
 configs/stm32mp15_defconfig                 |   3 -
 configs/stm32mp15_trusted_defconfig         |   3 -
 configs/stm32mp25_defconfig                 |   3 -
 configs/th1520_lpi4a_defconfig              |   1 -
 drivers/iommu/apple_dart.c                  |   8 +-
 drivers/iommu/sandbox_iommu.c               |  19 +-
 fs/fs.c                                     |  10 +-
 include/alist.h                             | 236 ++++++
 include/image.h                             |  28 +-
 include/lmb.h                               | 159 ++--
 include/test/suites.h                       |   1 +
 lib/Kconfig                                 |  46 +-
 lib/Makefile                                |   3 +-
 lib/alist.c                                 | 147 ++++
 lib/efi_loader/efi_dt_fixup.c               |   2 +-
 lib/efi_loader/efi_helper.c                 |   2 +-
 lib/lmb.c                                   | 672 +++++++++++-----
 lib/strto.c                                 |  33 +-
 net/tftp.c                                  |  39 +-
 net/wget.c                                  |   9 +-
 test/Kconfig                                |   9 +
 test/Makefile                               |   1 +
 test/boot/cedit.c                           |   6 +-
 test/cmd/bdinfo.c                           |  39 +-
 test/cmd_ut.c                               |   7 +
 test/lib/Makefile                           |   2 +-
 test/lib/alist.c                            | 197 +++++
 test/lib/lmb.c                              | 825 --------------------
 test/lmb_ut.c                               | 813 +++++++++++++++++++
 test/py/tests/test_android/test_abootimg.py |   2 +-
 test/py/tests/test_lmb.py                   |  24 +
 test/py/tests/test_vbe.py                   |   2 +-
 test/str_ut.c                               |   4 +-
 84 files changed, 2192 insertions(+), 1653 deletions(-)
 create mode 100644 include/alist.h
 create mode 100644 lib/alist.c
 create mode 100644 test/lib/alist.c
 delete mode 100644 test/lib/lmb.c
 create mode 100644 test/lmb_ut.c
 create mode 100644 test/py/tests/test_lmb.py

From: Simon Glass sjg@chromium.org
At present in tests it is possible to cause an out-of-memory condition with malloc() but not realloc(). Add support to realloc() too, so code which uses that function can be tested.
Signed-off-by: Simon Glass sjg@chromium.org
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
 common/dlmalloc.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 62e8557daa..1e1602a24d 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -1698,6 +1698,10 @@ Void_t* rEALLOc_impl(oldmem, bytes) Void_t* oldmem; size_t bytes;
 		panic("pre-reloc realloc() is not supported");
 	}
 #endif
+	if (CONFIG_IS_ENABLED(UNIT_TEST) && malloc_testing) {
+		if (--malloc_max_allocs < 0)
+			return NULL;
+	}

 	newp = oldp = mem2chunk(oldmem);
 	newsize = oldsize = chunksize(oldp);
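The failure-injection pattern this patch adds can be illustrated standalone. The wrappers below are not U-Boot code (the real patch hooks the counter directly into dlmalloc's rEALLOc_impl()); they just show how one shared countdown makes both malloc() and realloc() start failing after a chosen number of allocations, so out-of-memory paths can be exercised deterministically:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-ins for malloc_testing / malloc_max_allocs */
static int testing;	/* is failure injection enabled? */
static int max_allocs;	/* allocations remaining before forced failure */

static void *test_malloc(size_t n)
{
	if (testing && --max_allocs < 0)
		return NULL;	/* budget exhausted: simulate OOM */
	return malloc(n);
}

static void *test_realloc(void *p, size_t n)
{
	/* same counter governs realloc(), mirroring what the patch adds */
	if (testing && --max_allocs < 0)
		return NULL;
	return realloc(p, n);
}
```

With a budget of two, the third allocation (whether malloc or realloc) fails, which is exactly the behaviour the alist tests rely on later in the series.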

From: Simon Glass sjg@chromium.org
The current implementation can return an extra result at the end when the string ends with a space. Fix this by adding a special case.
Signed-off-by: Simon Glass sjg@chromium.org
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
 lib/strto.c   | 4 +++-
 test/str_ut.c | 4 +---
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/strto.c b/lib/strto.c
index 5157332d6c..f83ac67c66 100644
--- a/lib/strto.c
+++ b/lib/strto.c
@@ -236,12 +236,14 @@ const char **str_to_list(const char *instr)
 		return NULL;

 	/* count the number of space-separated strings */
-	for (count = *str != '\0', p = str; *p; p++) {
+	for (count = 0, p = str; *p; p++) {
 		if (*p == ' ') {
 			count++;
 			*p = '\0';
 		}
 	}
+	if (p != str && p[-1])
+		count++;

 	/* allocate the pointer array, allowing for a NULL terminator */
 	ptr = calloc(count + 1, sizeof(char *));
diff --git a/test/str_ut.c b/test/str_ut.c
index 389779859a..96e048975d 100644
--- a/test/str_ut.c
+++ b/test/str_ut.c
@@ -342,9 +342,7 @@ static int test_str_to_list(struct unit_test_state *uts)
 	ut_asserteq_str("space", ptr[3]);
 	ut_assertnonnull(ptr[4]);
 	ut_asserteq_str("", ptr[4]);
-	ut_assertnonnull(ptr[5]);
-	ut_asserteq_str("", ptr[5]);
-	ut_assertnull(ptr[6]);
+	ut_assertnull(ptr[5]);
 	str_free_list(ptr);
 	ut_assertok(ut_check_delta(start));
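The corrected counting logic can be checked in isolation. The sketch below extracts just the loop and the new trailing-token test into a hypothetical helper (`count_tokens` is not a U-Boot function): a trailing space no longer produces a spurious extra entry, because the final token is only counted when the last character before the terminator is non-NUL:

```c
#include <assert.h>

/* Count space-separated tokens, NUL-terminating each one in place,
 * using the same logic as the patched str_to_list() loop. */
static int count_tokens(char *str)
{
	char *p;
	int count = 0;

	for (p = str; *p; p++) {
		if (*p == ' ') {
			count++;
			*p = '\0';
		}
	}
	/* only count a final token if the string did not end with a space */
	if (p != str && p[-1])
		count++;

	return count;
}
```

For "one two" this gives 2; for "one two " it also gives 2, where the old code would have reported an extra empty token at the end.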

From: Simon Glass sjg@chromium.org
In various places it is useful to have an array of structures that can grow. In some cases we work around the lack of one by setting a maximum number of entries via a Kconfig option. In other places we use a linked list, which does not provide random access and can complicate the code.
Introduce a new data structure: a variable-sized list of structs, each of the same pre-set size. It provides O(1) access and is reasonably efficient at expanding, since it doubles in size whenever it runs out of space.
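The doubling-growth idea behind alist can be sketched with a minimal toy version (illustrative only: the names `toy_alist`, `toy_add`, etc. are made up, and the real alist in this patch has more API surface, flags, and error handling):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy growable array of fixed-size objects, doubling when full */
struct toy_alist {
	void *data;
	size_t obj_size;	/* size of each object in bytes */
	size_t count;		/* objects stored */
	size_t alloc;		/* objects allocated */
};

static void toy_init(struct toy_alist *l, size_t obj_size)
{
	memset(l, 0, sizeof(*l));
	l->obj_size = obj_size;
}

/* Copy an object onto the end of the list; returns its address or NULL */
static void *toy_add(struct toy_alist *l, const void *obj)
{
	if (l->count == l->alloc) {
		/* double the allocation (starting at 4), like alist does */
		size_t new_alloc = l->alloc ? l->alloc * 2 : 4;
		void *p = realloc(l->data, new_alloc * l->obj_size);

		if (!p)
			return NULL;
		l->data = p;
		l->alloc = new_alloc;
	}
	memcpy((char *)l->data + l->count * l->obj_size, obj, l->obj_size);
	return (char *)l->data + l->count++ * l->obj_size;
}
```

Because the allocation doubles, appending N items costs only O(log N) calls to realloc(), which is the amortised-efficiency argument the commit message makes; it is also why the series needs realloc() at all, a point that comes up in the review below.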
Signed-off-by: Simon Glass sjg@chromium.org
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
 include/alist.h   | 214 ++++++++++++++++++++++++++++++++++++++++++++++
 lib/Makefile      |   1 +
 lib/alist.c       | 147 +++++++++++++++++++++++++++++++
 test/lib/Makefile |   1 +
 test/lib/alist.c  | 197 ++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 560 insertions(+)
 create mode 100644 include/alist.h
 create mode 100644 lib/alist.c
 create mode 100644 test/lib/alist.c
diff --git a/include/alist.h b/include/alist.h new file mode 100644 index 0000000000..6cc3161dcd --- /dev/null +++ b/include/alist.h @@ -0,0 +1,214 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + * Handles a contiguous list of pointers which be allocated and freed + * + * Copyright 2023 Google LLC + * Written by Simon Glass sjg@chromium.org + */ + +#ifndef __ALIST_H +#define __ALIST_H + +#include <stdbool.h> +#include <linux/bitops.h> +#include <linux/types.h> + +/** + * struct alist - object list that can be allocated and freed + * + * Holds a list of objects, each of the same size. The object is typically a + * C struct. The array is alloced in memory can change in size. + * + * The list rememebers the size of the list, but has a separate count of how + * much space is allocated, This allows it increase in size in steps as more + * elements are added, which is more efficient that reallocating the list every + * time a single item is added + * + * Two types of access are provided: + * + * alist_get...(index) + * gets an existing element, if its index is less that size + * + * alist_ensure(index) + * address an existing element, or creates a new one if not present + * + * @data: object data of size `@obj_size * @alloc`. The list can grow as + * needed but never shrinks + * @obj_size: Size of each object in bytes + * @count: number of objects in array + * @alloc: allocated length of array, to which @count can grow + * @flags: flags for the alist (ALISTF_...) + */ +struct alist { + void *data; + u16 obj_size; + u16 count; + u16 alloc; + u16 flags; +}; + +/** + * enum alist_flags - Flags for the alist + * + * @ALIST_FAIL: true if any allocation has failed. 
Once this has happened, the + * alist is dead and cannot grow further + */ +enum alist_flags { + ALISTF_FAIL = BIT(0), +}; + +/** + * alist_has() - Check if an index is within the list range + * + * Checks if index is within the current alist count + * + * @lst: alist to check + * @index: Index to check + * Returns: true if value, else false + */ +static inline bool alist_has(struct alist *lst, uint index) +{ + return index < lst->count; +} + +/** + * alist_err() - Check if the alist is still valid + * + * @lst: List to check + * Return: false if OK, true if any previous allocation failed + */ +static inline bool alist_err(struct alist *lst) +{ + return lst->flags & ALISTF_FAIL; +} + +/** + * alist_get_ptr() - Get the value of a pointer + * + * @lst: alist to check + * @index: Index to read from + * Returns: pointer, if present, else NULL + */ +const void *alist_get_ptr(const struct alist *lst, uint index); + +/** + * alist_getd() - Get the value of a pointer directly, with no checking + * + * This must only be called on indexes for which alist_has() returns true + * + * @lst: alist to check + * @index: Index to read from + * Returns: pointer value (may be NULL) + */ +static inline const void *alist_getd(struct alist *lst, uint index) +{ + return lst->data + index * lst->obj_size; +} + +#define alist_get(_lst, _index, _struct) \ + ((const _struct *)alist_get_ptr(_lst, _index)) + +/** + * alist_ensure_ptr() - Ensure an object exists at a given index + * + * This provides read/write access to an array element. 
If it does not exist, + * it is allocated, reading for the caller to store the object into + * + * Allocates a object at the given index if needed + * + * @lst: alist to check + * @index: Index to address + * Returns: pointer where struct can be read/written, or NULL if out of memory + */ +void *alist_ensure_ptr(struct alist *lst, uint index); + +/** + * alist_ensure() - Address a struct, the correct object type + * + * Use as: + * struct my_struct *ptr = alist_ensure(&lst, 4, struct my_struct); + */ +#define alist_ensure(_lst, _index, _struct) \ + ((_struct *)alist_ensure_ptr(_lst, _index)) + +/** + * alist_add_ptr() - Ad a new object to the list + * + * @lst: alist to add to + * @obj: Pointer to object to copy in + * Returns: pointer to where the object was copied, or NULL if out of memory + */ +void *alist_add_ptr(struct alist *lst, void *obj); + +/** + * alist_expand_by() - Expand a list by the given amount + * + * @lst: alist to expand + * @inc_by: Amount to expand by + * Return: true if OK, false if out of memory + */ +bool alist_expand_by(struct alist *lst, uint inc_by); + +/** + * alist_add() - Used to add an object type with the correct type + * + * Use as: + * struct my_struct obj; + * struct my_struct *ptr = alist_add(&lst, &obj, struct my_struct); + */ +#define alist_add(_lst, _obj, _struct) \ + ((_struct *)alist_add_ptr(_lst, (_struct *)&(_obj))) + +/** + * alist_init() - Set up a new object list + * + * Sets up a list of objects, initially empty + * + * @lst: alist to set up + * @obj_size: Size of each element in bytes + * @alloc_size: Number of items to allowed to start, before reallocation is + * needed (0 to start with no space) + * Return: true if OK, false if out of memory + */ +bool alist_init(struct alist *lst, uint obj_size, uint alloc_size); + +#define alist_init_struct(_lst, _struct) \ + alist_init(_lst, sizeof(_struct), 0) + +/** + * alist_uninit_move_ptr() - Return the allocated contents and uninit the alist + * + * This returns the alist 
data to the caller, so that the caller receives data + * that it can be sure will hang around. The caller is responsible for freeing + * the data. + * + * If the alist size is 0, this returns NULL + * + * The alist is uninited as part of this. + * + * The alist must be inited before this can be called. + * + * @alist: alist to uninit + * @countp: if non-NULL, returns the number of objects in the returned data + * (which is @alist->size) + * Return: data contents, allocated with malloc(), or NULL if the data could not + * be allocated, or the data size is 0 + */ +void *alist_uninit_move_ptr(struct alist *alist, size_t *countp); + +/** + * alist_uninit_move() - Typed version of alist_uninit_move_ptr() + */ +#define alist_uninit_move(_lst, _countp, _struct) \ + (_struct *)alist_uninit_move_ptr(_lst, _countp) + +/** + * alist_uninit() - Free any memory used by an alist + * + * The alist must be inited before this can be called. + * + * @alist: alist to uninit + */ +void alist_uninit(struct alist *alist); + +#endif /* __ALIST_H */ diff --git a/lib/Makefile b/lib/Makefile index e389ad014f..81b503ab52 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -147,6 +147,7 @@ endif obj-$(CONFIG_$(SPL_)OID_REGISTRY) += oid_registry.o
obj-y += abuf.o +obj-y += alist.o obj-y += date.o obj-y += rtc-lib.o obj-$(CONFIG_LIB_ELF) += elf.o diff --git a/lib/alist.c b/lib/alist.c new file mode 100644 index 0000000000..bf5220e4f3 --- /dev/null +++ b/lib/alist.c @@ -0,0 +1,147 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Handles a contiguous list of pointers which be allocated and freed + * + * Copyright 2023 Google LLC + * Written by Simon Glass sjg@chromium.org + */ + +#include <alist.h> +#include <display_options.h> +#include <malloc.h> +#include <stdio.h> +#include <string.h> + +enum { + ALIST_INITIAL_SIZE = 4, /* default size of unsized list */ +}; + +bool alist_init(struct alist *lst, uint obj_size, uint start_size) +{ + /* Avoid realloc for the initial size to help malloc_simple */ + memset(lst, '\0', sizeof(struct alist)); + if (start_size) { + lst->data = calloc(obj_size, start_size); + if (!lst->data) { + lst->flags = ALISTF_FAIL; + return false; + } + lst->alloc = start_size; + } + lst->obj_size = obj_size; + + return true; +} + +void alist_uninit(struct alist *lst) +{ + free(lst->data); + + /* Clear fields to avoid any confusion */ + memset(lst, '\0', sizeof(struct alist)); +} + +/** + * alist_expand_to() - Expand a list to the given size + * + * @lst: List to modify + * @inc_by: Amount to expand to + * Return: true if OK, false if out of memory + */ +static bool alist_expand_to(struct alist *lst, uint new_alloc) +{ + void *new_data; + + if (lst->flags & ALISTF_FAIL) + return false; + new_data = realloc(lst->data, lst->obj_size * new_alloc); + if (!new_data) { + lst->flags |= ALISTF_FAIL; + return false; + } + memset(new_data + lst->obj_size * lst->alloc, '\0', + lst->obj_size * (new_alloc - lst->alloc)); + lst->alloc = new_alloc; + lst->data = new_data; + + return true; +} + +bool alist_expand_by(struct alist *lst, uint inc_by) +{ + return alist_expand_to(lst, lst->alloc + inc_by); +} + +/** + * alist_expand_min() - Expand to at least the provided size + * + * Expands to the lowest power 
of two which can incorporate the new size + * + * @lst: alist to expand + * @min_alloc: Minimum new allocated size; if 0 then ALIST_INITIAL_SIZE is used + * Return: true if OK, false if out of memory + */ +static bool alist_expand_min(struct alist *lst, uint min_alloc) +{ + uint new_alloc; + + for (new_alloc = lst->alloc ?: ALIST_INITIAL_SIZE; + new_alloc < min_alloc;) + new_alloc *= 2; + + return alist_expand_to(lst, new_alloc); +} + +const void *alist_get_ptr(const struct alist *lst, uint index) +{ + if (index >= lst->count) + return NULL; + + return lst->data + index * lst->obj_size; +} + +void *alist_ensure_ptr(struct alist *lst, uint index) +{ + uint minsize = index + 1; + void *ptr; + + if (index >= lst->alloc && !alist_expand_min(lst, minsize)) + return NULL; + + ptr = lst->data + index * lst->obj_size; + if (minsize >= lst->count) + lst->count = minsize; + + return ptr; +} + +void *alist_add_ptr(struct alist *lst, void *obj) +{ + void *ptr; + + ptr = alist_ensure_ptr(lst, lst->count); + if (!ptr) + return ptr; + memcpy(ptr, obj, lst->obj_size); + + return ptr; +} + +void *alist_uninit_move_ptr(struct alist *alist, size_t *countp) +{ + void *ptr; + + if (countp) + *countp = alist->count; + if (!alist->count) { + alist_uninit(alist); + return NULL; + } + + ptr = alist->data; + + /* Clear everything out so there is no record of the data */ + alist_init(alist, alist->obj_size, 0); + + return ptr; +} diff --git a/test/lib/Makefile b/test/lib/Makefile index e75a263e6a..70f14c46b1 100644 --- a/test/lib/Makefile +++ b/test/lib/Makefile @@ -5,6 +5,7 @@ ifeq ($(CONFIG_SPL_BUILD),) obj-y += cmd_ut_lib.o obj-y += abuf.o +obj-y += alist.o obj-$(CONFIG_EFI_LOADER) += efi_device_path.o obj-$(CONFIG_EFI_SECURE_BOOT) += efi_image_region.o obj-y += hexdump.o diff --git a/test/lib/alist.c b/test/lib/alist.c new file mode 100644 index 0000000000..f9050a963e --- /dev/null +++ b/test/lib/alist.c @@ -0,0 +1,197 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Copyright 2023 
Google LLC + * Written by Simon Glass sjg@chromium.org + */ + +#include <alist.h> +#include <string.h> +#include <test/lib.h> +#include <test/test.h> +#include <test/ut.h> + +struct my_struct { + uint val; + uint other_val; +}; + +enum { + obj_size = sizeof(struct my_struct), +}; + +/* Test alist_init() */ +static int lib_test_alist_init(struct unit_test_state *uts) +{ + struct alist lst; + ulong start; + + start = ut_check_free(); + + /* with a size of 0, the fields should be inited, with no memory used */ + memset(&lst, '\xff', sizeof(lst)); + ut_assert(alist_init_struct(&lst, struct my_struct)); + ut_asserteq_ptr(NULL, lst.data); + ut_asserteq(0, lst.count); + ut_asserteq(0, lst.alloc); + ut_assertok(ut_check_delta(start)); + alist_uninit(&lst); + ut_asserteq_ptr(NULL, lst.data); + ut_asserteq(0, lst.count); + ut_asserteq(0, lst.alloc); + + /* use an impossible size */ + ut_asserteq(false, alist_init(&lst, obj_size, + CONFIG_SYS_MALLOC_LEN)); + ut_assertnull(lst.data); + ut_asserteq(0, lst.count); + ut_asserteq(0, lst.alloc); + + /* use a small size */ + ut_assert(alist_init(&lst, obj_size, 4)); + ut_assertnonnull(lst.data); + ut_asserteq(0, lst.count); + ut_asserteq(4, lst.alloc); + + /* free it */ + alist_uninit(&lst); + ut_asserteq_ptr(NULL, lst.data); + ut_asserteq(0, lst.count); + ut_asserteq(0, lst.alloc); + ut_assertok(ut_check_delta(start)); + + /* Check for memory leaks */ + ut_assertok(ut_check_delta(start)); + + return 0; +} +LIB_TEST(lib_test_alist_init, 0); + +/* Test alist_get() and alist_getd() */ +static int lib_test_alist_get(struct unit_test_state *uts) +{ + struct alist lst; + ulong start; + void *ptr; + + start = ut_check_free(); + + ut_assert(alist_init(&lst, obj_size, 3)); + ut_asserteq(0, lst.count); + ut_asserteq(3, lst.alloc); + + ut_assertnull(alist_get_ptr(&lst, 2)); + ut_assertnull(alist_get_ptr(&lst, 3)); + + ptr = alist_ensure_ptr(&lst, 1); + ut_assertnonnull(ptr); + ut_asserteq(2, lst.count); + ptr = alist_ensure_ptr(&lst, 2); + 
ut_asserteq(3, lst.count); + ut_assertnonnull(ptr); + + ptr = alist_ensure_ptr(&lst, 3); + ut_assertnonnull(ptr); + ut_asserteq(4, lst.count); + ut_asserteq(6, lst.alloc); + + ut_assertnull(alist_get_ptr(&lst, 4)); + + alist_uninit(&lst); + + /* Check for memory leaks */ + ut_assertok(ut_check_delta(start)); + + return 0; +} +LIB_TEST(lib_test_alist_get, 0); + +/* Test alist_has() */ +static int lib_test_alist_has(struct unit_test_state *uts) +{ + struct alist lst; + ulong start; + void *ptr; + + start = ut_check_free(); + + ut_assert(alist_init(&lst, obj_size, 3)); + + ut_assert(!alist_has(&lst, 0)); + ut_assert(!alist_has(&lst, 1)); + ut_assert(!alist_has(&lst, 2)); + ut_assert(!alist_has(&lst, 3)); + + /* create a new one to force expansion */ + ptr = alist_ensure_ptr(&lst, 4); + ut_assertnonnull(ptr); + + ut_assert(alist_has(&lst, 0)); + ut_assert(alist_has(&lst, 1)); + ut_assert(alist_has(&lst, 2)); + ut_assert(alist_has(&lst, 3)); + ut_assert(alist_has(&lst, 4)); + ut_assert(!alist_has(&lst, 5)); + + alist_uninit(&lst); + + /* Check for memory leaks */ + ut_assertok(ut_check_delta(start)); + + return 0; +} +LIB_TEST(lib_test_alist_has, 0); + +/* Test alist_ensure() */ +static int lib_test_alist_ensure(struct unit_test_state *uts) +{ + struct my_struct *ptr3, *ptr4; + struct alist lst; + ulong start; + + start = ut_check_free(); + + ut_assert(alist_init_struct(&lst, struct my_struct)); + ut_asserteq(obj_size, lst.obj_size); + ut_asserteq(0, lst.count); + ut_asserteq(0, lst.alloc); + ptr3 = alist_ensure_ptr(&lst, 3); + ut_asserteq(4, lst.count); + ut_asserteq(4, lst.alloc); + ut_assertnonnull(ptr3); + ptr3->val = 3; + + ptr4 = alist_ensure_ptr(&lst, 4); + ut_asserteq(8, lst.alloc); + ut_asserteq(5, lst.count); + ut_assertnonnull(ptr4); + ptr4->val = 4; + ut_asserteq(4, alist_get(&lst, 4, struct my_struct)->val); + + ut_asserteq_ptr(ptr4, alist_ensure(&lst, 4, struct my_struct)); + + alist_ensure(&lst, 4, struct my_struct)->val = 44; + ut_asserteq(44, 
alist_get(&lst, 4, struct my_struct)->val); + ut_asserteq(3, alist_get(&lst, 3, struct my_struct)->val); + ut_assertnull(alist_get(&lst, 7, struct my_struct)); + ut_asserteq(8, lst.alloc); + ut_asserteq(5, lst.count); + + /* add some more, checking handling of malloc() failure */ + malloc_enable_testing(0); + ut_assertnonnull(alist_ensure(&lst, 7, struct my_struct)); + ut_assertnull(alist_ensure(&lst, 8, struct my_struct)); + malloc_disable_testing(); + + lst.flags &= ~ALISTF_FAIL; + ut_assertnonnull(alist_ensure(&lst, 8, struct my_struct)); + ut_asserteq(16, lst.alloc); + ut_asserteq(9, lst.count); + + alist_uninit(&lst); + + /* Check for memory leaks */ + ut_assertok(ut_check_delta(start)); + + return 0; +} +LIB_TEST(lib_test_alist_ensure, 0);

From: Simon Glass sjg@chromium.org
Use this new data structure in the utility function.
Signed-off-by: Simon Glass sjg@chromium.org
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
 lib/strto.c | 35 +++++++++++++++++++----------------
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/lib/strto.c b/lib/strto.c
index f83ac67c66..277e83b20f 100644
--- a/lib/strto.c
+++ b/lib/strto.c
@@ -9,6 +9,7 @@
  * Wirzenius wrote this portably, Torvalds fucked it up :-)
  */

+#include <alist.h>
 #include <errno.h>
 #include <malloc.h>
 #include <vsprintf.h>
@@ -226,37 +227,39 @@ void str_to_upper(const char *in, char *out, size_t len)

 const char **str_to_list(const char *instr)
 {
-	const char **ptr;
-	char *str, *p;
-	int count, i;
+	struct alist alist;
+	char *str, *p, *start;

 	/* don't allocate if the string is empty */
 	str = *instr ? strdup(instr) : (char *)instr;
 	if (!str)
 		return NULL;

-	/* count the number of space-separated strings */
-	for (count = 0, p = str; *p; p++) {
+	alist_init_struct(&alist, char *);
+
+	if (*str)
+		alist_add(&alist, str, char *);
+	for (start = str, p = str; *p; p++) {
 		if (*p == ' ') {
-			count++;
 			*p = '\0';
+			start = p + 1;
+			if (*start)
+				alist_add(&alist, start, char *);
 		}
 	}
-	if (p != str && p[-1])
-		count++;

-	/* allocate the pointer array, allowing for a NULL terminator */
-	ptr = calloc(count + 1, sizeof(char *));
-	if (!ptr) {
-		if (*str)
+	/* terminate list */
+	p = NULL;
+	alist_add(&alist, p, char *);
+	if (alist_err(&alist)) {
+		alist_uninit(&alist);
+
+		if (*instr)
 			free(str);
 		return NULL;
 	}

-	for (i = 0, p = str; i < count; p += strlen(p) + 1, i++)
-		ptr[i] = p;
-
-	return alist_uninit_move(&alist, NULL, const char *);
 }
void str_free_list(const char **ptr)

On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
From: Simon Glass sjg@chromium.org
Use this new data structure in the utility function.
Signed-off-by: Simon Glass sjg@chromium.org Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
lib/strto.c | 35 +++++++++++++++++++---------------- 1 file changed, 19 insertions(+), 16 deletions(-)
This is rather big growth when we didn't already have realloc:

05: lib: Convert str_to_list() to use alist
       aarch64: (for 1/1 boards) all +1765.0 rodata +37.0 text +1728.0
            xilinx_versal_mini_emmc0: all +1765 rodata +37 text +1728
               u-boot: add: 7/0, grow: 1/0 bytes: 1728/0 (1728)
                 function                old     new   delta
                 realloc                   -    1120   +1120
                 alist_ensure_ptr          -     140    +140
                 alist_expand_to           -     136    +136
                 alist_init                -     108    +108
                 alist_uninit_move_ptr     -      76     +76
                 alist_add_ptr             -      72     +72
                 alist_uninit              -      48     +48
                 str_to_list             204     232     +28

Hi Tom,
On Wed, 24 Jul 2024 at 14:54, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
From: Simon Glass sjg@chromium.org
Use this new data structure in the utility function.
Signed-off-by: Simon Glass sjg@chromium.org Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
lib/strto.c | 35 +++++++++++++++++++---------------- 1 file changed, 19 insertions(+), 16 deletions(-)
This is rather big growth when we didn't already have realloc:

05: lib: Convert str_to_list() to use alist
       aarch64: (for 1/1 boards) all +1765.0 rodata +37.0 text +1728.0
            xilinx_versal_mini_emmc0: all +1765 rodata +37 text +1728
               u-boot: add: 7/0, grow: 1/0 bytes: 1728/0 (1728)
                 function                old     new   delta
                 realloc                   -    1120   +1120
                 alist_ensure_ptr          -     140    +140
                 alist_expand_to           -     136    +136
                 alist_init                -     108    +108
                 alist_uninit_move_ptr     -      76     +76
                 alist_add_ptr             -      72     +72
                 alist_uninit              -      48     +48
                 str_to_list             204     232     +28
-- Tom
We can just drop this patch, perhaps? It was really just a demo of alist.
Regards, Simon

On Wed, Jul 24, 2024 at 04:17:32PM -0600, Simon Glass wrote:
Hi Tom,
On Wed, 24 Jul 2024 at 14:54, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
From: Simon Glass sjg@chromium.org
Use this new data structure in the utility function.
Signed-off-by: Simon Glass sjg@chromium.org Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
lib/strto.c | 35 +++++++++++++++++++---------------- 1 file changed, 19 insertions(+), 16 deletions(-)
This is rather big growth when we didn't already have realloc:

05: lib: Convert str_to_list() to use alist
       aarch64: (for 1/1 boards) all +1765.0 rodata +37.0 text +1728.0
            xilinx_versal_mini_emmc0: all +1765 rodata +37 text +1728
               u-boot: add: 7/0, grow: 1/0 bytes: 1728/0 (1728)
                 function                old     new   delta
                 realloc                   -    1120   +1120
                 alist_ensure_ptr          -     140    +140
                 alist_expand_to           -     136    +136
                 alist_init                -     108    +108
                 alist_uninit_move_ptr     -      76     +76
                 alist_add_ptr             -      72     +72
                 alist_uninit              -      48     +48
                 str_to_list             204     232     +28
We can just drop this patch, perhaps? It was really just a demo of alist.
Yes, and no. The next good example would be cortina_presidio-asic-pnand, where the reason this series is a size growth rather than a reduction (which it is, BTW, on LTO-using platforms) is again realloc. Is this just an unfortunate given then?

Hi Tom,
On Wed, 24 Jul 2024 at 16:35, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 04:17:32PM -0600, Simon Glass wrote:
Hi Tom,
On Wed, 24 Jul 2024 at 14:54, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
From: Simon Glass sjg@chromium.org
Use this new data structure in the utility function.
Signed-off-by: Simon Glass sjg@chromium.org Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
lib/strto.c | 35 +++++++++++++++++++---------------- 1 file changed, 19 insertions(+), 16 deletions(-)
This is rather big growth when we didn't already have realloc:

05: lib: Convert str_to_list() to use alist
       aarch64: (for 1/1 boards) all +1765.0 rodata +37.0 text +1728.0
            xilinx_versal_mini_emmc0: all +1765 rodata +37 text +1728
               u-boot: add: 7/0, grow: 1/0 bytes: 1728/0 (1728)
                 function                old     new   delta
                 realloc                   -    1120   +1120
                 alist_ensure_ptr          -     140    +140
                 alist_expand_to           -     136    +136
                 alist_init                -     108    +108
                 alist_uninit_move_ptr     -      76     +76
                 alist_add_ptr             -      72     +72
                 alist_uninit              -      48     +48
                 str_to_list             204     232     +28
We can just drop this patch, perhaps? It was really just a demo of alist.
Yes, and no. The next good example would be on cortina_presidio-asic-pnand where the reason this series is a size growth rather than reduction (where it is BTW on LTO using platforms) is again, realloc. Is this just an unfortunate given then?
Hmmm maybe. We can of course malloc() and then free() rather than calling realloc(), although that is likely not as efficient. We could have a 'cut down' alist implementation which requires that the initial size be declared, but that defeats the point, really.
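The malloc()-plus-free() alternative floated here can be sketched as follows. This is illustrative only, not what the series implements (and `grow_buf` is a made-up name): growing a buffer this way avoids linking in realloc() entirely, at the cost of always copying:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Grow a buffer without realloc(): allocate, copy, free the old one.
 * Returns the new buffer, or NULL on failure (old buffer kept intact). */
static void *grow_buf(void *old, size_t old_size, size_t new_size)
{
	void *p = malloc(new_size);

	if (!p)
		return NULL;
	if (old) {
		memcpy(p, old, old_size < new_size ? old_size : new_size);
		free(old);
	}
	return p;
}
```

Unlike realloc(), which can often extend an allocation in place, this always pays a full copy on every growth step, which is the efficiency concern raised above; the saving is that tiny configurations never pull in realloc()'s ~1 KiB of text.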
Anyway we certainly don't need to apply this patch to strto...there is no particular benefit. I'll leave it up to you.
Regards, Simon

On 7/24/24 22:54, Tom Rini wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
this is definitely not acceptable. This mini configuration is running out of OCM and we are already pretty close to limit.
What's the reason for this change? I can't see any explanation in commit message.
Thanks, Michal

Hi Michal,
On Thu, 25 Jul 2024 at 00:14, Michal Simek michal.simek@amd.com wrote:
On 7/24/24 22:54, Tom Rini wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
this is definitely not acceptable. This mini configuration is running out of OCM and we are already pretty close to limit.
Also I forgot to mention that mx6sabresd exceeds its limit with this patch.
What's the reason for this change? I can't see any explanation in commit message.
Much like the abuf library (which initially had no users), I try to avoid adding dead code. So some sort of example is helpful. The bug that I found in str_to_list() was fixed in the earlier commit in the series, without needing alist, so it isn't necessary for that.
Regards, Simon

On Thu, Jul 25, 2024 at 08:13:38AM +0200, Michal Simek wrote:
On 7/24/24 22:54, Tom Rini wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
this is definitely not acceptable. This mini configuration is running out of OCM and we are already pretty close to limit.
What's the reason for this change? I can't see any explanation in commit message.
It was more clearly explained in the cover thread for when Simon posted alist. This conversion was an example, so dropping it from the lmb rework series is fine.

On 7/25/24 16:49, Tom Rini wrote:
On Thu, Jul 25, 2024 at 08:13:38AM +0200, Michal Simek wrote:
On 7/24/24 22:54, Tom Rini wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
this is definitely not acceptable. This mini configuration is running out of OCM and we are already pretty close to limit.
What's the reason for this change? I can't see any explanation in commit message.
It was more clearly explained in the cover thread for when Simon posted alist. This conversion was an example, so dropping it from the lmb rework series is fine.
Perfect. We will also extend our configurations to have limit setup.
Thanks, Michal

On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
-sughosh
-- Tom

On 7/25/24 14:54, Sughosh Ganu wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
Inside LMB code it should be fine for us because as I wrote we are disabling LMB in these mini configurations.
Thanks, Michal

Hi Sughosh,
On Thu, 25 Jul 2024 at 06:54, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
This is for use with EFI, which is very large, so I doubt it is worth it. The alist code is simpler to work with and will be used in UPL and likely some other areas.
We could provide a variant which uses malloc()/free() instead of realloc(), as I mentioned. The impact then would be very small.
Regards, Simon

On Thu, Jul 25, 2024 at 07:59:52AM -0600, Simon Glass wrote:
Hi Sughosh,
On Thu, 25 Jul 2024 at 06:54, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
This is for use with EFI,
Am I missing some context? The LMB rework is because what we have today doesn't work all that well for what we need in a modern system (and with modern security-conscious eyes on the code) with EFI_LOADER=n.
which is very large, so I doubt it is worth it. The alist code is simpler to work with and will be used in UPL and likely some other areas.
We could provide a variant which uses malloc()/free() instead of realloc(), as I mentioned. The impact then would be very small.
Switching to malloc/free instead might be fine as well, yes. Aside from the realloc growth most platforms are at a minor size shrink. If we address realloc then we get something more functional and smaller. This should make everyone happy.

On Thu, 25 Jul 2024 at 18:24, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
I have implemented the LMB maps using simple linked lists, and we do get a good size benefit with the list-based implementation. I used the script that Tom shared some time back to measure the size impact [1]. The readings cover 1) alist with realloc, 2) alist with malloc and 3) a linked list. These are on an RPI4 build, which has LMB enabled. I have also checked the impact on a xilinx_versal_mini config, which does not have LMB enabled, and there too the size result is best with lists. The list-based changes can be checked on my github [2].
-sughosh
[1] - https://gist.github.com/sughoshg/d4f9bda8d8a33f715dab892738d192ba [2] - https://github.com/sughoshg/u-boot/tree/lmb_only_linked_lists_v3
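As a rough sketch of what a list-based map might look like (field and function names here are assumptions for illustration, not the code on the branch above):

```c
#include <stdlib.h>

/* Illustrative only: an LMB-style region held in a singly linked
 * list, with nodes allocated via calloc() as described above. The
 * names (lmb_region_node, lmb_list_add) are made up for this sketch */
struct lmb_region_node {
	unsigned long base;
	unsigned long size;
	unsigned int flags;
	struct lmb_region_node *next;
};

/* Prepend a region to the list; returns the new node or NULL on OOM */
struct lmb_region_node *lmb_list_add(struct lmb_region_node **head,
				     unsigned long base, unsigned long size,
				     unsigned int flags)
{
	struct lmb_region_node *node = calloc(1, sizeof(*node));

	if (!node)
		return NULL;
	node->base = base;
	node->size = size;
	node->flags = flags;
	node->next = *head;
	*head = node;
	return node;
}
```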

Hi Sughosh,
On Sun, 28 Jul 2024 at 12:07, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 18:24, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
I have worked on implementing the LMB maps using simple lists, and we do get a good size benefit with the list based implementation. I have used the script that was shared by Tom some time back to get the size impact [1]. The readings are with 1) alists with realloc 2) alist with malloc and 3) using linked list. These are on a RPI4 build, which has LMB enabled. I have checked the impact on a xilinx_versal_mini config as well, which does not have LMB enabled, and the size impact result is best with lists. The list based changes can be checked on my github [2].
Thanks for doing that. It is quite hard to read with the LTO enabled. You can try with the -L flag and should get a cleaner result. Also see my comment about how you are using alist directly instead of going through the API...that might affect things.
Anyway (once you are using the API), my assumption from my previous refactor attempt was that lmb would be easier to deal with using simple arrays, which is why I wrote alist. Is that true, or not?
-sughosh
[1] - https://gist.github.com/sughoshg/d4f9bda8d8a33f715dab892738d192ba [2] - https://github.com/sughoshg/u-boot/tree/lmb_only_linked_lists_v3
Regards, Simon

On Mon, 29 Jul 2024 at 20:59, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Sun, 28 Jul 2024 at 12:07, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 18:24, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
I have worked on implementing the LMB maps using simple lists, and we do get a good size benefit with the list based implementation. I have used the script that was shared by Tom some time back to get the size impact [1]. The readings are with 1) alists with realloc 2) alist with malloc and 3) using linked list. These are on a RPI4 build, which has LMB enabled. I have checked the impact on a xilinx_versal_mini config as well, which does not have LMB enabled, and the size impact result is best with lists. The list based changes can be checked on my github [2].
Thanks for doing that. It is quite hard to read with the LTO enabled. You can try with the -L flag and should get a cleaner result. Also see my comment about how you are using alist directly instead of going through the API...that might affect things.
I can post the results with the -L flag. Regarding your comment on using the alist APIs, I don't think that has a bearing on the size of the image. Using alist_add() might be faster in certain scenarios, but I don't see how it impacts the size.
Anyway (once you are using the API), my assumption from my previous refactor attempt was that lmb would be easier to deal with using simple arrays, which is why I wrote alist. Is that true, or not?
Yes, using the arrays keeps the code on similar lines to its current state. But I thought that the main consideration here was the size impact.
-sughosh
-sughosh
[1] - https://gist.github.com/sughoshg/d4f9bda8d8a33f715dab892738d192ba [2] - https://github.com/sughoshg/u-boot/tree/lmb_only_linked_lists_v3
Regards, Simon

On Mon, Jul 29, 2024 at 09:28:57AM -0600, Simon Glass wrote:
Hi Sughosh,
On Sun, 28 Jul 2024 at 12:07, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 18:24, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
I have worked on implementing the LMB maps using simple lists, and we do get a good size benefit with the list based implementation. I have used the script that was shared by Tom some time back to get the size impact [1]. The readings are with 1) alists with realloc 2) alist with malloc and 3) using linked list. These are on a RPI4 build, which has LMB enabled. I have checked the impact on a xilinx_versal_mini config as well, which does not have LMB enabled, and the size impact result is best with lists. The list based changes can be checked on my github [2].
Thanks for doing that. It is quite hard to read with the LTO enabled.
I don't think LTO is enabled there?

On Mon, 29 Jul 2024 at 23:46, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 29, 2024 at 09:28:57AM -0600, Simon Glass wrote:
Hi Sughosh,
On Sun, 28 Jul 2024 at 12:07, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 18:24, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
I have worked on implementing the LMB maps using simple lists, and we do get a good size benefit with the list based implementation. I have used the script that was shared by Tom some time back to get the size impact [1]. The readings are with 1) alists with realloc 2) alist with malloc and 3) using linked list. These are on a RPI4 build, which has LMB enabled. I have checked the impact on a xilinx_versal_mini config as well, which does not have LMB enabled, and the size impact result is best with lists. The list based changes can be checked on my github [2].
Thanks for doing that. It is quite hard to read with the LTO enabled.
I don't think LTO is enabled there?
I ran the size checks with the -L flag [1]. I do not see any difference.
-sughosh
[1] - https://gist.github.com/sughoshg/8a82946727904944601021a7ee732661

Hi Sughosh,
On Mon, 29 Jul 2024 at 19:49, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 29 Jul 2024 at 23:46, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 29, 2024 at 09:28:57AM -0600, Simon Glass wrote:
Hi Sughosh,
On Sun, 28 Jul 2024 at 12:07, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 18:24, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 25 Jul 2024 at 02:24, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 11:31:48AM +0530, Sughosh Ganu wrote:
[quoted patch description and size-growth report snipped]
I am working on an implementation of lmb maps using lists. The list nodes are then allocated with calloc, which I believe is included in most of the board images. We can then compare the size impact with the two implementations (alist vs list). Thanks.
I have worked on implementing the LMB maps using simple lists, and we do get a good size benefit with the list based implementation. I have used the script that was shared by Tom some time back to get the size impact [1]. The readings are with 1) alists with realloc 2) alist with malloc and 3) using linked list. These are on a RPI4 build, which has LMB enabled. I have checked the impact on a xilinx_versal_mini config as well, which does not have LMB enabled, and the size impact result is best with lists. The list based changes can be checked on my github [2].
Thanks for doing that. It is quite hard to read with the LTO enabled.
I don't think LTO is enabled there?
I ran the size checks with the -L flag [1]. I do not see any difference.
OK, thanks; as Tom says, LTO is probably not actually enabled. It's just hard to compare two branches!
I took a look at the lmb thing and decided to write a few patches to show what I am getting at, particularly with the testing side. I will send those as an RFC...
Regards, Simon
-sughosh
[1] - https://gist.github.com/sughoshg/8a82946727904944601021a7ee732661

If the simple malloc functionality is enabled in SPL, it is not possible to call the full-implementation alloc functions, even after the heap has been set up in RAM. Check for this condition and call those functions when enabled.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: New patch
 common/malloc_simple.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)
diff --git a/common/malloc_simple.c b/common/malloc_simple.c
index 4e6d7952b3..982287defe 100644
--- a/common/malloc_simple.c
+++ b/common/malloc_simple.c
@@ -40,6 +40,10 @@ void *malloc_simple(size_t bytes)
 {
 	void *ptr;
 
+#if IS_ENABLED(CONFIG_SPL_SYS_MALLOC) && IS_ENABLED(CONFIG_SPL_LMB)
+	if (gd->flags & GD_FLG_FULL_MALLOC_INIT)
+		return mALLOc(bytes);
+#endif
 	ptr = alloc_simple(bytes, 1);
 	if (!ptr)
 		return ptr;
@@ -50,6 +54,15 @@ void *malloc_simple(size_t bytes)
 	return ptr;
 }
 
+void *realloc_simple(void *oldmem, size_t bytes)
+{
+#if IS_ENABLED(CONFIG_SPL_SYS_MALLOC) && IS_ENABLED(CONFIG_SPL_LMB)
+	if (gd->flags & GD_FLG_FULL_MALLOC_INIT)
+		return rEALLOc(oldmem, bytes);
+#endif
+	return NULL;
+}
+
 void *memalign_simple(size_t align, size_t bytes)
 {
 	void *ptr;

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
If the simple malloc functionality is enabled in SPL, it is not possible to call the full-implementation alloc functions, even after the heap has been set up in RAM. Check for this condition and call those functions when enabled.
Is this because you want to use lmb in SPL. Is that needed?
BTW I'll send a patch to allow alist to run without realloc().
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
common/malloc_simple.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
Regards, Simon

hi Simon,
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
If the simple malloc functionality is enabled in SPL, it is not possible to call the full-implementation alloc functions, even after the heap has been set up in RAM. Check for this condition and call those functions when enabled.
Is this because you want to use lmb in SPL. Is that needed?
Yes, there was a discussion on this: both Tom [1] and you [2] mentioned on the earlier series that we do need LMB in SPL. You had also mentioned that this would subsequently be needed for VPL.
-sughosh
[1] - https://lists.denx.de/pipermail/u-boot/2024-July/558250.html [2] - https://lists.denx.de/pipermail/u-boot/2024-July/558644.html
BTW I'll send a patch to allow alist to run without realloc().
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
common/malloc_simple.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 01:46, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
If the simple malloc functionality is enabled in SPL, it is not possible to call the full-implementation alloc functions, even after the heap has been set up in RAM. Check for this condition and call those functions when enabled.
Is this because you want to use lmb in SPL. Is that needed?
Yes, there was a discussion on this, and Tom [1], and you also [2] had mentioned on the earlier series that we do need lmb in SPL. You had mentioned that this would subsequently be needed for VPL too.
OK, well at least this way we can make sure it is disabled if not needed. We can worry about how to pass information through the phases, later.
-sughosh
[1] - https://lists.denx.de/pipermail/u-boot/2024-July/558250.html [2] - https://lists.denx.de/pipermail/u-boot/2024-July/558644.html
BTW I'll send a patch to allow alist to run without realloc().
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
common/malloc_simple.c | 13 +++++++++++++ 1 file changed, 13 insertions(+)
Regards, Simon

Add a couple of helper functions to detect an empty and full alist.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: None
 include/alist.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
diff --git a/include/alist.h b/include/alist.h
index 6cc3161dcd..06ae137102 100644
--- a/include/alist.h
+++ b/include/alist.h
@@ -82,6 +82,28 @@ static inline bool alist_err(struct alist *lst)
 	return lst->flags & ALISTF_FAIL;
 }
 
+/**
+ * alist_full() - Check if the alist is full
+ *
+ * @lst: List to check
+ * Return: true if full, false otherwise
+ */
+static inline bool alist_full(struct alist *lst)
+{
+	return lst->count == lst->alloc;
+}
+
+/**
+ * alist_empty() - Check if the alist is empty
+ *
+ * @lst: List to check
+ * Return: true if empty, false otherwise
+ */
+static inline bool alist_empty(struct alist *lst)
+{
+	return !lst->count && lst->alloc;
+}
+
 /**
  * alist_get_ptr() - Get the value of a pointer
  *

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add a couple of helper functions to detect an empty and full alist.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
include/alist.h | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+)
I had to hunt around to see why these are needed. It's fine to add new functions to the API, but in this case I want to make a few points.
diff --git a/include/alist.h b/include/alist.h
index 6cc3161dcd..06ae137102 100644
--- a/include/alist.h
+++ b/include/alist.h
@@ -82,6 +82,28 @@ static inline bool alist_err(struct alist *lst)
 	return lst->flags & ALISTF_FAIL;
 }
+/**
+ * alist_full() - Check if the alist is full
+ *
+ * @lst: List to check
+ * Return: true if full, false otherwise
+ */
+static inline bool alist_full(struct alist *lst)
+{
+	return lst->count == lst->alloc;
+}
In general I see you manually modifying the members of the alist, rather than using the API to add a new item. I think it is better to use the API.
struct lmb_region rgn;

rgn.base = ...;
rgn.size = ...;
rgn.flags = ...;
if (!alist_add(&lmb.used, rgn, struct lmb_region))
	return -ENOMEM;
or you could make a function to add a new region to a list, with base, size & flags as args.
+/**
+ * alist_empty() - Check if the alist is empty
+ *
+ * @lst: List to check
+ * Return: true if empty, false otherwise
+ */
+static inline bool alist_empty(struct alist *lst)
+{
+	return !lst->count && lst->alloc;
+}
I would argue that this is a surprising definition of 'empty'. Why the second term? It seems to be because you want to know if it is safe to set values in item[0]. But see above for how to use the API.
 /**
  * alist_get_ptr() - Get the value of a pointer
--
2.34.1
Regards, Simon

On Fri, 26 Jul 2024 at 05:03, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add a couple of helper functions to detect an empty and full alist.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
include/alist.h | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+)
I had to hunt around to see why these are needed. It's fine to add new functions to the API, but in this case I want to make a few points.
diff --git a/include/alist.h b/include/alist.h
index 6cc3161dcd..06ae137102 100644
--- a/include/alist.h
+++ b/include/alist.h
@@ -82,6 +82,28 @@ static inline bool alist_err(struct alist *lst)
 	return lst->flags & ALISTF_FAIL;
 }
+/**
+ * alist_full() - Check if the alist is full
+ *
+ * @lst: List to check
+ * Return: true if full, false otherwise
+ */
+static inline bool alist_full(struct alist *lst)
+{
+	return lst->count == lst->alloc;
+}
In general I see you manually modifying the members of the alist, rather than using the API to add a new item. I think it is better to use the API.
struct lmb_region rgn;

rgn.base = ...
rgn.size = ...
rgn.flags = ...
if (!alist_add(&lmb.used, rgn, struct lmb_region))
	return -ENOMEM;
Yes, I had seen this usage of the API in another of your patches. However, my personal opinion of the APIs is that alist_expand_by() gives more clarity on what the function is doing. One gets the feeling that alist_add() is only adding the given node to the list, whereas it is doing a lot more under the hood. I feel that using the alist_expand_by() API makes the code much easier to understand, but that's my personal opinion.
or you could make a function to add a new region to a list, with base, size & flags as args.
+/**
+ * alist_empty() - Check if the alist is empty
+ *
+ * @lst: List to check
+ * Return: true if empty, false otherwise
+ */
+static inline bool alist_empty(struct alist *lst)
+{
+	return !lst->count && lst->alloc;
+}
I would argue that this is a surprising definition of 'empty'. Why the second term? It seems to be because you want to know if it is safe to set values in item[0]. But see above for how to use the API.
I want the initialisation of the list to be kept separate from adding more 'nodes' to the list. Hence my usage.
-sughosh
 /**
  * alist_get_ptr() - Get the value of a pointer
--
2.34.1
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 12:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:03, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add a couple of helper functions to detect an empty and full alist.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
include/alist.h | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+)
I had to hunt around to see why these are needed. It's fine to add new functions to the API, but in this case I want to make a few points.
diff --git a/include/alist.h b/include/alist.h
index 6cc3161dcd..06ae137102 100644
--- a/include/alist.h
+++ b/include/alist.h
@@ -82,6 +82,28 @@ static inline bool alist_err(struct alist *lst)
 	return lst->flags & ALISTF_FAIL;
 }
+/**
+ * alist_full() - Check if the alist is full
+ *
+ * @lst: List to check
+ * Return: true if full, false otherwise
+ */
+static inline bool alist_full(struct alist *lst)
+{
+	return lst->count == lst->alloc;
+}
In general I see you manually modifying the members of the alist, rather than using the API to add a new item. I think it is better to use the API.
struct lmb_region rgn;

rgn.base = ...
rgn.size = ...
rgn.flags = ...
if (!alist_add(&lmb.used, rgn, struct lmb_region))
	return -ENOMEM;
Yes, I had seen this usage of the API in another of your patches. However, my personal opinion of the APIs is that alist_expand_by() gives more clarity on what the function is doing. One gets the feeling that alist_add() is only adding the given node to the list, whereas it is doing a lot more under the hood. I feel that using the alist_expand_by() API makes the code much easier to understand, but that's my personal opinion.
With lmb there is the case of inserting a new element into the array, something which the API doesn't support at present. Perhaps that would help? It is a little hard to design the API until there are more users of it.
alist_add() is really just adding an element (not a node) to the list, isn't it? Do you mean that it is copying the data into the list?
or you could make a function to add a new region to a list, with base, size & flags as args.
+/**
+ * alist_empty() - Check if the alist is empty
+ *
+ * @lst: List to check
+ * Return: true if empty, false otherwise
+ */
+static inline bool alist_empty(struct alist *lst)
+{
+	return !lst->count && lst->alloc;
+}
I would argue that this is a surprising definition of 'empty'. Why the second term? It seems to be because you want to know if it is safe to set values in item[0]. But see above for how to use the API.
I want the initialisation of the list to be kept separate from adding more 'nodes' to the list. Hence my usage.
In that case, please rename the function to something other than 'empty'.
Regards, Simon

The LMB module code is being overhauled to make its memory map global and persistent. This involves extensive changes to the LMB code. Disable the unit test code temporarily until the changes are in place. These tests will be re-enabled in a subsequent commit, once all the LMB module and corresponding test code changes have been made.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since rfc: New patch
 configs/sandbox64_defconfig        | 4 +---
 configs/sandbox_defconfig          | 7 ++++---
 configs/sandbox_flattree_defconfig | 4 +---
 configs/sandbox_noinst_defconfig   | 8 ++++----
 configs/sandbox_spl_defconfig      | 8 ++++----
 configs/sandbox_vpl_defconfig      | 7 ++-----
 configs/snow_defconfig             | 2 +-
 7 files changed, 17 insertions(+), 23 deletions(-)
diff --git a/configs/sandbox64_defconfig b/configs/sandbox64_defconfig index dd0582d2a0..dbcf65d29f 100644 --- a/configs/sandbox64_defconfig +++ b/configs/sandbox64_defconfig @@ -3,6 +3,7 @@ CONFIG_SYS_MALLOC_LEN=0x6000000 CONFIG_NR_DRAM_BANKS=1 CONFIG_ENV_SIZE=0x2000 CONFIG_DEFAULT_DEVICE_TREE="sandbox64" +CONFIG_OF_LIBFDT_OVERLAY=y CONFIG_DM_RESET=y CONFIG_PRE_CON_BUF_ADDR=0x100000 CONFIG_SYS_LOAD_ADDR=0x0 @@ -270,6 +271,3 @@ CONFIG_GETOPT=y CONFIG_EFI_RT_VOLATILE_STORE=y CONFIG_EFI_SECURE_BOOT=y CONFIG_TEST_FDTDEC=y -CONFIG_UNIT_TEST=y -CONFIG_UT_TIME=y -CONFIG_UT_DM=y diff --git a/configs/sandbox_defconfig b/configs/sandbox_defconfig index dc5fcdbd1c..9bb1813e77 100644 --- a/configs/sandbox_defconfig +++ b/configs/sandbox_defconfig @@ -3,6 +3,7 @@ CONFIG_SYS_MALLOC_LEN=0x6000000 CONFIG_NR_DRAM_BANKS=1 CONFIG_ENV_SIZE=0x2000 CONFIG_DEFAULT_DEVICE_TREE="sandbox" +CONFIG_OF_LIBFDT_OVERLAY=y CONFIG_DM_RESET=y CONFIG_PRE_CON_BUF_ADDR=0xf0000 CONFIG_SYS_LOAD_ADDR=0x0 @@ -345,6 +346,9 @@ CONFIG_ADDR_MAP=y CONFIG_CMD_DHRYSTONE=y CONFIG_ECDSA=y CONFIG_ECDSA_VERIFY=y +CONFIG_CRYPT_PW=y +CONFIG_CRYPT_PW_SHA256=y +CONFIG_CRYPT_PW_SHA512=y CONFIG_TPM=y CONFIG_ERRNO_STR=y CONFIG_GETOPT=y @@ -356,6 +360,3 @@ CONFIG_EFI_CAPSULE_AUTHENTICATE=y CONFIG_EFI_CAPSULE_CRT_FILE="board/sandbox/capsule_pub_key_good.crt" CONFIG_EFI_SECURE_BOOT=y CONFIG_TEST_FDTDEC=y -CONFIG_UNIT_TEST=y -CONFIG_UT_TIME=y -CONFIG_UT_DM=y diff --git a/configs/sandbox_flattree_defconfig b/configs/sandbox_flattree_defconfig index 049a606613..b87bd145bb 100644 --- a/configs/sandbox_flattree_defconfig +++ b/configs/sandbox_flattree_defconfig @@ -2,6 +2,7 @@ CONFIG_TEXT_BASE=0 CONFIG_NR_DRAM_BANKS=1 CONFIG_ENV_SIZE=0x2000 CONFIG_DEFAULT_DEVICE_TREE="sandbox" +CONFIG_OF_LIBFDT_OVERLAY=y CONFIG_DM_RESET=y CONFIG_SYS_LOAD_ADDR=0x0 CONFIG_PCI=y @@ -228,6 +229,3 @@ CONFIG_EFI_CAPSULE_ON_DISK=y CONFIG_EFI_CAPSULE_FIRMWARE_FIT=y CONFIG_EFI_CAPSULE_AUTHENTICATE=y 
CONFIG_EFI_CAPSULE_CRT_FILE="board/sandbox/capsule_pub_key_good.crt" -CONFIG_UNIT_TEST=y -CONFIG_UT_TIME=y -CONFIG_UT_DM=y diff --git a/configs/sandbox_noinst_defconfig b/configs/sandbox_noinst_defconfig index f37230151a..3e5ef854f6 100644 --- a/configs/sandbox_noinst_defconfig +++ b/configs/sandbox_noinst_defconfig @@ -6,6 +6,7 @@ CONFIG_NR_DRAM_BANKS=1 CONFIG_ENV_SIZE=0x2000 CONFIG_SPL_DM_SPI=y CONFIG_DEFAULT_DEVICE_TREE="sandbox" +CONFIG_OF_LIBFDT_OVERLAY=y CONFIG_DM_RESET=y CONFIG_SPL_MMC=y CONFIG_SPL_SERIAL=y @@ -131,6 +132,7 @@ CONFIG_NETCONSOLE=y CONFIG_IP_DEFRAG=y CONFIG_BOOTP_SERVERIP=y CONFIG_SPL_DM=y +CONFIG_SPL_DM_DEVICE_REMOVE=y CONFIG_DM_DMA=y CONFIG_REGMAP=y CONFIG_SPL_REGMAP=y @@ -277,11 +279,9 @@ CONFIG_FS_CRAMFS=y # CONFIG_SPL_USE_TINY_PRINTF is not set CONFIG_CMD_DHRYSTONE=y CONFIG_RSA_VERIFY_WITH_PKEY=y +CONFIG_X509_CERTIFICATE_PARSER=y +CONFIG_PKCS7_MESSAGE_PARSER=y CONFIG_TPM=y CONFIG_ZSTD=y CONFIG_SPL_LZMA=y CONFIG_ERRNO_STR=y -CONFIG_UNIT_TEST=y -CONFIG_SPL_UNIT_TEST=y -CONFIG_UT_TIME=y -CONFIG_UT_DM=y diff --git a/configs/sandbox_spl_defconfig b/configs/sandbox_spl_defconfig index f7b92dc844..2823bde492 100644 --- a/configs/sandbox_spl_defconfig +++ b/configs/sandbox_spl_defconfig @@ -5,6 +5,7 @@ CONFIG_SPL_LIBGENERIC_SUPPORT=y CONFIG_NR_DRAM_BANKS=1 CONFIG_ENV_SIZE=0x2000 CONFIG_DEFAULT_DEVICE_TREE="sandbox" +CONFIG_OF_LIBFDT_OVERLAY=y CONFIG_DM_RESET=y CONFIG_SPL_SERIAL=y CONFIG_SPL_DRIVERS_MISC=y @@ -107,6 +108,7 @@ CONFIG_NETCONSOLE=y CONFIG_IP_DEFRAG=y CONFIG_BOOTP_SERVERIP=y CONFIG_SPL_DM=y +CONFIG_SPL_DM_DEVICE_REMOVE=y CONFIG_DM_DMA=y CONFIG_REGMAP=y CONFIG_SPL_REGMAP=y @@ -243,13 +245,11 @@ CONFIG_FS_CRAMFS=y # CONFIG_SPL_USE_TINY_PRINTF is not set CONFIG_CMD_DHRYSTONE=y CONFIG_RSA_VERIFY_WITH_PKEY=y +CONFIG_X509_CERTIFICATE_PARSER=y +CONFIG_PKCS7_MESSAGE_PARSER=y CONFIG_TPM=y CONFIG_SPL_CRC8=y CONFIG_ZSTD=y CONFIG_SPL_LZMA=y CONFIG_ERRNO_STR=y CONFIG_SPL_HEXDUMP=y -CONFIG_UNIT_TEST=y -CONFIG_SPL_UNIT_TEST=y -CONFIG_UT_TIME=y 
-CONFIG_UT_DM=y diff --git a/configs/sandbox_vpl_defconfig b/configs/sandbox_vpl_defconfig index 72483d8ba1..a8e9e16746 100644 --- a/configs/sandbox_vpl_defconfig +++ b/configs/sandbox_vpl_defconfig @@ -5,6 +5,7 @@ CONFIG_NR_DRAM_BANKS=1 CONFIG_ENV_SIZE=0x2000 CONFIG_DEFAULT_DEVICE_TREE="sandbox" CONFIG_SPL_TEXT_BASE=0x100000 +CONFIG_OF_LIBFDT_OVERLAY=y CONFIG_DM_RESET=y CONFIG_SPL_MMC=y CONFIG_SPL_SERIAL=y @@ -118,6 +119,7 @@ CONFIG_NETCONSOLE=y CONFIG_IP_DEFRAG=y CONFIG_SPL_DM=y CONFIG_TPL_DM=y +CONFIG_SPL_DM_DEVICE_REMOVE=y CONFIG_SPL_DM_SEQ_ALIAS=y CONFIG_DM_DMA=y CONFIG_REGMAP=y @@ -252,8 +254,3 @@ CONFIG_TPM=y CONFIG_ZSTD=y # CONFIG_VPL_LZMA is not set CONFIG_ERRNO_STR=y -CONFIG_UNIT_TEST=y -CONFIG_SPL_UNIT_TEST=y -CONFIG_UT_TIME=y -CONFIG_UT_DM=y -# CONFIG_SPL_UT_LOAD_OS is not set diff --git a/configs/snow_defconfig b/configs/snow_defconfig index 2c0757194b..637c51d2c2 100644 --- a/configs/snow_defconfig +++ b/configs/snow_defconfig @@ -19,6 +19,7 @@ CONFIG_ENV_OFFSET=0x3FC000 CONFIG_ENV_SECT_SIZE=0x4000 CONFIG_DEFAULT_DEVICE_TREE="exynos5250-snow" CONFIG_SPL_TEXT_BASE=0x02023400 +CONFIG_OF_LIBFDT_OVERLAY=y CONFIG_SPL=y CONFIG_DEBUG_UART_BASE=0x12c30000 CONFIG_DEBUG_UART_CLOCK=100000000 @@ -107,4 +108,3 @@ CONFIG_VIDEO_BRIDGE_PARADE_PS862X=y CONFIG_VIDEO_BRIDGE_NXP_PTN3460=y CONFIG_TPM=y CONFIG_ERRNO_STR=y -CONFIG_UNIT_TEST=y

The lmb_is_reserved() API is unused. There is another API, lmb_is_reserved_flags(), which can be used to check whether a particular memory region is reserved. Remove the unused API.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Reviewed-by: Simon Glass <sjg@chromium.org>
---
Changes since rfc: None
 include/lmb.h | 11 -----------
 lib/lmb.c     |  5 -----
 2 files changed, 16 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index 231b68b27d..6c50d93e83 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -117,17 +117,6 @@ phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size); phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);
-/** - * lmb_is_reserved() - test if address is in reserved region - * - * The function checks if a reserved region comprising @addr exists. - * - * @lmb: the logical memory block struct - * @addr: address to be tested - * Return: 1 if reservation exists, 0 otherwise - */ -int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr); - /** * lmb_is_reserved_flags() - test if address is in reserved region with flag bits set * diff --git a/lib/lmb.c b/lib/lmb.c index 44f9820531..adc3abd5b4 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -565,11 +565,6 @@ int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags) return 0; }
-int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr) -{ - return lmb_is_reserved_flags(lmb, addr, LMB_NONE); -} - __weak void board_lmb_reserve(struct lmb *lmb) { /* please define platform specific board_lmb_reserve() */

The __lmb_alloc_base() function is only called from within the LMB module. Moreover, the lmb_alloc() and lmb_alloc_base() APIs are sufficient for the allocation calls. Make the __lmb_alloc_base() function static.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Reviewed-by: Simon Glass <sjg@chromium.org>
---
Changes since rfc: None
 include/lmb.h |  2 --
 lib/lmb.c     | 39 ++++++++++++++++++++-------------------
 2 files changed, 20 insertions(+), 21 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index 6c50d93e83..7b87181b9e 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -112,8 +112,6 @@ long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align); phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr); -phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, - phys_addr_t max_addr); phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size); phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);
diff --git a/lib/lmb.c b/lib/lmb.c index adc3abd5b4..4d39c0d1f9 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -435,30 +435,13 @@ static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base, return (i < rgn->cnt) ? i : -1; }
-phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align) -{ - return lmb_alloc_base(lmb, size, align, LMB_ALLOC_ANYWHERE); -} - -phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr) -{ - phys_addr_t alloc; - - alloc = __lmb_alloc_base(lmb, size, align, max_addr); - - if (alloc == 0) - printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n", - (ulong)size, (ulong)max_addr); - - return alloc; -} - static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size) { return addr & ~(size - 1); }
-phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr) +static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, + ulong align, phys_addr_t max_addr) { long i, rgn; phys_addr_t base = 0; @@ -499,6 +482,24 @@ phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phy return 0; }
+phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align) +{ + return lmb_alloc_base(lmb, size, align, LMB_ALLOC_ANYWHERE); +} + +phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr) +{ + phys_addr_t alloc; + + alloc = __lmb_alloc_base(lmb, size, align, max_addr); + + if (alloc == 0) + printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n", + (ulong)size, (ulong)max_addr); + + return alloc; +} + /* * Try to allocate a specific address range: must be in defined memory but not * reserved

Use the BIT macro for assigning values to the LMB flags, instead of assigning arbitrary values to them.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since rfc: None
 include/lmb.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 7b87181b9e..a1de18c3cb 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -5,6 +5,7 @@

 #include <asm/types.h>
 #include <asm/u-boot.h>
+#include <linux/bitops.h>

 /*
  * Logical memory blocks.
@@ -18,8 +19,8 @@
  * @LMB_NOMAP: don't add to mmu configuration
  */
 enum lmb_flags {
-	LMB_NONE = 0x0,
-	LMB_NOMAP = 0x4,
+	LMB_NONE = BIT(0),
+	LMB_NOMAP = BIT(1),
 };

 /**

On Wed, 24 Jul 2024 at 09:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Use the BIT macro for assigning values to the LMB flags instead of assigning random values to them.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
include/lmb.h | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 7b87181b9e..a1de18c3cb 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -5,6 +5,7 @@

 #include <asm/types.h>
 #include <asm/u-boot.h>
+#include <linux/bitops.h>
 /*
  * Logical memory blocks.
@@ -18,8 +19,8 @@
  * @LMB_NOMAP: don't add to mmu configuration
  */
 enum lmb_flags {
-	LMB_NONE = 0x0,
-	LMB_NOMAP = 0x4,
+	LMB_NONE = BIT(0),
+	LMB_NOMAP = BIT(1),
 };

 /**
--
2.34.1
Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org

The current LMB APIs for allocating and reserving memory use a per-caller memory view. Memory allocated by one caller can then be overwritten by another caller. Make these allocations and reservations persistent, using the alloced list data structure.
Two alloced lists are declared -- one for the available (free) memory, and one for the used memory. Once full, a list can be extended at runtime.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since rfc:
* Squash patches 9 - 11, 13 from the rfc v2 series into a single patch to make it bisectable.
arch/arc/lib/cache.c | 4 +- arch/arm/lib/stack.c | 4 +- arch/arm/mach-apple/board.c | 17 +- arch/arm/mach-snapdragon/board.c | 17 +- arch/arm/mach-stm32mp/dram_init.c | 8 +- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 6 +- arch/m68k/lib/bootm.c | 7 +- arch/microblaze/lib/bootm.c | 4 +- arch/mips/lib/bootm.c | 11 +- arch/nios2/lib/bootm.c | 4 +- arch/powerpc/cpu/mpc85xx/mp.c | 4 +- arch/powerpc/include/asm/mp.h | 4 +- arch/powerpc/lib/bootm.c | 14 +- arch/riscv/lib/bootm.c | 4 +- arch/sh/lib/bootm.c | 4 +- arch/x86/lib/bootm.c | 4 +- arch/xtensa/lib/bootm.c | 4 +- board/xilinx/common/board.c | 8 +- boot/bootm.c | 26 +- boot/bootm_os.c | 5 +- boot/image-board.c | 34 +-- boot/image-fdt.c | 36 ++- cmd/bdinfo.c | 6 +- cmd/booti.c | 2 +- cmd/bootz.c | 2 +- cmd/elf.c | 2 +- cmd/load.c | 7 +- drivers/iommu/apple_dart.c | 8 +- drivers/iommu/sandbox_iommu.c | 16 +- fs/fs.c | 7 +- include/image.h | 28 +- include/lmb.h | 114 +++----- lib/efi_loader/efi_dt_fixup.c | 2 +- lib/efi_loader/efi_helper.c | 2 +- lib/lmb.c | 395 +++++++++++++++------------ net/tftp.c | 5 +- net/wget.c | 5 +- test/cmd/bdinfo.c | 2 +- test/lib/lmb.c | 205 ++++++-------- 39 files changed, 477 insertions(+), 560 deletions(-)
diff --git a/arch/arc/lib/cache.c b/arch/arc/lib/cache.c index 22e748868a..5151af917a 100644 --- a/arch/arc/lib/cache.c +++ b/arch/arc/lib/cache.c @@ -829,7 +829,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/arm/lib/stack.c b/arch/arm/lib/stack.c index ea1b937add..87d5c962d7 100644 --- a/arch/arm/lib/stack.c +++ b/arch/arm/lib/stack.c @@ -42,7 +42,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 16384); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 16384); } diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 8bace3005e..213390d6e8 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -773,23 +773,22 @@ u64 get_page_table_size(void)
int board_late_init(void) { - struct lmb lmb; u32 status = 0;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* somewhat based on the Linux Kernel boot requirements: * align by 2M and maximal FDT size 2M */ - status |= env_set_hex("loadaddr", lmb_alloc(&lmb, SZ_1G, SZ_2M)); - status |= env_set_hex("fdt_addr_r", lmb_alloc(&lmb, SZ_2M, SZ_2M)); - status |= env_set_hex("kernel_addr_r", lmb_alloc(&lmb, SZ_128M, SZ_2M)); - status |= env_set_hex("ramdisk_addr_r", lmb_alloc(&lmb, SZ_1G, SZ_2M)); + status |= env_set_hex("loadaddr", lmb_alloc(SZ_1G, SZ_2M)); + status |= env_set_hex("fdt_addr_r", lmb_alloc(SZ_2M, SZ_2M)); + status |= env_set_hex("kernel_addr_r", lmb_alloc(SZ_128M, SZ_2M)); + status |= env_set_hex("ramdisk_addr_r", lmb_alloc(SZ_1G, SZ_2M)); status |= env_set_hex("kernel_comp_addr_r", - lmb_alloc(&lmb, KERNEL_COMP_SIZE, SZ_2M)); + lmb_alloc(KERNEL_COMP_SIZE, SZ_2M)); status |= env_set_hex("kernel_comp_size", KERNEL_COMP_SIZE); - status |= env_set_hex("scriptaddr", lmb_alloc(&lmb, SZ_4M, SZ_2M)); - status |= env_set_hex("pxefile_addr_r", lmb_alloc(&lmb, SZ_4M, SZ_2M)); + status |= env_set_hex("scriptaddr", lmb_alloc(SZ_4M, SZ_2M)); + status |= env_set_hex("pxefile_addr_r", lmb_alloc(SZ_4M, SZ_2M));
if (status) log_warning("late_init: Failed to set run time variables\n"); diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index b439a19ec7..a63c8bec45 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -275,24 +275,23 @@ void __weak qcom_late_init(void)
#define KERNEL_COMP_SIZE SZ_64M
-#define addr_alloc(lmb, size) lmb_alloc(lmb, size, SZ_2M) +#define addr_alloc(size) lmb_alloc(size, SZ_2M)
/* Stolen from arch/arm/mach-apple/board.c */ int board_late_init(void) { - struct lmb lmb; u32 status = 0;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */ - status |= env_set_hex("kernel_addr_r", addr_alloc(&lmb, SZ_128M)); - status |= env_set_hex("ramdisk_addr_r", addr_alloc(&lmb, SZ_128M)); - status |= env_set_hex("kernel_comp_addr_r", addr_alloc(&lmb, KERNEL_COMP_SIZE)); + status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M)); + status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M)); + status |= env_set_hex("kernel_comp_addr_r", addr_alloc(KERNEL_COMP_SIZE)); status |= env_set_hex("kernel_comp_size", KERNEL_COMP_SIZE); - status |= env_set_hex("scriptaddr", addr_alloc(&lmb, SZ_4M)); - status |= env_set_hex("pxefile_addr_r", addr_alloc(&lmb, SZ_4M)); - status |= env_set_hex("fdt_addr_r", addr_alloc(&lmb, SZ_2M)); + status |= env_set_hex("scriptaddr", addr_alloc(SZ_4M)); + status |= env_set_hex("pxefile_addr_r", addr_alloc(SZ_4M)); + status |= env_set_hex("fdt_addr_r", addr_alloc(SZ_2M));
if (status) log_warning("%s: Failed to set run time variables\n", __func__); diff --git a/arch/arm/mach-stm32mp/dram_init.c b/arch/arm/mach-stm32mp/dram_init.c index 6024959b97..e8b0a38be1 100644 --- a/arch/arm/mach-stm32mp/dram_init.c +++ b/arch/arm/mach-stm32mp/dram_init.c @@ -47,7 +47,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) { phys_size_t size; phys_addr_t reg; - struct lmb lmb;
if (!total_size) return gd->ram_top; @@ -59,12 +58,11 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) gd->ram_top = clamp_val(gd->ram_top, 0, SZ_4G - 1);
/* found enough not-reserved memory to relocated U-Boot */ - lmb_init(&lmb); - lmb_add(&lmb, gd->ram_base, gd->ram_top - gd->ram_base); - boot_fdt_add_mem_rsv_regions(&lmb, (void *)gd->fdt_blob); + lmb_add(gd->ram_base, gd->ram_top - gd->ram_base); + boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); /* add 8M for reserved memory for display, fdt, gd,... */ size = ALIGN(SZ_8M + CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE), - reg = lmb_alloc(&lmb, size, MMU_SECTION_SIZE); + reg = lmb_alloc(size, MMU_SECTION_SIZE);
if (!reg) reg = gd->ram_top - size; diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index 478c3efae7..a913737342 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -30,8 +30,6 @@ */ u8 early_tlb[PGTABLE_SIZE] __section(".data") __aligned(0x4000);
-struct lmb lmb; - u32 get_bootmode(void) { /* read bootmode from TAMP backup register */ @@ -80,7 +78,7 @@ void dram_bank_mmu_setup(int bank) i < (start >> MMU_SECTION_SHIFT) + (size >> MMU_SECTION_SHIFT); i++) { option = DCACHE_DEFAULT_OPTION; - if (use_lmb && lmb_is_reserved_flags(&lmb, i << MMU_SECTION_SHIFT, LMB_NOMAP)) + if (use_lmb && lmb_is_reserved_flags(i << MMU_SECTION_SHIFT, LMB_NOMAP)) option = 0; /* INVALID ENTRY in TLB */ set_section_dcache(i, option); } @@ -144,7 +142,7 @@ int mach_cpu_init(void) void enable_caches(void) { /* parse device tree when data cache is still activated */ - lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* I-cache is already enabled in start.S: icache_enable() not needed */
diff --git a/arch/m68k/lib/bootm.c b/arch/m68k/lib/bootm.c index f2d02e4376..eb220d178d 100644 --- a/arch/m68k/lib/bootm.c +++ b/arch/m68k/lib/bootm.c @@ -30,9 +30,9 @@ DECLARE_GLOBAL_DATA_PTR; static ulong get_sp (void); static void set_clocks_in_mhz (struct bd_info *kbd);
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 1024); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 1024); }
int do_bootm_linux(int flag, struct bootm_info *bmi) @@ -41,7 +41,6 @@ int do_bootm_linux(int flag, struct bootm_info *bmi) int ret; struct bd_info *kbd; void (*kernel) (struct bd_info *, ulong, ulong, ulong, ulong); - struct lmb *lmb = &images->lmb;
/* * allow the PREP bootm subcommand, it is required for bootm to work @@ -53,7 +52,7 @@ int do_bootm_linux(int flag, struct bootm_info *bmi) return 1;
/* allocate space for kernel copy of board info */ - ret = boot_get_kbd (lmb, &kbd); + ret = boot_get_kbd (&kbd); if (ret) { puts("ERROR with allocation of kernel bd\n"); goto error; diff --git a/arch/microblaze/lib/bootm.c b/arch/microblaze/lib/bootm.c index cbe9d85aa9..ce96bca28f 100644 --- a/arch/microblaze/lib/bootm.c +++ b/arch/microblaze/lib/bootm.c @@ -32,9 +32,9 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); }
static void boot_jump_linux(struct bootm_headers *images, int flag) diff --git a/arch/mips/lib/bootm.c b/arch/mips/lib/bootm.c index adb6b6cc22..8fb3a3923f 100644 --- a/arch/mips/lib/bootm.c +++ b/arch/mips/lib/bootm.c @@ -37,9 +37,9 @@ static ulong arch_get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, arch_get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(arch_get_sp(), gd->ram_top, 4096); }
static void linux_cmdline_init(void) @@ -225,9 +225,8 @@ static int boot_reloc_fdt(struct bootm_headers *images) }
#if CONFIG_IS_ENABLED(MIPS_BOOT_FDT) && CONFIG_IS_ENABLED(OF_LIBFDT) - boot_fdt_add_mem_rsv_regions(&images->lmb, images->ft_addr); - return boot_relocate_fdt(&images->lmb, &images->ft_addr, - &images->ft_len); + boot_fdt_add_mem_rsv_regions(images->ft_addr); + return boot_relocate_fdt(&images->ft_addr, &images->ft_len); #else return 0; #endif @@ -248,7 +247,7 @@ static int boot_setup_fdt(struct bootm_headers *images) images->initrd_start = virt_to_phys((void *)images->initrd_start); images->initrd_end = virt_to_phys((void *)images->initrd_end);
- return image_setup_libfdt(images, images->ft_addr, &images->lmb); + return image_setup_libfdt(images, images->ft_addr, true); }
static void boot_prep_linux(struct bootm_headers *images) diff --git a/arch/nios2/lib/bootm.c b/arch/nios2/lib/bootm.c index ce939ff5e1..d33d45d28f 100644 --- a/arch/nios2/lib/bootm.c +++ b/arch/nios2/lib/bootm.c @@ -73,7 +73,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/powerpc/cpu/mpc85xx/mp.c b/arch/powerpc/cpu/mpc85xx/mp.c index 03f801ebbb..bed465cb2c 100644 --- a/arch/powerpc/cpu/mpc85xx/mp.c +++ b/arch/powerpc/cpu/mpc85xx/mp.c @@ -408,11 +408,11 @@ static void plat_mp_up(unsigned long bootpg, unsigned int pagesize) } #endif
-void cpu_mp_lmb_reserve(struct lmb *lmb) +void cpu_mp_lmb_reserve(void) { u32 bootpg = determine_mp_bootpg(NULL);
- lmb_reserve(lmb, bootpg, 4096); + lmb_reserve(bootpg, 4096); }
void setup_mp(void) diff --git a/arch/powerpc/include/asm/mp.h b/arch/powerpc/include/asm/mp.h index 8dacd2781d..b3f59be840 100644 --- a/arch/powerpc/include/asm/mp.h +++ b/arch/powerpc/include/asm/mp.h @@ -6,10 +6,8 @@ #ifndef _ASM_MP_H_ #define _ASM_MP_H_
-#include <lmb.h> - void setup_mp(void); -void cpu_mp_lmb_reserve(struct lmb *lmb); +void cpu_mp_lmb_reserve(void); u32 determine_mp_bootpg(unsigned int *pagesize); int is_core_disabled(int nr);
diff --git a/arch/powerpc/lib/bootm.c b/arch/powerpc/lib/bootm.c index 61e08728dd..6c35664ff3 100644 --- a/arch/powerpc/lib/bootm.c +++ b/arch/powerpc/lib/bootm.c @@ -116,7 +116,7 @@ static void boot_jump_linux(struct bootm_headers *images) return; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { phys_size_t bootm_size; ulong size, bootmap_base; @@ -139,13 +139,13 @@ void arch_lmb_reserve(struct lmb *lmb) ulong base = bootmap_base + size; printf("WARNING: adjusting available memory from 0x%lx to 0x%llx\n", size, (unsigned long long)bootm_size); - lmb_reserve(lmb, base, bootm_size - size); + lmb_reserve(base, bootm_size - size); }
- arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
#ifdef CONFIG_MP - cpu_mp_lmb_reserve(lmb); + cpu_mp_lmb_reserve(); #endif
return; @@ -166,7 +166,6 @@ static void boot_prep_linux(struct bootm_headers *images) static int boot_cmdline_linux(struct bootm_headers *images) { ulong of_size = images->ft_len; - struct lmb *lmb = &images->lmb; ulong *cmd_start = &images->cmdline_start; ulong *cmd_end = &images->cmdline_end;
@@ -174,7 +173,7 @@ static int boot_cmdline_linux(struct bootm_headers *images)
if (!of_size) { /* allocate space and init command line */ - ret = boot_get_cmdline (lmb, cmd_start, cmd_end); + ret = boot_get_cmdline (cmd_start, cmd_end); if (ret) { puts("ERROR with allocation of cmdline\n"); return ret; @@ -187,14 +186,13 @@ static int boot_cmdline_linux(struct bootm_headers *images) static int boot_bd_t_linux(struct bootm_headers *images) { ulong of_size = images->ft_len; - struct lmb *lmb = &images->lmb; struct bd_info **kbd = &images->kbd;
int ret = 0;
if (!of_size) { /* allocate space for kernel copy of board info */ - ret = boot_get_kbd (lmb, kbd); + ret = boot_get_kbd (kbd); if (ret) { puts("ERROR with allocation of kernel bd\n"); return ret; diff --git a/arch/riscv/lib/bootm.c b/arch/riscv/lib/bootm.c index 13cbaaba68..bbf62f9e05 100644 --- a/arch/riscv/lib/bootm.c +++ b/arch/riscv/lib/bootm.c @@ -142,7 +142,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/sh/lib/bootm.c b/arch/sh/lib/bootm.c index e298d766b5..44ac05988c 100644 --- a/arch/sh/lib/bootm.c +++ b/arch/sh/lib/bootm.c @@ -110,7 +110,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/x86/lib/bootm.c b/arch/x86/lib/bootm.c index 2c889bcd33..114b31012e 100644 --- a/arch/x86/lib/bootm.c +++ b/arch/x86/lib/bootm.c @@ -267,7 +267,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/xtensa/lib/bootm.c b/arch/xtensa/lib/bootm.c index 1de06b7fb5..bdbd6d4692 100644 --- a/arch/xtensa/lib/bootm.c +++ b/arch/xtensa/lib/bootm.c @@ -206,7 +206,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index 0b43407b9e..4056884400 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -675,7 +675,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) { phys_size_t size; phys_addr_t reg; - struct lmb lmb;
if (!total_size) return gd->ram_top; @@ -684,11 +683,10 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) panic("Not 64bit aligned DT location: %p\n", gd->fdt_blob);
/* found enough not-reserved memory to relocate U-Boot */ - lmb_init(&lmb); - lmb_add(&lmb, gd->ram_base, gd->ram_size); - boot_fdt_add_mem_rsv_regions(&lmb, (void *)gd->fdt_blob); + lmb_add(gd->ram_base, gd->ram_size); + boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); size = ALIGN(CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE); - reg = lmb_alloc(&lmb, size, MMU_SECTION_SIZE); + reg = lmb_alloc(size, MMU_SECTION_SIZE);
if (!reg) reg = gd->ram_top - size; diff --git a/boot/bootm.c b/boot/bootm.c index 376d63aafc..7b3fe551de 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -240,7 +240,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, }
#ifdef CONFIG_LMB -static void boot_start_lmb(struct bootm_headers *images) +static void boot_start_lmb(void) { phys_addr_t mem_start; phys_size_t mem_size; @@ -248,12 +248,11 @@ static void boot_start_lmb(struct bootm_headers *images) mem_start = env_get_bootm_low(); mem_size = env_get_bootm_size();
- lmb_init_and_reserve_range(&images->lmb, mem_start, - mem_size, NULL); + lmb_init_and_reserve_range(mem_start, mem_size, NULL); } #else -#define lmb_reserve(lmb, base, size) -static inline void boot_start_lmb(struct bootm_headers *images) { } +#define lmb_reserve(base, size) +static inline void boot_start_lmb(void) { } #endif
static int bootm_start(void) @@ -261,7 +260,7 @@ static int bootm_start(void) memset((void *)&images, 0, sizeof(images)); images.verify = env_get_yesno("verify");
- boot_start_lmb(&images); + boot_start_lmb();
bootstage_mark_name(BOOTSTAGE_ID_BOOTM_START, "bootm_start"); images.state = BOOTM_STATE_START; @@ -640,7 +639,7 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress) if (os.type == IH_TYPE_KERNEL_NOLOAD && os.comp != IH_COMP_NONE) { ulong req_size = ALIGN(image_len * 4, SZ_1M);
- load = lmb_alloc(&images->lmb, req_size, SZ_2M); + load = lmb_alloc(req_size, SZ_2M); if (!load) return 1; os.load = load; @@ -714,8 +713,7 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress) images->os.end = relocated_addr + image_size; }
- lmb_reserve(&images->lmb, images->os.load, (load_end - - images->os.load)); + lmb_reserve(images->os.load, (load_end - images->os.load)); return 0; }
@@ -1041,8 +1039,9 @@ int bootm_run_states(struct bootm_info *bmi, int states) if (!ret && (states & BOOTM_STATE_RAMDISK)) { ulong rd_len = images->rd_end - images->rd_start;
- ret = boot_ramdisk_high(&images->lmb, images->rd_start, - rd_len, &images->initrd_start, &images->initrd_end); + ret = boot_ramdisk_high(images->rd_start, rd_len, + &images->initrd_start, + &images->initrd_end); if (!ret) { env_set_hex("initrd_start", images->initrd_start); env_set_hex("initrd_end", images->initrd_end); @@ -1051,9 +1050,8 @@ int bootm_run_states(struct bootm_info *bmi, int states) #endif #if CONFIG_IS_ENABLED(OF_LIBFDT) && defined(CONFIG_LMB) if (!ret && (states & BOOTM_STATE_FDT)) { - boot_fdt_add_mem_rsv_regions(&images->lmb, images->ft_addr); - ret = boot_relocate_fdt(&images->lmb, &images->ft_addr, - &images->ft_len); + boot_fdt_add_mem_rsv_regions(images->ft_addr); + ret = boot_relocate_fdt(&images->ft_addr, &images->ft_len); } #endif
diff --git a/boot/bootm_os.c b/boot/bootm_os.c index 6a6621706f..e9522cd329 100644 --- a/boot/bootm_os.c +++ b/boot/bootm_os.c @@ -260,12 +260,11 @@ static void do_bootvx_fdt(struct bootm_headers *images) char *bootline; ulong of_size = images->ft_len; char **of_flat_tree = &images->ft_addr; - struct lmb *lmb = &images->lmb;
if (*of_flat_tree) { - boot_fdt_add_mem_rsv_regions(lmb, *of_flat_tree); + boot_fdt_add_mem_rsv_regions(*of_flat_tree);
- ret = boot_relocate_fdt(lmb, of_flat_tree, &of_size); + ret = boot_relocate_fdt(of_flat_tree, &of_size); if (ret) return;
diff --git a/boot/image-board.c b/boot/image-board.c index f212401304..1f8c1ac69f 100644 --- a/boot/image-board.c +++ b/boot/image-board.c @@ -515,7 +515,6 @@ int boot_get_ramdisk(char const *select, struct bootm_headers *images,
/** * boot_ramdisk_high - relocate init ramdisk - * @lmb: pointer to lmb handle, will be used for memory mgmt * @rd_data: ramdisk data start address * @rd_len: ramdisk data length * @initrd_start: pointer to a ulong variable, will hold final init ramdisk @@ -534,8 +533,8 @@ int boot_get_ramdisk(char const *select, struct bootm_headers *images, * 0 - success * -1 - failure */ -int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len, - ulong *initrd_start, ulong *initrd_end) +int boot_ramdisk_high(ulong rd_data, ulong rd_len, ulong *initrd_start, + ulong *initrd_end) { char *s; phys_addr_t initrd_high; @@ -561,13 +560,14 @@ int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len, debug(" in-place initrd\n"); *initrd_start = rd_data; *initrd_end = rd_data + rd_len; - lmb_reserve(lmb, rd_data, rd_len); + lmb_reserve(rd_data, rd_len); } else { if (initrd_high) - *initrd_start = (ulong)lmb_alloc_base(lmb, - rd_len, 0x1000, initrd_high); + *initrd_start = (ulong)lmb_alloc_base(rd_len, + 0x1000, + initrd_high); else - *initrd_start = (ulong)lmb_alloc(lmb, rd_len, + *initrd_start = (ulong)lmb_alloc(rd_len, 0x1000);
if (*initrd_start == 0) { @@ -800,7 +800,6 @@ int boot_get_loadable(struct bootm_headers *images)
/** * boot_get_cmdline - allocate and initialize kernel cmdline - * @lmb: pointer to lmb handle, will be used for memory mgmt * @cmd_start: pointer to a ulong variable, will hold cmdline start * @cmd_end: pointer to a ulong variable, will hold cmdline end * @@ -813,7 +812,7 @@ int boot_get_loadable(struct bootm_headers *images) * 0 - success * -1 - failure */ -int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end) +int boot_get_cmdline(ulong *cmd_start, ulong *cmd_end) { int barg; char *cmdline; @@ -827,7 +826,7 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end) return 0;
barg = IF_ENABLED_INT(CONFIG_SYS_BOOT_GET_CMDLINE, CONFIG_SYS_BARGSIZE); - cmdline = (char *)(ulong)lmb_alloc_base(lmb, barg, 0xf, + cmdline = (char *)(ulong)lmb_alloc_base(barg, 0xf, env_get_bootm_mapsize() + env_get_bootm_low()); if (!cmdline) return -1; @@ -848,7 +847,6 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end)
/** * boot_get_kbd - allocate and initialize kernel copy of board info - * @lmb: pointer to lmb handle, will be used for memory mgmt * @kbd: double pointer to board info data * * boot_get_kbd() allocates space for kernel copy of board info data below @@ -859,10 +857,9 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end) * 0 - success * -1 - failure */ -int boot_get_kbd(struct lmb *lmb, struct bd_info **kbd) +int boot_get_kbd(struct bd_info **kbd) { - *kbd = (struct bd_info *)(ulong)lmb_alloc_base(lmb, - sizeof(struct bd_info), + *kbd = (struct bd_info *)(ulong)lmb_alloc_base(sizeof(struct bd_info), 0xf, env_get_bootm_mapsize() + env_get_bootm_low()); @@ -883,17 +880,16 @@ int image_setup_linux(struct bootm_headers *images) { ulong of_size = images->ft_len; char **of_flat_tree = &images->ft_addr; - struct lmb *lmb = images_lmb(images); int ret;
/* This function cannot be called without lmb support */ if (!IS_ENABLED(CONFIG_LMB)) return -EFAULT; if (CONFIG_IS_ENABLED(OF_LIBFDT)) - boot_fdt_add_mem_rsv_regions(lmb, *of_flat_tree); + boot_fdt_add_mem_rsv_regions(*of_flat_tree);
if (IS_ENABLED(CONFIG_SYS_BOOT_GET_CMDLINE)) { - ret = boot_get_cmdline(lmb, &images->cmdline_start, + ret = boot_get_cmdline(&images->cmdline_start, &images->cmdline_end); if (ret) { puts("ERROR with allocation of cmdline\n"); @@ -902,13 +898,13 @@ int image_setup_linux(struct bootm_headers *images) }
if (CONFIG_IS_ENABLED(OF_LIBFDT)) { - ret = boot_relocate_fdt(lmb, of_flat_tree, &of_size); + ret = boot_relocate_fdt(of_flat_tree, &of_size); if (ret) return ret; }
if (CONFIG_IS_ENABLED(OF_LIBFDT) && of_size) { - ret = image_setup_libfdt(images, *of_flat_tree, lmb); + ret = image_setup_libfdt(images, *of_flat_tree, true); if (ret) return ret; } diff --git a/boot/image-fdt.c b/boot/image-fdt.c index 8332792b8e..ccafadec0d 100644 --- a/boot/image-fdt.c +++ b/boot/image-fdt.c @@ -68,12 +68,12 @@ static const struct legacy_img_hdr *image_get_fdt(ulong fdt_addr) } #endif
-static void boot_fdt_reserve_region(struct lmb *lmb, uint64_t addr, - uint64_t size, enum lmb_flags flags) +static void boot_fdt_reserve_region(uint64_t addr, uint64_t size, + enum lmb_flags flags) { long ret;
- ret = lmb_reserve_flags(lmb, addr, size, flags); + ret = lmb_reserve_flags(addr, size, flags); if (ret >= 0) { debug(" reserving fdt memory region: addr=%llx size=%llx flags=%x\n", (unsigned long long)addr, @@ -89,14 +89,13 @@ static void boot_fdt_reserve_region(struct lmb *lmb, uint64_t addr, /** * boot_fdt_add_mem_rsv_regions - Mark the memreserve and reserved-memory * sections as unusable - * @lmb: pointer to lmb handle, will be used for memory mgmt * @fdt_blob: pointer to fdt blob base address * * Adds the memreserve and reserved-memory regions in the dtb to the lmb block. * Adding the memreserve regions prevents u-boot from using them to store the * initrd or the fdt blob. */ -void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob) +void boot_fdt_add_mem_rsv_regions(void *fdt_blob) { uint64_t addr, size; int i, total, ret; @@ -112,7 +111,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob) for (i = 0; i < total; i++) { if (fdt_get_mem_rsv(fdt_blob, i, &addr, &size) != 0) continue; - boot_fdt_reserve_region(lmb, addr, size, LMB_NONE); + boot_fdt_reserve_region(addr, size, LMB_NONE); }
/* process reserved-memory */ @@ -130,7 +129,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob) flags = LMB_NOMAP; addr = res.start; size = res.end - res.start + 1; - boot_fdt_reserve_region(lmb, addr, size, flags); + boot_fdt_reserve_region(addr, size, flags); }
subnode = fdt_next_subnode(fdt_blob, subnode); @@ -140,7 +139,6 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
/** * boot_relocate_fdt - relocate flat device tree - * @lmb: pointer to lmb handle, will be used for memory mgmt * @of_flat_tree: pointer to a char* variable, will hold fdt start address * @of_size: pointer to a ulong variable, will hold fdt length * @@ -155,7 +153,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob) * 0 - success * 1 - failure */ -int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size) +int boot_relocate_fdt(char **of_flat_tree, ulong *of_size) { u64 start, size, usable, addr, low, mapsize; void *fdt_blob = *of_flat_tree; @@ -187,18 +185,17 @@ int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size) if (desired_addr == ~0UL) { /* All ones means use fdt in place */ of_start = fdt_blob; - lmb_reserve(lmb, map_to_sysmem(of_start), of_len); + lmb_reserve(map_to_sysmem(of_start), of_len); disable_relocation = 1; } else if (desired_addr) { - addr = lmb_alloc_base(lmb, of_len, 0x1000, - desired_addr); + addr = lmb_alloc_base(of_len, 0x1000, desired_addr); of_start = map_sysmem(addr, of_len); if (of_start == NULL) { puts("Failed using fdt_high value for Device Tree"); goto error; } } else { - addr = lmb_alloc(lmb, of_len, 0x1000); + addr = lmb_alloc(of_len, 0x1000); of_start = map_sysmem(addr, of_len); } } else { @@ -220,7 +217,7 @@ int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size) * for LMB allocation. */ usable = min(start + size, low + mapsize); - addr = lmb_alloc_base(lmb, of_len, 0x1000, usable); + addr = lmb_alloc_base(of_len, 0x1000, usable); of_start = map_sysmem(addr, of_len); /* Allocation succeeded, use this block. */ if (of_start != NULL) @@ -569,8 +566,7 @@ __weak int arch_fixup_fdt(void *blob) return 0; }
-int image_setup_libfdt(struct bootm_headers *images, void *blob, - struct lmb *lmb) +int image_setup_libfdt(struct bootm_headers *images, void *blob, bool lmb) { ulong *initrd_start = &images->initrd_start; ulong *initrd_end = &images->initrd_end; @@ -670,8 +666,8 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob, }
/* Delete the old LMB reservation */ - if (lmb) - lmb_free(lmb, map_to_sysmem(blob), fdt_totalsize(blob)); + if (CONFIG_IS_ENABLED(LMB) && lmb) + lmb_free(map_to_sysmem(blob), fdt_totalsize(blob));
ret = fdt_shrink_to_minimum(blob, 0); if (ret < 0) @@ -679,8 +675,8 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob, of_size = ret;
/* Create a new LMB reservation */ - if (lmb) - lmb_reserve(lmb, map_to_sysmem(blob), of_size); + if (CONFIG_IS_ENABLED(LMB) && lmb) + lmb_reserve(map_to_sysmem(blob), of_size);
#if defined(CONFIG_ARCH_KEYSTONE) if (IS_ENABLED(CONFIG_OF_BOARD_SETUP)) diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index 437ac4e863..b31e0208df 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -162,10 +162,8 @@ static int bdinfo_print_all(struct bd_info *bd) bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit); #endif if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { - struct lmb lmb; - - lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); - lmb_dump_all_force(&lmb); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); + lmb_dump_all_force(); if (IS_ENABLED(CONFIG_OF_REAL)) printf("devicetree = %s\n", fdtdec_get_srcname()); } diff --git a/cmd/booti.c b/cmd/booti.c index 62b19e8343..6018cbacf0 100644 --- a/cmd/booti.c +++ b/cmd/booti.c @@ -87,7 +87,7 @@ static int booti_start(struct bootm_info *bmi) images->os.start = relocated_addr; images->os.end = relocated_addr + image_size;
- lmb_reserve(&images->lmb, images->ep, le32_to_cpu(image_size)); + lmb_reserve(images->ep, le32_to_cpu(image_size));
/* * Handle the BOOTM_STATE_FINDOTHER state ourselves as we do not diff --git a/cmd/bootz.c b/cmd/bootz.c index 55837a7599..787203f5bd 100644 --- a/cmd/bootz.c +++ b/cmd/bootz.c @@ -56,7 +56,7 @@ static int bootz_start(struct cmd_tbl *cmdtp, int flag, int argc, if (ret != 0) return 1;
- lmb_reserve(&images->lmb, images->ep, zi_end - zi_start); + lmb_reserve(images->ep, zi_end - zi_start);
/* * Handle the BOOTM_STATE_FINDOTHER state ourselves as we do not diff --git a/cmd/elf.c b/cmd/elf.c index 673c6c3051..f07e344a59 100644 --- a/cmd/elf.c +++ b/cmd/elf.c @@ -70,7 +70,7 @@ int do_bootelf(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
fdt_set_totalsize((void *)fdt_addr, fdt_totalsize(fdt_addr) + CONFIG_SYS_FDT_PAD); - if (image_setup_libfdt(&img, (void *)fdt_addr, NULL)) + if (image_setup_libfdt(&img, (void *)fdt_addr, false)) return 1; } #endif diff --git a/cmd/load.c b/cmd/load.c index d773a25d70..56da3a4c5d 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -141,7 +141,6 @@ static int do_load_serial(struct cmd_tbl *cmdtp, int flag, int argc,
static ulong load_serial(long offset) { - struct lmb lmb; char record[SREC_MAXRECLEN + 1]; /* buffer for one S-Record */ char binbuf[SREC_MAXBINLEN]; /* buffer for binary data */ int binlen; /* no. of data bytes in S-Rec. */ @@ -154,7 +153,7 @@ static ulong load_serial(long offset) int line_count = 0; long ret;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
while (read_record(record, SREC_MAXRECLEN + 1) >= 0) { type = srec_decode(record, &binlen, &addr, binbuf); @@ -182,7 +181,7 @@ static ulong load_serial(long offset) { void *dst;
- ret = lmb_reserve(&lmb, store_addr, binlen); + ret = lmb_reserve(store_addr, binlen); if (ret) { printf("\nCannot overwrite reserved area (%08lx..%08lx)\n", store_addr, store_addr + binlen); @@ -191,7 +190,7 @@ static ulong load_serial(long offset) dst = map_sysmem(store_addr, binlen); memcpy(dst, binbuf, binlen); unmap_sysmem(dst); - lmb_free(&lmb, store_addr, binlen); + lmb_free(store_addr, binlen); } if ((store_addr) < start_addr) start_addr = store_addr; diff --git a/drivers/iommu/apple_dart.c b/drivers/iommu/apple_dart.c index 9327dea1e3..611ac7cd6d 100644 --- a/drivers/iommu/apple_dart.c +++ b/drivers/iommu/apple_dart.c @@ -70,7 +70,6 @@
struct apple_dart_priv { void *base; - struct lmb lmb; u64 *l1, *l2; int bypass, shift;
@@ -124,7 +123,7 @@ static dma_addr_t apple_dart_map(struct udevice *dev, void *addr, size_t size) off = (phys_addr_t)addr - paddr; psize = ALIGN(size + off, DART_PAGE_SIZE);
- dva = lmb_alloc(&priv->lmb, psize, DART_PAGE_SIZE); + dva = lmb_alloc(psize, DART_PAGE_SIZE);
idx = dva / DART_PAGE_SIZE; for (i = 0; i < psize / DART_PAGE_SIZE; i++) { @@ -160,7 +159,7 @@ static void apple_dart_unmap(struct udevice *dev, dma_addr_t addr, size_t size) (unsigned long)&priv->l2[idx + i]); priv->flush_tlb(priv);
- lmb_free(&priv->lmb, dva, psize); + lmb_free(dva, psize); }
static struct iommu_ops apple_dart_ops = { @@ -213,8 +212,7 @@ static int apple_dart_probe(struct udevice *dev) priv->dvabase = DART_PAGE_SIZE; priv->dvaend = SZ_4G - DART_PAGE_SIZE;
- lmb_init(&priv->lmb); - lmb_add(&priv->lmb, priv->dvabase, priv->dvaend - priv->dvabase); + lmb_add(priv->dvabase, priv->dvaend - priv->dvabase);
/* Disable translations. */ for (sid = 0; sid < priv->nsid; sid++) diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index e37976f86f..5b4a6a8982 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -11,14 +11,9 @@
#define IOMMU_PAGE_SIZE SZ_4K
-struct sandbox_iommu_priv { - struct lmb lmb; -}; - static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, size_t size) { - struct sandbox_iommu_priv *priv = dev_get_priv(dev); phys_addr_t paddr, dva; phys_size_t psize, off;
@@ -26,7 +21,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
- dva = lmb_alloc(&priv->lmb, psize, IOMMU_PAGE_SIZE); + dva = lmb_alloc(psize, IOMMU_PAGE_SIZE);
return dva + off; } @@ -34,7 +29,6 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, size_t size) { - struct sandbox_iommu_priv *priv = dev_get_priv(dev); phys_addr_t dva; phys_size_t psize;
@@ -42,7 +36,7 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE);
- lmb_free(&priv->lmb, dva, psize); + lmb_free(dva, psize); }
static struct iommu_ops sandbox_iommu_ops = { @@ -52,10 +46,7 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) { - struct sandbox_iommu_priv *priv = dev_get_priv(dev); - - lmb_init(&priv->lmb); - lmb_add(&priv->lmb, 0x89abc000, SZ_16K); + lmb_add(0x89abc000, SZ_16K);
return 0; } @@ -69,7 +60,6 @@ U_BOOT_DRIVER(sandbox_iommu) = { .name = "sandbox_iommu", .id = UCLASS_IOMMU, .of_match = sandbox_iommu_ids, - .priv_auto = sizeof(struct sandbox_iommu_priv), .ops = &sandbox_iommu_ops, .probe = sandbox_iommu_probe, }; diff --git a/fs/fs.c b/fs/fs.c index 0c47943f33..2c835eef86 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -531,7 +531,6 @@ int fs_size(const char *filename, loff_t *size) static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, loff_t len, struct fstype_info *info) { - struct lmb lmb; int ret; loff_t size; loff_t read_len; @@ -550,10 +549,10 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, if (len && len < read_len) read_len = len;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); - lmb_dump_all(&lmb); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); + lmb_dump_all();
- if (lmb_alloc_addr(&lmb, addr, read_len) == addr) + if (lmb_alloc_addr(addr, read_len) == addr) return 0;
log_err("** Reading file would overwrite reserved memory **\n"); diff --git a/include/image.h b/include/image.h index dd4042d1bd..74838a2f75 100644 --- a/include/image.h +++ b/include/image.h @@ -20,7 +20,6 @@ #include <stdbool.h>
/* Define this to avoid #ifdefs later on */ -struct lmb; struct fdt_region;
#ifdef USE_HOSTCC @@ -412,18 +411,8 @@ struct bootm_headers { #define BOOTM_STATE_PRE_LOAD 0x00000800 #define BOOTM_STATE_MEASURE 0x00001000 int state; - -#if defined(CONFIG_LMB) && !defined(USE_HOSTCC) - struct lmb lmb; /* for memory mgmt */ -#endif };
-#ifdef CONFIG_LMB -#define images_lmb(_images) (&(_images)->lmb) -#else -#define images_lmb(_images) NULL -#endif - extern struct bootm_headers images;
/* @@ -835,13 +824,13 @@ int boot_get_fdt(void *buf, const char *select, uint arch, struct bootm_headers *images, char **of_flat_tree, ulong *of_size);
-void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob); -int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size); +void boot_fdt_add_mem_rsv_regions(void *fdt_blob); +int boot_relocate_fdt(char **of_flat_tree, ulong *of_size);
-int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len, - ulong *initrd_start, ulong *initrd_end); -int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end); -int boot_get_kbd(struct lmb *lmb, struct bd_info **kbd); +int boot_ramdisk_high(ulong rd_data, ulong rd_len, ulong *initrd_start, + ulong *initrd_end); +int boot_get_cmdline(ulong *cmd_start, ulong *cmd_end); +int boot_get_kbd(struct bd_info **kbd);
/*******************************************************************/ /* Legacy format specific code (prefixed with image_) */ @@ -1029,11 +1018,10 @@ int image_decomp(int comp, ulong load, ulong image_start, int type, * * @images: Images information * @blob: FDT to update - * @lmb: Points to logical memory block structure + * @lmb: Flag indicating use of lmb for reserving FDT memory region * Return: 0 if ok, <0 on failure */ -int image_setup_libfdt(struct bootm_headers *images, void *blob, - struct lmb *lmb); +int image_setup_libfdt(struct bootm_headers *images, void *blob, bool lmb);
/** * Set up the FDT to use for booting a kernel diff --git a/include/lmb.h b/include/lmb.h index a1de18c3cb..a1cc45b726 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -24,97 +24,37 @@ enum lmb_flags { };
/** - * struct lmb_property - Description of one region. + * struct lmb_region - Description of one region. * * @base: Base address of the region. * @size: Size of the region * @flags: memory region attributes */ -struct lmb_property { +struct lmb_region { phys_addr_t base; phys_size_t size; enum lmb_flags flags; };
-/* - * For regions size management, see LMB configuration in KConfig - * all the #if test are done with CONFIG_LMB_USE_MAX_REGIONS (boolean) - * - * case 1. CONFIG_LMB_USE_MAX_REGIONS is defined (legacy mode) - * => CONFIG_LMB_MAX_REGIONS is used to configure the region size, - * directly in the array lmb_region.region[], with the same - * configuration for memory and reserved regions. - * - * case 2. CONFIG_LMB_USE_MAX_REGIONS is not defined, the size of each - * region is configurated *independently* with - * => CONFIG_LMB_MEMORY_REGIONS: struct lmb.memory_regions - * => CONFIG_LMB_RESERVED_REGIONS: struct lmb.reserved_regions - * lmb_region.region is only a pointer to the correct buffer, - * initialized in lmb_init(). This configuration is useful to manage - * more reserved memory regions with CONFIG_LMB_RESERVED_REGIONS. - */ - -/** - * struct lmb_region - Description of a set of region. - * - * @cnt: Number of regions. - * @max: Size of the region array, max value of cnt. - * @region: Array of the region properties - */ -struct lmb_region { - unsigned long cnt; - unsigned long max; -#if IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS) - struct lmb_property region[CONFIG_LMB_MAX_REGIONS]; -#else - struct lmb_property *region; -#endif -}; - -/** - * struct lmb - Logical memory block handle. - * - * Clients provide storage for Logical memory block (lmb) handles. - * The content of the structure is managed by the lmb library. - * A lmb struct is initialized by lmb_init() functions. - * The lmb struct is passed to all other lmb APIs. - * - * @memory: Description of memory regions. - * @reserved: Description of reserved regions. 
- * @memory_regions: Array of the memory regions (statically allocated) - * @reserved_regions: Array of the reserved regions (statically allocated) - */ -struct lmb { - struct lmb_region memory; - struct lmb_region reserved; -#if !IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS) - struct lmb_property memory_regions[CONFIG_LMB_MEMORY_REGIONS]; - struct lmb_property reserved_regions[CONFIG_LMB_RESERVED_REGIONS]; -#endif -}; - -void lmb_init(struct lmb *lmb); -void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob); -void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base, - phys_size_t size, void *fdt_blob); -long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size); -long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size); +void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); +void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, + void *fdt_blob); +long lmb_add(phys_addr_t base, phys_size_t size); +long lmb_reserve(phys_addr_t base, phys_size_t size); /** * lmb_reserve_flags - Reserve one region with a specific flags bitfield. * - * @lmb: the logical memory block struct * @base: base address of the memory region * @size: size of the memory region * @flags: flags for the memory region * Return: 0 if OK, > 0 for coalesced region or a negative error code. 
*/ -long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, - phys_size_t size, enum lmb_flags flags); -phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align); -phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, - phys_addr_t max_addr); -phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size); -phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr); +long lmb_reserve_flags(phys_addr_t base, phys_size_t size, + enum lmb_flags flags); +phys_addr_t lmb_alloc(phys_size_t size, ulong align); +phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr); +phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size); +phys_size_t lmb_get_free_size(phys_addr_t addr);
/** * lmb_is_reserved_flags() - test if address is in reserved region with flag bits set @@ -122,21 +62,33 @@ phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr); * The function checks if a reserved region comprising @addr exists which has * all flag bits set which are set in @flags. * - * @lmb: the logical memory block struct * @addr: address to be tested * @flags: bitmap with bits to be tested * Return: 1 if matching reservation exists, 0 otherwise */ -int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags); +int lmb_is_reserved_flags(phys_addr_t addr, int flags);
-long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size); +long lmb_free(phys_addr_t base, phys_size_t size);
-void lmb_dump_all(struct lmb *lmb); -void lmb_dump_all_force(struct lmb *lmb); +void lmb_dump_all(void); +void lmb_dump_all_force(void);
-void board_lmb_reserve(struct lmb *lmb); -void arch_lmb_reserve(struct lmb *lmb); -void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align); +void board_lmb_reserve(void); +void arch_lmb_reserve(void); +void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align); + +/** + * lmb_mem_regions_init() - Initialise the LMB memory + * + * Initialise the LMB subsystem related data structures. There are two + * alloced lists that are initialised, one for the free memory, and one + * for the used memory. + * + * Initialise the two lists as part of board init. + * + * Return: 0 if OK, -ve on failure. + */ +int lmb_mem_regions_init(void);
#endif /* __KERNEL__ */
diff --git a/lib/efi_loader/efi_dt_fixup.c b/lib/efi_loader/efi_dt_fixup.c index 9886e6897c..9d017804ee 100644 --- a/lib/efi_loader/efi_dt_fixup.c +++ b/lib/efi_loader/efi_dt_fixup.c @@ -172,7 +172,7 @@ efi_dt_fixup(struct efi_dt_fixup_protocol *this, void *dtb, }
fdt_set_totalsize(dtb, *buffer_size); - if (image_setup_libfdt(&img, dtb, NULL)) { + if (image_setup_libfdt(&img, dtb, false)) { log_err("failed to process device tree\n"); ret = EFI_INVALID_PARAMETER; goto out; diff --git a/lib/efi_loader/efi_helper.c b/lib/efi_loader/efi_helper.c index 348612c3da..13e97fb741 100644 --- a/lib/efi_loader/efi_helper.c +++ b/lib/efi_loader/efi_helper.c @@ -513,7 +513,7 @@ efi_status_t efi_install_fdt(void *fdt) return EFI_OUT_OF_RESOURCES; }
- if (image_setup_libfdt(&img, fdt, NULL)) { + if (image_setup_libfdt(&img, fdt, false)) { log_err("ERROR: failed to process device tree\n"); return EFI_LOAD_ERROR; } diff --git a/lib/lmb.c b/lib/lmb.c index 4d39c0d1f9..dd6f22654c 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -6,6 +6,7 @@ * Copyright (C) 2001 Peter Bergner. */
+#include <alist.h> #include <efi_loader.h> #include <image.h> #include <mapmem.h> @@ -15,41 +16,47 @@
#include <asm/global_data.h> #include <asm/sections.h> +#include <linux/kernel.h>
DECLARE_GLOBAL_DATA_PTR;
#define LMB_ALLOC_ANYWHERE 0 +#define LMB_ALIST_INITIAL_SIZE 4
-static void lmb_dump_region(struct lmb_region *rgn, char *name) +struct alist lmb_free_mem; +struct alist lmb_used_mem; + +static void lmb_dump_region(struct alist *lmb_rgn_lst, char *name) { + struct lmb_region *rgn = lmb_rgn_lst->data; unsigned long long base, size, end; enum lmb_flags flags; int i;
- printf(" %s.cnt = 0x%lx / max = 0x%lx\n", name, rgn->cnt, rgn->max); + printf(" %s.count = 0x%hx\n", name, lmb_rgn_lst->count);
- for (i = 0; i < rgn->cnt; i++) { - base = rgn->region[i].base; - size = rgn->region[i].size; + for (i = 0; i < lmb_rgn_lst->count; i++) { + base = rgn[i].base; + size = rgn[i].size; end = base + size - 1; - flags = rgn->region[i].flags; + flags = rgn[i].flags;
printf(" %s[%d]\t[0x%llx-0x%llx], 0x%08llx bytes flags: %x\n", name, i, base, end, size, flags); } }
-void lmb_dump_all_force(struct lmb *lmb) +void lmb_dump_all_force(void) { printf("lmb_dump_all:\n"); - lmb_dump_region(&lmb->memory, "memory"); - lmb_dump_region(&lmb->reserved, "reserved"); + lmb_dump_region(&lmb_free_mem, "memory"); + lmb_dump_region(&lmb_used_mem, "reserved"); }
-void lmb_dump_all(struct lmb *lmb) +void lmb_dump_all(void) { #ifdef DEBUG - lmb_dump_all_force(lmb); + lmb_dump_all_force(); #endif }
@@ -73,79 +80,74 @@ static long lmb_addrs_adjacent(phys_addr_t base1, phys_size_t size1, return 0; }
-static long lmb_regions_overlap(struct lmb_region *rgn, unsigned long r1, +static long lmb_regions_overlap(struct alist *lmb_rgn_lst, unsigned long r1, unsigned long r2) { - phys_addr_t base1 = rgn->region[r1].base; - phys_size_t size1 = rgn->region[r1].size; - phys_addr_t base2 = rgn->region[r2].base; - phys_size_t size2 = rgn->region[r2].size; + struct lmb_region *rgn = lmb_rgn_lst->data; + + phys_addr_t base1 = rgn[r1].base; + phys_size_t size1 = rgn[r1].size; + phys_addr_t base2 = rgn[r2].base; + phys_size_t size2 = rgn[r2].size;
return lmb_addrs_overlap(base1, size1, base2, size2); } -static long lmb_regions_adjacent(struct lmb_region *rgn, unsigned long r1, + +static long lmb_regions_adjacent(struct alist *lmb_rgn_lst, unsigned long r1, unsigned long r2) { - phys_addr_t base1 = rgn->region[r1].base; - phys_size_t size1 = rgn->region[r1].size; - phys_addr_t base2 = rgn->region[r2].base; - phys_size_t size2 = rgn->region[r2].size; + struct lmb_region *rgn = lmb_rgn_lst->data; + + phys_addr_t base1 = rgn[r1].base; + phys_size_t size1 = rgn[r1].size; + phys_addr_t base2 = rgn[r2].base; + phys_size_t size2 = rgn[r2].size; return lmb_addrs_adjacent(base1, size1, base2, size2); }
-static void lmb_remove_region(struct lmb_region *rgn, unsigned long r) +static void lmb_remove_region(struct alist *lmb_rgn_lst, unsigned long r) { unsigned long i; + struct lmb_region *rgn = lmb_rgn_lst->data;
- for (i = r; i < rgn->cnt - 1; i++) { - rgn->region[i].base = rgn->region[i + 1].base; - rgn->region[i].size = rgn->region[i + 1].size; - rgn->region[i].flags = rgn->region[i + 1].flags; + for (i = r; i < lmb_rgn_lst->count - 1; i++) { + rgn[i].base = rgn[i + 1].base; + rgn[i].size = rgn[i + 1].size; + rgn[i].flags = rgn[i + 1].flags; } - rgn->cnt--; + lmb_rgn_lst->count--; }
/* Assumption: base addr of region 1 < base addr of region 2 */ -static void lmb_coalesce_regions(struct lmb_region *rgn, unsigned long r1, +static void lmb_coalesce_regions(struct alist *lmb_rgn_lst, unsigned long r1, unsigned long r2) { - rgn->region[r1].size += rgn->region[r2].size; - lmb_remove_region(rgn, r2); + struct lmb_region *rgn = lmb_rgn_lst->data; + + rgn[r1].size += rgn[r2].size; + lmb_remove_region(lmb_rgn_lst, r2); }
/*Assumption : base addr of region 1 < base addr of region 2*/ -static void lmb_fix_over_lap_regions(struct lmb_region *rgn, unsigned long r1, - unsigned long r2) +static void lmb_fix_over_lap_regions(struct alist *lmb_rgn_lst, + unsigned long r1, unsigned long r2) { - phys_addr_t base1 = rgn->region[r1].base; - phys_size_t size1 = rgn->region[r1].size; - phys_addr_t base2 = rgn->region[r2].base; - phys_size_t size2 = rgn->region[r2].size; + struct lmb_region *rgn = lmb_rgn_lst->data; + + phys_addr_t base1 = rgn[r1].base; + phys_size_t size1 = rgn[r1].size; + phys_addr_t base2 = rgn[r2].base; + phys_size_t size2 = rgn[r2].size;
if (base1 + size1 > base2 + size2) { printf("This will not be a case any time\n"); return; } - rgn->region[r1].size = base2 + size2 - base1; - lmb_remove_region(rgn, r2); -} - -void lmb_init(struct lmb *lmb) -{ -#if IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS) - lmb->memory.max = CONFIG_LMB_MAX_REGIONS; - lmb->reserved.max = CONFIG_LMB_MAX_REGIONS; -#else - lmb->memory.max = CONFIG_LMB_MEMORY_REGIONS; - lmb->reserved.max = CONFIG_LMB_RESERVED_REGIONS; - lmb->memory.region = lmb->memory_regions; - lmb->reserved.region = lmb->reserved_regions; -#endif - lmb->memory.cnt = 0; - lmb->reserved.cnt = 0; + rgn[r1].size = base2 + size2 - base1; + lmb_remove_region(lmb_rgn_lst, r2); }
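`lmb_fix_over_lap_regions()` assumes region 1 starts below region 2 and, unless region 1 already covers region 2's end, extends region 1 up to region 2's end and drops region 2. The arithmetic can be checked in isolation — the function name below is illustrative, not U-Boot code:

```c
struct region { unsigned long long base, size; };

/* Assumes r1.base < r2.base and r1 does not already cover r2's end */
static struct region merge_overlap(struct region r1, struct region r2)
{
	struct region out = r1;

	/* extend region 1 up to the end of region 2 */
	out.size = (r2.base + r2.size) - r1.base;
	return out;
}
```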
-void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align) +void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align) { ulong bank_end; int bank; @@ -171,10 +173,10 @@ void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align) if (bank_end > end) bank_end = end - 1;
- lmb_reserve(lmb, sp, bank_end - sp + 1); + lmb_reserve(sp, bank_end - sp + 1);
if (gd->flags & GD_FLG_SKIP_RELOC) - lmb_reserve(lmb, (phys_addr_t)(uintptr_t)_start, gd->mon_len); + lmb_reserve((phys_addr_t)(uintptr_t)_start, gd->mon_len);
break; } @@ -186,10 +188,9 @@ void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align) * Add reservations for all EFI memory areas that are not * EFI_CONVENTIONAL_MEMORY. * - * @lmb: lmb environment * Return: 0 on success, 1 on failure */ -static __maybe_unused int efi_lmb_reserve(struct lmb *lmb) +static __maybe_unused int efi_lmb_reserve(void) { struct efi_mem_desc *memmap = NULL, *map; efi_uintn_t i, map_size = 0; @@ -201,8 +202,7 @@ static __maybe_unused int efi_lmb_reserve(struct lmb *lmb)
for (i = 0, map = memmap; i < map_size / sizeof(*map); ++map, ++i) { if (map->type != EFI_CONVENTIONAL_MEMORY) { - lmb_reserve_flags(lmb, - map_to_sysmem((void *)(uintptr_t) + lmb_reserve_flags(map_to_sysmem((void *)(uintptr_t) map->physical_start), map->num_pages * EFI_PAGE_SIZE, map->type == EFI_RESERVED_MEMORY_TYPE @@ -214,64 +214,63 @@ static __maybe_unused int efi_lmb_reserve(struct lmb *lmb) return 0; }
-static void lmb_reserve_common(struct lmb *lmb, void *fdt_blob) +static void lmb_reserve_common(void *fdt_blob) { - arch_lmb_reserve(lmb); - board_lmb_reserve(lmb); + arch_lmb_reserve(); + board_lmb_reserve();
if (CONFIG_IS_ENABLED(OF_LIBFDT) && fdt_blob) - boot_fdt_add_mem_rsv_regions(lmb, fdt_blob); + boot_fdt_add_mem_rsv_regions(fdt_blob);
if (CONFIG_IS_ENABLED(EFI_LOADER)) - efi_lmb_reserve(lmb); + efi_lmb_reserve(); }
/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob) +void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) { int i;
- lmb_init(lmb); - for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) { - if (bd->bi_dram[i].size) { - lmb_add(lmb, bd->bi_dram[i].start, - bd->bi_dram[i].size); - } + if (bd->bi_dram[i].size) + lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size); }
- lmb_reserve_common(lmb, fdt_blob); + lmb_reserve_common(fdt_blob); }
/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base, - phys_size_t size, void *fdt_blob) +void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, + void *fdt_blob) { - lmb_init(lmb); - lmb_add(lmb, base, size); - lmb_reserve_common(lmb, fdt_blob); + lmb_add(base, size); + lmb_reserve_common(fdt_blob); }
/* This routine called with relocation disabled. */ -static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, +static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base, phys_size_t size, enum lmb_flags flags) { unsigned long coalesced = 0; long adjacent, i; + struct lmb_region *rgn = lmb_rgn_lst->data; + + if (alist_err(lmb_rgn_lst)) + return -1;
- if (rgn->cnt == 0) { - rgn->region[0].base = base; - rgn->region[0].size = size; - rgn->region[0].flags = flags; - rgn->cnt = 1; + if (alist_empty(lmb_rgn_lst)) { + rgn[0].base = base; + rgn[0].size = size; + rgn[0].flags = flags; + lmb_rgn_lst->count = 1; return 0; }
/* First try and coalesce this LMB with another. */ - for (i = 0; i < rgn->cnt; i++) { - phys_addr_t rgnbase = rgn->region[i].base; - phys_size_t rgnsize = rgn->region[i].size; - phys_size_t rgnflags = rgn->region[i].flags; + for (i = 0; i < lmb_rgn_lst->count; i++) { + phys_addr_t rgnbase = rgn[i].base; + phys_size_t rgnsize = rgn[i].size; + phys_size_t rgnflags = rgn[i].flags; phys_addr_t end = base + size - 1; phys_addr_t rgnend = rgnbase + rgnsize - 1; if (rgnbase <= base && end <= rgnend) { @@ -286,14 +285,14 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, if (adjacent > 0) { if (flags != rgnflags) break; - rgn->region[i].base -= size; - rgn->region[i].size += size; + rgn[i].base -= size; + rgn[i].size += size; coalesced++; break; } else if (adjacent < 0) { if (flags != rgnflags) break; - rgn->region[i].size += size; + rgn[i].size += size; coalesced++; break; } else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) { @@ -302,99 +301,106 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, } }
- if (i < rgn->cnt - 1 && rgn->region[i].flags == rgn->region[i + 1].flags) { - if (lmb_regions_adjacent(rgn, i, i + 1)) { - lmb_coalesce_regions(rgn, i, i + 1); + if (i < lmb_rgn_lst->count - 1 && + rgn[i].flags == rgn[i + 1].flags) { + if (lmb_regions_adjacent(lmb_rgn_lst, i, i + 1)) { + lmb_coalesce_regions(lmb_rgn_lst, i, i + 1); coalesced++; - } else if (lmb_regions_overlap(rgn, i, i + 1)) { + } else if (lmb_regions_overlap(lmb_rgn_lst, i, i + 1)) { /* fix overlapping area */ - lmb_fix_over_lap_regions(rgn, i, i + 1); + lmb_fix_over_lap_regions(lmb_rgn_lst, i, i + 1); coalesced++; } }
if (coalesced) return coalesced; - if (rgn->cnt >= rgn->max) - return -1; + + if (alist_full(lmb_rgn_lst)) { + if (!alist_expand_by(lmb_rgn_lst, lmb_rgn_lst->alloc * 2)) + return -1; + else + rgn = lmb_rgn_lst->data; + }
/* Couldn't coalesce the LMB, so add it to the sorted table. */ - for (i = rgn->cnt-1; i >= 0; i--) { - if (base < rgn->region[i].base) { - rgn->region[i + 1].base = rgn->region[i].base; - rgn->region[i + 1].size = rgn->region[i].size; - rgn->region[i + 1].flags = rgn->region[i].flags; + for (i = lmb_rgn_lst->count - 1; i >= 0; i--) { + if (base < rgn[i].base) { + rgn[i + 1].base = rgn[i].base; + rgn[i + 1].size = rgn[i].size; + rgn[i + 1].flags = rgn[i].flags; } else { - rgn->region[i + 1].base = base; - rgn->region[i + 1].size = size; - rgn->region[i + 1].flags = flags; + rgn[i + 1].base = base; + rgn[i + 1].size = size; + rgn[i + 1].flags = flags; break; } }
- if (base < rgn->region[0].base) { - rgn->region[0].base = base; - rgn->region[0].size = size; - rgn->region[0].flags = flags; + if (base < rgn[0].base) { + rgn[0].base = base; + rgn[0].size = size; + rgn[0].flags = flags; }
- rgn->cnt++; + lmb_rgn_lst->count++;
return 0; }
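When a new region cannot be coalesced, the tail of `lmb_add_region_flags()` inserts it into the base-sorted table by shifting larger entries up one slot, with a final fixup for the slot-0 case. A standalone sketch of that insertion step (hypothetical helper; capacity checks are omitted since the real code grows the alist first):

```c
struct region { unsigned long long base, size; };

/* Insert into a base-sorted array, shifting larger entries up one slot.
 * Caller guarantees there is room for one more entry. */
static void sorted_insert(struct region *rgn, unsigned *count,
			  unsigned long long base, unsigned long long size)
{
	long i;

	for (i = (long)*count - 1; i >= 0; i--) {
		if (base < rgn[i].base) {
			rgn[i + 1] = rgn[i];	/* shift up */
		} else {
			rgn[i + 1].base = base;
			rgn[i + 1].size = size;
			break;
		}
	}
	if (i < 0) {		/* smaller than every entry: slot 0 */
		rgn[0].base = base;
		rgn[0].size = size;
	}
	(*count)++;
}
```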
-static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, +static long lmb_add_region(struct alist *lmb_rgn_lst, phys_addr_t base, phys_size_t size) { - return lmb_add_region_flags(rgn, base, size, LMB_NONE); + return lmb_add_region_flags(lmb_rgn_lst, base, size, LMB_NONE); }
/* This routine may be called with relocation disabled. */ -long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size) +long lmb_add(phys_addr_t base, phys_size_t size) { - struct lmb_region *_rgn = &(lmb->memory); + struct alist *lmb_rgn_lst = &lmb_free_mem;
- return lmb_add_region(_rgn, base, size); + return lmb_add_region(lmb_rgn_lst, base, size); }
-long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size) +long lmb_free(phys_addr_t base, phys_size_t size) { - struct lmb_region *rgn = &(lmb->reserved); + struct lmb_region *rgn; + struct alist *lmb_rgn_lst = &lmb_used_mem; phys_addr_t rgnbegin, rgnend; phys_addr_t end = base + size - 1; int i;
rgnbegin = rgnend = 0; /* suppress gcc warnings */ - + rgn = lmb_rgn_lst->data; /* Find the region where (base, size) belongs to */ - for (i = 0; i < rgn->cnt; i++) { - rgnbegin = rgn->region[i].base; - rgnend = rgnbegin + rgn->region[i].size - 1; + for (i = 0; i < lmb_rgn_lst->count; i++) { + rgnbegin = rgn[i].base; + rgnend = rgnbegin + rgn[i].size - 1;

if ((rgnbegin <= base) && (end <= rgnend)) break; }
/* Didn't find the region */ - if (i == rgn->cnt) + if (i == lmb_rgn_lst->count) return -1;
/* Check to see if we are removing entire region */ if ((rgnbegin == base) && (rgnend == end)) { - lmb_remove_region(rgn, i); + lmb_remove_region(lmb_rgn_lst, i); return 0; }
/* Check to see if region is matching at the front */ if (rgnbegin == base) { - rgn->region[i].base = end + 1; - rgn->region[i].size -= size; + rgn[i].base = end + 1; + rgn[i].size -= size; return 0; }
/* Check to see if the region is matching at the end */ if (rgnend == end) { - rgn->region[i].size -= size; + rgn[i].size -= size; return 0; }
@@ -402,37 +408,37 @@ long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size) * We need to split the entry - adjust the current one to the * beginning of the hole and add the region after hole. */ - rgn->region[i].size = base - rgn->region[i].base; - return lmb_add_region_flags(rgn, end + 1, rgnend - end, - rgn->region[i].flags); + rgn[i].size = base - rgn[i].base; + return lmb_add_region_flags(lmb_rgn_lst, end + 1, rgnend - end, + rgn[i].flags); }
-long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_size_t size, - enum lmb_flags flags) +long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags) { - struct lmb_region *_rgn = &(lmb->reserved); + struct alist *lmb_rgn_lst = &lmb_used_mem;
- return lmb_add_region_flags(_rgn, base, size, flags); + return lmb_add_region_flags(lmb_rgn_lst, base, size, flags); }
-long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size) +long lmb_reserve(phys_addr_t base, phys_size_t size) { - return lmb_reserve_flags(lmb, base, size, LMB_NONE); + return lmb_reserve_flags(base, size, LMB_NONE); }
-static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base, +static long lmb_overlaps_region(struct alist *lmb_rgn_lst, phys_addr_t base, phys_size_t size) { unsigned long i; + struct lmb_region *rgn = lmb_rgn_lst->data;
- for (i = 0; i < rgn->cnt; i++) { - phys_addr_t rgnbase = rgn->region[i].base; - phys_size_t rgnsize = rgn->region[i].size; + for (i = 0; i < lmb_rgn_lst->count; i++) { + phys_addr_t rgnbase = rgn[i].base; + phys_size_t rgnsize = rgn[i].size; if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) break; }
- return (i < rgn->cnt) ? i : -1; + return (i < lmb_rgn_lst->count) ? i : -1; }
static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size) @@ -440,16 +446,18 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size) return addr & ~(size - 1); }
-static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, - ulong align, phys_addr_t max_addr) +static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align, + phys_addr_t max_addr) { long i, rgn; phys_addr_t base = 0; phys_addr_t res_base; + struct lmb_region *lmb_used = lmb_used_mem.data; + struct lmb_region *lmb_memory = lmb_free_mem.data;
- for (i = lmb->memory.cnt - 1; i >= 0; i--) { - phys_addr_t lmbbase = lmb->memory.region[i].base; - phys_size_t lmbsize = lmb->memory.region[i].size; + for (i = lmb_free_mem.count - 1; i >= 0; i--) { + phys_addr_t lmbbase = lmb_memory[i].base; + phys_size_t lmbsize = lmb_memory[i].size;
if (lmbsize < size) continue; @@ -465,15 +473,16 @@ static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, continue;
while (base && lmbbase <= base) { - rgn = lmb_overlaps_region(&lmb->reserved, base, size); + rgn = lmb_overlaps_region(&lmb_used_mem, base, size); if (rgn < 0) { /* This area isn't reserved, take it */ - if (lmb_add_region(&lmb->reserved, base, + if (lmb_add_region(&lmb_used_mem, base, size) < 0) return 0; return base; } - res_base = lmb->reserved.region[rgn].base; + + res_base = lmb_used[rgn].base; if (res_base < size) break; base = lmb_align_down(res_base - size, align); @@ -482,16 +491,16 @@ static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, return 0; }
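`__lmb_alloc_base()` allocates top-down: it starts at the aligned top of each memory bank and, whenever the candidate range collides with a reserved region, steps down to just below that reservation, returning 0 on failure. A self-contained sketch of the search within one bank (names are illustrative, not the U-Boot functions):

```c
struct region { unsigned long long base, size; };

static unsigned long long align_down(unsigned long long addr,
				     unsigned long long align)
{
	return addr & ~(align - 1);
}

/* Find an overlapping reserved region; return its index or -1 */
static long find_overlap(const struct region *resv, unsigned nresv,
			 unsigned long long base, unsigned long long size)
{
	for (unsigned i = 0; i < nresv; i++) {
		if (base <= resv[i].base + resv[i].size - 1 &&
		    resv[i].base <= base + size - 1)
			return (long)i;
	}
	return -1;
}

/* Top-down allocation within one memory bank, stepping below any
 * reserved region that is in the way; 0 means failure, as in LMB */
static unsigned long long alloc_top_down(unsigned long long bank_base,
					 unsigned long long bank_size,
					 unsigned long long size,
					 unsigned long long align,
					 const struct region *resv,
					 unsigned nresv)
{
	unsigned long long base = align_down(bank_base + bank_size - size,
					     align);

	while (base && bank_base <= base) {
		long r = find_overlap(resv, nresv, base, size);

		if (r < 0)
			return base;	/* free: caller would reserve it */
		if (resv[r].base < size)
			break;		/* no room below this reservation */
		base = align_down(resv[r].base - size, align);
	}
	return 0;
}
```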
-phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align) +phys_addr_t lmb_alloc(phys_size_t size, ulong align) { - return lmb_alloc_base(lmb, size, align, LMB_ALLOC_ANYWHERE); + return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE); }
-phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr) +phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr) { phys_addr_t alloc;
- alloc = __lmb_alloc_base(lmb, size, align, max_addr); + alloc = __lmb_alloc_base(size, align, max_addr);
if (alloc == 0) printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n", @@ -504,22 +513,23 @@ phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_ * Try to allocate a specific address range: must be in defined memory but not * reserved */ -phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size) +phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size) { long rgn; + struct lmb_region *lmb_memory = lmb_free_mem.data;
/* Check if the requested address is in one of the memory regions */ - rgn = lmb_overlaps_region(&lmb->memory, base, size); + rgn = lmb_overlaps_region(&lmb_free_mem, base, size); if (rgn >= 0) { /* * Check if the requested end address is in the same memory * region we found. */ - if (lmb_addrs_overlap(lmb->memory.region[rgn].base, - lmb->memory.region[rgn].size, + if (lmb_addrs_overlap(lmb_memory[rgn].base, + lmb_memory[rgn].size, base + size - 1, 1)) { /* ok, reserve the memory */ - if (lmb_reserve(lmb, base, size) >= 0) + if (lmb_reserve(base, size) >= 0) return base; } } @@ -527,51 +537,86 @@ phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size) }
/* Return number of bytes from a given address that are free */ -phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr) +phys_size_t lmb_get_free_size(phys_addr_t addr) { int i; long rgn; + struct lmb_region *lmb_used = lmb_used_mem.data; + struct lmb_region *lmb_memory = lmb_free_mem.data;
/* check if the requested address is in the memory regions */ - rgn = lmb_overlaps_region(&lmb->memory, addr, 1); + rgn = lmb_overlaps_region(&lmb_free_mem, addr, 1); if (rgn >= 0) { - for (i = 0; i < lmb->reserved.cnt; i++) { - if (addr < lmb->reserved.region[i].base) { + for (i = 0; i < lmb_used_mem.count; i++) { + if (addr < lmb_used[i].base) { /* first reserved range > requested address */ - return lmb->reserved.region[i].base - addr; + return lmb_used[i].base - addr; } - if (lmb->reserved.region[i].base + - lmb->reserved.region[i].size > addr) { + if (lmb_used[i].base + + lmb_used[i].size > addr) { /* requested addr is in this reserved range */ return 0; } } /* if we come here: no reserved ranges above requested addr */ - return lmb->memory.region[lmb->memory.cnt - 1].base + - lmb->memory.region[lmb->memory.cnt - 1].size - addr; + return lmb_memory[lmb_free_mem.count - 1].base + + lmb_memory[lmb_free_mem.count - 1].size - addr; } return 0; }
-int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags) +int lmb_is_reserved_flags(phys_addr_t addr, int flags) { int i; + struct lmb_region *lmb_used = lmb_used_mem.data;
- for (i = 0; i < lmb->reserved.cnt; i++) { - phys_addr_t upper = lmb->reserved.region[i].base + - lmb->reserved.region[i].size - 1; - if ((addr >= lmb->reserved.region[i].base) && (addr <= upper)) - return (lmb->reserved.region[i].flags & flags) == flags; + for (i = 0; i < lmb_used_mem.count; i++) { + phys_addr_t upper = lmb_used[i].base + + lmb_used[i].size - 1; + if ((addr >= lmb_used[i].base) && (addr <= upper)) + return (lmb_used[i].flags & flags) == flags; } return 0; }
-__weak void board_lmb_reserve(struct lmb *lmb) +__weak void board_lmb_reserve(void) { /* please define platform specific board_lmb_reserve() */ }
-__weak void arch_lmb_reserve(struct lmb *lmb) +__weak void arch_lmb_reserve(void) { /* please define platform specific arch_lmb_reserve() */ } + +/** + * lmb_mem_regions_init() - Initialise the LMB memory + * + * Initialise the LMB subsystem related data structures. There are two + * alloced lists that are initialised, one for the free memory, and one + * for the used memory. + * + * Initialise the two lists as part of board init. + * + * Return: 0 if OK, -ve on failure. + */ +int lmb_mem_regions_init(void) +{ + bool ret; + + ret = alist_init(&lmb_free_mem, sizeof(struct lmb_region), + (uint)LMB_ALIST_INITIAL_SIZE); + if (!ret) { + log_debug("Unable to initialise the list for LMB free memory\n"); + return -1; + } + + ret = alist_init(&lmb_used_mem, sizeof(struct lmb_region), + (uint)LMB_ALIST_INITIAL_SIZE); + if (!ret) { + log_debug("Unable to initialise the list for LMB used memory\n"); + return -1; + } + + return 0; +} diff --git a/net/tftp.c b/net/tftp.c index 65c39d7fb7..a199f4a6df 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -710,12 +710,11 @@ static void tftp_timeout_handler(void) static int tftp_init_load_addr(void) { #ifdef CONFIG_LMB - struct lmb lmb; phys_size_t max_size;
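The new `lmb_mem_regions_init()` simply brings up the two global lists with a small initial capacity at board init. A sketch of the same shape, with a hypothetical `rlist_init()` standing in for U-Boot's `alist_init()`:

```c
#include <stdlib.h>

struct region { unsigned long long base, size; unsigned flags; };

struct rlist { struct region *data; unsigned count, alloc; };

/* Stand-in for alist_init(); returns nonzero on success */
static int rlist_init(struct rlist *lst, unsigned initial)
{
	lst->data = calloc(initial, sizeof(struct region));
	lst->count = 0;
	lst->alloc = lst->data ? initial : 0;
	return lst->data != NULL;
}

static struct rlist free_mem, used_mem;

/* Mirrors lmb_mem_regions_init(): set up both global lists, -1 on failure */
static int mem_regions_init(void)
{
	if (!rlist_init(&free_mem, 4))	/* LMB_ALIST_INITIAL_SIZE */
		return -1;
	if (!rlist_init(&used_mem, 4))
		return -1;
	return 0;
}
```

Because both lists are globals rather than caller-local structs, every later `lmb_add()`/`lmb_reserve()` call sees the same persistent map, which is the whole point of the series.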
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- max_size = lmb_get_free_size(&lmb, image_load_addr); + max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/net/wget.c b/net/wget.c index f1dd7abeff..7cf809a8ef 100644 --- a/net/wget.c +++ b/net/wget.c @@ -73,12 +73,11 @@ static ulong wget_load_size; */ static int wget_init_load_size(void) { - struct lmb lmb; phys_size_t max_size;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- max_size = lmb_get_free_size(&lmb, image_load_addr); + max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c index 027848c3e2..34d2b141d8 100644 --- a/test/cmd/bdinfo.c +++ b/test/cmd/bdinfo.c @@ -200,7 +200,7 @@ static int bdinfo_test_all(struct unit_test_state *uts) if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { struct lmb lmb;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); ut_assertok(lmb_test_dump_all(uts, &lmb)); if (IS_ENABLED(CONFIG_OF_REAL)) ut_assert_nextline("devicetree = %s", fdtdec_get_srcname()); diff --git a/test/lib/lmb.c b/test/lib/lmb.c index 4b5b6e5e20..9f6b5633a7 100644 --- a/test/lib/lmb.c +++ b/test/lib/lmb.c @@ -12,6 +12,8 @@ #include <test/test.h> #include <test/ut.h>
+extern struct lmb lmb; + static inline bool lmb_is_nomap(struct lmb_property *m) { return m->flags & LMB_NOMAP; @@ -64,7 +66,6 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, const phys_addr_t ram_end = ram + ram_size; const phys_addr_t alloc_64k_end = alloc_64k_addr + 0x10000;
- struct lmb lmb; long ret; phys_addr_t a, a2, b, b2, c, d;
@@ -75,14 +76,12 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, ut_assert(alloc_64k_addr >= ram + 8); ut_assert(alloc_64k_end <= ram_end - 8);
- lmb_init(&lmb); - if (ram0_size) { - ret = lmb_add(&lmb, ram0, ram0_size); + ret = lmb_add(ram0, ram0_size); ut_asserteq(ret, 0); }
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
if (ram0_size) { @@ -98,67 +97,67 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, }
/* reserve 64KiB somewhere */ - ret = lmb_reserve(&lmb, alloc_64k_addr, 0x10000); + ret = lmb_reserve(alloc_64k_addr, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate somewhere, should be at the end of RAM */ - a = lmb_alloc(&lmb, 4, 1); + a = lmb_alloc(4, 1); ut_asserteq(a, ram_end - 4); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr, 0x10000, ram_end - 4, 4, 0, 0); /* alloc below end of reserved region -> below reserved region */ - b = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end); + b = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(b, alloc_64k_addr - 4); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 4, 4, 0, 0);
/* 2nd time */ - c = lmb_alloc(&lmb, 4, 1); + c = lmb_alloc(4, 1); ut_asserteq(c, ram_end - 8); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 8, 8, 0, 0); - d = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end); + d = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(d, alloc_64k_addr - 8); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
- ret = lmb_free(&lmb, a, 4); + ret = lmb_free(a, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); /* allocate again to ensure we get the same address */ - a2 = lmb_alloc(&lmb, 4, 1); + a2 = lmb_alloc(4, 1); ut_asserteq(a, a2); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0); - ret = lmb_free(&lmb, a2, 4); + ret = lmb_free(a2, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
- ret = lmb_free(&lmb, b, 4); + ret = lmb_free(b, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4); /* allocate again to ensure we get the same address */ - b2 = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end); + b2 = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(b, b2); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); - ret = lmb_free(&lmb, b2, 4); + ret = lmb_free(b2, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4);
- ret = lmb_free(&lmb, c, 4); + ret = lmb_free(c, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, 0, 0); - ret = lmb_free(&lmb, d, 4); + ret = lmb_free(d, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0); @@ -229,44 +228,41 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) const phys_size_t big_block_size = 0x10000000; const phys_addr_t ram_end = ram + ram_size; const phys_addr_t alloc_64k_addr = ram + 0x10000000; - struct lmb lmb; long ret; phys_addr_t a, b;
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- lmb_init(&lmb); - - ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* reserve 64KiB in the middle of RAM */ - ret = lmb_reserve(&lmb, alloc_64k_addr, 0x10000); + ret = lmb_reserve(alloc_64k_addr, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate a big block, should be below reserved */ - a = lmb_alloc(&lmb, big_block_size, 1); + a = lmb_alloc(big_block_size, 1); ut_asserteq(a, ram); ASSERT_LMB(&lmb, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0); /* allocate 2nd big block */ /* This should fail, printing an error */ - b = lmb_alloc(&lmb, big_block_size, 1); + b = lmb_alloc(big_block_size, 1); ut_asserteq(b, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0);
- ret = lmb_free(&lmb, a, big_block_size); + ret = lmb_free(a, big_block_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate too big block */ /* This should fail, printing an error */ - a = lmb_alloc(&lmb, ram_size, 1); + a = lmb_alloc(ram_size, 1); ut_asserteq(a, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0); @@ -294,7 +290,6 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, { const phys_size_t ram_size = 0x20000000; const phys_addr_t ram_end = ram + ram_size; - struct lmb lmb; long ret; phys_addr_t a, b; const phys_addr_t alloc_size_aligned = (alloc_size + align - 1) & @@ -303,19 +298,17 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- lmb_init(&lmb); - - ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block */ - a = lmb_alloc(&lmb, alloc_size, align); + a = lmb_alloc(alloc_size, align); ut_assert(a != 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); /* allocate another block */ - b = lmb_alloc(&lmb, alloc_size, align); + b = lmb_alloc(alloc_size, align); ut_assert(b != 0); if (alloc_size == alloc_size_aligned) { ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - @@ -327,21 +320,21 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, - alloc_size_aligned, alloc_size, 0, 0); } /* and free them */ - ret = lmb_free(&lmb, b, alloc_size); + ret = lmb_free(b, alloc_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); - ret = lmb_free(&lmb, a, alloc_size); + ret = lmb_free(a, alloc_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block with base*/ - b = lmb_alloc_base(&lmb, alloc_size, align, ram_end); + b = lmb_alloc_base(alloc_size, align, ram_end); ut_assert(a == b); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); /* and free it */ - ret = lmb_free(&lmb, b, alloc_size); + ret = lmb_free(b, alloc_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
@@ -385,34 +378,31 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts) { const phys_addr_t ram = 0; const phys_size_t ram_size = 0x20000000; - struct lmb lmb; long ret; phys_addr_t a, b;
- lmb_init(&lmb); - - ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* allocate nearly everything */ - a = lmb_alloc(&lmb, ram_size - 4, 1); + a = lmb_alloc(ram_size - 4, 1); ut_asserteq(a, ram + 4); ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* allocate the rest */ /* This should fail as the allocated address would be 0 */ - b = lmb_alloc(&lmb, 4, 1); + b = lmb_alloc(4, 1); ut_asserteq(b, 0); /* check that this was an error by checking lmb */ ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* check that this was an error by freeing b */ - ret = lmb_free(&lmb, b, 4); + ret = lmb_free(b, 4); ut_asserteq(ret, -1); ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0);
- ret = lmb_free(&lmb, a, ram_size - 4); + ret = lmb_free(a, ram_size - 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
@@ -425,42 +415,39 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts) { const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; - struct lmb lmb; long ret;
- lmb_init(&lmb); - - ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
- ret = lmb_reserve(&lmb, 0x40010000, 0x10000); + ret = lmb_reserve(0x40010000, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); /* allocate overlapping region should fail */ - ret = lmb_reserve(&lmb, 0x40011000, 0x10000); + ret = lmb_reserve(0x40011000, 0x10000); ut_asserteq(ret, -1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); /* allocate 3nd region */ - ret = lmb_reserve(&lmb, 0x40030000, 0x10000); + ret = lmb_reserve(0x40030000, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40010000, 0x10000, 0x40030000, 0x10000, 0, 0); /* allocate 2nd region , This should coalesced all region into one */ - ret = lmb_reserve(&lmb, 0x40020000, 0x10000); + ret = lmb_reserve(0x40020000, 0x10000); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x30000, 0, 0, 0, 0);
/* allocate 2nd region, which should be added as first region */ - ret = lmb_reserve(&lmb, 0x40000000, 0x8000); + ret = lmb_reserve(0x40000000, 0x8000); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x8000, 0x40010000, 0x30000, 0, 0);
/* allocate 3rd region, coalesce with first and overlap with second */ - ret = lmb_reserve(&lmb, 0x40008000, 0x10000); + ret = lmb_reserve(0x40008000, 0x10000); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x40000, 0, 0, 0, 0); @@ -479,104 +466,101 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) const phys_size_t alloc_addr_a = ram + 0x8000000; const phys_size_t alloc_addr_b = ram + 0x8000000 * 2; const phys_size_t alloc_addr_c = ram + 0x8000000 * 3; - struct lmb lmb; long ret; phys_addr_t a, b, c, d, e;
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- lmb_init(&lmb); - - ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* reserve 3 blocks */ - ret = lmb_reserve(&lmb, alloc_addr_a, 0x10000); + ret = lmb_reserve(alloc_addr_a, 0x10000); ut_asserteq(ret, 0); - ret = lmb_reserve(&lmb, alloc_addr_b, 0x10000); + ret = lmb_reserve(alloc_addr_b, 0x10000); ut_asserteq(ret, 0); - ret = lmb_reserve(&lmb, alloc_addr_c, 0x10000); + ret = lmb_reserve(alloc_addr_c, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
/* allocate blocks */ - a = lmb_alloc_addr(&lmb, ram, alloc_addr_a - ram); + a = lmb_alloc_addr(ram, alloc_addr_a - ram); ut_asserteq(a, ram); ASSERT_LMB(&lmb, ram, ram_size, 3, ram, 0x8010000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000); - b = lmb_alloc_addr(&lmb, alloc_addr_a + 0x10000, + b = lmb_alloc_addr(alloc_addr_a + 0x10000, alloc_addr_b - alloc_addr_a - 0x10000); ut_asserteq(b, alloc_addr_a + 0x10000); ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x10010000, alloc_addr_c, 0x10000, 0, 0); - c = lmb_alloc_addr(&lmb, alloc_addr_b + 0x10000, + c = lmb_alloc_addr(alloc_addr_b + 0x10000, alloc_addr_c - alloc_addr_b - 0x10000); ut_asserteq(c, alloc_addr_b + 0x10000); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0); - d = lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000, + d = lmb_alloc_addr(alloc_addr_c + 0x10000, ram_end - alloc_addr_c - 0x10000); ut_asserteq(d, alloc_addr_c + 0x10000); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
/* allocating anything else should fail */ - e = lmb_alloc(&lmb, 1, 1); + e = lmb_alloc(1, 1); ut_asserteq(e, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
- ret = lmb_free(&lmb, d, ram_end - alloc_addr_c - 0x10000); + ret = lmb_free(d, ram_end - alloc_addr_c - 0x10000); ut_asserteq(ret, 0);
/* allocate at 3 points in free range */
- d = lmb_alloc_addr(&lmb, ram_end - 4, 4); + d = lmb_alloc_addr(ram_end - 4, 4); ut_asserteq(d, ram_end - 4); ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0); - ret = lmb_free(&lmb, d, 4); + ret = lmb_free(d, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
- d = lmb_alloc_addr(&lmb, ram_end - 128, 4); + d = lmb_alloc_addr(ram_end - 128, 4); ut_asserteq(d, ram_end - 128); ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0); - ret = lmb_free(&lmb, d, 4); + ret = lmb_free(d, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
- d = lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000, 4); + d = lmb_alloc_addr(alloc_addr_c + 0x10000, 4); ut_asserteq(d, alloc_addr_c + 0x10000); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010004, 0, 0, 0, 0); - ret = lmb_free(&lmb, d, 4); + ret = lmb_free(d, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
/* allocate at the bottom */ - ret = lmb_free(&lmb, a, alloc_addr_a - ram); + ret = lmb_free(a, alloc_addr_a - ram); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + 0x8000000, 0x10010000, 0, 0, 0, 0); - d = lmb_alloc_addr(&lmb, ram, 4); + d = lmb_alloc_addr(ram, 4); ut_asserteq(d, ram); ASSERT_LMB(&lmb, ram, ram_size, 2, d, 4, ram + 0x8000000, 0x10010000, 0, 0);
/* check that allocating outside memory fails */ if (ram_end != 0) { - ret = lmb_alloc_addr(&lmb, ram_end, 1); + ret = lmb_alloc_addr(ram_end, 1); ut_asserteq(ret, 0); } if (ram != 0) { - ret = lmb_alloc_addr(&lmb, ram - 1, 1); + ret = lmb_alloc_addr(ram - 1, 1); ut_asserteq(ret, 0); }
@@ -606,48 +590,45 @@ static int test_get_unreserved_size(struct unit_test_state *uts, const phys_size_t alloc_addr_a = ram + 0x8000000; const phys_size_t alloc_addr_b = ram + 0x8000000 * 2; const phys_size_t alloc_addr_c = ram + 0x8000000 * 3; - struct lmb lmb; long ret; phys_size_t s;
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
-	lmb_init(&lmb);
-
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
/* reserve 3 blocks */ - ret = lmb_reserve(&lmb, alloc_addr_a, 0x10000); + ret = lmb_reserve(alloc_addr_a, 0x10000); ut_asserteq(ret, 0); - ret = lmb_reserve(&lmb, alloc_addr_b, 0x10000); + ret = lmb_reserve(alloc_addr_b, 0x10000); ut_asserteq(ret, 0); - ret = lmb_reserve(&lmb, alloc_addr_c, 0x10000); + ret = lmb_reserve(alloc_addr_c, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
/* check addresses in between blocks */ - s = lmb_get_free_size(&lmb, ram); + s = lmb_get_free_size(ram); ut_asserteq(s, alloc_addr_a - ram); - s = lmb_get_free_size(&lmb, ram + 0x10000); + s = lmb_get_free_size(ram + 0x10000); ut_asserteq(s, alloc_addr_a - ram - 0x10000); - s = lmb_get_free_size(&lmb, alloc_addr_a - 4); + s = lmb_get_free_size(alloc_addr_a - 4); ut_asserteq(s, 4);
- s = lmb_get_free_size(&lmb, alloc_addr_a + 0x10000); + s = lmb_get_free_size(alloc_addr_a + 0x10000); ut_asserteq(s, alloc_addr_b - alloc_addr_a - 0x10000); - s = lmb_get_free_size(&lmb, alloc_addr_a + 0x20000); + s = lmb_get_free_size(alloc_addr_a + 0x20000); ut_asserteq(s, alloc_addr_b - alloc_addr_a - 0x20000); - s = lmb_get_free_size(&lmb, alloc_addr_b - 4); + s = lmb_get_free_size(alloc_addr_b - 4); ut_asserteq(s, 4);
- s = lmb_get_free_size(&lmb, alloc_addr_c + 0x10000); + s = lmb_get_free_size(alloc_addr_c + 0x10000); ut_asserteq(s, ram_end - alloc_addr_c - 0x10000); - s = lmb_get_free_size(&lmb, alloc_addr_c + 0x20000); + s = lmb_get_free_size(alloc_addr_c + 0x20000); ut_asserteq(s, ram_end - alloc_addr_c - 0x20000); - s = lmb_get_free_size(&lmb, ram_end - 4); + s = lmb_get_free_size(ram_end - 4); ut_asserteq(s, 4);
return 0; @@ -680,11 +661,8 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts) + 1) * CONFIG_LMB_MAX_REGIONS; const phys_size_t blk_size = 0x10000; phys_addr_t offset; - struct lmb lmb; int ret, i;
-	lmb_init(&lmb);
-
 	ut_asserteq(lmb.memory.cnt, 0);
 	ut_asserteq(lmb.memory.max, CONFIG_LMB_MAX_REGIONS);
 	ut_asserteq(lmb.reserved.cnt, 0);
@@ -693,7 +671,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
 	/* Add CONFIG_LMB_MAX_REGIONS memory regions */
 	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
 		offset = ram + 2 * i * ram_size;
-		ret = lmb_add(&lmb, offset, ram_size);
+		ret = lmb_add(offset, ram_size);
 		ut_asserteq(ret, 0);
 	}
 	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
@@ -701,7 +679,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
/* error for the (CONFIG_LMB_MAX_REGIONS + 1) memory regions */ offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * ram_size; - ret = lmb_add(&lmb, offset, ram_size); + ret = lmb_add(offset, ram_size); ut_asserteq(ret, -1);
ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS); @@ -710,7 +688,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts) /* reserve CONFIG_LMB_MAX_REGIONS regions */ for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) { offset = ram + 2 * i * blk_size; - ret = lmb_reserve(&lmb, offset, blk_size); + ret = lmb_reserve(offset, blk_size); ut_asserteq(ret, 0); }
@@ -719,7 +697,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
/* error for the 9th reserved blocks */ offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * blk_size; - ret = lmb_reserve(&lmb, offset, blk_size); + ret = lmb_reserve(offset, blk_size); ut_asserteq(ret, -1);
ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS); @@ -741,28 +719,25 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) { const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; - struct lmb lmb; long ret;
-	lmb_init(&lmb);
-
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
/* reserve, same flag */ - ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
/* reserve again, same flag */ - ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
/* reserve again, new flag */ - ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NONE); + ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NONE); ut_asserteq(ret, -1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); @@ -770,20 +745,20 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
/* merge after */ - ret = lmb_reserve_flags(&lmb, 0x40020000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40020000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x20000, 0, 0, 0, 0);
/* merge before */ - ret = lmb_reserve_flags(&lmb, 0x40000000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40000000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x30000, 0, 0, 0, 0);
ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
- ret = lmb_reserve_flags(&lmb, 0x40030000, 0x10000, LMB_NONE); + ret = lmb_reserve_flags(0x40030000, 0x10000, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x10000, 0, 0); @@ -792,7 +767,7 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
/* test that old API use LMB_NONE */ - ret = lmb_reserve(&lmb, 0x40040000, 0x10000); + ret = lmb_reserve(0x40040000, 0x10000); ut_asserteq(ret, 1); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x20000, 0, 0); @@ -800,18 +775,18 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1); ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
- ret = lmb_reserve_flags(&lmb, 0x40070000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40070000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40070000, 0x10000);
- ret = lmb_reserve_flags(&lmb, 0x40050000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40050000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 4, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x10000);
/* merge with 2 adjacent regions */ - ret = lmb_reserve_flags(&lmb, 0x40060000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40060000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 2); ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x30000);

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The current LMB APIs for allocating and reserving memory use a per-caller memory view. Memory allocated by one caller can then be overwritten by another caller. Make these allocations and reservations persistent using the alloced list data structure.
Two alloced lists are declared -- one for the available (free) memory, and one for the used memory. Once full, a list can be extended at runtime.
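The two persistent lists can be pictured roughly as in the sketch below. The names follow the commit message, but the fixed-size backing array and the `region_add()` helper are simplifications standing in for U-Boot's growable alist, so treat this as an illustration rather than the series' actual code:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for one of U-Boot's alist-backed region lists; in
 * the series the lists live in a single global LMB map and the alist can
 * be extended at runtime once full. */
struct lmb_region {
	unsigned long long base;
	unsigned long long size;
	unsigned int flags;	/* e.g. LMB_NONE, LMB_NOMAP, LMB_NOOVERWRITE */
};

#define MAX_REGIONS 8

struct region_list {
	struct lmb_region region[MAX_REGIONS];
	size_t count;
};

/* One global map instead of a caller-owned struct lmb */
static struct region_list lmb_free_mem;	/* available (free) memory */
static struct region_list lmb_used_mem;	/* reservations */

static int region_add(struct region_list *lst, unsigned long long base,
		      unsigned long long size, unsigned int flags)
{
	if (lst->count == MAX_REGIONS)
		return -1;	/* the real alist would be extended here */
	lst->region[lst->count].base = base;
	lst->region[lst->count].size = size;
	lst->region[lst->count].flags = flags;
	lst->count++;
	return 0;
}
```

Because both lists are globals, every caller sees the same map, which is what makes a reservation by one module visible to the next.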
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc:
- Squash patches 9 - 11, 13 from the rfc v2 series into a single patch to make it bisectable.
arch/arc/lib/cache.c | 4 +- arch/arm/lib/stack.c | 4 +- arch/arm/mach-apple/board.c | 17 +- arch/arm/mach-snapdragon/board.c | 17 +- arch/arm/mach-stm32mp/dram_init.c | 8 +- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 6 +- arch/m68k/lib/bootm.c | 7 +- arch/microblaze/lib/bootm.c | 4 +- arch/mips/lib/bootm.c | 11 +- arch/nios2/lib/bootm.c | 4 +- arch/powerpc/cpu/mpc85xx/mp.c | 4 +- arch/powerpc/include/asm/mp.h | 4 +- arch/powerpc/lib/bootm.c | 14 +- arch/riscv/lib/bootm.c | 4 +- arch/sh/lib/bootm.c | 4 +- arch/x86/lib/bootm.c | 4 +- arch/xtensa/lib/bootm.c | 4 +- board/xilinx/common/board.c | 8 +- boot/bootm.c | 26 +- boot/bootm_os.c | 5 +- boot/image-board.c | 34 +-- boot/image-fdt.c | 36 ++- cmd/bdinfo.c | 6 +- cmd/booti.c | 2 +- cmd/bootz.c | 2 +- cmd/elf.c | 2 +- cmd/load.c | 7 +- drivers/iommu/apple_dart.c | 8 +- drivers/iommu/sandbox_iommu.c | 16 +- fs/fs.c | 7 +- include/image.h | 28 +- include/lmb.h | 114 +++----- lib/efi_loader/efi_dt_fixup.c | 2 +- lib/efi_loader/efi_helper.c | 2 +- lib/lmb.c | 395 +++++++++++++++------------ net/tftp.c | 5 +- net/wget.c | 5 +- test/cmd/bdinfo.c | 2 +- test/lib/lmb.c | 205 ++++++-------- 39 files changed, 477 insertions(+), 560 deletions(-)
I might have missed it, but I didn't see a response to my comment on the previous patch. I think we should have the lmb state in a struct so that sandbox tests can reset it as needed.
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The current LMB API's for allocating and reserving memory use a per-caller based memory view. Memory allocated by a caller can then be overwritten by another caller. Make these allocations and reservations persistent using the alloced list data structure.
Two alloced lists are declared -- one for the available(free) memory, and one for the used memory. Once full, the list can then be extended at runtime.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc:
- Squash patches 9 - 11, 13 from the rfc v2 series into a single patch to make it bisectable.
I might have missed it, but I didn't see a response to my comment on the previous patch. I think we should have the lmb state in a struct so that sandbox tests can reset it as needed.
Based on your comment [1], my understanding was that you were okay with not introducing a struct for this. I believe Tom too was not very keen on including the lmb struct into gd.
-sughosh
[1] - https://lists.denx.de/pipermail/u-boot/2024-July/558843.html

Hi Sughosh,
On Mon, 29 Jul 2024 at 01:50, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The current LMB API's for allocating and reserving memory use a per-caller based memory view. Memory allocated by a caller can then be overwritten by another caller. Make these allocations and reservations persistent using the alloced list data structure.
Two alloced lists are declared -- one for the available(free) memory, and one for the used memory. Once full, the list can then be extended at runtime.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc:
- Squash patches 9 - 11, 13 from the rfc v2 series into a single patch to make it bisectable.
I might have missed it, but I didn't see a response to my comment on the previous patch. I think we should have the lmb state in a struct so that sandbox tests can reset it as needed.
Based on your comment [1], my understanding was that you were okay with not introducing a struct for this. I believe Tom too was not very keen on including the lmb struct into gd.
It should be in a struct, so that its state is all in one place...for testing if nothing else.
Re gd, it doesn't need to be in there, if there is only one.
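For what it is worth, the suggestion amounts to something like the sketch below. The names (`struct lmb_state`, `lmb_test_reset()`) are hypothetical, not from the series: with all state gathered in one struct, a single global instance serves normal use, and a sandbox test can wipe the whole map in one call.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical single-struct LMB state, per the review suggestion */
struct lmb_state {
	unsigned long free_count;
	unsigned long used_count;
	/* the free/used region lists themselves would live here too */
};

/* One global instance; it need not go into gd, since there is only one */
static struct lmb_state lmb;

/* A sandbox test can reset all LMB state between test cases */
static void lmb_test_reset(void)
{
	memset(&lmb, 0, sizeof(lmb));
}
```

The design point is simply that scattered globals cannot be reset atomically, while one struct can.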
Regards, Simon

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The current LMB API's for allocating and reserving memory use a per-caller based memory view. Memory allocated by a caller can then be overwritten by another caller. Make these allocations and reservations persistent using the alloced list data structure.
Two alloced lists are declared -- one for the available(free) memory, and one for the used memory. Once full, the list can then be extended at runtime.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc:
- Squash patches 9 - 11, 13 from the rfc v2 series into a single patch to make it bisectable.
[..]
diff --git a/lib/lmb.c b/lib/lmb.c index 4d39c0d1f9..dd6f22654c 100644 --- a/lib/lmb.c +++ b/lib/lmb.c
[..]
-static void lmb_remove_region(struct lmb_region *rgn, unsigned long r)
+static void lmb_remove_region(struct alist *lmb_rgn_lst, unsigned long r)
 {
 	unsigned long i;
+	struct lmb_region *rgn = lmb_rgn_lst->data;

-	for (i = r; i < rgn->cnt - 1; i++) {
-		rgn->region[i].base = rgn->region[i + 1].base;
-		rgn->region[i].size = rgn->region[i + 1].size;
-		rgn->region[i].flags = rgn->region[i + 1].flags;
+	for (i = r; i < lmb_rgn_lst->count - 1; i++) {
+		rgn[i].base = rgn[i + 1].base;
+		rgn[i].size = rgn[i + 1].size;
+		rgn[i].flags = rgn[i + 1].flags;
This should be:
rgn[i] = rgn[i+1] [..]
Regards, Simon

On Wed, 31 Jul 2024 at 01:17, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:03, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The current LMB API's for allocating and reserving memory use a per-caller based memory view. Memory allocated by a caller can then be overwritten by another caller. Make these allocations and reservations persistent using the alloced list data structure.
Two alloced lists are declared -- one for the available(free) memory, and one for the used memory. Once full, the list can then be extended at runtime.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc:
- Squash patches 9 - 11, 13 from the rfc v2 series into a single patch to make it bisectable.
[..]
diff --git a/lib/lmb.c b/lib/lmb.c index 4d39c0d1f9..dd6f22654c 100644 --- a/lib/lmb.c +++ b/lib/lmb.c
[..]
-static void lmb_remove_region(struct lmb_region *rgn, unsigned long r)
+static void lmb_remove_region(struct alist *lmb_rgn_lst, unsigned long r)
 {
 	unsigned long i;
+	struct lmb_region *rgn = lmb_rgn_lst->data;

-	for (i = r; i < rgn->cnt - 1; i++) {
-		rgn->region[i].base = rgn->region[i + 1].base;
-		rgn->region[i].size = rgn->region[i + 1].size;
-		rgn->region[i].flags = rgn->region[i + 1].flags;
+	for (i = r; i < lmb_rgn_lst->count - 1; i++) {
+		rgn[i].base = rgn[i + 1].base;
+		rgn[i].size = rgn[i + 1].size;
+		rgn[i].flags = rgn[i + 1].flags;
This should be:
rgn[i] = rgn[i+1]
Yes, direct struct assignment is a better way of doing this. Thanks for pointing it out!
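A minimal sketch of the simplified loop (the alist type is abbreviated here as `struct rgn_list`; names are illustrative): copying whole structs closes the gap in one assignment per slot, and stays correct if fields are later added to `struct lmb_region`.

```c
#include <assert.h>

struct lmb_region {
	unsigned long base;
	unsigned long size;
	unsigned long flags;
};

/* Abbreviated stand-in for U-Boot's struct alist */
struct rgn_list {
	struct lmb_region *data;
	unsigned long count;
};

static void lmb_remove_region(struct rgn_list *lst, unsigned long r)
{
	struct lmb_region *rgn = lst->data;
	unsigned long i;

	/* shift every region after slot r down by one struct assignment */
	for (i = r; i < lst->count - 1; i++)
		rgn[i] = rgn[i + 1];
	lst->count--;
}
```

C guarantees that simple assignment copies all members of a struct, so the three per-member copies collapse into one statement.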
-sughosh

Hi,
On 24/07/2024 08:01, Sughosh Ganu wrote:
The current LMB API's for allocating and reserving memory use a per-caller based memory view. Memory allocated by a caller can then be overwritten by another caller. Make these allocations and reservations persistent using the alloced list data structure.
Really happy to see this change!
Two alloced lists are declared -- one for the available(free) memory, and one for the used memory. Once full, the list can then be extended at runtime.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Acked-by: Caleb Connolly caleb.connolly@linaro.org # mach-snapdragon
Changes since rfc:
Squash patches 9 - 11, 13 from the rfc v2 series into a single patch to make it bisectable.
diff --git a/arch/arc/lib/cache.c b/arch/arc/lib/cache.c index 22e748868a..5151af917a 100644 --- a/arch/arc/lib/cache.c +++ b/arch/arc/lib/cache.c @@ -829,7 +829,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
 }
diff --git a/arch/arm/lib/stack.c b/arch/arm/lib/stack.c index ea1b937add..87d5c962d7 100644 --- a/arch/arm/lib/stack.c +++ b/arch/arm/lib/stack.c @@ -42,7 +42,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 16384);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 16384);
 }
diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 8bace3005e..213390d6e8 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -773,23 +773,22 @@ u64 get_page_table_size(void)
int board_late_init(void) {
-	struct lmb lmb;
 	u32 status = 0;

-	lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
+	lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* somewhat based on the Linux Kernel boot requirements:
- align by 2M and maximal FDT size 2M
*/
-	status |= env_set_hex("loadaddr", lmb_alloc(&lmb, SZ_1G, SZ_2M));
-	status |= env_set_hex("fdt_addr_r", lmb_alloc(&lmb, SZ_2M, SZ_2M));
-	status |= env_set_hex("kernel_addr_r", lmb_alloc(&lmb, SZ_128M, SZ_2M));
-	status |= env_set_hex("ramdisk_addr_r", lmb_alloc(&lmb, SZ_1G, SZ_2M));
+	status |= env_set_hex("loadaddr", lmb_alloc(SZ_1G, SZ_2M));
+	status |= env_set_hex("fdt_addr_r", lmb_alloc(SZ_2M, SZ_2M));
+	status |= env_set_hex("kernel_addr_r", lmb_alloc(SZ_128M, SZ_2M));
+	status |= env_set_hex("ramdisk_addr_r", lmb_alloc(SZ_1G, SZ_2M));
 	status |= env_set_hex("kernel_comp_addr_r",
-			      lmb_alloc(&lmb, KERNEL_COMP_SIZE, SZ_2M));
+			      lmb_alloc(KERNEL_COMP_SIZE, SZ_2M));
 	status |= env_set_hex("kernel_comp_size", KERNEL_COMP_SIZE);
-	status |= env_set_hex("scriptaddr", lmb_alloc(&lmb, SZ_4M, SZ_2M));
-	status |= env_set_hex("pxefile_addr_r", lmb_alloc(&lmb, SZ_4M, SZ_2M));
+	status |= env_set_hex("scriptaddr", lmb_alloc(SZ_4M, SZ_2M));
+	status |= env_set_hex("pxefile_addr_r", lmb_alloc(SZ_4M, SZ_2M));
if (status) log_warning("late_init: Failed to set run time variables\n");
diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index b439a19ec7..a63c8bec45 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -275,24 +275,23 @@ void __weak qcom_late_init(void)
#define KERNEL_COMP_SIZE SZ_64M
-#define addr_alloc(lmb, size) lmb_alloc(lmb, size, SZ_2M) +#define addr_alloc(size) lmb_alloc(size, SZ_2M)
/* Stolen from arch/arm/mach-apple/board.c */ int board_late_init(void) {
-	struct lmb lmb;
 	u32 status = 0;

-	lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
+	lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */
-	status |= env_set_hex("kernel_addr_r", addr_alloc(&lmb, SZ_128M));
-	status |= env_set_hex("ramdisk_addr_r", addr_alloc(&lmb, SZ_128M));
-	status |= env_set_hex("kernel_comp_addr_r", addr_alloc(&lmb, KERNEL_COMP_SIZE));
+	status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M));
+	status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M));
+	status |= env_set_hex("kernel_comp_addr_r", addr_alloc(KERNEL_COMP_SIZE));
 	status |= env_set_hex("kernel_comp_size", KERNEL_COMP_SIZE);
-	status |= env_set_hex("scriptaddr", addr_alloc(&lmb, SZ_4M));
-	status |= env_set_hex("pxefile_addr_r", addr_alloc(&lmb, SZ_4M));
-	status |= env_set_hex("fdt_addr_r", addr_alloc(&lmb, SZ_2M));
+	status |= env_set_hex("scriptaddr", addr_alloc(SZ_4M));
+	status |= env_set_hex("pxefile_addr_r", addr_alloc(SZ_4M));
+	status |= env_set_hex("fdt_addr_r", addr_alloc(SZ_2M));
if (status) log_warning("%s: Failed to set run time variables\n", __func__);
diff --git a/arch/arm/mach-stm32mp/dram_init.c b/arch/arm/mach-stm32mp/dram_init.c index 6024959b97..e8b0a38be1 100644 --- a/arch/arm/mach-stm32mp/dram_init.c +++ b/arch/arm/mach-stm32mp/dram_init.c @@ -47,7 +47,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) { phys_size_t size; phys_addr_t reg;
-	struct lmb lmb;
if (!total_size) return gd->ram_top;
@@ -59,12 +58,11 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) gd->ram_top = clamp_val(gd->ram_top, 0, SZ_4G - 1);
/* found enough not-reserved memory to relocated U-Boot */
-	lmb_init(&lmb);
-	lmb_add(&lmb, gd->ram_base, gd->ram_top - gd->ram_base);
-	boot_fdt_add_mem_rsv_regions(&lmb, (void *)gd->fdt_blob);
+	lmb_add(gd->ram_base, gd->ram_top - gd->ram_base);
+	boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob);
 	/* add 8M for reserved memory for display, fdt, gd,... */
 	size = ALIGN(SZ_8M + CONFIG_SYS_MALLOC_LEN + total_size,
 		     MMU_SECTION_SIZE),
-	reg = lmb_alloc(&lmb, size, MMU_SECTION_SIZE);
+	reg = lmb_alloc(size, MMU_SECTION_SIZE);
if (!reg) reg = gd->ram_top - size;
diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index 478c3efae7..a913737342 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -30,8 +30,6 @@ */ u8 early_tlb[PGTABLE_SIZE] __section(".data") __aligned(0x4000);
-struct lmb lmb;
- u32 get_bootmode(void) { /* read bootmode from TAMP backup register */
@@ -80,7 +78,7 @@ void dram_bank_mmu_setup(int bank) i < (start >> MMU_SECTION_SHIFT) + (size >> MMU_SECTION_SHIFT); i++) { option = DCACHE_DEFAULT_OPTION;
-		if (use_lmb && lmb_is_reserved_flags(&lmb, i << MMU_SECTION_SHIFT, LMB_NOMAP))
+		if (use_lmb && lmb_is_reserved_flags(i << MMU_SECTION_SHIFT, LMB_NOMAP))
 			option = 0; /* INVALID ENTRY in TLB */
 		set_section_dcache(i, option);
 	}
@@ -144,7 +142,7 @@ int mach_cpu_init(void) void enable_caches(void) { /* parse device tree when data cache is still activated */
-	lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
+	lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* I-cache is already enabled in start.S: icache_enable() not needed */
diff --git a/arch/m68k/lib/bootm.c b/arch/m68k/lib/bootm.c index f2d02e4376..eb220d178d 100644 --- a/arch/m68k/lib/bootm.c +++ b/arch/m68k/lib/bootm.c @@ -30,9 +30,9 @@ DECLARE_GLOBAL_DATA_PTR; static ulong get_sp (void); static void set_clocks_in_mhz (struct bd_info *kbd);
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 1024);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 1024);
 }
int do_bootm_linux(int flag, struct bootm_info *bmi)
@@ -41,7 +41,6 @@ int do_bootm_linux(int flag, struct bootm_info *bmi) int ret; struct bd_info *kbd; void (*kernel) (struct bd_info *, ulong, ulong, ulong, ulong);
-	struct lmb *lmb = &images->lmb;
/*
- allow the PREP bootm subcommand, it is required for bootm to work
@@ -53,7 +52,7 @@ int do_bootm_linux(int flag, struct bootm_info *bmi) return 1;
/* allocate space for kernel copy of board info */
-	ret = boot_get_kbd (lmb, &kbd);
+	ret = boot_get_kbd (&kbd);
 	if (ret) {
 		puts("ERROR with allocation of kernel bd\n");
 		goto error;
diff --git a/arch/microblaze/lib/bootm.c b/arch/microblaze/lib/bootm.c index cbe9d85aa9..ce96bca28f 100644 --- a/arch/microblaze/lib/bootm.c +++ b/arch/microblaze/lib/bootm.c @@ -32,9 +32,9 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
 }
static void boot_jump_linux(struct bootm_headers *images, int flag)
diff --git a/arch/mips/lib/bootm.c b/arch/mips/lib/bootm.c
index adb6b6cc22..8fb3a3923f 100644
--- a/arch/mips/lib/bootm.c
+++ b/arch/mips/lib/bootm.c
@@ -37,9 +37,9 @@ static ulong arch_get_sp(void)
 	return ret;
 }
 
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, arch_get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(arch_get_sp(), gd->ram_top, 4096);
 }
 
 static void linux_cmdline_init(void)
@@ -225,9 +225,8 @@ static int boot_reloc_fdt(struct bootm_headers *images)
 	}
 
 #if CONFIG_IS_ENABLED(MIPS_BOOT_FDT) && CONFIG_IS_ENABLED(OF_LIBFDT)
-	boot_fdt_add_mem_rsv_regions(&images->lmb, images->ft_addr);
-	return boot_relocate_fdt(&images->lmb, &images->ft_addr,
-				 &images->ft_len);
+	boot_fdt_add_mem_rsv_regions(images->ft_addr);
+	return boot_relocate_fdt(&images->ft_addr, &images->ft_len);
 #else
 	return 0;
 #endif
@@ -248,7 +247,7 @@ static int boot_setup_fdt(struct bootm_headers *images)
 	images->initrd_start = virt_to_phys((void *)images->initrd_start);
 	images->initrd_end = virt_to_phys((void *)images->initrd_end);
 
-	return image_setup_libfdt(images, images->ft_addr, &images->lmb);
+	return image_setup_libfdt(images, images->ft_addr, true);
 }
 
 static void boot_prep_linux(struct bootm_headers *images)
diff --git a/arch/nios2/lib/bootm.c b/arch/nios2/lib/bootm.c
index ce939ff5e1..d33d45d28f 100644
--- a/arch/nios2/lib/bootm.c
+++ b/arch/nios2/lib/bootm.c
@@ -73,7 +73,7 @@ static ulong get_sp(void)
 	return ret;
 }
 
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
 }
diff --git a/arch/powerpc/cpu/mpc85xx/mp.c b/arch/powerpc/cpu/mpc85xx/mp.c
index 03f801ebbb..bed465cb2c 100644
--- a/arch/powerpc/cpu/mpc85xx/mp.c
+++ b/arch/powerpc/cpu/mpc85xx/mp.c
@@ -408,11 +408,11 @@ static void plat_mp_up(unsigned long bootpg, unsigned int pagesize)
 }
 #endif
 
-void cpu_mp_lmb_reserve(struct lmb *lmb)
+void cpu_mp_lmb_reserve(void)
 {
 	u32 bootpg = determine_mp_bootpg(NULL);
 
-	lmb_reserve(lmb, bootpg, 4096);
+	lmb_reserve(bootpg, 4096);
 }
 
 void setup_mp(void)
diff --git a/arch/powerpc/include/asm/mp.h b/arch/powerpc/include/asm/mp.h
index 8dacd2781d..b3f59be840 100644
--- a/arch/powerpc/include/asm/mp.h
+++ b/arch/powerpc/include/asm/mp.h
@@ -6,10 +6,8 @@
 #ifndef _ASM_MP_H_
 #define _ASM_MP_H_
 
-#include <lmb.h>
-
 void setup_mp(void);
-void cpu_mp_lmb_reserve(struct lmb *lmb);
+void cpu_mp_lmb_reserve(void);
 u32 determine_mp_bootpg(unsigned int *pagesize);
 int is_core_disabled(int nr);
diff --git a/arch/powerpc/lib/bootm.c b/arch/powerpc/lib/bootm.c
index 61e08728dd..6c35664ff3 100644
--- a/arch/powerpc/lib/bootm.c
+++ b/arch/powerpc/lib/bootm.c
@@ -116,7 +116,7 @@ static void boot_jump_linux(struct bootm_headers *images)
 	return;
 }
 
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
 	phys_size_t bootm_size;
 	ulong size, bootmap_base;
@@ -139,13 +139,13 @@ void arch_lmb_reserve(struct lmb *lmb)
 		ulong base = bootmap_base + size;
 		printf("WARNING: adjusting available memory from 0x%lx to 0x%llx\n",
 		       size, (unsigned long long)bootm_size);
-		lmb_reserve(lmb, base, bootm_size - size);
+		lmb_reserve(base, bootm_size - size);
 	}
 
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
 
 #ifdef CONFIG_MP
-	cpu_mp_lmb_reserve(lmb);
+	cpu_mp_lmb_reserve();
 #endif
 
 	return;
@@ -166,7 +166,6 @@ static void boot_prep_linux(struct bootm_headers *images)
 static int boot_cmdline_linux(struct bootm_headers *images)
 {
 	ulong of_size = images->ft_len;
-	struct lmb *lmb = &images->lmb;
 	ulong *cmd_start = &images->cmdline_start;
 	ulong *cmd_end = &images->cmdline_end;
 
@@ -174,7 +173,7 @@ static int boot_cmdline_linux(struct bootm_headers *images)
 
 	if (!of_size) {
 		/* allocate space and init command line */
-		ret = boot_get_cmdline (lmb, cmd_start, cmd_end);
+		ret = boot_get_cmdline (cmd_start, cmd_end);
 		if (ret) {
 			puts("ERROR with allocation of cmdline\n");
 			return ret;
@@ -187,14 +186,13 @@ static int boot_cmdline_linux(struct bootm_headers *images)
 static int boot_bd_t_linux(struct bootm_headers *images)
 {
 	ulong of_size = images->ft_len;
-	struct lmb *lmb = &images->lmb;
 	struct bd_info **kbd = &images->kbd;
 	int ret = 0;
 
 	if (!of_size) {
 		/* allocate space for kernel copy of board info */
-		ret = boot_get_kbd (lmb, kbd);
+		ret = boot_get_kbd (kbd);
 		if (ret) {
 			puts("ERROR with allocation of kernel bd\n");
 			return ret;
diff --git a/arch/riscv/lib/bootm.c b/arch/riscv/lib/bootm.c
index 13cbaaba68..bbf62f9e05 100644
--- a/arch/riscv/lib/bootm.c
+++ b/arch/riscv/lib/bootm.c
@@ -142,7 +142,7 @@ static ulong get_sp(void)
 	return ret;
 }
 
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
 }
diff --git a/arch/sh/lib/bootm.c b/arch/sh/lib/bootm.c
index e298d766b5..44ac05988c 100644
--- a/arch/sh/lib/bootm.c
+++ b/arch/sh/lib/bootm.c
@@ -110,7 +110,7 @@ static ulong get_sp(void)
 	return ret;
 }
 
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
 }
diff --git a/arch/x86/lib/bootm.c b/arch/x86/lib/bootm.c
index 2c889bcd33..114b31012e 100644
--- a/arch/x86/lib/bootm.c
+++ b/arch/x86/lib/bootm.c
@@ -267,7 +267,7 @@ static ulong get_sp(void)
 	return ret;
 }
 
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
 }
diff --git a/arch/xtensa/lib/bootm.c b/arch/xtensa/lib/bootm.c
index 1de06b7fb5..bdbd6d4692 100644
--- a/arch/xtensa/lib/bootm.c
+++ b/arch/xtensa/lib/bootm.c
@@ -206,7 +206,7 @@ static ulong get_sp(void)
 	return ret;
 }
 
-void arch_lmb_reserve(struct lmb *lmb)
+void arch_lmb_reserve(void)
 {
-	arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096);
+	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
 }
diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c
index 0b43407b9e..4056884400 100644
--- a/board/xilinx/common/board.c
+++ b/board/xilinx/common/board.c
@@ -675,7 +675,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size)
 {
 	phys_size_t size;
 	phys_addr_t reg;
-	struct lmb lmb;
 
 	if (!total_size)
 		return gd->ram_top;
@@ -684,11 +683,10 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size)
 		panic("Not 64bit aligned DT location: %p\n", gd->fdt_blob);
 
 	/* found enough not-reserved memory to relocated U-Boot */
-	lmb_init(&lmb);
-	lmb_add(&lmb, gd->ram_base, gd->ram_size);
-	boot_fdt_add_mem_rsv_regions(&lmb, (void *)gd->fdt_blob);
+	lmb_add(gd->ram_base, gd->ram_size);
+	boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob);
 	size = ALIGN(CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE);
-	reg = lmb_alloc(&lmb, size, MMU_SECTION_SIZE);
+	reg = lmb_alloc(size, MMU_SECTION_SIZE);
 
 	if (!reg)
 		reg = gd->ram_top - size;
diff --git a/boot/bootm.c b/boot/bootm.c
index 376d63aafc..7b3fe551de 100644
--- a/boot/bootm.c
+++ b/boot/bootm.c
@@ -240,7 +240,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images,
 }
 
 #ifdef CONFIG_LMB
-static void boot_start_lmb(struct bootm_headers *images)
+static void boot_start_lmb(void)
 {
 	phys_addr_t	mem_start;
 	phys_size_t	mem_size;
@@ -248,12 +248,11 @@ static void boot_start_lmb(struct bootm_headers *images)
 	mem_start = env_get_bootm_low();
 	mem_size = env_get_bootm_size();
 
-	lmb_init_and_reserve_range(&images->lmb, mem_start,
-				   mem_size, NULL);
+	lmb_init_and_reserve_range(mem_start, mem_size, NULL);
 }
 #else
-#define lmb_reserve(lmb, base, size)
-static inline void boot_start_lmb(struct bootm_headers *images) { }
+#define lmb_reserve(base, size)
+static inline void boot_start_lmb(void) { }
 #endif
 
 static int bootm_start(void)
@@ -261,7 +260,7 @@ static int bootm_start(void)
 	memset((void *)&images, 0, sizeof(images));
 	images.verify = env_get_yesno("verify");
 
-	boot_start_lmb(&images);
+	boot_start_lmb();
 
 	bootstage_mark_name(BOOTSTAGE_ID_BOOTM_START, "bootm_start");
 	images.state = BOOTM_STATE_START;
@@ -640,7 +639,7 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress)
 		if (os.type == IH_TYPE_KERNEL_NOLOAD && os.comp != IH_COMP_NONE) {
 			ulong req_size = ALIGN(image_len * 4, SZ_1M);
 
-			load = lmb_alloc(&images->lmb, req_size, SZ_2M);
+			load = lmb_alloc(req_size, SZ_2M);
 			if (!load)
 				return 1;
 			os.load = load;
@@ -714,8 +713,7 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress)
 		images->os.end = relocated_addr + image_size;
 	}
 
-	lmb_reserve(&images->lmb, images->os.load, (load_end -
-						    images->os.load));
+	lmb_reserve(images->os.load, (load_end - images->os.load));
 	return 0;
 }
@@ -1041,8 +1039,9 @@ int bootm_run_states(struct bootm_info *bmi, int states)
 		if (!ret && (states & BOOTM_STATE_RAMDISK)) {
 			ulong rd_len = images->rd_end - images->rd_start;
 
-			ret = boot_ramdisk_high(&images->lmb, images->rd_start,
-					rd_len, &images->initrd_start, &images->initrd_end);
+			ret = boot_ramdisk_high(images->rd_start, rd_len,
+						&images->initrd_start,
+						&images->initrd_end);
 			if (!ret) {
 				env_set_hex("initrd_start", images->initrd_start);
 				env_set_hex("initrd_end", images->initrd_end);
@@ -1051,9 +1050,8 @@ int bootm_run_states(struct bootm_info *bmi, int states)
 #endif
 #if CONFIG_IS_ENABLED(OF_LIBFDT) && defined(CONFIG_LMB)
 	if (!ret && (states & BOOTM_STATE_FDT)) {
-		boot_fdt_add_mem_rsv_regions(&images->lmb, images->ft_addr);
-		ret = boot_relocate_fdt(&images->lmb, &images->ft_addr,
-					&images->ft_len);
+		boot_fdt_add_mem_rsv_regions(images->ft_addr);
+		ret = boot_relocate_fdt(&images->ft_addr, &images->ft_len);
 	}
 #endif
diff --git a/boot/bootm_os.c b/boot/bootm_os.c
index 6a6621706f..e9522cd329 100644
--- a/boot/bootm_os.c
+++ b/boot/bootm_os.c
@@ -260,12 +260,11 @@ static void do_bootvx_fdt(struct bootm_headers *images)
 	char *bootline;
 	ulong of_size = images->ft_len;
 	char **of_flat_tree = &images->ft_addr;
-	struct lmb *lmb = &images->lmb;
 
 	if (*of_flat_tree) {
-		boot_fdt_add_mem_rsv_regions(lmb, *of_flat_tree);
+		boot_fdt_add_mem_rsv_regions(*of_flat_tree);
 
-		ret = boot_relocate_fdt(lmb, of_flat_tree, &of_size);
+		ret = boot_relocate_fdt(of_flat_tree, &of_size);
 		if (ret)
 			return;
diff --git a/boot/image-board.c b/boot/image-board.c
index f212401304..1f8c1ac69f 100644
--- a/boot/image-board.c
+++ b/boot/image-board.c
@@ -515,7 +515,6 @@ int boot_get_ramdisk(char const *select, struct bootm_headers *images,
 
 /**
  * boot_ramdisk_high - relocate init ramdisk
- * @lmb: pointer to lmb handle, will be used for memory mgmt
  * @rd_data: ramdisk data start address
  * @rd_len: ramdisk data length
  * @initrd_start: pointer to a ulong variable, will hold final init ramdisk
@@ -534,8 +533,8 @@ int boot_get_ramdisk(char const *select, struct bootm_headers *images,
  *      0 - success
  *     -1 - failure
  */
-int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len,
-		      ulong *initrd_start, ulong *initrd_end)
+int boot_ramdisk_high(ulong rd_data, ulong rd_len, ulong *initrd_start,
+		      ulong *initrd_end)
 {
 	char	*s;
 	phys_addr_t initrd_high;
@@ -561,13 +560,14 @@ int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len,
 			debug("   in-place initrd\n");
 			*initrd_start = rd_data;
 			*initrd_end = rd_data + rd_len;
-			lmb_reserve(lmb, rd_data, rd_len);
+			lmb_reserve(rd_data, rd_len);
 		} else {
 			if (initrd_high)
-				*initrd_start = (ulong)lmb_alloc_base(lmb,
-						rd_len, 0x1000, initrd_high);
+				*initrd_start = (ulong)lmb_alloc_base(rd_len,
+								      0x1000,
+								      initrd_high);
 			else
-				*initrd_start = (ulong)lmb_alloc(lmb, rd_len,
-								 0x1000);
+				*initrd_start = (ulong)lmb_alloc(rd_len, 0x1000);
 
 			if (*initrd_start == 0) {
@@ -800,7 +800,6 @@ int boot_get_loadable(struct bootm_headers *images)
 
 /**
  * boot_get_cmdline - allocate and initialize kernel cmdline
- * @lmb: pointer to lmb handle, will be used for memory mgmt
  * @cmd_start: pointer to a ulong variable, will hold cmdline start
  * @cmd_end: pointer to a ulong variable, will hold cmdline end
 
@@ -813,7 +812,7 @@ int boot_get_loadable(struct bootm_headers *images)
  *      0 - success
  *     -1 - failure
  */
-int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end)
+int boot_get_cmdline(ulong *cmd_start, ulong *cmd_end)
 {
 	int barg;
 	char *cmdline;
@@ -827,7 +826,7 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end)
 		return 0;
 
 	barg = IF_ENABLED_INT(CONFIG_SYS_BOOT_GET_CMDLINE, CONFIG_SYS_BARGSIZE);
-	cmdline = (char *)(ulong)lmb_alloc_base(lmb, barg, 0xf,
+	cmdline = (char *)(ulong)lmb_alloc_base(barg, 0xf,
 				env_get_bootm_mapsize() + env_get_bootm_low());
 	if (!cmdline)
 		return -1;
@@ -848,7 +847,6 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end)
 
 /**
  * boot_get_kbd - allocate and initialize kernel copy of board info
- * @lmb: pointer to lmb handle, will be used for memory mgmt
  * @kbd: double pointer to board info data
  *
  * boot_get_kbd() allocates space for kernel copy of board info data below
@@ -859,10 +857,9 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end)
  *      0 - success
  *     -1 - failure
  */
-int boot_get_kbd(struct lmb *lmb, struct bd_info **kbd)
+int boot_get_kbd(struct bd_info **kbd)
 {
-	*kbd = (struct bd_info *)(ulong)lmb_alloc_base(lmb,
-						       sizeof(struct bd_info),
+	*kbd = (struct bd_info *)(ulong)lmb_alloc_base(sizeof(struct bd_info),
 						       0xf,
 						       env_get_bootm_mapsize() +
 						       env_get_bootm_low());
@@ -883,17 +880,16 @@ int image_setup_linux(struct bootm_headers *images)
 {
 	ulong of_size = images->ft_len;
 	char **of_flat_tree = &images->ft_addr;
-	struct lmb *lmb = images_lmb(images);
 	int ret;
 
 	/* This function cannot be called without lmb support */
 	if (!IS_ENABLED(CONFIG_LMB))
 		return -EFAULT;
 	if (CONFIG_IS_ENABLED(OF_LIBFDT))
-		boot_fdt_add_mem_rsv_regions(lmb, *of_flat_tree);
+		boot_fdt_add_mem_rsv_regions(*of_flat_tree);
 
 	if (IS_ENABLED(CONFIG_SYS_BOOT_GET_CMDLINE)) {
-		ret = boot_get_cmdline(lmb, &images->cmdline_start,
-				       &images->cmdline_end);
+		ret = boot_get_cmdline(&images->cmdline_start,
+				       &images->cmdline_end);
 		if (ret) {
 			puts("ERROR with allocation of cmdline\n");
@@ -902,13 +898,13 @@ int image_setup_linux(struct bootm_headers *images)
 	}
 
 	if (CONFIG_IS_ENABLED(OF_LIBFDT)) {
-		ret = boot_relocate_fdt(lmb, of_flat_tree, &of_size);
+		ret = boot_relocate_fdt(of_flat_tree, &of_size);
 		if (ret)
 			return ret;
 	}
 
 	if (CONFIG_IS_ENABLED(OF_LIBFDT) && of_size) {
-		ret = image_setup_libfdt(images, *of_flat_tree, lmb);
+		ret = image_setup_libfdt(images, *of_flat_tree, true);
 		if (ret)
 			return ret;
 	}
diff --git a/boot/image-fdt.c b/boot/image-fdt.c
index 8332792b8e..ccafadec0d 100644
--- a/boot/image-fdt.c
+++ b/boot/image-fdt.c
@@ -68,12 +68,12 @@ static const struct legacy_img_hdr *image_get_fdt(ulong fdt_addr)
 }
 #endif
 
-static void boot_fdt_reserve_region(struct lmb *lmb, uint64_t addr,
-				    uint64_t size, enum lmb_flags flags)
+static void boot_fdt_reserve_region(uint64_t addr, uint64_t size,
+				    enum lmb_flags flags)
 {
 	long ret;
 
-	ret = lmb_reserve_flags(lmb, addr, size, flags);
+	ret = lmb_reserve_flags(addr, size, flags);
 	if (ret >= 0) {
 		debug("   reserving fdt memory region: addr=%llx size=%llx flags=%x\n",
 		      (unsigned long long)addr,
@@ -89,14 +89,13 @@ static void boot_fdt_reserve_region(struct lmb *lmb, uint64_t addr,
 /**
  * boot_fdt_add_mem_rsv_regions - Mark the memreserve and reserved-memory
  * sections as unusable
- * @lmb: pointer to lmb handle, will be used for memory mgmt
  * @fdt_blob: pointer to fdt blob base address
 
  * Adds the memreserve and reserved-memory regions in the dtb to the lmb block.
  * Adding the memreserve regions prevents u-boot from using them to store the
  * initrd or the fdt blob.
  */
-void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
+void boot_fdt_add_mem_rsv_regions(void *fdt_blob)
 {
 	uint64_t addr, size;
 	int i, total, ret;
@@ -112,7 +111,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
 	for (i = 0; i < total; i++) {
 		if (fdt_get_mem_rsv(fdt_blob, i, &addr, &size) != 0)
 			continue;
-		boot_fdt_reserve_region(lmb, addr, size, LMB_NONE);
+		boot_fdt_reserve_region(addr, size, LMB_NONE);
 	}
 
 	/* process reserved-memory */
@@ -130,7 +129,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
 				flags = LMB_NOMAP;
 			addr = res.start;
 			size = res.end - res.start + 1;
-			boot_fdt_reserve_region(lmb, addr, size, flags);
+			boot_fdt_reserve_region(addr, size, flags);
 		}
 
 		subnode = fdt_next_subnode(fdt_blob, subnode);
@@ -140,7 +139,6 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
 
 /**
  * boot_relocate_fdt - relocate flat device tree
- * @lmb: pointer to lmb handle, will be used for memory mgmt
  * @of_flat_tree: pointer to a char* variable, will hold fdt start address
  * @of_size: pointer to a ulong variable, will hold fdt length
 
@@ -155,7 +153,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
  *      0 - success
  *      1 - failure
  */
-int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size)
+int boot_relocate_fdt(char **of_flat_tree, ulong *of_size)
 {
 	u64	start, size, usable, addr, low, mapsize;
 	void	*fdt_blob = *of_flat_tree;
@@ -187,18 +185,17 @@ int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size)
 		if (desired_addr == ~0UL) {
 			/* All ones means use fdt in place */
 			of_start = fdt_blob;
-			lmb_reserve(lmb, map_to_sysmem(of_start), of_len);
+			lmb_reserve(map_to_sysmem(of_start), of_len);
 			disable_relocation = 1;
 		} else if (desired_addr) {
-			addr = lmb_alloc_base(lmb, of_len, 0x1000,
-					      desired_addr);
+			addr = lmb_alloc_base(of_len, 0x1000, desired_addr);
 			of_start = map_sysmem(addr, of_len);
 			if (of_start == NULL) {
 				puts("Failed using fdt_high value for Device Tree");
 				goto error;
 			}
 		} else {
-			addr = lmb_alloc(lmb, of_len, 0x1000);
+			addr = lmb_alloc(of_len, 0x1000);
 			of_start = map_sysmem(addr, of_len);
 		}
 	} else {
@@ -220,7 +217,7 @@ int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size)
 		 * for LMB allocation.
 		 */
 		usable = min(start + size, low + mapsize);
-		addr = lmb_alloc_base(lmb, of_len, 0x1000, usable);
+		addr = lmb_alloc_base(of_len, 0x1000, usable);
 		of_start = map_sysmem(addr, of_len);
 
 		/* Allocation succeeded, use this block. */
 		if (of_start != NULL)
@@ -569,8 +566,7 @@ __weak int arch_fixup_fdt(void *blob)
 	return 0;
 }
 
-int image_setup_libfdt(struct bootm_headers *images, void *blob,
-		       struct lmb *lmb)
+int image_setup_libfdt(struct bootm_headers *images, void *blob, bool lmb)
 {
 	ulong *initrd_start = &images->initrd_start;
 	ulong *initrd_end = &images->initrd_end;
@@ -670,8 +666,8 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob,
 	}
 
 	/* Delete the old LMB reservation */
-	if (lmb)
-		lmb_free(lmb, map_to_sysmem(blob), fdt_totalsize(blob));
+	if (CONFIG_IS_ENABLED(LMB) && lmb)
+		lmb_free(map_to_sysmem(blob), fdt_totalsize(blob));
 
 	ret = fdt_shrink_to_minimum(blob, 0);
 	if (ret < 0)
@@ -679,8 +675,8 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob,
 	of_size = ret;
 
 	/* Create a new LMB reservation */
-	if (lmb)
-		lmb_reserve(lmb, map_to_sysmem(blob), of_size);
+	if (CONFIG_IS_ENABLED(LMB) && lmb)
+		lmb_reserve(map_to_sysmem(blob), of_size);
 
 #if defined(CONFIG_ARCH_KEYSTONE)
 	if (IS_ENABLED(CONFIG_OF_BOARD_SETUP))
diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c
index 437ac4e863..b31e0208df 100644
--- a/cmd/bdinfo.c
+++ b/cmd/bdinfo.c
@@ -162,10 +162,8 @@ static int bdinfo_print_all(struct bd_info *bd)
 	bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit);
 #endif
 	if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) {
-		struct lmb lmb;
-
-		lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
-		lmb_dump_all_force(&lmb);
+		lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
+		lmb_dump_all_force();
 		if (IS_ENABLED(CONFIG_OF_REAL))
 			printf("devicetree  = %s\n", fdtdec_get_srcname());
 	}
diff --git a/cmd/booti.c b/cmd/booti.c
index 62b19e8343..6018cbacf0 100644
--- a/cmd/booti.c
+++ b/cmd/booti.c
@@ -87,7 +87,7 @@ static int booti_start(struct bootm_info *bmi)
 	images->os.start = relocated_addr;
 	images->os.end = relocated_addr + image_size;
 
-	lmb_reserve(&images->lmb, images->ep, le32_to_cpu(image_size));
+	lmb_reserve(images->ep, le32_to_cpu(image_size));
 
 	/*
 	 * Handle the BOOTM_STATE_FINDOTHER state ourselves as we do not
diff --git a/cmd/bootz.c b/cmd/bootz.c
index 55837a7599..787203f5bd 100644
--- a/cmd/bootz.c
+++ b/cmd/bootz.c
@@ -56,7 +56,7 @@ static int bootz_start(struct cmd_tbl *cmdtp, int flag, int argc,
 	if (ret != 0)
 		return 1;
 
-	lmb_reserve(&images->lmb, images->ep, zi_end - zi_start);
+	lmb_reserve(images->ep, zi_end - zi_start);
 
 	/*
 	 * Handle the BOOTM_STATE_FINDOTHER state ourselves as we do not
diff --git a/cmd/elf.c b/cmd/elf.c
index 673c6c3051..f07e344a59 100644
--- a/cmd/elf.c
+++ b/cmd/elf.c
@@ -70,7 +70,7 @@ int do_bootelf(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
 		fdt_set_totalsize((void *)fdt_addr,
 				  fdt_totalsize(fdt_addr) + CONFIG_SYS_FDT_PAD);
-		if (image_setup_libfdt(&img, (void *)fdt_addr, NULL))
+		if (image_setup_libfdt(&img, (void *)fdt_addr, false))
 			return 1;
 	}
 #endif
diff --git a/cmd/load.c b/cmd/load.c
index d773a25d70..56da3a4c5d 100644
--- a/cmd/load.c
+++ b/cmd/load.c
@@ -141,7 +141,6 @@ static int do_load_serial(struct cmd_tbl *cmdtp, int flag, int argc,
 
 static ulong load_serial(long offset)
 {
-	struct lmb lmb;
 	char	record[SREC_MAXRECLEN + 1];	/* buffer for one S-Record */
 	char	binbuf[SREC_MAXBINLEN];		/* buffer for binary data */
 	int	binlen;				/* no. of data bytes in S-Rec. */
@@ -154,7 +153,7 @@ static ulong load_serial(long offset)
 	int	line_count = 0;
 	long ret;
 
-	lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
+	lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
 
 	while (read_record(record, SREC_MAXRECLEN + 1) >= 0) {
 		type = srec_decode(record, &binlen, &addr, binbuf);
@@ -182,7 +181,7 @@ static ulong load_serial(long offset)
 		{
 			void *dst;
 
-			ret = lmb_reserve(&lmb, store_addr, binlen);
+			ret = lmb_reserve(store_addr, binlen);
 			if (ret) {
 				printf("\nCannot overwrite reserved area (%08lx..%08lx)\n",
 				       store_addr, store_addr + binlen);
@@ -191,7 +190,7 @@ static ulong load_serial(long offset)
 			dst = map_sysmem(store_addr, binlen);
 			memcpy(dst, binbuf, binlen);
 			unmap_sysmem(dst);
-			lmb_free(&lmb, store_addr, binlen);
+			lmb_free(store_addr, binlen);
 		}
 		if ((store_addr) < start_addr)
 			start_addr = store_addr;
diff --git a/drivers/iommu/apple_dart.c b/drivers/iommu/apple_dart.c
index 9327dea1e3..611ac7cd6d 100644
--- a/drivers/iommu/apple_dart.c
+++ b/drivers/iommu/apple_dart.c
@@ -70,7 +70,6 @@
 
 struct apple_dart_priv {
 	void *base;
-	struct lmb lmb;
 	u64 *l1, *l2;
 	int bypass, shift;
 
@@ -124,7 +123,7 @@ static dma_addr_t apple_dart_map(struct udevice *dev, void *addr, size_t size)
 	off = (phys_addr_t)addr - paddr;
 	psize = ALIGN(size + off, DART_PAGE_SIZE);
 
-	dva = lmb_alloc(&priv->lmb, psize, DART_PAGE_SIZE);
+	dva = lmb_alloc(psize, DART_PAGE_SIZE);
 
 	idx = dva / DART_PAGE_SIZE;
 	for (i = 0; i < psize / DART_PAGE_SIZE; i++) {
@@ -160,7 +159,7 @@ static void apple_dart_unmap(struct udevice *dev, dma_addr_t addr, size_t size)
 		       (unsigned long)&priv->l2[idx + i]);
 	priv->flush_tlb(priv);
 
-	lmb_free(&priv->lmb, dva, psize);
+	lmb_free(dva, psize);
 }
 
 static struct iommu_ops apple_dart_ops = {
@@ -213,8 +212,7 @@ static int apple_dart_probe(struct udevice *dev)
 	priv->dvabase = DART_PAGE_SIZE;
 	priv->dvaend = SZ_4G - DART_PAGE_SIZE;
 
-	lmb_init(&priv->lmb);
-	lmb_add(&priv->lmb, priv->dvabase, priv->dvaend - priv->dvabase);
+	lmb_add(priv->dvabase, priv->dvaend - priv->dvabase);
 
 	/* Disable translations. */
 	for (sid = 0; sid < priv->nsid; sid++)
diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c
index e37976f86f..5b4a6a8982 100644
--- a/drivers/iommu/sandbox_iommu.c
+++ b/drivers/iommu/sandbox_iommu.c
@@ -11,14 +11,9 @@
 
 #define IOMMU_PAGE_SIZE		SZ_4K
 
-struct sandbox_iommu_priv {
-	struct lmb lmb;
-};
-
 static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr,
 				    size_t size)
 {
-	struct sandbox_iommu_priv *priv = dev_get_priv(dev);
 	phys_addr_t paddr, dva;
 	phys_size_t psize, off;
 
@@ -26,7 +21,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr,
 	off = virt_to_phys(addr) - paddr;
 	psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
 
-	dva = lmb_alloc(&priv->lmb, psize, IOMMU_PAGE_SIZE);
+	dva = lmb_alloc(psize, IOMMU_PAGE_SIZE);
 
 	return dva + off;
 }
@@ -34,7 +29,6 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr,
 static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr,
 				size_t size)
 {
-	struct sandbox_iommu_priv *priv = dev_get_priv(dev);
 	phys_addr_t dva;
 	phys_size_t psize;
 
@@ -42,7 +36,7 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr,
 	psize = size + (addr - dva);
 	psize = ALIGN(psize, IOMMU_PAGE_SIZE);
 
-	lmb_free(&priv->lmb, dva, psize);
+	lmb_free(dva, psize);
 }
 
 static struct iommu_ops sandbox_iommu_ops = {
@@ -52,10 +46,7 @@ static struct iommu_ops sandbox_iommu_ops = {
 
 static int sandbox_iommu_probe(struct udevice *dev)
 {
-	struct sandbox_iommu_priv *priv = dev_get_priv(dev);
-
-	lmb_init(&priv->lmb);
-	lmb_add(&priv->lmb, 0x89abc000, SZ_16K);
+	lmb_add(0x89abc000, SZ_16K);
 
 	return 0;
 }
@@ -69,7 +60,6 @@ U_BOOT_DRIVER(sandbox_iommu) = {
 	.name = "sandbox_iommu",
 	.id = UCLASS_IOMMU,
 	.of_match = sandbox_iommu_ids,
-	.priv_auto = sizeof(struct sandbox_iommu_priv),
 	.ops = &sandbox_iommu_ops,
 	.probe = sandbox_iommu_probe,
 };
diff --git a/fs/fs.c b/fs/fs.c
index 0c47943f33..2c835eef86 100644
--- a/fs/fs.c
+++ b/fs/fs.c
@@ -531,7 +531,6 @@ int fs_size(const char *filename, loff_t *size)
 static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset,
 			     loff_t len, struct fstype_info *info)
 {
-	struct lmb lmb;
 	int ret;
 	loff_t size;
 	loff_t read_len;
@@ -550,10 +549,10 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset,
 	if (len && len < read_len)
 		read_len = len;
 
-	lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
-	lmb_dump_all(&lmb);
+	lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
+	lmb_dump_all();
 
-	if (lmb_alloc_addr(&lmb, addr, read_len) == addr)
+	if (lmb_alloc_addr(addr, read_len) == addr)
 		return 0;
 
 	log_err("** Reading file would overwrite reserved memory **\n");
diff --git a/include/image.h b/include/image.h
index dd4042d1bd..74838a2f75 100644
--- a/include/image.h
+++ b/include/image.h
@@ -20,7 +20,6 @@
 #include <stdbool.h>
 
 /* Define this to avoid #ifdefs later on */
-struct lmb;
 struct fdt_region;
 
 #ifdef USE_HOSTCC
@@ -412,18 +411,8 @@ struct bootm_headers {
 #define BOOTM_STATE_PRE_LOAD	0x00000800
 #define BOOTM_STATE_MEASURE	0x00001000
 	int	state;
-
-#if defined(CONFIG_LMB) && !defined(USE_HOSTCC)
-	struct lmb	lmb;		/* for memory mgmt */
-#endif
 };
 
-#ifdef CONFIG_LMB
-#define images_lmb(_images)	(&(_images)->lmb)
-#else
-#define images_lmb(_images)	NULL
-#endif
-
 extern struct bootm_headers images;
/*
@@ -835,13 +824,13 @@ int boot_get_fdt(void *buf, const char *select, uint arch,
 		 struct bootm_headers *images,
 		 char **of_flat_tree, ulong *of_size);
-void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob);
-int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size);
+void boot_fdt_add_mem_rsv_regions(void *fdt_blob);
+int boot_relocate_fdt(char **of_flat_tree, ulong *of_size);
 
-int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len,
-		      ulong *initrd_start, ulong *initrd_end);
-int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end);
-int boot_get_kbd(struct lmb *lmb, struct bd_info **kbd);
+int boot_ramdisk_high(ulong rd_data, ulong rd_len, ulong *initrd_start,
+		      ulong *initrd_end);
+int boot_get_cmdline(ulong *cmd_start, ulong *cmd_end);
+int boot_get_kbd(struct bd_info **kbd);
 /*******************************************************************/
 /* Legacy format specific code (prefixed with image_) */
@@ -1029,11 +1018,10 @@ int image_decomp(int comp, ulong load, ulong image_start, int type,
  *
  * @images: Images information
  * @blob: FDT to update
- * @lmb: Points to logical memory block structure
+ * @lmb: Flag indicating use of lmb for reserving FDT memory region
  * Return: 0 if ok, <0 on failure
  */
-int image_setup_libfdt(struct bootm_headers *images, void *blob,
-		       struct lmb *lmb);
+int image_setup_libfdt(struct bootm_headers *images, void *blob, bool lmb);
 
 /**
  * Set up the FDT to use for booting a kernel
diff --git a/include/lmb.h b/include/lmb.h
index a1de18c3cb..a1cc45b726 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -24,97 +24,37 @@ enum lmb_flags {
 };
 
 /**
- * struct lmb_property - Description of one region.
+ * struct lmb_region - Description of one region.
  *
  * @base: Base address of the region.
  * @size: Size of the region
 * @flags: memory region attributes
 */
-struct lmb_property {
+struct lmb_region {
	phys_addr_t base;
	phys_size_t size;
	enum lmb_flags flags;
 };
-/*
- * For regions size management, see LMB configuration in KConfig
- * all the #if test are done with CONFIG_LMB_USE_MAX_REGIONS (boolean)
- *
- * case 1. CONFIG_LMB_USE_MAX_REGIONS is defined (legacy mode)
- *         => CONFIG_LMB_MAX_REGIONS is used to configure the region size,
- *         directly in the array lmb_region.region[], with the same
- *         configuration for memory and reserved regions.
- *
- * case 2. CONFIG_LMB_USE_MAX_REGIONS is not defined, the size of each
- *         region is configurated *independently* with
- *         => CONFIG_LMB_MEMORY_REGIONS: struct lmb.memory_regions
- *         => CONFIG_LMB_RESERVED_REGIONS: struct lmb.reserved_regions
- *         lmb_region.region is only a pointer to the correct buffer,
- *         initialized in lmb_init(). This configuration is useful to manage
- *         more reserved memory regions with CONFIG_LMB_RESERVED_REGIONS.
- */
-
-/**
- * struct lmb_region - Description of a set of region.
- *
- * @cnt: Number of regions.
- * @max: Size of the region array, max value of cnt.
- * @region: Array of the region properties
- */
-struct lmb_region {
-	unsigned long cnt;
-	unsigned long max;
-#if IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS)
-	struct lmb_property region[CONFIG_LMB_MAX_REGIONS];
-#else
-	struct lmb_property *region;
-#endif
-};
-/**
- * struct lmb - Logical memory block handle.
- *
- * Clients provide storage for Logical memory block (lmb) handles.
- * The content of the structure is managed by the lmb library.
- * A lmb struct is initialized by lmb_init() functions.
- * The lmb struct is passed to all other lmb APIs.
- *
- * @memory: Description of memory regions.
- * @reserved: Description of reserved regions.
- * @memory_regions: Array of the memory regions (statically allocated)
- * @reserved_regions: Array of the reserved regions (statically allocated)
- */
-struct lmb {
-	struct lmb_region memory;
-	struct lmb_region reserved;
-#if !IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS)
-	struct lmb_property memory_regions[CONFIG_LMB_MEMORY_REGIONS];
-	struct lmb_property reserved_regions[CONFIG_LMB_RESERVED_REGIONS];
-#endif
-};
-void lmb_init(struct lmb *lmb);
-void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob);
-void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
-				phys_size_t size, void *fdt_blob);
-long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size);
-long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size);
+void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob);
+void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
+				void *fdt_blob);
+long lmb_add(phys_addr_t base, phys_size_t size);
+long lmb_reserve(phys_addr_t base, phys_size_t size);
 
 /**
  * lmb_reserve_flags - Reserve one region with a specific flags bitfield.
  *
- * @lmb: the logical memory block struct
  * @base: base address of the memory region
  * @size: size of the memory region
  * @flags: flags for the memory region
  * Return: 0 if OK, > 0 for coalesced region or a negative error code.
  */
-long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base,
-		       phys_size_t size, enum lmb_flags flags);
-phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align);
-phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
-			   phys_addr_t max_addr);
-phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size);
-phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);
+long lmb_reserve_flags(phys_addr_t base, phys_size_t size,
+		       enum lmb_flags flags);
+phys_addr_t lmb_alloc(phys_size_t size, ulong align);
+phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr);
+phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size);
+phys_size_t lmb_get_free_size(phys_addr_t addr);
@@ -122,21 +62,33 @@ phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);
 /**
  * lmb_is_reserved_flags() - test if address is in reserved region with flag bits set
  *
  * The function checks if a reserved region comprising @addr exists which has
  * all flag bits set which are set in @flags.
  *
- * @lmb: the logical memory block struct
  * @addr: address to be tested
  * @flags: bitmap with bits to be tested
  * Return: 1 if matching reservation exists, 0 otherwise
  */
-int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags);
+int lmb_is_reserved_flags(phys_addr_t addr, int flags);
 
-long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size);
+long lmb_free(phys_addr_t base, phys_size_t size);
 
-void lmb_dump_all(struct lmb *lmb);
-void lmb_dump_all_force(struct lmb *lmb);
+void lmb_dump_all(void);
+void lmb_dump_all_force(void);
 
-void board_lmb_reserve(struct lmb *lmb);
-void arch_lmb_reserve(struct lmb *lmb);
-void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align);
+void board_lmb_reserve(void);
+void arch_lmb_reserve(void);
+void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align);
+/**
- lmb_mem_regions_init() - Initialise the LMB memory
- Initialise the LMB subsystem related data structures. There are two
- alloced lists that are initialised, one for the free memory, and one
- for the used memory.
- Initialise the two lists as part of board init.
- Return: 0 if OK, -ve on failure.
- */
+int lmb_mem_regions_init(void);
#endif /* __KERNEL__ */
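The cover letter's key behavioural change — a single global reserved-region map where LMB_NOOVERWRITE memory cannot be re-requested once allocated — reduces to a flag check against that shared list. The following is a minimal, self-contained model of that idea; all `demo_*` names, the fixed-size array, and the simplified overlap policy are illustrative assumptions, not U-Boot's actual implementation.

```c
#include <assert.h>

/* Illustrative stand-ins for the LMB region flags (not U-Boot's values) */
enum demo_flags { DEMO_NONE = 0, DEMO_NOMAP = 1, DEMO_NOOVERWRITE = 2 };

struct demo_region { unsigned long base, size; enum demo_flags flags; };

/* One global "used memory" list shared by all callers */
static struct demo_region demo_used[8];
static int demo_used_count;

/* Reserve [base, base+size); refuse overlap with NOOVERWRITE regions.
 * (Overlap with ordinary regions is permitted here to model re-use of a
 * load address by multiple images, as the cover letter describes.) */
static long demo_reserve_flags(unsigned long base, unsigned long size,
			       enum demo_flags flags)
{
	for (int i = 0; i < demo_used_count; i++) {
		struct demo_region *r = &demo_used[i];
		int overlap = base < r->base + r->size && r->base < base + size;

		if (overlap && (r->flags & DEMO_NOOVERWRITE))
			return -1;	/* cannot re-request this memory */
	}
	if (demo_used_count >= 8)
		return -1;
	demo_used[demo_used_count].base = base;
	demo_used[demo_used_count].size = size;
	demo_used[demo_used_count].flags = flags;
	demo_used_count++;
	return 0;
}

/* Test whether @addr lies in a reserved region with all @flags bits set */
static int demo_is_reserved_flags(unsigned long addr, int flags)
{
	for (int i = 0; i < demo_used_count; i++) {
		struct demo_region *r = &demo_used[i];

		if (addr >= r->base && addr <= r->base + r->size - 1)
			return (r->flags & flags) == flags;
	}
	return 0;
}
```

Because the list is global, a second caller attempting to reserve over a NOOVERWRITE region fails immediately, with no per-caller `struct lmb` to keep in sync.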
diff --git a/lib/efi_loader/efi_dt_fixup.c b/lib/efi_loader/efi_dt_fixup.c
index 9886e6897c..9d017804ee 100644
--- a/lib/efi_loader/efi_dt_fixup.c
+++ b/lib/efi_loader/efi_dt_fixup.c
@@ -172,7 +172,7 @@ efi_dt_fixup(struct efi_dt_fixup_protocol *this, void *dtb,
 		}
 		fdt_set_totalsize(dtb, *buffer_size);
 
-		if (image_setup_libfdt(&img, dtb, NULL)) {
+		if (image_setup_libfdt(&img, dtb, false)) {
 			log_err("failed to process device tree\n");
 			ret = EFI_INVALID_PARAMETER;
 			goto out;
diff --git a/lib/efi_loader/efi_helper.c b/lib/efi_loader/efi_helper.c
index 348612c3da..13e97fb741 100644
--- a/lib/efi_loader/efi_helper.c
+++ b/lib/efi_loader/efi_helper.c
@@ -513,7 +513,7 @@ efi_status_t efi_install_fdt(void *fdt)
 		return EFI_OUT_OF_RESOURCES;
 	}
 
-	if (image_setup_libfdt(&img, fdt, NULL)) {
+	if (image_setup_libfdt(&img, fdt, false)) {
 		log_err("ERROR: failed to process device tree\n");
 		return EFI_LOAD_ERROR;
 	}
diff --git a/lib/lmb.c b/lib/lmb.c
index 4d39c0d1f9..dd6f22654c 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -6,6 +6,7 @@
  * Copyright (C) 2001 Peter Bergner.
  */
 
+#include <alist.h>
 #include <efi_loader.h>
 #include <image.h>
 #include <mapmem.h>
@@ -15,41 +16,47 @@
 #include <asm/global_data.h>
 #include <asm/sections.h>
 
+#include <linux/kernel.h>
+
 DECLARE_GLOBAL_DATA_PTR;
 
 #define LMB_ALLOC_ANYWHERE	0
+#define LMB_ALIST_INITIAL_SIZE	4
 
-static void lmb_dump_region(struct lmb_region *rgn, char *name)
+struct alist lmb_free_mem;
+struct alist lmb_used_mem;
+
+static void lmb_dump_region(struct alist *lmb_rgn_lst, char *name)
 {
+	struct lmb_region *rgn = lmb_rgn_lst->data;
 	unsigned long long base, size, end;
 	enum lmb_flags flags;
 	int i;
 
-	printf(" %s.cnt = 0x%lx / max = 0x%lx\n", name, rgn->cnt, rgn->max);
+	printf(" %s.count = 0x%hx\n", name, lmb_rgn_lst->count);
 
-	for (i = 0; i < rgn->cnt; i++) {
-		base = rgn->region[i].base;
-		size = rgn->region[i].size;
+	for (i = 0; i < lmb_rgn_lst->count; i++) {
+		base = rgn[i].base;
+		size = rgn[i].size;
 		end = base + size - 1;
-		flags = rgn->region[i].flags;
+		flags = rgn[i].flags;
 
 		printf(" %s[%d]\t[0x%llx-0x%llx], 0x%08llx bytes flags: %x\n",
 		       name, i, base, end, size, flags);
 	}
 }
 
-void lmb_dump_all_force(struct lmb *lmb)
+void lmb_dump_all_force(void)
 {
 	printf("lmb_dump_all:\n");
-	lmb_dump_region(&lmb->memory, "memory");
-	lmb_dump_region(&lmb->reserved, "reserved");
+	lmb_dump_region(&lmb_free_mem, "memory");
+	lmb_dump_region(&lmb_used_mem, "reserved");
 }
 
-void lmb_dump_all(struct lmb *lmb)
+void lmb_dump_all(void)
 {
 #ifdef DEBUG
-	lmb_dump_all_force(lmb);
+	lmb_dump_all_force();
 #endif
 }
@@ -73,79 +80,74 @@ static long lmb_addrs_adjacent(phys_addr_t base1, phys_size_t size1,
 	return 0;
 }
 
-static long lmb_regions_overlap(struct lmb_region *rgn, unsigned long r1,
+static long lmb_regions_overlap(struct alist *lmb_rgn_lst, unsigned long r1,
 				unsigned long r2)
 {
-	phys_addr_t base1 = rgn->region[r1].base;
-	phys_size_t size1 = rgn->region[r1].size;
-	phys_addr_t base2 = rgn->region[r2].base;
-	phys_size_t size2 = rgn->region[r2].size;
+	struct lmb_region *rgn = lmb_rgn_lst->data;
+	phys_addr_t base1 = rgn[r1].base;
+	phys_size_t size1 = rgn[r1].size;
+	phys_addr_t base2 = rgn[r2].base;
+	phys_size_t size2 = rgn[r2].size;
 
 	return lmb_addrs_overlap(base1, size1, base2, size2);
 }
 
-static long lmb_regions_adjacent(struct lmb_region *rgn, unsigned long r1,
+static long lmb_regions_adjacent(struct alist *lmb_rgn_lst, unsigned long r1,
 				 unsigned long r2)
 {
-	phys_addr_t base1 = rgn->region[r1].base;
-	phys_size_t size1 = rgn->region[r1].size;
-	phys_addr_t base2 = rgn->region[r2].base;
-	phys_size_t size2 = rgn->region[r2].size;
+	struct lmb_region *rgn = lmb_rgn_lst->data;
+	phys_addr_t base1 = rgn[r1].base;
+	phys_size_t size1 = rgn[r1].size;
+	phys_addr_t base2 = rgn[r2].base;
+	phys_size_t size2 = rgn[r2].size;
 
 	return lmb_addrs_adjacent(base1, size1, base2, size2);
 }
-static void lmb_remove_region(struct lmb_region *rgn, unsigned long r)
+static void lmb_remove_region(struct alist *lmb_rgn_lst, unsigned long r)
 {
 	unsigned long i;
+	struct lmb_region *rgn = lmb_rgn_lst->data;
 
-	for (i = r; i < rgn->cnt - 1; i++) {
-		rgn->region[i].base = rgn->region[i + 1].base;
-		rgn->region[i].size = rgn->region[i + 1].size;
-		rgn->region[i].flags = rgn->region[i + 1].flags;
+	for (i = r; i < lmb_rgn_lst->count - 1; i++) {
+		rgn[i].base = rgn[i + 1].base;
+		rgn[i].size = rgn[i + 1].size;
+		rgn[i].flags = rgn[i + 1].flags;
 	}
-	rgn->cnt--;
+	lmb_rgn_lst->count--;
 }
 
 /* Assumption: base addr of region 1 < base addr of region 2 */
-static void lmb_coalesce_regions(struct lmb_region *rgn, unsigned long r1,
+static void lmb_coalesce_regions(struct alist *lmb_rgn_lst, unsigned long r1,
 				 unsigned long r2)
 {
-	rgn->region[r1].size += rgn->region[r2].size;
-	lmb_remove_region(rgn, r2);
+	struct lmb_region *rgn = lmb_rgn_lst->data;
+
+	rgn[r1].size += rgn[r2].size;
+	lmb_remove_region(lmb_rgn_lst, r2);
 }
 /*Assumption : base addr of region 1 < base addr of region 2*/
-static void lmb_fix_over_lap_regions(struct lmb_region *rgn, unsigned long r1,
-				     unsigned long r2)
+static void lmb_fix_over_lap_regions(struct alist *lmb_rgn_lst,
+				     unsigned long r1, unsigned long r2)
 {
-	phys_addr_t base1 = rgn->region[r1].base;
-	phys_size_t size1 = rgn->region[r1].size;
-	phys_addr_t base2 = rgn->region[r2].base;
-	phys_size_t size2 = rgn->region[r2].size;
+	struct lmb_region *rgn = lmb_rgn_lst->data;
+	phys_addr_t base1 = rgn[r1].base;
+	phys_size_t size1 = rgn[r1].size;
+	phys_addr_t base2 = rgn[r2].base;
+	phys_size_t size2 = rgn[r2].size;
 
 	if (base1 + size1 > base2 + size2) {
 		printf("This will not be a case any time\n");
 		return;
 	}
-	rgn->region[r1].size = base2 + size2 - base1;
-	lmb_remove_region(rgn, r2);
-}
-
-void lmb_init(struct lmb *lmb)
-{
-#if IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS)
-	lmb->memory.max = CONFIG_LMB_MAX_REGIONS;
-	lmb->reserved.max = CONFIG_LMB_MAX_REGIONS;
-#else
-	lmb->memory.max = CONFIG_LMB_MEMORY_REGIONS;
-	lmb->reserved.max = CONFIG_LMB_RESERVED_REGIONS;
-	lmb->memory.region = lmb->memory_regions;
-	lmb->reserved.region = lmb->reserved_regions;
-#endif
-	lmb->memory.cnt = 0;
-	lmb->reserved.cnt = 0;
+	rgn[r1].size = base2 + size2 - base1;
+	lmb_remove_region(lmb_rgn_lst, r2);
 }
-void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align)
+void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align)
 {
 	ulong bank_end;
 	int bank;
@@ -171,10 +173,10 @@ void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align)
 		if (bank_end > end)
 			bank_end = end - 1;
 
-		lmb_reserve(lmb, sp, bank_end - sp + 1);
+		lmb_reserve(sp, bank_end - sp + 1);
 
 		if (gd->flags & GD_FLG_SKIP_RELOC)
-			lmb_reserve(lmb, (phys_addr_t)(uintptr_t)_start, gd->mon_len);
+			lmb_reserve((phys_addr_t)(uintptr_t)_start, gd->mon_len);
 
 		break;
 	}
@@ -186,10 +188,9 @@ void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align)
  * Add reservations for all EFI memory areas that are not
  * EFI_CONVENTIONAL_MEMORY.
  *
- * @lmb:	lmb environment
  * Return:	0 on success, 1 on failure
  */
-static __maybe_unused int efi_lmb_reserve(struct lmb *lmb)
+static __maybe_unused int efi_lmb_reserve(void)
 {
 	struct efi_mem_desc *memmap = NULL, *map;
 	efi_uintn_t i, map_size = 0;
@@ -201,8 +202,7 @@ static __maybe_unused int efi_lmb_reserve(struct lmb *lmb)
 	for (i = 0, map = memmap; i < map_size / sizeof(*map); ++map, ++i) {
 		if (map->type != EFI_CONVENTIONAL_MEMORY) {
-			lmb_reserve_flags(lmb,
-					  map_to_sysmem((void *)(uintptr_t)
+			lmb_reserve_flags(map_to_sysmem((void *)(uintptr_t)
 							map->physical_start),
 					  map->num_pages * EFI_PAGE_SIZE,
 					  map->type == EFI_RESERVED_MEMORY_TYPE
@@ -214,64 +214,63 @@ static __maybe_unused int efi_lmb_reserve(struct lmb *lmb)
 	return 0;
 }
 
-static void lmb_reserve_common(struct lmb *lmb, void *fdt_blob)
+static void lmb_reserve_common(void *fdt_blob)
 {
-	arch_lmb_reserve(lmb);
-	board_lmb_reserve(lmb);
+	arch_lmb_reserve();
+	board_lmb_reserve();
 
 	if (CONFIG_IS_ENABLED(OF_LIBFDT) && fdt_blob)
-		boot_fdt_add_mem_rsv_regions(lmb, fdt_blob);
+		boot_fdt_add_mem_rsv_regions(fdt_blob);
 
 	if (CONFIG_IS_ENABLED(EFI_LOADER))
-		efi_lmb_reserve(lmb);
+		efi_lmb_reserve();
 }
 
 /* Initialize the struct, add memory and call arch/board reserve functions */
-void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob)
+void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob)
 {
 	int i;
 
-	lmb_init(lmb);
-
 	for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
-		if (bd->bi_dram[i].size) {
-			lmb_add(lmb, bd->bi_dram[i].start,
-				bd->bi_dram[i].size);
-		}
+		if (bd->bi_dram[i].size)
+			lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size);
 	}
 
-	lmb_reserve_common(lmb, fdt_blob);
+	lmb_reserve_common(fdt_blob);
 }
 
 /* Initialize the struct, add memory and call arch/board reserve functions */
-void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
-				phys_size_t size, void *fdt_blob)
+void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
+				void *fdt_blob)
 {
-	lmb_init(lmb);
-	lmb_add(lmb, base, size);
-	lmb_reserve_common(lmb, fdt_blob);
+	lmb_add(base, size);
+	lmb_reserve_common(fdt_blob);
 }
 /* This routine called with relocation disabled. */
-static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
+static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base,
 				 phys_size_t size, enum lmb_flags flags)
 {
 	unsigned long coalesced = 0;
 	long adjacent, i;
+	struct lmb_region *rgn = lmb_rgn_lst->data;
+
+	if (alist_err(lmb_rgn_lst))
+		return -1;
 
-	if (rgn->cnt == 0) {
-		rgn->region[0].base = base;
-		rgn->region[0].size = size;
-		rgn->region[0].flags = flags;
-		rgn->cnt = 1;
+	if (alist_empty(lmb_rgn_lst)) {
+		rgn[0].base = base;
+		rgn[0].size = size;
+		rgn[0].flags = flags;
+		lmb_rgn_lst->count = 1;
 		return 0;
 	}
 
 	/* First try and coalesce this LMB with another. */
-	for (i = 0; i < rgn->cnt; i++) {
-		phys_addr_t rgnbase = rgn->region[i].base;
-		phys_size_t rgnsize = rgn->region[i].size;
-		phys_size_t rgnflags = rgn->region[i].flags;
+	for (i = 0; i < lmb_rgn_lst->count; i++) {
+		phys_addr_t rgnbase = rgn[i].base;
+		phys_size_t rgnsize = rgn[i].size;
+		phys_size_t rgnflags = rgn[i].flags;
 		phys_addr_t end = base + size - 1;
 		phys_addr_t rgnend = rgnbase + rgnsize - 1;
 		if (rgnbase <= base && end <= rgnend) {
@@ -286,14 +285,14 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 		if (adjacent > 0) {
 			if (flags != rgnflags)
 				break;
-			rgn->region[i].base -= size;
-			rgn->region[i].size += size;
+			rgn[i].base -= size;
+			rgn[i].size += size;
 			coalesced++;
 			break;
 		} else if (adjacent < 0) {
 			if (flags != rgnflags)
 				break;
-			rgn->region[i].size += size;
+			rgn[i].size += size;
 			coalesced++;
 			break;
 		} else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) {
@@ -302,99 +301,106 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 		}
 	}
 
-	if (i < rgn->cnt - 1 && rgn->region[i].flags == rgn->region[i + 1].flags) {
-		if (lmb_regions_adjacent(rgn, i, i + 1)) {
-			lmb_coalesce_regions(rgn, i, i + 1);
+	if (i < lmb_rgn_lst->count - 1 &&
+	    rgn[i].flags == rgn[i + 1].flags) {
+		if (lmb_regions_adjacent(lmb_rgn_lst, i, i + 1)) {
+			lmb_coalesce_regions(lmb_rgn_lst, i, i + 1);
 			coalesced++;
-		} else if (lmb_regions_overlap(rgn, i, i + 1)) {
+		} else if (lmb_regions_overlap(lmb_rgn_lst, i, i + 1)) {
 			/* fix overlapping area */
-			lmb_fix_over_lap_regions(rgn, i, i + 1);
+			lmb_fix_over_lap_regions(lmb_rgn_lst, i, i + 1);
 			coalesced++;
 		}
 	}
 
 	if (coalesced)
 		return coalesced;
-	if (rgn->cnt >= rgn->max)
-		return -1;
+
+	if (alist_full(lmb_rgn_lst)) {
+		if (!alist_expand_by(lmb_rgn_lst, lmb_rgn_lst->alloc * 2))
+			return -1;
+		else
+			rgn = lmb_rgn_lst->data;
+	}
 
 	/* Couldn't coalesce the LMB, so add it to the sorted table. */
-	for (i = rgn->cnt-1; i >= 0; i--) {
-		if (base < rgn->region[i].base) {
-			rgn->region[i + 1].base = rgn->region[i].base;
-			rgn->region[i + 1].size = rgn->region[i].size;
-			rgn->region[i + 1].flags = rgn->region[i].flags;
+	for (i = lmb_rgn_lst->count - 1; i >= 0; i--) {
+		if (base < rgn[i].base) {
+			rgn[i + 1].base = rgn[i].base;
+			rgn[i + 1].size = rgn[i].size;
+			rgn[i + 1].flags = rgn[i].flags;
 		} else {
-			rgn->region[i + 1].base = base;
-			rgn->region[i + 1].size = size;
-			rgn->region[i + 1].flags = flags;
+			rgn[i + 1].base = base;
+			rgn[i + 1].size = size;
+			rgn[i + 1].flags = flags;
 			break;
 		}
 	}
 
-	if (base < rgn->region[0].base) {
-		rgn->region[0].base = base;
-		rgn->region[0].size = size;
-		rgn->region[0].flags = flags;
+	if (base < rgn[0].base) {
+		rgn[0].base = base;
+		rgn[0].size = size;
+		rgn[0].flags = flags;
 	}
 
-	rgn->cnt++;
+	lmb_rgn_lst->count++;
 
 	return 0;
 }
-static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base,
+static long lmb_add_region(struct alist *lmb_rgn_lst, phys_addr_t base,
 			   phys_size_t size)
 {
-	return lmb_add_region_flags(rgn, base, size, LMB_NONE);
+	return lmb_add_region_flags(lmb_rgn_lst, base, size, LMB_NONE);
 }
 
 /* This routine may be called with relocation disabled. */
-long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+long lmb_add(phys_addr_t base, phys_size_t size)
 {
-	struct lmb_region *_rgn = &(lmb->memory);
+	struct alist *lmb_rgn_lst = &lmb_free_mem;
 
-	return lmb_add_region(_rgn, base, size);
+	return lmb_add_region(lmb_rgn_lst, base, size);
 }
-long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+long lmb_free(phys_addr_t base, phys_size_t size)
 {
-	struct lmb_region *rgn = &(lmb->reserved);
+	struct lmb_region *rgn;
+	struct alist *lmb_rgn_lst = &lmb_used_mem;
 	phys_addr_t rgnbegin, rgnend;
 	phys_addr_t end = base + size - 1;
 	int i;
 
 	rgnbegin = rgnend = 0; /* supress gcc warnings */
-
+	rgn = lmb_rgn_lst->data;
 	/* Find the region where (base, size) belongs to */
-	for (i = 0; i < rgn->cnt; i++) {
-		rgnbegin = rgn->region[i].base;
-		rgnend = rgnbegin + rgn->region[i].size - 1;
+	for (i = 0; i < lmb_rgn_lst->count; i++) {
+		rgnbegin = rgn[i].base;
+		rgnend = rgnbegin + rgn[i].size - 1;
 
 		if ((rgnbegin <= base) && (end <= rgnend))
 			break;
 	}
 
 	/* Didn't find the region */
-	if (i == rgn->cnt)
+	if (i == lmb_rgn_lst->count)
 		return -1;
 
 	/* Check to see if we are removing entire region */
 	if ((rgnbegin == base) && (rgnend == end)) {
-		lmb_remove_region(rgn, i);
+		lmb_remove_region(lmb_rgn_lst, i);
 		return 0;
 	}
 
 	/* Check to see if region is matching at the front */
 	if (rgnbegin == base) {
-		rgn->region[i].base = end + 1;
-		rgn->region[i].size -= size;
+		rgn[i].base = end + 1;
+		rgn[i].size -= size;
 		return 0;
 	}
 
 	/* Check to see if the region is matching at the end */
 	if (rgnend == end) {
-		rgn->region[i].size -= size;
+		rgn[i].size -= size;
 		return 0;
 	}
 
@@ -402,37 +408,37 @@ long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 	 * We need to split the entry - adjust the current one to the
 	 * beginging of the hole and add the region after hole.
 	 */
-	rgn->region[i].size = base - rgn->region[i].base;
-	return lmb_add_region_flags(rgn, end + 1, rgnend - end,
-				    rgn->region[i].flags);
+	rgn[i].size = base - rgn[i].base;
+	return lmb_add_region_flags(lmb_rgn_lst, end + 1, rgnend - end,
+				    rgn[i].flags);
 }
-long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_size_t size,
-		       enum lmb_flags flags)
+long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags)
 {
-	struct lmb_region *_rgn = &(lmb->reserved);
+	struct alist *lmb_rgn_lst = &lmb_used_mem;
 
-	return lmb_add_region_flags(_rgn, base, size, flags);
+	return lmb_add_region_flags(lmb_rgn_lst, base, size, flags);
 }
 
-long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+long lmb_reserve(phys_addr_t base, phys_size_t size)
 {
-	return lmb_reserve_flags(lmb, base, size, LMB_NONE);
+	return lmb_reserve_flags(base, size, LMB_NONE);
 }
 
-static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base,
+static long lmb_overlaps_region(struct alist *lmb_rgn_lst, phys_addr_t base,
 				phys_size_t size)
 {
 	unsigned long i;
+	struct lmb_region *rgn = lmb_rgn_lst->data;
 
-	for (i = 0; i < rgn->cnt; i++) {
-		phys_addr_t rgnbase = rgn->region[i].base;
-		phys_size_t rgnsize = rgn->region[i].size;
+	for (i = 0; i < lmb_rgn_lst->count; i++) {
+		phys_addr_t rgnbase = rgn[i].base;
+		phys_size_t rgnsize = rgn[i].size;
+
 		if (lmb_addrs_overlap(base, size, rgnbase, rgnsize))
 			break;
 	}
 
-	return (i < rgn->cnt) ? i : -1;
+	return (i < lmb_rgn_lst->count) ? i : -1;
 }
 static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)
@@ -440,16 +446,18 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)
 	return addr & ~(size - 1);
 }
 
-static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size,
-				    ulong align, phys_addr_t max_addr)
+static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
+				    phys_addr_t max_addr)
 {
 	long i, rgn;
 	phys_addr_t base = 0;
 	phys_addr_t res_base;
+	struct lmb_region *lmb_used = lmb_used_mem.data;
+	struct lmb_region *lmb_memory = lmb_free_mem.data;
 
-	for (i = lmb->memory.cnt - 1; i >= 0; i--) {
-		phys_addr_t lmbbase = lmb->memory.region[i].base;
-		phys_size_t lmbsize = lmb->memory.region[i].size;
+	for (i = lmb_free_mem.count - 1; i >= 0; i--) {
+		phys_addr_t lmbbase = lmb_memory[i].base;
+		phys_size_t lmbsize = lmb_memory[i].size;
 
 		if (lmbsize < size)
 			continue;
@@ -465,15 +473,16 @@ static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size,
 			continue;
 
 		while (base && lmbbase <= base) {
-			rgn = lmb_overlaps_region(&lmb->reserved, base, size);
+			rgn = lmb_overlaps_region(&lmb_used_mem, base, size);
 			if (rgn < 0) {
 				/* This area isn't reserved, take it */
-				if (lmb_add_region(&lmb->reserved, base,
+				if (lmb_add_region(&lmb_used_mem, base,
 						   size) < 0)
 					return 0;
 				return base;
 			}
-			res_base = lmb->reserved.region[rgn].base;
+
+			res_base = lmb_used[rgn].base;
 			if (res_base < size)
 				break;
 			base = lmb_align_down(res_base - size, align);
@@ -482,16 +491,16 @@ static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size,
 	return 0;
 }
 
-phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align)
+phys_addr_t lmb_alloc(phys_size_t size, ulong align)
 {
-	return lmb_alloc_base(lmb, size, align, LMB_ALLOC_ANYWHERE);
+	return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE);
 }
-phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
-			   phys_addr_t max_addr)
+phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 {
 	phys_addr_t alloc;
 
-	alloc = __lmb_alloc_base(lmb, size, align, max_addr);
+	alloc = __lmb_alloc_base(size, align, max_addr);
 
 	if (alloc == 0)
 		printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
@@ -504,22 +513,23 @@ phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_
 /*
  * Try to allocate a specific address range: must be in defined memory but not
  * reserved
  */
-phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
 {
 	long rgn;
+	struct lmb_region *lmb_memory = lmb_free_mem.data;
 
 	/* Check if the requested address is in one of the memory regions */
-	rgn = lmb_overlaps_region(&lmb->memory, base, size);
+	rgn = lmb_overlaps_region(&lmb_free_mem, base, size);
 	if (rgn >= 0) {
 		/*
 		 * Check if the requested end address is in the same memory
 		 * region we found.
 		 */
-		if (lmb_addrs_overlap(lmb->memory.region[rgn].base,
-				      lmb->memory.region[rgn].size,
+		if (lmb_addrs_overlap(lmb_memory[rgn].base,
+				      lmb_memory[rgn].size,
 				      base + size - 1, 1)) {
 			/* ok, reserve the memory */
-			if (lmb_reserve(lmb, base, size) >= 0)
+			if (lmb_reserve(base, size) >= 0)
 				return base;
 		}
 	}
@@ -527,51 +537,86 @@ phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 }
 
 /* Return number of bytes from a given address that are free */
-phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr)
+phys_size_t lmb_get_free_size(phys_addr_t addr)
 {
 	int i;
 	long rgn;
+	struct lmb_region *lmb_used = lmb_used_mem.data;
+	struct lmb_region *lmb_memory = lmb_free_mem.data;
 
 	/* check if the requested address is in the memory regions */
-	rgn = lmb_overlaps_region(&lmb->memory, addr, 1);
+	rgn = lmb_overlaps_region(&lmb_free_mem, addr, 1);
 	if (rgn >= 0) {
-		for (i = 0; i < lmb->reserved.cnt; i++) {
-			if (addr < lmb->reserved.region[i].base) {
+		for (i = 0; i < lmb_used_mem.count; i++) {
+			if (addr < lmb_used[i].base) {
 				/* first reserved range > requested address */
-				return lmb->reserved.region[i].base - addr;
+				return lmb_used[i].base - addr;
 			}
-			if (lmb->reserved.region[i].base +
-			    lmb->reserved.region[i].size > addr) {
+			if (lmb_used[i].base +
+			    lmb_used[i].size > addr) {
 				/* requested addr is in this reserved range */
 				return 0;
 			}
 		}
 		/* if we come here: no reserved ranges above requested addr */
-		return lmb->memory.region[lmb->memory.cnt - 1].base +
-		       lmb->memory.region[lmb->memory.cnt - 1].size - addr;
+		return lmb_memory[lmb_free_mem.count - 1].base +
+		       lmb_memory[lmb_free_mem.count - 1].size - addr;
 	}
 	return 0;
 }
-int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags)
+int lmb_is_reserved_flags(phys_addr_t addr, int flags)
 {
 	int i;
+	struct lmb_region *lmb_used = lmb_used_mem.data;
 
-	for (i = 0; i < lmb->reserved.cnt; i++) {
-		phys_addr_t upper = lmb->reserved.region[i].base +
-			lmb->reserved.region[i].size - 1;
-		if ((addr >= lmb->reserved.region[i].base) && (addr <= upper))
-			return (lmb->reserved.region[i].flags & flags) == flags;
+	for (i = 0; i < lmb_used_mem.count; i++) {
+		phys_addr_t upper = lmb_used[i].base +
+			lmb_used[i].size - 1;
+		if ((addr >= lmb_used[i].base) && (addr <= upper))
+			return (lmb_used[i].flags & flags) == flags;
 	}
 	return 0;
 }
 
-__weak void board_lmb_reserve(struct lmb *lmb)
+__weak void board_lmb_reserve(void)
 {
 	/* please define platform specific board_lmb_reserve() */
 }
 
-__weak void arch_lmb_reserve(struct lmb *lmb)
+__weak void arch_lmb_reserve(void)
 {
 	/* please define platform specific arch_lmb_reserve() */
 }
+/**
+ * lmb_mem_regions_init() - Initialise the LMB memory
+ *
+ * Initialise the LMB subsystem related data structures. There are two
+ * alloced lists that are initialised, one for the free memory, and one
+ * for the used memory.
+ *
+ * Initialise the two lists as part of board init.
+ *
+ * Return: 0 if OK, -ve on failure.
+ */
+int lmb_mem_regions_init(void)
+{
+	bool ret;
+
+	ret = alist_init(&lmb_free_mem, sizeof(struct lmb_region),
+			 (uint)LMB_ALIST_INITIAL_SIZE);
+	if (!ret) {
+		log_debug("Unable to initialise the list for LMB free memory\n");
+		return -1;
+	}
+
+	ret = alist_init(&lmb_used_mem, sizeof(struct lmb_region),
+			 (uint)LMB_ALIST_INITIAL_SIZE);
+	if (!ret) {
+		log_debug("Unable to initialise the list for LMB used memory\n");
+		return -1;
+	}
+
+	return 0;
+}
diff --git a/net/tftp.c b/net/tftp.c
index 65c39d7fb7..a199f4a6df 100644
--- a/net/tftp.c
+++ b/net/tftp.c
@@ -710,12 +710,11 @@ static void tftp_timeout_handler(void)
 static int tftp_init_load_addr(void)
 {
 #ifdef CONFIG_LMB
-	struct lmb lmb;
 	phys_size_t max_size;
 
-	lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
+	lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
 
-	max_size = lmb_get_free_size(&lmb, image_load_addr);
+	max_size = lmb_get_free_size(image_load_addr);
 	if (!max_size)
 		return -1;
diff --git a/net/wget.c b/net/wget.c
index f1dd7abeff..7cf809a8ef 100644
--- a/net/wget.c
+++ b/net/wget.c
@@ -73,12 +73,11 @@ static ulong wget_load_size;
  */
 static int wget_init_load_size(void)
 {
-	struct lmb lmb;
 	phys_size_t max_size;
 
-	lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
+	lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
 
-	max_size = lmb_get_free_size(&lmb, image_load_addr);
+	max_size = lmb_get_free_size(image_load_addr);
 	if (!max_size)
 		return -1;
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c
index 027848c3e2..34d2b141d8 100644
--- a/test/cmd/bdinfo.c
+++ b/test/cmd/bdinfo.c
@@ -200,7 +200,7 @@ static int bdinfo_test_all(struct unit_test_state *uts)
 	if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) {
 		struct lmb lmb;
 
-		lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob);
+		lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
 		ut_assertok(lmb_test_dump_all(uts, &lmb));
 		if (IS_ENABLED(CONFIG_OF_REAL))
 			ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());
diff --git a/test/lib/lmb.c b/test/lib/lmb.c
index 4b5b6e5e20..9f6b5633a7 100644
--- a/test/lib/lmb.c
+++ b/test/lib/lmb.c
@@ -12,6 +12,8 @@
 #include <test/test.h>
 #include <test/ut.h>
 
+extern struct lmb lmb;
+
 static inline bool lmb_is_nomap(struct lmb_property *m)
 {
 	return m->flags & LMB_NOMAP;
@@ -64,7 +66,6 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram,
 	const phys_addr_t ram_end = ram + ram_size;
 	const phys_addr_t alloc_64k_end = alloc_64k_addr + 0x10000;
-	struct lmb lmb;
 	long ret;
 	phys_addr_t a, a2, b, b2, c, d;
 
@@ -75,14 +76,12 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram,
 	ut_assert(alloc_64k_addr >= ram + 8);
 	ut_assert(alloc_64k_end <= ram_end - 8);
 
-	lmb_init(&lmb);
-
 	if (ram0_size) {
-		ret = lmb_add(&lmb, ram0, ram0_size);
+		ret = lmb_add(ram0, ram0_size);
 		ut_asserteq(ret, 0);
 	}
 
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	if (ram0_size) {
@@ -98,67 +97,67 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram,
 	}
 
 	/* reserve 64KiB somewhere */
-	ret = lmb_reserve(&lmb, alloc_64k_addr, 0x10000);
+	ret = lmb_reserve(alloc_64k_addr, 0x10000);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
 
 	/* allocate somewhere, should be at the end of RAM */
-	a = lmb_alloc(&lmb, 4, 1);
+	a = lmb_alloc(4, 1);
 	ut_asserteq(a, ram_end - 4);
 	ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr, 0x10000, ram_end - 4, 4, 0, 0);
 	/* alloc below end of reserved region -> below reserved region */
-	b = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end);
+	b = lmb_alloc_base(4, 1, alloc_64k_end);
 	ut_asserteq(b, alloc_64k_addr - 4);
 	ASSERT_LMB(&lmb, 0, 0, 2,
 		   alloc_64k_addr - 4, 0x10000 + 4, ram_end - 4, 4, 0, 0);
 
 	/* 2nd time */
-	c = lmb_alloc(&lmb, 4, 1);
+	c = lmb_alloc(4, 1);
 	ut_asserteq(c, ram_end - 8);
 	ASSERT_LMB(&lmb, 0, 0, 2,
 		   alloc_64k_addr - 4, 0x10000 + 4, ram_end - 8, 8, 0, 0);
-	d = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end);
+	d = lmb_alloc_base(4, 1, alloc_64k_end);
 	ut_asserteq(d, alloc_64k_addr - 8);
 	ASSERT_LMB(&lmb, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
 
-	ret = lmb_free(&lmb, a, 4);
+	ret = lmb_free(a, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
 	/* allocate again to ensure we get the same address */
-	a2 = lmb_alloc(&lmb, 4, 1);
+	a2 = lmb_alloc(4, 1);
 	ut_asserteq(a, a2);
 	ASSERT_LMB(&lmb, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
-	ret = lmb_free(&lmb, a2, 4);
+	ret = lmb_free(a2, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
 
-	ret = lmb_free(&lmb, b, 4);
+	ret = lmb_free(b, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, 0, 0, 3,
 		   alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000,
 		   ram_end - 8, 4);
 	/* allocate again to ensure we get the same address */
-	b2 = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end);
+	b2 = lmb_alloc_base(4, 1, alloc_64k_end);
 	ut_asserteq(b, b2);
 	ASSERT_LMB(&lmb, 0, 0, 2,
 		   alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
-	ret = lmb_free(&lmb, b2, 4);
+	ret = lmb_free(b2, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, 0, 0, 3,
 		   alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000,
 		   ram_end - 8, 4);
 
-	ret = lmb_free(&lmb, c, 4);
+	ret = lmb_free(c, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, 0, 0, 2,
 		   alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, 0, 0);
-	ret = lmb_free(&lmb, d, 4);
+	ret = lmb_free(d, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
@@ -229,44 +228,41 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram)
 	const phys_size_t big_block_size = 0x10000000;
 	const phys_addr_t ram_end = ram + ram_size;
 	const phys_addr_t alloc_64k_addr = ram + 0x10000000;
-	struct lmb lmb;
 	long ret;
 	phys_addr_t a, b;
 
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
 
-	lmb_init(&lmb);
-
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	/* reserve 64KiB in the middle of RAM */
-	ret = lmb_reserve(&lmb, alloc_64k_addr, 0x10000);
+	ret = lmb_reserve(alloc_64k_addr, 0x10000);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
 
 	/* allocate a big block, should be below reserved */
-	a = lmb_alloc(&lmb, big_block_size, 1);
+	a = lmb_alloc(big_block_size, 1);
 	ut_asserteq(a, ram);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, a,
 		   big_block_size + 0x10000, 0, 0, 0, 0);
 	/* allocate 2nd big block */
 	/* This should fail, printing an error */
-	b = lmb_alloc(&lmb, big_block_size, 1);
+	b = lmb_alloc(big_block_size, 1);
 	ut_asserteq(b, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, a,
 		   big_block_size + 0x10000, 0, 0, 0, 0);
 
-	ret = lmb_free(&lmb, a, big_block_size);
+	ret = lmb_free(a, big_block_size);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
 
 	/* allocate too big block */
 	/* This should fail, printing an error */
-	a = lmb_alloc(&lmb, ram_size, 1);
+	a = lmb_alloc(ram_size, 1);
 	ut_asserteq(a, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
@@ -294,7 +290,6 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
 {
 	const phys_size_t ram_size = 0x20000000;
 	const phys_addr_t ram_end = ram + ram_size;
-	struct lmb lmb;
 	long ret;
 	phys_addr_t a, b;
 	const phys_addr_t alloc_size_aligned = (alloc_size + align - 1) &
@@ -303,19 +298,17 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
 
-	lmb_init(&lmb);
-
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
 
 	/* allocate a block */
-	a = lmb_alloc(&lmb, alloc_size, align);
+	a = lmb_alloc(alloc_size, align);
 	ut_assert(a != 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
 		   alloc_size, 0, 0, 0, 0);
 
 	/* allocate another block */
-	b = lmb_alloc(&lmb, alloc_size, align);
+	b = lmb_alloc(alloc_size, align);
 	ut_assert(b != 0);
 	if (alloc_size == alloc_size_aligned) {
 		ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size -
@@ -327,21 +320,21 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
 			   - alloc_size_aligned, alloc_size, 0, 0);
 	}
 	/* and free them */
-	ret = lmb_free(&lmb, b, alloc_size);
+	ret = lmb_free(b, alloc_size);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
 		   alloc_size, 0, 0, 0, 0);
-	ret = lmb_free(&lmb, a, alloc_size);
+	ret = lmb_free(a, alloc_size);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
 
 	/* allocate a block with base*/
-	b = lmb_alloc_base(&lmb, alloc_size, align, ram_end);
+	b = lmb_alloc_base(alloc_size, align, ram_end);
 	ut_assert(a == b);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned,
 		   alloc_size, 0, 0, 0, 0);
 	/* and free it */
-	ret = lmb_free(&lmb, b, alloc_size);
+	ret = lmb_free(b, alloc_size);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
@@ -385,34 +378,31 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0;
 	const phys_size_t ram_size = 0x20000000;
-	struct lmb lmb;
 	long ret;
 	phys_addr_t a, b;
 
-	lmb_init(&lmb);
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	/* allocate nearly everything */
-	a = lmb_alloc(&lmb, ram_size - 4, 1);
+	a = lmb_alloc(ram_size - 4, 1);
 	ut_asserteq(a, ram + 4);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0);
 	/* allocate the rest */
 	/* This should fail as the allocated address would be 0 */
-	b = lmb_alloc(&lmb, 4, 1);
+	b = lmb_alloc(4, 1);
 	ut_asserteq(b, 0);
 	/* check that this was an error by checking lmb */
 	ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0);
 	/* check that this was an error by freeing b */
-	ret = lmb_free(&lmb, b, 4);
+	ret = lmb_free(b, 4);
 	ut_asserteq(ret, -1);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0);
 
-	ret = lmb_free(&lmb, a, ram_size - 4);
+	ret = lmb_free(a, ram_size - 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
@@ -425,42 +415,39 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0x40000000;
 	const phys_size_t ram_size = 0x20000000;
-	struct lmb lmb;
 	long ret;
 
-	lmb_init(&lmb);
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
-	ret = lmb_reserve(&lmb, 0x40010000, 0x10000);
+	ret = lmb_reserve(0x40010000, 0x10000);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
 	/* allocate overlapping region should fail */
-	ret = lmb_reserve(&lmb, 0x40011000, 0x10000);
+	ret = lmb_reserve(0x40011000, 0x10000);
 	ut_asserteq(ret, -1);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
 	/* allocate 3nd region */
-	ret = lmb_reserve(&lmb, 0x40030000, 0x10000);
+	ret = lmb_reserve(0x40030000, 0x10000);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40010000, 0x10000,
 		   0x40030000, 0x10000, 0, 0);
 	/* allocate 2nd region , This should coalesced all region into one */
-	ret = lmb_reserve(&lmb, 0x40020000, 0x10000);
+	ret = lmb_reserve(0x40020000, 0x10000);
 	ut_assert(ret >= 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x30000, 0, 0, 0, 0);
 
 	/* allocate 2nd region, which should be added as first region */
-	ret = lmb_reserve(&lmb, 0x40000000, 0x8000);
+	ret = lmb_reserve(0x40000000, 0x8000);
 	ut_assert(ret >= 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x8000,
 		   0x40010000, 0x30000, 0, 0);
 	/* allocate 3rd region, coalesce with first and overlap with second */
-	ret = lmb_reserve(&lmb, 0x40008000, 0x10000);
+	ret = lmb_reserve(0x40008000, 0x10000);
 	ut_assert(ret >= 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x40000, 0, 0, 0, 0);
@@ -479,104 +466,101 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 	const phys_size_t alloc_addr_a = ram + 0x8000000;
 	const phys_size_t alloc_addr_b = ram + 0x8000000 * 2;
 	const phys_size_t alloc_addr_c = ram + 0x8000000 * 3;
-	struct lmb lmb;
 	long ret;
 	phys_addr_t a, b, c, d, e;
 
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
 
-	lmb_init(&lmb);
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	/* reserve 3 blocks */
-	ret = lmb_reserve(&lmb, alloc_addr_a, 0x10000);
+	ret = lmb_reserve(alloc_addr_a, 0x10000);
 	ut_asserteq(ret, 0);
-	ret = lmb_reserve(&lmb, alloc_addr_b, 0x10000);
+	ret = lmb_reserve(alloc_addr_b, 0x10000);
 	ut_asserteq(ret, 0);
-	ret = lmb_reserve(&lmb, alloc_addr_c, 0x10000);
+	ret = lmb_reserve(alloc_addr_c, 0x10000);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000,
 		   alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
 
 	/* allocate blocks */
-	a = lmb_alloc_addr(&lmb, ram, alloc_addr_a - ram);
+	a = lmb_alloc_addr(ram, alloc_addr_a - ram);
 	ut_asserteq(a, ram);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, ram, 0x8010000,
 		   alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
-	b = lmb_alloc_addr(&lmb, alloc_addr_a + 0x10000,
+	b = lmb_alloc_addr(alloc_addr_a + 0x10000,
 			   alloc_addr_b - alloc_addr_a - 0x10000);
 	ut_asserteq(b, alloc_addr_a + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x10010000,
 		   alloc_addr_c, 0x10000, 0, 0);
-	c = lmb_alloc_addr(&lmb, alloc_addr_b + 0x10000,
+	c = lmb_alloc_addr(alloc_addr_b + 0x10000,
 			   alloc_addr_c - alloc_addr_b - 0x10000);
 	ut_asserteq(c, alloc_addr_b + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
-	d = lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000,
+	d = lmb_alloc_addr(alloc_addr_c + 0x10000,
 			   ram_end - alloc_addr_c - 0x10000);
 	ut_asserteq(d, alloc_addr_c + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
 
 	/* allocating anything else should fail */
-	e = lmb_alloc(&lmb, 1, 1);
+	e = lmb_alloc(1, 1);
 	ut_asserteq(e, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
 
-	ret = lmb_free(&lmb, d, ram_end - alloc_addr_c - 0x10000);
+	ret = lmb_free(d, ram_end - alloc_addr_c - 0x10000);
 	ut_asserteq(ret, 0);
 	/* allocate at 3 points in free range */
 
-	d = lmb_alloc_addr(&lmb, ram_end - 4, 4);
+	d = lmb_alloc_addr(ram_end - 4, 4);
 	ut_asserteq(d, ram_end - 4);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000,
 		   d, 4, 0, 0);
-	ret = lmb_free(&lmb, d, 4);
+	ret = lmb_free(d, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
 
-	d = lmb_alloc_addr(&lmb, ram_end - 128, 4);
+	d = lmb_alloc_addr(ram_end - 128, 4);
 	ut_asserteq(d, ram_end - 128);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000,
 		   d, 4, 0, 0);
-	ret = lmb_free(&lmb, d, 4);
+	ret = lmb_free(d, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
 
-	d = lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000, 4);
+	d = lmb_alloc_addr(alloc_addr_c + 0x10000, 4);
 	ut_asserteq(d, alloc_addr_c + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010004, 0, 0, 0, 0);
-	ret = lmb_free(&lmb, d, 4);
+	ret = lmb_free(d, 4);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
 
 	/* allocate at the bottom */
-	ret = lmb_free(&lmb, a, alloc_addr_a - ram);
+	ret = lmb_free(a, alloc_addr_a - ram);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + 0x8000000, 0x10010000,
 		   0, 0, 0, 0);
-	d = lmb_alloc_addr(&lmb, ram, 4);
+	d = lmb_alloc_addr(ram, 4);
 	ut_asserteq(d, ram);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, d, 4,
 		   ram + 0x8000000, 0x10010000, 0, 0);
 
 	/* check that allocating outside memory fails */
 	if (ram_end != 0) {
-		ret = lmb_alloc_addr(&lmb, ram_end, 1);
+		ret = lmb_alloc_addr(ram_end, 1);
 		ut_asserteq(ret, 0);
 	}
 	if (ram != 0) {
-		ret = lmb_alloc_addr(&lmb, ram - 1, 1);
+		ret = lmb_alloc_addr(ram - 1, 1);
 		ut_asserteq(ret, 0);
 	}
@@ -606,48 +590,45 @@ static int test_get_unreserved_size(struct unit_test_state *uts,
 	const phys_size_t alloc_addr_a = ram + 0x8000000;
 	const phys_size_t alloc_addr_b = ram + 0x8000000 * 2;
 	const phys_size_t alloc_addr_c = ram + 0x8000000 * 3;
-	struct lmb lmb;
 	long ret;
 	phys_size_t s;
 
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);
 
-	lmb_init(&lmb);
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	/* reserve 3 blocks */
-	ret = lmb_reserve(&lmb, alloc_addr_a, 0x10000);
+	ret = lmb_reserve(alloc_addr_a, 0x10000);
 	ut_asserteq(ret, 0);
-	ret = lmb_reserve(&lmb, alloc_addr_b, 0x10000);
+	ret = lmb_reserve(alloc_addr_b, 0x10000);
 	ut_asserteq(ret, 0);
-	ret = lmb_reserve(&lmb, alloc_addr_c, 0x10000);
+	ret = lmb_reserve(alloc_addr_c, 0x10000);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000,
 		   alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
 
 	/* check addresses in between blocks */
-	s = lmb_get_free_size(&lmb, ram);
+	s = lmb_get_free_size(ram);
 	ut_asserteq(s, alloc_addr_a - ram);
-	s = lmb_get_free_size(&lmb, ram + 0x10000);
+	s = lmb_get_free_size(ram + 0x10000);
 	ut_asserteq(s, alloc_addr_a - ram - 0x10000);
-	s = lmb_get_free_size(&lmb, alloc_addr_a - 4);
+	s = lmb_get_free_size(alloc_addr_a - 4);
 	ut_asserteq(s, 4);
-	s = lmb_get_free_size(&lmb, alloc_addr_a + 0x10000);
+	s = lmb_get_free_size(alloc_addr_a + 0x10000);
 	ut_asserteq(s, alloc_addr_b - alloc_addr_a - 0x10000);
-	s = lmb_get_free_size(&lmb, alloc_addr_a + 0x20000);
+	s = lmb_get_free_size(alloc_addr_a + 0x20000);
 	ut_asserteq(s, alloc_addr_b - alloc_addr_a - 0x20000);
-	s = lmb_get_free_size(&lmb, alloc_addr_b - 4);
+	s = lmb_get_free_size(alloc_addr_b - 4);
 	ut_asserteq(s, 4);
-	s = lmb_get_free_size(&lmb, alloc_addr_c + 0x10000);
+	s = lmb_get_free_size(alloc_addr_c + 0x10000);
 	ut_asserteq(s, ram_end - alloc_addr_c - 0x10000);
-	s = lmb_get_free_size(&lmb, alloc_addr_c + 0x20000);
+	s = lmb_get_free_size(alloc_addr_c + 0x20000);
 	ut_asserteq(s, ram_end - alloc_addr_c - 0x20000);
-	s = lmb_get_free_size(&lmb, ram_end - 4);
+	s = lmb_get_free_size(ram_end - 4);
 	ut_asserteq(s, 4);
 
 	return 0;
@@ -680,11 +661,8 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
 			+ 1) * CONFIG_LMB_MAX_REGIONS;
 	const phys_size_t blk_size = 0x10000;
 	phys_addr_t offset;
-	struct lmb lmb;
 	int ret, i;
 
-	lmb_init(&lmb);
-
 	ut_asserteq(lmb.memory.cnt, 0);
 	ut_asserteq(lmb.memory.max, CONFIG_LMB_MAX_REGIONS);
 	ut_asserteq(lmb.reserved.cnt, 0);
@@ -693,7 +671,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
 	/* Add CONFIG_LMB_MAX_REGIONS memory regions */
 	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
 		offset = ram + 2 * i * ram_size;
-		ret = lmb_add(&lmb, offset, ram_size);
+		ret = lmb_add(offset, ram_size);
 		ut_asserteq(ret, 0);
 	}
 	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
@@ -701,7 +679,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
 
 	/* error for the (CONFIG_LMB_MAX_REGIONS + 1) memory regions */
 	offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * ram_size;
-	ret = lmb_add(&lmb, offset, ram_size);
+	ret = lmb_add(offset, ram_size);
 	ut_asserteq(ret, -1);
 
 	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
@@ -710,7 +688,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
 	/* reserve CONFIG_LMB_MAX_REGIONS regions */
 	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
 		offset = ram + 2 * i * blk_size;
-		ret = lmb_reserve(&lmb, offset, blk_size);
+		ret = lmb_reserve(offset, blk_size);
 		ut_asserteq(ret, 0);
 	}
@@ -719,7 +697,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
 
 	/* error for the 9th reserved blocks */
 	offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * blk_size;
-	ret = lmb_reserve(&lmb, offset, blk_size);
+	ret = lmb_reserve(offset, blk_size);
 	ut_asserteq(ret, -1);
 
 	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
@@ -741,28 +719,25 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0x40000000;
 	const phys_size_t ram_size = 0x20000000;
-	struct lmb lmb;
 	long ret;
 
-	lmb_init(&lmb);
-	ret = lmb_add(&lmb, ram, ram_size);
+	ret = lmb_add(ram, ram_size);
 	ut_asserteq(ret, 0);
 
 	/* reserve, same flag */
-	ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
 		   0, 0, 0, 0);
 
 	/* reserve again, same flag */
-	ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
 		   0, 0, 0, 0);
 
 	/* reserve again, new flag */
-	ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NONE);
+	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NONE);
 	ut_asserteq(ret, -1);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
 		   0, 0, 0, 0);
@@ -770,20 +745,20 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
 
 	/* merge after */
-	ret = lmb_reserve_flags(&lmb, 0x40020000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve_flags(0x40020000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 1);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x20000,
 		   0, 0, 0, 0);
 
 	/* merge before */
-	ret = lmb_reserve_flags(&lmb, 0x40000000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve_flags(0x40000000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 1);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x30000,
 		   0, 0, 0, 0);
 
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
 
-	ret = lmb_reserve_flags(&lmb, 0x40030000, 0x10000, LMB_NONE);
+	ret = lmb_reserve_flags(0x40030000, 0x10000, LMB_NONE);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000,
 		   0x40030000, 0x10000, 0, 0);
@@ -792,7 +767,7 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
 
 	/* test that old API use LMB_NONE */
-	ret = lmb_reserve(&lmb, 0x40040000, 0x10000);
+	ret = lmb_reserve(0x40040000, 0x10000);
 	ut_asserteq(ret, 1);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000,
 		   0x40030000, 0x20000, 0, 0);
@@ -800,18 +775,18 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
 
-	ret = lmb_reserve_flags(&lmb, 0x40070000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve_flags(0x40070000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000,
 		   0x40030000, 0x20000, 0x40070000, 0x10000);
 
-	ret = lmb_reserve_flags(&lmb, 0x40050000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve_flags(0x40050000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 4, 0x40000000, 0x30000,
 		   0x40030000, 0x20000, 0x40050000, 0x10000);
 
 	/* merge with 2 adjacent regions */
-	ret = lmb_reserve_flags(&lmb, 0x40060000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve_flags(0x40060000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 2);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000,
 		   0x40030000, 0x20000, 0x40050000, 0x30000);

Allow resizing of LMB regions if the region attributes match. The current code returns a failure status on detecting an overlapping address. This worked up until now, since the LMB reservations were not persistent and global: the LMB memory map was specific and private to a given caller of the LMB APIs.
With the change in the LMB code to make the reservations persistent, there needs to be a check on whether an overlapping memory region can be resized, followed by the resize if it can. To mark memory that cannot be resized, add a new flag, LMB_NOOVERWRITE. Reserving a region of memory with this attribute indicates that the region cannot be resized or re-requested once allocated.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since rfc:
* Add a function comment for lmb_add_region_flags().
* Change the wording of a comment in lmb_merge_overlap_regions() as per
  review comment from Simon Glass.

 include/lmb.h |   1 +
 lib/lmb.c     | 144 ++++++++++++++++++++++++++++++++++++++++++++------
 2 files changed, 128 insertions(+), 17 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index a1cc45b726..a308796d58 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -21,6 +21,7 @@
 enum lmb_flags {
 	LMB_NONE = BIT(0),
 	LMB_NOMAP = BIT(1),
+	LMB_NOOVERWRITE = BIT(2),
 };
 
 /**

diff --git a/lib/lmb.c b/lib/lmb.c
index dd6f22654c..88352e9a25 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -247,12 +247,106 @@ void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
 	lmb_reserve_common(fdt_blob);
 }
-/* This routine called with relocation disabled. */
+static bool lmb_region_flags_match(struct lmb_region *rgn, unsigned long r1,
+				   enum lmb_flags flags)
+{
+	return rgn[r1].flags == flags;
+}
+
+static long lmb_merge_overlap_regions(struct alist *lmb_rgn_lst,
+				      unsigned long i, phys_addr_t base,
+				      phys_size_t size, enum lmb_flags flags)
+{
+	phys_size_t rgnsize;
+	unsigned long rgn_cnt, idx;
+	phys_addr_t rgnbase, rgnend;
+	phys_addr_t mergebase, mergeend;
+	struct lmb_region *rgn = lmb_rgn_lst->data;
+
+	rgn_cnt = 0;
+	idx = i;
+
+	/*
+	 * First thing to do is to identify how many regions
+	 * the requested region overlaps.
+	 * If the flags match, combine all these overlapping
+	 * regions into a single region, and remove the merged
+	 * regions.
+	 */
+	while (idx < lmb_rgn_lst->count - 1) {
+		rgnbase = rgn[idx].base;
+		rgnsize = rgn[idx].size;
+
+		if (lmb_addrs_overlap(base, size, rgnbase,
+				      rgnsize)) {
+			if (!lmb_region_flags_match(rgn, idx, flags))
+				return -1;
+			rgn_cnt++;
+			idx++;
+		}
+	}
+
+	/* The merged region's base and size */
+	rgnbase = rgn[i].base;
+	mergebase = min(base, rgnbase);
+	rgnend = rgn[idx].base + rgn[idx].size;
+	mergeend = max(rgnend, (base + size));
+
+	rgn[i].base = mergebase;
+	rgn[i].size = mergeend - mergebase;
+
+	/* Now remove the merged regions */
+	while (--rgn_cnt)
+		lmb_remove_region(lmb_rgn_lst, i + 1);
+
+	return 0;
+}
+
+static long lmb_resize_regions(struct alist *lmb_rgn_lst, unsigned long i,
+			       phys_addr_t base, phys_size_t size,
+			       enum lmb_flags flags)
+{
+	long ret = 0;
+	phys_addr_t rgnend;
+	struct lmb_region *rgn = lmb_rgn_lst->data;
+
+	if (i == lmb_rgn_lst->count - 1 ||
+	    base + size < rgn[i + 1].base) {
+		if (!lmb_region_flags_match(rgn, i, flags))
+			return -1;
+
+		rgnend = rgn[i].base + rgn[i].size;
+		rgn[i].base = min(base, rgn[i].base);
+		rgnend = max(base + size, rgnend);
+		rgn[i].size = rgnend - rgn[i].base;
+	} else {
+		ret = lmb_merge_overlap_regions(lmb_rgn_lst, i, base, size,
+						flags);
+	}
+
+	return ret;
+}
+
+/**
+ * lmb_add_region_flags() - Add an lmb region to the given list
+ * @lmb_rgn_lst: LMB list to which region is to be added(free/used)
+ * @base: Start address of the region
+ * @size: Size of the region to be added
+ * @flags: Attributes of the LMB region
+ *
+ * Add a region of memory to the list. If the region does not exist, add
+ * it to the list. Depending on the attributes of the region to be added,
+ * the function might resize an already existing region or coalesce two
+ * adjacent regions.
+ *
+ * Returns: 0 if the region addition successful, -1 on failure
+ */
 static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base,
 				 phys_size_t size, enum lmb_flags flags)
 {
 	unsigned long coalesced = 0;
-	long adjacent, i;
+	long ret, i;
 	struct lmb_region *rgn = lmb_rgn_lst->data;
 
 	if (alist_err(lmb_rgn_lst))
@@ -281,23 +375,32 @@ static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base,
 			return -1; /* regions with new flags */
 		}
 
-		adjacent = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
-		if (adjacent > 0) {
+		ret = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
+		if (ret > 0) {
 			if (flags != rgnflags)
 				break;
 			rgn[i].base -= size;
 			rgn[i].size += size;
 			coalesced++;
 			break;
-		} else if (adjacent < 0) {
+		} else if (ret < 0) {
 			if (flags != rgnflags)
 				break;
 			rgn[i].size += size;
 			coalesced++;
 			break;
 		} else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) {
-			/* regions overlap */
-			return -1;
+			if (flags == LMB_NONE) {
+				ret = lmb_resize_regions(lmb_rgn_lst, i, base,
+							 size, flags);
+				if (ret < 0)
+					return -1;
+
+				coalesced++;
+				break;
+			} else {
+				return -1;
+			}
 		}
 	}
@@ -447,7 +550,7 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)
 }
 
 static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
-				    phys_addr_t max_addr)
+				    phys_addr_t max_addr, enum lmb_flags flags)
 {
 	long i, rgn;
 	phys_addr_t base = 0;
@@ -476,8 +579,8 @@ static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
 			rgn = lmb_overlaps_region(&lmb_used_mem, base, size);
 			if (rgn < 0) {
 				/* This area isn't reserved, take it */
-				if (lmb_add_region(&lmb_used_mem, base,
-						   size) < 0)
+				if (lmb_add_region_flags(&lmb_used_mem, base,
+							 size, flags) < 0)
 					return 0;
 				return base;
 			}
@@ -500,7 +603,7 @@ phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 {
 	phys_addr_t alloc;
 
-	alloc = __lmb_alloc_base(size, align, max_addr);
+	alloc = __lmb_alloc_base(size, align, max_addr, LMB_NONE);
 
 	if (alloc == 0)
 		printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
@@ -509,11 +612,8 @@ phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 	return alloc;
 }
 
-/*
- * Try to allocate a specific address range: must be in defined memory but not
- * reserved
- */
-phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
+static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size,
+				    enum lmb_flags flags)
 {
 	long rgn;
 	struct lmb_region *lmb_memory = lmb_free_mem.data;
@@ -529,13 +629,23 @@ phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
 				      lmb_memory[rgn].size,
 				      base + size - 1, 1)) {
 			/* ok, reserve the memory */
-			if (lmb_reserve(base, size) >= 0)
+			if (lmb_reserve_flags(base, size, flags) >= 0)
 				return base;
 		}
 	}
+
 	return 0;
 }
 
+/*
+ * Try to allocate a specific address range: must be in defined memory but not
+ * reserved
+ */
+phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
+{
+	return __lmb_alloc_addr(base, size, LMB_NONE);
+}
+
 /* Return number of bytes from a given address that are free */
 phys_size_t lmb_get_free_size(phys_addr_t addr)
 {

The LMB memory maps are now maintained through a pair of alloced lists, one for the available (added) memory and one for the used memory. These lists are not static arrays but can be extended at runtime. Remove the config symbols which were used to define the size of the static arrays in the earlier implementation.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
Reviewed-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
Changes since rfc: None

 configs/a3y17lte_defconfig           |  1 -
 configs/a5y17lte_defconfig           |  1 -
 configs/a7y17lte_defconfig           |  1 -
 configs/apple_m1_defconfig           |  1 -
 configs/mt7981_emmc_rfb_defconfig    |  1 -
 configs/mt7981_rfb_defconfig         |  1 -
 configs/mt7981_sd_rfb_defconfig      |  1 -
 configs/mt7986_rfb_defconfig         |  1 -
 configs/mt7986a_bpir3_emmc_defconfig |  1 -
 configs/mt7986a_bpir3_sd_defconfig   |  1 -
 configs/mt7988_rfb_defconfig         |  1 -
 configs/mt7988_sd_rfb_defconfig      |  1 -
 configs/qcom_defconfig               |  1 -
 configs/stm32mp13_defconfig          |  3 ---
 configs/stm32mp15_basic_defconfig    |  3 ---
 configs/stm32mp15_defconfig          |  3 ---
 configs/stm32mp15_trusted_defconfig  |  3 ---
 configs/stm32mp25_defconfig          |  3 ---
 configs/th1520_lpi4a_defconfig       |  1 -
 lib/Kconfig                          | 34 ----------------------------
 20 files changed, 63 deletions(-)
diff --git a/configs/a3y17lte_defconfig b/configs/a3y17lte_defconfig
index 5c15d51fdc..b012b985a3 100644
--- a/configs/a3y17lte_defconfig
+++ b/configs/a3y17lte_defconfig
@@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y
 CONFIG_CMD_GPIO=y
 CONFIG_CMD_I2C=y
 CONFIG_DM_I2C_GPIO=y
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/a5y17lte_defconfig b/configs/a5y17lte_defconfig
index 7c9b6b2511..25a7d5bc98 100644
--- a/configs/a5y17lte_defconfig
+++ b/configs/a5y17lte_defconfig
@@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y
 CONFIG_CMD_GPIO=y
 CONFIG_CMD_I2C=y
 CONFIG_DM_I2C_GPIO=y
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/a7y17lte_defconfig b/configs/a7y17lte_defconfig
index c7297f7d75..c87379ab39 100644
--- a/configs/a7y17lte_defconfig
+++ b/configs/a7y17lte_defconfig
@@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y
 CONFIG_CMD_GPIO=y
 CONFIG_CMD_I2C=y
 CONFIG_DM_I2C_GPIO=y
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/apple_m1_defconfig b/configs/apple_m1_defconfig
index 20d2cff93f..dca6e0ca8b 100644
--- a/configs/apple_m1_defconfig
+++ b/configs/apple_m1_defconfig
@@ -26,4 +26,3 @@ CONFIG_SYS_WHITE_ON_BLACK=y
 CONFIG_NO_FB_CLEAR=y
 CONFIG_VIDEO_SIMPLE=y
 # CONFIG_SMBIOS is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7981_emmc_rfb_defconfig b/configs/mt7981_emmc_rfb_defconfig
index 76ee2aa2d6..d3e833905f 100644
--- a/configs/mt7981_emmc_rfb_defconfig
+++ b/configs/mt7981_emmc_rfb_defconfig
@@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y
 CONFIG_FAT_WRITE=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7981_rfb_defconfig b/configs/mt7981_rfb_defconfig
index 3989c79d2b..4bc2173f13 100644
--- a/configs/mt7981_rfb_defconfig
+++ b/configs/mt7981_rfb_defconfig
@@ -65,4 +65,3 @@ CONFIG_DM_SPI=y
 CONFIG_MTK_SPIM=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7981_sd_rfb_defconfig b/configs/mt7981_sd_rfb_defconfig
index 9b33245527..8721b4074a 100644
--- a/configs/mt7981_sd_rfb_defconfig
+++ b/configs/mt7981_sd_rfb_defconfig
@@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y
 CONFIG_FAT_WRITE=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7986_rfb_defconfig b/configs/mt7986_rfb_defconfig
index 4d0cc85d0e..15c31de236 100644
--- a/configs/mt7986_rfb_defconfig
+++ b/configs/mt7986_rfb_defconfig
@@ -65,4 +65,3 @@ CONFIG_DM_SPI=y
 CONFIG_MTK_SPIM=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7986a_bpir3_emmc_defconfig b/configs/mt7986a_bpir3_emmc_defconfig
index 3c296ab803..56921f3605 100644
--- a/configs/mt7986a_bpir3_emmc_defconfig
+++ b/configs/mt7986a_bpir3_emmc_defconfig
@@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y
 CONFIG_FAT_WRITE=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7986a_bpir3_sd_defconfig b/configs/mt7986a_bpir3_sd_defconfig
index f644070f4e..4ed06b72d5 100644
--- a/configs/mt7986a_bpir3_sd_defconfig
+++ b/configs/mt7986a_bpir3_sd_defconfig
@@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y
 CONFIG_FAT_WRITE=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7988_rfb_defconfig b/configs/mt7988_rfb_defconfig
index d0ed2cc1c9..f7ceaceb30 100644
--- a/configs/mt7988_rfb_defconfig
+++ b/configs/mt7988_rfb_defconfig
@@ -81,4 +81,3 @@ CONFIG_MTK_SPIM=y
 CONFIG_LZO=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7988_sd_rfb_defconfig b/configs/mt7988_sd_rfb_defconfig
index 5631eaa338..808c8b9011 100644
--- a/configs/mt7988_sd_rfb_defconfig
+++ b/configs/mt7988_sd_rfb_defconfig
@@ -69,4 +69,3 @@ CONFIG_MTK_SPIM=y
 CONFIG_LZO=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/qcom_defconfig b/configs/qcom_defconfig
index 4942237402..5e971389df 100644
--- a/configs/qcom_defconfig
+++ b/configs/qcom_defconfig
@@ -112,4 +112,3 @@ CONFIG_SYS_WHITE_ON_BLACK=y
 CONFIG_NO_FB_CLEAR=y
 CONFIG_VIDEO_SIMPLE=y
 CONFIG_HEXDUMP=y
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/stm32mp13_defconfig b/configs/stm32mp13_defconfig
index caaabf39ef..9aa3560c7e 100644
--- a/configs/stm32mp13_defconfig
+++ b/configs/stm32mp13_defconfig
@@ -103,6 +103,3 @@ CONFIG_USB_GADGET_VENDOR_NUM=0x0483
 CONFIG_USB_GADGET_PRODUCT_NUM=0x5720
 CONFIG_USB_GADGET_DWC2_OTG=y
 CONFIG_ERRNO_STR=y
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=16
diff --git a/configs/stm32mp15_basic_defconfig b/configs/stm32mp15_basic_defconfig
index 2e22bf8600..806935f389 100644
--- a/configs/stm32mp15_basic_defconfig
+++ b/configs/stm32mp15_basic_defconfig
@@ -190,6 +190,3 @@ CONFIG_WDT=y
 CONFIG_WDT_STM32MP=y
 # CONFIG_BINMAN_FDT is not set
 CONFIG_ERRNO_STR=y
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=16
diff --git a/configs/stm32mp15_defconfig b/configs/stm32mp15_defconfig
index ffe7512650..5f050ee0d0 100644
--- a/configs/stm32mp15_defconfig
+++ b/configs/stm32mp15_defconfig
@@ -166,6 +166,3 @@ CONFIG_WDT=y
 CONFIG_WDT_STM32MP=y
 # CONFIG_BINMAN_FDT is not set
 CONFIG_ERRNO_STR=y
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=16
diff --git a/configs/stm32mp15_trusted_defconfig b/configs/stm32mp15_trusted_defconfig
index 74deaaba2e..3c591d74af 100644
--- a/configs/stm32mp15_trusted_defconfig
+++ b/configs/stm32mp15_trusted_defconfig
@@ -166,6 +166,3 @@ CONFIG_WDT=y
 CONFIG_WDT_STM32MP=y
 # CONFIG_BINMAN_FDT is not set
 CONFIG_ERRNO_STR=y
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=16
diff --git a/configs/stm32mp25_defconfig b/configs/stm32mp25_defconfig
index 87038cc773..f5623a19bb 100644
--- a/configs/stm32mp25_defconfig
+++ b/configs/stm32mp25_defconfig
@@ -49,6 +49,3 @@ CONFIG_WDT_STM32MP=y
 CONFIG_WDT_ARM_SMC=y
 CONFIG_ERRNO_STR=y
 # CONFIG_EFI_LOADER is not set
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=32
diff --git a/configs/th1520_lpi4a_defconfig b/configs/th1520_lpi4a_defconfig
index 49ff92f6de..db80e33870 100644
--- a/configs/th1520_lpi4a_defconfig
+++ b/configs/th1520_lpi4a_defconfig
@@ -79,4 +79,3 @@ CONFIG_BZIP2=y
 CONFIG_ZSTD=y
 CONFIG_LIB_RATIONAL=y
 # CONFIG_EFI_LOADER is not set
-# CONFIG_LMB_USE_MAX_REGIONS is not set
diff --git a/lib/Kconfig b/lib/Kconfig
index 2059219a12..f8ac8daad3 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -1105,40 +1105,6 @@ config LMB
 	help
 	  Support the library logical memory blocks.
 
-config LMB_USE_MAX_REGIONS
-	bool "Use a common number of memory and reserved regions in lmb lib"
-	default y
-	help
-	  Define the number of supported memory regions in the library logical
-	  memory blocks.
-	  This feature allow to reduce the lmb library size by using compiler
-	  optimization when LMB_MEMORY_REGIONS == LMB_RESERVED_REGIONS.
-
-config LMB_MAX_REGIONS
-	int "Number of memory and reserved regions in lmb lib"
-	depends on LMB_USE_MAX_REGIONS
-	default 16
-	help
-	  Define the number of supported regions, memory and reserved, in the
-	  library logical memory blocks.
-
-config LMB_MEMORY_REGIONS
-	int "Number of memory regions in lmb lib"
-	depends on !LMB_USE_MAX_REGIONS
-	default 8
-	help
-	  Define the number of supported memory regions in the library logical
-	  memory blocks.
-	  The minimal value is CONFIG_NR_DRAM_BANKS.
-
-config LMB_RESERVED_REGIONS
-	int "Number of reserved regions in lmb lib"
-	depends on !LMB_USE_MAX_REGIONS
-	default 8
-	help
-	  Define the number of supported reserved regions in the library logical
-	  memory blocks.
-
 config PHANDLE_CHECK_SEQ
 	bool "Enable phandle check while getting sequence number"
 	help

The LMB memory map is now persistent and global, and the CONFIG_LMB_USE_MAX_REGIONS config symbol has been removed. Remove the corresponding lmb test case.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
Reviewed-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
Changes since rfc: None

 test/lib/lmb.c | 67 --------------------------------------------------
 1 file changed, 67 deletions(-)
diff --git a/test/lib/lmb.c b/test/lib/lmb.c
index 9f6b5633a7..a3a7ad904c 100644
--- a/test/lib/lmb.c
+++ b/test/lib/lmb.c
@@ -648,73 +648,6 @@ static int lib_test_lmb_get_free_size(struct unit_test_state *uts)
 }
 LIB_TEST(lib_test_lmb_get_free_size, 0);
 
-#ifdef CONFIG_LMB_USE_MAX_REGIONS
-static int lib_test_lmb_max_regions(struct unit_test_state *uts)
-{
-	const phys_addr_t ram = 0x00000000;
-	/*
-	 * All of 32bit memory space will contain regions for this test, so
-	 * we need to scale ram_size (which in this case is the size of the lmb
-	 * region) to match.
-	 */
-	const phys_size_t ram_size = ((0xFFFFFFFF >> CONFIG_LMB_MAX_REGIONS)
-			+ 1) * CONFIG_LMB_MAX_REGIONS;
-	const phys_size_t blk_size = 0x10000;
-	phys_addr_t offset;
-	int ret, i;
-
-	ut_asserteq(lmb.memory.cnt, 0);
-	ut_asserteq(lmb.memory.max, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, 0);
-	ut_asserteq(lmb.reserved.max, CONFIG_LMB_MAX_REGIONS);
-
-	/* Add CONFIG_LMB_MAX_REGIONS memory regions */
-	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
-		offset = ram + 2 * i * ram_size;
-		ret = lmb_add(offset, ram_size);
-		ut_asserteq(ret, 0);
-	}
-	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, 0);
-
-	/* error for the (CONFIG_LMB_MAX_REGIONS + 1) memory regions */
-	offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * ram_size;
-	ret = lmb_add(offset, ram_size);
-	ut_asserteq(ret, -1);
-
-	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, 0);
-
-	/* reserve CONFIG_LMB_MAX_REGIONS regions */
-	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
-		offset = ram + 2 * i * blk_size;
-		ret = lmb_reserve(offset, blk_size);
-		ut_asserteq(ret, 0);
-	}
-
-	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, CONFIG_LMB_MAX_REGIONS);
-
-	/* error for the 9th reserved blocks */
-	offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * blk_size;
-	ret = lmb_reserve(offset, blk_size);
-	ut_asserteq(ret, -1);
-
-	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, CONFIG_LMB_MAX_REGIONS);
-
-	/* check each regions */
-	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++)
-		ut_asserteq(lmb.memory.region[i].base, ram + 2 * i * ram_size);
-
-	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++)
-		ut_asserteq(lmb.reserved.region[i].base, ram + 2 * i * blk_size);
-
-	return 0;
-}
-LIB_TEST(lib_test_lmb_max_regions, 0);
-#endif
-
 static int lib_test_lmb_flags(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0x40000000;

Add a separate config symbol for enabling the LMB module in the SPL phase. The LMB implementation now relies on the alloced list data structure, which requires a heap to be present. A dedicated config symbol for the SPL phase of U-Boot allows the module to be enabled on platforms which support a heap in SPL.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: None
lib/Kconfig | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/Kconfig b/lib/Kconfig index f8ac8daad3..6a9338390a 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -1103,7 +1103,17 @@ config LMB default y if ARC || ARM || M68K || MICROBLAZE || MIPS || \ NIOS2 || PPC || RISCV || SANDBOX || SH || X86 || XTENSA help - Support the library logical memory blocks. + Support the library logical memory blocks. This will require + a malloc() implementation for defining the data structures + needed for maintaining the LMB memory map. + +config SPL_LMB + bool "Enable LMB module for SPL" + depends on SPL && SPL_FRAMEWORK && SPL_SYS_MALLOC + help + Enable support for Logical Memory Block library routines in + SPL. This will require a malloc() implementation for defining + the data structures needed for maintaining the LMB memory map.
config PHANDLE_CHECK_SEQ bool "Enable phandle check while getting sequence number"

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add separate config symbols for enabling the LMB module for the SPL phase. The LMB module implementation now relies on alloced list data structure which requires heap area to be present. Add specific config symbol for the SPL phase of U-Boot so that this can be enabled on platforms which support a heap in SPL.
Could you please add here why we need lmb in SPL? If I missed an email, please can you point to it?
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
lib/Kconfig | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add separate config symbols for enabling the LMB module for the SPL phase. The LMB module implementation now relies on alloced list data structure which requires heap area to be present. Add specific config symbol for the SPL phase of U-Boot so that this can be enabled on platforms which support a heap in SPL.
Could you please add here why we need lmb in SPL? If I missed an email, please can you point to it?
Like I mentioned in another thread, both you and Tom wanted this supported in SPL.
-sughosh
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
lib/Kconfig | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
Regards, Simon

With the introduction of separate config symbols for the SPL phase of U-Boot, the condition checks need to be tweaked so that platforms that enable the LMB module in SPL are also able to call the LMB APIs. Use the appropriate condition checks to achieve this.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: * Replace conditional compilation of lmb code to an if (CONFIG_IS_ENABLED()) in store_block() and tftp_init_load_addr().
board/xilinx/common/board.c | 2 +- boot/bootm.c | 4 ++-- boot/image-board.c | 2 +- fs/fs.c | 4 ++-- lib/Makefile | 2 +- net/tftp.c | 40 ++++++++++++++++++------------------- net/wget.c | 4 ++-- 7 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index 4056884400..f04c92a70f 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -665,7 +665,7 @@ int embedded_dtb_select(void) } #endif
-#if defined(CONFIG_LMB) +#if IS_ENABLED(CONFIG_LMB)
#ifndef MMU_SECTION_SIZE #define MMU_SECTION_SIZE (1 * 1024 * 1024) diff --git a/boot/bootm.c b/boot/bootm.c index 7b3fe551de..5ce84b73b5 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -239,7 +239,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, return 0; }
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) static void boot_start_lmb(void) { phys_addr_t mem_start; @@ -1048,7 +1048,7 @@ int bootm_run_states(struct bootm_info *bmi, int states) } } #endif -#if CONFIG_IS_ENABLED(OF_LIBFDT) && defined(CONFIG_LMB) +#if CONFIG_IS_ENABLED(OF_LIBFDT) && CONFIG_IS_ENABLED(LMB) if (!ret && (states & BOOTM_STATE_FDT)) { boot_fdt_add_mem_rsv_regions(images->ft_addr); ret = boot_relocate_fdt(&images->ft_addr, &images->ft_len); diff --git a/boot/image-board.c b/boot/image-board.c index 1f8c1ac69f..99ee7968ba 100644 --- a/boot/image-board.c +++ b/boot/image-board.c @@ -883,7 +883,7 @@ int image_setup_linux(struct bootm_headers *images) int ret;
/* This function cannot be called without lmb support */ - if (!IS_ENABLED(CONFIG_LMB)) + if (!CONFIG_IS_ENABLED(LMB)) return -EFAULT; if (CONFIG_IS_ENABLED(OF_LIBFDT)) boot_fdt_add_mem_rsv_regions(*of_flat_tree); diff --git a/fs/fs.c b/fs/fs.c index 2c835eef86..3fb00590be 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -526,7 +526,7 @@ int fs_size(const char *filename, loff_t *size) return ret; }
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) /* Check if a file may be read to the given address */ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, loff_t len, struct fstype_info *info) @@ -567,7 +567,7 @@ static int _fs_read(const char *filename, ulong addr, loff_t offset, loff_t len, void *buf; int ret;
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) if (do_lmb_check) { ret = fs_read_lmb_check(filename, addr, offset, len, info); if (ret) diff --git a/lib/Makefile b/lib/Makefile index 81b503ab52..398c11726e 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -118,7 +118,7 @@ obj-$(CONFIG_$(SPL_TPL_)OF_LIBFDT) += fdtdec.o fdtdec_common.o obj-y += hang.o obj-y += linux_compat.o obj-y += linux_string.o -obj-$(CONFIG_LMB) += lmb.o +obj-$(CONFIG_$(SPL_)LMB) += lmb.o obj-y += membuff.o obj-$(CONFIG_REGEX) += slre.o obj-y += string.o diff --git a/net/tftp.c b/net/tftp.c index a199f4a6df..f841f6b701 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -82,9 +82,7 @@ static ulong tftp_block_wrap; static ulong tftp_block_wrap_offset; static int tftp_state; static ulong tftp_load_addr; -#ifdef CONFIG_LMB static ulong tftp_load_size; -#endif #ifdef CONFIG_TFTP_TSIZE /* The file size reported by the server */ static int tftp_tsize; @@ -160,19 +158,20 @@ static inline int store_block(int block, uchar *src, unsigned int len) ulong store_addr = tftp_load_addr + offset; void *ptr;
-#ifdef CONFIG_LMB - ulong end_addr = tftp_load_addr + tftp_load_size; + if (CONFIG_IS_ENABLED(LMB)) { + ulong end_addr = tftp_load_addr + tftp_load_size;
- if (!end_addr) - end_addr = ULONG_MAX; + if (!end_addr) + end_addr = ULONG_MAX;
- if (store_addr < tftp_load_addr || - store_addr + len > end_addr) { - puts("\nTFTP error: "); - puts("trying to overwrite reserved memory...\n"); - return -1; + if (store_addr < tftp_load_addr || + store_addr + len > end_addr) { + puts("\nTFTP error: "); + puts("trying to overwrite reserved memory...\n"); + return -1; + } } -#endif + ptr = map_sysmem(store_addr, len); memcpy(ptr, src, len); unmap_sysmem(ptr); @@ -709,17 +708,18 @@ static void tftp_timeout_handler(void) /* Initialize tftp_load_addr and tftp_load_size from image_load_addr and lmb */ static int tftp_init_load_addr(void) { -#ifdef CONFIG_LMB - phys_size_t max_size; + if (CONFIG_IS_ENABLED(LMB)) { + phys_size_t max_size;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- max_size = lmb_get_free_size(image_load_addr); - if (!max_size) - return -1; + max_size = lmb_get_free_size(image_load_addr); + if (!max_size) + return -1; + + tftp_load_size = max_size; + }
- tftp_load_size = max_size; -#endif tftp_load_addr = image_load_addr; return 0; } diff --git a/net/wget.c b/net/wget.c index 7cf809a8ef..b8ea43e7f0 100644 --- a/net/wget.c +++ b/net/wget.c @@ -98,7 +98,7 @@ static inline int store_block(uchar *src, unsigned int offset, unsigned int len) ulong newsize = offset + len; uchar *ptr;
- if (IS_ENABLED(CONFIG_LMB)) { + if (CONFIG_IS_ENABLED(LMB)) { ulong end_addr = image_load_addr + wget_load_size;
if (!end_addr) @@ -496,7 +496,7 @@ void wget_start(void) debug_cond(DEBUG_WGET, "\nwget:Load address: 0x%lx\nLoading: *\b", image_load_addr);
- if (IS_ENABLED(CONFIG_LMB)) { + if (CONFIG_IS_ENABLED(LMB)) { if (wget_init_load_size()) { printf("\nwget error: "); printf("trying to overwrite reserved memory...\n");

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the introduction of separate config symbols for the SPL phase of U-Boot, the condition checks need to be tweaked so that platforms that enable the LMB module in SPL are also able to call the LMB API's. Use the appropriate condition checks to achieve this.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc:
- Replace conditional compilation of lmb code to an if (CONFIG_IS_ENABLED()) in store_block() and tftp_init_load_addr().
board/xilinx/common/board.c | 2 +- boot/bootm.c | 4 ++-- boot/image-board.c | 2 +- fs/fs.c | 4 ++-- lib/Makefile | 2 +- net/tftp.c | 40 ++++++++++++++++++------------------- net/wget.c | 4 ++-- 7 files changed, 29 insertions(+), 29 deletions(-)
Same question as previous patch...do any platforms need lmb in SPL?
The cover letter says "The LMB module is enabled by default for the main U-Boot image, while it needs to be enabled for SPL." but I don't really understand that.
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the introduction of separate config symbols for the SPL phase of U-Boot, the condition checks need to be tweaked so that platforms that enable the LMB module in SPL are also able to call the LMB API's. Use the appropriate condition checks to achieve this.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc:
- Replace conditional compilation of lmb code to an if (CONFIG_IS_ENABLED()) in store_block() and tftp_init_load_addr().
board/xilinx/common/board.c | 2 +- boot/bootm.c | 4 ++-- boot/image-board.c | 2 +- fs/fs.c | 4 ++-- lib/Makefile | 2 +- net/tftp.c | 40 ++++++++++++++++++------------------- net/wget.c | 4 ++-- 7 files changed, 29 insertions(+), 29 deletions(-)
Same question as previous patch...do any platforms need lmb in SPL?
Same answer as my other two emails. This was called for by the maintainers.
-sughosh
The cover letter says "The LMB module is enabled by default for the main U-Boot image, while it needs to be enabled for SPL." but I don't really understand that.
Regards, Simon

Introduce a function lmb_add_memory() to add available memory to the LMB memory map. Call this function during board init once the LMB data structures have been initialised.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: None
include/lmb.h | 12 ++++++++++++ lib/lmb.c | 42 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 54 insertions(+)
diff --git a/include/lmb.h b/include/lmb.h index a308796d58..77e2a23c0d 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -40,6 +40,18 @@ struct lmb_region { void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob); + +/** + * lmb_add_memory() - Add memory range for LMB allocations + * + * Add the entire available memory range to the pool of memory that + * can be used by the LMB module for allocations. + * + * Return: None + * + */ +void lmb_add_memory(void); + long lmb_add(phys_addr_t base, phys_size_t size); long lmb_reserve(phys_addr_t base, phys_size_t size); /** diff --git a/lib/lmb.c b/lib/lmb.c index 88352e9a25..db0874371a 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -239,6 +239,46 @@ void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) lmb_reserve_common(fdt_blob); }
+/** + * lmb_add_memory() - Add memory range for LMB allocations + * + * Add the entire available memory range to the pool of memory that + * can be used by the LMB module for allocations. + * + * This can be overridden for specific boards/architectures. + * + * Return: None + * + */ +__weak void lmb_add_memory(void) +{ + int i; + phys_size_t size; + phys_addr_t rgn_top; + u64 ram_top = gd->ram_top; + struct bd_info *bd = gd->bd; + + /* Assume a 4GB ram_top if not defined */ + if (!ram_top) + ram_top = 0x100000000ULL; + + for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) { + size = bd->bi_dram[i].size; + if (size) { + if (bd->bi_dram[i].start > ram_top) + continue; + + rgn_top = bd->bi_dram[i].start + + bd->bi_dram[i].size; + + if (rgn_top > ram_top) + size -= rgn_top - ram_top; + + lmb_add(bd->bi_dram[i].start, size); + } + } +} + /* Initialize the struct, add memory and call arch/board reserve functions */ void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob) @@ -728,5 +768,7 @@ int lmb_mem_regions_init(void) return -1; }
+ lmb_add_memory(); + return 0; }

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Introduce a function lmb_add_memory() to add available memory to the LMB memory map. Call this function during board init once the LMB data structures have been initialised.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
include/lmb.h | 12 ++++++++++++ lib/lmb.c | 42 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 54 insertions(+)
diff --git a/include/lmb.h b/include/lmb.h index a308796d58..77e2a23c0d 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -40,6 +40,18 @@ struct lmb_region { void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob);
+/**
- lmb_add_memory() - Add memory range for LMB allocations
- Add the entire available memory range to the pool of memory that
- can be used by the LMB module for allocations.
- Return: None
- */
+void lmb_add_memory(void);
long lmb_add(phys_addr_t base, phys_size_t size); long lmb_reserve(phys_addr_t base, phys_size_t size); /** diff --git a/lib/lmb.c b/lib/lmb.c index 88352e9a25..db0874371a 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -239,6 +239,46 @@ void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) lmb_reserve_common(fdt_blob); }
+/**
- lmb_add_memory() - Add memory range for LMB allocations
- Add the entire available memory range to the pool of memory that
- can be used by the LMB module for allocations.
- This can be overridden for specific boards/architectures.
Why is this needed? I'm really not a fan of weak functions...it often means we should have a proper API for it.
- Return: None
- */
+__weak void lmb_add_memory(void) +{
int i;
phys_size_t size;
phys_addr_t rgn_top;
u64 ram_top = gd->ram_top;
struct bd_info *bd = gd->bd;
/* Assume a 4GB ram_top if not defined */
if (!ram_top)
ram_top = 0x100000000ULL;
for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
size = bd->bi_dram[i].size;
if (size) {
if (bd->bi_dram[i].start > ram_top)
continue;
rgn_top = bd->bi_dram[i].start +
bd->bi_dram[i].size;
if (rgn_top > ram_top)
size -= rgn_top - ram_top;
lmb_add(bd->bi_dram[i].start, size);
}
}
+}
/* Initialize the struct, add memory and call arch/board reserve functions */ void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob) @@ -728,5 +768,7 @@ int lmb_mem_regions_init(void) return -1; }
lmb_add_memory();
return 0;
}
2.34.1
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Introduce a function lmb_add_memory() to add available memory to the LMB memory map. Call this function during board init once the LMB data structures have been initialised.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
include/lmb.h | 12 ++++++++++++ lib/lmb.c | 42 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 54 insertions(+)
diff --git a/include/lmb.h b/include/lmb.h index a308796d58..77e2a23c0d 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -40,6 +40,18 @@ struct lmb_region { void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob);
+/**
- lmb_add_memory() - Add memory range for LMB allocations
- Add the entire available memory range to the pool of memory that
- can be used by the LMB module for allocations.
- Return: None
- */
+void lmb_add_memory(void);
long lmb_add(phys_addr_t base, phys_size_t size); long lmb_reserve(phys_addr_t base, phys_size_t size); /** diff --git a/lib/lmb.c b/lib/lmb.c index 88352e9a25..db0874371a 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -239,6 +239,46 @@ void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) lmb_reserve_common(fdt_blob); }
+/**
- lmb_add_memory() - Add memory range for LMB allocations
- Add the entire available memory range to the pool of memory that
- can be used by the LMB module for allocations.
- This can be overridden for specific boards/architectures.
Why is this needed?
There is a definition of this function for the freescale layerscape platforms, and the e820 platform.
I'm really not a fan of weak functions...it often means we should have a proper API for it.
Is there some coding style guideline which prohibits using weak function definitions? I see it being used all across U-Boot. Moreover, I don't think that this particular case asks for adding an API for this one function. That would just add to the size without much benefit.
-sughosh
- Return: None
- */
+__weak void lmb_add_memory(void) +{
int i;
phys_size_t size;
phys_addr_t rgn_top;
u64 ram_top = gd->ram_top;
struct bd_info *bd = gd->bd;
/* Assume a 4GB ram_top if not defined */
if (!ram_top)
ram_top = 0x100000000ULL;
for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
size = bd->bi_dram[i].size;
if (size) {
if (bd->bi_dram[i].start > ram_top)
continue;
rgn_top = bd->bi_dram[i].start +
bd->bi_dram[i].size;
if (rgn_top > ram_top)
size -= rgn_top - ram_top;
lmb_add(bd->bi_dram[i].start, size);
}
}
+}
/* Initialize the struct, add memory and call arch/board reserve functions */ void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob) @@ -728,5 +768,7 @@ int lmb_mem_regions_init(void) return -1; }
lmb_add_memory();
return 0;
}
2.34.1
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:00, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Introduce a function lmb_add_memory() to add available memory to the LMB memory map. Call this function during board init once the LMB data structures have been initialised.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
include/lmb.h | 12 ++++++++++++ lib/lmb.c | 42 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 54 insertions(+)
diff --git a/include/lmb.h b/include/lmb.h index a308796d58..77e2a23c0d 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -40,6 +40,18 @@ struct lmb_region { void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob);
+/**
- lmb_add_memory() - Add memory range for LMB allocations
- Add the entire available memory range to the pool of memory that
- can be used by the LMB module for allocations.
- Return: None
- */
+void lmb_add_memory(void);
long lmb_add(phys_addr_t base, phys_size_t size); long lmb_reserve(phys_addr_t base, phys_size_t size); /** diff --git a/lib/lmb.c b/lib/lmb.c index 88352e9a25..db0874371a 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -239,6 +239,46 @@ void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) lmb_reserve_common(fdt_blob); }
+/**
- lmb_add_memory() - Add memory range for LMB allocations
- Add the entire available memory range to the pool of memory that
- can be used by the LMB module for allocations.
- This can be overridden for specific boards/architectures.
Why is this needed?
There is a definition of this function for the freescale layerscape platforms, and the e820 platform.
Can you please point me to that? I cannot see it. I am looking for lmb_add_memory().
I'm really not a fan of weak functions...it often means we should have a proper API for it.
Is there some coding style guideline which prohibits using weak function definitions? I see it being used all across U-Boot.
In fact, before driver model, almost everything was a weak function.
I am not a fan of them in general...it can be a useful get-of-jail card, but we should not base things on it. It is hard to figure out what is actually being called...sometimes I have to resort to disassembly to work it out. Normally there is a better way.
Events were created to reduce the need for weak functions. The overhead is small, so long as events are already enabled.
Moreover, I don't think that this particular case asks for adding an API for this one function. That would just add to the size without much benefit.
That's what everyone says :-) Good design is very important, particularly the project is under rapid development.
Regards, Simon

With the changes to make the LMB reservations persistent, the common memory regions are being added during board init. Remove the now superfluous lmb_init_and_reserve() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Simon Glass sjg@chromium.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org --- Changes since rfc: None
arch/arm/mach-apple/board.c | 2 -- arch/arm/mach-snapdragon/board.c | 2 -- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 3 --- cmd/bdinfo.c | 1 - cmd/load.c | 2 -- fs/fs.c | 1 - include/lmb.h | 1 - lib/lmb.c | 13 ------------- net/tftp.c | 2 -- net/wget.c | 2 -- test/cmd/bdinfo.c | 9 --------- 11 files changed, 38 deletions(-)
diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 213390d6e8..0b6d290b8a 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -775,8 +775,6 @@ int board_late_init(void) { u32 status = 0;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - /* somewhat based on the Linux Kernel boot requirements: * align by 2M and maximal FDT size 2M */ diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index a63c8bec45..22a7d2a637 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -282,8 +282,6 @@ int board_late_init(void) { u32 status = 0;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - /* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */ status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M)); status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M)); diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index a913737342..64480da9f8 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -141,9 +141,6 @@ int mach_cpu_init(void)
void enable_caches(void) { - /* parse device tree when data cache is still activated */ - lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - /* I-cache is already enabled in start.S: icache_enable() not needed */
/* deactivate the data cache, early enabled in arch_cpu_init() */ diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index b31e0208df..3c40dee143 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -162,7 +162,6 @@ static int bdinfo_print_all(struct bd_info *bd) bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit); #endif if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { - lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all_force(); if (IS_ENABLED(CONFIG_OF_REAL)) printf("devicetree = %s\n", fdtdec_get_srcname()); diff --git a/cmd/load.c b/cmd/load.c index 56da3a4c5d..20d802502a 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -153,8 +153,6 @@ static ulong load_serial(long offset) int line_count = 0; long ret;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - while (read_record(record, SREC_MAXRECLEN + 1) >= 0) { type = srec_decode(record, &binlen, &addr, binbuf);
diff --git a/fs/fs.c b/fs/fs.c index 3fb00590be..4bc28d1dff 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -549,7 +549,6 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, if (len && len < read_len) read_len = len;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all();
if (lmb_alloc_addr(addr, read_len) == addr) diff --git a/include/lmb.h b/include/lmb.h index 77e2a23c0d..50977e6561 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -37,7 +37,6 @@ struct lmb_region { enum lmb_flags flags; };
-void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob);
diff --git a/lib/lmb.c b/lib/lmb.c index db0874371a..f1142033ef 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -226,19 +226,6 @@ static void lmb_reserve_common(void *fdt_blob) efi_lmb_reserve(); }
-/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) -{ - int i; - - for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) { - if (bd->bi_dram[i].size) - lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size); - } - - lmb_reserve_common(fdt_blob); -} - /** * lmb_add_memory() - Add memory range for LMB allocations * diff --git a/net/tftp.c b/net/tftp.c index f841f6b701..9b1f06ba8e 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -711,8 +711,6 @@ static int tftp_init_load_addr(void) if (CONFIG_IS_ENABLED(LMB)) { phys_size_t max_size;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1; diff --git a/net/wget.c b/net/wget.c index b8ea43e7f0..82a7e30ea7 100644 --- a/net/wget.c +++ b/net/wget.c @@ -75,8 +75,6 @@ static int wget_init_load_size(void) { phys_size_t max_size;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1; diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c index 34d2b141d8..1cd81a195b 100644 --- a/test/cmd/bdinfo.c +++ b/test/cmd/bdinfo.c @@ -113,14 +113,6 @@ static int lmb_test_dump_region(struct unit_test_state *uts, end = base + size - 1; flags = rgn->region[i].flags;
- /* - * this entry includes the stack (get_sp()) on many platforms - * so will different each time lmb_init_and_reserve() is called. - * We could instead have the bdinfo command put its lmb region - * in a known location, so we can check it directly, rather than - * calling lmb_init_and_reserve() to create a new (and hopefully - * identical one). But for now this seems good enough. - */ if (!IS_ENABLED(CONFIG_SANDBOX) && i == 3) { ut_assert_nextlinen(" %s[%d]\t[", name, i); continue; @@ -200,7 +192,6 @@ static int bdinfo_test_all(struct unit_test_state *uts) if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { struct lmb lmb;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); ut_assertok(lmb_test_dump_all(uts, &lmb)); if (IS_ENABLED(CONFIG_OF_REAL)) ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());

Hi,
On 24/07/2024 08:02, Sughosh Ganu wrote:
With the changes to make the LMB reservations persistent, the common memory regions are being added during board init. Remove the now superfluous lmb_init_and_reserve() function.
This patch comes before the patches doing these reservations in generic code. I think this could break bisecting.
Could you instead make the switch over to the generic code atomic?
Kind regards,
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Simon Glass sjg@chromium.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org
Changes since rfc: None
arch/arm/mach-apple/board.c | 2 -- arch/arm/mach-snapdragon/board.c | 2 -- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 3 --- cmd/bdinfo.c | 1 - cmd/load.c | 2 -- fs/fs.c | 1 - include/lmb.h | 1 - lib/lmb.c | 13 ------------- net/tftp.c | 2 -- net/wget.c | 2 -- test/cmd/bdinfo.c | 9 --------- 11 files changed, 38 deletions(-)
diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 213390d6e8..0b6d290b8a 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -775,8 +775,6 @@ int board_late_init(void) { u32 status = 0;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- /* somewhat based on the Linux Kernel boot requirements:
- * align by 2M and maximal FDT size 2M
- */
diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index a63c8bec45..22a7d2a637 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -282,8 +282,6 @@ int board_late_init(void) { u32 status = 0;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- /* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */ status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M)); status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M));
diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index a913737342..64480da9f8 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -141,9 +141,6 @@ int mach_cpu_init(void)
void enable_caches(void) {
/* parse device tree when data cache is still activated */
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* I-cache is already enabled in start.S: icache_enable() not needed */
/* deactivate the data cache, early enabled in arch_cpu_init() */
diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index b31e0208df..3c40dee143 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -162,7 +162,6 @@ static int bdinfo_print_all(struct bd_info *bd) bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit); #endif if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) {
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
lmb_dump_all_force();
if (IS_ENABLED(CONFIG_OF_REAL))
printf("devicetree = %s\n", fdtdec_get_srcname());
diff --git a/cmd/load.c b/cmd/load.c index 56da3a4c5d..20d802502a 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -153,8 +153,6 @@ static ulong load_serial(long offset) int line_count = 0; long ret;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- while (read_record(record, SREC_MAXRECLEN + 1) >= 0) { type = srec_decode(record, &binlen, &addr, binbuf);
diff --git a/fs/fs.c b/fs/fs.c index 3fb00590be..4bc28d1dff 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -549,7 +549,6 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, if (len && len < read_len) read_len = len;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all();
if (lmb_alloc_addr(addr, read_len) == addr)
diff --git a/include/lmb.h b/include/lmb.h index 77e2a23c0d..50977e6561 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -37,7 +37,6 @@ struct lmb_region { enum lmb_flags flags; };
-void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob);
diff --git a/lib/lmb.c b/lib/lmb.c index db0874371a..f1142033ef 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -226,19 +226,6 @@ static void lmb_reserve_common(void *fdt_blob) efi_lmb_reserve(); }
-/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) -{
- int i;
- for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
if (bd->bi_dram[i].size)
lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size);
- }
- lmb_reserve_common(fdt_blob);
-}
- /**
 * lmb_add_memory() - Add memory range for LMB allocations
diff --git a/net/tftp.c b/net/tftp.c index f841f6b701..9b1f06ba8e 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -711,8 +711,6 @@ static int tftp_init_load_addr(void) if (CONFIG_IS_ENABLED(LMB)) { phys_size_t max_size;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/net/wget.c b/net/wget.c index b8ea43e7f0..82a7e30ea7 100644 --- a/net/wget.c +++ b/net/wget.c @@ -75,8 +75,6 @@ static int wget_init_load_size(void) { phys_size_t max_size;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c index 34d2b141d8..1cd81a195b 100644 --- a/test/cmd/bdinfo.c +++ b/test/cmd/bdinfo.c @@ -113,14 +113,6 @@ static int lmb_test_dump_region(struct unit_test_state *uts, end = base + size - 1; flags = rgn->region[i].flags;
/*
* this entry includes the stack (get_sp()) on many platforms
* so will different each time lmb_init_and_reserve() is called.
* We could instead have the bdinfo command put its lmb region
* in a known location, so we can check it directly, rather than
* calling lmb_init_and_reserve() to create a new (and hopefully
* identical one). But for now this seems good enough.
*/ if (!IS_ENABLED(CONFIG_SANDBOX) && i == 3) { ut_assert_nextlinen(" %s[%d]\t[", name, i); continue;
@@ -200,7 +192,6 @@ static int bdinfo_test_all(struct unit_test_state *uts) if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { struct lmb lmb;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); ut_assertok(lmb_test_dump_all(uts, &lmb)); if (IS_ENABLED(CONFIG_OF_REAL)) ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());

On Thu, 15 Aug 2024 at 17:44, Caleb Connolly caleb.connolly@linaro.org wrote:
Hi,
On 24/07/2024 08:02, Sughosh Ganu wrote:
With the changes to make the LMB reservations persistent, the common memory regions are being added during board init. Remove the now superfluous lmb_init_and_reserve() function.
This patch comes before the patches doing these reservations in generic code. I think this could break bisecting.
Could you instead make the switch over to the generic code atomic?
+1
Kind regards,
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Simon Glass sjg@chromium.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org
Changes since rfc: None
arch/arm/mach-apple/board.c | 2 -- arch/arm/mach-snapdragon/board.c | 2 -- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 3 --- cmd/bdinfo.c | 1 - cmd/load.c | 2 -- fs/fs.c | 1 - include/lmb.h | 1 - lib/lmb.c | 13 ------------- net/tftp.c | 2 -- net/wget.c | 2 -- test/cmd/bdinfo.c | 9 --------- 11 files changed, 38 deletions(-)
diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 213390d6e8..0b6d290b8a 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -775,8 +775,6 @@ int board_late_init(void) { u32 status = 0;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* somewhat based on the Linux Kernel boot requirements: * align by 2M and maximal FDT size 2M */
diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index a63c8bec45..22a7d2a637 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -282,8 +282,6 @@ int board_late_init(void) { u32 status = 0;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */ status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M)); status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M));
diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index a913737342..64480da9f8 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -141,9 +141,6 @@ int mach_cpu_init(void)
void enable_caches(void) {
/* parse device tree when data cache is still activated */
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* I-cache is already enabled in start.S: icache_enable() not needed */ /* deactivate the data cache, early enabled in arch_cpu_init() */
diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index b31e0208df..3c40dee143 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -162,7 +162,6 @@ static int bdinfo_print_all(struct bd_info *bd) bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit); #endif if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) {
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all_force(); if (IS_ENABLED(CONFIG_OF_REAL)) printf("devicetree = %s\n", fdtdec_get_srcname());
diff --git a/cmd/load.c b/cmd/load.c index 56da3a4c5d..20d802502a 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -153,8 +153,6 @@ static ulong load_serial(long offset) int line_count = 0; long ret;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
while (read_record(record, SREC_MAXRECLEN + 1) >= 0) { type = srec_decode(record, &binlen, &addr, binbuf);
diff --git a/fs/fs.c b/fs/fs.c index 3fb00590be..4bc28d1dff 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -549,7 +549,6 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, if (len && len < read_len) read_len = len;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all(); if (lmb_alloc_addr(addr, read_len) == addr)
diff --git a/include/lmb.h b/include/lmb.h index 77e2a23c0d..50977e6561 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -37,7 +37,6 @@ struct lmb_region { enum lmb_flags flags; };
-void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob);
diff --git a/lib/lmb.c b/lib/lmb.c index db0874371a..f1142033ef 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -226,19 +226,6 @@ static void lmb_reserve_common(void *fdt_blob) efi_lmb_reserve(); }
-/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) -{
int i;
for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
if (bd->bi_dram[i].size)
lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size);
}
lmb_reserve_common(fdt_blob);
-}
- /**
 * lmb_add_memory() - Add memory range for LMB allocations
diff --git a/net/tftp.c b/net/tftp.c index f841f6b701..9b1f06ba8e 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -711,8 +711,6 @@ static int tftp_init_load_addr(void) if (CONFIG_IS_ENABLED(LMB)) { phys_size_t max_size;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/net/wget.c b/net/wget.c index b8ea43e7f0..82a7e30ea7 100644 --- a/net/wget.c +++ b/net/wget.c @@ -75,8 +75,6 @@ static int wget_init_load_size(void) { phys_size_t max_size;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c index 34d2b141d8..1cd81a195b 100644 --- a/test/cmd/bdinfo.c +++ b/test/cmd/bdinfo.c @@ -113,14 +113,6 @@ static int lmb_test_dump_region(struct unit_test_state *uts, end = base + size - 1; flags = rgn->region[i].flags;
/*
* this entry includes the stack (get_sp()) on many platforms
* so will different each time lmb_init_and_reserve() is called.
* We could instead have the bdinfo command put its lmb region
* in a known location, so we can check it directly, rather than
* calling lmb_init_and_reserve() to create a new (and hopefully
* identical one). But for now this seems good enough.
*/ if (!IS_ENABLED(CONFIG_SANDBOX) && i == 3) { ut_assert_nextlinen(" %s[%d]\t[", name, i); continue;
@@ -200,7 +192,6 @@ static int bdinfo_test_all(struct unit_test_state *uts) if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { struct lmb lmb;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); ut_assertok(lmb_test_dump_all(uts, &lmb)); if (IS_ENABLED(CONFIG_OF_REAL)) ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());
-- // Caleb (they/them)

On Thu, 15 Aug 2024 at 22:14, Caleb Connolly caleb.connolly@linaro.org wrote:
Hi,
On 24/07/2024 08:02, Sughosh Ganu wrote:
With the changes to make the LMB reservations persistent, the common memory regions are being added during board init. Remove the now superfluous lmb_init_and_reserve() function.
This patch comes before the patches doing these reservations in generic code. I think this could break bisecting.
Could you instead make the switch over to the generic code atomic?
Yes, I will flip the order of the patches such that the patches remain bisectable. Thanks.
-sughosh
Kind regards,
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Simon Glass sjg@chromium.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org
Changes since rfc: None
arch/arm/mach-apple/board.c | 2 -- arch/arm/mach-snapdragon/board.c | 2 -- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 3 --- cmd/bdinfo.c | 1 - cmd/load.c | 2 -- fs/fs.c | 1 - include/lmb.h | 1 - lib/lmb.c | 13 ------------- net/tftp.c | 2 -- net/wget.c | 2 -- test/cmd/bdinfo.c | 9 --------- 11 files changed, 38 deletions(-)
diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 213390d6e8..0b6d290b8a 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -775,8 +775,6 @@ int board_late_init(void) { u32 status = 0;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* somewhat based on the Linux Kernel boot requirements: * align by 2M and maximal FDT size 2M */
diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index a63c8bec45..22a7d2a637 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -282,8 +282,6 @@ int board_late_init(void) { u32 status = 0;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */ status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M)); status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M));
diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index a913737342..64480da9f8 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -141,9 +141,6 @@ int mach_cpu_init(void)
void enable_caches(void) {
/* parse device tree when data cache is still activated */
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* I-cache is already enabled in start.S: icache_enable() not needed */ /* deactivate the data cache, early enabled in arch_cpu_init() */
diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index b31e0208df..3c40dee143 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -162,7 +162,6 @@ static int bdinfo_print_all(struct bd_info *bd) bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit); #endif if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) {
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all_force(); if (IS_ENABLED(CONFIG_OF_REAL)) printf("devicetree = %s\n", fdtdec_get_srcname());
diff --git a/cmd/load.c b/cmd/load.c index 56da3a4c5d..20d802502a 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -153,8 +153,6 @@ static ulong load_serial(long offset) int line_count = 0; long ret;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
while (read_record(record, SREC_MAXRECLEN + 1) >= 0) { type = srec_decode(record, &binlen, &addr, binbuf);
diff --git a/fs/fs.c b/fs/fs.c index 3fb00590be..4bc28d1dff 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -549,7 +549,6 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, if (len && len < read_len) read_len = len;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all(); if (lmb_alloc_addr(addr, read_len) == addr)
diff --git a/include/lmb.h b/include/lmb.h index 77e2a23c0d..50977e6561 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -37,7 +37,6 @@ struct lmb_region { enum lmb_flags flags; };
-void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, void *fdt_blob);
diff --git a/lib/lmb.c b/lib/lmb.c index db0874371a..f1142033ef 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -226,19 +226,6 @@ static void lmb_reserve_common(void *fdt_blob) efi_lmb_reserve(); }
-/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) -{
int i;
for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
if (bd->bi_dram[i].size)
lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size);
}
lmb_reserve_common(fdt_blob);
-}
- /**
 * lmb_add_memory() - Add memory range for LMB allocations
diff --git a/net/tftp.c b/net/tftp.c index f841f6b701..9b1f06ba8e 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -711,8 +711,6 @@ static int tftp_init_load_addr(void) if (CONFIG_IS_ENABLED(LMB)) { phys_size_t max_size;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/net/wget.c b/net/wget.c index b8ea43e7f0..82a7e30ea7 100644 --- a/net/wget.c +++ b/net/wget.c @@ -75,8 +75,6 @@ static int wget_init_load_size(void) { phys_size_t max_size;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c index 34d2b141d8..1cd81a195b 100644 --- a/test/cmd/bdinfo.c +++ b/test/cmd/bdinfo.c @@ -113,14 +113,6 @@ static int lmb_test_dump_region(struct unit_test_state *uts, end = base + size - 1; flags = rgn->region[i].flags;
/*
* this entry includes the stack (get_sp()) on many platforms
* so will different each time lmb_init_and_reserve() is called.
* We could instead have the bdinfo command put its lmb region
* in a known location, so we can check it directly, rather than
* calling lmb_init_and_reserve() to create a new (and hopefully
* identical one). But for now this seems good enough.
*/ if (!IS_ENABLED(CONFIG_SANDBOX) && i == 3) { ut_assert_nextlinen(" %s[%d]\t[", name, i); continue;
@@ -200,7 +192,6 @@ static int bdinfo_test_all(struct unit_test_state *uts) if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { struct lmb lmb;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); ut_assertok(lmb_test_dump_all(uts, &lmb)); if (IS_ENABLED(CONFIG_OF_REAL)) ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());
-- // Caleb (they/them)

The LMB module provides APIs for allocating and reserving chunks of memory which are then typically used for things like loading images for booting. Reserve the portion of memory that is occupied by the U-Boot image itself, along with any other parts of memory that might have been marked as reserved in the board's DTB. When executing in SPL, reserve the sections that get relocated to RAM, the stack, the global data structure, and the BSS.
Mark these regions of memory with the LMB_NOOVERWRITE flag to indicate that these regions cannot be re-requested or overwritten.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Simon Glass sjg@chromium.org --- Changes since rfc: * Add a function for reserving common areas in SPL, lmb_reserve_common_spl()
lib/lmb.c | 36 ++++++++++++++++++++++++++++++++++-- 1 file changed, 34 insertions(+), 2 deletions(-)
diff --git a/lib/lmb.c b/lib/lmb.c index f1142033ef..ce1b0204c9 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -13,6 +13,7 @@ #include <lmb.h> #include <log.h> #include <malloc.h> +#include <spl.h>
#include <asm/global_data.h> #include <asm/sections.h> @@ -173,10 +174,11 @@ void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align) if (bank_end > end) bank_end = end - 1;
- lmb_reserve(sp, bank_end - sp + 1); + lmb_reserve_flags(sp, bank_end - sp + 1, LMB_NOOVERWRITE);
if (gd->flags & GD_FLG_SKIP_RELOC) - lmb_reserve((phys_addr_t)(uintptr_t)_start, gd->mon_len); + lmb_reserve_flags((phys_addr_t)(uintptr_t)_start, + gd->mon_len, LMB_NOOVERWRITE);
break; } @@ -226,6 +228,30 @@ static void lmb_reserve_common(void *fdt_blob) efi_lmb_reserve(); }
+static __maybe_unused void lmb_reserve_common_spl(void) +{ + phys_addr_t rsv_start; + phys_size_t rsv_size; + + /* + * Assume a SPL stack of 16KB. This must be + * more than enough for the SPL stage. + */ + if (IS_ENABLED(CONFIG_SPL_STACK_R_ADDR)) { + rsv_start = gd->start_addr_sp - 16384; + rsv_size = 16384; + lmb_reserve_flags(rsv_start, rsv_size, LMB_NOOVERWRITE); + } + + if (IS_ENABLED(CONFIG_SPL_SEPARATE_BSS)) { + /* Reserve the bss region */ + rsv_start = (phys_addr_t)(uintptr_t)__bss_start; + rsv_size = (phys_addr_t)(uintptr_t)__bss_end - + (phys_addr_t)(uintptr_t)__bss_start; + lmb_reserve_flags(rsv_start, rsv_size, LMB_NOOVERWRITE); + } +} + /** * lmb_add_memory() - Add memory range for LMB allocations * @@ -757,5 +783,11 @@ int lmb_mem_regions_init(void)
lmb_add_memory();
+ /* Reserve the U-Boot image region once U-Boot has relocated */ + if (spl_phase() == PHASE_SPL) + lmb_reserve_common_spl(); + else if (spl_phase() == PHASE_BOARD_R) + lmb_reserve_common((void *)gd->fdt_blob); + return 0; }

With the move to make the LMB allocations persistent and the common memory regions being reserved during board init, there is no need for an explicit reservation of a memory range. Remove the lmb_init_and_reserve_range() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org Reviewed-by: Tom Rini trini@konsulko.com --- Changes since rfc: None
boot/bootm.c | 15 +-------------- include/lmb.h | 3 --- lib/lmb.c | 8 -------- 3 files changed, 1 insertion(+), 25 deletions(-)
diff --git a/boot/bootm.c b/boot/bootm.c index 5ce84b73b5..d44fd2ed87 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -239,18 +239,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, return 0; }
-#if CONFIG_IS_ENABLED(LMB) -static void boot_start_lmb(void) -{ - phys_addr_t mem_start; - phys_size_t mem_size; - - mem_start = env_get_bootm_low(); - mem_size = env_get_bootm_size(); - - lmb_init_and_reserve_range(mem_start, mem_size, NULL); -} -#else +#if !CONFIG_IS_ENABLED(LMB) #define lmb_reserve(base, size) static inline void boot_start_lmb(void) { } #endif @@ -260,8 +249,6 @@ static int bootm_start(void) memset((void *)&images, 0, sizeof(images)); images.verify = env_get_yesno("verify");
- boot_start_lmb(); - bootstage_mark_name(BOOTSTAGE_ID_BOOTM_START, "bootm_start"); images.state = BOOTM_STATE_START;
diff --git a/include/lmb.h b/include/lmb.h index 50977e6561..2d8f9a6b71 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -37,9 +37,6 @@ struct lmb_region { enum lmb_flags flags; };
-void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, - void *fdt_blob); - /** * lmb_add_memory() - Add memory range for LMB allocations * diff --git a/lib/lmb.c b/lib/lmb.c index ce1b0204c9..5afdfc32fb 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -292,14 +292,6 @@ __weak void lmb_add_memory(void) } }
-/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, - void *fdt_blob) -{ - lmb_add(base, size); - lmb_reserve_common(fdt_blob); -} - static bool lmb_region_flags_match(struct lmb_region *rgn, unsigned long r1, enum lmb_flags flags) {

On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the move to make the LMB allocations persistent and the common memory regions being reserved during board init, there is no need for an explicit reservation of a memory range. Remove the lmb_init_and_reserve_range() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org Reviewed-by: Tom Rini trini@konsulko.com
Changes since rfc: None
boot/bootm.c | 15 +-------------- include/lmb.h | 3 --- lib/lmb.c | 8 -------- 3 files changed, 1 insertion(+), 25 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

Remove a couple of superfluous LMB stub functions, and instead guard the call to lmb_reserve() with a config check.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: New patch
boot/bootm.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/boot/bootm.c b/boot/bootm.c index d44fd2ed87..2689f36977 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -239,11 +239,6 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, return 0; }
-#if !CONFIG_IS_ENABLED(LMB) -#define lmb_reserve(base, size) -static inline void boot_start_lmb(void) { } -#endif - static int bootm_start(void) { memset((void *)&images, 0, sizeof(images)); @@ -700,7 +695,9 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress) images->os.end = relocated_addr + image_size; }
- lmb_reserve(images->os.load, (load_end - images->os.load)); + if (CONFIG_IS_ENABLED(LMB)) + lmb_reserve(images->os.load, (load_end - images->os.load)); + return 0; }

On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Remove a couple of superfluous LMB stub functions, and instead put a check for calling the lmb_reserve() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
boot/bootm.c | 9 +++------ 1 file changed, 3 insertions(+), 6 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

The memory map maintained by the LMB module is now persistent and global. This memory map is being maintained through the alloced list structure which can be extended at runtime -- there is one list for the available memory, and one for the used memory. Allocate and initialise these lists during the board init.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: None
common/board_r.c | 4 ++++ common/spl/spl.c | 3 +++ include/lmb.h | 11 +++++++++++ lib/lmb.c | 20 ++++++++++++++++++++ 4 files changed, 38 insertions(+)
diff --git a/common/board_r.c b/common/board_r.c index d4ba245ac6..eaf9b40ec0 100644 --- a/common/board_r.c +++ b/common/board_r.c @@ -22,6 +22,7 @@ #include <hang.h> #include <image.h> #include <irq_func.h> +#include <lmb.h> #include <log.h> #include <net.h> #include <asm/cache.h> @@ -610,6 +611,9 @@ static init_fnc_t init_sequence_r[] = { #ifdef CONFIG_CLOCKS set_cpu_clk_info, /* Setup clock information */ #endif +#if CONFIG_IS_ENABLED(LMB) + initr_lmb, +#endif #ifdef CONFIG_EFI_LOADER efi_memory_init, #endif diff --git a/common/spl/spl.c b/common/spl/spl.c index 7794ddccad..38ac0608bb 100644 --- a/common/spl/spl.c +++ b/common/spl/spl.c @@ -723,6 +723,9 @@ void board_init_r(gd_t *dummy1, ulong dummy2) IS_ENABLED(CONFIG_SPL_ATF)) dram_init_banksize();
+ if (IS_ENABLED(CONFIG_SPL_LMB)) + initr_lmb(); + if (CONFIG_IS_ENABLED(PCI) && !(gd->flags & GD_FLG_DM_DEAD)) { ret = pci_init(); if (ret) diff --git a/include/lmb.h b/include/lmb.h index 2d8f9a6b71..cc4cf9f3c8 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -37,6 +37,17 @@ struct lmb_region { enum lmb_flags flags; };
+/** + * initr_lmb() - Initialise the LMB lists + * + * Initialise the LMB lists needed for keeping the memory map. There + * are two lists, in form of alloced list data structure. One for the + * available memory, and one for the used memory. + * + * Return: 0 on success, -ve on error + */ +int initr_lmb(void); + /** * lmb_add_memory() - Add memory range for LMB allocations * diff --git a/lib/lmb.c b/lib/lmb.c index 5afdfc32fb..5baa5c4c52 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -783,3 +783,23 @@ int lmb_mem_regions_init(void)
return 0; } + +/** + * initr_lmb() - Initialise the LMB lists + * + * Initialise the LMB lists needed for keeping the memory map. There + * are two lists, in form of alloced list data structure. One for the + * available memory, and one for the used memory. + * + * Return: 0 on success, -ve on error + */ +int initr_lmb(void) +{ + int ret; + + ret = lmb_mem_regions_init(); + if (ret) + printf("Unable to initialise the LMB data structures\n"); + + return ret; +}

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The memory map maintained by the LMB module is now persistent and global. This memory map is being maintained through the alloced list structure which can be extended at runtime -- there is one list for the available memory, and one for the used memory. Allocate and initialise these lists during the board init.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
common/board_r.c | 4 ++++ common/spl/spl.c | 3 +++ include/lmb.h | 11 +++++++++++ lib/lmb.c | 20 ++++++++++++++++++++ 4 files changed, 38 insertions(+)
diff --git a/common/board_r.c b/common/board_r.c index d4ba245ac6..eaf9b40ec0 100644 --- a/common/board_r.c +++ b/common/board_r.c @@ -22,6 +22,7 @@ #include <hang.h> #include <image.h> #include <irq_func.h> +#include <lmb.h> #include <log.h> #include <net.h> #include <asm/cache.h> @@ -610,6 +611,9 @@ static init_fnc_t init_sequence_r[] = { #ifdef CONFIG_CLOCKS set_cpu_clk_info, /* Setup clock information */ #endif +#if CONFIG_IS_ENABLED(LMB)
initr_lmb,
+#endif
Can you put the if() in the function, to avoid #ifdefs here?
#ifdef CONFIG_EFI_LOADER efi_memory_init, #endif diff --git a/common/spl/spl.c b/common/spl/spl.c index 7794ddccad..38ac0608bb 100644 --- a/common/spl/spl.c +++ b/common/spl/spl.c @@ -723,6 +723,9 @@ void board_init_r(gd_t *dummy1, ulong dummy2) IS_ENABLED(CONFIG_SPL_ATF)) dram_init_banksize();
if (IS_ENABLED(CONFIG_SPL_LMB))
initr_lmb();
Still unsure why this is needed. I'm also not sure how it works, since SPL allocations will not transfer to U-Boot proper.
if (CONFIG_IS_ENABLED(PCI) && !(gd->flags & GD_FLG_DM_DEAD)) { ret = pci_init(); if (ret)
diff --git a/include/lmb.h b/include/lmb.h index 2d8f9a6b71..cc4cf9f3c8 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -37,6 +37,17 @@ struct lmb_region { enum lmb_flags flags; };
+/**
+ * initr_lmb() - Initialise the LMB lists
+ *
+ * Initialise the LMB lists needed for keeping the memory map. There
+ * are two lists, in form of alloced list data structure. One for the
+ * available memory, and one for the used memory.
+ *
+ * Return: 0 on success, -ve on error
+ */
+int initr_lmb(void);
/**
 * lmb_add_memory() - Add memory range for LMB allocations
diff --git a/lib/lmb.c b/lib/lmb.c index 5afdfc32fb..5baa5c4c52 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -783,3 +783,23 @@ int lmb_mem_regions_init(void)
return 0;
}
+/**
+ * initr_lmb() - Initialise the LMB lists
+ *
+ * Initialise the LMB lists needed for keeping the memory map. There
+ * are two lists, in form of alloced list data structure. One for the
+ * available memory, and one for the used memory.
+ *
+ * Return: 0 on success, -ve on error
+ */
+int initr_lmb(void) +{
+	int ret;
+
+	ret = lmb_mem_regions_init();
+	if (ret)
+		printf("Unable to initialise the LMB data structures\n");
+
+	return ret;
+}
2.34.1
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The memory map maintained by the LMB module is now persistent and global. This memory map is being maintained through the alloced list structure which can be extended at runtime -- there is one list for the available memory, and one for the used memory. Allocate and initialise these lists during the board init.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
common/board_r.c | 4 ++++ common/spl/spl.c | 3 +++ include/lmb.h | 11 +++++++++++ lib/lmb.c | 20 ++++++++++++++++++++ 4 files changed, 38 insertions(+)
diff --git a/common/board_r.c b/common/board_r.c index d4ba245ac6..eaf9b40ec0 100644 --- a/common/board_r.c +++ b/common/board_r.c @@ -22,6 +22,7 @@ #include <hang.h> #include <image.h> #include <irq_func.h> +#include <lmb.h> #include <log.h> #include <net.h> #include <asm/cache.h> @@ -610,6 +611,9 @@ static init_fnc_t init_sequence_r[] = { #ifdef CONFIG_CLOCKS set_cpu_clk_info, /* Setup clock information */ #endif +#if CONFIG_IS_ENABLED(LMB)
initr_lmb,
+#endif
Can you put the if() in the function, to avoid #ifdefs here?
Will do
#ifdef CONFIG_EFI_LOADER efi_memory_init, #endif diff --git a/common/spl/spl.c b/common/spl/spl.c index 7794ddccad..38ac0608bb 100644 --- a/common/spl/spl.c +++ b/common/spl/spl.c @@ -723,6 +723,9 @@ void board_init_r(gd_t *dummy1, ulong dummy2) IS_ENABLED(CONFIG_SPL_ATF)) dram_init_banksize();
if (IS_ENABLED(CONFIG_SPL_LMB))
initr_lmb();
Still unsure why this is needed. I'm also not sure how it works, since SPL allocations will not transfer to U-Boot proper.
I am not sure if I get your question right, but we initialise the LMB lists, and add to its available memory map, both in SPL and in U-Boot proper. These are initialised for SPL separately, once the RAM has been set up, including the dram banksize values.
-sughosh
if (CONFIG_IS_ENABLED(PCI) && !(gd->flags & GD_FLG_DM_DEAD)) { ret = pci_init(); if (ret)
diff --git a/include/lmb.h b/include/lmb.h index 2d8f9a6b71..cc4cf9f3c8 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -37,6 +37,17 @@ struct lmb_region { enum lmb_flags flags; };
+/**
+ * initr_lmb() - Initialise the LMB lists
+ *
+ * Initialise the LMB lists needed for keeping the memory map. There
+ * are two lists, in form of alloced list data structure. One for the
+ * available memory, and one for the used memory.
+ *
+ * Return: 0 on success, -ve on error
+ */
+int initr_lmb(void);
/**
 * lmb_add_memory() - Add memory range for LMB allocations
diff --git a/lib/lmb.c b/lib/lmb.c index 5afdfc32fb..5baa5c4c52 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -783,3 +783,23 @@ int lmb_mem_regions_init(void)
return 0;
}
+/**
+ * initr_lmb() - Initialise the LMB lists
+ *
+ * Initialise the LMB lists needed for keeping the memory map. There
+ * are two lists, in form of alloced list data structure. One for the
+ * available memory, and one for the used memory.
+ *
+ * Return: 0 on success, -ve on error
+ */
+int initr_lmb(void) +{
+	int ret;
+
+	ret = lmb_mem_regions_init();
+	if (ret)
+		printf("Unable to initialise the LMB data structures\n");
+
+	return ret;
+}
2.34.1
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The memory map maintained by the LMB module is now persistent and global. This memory map is being maintained through the alloced list structure which can be extended at runtime -- there is one list for the available memory, and one for the used memory. Allocate and initialise these lists during the board init.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
common/board_r.c | 4 ++++ common/spl/spl.c | 3 +++ include/lmb.h | 11 +++++++++++ lib/lmb.c | 20 ++++++++++++++++++++ 4 files changed, 38 insertions(+)
diff --git a/common/board_r.c b/common/board_r.c index d4ba245ac6..eaf9b40ec0 100644 --- a/common/board_r.c +++ b/common/board_r.c @@ -22,6 +22,7 @@ #include <hang.h> #include <image.h> #include <irq_func.h> +#include <lmb.h> #include <log.h> #include <net.h> #include <asm/cache.h> @@ -610,6 +611,9 @@ static init_fnc_t init_sequence_r[] = { #ifdef CONFIG_CLOCKS set_cpu_clk_info, /* Setup clock information */ #endif +#if CONFIG_IS_ENABLED(LMB)
initr_lmb,
+#endif
Can you put the if() in the function, to avoid #ifdefs here?
Will do
#ifdef CONFIG_EFI_LOADER efi_memory_init, #endif diff --git a/common/spl/spl.c b/common/spl/spl.c index 7794ddccad..38ac0608bb 100644 --- a/common/spl/spl.c +++ b/common/spl/spl.c @@ -723,6 +723,9 @@ void board_init_r(gd_t *dummy1, ulong dummy2) IS_ENABLED(CONFIG_SPL_ATF)) dram_init_banksize();
if (IS_ENABLED(CONFIG_SPL_LMB))
initr_lmb();
Still unsure why this is needed. I'm also not sure how it works, since SPL allocations will not transfer to U-Boot proper.
I am not sure if I get your question right, but we initialise the LMB lists, and add to it's available memory map, both in SPL and in U-Boot proper. These are being initialised for SPL separately once the ram memory has been set up, including the dram banksize values.
We can worry about it later, but we don't have a mechanism to transfer the LMB from SPL to U-Boot proper, so any SPL allocations will go away. I suspect that is fine for now.
Regards, Simon

Add a flags parameter to the LMB API functions. The parameter can then be used to pass flags describing the type of reservation or allocation needed by the caller. These will be used in a subsequent set of changes for allocation requests coming from the EFI subsystem.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: New patch
arch/arm/mach-apple/board.c | 17 ++-- arch/arm/mach-snapdragon/board.c | 2 +- arch/arm/mach-stm32mp/dram_init.c | 4 +- arch/powerpc/cpu/mpc85xx/mp.c | 2 +- arch/powerpc/lib/bootm.c | 2 +- board/xilinx/common/board.c | 4 +- boot/bootm.c | 5 +- boot/image-board.c | 15 ++- boot/image-fdt.c | 15 +-- cmd/booti.c | 2 +- cmd/bootz.c | 2 +- cmd/load.c | 4 +- drivers/iommu/apple_dart.c | 6 +- drivers/iommu/sandbox_iommu.c | 6 +- fs/fs.c | 2 +- include/lmb.h | 23 ++--- lib/lmb.c | 48 ++++------ test/lib/lmb.c | 150 +++++++++++++++--------------- 18 files changed, 150 insertions(+), 159 deletions(-)
diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 0b6d290b8a..72b1ab9053 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -778,15 +778,18 @@ int board_late_init(void) /* somewhat based on the Linux Kernel boot requirements: * align by 2M and maximal FDT size 2M */ - status |= env_set_hex("loadaddr", lmb_alloc(SZ_1G, SZ_2M)); - status |= env_set_hex("fdt_addr_r", lmb_alloc(SZ_2M, SZ_2M)); - status |= env_set_hex("kernel_addr_r", lmb_alloc(SZ_128M, SZ_2M)); - status |= env_set_hex("ramdisk_addr_r", lmb_alloc(SZ_1G, SZ_2M)); + status |= env_set_hex("loadaddr", lmb_alloc(SZ_1G, SZ_2M, LMB_NONE)); + status |= env_set_hex("fdt_addr_r", lmb_alloc(SZ_2M, SZ_2M, LMB_NONE)); + status |= env_set_hex("kernel_addr_r", lmb_alloc(SZ_128M, SZ_2M, + LMB_NONE)); + status |= env_set_hex("ramdisk_addr_r", lmb_alloc(SZ_1G, SZ_2M, + LMB_NONE)); status |= env_set_hex("kernel_comp_addr_r", - lmb_alloc(KERNEL_COMP_SIZE, SZ_2M)); + lmb_alloc(KERNEL_COMP_SIZE, SZ_2M, LMB_NONE)); status |= env_set_hex("kernel_comp_size", KERNEL_COMP_SIZE); - status |= env_set_hex("scriptaddr", lmb_alloc(SZ_4M, SZ_2M)); - status |= env_set_hex("pxefile_addr_r", lmb_alloc(SZ_4M, SZ_2M)); + status |= env_set_hex("scriptaddr", lmb_alloc(SZ_4M, SZ_2M, LMB_NONE)); + status |= env_set_hex("pxefile_addr_r", lmb_alloc(SZ_4M, SZ_2M, + LMB_NONE));
if (status) log_warning("late_init: Failed to set run time variables\n"); diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index 22a7d2a637..151d36d7eb 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -275,7 +275,7 @@ void __weak qcom_late_init(void)
#define KERNEL_COMP_SIZE SZ_64M
-#define addr_alloc(size) lmb_alloc(size, SZ_2M) +#define addr_alloc(size) lmb_alloc(size, SZ_2M, LMB_NONE)
/* Stolen from arch/arm/mach-apple/board.c */ int board_late_init(void) diff --git a/arch/arm/mach-stm32mp/dram_init.c b/arch/arm/mach-stm32mp/dram_init.c index e8b0a38be1..97d894d05f 100644 --- a/arch/arm/mach-stm32mp/dram_init.c +++ b/arch/arm/mach-stm32mp/dram_init.c @@ -58,11 +58,11 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) gd->ram_top = clamp_val(gd->ram_top, 0, SZ_4G - 1);
/* found enough not-reserved memory to relocated U-Boot */ - lmb_add(gd->ram_base, gd->ram_top - gd->ram_base); + lmb_add(gd->ram_base, gd->ram_top - gd->ram_base, LMB_NONE); boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); /* add 8M for reserved memory for display, fdt, gd,... */ size = ALIGN(SZ_8M + CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE), - reg = lmb_alloc(size, MMU_SECTION_SIZE); + reg = lmb_alloc(size, MMU_SECTION_SIZE, LMB_NONE);
if (!reg) reg = gd->ram_top - size; diff --git a/arch/powerpc/cpu/mpc85xx/mp.c b/arch/powerpc/cpu/mpc85xx/mp.c index bed465cb2c..8918a401fa 100644 --- a/arch/powerpc/cpu/mpc85xx/mp.c +++ b/arch/powerpc/cpu/mpc85xx/mp.c @@ -412,7 +412,7 @@ void cpu_mp_lmb_reserve(void) { u32 bootpg = determine_mp_bootpg(NULL);
- lmb_reserve(bootpg, 4096); + lmb_reserve(bootpg, 4096, LMB_NONE); }
void setup_mp(void) diff --git a/arch/powerpc/lib/bootm.c b/arch/powerpc/lib/bootm.c index 6c35664ff3..350cddf1dd 100644 --- a/arch/powerpc/lib/bootm.c +++ b/arch/powerpc/lib/bootm.c @@ -139,7 +139,7 @@ void arch_lmb_reserve(void) ulong base = bootmap_base + size; printf("WARNING: adjusting available memory from 0x%lx to 0x%llx\n", size, (unsigned long long)bootm_size); - lmb_reserve(base, bootm_size - size); + lmb_reserve(base, bootm_size - size, LMB_NONE); }
arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index f04c92a70f..1fcf7c3d8f 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -683,10 +683,10 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) panic("Not 64bit aligned DT location: %p\n", gd->fdt_blob);
/* found enough not-reserved memory to relocated U-Boot */ - lmb_add(gd->ram_base, gd->ram_size); + lmb_add(gd->ram_base, gd->ram_size, LMB_NONE); boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); size = ALIGN(CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE); - reg = lmb_alloc(size, MMU_SECTION_SIZE); + reg = lmb_alloc(size, MMU_SECTION_SIZE, LMB_NONE);
if (!reg) reg = gd->ram_top - size; diff --git a/boot/bootm.c b/boot/bootm.c index 2689f36977..9feedd72da 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -621,7 +621,7 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress) if (os.type == IH_TYPE_KERNEL_NOLOAD && os.comp != IH_COMP_NONE) { ulong req_size = ALIGN(image_len * 4, SZ_1M);
- load = lmb_alloc(req_size, SZ_2M); + load = lmb_alloc(req_size, SZ_2M, LMB_NONE); if (!load) return 1; os.load = load; @@ -696,7 +696,8 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress) }
if (CONFIG_IS_ENABLED(LMB)) - lmb_reserve(images->os.load, (load_end - images->os.load)); + lmb_reserve(images->os.load, (load_end - images->os.load), + LMB_NONE);
return 0; } diff --git a/boot/image-board.c b/boot/image-board.c index 99ee7968ba..b3281d66a4 100644 --- a/boot/image-board.c +++ b/boot/image-board.c @@ -560,15 +560,17 @@ int boot_ramdisk_high(ulong rd_data, ulong rd_len, ulong *initrd_start, debug(" in-place initrd\n"); *initrd_start = rd_data; *initrd_end = rd_data + rd_len; - lmb_reserve(rd_data, rd_len); + lmb_reserve(rd_data, rd_len, LMB_NONE); } else { if (initrd_high) *initrd_start = (ulong)lmb_alloc_base(rd_len, 0x1000, - initrd_high); + initrd_high, + LMB_NONE); else *initrd_start = (ulong)lmb_alloc(rd_len, - 0x1000); + 0x1000, + LMB_NONE);
if (*initrd_start == 0) { puts("ramdisk - allocation error\n"); @@ -827,7 +829,9 @@ int boot_get_cmdline(ulong *cmd_start, ulong *cmd_end)
barg = IF_ENABLED_INT(CONFIG_SYS_BOOT_GET_CMDLINE, CONFIG_SYS_BARGSIZE); cmdline = (char *)(ulong)lmb_alloc_base(barg, 0xf, - env_get_bootm_mapsize() + env_get_bootm_low()); + env_get_bootm_mapsize() + + env_get_bootm_low(), + LMB_NONE); if (!cmdline) return -1;
@@ -862,7 +866,8 @@ int boot_get_kbd(struct bd_info **kbd) *kbd = (struct bd_info *)(ulong)lmb_alloc_base(sizeof(struct bd_info), 0xf, env_get_bootm_mapsize() + - env_get_bootm_low()); + env_get_bootm_low(), + LMB_NONE); if (!*kbd) return -1;
diff --git a/boot/image-fdt.c b/boot/image-fdt.c index ccafadec0d..589e84c950 100644 --- a/boot/image-fdt.c +++ b/boot/image-fdt.c @@ -73,7 +73,7 @@ static void boot_fdt_reserve_region(uint64_t addr, uint64_t size, { long ret;
- ret = lmb_reserve_flags(addr, size, flags); + ret = lmb_reserve(addr, size, flags); if (ret >= 0) { debug(" reserving fdt memory region: addr=%llx size=%llx flags=%x\n", (unsigned long long)addr, @@ -185,17 +185,18 @@ int boot_relocate_fdt(char **of_flat_tree, ulong *of_size) if (desired_addr == ~0UL) { /* All ones means use fdt in place */ of_start = fdt_blob; - lmb_reserve(map_to_sysmem(of_start), of_len); + lmb_reserve(map_to_sysmem(of_start), of_len, LMB_NONE); disable_relocation = 1; } else if (desired_addr) { - addr = lmb_alloc_base(of_len, 0x1000, desired_addr); + addr = lmb_alloc_base(of_len, 0x1000, desired_addr, + LMB_NONE); of_start = map_sysmem(addr, of_len); if (of_start == NULL) { puts("Failed using fdt_high value for Device Tree"); goto error; } } else { - addr = lmb_alloc(of_len, 0x1000); + addr = lmb_alloc(of_len, 0x1000, LMB_NONE); of_start = map_sysmem(addr, of_len); } } else { @@ -217,7 +218,7 @@ int boot_relocate_fdt(char **of_flat_tree, ulong *of_size) * for LMB allocation. */ usable = min(start + size, low + mapsize); - addr = lmb_alloc_base(of_len, 0x1000, usable); + addr = lmb_alloc_base(of_len, 0x1000, usable, LMB_NONE); of_start = map_sysmem(addr, of_len); /* Allocation succeeded, use this block. */ if (of_start != NULL) @@ -667,7 +668,7 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob, bool lmb)
/* Delete the old LMB reservation */ if (CONFIG_IS_ENABLED(LMB) && lmb) - lmb_free(map_to_sysmem(blob), fdt_totalsize(blob)); + lmb_free(map_to_sysmem(blob), fdt_totalsize(blob), LMB_NONE);
ret = fdt_shrink_to_minimum(blob, 0); if (ret < 0) @@ -676,7 +677,7 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob, bool lmb)
/* Create a new LMB reservation */ if (CONFIG_IS_ENABLED(LMB) && lmb) - lmb_reserve(map_to_sysmem(blob), of_size); + lmb_reserve(map_to_sysmem(blob), of_size, LMB_NONE);
#if defined(CONFIG_ARCH_KEYSTONE) if (IS_ENABLED(CONFIG_OF_BOARD_SETUP)) diff --git a/cmd/booti.c b/cmd/booti.c index 6018cbacf0..b26ad1c6a4 100644 --- a/cmd/booti.c +++ b/cmd/booti.c @@ -87,7 +87,7 @@ static int booti_start(struct bootm_info *bmi) images->os.start = relocated_addr; images->os.end = relocated_addr + image_size;
- lmb_reserve(images->ep, le32_to_cpu(image_size)); + lmb_reserve(images->ep, le32_to_cpu(image_size), LMB_NONE);
/* * Handle the BOOTM_STATE_FINDOTHER state ourselves as we do not diff --git a/cmd/bootz.c b/cmd/bootz.c index 787203f5bd..99318ff213 100644 --- a/cmd/bootz.c +++ b/cmd/bootz.c @@ -56,7 +56,7 @@ static int bootz_start(struct cmd_tbl *cmdtp, int flag, int argc, if (ret != 0) return 1;
- lmb_reserve(images->ep, zi_end - zi_start); + lmb_reserve(images->ep, zi_end - zi_start, LMB_NONE);
/* * Handle the BOOTM_STATE_FINDOTHER state ourselves as we do not diff --git a/cmd/load.c b/cmd/load.c index 20d802502a..489c13dde0 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -179,7 +179,7 @@ static ulong load_serial(long offset) { void *dst;
- ret = lmb_reserve(store_addr, binlen); + ret = lmb_reserve(store_addr, binlen, LMB_NONE); if (ret) { printf("\nCannot overwrite reserved area (%08lx..%08lx)\n", store_addr, store_addr + binlen); @@ -188,7 +188,7 @@ static ulong load_serial(long offset) dst = map_sysmem(store_addr, binlen); memcpy(dst, binbuf, binlen); unmap_sysmem(dst); - lmb_free(store_addr, binlen); + lmb_free(store_addr, binlen, LMB_NONE); } if ((store_addr) < start_addr) start_addr = store_addr; diff --git a/drivers/iommu/apple_dart.c b/drivers/iommu/apple_dart.c index 611ac7cd6d..b575cd44d4 100644 --- a/drivers/iommu/apple_dart.c +++ b/drivers/iommu/apple_dart.c @@ -123,7 +123,7 @@ static dma_addr_t apple_dart_map(struct udevice *dev, void *addr, size_t size) off = (phys_addr_t)addr - paddr; psize = ALIGN(size + off, DART_PAGE_SIZE);
- dva = lmb_alloc(psize, DART_PAGE_SIZE); + dva = lmb_alloc(psize, DART_PAGE_SIZE, LMB_NONE);
idx = dva / DART_PAGE_SIZE; for (i = 0; i < psize / DART_PAGE_SIZE; i++) { @@ -159,7 +159,7 @@ static void apple_dart_unmap(struct udevice *dev, dma_addr_t addr, size_t size) (unsigned long)&priv->l2[idx + i]); priv->flush_tlb(priv);
- lmb_free(dva, psize); + lmb_free(dva, psize, LMB_NONE); }
static struct iommu_ops apple_dart_ops = { @@ -212,7 +212,7 @@ static int apple_dart_probe(struct udevice *dev) priv->dvabase = DART_PAGE_SIZE; priv->dvaend = SZ_4G - DART_PAGE_SIZE;
- lmb_add(priv->dvabase, priv->dvaend - priv->dvabase); + lmb_add(priv->dvabase, priv->dvaend - priv->dvabase, LMB_NONE);
/* Disable translations. */ for (sid = 0; sid < priv->nsid; sid++) diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index 5b4a6a8982..505f2c3250 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -21,7 +21,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
- dva = lmb_alloc(psize, IOMMU_PAGE_SIZE); + dva = lmb_alloc(psize, IOMMU_PAGE_SIZE, LMB_NONE);
return dva + off; } @@ -36,7 +36,7 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE);
- lmb_free(dva, psize); + lmb_free(dva, psize, LMB_NONE); }
static struct iommu_ops sandbox_iommu_ops = { @@ -46,7 +46,7 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) { - lmb_add(0x89abc000, SZ_16K); + lmb_add(0x89abc000, SZ_16K, LMB_NONE);
return 0; } diff --git a/fs/fs.c b/fs/fs.c index 4bc28d1dff..4d7af46fc7 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -551,7 +551,7 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset,
lmb_dump_all();
- if (lmb_alloc_addr(addr, read_len) == addr) + if (lmb_alloc_addr(addr, read_len, LMB_NONE) == addr) return 0;
log_err("** Reading file would overwrite reserved memory **\n"); diff --git a/include/lmb.h b/include/lmb.h index cc4cf9f3c8..c90f167eec 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -59,21 +59,12 @@ int initr_lmb(void); */ void lmb_add_memory(void);
-long lmb_add(phys_addr_t base, phys_size_t size); -long lmb_reserve(phys_addr_t base, phys_size_t size); -/** - * lmb_reserve_flags - Reserve one region with a specific flags bitfield. - * - * @base: base address of the memory region - * @size: size of the memory region - * @flags: flags for the memory region - * Return: 0 if OK, > 0 for coalesced region or a negative error code. - */ -long lmb_reserve_flags(phys_addr_t base, phys_size_t size, - enum lmb_flags flags); -phys_addr_t lmb_alloc(phys_size_t size, ulong align); -phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr); -phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size); +long lmb_add(phys_addr_t base, phys_size_t size, uint flags); +long lmb_reserve(phys_addr_t base, phys_size_t size, uint flags); +phys_addr_t lmb_alloc(phys_size_t size, ulong align, uint flags); +phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr, + uint flags); +phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size, uint flags); phys_size_t lmb_get_free_size(phys_addr_t addr);
/** @@ -88,7 +79,7 @@ phys_size_t lmb_get_free_size(phys_addr_t addr); */ int lmb_is_reserved_flags(phys_addr_t addr, int flags);
-long lmb_free(phys_addr_t base, phys_size_t size); +long lmb_free(phys_addr_t base, phys_size_t size, uint flags);
void lmb_dump_all(void); void lmb_dump_all_force(void); diff --git a/lib/lmb.c b/lib/lmb.c index 5baa5c4c52..ed5988bb1b 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -174,11 +174,11 @@ void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align) if (bank_end > end) bank_end = end - 1;
- lmb_reserve_flags(sp, bank_end - sp + 1, LMB_NOOVERWRITE); + lmb_reserve(sp, bank_end - sp + 1, LMB_NOOVERWRITE);
if (gd->flags & GD_FLG_SKIP_RELOC) - lmb_reserve_flags((phys_addr_t)(uintptr_t)_start, - gd->mon_len, LMB_NOOVERWRITE); + lmb_reserve((phys_addr_t)(uintptr_t)_start, gd->mon_len, + LMB_NOOVERWRITE);
break; } @@ -204,7 +204,7 @@ static __maybe_unused int efi_lmb_reserve(void)
for (i = 0, map = memmap; i < map_size / sizeof(*map); ++map, ++i) { if (map->type != EFI_CONVENTIONAL_MEMORY) { - lmb_reserve_flags(map_to_sysmem((void *)(uintptr_t) + lmb_reserve(map_to_sysmem((void *)(uintptr_t) map->physical_start), map->num_pages * EFI_PAGE_SIZE, map->type == EFI_RESERVED_MEMORY_TYPE @@ -240,7 +240,7 @@ static __maybe_unused void lmb_reserve_common_spl(void) if (IS_ENABLED(CONFIG_SPL_STACK_R_ADDR)) { rsv_start = gd->start_addr_sp - 16384; rsv_size = 16384; - lmb_reserve_flags(rsv_start, rsv_size, LMB_NOOVERWRITE); + lmb_reserve(rsv_start, rsv_size, LMB_NOOVERWRITE); }
if (IS_ENABLED(CONFIG_SPL_SEPARATE_BSS)) { @@ -248,7 +248,7 @@ static __maybe_unused void lmb_reserve_common_spl(void) rsv_start = (phys_addr_t)(uintptr_t)__bss_start; rsv_size = (phys_addr_t)(uintptr_t)__bss_end - (phys_addr_t)(uintptr_t)__bss_start; - lmb_reserve_flags(rsv_start, rsv_size, LMB_NOOVERWRITE); + lmb_reserve(rsv_start, rsv_size, LMB_NOOVERWRITE); } }
@@ -287,7 +287,7 @@ __weak void lmb_add_memory(void) if (rgn_top > ram_top) size -= rgn_top - ram_top;
- lmb_add(bd->bi_dram[i].start, size); + lmb_add(bd->bi_dram[i].start, size, LMB_NONE); } } } @@ -496,21 +496,15 @@ static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base, return 0; }
-static long lmb_add_region(struct alist *lmb_rgn_lst, phys_addr_t base, - phys_size_t size) -{ - return lmb_add_region_flags(lmb_rgn_lst, base, size, LMB_NONE); -} - /* This routine may be called with relocation disabled. */ -long lmb_add(phys_addr_t base, phys_size_t size) +long lmb_add(phys_addr_t base, phys_size_t size, uint flags) { struct alist *lmb_rgn_lst = &lmb_free_mem;
- return lmb_add_region(lmb_rgn_lst, base, size); + return lmb_add_region_flags(lmb_rgn_lst, base, size, flags); }
-long lmb_free(phys_addr_t base, phys_size_t size) +long lmb_free(phys_addr_t base, phys_size_t size, uint flags) { struct lmb_region *rgn; struct alist *lmb_rgn_lst = &lmb_used_mem; @@ -561,18 +555,13 @@ long lmb_free(phys_addr_t base, phys_size_t size) rgn[i].flags); }
-long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags) +long lmb_reserve(phys_addr_t base, phys_size_t size, uint flags) { struct alist *lmb_rgn_lst = &lmb_used_mem;
return lmb_add_region_flags(lmb_rgn_lst, base, size, flags); }
-long lmb_reserve(phys_addr_t base, phys_size_t size) -{ - return lmb_reserve_flags(base, size, LMB_NONE); -} - static long lmb_overlaps_region(struct alist *lmb_rgn_lst, phys_addr_t base, phys_size_t size) { @@ -639,16 +628,17 @@ static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align, return 0; }
-phys_addr_t lmb_alloc(phys_size_t size, ulong align) +phys_addr_t lmb_alloc(phys_size_t size, ulong align, uint flags) { - return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE); + return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE, flags); }
-phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr) +phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr, + uint flags) { phys_addr_t alloc;
- alloc = __lmb_alloc_base(size, align, max_addr, LMB_NONE); + alloc = __lmb_alloc_base(size, align, max_addr, flags);
if (alloc == 0) printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n", @@ -674,7 +664,7 @@ static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size, lmb_memory[rgn].size, base + size - 1, 1)) { /* ok, reserve the memory */ - if (lmb_reserve_flags(base, size, flags) >= 0) + if (lmb_reserve(base, size, flags) >= 0) return base; } } @@ -686,9 +676,9 @@ static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size, * Try to allocate a specific address range: must be in defined memory but not * reserved */ -phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size) +phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size, uint flags) { - return __lmb_alloc_addr(base, size, LMB_NONE); + return __lmb_alloc_addr(base, size, flags); }
/* Return number of bytes from a given address that are free */ diff --git a/test/lib/lmb.c b/test/lib/lmb.c index a3a7ad904c..3f99156cf3 100644 --- a/test/lib/lmb.c +++ b/test/lib/lmb.c @@ -77,11 +77,11 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, ut_assert(alloc_64k_end <= ram_end - 8);
if (ram0_size) { - ret = lmb_add(ram0, ram0_size); + ret = lmb_add(ram0, ram0_size, LMB_NONE); ut_asserteq(ret, 0); }
- ret = lmb_add(ram, ram_size); + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
if (ram0_size) { @@ -97,67 +97,67 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, }
/* reserve 64KiB somewhere */ - ret = lmb_reserve(alloc_64k_addr, 0x10000); + ret = lmb_reserve(alloc_64k_addr, 0x10000, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate somewhere, should be at the end of RAM */ - a = lmb_alloc(4, 1); + a = lmb_alloc(4, 1, LMB_NONE); ut_asserteq(a, ram_end - 4); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr, 0x10000, ram_end - 4, 4, 0, 0); /* alloc below end of reserved region -> below reserved region */ - b = lmb_alloc_base(4, 1, alloc_64k_end); + b = lmb_alloc_base(4, 1, alloc_64k_end, LMB_NONE); ut_asserteq(b, alloc_64k_addr - 4); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 4, 4, 0, 0);
/* 2nd time */ - c = lmb_alloc(4, 1); + c = lmb_alloc(4, 1, LMB_NONE); ut_asserteq(c, ram_end - 8); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 8, 8, 0, 0); - d = lmb_alloc_base(4, 1, alloc_64k_end); + d = lmb_alloc_base(4, 1, alloc_64k_end, LMB_NONE); ut_asserteq(d, alloc_64k_addr - 8); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
- ret = lmb_free(a, 4); + ret = lmb_free(a, 4, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); /* allocate again to ensure we get the same address */ - a2 = lmb_alloc(4, 1); + a2 = lmb_alloc(4, 1, LMB_NONE); ut_asserteq(a, a2); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0); - ret = lmb_free(a2, 4); + ret = lmb_free(a2, 4, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
- ret = lmb_free(b, 4); + ret = lmb_free(b, 4, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4); /* allocate again to ensure we get the same address */ - b2 = lmb_alloc_base(4, 1, alloc_64k_end); + b2 = lmb_alloc_base(4, 1, alloc_64k_end, LMB_NONE); ut_asserteq(b, b2); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); - ret = lmb_free(b2, 4); + ret = lmb_free(b2, 4, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4);
- ret = lmb_free(c, 4); + ret = lmb_free(c, 4, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, 0, 0); - ret = lmb_free(d, 4); + ret = lmb_free(d, 4, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0); @@ -234,35 +234,35 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- ret = lmb_add(ram, ram_size); + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
/* reserve 64KiB in the middle of RAM */ - ret = lmb_reserve(alloc_64k_addr, 0x10000); + ret = lmb_reserve(alloc_64k_addr, 0x10000, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate a big block, should be below reserved */ - a = lmb_alloc(big_block_size, 1); + a = lmb_alloc(big_block_size, 1, LMB_NONE); ut_asserteq(a, ram); ASSERT_LMB(&lmb, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0); /* allocate 2nd big block */ /* This should fail, printing an error */ - b = lmb_alloc(big_block_size, 1); + b = lmb_alloc(big_block_size, 1, LMB_NONE); ut_asserteq(b, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0);
- ret = lmb_free(a, big_block_size); + ret = lmb_free(a, big_block_size, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate too big block */ /* This should fail, printing an error */ - a = lmb_alloc(ram_size, 1); + a = lmb_alloc(ram_size, 1, LMB_NONE); ut_asserteq(a, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0); @@ -298,17 +298,17 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- ret = lmb_add(ram, ram_size); + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block */ - a = lmb_alloc(alloc_size, align); + a = lmb_alloc(alloc_size, align, LMB_NONE); ut_assert(a != 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); /* allocate another block */ - b = lmb_alloc(alloc_size, align); + b = lmb_alloc(alloc_size, align, LMB_NONE); ut_assert(b != 0); if (alloc_size == alloc_size_aligned) { ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - @@ -320,21 +320,21 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, - alloc_size_aligned, alloc_size, 0, 0); } /* and free them */ - ret = lmb_free(b, alloc_size); + ret = lmb_free(b, alloc_size, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); - ret = lmb_free(a, alloc_size); + ret = lmb_free(a, alloc_size, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block with base*/ - b = lmb_alloc_base(alloc_size, align, ram_end); + b = lmb_alloc_base(alloc_size, align, ram_end, LMB_NONE); ut_assert(a == b); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); /* and free it */ - ret = lmb_free(b, alloc_size); + ret = lmb_free(b, alloc_size, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
@@ -381,28 +381,28 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts) long ret; phys_addr_t a, b;
- ret = lmb_add(ram, ram_size); + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
/* allocate nearly everything */ - a = lmb_alloc(ram_size - 4, 1); + a = lmb_alloc(ram_size - 4, 1, LMB_NONE); ut_asserteq(a, ram + 4); ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* allocate the rest */ /* This should fail as the allocated address would be 0 */ - b = lmb_alloc(4, 1); + b = lmb_alloc(4, 1, LMB_NONE); ut_asserteq(b, 0); /* check that this was an error by checking lmb */ ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* check that this was an error by freeing b */ - ret = lmb_free(b, 4); + ret = lmb_free(b, 4, LMB_NONE); ut_asserteq(ret, -1); ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0);
- ret = lmb_free(a, ram_size - 4); + ret = lmb_free(a, ram_size - 4, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
@@ -417,37 +417,37 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts) const phys_size_t ram_size = 0x20000000; long ret;
- ret = lmb_add(ram, ram_size); + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
- ret = lmb_reserve(0x40010000, 0x10000); + ret = lmb_reserve(0x40010000, 0x10000, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); /* allocate overlapping region should fail */ - ret = lmb_reserve(0x40011000, 0x10000); + ret = lmb_reserve(0x40011000, 0x10000, LMB_NONE); ut_asserteq(ret, -1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); /* allocate 3nd region */ - ret = lmb_reserve(0x40030000, 0x10000); + ret = lmb_reserve(0x40030000, 0x10000, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40010000, 0x10000, 0x40030000, 0x10000, 0, 0); /* allocate 2nd region , This should coalesced all region into one */ - ret = lmb_reserve(0x40020000, 0x10000); + ret = lmb_reserve(0x40020000, 0x10000, LMB_NONE); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x30000, 0, 0, 0, 0);
/* allocate 2nd region, which should be added as first region */ - ret = lmb_reserve(0x40000000, 0x8000); + ret = lmb_reserve(0x40000000, 0x8000, LMB_NONE); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x8000, 0x40010000, 0x30000, 0, 0);
/* allocate 3rd region, coalesce with first and overlap with second */ - ret = lmb_reserve(0x40008000, 0x10000); + ret = lmb_reserve(0x40008000, 0x10000, LMB_NONE); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x40000, 0, 0, 0, 0); @@ -472,95 +472,95 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- ret = lmb_add(ram, ram_size); + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
/* reserve 3 blocks */ - ret = lmb_reserve(alloc_addr_a, 0x10000); + ret = lmb_reserve(alloc_addr_a, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ret = lmb_reserve(alloc_addr_b, 0x10000); + ret = lmb_reserve(alloc_addr_b, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ret = lmb_reserve(alloc_addr_c, 0x10000); + ret = lmb_reserve(alloc_addr_c, 0x10000, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
 	/* allocate blocks */
-	a = lmb_alloc_addr(ram, alloc_addr_a - ram);
+	a = lmb_alloc_addr(ram, alloc_addr_a - ram, LMB_NONE);
 	ut_asserteq(a, ram);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, ram, 0x8010000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
 	b = lmb_alloc_addr(alloc_addr_a + 0x10000,
-			   alloc_addr_b - alloc_addr_a - 0x10000);
+			   alloc_addr_b - alloc_addr_a - 0x10000, LMB_NONE);
 	ut_asserteq(b, alloc_addr_a + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x10010000, alloc_addr_c, 0x10000, 0, 0);
 	c = lmb_alloc_addr(alloc_addr_b + 0x10000,
-			   alloc_addr_c - alloc_addr_b - 0x10000);
+			   alloc_addr_c - alloc_addr_b - 0x10000, LMB_NONE);
 	ut_asserteq(c, alloc_addr_b + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
 	d = lmb_alloc_addr(alloc_addr_c + 0x10000,
-			   ram_end - alloc_addr_c - 0x10000);
+			   ram_end - alloc_addr_c - 0x10000, LMB_NONE);
 	ut_asserteq(d, alloc_addr_c + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);

 	/* allocating anything else should fail */
-	e = lmb_alloc(1, 1);
+	e = lmb_alloc(1, 1, LMB_NONE);
 	ut_asserteq(e, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);

-	ret = lmb_free(d, ram_end - alloc_addr_c - 0x10000);
+	ret = lmb_free(d, ram_end - alloc_addr_c - 0x10000, LMB_NONE);
 	ut_asserteq(ret, 0);

 	/* allocate at 3 points in free range */

-	d = lmb_alloc_addr(ram_end - 4, 4);
+	d = lmb_alloc_addr(ram_end - 4, 4, LMB_NONE);
 	ut_asserteq(d, ram_end - 4);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0);
-	ret = lmb_free(d, 4);
+	ret = lmb_free(d, 4, LMB_NONE);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);

-	d = lmb_alloc_addr(ram_end - 128, 4);
+	d = lmb_alloc_addr(ram_end - 128, 4, LMB_NONE);
 	ut_asserteq(d, ram_end - 128);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0);
-	ret = lmb_free(d, 4);
+	ret = lmb_free(d, 4, LMB_NONE);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);

-	d = lmb_alloc_addr(alloc_addr_c + 0x10000, 4);
+	d = lmb_alloc_addr(alloc_addr_c + 0x10000, 4, LMB_NONE);
 	ut_asserteq(d, alloc_addr_c + 0x10000);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010004, 0, 0, 0, 0);
-	ret = lmb_free(d, 4);
+	ret = lmb_free(d, 4, LMB_NONE);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);

 	/* allocate at the bottom */
-	ret = lmb_free(a, alloc_addr_a - ram);
+	ret = lmb_free(a, alloc_addr_a - ram, LMB_NONE);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, ram + 0x8000000, 0x10010000, 0, 0, 0, 0);
-	d = lmb_alloc_addr(ram, 4);
+	d = lmb_alloc_addr(ram, 4, LMB_NONE);
 	ut_asserteq(d, ram);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, d, 4, ram + 0x8000000, 0x10010000, 0, 0);

 	/* check that allocating outside memory fails */
 	if (ram_end != 0) {
-		ret = lmb_alloc_addr(ram_end, 1);
+		ret = lmb_alloc_addr(ram_end, 1, LMB_NONE);
 		ut_asserteq(ret, 0);
 	}
 	if (ram != 0) {
-		ret = lmb_alloc_addr(ram - 1, 1);
+		ret = lmb_alloc_addr(ram - 1, 1, LMB_NONE);
 		ut_asserteq(ret, 0);
 	}
@@ -596,15 +596,15 @@ static int test_get_unreserved_size(struct unit_test_state *uts,
 	/* check for overflow */
 	ut_assert(ram_end == 0 || ram_end > ram);

-	ret = lmb_add(ram, ram_size);
+	ret = lmb_add(ram, ram_size, LMB_NONE);
 	ut_asserteq(ret, 0);

 	/* reserve 3 blocks */
-	ret = lmb_reserve(alloc_addr_a, 0x10000);
+	ret = lmb_reserve(alloc_addr_a, 0x10000, LMB_NONE);
 	ut_asserteq(ret, 0);
-	ret = lmb_reserve(alloc_addr_b, 0x10000);
+	ret = lmb_reserve(alloc_addr_b, 0x10000, LMB_NONE);
 	ut_asserteq(ret, 0);
-	ret = lmb_reserve(alloc_addr_c, 0x10000);
+	ret = lmb_reserve(alloc_addr_c, 0x10000, LMB_NONE);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
@@ -654,23 +654,23 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 	const phys_size_t ram_size = 0x20000000;
 	long ret;
-	ret = lmb_add(ram, ram_size);
+	ret = lmb_add(ram, ram_size, LMB_NONE);
 	ut_asserteq(ret, 0);

 	/* reserve, same flag */
-	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve(0x40010000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);

 	/* reserve again, same flag */
-	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve(0x40010000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);

 	/* reserve again, new flag */
-	ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NONE);
+	ret = lmb_reserve(0x40010000, 0x10000, LMB_NONE);
 	ut_asserteq(ret, -1);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
@@ -678,20 +678,20 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);

 	/* merge after */
-	ret = lmb_reserve_flags(0x40020000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve(0x40020000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 1);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x20000, 0, 0, 0, 0);

 	/* merge before */
-	ret = lmb_reserve_flags(0x40000000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve(0x40000000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 1);
 	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x30000, 0, 0, 0, 0);

 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);

-	ret = lmb_reserve_flags(0x40030000, 0x10000, LMB_NONE);
+	ret = lmb_reserve(0x40030000, 0x10000, LMB_NONE);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x10000, 0, 0);
@@ -700,7 +700,7 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);

 	/* test that old API use LMB_NONE */
-	ret = lmb_reserve(0x40040000, 0x10000);
+	ret = lmb_reserve(0x40040000, 0x10000, LMB_NONE);
 	ut_asserteq(ret, 1);
 	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x20000, 0, 0);
@@ -708,18 +708,18 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
 	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);

-	ret = lmb_reserve_flags(0x40070000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve(0x40070000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40070000, 0x10000);

-	ret = lmb_reserve_flags(0x40050000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve(0x40050000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 0);
 	ASSERT_LMB(&lmb, ram, ram_size, 4, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x10000);

 	/* merge with 2 adjacent regions */
-	ret = lmb_reserve_flags(0x40060000, 0x10000, LMB_NOMAP);
+	ret = lmb_reserve(0x40060000, 0x10000, LMB_NOMAP);
 	ut_asserteq(ret, 2);
 	ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x30000);

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add a flags parameter to the LMB API functions. The parameter can then be used to pass any other type of reservations or allocations needed by the callers. These will be used in a subsequent set of changes for allocation requests coming from the EFI subsystem.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
 arch/arm/mach-apple/board.c       |  17 ++--
 arch/arm/mach-snapdragon/board.c  |   2 +-
 arch/arm/mach-stm32mp/dram_init.c |   4 +-
 arch/powerpc/cpu/mpc85xx/mp.c     |   2 +-
 arch/powerpc/lib/bootm.c          |   2 +-
 board/xilinx/common/board.c       |   4 +-
 boot/bootm.c                      |   5 +-
 boot/image-board.c                |  15 ++-
 boot/image-fdt.c                  |  15 +--
 cmd/booti.c                       |   2 +-
 cmd/bootz.c                       |   2 +-
 cmd/load.c                        |   4 +-
 drivers/iommu/apple_dart.c        |   6 +-
 drivers/iommu/sandbox_iommu.c     |   6 +-
 fs/fs.c                           |   2 +-
 include/lmb.h                     |  23 ++---
 lib/lmb.c                         |  48 ++++------
 test/lib/lmb.c                    | 150 +++++++++++++++---------------
 18 files changed, 150 insertions(+), 159 deletions(-)
This negates any code-size advantage of dropping the lmb parameter.
All of these are LMB_NONE. Can we have a separate function (e.g. lmb_alloc_type()) for when we actually need to specify the type?
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
We will be passing different values when we call the LMB APIs from the EFI allocation function. This only adds a parameter to the allocation APIs, which I believe is better than adding separate functions that take a flags parameter only to be called from the EFI subsystem.
-sughosh

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:40, Sughosh Ganu sughosh.ganu@linaro.org wrote:
No, I believe it is worse, unless there are a lot of such functions. The flags are a special case, not the common case.
Regards, Simon

On Mon, 29 Jul 2024 at 20:56, Simon Glass sjg@chromium.org wrote:
I have done some size-impact tests on the two scenarios: one where we have a common set of LMB allocation API functions with an added flags parameter, and a second where we have separate APIs to be called from the EFI memory module. I have put out the results of the size impact [1].
You will see that with common APIs we are not losing much, even on boards with EFI_LOADER disabled. But on the other hand, on boards which have EFI_LOADER enabled, the gains are pretty significant. I believe we should reconsider using a common LMB API with the flags parameter.
-sughosh
[1] - https://gist.github.com/sughoshg/a20207f26e19238fef86f710134d6efd

Hi Sughosh,
On Mon, 5 Aug 2024 at 05:55, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Thanks for looking at it.
Did you do special versions of just lmb_alloc() and lmb_add() which call the flags versions? It seems that there is no size advantage with EFI_LOADER and only a small one with !EFI_LOADER. Can you please point me to the code?
Regards, Simon

On Wed, 7 Aug 2024 at 03:21, Simon Glass sjg@chromium.org wrote:
For the separate-API version, I introduced new functions lmb_alloc_flags(), lmb_alloc_base_flags(), lmb_alloc_addr_flags() and lmb_free_flags(), which are called from the EFI memory module. I have pushed the two branches [1] [2] to my GitHub. Please take a look.
Btw, both these branches are based on your v5 of the alist patches, and also incorporate the stack-based implementation for running the LMB tests. So apart from having common APIs or not, there are no other differences between the two. Thanks.
-sughosh
[1] - https://github.com/sughoshg/u-boot/tree/lmb_efi_common_apis_nrfc_v2 [2] - https://github.com/sughoshg/u-boot/tree/lmb_efi_separate_flags_apis_nrfc_v2

Hi Sughosh,
On Wed, 7 Aug 2024 at 00:32, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Thanks for the info.
The non-flags functions can call the flags functions, so that you don't create a new code path. Something like this:
diff --git a/lib/lmb.c b/lib/lmb.c
index 726e6c38227..0a251c587fe 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -528,7 +528,7 @@ long lmb_free_flags(phys_addr_t base, phys_size_t size,

 long lmb_free(phys_addr_t base, phys_size_t size)
 {
-	return __lmb_free(base, size);
+	return lmb_free_flags(base, size, LMB_NONE);
 }

 long lmb_reserve_flags(phys_addr_t base, phys_size_t size,
 		       enum lmb_flags flags)
@@ -624,7 +624,7 @@ static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,

 phys_addr_t lmb_alloc(phys_size_t size, ulong align)
 {
-	return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE);
+	return lmb_alloc_flags(size, align, LMB_NONE);
 }

 phys_addr_t lmb_alloc_flags(phys_size_t size, ulong align, uint flags)
@@ -635,15 +635,7 @@ phys_addr_t lmb_alloc_flags(phys_size_t size, ulong align, uint flags)

 phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 {
-	phys_addr_t alloc;
-
-	alloc = __lmb_alloc_base(size, align, max_addr, LMB_NONE);
-
-	if (alloc == 0)
-		printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
-		       (ulong)size, (ulong)max_addr);
-
-	return alloc;
+	return lmb_alloc_base_flags(size, align, max_addr, LMB_NONE);
 }

 phys_addr_t lmb_alloc_base_flags(phys_size_t size, ulong align,
@@ -691,7 +683,7 @@ static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size,
  */
 phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
 {
-	return __lmb_alloc_addr(base, size, LMB_NONE);
+	return lmb_alloc_addr_flags(base, size, LMB_NONE);
 }

 phys_addr_t lmb_alloc_addr_flags(phys_addr_t base, phys_size_t size,
But it only saves about 40 bytes on Thumb2. You can save another 16 by using the placeholder API:
diff --git a/lib/lmb.c b/lib/lmb.c
index 0a251c587fe..f6c8f06629c 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -416,13 +416,12 @@ static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base,
 	if (coalesced)
 		return coalesced;

-	if (alist_full(lmb_rgn_lst) &&
-	    !alist_expand_by(lmb_rgn_lst, lmb_rgn_lst->alloc))
+	if (!alist_add_placeholder(lmb_rgn_lst))
 		return -1;
 	rgn = lmb_rgn_lst->data;

 	/* Couldn't coalesce the LMB, so add it to the sorted table. */
-	for (i = lmb_rgn_lst->count; i >= 0; i--) {
+	for (i = lmb_rgn_lst->count - 1; i >= 0; i--) {
 		if (i && base < rgn[i - 1].base) {
 			rgn[i] = rgn[i - 1];
 		} else {
@@ -433,8 +432,6 @@ static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base,
 		}
 	}

-	lmb_rgn_lst->count++;
-
 	return 0;
 }

@@ -444,7 +441,6 @@ static long lmb_add_region(struct alist *lmb_rgn_lst, phys_addr_t base,
 	return lmb_add_region_flags(lmb_rgn_lst, base, size, LMB_NONE);
 }

-/* This routine may be called with relocation disabled. */
 long lmb_add(phys_addr_t base, phys_size_t size)
 {
 	long ret;
(patches are a bit rough, but I didn't think it worth sending them to the ML as real patches)
If I am correct and we don't need to publish events, then that will save a little more space.
Regards, Simon

On Thu, 8 Aug 2024 at 19:58, Simon Glass sjg@chromium.org wrote:
Can you please explain the issue you see with having a common set of APIs with the flags parameter added? The way I see it, the APIs are already undergoing a change where we remove the struct lmb instance as a parameter from all the API functions. With the change I propose, we simply replace the lmb instance parameter with the flags parameter, so arguably we are not adding any additional size here. Also, as the size tests show, we get a pretty good size benefit when EFI_LOADER is enabled.
So, if your argument is to keep the APIs similar to their earlier form, I think they are undergoing a change in any case, so adding the flags parameter is not much of an issue. If there is a problem I am missing, I would like to understand it before we go with separate APIs. Thanks.
-sughosh
diff --git a/lib/lmb.c b/lib/lmb.c index 0a251c587fe..f6c8f06629c 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -416,13 +416,12 @@ static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base, if (coalesced) return coalesced;
if (alist_full(lmb_rgn_lst) &&
!alist_expand_by(lmb_rgn_lst, lmb_rgn_lst->alloc))
if (!alist_add_placeholder(lmb_rgn_lst)) return -1; rgn = lmb_rgn_lst->data; /* Couldn't coalesce the LMB, so add it to the sorted table. */
for (i = lmb_rgn_lst->count; i >= 0; i--) {
for (i = lmb_rgn_lst->count - 1; i >= 0; i--) { if (i && base < rgn[i - 1].base) { rgn[i] = rgn[i - 1]; } else {
@@ -433,8 +432,6 @@ static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base, } }
lmb_rgn_lst->count++;
return 0;
}
@@ -444,7 +441,6 @@ static long lmb_add_region(struct alist *lmb_rgn_lst, phys_addr_t base, return lmb_add_region_flags(lmb_rgn_lst, base, size, LMB_NONE); }
-/* This routine may be called with relocation disabled. */ long lmb_add(phys_addr_t base, phys_size_t size) { long ret;
-- 2.34.1
(patches are a bit rough, but I didn't think it worth sending them to the ML as real patches)
If I am correct and we don't need to publish events, then that will save a little more space.
Regards, Simon
-sughosh
[1] - https://github.com/sughoshg/u-boot/tree/lmb_efi_common_apis_nrfc_v2 [2] - https://github.com/sughoshg/u-boot/tree/lmb_efi_separate_flags_apis_nrfc_v2
Regards, Simon
[1] - https://gist.github.com/sughoshg/a20207f26e19238fef86f710134d6efd
Regards, Simon

Hi Sughosh,
On Fri, 9 Aug 2024 at 02:25, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 8 Aug 2024 at 19:58, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 7 Aug 2024 at 00:32, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Wed, 7 Aug 2024 at 03:21, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 5 Aug 2024 at 05:55, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 29 Jul 2024 at 20:56, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 29 Jul 2024 at 02:40, Sughosh Ganu <sughosh.ganu@linaro.org> wrote:
>
> On Fri, 26 Jul 2024 at 05:02, Simon Glass <sjg@chromium.org> wrote:
> >
> > Hi Sughosh,
> >
> > On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu <sughosh.ganu@linaro.org> wrote:
> > >
> > > Add a flags parameter to the LMB API functions. The parameter can then
> > > be used to pass any other type of reservations or allocations needed
> > > by the callers. These will be used in a subsequent set of changes for
> > > allocation requests coming from the EFI subsystem.
> > >
> > > Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
> > > ---
> > > Changes since rfc: New patch
> > >
> > >  arch/arm/mach-apple/board.c       |  17 ++--
> > >  arch/arm/mach-snapdragon/board.c  |   2 +-
> > >  arch/arm/mach-stm32mp/dram_init.c |   4 +-
> > >  arch/powerpc/cpu/mpc85xx/mp.c     |   2 +-
> > >  arch/powerpc/lib/bootm.c          |   2 +-
> > >  board/xilinx/common/board.c       |   4 +-
> > >  boot/bootm.c                      |   5 +-
> > >  boot/image-board.c                |  15 ++-
> > >  boot/image-fdt.c                  |  15 +--
> > >  cmd/booti.c                       |   2 +-
> > >  cmd/bootz.c                       |   2 +-
> > >  cmd/load.c                        |   4 +-
> > >  drivers/iommu/apple_dart.c        |   6 +-
> > >  drivers/iommu/sandbox_iommu.c     |   6 +-
> > >  fs/fs.c                           |   2 +-
> > >  include/lmb.h                     |  23 ++---
> > >  lib/lmb.c                         |  48 ++++------
> > >  test/lib/lmb.c                    | 150 +++++++++++++++---------------
> > >  18 files changed, 150 insertions(+), 159 deletions(-)
> >
> > This negates any code-size advantage of dropping the lmb parameter.
> >
> > All of these are LMB_NONE. Can we have a separate function (e.g.
> > lmb_alloc_type()) for when we actually need to specify the type?
>
> We will be passing different values when we call the LMB API's from
> the EFI allocation function. This is only adding a parameter to the
> allocation API's, which I believe is better than adding separate
> functions which take a flag parameter only to be called from the EFI
> subsystem.
No, I believe it is worse, unless there are a lot of such functions. The flags are a special case, not the common case.
I have done some size impact tests on the two scenarios, one where we have a common set of lmb allocation API functions, with an added flags parameter, and second where we have separate API's to be called from the EFI memory module. I have put out the results of the size impact [1].
You will see that with common API's, we are not losing much even on boards with EFI_LOADER disabled. But otoh, on boards which have EFI_LOADER enabled, the gains are pretty significant. I believe we should reconsider using a common LMB API with the flags parameter.
Thanks for looking at it.
Did you do special versions of just lmb_alloc() and lmb_add() which call the flags versions? It seems that there is no size advantage with EFI_LOADER and only a small one with !EFI_LOADER. Can you please point me to the code?
For the separate API version, I introduced new versions lmb_alloc_flags(), lmb_alloc_base_flags(), lmb_alloc_addr_flags() and lmb_free_flags(), which are being called from the EFI memory module. I have pushed the two branches [1] [2] on my github. Please take a look.
Btw, both these branches are based on your v5 of the alist patches, and also incorporate the stack based implementation for running the lmb tests. So except for either having common API's, or not, there are no other differences between the two. Thanks.
Thanks for the info.
The non-flags functions can call the flags functions, so that you don't create a new code path. Something like this:
diff --git a/lib/lmb.c b/lib/lmb.c
index 726e6c38227..0a251c587fe 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -528,7 +528,7 @@ long lmb_free_flags(phys_addr_t base, phys_size_t size,

 long lmb_free(phys_addr_t base, phys_size_t size)
 {
-	return __lmb_free(base, size);
+	return lmb_free_flags(base, size, LMB_NONE);
 }

 long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags)
@@ -624,7 +624,7 @@ static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,

 phys_addr_t lmb_alloc(phys_size_t size, ulong align)
 {
-	return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE);
+	return lmb_alloc_flags(size, align, LMB_NONE);
 }

 phys_addr_t lmb_alloc_flags(phys_size_t size, ulong align, uint flags)
@@ -635,15 +635,7 @@ phys_addr_t lmb_alloc_flags(phys_size_t size, ulong align, uint flags)

 phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 {
-	phys_addr_t alloc;
-
-	alloc = __lmb_alloc_base(size, align, max_addr, LMB_NONE);
-
-	if (alloc == 0)
-		printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
-		       (ulong)size, (ulong)max_addr);
-
-	return alloc;
+	return lmb_alloc_base_flags(size, align, max_addr, LMB_NONE);
 }

 phys_addr_t lmb_alloc_base_flags(phys_size_t size, ulong align,
@@ -691,7 +683,7 @@ static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size,
  */
 phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
 {
-	return __lmb_alloc_addr(base, size, LMB_NONE);
+	return lmb_alloc_addr_flags(base, size, LMB_NONE);
 }

 phys_addr_t lmb_alloc_addr_flags(phys_addr_t base, phys_size_t size,
But it only saves about 40 bytes on Thumb2. You can save another 16 by using the placeholder API:
Can you please explain the issue that you see with having a common set of API's with the flags parameter added? The way I see it, the API's are already undergoing a change where we are removing the struct lmb instance as a parameter from all the API functions. With the change that I propose, we are simply replacing the lmb instance parameter with the flags parameter. So arguably we are not adding any additional size here. Also, like the size tests show, we get a pretty good size benefit when the EFI_LOADER is enabled.
So, if your argument is to keep the API's similar to their earlier form, I think that they are undergoing a change in any case, so adding the flags parameter is not so much of an issue. If there is any problem that I am missing, I would like to understand that before we go with separate API's. Thanks.
I thought I explained this already, but perhaps not. We change APIs all the time, so that is not a problem.
Almost all calls don't need to pass flags since it is LMB_NONE (which really should be 0, BTW, not BIT(0)). We already have lmb_reserve_flags() and lmb_reserve() to deal with this difference. So passing a parameter which is almost always the same is not helpful for code size.
Outside the tests, only one place (boot/image-fdt.c) uses lmb_reserve_flags() - BTW some of these are using the flags version of the function but passing LMB_NONE.
Here are the five functions:
lmb_reserve lmb_alloc lmb_alloc_base lmb_alloc_addr lmb_free
If some of them aren't worth having two versions, then I suppose we don't need them all. But note that if some of my EFI patches make it through code review, we won't need this circular relationship between EFI and lmb, so some of the 'flags' versions can be dropped again.
But EFI only ever seems to pass LMB_NOOVERWRITE | LMB_NONOTIFY...?
Part of my frustration with all of this is that I created an lmb cleanup series[1] nearly a year ago, which was either ignored or blocked (I'm not sure which). That series tidied up the code quite a lot and took much effort. I'm not sure if you even saw it?
Finally, it would help the project if you could do some code reviews. For example, how about [2] and [3] - they are very much in your area.
Regards, Simon
[1] https://patchwork.ozlabs.org/project/uboot/list/?series=371258&state=* [2] https://patchwork.ozlabs.org/project/uboot/list/?series=417669 [3] https://patchwork.ozlabs.org/project/uboot/list/?series=418212

On Fri, 9 Aug 2024 at 21:28, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Fri, 9 Aug 2024 at 02:25, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Thu, 8 Aug 2024 at 19:58, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 7 Aug 2024 at 00:32, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Wed, 7 Aug 2024 at 03:21, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 5 Aug 2024 at 05:55, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 29 Jul 2024 at 20:56, Simon Glass sjg@chromium.org wrote: > > Hi Sughosh, > > On Mon, 29 Jul 2024 at 02:40, Sughosh Ganu sughosh.ganu@linaro.org wrote: > > > > On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote: > > > > > > Hi Sughosh, > > > > > > On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote: > > > > > > > > Add a flags parameter to the LMB API functions. The parameter can then > > > > be used to pass any other type of reservations or allocations needed > > > > by the callers. These will be used in a subsequent set of changes for > > > > allocation requests coming from the EFI subsystem. > > > > > > > > Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org > > > > --- > > > > Changes since rfc: New patch > > > > > > > > arch/arm/mach-apple/board.c | 17 ++-- > > > > arch/arm/mach-snapdragon/board.c | 2 +- > > > > arch/arm/mach-stm32mp/dram_init.c | 4 +- > > > > arch/powerpc/cpu/mpc85xx/mp.c | 2 +- > > > > arch/powerpc/lib/bootm.c | 2 +- > > > > board/xilinx/common/board.c | 4 +- > > > > boot/bootm.c | 5 +- > > > > boot/image-board.c | 15 ++- > > > > boot/image-fdt.c | 15 +-- > > > > cmd/booti.c | 2 +- > > > > cmd/bootz.c | 2 +- > > > > cmd/load.c | 4 +- > > > > drivers/iommu/apple_dart.c | 6 +- > > > > drivers/iommu/sandbox_iommu.c | 6 +- > > > > fs/fs.c | 2 +- > > > > include/lmb.h | 23 ++--- > > > > lib/lmb.c | 48 ++++------ > > > > test/lib/lmb.c | 150 +++++++++++++++--------------- > > > > 18 files changed, 150 insertions(+), 159 deletions(-) > > > > > > This negates any code-size advantage of dropping the lmb parameter. > > > > > > All of these are LMB_NONE. Can we have a separate function (e.g. > > > lmb_alloc_type()) for when we actually need to specify the type? > > > > We will be passing different values when we call the LMB API's from > > the EFI allocation function. 
This is only adding a parameter to the > > allocation API's, which I believe is better than adding separate > > functions which take a flag parameter only to be called from the EFI > > subsystem. > > No i believe it is worse, unless there are a lot of such functions. > The flags are a special case, not the common case.
I have done some size impact tests on the two scenarios, one where we have a common set of lmb allocation API functions, with an added flags parameter, and second where we have separate API's to be called from the EFI memory module. I have put out the results of the size impact [1].
You will see that with common API's, we are not losing much even on boards with EFI_LOADER disabled. But otoh, on boards which have EFI_LOADER enabled, the gains are pretty significant. I believe we should reconsider using a common LMB API with the flags parameter.
Thanks for looking at it.
Did you do special versions of just lmb_alloc() and lmb_add() which call the flags versions? It seems that there is no size advantage with EFI_LOADER and only a small one with !EFI_LOADER. Can you please point me to the code?
For the separate API version, I introduced new versions lmb_alloc_flags(), lmb_alloc_base_flags(), lmb_alloc_addr_flags() and lmb_free_flags(), which are being called from the EFI memory module. I have pushed the two branches [1] [2] on my github. Please take a look.
Btw, both these branches are based on your v5 of the alist patches, and also incorporate the stack based implementation for running the lmb tests. So except for either having common API's, or not, there are no other differences between the two. Thanks.
Thanks for the info.
The non-flags functions can call the flags functions, so that you don't create a new code path. Something like this:
diff --git a/lib/lmb.c b/lib/lmb.c index 726e6c38227..0a251c587fe 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -528,7 +528,7 @@ long lmb_free_flags(phys_addr_t base, phys_size_t size,
long lmb_free(phys_addr_t base, phys_size_t size) {
return __lmb_free(base, size);
return lmb_free_flags(base, size, LMB_NONE);
}
long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags) @@ -624,7 +624,7 @@ static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
phys_addr_t lmb_alloc(phys_size_t size, ulong align) {
return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE);
return lmb_alloc_flags(size, align, LMB_NONE);
}
phys_addr_t lmb_alloc_flags(phys_size_t size, ulong align, uint flags) @@ -635,15 +635,7 @@ phys_addr_t lmb_alloc_flags(phys_size_t size, ulong align, uint flags)
phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr) {
phys_addr_t alloc;
alloc = __lmb_alloc_base(size, align, max_addr, LMB_NONE);
if (alloc == 0)
printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
(ulong)size, (ulong)max_addr);
return alloc;
return lmb_alloc_base_flags(size, align, max_addr, LMB_NONE);
}
phys_addr_t lmb_alloc_base_flags(phys_size_t size, ulong align, @@ -691,7 +683,7 @@ static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size, */ phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size) {
return __lmb_alloc_addr(base, size, LMB_NONE);
return lmb_alloc_addr_flags(base, size, LMB_NONE);
}
phys_addr_t lmb_alloc_addr_flags(phys_addr_t base, phys_size_t size,
But it only saves about 40 bytes on Thumb2. You can save another 16 by using the placeholder API:
Can you please explain the issue that you see with having a common set of API's with the flags parameter added? The way I see it, the API's are already undergoing a change where we are removing the struct lmb instance as a parameter from all the API functions. With the change that I propose, we are simply replacing the lmb instance parameter with the flags parameter. So arguably we are not adding any additional size here. Also, like the size tests show, we get a pretty good size benefit when the EFI_LOADER is enabled.
So, if your argument is to keep the API's similar to their earlier form, I think that they are undergoing a change in any case, so adding the flags parameter is not so much of an issue. If there is any problem that I am missing, I would like to understand that before we go with separate API's. Thanks.
I thought I explained this already, but perhaps not. We change APIs all the time, so that is not a problem.
Almost all calls don't need to pass flags since it is LMB_NONE (which really should be 0, BTW, not BIT(0)). We already have lmb_reserve_flags() and lmb_reserve() to deal with this difference. So passing a parameter which is almost always the same is not helpful for code size.
Yes, in my earlier response, I did mention this aspect. The size impact is not very high on non-EFI platforms, while pretty good with platforms that enable EFI_LOADER. But since this is your view, I will go with the separate API version so that we make progress.
Outside the tests, only one place (boot/image-fdt.c) uses lmb_reserve_flags() - BTW some of these are using the flags version of the function but passing LMB_NONE.
Here are the five functions:
lmb_reserve lmb_alloc lmb_alloc_base lmb_alloc_addr lmb_free
If some of them aren't worth having two versions, then I suppose we don't need them all. But note that if some of my EFI patches make it through code review, we won't need this circular relationship between EFI and lmb, so some of the 'flags' versions can be dropped again.
Your patch is tweaking the efi_allocate_pool() whereas the changes that I am making are in the efi_allocate_pages(). So even if your approach gets accepted, this will still be needed for efi_allocate_pages() API.
But EFI only ever seems to pass LMB_NOOVERWRITE | LMB_NONOTIFY...?
Part of my frustration with all of this is that I created an lmb cleanup series[1] nearly a year ago, which was either ignored or blocked (I'm not sure which). That series tidied up the code quite a lot and took much effort. I'm not sure if you even saw it?
I wasn't aware of these patches. I started looking into the LMB/EFI memory management only a few months back.
Finally, it would help the project if you could do some code reviews. For example, how about [2] and [3] - they are very much in your area.
Okay, will do. I will check the patches in detail, but I believe that patch 2 looks fine, but I have concerns about using malloc for efi_allocate_pool(). I will respond to the patch though. Thanks.
-sughosh
Regards, Simon
[1] https://patchwork.ozlabs.org/project/uboot/list/?series=371258&state=* [2] https://patchwork.ozlabs.org/project/uboot/list/?series=417669 [3] https://patchwork.ozlabs.org/project/uboot/list/?series=418212

Hi Sughosh,
On Wed, 14 Aug 2024 at 02:13, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Your patch is tweaking the efi_allocate_pool() whereas the changes that I am making are in the efi_allocate_pages(). So even if your approach gets accepted, this will still be needed for efi_allocate_pages() API.
Yes, I am not trying to change or remove APIs.
But EFI only ever seems to pass LMB_NOOVERWRITE | LMB_NONOTIFY...?
Part of my frustration with all of this is that I created an lmb cleanup series[1] nearly a year ago, which was either ignored or blocked (I'm not sure which). That series tidied up the code quite a lot and took much effort. I'm not sure if you even saw it?
I wasn't aware of these patches. I started looking into the LMB/EFI memory management only a few months back.
Finally, it would help the project if you could do some code reviews. For example, how about [2] and [3] - they are very much in your area.
Okay, will do. I will check the patches in detail, but I believe that patch 2 looks fine, but I have concerns about using malloc for efi_allocate_pool(). I will respond to the patch though. Thanks.
Thank you.
Regards, Simon
[1] https://patchwork.ozlabs.org/project/uboot/list/?series=371258&state=* [2] https://patchwork.ozlabs.org/project/uboot/list/?series=417669 [3] https://patchwork.ozlabs.org/project/uboot/list/?series=418212

Almost all of the current definitions of arch_lmb_reserve() do the same thing; the only difference in a couple of cases is the alignment requirement. Provide a generic weak implementation of this function, using the highest alignment value currently in use (16K).
Also, instead of using the current value of the stack pointer as the start of the reserved region, use a fixed value derived from the stack size config value.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since rfc: None

 arch/arc/lib/cache.c        | 14 --------------
 arch/arm/lib/stack.c        | 14 --------------
 arch/m68k/lib/bootm.c       | 17 -----------------
 arch/microblaze/lib/bootm.c | 14 --------------
 arch/mips/lib/bootm.c       | 15 ---------------
 arch/nios2/lib/bootm.c      | 13 -------------
 arch/powerpc/lib/bootm.c    | 13 +++----------
 arch/riscv/lib/bootm.c      | 13 -------------
 arch/sh/lib/bootm.c         | 13 -------------
 arch/x86/lib/bootm.c        | 18 ------------------
 arch/xtensa/lib/bootm.c     | 13 -------------
 lib/lmb.c                   |  6 +++++-
 12 files changed, 8 insertions(+), 155 deletions(-)
diff --git a/arch/arc/lib/cache.c b/arch/arc/lib/cache.c
index 5151af917a..5169fc627f 100644
--- a/arch/arc/lib/cache.c
+++ b/arch/arc/lib/cache.c
@@ -10,7 +10,6 @@
 #include <linux/compiler.h>
 #include <linux/kernel.h>
 #include <linux/log2.h>
-#include <lmb.h>
 #include <asm/arcregs.h>
 #include <asm/arc-bcr.h>
 #include <asm/cache.h>
@@ -820,16 +819,3 @@ void sync_n_cleanup_cache_all(void)
 
 	__ic_entire_invalidate();
 }
-
-static ulong get_sp(void)
-{
-	ulong ret;
-
-	asm("mov %0, sp" : "=r"(ret) : );
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
-}
diff --git a/arch/arm/lib/stack.c b/arch/arm/lib/stack.c
index 87d5c962d7..2b21ec0734 100644
--- a/arch/arm/lib/stack.c
+++ b/arch/arm/lib/stack.c
@@ -11,7 +11,6 @@
  * Marius Groeger mgroeger@sysgo.de
  */
 #include <init.h>
-#include <lmb.h>
 #include <asm/global_data.h>
 
 DECLARE_GLOBAL_DATA_PTR;
@@ -33,16 +32,3 @@ int arch_reserve_stacks(void)
 
 	return 0;
 }
-
-static ulong get_sp(void)
-{
-	ulong ret;
-
-	asm("mov %0, sp" : "=r"(ret) : );
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 16384);
-}
diff --git a/arch/m68k/lib/bootm.c b/arch/m68k/lib/bootm.c
index eb220d178d..06854e1442 100644
--- a/arch/m68k/lib/bootm.c
+++ b/arch/m68k/lib/bootm.c
@@ -9,7 +9,6 @@
 #include <command.h>
 #include <env.h>
 #include <image.h>
-#include <lmb.h>
 #include <log.h>
 #include <asm/global_data.h>
 #include <u-boot/zlib.h>
@@ -27,14 +26,8 @@ DECLARE_GLOBAL_DATA_PTR;
 #define LINUX_MAX_ENVS		256
 #define LINUX_MAX_ARGS		256
 
-static ulong get_sp (void);
 static void set_clocks_in_mhz (struct bd_info *kbd);
 
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 1024);
-}
-
 int do_bootm_linux(int flag, struct bootm_info *bmi)
 {
 	struct bootm_headers *images = bmi->images;
@@ -88,16 +81,6 @@ error:
 	return 1;
 }
 
-static ulong get_sp (void)
-{
-	ulong sp;
-
-	asm("movel %%a7, %%d0\n"
-	    "movel %%d0, %0\n": "=d"(sp): :"%d0");
-
-	return sp;
-}
-
 static void set_clocks_in_mhz (struct bd_info *kbd)
 {
 	char *s;
diff --git a/arch/microblaze/lib/bootm.c b/arch/microblaze/lib/bootm.c
index ce96bca28f..4879a41aab 100644
--- a/arch/microblaze/lib/bootm.c
+++ b/arch/microblaze/lib/bootm.c
@@ -15,7 +15,6 @@
 #include <fdt_support.h>
 #include <hang.h>
 #include <image.h>
-#include <lmb.h>
 #include <log.h>
 #include <asm/cache.h>
 #include <asm/global_data.h>
@@ -24,19 +23,6 @@
 
 DECLARE_GLOBAL_DATA_PTR;
 
-static ulong get_sp(void)
-{
-	ulong ret;
-
-	asm("addik %0, r1, 0" : "=r"(ret) : );
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
-}
-
 static void boot_jump_linux(struct bootm_headers *images, int flag)
 {
 	void (*thekernel)(char *cmdline, ulong rd, ulong dt);
diff --git a/arch/mips/lib/bootm.c b/arch/mips/lib/bootm.c
index 8fb3a3923f..8719510002 100644
--- a/arch/mips/lib/bootm.c
+++ b/arch/mips/lib/bootm.c
@@ -9,7 +9,6 @@
 #include <env.h>
 #include <image.h>
 #include <fdt_support.h>
-#include <lmb.h>
 #include <log.h>
 #include <asm/addrspace.h>
 #include <asm/global_data.h>
@@ -28,20 +27,6 @@ static char **linux_env;
 static char *linux_env_p;
 static int linux_env_idx;
 
-static ulong arch_get_sp(void)
-{
-	ulong ret;
-
-	__asm__ __volatile__("move %0, $sp" : "=r"(ret) : );
-
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(arch_get_sp(), gd->ram_top, 4096);
-}
-
 static void linux_cmdline_init(void)
 {
 	linux_argc = 1;
diff --git a/arch/nios2/lib/bootm.c b/arch/nios2/lib/bootm.c
index d33d45d28f..71319839ba 100644
--- a/arch/nios2/lib/bootm.c
+++ b/arch/nios2/lib/bootm.c
@@ -64,16 +64,3 @@ int do_bootm_linux(int flag, struct bootm_info *bmi)
 
 	return 1;
 }
-
-static ulong get_sp(void)
-{
-	ulong ret;
-
-	asm("mov %0, sp" : "=r"(ret) : );
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
-}
diff --git a/arch/powerpc/lib/bootm.c b/arch/powerpc/lib/bootm.c
index 350cddf1dd..00164ba14e 100644
--- a/arch/powerpc/lib/bootm.c
+++ b/arch/powerpc/lib/bootm.c
@@ -37,7 +37,6 @@
 
 DECLARE_GLOBAL_DATA_PTR;
 
-static ulong get_sp (void);
 extern void ft_fixup_num_cores(void *blob);
 static void set_clocks_in_mhz (struct bd_info *kbd);
 
@@ -118,6 +117,7 @@ static void boot_jump_linux(struct bootm_headers *images)
 
 void arch_lmb_reserve(void)
 {
+	phys_addr_t rsv_start;
 	phys_size_t bootm_size;
 	ulong size, bootmap_base;
 
@@ -142,7 +142,8 @@ void arch_lmb_reserve(void)
 		lmb_reserve(base, bootm_size - size, LMB_NONE);
 	}
 
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
+	rsv_start = gd->start_addr_sp - CONFIG_STACK_SIZE;
+	arch_lmb_reserve_generic(rsv_start, gd->ram_top, 4096);
 
 #ifdef CONFIG_MP
 	cpu_mp_lmb_reserve();
@@ -250,14 +251,6 @@ int do_bootm_linux(int flag, struct bootm_info *bmi)
 	return 0;
 }
 
-static ulong get_sp (void)
-{
-	ulong sp;
-
-	asm( "mr %0,1": "=r"(sp) : );
-	return sp;
-}
-
 static void set_clocks_in_mhz (struct bd_info *kbd)
 {
 	char *s;
diff --git a/arch/riscv/lib/bootm.c b/arch/riscv/lib/bootm.c
index bbf62f9e05..82502972ee 100644
--- a/arch/riscv/lib/bootm.c
+++ b/arch/riscv/lib/bootm.c
@@ -133,16 +133,3 @@ int do_bootm_vxworks(int flag, struct bootm_info *bmi)
 {
 	return do_bootm_linux(flag, bmi);
 }
-
-static ulong get_sp(void)
-{
-	ulong ret;
-
-	asm("mv %0, sp" : "=r"(ret) : );
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
-}
diff --git a/arch/sh/lib/bootm.c b/arch/sh/lib/bootm.c
index 44ac05988c..bb0f59e0aa 100644
--- a/arch/sh/lib/bootm.c
+++ b/arch/sh/lib/bootm.c
@@ -101,16 +101,3 @@ int do_bootm_linux(int flag, struct bootm_info *bmi)
 	/* does not return */
 	return 1;
 }
-
-static ulong get_sp(void)
-{
-	ulong ret;
-
-	asm("mov r15, %0" : "=r"(ret) : );
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
-}
diff --git a/arch/x86/lib/bootm.c b/arch/x86/lib/bootm.c
index 114b31012e..55f581836d 100644
--- a/arch/x86/lib/bootm.c
+++ b/arch/x86/lib/bootm.c
@@ -253,21 +253,3 @@ int do_bootm_linux(int flag, struct bootm_info *bmi)
 
 	return boot_jump_linux(images);
 }
-
-static ulong get_sp(void)
-{
-	ulong ret;
-
-#if CONFIG_IS_ENABLED(X86_64)
-	asm("mov %%rsp, %0" : "=r"(ret) : );
-#else
-	asm("mov %%esp, %0" : "=r"(ret) : );
-#endif
-
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
-}
diff --git a/arch/xtensa/lib/bootm.c b/arch/xtensa/lib/bootm.c
index bdbd6d4692..2958f20739 100644
--- a/arch/xtensa/lib/bootm.c
+++ b/arch/xtensa/lib/bootm.c
@@ -197,16 +197,3 @@ int do_bootm_linux(int flag, struct bootm_info *bmi)
 
 	return 1;
 }
-
-static ulong get_sp(void)
-{
-	ulong ret;
-
-	asm("mov %0, a1" : "=r"(ret) : );
-	return ret;
-}
-
-void arch_lmb_reserve(void)
-{
-	arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
-}
diff --git a/lib/lmb.c b/lib/lmb.c
index ed5988bb1b..8ca4fd95c6 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -731,7 +731,11 @@ __weak void board_lmb_reserve(void)
 
 __weak void arch_lmb_reserve(void)
 {
-	/* please define platform specific arch_lmb_reserve() */
+	phys_addr_t rsv_start;
+
+	rsv_start = gd->start_addr_sp - CONFIG_STACK_SIZE;
+
+	arch_lmb_reserve_generic(rsv_start, gd->ram_top, 16384);
 }
 
 /**

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Almost all of the current definitions of arch_lmb_reserve() are doing the same thing. The only exception in a couple of cases is the alignment parameter requirement. Have a generic weak implementation of this function, keeping the highest value of alignment that is being used(16K).
Also, instead of using the current value of stack pointer for starting the reserved region, have a fixed value, considering the stack size config value.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
arch/arc/lib/cache.c | 14 -------------- arch/arm/lib/stack.c | 14 -------------- arch/m68k/lib/bootm.c | 17 ----------------- arch/microblaze/lib/bootm.c | 14 -------------- arch/mips/lib/bootm.c | 15 --------------- arch/nios2/lib/bootm.c | 13 ------------- arch/powerpc/lib/bootm.c | 13 +++---------- arch/riscv/lib/bootm.c | 13 ------------- arch/sh/lib/bootm.c | 13 ------------- arch/x86/lib/bootm.c | 18 ------------------ arch/xtensa/lib/bootm.c | 13 ------------- lib/lmb.c | 6 +++++- 12 files changed, 8 insertions(+), 155 deletions(-)
How about not having a weak function? I have to wonder whether powerpc really needs to be different? If it does, I suppose we could use an event to deal with powerpc.
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Almost all of the current definitions of arch_lmb_reserve() are doing the same thing. The only exception in a couple of cases is the alignment parameter requirement. Have a generic weak implementation of this function, keeping the highest value of alignment that is being used(16K).
Also, instead of using the current value of stack pointer for starting the reserved region, have a fixed value, considering the stack size config value.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
arch/arc/lib/cache.c | 14 -------------- arch/arm/lib/stack.c | 14 -------------- arch/m68k/lib/bootm.c | 17 ----------------- arch/microblaze/lib/bootm.c | 14 -------------- arch/mips/lib/bootm.c | 15 --------------- arch/nios2/lib/bootm.c | 13 ------------- arch/powerpc/lib/bootm.c | 13 +++---------- arch/riscv/lib/bootm.c | 13 ------------- arch/sh/lib/bootm.c | 13 ------------- arch/x86/lib/bootm.c | 18 ------------------ arch/xtensa/lib/bootm.c | 13 ------------- lib/lmb.c | 6 +++++- 12 files changed, 8 insertions(+), 155 deletions(-)
How about not having a weak function? I have to wonder whether powerpc really needs to be different? If it does, I suppose we could use an event to deal with powerpc.
Again, I have the same question about weak functions. It does not seem to be a universal policy.
-sughosh
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:42, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Almost all of the current definitions of arch_lmb_reserve() are doing the same thing. The only exception in a couple of cases is the alignment parameter requirement. Have a generic weak implementation of this function, keeping the highest value of alignment that is being used(16K).
Also, instead of using the current value of stack pointer for starting the reserved region, have a fixed value, considering the stack size config value.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
arch/arc/lib/cache.c | 14 -------------- arch/arm/lib/stack.c | 14 -------------- arch/m68k/lib/bootm.c | 17 ----------------- arch/microblaze/lib/bootm.c | 14 -------------- arch/mips/lib/bootm.c | 15 --------------- arch/nios2/lib/bootm.c | 13 ------------- arch/powerpc/lib/bootm.c | 13 +++---------- arch/riscv/lib/bootm.c | 13 ------------- arch/sh/lib/bootm.c | 13 ------------- arch/x86/lib/bootm.c | 18 ------------------ arch/xtensa/lib/bootm.c | 13 ------------- lib/lmb.c | 6 +++++- 12 files changed, 8 insertions(+), 155 deletions(-)
How about not having a weak function? I have to wonder whether powerpc really needs to be different? If it does, I suppose we could use an event to deal with powerpc.
Again, I have the same question about weak functions. It does not seem to be a universal policy.
Perhaps the powerpc maintainer can help figure this out, with answers to the questions I posed?
Regards, Simon

The TCG event log buffer is being set at the end of ram memory. This region of memory is to be reserved as LMB_NOMAP memory in the LMB memory map. The current location of this buffer overlaps with the memory region reserved for the U-Boot image, which is at the top of the usable memory. This worked earlier as the LMB memory map was not global but caller specific, but fails now because of the overlap. Move the TCG event log buffer to the start of the ram memory region instead.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: New patch
 arch/sandbox/dts/test.dts | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sandbox/dts/test.dts b/arch/sandbox/dts/test.dts
index 5fb5eac862..1e1e9eaeb5 100644
--- a/arch/sandbox/dts/test.dts
+++ b/arch/sandbox/dts/test.dts
@@ -78,7 +78,7 @@
 
 	event_log: tcg_event_log {
 		no-map;
-		reg = <(CFG_SYS_SDRAM_SIZE - 0x2000) 0x2000>;
+		reg = <(CFG_SYS_SDRAM_BASE + 0x1000) 0x2000>;
 	};
 };

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The TCG event log buffer is being set at the end of ram memory. This region of memory is to be reserved as LMB_NOMAP memory in the LMB memory map. The current location of this buffer overlaps with the memory region reserved for the U-Boot image, which is at the top of the usable memory. This worked earlier as the LMB memory map was not global but caller specific, but fails now because of the overlap. Move the TCG event log buffer to the start of the ram memory region instead.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
arch/sandbox/dts/test.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sandbox/dts/test.dts b/arch/sandbox/dts/test.dts index 5fb5eac862..1e1e9eaeb5 100644 --- a/arch/sandbox/dts/test.dts +++ b/arch/sandbox/dts/test.dts @@ -78,7 +78,7 @@
event_log: tcg_event_log { no-map;
reg = <(CFG_SYS_SDRAM_SIZE - 0x2000) 0x2000>;
reg = <(CFG_SYS_SDRAM_BASE + 0x1000) 0x2000>; }; };
It looks like this will conflict.
Please check and update the memory map at the end of doc/arch/sandbox/sandbox.rst
Separate from this patch, I really think the TCG event log should go in a bloblist. I can have a crack at that if you like.
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The TCG event log buffer is being set at the end of ram memory. This region of memory is to be reserved as LMB_NOMAP memory in the LMB memory map. The current location of this buffer overlaps with the memory region reserved for the U-Boot image, which is at the top of the usable memory. This worked earlier as the LMB memory map was not global but caller specific, but fails now because of the overlap. Move the TCG event log buffer to the start of the ram memory region instead.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
arch/sandbox/dts/test.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sandbox/dts/test.dts b/arch/sandbox/dts/test.dts index 5fb5eac862..1e1e9eaeb5 100644 --- a/arch/sandbox/dts/test.dts +++ b/arch/sandbox/dts/test.dts @@ -78,7 +78,7 @@
event_log: tcg_event_log { no-map;
reg = <(CFG_SYS_SDRAM_SIZE - 0x2000) 0x2000>;
reg = <(CFG_SYS_SDRAM_BASE + 0x1000) 0x2000>; }; };
It looks like this will conflict.
Can you please elaborate a bit here. I am not saying that I am not open to a different solution, but I did not get any CI errors with this change.
Please check and update the memory map at the end of doc/arch/sandbox/sandbox.rst
Separate from this patch, I really think the TCG event log should go in a bloblist. I can have a crack at that if you like.
That can be taken up as a separate activity.
-sughosh
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:45, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The TCG event log buffer is being set at the end of ram memory. This region of memory is to be reserved as LMB_NOMAP memory in the LMB memory map. The current location of this buffer overlaps with the memory region reserved for the U-Boot image, which is at the top of the usable memory. This worked earlier as the LMB memory map was not global but caller specific, but fails now because of the overlap. Move the TCG event log buffer to the start of the ram memory region instead.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
arch/sandbox/dts/test.dts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sandbox/dts/test.dts b/arch/sandbox/dts/test.dts index 5fb5eac862..1e1e9eaeb5 100644 --- a/arch/sandbox/dts/test.dts +++ b/arch/sandbox/dts/test.dts @@ -78,7 +78,7 @@
event_log: tcg_event_log { no-map;
reg = <(CFG_SYS_SDRAM_SIZE - 0x2000) 0x2000>;
reg = <(CFG_SYS_SDRAM_BASE + 0x1000) 0x2000>; }; };
It looks like this will conflict.
Can you please elaborate a bit here. I am not saying that I am not open to a different solution, but I did not get any CI errors with this change.
Please check and update the memory map at the end of doc/arch/sandbox/sandbox.rst
The address you are choosing is too low in memory...move it above the FDT and bloblist.
Separate from this patch, I really think the TCG event log should go in a bloblist. I can have a crack at that if you like.
That can be taken up as a separate activity.
Regards, Simon

The spl_board_init() function on sandbox invokes the unit tests. Invoking the tests should be done once the rest of the system has been initialised. Call the spl_board_init() function at the very end, once the rest of the initialisation functions have been called, including the setting up of the LMB memory map.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: New patch
 common/spl/spl.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/common/spl/spl.c b/common/spl/spl.c
index 38ac0608bb..891edde156 100644
--- a/common/spl/spl.c
+++ b/common/spl/spl.c
@@ -713,9 +713,6 @@ void board_init_r(gd_t *dummy1, ulong dummy2)
 	if (CONFIG_IS_ENABLED(SOC_INIT))
 		spl_soc_init();
 
-	if (CONFIG_IS_ENABLED(BOARD_INIT))
-		spl_board_init();
-
 	if (IS_ENABLED(CONFIG_SPL_WATCHDOG) && CONFIG_IS_ENABLED(WDT))
 		initr_watchdog();
 
@@ -733,6 +730,9 @@ void board_init_r(gd_t *dummy1, ulong dummy2)
 		/* Don't fail. We still can try other boot methods. */
 	}
 
+	if (CONFIG_IS_ENABLED(BOARD_INIT))
+		spl_board_init();
+
 	bootcount_inc();
/* Dump driver model states to aid analysis */

On Wed, 24 Jul 2024 at 00:04, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The spl_board_init() function on sandbox invokes the unit tests. Invoking the tests should be done once the rest of the system has been initialised. Call the spl_board_init() function at the very end, once the rest of the initialisation functions have been called, including the setting up of the LMB memory map.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
common/spl/spl.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

Initialise the ram bank information for sandbox in SPL. This is needed for initialising the LMB memory map as part of the platform init.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: New patch
 arch/sandbox/cpu/spl.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/sandbox/cpu/spl.c b/arch/sandbox/cpu/spl.c
index 9ad9da686c..22cd916685 100644
--- a/arch/sandbox/cpu/spl.c
+++ b/arch/sandbox/cpu/spl.c
@@ -121,6 +121,15 @@ static int load_from_image(struct spl_image_info *spl_image,
 }
 SPL_LOAD_IMAGE_METHOD("sandbox_image", 7, BOOT_DEVICE_BOARD, load_from_image);
 
+int dram_init_banksize(void)
+{
+	/* These are necessary so TFTP can use LMBs to check its load address */
+	gd->bd->bi_dram[0].start = gd->ram_base;
+	gd->bd->bi_dram[0].size = get_effective_memsize();
+
+	return 0;
+}
+
 void spl_board_init(void)
 {
 	struct sandbox_state *state = state_get_current();
@@ -128,10 +137,6 @@ void spl_board_init(void)
 	if (!CONFIG_IS_ENABLED(UNIT_TEST))
 		return;
 
-	/* These are necessary so TFTP can use LMBs to check its load address */
-	gd->bd->bi_dram[0].start = gd->ram_base;
-	gd->bd->bi_dram[0].size = get_effective_memsize();
-
 	if (state->run_unittests) {
 		struct unit_test *tests = UNIT_TEST_ALL_START();
 		const int count = UNIT_TEST_ALL_COUNT();

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Initialise the ram bank information for sandbox in SPL. This is needed for initialising the LMB memory map as part of the platform init.
I don't understand what is going on here...you are moving code into a function and should explain why.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
arch/sandbox/cpu/spl.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/sandbox/cpu/spl.c b/arch/sandbox/cpu/spl.c index 9ad9da686c..22cd916685 100644 --- a/arch/sandbox/cpu/spl.c +++ b/arch/sandbox/cpu/spl.c @@ -121,6 +121,15 @@ static int load_from_image(struct spl_image_info *spl_image, } SPL_LOAD_IMAGE_METHOD("sandbox_image", 7, BOOT_DEVICE_BOARD, load_from_image);
+int dram_init_banksize(void) +{
/* These are necessary so TFTP can use LMBs to check its load address */
gd->bd->bi_dram[0].start = gd->ram_base;
gd->bd->bi_dram[0].size = get_effective_memsize();
return 0;
+}
void spl_board_init(void) { struct sandbox_state *state = state_get_current(); @@ -128,10 +137,6 @@ void spl_board_init(void) if (!CONFIG_IS_ENABLED(UNIT_TEST)) return;
/* These are necessary so TFTP can use LMBs to check its load address */
gd->bd->bi_dram[0].start = gd->ram_base;
gd->bd->bi_dram[0].size = get_effective_memsize();
if (state->run_unittests) { struct unit_test *tests = UNIT_TEST_ALL_START(); const int count = UNIT_TEST_ALL_COUNT();
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Initialise the ram bank information for sandbox in SPL. This is needed for initialising the LMB memory map as part of the platform init.
I don't understand what is going on here...you are moving code into a function and should explain why.
Will do. The bd struct's dram values need to be set up before the lmb initialisation happens.
-sughosh
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
arch/sandbox/cpu/spl.c | 13 +++++++++---- 1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/sandbox/cpu/spl.c b/arch/sandbox/cpu/spl.c index 9ad9da686c..22cd916685 100644 --- a/arch/sandbox/cpu/spl.c +++ b/arch/sandbox/cpu/spl.c @@ -121,6 +121,15 @@ static int load_from_image(struct spl_image_info *spl_image, } SPL_LOAD_IMAGE_METHOD("sandbox_image", 7, BOOT_DEVICE_BOARD, load_from_image);
+int dram_init_banksize(void) +{
/* These are necessary so TFTP can use LMBs to check its load address */
gd->bd->bi_dram[0].start = gd->ram_base;
gd->bd->bi_dram[0].size = get_effective_memsize();
return 0;
+}
void spl_board_init(void) { struct sandbox_state *state = state_get_current(); @@ -128,10 +137,6 @@ void spl_board_init(void) if (!CONFIG_IS_ENABLED(UNIT_TEST)) return;
/* These are necessary so TFTP can use LMBs to check its load address */
gd->bd->bi_dram[0].start = gd->ram_base;
gd->bd->bi_dram[0].size = get_effective_memsize();
if (state->run_unittests) { struct unit_test *tests = UNIT_TEST_ALL_START(); const int count = UNIT_TEST_ALL_COUNT();
Regards, Simon

The LMB module provides allocation/reservation APIs, primarily for loading images to memory. This functionality is used by all boards. Make the config symbol used for the main U-Boot image a def_bool, enabling it by default.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: New patch
 lib/Kconfig | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/lib/Kconfig b/lib/Kconfig
index 6a9338390a..c7fa4bc77e 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -1099,9 +1099,7 @@ config LIB_ELF
 	  This supports for 32 bit and 64 bit versions.
 
 config LMB
-	bool "Enable the logical memory blocks library (lmb)"
-	default y if ARC || ARM || M68K || MICROBLAZE || MIPS || \
-		     NIOS2 || PPC || RISCV || SANDBOX || SH || X86 || XTENSA
+	def_bool y
 	help
 	  Support the library logical memory blocks. This will require
 	  a malloc() implementation for defining the data structures

On Wed, Jul 24, 2024 at 11:32:12AM +0530, Sughosh Ganu wrote:
The LMB module provides allocation/reservation API's, primarily for loading images to memory. This is functionality which is used by all boards. Make the config symbol used for the main U-Boot image as def_bool and enable it by default.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
lib/Kconfig | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
OK, I asked for this, but some platforms had special enough uses that they disabled it, and we get more size growth here too, so let's drop this concept for now.

On 7/24/24 22:55, Tom Rini wrote:
On Wed, Jul 24, 2024 at 11:32:12AM +0530, Sughosh Ganu wrote:
The LMB module provides allocation/reservation API's, primarily for loading images to memory. This is functionality which is used by all boards. Make the config symbol used for the main U-Boot image as def_bool and enable it by default.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
lib/Kconfig | 4 +--- 1 file changed, 1 insertion(+), 3 deletions(-)
OK, I asked for this, but, some platforms had special enough uses where they disabled it and so we get more growth here too, so, lets drop this concept for now.
Our mini configurations are disabling it because it consumes a lot of space and we don't need it. The commit message should definitely be updated.
Thanks, Michal

Enable the LMB config in SPL. This helps in testing the LMB code in SPL on sandbox.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc:
* Enable config for sandbox_noinst
 configs/sandbox_noinst_defconfig | 1 +
 configs/sandbox_spl_defconfig    | 1 +
 2 files changed, 2 insertions(+)
diff --git a/configs/sandbox_noinst_defconfig b/configs/sandbox_noinst_defconfig
index 3e5ef854f6..6150f55072 100644
--- a/configs/sandbox_noinst_defconfig
+++ b/configs/sandbox_noinst_defconfig
@@ -285,3 +285,4 @@ CONFIG_TPM=y
 CONFIG_ZSTD=y
 CONFIG_SPL_LZMA=y
 CONFIG_ERRNO_STR=y
+CONFIG_SPL_LMB=y
diff --git a/configs/sandbox_spl_defconfig b/configs/sandbox_spl_defconfig
index 2823bde492..3dd4c7ab43 100644
--- a/configs/sandbox_spl_defconfig
+++ b/configs/sandbox_spl_defconfig
@@ -253,3 +253,4 @@ CONFIG_ZSTD=y
 CONFIG_SPL_LZMA=y
 CONFIG_ERRNO_STR=y
 CONFIG_SPL_HEXDUMP=y
+CONFIG_SPL_LMB=y

The sandbox iommu driver uses the LMB module to allocate a particular range of memory for the device virtual address (DVA). This used to work earlier since the LMB memory map was caller specific and not global. But with the change to make the LMB allocations global and persistent, adding this memory range has other side effects. On the other hand, the sandbox iommu test expects to see this particular value of the DVA. Use the DVA address directly, instead of mapping it in the LMB memory map and then having it allocated.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: None
 drivers/iommu/sandbox_iommu.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c
index 505f2c3250..15ecaacbb3 100644
--- a/drivers/iommu/sandbox_iommu.c
+++ b/drivers/iommu/sandbox_iommu.c
@@ -5,10 +5,10 @@
 
 #include <dm.h>
 #include <iommu.h>
-#include <lmb.h>
 #include <asm/io.h>
 #include <linux/sizes.h>
 
+#define DVA_ADDR	0x89abc000
 #define IOMMU_PAGE_SIZE	SZ_4K
 
 static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr,
@@ -20,8 +20,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr,
 	paddr = ALIGN_DOWN(virt_to_phys(addr), IOMMU_PAGE_SIZE);
 	off = virt_to_phys(addr) - paddr;
 	psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
-
-	dva = lmb_alloc(psize, IOMMU_PAGE_SIZE, LMB_NONE);
+	dva = (phys_addr_t)DVA_ADDR;
 
 	return dva + off;
 }
@@ -35,8 +34,6 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr,
 	dva = ALIGN_DOWN(addr, IOMMU_PAGE_SIZE);
 	psize = size + (addr - dva);
 	psize = ALIGN(psize, IOMMU_PAGE_SIZE);
-
-	lmb_free(dva, psize, LMB_NONE);
 }
 
 static struct iommu_ops sandbox_iommu_ops = {
@@ -46,8 +43,6 @@ static struct iommu_ops sandbox_iommu_ops = {
 
 static int sandbox_iommu_probe(struct udevice *dev)
 {
-	lmb_add(0x89abc000, SZ_16K, LMB_NONE);
-
 	return 0;
 }

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The sandbox iommu driver uses the LMB module to allocate a particular range of memory for the device virtual address(DVA). This used to work earlier since the LMB memory map was caller specific and not global. But with the change to make the LMB allocations global and persistent, adding this memory range has other side effects. On the other hand, the sandbox iommu test expects to see this particular value of the DVA. Use the DVA address directly, instead of mapping it in the LMB memory map, and then have it allocated.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
drivers/iommu/sandbox_iommu.c | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index 505f2c3250..15ecaacbb3 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -5,10 +5,10 @@
#include <dm.h> #include <iommu.h> -#include <lmb.h> #include <asm/io.h> #include <linux/sizes.h>
+#define DVA_ADDR 0x89abc000
This is a very strange address. You can put this in arch/sandbox/include/asm/test.h and check it in the test. I suggest using something like 0x12345678 so it is obviously meaningless.
#define IOMMU_PAGE_SIZE SZ_4K
static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, @@ -20,8 +20,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, paddr = ALIGN_DOWN(virt_to_phys(addr), IOMMU_PAGE_SIZE); off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
dva = lmb_alloc(psize, IOMMU_PAGE_SIZE, LMB_NONE);
dva = (phys_addr_t)DVA_ADDR; return dva + off;
} @@ -35,8 +34,6 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, dva = ALIGN_DOWN(addr, IOMMU_PAGE_SIZE); psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE);
lmb_free(dva, psize, LMB_NONE);
}
static struct iommu_ops sandbox_iommu_ops = { @@ -46,8 +43,6 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) {
lmb_add(0x89abc000, SZ_16K, LMB_NONE);
return 0;
}
If this function is empty, you should remove it.
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The sandbox iommu driver uses the LMB module to allocate a particular range of memory for the device virtual address(DVA). This used to work earlier since the LMB memory map was caller specific and not global. But with the change to make the LMB allocations global and persistent, adding this memory range has other side effects. On the other hand, the sandbox iommu test expects to see this particular value of the DVA. Use the DVA address directly, instead of mapping it in the LMB memory map, and then have it allocated.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
drivers/iommu/sandbox_iommu.c | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index 505f2c3250..15ecaacbb3 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -5,10 +5,10 @@
#include <dm.h> #include <iommu.h> -#include <lmb.h> #include <asm/io.h> #include <linux/sizes.h>
+#define DVA_ADDR 0x89abc000
This is a very strange address. You can put this in arch/sandbox/include/asm/test.h and check it in the test. I suggest using something like 0x12345678 so it is obviously meaningless.
You will see that I am using the same address that is being used currently. I did not want to make any changes to what is working, nor did I want to spend time debugging if this logic can be improved -- that is not the purpose of this series.
#define IOMMU_PAGE_SIZE SZ_4K
static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, @@ -20,8 +20,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, paddr = ALIGN_DOWN(virt_to_phys(addr), IOMMU_PAGE_SIZE); off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
dva = lmb_alloc(psize, IOMMU_PAGE_SIZE, LMB_NONE);
dva = (phys_addr_t)DVA_ADDR; return dva + off;
} @@ -35,8 +34,6 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, dva = ALIGN_DOWN(addr, IOMMU_PAGE_SIZE); psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE);
lmb_free(dva, psize, LMB_NONE);
}
static struct iommu_ops sandbox_iommu_ops = { @@ -46,8 +43,6 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) {
lmb_add(0x89abc000, SZ_16K, LMB_NONE);
return 0;
}
If this function is empty, you should remove it.
Okay.
-sughosh
-- 2.34.1
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:52, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The sandbox iommu driver uses the LMB module to allocate a particular range of memory for the device virtual address (DVA). This used to work earlier since the LMB memory map was caller-specific and not global. But with the change to make the LMB allocations global and persistent, adding this memory range has other side effects. On the other hand, the sandbox iommu test expects to see this particular value of the DVA. Use the DVA address directly, instead of adding it to the LMB memory map and then having it allocated.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
drivers/iommu/sandbox_iommu.c | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index 505f2c3250..15ecaacbb3 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -5,10 +5,10 @@
#include <dm.h> #include <iommu.h> -#include <lmb.h> #include <asm/io.h> #include <linux/sizes.h>
+#define DVA_ADDR 0x89abc000
This is a very strange address. You can put this in arch/sandbox/include/asm/test.h and check it in the test. I suggest using something like 0x12345678 so it is obviously meaningless.
You will see that I am using the same address that is being used currently. I did not want to make any changes to what is working, nor did I want to spend time debugging if this logic can be improved -- that is not the purpose of this series.
Please just move it. You can keep the same address, but put the constant in the header file.
#define IOMMU_PAGE_SIZE SZ_4K
static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, @@ -20,8 +20,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, paddr = ALIGN_DOWN(virt_to_phys(addr), IOMMU_PAGE_SIZE); off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
dva = lmb_alloc(psize, IOMMU_PAGE_SIZE, LMB_NONE);
dva = (phys_addr_t)DVA_ADDR; return dva + off;
} @@ -35,8 +34,6 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, dva = ALIGN_DOWN(addr, IOMMU_PAGE_SIZE); psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE);
lmb_free(dva, psize, LMB_NONE);
}
static struct iommu_ops sandbox_iommu_ops = { @@ -46,8 +43,6 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) {
lmb_add(0x89abc000, SZ_16K, LMB_NONE);
return 0;
}
If this function is empty, you should remove it.
Okay.
Regards, Simon

On Mon, 29 Jul 2024 at 20:58, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 29 Jul 2024 at 02:52, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The sandbox iommu driver uses the LMB module to allocate a particular range of memory for the device virtual address (DVA). This used to work earlier since the LMB memory map was caller-specific and not global. But with the change to make the LMB allocations global and persistent, adding this memory range has other side effects. On the other hand, the sandbox iommu test expects to see this particular value of the DVA. Use the DVA address directly, instead of adding it to the LMB memory map and then having it allocated.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
drivers/iommu/sandbox_iommu.c | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index 505f2c3250..15ecaacbb3 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -5,10 +5,10 @@
#include <dm.h> #include <iommu.h> -#include <lmb.h> #include <asm/io.h> #include <linux/sizes.h>
+#define DVA_ADDR 0x89abc000
This is a very strange address. You can put this in arch/sandbox/include/asm/test.h and check it in the test. I suggest using something like 0x12345678 so it is obviously meaningless.
You will see that I am using the same address that is being used currently. I did not want to make any changes to what is working, nor did I want to spend time debugging if this logic can be improved -- that is not the purpose of this series.
Please just move it. You can keep the same address, but put the constant in the header file.
Looking into this, there isn't an appropriate header file where I can put this constant, and creating one only for a macro seems like overkill. Can you propose a header file where I can put this? If there isn't an existing one, maybe I can keep the macro as is?
-sughosh
#define IOMMU_PAGE_SIZE SZ_4K
static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, @@ -20,8 +20,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, paddr = ALIGN_DOWN(virt_to_phys(addr), IOMMU_PAGE_SIZE); off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
dva = lmb_alloc(psize, IOMMU_PAGE_SIZE, LMB_NONE);
dva = (phys_addr_t)DVA_ADDR; return dva + off;
} @@ -35,8 +34,6 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, dva = ALIGN_DOWN(addr, IOMMU_PAGE_SIZE); psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE);
lmb_free(dva, psize, LMB_NONE);
}
static struct iommu_ops sandbox_iommu_ops = { @@ -46,8 +43,6 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) {
lmb_add(0x89abc000, SZ_16K, LMB_NONE);
return 0;
}
If this function is empty, you should remove it.
Okay.
Regards, Simon

Hi Sughosh,
On Fri, 2 Aug 2024 at 01:44, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 29 Jul 2024 at 20:58, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 29 Jul 2024 at 02:52, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The sandbox iommu driver uses the LMB module to allocate a particular range of memory for the device virtual address (DVA). This used to work earlier since the LMB memory map was caller-specific and not global. But with the change to make the LMB allocations global and persistent, adding this memory range has other side effects. On the other hand, the sandbox iommu test expects to see this particular value of the DVA. Use the DVA address directly, instead of adding it to the LMB memory map and then having it allocated.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
drivers/iommu/sandbox_iommu.c | 9 ++------- 1 file changed, 2 insertions(+), 7 deletions(-)
diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index 505f2c3250..15ecaacbb3 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -5,10 +5,10 @@
#include <dm.h> #include <iommu.h> -#include <lmb.h> #include <asm/io.h> #include <linux/sizes.h>
+#define DVA_ADDR 0x89abc000
This is a very strange address. You can put this in arch/sandbox/include/asm/test.h and check it in the test. I suggest using something like 0x12345678 so it is obviously meaningless.
You will see that I am using the same address that is being used currently. I did not want to make any changes to what is working, nor did I want to spend time debugging if this logic can be improved -- that is not the purpose of this series.
Please just move it. You can keep the same address, but put the constant in the header file.
Looking into this, there isn't an appropriate header file where I can put this constant, and creating one only for a macro seems like overkill. Can you propose a header file where I can put this? If there isn't an existing one, maybe I can keep the macro as is?
arch/sandbox/include/asm/test.h
It is used for quite a few constants.
-sughosh
#define IOMMU_PAGE_SIZE SZ_4K
static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, @@ -20,8 +20,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, paddr = ALIGN_DOWN(virt_to_phys(addr), IOMMU_PAGE_SIZE); off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
dva = lmb_alloc(psize, IOMMU_PAGE_SIZE, LMB_NONE);
dva = (phys_addr_t)DVA_ADDR; return dva + off;
} @@ -35,8 +34,6 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, dva = ALIGN_DOWN(addr, IOMMU_PAGE_SIZE); psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE);
lmb_free(dva, psize, LMB_NONE);
}
static struct iommu_ops sandbox_iommu_ops = { @@ -46,8 +43,6 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) {
lmb_add(0x89abc000, SZ_16K, LMB_NONE);
return 0;
}
If this function is empty, you should remove it.
Okay.
Regards, Simon

The LMB memory is typically not needed very early in the platform's boot. Do not add memory to the LMB map before relocation; reserving common areas and adding memory is done after relocation.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: None
board/xilinx/common/board.c | 31 ------------------------------- 1 file changed, 31 deletions(-)
diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index 1fcf7c3d8f..3440402ab4 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -12,7 +12,6 @@ #include <image.h> #include <init.h> #include <jffs2/load_kernel.h> -#include <lmb.h> #include <log.h> #include <asm/global_data.h> #include <asm/sections.h> @@ -665,36 +664,6 @@ int embedded_dtb_select(void) } #endif
-#if IS_ENABLED(CONFIG_LMB) - -#ifndef MMU_SECTION_SIZE -#define MMU_SECTION_SIZE (1 * 1024 * 1024) -#endif - -phys_addr_t board_get_usable_ram_top(phys_size_t total_size) -{ - phys_size_t size; - phys_addr_t reg; - - if (!total_size) - return gd->ram_top; - - if (!IS_ALIGNED((ulong)gd->fdt_blob, 0x8)) - panic("Not 64bit aligned DT location: %p\n", gd->fdt_blob); - - /* found enough not-reserved memory to relocated U-Boot */ - lmb_add(gd->ram_base, gd->ram_size, LMB_NONE); - boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); - size = ALIGN(CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE); - reg = lmb_alloc(size, MMU_SECTION_SIZE, LMB_NONE); - - if (!reg) - reg = gd->ram_top - size; - - return reg + size; -} -#endif - #ifdef CONFIG_OF_BOARD_SETUP #define MAX_RAND_SIZE 8 int ft_board_setup(void *blob, struct bd_info *bd)

The LMB memory map is now handled through data structures which need heap memory to be operational. Moreover, the LMB memory map is initialised after relocation, as part of platform-agnostic board init code. Remove the addition of LMB memory before relocation on the stm32mp platforms.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: New patch
arch/arm/mach-stm32mp/dram_init.c | 11 ++--------- 1 file changed, 2 insertions(+), 9 deletions(-)
diff --git a/arch/arm/mach-stm32mp/dram_init.c b/arch/arm/mach-stm32mp/dram_init.c index 97d894d05f..32f2a95ed8 100644 --- a/arch/arm/mach-stm32mp/dram_init.c +++ b/arch/arm/mach-stm32mp/dram_init.c @@ -57,15 +57,8 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) */ gd->ram_top = clamp_val(gd->ram_top, 0, SZ_4G - 1);
- /* found enough not-reserved memory to relocated U-Boot */ - lmb_add(gd->ram_base, gd->ram_top - gd->ram_base, LMB_NONE); - boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); - /* add 8M for reserved memory for display, fdt, gd,... */ - size = ALIGN(SZ_8M + CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE), - reg = lmb_alloc(size, MMU_SECTION_SIZE, LMB_NONE); - - if (!reg) - reg = gd->ram_top - size; + size = ALIGN(SZ_8M + CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE); + reg = gd->ram_top - size;
/* before relocation, mark the U-Boot memory as cacheable by default */ if (!(gd->flags & GD_FLG_RELOC))
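The reworked placement above no longer asks the LMB allocator for a region; it simply carves the reservation off the top of RAM. A standalone sketch of that arithmetic follows (hypothetical: the ALIGN macro mirrors U-Boot's definition, and the section size, malloc length and addresses are made-up example values, not the stm32mp defaults):

```c
#include <assert.h>

/* ALIGN as defined in U-Boot's linux/kernel.h-style headers */
#define ALIGN(x, a)		(((x) + ((a) - 1)) & ~((unsigned long)(a) - 1))

/* Assumed example values for the sketch */
#define MMU_SECTION_SIZE	(2UL * 1024 * 1024)
#define SZ_8M			(8UL * 1024 * 1024)
#define SYS_MALLOC_LEN		(32UL * 1024 * 1024)

/* Return the base of the top-of-RAM reservation, i.e. the usable top */
static unsigned long usable_ram_top(unsigned long ram_top,
				    unsigned long total_size)
{
	unsigned long size;

	/* 8M for display/fdt/gd plus malloc area and the image itself,
	 * rounded up to a whole MMU section */
	size = ALIGN(SZ_8M + SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE);

	/* place the reservation directly below the top of RAM */
	return ram_top - size;
}
```

With a 1 MiB image and the assumed sizes, 41 MiB rounds up to 42 MiB of reservation below the top of RAM.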

Instead of an arbitrarily selected address, use an LMB-allocated one for reading the file into memory. With the LMB map now being persistent and global, the address used for reading the file might already be allocated as non-overwritable, resulting in a failure. Get a valid address from LMB and then read the file to that address.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: None
test/boot/cedit.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/test/boot/cedit.c b/test/boot/cedit.c index fd19da0a0c..6078b7cc0f 100644 --- a/test/boot/cedit.c +++ b/test/boot/cedit.c @@ -7,6 +7,7 @@ #include <cedit.h> #include <env.h> #include <expo.h> +#include <lmb.h> #include <mapmem.h> #include <dm/ofnode.h> #include <test/ut.h> @@ -61,7 +62,7 @@ static int cedit_fdt(struct unit_test_state *uts) struct video_priv *vid_priv; extern struct expo *cur_exp; struct scene_obj_menu *menu; - ulong addr = 0x1000; + ulong addr; struct ofprop prop; struct scene *scn; oftree tree; @@ -86,6 +87,8 @@ static int cedit_fdt(struct unit_test_state *uts) str = abuf_data(&tline->buf); strcpy(str, "my-machine");
+ addr = lmb_alloc(1024, 1024, LMB_NONE); + ut_asserteq(!!addr, !0); ut_assertok(run_command("cedit write_fdt hostfs - settings.dtb", 0)); ut_assertok(run_commandf("load hostfs - %lx settings.dtb", addr)); ut_assert_nextlinen("1024 bytes read"); @@ -94,6 +97,7 @@ static int cedit_fdt(struct unit_test_state *uts) tree = oftree_from_fdt(fdt); node = ofnode_find_subnode(oftree_root(tree), CEDIT_NODE_NAME); ut_assert(ofnode_valid(node)); + lmb_free(addr, 1024, LMB_NONE);
ut_asserteq(ID_CPU_SPEED_2, ofnode_read_u32_default(node, "cpu-speed", 0));

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Instead of a randomly selected address, use an LMB allocated one for reading the file into memory. With the LMB map now being persistent and global, the address used for reading the file might be already allocated as non-overwritable, resulting in a failure. Get a valid address from LMB and then read the file to that address.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
test/boot/cedit.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
No, we should not start putting lmb into tests like this.
Regards, Simon

On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Instead of a randomly selected address, use an LMB allocated one for reading the file into memory. With the LMB map now being persistent and global, the address used for reading the file might be already allocated as non-overwritable, resulting in a failure. Get a valid address from LMB and then read the file to that address.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
test/boot/cedit.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
No, we should not start putting lmb into tests like this.
The test fails if the address is not allocated through an lmb api. Although I can check why.
-sughosh
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:53, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Instead of a randomly selected address, use an LMB allocated one for reading the file into memory. With the LMB map now being persistent and global, the address used for reading the file might be already allocated as non-overwritable, resulting in a failure. Get a valid address from LMB and then read the file to that address.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
test/boot/cedit.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
No, we should not start putting lmb into tests like this.
The test fails if the address is not allocated through an lmb api. Although I can check why.
I am not sure either, but you did put the TCG in the same region, so perhaps that is an issue?
Sandbox is designed so that low memory regions can be used in tests. I pointed you to the memory-map docs.
Regards, Simon

On Mon, 29 Jul 2024 at 20:58, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 29 Jul 2024 at 02:53, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Instead of a randomly selected address, use an LMB allocated one for reading the file into memory. With the LMB map now being persistent and global, the address used for reading the file might be already allocated as non-overwritable, resulting in a failure. Get a valid address from LMB and then read the file to that address.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
test/boot/cedit.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
No, we should not start putting lmb into tests like this.
The test fails if the address is not allocated through an lmb api. Although I can check why.
Neither am I, but you did put the tcg in the same region so perhaps that is an issue?
I now remember the issue. The cedit_fdt() function uses an arbitrary address (0x1000) and tries to load the settings.dtb file to that address. The load command uses the lmb_alloc_addr() API to reserve and use this address for loading the file. This worked earlier because of the local nature of the LMB memory map, but it does not work anymore, as the address clashes with an existing reservation, resulting in the test failure. So this change is very much needed.
-sughosh
Sandbox is designed so that low memory regions can be used in tests. I pointed you to the memory-map docs.
Regards, Simon
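The clash described in this exchange — a test loading to a fixed address that now collides with a reservation surviving in the persistent map — can be modelled in miniature. The sketch below is a hypothetical stand-in for lmb_alloc_addr()'s success/failure behaviour, not U-Boot code; names and addresses are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* A reserved region in the (persistent) used-memory list */
struct rsv { unsigned long base, size; };

/* Half-open interval overlap test */
static bool overlaps(struct rsv a, unsigned long base, unsigned long size)
{
	return base < a.base + a.size && a.base < base + size;
}

/* Stand-in for lmb_alloc_addr(): a fixed address can only be claimed
 * if [base, base + size) does not intersect any existing reservation */
static bool claim_fixed(const struct rsv *used, int n,
			unsigned long base, unsigned long size)
{
	for (int i = 0; i < n; i++)
		if (overlaps(used[i], base, size))
			return false;
	return true;
}
```

With a caller-local map the used list started empty, so claiming 0x1000 always worked; with a global map, an earlier reservation covering low memory makes the same claim fail — which is why the test now asks the allocator for a free address instead.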

Hi Sughosh,
On Wed, 31 Jul 2024 at 01:26, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 29 Jul 2024 at 20:58, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 29 Jul 2024 at 02:53, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Instead of a randomly selected address, use an LMB allocated one for reading the file into memory. With the LMB map now being persistent and global, the address used for reading the file might be already allocated as non-overwritable, resulting in a failure. Get a valid address from LMB and then read the file to that address.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
test/boot/cedit.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
No, we should not start putting lmb into tests like this.
The test fails if the address is not allocated through an lmb api. Although I can check why.
Neither am I, but you did put the tcg in the same region so perhaps that is an issue?
I now remember the issue. The cedit_fdt() function uses an arbitrary address (0x1000) and tries to load the settings.dtb file to that address. The load command uses the lmb_alloc_addr() API to reserve and use this address for loading the file. This worked earlier because of the local nature of the LMB memory map, but it does not work anymore, as the address clashes with an existing reservation, resulting in the test failure. So this change is very much needed.
OK I see. Do you know what is clashing? I wonder if we should have strings associated with lmb records, to make this easier, if there are going to be so many?
-sughosh
Sandbox is designed so that low memory regions can be used in tests. I pointed you to the memory-map docs.
Regards, Simon

On Wed, 31 Jul 2024 at 20:08, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 31 Jul 2024 at 01:26, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 29 Jul 2024 at 20:58, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 29 Jul 2024 at 02:53, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:02, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Instead of a randomly selected address, use an LMB allocated one for reading the file into memory. With the LMB map now being persistent and global, the address used for reading the file might be already allocated as non-overwritable, resulting in a failure. Get a valid address from LMB and then read the file to that address.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
test/boot/cedit.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
No, we should not start putting lmb into tests like this.
The test fails if the address is not allocated through an lmb api. Although I can check why.
Neither am I, but you did put the tcg in the same region so perhaps that is an issue?
I now remember the issue. The cedit_fdt() function uses an arbitrary address (0x1000) and tries to load the settings.dtb file to that address. The load command uses the lmb_alloc_addr() API to reserve and use this address for loading the file. This worked earlier because of the local nature of the LMB memory map, but it does not work anymore, as the address clashes with an existing reservation, resulting in the test failure. So this change is very much needed.
OK I see. Do you know what is clashing? I wonder if we should have strings associated with lmb records, to make this easier, if there are going to be so many?
Regarding your point about making use of strings for lmb records, I have something similar on my todo list. I plan to use the strings for the lmb flags associated with a memory region. The current flag values shown as part of the bdinfo command are not very helpful from an ease-of-understanding point of view.
-sughosh
Sandbox is designed so that low memory regions can be used in tests. I pointed you to the memory-map docs.
Regards, Simon

The LMB memory maps are now persistent, with alloced lists used to keep track of the available and used memory. Make corresponding changes in the test functions so that the list information can be accessed by the tests for checking against expected values. Also introduce functions to initialise and clean up the lists. These functions will be invoked from every test so that the memory map starts from a clean slate.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: * Change the lmb_mem_regions_init() function to have it called from the lmb tests as well.
include/lmb.h | 24 +++- lib/lmb.c | 38 ++++++- test/lib/lmb.c | 294 ++++++++++++++++++++++++++++++------------------- 3 files changed, 239 insertions(+), 117 deletions(-)
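The init/uninit call pattern this patch introduces — boot code passing NULL pointers and letting the function add and reserve memory, tests capturing the two list pointers and tearing them down afterwards — can be sketched with mocks. Everything below is a placeholder (struct alist is reduced to a stub and the bodies are fake); only the signatures and the bracketing pattern follow the patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Mock stand-ins for the real alloced lists */
struct alist { int count; };

static struct alist lmb_free_mem, lmb_used_mem;
static int lmb_inited;

static int lmb_mem_regions_init(struct alist **mem_lst,
				struct alist **used_lst, bool add_rsv_mem)
{
	lmb_inited = 1;
	if (mem_lst)
		*mem_lst = &lmb_free_mem;
	if (used_lst)
		*used_lst = &lmb_used_mem;
	if (!add_rsv_mem)
		return 0;	/* test path: caller controls the memory map */
	/* boot path: the real code adds RAM and reserves U-Boot's region */
	return 0;
}

static void lmb_mem_regions_uninit(struct alist *mem_lst,
				   struct alist *used_lst)
{
	(void)mem_lst;
	(void)used_lst;
	lmb_inited = 0;	/* the real code calls alist_uninit() on both */
}

/* The bracketing each lmb test follows: init, exercise, uninit */
static int demo_test_pattern(void)
{
	struct alist *mem, *used;

	if (lmb_mem_regions_init(&mem, &used, false))
		return -1;
	/* ... a real test would exercise lmb_alloc()/lmb_free() here,
	 * checking mem->count and used->count against expectations ... */
	lmb_mem_regions_uninit(mem, used);
	return lmb_inited;
}
```

Passing false for add_rsv_mem is what lets each test start from an empty map rather than inheriting the boot-time RAM and U-Boot reservations.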
diff --git a/include/lmb.h b/include/lmb.h index c90f167eec..25a984e14f 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -13,6 +13,8 @@ * Copyright (C) 2001 Peter Bergner, IBM Corp. */
+struct alist; + /** * enum lmb_flags - definition of memory region attributes * @LMB_NONE: no special request @@ -90,16 +92,34 @@ void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align);
/** * lmb_mem_regions_init() - Initialise the LMB memory + * @mem_lst: Pointer to store location of free memory list + * @used_lst: Pointer to store location of used memory list + * @add_rsv_mem: flag to indicate if memory is to be added and reserved * * Initialise the LMB subsystem related data structures. There are two * alloced lists that are initialised, one for the free memory, and one * for the used memory. * - * Initialise the two lists as part of board init. + * Initialise the two lists as part of board init during boot. When called + * from a test, passes the pointers to the two lists to the caller. The + * caller is then required to call the corresponding function to uninit + * the lists. * * Return: 0 if OK, -ve on failure. */ -int lmb_mem_regions_init(void); +int lmb_mem_regions_init(struct alist **mem_lst, struct alist **used_lst, + bool add_rsv_mem); + +/** + * lmb_mem_regions_uninit() - Unitialise the lmb lists + * @mem_lst: Pointer to store location of free memory list + * @used_lst: Pointer to store location of used memory list + * + * Unitialise the LMB lists for free and used memory that was + * initialised as part of the init function. Called when running + * lmb test routines. + */ +void lmb_mem_regions_uninit(struct alist *mem_lst, struct alist *used_lst);
#endif /* __KERNEL__ */
diff --git a/lib/lmb.c b/lib/lmb.c index 8ca4fd95c6..97ea26b013 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -740,16 +740,23 @@ __weak void arch_lmb_reserve(void)
/** * lmb_mem_regions_init() - Initialise the LMB memory + * @mem_lst: Pointer to store location of free memory list + * @used_lst: Pointer to store location of used memory list + * @add_rsv_mem: flag to indicate if memory is to be added and reserved * * Initialise the LMB subsystem related data structures. There are two * alloced lists that are initialised, one for the free memory, and one * for the used memory. * - * Initialise the two lists as part of board init. + * Initialise the two lists as part of board init during boot. When called + * from a test, passes the pointers to the two lists to the caller. The + * caller is then required to call the corresponding function to uninit + * the lists. * * Return: 0 if OK, -ve on failure. */ -int lmb_mem_regions_init(void) +int lmb_mem_regions_init(struct alist **mem_lst, struct alist **used_lst, + bool add_rsv_mem) { bool ret;
@@ -767,6 +774,15 @@ int lmb_mem_regions_init(void) return -1; }
+ if (mem_lst) + *mem_lst = &lmb_free_mem; + + if (used_lst) + *used_lst = &lmb_used_mem; + + if (!add_rsv_mem) + return 0; + lmb_add_memory();
/* Reserve the U-Boot image region once U-Boot has relocated */ @@ -778,6 +794,22 @@ int lmb_mem_regions_init(void) return 0; }
+/** + * lmb_mem_regions_uninit() - Unitialise the lmb lists + * @mem_lst: Pointer to store location of free memory list + * @used_lst: Pointer to store location of used memory list + * + * Unitialise the LMB lists for free and used memory that was + * initialised as part of the init function. Called when running + * lmb test routines. + */ +void __maybe_unused lmb_mem_regions_uninit(struct alist *mem_lst, + struct alist *used_lst) +{ + alist_uninit(mem_lst); + alist_uninit(used_lst); +} + /** * initr_lmb() - Initialise the LMB lists * @@ -791,7 +823,7 @@ int initr_lmb(void) { int ret;
- ret = lmb_mem_regions_init(); + ret = lmb_mem_regions_init(NULL, NULL, true); if (ret) printf("Unable to initialise the LMB data structures\n");
diff --git a/test/lib/lmb.c b/test/lib/lmb.c index 3f99156cf3..aea84e7b1c 100644 --- a/test/lib/lmb.c +++ b/test/lib/lmb.c @@ -3,6 +3,7 @@ * (C) Copyright 2018 Simon Goldschmidt */
+#include <alist.h> #include <dm.h> #include <lmb.h> #include <log.h> @@ -12,52 +13,51 @@ #include <test/test.h> #include <test/ut.h>
-extern struct lmb lmb; - -static inline bool lmb_is_nomap(struct lmb_property *m) +static inline bool lmb_is_nomap(struct lmb_region *m) { return m->flags & LMB_NOMAP; }
-static int check_lmb(struct unit_test_state *uts, struct lmb *lmb, - phys_addr_t ram_base, phys_size_t ram_size, - unsigned long num_reserved, +static int check_lmb(struct unit_test_state *uts, struct alist *mem_lst, + struct alist *used_lst, phys_addr_t ram_base, + phys_size_t ram_size, unsigned long num_reserved, phys_addr_t base1, phys_size_t size1, phys_addr_t base2, phys_size_t size2, phys_addr_t base3, phys_size_t size3) { + struct lmb_region *mem, *used; + + mem = mem_lst->data; + used = used_lst->data; + if (ram_size) { - ut_asserteq(lmb->memory.cnt, 1); - ut_asserteq(lmb->memory.region[0].base, ram_base); - ut_asserteq(lmb->memory.region[0].size, ram_size); + ut_asserteq(mem_lst->count, 1); + ut_asserteq(mem[0].base, ram_base); + ut_asserteq(mem[0].size, ram_size); }
- ut_asserteq(lmb->reserved.cnt, num_reserved); + ut_asserteq(used_lst->count, num_reserved); if (num_reserved > 0) { - ut_asserteq(lmb->reserved.region[0].base, base1); - ut_asserteq(lmb->reserved.region[0].size, size1); + ut_asserteq(used[0].base, base1); + ut_asserteq(used[0].size, size1); } if (num_reserved > 1) { - ut_asserteq(lmb->reserved.region[1].base, base2); - ut_asserteq(lmb->reserved.region[1].size, size2); + ut_asserteq(used[1].base, base2); + ut_asserteq(used[1].size, size2); } if (num_reserved > 2) { - ut_asserteq(lmb->reserved.region[2].base, base3); - ut_asserteq(lmb->reserved.region[2].size, size3); + ut_asserteq(used[2].base, base3); + ut_asserteq(used[2].size, size3); } return 0; }
-#define ASSERT_LMB(lmb, ram_base, ram_size, num_reserved, base1, size1, \ +#define ASSERT_LMB(mem_lst, used_lst, ram_base, ram_size, num_reserved, base1, size1, \ base2, size2, base3, size3) \ - ut_assert(!check_lmb(uts, lmb, ram_base, ram_size, \ + ut_assert(!check_lmb(uts, mem_lst, used_lst, ram_base, ram_size, \ num_reserved, base1, size1, base2, size2, base3, \ size3))
-/* - * Test helper function that reserves 64 KiB somewhere in the simulated RAM and - * then does some alloc + free tests. - */ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, const phys_size_t ram_size, const phys_addr_t ram0, const phys_size_t ram0_size, @@ -67,6 +67,8 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, const phys_addr_t alloc_64k_end = alloc_64k_addr + 0x10000;
long ret; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; phys_addr_t a, a2, b, b2, c, d;
/* check for overflow */ @@ -76,6 +78,10 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, ut_assert(alloc_64k_addr >= ram + 8); ut_assert(alloc_64k_end <= ram_end - 8);
+ ut_asserteq(lmb_mem_regions_init(&mem_lst, &used_lst, false), 0); + mem = mem_lst->data; + used = used_lst->data; + if (ram0_size) { ret = lmb_add(ram0, ram0_size, LMB_NONE); ut_asserteq(ret, 0); @@ -85,95 +91,97 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, ut_asserteq(ret, 0);
if (ram0_size) { - ut_asserteq(lmb.memory.cnt, 2); - ut_asserteq(lmb.memory.region[0].base, ram0); - ut_asserteq(lmb.memory.region[0].size, ram0_size); - ut_asserteq(lmb.memory.region[1].base, ram); - ut_asserteq(lmb.memory.region[1].size, ram_size); + ut_asserteq(mem_lst->count, 2); + ut_asserteq(mem[0].base, ram0); + ut_asserteq(mem[0].size, ram0_size); + ut_asserteq(mem[1].base, ram); + ut_asserteq(mem[1].size, ram_size); } else { - ut_asserteq(lmb.memory.cnt, 1); - ut_asserteq(lmb.memory.region[0].base, ram); - ut_asserteq(lmb.memory.region[0].size, ram_size); + ut_asserteq(mem_lst->count, 1); + ut_asserteq(mem[0].base, ram); + ut_asserteq(mem[0].size, ram_size); }
/* reserve 64KiB somewhere */ ret = lmb_reserve(alloc_64k_addr, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate somewhere, should be at the end of RAM */ a = lmb_alloc(4, 1, LMB_NONE); ut_asserteq(a, ram_end - 4); - ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr, 0x10000, ram_end - 4, 4, 0, 0); /* alloc below end of reserved region -> below reserved region */ b = lmb_alloc_base(4, 1, alloc_64k_end, LMB_NONE); ut_asserteq(b, alloc_64k_addr - 4); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 4, 4, 0, 0);
/* 2nd time */ c = lmb_alloc(4, 1, LMB_NONE); ut_asserteq(c, ram_end - 8); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 8, 8, 0, 0); d = lmb_alloc_base(4, 1, alloc_64k_end, LMB_NONE); ut_asserteq(d, alloc_64k_addr - 8); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
ret = lmb_free(a, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); /* allocate again to ensure we get the same address */ a2 = lmb_alloc(4, 1, LMB_NONE); ut_asserteq(a, a2); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0); ret = lmb_free(a2, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
ret = lmb_free(b, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 3, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4); /* allocate again to ensure we get the same address */ b2 = lmb_alloc_base(4, 1, alloc_64k_end, LMB_NONE); ut_asserteq(b, b2); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); ret = lmb_free(b2, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 3, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4);
ret = lmb_free(c, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, 0, 0); ret = lmb_free(d, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
if (ram0_size) { - ut_asserteq(lmb.memory.cnt, 2); - ut_asserteq(lmb.memory.region[0].base, ram0); - ut_asserteq(lmb.memory.region[0].size, ram0_size); - ut_asserteq(lmb.memory.region[1].base, ram); - ut_asserteq(lmb.memory.region[1].size, ram_size); + ut_asserteq(mem_lst->count, 2); + ut_asserteq(mem[0].base, ram0); + ut_asserteq(mem[0].size, ram0_size); + ut_asserteq(mem[1].base, ram); + ut_asserteq(mem[1].size, ram_size); } else { - ut_asserteq(lmb.memory.cnt, 1); - ut_asserteq(lmb.memory.region[0].base, ram); - ut_asserteq(lmb.memory.region[0].size, ram_size); + ut_asserteq(mem_lst->count, 1); + ut_asserteq(mem[0].base, ram); + ut_asserteq(mem[0].size, ram_size); }
+ lmb_mem_regions_uninit(mem_lst, used_lst); + return 0; }
@@ -228,45 +236,53 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) const phys_size_t big_block_size = 0x10000000; const phys_addr_t ram_end = ram + ram_size; const phys_addr_t alloc_64k_addr = ram + 0x10000000; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; long ret; phys_addr_t a, b;
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
+ ut_asserteq(lmb_mem_regions_init(&mem_lst, &used_lst, false), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
/* reserve 64KiB in the middle of RAM */ ret = lmb_reserve(alloc_64k_addr, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate a big block, should be below reserved */ a = lmb_alloc(big_block_size, 1, LMB_NONE); ut_asserteq(a, ram); - ASSERT_LMB(&lmb, ram, ram_size, 1, a, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0); /* allocate 2nd big block */ /* This should fail, printing an error */ b = lmb_alloc(big_block_size, 1, LMB_NONE); ut_asserteq(b, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, a, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0);
ret = lmb_free(a, big_block_size, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate too big block */ /* This should fail, printing an error */ a = lmb_alloc(ram_size, 1, LMB_NONE); ut_asserteq(a, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
+ lmb_mem_regions_uninit(mem_lst, used_lst); + return 0; }
@@ -292,51 +308,62 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, const phys_addr_t ram_end = ram + ram_size; long ret; phys_addr_t a, b; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; const phys_addr_t alloc_size_aligned = (alloc_size + align - 1) & ~(align - 1);
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
+ ut_asserteq(lmb_mem_regions_init(&mem_lst, &used_lst, false), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block */ a = lmb_alloc(alloc_size, align, LMB_NONE); ut_assert(a != 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, - alloc_size, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, + ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); + /* allocate another block */ b = lmb_alloc(alloc_size, align, LMB_NONE); ut_assert(b != 0); if (alloc_size == alloc_size_aligned) { - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram + ram_size - (alloc_size_aligned * 2), alloc_size * 2, 0, 0, 0, 0); } else { - ASSERT_LMB(&lmb, ram, ram_size, 2, ram + ram_size - + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram + ram_size - (alloc_size_aligned * 2), alloc_size, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0); } /* and free them */ ret = lmb_free(b, alloc_size, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, + ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); ret = lmb_free(a, alloc_size, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block with base*/ b = lmb_alloc_base(alloc_size, align, ram_end, LMB_NONE); ut_assert(a == b); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, + ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); /* and free it */ ret = lmb_free(b, alloc_size, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + + lmb_mem_regions_uninit(mem_lst, used_lst);
return 0; } @@ -378,33 +405,41 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts) { const phys_addr_t ram = 0; const phys_size_t ram_size = 0x20000000; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; long ret; phys_addr_t a, b;
+ ut_asserteq(lmb_mem_regions_init(&mem_lst, &used_lst, false), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
/* allocate nearly everything */ a = lmb_alloc(ram_size - 4, 1, LMB_NONE); ut_asserteq(a, ram + 4); - ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* allocate the rest */ /* This should fail as the allocated address would be 0 */ b = lmb_alloc(4, 1, LMB_NONE); ut_asserteq(b, 0); /* check that this was an error by checking lmb */ - ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* check that this was an error by freeing b */ ret = lmb_free(b, 4, LMB_NONE); ut_asserteq(ret, -1); - ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0);
ret = lmb_free(a, ram_size - 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + + lmb_mem_regions_uninit(mem_lst, used_lst);
return 0; } @@ -415,42 +450,52 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts) { const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; long ret;
+ ut_asserteq(lmb_mem_regions_init(&mem_lst, &used_lst, false), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
ret = lmb_reserve(0x40010000, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); - /* allocate overlapping region should fail */ + + /* allocate overlapping region should return the coalesced count */ ret = lmb_reserve(0x40011000, 0x10000, LMB_NONE); - ut_asserteq(ret, -1); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ut_asserteq(ret, 1); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x11000, 0, 0, 0, 0); /* allocate 3nd region */ ret = lmb_reserve(0x40030000, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40010000, 0x11000, 0x40030000, 0x10000, 0, 0); /* allocate 2nd region , This should coalesced all region into one */ ret = lmb_reserve(0x40020000, 0x10000, LMB_NONE); ut_assert(ret >= 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x30000, 0, 0, 0, 0);
/* allocate 2nd region, which should be added as first region */ ret = lmb_reserve(0x40000000, 0x8000, LMB_NONE); ut_assert(ret >= 0); - ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x8000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x8000, 0x40010000, 0x30000, 0, 0);
/* allocate 3rd region, coalesce with first and overlap with second */ ret = lmb_reserve(0x40008000, 0x10000, LMB_NONE); ut_assert(ret >= 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x40000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40000000, 0x40000, 0, 0, 0, 0); + + lmb_mem_regions_uninit(mem_lst, used_lst); + return 0; } LIB_TEST(lib_test_lmb_overlapping_reserve, 0); @@ -461,6 +506,8 @@ LIB_TEST(lib_test_lmb_overlapping_reserve, 0); */ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) { + struct lmb_region *mem, *used; + struct alist *mem_lst, *used_lst; const phys_size_t ram_size = 0x20000000; const phys_addr_t ram_end = ram + ram_size; const phys_size_t alloc_addr_a = ram + 0x8000000; @@ -472,6 +519,10 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
+ ut_asserteq(lmb_mem_regions_init(&mem_lst, &used_lst, false), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
@@ -482,34 +533,34 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) ut_asserteq(ret, 0); ret = lmb_reserve(alloc_addr_c, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
/* allocate blocks */ a = lmb_alloc_addr(ram, alloc_addr_a - ram, LMB_NONE); ut_asserteq(a, ram); - ASSERT_LMB(&lmb, ram, ram_size, 3, ram, 0x8010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, ram, 0x8010000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000); b = lmb_alloc_addr(alloc_addr_a + 0x10000, alloc_addr_b - alloc_addr_a - 0x10000, LMB_NONE); ut_asserteq(b, alloc_addr_a + 0x10000); - ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x10010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x10010000, alloc_addr_c, 0x10000, 0, 0); c = lmb_alloc_addr(alloc_addr_b + 0x10000, alloc_addr_c - alloc_addr_b - 0x10000, LMB_NONE); ut_asserteq(c, alloc_addr_b + 0x10000); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0); d = lmb_alloc_addr(alloc_addr_c + 0x10000, ram_end - alloc_addr_c - 0x10000, LMB_NONE); ut_asserteq(d, alloc_addr_c + 0x10000); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
/* allocating anything else should fail */ e = lmb_alloc(1, 1, LMB_NONE); ut_asserteq(e, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
ret = lmb_free(d, ram_end - alloc_addr_c - 0x10000, LMB_NONE); @@ -519,39 +570,40 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
d = lmb_alloc_addr(ram_end - 4, 4, LMB_NONE); ut_asserteq(d, ram_end - 4); - ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0); ret = lmb_free(d, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
d = lmb_alloc_addr(ram_end - 128, 4, LMB_NONE); ut_asserteq(d, ram_end - 128); - ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0); ret = lmb_free(d, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
d = lmb_alloc_addr(alloc_addr_c + 0x10000, 4, LMB_NONE); ut_asserteq(d, alloc_addr_c + 0x10000); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010004, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010004, 0, 0, 0, 0); ret = lmb_free(d, 4, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
/* allocate at the bottom */ ret = lmb_free(a, alloc_addr_a - ram, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + 0x8000000, 0x10010000, - 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram + 0x8000000, + 0x10010000, 0, 0, 0, 0); + d = lmb_alloc_addr(ram, 4, LMB_NONE); ut_asserteq(d, ram); - ASSERT_LMB(&lmb, ram, ram_size, 2, d, 4, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, d, 4, ram + 0x8000000, 0x10010000, 0, 0);
/* check that allocating outside memory fails */ @@ -564,6 +616,8 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) ut_asserteq(ret, 0); }
+ lmb_mem_regions_uninit(mem_lst, used_lst); + return 0; }
@@ -585,6 +639,8 @@ LIB_TEST(lib_test_lmb_alloc_addr, 0); static int test_get_unreserved_size(struct unit_test_state *uts, const phys_addr_t ram) { + struct lmb_region *mem, *used; + struct alist *mem_lst, *used_lst; const phys_size_t ram_size = 0x20000000; const phys_addr_t ram_end = ram + ram_size; const phys_size_t alloc_addr_a = ram + 0x8000000; @@ -596,6 +652,10 @@ static int test_get_unreserved_size(struct unit_test_state *uts, /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
+ ut_asserteq(lmb_mem_regions_init(&mem_lst, &used_lst, false), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
@@ -606,7 +666,7 @@ static int test_get_unreserved_size(struct unit_test_state *uts, ut_asserteq(ret, 0); ret = lmb_reserve(alloc_addr_c, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
/* check addresses in between blocks */ @@ -631,6 +691,8 @@ static int test_get_unreserved_size(struct unit_test_state *uts, s = lmb_get_free_size(ram_end - 4); ut_asserteq(s, 4);
+ lmb_mem_regions_uninit(mem_lst, used_lst); + return 0; }
@@ -650,83 +712,91 @@ LIB_TEST(lib_test_lmb_get_free_size, 0);
static int lib_test_lmb_flags(struct unit_test_state *uts) { + struct lmb_region *mem, *used; + struct alist *mem_lst, *used_lst; const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; long ret;
+ ut_asserteq(lmb_mem_regions_init(&mem_lst, &used_lst, false), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size, LMB_NONE); ut_asserteq(ret, 0);
/* reserve, same flag */ ret = lmb_reserve(0x40010000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
/* reserve again, same flag */ ret = lmb_reserve(0x40010000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
/* reserve again, new flag */ ret = lmb_reserve(0x40010000, 0x10000, LMB_NONE); ut_asserteq(ret, -1); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
- ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1); + ut_asserteq(lmb_is_nomap(&used[0]), 1);
/* merge after */ ret = lmb_reserve(0x40020000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 1); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x20000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x20000, 0, 0, 0, 0);
/* merge before */ ret = lmb_reserve(0x40000000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 1); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40000000, 0x30000, 0, 0, 0, 0);
- ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1); + ut_asserteq(lmb_is_nomap(&used[0]), 1);
ret = lmb_reserve(0x40030000, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x10000, 0, 0);
- ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1); - ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0); + ut_asserteq(lmb_is_nomap(&used[0]), 1); + ut_asserteq(lmb_is_nomap(&used[1]), 0);
/* test that old API use LMB_NONE */ ret = lmb_reserve(0x40040000, 0x10000, LMB_NONE); ut_asserteq(ret, 1); - ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x20000, 0, 0);
- ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1); - ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0); + ut_asserteq(lmb_is_nomap(&used[0]), 1); + ut_asserteq(lmb_is_nomap(&used[1]), 0);
ret = lmb_reserve(0x40070000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40070000, 0x10000);
ret = lmb_reserve(0x40050000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 4, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 4, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x10000);
/* merge with 2 adjacent regions */ ret = lmb_reserve(0x40060000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 2); - ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x30000);
- ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1); - ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0); - ut_asserteq(lmb_is_nomap(&lmb.reserved.region[2]), 1); + ut_asserteq(lmb_is_nomap(&used[0]), 1); + ut_asserteq(lmb_is_nomap(&used[1]), 0); + ut_asserteq(lmb_is_nomap(&used[2]), 1); + + lmb_mem_regions_uninit(mem_lst, used_lst);
return 0; }

The LMB code has been changed so that the memory reservations and allocations are now persistent and global. With this change, the design of the LMB tests needs to change accordingly. Mark the LMB tests to be run only manually. The tests will no longer run as part of the unit test suite, but will instead be invoked through a separate test, and thus will not interfere with the running of the rest of the tests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: None
test/lib/lmb.c | 41 ++++++++++++++++++++--------------------- 1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/test/lib/lmb.c b/test/lib/lmb.c index aea84e7b1c..c869f58b42 100644 --- a/test/lib/lmb.c +++ b/test/lib/lmb.c @@ -200,7 +200,7 @@ static int test_multi_alloc_512mb_x2(struct unit_test_state *uts, }
/* Create a memory region with one reserved region and allocate */ -static int lib_test_lmb_simple(struct unit_test_state *uts) +static int lib_test_lmb_simple_norun(struct unit_test_state *uts) { int ret;
@@ -212,10 +212,10 @@ static int lib_test_lmb_simple(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_multi_alloc_512mb(uts, 0xE0000000); } -LIB_TEST(lib_test_lmb_simple, 0); +LIB_TEST(lib_test_lmb_simple_norun, UT_TESTF_MANUAL);
/* Create two memory regions with one reserved region and allocate */ -static int lib_test_lmb_simple_x2(struct unit_test_state *uts) +static int lib_test_lmb_simple_x2_norun(struct unit_test_state *uts) { int ret;
@@ -227,7 +227,7 @@ static int lib_test_lmb_simple_x2(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 3.5GiB and 1 GiB */ return test_multi_alloc_512mb_x2(uts, 0xE0000000, 0x40000000); } -LIB_TEST(lib_test_lmb_simple_x2, 0); +LIB_TEST(lib_test_lmb_simple_x2_norun, UT_TESTF_MANUAL);
/* Simulate 512 MiB RAM, allocate some blocks that fit/don't fit */ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) @@ -286,7 +286,7 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) return 0; }
-static int lib_test_lmb_big(struct unit_test_state *uts) +static int lib_test_lmb_big_norun(struct unit_test_state *uts) { int ret;
@@ -298,7 +298,7 @@ static int lib_test_lmb_big(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_bigblock(uts, 0xE0000000); } -LIB_TEST(lib_test_lmb_big, 0); +LIB_TEST(lib_test_lmb_big_norun, UT_TESTF_MANUAL);
/* Simulate 512 MiB RAM, allocate a block without previous reservation */ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, @@ -368,7 +368,7 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, return 0; }
-static int lib_test_lmb_noreserved(struct unit_test_state *uts) +static int lib_test_lmb_noreserved_norun(struct unit_test_state *uts) { int ret;
@@ -380,10 +380,9 @@ static int lib_test_lmb_noreserved(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_noreserved(uts, 0xE0000000, 4, 1); } +LIB_TEST(lib_test_lmb_noreserved_norun, UT_TESTF_MANUAL);
-LIB_TEST(lib_test_lmb_noreserved, 0); - -static int lib_test_lmb_unaligned_size(struct unit_test_state *uts) +static int lib_test_lmb_unaligned_size_norun(struct unit_test_state *uts) { int ret;
@@ -395,13 +394,13 @@ static int lib_test_lmb_unaligned_size(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_noreserved(uts, 0xE0000000, 5, 8); } -LIB_TEST(lib_test_lmb_unaligned_size, 0); +LIB_TEST(lib_test_lmb_unaligned_size_norun, UT_TESTF_MANUAL);
/* * Simulate a RAM that starts at 0 and allocate down to address 0, which must * fail as '0' means failure for the lmb_alloc functions. */ -static int lib_test_lmb_at_0(struct unit_test_state *uts) +static int lib_test_lmb_at_0_norun(struct unit_test_state *uts) { const phys_addr_t ram = 0; const phys_size_t ram_size = 0x20000000; @@ -443,10 +442,10 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts)
return 0; } -LIB_TEST(lib_test_lmb_at_0, 0); +LIB_TEST(lib_test_lmb_at_0_norun, UT_TESTF_MANUAL);
/* Check that calling lmb_reserve with overlapping regions fails. */ -static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts) +static int lib_test_lmb_overlapping_reserve_norun(struct unit_test_state *uts) { const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; @@ -498,7 +497,7 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts)
return 0; } -LIB_TEST(lib_test_lmb_overlapping_reserve, 0); +LIB_TEST(lib_test_lmb_overlapping_reserve_norun, UT_TESTF_MANUAL);
/* * Simulate 512 MiB RAM, reserve 3 blocks, allocate addresses in between. @@ -621,7 +620,7 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) return 0; }
-static int lib_test_lmb_alloc_addr(struct unit_test_state *uts) +static int lib_test_lmb_alloc_addr_norun(struct unit_test_state *uts) { int ret;
@@ -633,7 +632,7 @@ static int lib_test_lmb_alloc_addr(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_alloc_addr(uts, 0xE0000000); } -LIB_TEST(lib_test_lmb_alloc_addr, 0); +LIB_TEST(lib_test_lmb_alloc_addr_norun, UT_TESTF_MANUAL);
/* Simulate 512 MiB RAM, reserve 3 blocks, check addresses in between */ static int test_get_unreserved_size(struct unit_test_state *uts, @@ -696,7 +695,7 @@ static int test_get_unreserved_size(struct unit_test_state *uts, return 0; }
-static int lib_test_lmb_get_free_size(struct unit_test_state *uts) +static int lib_test_lmb_get_free_size_norun(struct unit_test_state *uts) { int ret;
@@ -708,9 +707,9 @@ static int lib_test_lmb_get_free_size(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_get_unreserved_size(uts, 0xE0000000); } -LIB_TEST(lib_test_lmb_get_free_size, 0); +LIB_TEST(lib_test_lmb_get_free_size_norun, UT_TESTF_MANUAL);
-static int lib_test_lmb_flags(struct unit_test_state *uts) +static int lib_test_lmb_flags_norun(struct unit_test_state *uts) { struct lmb_region *mem, *used; struct alist *mem_lst, *used_lst; @@ -800,4 +799,4 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
return 0; } -LIB_TEST(lib_test_lmb_flags, 0); +LIB_TEST(lib_test_lmb_flags_norun, UT_TESTF_MANUAL);

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB code has been changed so that the memory reservations and allocations are now persistent and global. With this change, the design of the LMB tests needs to change accordingly. Mark the LMB tests to be run only manually. The tests will no longer run as part of the unit test suite, but will instead be invoked through a separate test, and thus will not interfere with the running of the rest of the tests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
test/lib/lmb.c | 41 ++++++++++++++++++++--------------------- 1 file changed, 20 insertions(+), 21 deletions(-)
If you put the lmb state in a struct it is pretty easy to have the tests work without all of this.
Regards, Simon

Add the LMB unit tests under a separate class of tests. The LMB tests involve changing the LMB's memory map. With the memory map now persistent and global, running these tests has side effects and can impact any subsequent tests. Run these tests separately so that the system can be reset on completion of these tests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since rfc: None
include/test/suites.h | 1 + test/Kconfig | 9 ++++++ test/Makefile | 1 + test/cmd_ut.c | 7 +++++ test/lib/Makefile | 1 - test/{lib/lmb.c => lmb_ut.c} | 53 ++++++++++++++++++++++-------------- 6 files changed, 50 insertions(+), 22 deletions(-) rename test/{lib/lmb.c => lmb_ut.c} (93%)
diff --git a/include/test/suites.h b/include/test/suites.h index 365d5f20df..5ef164a956 100644 --- a/include/test/suites.h +++ b/include/test/suites.h @@ -45,6 +45,7 @@ int do_ut_fdt(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_font(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_hush(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_lib(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); +int do_ut_lmb(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_loadm(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_log(struct cmd_tbl *cmdtp, int flag, int argc, char * const argv[]); int do_ut_mbr(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); diff --git a/test/Kconfig b/test/Kconfig index e2ec0994a2..49c24722dc 100644 --- a/test/Kconfig +++ b/test/Kconfig @@ -79,6 +79,15 @@ config UT_COMPRESSION Enables tests for compression and decompression routines for simple sanity and for buffer overflow conditions.
+config UT_LMB + bool "Unit tests for LMB functions" + depends on !SPL && UNIT_TEST + default y + help + Enables the 'ut lmb' command, which tests the lmb functions + responsible for reserving memory for loading images into + memory. + config UT_LOG bool "Unit tests for logging functions" depends on UNIT_TEST diff --git a/test/Makefile b/test/Makefile index ed312cd0a4..e9bdd14eba 100644 --- a/test/Makefile +++ b/test/Makefile @@ -14,6 +14,7 @@ obj-$(CONFIG_$(SPL_)CMDLINE) += command_ut.o obj-$(CONFIG_$(SPL_)UT_COMPRESSION) += compression.o obj-y += dm/ obj-$(CONFIG_FUZZ) += fuzz/ +obj-$(CONFIG_UT_LMB) += lmb_ut.o ifndef CONFIG_SANDBOX_VPL obj-$(CONFIG_UNIT_TEST) += lib/ endif diff --git a/test/cmd_ut.c b/test/cmd_ut.c index 4e4aa8f1cb..60ff872723 100644 --- a/test/cmd_ut.c +++ b/test/cmd_ut.c @@ -78,6 +78,7 @@ static struct cmd_tbl cmd_ut_sub[] = { #ifdef CONFIG_CONSOLE_TRUETYPE U_BOOT_CMD_MKENT(font, CONFIG_SYS_MAXARGS, 1, do_ut_font, "", ""), #endif + #ifdef CONFIG_UT_OPTEE U_BOOT_CMD_MKENT(optee, CONFIG_SYS_MAXARGS, 1, do_ut_optee, "", ""), #endif @@ -87,6 +88,9 @@ static struct cmd_tbl cmd_ut_sub[] = { #ifdef CONFIG_UT_LIB U_BOOT_CMD_MKENT(lib, CONFIG_SYS_MAXARGS, 1, do_ut_lib, "", ""), #endif +#ifdef CONFIG_UT_LMB + U_BOOT_CMD_MKENT(lmb, CONFIG_SYS_MAXARGS, 1, do_ut_lmb, "", ""), +#endif #ifdef CONFIG_UT_LOG U_BOOT_CMD_MKENT(log, CONFIG_SYS_MAXARGS, 1, do_ut_log, "", ""), #endif @@ -228,6 +232,9 @@ U_BOOT_LONGHELP(ut, #ifdef CONFIG_UT_LIB "\nlib - library functions" #endif +#ifdef CONFIG_UT_LMB + "\nlmb - lmb functions" +#endif #ifdef CONFIG_UT_LOG "\nlog - logging functions" #endif diff --git a/test/lib/Makefile b/test/lib/Makefile index 70f14c46b1..ecb96dc1d7 100644 --- a/test/lib/Makefile +++ b/test/lib/Makefile @@ -10,7 +10,6 @@ obj-$(CONFIG_EFI_LOADER) += efi_device_path.o obj-$(CONFIG_EFI_SECURE_BOOT) += efi_image_region.o obj-y += hexdump.o obj-$(CONFIG_SANDBOX) += kconfig.o -obj-y += lmb.o obj-y += longjmp.o obj-$(CONFIG_CONSOLE_RECORD) += test_print.o 
obj-$(CONFIG_SSCANF) += sscanf.o
diff --git a/test/lib/lmb.c b/test/lmb_ut.c
similarity index 93%
rename from test/lib/lmb.c
rename to test/lmb_ut.c
index c869f58b42..49f75e55d2 100644
--- a/test/lib/lmb.c
+++ b/test/lmb_ut.c
@@ -9,10 +9,13 @@
 #include <log.h>
 #include <malloc.h>
 #include <dm/test.h>
-#include <test/lib.h>
+#include <test/suites.h>
 #include <test/test.h>
 #include <test/ut.h>
+
+#define LMB_TEST(_name, _flags)	UNIT_TEST(_name, _flags, lmb_test)
+
 static inline bool lmb_is_nomap(struct lmb_region *m)
 {
 	return m->flags & LMB_NOMAP;
@@ -200,7 +203,7 @@ static int test_multi_alloc_512mb_x2(struct unit_test_state *uts,
 }

 /* Create a memory region with one reserved region and allocate */
-static int lib_test_lmb_simple_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_simple_norun(struct unit_test_state *uts)
 {
 	int ret;

@@ -212,10 +215,10 @@ static int lib_test_lmb_simple_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_multi_alloc_512mb(uts, 0xE0000000);
 }
-LIB_TEST(lib_test_lmb_simple_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_simple_norun, UT_TESTF_MANUAL);

 /* Create two memory regions with one reserved region and allocate */
-static int lib_test_lmb_simple_x2_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_simple_x2_norun(struct unit_test_state *uts)
 {
 	int ret;

@@ -227,7 +230,7 @@ static int lib_test_lmb_simple_x2_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 3.5GiB and 1 GiB */
 	return test_multi_alloc_512mb_x2(uts, 0xE0000000, 0x40000000);
 }
-LIB_TEST(lib_test_lmb_simple_x2_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_simple_x2_norun, UT_TESTF_MANUAL);

 /* Simulate 512 MiB RAM, allocate some blocks that fit/don't fit */
 static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram)
@@ -286,7 +289,7 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram)
 	return 0;
 }

-static int lib_test_lmb_big_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_big_norun(struct unit_test_state *uts)
 {
 	int ret;

@@ -298,7 +301,7 @@ static int lib_test_lmb_big_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_bigblock(uts, 0xE0000000);
 }
-LIB_TEST(lib_test_lmb_big_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_big_norun, UT_TESTF_MANUAL);

 /* Simulate 512 MiB RAM, allocate a block without previous reservation */
 static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
@@ -368,7 +371,7 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
 	return 0;
 }

-static int lib_test_lmb_noreserved_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_noreserved_norun(struct unit_test_state *uts)
 {
 	int ret;

@@ -380,9 +383,9 @@ static int lib_test_lmb_noreserved_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_noreserved(uts, 0xE0000000, 4, 1);
 }
-LIB_TEST(lib_test_lmb_noreserved_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_noreserved_norun, UT_TESTF_MANUAL);

-static int lib_test_lmb_unaligned_size_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_unaligned_size_norun(struct unit_test_state *uts)
 {
 	int ret;

@@ -394,13 +397,13 @@ static int lib_test_lmb_unaligned_size_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_noreserved(uts, 0xE0000000, 5, 8);
 }
-LIB_TEST(lib_test_lmb_unaligned_size_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_unaligned_size_norun, UT_TESTF_MANUAL);

 /*
  * Simulate a RAM that starts at 0 and allocate down to address 0, which must
  * fail as '0' means failure for the lmb_alloc functions.
  */
-static int lib_test_lmb_at_0_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_at_0_norun(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0;
 	const phys_size_t ram_size = 0x20000000;
@@ -442,10 +445,10 @@ static int lib_test_lmb_at_0_norun(struct unit_test_state *uts)

 	return 0;
 }
-LIB_TEST(lib_test_lmb_at_0_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_at_0_norun, UT_TESTF_MANUAL);

 /* Check that calling lmb_reserve with overlapping regions fails. */
-static int lib_test_lmb_overlapping_reserve_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_overlapping_reserve_norun(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0x40000000;
 	const phys_size_t ram_size = 0x20000000;
@@ -497,7 +500,7 @@ static int lib_test_lmb_overlapping_reserve_norun(struct unit_test_state *uts)

 	return 0;
 }
-LIB_TEST(lib_test_lmb_overlapping_reserve_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_overlapping_reserve_norun, UT_TESTF_MANUAL);

 /*
  * Simulate 512 MiB RAM, reserve 3 blocks, allocate addresses in between.
@@ -620,7 +623,7 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 	return 0;
 }

-static int lib_test_lmb_alloc_addr_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_alloc_addr_norun(struct unit_test_state *uts)
 {
 	int ret;

@@ -632,7 +635,7 @@ static int lib_test_lmb_alloc_addr_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_alloc_addr(uts, 0xE0000000);
 }
-LIB_TEST(lib_test_lmb_alloc_addr_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_alloc_addr_norun, UT_TESTF_MANUAL);

 /* Simulate 512 MiB RAM, reserve 3 blocks, check addresses in between */
 static int test_get_unreserved_size(struct unit_test_state *uts,
@@ -695,7 +698,7 @@ static int test_get_unreserved_size(struct unit_test_state *uts,
 	return 0;
 }

-static int lib_test_lmb_get_free_size_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_get_free_size_norun(struct unit_test_state *uts)
 {
 	int ret;

@@ -707,9 +710,9 @@ static int lib_test_lmb_get_free_size_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_get_unreserved_size(uts, 0xE0000000);
 }
-LIB_TEST(lib_test_lmb_get_free_size_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_get_free_size_norun, UT_TESTF_MANUAL);

-static int lib_test_lmb_flags_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_flags_norun(struct unit_test_state *uts)
 {
 	struct lmb_region *mem, *used;
 	struct alist *mem_lst, *used_lst;
@@ -799,4 +802,12 @@ static int lib_test_lmb_flags_norun(struct unit_test_state *uts)

 	return 0;
 }
-LIB_TEST(lib_test_lmb_flags_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_flags_norun, UT_TESTF_MANUAL);
+
+int do_ut_lmb(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
+{
+	struct unit_test *tests = UNIT_TEST_SUITE_START(lmb_test);
+	const int n_ents = UNIT_TEST_SUITE_COUNT(lmb_test);
+
+	return cmd_ut_category("lmb", "lmb_test_", tests, n_ents, argc, argv);
+}

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add the LMB unit tests under a separate class of tests. The LMB tests involve changing the LMB's memory map. With the memory map now persistent and global, running these tests has side effects and can impact any subsequent tests. Run these tests separately so that the system can be reset once they complete.
This is just not a good idea...we should fix the tests so they don't have these side effects.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
 include/test/suites.h        |  1 +
 test/Kconfig                 |  9 ++++++
 test/Makefile                |  1 +
 test/cmd_ut.c                |  7 +++++
 test/lib/Makefile            |  1 -
 test/{lib/lmb.c => lmb_ut.c} | 53 ++++++++++++++++++++++--------------
 6 files changed, 50 insertions(+), 22 deletions(-)
 rename test/{lib/lmb.c => lmb_ut.c} (93%)
Regards, Simon

On Fri, 26 Jul 2024 at 05:03, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add the LMB unit tests under a separate class of tests. The LMB tests involve changing the LMB's memory map. With the memory map now persistent and global, running these tests has side effects and can impact any subsequent tests. Run these tests separately so that the system can be reset once they complete.
This is just not a good idea...we should fix the tests so they don't have these side effects.
But this test *will* have side-effects -- if we are making the lmb memory maps persistent, testing that code will have an impact. There is no getting around it. Btw, I got this idea from how the VBE tests are run; the same approach is used there.
As an aside, what issue do you see with running these tests separately?
-sughosh
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
 include/test/suites.h        |  1 +
 test/Kconfig                 |  9 ++++++
 test/Makefile                |  1 +
 test/cmd_ut.c                |  7 +++++
 test/lib/Makefile            |  1 -
 test/{lib/lmb.c => lmb_ut.c} | 53 ++++++++++++++++++++++--------------
 6 files changed, 50 insertions(+), 22 deletions(-)
 rename test/{lib/lmb.c => lmb_ut.c} (93%)
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:56, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:03, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:05, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Add the LMB unit tests under a separate class of tests. The LMB tests involve changing the LMB's memory map. With the memory map now persistent and global, running these tests has side effects and can impact any subsequent tests. Run these tests separately so that the system can be reset once they complete.
This is just not a good idea...we should fix the tests so they don't have these side effects.
But this test *will* have side-effects -- if we are making the lmb memory maps persistent, testing that code will have an impact. There is no getting around it. Btw, I got this idea from how the VBE tests are run; the same approach is used there.
You can set up the state before the test and restore it afterwards. That is what we do with other tests and it is why you should have the lmb state in a struct. See for example state_reset_for_test() which is called if the test has a UT_TESTF_DM flag.
You can add a save/restore function (since you don't want a lmb pointer).
The _norun thing is because the tests are written in C but need some Python setup. So the setup is done and then the test is run immediately afterwards.
As an aside, what issue do you see with running these tests separately?
They don't run with the 'ut' command so must be done specially. There is no Python setup needed. We have DM tests which save and restore all sorts of things...so we should do the same with lmb.
-sughosh
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: None
 include/test/suites.h        |  1 +
 test/Kconfig                 |  9 ++++++
 test/Makefile                |  1 +
 test/cmd_ut.c                |  7 +++++
 test/lib/Makefile            |  1 -
 test/{lib/lmb.c => lmb_ut.c} | 53 ++++++++++++++++++++++--------------
 6 files changed, 50 insertions(+), 22 deletions(-)
 rename test/{lib/lmb.c => lmb_ut.c} (93%)
- Simon

With the LMB tests moved under a separate class of unit tests, invoke these from a separate script which would allow for a system reset once the tests have been run. This enables clearing up the LMB memory map after having run the tests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: None
 test/py/tests/test_lmb.py | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 test/py/tests/test_lmb.py
diff --git a/test/py/tests/test_lmb.py b/test/py/tests/test_lmb.py
new file mode 100644
index 0000000000..b6f9ff9c6a
--- /dev/null
+++ b/test/py/tests/test_lmb.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0+
+# Copyright 2024 Linaro Ltd
+#
+# Run the LMB tests
+
+import pytest
+
+base_script = '''
+ut lmb -f
+'''
+
+@pytest.mark.boardspec('sandbox')
+def test_lmb(u_boot_console):
+    cons = u_boot_console
+    cmd = base_script
+
+    with cons.log.section('LMB Unit Test'):
+        output = cons.run_command_list(cmd.splitlines())
+
+    assert 'Failures: 0' in output[-1]
+
+    # Restart so that the LMB memory map starts with
+    # a clean slate for the next set of tests.
+    u_boot_console.restart_uboot()

The LMB code has been changed to make the memory reservations persistent and global. Make a corresponding change to the lmb_test_dump_all() function to print the global LMB available and used memory.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: None
 test/cmd/bdinfo.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c
index 1cd81a195b..3184aaf629 100644
--- a/test/cmd/bdinfo.c
+++ b/test/cmd/bdinfo.c
@@ -5,6 +5,7 @@
  * Copyright 2023 Marek Vasut marek.vasut+renesas@mailbox.org
  */

+#include <alist.h>
 #include <console.h>
 #include <mapmem.h>
 #include <asm/global_data.h>
@@ -21,6 +22,7 @@
 #include <asm/cache.h>
 #include <asm/global_data.h>
 #include <display_options.h>
+#include <linux/kernel.h>

 DECLARE_GLOBAL_DATA_PTR;

@@ -99,19 +101,20 @@ static int test_video_info(struct unit_test_state *uts)
 }

 static int lmb_test_dump_region(struct unit_test_state *uts,
-				struct lmb_region *rgn, char *name)
+				struct alist *lmb_rgn_lst, char *name)
 {
+	struct lmb_region *rgn = lmb_rgn_lst->data;
 	unsigned long long base, size, end;
 	enum lmb_flags flags;
 	int i;

-	ut_assert_nextline(" %s.cnt = 0x%lx / max = 0x%lx", name, rgn->cnt, rgn->max);
+	ut_assert_nextline(" %s.count = 0x%hx", name, lmb_rgn_lst->count);

-	for (i = 0; i < rgn->cnt; i++) {
-		base = rgn->region[i].base;
-		size = rgn->region[i].size;
+	for (i = 0; i < lmb_rgn_lst->count; i++) {
+		base = rgn[i].base;
+		size = rgn[i].size;
 		end = base + size - 1;
-		flags = rgn->region[i].flags;
+		flags = rgn[i].flags;

 		if (!IS_ENABLED(CONFIG_SANDBOX) && i == 3) {
 			ut_assert_nextlinen(" %s[%d]\t[", name, i);
@@ -124,11 +127,14 @@ static int lmb_test_dump_region(struct unit_test_state *uts,
 	return 0;
 }

-static int lmb_test_dump_all(struct unit_test_state *uts, struct lmb *lmb)
+static int lmb_test_dump_all(struct unit_test_state *uts)
 {
+	extern struct alist lmb_free_mem;
+	extern struct alist lmb_used_mem;
+
 	ut_assert_nextline("lmb_dump_all:");
-	ut_assertok(lmb_test_dump_region(uts, &lmb->memory, "memory"));
-	ut_assertok(lmb_test_dump_region(uts, &lmb->reserved, "reserved"));
+	ut_assertok(lmb_test_dump_region(uts, &lmb_free_mem, "memory"));
+	ut_assertok(lmb_test_dump_region(uts, &lmb_used_mem, "reserved"));

 	return 0;
 }
@@ -190,9 +196,7 @@ static int bdinfo_test_all(struct unit_test_state *uts)
 #endif

 	if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) {
-		struct lmb lmb;
-
-		ut_assertok(lmb_test_dump_all(uts, &lmb));
+		ut_assertok(lmb_test_dump_all(uts));
 		if (IS_ENABLED(CONFIG_OF_REAL))
 			ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());
 	}

The TCG event log has now been moved to the start of the memory, and occupies 8KB of memory. Make a corresponding change to the load address in a couple of tests so that it does not overlap with the TCG event log.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: New patch
 test/py/tests/test_android/test_abootimg.py | 2 +-
 test/py/tests/test_vbe.py                   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/test/py/tests/test_android/test_abootimg.py b/test/py/tests/test_android/test_abootimg.py
index 6a8ff34538..bf6a19fe4f 100644
--- a/test/py/tests/test_android/test_abootimg.py
+++ b/test/py/tests/test_android/test_abootimg.py
@@ -90,7 +90,7 @@ dtb_dump_resp="""## DTB area contents (concat format):
 (DTB)model = x2
 (DTB)compatible = y2,z2"""
 # Address in RAM where to load the boot image ('abootimg' looks in $loadaddr)
-loadaddr = 0x1000
+loadaddr = 0x4000
 # Address in RAM where to load the vendor boot image ('abootimg' looks in $vloadaddr)
 vloadaddr= 0x10000
 # Expected DTB #1 offset from the boot image start address
diff --git a/test/py/tests/test_vbe.py b/test/py/tests/test_vbe.py
index 50b6c1cd91..edeb655c6f 100644
--- a/test/py/tests/test_vbe.py
+++ b/test/py/tests/test_vbe.py
@@ -97,7 +97,7 @@ def test_vbe(u_boot_console):
     fdt_out = fit_util.make_fname(cons, 'fdt-out.dtb')

     params = {
-        'fit_addr' : 0x1000,
+        'fit_addr' : 0x4000,
         'kernel' : kernel,

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:06, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The TCG event log has now been moved to the start of the memory, and occupies 8KB of memory. Make a corresponding change to the load address in a couple of tests so that it does not overlap with the TCG event log.
That address overlaps with the FDT...so I am hoping you can leave the low memory free.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
 test/py/tests/test_android/test_abootimg.py | 2 +-
 test/py/tests/test_vbe.py                   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
Regards.
Simon

All the changes needed for making the LMB memory map persistent and global have been made, including making corresponding changes in the test code. Re-enable the unit tests on the platforms.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since rfc: New patch
 configs/sandbox64_defconfig        | 4 +++-
 configs/sandbox_defconfig          | 7 +++----
 configs/sandbox_flattree_defconfig | 4 +++-
 configs/sandbox_noinst_defconfig   | 8 ++++----
 configs/sandbox_spl_defconfig      | 8 ++++----
 configs/sandbox_vpl_defconfig      | 6 ++++--
 configs/snow_defconfig             | 2 +-
 7 files changed, 22 insertions(+), 17 deletions(-)
diff --git a/configs/sandbox64_defconfig b/configs/sandbox64_defconfig
index dbcf65d29f..dd0582d2a0 100644
--- a/configs/sandbox64_defconfig
+++ b/configs/sandbox64_defconfig
@@ -3,7 +3,6 @@ CONFIG_SYS_MALLOC_LEN=0x6000000
 CONFIG_NR_DRAM_BANKS=1
 CONFIG_ENV_SIZE=0x2000
 CONFIG_DEFAULT_DEVICE_TREE="sandbox64"
-CONFIG_OF_LIBFDT_OVERLAY=y
 CONFIG_DM_RESET=y
 CONFIG_PRE_CON_BUF_ADDR=0x100000
 CONFIG_SYS_LOAD_ADDR=0x0
@@ -271,3 +270,6 @@ CONFIG_GETOPT=y
 CONFIG_EFI_RT_VOLATILE_STORE=y
 CONFIG_EFI_SECURE_BOOT=y
 CONFIG_TEST_FDTDEC=y
+CONFIG_UNIT_TEST=y
+CONFIG_UT_TIME=y
+CONFIG_UT_DM=y
diff --git a/configs/sandbox_defconfig b/configs/sandbox_defconfig
index 9bb1813e77..dc5fcdbd1c 100644
--- a/configs/sandbox_defconfig
+++ b/configs/sandbox_defconfig
@@ -3,7 +3,6 @@ CONFIG_SYS_MALLOC_LEN=0x6000000
 CONFIG_NR_DRAM_BANKS=1
 CONFIG_ENV_SIZE=0x2000
 CONFIG_DEFAULT_DEVICE_TREE="sandbox"
-CONFIG_OF_LIBFDT_OVERLAY=y
 CONFIG_DM_RESET=y
 CONFIG_PRE_CON_BUF_ADDR=0xf0000
 CONFIG_SYS_LOAD_ADDR=0x0
@@ -346,9 +345,6 @@ CONFIG_ADDR_MAP=y
 CONFIG_CMD_DHRYSTONE=y
 CONFIG_ECDSA=y
 CONFIG_ECDSA_VERIFY=y
-CONFIG_CRYPT_PW=y
-CONFIG_CRYPT_PW_SHA256=y
-CONFIG_CRYPT_PW_SHA512=y
 CONFIG_TPM=y
 CONFIG_ERRNO_STR=y
 CONFIG_GETOPT=y
@@ -360,3 +356,6 @@ CONFIG_EFI_CAPSULE_AUTHENTICATE=y
 CONFIG_EFI_CAPSULE_CRT_FILE="board/sandbox/capsule_pub_key_good.crt"
 CONFIG_EFI_SECURE_BOOT=y
 CONFIG_TEST_FDTDEC=y
+CONFIG_UNIT_TEST=y
+CONFIG_UT_TIME=y
+CONFIG_UT_DM=y
diff --git a/configs/sandbox_flattree_defconfig b/configs/sandbox_flattree_defconfig
index b87bd145bb..049a606613 100644
--- a/configs/sandbox_flattree_defconfig
+++ b/configs/sandbox_flattree_defconfig
@@ -2,7 +2,6 @@ CONFIG_TEXT_BASE=0
 CONFIG_NR_DRAM_BANKS=1
 CONFIG_ENV_SIZE=0x2000
 CONFIG_DEFAULT_DEVICE_TREE="sandbox"
-CONFIG_OF_LIBFDT_OVERLAY=y
 CONFIG_DM_RESET=y
 CONFIG_SYS_LOAD_ADDR=0x0
 CONFIG_PCI=y
@@ -229,3 +228,6 @@ CONFIG_EFI_CAPSULE_ON_DISK=y
 CONFIG_EFI_CAPSULE_FIRMWARE_FIT=y
 CONFIG_EFI_CAPSULE_AUTHENTICATE=y
 CONFIG_EFI_CAPSULE_CRT_FILE="board/sandbox/capsule_pub_key_good.crt"
+CONFIG_UNIT_TEST=y
+CONFIG_UT_TIME=y
+CONFIG_UT_DM=y
diff --git a/configs/sandbox_noinst_defconfig b/configs/sandbox_noinst_defconfig
index 6150f55072..e930162937 100644
--- a/configs/sandbox_noinst_defconfig
+++ b/configs/sandbox_noinst_defconfig
@@ -6,7 +6,6 @@ CONFIG_NR_DRAM_BANKS=1
 CONFIG_ENV_SIZE=0x2000
 CONFIG_SPL_DM_SPI=y
 CONFIG_DEFAULT_DEVICE_TREE="sandbox"
-CONFIG_OF_LIBFDT_OVERLAY=y
 CONFIG_DM_RESET=y
 CONFIG_SPL_MMC=y
 CONFIG_SPL_SERIAL=y
@@ -132,7 +131,6 @@ CONFIG_NETCONSOLE=y
 CONFIG_IP_DEFRAG=y
 CONFIG_BOOTP_SERVERIP=y
 CONFIG_SPL_DM=y
-CONFIG_SPL_DM_DEVICE_REMOVE=y
 CONFIG_DM_DMA=y
 CONFIG_REGMAP=y
 CONFIG_SPL_REGMAP=y
@@ -279,10 +277,12 @@ CONFIG_FS_CRAMFS=y
 # CONFIG_SPL_USE_TINY_PRINTF is not set
 CONFIG_CMD_DHRYSTONE=y
 CONFIG_RSA_VERIFY_WITH_PKEY=y
-CONFIG_X509_CERTIFICATE_PARSER=y
-CONFIG_PKCS7_MESSAGE_PARSER=y
 CONFIG_TPM=y
 CONFIG_ZSTD=y
 CONFIG_SPL_LZMA=y
 CONFIG_ERRNO_STR=y
 CONFIG_SPL_LMB=y
+CONFIG_UNIT_TEST=y
+CONFIG_SPL_UNIT_TEST=y
+CONFIG_UT_TIME=y
+CONFIG_UT_DM=y
diff --git a/configs/sandbox_spl_defconfig b/configs/sandbox_spl_defconfig
index 3dd4c7ab43..31ccdbd502 100644
--- a/configs/sandbox_spl_defconfig
+++ b/configs/sandbox_spl_defconfig
@@ -5,7 +5,6 @@ CONFIG_SPL_LIBGENERIC_SUPPORT=y
 CONFIG_NR_DRAM_BANKS=1
 CONFIG_ENV_SIZE=0x2000
 CONFIG_DEFAULT_DEVICE_TREE="sandbox"
-CONFIG_OF_LIBFDT_OVERLAY=y
 CONFIG_DM_RESET=y
 CONFIG_SPL_SERIAL=y
 CONFIG_SPL_DRIVERS_MISC=y
@@ -108,7 +107,6 @@ CONFIG_NETCONSOLE=y
 CONFIG_IP_DEFRAG=y
 CONFIG_BOOTP_SERVERIP=y
 CONFIG_SPL_DM=y
-CONFIG_SPL_DM_DEVICE_REMOVE=y
 CONFIG_DM_DMA=y
 CONFIG_REGMAP=y
 CONFIG_SPL_REGMAP=y
@@ -245,8 +243,6 @@ CONFIG_FS_CRAMFS=y
 # CONFIG_SPL_USE_TINY_PRINTF is not set
 CONFIG_CMD_DHRYSTONE=y
 CONFIG_RSA_VERIFY_WITH_PKEY=y
-CONFIG_X509_CERTIFICATE_PARSER=y
-CONFIG_PKCS7_MESSAGE_PARSER=y
 CONFIG_TPM=y
 CONFIG_SPL_CRC8=y
 CONFIG_ZSTD=y
@@ -254,3 +250,7 @@ CONFIG_SPL_LZMA=y
 CONFIG_ERRNO_STR=y
 CONFIG_SPL_HEXDUMP=y
 CONFIG_SPL_LMB=y
+CONFIG_UNIT_TEST=y
+CONFIG_SPL_UNIT_TEST=y
+CONFIG_UT_TIME=y
+CONFIG_UT_DM=y
diff --git a/configs/sandbox_vpl_defconfig b/configs/sandbox_vpl_defconfig
index a8e9e16746..3ec64fd982 100644
--- a/configs/sandbox_vpl_defconfig
+++ b/configs/sandbox_vpl_defconfig
@@ -5,7 +5,6 @@ CONFIG_NR_DRAM_BANKS=1
 CONFIG_ENV_SIZE=0x2000
 CONFIG_DEFAULT_DEVICE_TREE="sandbox"
 CONFIG_SPL_TEXT_BASE=0x100000
-CONFIG_OF_LIBFDT_OVERLAY=y
 CONFIG_DM_RESET=y
 CONFIG_SPL_MMC=y
 CONFIG_SPL_SERIAL=y
@@ -119,7 +118,6 @@ CONFIG_NETCONSOLE=y
 CONFIG_IP_DEFRAG=y
 CONFIG_SPL_DM=y
 CONFIG_TPL_DM=y
-CONFIG_SPL_DM_DEVICE_REMOVE=y
 CONFIG_SPL_DM_SEQ_ALIAS=y
 CONFIG_DM_DMA=y
 CONFIG_REGMAP=y
@@ -254,3 +252,7 @@ CONFIG_TPM=y
 CONFIG_ZSTD=y
 # CONFIG_VPL_LZMA is not set
 CONFIG_ERRNO_STR=y
+CONFIG_UNIT_TEST=y
+CONFIG_SPL_UNIT_TEST=y
+CONFIG_UT_TIME=y
+CONFIG_UT_DM=y
diff --git a/configs/snow_defconfig b/configs/snow_defconfig
index 637c51d2c2..2c0757194b 100644
--- a/configs/snow_defconfig
+++ b/configs/snow_defconfig
@@ -19,7 +19,6 @@ CONFIG_ENV_OFFSET=0x3FC000
 CONFIG_ENV_SECT_SIZE=0x4000
 CONFIG_DEFAULT_DEVICE_TREE="exynos5250-snow"
 CONFIG_SPL_TEXT_BASE=0x02023400
-CONFIG_OF_LIBFDT_OVERLAY=y
 CONFIG_SPL=y
 CONFIG_DEBUG_UART_BASE=0x12c30000
 CONFIG_DEBUG_UART_CLOCK=100000000
@@ -108,3 +107,4 @@ CONFIG_VIDEO_BRIDGE_PARADE_PS862X=y
 CONFIG_VIDEO_BRIDGE_NXP_PTN3460=y
 CONFIG_TPM=y
 CONFIG_ERRNO_STR=y
+CONFIG_UNIT_TEST=y

Hi Sughosh,
On Wed, 24 Jul 2024 at 00:06, Sughosh Ganu sughosh.ganu@linaro.org wrote:
All the changes needed for making the LMB memory map persistent and global have been made, including making corresponding changes in the test code. Re-enable the unit tests on the platforms.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
 configs/sandbox64_defconfig        | 4 +++-
 configs/sandbox_defconfig          | 7 +++----
 configs/sandbox_flattree_defconfig | 4 +++-
 configs/sandbox_noinst_defconfig   | 8 ++++----
 configs/sandbox_spl_defconfig      | 8 ++++----
 configs/sandbox_vpl_defconfig      | 6 ++++--
 configs/snow_defconfig             | 2 +-
 7 files changed, 22 insertions(+), 17 deletions(-)
It would be much better to change the tests as needed, so they keep passing.
Regards, Simon

On Fri, 26 Jul 2024 at 05:03, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:06, Sughosh Ganu sughosh.ganu@linaro.org wrote:
All the changes needed for making the LMB memory map persistent and global have been made, including making corresponding changes in the test code. Re-enable the unit tests on the platforms.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
 configs/sandbox64_defconfig        | 4 +++-
 configs/sandbox_defconfig          | 7 +++----
 configs/sandbox_flattree_defconfig | 4 +++-
 configs/sandbox_noinst_defconfig   | 8 ++++----
 configs/sandbox_spl_defconfig      | 8 ++++----
 configs/sandbox_vpl_defconfig      | 6 ++++--
 configs/snow_defconfig             | 2 +-
 7 files changed, 22 insertions(+), 17 deletions(-)
It would be much better to change the tests as needed, so they keep passing.
What issue do you see with this approach? If I have to do what you suggest, I will have to put all the test-related changes in the same commit that makes the LMB map persistent and global. I have seen this kind of approach taken elsewhere, so I am really not sure what the problem with it is.
-sughosh
Regards, Simon

Hi Sughosh,
On Mon, 29 Jul 2024 at 02:58, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Fri, 26 Jul 2024 at 05:03, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Wed, 24 Jul 2024 at 00:06, Sughosh Ganu sughosh.ganu@linaro.org wrote:
All the changes needed for making the LMB memory map persistent and global have been made, including making corresponding changes in the test code. Re-enable the unit tests on the platforms.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since rfc: New patch
 configs/sandbox64_defconfig        | 4 +++-
 configs/sandbox_defconfig          | 7 +++----
 configs/sandbox_flattree_defconfig | 4 +++-
 configs/sandbox_noinst_defconfig   | 8 ++++----
 configs/sandbox_spl_defconfig      | 8 ++++----
 configs/sandbox_vpl_defconfig      | 6 ++++--
 configs/snow_defconfig             | 2 +-
 7 files changed, 22 insertions(+), 17 deletions(-)
It would be much better to change the tests as needed, so they keep passing.
What issue do you see with this approach? If I have to do what you suggest, I will have to put all the test-related changes in the same commit that makes the LMB map persistent and global. I have seen this kind of approach taken elsewhere, so I am really not sure what the problem with it is.
Yes, normally when code changes the tests change too, otherwise the tests are broken on that commit. Disabling them doesn't resolve that.
Once you figure out the save/restore thing you will find the tests quite easy. I think part of the problem is that you are making the lmb tests 'special'.
Regards, Simon