[RFC PATCH v2 00/48] Make U-Boot memory reservations coherent

The aim of this patch series is to fix the current incoherence between modules when it comes to memory usage. The primary issue this series addresses is that the EFI memory module, which is responsible for allocating and freeing memory, has no visibility of the memory being used by the LMB module. This is further complicated by the fact that LMB allocations are caller specific -- the LMB memory map is neither global nor persistent, so memory "allocated" by the LMB module might be relevant only within a given function. Hence one of the requirements for making memory usage visible across modules is to make LMB allocations persistent and global, and then to provide a means to communicate memory use across modules.
The first set of patches in this series makes the LMB memory map persistent and global. This is done keeping in mind platforms where the same memory region can be used to load multiple different images. What is not allowed is overwriting memory that has been allocated by the other module, currently the EFI memory module. This is achieved by introducing a new flag, LMB_NOOVERWRITE, which marks memory that cannot be re-requested once allocated.
A review comment on the earlier version was to do away with the static arrays used for the LMB lists of free and used memory. This version uses the alloced list (alist) data structure for the LMB lists.
The second set of patches changes the EFI memory module to use the LMB functions for allocating and freeing memory. A *_flags() variant of the LMB APIs has been introduced for this purpose. The earlier version used a notification mechanism from both the LMB and EFI modules to maintain memory coherence; based on review comments from the EFI maintainers, this version uses the LMB API functions directly for the memory allocations.
Patches 1 - 4 are from Simon Glass and add support for the alloced list data structure.
Patches 5 - 33 make the LMB memory map persistent and global.
Patches 34 - 48 make EFI memory allocations work with the LMB APIs.
Note:
* Once the general direction of these patches has been agreed upon, I plan to split these patches into two series, with the above split.
* I am running the CI with a patch from Rasmus Villemoes, RFC-test-cyclic-try-to-avoid-spurious-test-failures-due-to-cyclic-callbacks.patch. Without this patch I get spurious watchdog timeout errors.
* Because this is common code, and I was not able to disable the LMB config (some code under boot/ fails to build), all of the patches need to be applied together when testing.
Todo's
------
A test needs to be written covering the various scenarios of memory being allocated and freed by the different modules, namely LMB and EFI. I have written a couple of commands for testing the changes that I have made, and will work on the test once there is agreement on the patches.
Secondly, there were comments on the earlier series about things like code-size impact, but I have not looked at those yet. I will address these aspects in the following version.
Simon Glass (4):
  malloc: Support testing with realloc()
  lib: Handle a special case with str_to_list()
  alist: Add support for an allocated pointer list
  lib: Convert str_to_list() to use alist
Sughosh Ganu (44):
  alist: add a couple of helper functions
  alist: add a function declaration for alist_expand_by()
  lmb: remove the unused lmb_is_reserved() function
  lmb: staticize __lmb_alloc_base()
  lmb: remove call to lmb_init()
  lmb: remove local instances of the lmb structure variable
  lmb: pass a flag to image_setup_libfdt() for lmb reservations
  lmb: allow for resizing lmb regions
  lmb: make LMB memory map persistent and global
  lmb: remove config symbols used for lmb region count
  test: lmb: remove the test for max regions
  lmb: config: add lmb config symbols for SPL
  lmb: allow lmb module to be used in SPL
  lmb: introduce a function to add memory to the lmb memory map
  lmb: remove the lmb_init_and_reserve() function
  lmb: reserve common areas during board init
  lmb: remove lmb_init_and_reserve_range() function
  lmb: init: initialise the lmb data structures during board init
  lmb: use the BIT macro for lmb flags
  lmb: add a common implementation of arch_lmb_reserve()
  sandbox: spl: enable lmb in SPL
  sandbox: iommu: remove lmb allocation in the driver
  zynq: lmb: do not add to lmb map before relocation
  test: cedit: use allocated address for reading file
  test: lmb: tweak the tests for the persistent lmb memory map
  test: lmb: run lmb tests only manually
  test: lmb: add a separate class of unit tests for lmb
  test: lmb: invoke the LMB unit tests from a separate script
  test: bdinfo: dump the global LMB memory map
  lmb: add versions of the lmb API with flags
  lmb: add a flag to allow suppressing memory map change notification
  efi: memory: use the lmb API's for allocating and freeing memory
  event: add event to notify lmb memory map changes
  lib: Kconfig: add a config symbol for getting lmb memory map updates
  add a function to check if an address is in RAM memory
  lmb: notify of any changes to the LMB memory map
  efi_memory: add an event handler to update memory map
  ti: k3: remove efi_add_known_memory() function definition
  layerscape: use the lmb API's to add RAM memory
  x86: e820: use the lmb API for adding RAM memory
  efi_memory: do not add RAM memory to the memory map
  lmb: mark the EFI runtime memory regions as reserved
  test: event: update the expected event dump output
  temp: mx6sabresd: bump up the size limit of the board
 arch/arc/lib/cache.c                    |  14 -
 arch/arm/cpu/armv8/fsl-layerscape/cpu.c |   8 +-
 arch/arm/lib/stack.c                    |  14 -
 arch/arm/mach-apple/board.c             |  17 +-
 arch/arm/mach-k3/common.c               |  11 -
 arch/arm/mach-snapdragon/board.c        |  17 +-
 arch/arm/mach-stm32mp/dram_init.c       |   8 +-
 arch/arm/mach-stm32mp/stm32mp1/cpu.c    |   7 +-
 arch/m68k/lib/bootm.c                   |  20 +-
 arch/microblaze/lib/bootm.c             |  14 -
 arch/mips/lib/bootm.c                   |  22 +-
 arch/nios2/lib/bootm.c                  |  13 -
 arch/powerpc/cpu/mpc85xx/mp.c           |   4 +-
 arch/powerpc/include/asm/mp.h           |   4 +-
 arch/powerpc/lib/bootm.c                |  25 +-
 arch/riscv/lib/bootm.c                  |  13 -
 arch/sh/lib/bootm.c                     |  13 -
 arch/x86/lib/bootm.c                    |  18 -
 arch/x86/lib/e820.c                     |  47 +-
 arch/xtensa/lib/bootm.c                 |  13 -
 board/xilinx/common/board.c             |  33 -
 boot/bootm.c                            |  37 +-
 boot/bootm_os.c                         |   5 +-
 boot/image-board.c                      |  36 +-
 boot/image-fdt.c                        |  36 +-
 cmd/bdinfo.c                            |   5 +-
 cmd/booti.c                             |   2 +-
 cmd/bootz.c                             |   2 +-
 cmd/elf.c                               |   2 +-
 cmd/load.c                              |   7 +-
 common/board_r.c                        |   9 +
 common/dlmalloc.c                       |   4 +
 common/event.c                          |   2 +
 common/spl/spl.c                        |   4 +
 configs/a3y17lte_defconfig              |   1 -
 configs/a5y17lte_defconfig              |   1 -
 configs/a7y17lte_defconfig              |   1 -
 configs/apple_m1_defconfig              |   1 -
 configs/mt7981_emmc_rfb_defconfig       |   1 -
 configs/mt7981_rfb_defconfig            |   1 -
 configs/mt7981_sd_rfb_defconfig         |   1 -
 configs/mt7986_rfb_defconfig            |   1 -
 configs/mt7986a_bpir3_emmc_defconfig    |   1 -
 configs/mt7986a_bpir3_sd_defconfig      |   1 -
 configs/mt7988_rfb_defconfig            |   1 -
 configs/mt7988_sd_rfb_defconfig         |   1 -
 configs/mx6sabresd_defconfig            |   2 +-
 configs/qcom_defconfig                  |   1 -
 configs/sandbox_spl_defconfig           |   1 +
 configs/stm32mp13_defconfig             |   3 -
 configs/stm32mp15_basic_defconfig       |   3 -
 configs/stm32mp15_defconfig             |   3 -
 configs/stm32mp15_trusted_defconfig     |   3 -
 configs/stm32mp25_defconfig             |   3 -
 configs/th1520_lpi4a_defconfig          |   1 -
 drivers/iommu/apple_dart.c              |   8 +-
 drivers/iommu/sandbox_iommu.c           |  17 +-
 fs/fs.c                                 |  10 +-
 include/alist.h                         | 236 +++++++
 include/efi_loader.h                    |  12 +-
 include/event.h                         |  14 +
 include/image.h                         |  27 +-
 include/lmb.h                           | 146 ++---
 include/test/suites.h                   |   1 +
 lib/Kconfig                             |  52 +-
 lib/Makefile                            |   3 +-
 lib/alist.c                             | 154 +++++
 lib/efi_loader/Kconfig                  |   2 +
 lib/efi_loader/efi_dt_fixup.c           |   2 +-
 lib/efi_loader/efi_helper.c             |   2 +-
 lib/efi_loader/efi_memory.c             | 187 ++----
 lib/lmb.c                               | 745 ++++++++++++++-------
 lib/strto.c                             |  33 +-
 net/tftp.c                              |  11 +-
 net/wget.c                              |   9 +-
 test/Kconfig                            |   9 +
 test/Makefile                           |   1 +
 test/boot/cedit.c                       |   6 +-
 test/cmd/bdinfo.c                       |  39 +-
 test/cmd_ut.c                           |   7 +
 test/lib/Makefile                       |   2 +-
 test/lib/alist.c                        | 197 ++++++
 test/lib/lmb.c                          | 825 ------------------------
 test/lmb_ut.c                           | 811 +++++++++++++++++++++++
 test/py/tests/test_event_dump.py        |   1 +
 test/py/tests/test_lmb.py               |  24 +
 test/str_ut.c                           |   4 +-
 87 files changed, 2346 insertions(+), 1769 deletions(-)
 create mode 100644 include/alist.h
 create mode 100644 lib/alist.c
 create mode 100644 test/lib/alist.c
 delete mode 100644 test/lib/lmb.c
 create mode 100644 test/lmb_ut.c
 create mode 100644 test/py/tests/test_lmb.py

From: Simon Glass sjg@chromium.org
At present in tests it is possible to cause an out-of-memory condition with malloc() but not realloc(). Add support to realloc() too, so code which uses that function can be tested.
Signed-off-by: Simon Glass sjg@chromium.org
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
 common/dlmalloc.c | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 9549c59f35..65b130fabc 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -1758,6 +1758,10 @@ Void_t* rEALLOc_impl(oldmem, bytes) Void_t* oldmem; size_t bytes;
 		panic("pre-reloc realloc() is not supported");
 	}
 #endif
+	if (CONFIG_IS_ENABLED(UNIT_TEST) && malloc_testing) {
+		if (--malloc_max_allocs < 0)
+			return NULL;
+	}
 
 	newp = oldp = mem2chunk(oldmem);
 	newsize = oldsize = chunksize(oldp);

From: Simon Glass sjg@chromium.org
The current implementation can return an extra result at the end when the string ends with a space. Fix this by adding a special case.
Signed-off-by: Simon Glass sjg@chromium.org
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
 lib/strto.c   | 4 +++-
 test/str_ut.c | 4 +---
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/strto.c b/lib/strto.c
index 5157332d6c..f83ac67c66 100644
--- a/lib/strto.c
+++ b/lib/strto.c
@@ -236,12 +236,14 @@ const char **str_to_list(const char *instr)
 		return NULL;
 
 	/* count the number of space-separated strings */
-	for (count = *str != '\0', p = str; *p; p++) {
+	for (count = 0, p = str; *p; p++) {
 		if (*p == ' ') {
 			count++;
 			*p = '\0';
 		}
 	}
+	if (p != str && p[-1])
+		count++;
 
 	/* allocate the pointer array, allowing for a NULL terminator */
 	ptr = calloc(count + 1, sizeof(char *));
diff --git a/test/str_ut.c b/test/str_ut.c
index 389779859a..96e048975d 100644
--- a/test/str_ut.c
+++ b/test/str_ut.c
@@ -342,9 +342,7 @@ static int test_str_to_list(struct unit_test_state *uts)
 	ut_asserteq_str("space", ptr[3]);
 	ut_assertnonnull(ptr[4]);
 	ut_asserteq_str("", ptr[4]);
-	ut_assertnonnull(ptr[5]);
-	ut_asserteq_str("", ptr[5]);
-	ut_assertnull(ptr[6]);
+	ut_assertnull(ptr[5]);
 	str_free_list(ptr);
 	ut_assertok(ut_check_delta(start));

From: Simon Glass sjg@chromium.org
In various places it is useful to have an array of structures, but allow it to grow. In some cases we work around this by setting a maximum number of entries via a Kconfig option. In other places we use a linked list, which does not provide random access and can complicate the code.

Introduce a new data structure, which is a variable-sized list of structs, each of the same pre-set size. It provides O(1) access and is reasonably efficient at expanding, since it doubles in size when it runs out of space.
Signed-off-by: Simon Glass sjg@chromium.org
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
 include/alist.h   | 205 ++++++++++++++++++++++++++++++++++++++++++++++
 lib/Makefile      |   1 +
 lib/alist.c       | 154 ++++++++++++++++++++++++++++++++++
 test/lib/Makefile |   1 +
 test/lib/alist.c  | 197 ++++++++++++++++++++++++++++++++++++++++++++
 5 files changed, 558 insertions(+)
 create mode 100644 include/alist.h
 create mode 100644 lib/alist.c
 create mode 100644 test/lib/alist.c
diff --git a/include/alist.h b/include/alist.h
new file mode 100644
index 0000000000..a68afc9fff
--- /dev/null
+++ b/include/alist.h
@@ -0,0 +1,205 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Handles a contiguous list of pointers which can be allocated and freed
+ *
+ * Copyright 2023 Google LLC
+ * Written by Simon Glass sjg@chromium.org
+ */
+
+#ifndef __ALIST_H
+#define __ALIST_H
+
+#include <stdbool.h>
+#include <linux/bitops.h>
+#include <linux/types.h>
+
+/**
+ * struct alist - object list that can be allocated and freed
+ *
+ * Holds a list of objects, each of the same size. The object is typically a
+ * C struct. The array is alloced in memory and can change in size.
+ *
+ * The list remembers the size of the list, but has a separate count of how
+ * much space is allocated. This allows it to increase in size in steps as more
+ * elements are added, which is more efficient than reallocating the list every
+ * time a single item is added.
+ *
+ * Two types of access are provided:
+ *
+ * alist_get...(index)
+ *	gets an existing element, if its index is less than the size
+ *
+ * alist_ensure(index)
+ *	addresses an existing element, or creates a new one if not present
+ *
+ * @data: object data of size `@obj_size * @alloc`. The list can grow as
+ *	needed but never shrinks
+ * @obj_size: Size of each object in bytes
+ * @count: number of objects in array
+ * @alloc: allocated length of array, to which @count can grow
+ * @flags: flags for the alist (ALISTF_...)
+ */
+struct alist {
+	void *data;
+	u16 obj_size;
+	u16 count;
+	u16 alloc;
+	u16 flags;
+};
+
+/**
+ * enum alist_flags - Flags for the alist
+ *
+ * @ALISTF_FAIL: true if any allocation has failed. Once this has happened, the
+ *	alist is dead and cannot grow further
+ */
+enum alist_flags {
+	ALISTF_FAIL	= BIT(0),
+};
+
+/**
+ * alist_has() - Check if an index is within the list range
+ *
+ * Checks if index is within the current alist count
+ *
+ * @lst: alist to check
+ * @index: Index to check
+ * Returns: true if the index is within the list, else false
+ */
+static inline bool alist_has(struct alist *lst, uint index)
+{
+	return index < lst->count;
+}
+
+/**
+ * alist_err() - Check if the alist is still valid
+ *
+ * @lst: List to check
+ * Return: false if OK, true if any previous allocation failed
+ */
+static inline bool alist_err(struct alist *lst)
+{
+	return lst->flags & ALISTF_FAIL;
+}
+
+/**
+ * alist_get_ptr() - Get the value of a pointer
+ *
+ * @lst: alist to check
+ * @index: Index to read from
+ * Returns: pointer, if present, else NULL
+ */
+const void *alist_get_ptr(struct alist *lst, uint index);
+
+/**
+ * alist_getd() - Get the value of a pointer directly, with no checking
+ *
+ * This must only be called on indexes for which alist_has() returns true
+ *
+ * @lst: alist to check
+ * @index: Index to read from
+ * Returns: pointer value (may be NULL)
+ */
+static inline const void *alist_getd(struct alist *lst, uint index)
+{
+	return lst->data + index * lst->obj_size;
+}
+
+#define alist_get(_lst, _index, _struct)	\
+	((const _struct *)alist_get_ptr(_lst, _index))
+
+/**
+ * alist_ensure_ptr() - Ensure an object exists at a given index
+ *
+ * This provides read/write access to an array element. If it does not exist,
+ * it is allocated, ready for the caller to store the object into
+ *
+ * Allocates an object at the given index if needed
+ *
+ * @lst: alist to check
+ * @index: Index to address
+ * Returns: pointer where struct can be read/written, or NULL if out of memory
+ */
+void *alist_ensure_ptr(struct alist *lst, uint index);
+
+/**
+ * alist_ensure() - Address a struct with the correct object type
+ *
+ * Use as:
+ *	struct my_struct *ptr = alist_ensure(&lst, 4, struct my_struct);
+ */
+#define alist_ensure(_lst, _index, _struct)	\
+	((_struct *)alist_ensure_ptr(_lst, _index))
+
+/**
+ * alist_add_ptr() - Add a new object to the list
+ *
+ * @lst: alist to add to
+ * @obj: Pointer to object to copy in
+ * Returns: pointer to where the object was copied, or NULL if out of memory
+ */
+void *alist_add_ptr(struct alist *lst, void *obj);
+
+/**
+ * alist_add() - Used to add an object with the correct type
+ *
+ * Use as:
+ *	struct my_struct obj;
+ *	struct my_struct *ptr = alist_add(&lst, &obj, struct my_struct);
+ */
+#define alist_add(_lst, _obj, _struct)	\
+	((_struct *)alist_add_ptr(_lst, (_struct *)(_obj)))
+
+/**
+ * alist_init() - Set up a new object list
+ *
+ * Sets up a list of objects, initially empty
+ *
+ * @lst: alist to set up
+ * @obj_size: Size of each element in bytes
+ * @alloc_size: Number of items allowed at the start, before reallocation is
+ *	needed (0 to start with no space)
+ * Return: true if OK, false if out of memory
+ */
+bool alist_init(struct alist *lst, uint obj_size, uint alloc_size);
+
+#define alist_init_struct(_lst, _struct)	\
+	alist_init(_lst, sizeof(_struct), 0)
+
+/**
+ * alist_uninit_move_ptr() - Return the allocated contents and uninit the alist
+ *
+ * This returns the alist data to the caller, so that the caller receives data
+ * that it can be sure will hang around. The caller is responsible for freeing
+ * the data.
+ *
+ * If the alist size is 0, this returns NULL
+ *
+ * The alist is uninited as part of this.
+ *
+ * The alist must be inited before this can be called.
+ *
+ * @alist: alist to uninit
+ * @countp: if non-NULL, returns the number of objects in the returned data
+ *	(which is @alist->count)
+ * Return: data contents, allocated with malloc(), or NULL if the data could not
+ *	be allocated, or the data size is 0
+ */
+void *alist_uninit_move_ptr(struct alist *alist, size_t *countp);
+
+/**
+ * alist_uninit_move() - Typed version of alist_uninit_move_ptr()
+ */
+#define alist_uninit_move(_lst, _countp, _struct)	\
+	(_struct *)alist_uninit_move_ptr(_lst, _countp)
+
+/**
+ * alist_uninit() - Free any memory used by an alist
+ *
+ * The alist must be inited before this can be called.
+ *
+ * @alist: alist to uninit
+ */
+void alist_uninit(struct alist *alist);
+
+#endif /* __ALIST_H */
diff --git a/lib/Makefile b/lib/Makefile
index e389ad014f..81b503ab52 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -147,6 +147,7 @@ endif
 obj-$(CONFIG_$(SPL_)OID_REGISTRY) += oid_registry.o
 
 obj-y += abuf.o
+obj-y += alist.o
 obj-y += date.o
 obj-y += rtc-lib.o
 obj-$(CONFIG_LIB_ELF) += elf.o
diff --git a/lib/alist.c b/lib/alist.c
new file mode 100644
index 0000000000..0168bfe79d
--- /dev/null
+++ b/lib/alist.c
@@ -0,0 +1,154 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Handles a contiguous list of pointers which can be allocated and freed
+ *
+ * Copyright 2023 Google LLC
+ * Written by Simon Glass sjg@chromium.org
+ */
+
+#include <alist.h>
+#include <display_options.h>
+#include <malloc.h>
+#include <stdio.h>
+#include <string.h>
+
+enum {
+	ALIST_INITIAL_SIZE	= 4,	/* default size of unsized list */
+};
+
+bool alist_init(struct alist *lst, uint obj_size, uint start_size)
+{
+	/* Avoid realloc for the initial size to help malloc_simple */
+	memset(lst, '\0', sizeof(struct alist));
+	if (start_size) {
+		lst->data = calloc(obj_size, start_size);
+		if (!lst->data) {
+			lst->flags = ALISTF_FAIL;
+			return false;
+		}
+		lst->alloc = start_size;
+	}
+	lst->obj_size = obj_size;
+
+	return true;
+}
+
+void alist_uninit(struct alist *lst)
+{
+	free(lst->data);
+
+	/* Clear fields to avoid any confusion */
+	memset(lst, '\0', sizeof(struct alist));
+}
+
+/**
+ * alist_expand_to() - Expand a list to the given size
+ *
+ * @lst: List to modify
+ * @new_alloc: Allocated size to expand to
+ * Return: true if OK, false if out of memory
+ */
+static bool alist_expand_to(struct alist *lst, uint new_alloc)
+{
+	void *new_data;
+
+	if (lst->flags & ALISTF_FAIL)
+		return false;
+	new_data = realloc(lst->data, lst->obj_size * new_alloc);
+	if (!new_data) {
+		lst->flags |= ALISTF_FAIL;
+		return false;
+	}
+	memset(new_data + lst->obj_size * lst->alloc, '\0',
+	       lst->obj_size * (new_alloc - lst->alloc));
+	lst->alloc = new_alloc;
+	lst->data = new_data;
+
+	return true;
+}
+
+/**
+ * alist_expand_by() - Expand a list by the given amount
+ *
+ * @lst: alist to expand
+ * @inc_by: Amount to expand by
+ * Return: true if OK, false if out of memory
+ */
+bool alist_expand_by(struct alist *lst, uint inc_by)
+{
+	return alist_expand_to(lst, lst->alloc + inc_by);
+}
+
+/**
+ * alist_expand_min() - Expand to at least the provided size
+ *
+ * Expands to the lowest power of two which can incorporate the new size
+ *
+ * @lst: alist to expand
+ * @min_alloc: Minimum new allocated size; if 0 then ALIST_INITIAL_SIZE is used
+ * Return: true if OK, false if out of memory
+ */
+static bool alist_expand_min(struct alist *lst, uint min_alloc)
+{
+	uint new_alloc;
+
+	for (new_alloc = lst->alloc ?: ALIST_INITIAL_SIZE;
+	     new_alloc < min_alloc;)
+		new_alloc *= 2;
+
+	return alist_expand_to(lst, new_alloc);
+}
+
+const void *alist_get_ptr(struct alist *lst, uint index)
+{
+	if (index >= lst->count)
+		return NULL;
+
+	return lst->data + index * lst->obj_size;
+}
+
+void *alist_ensure_ptr(struct alist *lst, uint index)
+{
+	uint minsize = index + 1;
+	void *ptr;
+
+	if (index >= lst->alloc && !alist_expand_min(lst, minsize))
+		return NULL;
+
+	ptr = lst->data + index * lst->obj_size;
+	if (minsize >= lst->count)
+		lst->count = minsize;
+
+	return ptr;
+}
+
+void *alist_add_ptr(struct alist *lst, void *obj)
+{
+	void *ptr;
+
+	ptr = alist_ensure_ptr(lst, lst->count);
+	if (!ptr)
+		return ptr;
+	memcpy(ptr, obj, lst->obj_size);
+
+	return ptr;
+}
+
+void *alist_uninit_move_ptr(struct alist *alist, size_t *countp)
+{
+	void *ptr;
+
+	if (countp)
+		*countp = alist->count;
+	if (!alist->count) {
+		alist_uninit(alist);
+		return NULL;
+	}
+
+	ptr = alist->data;
+
+	/* Clear everything out so there is no record of the data */
+	alist_init(alist, alist->obj_size, 0);
+
+	return ptr;
+}
diff --git a/test/lib/Makefile b/test/lib/Makefile
index e75a263e6a..70f14c46b1 100644
--- a/test/lib/Makefile
+++ b/test/lib/Makefile
@@ -5,6 +5,7 @@ ifeq ($(CONFIG_SPL_BUILD),)
 obj-y += cmd_ut_lib.o
 obj-y += abuf.o
+obj-y += alist.o
 obj-$(CONFIG_EFI_LOADER) += efi_device_path.o
 obj-$(CONFIG_EFI_SECURE_BOOT) += efi_image_region.o
 obj-y += hexdump.o
diff --git a/test/lib/alist.c b/test/lib/alist.c
new file mode 100644
index 0000000000..f9050a963e
--- /dev/null
+++ b/test/lib/alist.c
@@ -0,0 +1,197 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright 2023 Google LLC
+ * Written by Simon Glass sjg@chromium.org
+ */
+
+#include <alist.h>
+#include <string.h>
+#include <test/lib.h>
+#include <test/test.h>
+#include <test/ut.h>
+
+struct my_struct {
+	uint val;
+	uint other_val;
+};
+
+enum {
+	obj_size	= sizeof(struct my_struct),
+};
+
+/* Test alist_init() */
+static int lib_test_alist_init(struct unit_test_state *uts)
+{
+	struct alist lst;
+	ulong start;
+
+	start = ut_check_free();
+
+	/* with a size of 0, the fields should be inited, with no memory used */
+	memset(&lst, '\xff', sizeof(lst));
+	ut_assert(alist_init_struct(&lst, struct my_struct));
+	ut_asserteq_ptr(NULL, lst.data);
+	ut_asserteq(0, lst.count);
+	ut_asserteq(0, lst.alloc);
+	ut_assertok(ut_check_delta(start));
+	alist_uninit(&lst);
+	ut_asserteq_ptr(NULL, lst.data);
+	ut_asserteq(0, lst.count);
+	ut_asserteq(0, lst.alloc);
+
+	/* use an impossible size */
+	ut_asserteq(false, alist_init(&lst, obj_size,
+				      CONFIG_SYS_MALLOC_LEN));
+	ut_assertnull(lst.data);
+	ut_asserteq(0, lst.count);
+	ut_asserteq(0, lst.alloc);
+
+	/* use a small size */
+	ut_assert(alist_init(&lst, obj_size, 4));
+	ut_assertnonnull(lst.data);
+	ut_asserteq(0, lst.count);
+	ut_asserteq(4, lst.alloc);
+
+	/* free it */
+	alist_uninit(&lst);
+	ut_asserteq_ptr(NULL, lst.data);
+	ut_asserteq(0, lst.count);
+	ut_asserteq(0, lst.alloc);
+	ut_assertok(ut_check_delta(start));
+
+	/* Check for memory leaks */
+	ut_assertok(ut_check_delta(start));
+
+	return 0;
+}
+LIB_TEST(lib_test_alist_init, 0);
+
+/* Test alist_get() and alist_getd() */
+static int lib_test_alist_get(struct unit_test_state *uts)
+{
+	struct alist lst;
+	ulong start;
+	void *ptr;
+
+	start = ut_check_free();
+
+	ut_assert(alist_init(&lst, obj_size, 3));
+	ut_asserteq(0, lst.count);
+	ut_asserteq(3, lst.alloc);
+
+	ut_assertnull(alist_get_ptr(&lst, 2));
+	ut_assertnull(alist_get_ptr(&lst, 3));
+
+	ptr = alist_ensure_ptr(&lst, 1);
+	ut_assertnonnull(ptr);
+	ut_asserteq(2, lst.count);
+	ptr = alist_ensure_ptr(&lst, 2);
+	ut_asserteq(3, lst.count);
+	ut_assertnonnull(ptr);
+
+	ptr = alist_ensure_ptr(&lst, 3);
+	ut_assertnonnull(ptr);
+	ut_asserteq(4, lst.count);
+	ut_asserteq(6, lst.alloc);
+
+	ut_assertnull(alist_get_ptr(&lst, 4));
+
+	alist_uninit(&lst);
+
+	/* Check for memory leaks */
+	ut_assertok(ut_check_delta(start));
+
+	return 0;
+}
+LIB_TEST(lib_test_alist_get, 0);
+
+/* Test alist_has() */
+static int lib_test_alist_has(struct unit_test_state *uts)
+{
+	struct alist lst;
+	ulong start;
+	void *ptr;
+
+	start = ut_check_free();
+
+	ut_assert(alist_init(&lst, obj_size, 3));
+
+	ut_assert(!alist_has(&lst, 0));
+	ut_assert(!alist_has(&lst, 1));
+	ut_assert(!alist_has(&lst, 2));
+	ut_assert(!alist_has(&lst, 3));
+
+	/* create a new one to force expansion */
+	ptr = alist_ensure_ptr(&lst, 4);
+	ut_assertnonnull(ptr);
+
+	ut_assert(alist_has(&lst, 0));
+	ut_assert(alist_has(&lst, 1));
+	ut_assert(alist_has(&lst, 2));
+	ut_assert(alist_has(&lst, 3));
+	ut_assert(alist_has(&lst, 4));
+	ut_assert(!alist_has(&lst, 5));
+
+	alist_uninit(&lst);
+
+	/* Check for memory leaks */
+	ut_assertok(ut_check_delta(start));
+
+	return 0;
+}
+LIB_TEST(lib_test_alist_has, 0);
+
+/* Test alist_ensure() */
+static int lib_test_alist_ensure(struct unit_test_state *uts)
+{
+	struct my_struct *ptr3, *ptr4;
+	struct alist lst;
+	ulong start;
+
+	start = ut_check_free();
+
+	ut_assert(alist_init_struct(&lst, struct my_struct));
+	ut_asserteq(obj_size, lst.obj_size);
+	ut_asserteq(0, lst.count);
+	ut_asserteq(0, lst.alloc);
+	ptr3 = alist_ensure_ptr(&lst, 3);
+	ut_asserteq(4, lst.count);
+	ut_asserteq(4, lst.alloc);
+	ut_assertnonnull(ptr3);
+	ptr3->val = 3;
+
+	ptr4 = alist_ensure_ptr(&lst, 4);
+	ut_asserteq(8, lst.alloc);
+	ut_asserteq(5, lst.count);
+	ut_assertnonnull(ptr4);
+	ptr4->val = 4;
+	ut_asserteq(4, alist_get(&lst, 4, struct my_struct)->val);
+
+	ut_asserteq_ptr(ptr4, alist_ensure(&lst, 4, struct my_struct));
+
+	alist_ensure(&lst, 4, struct my_struct)->val = 44;
+	ut_asserteq(44, alist_get(&lst, 4, struct my_struct)->val);
+	ut_asserteq(3, alist_get(&lst, 3, struct my_struct)->val);
+	ut_assertnull(alist_get(&lst, 7, struct my_struct));
+	ut_asserteq(8, lst.alloc);
+	ut_asserteq(5, lst.count);
+
+	/* add some more, checking handling of malloc() failure */
+	malloc_enable_testing(0);
+	ut_assertnonnull(alist_ensure(&lst, 7, struct my_struct));
+	ut_assertnull(alist_ensure(&lst, 8, struct my_struct));
+	malloc_disable_testing();
+
+	lst.flags &= ~ALISTF_FAIL;
+	ut_assertnonnull(alist_ensure(&lst, 8, struct my_struct));
+	ut_asserteq(16, lst.alloc);
+	ut_asserteq(9, lst.count);
+
+	alist_uninit(&lst);
+
+	/* Check for memory leaks */
+	ut_assertok(ut_check_delta(start));
+
+	return 0;
+}
+LIB_TEST(lib_test_alist_ensure, 0);

From: Simon Glass sjg@chromium.org
Use this new data structure in the utility function.
Signed-off-by: Simon Glass sjg@chromium.org
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
 lib/strto.c | 35 +++++++++++++++++++---------------
 1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/lib/strto.c b/lib/strto.c
index f83ac67c66..f059408755 100644
--- a/lib/strto.c
+++ b/lib/strto.c
@@ -9,6 +9,7 @@
  * Wirzenius wrote this portably, Torvalds fucked it up :-)
  */
 
+#include <alist.h>
 #include <errno.h>
 #include <malloc.h>
 #include <vsprintf.h>
@@ -226,37 +227,39 @@ void str_to_upper(const char *in, char *out, size_t len)
 
 const char **str_to_list(const char *instr)
 {
-	const char **ptr;
-	char *str, *p;
-	int count, i;
+	struct alist alist;
+	char *str, *p, *start;
 
 	/* don't allocate if the string is empty */
 	str = *instr ? strdup(instr) : (char *)instr;
 	if (!str)
 		return NULL;
 
-	/* count the number of space-separated strings */
-	for (count = 0, p = str; *p; p++) {
+	alist_init_struct(&alist, char *);
+
+	if (*str)
+		alist_add(&alist, &str, char *);
+	for (start = str, p = str; *p; p++) {
 		if (*p == ' ') {
-			count++;
 			*p = '\0';
+			start = p + 1;
+			if (*start)
+				alist_add(&alist, &start, char *);
 		}
 	}
-	if (p != str && p[-1])
-		count++;
 
-	/* allocate the pointer array, allowing for a NULL terminator */
-	ptr = calloc(count + 1, sizeof(char *));
-	if (!ptr) {
-		if (*str)
+	/* terminate list */
+	p = NULL;
+	alist_add(&alist, &p, char *);
+	if (alist_err(&alist)) {
+		alist_uninit(&alist);
+
+		if (*instr)
 			free(str);
 		return NULL;
 	}
 
-	for (i = 0, p = str; i < count; p += strlen(p) + 1, i++)
-		ptr[i] = p;
-
-	return ptr;
+	return alist_uninit_move(&alist, NULL, const char *);
 }
 
 void str_free_list(const char **ptr)

Add a couple of helper functions to detect an empty and full alist.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch

 include/alist.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
diff --git a/include/alist.h b/include/alist.h
index a68afc9fff..bab146c35d 100644
--- a/include/alist.h
+++ b/include/alist.h
@@ -82,6 +82,28 @@ static inline bool alist_err(struct alist *lst)
 	return lst->flags & ALISTF_FAIL;
 }
 
+/**
+ * alist_full() - Check if the alist is full
+ *
+ * @lst: List to check
+ * Return: true if full, false otherwise
+ */
+static inline bool alist_full(struct alist *lst)
+{
+	return lst->count == lst->alloc;
+}
+
+/**
+ * alist_empty() - Check if the alist is empty
+ *
+ * @lst: List to check
+ * Return: true if empty, false otherwise
+ */
+static inline bool alist_empty(struct alist *lst)
+{
+	return !lst->count && lst->alloc;
+}
+
 /**
  * alist_get_ptr() - Get the value of a pointer
  *

The alist_expand_by() function is a global function. Add a declaration for the function in the header.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch

 include/alist.h | 9 +++++++++
 1 file changed, 9 insertions(+)
diff --git a/include/alist.h b/include/alist.h
index bab146c35d..d7cc27c14c 100644
--- a/include/alist.h
+++ b/include/alist.h
@@ -224,4 +224,13 @@ void *alist_uninit_move_ptr(struct alist *alist, size_t *countp);
  */
 void alist_uninit(struct alist *alist);
 
+/**
+ * alist_expand_by() - Expand a list by the given amount
+ *
+ * @lst: alist to expand
+ * @inc_by: Amount to expand by
+ * Return: true if OK, false if out of memory
+ */
+bool alist_expand_by(struct alist *lst, uint inc_by);
+
 #endif /* __ALIST_H */

The lmb_is_reserved() API is not used. There is another API, lmb_is_reserved_flags(), which can be used to check if a particular memory region is reserved. Remove the unused API.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org
---
Changes since V1: None

 include/lmb.h | 11 -----------
 lib/lmb.c     |  5 -----
 2 files changed, 16 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 231b68b27d..6c50d93e83 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -117,17 +117,6 @@ phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
 phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size);
 phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);
 
-/**
- * lmb_is_reserved() - test if address is in reserved region
- *
- * The function checks if a reserved region comprising @addr exists.
- *
- * @lmb: the logical memory block struct
- * @addr: address to be tested
- * Return: 1 if reservation exists, 0 otherwise
- */
-int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr);
-
 /**
  * lmb_is_reserved_flags() - test if address is in reserved region with flag bits set
  *
diff --git a/lib/lmb.c b/lib/lmb.c
index 44f9820531..adc3abd5b4 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -565,11 +565,6 @@ int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags)
 	return 0;
 }
 
-int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr)
-{
-	return lmb_is_reserved_flags(lmb, addr, LMB_NONE);
-}
-
 __weak void board_lmb_reserve(struct lmb *lmb)
 {
 	/* please define platform specific board_lmb_reserve() */

On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The lmb_is_reserved() API is unused. Another API, lmb_is_reserved_flags(), can be used to check whether a particular memory region is reserved. Remove the unused API.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org
Changes since V1: None
include/lmb.h | 11 ----------- lib/lmb.c | 5 ----- 2 files changed, 16 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

The __lmb_alloc_base() function is only called from within the LMB module. Moreover, the lmb_alloc() and lmb_alloc_base() APIs suffice for all allocation calls. Make the __lmb_alloc_base() function static.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org --- Changes since V1: None
include/lmb.h | 2 -- lib/lmb.c | 39 ++++++++++++++++++++------------------- 2 files changed, 20 insertions(+), 21 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index 6c50d93e83..7b87181b9e 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -112,8 +112,6 @@ long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align); phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr); -phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, - phys_addr_t max_addr); phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size); phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);
diff --git a/lib/lmb.c b/lib/lmb.c index adc3abd5b4..4d39c0d1f9 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -435,30 +435,13 @@ static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base, return (i < rgn->cnt) ? i : -1; }
-phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align) -{ - return lmb_alloc_base(lmb, size, align, LMB_ALLOC_ANYWHERE); -} - -phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr) -{ - phys_addr_t alloc; - - alloc = __lmb_alloc_base(lmb, size, align, max_addr); - - if (alloc == 0) - printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n", - (ulong)size, (ulong)max_addr); - - return alloc; -} - static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size) { return addr & ~(size - 1); }
-phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr) +static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, + ulong align, phys_addr_t max_addr) { long i, rgn; phys_addr_t base = 0; @@ -499,6 +482,24 @@ phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phy return 0; }
+phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align) +{ + return lmb_alloc_base(lmb, size, align, LMB_ALLOC_ANYWHERE); +} + +phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr) +{ + phys_addr_t alloc; + + alloc = __lmb_alloc_base(lmb, size, align, max_addr); + + if (alloc == 0) + printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n", + (ulong)size, (ulong)max_addr); + + return alloc; +} + /* * Try to allocate a specific address range: must be in defined memory but not * reserved
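The helper kept alongside the now-static allocator, lmb_align_down(), uses the usual mask trick, which is valid only for power-of-two alignments; the allocator aligns candidate bases downward because it carves allocations from the top of each free region. A sketch of that arithmetic, copied in spirit from the diff above:

```c
typedef unsigned long long phys_addr_t;
typedef unsigned long long phys_size_t;

/* Same mask trick as lib/lmb.c: round addr down to a multiple of size.
 * Only correct when size is a power of two, since ~(size - 1) is then a
 * contiguous high-bit mask. */
static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)
{
	return addr & ~(size - 1);
}
```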

On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The __lmb_alloc_base() function is only called from within the LMB module. Moreover, the lmb_alloc() and lmb_alloc_base() APIs suffice for all allocation calls. Make the __lmb_alloc_base() function static.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org
Changes since V1: None
include/lmb.h | 2 -- lib/lmb.c | 39 ++++++++++++++++++++------------------- 2 files changed, 20 insertions(+), 21 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

The LMB module will be changed to have persistent and global memory maps of available and used memory. With this change, there won't be any need to explicitly initialise the LMB memory maps. Remove the call to the lmb_init() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since V1: New patch
arch/arm/mach-stm32mp/dram_init.c | 1 - board/xilinx/common/board.c | 1 - drivers/iommu/apple_dart.c | 1 - drivers/iommu/sandbox_iommu.c | 1 - include/lmb.h | 1 - lib/lmb.c | 18 ------------------ test/lib/lmb.c | 18 ------------------ 7 files changed, 41 deletions(-)
diff --git a/arch/arm/mach-stm32mp/dram_init.c b/arch/arm/mach-stm32mp/dram_init.c index 6024959b97..a5437e4e55 100644 --- a/arch/arm/mach-stm32mp/dram_init.c +++ b/arch/arm/mach-stm32mp/dram_init.c @@ -59,7 +59,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) gd->ram_top = clamp_val(gd->ram_top, 0, SZ_4G - 1);
/* found enough not-reserved memory to relocated U-Boot */ - lmb_init(&lmb); lmb_add(&lmb, gd->ram_base, gd->ram_top - gd->ram_base); boot_fdt_add_mem_rsv_regions(&lmb, (void *)gd->fdt_blob); /* add 8M for reserved memory for display, fdt, gd,... */ diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index 0b43407b9e..61dc37964d 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -684,7 +684,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) panic("Not 64bit aligned DT location: %p\n", gd->fdt_blob);
/* found enough not-reserved memory to relocated U-Boot */ - lmb_init(&lmb); lmb_add(&lmb, gd->ram_base, gd->ram_size); boot_fdt_add_mem_rsv_regions(&lmb, (void *)gd->fdt_blob); size = ALIGN(CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE); diff --git a/drivers/iommu/apple_dart.c b/drivers/iommu/apple_dart.c index 9327dea1e3..3e59490973 100644 --- a/drivers/iommu/apple_dart.c +++ b/drivers/iommu/apple_dart.c @@ -213,7 +213,6 @@ static int apple_dart_probe(struct udevice *dev) priv->dvabase = DART_PAGE_SIZE; priv->dvaend = SZ_4G - DART_PAGE_SIZE;
- lmb_init(&priv->lmb); lmb_add(&priv->lmb, priv->dvabase, priv->dvaend - priv->dvabase);
/* Disable translations. */ diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index e37976f86f..3184b3a64e 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -54,7 +54,6 @@ static int sandbox_iommu_probe(struct udevice *dev) { struct sandbox_iommu_priv *priv = dev_get_priv(dev);
- lmb_init(&priv->lmb); lmb_add(&priv->lmb, 0x89abc000, SZ_16K);
return 0; diff --git a/include/lmb.h b/include/lmb.h index 7b87181b9e..20d6feebf5 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -92,7 +92,6 @@ struct lmb { #endif };
-void lmb_init(struct lmb *lmb); void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob); void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base, phys_size_t size, void *fdt_blob); diff --git a/lib/lmb.c b/lib/lmb.c index 4d39c0d1f9..0141da9766 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -130,21 +130,6 @@ static void lmb_fix_over_lap_regions(struct lmb_region *rgn, unsigned long r1, lmb_remove_region(rgn, r2); }
-void lmb_init(struct lmb *lmb) -{ -#if IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS) - lmb->memory.max = CONFIG_LMB_MAX_REGIONS; - lmb->reserved.max = CONFIG_LMB_MAX_REGIONS; -#else - lmb->memory.max = CONFIG_LMB_MEMORY_REGIONS; - lmb->reserved.max = CONFIG_LMB_RESERVED_REGIONS; - lmb->memory.region = lmb->memory_regions; - lmb->reserved.region = lmb->reserved_regions; -#endif - lmb->memory.cnt = 0; - lmb->reserved.cnt = 0; -} - void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align) { ulong bank_end; @@ -231,8 +216,6 @@ void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob) { int i;
- lmb_init(lmb); - for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) { if (bd->bi_dram[i].size) { lmb_add(lmb, bd->bi_dram[i].start, @@ -247,7 +230,6 @@ void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob) void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base, phys_size_t size, void *fdt_blob) { - lmb_init(lmb); lmb_add(lmb, base, size); lmb_reserve_common(lmb, fdt_blob); } diff --git a/test/lib/lmb.c b/test/lib/lmb.c index 4b5b6e5e20..74e74501cf 100644 --- a/test/lib/lmb.c +++ b/test/lib/lmb.c @@ -75,8 +75,6 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, ut_assert(alloc_64k_addr >= ram + 8); ut_assert(alloc_64k_end <= ram_end - 8);
- lmb_init(&lmb); - if (ram0_size) { ret = lmb_add(&lmb, ram0, ram0_size); ut_asserteq(ret, 0); @@ -236,8 +234,6 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- lmb_init(&lmb); - ret = lmb_add(&lmb, ram, ram_size); ut_asserteq(ret, 0);
@@ -303,8 +299,6 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- lmb_init(&lmb); - ret = lmb_add(&lmb, ram, ram_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); @@ -389,8 +383,6 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts) long ret; phys_addr_t a, b;
- lmb_init(&lmb); - ret = lmb_add(&lmb, ram, ram_size); ut_asserteq(ret, 0);
@@ -428,8 +420,6 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts) struct lmb lmb; long ret;
- lmb_init(&lmb); - ret = lmb_add(&lmb, ram, ram_size); ut_asserteq(ret, 0);
@@ -486,8 +476,6 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- lmb_init(&lmb); - ret = lmb_add(&lmb, ram, ram_size); ut_asserteq(ret, 0);
@@ -613,8 +601,6 @@ static int test_get_unreserved_size(struct unit_test_state *uts, /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- lmb_init(&lmb); - ret = lmb_add(&lmb, ram, ram_size); ut_asserteq(ret, 0);
@@ -683,8 +669,6 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts) struct lmb lmb; int ret, i;
- lmb_init(&lmb); - ut_asserteq(lmb.memory.cnt, 0); ut_asserteq(lmb.memory.max, CONFIG_LMB_MAX_REGIONS); ut_asserteq(lmb.reserved.cnt, 0); @@ -744,8 +728,6 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) struct lmb lmb; long ret;
- lmb_init(&lmb); - ret = lmb_add(&lmb, ram, ram_size); ut_asserteq(ret, 0);
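The direction this patch takes can be sketched as follows. This is a hypothetical illustration (the `toy_` names are invented, and the real series initialises its global map elsewhere during board init): once the memory map lives in a single persistent instance that is set up once, callers no longer need an explicit per-use init call, which is why every `lmb_init()` call site above can simply be dropped.

```c
#include <stdbool.h>

/* Illustrative only: one lazily initialised global map replaces the
 * caller-local struct lmb instances that each needed lmb_init(). */
struct toy_lmb {
	bool valid; /* has one-time setup run? */
	int cnt;    /* number of regions in the map */
};

static struct toy_lmb toy_map;

/* Every caller gets the same map; setup happens exactly once */
static struct toy_lmb *toy_lmb_get(void)
{
	if (!toy_map.valid) {
		toy_map.cnt = 0; /* one-time setup */
		toy_map.valid = true;
	}
	return &toy_map;
}
```

This is also why reservations made by one module (e.g. EFI) stay visible to every other caller, which is the coherence goal of the series.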

Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB module will be changed to have persistent and global memory maps of available and used memory. With this change, there won't be any need to explicitly initialise the LMB memory maps. Remove the call to the lmb_init() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
arch/arm/mach-stm32mp/dram_init.c | 1 - board/xilinx/common/board.c | 1 - drivers/iommu/apple_dart.c | 1 - drivers/iommu/sandbox_iommu.c | 1 - include/lmb.h | 1 - lib/lmb.c | 18 ------------------ test/lib/lmb.c | 18 ------------------ 7 files changed, 41 deletions(-)
I can't actually find the call to lmb_init(). Where is it?
Regards, Simon

hi Simon,
On Sat, 13 Jul 2024 at 20:45, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB module will be changed to have persistent and global memory maps of available and used memory. With this change, there won't be any need to explicitly initialise the LMB memory maps. Remove the call to the lmb_init() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
arch/arm/mach-stm32mp/dram_init.c | 1 - board/xilinx/common/board.c | 1 - drivers/iommu/apple_dart.c | 1 - drivers/iommu/sandbox_iommu.c | 1 - include/lmb.h | 1 - lib/lmb.c | 18 ------------------ test/lib/lmb.c | 18 ------------------ 7 files changed, 41 deletions(-)
I can't actually find the call to lmb_init(). Where is it?
Sorry, I do not understand this question. This patch removes the lmb_init() definition, as well as calls to the function from the rest of the files.
-sughosh
Regards, Simon

Hi Sughosh,
On Mon, 15 Jul 2024 at 10:31, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:45, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB module will be changed to have persistent and global memory maps of available and used memory. With this change, there won't be any need to explicitly initialise the LMB memory maps. Remove the call to the lmb_init() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
arch/arm/mach-stm32mp/dram_init.c | 1 - board/xilinx/common/board.c | 1 - drivers/iommu/apple_dart.c | 1 - drivers/iommu/sandbox_iommu.c | 1 - include/lmb.h | 1 - lib/lmb.c | 18 ------------------ test/lib/lmb.c | 18 ------------------ 7 files changed, 41 deletions(-)
I can't actually find the call to lmb_init(). Where is it?
Sorry, I do not understand this question. This patch removes the lmb_init() definition, as well as calls to the function from the rest of the files.
I see later you add lmb_mem_regions_init() so that there are two init functions. We should really only have one.
Regards, Simon

hi Simon,
On Mon, 15 Jul 2024 at 17:09, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:31, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:45, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB module will be changed to have persistent and global memory maps of available and used memory. With this change, there won't be any need to explicitly initialise the LMB memory maps. Remove the call to the lmb_init() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
arch/arm/mach-stm32mp/dram_init.c | 1 - board/xilinx/common/board.c | 1 - drivers/iommu/apple_dart.c | 1 - drivers/iommu/sandbox_iommu.c | 1 - include/lmb.h | 1 - lib/lmb.c | 18 ------------------ test/lib/lmb.c | 18 ------------------ 7 files changed, 41 deletions(-)
I can't actually find the call to lmb_init(). Where is it?
Sorry, I do not understand this question. This patch removes the lmb_init() definition, as well as calls to the function from the rest of the files.
I see later you add lmb_mem_regions_init() so that there are two init functions. We should really only have one.
Okay, let me see how I can put the functionality of lmb_init() needed for the tests into the lmb_mem_regions_init() function.
-sughosh
Regards, Simon

With the move of the LMB structure to a persistent state, there is no need to declare the variable locally and pass it through the LMB APIs. Remove all local variable instances and change the APIs correspondingly.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org --- Changes since V1: None
arch/arc/lib/cache.c | 4 +- arch/arm/lib/stack.c | 4 +- arch/arm/mach-apple/board.c | 17 ++- arch/arm/mach-snapdragon/board.c | 17 ++- arch/arm/mach-stm32mp/dram_init.c | 7 +- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 6 +- arch/m68k/lib/bootm.c | 7 +- arch/microblaze/lib/bootm.c | 4 +- arch/mips/lib/bootm.c | 9 +- arch/nios2/lib/bootm.c | 4 +- arch/powerpc/cpu/mpc85xx/mp.c | 4 +- arch/powerpc/include/asm/mp.h | 4 +- arch/powerpc/lib/bootm.c | 14 +- arch/riscv/lib/bootm.c | 4 +- arch/sh/lib/bootm.c | 4 +- arch/x86/lib/bootm.c | 4 +- arch/xtensa/lib/bootm.c | 4 +- board/xilinx/common/board.c | 7 +- boot/bootm.c | 26 ++-- boot/bootm_os.c | 5 +- boot/image-board.c | 32 ++--- boot/image-fdt.c | 29 ++--- cmd/bdinfo.c | 6 +- cmd/booti.c | 2 +- cmd/bootz.c | 2 +- cmd/load.c | 7 +- drivers/iommu/apple_dart.c | 7 +- drivers/iommu/sandbox_iommu.c | 15 +-- fs/fs.c | 7 +- include/image.h | 22 +--- include/lmb.h | 39 +++--- lib/lmb.c | 81 ++++++------ net/tftp.c | 5 +- net/wget.c | 5 +- test/cmd/bdinfo.c | 2 +- test/lib/lmb.c | 187 +++++++++++++-------------- 36 files changed, 270 insertions(+), 333 deletions(-)
diff --git a/arch/arc/lib/cache.c b/arch/arc/lib/cache.c index 22e748868a..5151af917a 100644 --- a/arch/arc/lib/cache.c +++ b/arch/arc/lib/cache.c @@ -829,7 +829,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/arm/lib/stack.c b/arch/arm/lib/stack.c index ea1b937add..87d5c962d7 100644 --- a/arch/arm/lib/stack.c +++ b/arch/arm/lib/stack.c @@ -42,7 +42,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 16384); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 16384); } diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 8bace3005e..213390d6e8 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -773,23 +773,22 @@ u64 get_page_table_size(void)
int board_late_init(void) { - struct lmb lmb; u32 status = 0;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* somewhat based on the Linux Kernel boot requirements: * align by 2M and maximal FDT size 2M */ - status |= env_set_hex("loadaddr", lmb_alloc(&lmb, SZ_1G, SZ_2M)); - status |= env_set_hex("fdt_addr_r", lmb_alloc(&lmb, SZ_2M, SZ_2M)); - status |= env_set_hex("kernel_addr_r", lmb_alloc(&lmb, SZ_128M, SZ_2M)); - status |= env_set_hex("ramdisk_addr_r", lmb_alloc(&lmb, SZ_1G, SZ_2M)); + status |= env_set_hex("loadaddr", lmb_alloc(SZ_1G, SZ_2M)); + status |= env_set_hex("fdt_addr_r", lmb_alloc(SZ_2M, SZ_2M)); + status |= env_set_hex("kernel_addr_r", lmb_alloc(SZ_128M, SZ_2M)); + status |= env_set_hex("ramdisk_addr_r", lmb_alloc(SZ_1G, SZ_2M)); status |= env_set_hex("kernel_comp_addr_r", - lmb_alloc(&lmb, KERNEL_COMP_SIZE, SZ_2M)); + lmb_alloc(KERNEL_COMP_SIZE, SZ_2M)); status |= env_set_hex("kernel_comp_size", KERNEL_COMP_SIZE); - status |= env_set_hex("scriptaddr", lmb_alloc(&lmb, SZ_4M, SZ_2M)); - status |= env_set_hex("pxefile_addr_r", lmb_alloc(&lmb, SZ_4M, SZ_2M)); + status |= env_set_hex("scriptaddr", lmb_alloc(SZ_4M, SZ_2M)); + status |= env_set_hex("pxefile_addr_r", lmb_alloc(SZ_4M, SZ_2M));
if (status) log_warning("late_init: Failed to set run time variables\n"); diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index b439a19ec7..a63c8bec45 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -275,24 +275,23 @@ void __weak qcom_late_init(void)
#define KERNEL_COMP_SIZE SZ_64M
-#define addr_alloc(lmb, size) lmb_alloc(lmb, size, SZ_2M) +#define addr_alloc(size) lmb_alloc(size, SZ_2M)
/* Stolen from arch/arm/mach-apple/board.c */ int board_late_init(void) { - struct lmb lmb; u32 status = 0;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */ - status |= env_set_hex("kernel_addr_r", addr_alloc(&lmb, SZ_128M)); - status |= env_set_hex("ramdisk_addr_r", addr_alloc(&lmb, SZ_128M)); - status |= env_set_hex("kernel_comp_addr_r", addr_alloc(&lmb, KERNEL_COMP_SIZE)); + status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M)); + status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M)); + status |= env_set_hex("kernel_comp_addr_r", addr_alloc(KERNEL_COMP_SIZE)); status |= env_set_hex("kernel_comp_size", KERNEL_COMP_SIZE); - status |= env_set_hex("scriptaddr", addr_alloc(&lmb, SZ_4M)); - status |= env_set_hex("pxefile_addr_r", addr_alloc(&lmb, SZ_4M)); - status |= env_set_hex("fdt_addr_r", addr_alloc(&lmb, SZ_2M)); + status |= env_set_hex("scriptaddr", addr_alloc(SZ_4M)); + status |= env_set_hex("pxefile_addr_r", addr_alloc(SZ_4M)); + status |= env_set_hex("fdt_addr_r", addr_alloc(SZ_2M));
if (status) log_warning("%s: Failed to set run time variables\n", __func__); diff --git a/arch/arm/mach-stm32mp/dram_init.c b/arch/arm/mach-stm32mp/dram_init.c index a5437e4e55..e8b0a38be1 100644 --- a/arch/arm/mach-stm32mp/dram_init.c +++ b/arch/arm/mach-stm32mp/dram_init.c @@ -47,7 +47,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) { phys_size_t size; phys_addr_t reg; - struct lmb lmb;
if (!total_size) return gd->ram_top; @@ -59,11 +58,11 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) gd->ram_top = clamp_val(gd->ram_top, 0, SZ_4G - 1);
/* found enough not-reserved memory to relocated U-Boot */ - lmb_add(&lmb, gd->ram_base, gd->ram_top - gd->ram_base); - boot_fdt_add_mem_rsv_regions(&lmb, (void *)gd->fdt_blob); + lmb_add(gd->ram_base, gd->ram_top - gd->ram_base); + boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); /* add 8M for reserved memory for display, fdt, gd,... */ size = ALIGN(SZ_8M + CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE), - reg = lmb_alloc(&lmb, size, MMU_SECTION_SIZE); + reg = lmb_alloc(size, MMU_SECTION_SIZE);
if (!reg) reg = gd->ram_top - size; diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index 478c3efae7..a913737342 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -30,8 +30,6 @@ */ u8 early_tlb[PGTABLE_SIZE] __section(".data") __aligned(0x4000);
-struct lmb lmb; - u32 get_bootmode(void) { /* read bootmode from TAMP backup register */ @@ -80,7 +78,7 @@ void dram_bank_mmu_setup(int bank) i < (start >> MMU_SECTION_SHIFT) + (size >> MMU_SECTION_SHIFT); i++) { option = DCACHE_DEFAULT_OPTION; - if (use_lmb && lmb_is_reserved_flags(&lmb, i << MMU_SECTION_SHIFT, LMB_NOMAP)) + if (use_lmb && lmb_is_reserved_flags(i << MMU_SECTION_SHIFT, LMB_NOMAP)) option = 0; /* INVALID ENTRY in TLB */ set_section_dcache(i, option); } @@ -144,7 +142,7 @@ int mach_cpu_init(void) void enable_caches(void) { /* parse device tree when data cache is still activated */ - lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* I-cache is already enabled in start.S: icache_enable() not needed */
diff --git a/arch/m68k/lib/bootm.c b/arch/m68k/lib/bootm.c index f2d02e4376..eb220d178d 100644 --- a/arch/m68k/lib/bootm.c +++ b/arch/m68k/lib/bootm.c @@ -30,9 +30,9 @@ DECLARE_GLOBAL_DATA_PTR; static ulong get_sp (void); static void set_clocks_in_mhz (struct bd_info *kbd);
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 1024); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 1024); }
int do_bootm_linux(int flag, struct bootm_info *bmi) @@ -41,7 +41,6 @@ int do_bootm_linux(int flag, struct bootm_info *bmi) int ret; struct bd_info *kbd; void (*kernel) (struct bd_info *, ulong, ulong, ulong, ulong); - struct lmb *lmb = &images->lmb;
/* * allow the PREP bootm subcommand, it is required for bootm to work @@ -53,7 +52,7 @@ int do_bootm_linux(int flag, struct bootm_info *bmi) return 1;
/* allocate space for kernel copy of board info */ - ret = boot_get_kbd (lmb, &kbd); + ret = boot_get_kbd (&kbd); if (ret) { puts("ERROR with allocation of kernel bd\n"); goto error; diff --git a/arch/microblaze/lib/bootm.c b/arch/microblaze/lib/bootm.c index cbe9d85aa9..ce96bca28f 100644 --- a/arch/microblaze/lib/bootm.c +++ b/arch/microblaze/lib/bootm.c @@ -32,9 +32,9 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); }
static void boot_jump_linux(struct bootm_headers *images, int flag) diff --git a/arch/mips/lib/bootm.c b/arch/mips/lib/bootm.c index adb6b6cc22..54d89e9cca 100644 --- a/arch/mips/lib/bootm.c +++ b/arch/mips/lib/bootm.c @@ -37,9 +37,9 @@ static ulong arch_get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, arch_get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(arch_get_sp(), gd->ram_top, 4096); }
static void linux_cmdline_init(void) @@ -225,9 +225,8 @@ static int boot_reloc_fdt(struct bootm_headers *images) }
#if CONFIG_IS_ENABLED(MIPS_BOOT_FDT) && CONFIG_IS_ENABLED(OF_LIBFDT) - boot_fdt_add_mem_rsv_regions(&images->lmb, images->ft_addr); - return boot_relocate_fdt(&images->lmb, &images->ft_addr, - &images->ft_len); + boot_fdt_add_mem_rsv_regions(images->ft_addr); + return boot_relocate_fdt(&images->ft_addr, &images->ft_len); #else return 0; #endif diff --git a/arch/nios2/lib/bootm.c b/arch/nios2/lib/bootm.c index ce939ff5e1..d33d45d28f 100644 --- a/arch/nios2/lib/bootm.c +++ b/arch/nios2/lib/bootm.c @@ -73,7 +73,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/powerpc/cpu/mpc85xx/mp.c b/arch/powerpc/cpu/mpc85xx/mp.c index b638f24ed1..543d6edcd8 100644 --- a/arch/powerpc/cpu/mpc85xx/mp.c +++ b/arch/powerpc/cpu/mpc85xx/mp.c @@ -409,11 +409,11 @@ static void plat_mp_up(unsigned long bootpg, unsigned int pagesize) } #endif
-void cpu_mp_lmb_reserve(struct lmb *lmb) +void cpu_mp_lmb_reserve(void) { u32 bootpg = determine_mp_bootpg(NULL);
- lmb_reserve(lmb, bootpg, 4096); + lmb_reserve(bootpg, 4096); }
void setup_mp(void) diff --git a/arch/powerpc/include/asm/mp.h b/arch/powerpc/include/asm/mp.h index 8dacd2781d..b3f59be840 100644 --- a/arch/powerpc/include/asm/mp.h +++ b/arch/powerpc/include/asm/mp.h @@ -6,10 +6,8 @@ #ifndef _ASM_MP_H_ #define _ASM_MP_H_
-#include <lmb.h> - void setup_mp(void); -void cpu_mp_lmb_reserve(struct lmb *lmb); +void cpu_mp_lmb_reserve(void); u32 determine_mp_bootpg(unsigned int *pagesize); int is_core_disabled(int nr);
diff --git a/arch/powerpc/lib/bootm.c b/arch/powerpc/lib/bootm.c index f55b5ff832..836e6478fc 100644 --- a/arch/powerpc/lib/bootm.c +++ b/arch/powerpc/lib/bootm.c @@ -117,7 +117,7 @@ static void boot_jump_linux(struct bootm_headers *images) return; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { phys_size_t bootm_size; ulong size, bootmap_base; @@ -140,13 +140,13 @@ void arch_lmb_reserve(struct lmb *lmb) ulong base = bootmap_base + size; printf("WARNING: adjusting available memory from 0x%lx to 0x%llx\n", size, (unsigned long long)bootm_size); - lmb_reserve(lmb, base, bootm_size - size); + lmb_reserve(base, bootm_size - size); }
- arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096);
#ifdef CONFIG_MP - cpu_mp_lmb_reserve(lmb); + cpu_mp_lmb_reserve(); #endif
return; @@ -167,7 +167,6 @@ static void boot_prep_linux(struct bootm_headers *images) static int boot_cmdline_linux(struct bootm_headers *images) { ulong of_size = images->ft_len; - struct lmb *lmb = &images->lmb; ulong *cmd_start = &images->cmdline_start; ulong *cmd_end = &images->cmdline_end;
@@ -175,7 +174,7 @@ static int boot_cmdline_linux(struct bootm_headers *images)
if (!of_size) { /* allocate space and init command line */ - ret = boot_get_cmdline (lmb, cmd_start, cmd_end); + ret = boot_get_cmdline (cmd_start, cmd_end); if (ret) { puts("ERROR with allocation of cmdline\n"); return ret; @@ -188,14 +187,13 @@ static int boot_cmdline_linux(struct bootm_headers *images) static int boot_bd_t_linux(struct bootm_headers *images) { ulong of_size = images->ft_len; - struct lmb *lmb = &images->lmb; struct bd_info **kbd = &images->kbd;
int ret = 0;
if (!of_size) { /* allocate space for kernel copy of board info */ - ret = boot_get_kbd (lmb, kbd); + ret = boot_get_kbd (kbd); if (ret) { puts("ERROR with allocation of kernel bd\n"); return ret; diff --git a/arch/riscv/lib/bootm.c b/arch/riscv/lib/bootm.c index 13cbaaba68..bbf62f9e05 100644 --- a/arch/riscv/lib/bootm.c +++ b/arch/riscv/lib/bootm.c @@ -142,7 +142,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/sh/lib/bootm.c b/arch/sh/lib/bootm.c index e298d766b5..44ac05988c 100644 --- a/arch/sh/lib/bootm.c +++ b/arch/sh/lib/bootm.c @@ -110,7 +110,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/x86/lib/bootm.c b/arch/x86/lib/bootm.c index 2c889bcd33..114b31012e 100644 --- a/arch/x86/lib/bootm.c +++ b/arch/x86/lib/bootm.c @@ -267,7 +267,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/arch/xtensa/lib/bootm.c b/arch/xtensa/lib/bootm.c index 1de06b7fb5..bdbd6d4692 100644 --- a/arch/xtensa/lib/bootm.c +++ b/arch/xtensa/lib/bootm.c @@ -206,7 +206,7 @@ static ulong get_sp(void) return ret; }
-void arch_lmb_reserve(struct lmb *lmb) +void arch_lmb_reserve(void) { - arch_lmb_reserve_generic(lmb, get_sp(), gd->ram_top, 4096); + arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); } diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index 61dc37964d..4056884400 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -675,7 +675,6 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) { phys_size_t size; phys_addr_t reg; - struct lmb lmb;
if (!total_size) return gd->ram_top; @@ -684,10 +683,10 @@ phys_addr_t board_get_usable_ram_top(phys_size_t total_size) panic("Not 64bit aligned DT location: %p\n", gd->fdt_blob);
/* found enough not-reserved memory to relocated U-Boot */ - lmb_add(&lmb, gd->ram_base, gd->ram_size); - boot_fdt_add_mem_rsv_regions(&lmb, (void *)gd->fdt_blob); + lmb_add(gd->ram_base, gd->ram_size); + boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); size = ALIGN(CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE); - reg = lmb_alloc(&lmb, size, MMU_SECTION_SIZE); + reg = lmb_alloc(size, MMU_SECTION_SIZE);
if (!reg) reg = gd->ram_top - size; diff --git a/boot/bootm.c b/boot/bootm.c index 376d63aafc..7b3fe551de 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -240,7 +240,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, }
#ifdef CONFIG_LMB -static void boot_start_lmb(struct bootm_headers *images) +static void boot_start_lmb(void) { phys_addr_t mem_start; phys_size_t mem_size; @@ -248,12 +248,11 @@ static void boot_start_lmb(struct bootm_headers *images) mem_start = env_get_bootm_low(); mem_size = env_get_bootm_size();
- lmb_init_and_reserve_range(&images->lmb, mem_start, - mem_size, NULL); + lmb_init_and_reserve_range(mem_start, mem_size, NULL); } #else -#define lmb_reserve(lmb, base, size) -static inline void boot_start_lmb(struct bootm_headers *images) { } +#define lmb_reserve(base, size) +static inline void boot_start_lmb(void) { } #endif
 static int bootm_start(void)
@@ -261,7 +260,7 @@ static int bootm_start(void)
 	memset((void *)&images, 0, sizeof(images));
 	images.verify = env_get_yesno("verify");

-	boot_start_lmb(&images);
+	boot_start_lmb();

 	bootstage_mark_name(BOOTSTAGE_ID_BOOTM_START, "bootm_start");
 	images.state = BOOTM_STATE_START;
@@ -640,7 +639,7 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress)
 	if (os.type == IH_TYPE_KERNEL_NOLOAD && os.comp != IH_COMP_NONE) {
 		ulong req_size = ALIGN(image_len * 4, SZ_1M);

-		load = lmb_alloc(&images->lmb, req_size, SZ_2M);
+		load = lmb_alloc(req_size, SZ_2M);
 		if (!load)
 			return 1;
 		os.load = load;
@@ -714,8 +713,7 @@ static int bootm_load_os(struct bootm_headers *images, int boot_progress)
 		images->os.end = relocated_addr + image_size;
 	}

-	lmb_reserve(&images->lmb, images->os.load, (load_end -
-						    images->os.load));
+	lmb_reserve(images->os.load, (load_end - images->os.load));

 	return 0;
 }

@@ -1041,8 +1039,9 @@ int bootm_run_states(struct bootm_info *bmi, int states)
 	if (!ret && (states & BOOTM_STATE_RAMDISK)) {
 		ulong rd_len = images->rd_end - images->rd_start;

-		ret = boot_ramdisk_high(&images->lmb, images->rd_start,
-			rd_len, &images->initrd_start, &images->initrd_end);
+		ret = boot_ramdisk_high(images->rd_start, rd_len,
+					&images->initrd_start,
+					&images->initrd_end);
 		if (!ret) {
 			env_set_hex("initrd_start", images->initrd_start);
 			env_set_hex("initrd_end", images->initrd_end);
@@ -1051,9 +1050,8 @@ int bootm_run_states(struct bootm_info *bmi, int states)
 #endif
 #if CONFIG_IS_ENABLED(OF_LIBFDT) && defined(CONFIG_LMB)
 	if (!ret && (states & BOOTM_STATE_FDT)) {
-		boot_fdt_add_mem_rsv_regions(&images->lmb, images->ft_addr);
-		ret = boot_relocate_fdt(&images->lmb, &images->ft_addr,
-					&images->ft_len);
+		boot_fdt_add_mem_rsv_regions(images->ft_addr);
+		ret = boot_relocate_fdt(&images->ft_addr, &images->ft_len);
 	}
 #endif
diff --git a/boot/bootm_os.c b/boot/bootm_os.c index 15297ddb53..300abf87d8 100644 --- a/boot/bootm_os.c +++ b/boot/bootm_os.c @@ -259,12 +259,11 @@ static void do_bootvx_fdt(struct bootm_headers *images) char *bootline; ulong of_size = images->ft_len; char **of_flat_tree = &images->ft_addr; - struct lmb *lmb = &images->lmb;
if (*of_flat_tree) { - boot_fdt_add_mem_rsv_regions(lmb, *of_flat_tree); + boot_fdt_add_mem_rsv_regions(*of_flat_tree);
- ret = boot_relocate_fdt(lmb, of_flat_tree, &of_size); + ret = boot_relocate_fdt(of_flat_tree, &of_size); if (ret) return;
diff --git a/boot/image-board.c b/boot/image-board.c index f212401304..e3cb418e3c 100644 --- a/boot/image-board.c +++ b/boot/image-board.c @@ -515,7 +515,6 @@ int boot_get_ramdisk(char const *select, struct bootm_headers *images,
 /**
  * boot_ramdisk_high - relocate init ramdisk
- * @lmb: pointer to lmb handle, will be used for memory mgmt
  * @rd_data: ramdisk data start address
  * @rd_len: ramdisk data length
  * @initrd_start: pointer to a ulong variable, will hold final init ramdisk
@@ -534,8 +533,8 @@ int boot_get_ramdisk(char const *select, struct bootm_headers *images,
  *      0 - success
  *     -1 - failure
  */
-int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len,
-		      ulong *initrd_start, ulong *initrd_end)
+int boot_ramdisk_high(ulong rd_data, ulong rd_len, ulong *initrd_start,
+		      ulong *initrd_end)
 {
 	char	*s;
 	phys_addr_t initrd_high;
@@ -561,13 +560,14 @@ int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len,
 			debug("   in-place initrd\n");
 			*initrd_start = rd_data;
 			*initrd_end = rd_data + rd_len;
-			lmb_reserve(lmb, rd_data, rd_len);
+			lmb_reserve(rd_data, rd_len);
 		} else {
 			if (initrd_high)
-				*initrd_start = (ulong)lmb_alloc_base(lmb,
-						rd_len, 0x1000, initrd_high);
+				*initrd_start = (ulong)lmb_alloc_base(rd_len,
+								      0x1000,
+								      initrd_high);
 			else
-				*initrd_start = (ulong)lmb_alloc(lmb, rd_len,
+				*initrd_start = (ulong)lmb_alloc(rd_len,
 								 0x1000);
if (*initrd_start == 0) { @@ -800,7 +800,6 @@ int boot_get_loadable(struct bootm_headers *images)
/** * boot_get_cmdline - allocate and initialize kernel cmdline - * @lmb: pointer to lmb handle, will be used for memory mgmt * @cmd_start: pointer to a ulong variable, will hold cmdline start * @cmd_end: pointer to a ulong variable, will hold cmdline end * @@ -813,7 +812,7 @@ int boot_get_loadable(struct bootm_headers *images) * 0 - success * -1 - failure */ -int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end) +int boot_get_cmdline(ulong *cmd_start, ulong *cmd_end) { int barg; char *cmdline; @@ -827,7 +826,7 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end) return 0;
barg = IF_ENABLED_INT(CONFIG_SYS_BOOT_GET_CMDLINE, CONFIG_SYS_BARGSIZE); - cmdline = (char *)(ulong)lmb_alloc_base(lmb, barg, 0xf, + cmdline = (char *)(ulong)lmb_alloc_base(barg, 0xf, env_get_bootm_mapsize() + env_get_bootm_low()); if (!cmdline) return -1; @@ -848,7 +847,6 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end)
/** * boot_get_kbd - allocate and initialize kernel copy of board info - * @lmb: pointer to lmb handle, will be used for memory mgmt * @kbd: double pointer to board info data * * boot_get_kbd() allocates space for kernel copy of board info data below @@ -859,10 +857,9 @@ int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end) * 0 - success * -1 - failure */ -int boot_get_kbd(struct lmb *lmb, struct bd_info **kbd) +int boot_get_kbd(struct bd_info **kbd) { - *kbd = (struct bd_info *)(ulong)lmb_alloc_base(lmb, - sizeof(struct bd_info), + *kbd = (struct bd_info *)(ulong)lmb_alloc_base(sizeof(struct bd_info), 0xf, env_get_bootm_mapsize() + env_get_bootm_low()); @@ -883,17 +880,16 @@ int image_setup_linux(struct bootm_headers *images) { ulong of_size = images->ft_len; char **of_flat_tree = &images->ft_addr; - struct lmb *lmb = images_lmb(images); int ret;
/* This function cannot be called without lmb support */ if (!IS_ENABLED(CONFIG_LMB)) return -EFAULT; if (CONFIG_IS_ENABLED(OF_LIBFDT)) - boot_fdt_add_mem_rsv_regions(lmb, *of_flat_tree); + boot_fdt_add_mem_rsv_regions(*of_flat_tree);
if (IS_ENABLED(CONFIG_SYS_BOOT_GET_CMDLINE)) { - ret = boot_get_cmdline(lmb, &images->cmdline_start, + ret = boot_get_cmdline(&images->cmdline_start, &images->cmdline_end); if (ret) { puts("ERROR with allocation of cmdline\n"); @@ -902,7 +898,7 @@ int image_setup_linux(struct bootm_headers *images) }
if (CONFIG_IS_ENABLED(OF_LIBFDT)) { - ret = boot_relocate_fdt(lmb, of_flat_tree, &of_size); + ret = boot_relocate_fdt(of_flat_tree, &of_size); if (ret) return ret; } diff --git a/boot/image-fdt.c b/boot/image-fdt.c index 56dd7687f5..943a3477a1 100644 --- a/boot/image-fdt.c +++ b/boot/image-fdt.c @@ -68,12 +68,12 @@ static const struct legacy_img_hdr *image_get_fdt(ulong fdt_addr) } #endif
-static void boot_fdt_reserve_region(struct lmb *lmb, uint64_t addr,
-				    uint64_t size, enum lmb_flags flags)
+static void boot_fdt_reserve_region(uint64_t addr, uint64_t size,
+				    enum lmb_flags flags)
 {
 	long ret;

-	ret = lmb_reserve_flags(lmb, addr, size, flags);
+	ret = lmb_reserve_flags(addr, size, flags);
 	if (ret >= 0) {
 		debug("   reserving fdt memory region: addr=%llx size=%llx flags=%x\n",
 		      (unsigned long long)addr,
@@ -89,14 +89,13 @@ static void boot_fdt_reserve_region(struct lmb *lmb, uint64_t addr,
 /**
  * boot_fdt_add_mem_rsv_regions - Mark the memreserve and reserved-memory
  *                                sections as unusable
- * @lmb: pointer to lmb handle, will be used for memory mgmt
  * @fdt_blob: pointer to fdt blob base address
  *
  * Adds the memreserve and reserved-memory regions in the dtb to the lmb block.
  * Adding the memreserve regions prevents u-boot from using them to store the
  * initrd or the fdt blob.
  */
-void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
+void boot_fdt_add_mem_rsv_regions(void *fdt_blob)
 {
 	uint64_t addr, size;
 	int i, total, ret;
@@ -112,7 +111,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
 	for (i = 0; i < total; i++) {
 		if (fdt_get_mem_rsv(fdt_blob, i, &addr, &size) != 0)
 			continue;
-		boot_fdt_reserve_region(lmb, addr, size, LMB_NONE);
+		boot_fdt_reserve_region(addr, size, LMB_NONE);
 	}
/* process reserved-memory */ @@ -130,7 +129,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob) flags = LMB_NOMAP; addr = res.start; size = res.end - res.start + 1; - boot_fdt_reserve_region(lmb, addr, size, flags); + boot_fdt_reserve_region(addr, size, flags); }
subnode = fdt_next_subnode(fdt_blob, subnode); @@ -140,7 +139,6 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
 /**
  * boot_relocate_fdt - relocate flat device tree
- * @lmb: pointer to lmb handle, will be used for memory mgmt
  * @of_flat_tree: pointer to a char* variable, will hold fdt start address
  * @of_size: pointer to a ulong variable, will hold fdt length
  *
@@ -155,7 +153,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
  *      0 - success
  *      1 - failure
  */
-int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size)
+int boot_relocate_fdt(char **of_flat_tree, ulong *of_size)
 {
 	u64	start, size, usable, addr, low, mapsize;
 	void	*fdt_blob = *of_flat_tree;
@@ -187,18 +185,17 @@ int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size)
 		if (desired_addr == ~0UL) {
 			/* All ones means use fdt in place */
 			of_start = fdt_blob;
-			lmb_reserve(lmb, map_to_sysmem(of_start), of_len);
+			lmb_reserve(map_to_sysmem(of_start), of_len);
 			disable_relocation = 1;
 		} else if (desired_addr) {
-			addr = lmb_alloc_base(lmb, of_len, 0x1000,
-					      desired_addr);
+			addr = lmb_alloc_base(of_len, 0x1000, desired_addr);
 			of_start = map_sysmem(addr, of_len);
 			if (of_start == NULL) {
 				puts("Failed using fdt_high value for Device Tree");
 				goto error;
 			}
 		} else {
-			addr = lmb_alloc(lmb, of_len, 0x1000);
+			addr = lmb_alloc(of_len, 0x1000);
 			of_start = map_sysmem(addr, of_len);
 		}
 	} else {
@@ -220,7 +217,7 @@ int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size)
 			 * for LMB allocation.
 			 */
 			usable = min(start + size, low + mapsize);
-			addr = lmb_alloc_base(lmb, of_len, 0x1000, usable);
+			addr = lmb_alloc_base(of_len, 0x1000, usable);
 			of_start = map_sysmem(addr, of_len);
 			/* Allocation succeeded, use this block. */
 			if (of_start != NULL)
@@ -671,7 +668,7 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob,
/* Delete the old LMB reservation */ if (lmb) - lmb_free(lmb, map_to_sysmem(blob), fdt_totalsize(blob)); + lmb_free(map_to_sysmem(blob), fdt_totalsize(blob));
ret = fdt_shrink_to_minimum(blob, 0); if (ret < 0) @@ -680,7 +677,7 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob,
/* Create a new LMB reservation */ if (lmb) - lmb_reserve(lmb, map_to_sysmem(blob), of_size); + lmb_reserve(map_to_sysmem(blob), of_size);
#if defined(CONFIG_ARCH_KEYSTONE) if (IS_ENABLED(CONFIG_OF_BOARD_SETUP)) diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index 437ac4e863..b31e0208df 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -162,10 +162,8 @@ static int bdinfo_print_all(struct bd_info *bd) bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit); #endif if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { - struct lmb lmb; - - lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); - lmb_dump_all_force(&lmb); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); + lmb_dump_all_force(); if (IS_ENABLED(CONFIG_OF_REAL)) printf("devicetree = %s\n", fdtdec_get_srcname()); } diff --git a/cmd/booti.c b/cmd/booti.c index 62b19e8343..6018cbacf0 100644 --- a/cmd/booti.c +++ b/cmd/booti.c @@ -87,7 +87,7 @@ static int booti_start(struct bootm_info *bmi) images->os.start = relocated_addr; images->os.end = relocated_addr + image_size;
- lmb_reserve(&images->lmb, images->ep, le32_to_cpu(image_size)); + lmb_reserve(images->ep, le32_to_cpu(image_size));
/* * Handle the BOOTM_STATE_FINDOTHER state ourselves as we do not diff --git a/cmd/bootz.c b/cmd/bootz.c index 55837a7599..787203f5bd 100644 --- a/cmd/bootz.c +++ b/cmd/bootz.c @@ -56,7 +56,7 @@ static int bootz_start(struct cmd_tbl *cmdtp, int flag, int argc, if (ret != 0) return 1;
- lmb_reserve(&images->lmb, images->ep, zi_end - zi_start); + lmb_reserve(images->ep, zi_end - zi_start);
/* * Handle the BOOTM_STATE_FINDOTHER state ourselves as we do not diff --git a/cmd/load.c b/cmd/load.c index ace1c52f90..b4e747f966 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -141,7 +141,6 @@ static int do_load_serial(struct cmd_tbl *cmdtp, int flag, int argc,
static ulong load_serial(long offset) { - struct lmb lmb; char record[SREC_MAXRECLEN + 1]; /* buffer for one S-Record */ char binbuf[SREC_MAXBINLEN]; /* buffer for binary data */ int binlen; /* no. of data bytes in S-Rec. */ @@ -154,7 +153,7 @@ static ulong load_serial(long offset) int line_count = 0; long ret;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
while (read_record(record, SREC_MAXRECLEN + 1) >= 0) { type = srec_decode(record, &binlen, &addr, binbuf); @@ -182,7 +181,7 @@ static ulong load_serial(long offset) { void *dst;
- ret = lmb_reserve(&lmb, store_addr, binlen); + ret = lmb_reserve(store_addr, binlen); if (ret) { printf("\nCannot overwrite reserved area (%08lx..%08lx)\n", store_addr, store_addr + binlen); @@ -191,7 +190,7 @@ static ulong load_serial(long offset) dst = map_sysmem(store_addr, binlen); memcpy(dst, binbuf, binlen); unmap_sysmem(dst); - lmb_free(&lmb, store_addr, binlen); + lmb_free(store_addr, binlen); } if ((store_addr) < start_addr) start_addr = store_addr; diff --git a/drivers/iommu/apple_dart.c b/drivers/iommu/apple_dart.c index 3e59490973..611ac7cd6d 100644 --- a/drivers/iommu/apple_dart.c +++ b/drivers/iommu/apple_dart.c @@ -70,7 +70,6 @@
struct apple_dart_priv { void *base; - struct lmb lmb; u64 *l1, *l2; int bypass, shift;
@@ -124,7 +123,7 @@ static dma_addr_t apple_dart_map(struct udevice *dev, void *addr, size_t size) off = (phys_addr_t)addr - paddr; psize = ALIGN(size + off, DART_PAGE_SIZE);
- dva = lmb_alloc(&priv->lmb, psize, DART_PAGE_SIZE); + dva = lmb_alloc(psize, DART_PAGE_SIZE);
idx = dva / DART_PAGE_SIZE; for (i = 0; i < psize / DART_PAGE_SIZE; i++) { @@ -160,7 +159,7 @@ static void apple_dart_unmap(struct udevice *dev, dma_addr_t addr, size_t size) (unsigned long)&priv->l2[idx + i]); priv->flush_tlb(priv);
- lmb_free(&priv->lmb, dva, psize); + lmb_free(dva, psize); }
static struct iommu_ops apple_dart_ops = { @@ -213,7 +212,7 @@ static int apple_dart_probe(struct udevice *dev) priv->dvabase = DART_PAGE_SIZE; priv->dvaend = SZ_4G - DART_PAGE_SIZE;
- lmb_add(&priv->lmb, priv->dvabase, priv->dvaend - priv->dvabase); + lmb_add(priv->dvabase, priv->dvaend - priv->dvabase);
/* Disable translations. */ for (sid = 0; sid < priv->nsid; sid++) diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index 3184b3a64e..5b4a6a8982 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -11,14 +11,9 @@
#define IOMMU_PAGE_SIZE SZ_4K
-struct sandbox_iommu_priv { - struct lmb lmb; -}; - static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, size_t size) { - struct sandbox_iommu_priv *priv = dev_get_priv(dev); phys_addr_t paddr, dva; phys_size_t psize, off;
@@ -26,7 +21,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
- dva = lmb_alloc(&priv->lmb, psize, IOMMU_PAGE_SIZE); + dva = lmb_alloc(psize, IOMMU_PAGE_SIZE);
return dva + off; } @@ -34,7 +29,6 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, size_t size) { - struct sandbox_iommu_priv *priv = dev_get_priv(dev); phys_addr_t dva; phys_size_t psize;
@@ -42,7 +36,7 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE);
- lmb_free(&priv->lmb, dva, psize); + lmb_free(dva, psize); }
static struct iommu_ops sandbox_iommu_ops = { @@ -52,9 +46,7 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) { - struct sandbox_iommu_priv *priv = dev_get_priv(dev); - - lmb_add(&priv->lmb, 0x89abc000, SZ_16K); + lmb_add(0x89abc000, SZ_16K);
return 0; } @@ -68,7 +60,6 @@ U_BOOT_DRIVER(sandbox_iommu) = { .name = "sandbox_iommu", .id = UCLASS_IOMMU, .of_match = sandbox_iommu_ids, - .priv_auto = sizeof(struct sandbox_iommu_priv), .ops = &sandbox_iommu_ops, .probe = sandbox_iommu_probe, }; diff --git a/fs/fs.c b/fs/fs.c index 0c47943f33..2c835eef86 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -531,7 +531,6 @@ int fs_size(const char *filename, loff_t *size) static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, loff_t len, struct fstype_info *info) { - struct lmb lmb; int ret; loff_t size; loff_t read_len; @@ -550,10 +549,10 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, if (len && len < read_len) read_len = len;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); - lmb_dump_all(&lmb); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); + lmb_dump_all();
- if (lmb_alloc_addr(&lmb, addr, read_len) == addr) + if (lmb_alloc_addr(addr, read_len) == addr) return 0;
log_err("** Reading file would overwrite reserved memory **\n"); diff --git a/include/image.h b/include/image.h index c5b288f62b..8036eae15c 100644 --- a/include/image.h +++ b/include/image.h @@ -411,18 +411,8 @@ struct bootm_headers { #define BOOTM_STATE_PRE_LOAD 0x00000800 #define BOOTM_STATE_MEASURE 0x00001000 int state; - -#if defined(CONFIG_LMB) && !defined(USE_HOSTCC) - struct lmb lmb; /* for memory mgmt */ -#endif };
-#ifdef CONFIG_LMB -#define images_lmb(_images) (&(_images)->lmb) -#else -#define images_lmb(_images) NULL -#endif - extern struct bootm_headers images;
/* @@ -834,13 +824,13 @@ int boot_get_fdt(void *buf, const char *select, uint arch, struct bootm_headers *images, char **of_flat_tree, ulong *of_size);
-void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob);
-int boot_relocate_fdt(struct lmb *lmb, char **of_flat_tree, ulong *of_size);
+void boot_fdt_add_mem_rsv_regions(void *fdt_blob);
+int boot_relocate_fdt(char **of_flat_tree, ulong *of_size);

-int boot_ramdisk_high(struct lmb *lmb, ulong rd_data, ulong rd_len,
-		      ulong *initrd_start, ulong *initrd_end);
-int boot_get_cmdline(struct lmb *lmb, ulong *cmd_start, ulong *cmd_end);
-int boot_get_kbd(struct lmb *lmb, struct bd_info **kbd);
+int boot_ramdisk_high(ulong rd_data, ulong rd_len, ulong *initrd_start,
+		      ulong *initrd_end);
+int boot_get_cmdline(ulong *cmd_start, ulong *cmd_end);
+int boot_get_kbd(struct bd_info **kbd);
/*******************************************************************/ /* Legacy format specific code (prefixed with image_) */ diff --git a/include/lmb.h b/include/lmb.h index 20d6feebf5..069c6af2a3 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -92,27 +92,25 @@ struct lmb { #endif };
-void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob);
-void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
-				phys_size_t size, void *fdt_blob);
-long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size);
-long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size);
+void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob);
+void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
+				void *fdt_blob);
+long lmb_add(phys_addr_t base, phys_size_t size);
+long lmb_reserve(phys_addr_t base, phys_size_t size);

 /**
  * lmb_reserve_flags - Reserve one region with a specific flags bitfield.
  *
- * @lmb: the logical memory block struct
  * @base: base address of the memory region
  * @size: size of the memory region
  * @flags: flags for the memory region
  * Return: 0 if OK, > 0 for coalesced region or a negative error code.
  */
-long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base,
-		       phys_size_t size, enum lmb_flags flags);
-phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align);
-phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
-			   phys_addr_t max_addr);
-phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size);
-phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);
+long lmb_reserve_flags(phys_addr_t base, phys_size_t size,
+		       enum lmb_flags flags);
+phys_addr_t lmb_alloc(phys_size_t size, ulong align);
+phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr);
+phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size);
+phys_size_t lmb_get_free_size(phys_addr_t addr);
/** * lmb_is_reserved_flags() - test if address is in reserved region with flag bits set @@ -120,21 +118,20 @@ phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr); * The function checks if a reserved region comprising @addr exists which has * all flag bits set which are set in @flags. * - * @lmb: the logical memory block struct * @addr: address to be tested * @flags: bitmap with bits to be tested * Return: 1 if matching reservation exists, 0 otherwise */ -int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags); +int lmb_is_reserved_flags(phys_addr_t addr, int flags);
-long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size); +long lmb_free(phys_addr_t base, phys_size_t size);
-void lmb_dump_all(struct lmb *lmb); -void lmb_dump_all_force(struct lmb *lmb); +void lmb_dump_all(void); +void lmb_dump_all_force(void);
-void board_lmb_reserve(struct lmb *lmb); -void arch_lmb_reserve(struct lmb *lmb); -void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align); +void board_lmb_reserve(void); +void arch_lmb_reserve(void); +void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align);
#endif /* __KERNEL__ */
diff --git a/lib/lmb.c b/lib/lmb.c index 0141da9766..0d01e58a46 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -39,17 +39,17 @@ static void lmb_dump_region(struct lmb_region *rgn, char *name) } }
-void lmb_dump_all_force(struct lmb *lmb)
+void lmb_dump_all_force(void)
 {
 	printf("lmb_dump_all:\n");
 	lmb_dump_region(&lmb->memory, "memory");
 	lmb_dump_region(&lmb->reserved, "reserved");
 }

-void lmb_dump_all(struct lmb *lmb)
+void lmb_dump_all(void)
 {
 #ifdef DEBUG
-	lmb_dump_all_force(lmb);
+	lmb_dump_all_force();
 #endif
 }
@@ -130,7 +130,7 @@ static void lmb_fix_over_lap_regions(struct lmb_region *rgn, unsigned long r1, lmb_remove_region(rgn, r2); }
-void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align) +void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align) { ulong bank_end; int bank; @@ -156,10 +156,10 @@ void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align) if (bank_end > end) bank_end = end - 1;
- lmb_reserve(lmb, sp, bank_end - sp + 1); + lmb_reserve(sp, bank_end - sp + 1);
if (gd->flags & GD_FLG_SKIP_RELOC) - lmb_reserve(lmb, (phys_addr_t)(uintptr_t)_start, gd->mon_len); + lmb_reserve((phys_addr_t)(uintptr_t)_start, gd->mon_len);
break; } @@ -171,10 +171,9 @@ void arch_lmb_reserve_generic(struct lmb *lmb, ulong sp, ulong end, ulong align) * Add reservations for all EFI memory areas that are not * EFI_CONVENTIONAL_MEMORY. * - * @lmb: lmb environment * Return: 0 on success, 1 on failure */ -static __maybe_unused int efi_lmb_reserve(struct lmb *lmb) +static __maybe_unused int efi_lmb_reserve(void) { struct efi_mem_desc *memmap = NULL, *map; efi_uintn_t i, map_size = 0; @@ -186,8 +185,7 @@ static __maybe_unused int efi_lmb_reserve(struct lmb *lmb)
for (i = 0, map = memmap; i < map_size / sizeof(*map); ++map, ++i) { if (map->type != EFI_CONVENTIONAL_MEMORY) { - lmb_reserve_flags(lmb, - map_to_sysmem((void *)(uintptr_t) + lmb_reserve_flags(map_to_sysmem((void *)(uintptr_t) map->physical_start), map->num_pages * EFI_PAGE_SIZE, map->type == EFI_RESERVED_MEMORY_TYPE @@ -199,39 +197,37 @@ static __maybe_unused int efi_lmb_reserve(struct lmb *lmb) return 0; }
-static void lmb_reserve_common(struct lmb *lmb, void *fdt_blob)
+static void lmb_reserve_common(void *fdt_blob)
 {
-	arch_lmb_reserve(lmb);
-	board_lmb_reserve(lmb);
+	arch_lmb_reserve();
+	board_lmb_reserve();

 	if (CONFIG_IS_ENABLED(OF_LIBFDT) && fdt_blob)
-		boot_fdt_add_mem_rsv_regions(lmb, fdt_blob);
+		boot_fdt_add_mem_rsv_regions(fdt_blob);

 	if (CONFIG_IS_ENABLED(EFI_LOADER))
-		efi_lmb_reserve(lmb);
+		efi_lmb_reserve();
 }

 /* Initialize the struct, add memory and call arch/board reserve functions */
-void lmb_init_and_reserve(struct lmb *lmb, struct bd_info *bd, void *fdt_blob)
+void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob)
 {
 	int i;

 	for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
-		if (bd->bi_dram[i].size) {
-			lmb_add(lmb, bd->bi_dram[i].start,
-				bd->bi_dram[i].size);
-		}
+		if (bd->bi_dram[i].size)
+			lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size);
 	}

-	lmb_reserve_common(lmb, fdt_blob);
+	lmb_reserve_common(fdt_blob);
 }

 /* Initialize the struct, add memory and call arch/board reserve functions */
-void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
-				phys_size_t size, void *fdt_blob)
+void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
+				void *fdt_blob)
 {
-	lmb_add(lmb, base, size);
-	lmb_reserve_common(lmb, fdt_blob);
+	lmb_add(base, size);
+	lmb_reserve_common(fdt_blob);
 }
/* This routine called with relocation disabled. */ @@ -332,14 +328,14 @@ static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, }
/* This routine may be called with relocation disabled. */ -long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size) +long lmb_add(phys_addr_t base, phys_size_t size) { struct lmb_region *_rgn = &(lmb->memory);
return lmb_add_region(_rgn, base, size); }
-long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size) +long lmb_free(phys_addr_t base, phys_size_t size) { struct lmb_region *rgn = &(lmb->reserved); phys_addr_t rgnbegin, rgnend; @@ -389,17 +385,16 @@ long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size) rgn->region[i].flags); }
-long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_size_t size, - enum lmb_flags flags) +long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags) { struct lmb_region *_rgn = &(lmb->reserved);
return lmb_add_region_flags(_rgn, base, size, flags); }
-long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size) +long lmb_reserve(phys_addr_t base, phys_size_t size) { - return lmb_reserve_flags(lmb, base, size, LMB_NONE); + return lmb_reserve_flags(base, size, LMB_NONE); }
static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base, @@ -422,8 +417,8 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size) return addr & ~(size - 1); }
-static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size,
-				    ulong align, phys_addr_t max_addr)
+static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
+				    phys_addr_t max_addr)
 {
 	long i, rgn;
 	phys_addr_t base = 0;
@@ -464,16 +459,16 @@ static phys_addr_t __lmb_alloc_base(struct lmb *lmb, phys_size_t size,
 	return 0;
 }

-phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align)
+phys_addr_t lmb_alloc(phys_size_t size, ulong align)
 {
-	return lmb_alloc_base(lmb, size, align, LMB_ALLOC_ANYWHERE);
+	return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE);
 }

-phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_addr_t max_addr)
+phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 {
 	phys_addr_t alloc;

-	alloc = __lmb_alloc_base(lmb, size, align, max_addr);
+	alloc = __lmb_alloc_base(size, align, max_addr);

 	if (alloc == 0)
 		printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
@@ -486,7 +481,7 @@ phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align, phys_
 /*
  * Try to allocate a specific address range: must be in defined memory but not
  * reserved
  */
-phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
 {
 	long rgn;
@@ -501,7 +496,7 @@ phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size) lmb->memory.region[rgn].size, base + size - 1, 1)) { /* ok, reserve the memory */ - if (lmb_reserve(lmb, base, size) >= 0) + if (lmb_reserve(base, size) >= 0) return base; } } @@ -509,7 +504,7 @@ phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base, phys_size_t size) }
/* Return number of bytes from a given address that are free */ -phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr) +phys_size_t lmb_get_free_size(phys_addr_t addr) { int i; long rgn; @@ -535,7 +530,7 @@ phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr) return 0; }
-int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags) +int lmb_is_reserved_flags(phys_addr_t addr, int flags) { int i;
@@ -548,12 +543,12 @@ int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags) return 0; }
-__weak void board_lmb_reserve(struct lmb *lmb)
+__weak void board_lmb_reserve(void)
 {
 	/* please define platform specific board_lmb_reserve() */
 }

-__weak void arch_lmb_reserve(struct lmb *lmb)
+__weak void arch_lmb_reserve(void)
 {
 	/* please define platform specific arch_lmb_reserve() */
 }

diff --git a/net/tftp.c b/net/tftp.c
index 6b16bdcbe4..5e27fd4aa9 100644
--- a/net/tftp.c
+++ b/net/tftp.c
@@ -711,12 +711,11 @@ static void tftp_timeout_handler(void)
 static int tftp_init_load_addr(void)
 {
 #ifdef CONFIG_LMB
-	struct lmb lmb;
 	phys_size_t max_size;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- max_size = lmb_get_free_size(&lmb, image_load_addr); + max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/net/wget.c b/net/wget.c index f1dd7abeff..7cf809a8ef 100644 --- a/net/wget.c +++ b/net/wget.c @@ -73,12 +73,11 @@ static ulong wget_load_size; */ static int wget_init_load_size(void) { - struct lmb lmb; phys_size_t max_size;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
- max_size = lmb_get_free_size(&lmb, image_load_addr); + max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c index 027848c3e2..34d2b141d8 100644 --- a/test/cmd/bdinfo.c +++ b/test/cmd/bdinfo.c @@ -200,7 +200,7 @@ static int bdinfo_test_all(struct unit_test_state *uts) if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { struct lmb lmb;
- lmb_init_and_reserve(&lmb, gd->bd, (void *)gd->fdt_blob); + lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); ut_assertok(lmb_test_dump_all(uts, &lmb)); if (IS_ENABLED(CONFIG_OF_REAL)) ut_assert_nextline("devicetree = %s", fdtdec_get_srcname()); diff --git a/test/lib/lmb.c b/test/lib/lmb.c index 74e74501cf..9f6b5633a7 100644 --- a/test/lib/lmb.c +++ b/test/lib/lmb.c @@ -12,6 +12,8 @@ #include <test/test.h> #include <test/ut.h>
+extern struct lmb lmb; + static inline bool lmb_is_nomap(struct lmb_property *m) { return m->flags & LMB_NOMAP; @@ -64,7 +66,6 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, const phys_addr_t ram_end = ram + ram_size; const phys_addr_t alloc_64k_end = alloc_64k_addr + 0x10000;
- struct lmb lmb; long ret; phys_addr_t a, a2, b, b2, c, d;
@@ -76,11 +77,11 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, ut_assert(alloc_64k_end <= ram_end - 8);
if (ram0_size) { - ret = lmb_add(&lmb, ram0, ram0_size); + ret = lmb_add(ram0, ram0_size); ut_asserteq(ret, 0); }
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
if (ram0_size) { @@ -96,67 +97,67 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, }
/* reserve 64KiB somewhere */ - ret = lmb_reserve(&lmb, alloc_64k_addr, 0x10000); + ret = lmb_reserve(alloc_64k_addr, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate somewhere, should be at the end of RAM */ - a = lmb_alloc(&lmb, 4, 1); + a = lmb_alloc(4, 1); ut_asserteq(a, ram_end - 4); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr, 0x10000, ram_end - 4, 4, 0, 0); /* alloc below end of reserved region -> below reserved region */ - b = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end); + b = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(b, alloc_64k_addr - 4); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 4, 4, 0, 0);
/* 2nd time */ - c = lmb_alloc(&lmb, 4, 1); + c = lmb_alloc(4, 1); ut_asserteq(c, ram_end - 8); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 8, 8, 0, 0); - d = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end); + d = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(d, alloc_64k_addr - 8); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
- ret = lmb_free(&lmb, a, 4); + ret = lmb_free(a, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); /* allocate again to ensure we get the same address */ - a2 = lmb_alloc(&lmb, 4, 1); + a2 = lmb_alloc(4, 1); ut_asserteq(a, a2); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0); - ret = lmb_free(&lmb, a2, 4); + ret = lmb_free(a2, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
- ret = lmb_free(&lmb, b, 4); + ret = lmb_free(b, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4); /* allocate again to ensure we get the same address */ - b2 = lmb_alloc_base(&lmb, 4, 1, alloc_64k_end); + b2 = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(b, b2); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); - ret = lmb_free(&lmb, b2, 4); + ret = lmb_free(b2, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4);
- ret = lmb_free(&lmb, c, 4); + ret = lmb_free(c, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, 0, 0); - ret = lmb_free(&lmb, d, 4); + ret = lmb_free(d, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0); @@ -227,42 +228,41 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) const phys_size_t big_block_size = 0x10000000; const phys_addr_t ram_end = ram + ram_size; const phys_addr_t alloc_64k_addr = ram + 0x10000000; - struct lmb lmb; long ret; phys_addr_t a, b;
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* reserve 64KiB in the middle of RAM */ - ret = lmb_reserve(&lmb, alloc_64k_addr, 0x10000); + ret = lmb_reserve(alloc_64k_addr, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate a big block, should be below reserved */ - a = lmb_alloc(&lmb, big_block_size, 1); + a = lmb_alloc(big_block_size, 1); ut_asserteq(a, ram); ASSERT_LMB(&lmb, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0); /* allocate 2nd big block */ /* This should fail, printing an error */ - b = lmb_alloc(&lmb, big_block_size, 1); + b = lmb_alloc(big_block_size, 1); ut_asserteq(b, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0);
- ret = lmb_free(&lmb, a, big_block_size); + ret = lmb_free(a, big_block_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate too big block */ /* This should fail, printing an error */ - a = lmb_alloc(&lmb, ram_size, 1); + a = lmb_alloc(ram_size, 1); ut_asserteq(a, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0); @@ -290,7 +290,6 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, { const phys_size_t ram_size = 0x20000000; const phys_addr_t ram_end = ram + ram_size; - struct lmb lmb; long ret; phys_addr_t a, b; const phys_addr_t alloc_size_aligned = (alloc_size + align - 1) & @@ -299,17 +298,17 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block */ - a = lmb_alloc(&lmb, alloc_size, align); + a = lmb_alloc(alloc_size, align); ut_assert(a != 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); /* allocate another block */ - b = lmb_alloc(&lmb, alloc_size, align); + b = lmb_alloc(alloc_size, align); ut_assert(b != 0); if (alloc_size == alloc_size_aligned) { ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - @@ -321,21 +320,21 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, - alloc_size_aligned, alloc_size, 0, 0); } /* and free them */ - ret = lmb_free(&lmb, b, alloc_size); + ret = lmb_free(b, alloc_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); - ret = lmb_free(&lmb, a, alloc_size); + ret = lmb_free(a, alloc_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block with base*/ - b = lmb_alloc_base(&lmb, alloc_size, align, ram_end); + b = lmb_alloc_base(alloc_size, align, ram_end); ut_assert(a == b); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); /* and free it */ - ret = lmb_free(&lmb, b, alloc_size); + ret = lmb_free(b, alloc_size); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
@@ -379,32 +378,31 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts) { const phys_addr_t ram = 0; const phys_size_t ram_size = 0x20000000; - struct lmb lmb; long ret; phys_addr_t a, b;
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* allocate nearly everything */ - a = lmb_alloc(&lmb, ram_size - 4, 1); + a = lmb_alloc(ram_size - 4, 1); ut_asserteq(a, ram + 4); ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* allocate the rest */ /* This should fail as the allocated address would be 0 */ - b = lmb_alloc(&lmb, 4, 1); + b = lmb_alloc(4, 1); ut_asserteq(b, 0); /* check that this was an error by checking lmb */ ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* check that this was an error by freeing b */ - ret = lmb_free(&lmb, b, 4); + ret = lmb_free(b, 4); ut_asserteq(ret, -1); ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0);
- ret = lmb_free(&lmb, a, ram_size - 4); + ret = lmb_free(a, ram_size - 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
@@ -417,40 +415,39 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts) { const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; - struct lmb lmb; long ret;
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
- ret = lmb_reserve(&lmb, 0x40010000, 0x10000); + ret = lmb_reserve(0x40010000, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); /* allocate overlapping region should fail */ - ret = lmb_reserve(&lmb, 0x40011000, 0x10000); + ret = lmb_reserve(0x40011000, 0x10000); ut_asserteq(ret, -1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); /* allocate 3nd region */ - ret = lmb_reserve(&lmb, 0x40030000, 0x10000); + ret = lmb_reserve(0x40030000, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40010000, 0x10000, 0x40030000, 0x10000, 0, 0); /* allocate 2nd region , This should coalesced all region into one */ - ret = lmb_reserve(&lmb, 0x40020000, 0x10000); + ret = lmb_reserve(0x40020000, 0x10000); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x30000, 0, 0, 0, 0);
/* allocate 2nd region, which should be added as first region */ - ret = lmb_reserve(&lmb, 0x40000000, 0x8000); + ret = lmb_reserve(0x40000000, 0x8000); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x8000, 0x40010000, 0x30000, 0, 0);
/* allocate 3rd region, coalesce with first and overlap with second */ - ret = lmb_reserve(&lmb, 0x40008000, 0x10000); + ret = lmb_reserve(0x40008000, 0x10000); ut_assert(ret >= 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x40000, 0, 0, 0, 0); @@ -469,102 +466,101 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) const phys_size_t alloc_addr_a = ram + 0x8000000; const phys_size_t alloc_addr_b = ram + 0x8000000 * 2; const phys_size_t alloc_addr_c = ram + 0x8000000 * 3; - struct lmb lmb; long ret; phys_addr_t a, b, c, d, e;
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* reserve 3 blocks */ - ret = lmb_reserve(&lmb, alloc_addr_a, 0x10000); + ret = lmb_reserve(alloc_addr_a, 0x10000); ut_asserteq(ret, 0); - ret = lmb_reserve(&lmb, alloc_addr_b, 0x10000); + ret = lmb_reserve(alloc_addr_b, 0x10000); ut_asserteq(ret, 0); - ret = lmb_reserve(&lmb, alloc_addr_c, 0x10000); + ret = lmb_reserve(alloc_addr_c, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
/* allocate blocks */ - a = lmb_alloc_addr(&lmb, ram, alloc_addr_a - ram); + a = lmb_alloc_addr(ram, alloc_addr_a - ram); ut_asserteq(a, ram); ASSERT_LMB(&lmb, ram, ram_size, 3, ram, 0x8010000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000); - b = lmb_alloc_addr(&lmb, alloc_addr_a + 0x10000, + b = lmb_alloc_addr(alloc_addr_a + 0x10000, alloc_addr_b - alloc_addr_a - 0x10000); ut_asserteq(b, alloc_addr_a + 0x10000); ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x10010000, alloc_addr_c, 0x10000, 0, 0); - c = lmb_alloc_addr(&lmb, alloc_addr_b + 0x10000, + c = lmb_alloc_addr(alloc_addr_b + 0x10000, alloc_addr_c - alloc_addr_b - 0x10000); ut_asserteq(c, alloc_addr_b + 0x10000); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0); - d = lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000, + d = lmb_alloc_addr(alloc_addr_c + 0x10000, ram_end - alloc_addr_c - 0x10000); ut_asserteq(d, alloc_addr_c + 0x10000); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
/* allocating anything else should fail */ - e = lmb_alloc(&lmb, 1, 1); + e = lmb_alloc(1, 1); ut_asserteq(e, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
- ret = lmb_free(&lmb, d, ram_end - alloc_addr_c - 0x10000); + ret = lmb_free(d, ram_end - alloc_addr_c - 0x10000); ut_asserteq(ret, 0);
/* allocate at 3 points in free range */
- d = lmb_alloc_addr(&lmb, ram_end - 4, 4); + d = lmb_alloc_addr(ram_end - 4, 4); ut_asserteq(d, ram_end - 4); ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0); - ret = lmb_free(&lmb, d, 4); + ret = lmb_free(d, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
- d = lmb_alloc_addr(&lmb, ram_end - 128, 4); + d = lmb_alloc_addr(ram_end - 128, 4); ut_asserteq(d, ram_end - 128); ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0); - ret = lmb_free(&lmb, d, 4); + ret = lmb_free(d, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
- d = lmb_alloc_addr(&lmb, alloc_addr_c + 0x10000, 4); + d = lmb_alloc_addr(alloc_addr_c + 0x10000, 4); ut_asserteq(d, alloc_addr_c + 0x10000); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010004, 0, 0, 0, 0); - ret = lmb_free(&lmb, d, 4); + ret = lmb_free(d, 4); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
/* allocate at the bottom */ - ret = lmb_free(&lmb, a, alloc_addr_a - ram); + ret = lmb_free(a, alloc_addr_a - ram); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, ram + 0x8000000, 0x10010000, 0, 0, 0, 0); - d = lmb_alloc_addr(&lmb, ram, 4); + d = lmb_alloc_addr(ram, 4); ut_asserteq(d, ram); ASSERT_LMB(&lmb, ram, ram_size, 2, d, 4, ram + 0x8000000, 0x10010000, 0, 0);
/* check that allocating outside memory fails */ if (ram_end != 0) { - ret = lmb_alloc_addr(&lmb, ram_end, 1); + ret = lmb_alloc_addr(ram_end, 1); ut_asserteq(ret, 0); } if (ram != 0) { - ret = lmb_alloc_addr(&lmb, ram - 1, 1); + ret = lmb_alloc_addr(ram - 1, 1); ut_asserteq(ret, 0); }
@@ -594,46 +590,45 @@ static int test_get_unreserved_size(struct unit_test_state *uts, const phys_size_t alloc_addr_a = ram + 0x8000000; const phys_size_t alloc_addr_b = ram + 0x8000000 * 2; const phys_size_t alloc_addr_c = ram + 0x8000000 * 3; - struct lmb lmb; long ret; phys_size_t s;
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* reserve 3 blocks */ - ret = lmb_reserve(&lmb, alloc_addr_a, 0x10000); + ret = lmb_reserve(alloc_addr_a, 0x10000); ut_asserteq(ret, 0); - ret = lmb_reserve(&lmb, alloc_addr_b, 0x10000); + ret = lmb_reserve(alloc_addr_b, 0x10000); ut_asserteq(ret, 0); - ret = lmb_reserve(&lmb, alloc_addr_c, 0x10000); + ret = lmb_reserve(alloc_addr_c, 0x10000); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
/* check addresses in between blocks */ - s = lmb_get_free_size(&lmb, ram); + s = lmb_get_free_size(ram); ut_asserteq(s, alloc_addr_a - ram); - s = lmb_get_free_size(&lmb, ram + 0x10000); + s = lmb_get_free_size(ram + 0x10000); ut_asserteq(s, alloc_addr_a - ram - 0x10000); - s = lmb_get_free_size(&lmb, alloc_addr_a - 4); + s = lmb_get_free_size(alloc_addr_a - 4); ut_asserteq(s, 4);
- s = lmb_get_free_size(&lmb, alloc_addr_a + 0x10000); + s = lmb_get_free_size(alloc_addr_a + 0x10000); ut_asserteq(s, alloc_addr_b - alloc_addr_a - 0x10000); - s = lmb_get_free_size(&lmb, alloc_addr_a + 0x20000); + s = lmb_get_free_size(alloc_addr_a + 0x20000); ut_asserteq(s, alloc_addr_b - alloc_addr_a - 0x20000); - s = lmb_get_free_size(&lmb, alloc_addr_b - 4); + s = lmb_get_free_size(alloc_addr_b - 4); ut_asserteq(s, 4);
- s = lmb_get_free_size(&lmb, alloc_addr_c + 0x10000); + s = lmb_get_free_size(alloc_addr_c + 0x10000); ut_asserteq(s, ram_end - alloc_addr_c - 0x10000); - s = lmb_get_free_size(&lmb, alloc_addr_c + 0x20000); + s = lmb_get_free_size(alloc_addr_c + 0x20000); ut_asserteq(s, ram_end - alloc_addr_c - 0x20000); - s = lmb_get_free_size(&lmb, ram_end - 4); + s = lmb_get_free_size(ram_end - 4); ut_asserteq(s, 4);
return 0; @@ -666,7 +661,6 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts) + 1) * CONFIG_LMB_MAX_REGIONS; const phys_size_t blk_size = 0x10000; phys_addr_t offset; - struct lmb lmb; int ret, i;
ut_asserteq(lmb.memory.cnt, 0); @@ -677,7 +671,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts) /* Add CONFIG_LMB_MAX_REGIONS memory regions */ for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) { offset = ram + 2 * i * ram_size; - ret = lmb_add(&lmb, offset, ram_size); + ret = lmb_add(offset, ram_size); ut_asserteq(ret, 0); } ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS); @@ -685,7 +679,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
/* error for the (CONFIG_LMB_MAX_REGIONS + 1) memory regions */ offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * ram_size; - ret = lmb_add(&lmb, offset, ram_size); + ret = lmb_add(offset, ram_size); ut_asserteq(ret, -1);
ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS); @@ -694,7 +688,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts) /* reserve CONFIG_LMB_MAX_REGIONS regions */ for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) { offset = ram + 2 * i * blk_size; - ret = lmb_reserve(&lmb, offset, blk_size); + ret = lmb_reserve(offset, blk_size); ut_asserteq(ret, 0); }
@@ -703,7 +697,7 @@ static int lib_test_lmb_max_regions(struct unit_test_state *uts)
/* error for the 9th reserved blocks */ offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * blk_size; - ret = lmb_reserve(&lmb, offset, blk_size); + ret = lmb_reserve(offset, blk_size); ut_asserteq(ret, -1);
ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS); @@ -725,26 +719,25 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) { const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; - struct lmb lmb; long ret;
- ret = lmb_add(&lmb, ram, ram_size); + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* reserve, same flag */ - ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
/* reserve again, same flag */ - ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
/* reserve again, new flag */ - ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NONE); + ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NONE); ut_asserteq(ret, -1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); @@ -752,20 +745,20 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
/* merge after */ - ret = lmb_reserve_flags(&lmb, 0x40020000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40020000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x20000, 0, 0, 0, 0);
/* merge before */ - ret = lmb_reserve_flags(&lmb, 0x40000000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40000000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 1); ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x30000, 0, 0, 0, 0);
ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
- ret = lmb_reserve_flags(&lmb, 0x40030000, 0x10000, LMB_NONE); + ret = lmb_reserve_flags(0x40030000, 0x10000, LMB_NONE); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x10000, 0, 0); @@ -774,7 +767,7 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
/* test that old API use LMB_NONE */ - ret = lmb_reserve(&lmb, 0x40040000, 0x10000); + ret = lmb_reserve(0x40040000, 0x10000); ut_asserteq(ret, 1); ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x20000, 0, 0); @@ -782,18 +775,18 @@ static int lib_test_lmb_flags(struct unit_test_state *uts) ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1); ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
- ret = lmb_reserve_flags(&lmb, 0x40070000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40070000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40070000, 0x10000);
- ret = lmb_reserve_flags(&lmb, 0x40050000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40050000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); ASSERT_LMB(&lmb, ram, ram_size, 4, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x10000);
/* merge with 2 adjacent regions */ - ret = lmb_reserve_flags(&lmb, 0x40060000, 0x10000, LMB_NOMAP); + ret = lmb_reserve_flags(0x40060000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 2); ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x30000);

Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the move of the LMB structure to a persistent state, there is no need to declare the variable locally and pass it to the LMB APIs. Remove all local variable instances and change the APIs accordingly.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: None
arch/arc/lib/cache.c | 4 +- arch/arm/lib/stack.c | 4 +- arch/arm/mach-apple/board.c | 17 ++- arch/arm/mach-snapdragon/board.c | 17 ++- arch/arm/mach-stm32mp/dram_init.c | 7 +- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 6 +- arch/m68k/lib/bootm.c | 7 +- arch/microblaze/lib/bootm.c | 4 +- arch/mips/lib/bootm.c | 9 +- arch/nios2/lib/bootm.c | 4 +- arch/powerpc/cpu/mpc85xx/mp.c | 4 +- arch/powerpc/include/asm/mp.h | 4 +- arch/powerpc/lib/bootm.c | 14 +- arch/riscv/lib/bootm.c | 4 +- arch/sh/lib/bootm.c | 4 +- arch/x86/lib/bootm.c | 4 +- arch/xtensa/lib/bootm.c | 4 +- board/xilinx/common/board.c | 7 +- boot/bootm.c | 26 ++-- boot/bootm_os.c | 5 +- boot/image-board.c | 32 ++--- boot/image-fdt.c | 29 ++--- cmd/bdinfo.c | 6 +- cmd/booti.c | 2 +- cmd/bootz.c | 2 +- cmd/load.c | 7 +- drivers/iommu/apple_dart.c | 7 +- drivers/iommu/sandbox_iommu.c | 15 +-- fs/fs.c | 7 +- include/image.h | 22 +--- include/lmb.h | 39 +++--- lib/lmb.c | 81 ++++++------ net/tftp.c | 5 +- net/wget.c | 5 +- test/cmd/bdinfo.c | 2 +- test/lib/lmb.c | 187 +++++++++++++-------------- 36 files changed, 270 insertions(+), 333 deletions(-)
From this commit until 'test: bdinfo: dump the global LMB memory map', snow breaks.
Regards, Simon

Hi Simon,
On Sat, 13 Jul 2024 at 20:45, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the move of the LMB structure to a persistent state, there is no need to declare the variable locally and pass it to the LMB APIs. Remove all local variable instances and change the APIs accordingly.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: None
arch/arc/lib/cache.c | 4 +- arch/arm/lib/stack.c | 4 +- arch/arm/mach-apple/board.c | 17 ++- arch/arm/mach-snapdragon/board.c | 17 ++- arch/arm/mach-stm32mp/dram_init.c | 7 +- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 6 +- arch/m68k/lib/bootm.c | 7 +- arch/microblaze/lib/bootm.c | 4 +- arch/mips/lib/bootm.c | 9 +- arch/nios2/lib/bootm.c | 4 +- arch/powerpc/cpu/mpc85xx/mp.c | 4 +- arch/powerpc/include/asm/mp.h | 4 +- arch/powerpc/lib/bootm.c | 14 +- arch/riscv/lib/bootm.c | 4 +- arch/sh/lib/bootm.c | 4 +- arch/x86/lib/bootm.c | 4 +- arch/xtensa/lib/bootm.c | 4 +- board/xilinx/common/board.c | 7 +- boot/bootm.c | 26 ++-- boot/bootm_os.c | 5 +- boot/image-board.c | 32 ++--- boot/image-fdt.c | 29 ++--- cmd/bdinfo.c | 6 +- cmd/booti.c | 2 +- cmd/bootz.c | 2 +- cmd/load.c | 7 +- drivers/iommu/apple_dart.c | 7 +- drivers/iommu/sandbox_iommu.c | 15 +-- fs/fs.c | 7 +- include/image.h | 22 +--- include/lmb.h | 39 +++--- lib/lmb.c | 81 ++++++------ net/tftp.c | 5 +- net/wget.c | 5 +- test/cmd/bdinfo.c | 2 +- test/lib/lmb.c | 187 +++++++++++++-------------- 36 files changed, 270 insertions(+), 333 deletions(-)
From this commit until 'test: bdinfo: dump the global LMB memory map', snow breaks.
Okay, I am making changes to the patches to ensure that the series is bisectable. I will check that on snow as well.
-sughosh
Regards, Simon

The image_setup_libfdt() function optionally calls the LMB API to reserve the region of memory occupied by the FDT blob. Earlier this was determined by the presence of a pointer to the lmb structure, which no longer exists. Pass a flag to image_setup_libfdt() to indicate whether the region occupied by the FDT blob is to be marked as reserved by the LMB module.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Use true/false identifiers for bool instead of 1/0
* Fix the argument passed to the function in arch/mips/lib/bootm.c
arch/mips/lib/bootm.c | 2 +- boot/image-board.c | 2 +- boot/image-fdt.c | 7 +++---- cmd/elf.c | 2 +- include/image.h | 5 ++--- lib/efi_loader/efi_dt_fixup.c | 2 +- lib/efi_loader/efi_helper.c | 2 +- 7 files changed, 10 insertions(+), 12 deletions(-)
diff --git a/arch/mips/lib/bootm.c b/arch/mips/lib/bootm.c
index 54d89e9cca..8fb3a3923f 100644
--- a/arch/mips/lib/bootm.c
+++ b/arch/mips/lib/bootm.c
@@ -247,7 +247,7 @@ static int boot_setup_fdt(struct bootm_headers *images)
 	images->initrd_start = virt_to_phys((void *)images->initrd_start);
 	images->initrd_end = virt_to_phys((void *)images->initrd_end);

-	return image_setup_libfdt(images, images->ft_addr, &images->lmb);
+	return image_setup_libfdt(images, images->ft_addr, true);
 }

 static void boot_prep_linux(struct bootm_headers *images)
diff --git a/boot/image-board.c b/boot/image-board.c
index e3cb418e3c..1f8c1ac69f 100644
--- a/boot/image-board.c
+++ b/boot/image-board.c
@@ -904,7 +904,7 @@ int image_setup_linux(struct bootm_headers *images)
 	}

 	if (CONFIG_IS_ENABLED(OF_LIBFDT) && of_size) {
-		ret = image_setup_libfdt(images, *of_flat_tree, lmb);
+		ret = image_setup_libfdt(images, *of_flat_tree, true);
 		if (ret)
 			return ret;
 	}
diff --git a/boot/image-fdt.c b/boot/image-fdt.c
index 943a3477a1..7c3d66bad7 100644
--- a/boot/image-fdt.c
+++ b/boot/image-fdt.c
@@ -566,8 +566,7 @@ __weak int arch_fixup_fdt(void *blob)
 	return 0;
 }

-int image_setup_libfdt(struct bootm_headers *images, void *blob,
-		       struct lmb *lmb)
+int image_setup_libfdt(struct bootm_headers *images, void *blob, bool lmb)
 {
 	ulong *initrd_start = &images->initrd_start;
 	ulong *initrd_end = &images->initrd_end;
@@ -667,7 +666,7 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob,
 	}

 	/* Delete the old LMB reservation */
-	if (lmb)
+	if (CONFIG_IS_ENABLED(LMB) && lmb)
 		lmb_free(map_to_sysmem(blob), fdt_totalsize(blob));

 	ret = fdt_shrink_to_minimum(blob, 0);
@@ -676,7 +675,7 @@ int image_setup_libfdt(struct bootm_headers *images, void *blob,
 	of_size = ret;

 	/* Create a new LMB reservation */
-	if (lmb)
+	if (CONFIG_IS_ENABLED(LMB) && lmb)
 		lmb_reserve(map_to_sysmem(blob), of_size);

 #if defined(CONFIG_ARCH_KEYSTONE)
diff --git a/cmd/elf.c b/cmd/elf.c
index 32b7462f92..df53c5b0cb 100644
--- a/cmd/elf.c
+++ b/cmd/elf.c
@@ -68,7 +68,7 @@ int do_bootelf(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
 		log_debug("Setting up FDT at 0x%08lx ...\n", fdt_addr);
 		flush();

-		if (image_setup_libfdt(&img, (void *)fdt_addr, NULL))
+		if (image_setup_libfdt(&img, (void *)fdt_addr, false))
 			return 1;
 	}
 #endif
diff --git a/include/image.h b/include/image.h
index 8036eae15c..47fd80bef4 100644
--- a/include/image.h
+++ b/include/image.h
@@ -1018,11 +1018,10 @@ int image_decomp(int comp, ulong load, ulong image_start, int type,
  *
  * @images: Images information
  * @blob: FDT to update
- * @lmb: Points to logical memory block structure
+ * @lmb: Flag indicating use of lmb for reserving FDT memory region
  * Return: 0 if ok, <0 on failure
  */
-int image_setup_libfdt(struct bootm_headers *images, void *blob,
-		       struct lmb *lmb);
+int image_setup_libfdt(struct bootm_headers *images, void *blob, bool lmb);

 /**
  * Set up the FDT to use for booting a kernel
diff --git a/lib/efi_loader/efi_dt_fixup.c b/lib/efi_loader/efi_dt_fixup.c
index 9886e6897c..9d017804ee 100644
--- a/lib/efi_loader/efi_dt_fixup.c
+++ b/lib/efi_loader/efi_dt_fixup.c
@@ -172,7 +172,7 @@ efi_dt_fixup(struct efi_dt_fixup_protocol *this, void *dtb,
 	}

 	fdt_set_totalsize(dtb, *buffer_size);
-	if (image_setup_libfdt(&img, dtb, NULL)) {
+	if (image_setup_libfdt(&img, dtb, false)) {
 		log_err("failed to process device tree\n");
 		ret = EFI_INVALID_PARAMETER;
 		goto out;
diff --git a/lib/efi_loader/efi_helper.c b/lib/efi_loader/efi_helper.c
index 348612c3da..13e97fb741 100644
--- a/lib/efi_loader/efi_helper.c
+++ b/lib/efi_loader/efi_helper.c
@@ -513,7 +513,7 @@ efi_status_t efi_install_fdt(void *fdt)
 		return EFI_OUT_OF_RESOURCES;
 	}

-	if (image_setup_libfdt(&img, fdt, NULL)) {
+	if (image_setup_libfdt(&img, fdt, false)) {
 		log_err("ERROR: failed to process device tree\n");
 		return EFI_LOAD_ERROR;
 	}

On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The image_setup_libfdt() function optionally calls the LMB API to reserve the region of memory occupied by the FDT blob. Earlier this was determined by the presence of a pointer to the lmb structure, which no longer exists. Pass a flag to image_setup_libfdt() to indicate whether the region occupied by the FDT blob is to be marked as reserved by the LMB module.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Use true/false identifiers for bool instead of 1/0
- Fix the argument passed to the function in arch/mips/lib/bootm.c
arch/mips/lib/bootm.c | 2 +- boot/image-board.c | 2 +- boot/image-fdt.c | 7 +++---- cmd/elf.c | 2 +- include/image.h | 5 ++--- lib/efi_loader/efi_dt_fixup.c | 2 +- lib/efi_loader/efi_helper.c | 2 +- 7 files changed, 10 insertions(+), 12 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

Allow for resizing of LMB regions if the region attributes match. The current code returns a failure status on detecting an overlapping address. This worked up until now, since LMB reservations were neither persistent nor global -- the LMB memory map was specific and private to a given caller of the LMB APIs.
With the change in the LMB code to make the LMB reservations persistent, there needs to be a check on whether a memory region can be resized, and if so, the resize must be carried out. To mark memory that must not be resized, add a new flag, LMB_NOOVERWRITE. Reserving a region of memory with this attribute indicates that the region cannot be resized.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Do the check for the region flags in lmb_resize_regions() and lmb_merge_overlap_regions() to decide on merging the overlapping regions.
include/lmb.h | 1 + lib/lmb.c | 116 ++++++++++++++++++++++++++++++++++++++++++++------ 2 files changed, 103 insertions(+), 14 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 069c6af2a3..99fcf5781f 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -20,6 +20,7 @@ enum lmb_flags {
 	LMB_NONE = 0x0,
 	LMB_NOMAP = 0x4,
+	LMB_NOOVERWRITE = 0x8,
 };

 /**
diff --git a/lib/lmb.c b/lib/lmb.c
index 0d01e58a46..80945e3cae 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -230,12 +230,88 @@ void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
 	lmb_reserve_common(fdt_blob);
 }

+static bool lmb_region_flags_match(struct lmb_region *rgn, unsigned long r1,
+				   enum lmb_flags flags)
+{
+	return rgn->region[r1].flags == flags;
+}
+
+static long lmb_merge_overlap_regions(struct lmb_region *rgn, unsigned long i,
+				      phys_addr_t base, phys_size_t size,
+				      enum lmb_flags flags)
+{
+	phys_size_t rgnsize;
+	unsigned long rgn_cnt, idx;
+	phys_addr_t rgnbase, rgnend;
+	phys_addr_t mergebase, mergeend;
+
+	rgn_cnt = 0;
+	idx = i;
+	/*
+	 * First thing to do is to identify how many regions does
+	 * the requested region overlap.
+	 * If the flags match, combine all these overlapping
+	 * regions into a single region, and remove the merged
+	 * regions.
+	 */
+	while (idx < rgn->cnt - 1) {
+		rgnbase = rgn->region[idx].base;
+		rgnsize = rgn->region[idx].size;
+
+		if (lmb_addrs_overlap(base, size, rgnbase,
+				      rgnsize)) {
+			if (!lmb_region_flags_match(rgn, idx, flags))
+				return -1;
+			rgn_cnt++;
+			idx++;
+		}
+	}
+
+	/* The merged region's base and size */
+	rgnbase = rgn->region[i].base;
+	mergebase = min(base, rgnbase);
+	rgnend = rgn->region[idx].base + rgn->region[idx].size;
+	mergeend = max(rgnend, (base + size));
+
+	rgn->region[i].base = mergebase;
+	rgn->region[i].size = mergeend - mergebase;
+
+	/* Now remove the merged regions */
+	while (--rgn_cnt)
+		lmb_remove_region(rgn, i + 1);
+
+	return 0;
+}
+
+static long lmb_resize_regions(struct lmb_region *rgn, unsigned long i,
+			       phys_addr_t base, phys_size_t size,
+			       enum lmb_flags flags)
+{
+	long ret = 0;
+	phys_addr_t rgnend;
+
+	if (i == rgn->cnt - 1 ||
+	    base + size < rgn->region[i + 1].base) {
+		if (!lmb_region_flags_match(rgn, i, flags))
+			return -1;
+
+		rgnend = rgn->region[i].base + rgn->region[i].size;
+		rgn->region[i].base = min(base, rgn->region[i].base);
+		rgnend = max(base + size, rgnend);
+		rgn->region[i].size = rgnend - rgn->region[i].base;
+	} else {
+		ret = lmb_merge_overlap_regions(rgn, i, base, size, flags);
+	}
+
+	return ret;
+}
+
 /* This routine called with relocation disabled.
  */
 static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 				 phys_size_t size, enum lmb_flags flags)
 {
 	unsigned long coalesced = 0;
-	long adjacent, i;
+	long ret, i;

 	if (rgn->cnt == 0) {
 		rgn->region[0].base = base;
@@ -260,23 +336,28 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
 			return -1; /* regions with new flags */
 		}

-		adjacent = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
-		if (adjacent > 0) {
+		ret = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
+		if (ret > 0) {
 			if (flags != rgnflags)
 				break;
 			rgn->region[i].base -= size;
 			rgn->region[i].size += size;
 			coalesced++;
 			break;
-		} else if (adjacent < 0) {
+		} else if (ret < 0) {
 			if (flags != rgnflags)
 				break;
 			rgn->region[i].size += size;
 			coalesced++;
 			break;
 		} else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) {
-			/* regions overlap */
-			return -1;
+			ret = lmb_resize_regions(rgn, i, base, size,
+						 flags);
+			if (ret < 0)
+				return -1;
+
+			coalesced++;
+			break;
 		}
 	}

@@ -418,7 +499,7 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)
 }

 static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
-				    phys_addr_t max_addr)
+				    phys_addr_t max_addr, enum lmb_flags flags)
 {
 	long i, rgn;
 	phys_addr_t base = 0;
@@ -468,7 +549,7 @@ phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 {
 	phys_addr_t alloc;

-	alloc = __lmb_alloc_base(size, align, max_addr);
+	alloc = __lmb_alloc_base(size, align, max_addr, LMB_NONE);

 	if (alloc == 0)
 		printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
@@ -477,11 +558,8 @@ phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 	return alloc;
 }

-/*
- * Try to allocate a specific address range: must be in defined memory but not
- * reserved
- */
-phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
+static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size,
+				    enum lmb_flags flags)
 {
 	long rgn;

@@ -496,13 +574,23 @@ phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
 			      lmb->memory.region[rgn].size,
 			      base + size - 1, 1)) {
 			/* ok, reserve the memory */
-			if (lmb_reserve(base, size) >= 0)
+			if (lmb_reserve_flags(base, size, flags) >= 0)
 				return base;
 		}
 	}
+
 	return 0;
 }

+/*
+ * Try to allocate a specific address range: must be in defined memory but not
+ * reserved
+ */
+phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
+{
+	return __lmb_alloc_addr(base, size, LMB_NONE);
+}
+
 /* Return number of bytes from a given address that are free */
 phys_size_t lmb_get_free_size(phys_addr_t addr)
 {

Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Allow for resizing of LMB regions if the region attributes match. The current code returns a failure status on detecting an overlapping address. This worked up until now since the LMB calls were not persistent and global -- the LMB memory map was specific and private to a given caller of the LMB APIs.
With the change in the LMB code to make the LMB reservations persistent, there needs to be a check on whether a memory region can be resized, and then do so if it can. To mark memory that cannot be resized, add a new flag, LMB_NOOVERWRITE. Reserving a region of memory with this attribute indicates that the region cannot be resized.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Do the check for the region flags in lmb_resize_regions() and lmb_merge_overlap_regions() to decide on merging the overlapping regions.
include/lmb.h |   1 +
lib/lmb.c     | 116 ++++++++++++++++++++++++++++++++++++++++++++------
2 files changed, 103 insertions(+), 14 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 069c6af2a3..99fcf5781f 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -20,6 +20,7 @@ enum lmb_flags {
 	LMB_NONE = 0x0,
 	LMB_NOMAP = 0x4,
LMB_NOOVERWRITE = 0x8,
How about LMB_PERSIST ?
These could be adjusted to use BIT()
};
/** diff --git a/lib/lmb.c b/lib/lmb.c index 0d01e58a46..80945e3cae 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -230,12 +230,88 @@ void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, lmb_reserve_common(fdt_blob); }
+static bool lmb_region_flags_match(struct lmb_region *rgn, unsigned long r1,
At some point the index should change to an int (or uint), rather than unsigned long. I had a series that did that, but it died.
enum lmb_flags flags)
+{
return rgn->region[r1].flags == flags;
+}
+static long lmb_merge_overlap_regions(struct lmb_region *rgn, unsigned long i,
phys_addr_t base, phys_size_t size,
enum lmb_flags flags)
+{
phys_size_t rgnsize;
unsigned long rgn_cnt, idx;
phys_addr_t rgnbase, rgnend;
phys_addr_t mergebase, mergeend;
rgn_cnt = 0;
idx = i;
/*
* First thing to do is to identify how many regions does
* the requested region overlap.
how many regions the requested region overlaps
* If the flags match, combine all these overlapping
* regions into a single region, and remove the merged
* regions.
*/
while (idx < rgn->cnt - 1) {
rgnbase = rgn->region[idx].base;
rgnsize = rgn->region[idx].size;
if (lmb_addrs_overlap(base, size, rgnbase,
rgnsize)) {
if (!lmb_region_flags_match(rgn, idx, flags))
return -1;
rgn_cnt++;
idx++;
}
}
/* The merged region's base and size */
rgnbase = rgn->region[i].base;
mergebase = min(base, rgnbase);
rgnend = rgn->region[idx].base + rgn->region[idx].size;
mergeend = max(rgnend, (base + size));
rgn->region[i].base = mergebase;
rgn->region[i].size = mergeend - mergebase;
/* Now remove the merged regions */
while (--rgn_cnt)
lmb_remove_region(rgn, i + 1);
return 0;
+}
+static long lmb_resize_regions(struct lmb_region *rgn, unsigned long i,
phys_addr_t base, phys_size_t size,
enum lmb_flags flags)
+{
long ret = 0;
phys_addr_t rgnend;
if (i == rgn->cnt - 1 ||
base + size < rgn->region[i + 1].base) {
if (!lmb_region_flags_match(rgn, i, flags))
return -1;
rgnend = rgn->region[i].base + rgn->region[i].size;
rgn->region[i].base = min(base, rgn->region[i].base);
rgnend = max(base + size, rgnend);
rgn->region[i].size = rgnend - rgn->region[i].base;
} else {
ret = lmb_merge_overlap_regions(rgn, i, base, size, flags);
}
return ret;
+}
/* This routine called with relocation disabled. */ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, phys_size_t size, enum lmb_flags flags)
This could really use a function comment
{ unsigned long coalesced = 0;
long adjacent, i;
long ret, i; if (rgn->cnt == 0) { rgn->region[0].base = base;
@@ -260,23 +336,28 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, return -1; /* regions with new flags */ }
adjacent = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
if (adjacent > 0) {
ret = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
if (ret > 0) { if (flags != rgnflags) break; rgn->region[i].base -= size; rgn->region[i].size += size; coalesced++; break;
} else if (adjacent < 0) {
} else if (ret < 0) { if (flags != rgnflags) break; rgn->region[i].size += size; coalesced++; break; } else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) {
/* regions overlap */
return -1;
ret = lmb_resize_regions(rgn, i, base, size,
flags);
if (ret < 0)
return -1;
coalesced++;
break; } }
@@ -418,7 +499,7 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size) }
Regards, Simon

hi Simon,
On Sat, 13 Jul 2024 at 20:45, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Allow for resizing of LMB regions if the region attributes match. The current code returns a failure status on detecting an overlapping address. This worked up until now since the LMB calls were not persistent and global -- the LMB memory map was specific and private to a given caller of the LMB API's.
With the change in the LMB code to make the LMB reservations persistent, there needs to be a check on whether the memory region can be resized, and then do it if so. To distinguish between memory that cannot be resized, add a new flag, LMB_NOOVERWRITE. Reserving a region of memory with this attribute would indicate that the region cannot be resized.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Do the check for the region flags in lmb_resize_regions() and lmb_merge_overlap_regions() to decide on merging the overlapping regions.
include/lmb.h | 1 + lib/lmb.c | 116 ++++++++++++++++++++++++++++++++++++++++++++------ 2 files changed, 103 insertions(+), 14 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index 069c6af2a3..99fcf5781f 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -20,6 +20,7 @@ enum lmb_flags { LMB_NONE = 0x0, LMB_NOMAP = 0x4,
LMB_NOOVERWRITE = 0x8,
How about LMB_PERSIST ?
Isn't LMB_NOOVERWRITE more suitable here? I mean, this is indicating that the said region of memory is not to be re-used/re-requested.
These could be adjusted to use BIT()
I am changing these to use the BIT macro in a subsequent commit.
};
/** diff --git a/lib/lmb.c b/lib/lmb.c index 0d01e58a46..80945e3cae 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -230,12 +230,88 @@ void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, lmb_reserve_common(fdt_blob); }
+static bool lmb_region_flags_match(struct lmb_region *rgn, unsigned long r1,
At some point the index should change to an int (or uint), rather than unsigned long. I had a series that did that, but it died.
enum lmb_flags flags)
+{
return rgn->region[r1].flags == flags;
+}
+static long lmb_merge_overlap_regions(struct lmb_region *rgn, unsigned long i,
phys_addr_t base, phys_size_t size,
enum lmb_flags flags)
+{
phys_size_t rgnsize;
unsigned long rgn_cnt, idx;
phys_addr_t rgnbase, rgnend;
phys_addr_t mergebase, mergeend;
rgn_cnt = 0;
idx = i;
/*
* First thing to do is to identify how many regions does
* the requested region overlap.
how many regions the requested region overlaps
Okay
* If the flags match, combine all these overlapping
* regions into a single region, and remove the merged
* regions.
*/
while (idx < rgn->cnt - 1) {
rgnbase = rgn->region[idx].base;
rgnsize = rgn->region[idx].size;
if (lmb_addrs_overlap(base, size, rgnbase,
rgnsize)) {
if (!lmb_region_flags_match(rgn, idx, flags))
return -1;
rgn_cnt++;
idx++;
}
}
/* The merged region's base and size */
rgnbase = rgn->region[i].base;
mergebase = min(base, rgnbase);
rgnend = rgn->region[idx].base + rgn->region[idx].size;
mergeend = max(rgnend, (base + size));
rgn->region[i].base = mergebase;
rgn->region[i].size = mergeend - mergebase;
/* Now remove the merged regions */
while (--rgn_cnt)
lmb_remove_region(rgn, i + 1);
return 0;
+}
+static long lmb_resize_regions(struct lmb_region *rgn, unsigned long i,
phys_addr_t base, phys_size_t size,
enum lmb_flags flags)
+{
long ret = 0;
phys_addr_t rgnend;
if (i == rgn->cnt - 1 ||
base + size < rgn->region[i + 1].base) {
if (!lmb_region_flags_match(rgn, i, flags))
return -1;
rgnend = rgn->region[i].base + rgn->region[i].size;
rgn->region[i].base = min(base, rgn->region[i].base);
rgnend = max(base + size, rgnend);
rgn->region[i].size = rgnend - rgn->region[i].base;
} else {
ret = lmb_merge_overlap_regions(rgn, i, base, size, flags);
}
return ret;
+}
/* This routine called with relocation disabled. */ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, phys_size_t size, enum lmb_flags flags)
This could really use a function comment
Okay
-sughosh
{ unsigned long coalesced = 0;
long adjacent, i;
long ret, i; if (rgn->cnt == 0) { rgn->region[0].base = base;
@@ -260,23 +336,28 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, return -1; /* regions with new flags */ }
adjacent = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
if (adjacent > 0) {
ret = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
if (ret > 0) { if (flags != rgnflags) break; rgn->region[i].base -= size; rgn->region[i].size += size; coalesced++; break;
} else if (adjacent < 0) {
} else if (ret < 0) { if (flags != rgnflags) break; rgn->region[i].size += size; coalesced++; break; } else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) {
/* regions overlap */
return -1;
ret = lmb_resize_regions(rgn, i, base, size,
flags);
if (ret < 0)
return -1;
coalesced++;
break; } }
@@ -418,7 +499,7 @@ static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size) }
Regards, Simon

Hi Sughosh,
On Mon, 15 Jul 2024 at 10:27, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:45, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Allow for resizing of LMB regions if the region attributes match. The current code returns a failure status on detecting an overlapping address. This worked up until now since the LMB calls were not persistent and global -- the LMB memory map was specific and private to a given caller of the LMB API's.
With the change in the LMB code to make the LMB reservations persistent, there needs to be a check on whether the memory region can be resized, and then do it if so. To distinguish between memory that cannot be resized, add a new flag, LMB_NOOVERWRITE. Reserving a region of memory with this attribute would indicate that the region cannot be resized.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Do the check for the region flags in lmb_resize_regions() and lmb_merge_overlap_regions() to decide on merging the overlapping regions.
include/lmb.h | 1 + lib/lmb.c | 116 ++++++++++++++++++++++++++++++++++++++++++++------ 2 files changed, 103 insertions(+), 14 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index 069c6af2a3..99fcf5781f 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -20,6 +20,7 @@ enum lmb_flags { LMB_NONE = 0x0, LMB_NOMAP = 0x4,
LMB_NOOVERWRITE = 0x8,
How about LMB_PERSIST ?
Isn't LMB_NOOVERWRITE more suitable here? I mean, this is indicating that the said region of memory is not to be re-used/re-requested.
These could be adjusted to use BIT()
I am changing these to use the BIT macro in a subsequent commit.
Yes, I saw that later and forgot to remove this. Normally we make refactoring changes before adding new code, though. [..]
Regards, Simon

hi Simon,
On Mon, 15 Jul 2024 at 17:09, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:27, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:45, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Allow for resizing of LMB regions if the region attributes match. The current code returns a failure status on detecting an overlapping address. This worked up until now since the LMB calls were not persistent and global -- the LMB memory map was specific and private to a given caller of the LMB API's.
With the change in the LMB code to make the LMB reservations persistent, there needs to be a check on whether the memory region can be resized, and then do it if so. To distinguish between memory that cannot be resized, add a new flag, LMB_NOOVERWRITE. Reserving a region of memory with this attribute would indicate that the region cannot be resized.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Do the check for the region flags in lmb_resize_regions() and lmb_merge_overlap_regions() to decide on merging the overlapping regions.
include/lmb.h | 1 + lib/lmb.c | 116 ++++++++++++++++++++++++++++++++++++++++++++------ 2 files changed, 103 insertions(+), 14 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index 069c6af2a3..99fcf5781f 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -20,6 +20,7 @@ enum lmb_flags { LMB_NONE = 0x0, LMB_NOMAP = 0x4,
LMB_NOOVERWRITE = 0x8,
How about LMB_PERSIST ?
Isn't LMB_NOOVERWRITE more suitable here? I mean, this is indicating that the said region of memory is not to be re-used/re-requested.
These could be adjusted to use BIT()
I am changing these to use the BIT macro in a subsequent commit.
Yes, I saw that later and forgot to remove this. Normally we make refactoring changes before adding new code, though.
Okay, I will bring in that patch earlier in the series.
-sughosh
[..]
Regards, Simon

The current LMB APIs for allocating and reserving memory use a per-caller memory view. Memory allocated by one caller can then be overwritten by another caller. Make these allocations and reservations persistent using the alloced list data structure.
Two alloced lists are declared -- one for the available (free) memory, and one for the used memory. Once a list is full, it is extended at runtime.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Use the alloced list structure for the available and used memory lists instead of static arrays.
* Make the corresponding changes in the code resulting from the above change.
* Rename the reserved memory list to 'used'.
include/lmb.h |  77 +++--------
lib/lmb.c     | 346 ++++++++++++++++++++++++++++++--------------------
2 files changed, 224 insertions(+), 199 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 99fcf5781f..27cdb18c37 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -24,78 +24,18 @@ enum lmb_flags {
 };
/** - * struct lmb_property - Description of one region. + * struct lmb_region - Description of one region. * * @base: Base address of the region. * @size: Size of the region * @flags: memory region attributes */ -struct lmb_property { +struct lmb_region { phys_addr_t base; phys_size_t size; enum lmb_flags flags; };
-/* - * For regions size management, see LMB configuration in KConfig - * all the #if test are done with CONFIG_LMB_USE_MAX_REGIONS (boolean) - * - * case 1. CONFIG_LMB_USE_MAX_REGIONS is defined (legacy mode) - * => CONFIG_LMB_MAX_REGIONS is used to configure the region size, - * directly in the array lmb_region.region[], with the same - * configuration for memory and reserved regions. - * - * case 2. CONFIG_LMB_USE_MAX_REGIONS is not defined, the size of each - * region is configurated *independently* with - * => CONFIG_LMB_MEMORY_REGIONS: struct lmb.memory_regions - * => CONFIG_LMB_RESERVED_REGIONS: struct lmb.reserved_regions - * lmb_region.region is only a pointer to the correct buffer, - * initialized in lmb_init(). This configuration is useful to manage - * more reserved memory regions with CONFIG_LMB_RESERVED_REGIONS. - */ - -/** - * struct lmb_region - Description of a set of region. - * - * @cnt: Number of regions. - * @max: Size of the region array, max value of cnt. - * @region: Array of the region properties - */ -struct lmb_region { - unsigned long cnt; - unsigned long max; -#if IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS) - struct lmb_property region[CONFIG_LMB_MAX_REGIONS]; -#else - struct lmb_property *region; -#endif -}; - -/** - * struct lmb - Logical memory block handle. - * - * Clients provide storage for Logical memory block (lmb) handles. - * The content of the structure is managed by the lmb library. - * A lmb struct is initialized by lmb_init() functions. - * The lmb struct is passed to all other lmb APIs. - * - * @memory: Description of memory regions. - * @reserved: Description of reserved regions. 
- * @memory_regions: Array of the memory regions (statically allocated) - * @reserved_regions: Array of the reserved regions (statically allocated) - */ -struct lmb { - struct lmb_region memory; - struct lmb_region reserved; -#if !IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS) - struct lmb_property memory_regions[CONFIG_LMB_MEMORY_REGIONS]; - struct lmb_property reserved_regions[CONFIG_LMB_RESERVED_REGIONS]; -#endif -}; - -void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob); -void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, - void *fdt_blob); long lmb_add(phys_addr_t base, phys_size_t size); long lmb_reserve(phys_addr_t base, phys_size_t size); /** @@ -134,6 +74,19 @@ void board_lmb_reserve(void); void arch_lmb_reserve(void); void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align);
+/** + * lmb_mem_regions_init() - Initialise the LMB memory + * + * Initialise the LMB subsystem related data structures. There are two + * alloced lists that are initialised, one for the free memory, and one + * for the used memory. + * + * Initialise the two lists as part of board init. + * + * Return: 0 if OK, -ve on failure. + */ +int lmb_mem_regions_init(void); + #endif /* __KERNEL__ */
#endif /* _LINUX_LMB_H */ diff --git a/lib/lmb.c b/lib/lmb.c index 80945e3cae..a46bc8a7a3 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -6,6 +6,7 @@ * Copyright (C) 2001 Peter Bergner. */
+#include <alist.h> #include <efi_loader.h> #include <image.h> #include <mapmem.h> @@ -15,24 +16,30 @@
#include <asm/global_data.h> #include <asm/sections.h> +#include <linux/kernel.h>
DECLARE_GLOBAL_DATA_PTR;
#define LMB_ALLOC_ANYWHERE 0 +#define LMB_ALIST_INITIAL_SIZE 4
-static void lmb_dump_region(struct lmb_region *rgn, char *name) +struct alist lmb_free_mem; +struct alist lmb_used_mem; + +static void lmb_dump_region(struct alist *lmb_rgn_lst, char *name) { + struct lmb_region *rgn = lmb_rgn_lst->data; unsigned long long base, size, end; enum lmb_flags flags; int i;
- printf(" %s.cnt = 0x%lx / max = 0x%lx\n", name, rgn->cnt, rgn->max); + printf(" %s.count = 0x%hx\n", name, lmb_rgn_lst->count);
- for (i = 0; i < rgn->cnt; i++) { - base = rgn->region[i].base; - size = rgn->region[i].size; + for (i = 0; i < lmb_rgn_lst->count; i++) { + base = rgn[i].base; + size = rgn[i].size; end = base + size - 1; - flags = rgn->region[i].flags; + flags = rgn[i].flags;
printf(" %s[%d]\t[0x%llx-0x%llx], 0x%08llx bytes flags: %x\n", name, i, base, end, size, flags); @@ -42,8 +49,8 @@ static void lmb_dump_region(struct lmb_region *rgn, char *name) void lmb_dump_all_force(void) { printf("lmb_dump_all:\n"); - lmb_dump_region(&lmb->memory, "memory"); - lmb_dump_region(&lmb->reserved, "reserved"); + lmb_dump_region(&lmb_free_mem, "memory"); + lmb_dump_region(&lmb_used_mem, "reserved"); }
void lmb_dump_all(void) @@ -73,61 +80,71 @@ static long lmb_addrs_adjacent(phys_addr_t base1, phys_size_t size1, return 0; }
-static long lmb_regions_overlap(struct lmb_region *rgn, unsigned long r1, +static long lmb_regions_overlap(struct alist *lmb_rgn_lst, unsigned long r1, unsigned long r2) { - phys_addr_t base1 = rgn->region[r1].base; - phys_size_t size1 = rgn->region[r1].size; - phys_addr_t base2 = rgn->region[r2].base; - phys_size_t size2 = rgn->region[r2].size; + struct lmb_region *rgn = lmb_rgn_lst->data; + + phys_addr_t base1 = rgn[r1].base; + phys_size_t size1 = rgn[r1].size; + phys_addr_t base2 = rgn[r2].base; + phys_size_t size2 = rgn[r2].size;
return lmb_addrs_overlap(base1, size1, base2, size2); } -static long lmb_regions_adjacent(struct lmb_region *rgn, unsigned long r1, + +static long lmb_regions_adjacent(struct alist *lmb_rgn_lst, unsigned long r1, unsigned long r2) { - phys_addr_t base1 = rgn->region[r1].base; - phys_size_t size1 = rgn->region[r1].size; - phys_addr_t base2 = rgn->region[r2].base; - phys_size_t size2 = rgn->region[r2].size; + struct lmb_region *rgn = lmb_rgn_lst->data; + + phys_addr_t base1 = rgn[r1].base; + phys_size_t size1 = rgn[r1].size; + phys_addr_t base2 = rgn[r2].base; + phys_size_t size2 = rgn[r2].size; return lmb_addrs_adjacent(base1, size1, base2, size2); }
-static void lmb_remove_region(struct lmb_region *rgn, unsigned long r) +static void lmb_remove_region(struct alist *lmb_rgn_lst, unsigned long r) { unsigned long i; + struct lmb_region *rgn = lmb_rgn_lst->data;
- for (i = r; i < rgn->cnt - 1; i++) { - rgn->region[i].base = rgn->region[i + 1].base; - rgn->region[i].size = rgn->region[i + 1].size; - rgn->region[i].flags = rgn->region[i + 1].flags; + for (i = r; i < lmb_rgn_lst->count - 1; i++) { + rgn[i].base = rgn[i + 1].base; + rgn[i].size = rgn[i + 1].size; + rgn[i].flags = rgn[i + 1].flags; } - rgn->cnt--; + lmb_rgn_lst->count--; }
/* Assumption: base addr of region 1 < base addr of region 2 */ -static void lmb_coalesce_regions(struct lmb_region *rgn, unsigned long r1, +static void lmb_coalesce_regions(struct alist *lmb_rgn_lst, unsigned long r1, unsigned long r2) { - rgn->region[r1].size += rgn->region[r2].size; - lmb_remove_region(rgn, r2); + struct lmb_region *rgn = lmb_rgn_lst->data; + + rgn[r1].size += rgn[r2].size; + lmb_remove_region(lmb_rgn_lst, r2); }
/*Assumption : base addr of region 1 < base addr of region 2*/ -static void lmb_fix_over_lap_regions(struct lmb_region *rgn, unsigned long r1, - unsigned long r2) +static void lmb_fix_over_lap_regions(struct alist *lmb_rgn_lst, + unsigned long r1, unsigned long r2) { - phys_addr_t base1 = rgn->region[r1].base; - phys_size_t size1 = rgn->region[r1].size; - phys_addr_t base2 = rgn->region[r2].base; - phys_size_t size2 = rgn->region[r2].size; + struct lmb_region *rgn = lmb_rgn_lst->data; + + phys_addr_t base1 = rgn[r1].base; + phys_size_t size1 = rgn[r1].size; + phys_addr_t base2 = rgn[r2].base; + phys_size_t size2 = rgn[r2].size;
if (base1 + size1 > base2 + size2) { printf("This will not be a case any time\n"); return; } - rgn->region[r1].size = base2 + size2 - base1; - lmb_remove_region(rgn, r2); + rgn[r1].size = base2 + size2 - base1; + lmb_remove_region(lmb_rgn_lst, r2); }
void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align) @@ -233,20 +250,22 @@ void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, static bool lmb_region_flags_match(struct lmb_region *rgn, unsigned long r1, enum lmb_flags flags) { - return rgn->region[r1].flags == flags; + return rgn[r1].flags == flags; }
-static long lmb_merge_overlap_regions(struct lmb_region *rgn, unsigned long i, - phys_addr_t base, phys_size_t size, - enum lmb_flags flags) +static long lmb_merge_overlap_regions(struct alist *lmb_rgn_lst, + unsigned long i, phys_addr_t base, + phys_size_t size, enum lmb_flags flags) { phys_size_t rgnsize; unsigned long rgn_cnt, idx; phys_addr_t rgnbase, rgnend; phys_addr_t mergebase, mergeend; + struct lmb_region *rgn = lmb_rgn_lst->data;
rgn_cnt = 0; idx = i; + /* * First thing to do is to identify how many regions does * the requested region overlap. @@ -254,9 +273,9 @@ static long lmb_merge_overlap_regions(struct lmb_region *rgn, unsigned long i, * regions into a single region, and remove the merged * regions. */ - while (idx < rgn->cnt - 1) { - rgnbase = rgn->region[idx].base; - rgnsize = rgn->region[idx].size; + while (idx < lmb_rgn_lst->count - 1) { + rgnbase = rgn[idx].base; + rgnsize = rgn[idx].size;
if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) { @@ -268,64 +287,70 @@ static long lmb_merge_overlap_regions(struct lmb_region *rgn, unsigned long i, }
/* The merged region's base and size */ - rgnbase = rgn->region[i].base; + rgnbase = rgn[i].base; mergebase = min(base, rgnbase); - rgnend = rgn->region[idx].base + rgn->region[idx].size; + rgnend = rgn[idx].base + rgn[idx].size; mergeend = max(rgnend, (base + size));
- rgn->region[i].base = mergebase; - rgn->region[i].size = mergeend - mergebase; + rgn[i].base = mergebase; + rgn[i].size = mergeend - mergebase;
/* Now remove the merged regions */ while (--rgn_cnt) - lmb_remove_region(rgn, i + 1); + lmb_remove_region(lmb_rgn_lst, i + 1);
return 0; }
-static long lmb_resize_regions(struct lmb_region *rgn, unsigned long i, +static long lmb_resize_regions(struct alist *lmb_rgn_lst, unsigned long i, phys_addr_t base, phys_size_t size, enum lmb_flags flags) { long ret = 0; phys_addr_t rgnend; + struct lmb_region *rgn = lmb_rgn_lst->data;
- if (i == rgn->cnt - 1 || - base + size < rgn->region[i + 1].base) { + if (i == lmb_rgn_lst->count - 1 || + base + size < rgn[i + 1].base) { if (!lmb_region_flags_match(rgn, i, flags)) return -1;
- rgnend = rgn->region[i].base + rgn->region[i].size; - rgn->region[i].base = min(base, rgn->region[i].base); + rgnend = rgn[i].base + rgn[i].size; + rgn[i].base = min(base, rgn[i].base); rgnend = max(base + size, rgnend); - rgn->region[i].size = rgnend - rgn->region[i].base; + rgn[i].size = rgnend - rgn[i].base; } else { - ret = lmb_merge_overlap_regions(rgn, i, base, size, flags); + ret = lmb_merge_overlap_regions(lmb_rgn_lst, i, base, size, + flags); }
return ret; }
/* This routine called with relocation disabled. */ -static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, +static long lmb_add_region_flags(struct alist *lmb_rgn_lst, phys_addr_t base, phys_size_t size, enum lmb_flags flags) { - unsigned long coalesced = 0; long ret, i; + unsigned long coalesced = 0; + struct lmb_region *rgn = lmb_rgn_lst->data;
- if (rgn->cnt == 0) { - rgn->region[0].base = base; - rgn->region[0].size = size; - rgn->region[0].flags = flags; - rgn->cnt = 1; + if (alist_err(lmb_rgn_lst)) + return -1; + + if (alist_empty(lmb_rgn_lst)) { + rgn[0].base = base; + rgn[0].size = size; + rgn[0].flags = flags; + lmb_rgn_lst->count = 1; return 0; }
/* First try and coalesce this LMB with another. */ - for (i = 0; i < rgn->cnt; i++) { - phys_addr_t rgnbase = rgn->region[i].base; - phys_size_t rgnsize = rgn->region[i].size; - phys_size_t rgnflags = rgn->region[i].flags; + for (i = 0; i < lmb_rgn_lst->count; i++) { + phys_addr_t rgnbase = rgn[i].base; + phys_size_t rgnsize = rgn[i].size; + phys_size_t rgnflags = rgn[i].flags; phys_addr_t end = base + size - 1; phys_addr_t rgnend = rgnbase + rgnsize - 1; if (rgnbase <= base && end <= rgnend) { @@ -340,19 +365,19 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, if (ret > 0) { if (flags != rgnflags) break; - rgn->region[i].base -= size; - rgn->region[i].size += size; + rgn[i].base -= size; + rgn[i].size += size; coalesced++; break; } else if (ret < 0) { if (flags != rgnflags) break; - rgn->region[i].size += size; + rgn[i].size += size; coalesced++; break; } else if (lmb_addrs_overlap(base, size, rgnbase, rgnsize)) { - ret = lmb_resize_regions(rgn, i, base, size, - flags); + ret = lmb_resize_regions(lmb_rgn_lst, i, base, + size, flags); if (ret < 0) return -1;
@@ -361,99 +386,106 @@ static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base, } }
- if (i < rgn->cnt - 1 && rgn->region[i].flags == rgn->region[i + 1].flags) { - if (lmb_regions_adjacent(rgn, i, i + 1)) { - lmb_coalesce_regions(rgn, i, i + 1); + if (i < lmb_rgn_lst->count - 1 && + rgn[i].flags == rgn[i + 1].flags) { + if (lmb_regions_adjacent(lmb_rgn_lst, i, i + 1)) { + lmb_coalesce_regions(lmb_rgn_lst, i, i + 1); coalesced++; - } else if (lmb_regions_overlap(rgn, i, i + 1)) { + } else if (lmb_regions_overlap(lmb_rgn_lst, i, i + 1)) { /* fix overlapping area */ - lmb_fix_over_lap_regions(rgn, i, i + 1); + lmb_fix_over_lap_regions(lmb_rgn_lst, i, i + 1); coalesced++; } }
if (coalesced) return coalesced; - if (rgn->cnt >= rgn->max) - return -1; + + if (alist_full(lmb_rgn_lst)) { + if (!alist_expand_by(lmb_rgn_lst, lmb_rgn_lst->alloc * 2)) + return -1; + else + rgn = lmb_rgn_lst->data; + }
/* Couldn't coalesce the LMB, so add it to the sorted table. */ - for (i = rgn->cnt-1; i >= 0; i--) { - if (base < rgn->region[i].base) { - rgn->region[i + 1].base = rgn->region[i].base; - rgn->region[i + 1].size = rgn->region[i].size; - rgn->region[i + 1].flags = rgn->region[i].flags; + for (i = lmb_rgn_lst->count - 1; i >= 0; i--) { + if (base < rgn[i].base) { + rgn[i + 1].base = rgn[i].base; + rgn[i + 1].size = rgn[i].size; + rgn[i + 1].flags = rgn[i].flags; } else { - rgn->region[i + 1].base = base; - rgn->region[i + 1].size = size; - rgn->region[i + 1].flags = flags; + rgn[i + 1].base = base; + rgn[i + 1].size = size; + rgn[i + 1].flags = flags; break; } }
-	if (base < rgn->region[0].base) {
-		rgn->region[0].base = base;
-		rgn->region[0].size = size;
-		rgn->region[0].flags = flags;
+	if (base < rgn[0].base) {
+		rgn[0].base = base;
+		rgn[0].size = size;
+		rgn[0].flags = flags;
 	}

-	rgn->cnt++;
+	lmb_rgn_lst->count++;
 	return 0;
 }

-static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base,
+static long lmb_add_region(struct alist *lmb_rgn_lst, phys_addr_t base,
 			   phys_size_t size)
 {
-	return lmb_add_region_flags(rgn, base, size, LMB_NONE);
+	return lmb_add_region_flags(lmb_rgn_lst, base, size, LMB_NONE);
 }

 /* This routine may be called with relocation disabled. */
 long lmb_add(phys_addr_t base, phys_size_t size)
 {
-	struct lmb_region *_rgn = &(lmb->memory);
+	struct alist *lmb_rgn_lst = &lmb_free_mem;

-	return lmb_add_region(_rgn, base, size);
+	return lmb_add_region(lmb_rgn_lst, base, size);
 }

 long lmb_free(phys_addr_t base, phys_size_t size)
 {
-	struct lmb_region *rgn = &(lmb->reserved);
+	struct lmb_region *rgn;
+	struct alist *lmb_rgn_lst = &lmb_used_mem;
 	phys_addr_t rgnbegin, rgnend;
 	phys_addr_t end = base + size - 1;
 	int i;

 	rgnbegin = rgnend = 0; /* supress gcc warnings */
-
+	rgn = lmb_rgn_lst->data;
 	/* Find the region where (base, size) belongs to */
-	for (i = 0; i < rgn->cnt; i++) {
-		rgnbegin = rgn->region[i].base;
-		rgnend = rgnbegin + rgn->region[i].size - 1;
+	for (i = 0; i < lmb_rgn_lst->count; i++) {
+		rgnbegin = rgn[i].base;
+		rgnend = rgnbegin + rgn[i].size - 1;

 		if ((rgnbegin <= base) && (end <= rgnend))
 			break;
 	}

 	/* Didn't find the region */
-	if (i == rgn->cnt)
+	if (i == lmb_rgn_lst->count)
 		return -1;

 	/* Check to see if we are removing entire region */
 	if ((rgnbegin == base) && (rgnend == end)) {
-		lmb_remove_region(rgn, i);
+		lmb_remove_region(lmb_rgn_lst, i);
 		return 0;
 	}

 	/* Check to see if region is matching at the front */
 	if (rgnbegin == base) {
-		rgn->region[i].base = end + 1;
-		rgn->region[i].size -= size;
+		rgn[i].base = end + 1;
+		rgn[i].size -= size;
 		return 0;
 	}

 	/* Check to see if the region is matching at the end */
 	if (rgnend == end) {
-		rgn->region[i].size -= size;
+		rgn[i].size -= size;
 		return 0;
 	}

@@ -461,16 +493,16 @@ long lmb_free(phys_addr_t base, phys_size_t size)
 	 * We need to split the entry - adjust the current one to the
 	 * beginging of the hole and add the region after hole.
 	 */
-	rgn->region[i].size = base - rgn->region[i].base;
-	return lmb_add_region_flags(rgn, end + 1, rgnend - end,
-				    rgn->region[i].flags);
+	rgn[i].size = base - rgn[i].base;
+	return lmb_add_region_flags(lmb_rgn_lst, end + 1, rgnend - end,
+				    rgn[i].flags);
 }

 long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags)
 {
-	struct lmb_region *_rgn = &(lmb->reserved);
+	struct alist *lmb_rgn_lst = &lmb_used_mem;

-	return lmb_add_region_flags(_rgn, base, size, flags);
+	return lmb_add_region_flags(lmb_rgn_lst, base, size, flags);
 }

 long lmb_reserve(phys_addr_t base, phys_size_t size)
@@ -478,19 +510,20 @@ long lmb_reserve(phys_addr_t base, phys_size_t size)
 	return lmb_reserve_flags(base, size, LMB_NONE);
 }

-static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base,
+static long lmb_overlaps_region(struct alist *lmb_rgn_lst, phys_addr_t base,
 				phys_size_t size)
 {
 	unsigned long i;
+	struct lmb_region *rgn = lmb_rgn_lst->data;

-	for (i = 0; i < rgn->cnt; i++) {
-		phys_addr_t rgnbase = rgn->region[i].base;
-		phys_size_t rgnsize = rgn->region[i].size;
+	for (i = 0; i < lmb_rgn_lst->count; i++) {
+		phys_addr_t rgnbase = rgn[i].base;
+		phys_size_t rgnsize = rgn[i].size;
 		if (lmb_addrs_overlap(base, size, rgnbase, rgnsize))
 			break;
 	}

-	return (i < rgn->cnt) ? i : -1;
+	return (i < lmb_rgn_lst->count) ? i : -1;
 }

 static phys_addr_t lmb_align_down(phys_addr_t addr, phys_size_t size)
@@ -504,10 +537,12 @@ static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
 	long i, rgn;
 	phys_addr_t base = 0;
 	phys_addr_t res_base;
+	struct lmb_region *lmb_used = lmb_used_mem.data;
+	struct lmb_region *lmb_memory = lmb_free_mem.data;

-	for (i = lmb->memory.cnt - 1; i >= 0; i--) {
-		phys_addr_t lmbbase = lmb->memory.region[i].base;
-		phys_size_t lmbsize = lmb->memory.region[i].size;
+	for (i = lmb_free_mem.count - 1; i >= 0; i--) {
+		phys_addr_t lmbbase = lmb_memory[i].base;
+		phys_size_t lmbsize = lmb_memory[i].size;

 		if (lmbsize < size)
 			continue;
@@ -523,15 +558,16 @@ static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
 			continue;

 		while (base && lmbbase <= base) {
-			rgn = lmb_overlaps_region(&lmb->reserved, base, size);
+			rgn = lmb_overlaps_region(&lmb_used_mem, base, size);
 			if (rgn < 0) {
 				/* This area isn't reserved, take it */
-				if (lmb_add_region(&lmb->reserved, base,
-						   size) < 0)
+				if (lmb_add_region_flags(&lmb_used_mem, base,
+							 size, flags) < 0)
 					return 0;
 				return base;
 			}
-			res_base = lmb->reserved.region[rgn].base;
+
+			res_base = lmb_used[rgn].base;
 			if (res_base < size)
 				break;
 			base = lmb_align_down(res_base - size, align);
@@ -562,16 +598,17 @@ static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size,
 			     enum lmb_flags flags)
 {
 	long rgn;
+	struct lmb_region *lmb_memory = lmb_free_mem.data;

 	/* Check if the requested address is in one of the memory regions */
-	rgn = lmb_overlaps_region(&lmb->memory, base, size);
+	rgn = lmb_overlaps_region(&lmb_free_mem, base, size);
 	if (rgn >= 0) {
 		/*
 		 * Check if the requested end address is in the same memory
 		 * region we found.
 		 */
-		if (lmb_addrs_overlap(lmb->memory.region[rgn].base,
-				      lmb->memory.region[rgn].size,
+		if (lmb_addrs_overlap(lmb_memory[rgn].base,
+				      lmb_memory[rgn].size,
 				      base + size - 1, 1)) {
 			/* ok, reserve the memory */
 			if (lmb_reserve_flags(base, size, flags) >= 0)
@@ -596,24 +633,26 @@ phys_size_t lmb_get_free_size(phys_addr_t addr)
 {
 	int i;
 	long rgn;
+	struct lmb_region *lmb_used = lmb_used_mem.data;
+	struct lmb_region *lmb_memory = lmb_free_mem.data;

 	/* check if the requested address is in the memory regions */
-	rgn = lmb_overlaps_region(&lmb->memory, addr, 1);
+	rgn = lmb_overlaps_region(&lmb_free_mem, addr, 1);
 	if (rgn >= 0) {
-		for (i = 0; i < lmb->reserved.cnt; i++) {
-			if (addr < lmb->reserved.region[i].base) {
+		for (i = 0; i < lmb_used_mem.count; i++) {
+			if (addr < lmb_used[i].base) {
 				/* first reserved range > requested address */
-				return lmb->reserved.region[i].base - addr;
+				return lmb_used[i].base - addr;
 			}
-			if (lmb->reserved.region[i].base +
-			    lmb->reserved.region[i].size > addr) {
+			if (lmb_used[i].base +
+			    lmb_used[i].size > addr) {
 				/* requested addr is in this reserved range */
 				return 0;
 			}
 		}
 		/* if we come here: no reserved ranges above requested addr */
-		return lmb->memory.region[lmb->memory.cnt - 1].base +
-		       lmb->memory.region[lmb->memory.cnt - 1].size - addr;
+		return lmb_memory[lmb_free_mem.count - 1].base +
+		       lmb_memory[lmb_free_mem.count - 1].size - addr;
 	}
 	return 0;
 }
@@ -621,12 +660,13 @@ phys_size_t lmb_get_free_size(phys_addr_t addr)
 int lmb_is_reserved_flags(phys_addr_t addr, int flags)
 {
 	int i;
+	struct lmb_region *lmb_used = lmb_used_mem.data;

-	for (i = 0; i < lmb->reserved.cnt; i++) {
-		phys_addr_t upper = lmb->reserved.region[i].base +
-			lmb->reserved.region[i].size - 1;
-		if ((addr >= lmb->reserved.region[i].base) && (addr <= upper))
-			return (lmb->reserved.region[i].flags & flags) == flags;
+	for (i = 0; i < lmb_used_mem.count; i++) {
+		phys_addr_t upper = lmb_used[i].base +
+			lmb_used[i].size - 1;
+		if ((addr >= lmb_used[i].base) && (addr <= upper))
+			return (lmb_used[i].flags & flags) == flags;
 	}
 	return 0;
 }
@@ -640,3 +680,35 @@ __weak void arch_lmb_reserve(void)
 {
 	/* please define platform specific arch_lmb_reserve() */
 }
+
+/**
+ * lmb_mem_regions_init() - Initialise the LMB memory
+ *
+ * Initialise the LMB subsystem related data structures. There are two
+ * alloced lists that are initialised, one for the free memory, and one
+ * for the used memory.
+ *
+ * Initialise the two lists as part of board init.
+ *
+ * Return: 0 if OK, -ve on failure.
+ */
+int lmb_mem_regions_init(void)
+{
+	bool ret;
+
+	ret = alist_init(&lmb_free_mem, sizeof(struct lmb_region),
+			 (uint)LMB_ALIST_INITIAL_SIZE);
+	if (!ret) {
+		log_debug("Unable to initialise the list for LMB free memory\n");
+		return -1;
+	}
+
+	ret = alist_init(&lmb_used_mem, sizeof(struct lmb_region),
+			 (uint)LMB_ALIST_INITIAL_SIZE);
+	if (!ret) {
+		log_debug("Unable to initialise the list for LMB used memory\n");
+		return -1;
+	}
+
+	return 0;
+}
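The patch stores regions in U-Boot's alloced-list ("alist") structure: a contiguous array of fixed-size records that grows when it fills up. The growth behaviour can be sketched in isolation with a hypothetical mini-alist; the names and layout below are illustrative only, and U-Boot's real alist API in alist.h differs:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical mini version of an alloced list: a contiguous array of
 * fixed-size records that doubles its allocation when full. */
struct mini_alist {
	void *data;
	unsigned int obj_size;	/* size of one record */
	unsigned int count;	/* records in use */
	unsigned int alloc;	/* records allocated */
};

/* Initialise with space for 'start' records; returns 1 on success */
static int mini_alist_init(struct mini_alist *lst, unsigned int obj_size,
			   unsigned int start)
{
	lst->data = calloc(start, obj_size);
	if (!lst->data)
		return 0;
	lst->obj_size = obj_size;
	lst->count = 0;
	lst->alloc = start;
	return 1;
}

/* Append a copy of *obj, growing the backing store if needed */
static void *mini_alist_add(struct mini_alist *lst, const void *obj)
{
	if (lst->count == lst->alloc) {
		void *ndata = realloc(lst->data,
				      (size_t)lst->alloc * 2 * lst->obj_size);
		if (!ndata)
			return NULL;
		lst->data = ndata;
		lst->alloc *= 2;
	}
	void *slot = (char *)lst->data +
		     (size_t)lst->count++ * lst->obj_size;
	memcpy(slot, obj, lst->obj_size);
	return slot;
}
```

This is why the patch can start each list at LMB_ALIST_INITIAL_SIZE (4) entries without imposing a hard CONFIG_LMB_MAX_REGIONS-style cap: adding a fifth region simply reallocates the backing array.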

Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The current LMB APIs for allocating and reserving memory use a per-caller memory view, so memory allocated by one caller can be overwritten by another. Make these allocations and reservations persistent, using the alloced list data structure.
Two alloced lists are declared -- one for the available (free) memory, and one for the used memory. Once full, a list can be extended at runtime.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Use alloced list structure for the available and reserved memory lists instead of static arrays.
- Corresponding changes in the code made as a result of the above change.
- Rename the reserved memory list as 'used'.
 include/lmb.h |  77 +++--------
 lib/lmb.c     | 346 ++++++++++++++++++++++++++++++--------------------
 2 files changed, 224 insertions(+), 199 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 99fcf5781f..27cdb18c37 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -24,78 +24,18 @@ enum lmb_flags {
 };

 /**
- * struct lmb_property - Description of one region.
+ * struct lmb_region - Description of one region.
  *
  * @base:	Base address of the region.
  * @size:	Size of the region
  * @flags:	memory region attributes
  */
-struct lmb_property {
+struct lmb_region {
 	phys_addr_t base;
 	phys_size_t size;
 	enum lmb_flags flags;
 };

-/*
- * For regions size management, see LMB configuration in KConfig
- * all the #if test are done with CONFIG_LMB_USE_MAX_REGIONS (boolean)
- *
- * case 1. CONFIG_LMB_USE_MAX_REGIONS is defined (legacy mode)
- *         => CONFIG_LMB_MAX_REGIONS is used to configure the region size,
- *         directly in the array lmb_region.region[], with the same
- *         configuration for memory and reserved regions.
- *
- * case 2. CONFIG_LMB_USE_MAX_REGIONS is not defined, the size of each
- *         region is configurated *independently* with
- *         => CONFIG_LMB_MEMORY_REGIONS: struct lmb.memory_regions
- *         => CONFIG_LMB_RESERVED_REGIONS: struct lmb.reserved_regions
- *         lmb_region.region is only a pointer to the correct buffer,
- *         initialized in lmb_init(). This configuration is useful to manage
- *         more reserved memory regions with CONFIG_LMB_RESERVED_REGIONS.
- */
-
-/**
- * struct lmb_region - Description of a set of region.
- *
- * @cnt: Number of regions.
- * @max: Size of the region array, max value of cnt.
- * @region: Array of the region properties
- */
-struct lmb_region {
-	unsigned long cnt;
-	unsigned long max;
-#if IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS)
-	struct lmb_property region[CONFIG_LMB_MAX_REGIONS];
-#else
-	struct lmb_property *region;
-#endif
-};
-
-/**
- * struct lmb - Logical memory block handle.
- *
- * Clients provide storage for Logical memory block (lmb) handles.
- * The content of the structure is managed by the lmb library.
- * A lmb struct is initialized by lmb_init() functions.
- * The lmb struct is passed to all other lmb APIs.
- *
- * @memory: Description of memory regions.
- * @reserved: Description of reserved regions.
- * @memory_regions: Array of the memory regions (statically allocated)
- * @reserved_regions: Array of the reserved regions (statically allocated)
- */
-struct lmb {
-	struct lmb_region memory;
-	struct lmb_region reserved;
-#if !IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS)
-	struct lmb_property memory_regions[CONFIG_LMB_MEMORY_REGIONS];
-	struct lmb_property reserved_regions[CONFIG_LMB_RESERVED_REGIONS];
-#endif
-};
-
-void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob);
-void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
-				void *fdt_blob);
 long lmb_add(phys_addr_t base, phys_size_t size);
 long lmb_reserve(phys_addr_t base, phys_size_t size);
 /**
@@ -134,6 +74,19 @@ void board_lmb_reserve(void);
 void arch_lmb_reserve(void);
 void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align);

+/**
+ * lmb_mem_regions_init() - Initialise the LMB memory
+ *
+ * Initialise the LMB subsystem related data structures. There are two
+ * alloced lists that are initialised, one for the free memory, and one
+ * for the used memory.
+ *
+ * Initialise the two lists as part of board init.
+ *
+ * Return: 0 if OK, -ve on failure.
+ */
+int lmb_mem_regions_init(void);
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_LMB_H */
diff --git a/lib/lmb.c b/lib/lmb.c
index 80945e3cae..a46bc8a7a3 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -6,6 +6,7 @@
  * Copyright (C) 2001 Peter Bergner.
  */

+#include <alist.h>
 #include <efi_loader.h>
 #include <image.h>
 #include <mapmem.h>
@@ -15,24 +16,30 @@
 #include <asm/global_data.h>
 #include <asm/sections.h>
+#include <linux/kernel.h>

 DECLARE_GLOBAL_DATA_PTR;

 #define LMB_ALLOC_ANYWHERE	0
+#define LMB_ALIST_INITIAL_SIZE	4

-static void lmb_dump_region(struct lmb_region *rgn, char *name)
+struct alist lmb_free_mem;
+struct alist lmb_used_mem;
I think these should be in a struct, e.g. struct lmb, allocated with malloc() and pointed to by gd->lmb, so we can avoid making the tests destructive and allow use of lmb in SPL. There is an arch_reset_for_test() which can reset the lmb back to its original state, or perhaps we could just swap the pointer for tests.
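This suggestion could look roughly like the sketch below. Everything here is a hypothetical stand-in (the struct alist, global_data, and helper names are not U-Boot's real types or API); it only shows the shape of the idea: one malloc()'ed struct holding both lists, reached through a gd-style pointer that tests can swap and restore instead of mutating the one global state.

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for U-Boot's alist; only here so the sketch compiles */
struct alist {
	void *data;
	unsigned int count;
};

/* Both lists wrapped in one struct, instead of two file-scope (BSS)
 * variables */
struct lmb {
	struct alist free_mem;
	struct alist used_mem;
};

/* Stand-in for U-Boot's global data; real code would add a field to gd */
struct global_data {
	struct lmb *lmb;
};

static struct global_data the_gd;
#define gd (&the_gd)

/* Allocate the LMB state once, early in board init */
static int lmb_mem_regions_init(void)
{
	gd->lmb = calloc(1, sizeof(struct lmb));
	if (!gd->lmb)
		return -1;
	/* real code would alist_init() both lists here */
	return 0;
}

/* Tests swap in fresh state and restore it afterwards, rather than
 * destructively mutating the live lists */
static struct lmb *lmb_swap_for_test(struct lmb *fresh)
{
	struct lmb *old = gd->lmb;

	gd->lmb = fresh;
	return old;
}
```

The SPL angle is that a malloc()'ed struct works even on boards whose BSS is not writable until later in SPL, whereas file-scope struct alist variables live in BSS.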
I know this series is already long, but this patch seems to do quite a bit. Perhaps it could be split into two?
Regards, Simon

hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
I think these should be in a struct, e.g. struct lmb, allocated with malloc() and pointed to by gd->lmb so we can avoid making the tests destructive, and allow use of lmb in SPL.
Can you elaborate on the point of allowing use of lmb in SPL? Why would the current design not work there? I tested this on the sandbox SPL variant, and the two lists do get initialised as part of the SPL initialisation routine. Is there some corner case that I am not considering?
Also, regarding putting the lmb structure pointer as part of gd, iirc Tom had a different opinion on this. Tom, can you please chime in here ?
There is a arch_reset_for_test() which can reset the lmb back to its original state, or perhaps just swap the pointer for tests.
I know this series is already long, but this patch seems to do quite a bit. Perhaps it could be split into two?
Sorry, do you mean the patch, or the series? If the latter, I am going to split the non-RFC version which comes next into two parts, the first being the LMB changes.
-sughosh
Regards, Simon

Hi Sughosh,
On Mon, 15 Jul 2024 at 10:48, Sughosh Ganu sughosh.ganu@linaro.org wrote:
I think these should be in a struct, e.g. struct lmb, allocated with malloc() and pointed to by gd->lmb so we can avoid making the tests destructive, and allow use of lmb in SPL.
Can you elaborate on the point of allowing use of lmb in SPL. Why would the current design not work in SPL ? I tested this on the sandbox SPL variant, and the two lists do get initialised as part of the SPL initialisation routine. Is there some corner-case that I am not considering ?
Just that some boards don't have their BSS available until later on in SPL. In general we try to avoid static variables with driver model... I think lmb should be the same, particularly as it is fairly cheap to allocate a struct with malloc().
Also, regarding putting the lmb structure pointer as part of gd, iirc Tom had a different opinion on this. Tom, can you please chime in here ?
or you could point to the discussion?
There is a arch_reset_for_test() which can reset the lmb back to its original state, or perhaps just swap the pointer for tests.
I know this series is already long, but this patch seems to do quite a bit. Perhaps it could be split into two?
Sorry, do you mean the patch, or the series. If the later, I am going to split the non-rfc version which comes next into two parts. The first part being the lmb changes.
OK ta.
Regards, Simon

On Mon, Jul 15, 2024 at 12:39:35PM +0100, Simon Glass wrote:
I think these should be in a struct, e.g. struct lmb, allocated with malloc() and pointed to by gd->lmb so we can avoid making the tests destructive, and allow use of lmb in SPL.
Can you elaborate on the point of allowing use of lmb in SPL. Why would the current design not work in SPL ? I tested this on the sandbox SPL variant, and the two lists do get initialised as part of the SPL initialisation routine. Is there some corner-case that I am not considering ?
Just that some boards don't have their BSS available until later on in SPL. In general we try to avoid local variables with driver model...I think lmb should be the same, particularly as there it is fairly cheap to allocate a struct with malloc().
We also have limited malloc space in those cases. But yes, we likely need LMB available earlier in the SPL case now than we did before, so malloc is likely better, as Simon suggests.
Also, regarding putting the lmb structure pointer as part of gd, iirc Tom had a different opinion on this. Tom, can you please chime in here ?
or you could point to the discussion?
It was in reply to the last posting of this series.
While there's a strong implication that tests in post/ (as it is short for 'power on self test') need to be non-destructive, that's not the case for test/ code.
So my thoughts are:
- Since there's only one list for real use, calls are cleaner if we aren't passing a pointer to the one and only thing everyone could use.
- I don't think we have a test case that's hindered by not doing this; only "now boot normally" may be harder/hindered.
- Taking things out of gd is usually good?
At the end of the day, if this is a big sticking point with the new scheme, no, OK, we can go back to gd->lmb. I don't think we need it, but I can't convince myself Everyone Else Is Wrong about it either.

Hi,
On Mon, 15 Jul 2024 at 18:58, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 15, 2024 at 12:39:35PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:48, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The current LMB API's for allocating and reserving memory use a per-caller based memory view. Memory allocated by a caller can then be overwritten by another caller. Make these allocations and reservations persistent using the alloced list data structure.
Two alloced lists are declared -- one for the available(free) memory, and one for the used memory. Once full, the list can then be extended at runtime.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Use alloced list structure for the available and reserved memory lists instead of static arrays.
- Corresponding changes in the code made as a result of the above change.
- Rename the reserved memory list as 'used'.
include/lmb.h | 77 +++-------- lib/lmb.c | 346 ++++++++++++++++++++++++++++++-------------------- 2 files changed, 224 insertions(+), 199 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index 99fcf5781f..27cdb18c37 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -24,78 +24,18 @@ enum lmb_flags { };
/**
- struct lmb_property - Description of one region.
*/
- struct lmb_region - Description of one region.
- @base: Base address of the region.
- @size: Size of the region
- @flags: memory region attributes
-struct lmb_property { +struct lmb_region { phys_addr_t base; phys_size_t size; enum lmb_flags flags; };
-/*
- * For regions size management, see LMB configuration in KConfig
- * all the #if test are done with CONFIG_LMB_USE_MAX_REGIONS (boolean)
- *
- * case 1. CONFIG_LMB_USE_MAX_REGIONS is defined (legacy mode)
- *         => CONFIG_LMB_MAX_REGIONS is used to configure the region size,
- *         directly in the array lmb_region.region[], with the same
- *         configuration for memory and reserved regions.
- *
- * case 2. CONFIG_LMB_USE_MAX_REGIONS is not defined, the size of each
- *         region is configurated *independently* with
- *         => CONFIG_LMB_MEMORY_REGIONS: struct lmb.memory_regions
- *         => CONFIG_LMB_RESERVED_REGIONS: struct lmb.reserved_regions
- *         lmb_region.region is only a pointer to the correct buffer,
- *         initialized in lmb_init(). This configuration is useful to manage
- *         more reserved memory regions with CONFIG_LMB_RESERVED_REGIONS.
- */
-/**
- * struct lmb_region - Description of a set of region.
- *
- * @cnt: Number of regions.
- * @max: Size of the region array, max value of cnt.
- * @region: Array of the region properties
- */
-struct lmb_region {
-	unsigned long cnt;
-	unsigned long max;
-#if IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS)
-	struct lmb_property region[CONFIG_LMB_MAX_REGIONS];
-#else
-	struct lmb_property *region;
-#endif
-};
-/**
- * struct lmb - Logical memory block handle.
- *
- * Clients provide storage for Logical memory block (lmb) handles.
- * The content of the structure is managed by the lmb library.
- * A lmb struct is initialized by lmb_init() functions.
- * The lmb struct is passed to all other lmb APIs.
- *
- * @memory: Description of memory regions.
- * @reserved: Description of reserved regions.
- * @memory_regions: Array of the memory regions (statically allocated)
- * @reserved_regions: Array of the reserved regions (statically allocated)
- */
-struct lmb {
-	struct lmb_region memory;
-	struct lmb_region reserved;
-#if !IS_ENABLED(CONFIG_LMB_USE_MAX_REGIONS)
-	struct lmb_property memory_regions[CONFIG_LMB_MEMORY_REGIONS];
-	struct lmb_property reserved_regions[CONFIG_LMB_RESERVED_REGIONS];
-#endif
-};
-void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob);
-void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
-				void *fdt_blob);
 long lmb_add(phys_addr_t base, phys_size_t size);
 long lmb_reserve(phys_addr_t base, phys_size_t size);
 /**
@@ -134,6 +74,19 @@ void board_lmb_reserve(void);
 void arch_lmb_reserve(void);
 void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align);
+/**
+ * lmb_mem_regions_init() - Initialise the LMB memory
+ *
+ * Initialise the LMB subsystem related data structures. There are two
+ * alloced lists that are initialised, one for the free memory, and one
+ * for the used memory.
+ *
+ * Initialise the two lists as part of board init.
+ *
+ * Return: 0 if OK, -ve on failure.
+ */
+int lmb_mem_regions_init(void);
#endif /* __KERNEL__ */
 #endif /* _LINUX_LMB_H */

diff --git a/lib/lmb.c b/lib/lmb.c
index 80945e3cae..a46bc8a7a3 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -6,6 +6,7 @@
  * Copyright (C) 2001 Peter Bergner.
  */

+#include <alist.h>
 #include <efi_loader.h>
 #include <image.h>
 #include <mapmem.h>
@@ -15,24 +16,30 @@
 #include <asm/global_data.h>
 #include <asm/sections.h>
+#include <linux/kernel.h>
DECLARE_GLOBAL_DATA_PTR;
 #define LMB_ALLOC_ANYWHERE	0
+#define LMB_ALIST_INITIAL_SIZE	4

-static void lmb_dump_region(struct lmb_region *rgn, char *name)
+struct alist lmb_free_mem;
+struct alist lmb_used_mem;
I think these should be in a struct, e.g. struct lmb, allocated with malloc() and pointed to by gd->lmb so we can avoid making the tests destructive, and allow use of lmb in SPL.
Can you elaborate on the point of allowing use of lmb in SPL? Why would the current design not work in SPL? I tested this on the sandbox SPL variant, and the two lists do get initialised as part of the SPL initialisation routine. Is there some corner case that I am not considering?
Just that some boards don't have their BSS available until later in SPL. In general we try to avoid local variables with driver model... I think lmb should be the same, particularly as it is fairly cheap to allocate a struct with malloc().
We also have limited malloc space, in those cases. But, yes, we likely need LMB available to use earlier in the SPL case now than we did before, so malloc is likely better, as Simon suggests.
Also, regarding putting the lmb structure pointer as part of gd, iirc Tom had a different opinion on this. Tom, can you please chime in here ?
or you could point to the discussion?
It was in reply to the last posting of this series.
While there's a strong implication that tests in post/ (as it is short for 'power on self test') need to be non-destructive, that's not the case for test/ code.
So my thoughts are:
- Since there's only one list for real use, calls are cleaner if we aren't passing a pointer to the one and only thing everyone could use.
- I don't think we have a test case that's hindered by not doing this, only "now boot normally" may be harder/hindered.
- Taking things out of gd is usually good?
At the end of the day, if this is a big sticking point with the new scheme, no, OK, we can go back to gd->lmb. I don't think we need it, but I can't convince myself Everyone Else Is Wrong about it either.
I don't think we need to keep lmb as a parameter to the API. My real interest is in putting the lmb 'state' in a struct such that it can be swapped in/out for tests, allocated in malloc(), etc., like we do with livetree and so on.
I will note that dropping the first parameter does not necessarily reduce code size. Accessing a global variable is typically more expensive than passing a parameter. But so long as the static functions in lmb are passed the pointer, it might be a win to not pass gd->lmb everywhere...? Anyway, it is easier for callers and since there is (normally, except for testing) only one lmb, it makes sense to me.
The issue of where to store the pointer is therefore a separate one...I'd argue for gd->lmb, but since it is easy to change later, a global var is fine for now, if that is more palatable.
Regards, Simon
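Simon's suggestion, keeping all LMB state in one malloc()ed struct behind a single pointer so that tests can swap state in and out, might look roughly like the sketch below. The names (e.g. lmb_swap_state) are hypothetical illustrations, not code from the series:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for an alloced list; real fields omitted for brevity */
struct alist_stub {
	void *data;
	unsigned int cnt;
};

/* All LMB state gathered in one structure, as suggested in review */
struct lmb {
	struct alist_stub free_mem;
	struct alist_stub used_mem;
};

/* Single pointer to the active state (would be gd->lmb in the gd variant) */
static struct lmb *lmb_state;

/* Swap in a different state (e.g. for a test) and return the old one */
static struct lmb *lmb_swap_state(struct lmb *new_state)
{
	struct lmb *old = lmb_state;

	lmb_state = new_state;
	return old;
}
```

A test can then install a scratch state, run destructively against it, and restore the real state afterwards, which is the non-destructive-test property being discussed.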

The LMB memory maps are now maintained through a couple of alloced lists, one for the available (added) memory and one for the used memory. These lists are not static arrays and can be extended at runtime. Remove the config symbols that were used to define the size of these lists in the earlier static-array implementation.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
 configs/a3y17lte_defconfig           |  1 -
 configs/a5y17lte_defconfig           |  1 -
 configs/a7y17lte_defconfig           |  1 -
 configs/apple_m1_defconfig           |  1 -
 configs/mt7981_emmc_rfb_defconfig    |  1 -
 configs/mt7981_rfb_defconfig         |  1 -
 configs/mt7981_sd_rfb_defconfig      |  1 -
 configs/mt7986_rfb_defconfig         |  1 -
 configs/mt7986a_bpir3_emmc_defconfig |  1 -
 configs/mt7986a_bpir3_sd_defconfig   |  1 -
 configs/mt7988_rfb_defconfig         |  1 -
 configs/mt7988_sd_rfb_defconfig      |  1 -
 configs/qcom_defconfig               |  1 -
 configs/stm32mp13_defconfig          |  3 ---
 configs/stm32mp15_basic_defconfig    |  3 ---
 configs/stm32mp15_defconfig          |  3 ---
 configs/stm32mp15_trusted_defconfig  |  3 ---
 configs/stm32mp25_defconfig          |  3 ---
 configs/th1520_lpi4a_defconfig       |  1 -
 lib/Kconfig                          | 34 ----------------------------
 20 files changed, 63 deletions(-)
diff --git a/configs/a3y17lte_defconfig b/configs/a3y17lte_defconfig
index 5c15d51fdc..b012b985a3 100644
--- a/configs/a3y17lte_defconfig
+++ b/configs/a3y17lte_defconfig
@@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y
 CONFIG_CMD_GPIO=y
 CONFIG_CMD_I2C=y
 CONFIG_DM_I2C_GPIO=y
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/a5y17lte_defconfig b/configs/a5y17lte_defconfig
index 7c9b6b2511..25a7d5bc98 100644
--- a/configs/a5y17lte_defconfig
+++ b/configs/a5y17lte_defconfig
@@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y
 CONFIG_CMD_GPIO=y
 CONFIG_CMD_I2C=y
 CONFIG_DM_I2C_GPIO=y
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/a7y17lte_defconfig b/configs/a7y17lte_defconfig
index c7297f7d75..c87379ab39 100644
--- a/configs/a7y17lte_defconfig
+++ b/configs/a7y17lte_defconfig
@@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y
 CONFIG_CMD_GPIO=y
 CONFIG_CMD_I2C=y
 CONFIG_DM_I2C_GPIO=y
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/apple_m1_defconfig b/configs/apple_m1_defconfig
index 20d2cff93f..dca6e0ca8b 100644
--- a/configs/apple_m1_defconfig
+++ b/configs/apple_m1_defconfig
@@ -26,4 +26,3 @@ CONFIG_SYS_WHITE_ON_BLACK=y
 CONFIG_NO_FB_CLEAR=y
 CONFIG_VIDEO_SIMPLE=y
 # CONFIG_SMBIOS is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7981_emmc_rfb_defconfig b/configs/mt7981_emmc_rfb_defconfig
index 76ee2aa2d6..d3e833905f 100644
--- a/configs/mt7981_emmc_rfb_defconfig
+++ b/configs/mt7981_emmc_rfb_defconfig
@@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y
 CONFIG_FAT_WRITE=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7981_rfb_defconfig b/configs/mt7981_rfb_defconfig
index 3989c79d2b..4bc2173f13 100644
--- a/configs/mt7981_rfb_defconfig
+++ b/configs/mt7981_rfb_defconfig
@@ -65,4 +65,3 @@ CONFIG_DM_SPI=y
 CONFIG_MTK_SPIM=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7981_sd_rfb_defconfig b/configs/mt7981_sd_rfb_defconfig
index 9b33245527..8721b4074a 100644
--- a/configs/mt7981_sd_rfb_defconfig
+++ b/configs/mt7981_sd_rfb_defconfig
@@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y
 CONFIG_FAT_WRITE=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7986_rfb_defconfig b/configs/mt7986_rfb_defconfig
index 4d0cc85d0e..15c31de236 100644
--- a/configs/mt7986_rfb_defconfig
+++ b/configs/mt7986_rfb_defconfig
@@ -65,4 +65,3 @@ CONFIG_DM_SPI=y
 CONFIG_MTK_SPIM=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7986a_bpir3_emmc_defconfig b/configs/mt7986a_bpir3_emmc_defconfig
index 3c296ab803..56921f3605 100644
--- a/configs/mt7986a_bpir3_emmc_defconfig
+++ b/configs/mt7986a_bpir3_emmc_defconfig
@@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y
 CONFIG_FAT_WRITE=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7986a_bpir3_sd_defconfig b/configs/mt7986a_bpir3_sd_defconfig
index f644070f4e..4ed06b72d5 100644
--- a/configs/mt7986a_bpir3_sd_defconfig
+++ b/configs/mt7986a_bpir3_sd_defconfig
@@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y
 CONFIG_FAT_WRITE=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7988_rfb_defconfig b/configs/mt7988_rfb_defconfig
index d0ed2cc1c9..f7ceaceb30 100644
--- a/configs/mt7988_rfb_defconfig
+++ b/configs/mt7988_rfb_defconfig
@@ -81,4 +81,3 @@ CONFIG_MTK_SPIM=y
 CONFIG_LZO=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/mt7988_sd_rfb_defconfig b/configs/mt7988_sd_rfb_defconfig
index 5631eaa338..808c8b9011 100644
--- a/configs/mt7988_sd_rfb_defconfig
+++ b/configs/mt7988_sd_rfb_defconfig
@@ -69,4 +69,3 @@ CONFIG_MTK_SPIM=y
 CONFIG_LZO=y
 CONFIG_HEXDUMP=y
 # CONFIG_EFI_LOADER is not set
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/qcom_defconfig b/configs/qcom_defconfig
index 37966bfb20..1ce46a0625 100644
--- a/configs/qcom_defconfig
+++ b/configs/qcom_defconfig
@@ -106,4 +106,3 @@ CONFIG_SYS_WHITE_ON_BLACK=y
 CONFIG_NO_FB_CLEAR=y
 CONFIG_VIDEO_SIMPLE=y
 CONFIG_HEXDUMP=y
-CONFIG_LMB_MAX_REGIONS=64
diff --git a/configs/stm32mp13_defconfig b/configs/stm32mp13_defconfig
index caaabf39ef..9aa3560c7e 100644
--- a/configs/stm32mp13_defconfig
+++ b/configs/stm32mp13_defconfig
@@ -103,6 +103,3 @@ CONFIG_USB_GADGET_VENDOR_NUM=0x0483
 CONFIG_USB_GADGET_PRODUCT_NUM=0x5720
 CONFIG_USB_GADGET_DWC2_OTG=y
 CONFIG_ERRNO_STR=y
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=16
diff --git a/configs/stm32mp15_basic_defconfig b/configs/stm32mp15_basic_defconfig
index 2e22bf8600..806935f389 100644
--- a/configs/stm32mp15_basic_defconfig
+++ b/configs/stm32mp15_basic_defconfig
@@ -190,6 +190,3 @@ CONFIG_WDT=y
 CONFIG_WDT_STM32MP=y
 # CONFIG_BINMAN_FDT is not set
 CONFIG_ERRNO_STR=y
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=16
diff --git a/configs/stm32mp15_defconfig b/configs/stm32mp15_defconfig
index ffe7512650..5f050ee0d0 100644
--- a/configs/stm32mp15_defconfig
+++ b/configs/stm32mp15_defconfig
@@ -166,6 +166,3 @@ CONFIG_WDT=y
 CONFIG_WDT_STM32MP=y
 # CONFIG_BINMAN_FDT is not set
 CONFIG_ERRNO_STR=y
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=16
diff --git a/configs/stm32mp15_trusted_defconfig b/configs/stm32mp15_trusted_defconfig
index 74deaaba2e..3c591d74af 100644
--- a/configs/stm32mp15_trusted_defconfig
+++ b/configs/stm32mp15_trusted_defconfig
@@ -166,6 +166,3 @@ CONFIG_WDT=y
 CONFIG_WDT_STM32MP=y
 # CONFIG_BINMAN_FDT is not set
 CONFIG_ERRNO_STR=y
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=16
diff --git a/configs/stm32mp25_defconfig b/configs/stm32mp25_defconfig
index 87038cc773..f5623a19bb 100644
--- a/configs/stm32mp25_defconfig
+++ b/configs/stm32mp25_defconfig
@@ -49,6 +49,3 @@ CONFIG_WDT_STM32MP=y
 CONFIG_WDT_ARM_SMC=y
 CONFIG_ERRNO_STR=y
 # CONFIG_EFI_LOADER is not set
-# CONFIG_LMB_USE_MAX_REGIONS is not set
-CONFIG_LMB_MEMORY_REGIONS=2
-CONFIG_LMB_RESERVED_REGIONS=32
diff --git a/configs/th1520_lpi4a_defconfig b/configs/th1520_lpi4a_defconfig
index 49ff92f6de..db80e33870 100644
--- a/configs/th1520_lpi4a_defconfig
+++ b/configs/th1520_lpi4a_defconfig
@@ -79,4 +79,3 @@ CONFIG_BZIP2=y
 CONFIG_ZSTD=y
 CONFIG_LIB_RATIONAL=y
 # CONFIG_EFI_LOADER is not set
-# CONFIG_LMB_USE_MAX_REGIONS is not set
diff --git a/lib/Kconfig b/lib/Kconfig
index b3baa4b85b..072ed0ecfa 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -1105,40 +1105,6 @@ config LMB
 	help
 	  Support the library logical memory blocks.

-config LMB_USE_MAX_REGIONS
-	bool "Use a common number of memory and reserved regions in lmb lib"
-	default y
-	help
-	  Define the number of supported memory regions in the library logical
-	  memory blocks.
-	  This feature allow to reduce the lmb library size by using compiler
-	  optimization when LMB_MEMORY_REGIONS == LMB_RESERVED_REGIONS.
-
-config LMB_MAX_REGIONS
-	int "Number of memory and reserved regions in lmb lib"
-	depends on LMB_USE_MAX_REGIONS
-	default 16
-	help
-	  Define the number of supported regions, memory and reserved, in the
-	  library logical memory blocks.
-
-config LMB_MEMORY_REGIONS
-	int "Number of memory regions in lmb lib"
-	depends on !LMB_USE_MAX_REGIONS
-	default 8
-	help
-	  Define the number of supported memory regions in the library logical
-	  memory blocks.
-	  The minimal value is CONFIG_NR_DRAM_BANKS.
-
-config LMB_RESERVED_REGIONS
-	int "Number of reserved regions in lmb lib"
-	depends on !LMB_USE_MAX_REGIONS
-	default 8
-	help
-	  Define the number of supported reserved regions in the library logical
-	  memory blocks.
-
 config PHANDLE_CHECK_SEQ
 	bool "Enable phandle check while getting sequence number"
 	help

On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB memory maps are now maintained through a couple of alloced lists, one for the available (added) memory and one for the used memory. These lists are not static arrays and can be extended at runtime. Remove the config symbols that were used to define the size of these lists in the earlier static-array implementation.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
 configs/a3y17lte_defconfig           |  1 -
 configs/a5y17lte_defconfig           |  1 -
 configs/a7y17lte_defconfig           |  1 -
 configs/apple_m1_defconfig           |  1 -
 configs/mt7981_emmc_rfb_defconfig    |  1 -
 configs/mt7981_rfb_defconfig         |  1 -
 configs/mt7981_sd_rfb_defconfig      |  1 -
 configs/mt7986_rfb_defconfig         |  1 -
 configs/mt7986a_bpir3_emmc_defconfig |  1 -
 configs/mt7986a_bpir3_sd_defconfig   |  1 -
 configs/mt7988_rfb_defconfig         |  1 -
 configs/mt7988_sd_rfb_defconfig      |  1 -
 configs/qcom_defconfig               |  1 -
 configs/stm32mp13_defconfig          |  3 ---
 configs/stm32mp15_basic_defconfig    |  3 ---
 configs/stm32mp15_defconfig          |  3 ---
 configs/stm32mp15_trusted_defconfig  |  3 ---
 configs/stm32mp25_defconfig          |  3 ---
 configs/th1520_lpi4a_defconfig       |  1 -
 lib/Kconfig                          | 34 ----------------------------
 20 files changed, 63 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

On Thu, 4 Jul 2024 at 10:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB memory maps are now maintained through a couple of alloced lists, one for the available (added) memory and one for the used memory. These lists are not static arrays and can be extended at runtime. Remove the config symbols that were used to define the size of these lists in the earlier static-array implementation.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
 configs/a3y17lte_defconfig           |  1 -
 configs/a5y17lte_defconfig           |  1 -
 configs/a7y17lte_defconfig           |  1 -
 configs/apple_m1_defconfig           |  1 -
 configs/mt7981_emmc_rfb_defconfig    |  1 -
 configs/mt7981_rfb_defconfig         |  1 -
 configs/mt7981_sd_rfb_defconfig      |  1 -
 configs/mt7986_rfb_defconfig         |  1 -
 configs/mt7986a_bpir3_emmc_defconfig |  1 -
 configs/mt7986a_bpir3_sd_defconfig   |  1 -
 configs/mt7988_rfb_defconfig         |  1 -
 configs/mt7988_sd_rfb_defconfig      |  1 -
 configs/qcom_defconfig               |  1 -
 configs/stm32mp13_defconfig          |  3 ---
 configs/stm32mp15_basic_defconfig    |  3 ---
 configs/stm32mp15_defconfig          |  3 ---
 configs/stm32mp15_trusted_defconfig  |  3 ---
 configs/stm32mp25_defconfig          |  3 ---
 configs/th1520_lpi4a_defconfig       |  1 -
 lib/Kconfig                          | 34 ----------------------------
 20 files changed, 63 deletions(-)
diff --git a/configs/a3y17lte_defconfig b/configs/a3y17lte_defconfig index 5c15d51fdc..b012b985a3 100644 --- a/configs/a3y17lte_defconfig +++ b/configs/a3y17lte_defconfig @@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y CONFIG_CMD_GPIO=y CONFIG_CMD_I2C=y CONFIG_DM_I2C_GPIO=y -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/a5y17lte_defconfig b/configs/a5y17lte_defconfig index 7c9b6b2511..25a7d5bc98 100644 --- a/configs/a5y17lte_defconfig +++ b/configs/a5y17lte_defconfig @@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y CONFIG_CMD_GPIO=y CONFIG_CMD_I2C=y CONFIG_DM_I2C_GPIO=y -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/a7y17lte_defconfig b/configs/a7y17lte_defconfig index c7297f7d75..c87379ab39 100644 --- a/configs/a7y17lte_defconfig +++ b/configs/a7y17lte_defconfig @@ -23,4 +23,3 @@ CONFIG_HUSH_PARSER=y CONFIG_CMD_GPIO=y CONFIG_CMD_I2C=y CONFIG_DM_I2C_GPIO=y -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/apple_m1_defconfig b/configs/apple_m1_defconfig index 20d2cff93f..dca6e0ca8b 100644 --- a/configs/apple_m1_defconfig +++ b/configs/apple_m1_defconfig @@ -26,4 +26,3 @@ CONFIG_SYS_WHITE_ON_BLACK=y CONFIG_NO_FB_CLEAR=y CONFIG_VIDEO_SIMPLE=y # CONFIG_SMBIOS is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/mt7981_emmc_rfb_defconfig b/configs/mt7981_emmc_rfb_defconfig index 76ee2aa2d6..d3e833905f 100644 --- a/configs/mt7981_emmc_rfb_defconfig +++ b/configs/mt7981_emmc_rfb_defconfig @@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y CONFIG_FAT_WRITE=y CONFIG_HEXDUMP=y # CONFIG_EFI_LOADER is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/mt7981_rfb_defconfig b/configs/mt7981_rfb_defconfig index 3989c79d2b..4bc2173f13 100644 --- a/configs/mt7981_rfb_defconfig +++ b/configs/mt7981_rfb_defconfig @@ -65,4 +65,3 @@ CONFIG_DM_SPI=y CONFIG_MTK_SPIM=y CONFIG_HEXDUMP=y # CONFIG_EFI_LOADER is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/mt7981_sd_rfb_defconfig b/configs/mt7981_sd_rfb_defconfig index 9b33245527..8721b4074a 100644 --- a/configs/mt7981_sd_rfb_defconfig +++ 
b/configs/mt7981_sd_rfb_defconfig @@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y CONFIG_FAT_WRITE=y CONFIG_HEXDUMP=y # CONFIG_EFI_LOADER is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/mt7986_rfb_defconfig b/configs/mt7986_rfb_defconfig index 4d0cc85d0e..15c31de236 100644 --- a/configs/mt7986_rfb_defconfig +++ b/configs/mt7986_rfb_defconfig @@ -65,4 +65,3 @@ CONFIG_DM_SPI=y CONFIG_MTK_SPIM=y CONFIG_HEXDUMP=y # CONFIG_EFI_LOADER is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/mt7986a_bpir3_emmc_defconfig b/configs/mt7986a_bpir3_emmc_defconfig index 3c296ab803..56921f3605 100644 --- a/configs/mt7986a_bpir3_emmc_defconfig +++ b/configs/mt7986a_bpir3_emmc_defconfig @@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y CONFIG_FAT_WRITE=y CONFIG_HEXDUMP=y # CONFIG_EFI_LOADER is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/mt7986a_bpir3_sd_defconfig b/configs/mt7986a_bpir3_sd_defconfig index f644070f4e..4ed06b72d5 100644 --- a/configs/mt7986a_bpir3_sd_defconfig +++ b/configs/mt7986a_bpir3_sd_defconfig @@ -62,4 +62,3 @@ CONFIG_MTK_SERIAL=y CONFIG_FAT_WRITE=y CONFIG_HEXDUMP=y # CONFIG_EFI_LOADER is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/mt7988_rfb_defconfig b/configs/mt7988_rfb_defconfig index d0ed2cc1c9..f7ceaceb30 100644 --- a/configs/mt7988_rfb_defconfig +++ b/configs/mt7988_rfb_defconfig @@ -81,4 +81,3 @@ CONFIG_MTK_SPIM=y CONFIG_LZO=y CONFIG_HEXDUMP=y # CONFIG_EFI_LOADER is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/mt7988_sd_rfb_defconfig b/configs/mt7988_sd_rfb_defconfig index 5631eaa338..808c8b9011 100644 --- a/configs/mt7988_sd_rfb_defconfig +++ b/configs/mt7988_sd_rfb_defconfig @@ -69,4 +69,3 @@ CONFIG_MTK_SPIM=y CONFIG_LZO=y CONFIG_HEXDUMP=y # CONFIG_EFI_LOADER is not set -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/qcom_defconfig b/configs/qcom_defconfig index 37966bfb20..1ce46a0625 100644 --- a/configs/qcom_defconfig +++ b/configs/qcom_defconfig @@ -106,4 +106,3 @@ CONFIG_SYS_WHITE_ON_BLACK=y CONFIG_NO_FB_CLEAR=y 
CONFIG_VIDEO_SIMPLE=y CONFIG_HEXDUMP=y -CONFIG_LMB_MAX_REGIONS=64 diff --git a/configs/stm32mp13_defconfig b/configs/stm32mp13_defconfig index caaabf39ef..9aa3560c7e 100644 --- a/configs/stm32mp13_defconfig +++ b/configs/stm32mp13_defconfig @@ -103,6 +103,3 @@ CONFIG_USB_GADGET_VENDOR_NUM=0x0483 CONFIG_USB_GADGET_PRODUCT_NUM=0x5720 CONFIG_USB_GADGET_DWC2_OTG=y CONFIG_ERRNO_STR=y -# CONFIG_LMB_USE_MAX_REGIONS is not set -CONFIG_LMB_MEMORY_REGIONS=2 -CONFIG_LMB_RESERVED_REGIONS=16 diff --git a/configs/stm32mp15_basic_defconfig b/configs/stm32mp15_basic_defconfig index 2e22bf8600..806935f389 100644 --- a/configs/stm32mp15_basic_defconfig +++ b/configs/stm32mp15_basic_defconfig @@ -190,6 +190,3 @@ CONFIG_WDT=y CONFIG_WDT_STM32MP=y # CONFIG_BINMAN_FDT is not set CONFIG_ERRNO_STR=y -# CONFIG_LMB_USE_MAX_REGIONS is not set -CONFIG_LMB_MEMORY_REGIONS=2 -CONFIG_LMB_RESERVED_REGIONS=16 diff --git a/configs/stm32mp15_defconfig b/configs/stm32mp15_defconfig index ffe7512650..5f050ee0d0 100644 --- a/configs/stm32mp15_defconfig +++ b/configs/stm32mp15_defconfig @@ -166,6 +166,3 @@ CONFIG_WDT=y CONFIG_WDT_STM32MP=y # CONFIG_BINMAN_FDT is not set CONFIG_ERRNO_STR=y -# CONFIG_LMB_USE_MAX_REGIONS is not set -CONFIG_LMB_MEMORY_REGIONS=2 -CONFIG_LMB_RESERVED_REGIONS=16 diff --git a/configs/stm32mp15_trusted_defconfig b/configs/stm32mp15_trusted_defconfig index 74deaaba2e..3c591d74af 100644 --- a/configs/stm32mp15_trusted_defconfig +++ b/configs/stm32mp15_trusted_defconfig @@ -166,6 +166,3 @@ CONFIG_WDT=y CONFIG_WDT_STM32MP=y # CONFIG_BINMAN_FDT is not set CONFIG_ERRNO_STR=y -# CONFIG_LMB_USE_MAX_REGIONS is not set -CONFIG_LMB_MEMORY_REGIONS=2 -CONFIG_LMB_RESERVED_REGIONS=16 diff --git a/configs/stm32mp25_defconfig b/configs/stm32mp25_defconfig index 87038cc773..f5623a19bb 100644 --- a/configs/stm32mp25_defconfig +++ b/configs/stm32mp25_defconfig @@ -49,6 +49,3 @@ CONFIG_WDT_STM32MP=y CONFIG_WDT_ARM_SMC=y CONFIG_ERRNO_STR=y # CONFIG_EFI_LOADER is not set -# CONFIG_LMB_USE_MAX_REGIONS 
is not set -CONFIG_LMB_MEMORY_REGIONS=2 -CONFIG_LMB_RESERVED_REGIONS=32 diff --git a/configs/th1520_lpi4a_defconfig b/configs/th1520_lpi4a_defconfig index 49ff92f6de..db80e33870 100644 --- a/configs/th1520_lpi4a_defconfig +++ b/configs/th1520_lpi4a_defconfig @@ -79,4 +79,3 @@ CONFIG_BZIP2=y CONFIG_ZSTD=y CONFIG_LIB_RATIONAL=y # CONFIG_EFI_LOADER is not set -# CONFIG_LMB_USE_MAX_REGIONS is not set diff --git a/lib/Kconfig b/lib/Kconfig index b3baa4b85b..072ed0ecfa 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -1105,40 +1105,6 @@ config LMB help Support the library logical memory blocks.
-config LMB_USE_MAX_REGIONS
-	bool "Use a common number of memory and reserved regions in lmb lib"
-	default y
-	help
-	  Define the number of supported memory regions in the library logical
-	  memory blocks.
-	  This feature allow to reduce the lmb library size by using compiler
-	  optimization when LMB_MEMORY_REGIONS == LMB_RESERVED_REGIONS.
-
-config LMB_MAX_REGIONS
-	int "Number of memory and reserved regions in lmb lib"
-	depends on LMB_USE_MAX_REGIONS
-	default 16
-	help
-	  Define the number of supported regions, memory and reserved, in the
-	  library logical memory blocks.
-
-config LMB_MEMORY_REGIONS
-	int "Number of memory regions in lmb lib"
-	depends on !LMB_USE_MAX_REGIONS
-	default 8
-	help
-	  Define the number of supported memory regions in the library logical
-	  memory blocks.
-	  The minimal value is CONFIG_NR_DRAM_BANKS.
-
-config LMB_RESERVED_REGIONS
-	int "Number of reserved regions in lmb lib"
-	depends on !LMB_USE_MAX_REGIONS
-	default 8
-	help
-	  Define the number of supported reserved regions in the library logical
-	  memory blocks.
-
 config PHANDLE_CHECK_SEQ
 	bool "Enable phandle check while getting sequence number"
 	help
--
2.34.1
Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org

The LMB memory map is now persistent and global, and the CONFIG_LMB_USE_MAX_REGIONS config symbol has been removed. Remove the corresponding lmb test case.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
 test/lib/lmb.c | 67 --------------------------------------------------
 1 file changed, 67 deletions(-)
diff --git a/test/lib/lmb.c b/test/lib/lmb.c
index 9f6b5633a7..a3a7ad904c 100644
--- a/test/lib/lmb.c
+++ b/test/lib/lmb.c
@@ -648,73 +648,6 @@ static int lib_test_lmb_get_free_size(struct unit_test_state *uts)
 }
 LIB_TEST(lib_test_lmb_get_free_size, 0);

-#ifdef CONFIG_LMB_USE_MAX_REGIONS
-static int lib_test_lmb_max_regions(struct unit_test_state *uts)
-{
-	const phys_addr_t ram = 0x00000000;
-	/*
-	 * All of 32bit memory space will contain regions for this test, so
-	 * we need to scale ram_size (which in this case is the size of the lmb
-	 * region) to match.
-	 */
-	const phys_size_t ram_size = ((0xFFFFFFFF >> CONFIG_LMB_MAX_REGIONS)
-			+ 1) * CONFIG_LMB_MAX_REGIONS;
-	const phys_size_t blk_size = 0x10000;
-	phys_addr_t offset;
-	int ret, i;
-
-	ut_asserteq(lmb.memory.cnt, 0);
-	ut_asserteq(lmb.memory.max, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, 0);
-	ut_asserteq(lmb.reserved.max, CONFIG_LMB_MAX_REGIONS);
-
-	/* Add CONFIG_LMB_MAX_REGIONS memory regions */
-	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
-		offset = ram + 2 * i * ram_size;
-		ret = lmb_add(offset, ram_size);
-		ut_asserteq(ret, 0);
-	}
-	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, 0);
-
-	/* error for the (CONFIG_LMB_MAX_REGIONS + 1) memory regions */
-	offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * ram_size;
-	ret = lmb_add(offset, ram_size);
-	ut_asserteq(ret, -1);
-
-	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, 0);
-
-	/* reserve CONFIG_LMB_MAX_REGIONS regions */
-	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
-		offset = ram + 2 * i * blk_size;
-		ret = lmb_reserve(offset, blk_size);
-		ut_asserteq(ret, 0);
-	}
-
-	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, CONFIG_LMB_MAX_REGIONS);
-
-	/* error for the 9th reserved blocks */
-	offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * blk_size;
-	ret = lmb_reserve(offset, blk_size);
-	ut_asserteq(ret, -1);
-
-	ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
-	ut_asserteq(lmb.reserved.cnt, CONFIG_LMB_MAX_REGIONS);
-
-	/* check each regions */
-	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++)
-		ut_asserteq(lmb.memory.region[i].base, ram + 2 * i * ram_size);
-
-	for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++)
-		ut_asserteq(lmb.reserved.region[i].base, ram + 2 * i * blk_size);
-
-	return 0;
-}
-LIB_TEST(lib_test_lmb_max_regions, 0);
-#endif
-
 static int lib_test_lmb_flags(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0x40000000;

On Thu, 4 Jul 2024 at 08:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB memory map is now persistent and global, and the CONFIG_LMB_USE_MAX_REGIONS config symbol has been removed. Remove the corresponding lmb test case.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
 test/lib/lmb.c | 67 --------------------------------------------------
 1 file changed, 67 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

On Thu, 4 Jul 2024 at 10:36, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB memory map is now persistent and global, and the CONFIG_LMB_USE_MAX_REGIONS config symbol has been removed. Remove the corresponding lmb test case.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
 test/lib/lmb.c | 67 --------------------------------------------------
 1 file changed, 67 deletions(-)
diff --git a/test/lib/lmb.c b/test/lib/lmb.c
index 9f6b5633a7..a3a7ad904c 100644
--- a/test/lib/lmb.c
+++ b/test/lib/lmb.c
@@ -648,73 +648,6 @@ static int lib_test_lmb_get_free_size(struct unit_test_state *uts)
 }
 LIB_TEST(lib_test_lmb_get_free_size, 0);
-#ifdef CONFIG_LMB_USE_MAX_REGIONS
-static int lib_test_lmb_max_regions(struct unit_test_state *uts)
-{
const phys_addr_t ram = 0x00000000;
/*
* All of 32bit memory space will contain regions for this test, so
* we need to scale ram_size (which in this case is the size of the lmb
* region) to match.
*/
const phys_size_t ram_size = ((0xFFFFFFFF >> CONFIG_LMB_MAX_REGIONS)
+ 1) * CONFIG_LMB_MAX_REGIONS;
const phys_size_t blk_size = 0x10000;
phys_addr_t offset;
int ret, i;
ut_asserteq(lmb.memory.cnt, 0);
ut_asserteq(lmb.memory.max, CONFIG_LMB_MAX_REGIONS);
ut_asserteq(lmb.reserved.cnt, 0);
ut_asserteq(lmb.reserved.max, CONFIG_LMB_MAX_REGIONS);
/* Add CONFIG_LMB_MAX_REGIONS memory regions */
for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
offset = ram + 2 * i * ram_size;
ret = lmb_add(offset, ram_size);
ut_asserteq(ret, 0);
}
ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
ut_asserteq(lmb.reserved.cnt, 0);
/* error for the (CONFIG_LMB_MAX_REGIONS + 1) memory regions */
offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * ram_size;
ret = lmb_add(offset, ram_size);
ut_asserteq(ret, -1);
ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
ut_asserteq(lmb.reserved.cnt, 0);
/* reserve CONFIG_LMB_MAX_REGIONS regions */
for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++) {
offset = ram + 2 * i * blk_size;
ret = lmb_reserve(offset, blk_size);
ut_asserteq(ret, 0);
}
ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
ut_asserteq(lmb.reserved.cnt, CONFIG_LMB_MAX_REGIONS);
/* error for the 9th reserved blocks */
offset = ram + 2 * (CONFIG_LMB_MAX_REGIONS + 1) * blk_size;
ret = lmb_reserve(offset, blk_size);
ut_asserteq(ret, -1);
ut_asserteq(lmb.memory.cnt, CONFIG_LMB_MAX_REGIONS);
ut_asserteq(lmb.reserved.cnt, CONFIG_LMB_MAX_REGIONS);
/* check each regions */
for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++)
ut_asserteq(lmb.memory.region[i].base, ram + 2 * i * ram_size);
for (i = 0; i < CONFIG_LMB_MAX_REGIONS; i++)
ut_asserteq(lmb.reserved.region[i].base, ram + 2 * i * blk_size);
return 0;
-}
-LIB_TEST(lib_test_lmb_max_regions, 0);
-#endif
 static int lib_test_lmb_flags(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0x40000000;
--
2.34.1
Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org

Add a separate config symbol for enabling the LMB module in the SPL phase. The LMB implementation now relies on the alloced list data structure, which requires a heap to be present. Add a specific config symbol for the SPL phase of U-Boot so that LMB can be enabled on platforms which support a heap in SPL.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
 lib/Kconfig | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/Kconfig b/lib/Kconfig
index 072ed0ecfa..7eea517b3b 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -1103,7 +1103,17 @@ config LMB
 	default y if ARC || ARM || M68K || MICROBLAZE || MIPS || \
 		     NIOS2 || PPC || RISCV || SANDBOX || SH || X86 || XTENSA
 	help
-	  Support the library logical memory blocks.
+	  Support the library logical memory blocks. This will require
+	  a malloc() implementation for defining the data structures
+	  needed for maintaining the LMB memory map.
+
+config SPL_LMB
+	bool "Enable LMB module for SPL"
+	depends on SPL && SPL_FRAMEWORK && SPL_SYS_MALLOC
+	help
+	  Enable support for Logical Memory Block library routines in
+	  SPL. This will require a malloc() implementation for defining
+	  the data structures needed for maintaining the LMB memory map.
config PHANDLE_CHECK_SEQ bool "Enable phandle check while getting sequence number"

On Thu, Jul 04, 2024 at 01:05:12PM +0530, Sughosh Ganu wrote:
Add separate config symbols for enabling the LMB module for the SPL phase. The LMB module implementation now relies on alloced list data structure which requires heap area to be present. Add specific config symbol for the SPL phase of U-Boot so that this can be enabled on platforms which support a heap in SPL.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
lib/Kconfig | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/Kconfig b/lib/Kconfig index 072ed0ecfa..7eea517b3b 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -1103,7 +1103,17 @@ config LMB default y if ARC || ARM || M68K || MICROBLAZE || MIPS || \ NIOS2 || PPC || RISCV || SANDBOX || SH || X86 || XTENSA help
-	  Support the library logical memory blocks.
+	  Support the library logical memory blocks. This will require
+	  a malloc() implementation for defining the data structures
+	  needed for maintaining the LMB memory map.
Even today, LMB really should be def_bool y rather than an option, so this series should correct that. That said...
+config SPL_LMB
+	bool "Enable LMB module for SPL"
+	depends on SPL && SPL_FRAMEWORK && SPL_SYS_MALLOC
+	help
+	  Enable support for Logical Memory Block library routines in
+	  SPL. This will require a malloc() implementation for defining
+	  the data structures needed for maintaining the LMB memory map.
The question I guess becomes when do we need LMB in SPL, exactly? And I guess it's another case where it should be def_bool y (but still depends on what you have here) since we need to make sure we don't overwrite running SPL.

On Sat, 6 Jul 2024 at 01:18, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:05:12PM +0530, Sughosh Ganu wrote:
Add separate config symbols for enabling the LMB module for the SPL phase. The LMB module implementation now relies on alloced list data structure which requires heap area to be present. Add specific config symbol for the SPL phase of U-Boot so that this can be enabled on platforms which support a heap in SPL.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
lib/Kconfig | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/Kconfig b/lib/Kconfig index 072ed0ecfa..7eea517b3b 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -1103,7 +1103,17 @@ config LMB default y if ARC || ARM || M68K || MICROBLAZE || MIPS || \ NIOS2 || PPC || RISCV || SANDBOX || SH || X86 || XTENSA help
Support the library logical memory blocks.
Support the library logical memory blocks. This will require
a malloc() implementation for defining the data structures
needed for maintaining the LMB memory map.
Even today, LMB really should be def_bool y rather than an option, so this series should correct that. That said...
Okay
+config SPL_LMB
bool "Enable LMB module for SPL"
depends on SPL && SPL_FRAMEWORK && SPL_SYS_MALLOC
help
Enable support for Logical Memory Block library routines in
SPL. This will require a malloc() implementation for defining
the data structures needed for maintaining the LMB memory map.
The question I guess becomes when do we need LMB in SPL, exactly? And I guess it's another case where it should be def_bool y (but still depends on what you have here) since we need to make sure we don't overwrite running SPL.
So this is a question even I had. Do we really need to enable LMB in SPL? The main reason for introducing the symbol was to have more granularity to remove the LMB code from SPL, but whether this should really be enabled in SPL is something that I am not too sure about.
-sughosh
-- Tom

On Mon, Jul 08, 2024 at 05:06:45PM +0530, Sughosh Ganu wrote:
On Sat, 6 Jul 2024 at 01:18, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:05:12PM +0530, Sughosh Ganu wrote:
Add separate config symbols for enabling the LMB module for the SPL phase. The LMB module implementation now relies on alloced list data structure which requires heap area to be present. Add specific config symbol for the SPL phase of U-Boot so that this can be enabled on platforms which support a heap in SPL.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
lib/Kconfig | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/Kconfig b/lib/Kconfig index 072ed0ecfa..7eea517b3b 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -1103,7 +1103,17 @@ config LMB default y if ARC || ARM || M68K || MICROBLAZE || MIPS || \ NIOS2 || PPC || RISCV || SANDBOX || SH || X86 || XTENSA help
Support the library logical memory blocks.
Support the library logical memory blocks. This will require
a malloc() implementation for defining the data structures
needed for maintaining the LMB memory map.
Even today, LMB really should be def_bool y rather than an option, so this series should correct that. That said...
Okay
+config SPL_LMB
bool "Enable LMB module for SPL"
depends on SPL && SPL_FRAMEWORK && SPL_SYS_MALLOC
help
Enable support for Logical Memory Block library routines in
SPL. This will require a malloc() implementation for defining
the data structures needed for maintaining the LMB memory map.
The question I guess becomes when do we need LMB in SPL, exactly? And I guess it's another case where it should be def_bool y (but still depends on what you have here) since we need to make sure we don't overwrite running SPL.
So this is a question even I had. Do we really need to enable LMB in SPL? The main reason for introducing the symbol was to have more granularity to remove the LMB code from SPL, but whether this should really be enabled in SPL is something that I am not too sure about.
Yes, we need to ensure we obey reservations in SPL, both for U-Boot and for when we boot the OS from SPL.

Hi,
On Mon, 8 Jul 2024 at 15:46, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 08, 2024 at 05:06:45PM +0530, Sughosh Ganu wrote:
On Sat, 6 Jul 2024 at 01:18, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:05:12PM +0530, Sughosh Ganu wrote:
Add separate config symbols for enabling the LMB module for the SPL phase. The LMB module implementation now relies on alloced list data structure which requires heap area to be present. Add specific config symbol for the SPL phase of U-Boot so that this can be enabled on platforms which support a heap in SPL.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
lib/Kconfig | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/lib/Kconfig b/lib/Kconfig index 072ed0ecfa..7eea517b3b 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -1103,7 +1103,17 @@ config LMB default y if ARC || ARM || M68K || MICROBLAZE || MIPS || \ NIOS2 || PPC || RISCV || SANDBOX || SH || X86 || XTENSA help
Support the library logical memory blocks.
Support the library logical memory blocks. This will require
a malloc() implementation for defining the data structures
needed for maintaining the LMB memory map.
Even today, LMB really should be def_bool y rather than an option, so this series should correct that. That said...
Okay
+config SPL_LMB
bool "Enable LMB module for SPL"
depends on SPL && SPL_FRAMEWORK && SPL_SYS_MALLOC
help
Enable support for Logical Memory Block library routines in
SPL. This will require a malloc() implementation for defining
the data structures needed for maintaining the LMB memory map.
The question I guess becomes when do we need LMB in SPL, exactly? And I guess it's another case where it should be def_bool y (but still depends on what you have here) since we need to make sure we don't overwrite running SPL.
So this is a question even I had. Do we really need to enable LMB in SPL? The main reason for introducing the symbol was to have more granularity to remove the LMB code from SPL, but whether this should really be enabled in SPL is something that I am not too sure about.
Yes, we need to ensure we obey reservations in SPL, both for U-Boot and for when we boot the OS from SPL.
I believe we will use it, e.g. in VPL. But we don't have (nor perhaps even want) a mechanism for passing the lmb from one phase to the next.
Regards, Simon

With the introduction of separate config symbols for the SPL phase of U-Boot, the condition checks need to be tweaked so that platforms that enable the LMB module in SPL are also able to call the LMB APIs. Use the appropriate condition checks to achieve this.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
 board/xilinx/common/board.c | 2 +-
 boot/bootm.c                | 4 ++--
 boot/image-board.c          | 2 +-
 fs/fs.c                     | 4 ++--
 lib/Makefile                | 2 +-
 net/tftp.c                  | 6 +++---
 net/wget.c                  | 4 ++--
 7 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index 4056884400..f04c92a70f 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -665,7 +665,7 @@ int embedded_dtb_select(void) } #endif
-#if defined(CONFIG_LMB)
+#if IS_ENABLED(CONFIG_LMB)
#ifndef MMU_SECTION_SIZE #define MMU_SECTION_SIZE (1 * 1024 * 1024) diff --git a/boot/bootm.c b/boot/bootm.c index 7b3fe551de..5ce84b73b5 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -239,7 +239,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, return 0; }
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) static void boot_start_lmb(void) { phys_addr_t mem_start; @@ -1048,7 +1048,7 @@ int bootm_run_states(struct bootm_info *bmi, int states) } } #endif -#if CONFIG_IS_ENABLED(OF_LIBFDT) && defined(CONFIG_LMB) +#if CONFIG_IS_ENABLED(OF_LIBFDT) && CONFIG_IS_ENABLED(LMB) if (!ret && (states & BOOTM_STATE_FDT)) { boot_fdt_add_mem_rsv_regions(images->ft_addr); ret = boot_relocate_fdt(&images->ft_addr, &images->ft_len); diff --git a/boot/image-board.c b/boot/image-board.c index 1f8c1ac69f..99ee7968ba 100644 --- a/boot/image-board.c +++ b/boot/image-board.c @@ -883,7 +883,7 @@ int image_setup_linux(struct bootm_headers *images) int ret;
/* This function cannot be called without lmb support */ - if (!IS_ENABLED(CONFIG_LMB)) + if (!CONFIG_IS_ENABLED(LMB)) return -EFAULT; if (CONFIG_IS_ENABLED(OF_LIBFDT)) boot_fdt_add_mem_rsv_regions(*of_flat_tree); diff --git a/fs/fs.c b/fs/fs.c index 2c835eef86..3fb00590be 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -526,7 +526,7 @@ int fs_size(const char *filename, loff_t *size) return ret; }
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) /* Check if a file may be read to the given address */ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, loff_t len, struct fstype_info *info) @@ -567,7 +567,7 @@ static int _fs_read(const char *filename, ulong addr, loff_t offset, loff_t len, void *buf; int ret;
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) if (do_lmb_check) { ret = fs_read_lmb_check(filename, addr, offset, len, info); if (ret) diff --git a/lib/Makefile b/lib/Makefile index 81b503ab52..398c11726e 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -118,7 +118,7 @@ obj-$(CONFIG_$(SPL_TPL_)OF_LIBFDT) += fdtdec.o fdtdec_common.o obj-y += hang.o obj-y += linux_compat.o obj-y += linux_string.o -obj-$(CONFIG_LMB) += lmb.o +obj-$(CONFIG_$(SPL_)LMB) += lmb.o obj-y += membuff.o obj-$(CONFIG_REGEX) += slre.o obj-y += string.o diff --git a/net/tftp.c b/net/tftp.c index 5e27fd4aa9..5c75d9d04f 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -82,7 +82,7 @@ static ulong tftp_block_wrap; static ulong tftp_block_wrap_offset; static int tftp_state; static ulong tftp_load_addr; -#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) static ulong tftp_load_size; #endif #ifdef CONFIG_TFTP_TSIZE @@ -160,7 +160,7 @@ static inline int store_block(int block, uchar *src, unsigned int len) ulong store_addr = tftp_load_addr + offset; void *ptr;
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) ulong end_addr = tftp_load_addr + tftp_load_size;
if (!end_addr) @@ -710,7 +710,7 @@ static void tftp_timeout_handler(void) /* Initialize tftp_load_addr and tftp_load_size from image_load_addr and lmb */ static int tftp_init_load_addr(void) { -#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) phys_size_t max_size;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); diff --git a/net/wget.c b/net/wget.c index 7cf809a8ef..b8ea43e7f0 100644 --- a/net/wget.c +++ b/net/wget.c @@ -98,7 +98,7 @@ static inline int store_block(uchar *src, unsigned int offset, unsigned int len) ulong newsize = offset + len; uchar *ptr;
- if (IS_ENABLED(CONFIG_LMB)) { + if (CONFIG_IS_ENABLED(LMB)) { ulong end_addr = image_load_addr + wget_load_size;
if (!end_addr) @@ -496,7 +496,7 @@ void wget_start(void) debug_cond(DEBUG_WGET, "\nwget:Load address: 0x%lx\nLoading: *\b", image_load_addr);
- if (IS_ENABLED(CONFIG_LMB)) { + if (CONFIG_IS_ENABLED(LMB)) { if (wget_init_load_size()) { printf("\nwget error: "); printf("trying to overwrite reserved memory...\n");

Hi Sughosh,
On Thu, 4 Jul 2024 at 08:37, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the introduction of separate config symbols for the SPL phase of U-Boot, the condition checks need to be tweaked so that platforms that enable the LMB module in SPL are also able to call the LMB APIs. Use the appropriate condition checks to achieve this.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
board/xilinx/common/board.c | 2 +- boot/bootm.c | 4 ++-- boot/image-board.c | 2 +- fs/fs.c | 4 ++-- lib/Makefile | 2 +- net/tftp.c | 6 +++--- net/wget.c | 4 ++-- 7 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index 4056884400..f04c92a70f 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -665,7 +665,7 @@ int embedded_dtb_select(void) } #endif
-#if defined(CONFIG_LMB) +#if IS_ENABLED(CONFIG_LMB)
#ifndef MMU_SECTION_SIZE #define MMU_SECTION_SIZE (1 * 1024 * 1024) diff --git a/boot/bootm.c b/boot/bootm.c index 7b3fe551de..5ce84b73b5 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -239,7 +239,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, return 0; }
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB)
Can you avoid #ifdefs where possible?
static void boot_start_lmb(void) { phys_addr_t mem_start; @@ -1048,7 +1048,7 @@ int bootm_run_states(struct bootm_info *bmi, int states) } } #endif -#if CONFIG_IS_ENABLED(OF_LIBFDT) && defined(CONFIG_LMB) +#if CONFIG_IS_ENABLED(OF_LIBFDT) && CONFIG_IS_ENABLED(LMB) if (!ret && (states & BOOTM_STATE_FDT)) { boot_fdt_add_mem_rsv_regions(images->ft_addr); ret = boot_relocate_fdt(&images->ft_addr, &images->ft_len); diff --git a/boot/image-board.c b/boot/image-board.c index 1f8c1ac69f..99ee7968ba 100644 --- a/boot/image-board.c +++ b/boot/image-board.c @@ -883,7 +883,7 @@ int image_setup_linux(struct bootm_headers *images) int ret;
 	/* This function cannot be called without lmb support */
-	if (!IS_ENABLED(CONFIG_LMB))
+	if (!CONFIG_IS_ENABLED(LMB))
 		return -EFAULT;
 	if (CONFIG_IS_ENABLED(OF_LIBFDT))
 		boot_fdt_add_mem_rsv_regions(*of_flat_tree);
diff --git a/fs/fs.c b/fs/fs.c index 2c835eef86..3fb00590be 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -526,7 +526,7 @@ int fs_size(const char *filename, loff_t *size) return ret; }
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) /* Check if a file may be read to the given address */ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, loff_t len, struct fstype_info *info) @@ -567,7 +567,7 @@ static int _fs_read(const char *filename, ulong addr, loff_t offset, loff_t len, void *buf; int ret;
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) if (do_lmb_check) { ret = fs_read_lmb_check(filename, addr, offset, len, info); if (ret) diff --git a/lib/Makefile b/lib/Makefile index 81b503ab52..398c11726e 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -118,7 +118,7 @@ obj-$(CONFIG_$(SPL_TPL_)OF_LIBFDT) += fdtdec.o fdtdec_common.o obj-y += hang.o obj-y += linux_compat.o obj-y += linux_string.o -obj-$(CONFIG_LMB) += lmb.o +obj-$(CONFIG_$(SPL_)LMB) += lmb.o obj-y += membuff.o obj-$(CONFIG_REGEX) += slre.o obj-y += string.o diff --git a/net/tftp.c b/net/tftp.c index 5e27fd4aa9..5c75d9d04f 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -82,7 +82,7 @@ static ulong tftp_block_wrap; static ulong tftp_block_wrap_offset; static int tftp_state; static ulong tftp_load_addr; -#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) static ulong tftp_load_size; #endif #ifdef CONFIG_TFTP_TSIZE @@ -160,7 +160,7 @@ static inline int store_block(int block, uchar *src, unsigned int len) ulong store_addr = tftp_load_addr + offset; void *ptr;
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) ulong end_addr = tftp_load_addr + tftp_load_size;
if (!end_addr)
@@ -710,7 +710,7 @@ static void tftp_timeout_handler(void) /* Initialize tftp_load_addr and tftp_load_size from image_load_addr and lmb */ static int tftp_init_load_addr(void) { -#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB)
here too
phys_size_t max_size; lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
diff --git a/net/wget.c b/net/wget.c index 7cf809a8ef..b8ea43e7f0 100644 --- a/net/wget.c +++ b/net/wget.c @@ -98,7 +98,7 @@ static inline int store_block(uchar *src, unsigned int offset, unsigned int len) ulong newsize = offset + len; uchar *ptr;
-	if (IS_ENABLED(CONFIG_LMB)) {
+	if (CONFIG_IS_ENABLED(LMB)) {
 		ulong end_addr = image_load_addr + wget_load_size;

 		if (!end_addr)
@@ -496,7 +496,7 @@ void wget_start(void) debug_cond(DEBUG_WGET, "\nwget:Load address: 0x%lx\nLoading: *\b", image_load_addr);
-	if (IS_ENABLED(CONFIG_LMB)) {
+	if (CONFIG_IS_ENABLED(LMB)) {
 		if (wget_init_load_size()) {
 			printf("\nwget error: ");
 			printf("trying to overwrite reserved memory...\n");
-- 2.34.1
Regards, Simon

hi Simon,
On Sat, 13 Jul 2024 at 20:45, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:37, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the introduction of separate config symbols for the SPL phase of U-Boot, the condition checks need to be tweaked so that platforms that enable the LMB module in SPL are also able to call the LMB APIs. Use the appropriate condition checks to achieve this.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
board/xilinx/common/board.c | 2 +- boot/bootm.c | 4 ++-- boot/image-board.c | 2 +- fs/fs.c | 4 ++-- lib/Makefile | 2 +- net/tftp.c | 6 +++--- net/wget.c | 4 ++-- 7 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index 4056884400..f04c92a70f 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -665,7 +665,7 @@ int embedded_dtb_select(void) } #endif
-#if defined(CONFIG_LMB) +#if IS_ENABLED(CONFIG_LMB)
#ifndef MMU_SECTION_SIZE #define MMU_SECTION_SIZE (1 * 1024 * 1024) diff --git a/boot/bootm.c b/boot/bootm.c index 7b3fe551de..5ce84b73b5 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -239,7 +239,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, return 0; }
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB)
Can you avoid #ifdefs where possible?
Okay
static void boot_start_lmb(void) { phys_addr_t mem_start; @@ -1048,7 +1048,7 @@ int bootm_run_states(struct bootm_info *bmi, int states) } } #endif -#if CONFIG_IS_ENABLED(OF_LIBFDT) && defined(CONFIG_LMB) +#if CONFIG_IS_ENABLED(OF_LIBFDT) && CONFIG_IS_ENABLED(LMB) if (!ret && (states & BOOTM_STATE_FDT)) { boot_fdt_add_mem_rsv_regions(images->ft_addr); ret = boot_relocate_fdt(&images->ft_addr, &images->ft_len); diff --git a/boot/image-board.c b/boot/image-board.c index 1f8c1ac69f..99ee7968ba 100644 --- a/boot/image-board.c +++ b/boot/image-board.c @@ -883,7 +883,7 @@ int image_setup_linux(struct bootm_headers *images) int ret;
/* This function cannot be called without lmb support */
if (!IS_ENABLED(CONFIG_LMB))
if (!CONFIG_IS_ENABLED(LMB)) return -EFAULT; if (CONFIG_IS_ENABLED(OF_LIBFDT)) boot_fdt_add_mem_rsv_regions(*of_flat_tree);
diff --git a/fs/fs.c b/fs/fs.c index 2c835eef86..3fb00590be 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -526,7 +526,7 @@ int fs_size(const char *filename, loff_t *size) return ret; }
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) /* Check if a file may be read to the given address */ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, loff_t len, struct fstype_info *info) @@ -567,7 +567,7 @@ static int _fs_read(const char *filename, ulong addr, loff_t offset, loff_t len, void *buf; int ret;
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) if (do_lmb_check) { ret = fs_read_lmb_check(filename, addr, offset, len, info); if (ret) diff --git a/lib/Makefile b/lib/Makefile index 81b503ab52..398c11726e 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -118,7 +118,7 @@ obj-$(CONFIG_$(SPL_TPL_)OF_LIBFDT) += fdtdec.o fdtdec_common.o obj-y += hang.o obj-y += linux_compat.o obj-y += linux_string.o -obj-$(CONFIG_LMB) += lmb.o +obj-$(CONFIG_$(SPL_)LMB) += lmb.o obj-y += membuff.o obj-$(CONFIG_REGEX) += slre.o obj-y += string.o diff --git a/net/tftp.c b/net/tftp.c index 5e27fd4aa9..5c75d9d04f 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -82,7 +82,7 @@ static ulong tftp_block_wrap; static ulong tftp_block_wrap_offset; static int tftp_state; static ulong tftp_load_addr; -#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) static ulong tftp_load_size; #endif #ifdef CONFIG_TFTP_TSIZE @@ -160,7 +160,7 @@ static inline int store_block(int block, uchar *src, unsigned int len) ulong store_addr = tftp_load_addr + offset; void *ptr;
-#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB) ulong end_addr = tftp_load_addr + tftp_load_size;
if (!end_addr)
@@ -710,7 +710,7 @@ static void tftp_timeout_handler(void) /* Initialize tftp_load_addr and tftp_load_size from image_load_addr and lmb */ static int tftp_init_load_addr(void) { -#ifdef CONFIG_LMB +#if CONFIG_IS_ENABLED(LMB)
here too
Okay
-sughosh
phys_size_t max_size; lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
diff --git a/net/wget.c b/net/wget.c index 7cf809a8ef..b8ea43e7f0 100644 --- a/net/wget.c +++ b/net/wget.c @@ -98,7 +98,7 @@ static inline int store_block(uchar *src, unsigned int offset, unsigned int len) ulong newsize = offset + len; uchar *ptr;
if (IS_ENABLED(CONFIG_LMB)) {
if (CONFIG_IS_ENABLED(LMB)) { ulong end_addr = image_load_addr + wget_load_size; if (!end_addr)
@@ -496,7 +496,7 @@ void wget_start(void) debug_cond(DEBUG_WGET, "\nwget:Load address: 0x%lx\nLoading: *\b", image_load_addr);
if (IS_ENABLED(CONFIG_LMB)) {
if (CONFIG_IS_ENABLED(LMB)) { if (wget_init_load_size()) { printf("\nwget error: "); printf("trying to overwrite reserved memory...\n");
-- 2.34.1
Regards, Simon

Introduce a function lmb_add_memory() to add available memory to the LMB memory map. Call this function during board init once the LMB data structures have been initialised.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Add memory only till ram_top to the LMB memory map, instead of all
  enumerated memory.
 include/lmb.h | 11 +++++++++++
 lib/lmb.c     | 42 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 53 insertions(+)
diff --git a/include/lmb.h b/include/lmb.h
index 27cdb18c37..d0c094107c 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -36,6 +36,17 @@ struct lmb_region {
 	enum lmb_flags flags;
 };

+/**
+ * lmb_add_memory() - Add memory range for LMB allocations
+ *
+ * Add the entire available memory range to the pool of memory that
+ * can be used by the LMB module for allocations.
+ *
+ * Return: None
+ *
+ */
+void lmb_add_memory(void);
+
 long lmb_add(phys_addr_t base, phys_size_t size);
 long lmb_reserve(phys_addr_t base, phys_size_t size);
 /**
diff --git a/lib/lmb.c b/lib/lmb.c
index a46bc8a7a3..e4733b82ac 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -239,6 +239,46 @@ void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob)
 	lmb_reserve_common(fdt_blob);
 }

+/**
+ * lmb_add_memory() - Add memory range for LMB allocations
+ *
+ * Add the entire available memory range to the pool of memory that
+ * can be used by the LMB module for allocations.
+ *
+ * This can be overridden for specific boards/architectures.
+ *
+ * Return: None
+ *
+ */
+__weak void lmb_add_memory(void)
+{
+	int i;
+	phys_size_t size;
+	phys_addr_t rgn_top;
+	u64 ram_top = gd->ram_top;
+	struct bd_info *bd = gd->bd;
+
+	/* Assume a 4GB ram_top if not defined */
+	if (!ram_top)
+		ram_top = 0x100000000ULL;
+
+	for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
+		size = bd->bi_dram[i].size;
+		if (size) {
+			if (bd->bi_dram[i].start > ram_top)
+				continue;
+
+			rgn_top = bd->bi_dram[i].start +
+				  bd->bi_dram[i].size;
+
+			if (rgn_top > ram_top )
+				size -= rgn_top - ram_top;
+
+			lmb_add(bd->bi_dram[i].start, size);
+		}
+	}
+}
+
 /* Initialize the struct, add memory and call arch/board reserve functions */
 void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size,
 				void *fdt_blob)
@@ -710,5 +750,7 @@ int lmb_mem_regions_init(void)
 		return -1;
 	}

+	lmb_add_memory();
+
 	return 0;
 }

On Thu, Jul 04, 2024 at 01:05:14PM +0530, Sughosh Ganu wrote:
Introduce a function lmb_add_memory() to add available memory to the LMB memory map. Call this function during board init once the LMB data structures have been initialised.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Add memory only till ram_top to the LMB memory map, instead of all enumerated memory.
include/lmb.h | 11 +++++++++++ lib/lmb.c | 42 ++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 53 insertions(+)
Checkpatch:

ERROR: space prohibited before that close parenthesis ')'
#78: FILE: lib/lmb.c:274:
+			if (rgn_top > ram_top )

With the changes to make the LMB reservations persistent, the common memory regions are being added during board init. Remove the now superfluous lmb_init_and_reserve() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Removed the replacement of lmb_init_and_reserve() with lmb_add_memory(),
  as memory gets added during board init.
 arch/arm/mach-apple/board.c          |  2 --
 arch/arm/mach-snapdragon/board.c     |  2 --
 arch/arm/mach-stm32mp/stm32mp1/cpu.c |  3 ---
 cmd/bdinfo.c                         |  1 -
 cmd/load.c                           |  2 --
 fs/fs.c                              |  1 -
 lib/lmb.c                            | 13 -------------
 net/tftp.c                           |  2 --
 net/wget.c                           |  2 --
 test/cmd/bdinfo.c                    |  9 ---------
 10 files changed, 37 deletions(-)
diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 213390d6e8..0b6d290b8a 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -775,8 +775,6 @@ int board_late_init(void) { u32 status = 0;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - /* somewhat based on the Linux Kernel boot requirements: * align by 2M and maximal FDT size 2M */ diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index a63c8bec45..22a7d2a637 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -282,8 +282,6 @@ int board_late_init(void) { u32 status = 0;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - /* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */ status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M)); status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M)); diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index a913737342..64480da9f8 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -141,9 +141,6 @@ int mach_cpu_init(void)
void enable_caches(void) { - /* parse device tree when data cache is still activated */ - lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - /* I-cache is already enabled in start.S: icache_enable() not needed */
/* deactivate the data cache, early enabled in arch_cpu_init() */ diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index b31e0208df..3c40dee143 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -162,7 +162,6 @@ static int bdinfo_print_all(struct bd_info *bd) bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit); #endif if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { - lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all_force(); if (IS_ENABLED(CONFIG_OF_REAL)) printf("devicetree = %s\n", fdtdec_get_srcname()); diff --git a/cmd/load.c b/cmd/load.c index b4e747f966..29cc98ff37 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -153,8 +153,6 @@ static ulong load_serial(long offset) int line_count = 0; long ret;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - while (read_record(record, SREC_MAXRECLEN + 1) >= 0) { type = srec_decode(record, &binlen, &addr, binbuf);
diff --git a/fs/fs.c b/fs/fs.c index 3fb00590be..4bc28d1dff 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -549,7 +549,6 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, if (len && len < read_len) read_len = len;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all();
if (lmb_alloc_addr(addr, read_len) == addr) diff --git a/lib/lmb.c b/lib/lmb.c index e4733b82ac..e1dde14a3c 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -226,19 +226,6 @@ static void lmb_reserve_common(void *fdt_blob) efi_lmb_reserve(); }
-/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) -{ - int i; - - for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) { - if (bd->bi_dram[i].size) - lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size); - } - - lmb_reserve_common(fdt_blob); -} - /** * lmb_add_memory() - Add memory range for LMB allocations * diff --git a/net/tftp.c b/net/tftp.c index 5c75d9d04f..dfbc620c73 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -713,8 +713,6 @@ static int tftp_init_load_addr(void) #if CONFIG_IS_ENABLED(LMB) phys_size_t max_size;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1; diff --git a/net/wget.c b/net/wget.c index b8ea43e7f0..82a7e30ea7 100644 --- a/net/wget.c +++ b/net/wget.c @@ -75,8 +75,6 @@ static int wget_init_load_size(void) { phys_size_t max_size;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); - max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1; diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c index 34d2b141d8..1cd81a195b 100644 --- a/test/cmd/bdinfo.c +++ b/test/cmd/bdinfo.c @@ -113,14 +113,6 @@ static int lmb_test_dump_region(struct unit_test_state *uts, end = base + size - 1; flags = rgn->region[i].flags;
- /* - * this entry includes the stack (get_sp()) on many platforms - * so will different each time lmb_init_and_reserve() is called. - * We could instead have the bdinfo command put its lmb region - * in a known location, so we can check it directly, rather than - * calling lmb_init_and_reserve() to create a new (and hopefully - * identical one). But for now this seems good enough. - */ if (!IS_ENABLED(CONFIG_SANDBOX) && i == 3) { ut_assert_nextlinen(" %s[%d]\t[", name, i); continue; @@ -200,7 +192,6 @@ static int bdinfo_test_all(struct unit_test_state *uts) if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { struct lmb lmb;
- lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); ut_assertok(lmb_test_dump_all(uts, &lmb)); if (IS_ENABLED(CONFIG_OF_REAL)) ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());

On Thu, 4 Jul 2024 at 08:37, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the changes to make the LMB reservations persistent, the common memory regions are being added during board init. Remove the now superfluous lmb_init_and_reserve() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Removed the replacement of lmb_init_and_reserve() with lmb_add_memory(), as memory gets added during board init.
arch/arm/mach-apple/board.c | 2 -- arch/arm/mach-snapdragon/board.c | 2 -- arch/arm/mach-stm32mp/stm32mp1/cpu.c | 3 --- cmd/bdinfo.c | 1 - cmd/load.c | 2 -- fs/fs.c | 1 - lib/lmb.c | 13 ------------- net/tftp.c | 2 -- net/wget.c | 2 -- test/cmd/bdinfo.c | 9 --------- 10 files changed, 37 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

On Thu, 4 Jul 2024 at 10:37, Sughosh Ganu sughosh.ganu@linaro.org wrote:
With the changes to make the LMB reservations persistent, the common memory regions are being added during board init. Remove the now superfluous lmb_init_and_reserve() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Removed the replacement of lmb_init_and_reserve() with lmb_add_memory(), as memory gets added during board init.
 arch/arm/mach-apple/board.c          |  2 --
 arch/arm/mach-snapdragon/board.c     |  2 --
 arch/arm/mach-stm32mp/stm32mp1/cpu.c |  3 ---
 cmd/bdinfo.c                         |  1 -
 cmd/load.c                           |  2 --
 fs/fs.c                              |  1 -
 lib/lmb.c                            | 13 -------------
 net/tftp.c                           |  2 --
 net/wget.c                           |  2 --
 test/cmd/bdinfo.c                    |  9 ---------
 10 files changed, 37 deletions(-)
diff --git a/arch/arm/mach-apple/board.c b/arch/arm/mach-apple/board.c index 213390d6e8..0b6d290b8a 100644 --- a/arch/arm/mach-apple/board.c +++ b/arch/arm/mach-apple/board.c @@ -775,8 +775,6 @@ int board_late_init(void) { u32 status = 0;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* somewhat based on the Linux Kernel boot requirements: * align by 2M and maximal FDT size 2M */
diff --git a/arch/arm/mach-snapdragon/board.c b/arch/arm/mach-snapdragon/board.c index a63c8bec45..22a7d2a637 100644 --- a/arch/arm/mach-snapdragon/board.c +++ b/arch/arm/mach-snapdragon/board.c @@ -282,8 +282,6 @@ int board_late_init(void) { u32 status = 0;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* We need to be fairly conservative here as we support boards with just 1G of TOTAL RAM */ status |= env_set_hex("kernel_addr_r", addr_alloc(SZ_128M)); status |= env_set_hex("ramdisk_addr_r", addr_alloc(SZ_128M));
diff --git a/arch/arm/mach-stm32mp/stm32mp1/cpu.c b/arch/arm/mach-stm32mp/stm32mp1/cpu.c index a913737342..64480da9f8 100644 --- a/arch/arm/mach-stm32mp/stm32mp1/cpu.c +++ b/arch/arm/mach-stm32mp/stm32mp1/cpu.c @@ -141,9 +141,6 @@ int mach_cpu_init(void)
void enable_caches(void) {
/* parse device tree when data cache is still activated */
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
/* I-cache is already enabled in start.S: icache_enable() not needed */ /* deactivate the data cache, early enabled in arch_cpu_init() */
diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index b31e0208df..3c40dee143 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -162,7 +162,6 @@ static int bdinfo_print_all(struct bd_info *bd) bdinfo_print_num_l("multi_dtb_fit", (ulong)gd->multi_dtb_fit); #endif if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) {
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all_force(); if (IS_ENABLED(CONFIG_OF_REAL)) printf("devicetree = %s\n", fdtdec_get_srcname());
diff --git a/cmd/load.c b/cmd/load.c index b4e747f966..29cc98ff37 100644 --- a/cmd/load.c +++ b/cmd/load.c @@ -153,8 +153,6 @@ static ulong load_serial(long offset) int line_count = 0; long ret;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
while (read_record(record, SREC_MAXRECLEN + 1) >= 0) { type = srec_decode(record, &binlen, &addr, binbuf);
diff --git a/fs/fs.c b/fs/fs.c index 3fb00590be..4bc28d1dff 100644 --- a/fs/fs.c +++ b/fs/fs.c @@ -549,7 +549,6 @@ static int fs_read_lmb_check(const char *filename, ulong addr, loff_t offset, if (len && len < read_len) read_len = len;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); lmb_dump_all(); if (lmb_alloc_addr(addr, read_len) == addr)
diff --git a/lib/lmb.c b/lib/lmb.c index e4733b82ac..e1dde14a3c 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -226,19 +226,6 @@ static void lmb_reserve_common(void *fdt_blob) efi_lmb_reserve(); }
-/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve(struct bd_info *bd, void *fdt_blob) -{
int i;
for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) {
if (bd->bi_dram[i].size)
lmb_add(bd->bi_dram[i].start, bd->bi_dram[i].size);
}
lmb_reserve_common(fdt_blob);
-}
/**
- lmb_add_memory() - Add memory range for LMB allocations
diff --git a/net/tftp.c b/net/tftp.c index 5c75d9d04f..dfbc620c73 100644 --- a/net/tftp.c +++ b/net/tftp.c @@ -713,8 +713,6 @@ static int tftp_init_load_addr(void) #if CONFIG_IS_ENABLED(LMB) phys_size_t max_size;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/net/wget.c b/net/wget.c index b8ea43e7f0..82a7e30ea7 100644 --- a/net/wget.c +++ b/net/wget.c @@ -75,8 +75,6 @@ static int wget_init_load_size(void) { phys_size_t max_size;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob);
max_size = lmb_get_free_size(image_load_addr); if (!max_size) return -1;
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c index 34d2b141d8..1cd81a195b 100644 --- a/test/cmd/bdinfo.c +++ b/test/cmd/bdinfo.c @@ -113,14 +113,6 @@ static int lmb_test_dump_region(struct unit_test_state *uts, end = base + size - 1; flags = rgn->region[i].flags;
/*
* this entry includes the stack (get_sp()) on many platforms
* so will different each time lmb_init_and_reserve() is called.
* We could instead have the bdinfo command put its lmb region
* in a known location, so we can check it directly, rather than
* calling lmb_init_and_reserve() to create a new (and hopefully
* identical one). But for now this seems good enough.
*/ if (!IS_ENABLED(CONFIG_SANDBOX) && i == 3) { ut_assert_nextlinen(" %s[%d]\t[", name, i); continue;
@@ -200,7 +192,6 @@ static int bdinfo_test_all(struct unit_test_state *uts) if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) { struct lmb lmb;
lmb_init_and_reserve(gd->bd, (void *)gd->fdt_blob); ut_assertok(lmb_test_dump_all(uts, &lmb)); if (IS_ENABLED(CONFIG_OF_REAL)) ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());
-- 2.34.1
Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org

The LMB module provides APIs for allocating and reserving chunks of memory, which are then typically used for things like loading images for booting. Reserve the portion of memory that is occupied by the U-Boot image itself, as well as other parts of memory that might have been marked as reserved in the board's DTB.
Mark these regions of memory with the LMB_NOOVERWRITE flag to indicate that these regions cannot be re-requested or overwritten.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Mark the reserved regions as LMB_NOOVERWRITE.
* Call the lmb_reserve_common() function in U-Boot proper after relocation.
 lib/lmb.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/lib/lmb.c b/lib/lmb.c index e1dde14a3c..456b64c00a 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -13,6 +13,7 @@ #include <lmb.h> #include <log.h> #include <malloc.h> +#include <spl.h>
#include <asm/global_data.h> #include <asm/sections.h> @@ -173,10 +174,11 @@ void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align) if (bank_end > end) bank_end = end - 1;
- lmb_reserve(sp, bank_end - sp + 1); + lmb_reserve_flags(sp, bank_end - sp + 1, LMB_NOOVERWRITE);
if (gd->flags & GD_FLG_SKIP_RELOC) - lmb_reserve((phys_addr_t)(uintptr_t)_start, gd->mon_len); + lmb_reserve_flags((phys_addr_t)(uintptr_t)_start, + gd->mon_len, LMB_NOOVERWRITE);
break; } @@ -739,5 +741,9 @@ int lmb_mem_regions_init(void)
lmb_add_memory();
+ /* Reserve the U-Boot image region once U-Boot has relocated */ + if (spl_phase() == PHASE_BOARD_R) + lmb_reserve_common((void *)gd->fdt_blob); + return 0; }
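The no-overwrite semantics introduced above can be sketched in isolation. This is a hypothetical single-slot map with invented helper names, not the actual U-Boot implementation; it only illustrates how a reservation carrying LMB_NOOVERWRITE refuses any later overlapping request:

```c
#include <assert.h>
#include <stdbool.h>

enum lmb_flags {
	LMB_NONE = 0,
	LMB_NOMAP = 1 << 1,
	LMB_NOOVERWRITE = 1 << 2,
};

struct region {
	unsigned long base, size;
	enum lmb_flags flags;
	bool used;
};

static struct region slot;	/* one-slot "memory map", for illustration only */

/* Reserve [base, base + size); fail if it overlaps a no-overwrite region. */
static int reserve_flags(unsigned long base, unsigned long size,
			 enum lmb_flags flags)
{
	if (slot.used && slot.base < base + size &&
	    base < slot.base + slot.size && (slot.flags & LMB_NOOVERWRITE))
		return -1;	/* region is pinned: cannot be re-requested */

	slot = (struct region){ base, size, flags, true };
	return 0;
}
```

A region reserved without the flag can still be re-requested; only LMB_NOOVERWRITE pins it, which is what protects the relocated U-Boot image and DTB-reserved ranges from being handed out again.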

On Thu, 4 Jul 2024 at 08:37, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The LMB module provides APIs for allocating and reserving chunks of memory, which are then typically used for things like loading images for booting. Reserve the portion of memory that is occupied by the U-Boot image itself, as well as other parts of memory that might have been marked as reserved in the board's DTB.
Mark these regions of memory with the LMB_NOOVERWRITE flag to indicate that these regions cannot be re-requested or overwritten.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Mark the reserved regions as LMB_NOOVERWRITE.
- Call the lmb_reserve_common() function in U-Boot proper after relocation.
 lib/lmb.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

With the move to make the LMB allocations persistent and the common memory regions being reserved during board init, there is no need for an explicit reservation of a memory range. Remove the lmb_init_and_reserve_range() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org
Reviewed-by: Tom Rini trini@konsulko.com
---
Changes since V1: None
 boot/bootm.c | 15 +--------------
 lib/lmb.c    |  8 --------
 2 files changed, 1 insertion(+), 22 deletions(-)
diff --git a/boot/bootm.c b/boot/bootm.c index 5ce84b73b5..d44fd2ed87 100644 --- a/boot/bootm.c +++ b/boot/bootm.c @@ -239,18 +239,7 @@ static int boot_get_kernel(const char *addr_fit, struct bootm_headers *images, return 0; }
-#if CONFIG_IS_ENABLED(LMB) -static void boot_start_lmb(void) -{ - phys_addr_t mem_start; - phys_size_t mem_size; - - mem_start = env_get_bootm_low(); - mem_size = env_get_bootm_size(); - - lmb_init_and_reserve_range(mem_start, mem_size, NULL); -} -#else +#if !CONFIG_IS_ENABLED(LMB) #define lmb_reserve(base, size) static inline void boot_start_lmb(void) { } #endif @@ -260,8 +249,6 @@ static int bootm_start(void) memset((void *)&images, 0, sizeof(images)); images.verify = env_get_yesno("verify");
- boot_start_lmb(); - bootstage_mark_name(BOOTSTAGE_ID_BOOTM_START, "bootm_start"); images.state = BOOTM_STATE_START;
diff --git a/lib/lmb.c b/lib/lmb.c index 456b64c00a..bf6254f4fc 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -268,14 +268,6 @@ __weak void lmb_add_memory(void) } }
-/* Initialize the struct, add memory and call arch/board reserve functions */ -void lmb_init_and_reserve_range(phys_addr_t base, phys_size_t size, - void *fdt_blob) -{ - lmb_add(base, size); - lmb_reserve_common(fdt_blob); -} - static bool lmb_region_flags_match(struct lmb_region *rgn, unsigned long r1, enum lmb_flags flags) {

The memory map maintained by the LMB module is now persistent and global. This memory map is maintained through the alloced-list (alist) structure, which can be extended at runtime -- there is one list for the available memory, and one for the used memory. Allocate and initialise these lists during the board init.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Initialise the lmb structures as part of board init.
* Initialise the lmb structure during SPL init when enabled.
 common/board_r.c |  4 ++++
 common/spl/spl.c |  4 ++++
 include/lmb.h    | 11 +++++++++++
 lib/lmb.c        | 20 ++++++++++++++++++++
 4 files changed, 39 insertions(+)
diff --git a/common/board_r.c b/common/board_r.c index c823cd262f..1a5bb98218 100644 --- a/common/board_r.c +++ b/common/board_r.c @@ -22,6 +22,7 @@ #include <hang.h> #include <image.h> #include <irq_func.h> +#include <lmb.h> #include <log.h> #include <net.h> #include <asm/cache.h> @@ -611,6 +612,9 @@ static init_fnc_t init_sequence_r[] = { #ifdef CONFIG_CLOCKS set_cpu_clk_info, /* Setup clock information */ #endif +#if CONFIG_IS_ENABLED(LMB) + initr_lmb, +#endif #ifdef CONFIG_EFI_LOADER efi_memory_init, #endif diff --git a/common/spl/spl.c b/common/spl/spl.c index 7794ddccad..633dbd1234 100644 --- a/common/spl/spl.c +++ b/common/spl/spl.c @@ -686,6 +686,10 @@ void board_init_r(gd_t *dummy1, ulong dummy2) SPL_SYS_MALLOC_SIZE); gd->flags |= GD_FLG_FULL_MALLOC_INIT; } + + if (IS_ENABLED(CONFIG_SPL_LMB)) + initr_lmb(); + if (!(gd->flags & GD_FLG_SPL_INIT)) { if (spl_init()) hang(); diff --git a/include/lmb.h b/include/lmb.h index d0c094107c..02891a14be 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -36,6 +36,17 @@ struct lmb_region { enum lmb_flags flags; };
+/** + * initr_lmb() - Initialise the LMB lists + * + * Initialise the LMB lists needed for keeping the memory map. There + * are two lists, in form of alloced list data structure. One for the + * available memory, and one for the used memory. + * + * Return: 0 on success, -ve on error + */ +int initr_lmb(void); + /** * lmb_add_memory() - Add memory range for LMB allocations * diff --git a/lib/lmb.c b/lib/lmb.c index bf6254f4fc..1534380969 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -739,3 +739,23 @@ int lmb_mem_regions_init(void)
return 0; } + +/** + * initr_lmb() - Initialise the LMB lists + * + * Initialise the LMB lists needed for keeping the memory map. There + * are two lists, in form of alloced list data structure. One for the + * available memory, and one for the used memory. + * + * Return: 0 on success, -ve on error + */ +int initr_lmb(void) +{ + int ret; + + ret = lmb_mem_regions_init(); + if (ret) + printf("Unable to initialise the LMB data structures\n"); + + return ret; +}

Use the BIT macro for assigning values to the LMB flags instead of assigning arbitrary values to them.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch based on review comment from Heinrich
 include/lmb.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index 02891a14be..cc4cf9f3c8 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -5,6 +5,7 @@
#include <asm/types.h> #include <asm/u-boot.h> +#include <linux/bitops.h>
/* * Logical memory blocks. @@ -18,9 +19,9 @@ * @LMB_NOMAP: don't add to mmu configuration */ enum lmb_flags { - LMB_NONE = 0x0, - LMB_NOMAP = 0x4, - LMB_NOOVERWRITE = 0x8, + LMB_NONE = BIT(0), + LMB_NOMAP = BIT(1), + LMB_NOOVERWRITE = BIT(2), };
/**
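With BIT() each flag occupies its own bit, so flags compose and test cleanly with bitwise operations. A quick check of the resulting values (BIT() is written out here, since linux/bitops.h is U-Boot-internal):

```c
#include <assert.h>

#define BIT(n) (1UL << (n))	/* as in linux/bitops.h */

enum lmb_flags {
	LMB_NONE = BIT(0),		/* 0x1 */
	LMB_NOMAP = BIT(1),		/* 0x2 */
	LMB_NOOVERWRITE = BIT(2),	/* 0x4 */
};
```

A region that is both unmapped and pinned can then carry `LMB_NOMAP | LMB_NOOVERWRITE`, and each attribute is recoverable with a simple mask.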

Almost all of the current definitions of arch_lmb_reserve() are doing the same thing. The only exception, in a couple of cases, is the alignment requirement. Have a generic weak implementation of this function, keeping the highest alignment value that is being used (16 KiB).
Also, instead of using the current value of the stack pointer as the start of the reserved region, use a fixed value derived from the stack size config value.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
 arch/arc/lib/cache.c        | 14 --------------
 arch/arm/lib/stack.c        | 14 --------------
 arch/m68k/lib/bootm.c       | 17 -----------------
 arch/microblaze/lib/bootm.c | 14 --------------
 arch/mips/lib/bootm.c       | 15 ---------------
 arch/nios2/lib/bootm.c      | 13 -------------
 arch/powerpc/lib/bootm.c    | 13 +++----------
 arch/riscv/lib/bootm.c      | 13 -------------
 arch/sh/lib/bootm.c         | 13 -------------
 arch/x86/lib/bootm.c        | 18 ------------------
 arch/xtensa/lib/bootm.c     | 13 -------------
 lib/lmb.c                   |  6 +++++-
 12 files changed, 8 insertions(+), 155 deletions(-)
diff --git a/arch/arc/lib/cache.c b/arch/arc/lib/cache.c index 5151af917a..5169fc627f 100644 --- a/arch/arc/lib/cache.c +++ b/arch/arc/lib/cache.c @@ -10,7 +10,6 @@ #include <linux/compiler.h> #include <linux/kernel.h> #include <linux/log2.h> -#include <lmb.h> #include <asm/arcregs.h> #include <asm/arc-bcr.h> #include <asm/cache.h> @@ -820,16 +819,3 @@ void sync_n_cleanup_cache_all(void)
__ic_entire_invalidate(); } - -static ulong get_sp(void) -{ - ulong ret; - - asm("mov %0, sp" : "=r"(ret) : ); - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); -} diff --git a/arch/arm/lib/stack.c b/arch/arm/lib/stack.c index 87d5c962d7..2b21ec0734 100644 --- a/arch/arm/lib/stack.c +++ b/arch/arm/lib/stack.c @@ -11,7 +11,6 @@ * Marius Groeger mgroeger@sysgo.de */ #include <init.h> -#include <lmb.h> #include <asm/global_data.h>
DECLARE_GLOBAL_DATA_PTR; @@ -33,16 +32,3 @@ int arch_reserve_stacks(void)
return 0; } - -static ulong get_sp(void) -{ - ulong ret; - - asm("mov %0, sp" : "=r"(ret) : ); - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 16384); -} diff --git a/arch/m68k/lib/bootm.c b/arch/m68k/lib/bootm.c index eb220d178d..06854e1442 100644 --- a/arch/m68k/lib/bootm.c +++ b/arch/m68k/lib/bootm.c @@ -9,7 +9,6 @@ #include <command.h> #include <env.h> #include <image.h> -#include <lmb.h> #include <log.h> #include <asm/global_data.h> #include <u-boot/zlib.h> @@ -27,14 +26,8 @@ DECLARE_GLOBAL_DATA_PTR; #define LINUX_MAX_ENVS 256 #define LINUX_MAX_ARGS 256
-static ulong get_sp (void); static void set_clocks_in_mhz (struct bd_info *kbd);
-void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 1024); -} - int do_bootm_linux(int flag, struct bootm_info *bmi) { struct bootm_headers *images = bmi->images; @@ -88,16 +81,6 @@ error: return 1; }
-static ulong get_sp (void) -{ - ulong sp; - - asm("movel %%a7, %%d0\n" - "movel %%d0, %0\n": "=d"(sp): :"%d0"); - - return sp; -} - static void set_clocks_in_mhz (struct bd_info *kbd) { char *s; diff --git a/arch/microblaze/lib/bootm.c b/arch/microblaze/lib/bootm.c index ce96bca28f..4879a41aab 100644 --- a/arch/microblaze/lib/bootm.c +++ b/arch/microblaze/lib/bootm.c @@ -15,7 +15,6 @@ #include <fdt_support.h> #include <hang.h> #include <image.h> -#include <lmb.h> #include <log.h> #include <asm/cache.h> #include <asm/global_data.h> @@ -24,19 +23,6 @@
DECLARE_GLOBAL_DATA_PTR;
-static ulong get_sp(void) -{ - ulong ret; - - asm("addik %0, r1, 0" : "=r"(ret) : ); - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); -} - static void boot_jump_linux(struct bootm_headers *images, int flag) { void (*thekernel)(char *cmdline, ulong rd, ulong dt); diff --git a/arch/mips/lib/bootm.c b/arch/mips/lib/bootm.c index 8fb3a3923f..8719510002 100644 --- a/arch/mips/lib/bootm.c +++ b/arch/mips/lib/bootm.c @@ -9,7 +9,6 @@ #include <env.h> #include <image.h> #include <fdt_support.h> -#include <lmb.h> #include <log.h> #include <asm/addrspace.h> #include <asm/global_data.h> @@ -28,20 +27,6 @@ static char **linux_env; static char *linux_env_p; static int linux_env_idx;
-static ulong arch_get_sp(void) -{ - ulong ret; - - __asm__ __volatile__("move %0, $sp" : "=r"(ret) : ); - - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(arch_get_sp(), gd->ram_top, 4096); -} - static void linux_cmdline_init(void) { linux_argc = 1; diff --git a/arch/nios2/lib/bootm.c b/arch/nios2/lib/bootm.c index d33d45d28f..71319839ba 100644 --- a/arch/nios2/lib/bootm.c +++ b/arch/nios2/lib/bootm.c @@ -64,16 +64,3 @@ int do_bootm_linux(int flag, struct bootm_info *bmi)
return 1; } - -static ulong get_sp(void) -{ - ulong ret; - - asm("mov %0, sp" : "=r"(ret) : ); - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); -} diff --git a/arch/powerpc/lib/bootm.c b/arch/powerpc/lib/bootm.c index 836e6478fc..a78472135b 100644 --- a/arch/powerpc/lib/bootm.c +++ b/arch/powerpc/lib/bootm.c @@ -38,7 +38,6 @@
DECLARE_GLOBAL_DATA_PTR;
-static ulong get_sp (void); extern void ft_fixup_num_cores(void *blob); static void set_clocks_in_mhz (struct bd_info *kbd);
@@ -119,6 +118,7 @@ static void boot_jump_linux(struct bootm_headers *images)
void arch_lmb_reserve(void) { + phys_addr_t rsv_start; phys_size_t bootm_size; ulong size, bootmap_base;
@@ -143,7 +143,8 @@ void arch_lmb_reserve(void) lmb_reserve(base, bootm_size - size); }
- arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); + rsv_start = gd->start_addr_sp - CONFIG_STACK_SIZE; + arch_lmb_reserve_generic(rsv_start, gd->ram_top, 4096);
#ifdef CONFIG_MP cpu_mp_lmb_reserve(); @@ -251,14 +252,6 @@ int do_bootm_linux(int flag, struct bootm_info *bmi) return 0; }
-static ulong get_sp (void) -{ - ulong sp; - - asm( "mr %0,1": "=r"(sp) : ); - return sp; -} - static void set_clocks_in_mhz (struct bd_info *kbd) { char *s; diff --git a/arch/riscv/lib/bootm.c b/arch/riscv/lib/bootm.c index bbf62f9e05..82502972ee 100644 --- a/arch/riscv/lib/bootm.c +++ b/arch/riscv/lib/bootm.c @@ -133,16 +133,3 @@ int do_bootm_vxworks(int flag, struct bootm_info *bmi) { return do_bootm_linux(flag, bmi); } - -static ulong get_sp(void) -{ - ulong ret; - - asm("mv %0, sp" : "=r"(ret) : ); - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); -} diff --git a/arch/sh/lib/bootm.c b/arch/sh/lib/bootm.c index 44ac05988c..bb0f59e0aa 100644 --- a/arch/sh/lib/bootm.c +++ b/arch/sh/lib/bootm.c @@ -101,16 +101,3 @@ int do_bootm_linux(int flag, struct bootm_info *bmi) /* does not return */ return 1; } - -static ulong get_sp(void) -{ - ulong ret; - - asm("mov r15, %0" : "=r"(ret) : ); - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); -} diff --git a/arch/x86/lib/bootm.c b/arch/x86/lib/bootm.c index 114b31012e..55f581836d 100644 --- a/arch/x86/lib/bootm.c +++ b/arch/x86/lib/bootm.c @@ -253,21 +253,3 @@ int do_bootm_linux(int flag, struct bootm_info *bmi)
return boot_jump_linux(images); } - -static ulong get_sp(void) -{ - ulong ret; - -#if CONFIG_IS_ENABLED(X86_64) - asm("mov %%rsp, %0" : "=r"(ret) : ); -#else - asm("mov %%esp, %0" : "=r"(ret) : ); -#endif - - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); -} diff --git a/arch/xtensa/lib/bootm.c b/arch/xtensa/lib/bootm.c index bdbd6d4692..2958f20739 100644 --- a/arch/xtensa/lib/bootm.c +++ b/arch/xtensa/lib/bootm.c @@ -197,16 +197,3 @@ int do_bootm_linux(int flag, struct bootm_info *bmi)
return 1; } - -static ulong get_sp(void) -{ - ulong ret; - - asm("mov %0, a1" : "=r"(ret) : ); - return ret; -} - -void arch_lmb_reserve(void) -{ - arch_lmb_reserve_generic(get_sp(), gd->ram_top, 4096); -} diff --git a/lib/lmb.c b/lib/lmb.c index 1534380969..7cb97e2d42 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -699,7 +699,11 @@ __weak void board_lmb_reserve(void)
__weak void arch_lmb_reserve(void) { - /* please define platform specific arch_lmb_reserve() */ + phys_addr_t rsv_start; + + rsv_start = gd->start_addr_sp - CONFIG_STACK_SIZE; + + arch_lmb_reserve_generic(rsv_start, gd->ram_top, 16384); }
/**
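The fixed-start stack reservation described above can be sketched as follows. The CONFIG_STACK_SIZE value and the helper name are illustrative; in U-Boot the starting point comes from gd->start_addr_sp, and the result feeds arch_lmb_reserve_generic():

```c
#include <assert.h>

typedef unsigned long phys_addr_t;

#define CONFIG_STACK_SIZE (16 * 1024)	/* example value */

/*
 * Compute the base of the region reserved for the stack: a fixed
 * window below the initial stack pointer, aligned down, rather than
 * whatever the live stack pointer happens to be when the function runs.
 */
static phys_addr_t stack_rsv_base(phys_addr_t start_addr_sp,
				  unsigned long align)
{
	phys_addr_t rsv_start = start_addr_sp - CONFIG_STACK_SIZE;

	return rsv_start & ~((phys_addr_t)align - 1);
}
```

Using a fixed window makes the reservation reproducible across calls, which matters now that the LMB map is persistent: reserving from the live stack pointer would produce a slightly different region every time.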

Enable the LMB module in the SPL stage. This will allow the LMB code to be exercised and tested in the SPL stage.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
 configs/sandbox_spl_defconfig | 1 +
 1 file changed, 1 insertion(+)
diff --git a/configs/sandbox_spl_defconfig b/configs/sandbox_spl_defconfig index f7b92dc844..31ccdbd502 100644 --- a/configs/sandbox_spl_defconfig +++ b/configs/sandbox_spl_defconfig @@ -249,6 +249,7 @@ CONFIG_ZSTD=y CONFIG_SPL_LZMA=y CONFIG_ERRNO_STR=y CONFIG_SPL_HEXDUMP=y +CONFIG_SPL_LMB=y CONFIG_UNIT_TEST=y CONFIG_SPL_UNIT_TEST=y CONFIG_UT_TIME=y

The sandbox iommu driver uses the LMB module to allocate a particular range of memory for the device virtual address (DVA). This used to work earlier since the LMB memory map was caller-specific and not global. But with the change to make the LMB allocations global and persistent, adding this memory range has other side effects. On the other hand, the sandbox iommu test expects to see this particular value of the DVA. Use the DVA address directly instead of adding the range to the LMB memory map and then allocating it.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: None
 drivers/iommu/sandbox_iommu.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/iommu/sandbox_iommu.c b/drivers/iommu/sandbox_iommu.c index 5b4a6a8982..81f10e8433 100644 --- a/drivers/iommu/sandbox_iommu.c +++ b/drivers/iommu/sandbox_iommu.c @@ -9,6 +9,7 @@ #include <asm/io.h> #include <linux/sizes.h>
+#define DVA_ADDR 0x89abc000 #define IOMMU_PAGE_SIZE SZ_4K
static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, @@ -21,7 +22,7 @@ static dma_addr_t sandbox_iommu_map(struct udevice *dev, void *addr, off = virt_to_phys(addr) - paddr; psize = ALIGN(size + off, IOMMU_PAGE_SIZE);
- dva = lmb_alloc(psize, IOMMU_PAGE_SIZE); + dva = (phys_addr_t)DVA_ADDR;
return dva + off; } @@ -35,8 +36,6 @@ static void sandbox_iommu_unmap(struct udevice *dev, dma_addr_t addr, dva = ALIGN_DOWN(addr, IOMMU_PAGE_SIZE); psize = size + (addr - dva); psize = ALIGN(psize, IOMMU_PAGE_SIZE); - - lmb_free(dva, psize); }
static struct iommu_ops sandbox_iommu_ops = { @@ -46,8 +45,6 @@ static struct iommu_ops sandbox_iommu_ops = {
static int sandbox_iommu_probe(struct udevice *dev) { - lmb_add(0x89abc000, SZ_16K); - return 0; }

The LMB memory is typically not needed very early in the platform's boot. Do not add memory to the LMB map before relocation. Reservation of common areas and adding of memory is done after relocation.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: None
 board/xilinx/common/board.c | 31 -------------------------------
 1 file changed, 31 deletions(-)
diff --git a/board/xilinx/common/board.c b/board/xilinx/common/board.c index f04c92a70f..3440402ab4 100644 --- a/board/xilinx/common/board.c +++ b/board/xilinx/common/board.c @@ -12,7 +12,6 @@ #include <image.h> #include <init.h> #include <jffs2/load_kernel.h> -#include <lmb.h> #include <log.h> #include <asm/global_data.h> #include <asm/sections.h> @@ -665,36 +664,6 @@ int embedded_dtb_select(void) } #endif
-#if IS_ENABLED(CONFIG_LMB) - -#ifndef MMU_SECTION_SIZE -#define MMU_SECTION_SIZE (1 * 1024 * 1024) -#endif - -phys_addr_t board_get_usable_ram_top(phys_size_t total_size) -{ - phys_size_t size; - phys_addr_t reg; - - if (!total_size) - return gd->ram_top; - - if (!IS_ALIGNED((ulong)gd->fdt_blob, 0x8)) - panic("Not 64bit aligned DT location: %p\n", gd->fdt_blob); - - /* found enough not-reserved memory to relocated U-Boot */ - lmb_add(gd->ram_base, gd->ram_size); - boot_fdt_add_mem_rsv_regions((void *)gd->fdt_blob); - size = ALIGN(CONFIG_SYS_MALLOC_LEN + total_size, MMU_SECTION_SIZE); - reg = lmb_alloc(size, MMU_SECTION_SIZE); - - if (!reg) - reg = gd->ram_top - size; - - return reg + size; -} -#endif - #ifdef CONFIG_OF_BOARD_SETUP #define MAX_RAND_SIZE 8 int ft_board_setup(void *blob, struct bd_info *bd)

Instead of a randomly selected address, use an LMB-allocated one for reading the file into memory. With the LMB map now being persistent and global, the address used for reading the file might already be allocated as non-overwritable, resulting in a failure. Get a valid address from LMB, and then read the file to that address.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Free up the memory allocated once done with it.
 test/boot/cedit.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/test/boot/cedit.c b/test/boot/cedit.c index fd19da0a0c..923ddd1481 100644 --- a/test/boot/cedit.c +++ b/test/boot/cedit.c @@ -7,6 +7,7 @@ #include <cedit.h> #include <env.h> #include <expo.h> +#include <lmb.h> #include <mapmem.h> #include <dm/ofnode.h> #include <test/ut.h> @@ -61,7 +62,7 @@ static int cedit_fdt(struct unit_test_state *uts) struct video_priv *vid_priv; extern struct expo *cur_exp; struct scene_obj_menu *menu; - ulong addr = 0x1000; + ulong addr; struct ofprop prop; struct scene *scn; oftree tree; @@ -86,6 +87,8 @@ static int cedit_fdt(struct unit_test_state *uts) str = abuf_data(&tline->buf); strcpy(str, "my-machine");
+ addr = lmb_alloc(1024, 1024); + ut_asserteq(!!addr, !0); ut_assertok(run_command("cedit write_fdt hostfs - settings.dtb", 0)); ut_assertok(run_commandf("load hostfs - %lx settings.dtb", addr)); ut_assert_nextlinen("1024 bytes read"); @@ -94,6 +97,7 @@ static int cedit_fdt(struct unit_test_state *uts) tree = oftree_from_fdt(fdt); node = ofnode_find_subnode(oftree_root(tree), CEDIT_NODE_NAME); ut_assert(ofnode_valid(node)); + lmb_free(addr, 1024);
ut_asserteq(ID_CPU_SPEED_2, ofnode_read_u32_default(node, "cpu-speed", 0));

The LMB memory maps are now persistent, with alloced lists being used to keep track of the available and used memory. Make corresponding changes in the test functions so that the list information can be accessed by the tests for checking against expected values. Also introduce functions to initialise and clean up the lists. These functions will be invoked from every test to start the memory map from a clean slate.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Make changes to the lmb_init() function for working with alist based lmb lists.
* Add a lmb_uninit() function to clear out the lists.
 include/lmb.h  |   7 ++
 lib/lmb.c      |  32 ++++++
 test/lib/lmb.c | 292 ++++++++++++++++++++++++++++++-------------------
 3 files changed, 219 insertions(+), 112 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h index cc4cf9f3c8..d9d5e3f3cb 100644 --- a/include/lmb.h +++ b/include/lmb.h @@ -13,6 +13,8 @@ * Copyright (C) 2001 Peter Bergner, IBM Corp. */
+struct alist; + /** * enum lmb_flags - definition of memory region attributes * @LMB_NONE: no special request @@ -110,6 +112,11 @@ void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align); */ int lmb_mem_regions_init(void);
+#if CONFIG_IS_ENABLED(UT_LMB) +int lmb_init(struct alist **mem_lst, struct alist **used_lst); +void lmb_uninit(struct alist *mem_lst, struct alist *used_lst); +#endif /* UT_LMB */ + #endif /* __KERNEL__ */
#endif /* _LINUX_LMB_H */ diff --git a/lib/lmb.c b/lib/lmb.c index 7cb97e2d42..4480710d73 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -763,3 +763,35 @@ int initr_lmb(void)
return ret; } + +#if CONFIG_IS_ENABLED(UT_LMB) +int lmb_init(struct alist **mem_lst, struct alist **used_lst) +{ + bool ret; + + ret = alist_init(&lmb_free_mem, sizeof(struct lmb_region), + (uint)LMB_ALIST_INITIAL_SIZE); + if (!ret) { + log_debug("Unable to initialise the list for LMB free memory\n"); + return -1; + } + + ret = alist_init(&lmb_used_mem, sizeof(struct lmb_region), + (uint)LMB_ALIST_INITIAL_SIZE); + if (!ret) { + log_debug("Unable to initialise the list for LMB used memory\n"); + return -1; + } + + *mem_lst = &lmb_free_mem; + *used_lst = &lmb_used_mem; + + return 0; +} + +void lmb_uninit(struct alist *mem_lst, struct alist *used_lst) +{ + alist_uninit(mem_lst); + alist_uninit(used_lst); +} +#endif /* UT_LMB */ diff --git a/test/lib/lmb.c b/test/lib/lmb.c index a3a7ad904c..50f32793bc 100644 --- a/test/lib/lmb.c +++ b/test/lib/lmb.c @@ -3,6 +3,7 @@ * (C) Copyright 2018 Simon Goldschmidt */
+#include <alist.h> #include <dm.h> #include <lmb.h> #include <log.h> @@ -12,52 +13,51 @@ #include <test/test.h> #include <test/ut.h>
-extern struct lmb lmb; - -static inline bool lmb_is_nomap(struct lmb_property *m) +static inline bool lmb_is_nomap(struct lmb_region *m) { return m->flags & LMB_NOMAP; }
-static int check_lmb(struct unit_test_state *uts, struct lmb *lmb, - phys_addr_t ram_base, phys_size_t ram_size, - unsigned long num_reserved, +static int check_lmb(struct unit_test_state *uts, struct alist *mem_lst, + struct alist *used_lst, phys_addr_t ram_base, + phys_size_t ram_size, unsigned long num_reserved, phys_addr_t base1, phys_size_t size1, phys_addr_t base2, phys_size_t size2, phys_addr_t base3, phys_size_t size3) { + struct lmb_region *mem, *used; + + mem = mem_lst->data; + used = used_lst->data; + if (ram_size) { - ut_asserteq(lmb->memory.cnt, 1); - ut_asserteq(lmb->memory.region[0].base, ram_base); - ut_asserteq(lmb->memory.region[0].size, ram_size); + ut_asserteq(mem_lst->count, 1); + ut_asserteq(mem[0].base, ram_base); + ut_asserteq(mem[0].size, ram_size); }
- ut_asserteq(lmb->reserved.cnt, num_reserved); + ut_asserteq(used_lst->count, num_reserved); if (num_reserved > 0) { - ut_asserteq(lmb->reserved.region[0].base, base1); - ut_asserteq(lmb->reserved.region[0].size, size1); + ut_asserteq(used[0].base, base1); + ut_asserteq(used[0].size, size1); } if (num_reserved > 1) { - ut_asserteq(lmb->reserved.region[1].base, base2); - ut_asserteq(lmb->reserved.region[1].size, size2); + ut_asserteq(used[1].base, base2); + ut_asserteq(used[1].size, size2); } if (num_reserved > 2) { - ut_asserteq(lmb->reserved.region[2].base, base3); - ut_asserteq(lmb->reserved.region[2].size, size3); + ut_asserteq(used[2].base, base3); + ut_asserteq(used[2].size, size3); } return 0; }
-#define ASSERT_LMB(lmb, ram_base, ram_size, num_reserved, base1, size1, \ +#define ASSERT_LMB(mem_lst, used_lst, ram_base, ram_size, num_reserved, base1, size1, \ base2, size2, base3, size3) \ - ut_assert(!check_lmb(uts, lmb, ram_base, ram_size, \ + ut_assert(!check_lmb(uts, mem_lst, used_lst, ram_base, ram_size, \ num_reserved, base1, size1, base2, size2, base3, \ size3))
-/* - * Test helper function that reserves 64 KiB somewhere in the simulated RAM and - * then does some alloc + free tests. - */ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, const phys_size_t ram_size, const phys_addr_t ram0, const phys_size_t ram0_size, @@ -67,6 +67,8 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, const phys_addr_t alloc_64k_end = alloc_64k_addr + 0x10000;
long ret; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; phys_addr_t a, a2, b, b2, c, d;
/* check for overflow */ @@ -76,6 +78,10 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, ut_assert(alloc_64k_addr >= ram + 8); ut_assert(alloc_64k_end <= ram_end - 8);
+ ut_asserteq(lmb_init(&mem_lst, &used_lst), 0); + mem = mem_lst->data; + used = used_lst->data; + if (ram0_size) { ret = lmb_add(ram0, ram0_size); ut_asserteq(ret, 0); @@ -85,95 +91,97 @@ static int test_multi_alloc(struct unit_test_state *uts, const phys_addr_t ram, ut_asserteq(ret, 0);
if (ram0_size) { - ut_asserteq(lmb.memory.cnt, 2); - ut_asserteq(lmb.memory.region[0].base, ram0); - ut_asserteq(lmb.memory.region[0].size, ram0_size); - ut_asserteq(lmb.memory.region[1].base, ram); - ut_asserteq(lmb.memory.region[1].size, ram_size); + ut_asserteq(mem_lst->count, 2); + ut_asserteq(mem[0].base, ram0); + ut_asserteq(mem[0].size, ram0_size); + ut_asserteq(mem[1].base, ram); + ut_asserteq(mem[1].size, ram_size); } else { - ut_asserteq(lmb.memory.cnt, 1); - ut_asserteq(lmb.memory.region[0].base, ram); - ut_asserteq(lmb.memory.region[0].size, ram_size); + ut_asserteq(mem_lst->count, 1); + ut_asserteq(mem[0].base, ram); + ut_asserteq(mem[0].size, ram_size); }
/* reserve 64KiB somewhere */ ret = lmb_reserve(alloc_64k_addr, 0x10000); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate somewhere, should be at the end of RAM */ a = lmb_alloc(4, 1); ut_asserteq(a, ram_end - 4); - ASSERT_LMB(&lmb, 0, 0, 2, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr, 0x10000, ram_end - 4, 4, 0, 0); /* alloc below end of reserved region -> below reserved region */ b = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(b, alloc_64k_addr - 4); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 4, 4, 0, 0);
/* 2nd time */ c = lmb_alloc(4, 1); ut_asserteq(c, ram_end - 8); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 4, 0x10000 + 4, ram_end - 8, 8, 0, 0); d = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(d, alloc_64k_addr - 8); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0);
ret = lmb_free(a, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); /* allocate again to ensure we get the same address */ a2 = lmb_alloc(4, 1); ut_asserteq(a, a2); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 8, 0, 0); ret = lmb_free(a2, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0);
ret = lmb_free(b, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 3, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4); /* allocate again to ensure we get the same address */ b2 = lmb_alloc_base(4, 1, alloc_64k_end); ut_asserteq(b, b2); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 0x10000 + 8, ram_end - 8, 4, 0, 0); ret = lmb_free(b2, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 3, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 3, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, ram_end - 8, 4);
ret = lmb_free(c, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 2, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 2, alloc_64k_addr - 8, 4, alloc_64k_addr, 0x10000, 0, 0); ret = lmb_free(d, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, 0, 0, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, 0, 0, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
if (ram0_size) { - ut_asserteq(lmb.memory.cnt, 2); - ut_asserteq(lmb.memory.region[0].base, ram0); - ut_asserteq(lmb.memory.region[0].size, ram0_size); - ut_asserteq(lmb.memory.region[1].base, ram); - ut_asserteq(lmb.memory.region[1].size, ram_size); + ut_asserteq(mem_lst->count, 2); + ut_asserteq(mem[0].base, ram0); + ut_asserteq(mem[0].size, ram0_size); + ut_asserteq(mem[1].base, ram); + ut_asserteq(mem[1].size, ram_size); } else { - ut_asserteq(lmb.memory.cnt, 1); - ut_asserteq(lmb.memory.region[0].base, ram); - ut_asserteq(lmb.memory.region[0].size, ram_size); + ut_asserteq(mem_lst->count, 1); + ut_asserteq(mem[0].base, ram); + ut_asserteq(mem[0].size, ram_size); }
+	lmb_uninit(mem_lst, used_lst);
+
 	return 0;
 }
@@ -228,45 +236,53 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) const phys_size_t big_block_size = 0x10000000; const phys_addr_t ram_end = ram + ram_size; const phys_addr_t alloc_64k_addr = ram + 0x10000000; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; long ret; phys_addr_t a, b;
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
+ ut_asserteq(lmb_init(&mem_lst, &used_lst), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* reserve 64KiB in the middle of RAM */ ret = lmb_reserve(alloc_64k_addr, 0x10000); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate a big block, should be below reserved */ a = lmb_alloc(big_block_size, 1); ut_asserteq(a, ram); - ASSERT_LMB(&lmb, ram, ram_size, 1, a, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0); /* allocate 2nd big block */ /* This should fail, printing an error */ b = lmb_alloc(big_block_size, 1); ut_asserteq(b, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, a, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, big_block_size + 0x10000, 0, 0, 0, 0);
ret = lmb_free(a, big_block_size); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
/* allocate too big block */ /* This should fail, printing an error */ a = lmb_alloc(ram_size, 1); ut_asserteq(a, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, alloc_64k_addr, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, alloc_64k_addr, 0x10000, 0, 0, 0, 0);
+	lmb_uninit(mem_lst, used_lst);
+
 	return 0;
 }
@@ -292,51 +308,62 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, const phys_addr_t ram_end = ram + ram_size; long ret; phys_addr_t a, b; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; const phys_addr_t alloc_size_aligned = (alloc_size + align - 1) & ~(align - 1);
/* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
+ ut_asserteq(lmb_init(&mem_lst, &used_lst), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block */ a = lmb_alloc(alloc_size, align); ut_assert(a != 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, - alloc_size, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, + ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); + /* allocate another block */ b = lmb_alloc(alloc_size, align); ut_assert(b != 0); if (alloc_size == alloc_size_aligned) { - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram + ram_size - (alloc_size_aligned * 2), alloc_size * 2, 0, 0, 0, 0); } else { - ASSERT_LMB(&lmb, ram, ram_size, 2, ram + ram_size - + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram + ram_size - (alloc_size_aligned * 2), alloc_size, ram + ram_size - alloc_size_aligned, alloc_size, 0, 0); } /* and free them */ ret = lmb_free(b, alloc_size); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, + ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); ret = lmb_free(a, alloc_size); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0);
/* allocate a block with base*/ b = lmb_alloc_base(alloc_size, align, ram_end); ut_assert(a == b); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + ram_size - alloc_size_aligned, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, + ram + ram_size - alloc_size_aligned, alloc_size, 0, 0, 0, 0); /* and free it */ ret = lmb_free(b, alloc_size); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + + lmb_uninit(mem_lst, used_lst);
return 0; } @@ -378,33 +405,41 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts) { const phys_addr_t ram = 0; const phys_size_t ram_size = 0x20000000; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; long ret; phys_addr_t a, b;
+ ut_asserteq(lmb_init(&mem_lst, &used_lst), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* allocate nearly everything */ a = lmb_alloc(ram_size - 4, 1); ut_asserteq(a, ram + 4); - ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* allocate the rest */ /* This should fail as the allocated address would be 0 */ b = lmb_alloc(4, 1); ut_asserteq(b, 0); /* check that this was an error by checking lmb */ - ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0); /* check that this was an error by freeing b */ ret = lmb_free(b, 4); ut_asserteq(ret, -1); - ASSERT_LMB(&lmb, ram, ram_size, 1, a, ram_size - 4, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, a, ram_size - 4, 0, 0, 0, 0);
ret = lmb_free(a, ram_size - 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 0, 0, 0, 0, 0, 0, 0); + + lmb_uninit(mem_lst, used_lst);
return 0; } @@ -415,42 +450,51 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts) { const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; + struct alist *mem_lst, *used_lst; + struct lmb_region *mem, *used; long ret;
+ ut_asserteq(lmb_init(&mem_lst, &used_lst), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
ret = lmb_reserve(0x40010000, 0x10000); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0); - /* allocate overlapping region should fail */ + /* allocate overlapping region should return the coalesced count */ ret = lmb_reserve(0x40011000, 0x10000); - ut_asserteq(ret, -1); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ut_asserteq(ret, 1); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x11000, 0, 0, 0, 0); /* allocate 3nd region */ ret = lmb_reserve(0x40030000, 0x10000); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40010000, 0x11000, 0x40030000, 0x10000, 0, 0); /* allocate 2nd region , This should coalesced all region into one */ ret = lmb_reserve(0x40020000, 0x10000); ut_assert(ret >= 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x30000, 0, 0, 0, 0);
/* allocate 2nd region, which should be added as first region */ ret = lmb_reserve(0x40000000, 0x8000); ut_assert(ret >= 0); - ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x8000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x8000, 0x40010000, 0x30000, 0, 0);
/* allocate 3rd region, coalesce with first and overlap with second */ ret = lmb_reserve(0x40008000, 0x10000); ut_assert(ret >= 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x40000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40000000, 0x40000, 0, 0, 0, 0); + + lmb_uninit(mem_lst, used_lst); + return 0; } LIB_TEST(lib_test_lmb_overlapping_reserve, 0); @@ -461,6 +505,8 @@ LIB_TEST(lib_test_lmb_overlapping_reserve, 0); */ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) { + struct lmb_region *mem, *used; + struct alist *mem_lst, *used_lst; const phys_size_t ram_size = 0x20000000; const phys_addr_t ram_end = ram + ram_size; const phys_size_t alloc_addr_a = ram + 0x8000000; @@ -472,6 +518,10 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
+ ut_asserteq(lmb_init(&mem_lst, &used_lst), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
@@ -482,34 +532,34 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) ut_asserteq(ret, 0); ret = lmb_reserve(alloc_addr_c, 0x10000); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
/* allocate blocks */ a = lmb_alloc_addr(ram, alloc_addr_a - ram); ut_asserteq(a, ram); - ASSERT_LMB(&lmb, ram, ram_size, 3, ram, 0x8010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, ram, 0x8010000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000); b = lmb_alloc_addr(alloc_addr_a + 0x10000, alloc_addr_b - alloc_addr_a - 0x10000); ut_asserteq(b, alloc_addr_a + 0x10000); - ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x10010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x10010000, alloc_addr_c, 0x10000, 0, 0); c = lmb_alloc_addr(alloc_addr_b + 0x10000, alloc_addr_c - alloc_addr_b - 0x10000); ut_asserteq(c, alloc_addr_b + 0x10000); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0); d = lmb_alloc_addr(alloc_addr_c + 0x10000, ram_end - alloc_addr_c - 0x10000); ut_asserteq(d, alloc_addr_c + 0x10000); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
/* allocating anything else should fail */ e = lmb_alloc(1, 1); ut_asserteq(e, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, ram_size, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, ram_size, 0, 0, 0, 0);
ret = lmb_free(d, ram_end - alloc_addr_c - 0x10000); @@ -519,39 +569,39 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
d = lmb_alloc_addr(ram_end - 4, 4); ut_asserteq(d, ram_end - 4); - ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0); ret = lmb_free(d, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
d = lmb_alloc_addr(ram_end - 128, 4); ut_asserteq(d, ram_end - 128); - ASSERT_LMB(&lmb, ram, ram_size, 2, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, ram, 0x18010000, d, 4, 0, 0); ret = lmb_free(d, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
d = lmb_alloc_addr(alloc_addr_c + 0x10000, 4); ut_asserteq(d, alloc_addr_c + 0x10000); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010004, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010004, 0, 0, 0, 0); ret = lmb_free(d, 4); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram, 0x18010000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram, 0x18010000, 0, 0, 0, 0);
/* allocate at the bottom */ ret = lmb_free(a, alloc_addr_a - ram); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, ram + 0x8000000, 0x10010000, - 0, 0, 0, 0); + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, ram + 0x8000000, + 0x10010000, 0, 0, 0, 0); d = lmb_alloc_addr(ram, 4); ut_asserteq(d, ram); - ASSERT_LMB(&lmb, ram, ram_size, 2, d, 4, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, d, 4, ram + 0x8000000, 0x10010000, 0, 0);
/* check that allocating outside memory fails */ @@ -564,6 +614,8 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) ut_asserteq(ret, 0); }
+ lmb_uninit(mem_lst, used_lst); + return 0; }
@@ -585,6 +637,8 @@ LIB_TEST(lib_test_lmb_alloc_addr, 0); static int test_get_unreserved_size(struct unit_test_state *uts, const phys_addr_t ram) { + struct lmb_region *mem, *used; + struct alist *mem_lst, *used_lst; const phys_size_t ram_size = 0x20000000; const phys_addr_t ram_end = ram + ram_size; const phys_size_t alloc_addr_a = ram + 0x8000000; @@ -596,6 +650,10 @@ static int test_get_unreserved_size(struct unit_test_state *uts, /* check for overflow */ ut_assert(ram_end == 0 || ram_end > ram);
+ ut_asserteq(lmb_init(&mem_lst, &used_lst), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
@@ -606,7 +664,7 @@ static int test_get_unreserved_size(struct unit_test_state *uts, ut_asserteq(ret, 0); ret = lmb_reserve(alloc_addr_c, 0x10000); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 3, alloc_addr_a, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, alloc_addr_a, 0x10000, alloc_addr_b, 0x10000, alloc_addr_c, 0x10000);
/* check addresses in between blocks */ @@ -631,6 +689,8 @@ static int test_get_unreserved_size(struct unit_test_state *uts, s = lmb_get_free_size(ram_end - 4); ut_asserteq(s, 4);
+ lmb_uninit(mem_lst, used_lst); + return 0; }
@@ -650,83 +710,91 @@ LIB_TEST(lib_test_lmb_get_free_size, 0);
static int lib_test_lmb_flags(struct unit_test_state *uts) { + struct lmb_region *mem, *used; + struct alist *mem_lst, *used_lst; const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; long ret;
+ ut_asserteq(lmb_init(&mem_lst, &used_lst), 0); + mem = mem_lst->data; + used = used_lst->data; + ret = lmb_add(ram, ram_size); ut_asserteq(ret, 0);
/* reserve, same flag */ ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
/* reserve again, same flag */ ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
/* reserve again, new flag */ ret = lmb_reserve_flags(0x40010000, 0x10000, LMB_NONE); ut_asserteq(ret, -1); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x10000, 0, 0, 0, 0);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
/* merge after */ ret = lmb_reserve_flags(0x40020000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 1); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x20000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40010000, 0x20000, 0, 0, 0, 0);
/* merge before */ ret = lmb_reserve_flags(0x40000000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 1); - ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 1, 0x40000000, 0x30000, 0, 0, 0, 0);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
ret = lmb_reserve_flags(0x40030000, 0x10000, LMB_NONE); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x10000, 0, 0);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[1]), 0);
/* test that old API use LMB_NONE */ ret = lmb_reserve(0x40040000, 0x10000); ut_asserteq(ret, 1); - ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 2, 0x40000000, 0x30000, 0x40030000, 0x20000, 0, 0);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[1]), 0);
ret = lmb_reserve_flags(0x40070000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40070000, 0x10000);
ret = lmb_reserve_flags(0x40050000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 0); - ASSERT_LMB(&lmb, ram, ram_size, 4, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 4, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x10000);
/* merge with 2 adjacent regions */ ret = lmb_reserve_flags(0x40060000, 0x10000, LMB_NOMAP); ut_asserteq(ret, 2); - ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000, + ASSERT_LMB(mem_lst, used_lst, ram, ram_size, 3, 0x40000000, 0x30000, 0x40030000, 0x20000, 0x40050000, 0x30000);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
-	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[2]), 1);
+	ut_asserteq(lmb_is_nomap(&used[0]), 1);
+	ut_asserteq(lmb_is_nomap(&used[1]), 0);
+	ut_asserteq(lmb_is_nomap(&used[2]), 1);
+
+	lmb_uninit(mem_lst, used_lst);
return 0; }

The LMB code has been changed so that the memory reservations and allocations are now persistent and global. With this change, the design of the LMB tests needs to change accordingly. Mark the LMB tests to be run only manually. The tests will no longer run as part of the unit test suite; they will instead be invoked through a separate test, and thus will not interfere with the running of the rest of the tests.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1: None
 test/lib/lmb.c | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/test/lib/lmb.c b/test/lib/lmb.c index 50f32793bc..878aacee8d 100644 --- a/test/lib/lmb.c +++ b/test/lib/lmb.c @@ -200,7 +200,7 @@ static int test_multi_alloc_512mb_x2(struct unit_test_state *uts, }
/* Create a memory region with one reserved region and allocate */ -static int lib_test_lmb_simple(struct unit_test_state *uts) +static int lib_test_lmb_simple_norun(struct unit_test_state *uts) { int ret;
@@ -212,10 +212,10 @@ static int lib_test_lmb_simple(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_multi_alloc_512mb(uts, 0xE0000000); } -LIB_TEST(lib_test_lmb_simple, 0); +LIB_TEST(lib_test_lmb_simple_norun, UT_TESTF_MANUAL);
/* Create two memory regions with one reserved region and allocate */ -static int lib_test_lmb_simple_x2(struct unit_test_state *uts) +static int lib_test_lmb_simple_x2_norun(struct unit_test_state *uts) { int ret;
@@ -227,7 +227,7 @@ static int lib_test_lmb_simple_x2(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 3.5GiB and 1 GiB */ return test_multi_alloc_512mb_x2(uts, 0xE0000000, 0x40000000); } -LIB_TEST(lib_test_lmb_simple_x2, 0); +LIB_TEST(lib_test_lmb_simple_x2_norun, UT_TESTF_MANUAL);
/* Simulate 512 MiB RAM, allocate some blocks that fit/don't fit */ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) @@ -286,7 +286,7 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram) return 0; }
-static int lib_test_lmb_big(struct unit_test_state *uts) +static int lib_test_lmb_big_norun(struct unit_test_state *uts) { int ret;
@@ -298,7 +298,7 @@ static int lib_test_lmb_big(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_bigblock(uts, 0xE0000000); } -LIB_TEST(lib_test_lmb_big, 0); +LIB_TEST(lib_test_lmb_big_norun, UT_TESTF_MANUAL);
/* Simulate 512 MiB RAM, allocate a block without previous reservation */ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, @@ -368,7 +368,7 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram, return 0; }
-static int lib_test_lmb_noreserved(struct unit_test_state *uts) +static int lib_test_lmb_noreserved_norun(struct unit_test_state *uts) { int ret;
@@ -380,10 +380,9 @@ static int lib_test_lmb_noreserved(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_noreserved(uts, 0xE0000000, 4, 1); } +LIB_TEST(lib_test_lmb_noreserved_norun, UT_TESTF_MANUAL);
-LIB_TEST(lib_test_lmb_noreserved, 0); - -static int lib_test_lmb_unaligned_size(struct unit_test_state *uts) +static int lib_test_lmb_unaligned_size_norun(struct unit_test_state *uts) { int ret;
@@ -395,13 +394,13 @@ static int lib_test_lmb_unaligned_size(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_noreserved(uts, 0xE0000000, 5, 8); } -LIB_TEST(lib_test_lmb_unaligned_size, 0); +LIB_TEST(lib_test_lmb_unaligned_size_norun, UT_TESTF_MANUAL);
/* * Simulate a RAM that starts at 0 and allocate down to address 0, which must * fail as '0' means failure for the lmb_alloc functions. */ -static int lib_test_lmb_at_0(struct unit_test_state *uts) +static int lib_test_lmb_at_0_norun(struct unit_test_state *uts) { const phys_addr_t ram = 0; const phys_size_t ram_size = 0x20000000; @@ -443,10 +442,10 @@ static int lib_test_lmb_at_0(struct unit_test_state *uts)
return 0; } -LIB_TEST(lib_test_lmb_at_0, 0); +LIB_TEST(lib_test_lmb_at_0_norun, UT_TESTF_MANUAL);
/* Check that calling lmb_reserve with overlapping regions fails. */ -static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts) +static int lib_test_lmb_overlapping_reserve_norun(struct unit_test_state *uts) { const phys_addr_t ram = 0x40000000; const phys_size_t ram_size = 0x20000000; @@ -497,7 +496,7 @@ static int lib_test_lmb_overlapping_reserve(struct unit_test_state *uts)
return 0; } -LIB_TEST(lib_test_lmb_overlapping_reserve, 0); +LIB_TEST(lib_test_lmb_overlapping_reserve_norun, UT_TESTF_MANUAL);
/* * Simulate 512 MiB RAM, reserve 3 blocks, allocate addresses in between. @@ -619,7 +618,7 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram) return 0; }
-static int lib_test_lmb_alloc_addr(struct unit_test_state *uts) +static int lib_test_lmb_alloc_addr_norun(struct unit_test_state *uts) { int ret;
@@ -631,7 +630,7 @@ static int lib_test_lmb_alloc_addr(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_alloc_addr(uts, 0xE0000000); } -LIB_TEST(lib_test_lmb_alloc_addr, 0); +LIB_TEST(lib_test_lmb_alloc_addr_norun, UT_TESTF_MANUAL);
/* Simulate 512 MiB RAM, reserve 3 blocks, check addresses in between */ static int test_get_unreserved_size(struct unit_test_state *uts, @@ -694,7 +693,7 @@ static int test_get_unreserved_size(struct unit_test_state *uts, return 0; }
-static int lib_test_lmb_get_free_size(struct unit_test_state *uts) +static int lib_test_lmb_get_free_size_norun(struct unit_test_state *uts) { int ret;
@@ -706,9 +705,9 @@ static int lib_test_lmb_get_free_size(struct unit_test_state *uts) /* simulate 512 MiB RAM beginning at 1.5GiB */ return test_get_unreserved_size(uts, 0xE0000000); } -LIB_TEST(lib_test_lmb_get_free_size, 0); +LIB_TEST(lib_test_lmb_get_free_size_norun, UT_TESTF_MANUAL);
-static int lib_test_lmb_flags(struct unit_test_state *uts) +static int lib_test_lmb_flags_norun(struct unit_test_state *uts) { struct lmb_region *mem, *used; struct alist *mem_lst, *used_lst; @@ -798,4 +797,4 @@ static int lib_test_lmb_flags(struct unit_test_state *uts)
return 0; } -LIB_TEST(lib_test_lmb_flags, 0); +LIB_TEST(lib_test_lmb_flags_norun, UT_TESTF_MANUAL);

Add the LMB unit tests under a separate class of tests. The LMB tests involve changing the LMB memory map. With the memory map now persistent and global, running these tests has side effects that would impact any subsequent tests. Run these tests separately so that the system can be reset once they complete.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1: New patch
 include/test/suites.h        |  1 +
 test/Kconfig                 |  9 ++++++
 test/Makefile                |  1 +
 test/cmd_ut.c                |  7 +++++
 test/lib/Makefile            |  1 -
 test/{lib/lmb.c => lmb_ut.c} | 53 ++++++++++++++++++++++--------------
 6 files changed, 50 insertions(+), 22 deletions(-)
 rename test/{lib/lmb.c => lmb_ut.c} (93%)
diff --git a/include/test/suites.h b/include/test/suites.h index 365d5f20df..5ef164a956 100644 --- a/include/test/suites.h +++ b/include/test/suites.h @@ -45,6 +45,7 @@ int do_ut_fdt(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_font(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_hush(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_lib(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); +int do_ut_lmb(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_loadm(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); int do_ut_log(struct cmd_tbl *cmdtp, int flag, int argc, char * const argv[]); int do_ut_mbr(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[]); diff --git a/test/Kconfig b/test/Kconfig index e2ec0994a2..49c24722dc 100644 --- a/test/Kconfig +++ b/test/Kconfig @@ -79,6 +79,15 @@ config UT_COMPRESSION Enables tests for compression and decompression routines for simple sanity and for buffer overflow conditions.
+config UT_LMB + bool "Unit tests for LMB functions" + depends on !SPL && UNIT_TEST + default y + help + Enables the 'ut lmb' commands which tests the lmb functions + responsible for reserving memory for loading images into + memory. + config UT_LOG bool "Unit tests for logging functions" depends on UNIT_TEST diff --git a/test/Makefile b/test/Makefile index ed312cd0a4..e9bdd14eba 100644 --- a/test/Makefile +++ b/test/Makefile @@ -14,6 +14,7 @@ obj-$(CONFIG_$(SPL_)CMDLINE) += command_ut.o obj-$(CONFIG_$(SPL_)UT_COMPRESSION) += compression.o obj-y += dm/ obj-$(CONFIG_FUZZ) += fuzz/ +obj-$(CONFIG_UT_LMB) += lmb_ut.o ifndef CONFIG_SANDBOX_VPL obj-$(CONFIG_UNIT_TEST) += lib/ endif diff --git a/test/cmd_ut.c b/test/cmd_ut.c index 4e4aa8f1cb..60ff872723 100644 --- a/test/cmd_ut.c +++ b/test/cmd_ut.c @@ -78,6 +78,7 @@ static struct cmd_tbl cmd_ut_sub[] = { #ifdef CONFIG_CONSOLE_TRUETYPE U_BOOT_CMD_MKENT(font, CONFIG_SYS_MAXARGS, 1, do_ut_font, "", ""), #endif + #ifdef CONFIG_UT_OPTEE U_BOOT_CMD_MKENT(optee, CONFIG_SYS_MAXARGS, 1, do_ut_optee, "", ""), #endif @@ -87,6 +88,9 @@ static struct cmd_tbl cmd_ut_sub[] = { #ifdef CONFIG_UT_LIB U_BOOT_CMD_MKENT(lib, CONFIG_SYS_MAXARGS, 1, do_ut_lib, "", ""), #endif +#ifdef CONFIG_UT_LMB + U_BOOT_CMD_MKENT(lmb, CONFIG_SYS_MAXARGS, 1, do_ut_lmb, "", ""), +#endif #ifdef CONFIG_UT_LOG U_BOOT_CMD_MKENT(log, CONFIG_SYS_MAXARGS, 1, do_ut_log, "", ""), #endif @@ -228,6 +232,9 @@ U_BOOT_LONGHELP(ut, #ifdef CONFIG_UT_LIB "\nlib - library functions" #endif +#ifdef CONFIG_UT_LMB + "\nlmb - lmb functions" +#endif #ifdef CONFIG_UT_LOG "\nlog - logging functions" #endif diff --git a/test/lib/Makefile b/test/lib/Makefile index 70f14c46b1..ecb96dc1d7 100644 --- a/test/lib/Makefile +++ b/test/lib/Makefile @@ -10,7 +10,6 @@ obj-$(CONFIG_EFI_LOADER) += efi_device_path.o obj-$(CONFIG_EFI_SECURE_BOOT) += efi_image_region.o obj-y += hexdump.o obj-$(CONFIG_SANDBOX) += kconfig.o -obj-y += lmb.o obj-y += longjmp.o obj-$(CONFIG_CONSOLE_RECORD) += test_print.o 
 obj-$(CONFIG_SSCANF) += sscanf.o
diff --git a/test/lib/lmb.c b/test/lmb_ut.c
similarity index 93%
rename from test/lib/lmb.c
rename to test/lmb_ut.c
index 878aacee8d..a304e9317e 100644
--- a/test/lib/lmb.c
+++ b/test/lmb_ut.c
@@ -9,10 +9,13 @@
 #include <log.h>
 #include <malloc.h>
 #include <dm/test.h>
-#include <test/lib.h>
+#include <test/suites.h>
 #include <test/test.h>
 #include <test/ut.h>
+
+#define LMB_TEST(_name, _flags)	UNIT_TEST(_name, _flags, lmb_test)
+
 static inline bool lmb_is_nomap(struct lmb_region *m)
 {
 	return m->flags & LMB_NOMAP;
@@ -200,7 +203,7 @@ static int test_multi_alloc_512mb_x2(struct unit_test_state *uts,
 }
 /* Create a memory region with one reserved region and allocate */
-static int lib_test_lmb_simple_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_simple_norun(struct unit_test_state *uts)
 {
 	int ret;
@@ -212,10 +215,10 @@ static int lib_test_lmb_simple_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_multi_alloc_512mb(uts, 0xE0000000);
 }
-LIB_TEST(lib_test_lmb_simple_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_simple_norun, UT_TESTF_MANUAL);

 /* Create two memory regions with one reserved region and allocate */
-static int lib_test_lmb_simple_x2_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_simple_x2_norun(struct unit_test_state *uts)
 {
 	int ret;
@@ -227,7 +230,7 @@ static int lib_test_lmb_simple_x2_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 3.5GiB and 1 GiB */
 	return test_multi_alloc_512mb_x2(uts, 0xE0000000, 0x40000000);
 }
-LIB_TEST(lib_test_lmb_simple_x2_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_simple_x2_norun, UT_TESTF_MANUAL);

 /* Simulate 512 MiB RAM, allocate some blocks that fit/don't fit */
 static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram)
@@ -286,7 +289,7 @@ static int test_bigblock(struct unit_test_state *uts, const phys_addr_t ram)
 	return 0;
 }

-static int lib_test_lmb_big_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_big_norun(struct unit_test_state *uts)
 {
 	int ret;
@@ -298,7 +301,7 @@ static int lib_test_lmb_big_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_bigblock(uts, 0xE0000000);
 }
-LIB_TEST(lib_test_lmb_big_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_big_norun, UT_TESTF_MANUAL);

 /* Simulate 512 MiB RAM, allocate a block without previous reservation */
 static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
@@ -368,7 +371,7 @@ static int test_noreserved(struct unit_test_state *uts, const phys_addr_t ram,
 	return 0;
 }

-static int lib_test_lmb_noreserved_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_noreserved_norun(struct unit_test_state *uts)
 {
 	int ret;
@@ -380,9 +383,9 @@ static int lib_test_lmb_noreserved_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_noreserved(uts, 0xE0000000, 4, 1);
 }
-LIB_TEST(lib_test_lmb_noreserved_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_noreserved_norun, UT_TESTF_MANUAL);

-static int lib_test_lmb_unaligned_size_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_unaligned_size_norun(struct unit_test_state *uts)
 {
 	int ret;
@@ -394,13 +397,13 @@ static int lib_test_lmb_unaligned_size_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_noreserved(uts, 0xE0000000, 5, 8);
 }
-LIB_TEST(lib_test_lmb_unaligned_size_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_unaligned_size_norun, UT_TESTF_MANUAL);

 /*
  * Simulate a RAM that starts at 0 and allocate down to address 0, which must
  * fail as '0' means failure for the lmb_alloc functions.
  */
-static int lib_test_lmb_at_0_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_at_0_norun(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0;
 	const phys_size_t ram_size = 0x20000000;
@@ -442,10 +445,10 @@ static int lib_test_lmb_at_0_norun(struct unit_test_state *uts)

 	return 0;
 }
-LIB_TEST(lib_test_lmb_at_0_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_at_0_norun, UT_TESTF_MANUAL);

 /* Check that calling lmb_reserve with overlapping regions fails. */
-static int lib_test_lmb_overlapping_reserve_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_overlapping_reserve_norun(struct unit_test_state *uts)
 {
 	const phys_addr_t ram = 0x40000000;
 	const phys_size_t ram_size = 0x20000000;
@@ -496,7 +499,7 @@ static int lib_test_lmb_overlapping_reserve_norun(struct unit_test_state *uts)

 	return 0;
 }
-LIB_TEST(lib_test_lmb_overlapping_reserve_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_overlapping_reserve_norun, UT_TESTF_MANUAL);

 /*
  * Simulate 512 MiB RAM, reserve 3 blocks, allocate addresses in between.
@@ -618,7 +621,7 @@ static int test_alloc_addr(struct unit_test_state *uts, const phys_addr_t ram)
 	return 0;
 }

-static int lib_test_lmb_alloc_addr_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_alloc_addr_norun(struct unit_test_state *uts)
 {
 	int ret;
@@ -630,7 +633,7 @@ static int lib_test_lmb_alloc_addr_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_alloc_addr(uts, 0xE0000000);
 }
-LIB_TEST(lib_test_lmb_alloc_addr_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_alloc_addr_norun, UT_TESTF_MANUAL);

 /* Simulate 512 MiB RAM, reserve 3 blocks, check addresses in between */
 static int test_get_unreserved_size(struct unit_test_state *uts,
@@ -693,7 +696,7 @@ static int test_get_unreserved_size(struct unit_test_state *uts,
 	return 0;
 }

-static int lib_test_lmb_get_free_size_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_get_free_size_norun(struct unit_test_state *uts)
 {
 	int ret;
@@ -705,9 +708,9 @@ static int lib_test_lmb_get_free_size_norun(struct unit_test_state *uts)
 	/* simulate 512 MiB RAM beginning at 1.5GiB */
 	return test_get_unreserved_size(uts, 0xE0000000);
 }
-LIB_TEST(lib_test_lmb_get_free_size_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_get_free_size_norun, UT_TESTF_MANUAL);

-static int lib_test_lmb_flags_norun(struct unit_test_state *uts)
+static int lmb_test_lmb_flags_norun(struct unit_test_state *uts)
 {
 	struct lmb_region *mem, *used;
 	struct alist *mem_lst, *used_lst;
@@ -797,4 +800,12 @@ static int lib_test_lmb_flags_norun(struct unit_test_state *uts)

 	return 0;
 }
-LIB_TEST(lib_test_lmb_flags_norun, UT_TESTF_MANUAL);
+LMB_TEST(lmb_test_lmb_flags_norun, UT_TESTF_MANUAL);
+
+int do_ut_lmb(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
+{
+	struct unit_test *tests = UNIT_TEST_SUITE_START(lmb_test);
+	const int n_ents = UNIT_TEST_SUITE_COUNT(lmb_test);
+
+	return cmd_ut_category("lmb", "lmb_test_", tests, n_ents, argc, argv);
+}

With the LMB tests moved under a separate class of unit tests, invoke these from a separate script, which allows for a system reset once the tests have been run. This clears the LMB memory map after the tests have run.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1: New patch

 test/py/tests/test_lmb.py | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 test/py/tests/test_lmb.py
diff --git a/test/py/tests/test_lmb.py b/test/py/tests/test_lmb.py
new file mode 100644
index 0000000000..b6f9ff9c6a
--- /dev/null
+++ b/test/py/tests/test_lmb.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0+
+# Copyright 2024 Linaro Ltd
+#
+# Run the LMB tests
+
+import pytest
+
+base_script = '''
+ut lmb -f
+'''
+
+@pytest.mark.boardspec('sandbox')
+def test_lmb(u_boot_console):
+    cons = u_boot_console
+    cmd = base_script
+
+    with cons.log.section('LMB Unit Test'):
+        output = cons.run_command_list(cmd.splitlines())
+
+    assert 'Failures: 0' in output[-1]
+
+    # Restart so that the LMB memory map starts with
+    # a clean slate for the next set of tests.
+    u_boot_console.restart_uboot()

The LMB code has been changed to make the memory reservations persistent and global. Make the corresponding change to the lmb_test_dump_all() function to print the global LMB available and used memory.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1:
* Corresponding changes needed to work with alist based lmb lists.

 test/cmd/bdinfo.c | 28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)
diff --git a/test/cmd/bdinfo.c b/test/cmd/bdinfo.c
index 1cd81a195b..3184aaf629 100644
--- a/test/cmd/bdinfo.c
+++ b/test/cmd/bdinfo.c
@@ -5,6 +5,7 @@
  * Copyright 2023 Marek Vasut <marek.vasut+renesas@mailbox.org>
  */

+#include <alist.h>
 #include <console.h>
 #include <mapmem.h>
 #include <asm/global_data.h>
@@ -21,6 +22,7 @@
 #include <asm/cache.h>
 #include <asm/global_data.h>
 #include <display_options.h>
+#include <linux/kernel.h>

 DECLARE_GLOBAL_DATA_PTR;

@@ -99,19 +101,20 @@ static int test_video_info(struct unit_test_state *uts)
 }

 static int lmb_test_dump_region(struct unit_test_state *uts,
-				struct lmb_region *rgn, char *name)
+				struct alist *lmb_rgn_lst, char *name)
 {
+	struct lmb_region *rgn = lmb_rgn_lst->data;
 	unsigned long long base, size, end;
 	enum lmb_flags flags;
 	int i;

-	ut_assert_nextline(" %s.cnt = 0x%lx / max = 0x%lx", name, rgn->cnt, rgn->max);
+	ut_assert_nextline(" %s.count = 0x%hx", name, lmb_rgn_lst->count);

-	for (i = 0; i < rgn->cnt; i++) {
-		base = rgn->region[i].base;
-		size = rgn->region[i].size;
+	for (i = 0; i < lmb_rgn_lst->count; i++) {
+		base = rgn[i].base;
+		size = rgn[i].size;
 		end = base + size - 1;
-		flags = rgn->region[i].flags;
+		flags = rgn[i].flags;

 		if (!IS_ENABLED(CONFIG_SANDBOX) && i == 3) {
 			ut_assert_nextlinen(" %s[%d]\t[", name, i);
@@ -124,11 +127,14 @@ static int lmb_test_dump_region(struct unit_test_state *uts,
 	return 0;
 }

-static int lmb_test_dump_all(struct unit_test_state *uts, struct lmb *lmb)
+static int lmb_test_dump_all(struct unit_test_state *uts)
 {
+	extern struct alist lmb_free_mem;
+	extern struct alist lmb_used_mem;
+
 	ut_assert_nextline("lmb_dump_all:");
-	ut_assertok(lmb_test_dump_region(uts, &lmb->memory, "memory"));
-	ut_assertok(lmb_test_dump_region(uts, &lmb->reserved, "reserved"));
+	ut_assertok(lmb_test_dump_region(uts, &lmb_free_mem, "memory"));
+	ut_assertok(lmb_test_dump_region(uts, &lmb_used_mem, "reserved"));

 	return 0;
 }
@@ -190,9 +196,7 @@ static int bdinfo_test_all(struct unit_test_state *uts)
 #endif

 	if (IS_ENABLED(CONFIG_LMB) && gd->fdt_blob) {
-		struct lmb lmb;
-
-		ut_assertok(lmb_test_dump_all(uts, &lmb));
+		ut_assertok(lmb_test_dump_all(uts));
 		if (IS_ENABLED(CONFIG_OF_REAL))
 			ut_assert_nextline("devicetree = %s", fdtdec_get_srcname());
 	}
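To make the alist-based iteration in the hunk above concrete, here is a self-contained sketch of the pattern: elements live in a contiguous buffer reached through a typed pointer to the list's data member. The struct fields here are a simplification for illustration, not U-Boot's actual alist definition.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for struct lmb_region and struct alist */
struct lmb_region_model {
	unsigned long long base;
	unsigned long long size;
};

struct alist_model {
	void *data;		/* contiguous array of elements */
	unsigned short count;	/* number of valid elements */
};

/* Walk the list the way lmb_test_dump_region() does, summing region sizes */
unsigned long long total_size(const struct alist_model *lst)
{
	const struct lmb_region_model *rgn = lst->data;
	unsigned long long total = 0;

	for (unsigned short i = 0; i < lst->count; i++)
		total += rgn[i].size;

	return total;
}
```

The point is that, unlike the old fixed-size `rgn->region[i]` arrays, the list grows dynamically and is always indexed through `->data` plus `->count`.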

The LMB module is to be used as a backend for allocating and freeing up memory requested from other modules like EFI. These memory requests differ from typical LMB reservations in that memory required by the EFI module cannot be overwritten or re-requested. Add versions of the LMB API functions which take flags for allocating and freeing up memory. The caller can then use these APIs to specify the type of memory that is required. For now, these functions will be used by the EFI memory module.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1: New patch

 include/lmb.h |  6 ++++++
 lib/lmb.c     | 39 ++++++++++++++++++++++++++++++++++++++-
 2 files changed, 44 insertions(+), 1 deletion(-)
diff --git a/include/lmb.h b/include/lmb.h
index d9d5e3f3cb..afab04426d 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -74,8 +74,13 @@ long lmb_reserve(phys_addr_t base, phys_size_t size);
 long lmb_reserve_flags(phys_addr_t base, phys_size_t size,
 		       enum lmb_flags flags);
 phys_addr_t lmb_alloc(phys_size_t size, ulong align);
+phys_addr_t lmb_alloc_flags(phys_size_t size, ulong align, uint flags);
 phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr);
+phys_addr_t lmb_alloc_base_flags(phys_size_t size, ulong align,
+				 phys_addr_t max_addr, uint flags);
 phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size);
+phys_addr_t lmb_alloc_addr_flags(phys_addr_t base, phys_size_t size,
+				 uint flags);
 phys_size_t lmb_get_free_size(phys_addr_t addr);

 /**
@@ -91,6 +96,7 @@ phys_size_t lmb_get_free_size(phys_addr_t addr);
 int lmb_is_reserved_flags(phys_addr_t addr, int flags);

 long lmb_free(phys_addr_t base, phys_size_t size);
+long lmb_free_flags(phys_addr_t base, phys_size_t size, uint flags);

 void lmb_dump_all(void);
 void lmb_dump_all_force(void);
diff --git a/lib/lmb.c b/lib/lmb.c
index 4480710d73..d2edb3525a 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -468,7 +468,7 @@ long lmb_add(phys_addr_t base, phys_size_t size)
 	return lmb_add_region(lmb_rgn_lst, base, size);
 }

-long lmb_free(phys_addr_t base, phys_size_t size)
+static long __lmb_free(phys_addr_t base, phys_size_t size)
 {
 	struct lmb_region *rgn;
 	struct alist *lmb_rgn_lst = &lmb_used_mem;
@@ -519,6 +519,17 @@ long lmb_free(phys_addr_t base, phys_size_t size)
 				    rgn[i].flags);
 }

+long lmb_free(phys_addr_t base, phys_size_t size)
+{
+	return __lmb_free(base, size);
+}
+
+long lmb_free_flags(phys_addr_t base, phys_size_t size,
+		    __always_unused uint flags)
+{
+	return __lmb_free(base, size);
+}
+
 long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags)
 {
 	struct alist *lmb_rgn_lst = &lmb_used_mem;
@@ -602,6 +613,12 @@ phys_addr_t lmb_alloc(phys_size_t size, ulong align)
 	return lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE);
 }

+phys_addr_t lmb_alloc_flags(phys_size_t size, ulong align, uint flags)
+{
+	return __lmb_alloc_base(size, align, LMB_ALLOC_ANYWHERE,
+				flags);
+}
+
 phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 {
 	phys_addr_t alloc;
@@ -615,6 +632,20 @@ phys_addr_t lmb_alloc_base(phys_size_t size, ulong align, phys_addr_t max_addr)
 	return alloc;
 }

+phys_addr_t lmb_alloc_base_flags(phys_size_t size, ulong align,
+				 phys_addr_t max_addr, uint flags)
+{
+	phys_addr_t alloc;
+
+	alloc = __lmb_alloc_base(size, align, max_addr, flags);
+
+	if (alloc == 0)
+		printf("ERROR: Failed to allocate 0x%lx bytes below 0x%lx.\n",
+		       (ulong)size, (ulong)max_addr);
+
+	return alloc;
+}
+
 static phys_addr_t __lmb_alloc_addr(phys_addr_t base, phys_size_t size,
 				    enum lmb_flags flags)
 {
@@ -649,6 +680,12 @@ phys_addr_t lmb_alloc_addr(phys_addr_t base, phys_size_t size)
 	return __lmb_alloc_addr(base, size, LMB_NONE);
 }

+phys_addr_t lmb_alloc_addr_flags(phys_addr_t base, phys_size_t size,
+				 uint flags)
+{
+	return __lmb_alloc_addr(base, size, flags);
+}
+
 /* Return number of bytes from a given address that are free */
 phys_size_t lmb_get_free_size(phys_addr_t addr)
 {
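To illustrate the LMB_NOOVERWRITE semantics that these *_flags() variants carry, here is a self-contained toy model (a single region, made-up names, not the U-Boot implementation): a plain reservation may be re-requested, while a region marked no-overwrite on either side of the request must fail.

```c
#include <assert.h>

/* Toy flag values mirroring the enum lmb_flags bit positions */
enum model_flags {
	MODEL_NONE = 1 << 0,
	MODEL_NOOVERWRITE = 1 << 2,
};

struct model_region {
	unsigned long base;
	unsigned long size;
	unsigned int flags;
	int used;
};

/*
 * Reserve [base, base + size). Re-reserving an overlapping range is allowed
 * only when neither the existing region nor the new request is NOOVERWRITE.
 */
long model_reserve(struct model_region *rgn, unsigned long base,
		   unsigned long size, unsigned int flags)
{
	if (!rgn->used) {
		rgn->base = base;
		rgn->size = size;
		rgn->flags = flags;
		rgn->used = 1;
		return 0;
	}
	if (base < rgn->base + rgn->size && base + size > rgn->base) {
		if ((rgn->flags | flags) & MODEL_NOOVERWRITE)
			return -1;	/* cannot re-request this memory */
		return 0;		/* plain reservation may be reused */
	}
	return 0;
}
```

This is the property the series relies on: EFI allocations pass a no-overwrite flag, so a later load operation cannot silently reuse that range.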

Add a flag, LMB_NONOTIFY, that can be passed to the LMB APIs when reserving memory. Passing this flag results in no notification being sent from the LMB module for changes to the LMB memory map.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1: New patch

 include/lmb.h | 1 +
 1 file changed, 1 insertion(+)
diff --git a/include/lmb.h b/include/lmb.h
index afab04426d..dbf3e9e30f 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -24,6 +24,7 @@ enum lmb_flags {
 	LMB_NONE = BIT(0),
 	LMB_NOMAP = BIT(1),
 	LMB_NOOVERWRITE = BIT(2),
+	LMB_NONOTIFY = BIT(3),
 };
/**
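A minimal sketch of how such a bit gates notifications (flag values mirror the enum above; the helper name is made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>

/* Flag bits laid out like enum lmb_flags in the patch above */
enum sketch_flags {
	SK_LMB_NONE = 1 << 0,
	SK_LMB_NOMAP = 1 << 1,
	SK_LMB_NOOVERWRITE = 1 << 2,
	SK_LMB_NONOTIFY = 1 << 3,
};

/* Return true when a map-update notification should be sent for a request */
bool sketch_notify_needed(unsigned int flags)
{
	return !(flags & SK_LMB_NONOTIFY);
}
```

In the series, the EFI memory module passes this bit so that its own LMB calls do not echo a notification back to itself.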

Use the LMB APIs for allocating and freeing up memory. With this, the LMB module becomes the common backend for managing non-U-Boot-image memory that might be requested by other modules.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1: New patch

 lib/efi_loader/Kconfig      |  1 +
 lib/efi_loader/efi_memory.c | 78 +++++++++++--------------------------
 2 files changed, 23 insertions(+), 56 deletions(-)
diff --git a/lib/efi_loader/Kconfig b/lib/efi_loader/Kconfig
index ee71f41714..bdf5732974 100644
--- a/lib/efi_loader/Kconfig
+++ b/lib/efi_loader/Kconfig
@@ -18,6 +18,7 @@ config EFI_LOADER
 	select DM_EVENT
 	select EVENT_DYNAMIC
 	select LIB_UUID
+	select LMB
 	imply PARTITION_UUIDS
 	select REGEX
 	imply FAT
diff --git a/lib/efi_loader/efi_memory.c b/lib/efi_loader/efi_memory.c
index 12cf23fa3f..5691b5da03 100644
--- a/lib/efi_loader/efi_memory.c
+++ b/lib/efi_loader/efi_memory.c
@@ -9,12 +9,14 @@

 #include <efi_loader.h>
 #include <init.h>
+#include <lmb.h>
 #include <log.h>
 #include <malloc.h>
 #include <mapmem.h>
 #include <watchdog.h>
 #include <asm/cache.h>
 #include <asm/global_data.h>
+#include <asm/io.h>
 #include <asm/sections.h>
 #include <linux/list_sort.h>
 #include <linux/sizes.h>
@@ -435,55 +437,6 @@ static efi_status_t efi_check_allocated(u64 addr, bool must_be_allocated)
 	return EFI_NOT_FOUND;
 }

-/**
- * efi_find_free_memory() - find free memory pages
- *
- * @len:	size of memory area needed
- * @max_addr:	highest address to allocate
- * Return:	pointer to free memory area or 0
- */
-static uint64_t efi_find_free_memory(uint64_t len, uint64_t max_addr)
-{
-	struct list_head *lhandle;
-
-	/*
-	 * Prealign input max address, so we simplify our matching
-	 * logic below and can just reuse it as return pointer.
-	 */
-	max_addr &= ~EFI_PAGE_MASK;
-
-	list_for_each(lhandle, &efi_mem) {
-		struct efi_mem_list *lmem = list_entry(lhandle,
-						       struct efi_mem_list, link);
-		struct efi_mem_desc *desc = &lmem->desc;
-		uint64_t desc_len = desc->num_pages << EFI_PAGE_SHIFT;
-		uint64_t desc_end = desc->physical_start + desc_len;
-		uint64_t curmax = min(max_addr, desc_end);
-		uint64_t ret = curmax - len;
-
-		/* We only take memory from free RAM */
-		if (desc->type != EFI_CONVENTIONAL_MEMORY)
-			continue;
-
-		/* Out of bounds for max_addr */
-		if ((ret + len) > max_addr)
-			continue;
-
-		/* Out of bounds for upper map limit */
-		if ((ret + len) > desc_end)
-			continue;
-
-		/* Out of bounds for lower map limit */
-		if (ret < desc->physical_start)
-			continue;
-
-		/* Return the highest address in this map within bounds */
-		return ret;
-	}
-
-	return 0;
-}
-
 /**
  * efi_allocate_pages - allocate memory pages
  *
@@ -498,6 +451,7 @@ efi_status_t efi_allocate_pages(enum efi_allocate_type type,
 				efi_uintn_t pages, uint64_t *memory)
 {
 	u64 len;
+	uint flags;
 	efi_status_t ret;
 	uint64_t addr;
@@ -513,33 +467,35 @@ efi_status_t efi_allocate_pages(enum efi_allocate_type type,
 	    (len >> EFI_PAGE_SHIFT) != (u64)pages)
 		return EFI_OUT_OF_RESOURCES;

+	flags = LMB_NOOVERWRITE | LMB_NONOTIFY;
 	switch (type) {
 	case EFI_ALLOCATE_ANY_PAGES:
 		/* Any page */
-		addr = efi_find_free_memory(len, -1ULL);
+		addr = (u64)lmb_alloc_flags(len, EFI_PAGE_SIZE, flags);
 		if (!addr)
 			return EFI_OUT_OF_RESOURCES;
 		break;
 	case EFI_ALLOCATE_MAX_ADDRESS:
 		/* Max address */
-		addr = efi_find_free_memory(len, *memory);
+		addr = (u64)lmb_alloc_base_flags(len, EFI_PAGE_SIZE, *memory,
+						 flags);
 		if (!addr)
 			return EFI_OUT_OF_RESOURCES;
 		break;
 	case EFI_ALLOCATE_ADDRESS:
 		if (*memory & EFI_PAGE_MASK)
 			return EFI_NOT_FOUND;
-		/* Exact address, reserve it. The addr is already in *memory. */
-		ret = efi_check_allocated(*memory, false);
-		if (ret != EFI_SUCCESS)
-			return EFI_NOT_FOUND;
-		addr = *memory;
+
+		addr = (u64)lmb_alloc_addr_flags(*memory, len, flags);
+		if (!addr)
+			return EFI_OUT_OF_RESOURCES;
 		break;
 	default:
 		/* UEFI doesn't specify other allocation types */
 		return EFI_INVALID_PARAMETER;
 	}

+	addr = (u64)(uintptr_t)map_sysmem(addr, 0);
 	/* Reserve that map in our memory maps */
 	ret = efi_add_memory_map_pg(addr, pages, memory_type, true);
 	if (ret != EFI_SUCCESS)
@@ -560,6 +516,9 @@ efi_status_t efi_allocate_pages(enum efi_allocate_type type,
  */
 efi_status_t efi_free_pages(uint64_t memory, efi_uintn_t pages)
 {
+	u64 len;
+	uint flags;
+	long status;
 	efi_status_t ret;

 	ret = efi_check_allocated(memory, true);
@@ -573,6 +532,13 @@ efi_status_t efi_free_pages(uint64_t memory, efi_uintn_t pages)
 		return EFI_INVALID_PARAMETER;
 	}

+	flags = LMB_NOOVERWRITE | LMB_NONOTIFY;
+	len = (u64)pages << EFI_PAGE_SHIFT;
+	status = lmb_free_flags(virt_to_phys((void *)(uintptr_t)memory), len,
+				flags);
+	if (status)
+		return EFI_NOT_FOUND;
+
 	ret = efi_add_memory_map_pg(memory, pages, EFI_CONVENTIONAL_MEMORY,
 				    false);
 	if (ret != EFI_SUCCESS)
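The patch above keeps the existing EFI page arithmetic: a byte length is derived as `pages << EFI_PAGE_SHIFT`, with a shift-back comparison to catch 64-bit overflow. A self-contained model of that check (names are illustrative, not the EFI loader's):

```c
#include <assert.h>
#include <stdint.h>

/* EFI pages are 4 KiB */
#define MODEL_EFI_PAGE_SHIFT 12

/* Return byte length for a page count, or 0 when the shift overflows u64 */
uint64_t model_pages_to_len(uint64_t pages)
{
	uint64_t len = pages << MODEL_EFI_PAGE_SHIFT;

	/* If shifting back does not recover the input, high bits were lost */
	if ((len >> MODEL_EFI_PAGE_SHIFT) != pages)
		return 0;
	return len;
}
```

This is why efi_allocate_pages() can reject absurd page counts before ever touching the LMB allocator.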

Add an event which will be used for notifying changes in the LMB module's memory map. This is to be used for having a synchronous view of the memory that is currently in use and that is available for allocations.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1:
* Remove the event for EFI notifications.

 common/event.c  |  2 ++
 include/event.h | 14 ++++++++++++++
 2 files changed, 16 insertions(+)
diff --git a/common/event.c b/common/event.c
index dda569d447..fc8002603c 100644
--- a/common/event.c
+++ b/common/event.c
@@ -48,6 +48,8 @@ const char *const type_name[] = {

 	/* main loop events */
 	"main_loop",
+
+	"lmb_map_update",
 };

 _Static_assert(ARRAY_SIZE(type_name) == EVT_COUNT, "event type_name size");
diff --git a/include/event.h b/include/event.h
index fb353ad623..fce7e96170 100644
--- a/include/event.h
+++ b/include/event.h
@@ -153,6 +153,14 @@ enum event_t {
 	 */
 	EVT_MAIN_LOOP,

+	/**
+	 * @EVT_LMB_MAP_UPDATE:
+	 * This event is triggered on an update to the LMB reserved memory
+	 * region. This can be used to notify about any LMB memory allocation
+	 * or freeing of memory having occurred.
+	 */
+	EVT_LMB_MAP_UPDATE,
+
 	/**
 	 * @EVT_COUNT:
 	 * This constants holds the maximum event number + 1 and is used when
@@ -203,6 +211,12 @@ union event_data {
 		oftree tree;
 		struct bootm_headers *images;
 	} ft_fixup;
+
+	struct event_lmb_map_update {
+		u64 base;
+		u64 size;
+		u8 op;
+	} lmb_map;
 };
/**
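A toy model of how a listener would consume the payload added above: a handler receives the base/size/op of an LMB map change. Only the payload layout mirrors the patch; the dispatcher and names here are made up and stand in for U-Boot's event notification machinery.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical op codes for the u8 'op' field */
enum { MODEL_OP_RESERVE = 1, MODEL_OP_FREE = 2 };

/* Payload layout mirroring struct event_lmb_map_update from the patch */
union model_event_data {
	struct model_lmb_map_update {
		uint64_t base;
		uint64_t size;
		uint8_t op;
	} lmb_map;
};

typedef void (*model_spy_t)(const union model_event_data *data, void *ctx);

/* A sample spy that records the most recent map change */
struct model_lmb_map_update last_update;

void record_spy(const union model_event_data *data, void *ctx)
{
	(void)ctx;
	last_update = data->lmb_map;
}

/* Stand-in for event_notify(): deliver a map update to one registered spy */
void model_notify_map_update(model_spy_t spy, void *ctx, uint64_t base,
			     uint64_t size, uint8_t op)
{
	union model_event_data data = {
		.lmb_map = { .base = base, .size = size, .op = op },
	};

	if (spy)
		spy(&data, ctx);
}
```

With such a payload, an EFI-side listener can mirror each LMB reservation or free into its own memory map without polling.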

Add a Kconfig symbol to enable getting updates on any memory map changes that might be done by the LMB module. This notification mechanism can then be used to have a synchronous view of allocated and free memory.
Signed-off-by: Sughosh Ganu <sughosh.ganu@linaro.org>
---
Changes since V1:
* Change the description to highlight only LMB notifications.
* Add a separate line for dependencies.

 lib/Kconfig | 10 ++++++++++
 1 file changed, 10 insertions(+)
diff --git a/lib/Kconfig b/lib/Kconfig
index 7eea517b3b..b422183a0f 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -74,6 +74,16 @@ config HAVE_PRIVATE_LIBGCC
 config LIB_UUID
 	bool

+config MEM_MAP_UPDATE_NOTIFY
+	bool "Get notified of any changes to the LMB memory map"
+	depends on EVENT && LMB && EFI_LOADER
+	default y
+	help
+	  Enable this option to get notification on any changes to the
+	  memory that is allocated or freed by the LMB module. This will
+	  allow different modules that allocate memory or maintain a memory
+	  map to have a synchronous view of available and allocated memory.
+
 config RANDOM_UUID
 	bool "GPT Random UUID generation"
 	select LIB_UUID

On Thu, Jul 04, 2024 at 01:05:34PM +0530, Sughosh Ganu wrote:
Add a Kconfig symbol to enable getting updates on any memory map changes that might be done by the LMB module. This notification mechanism can then be used to have a synchronous view of allocated and free memory.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Change the description to highlight only LMB notifications.
- Add a separate line for dependencies.
 lib/Kconfig | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/lib/Kconfig b/lib/Kconfig
index 7eea517b3b..b422183a0f 100644
--- a/lib/Kconfig
+++ b/lib/Kconfig
@@ -74,6 +74,16 @@ config HAVE_PRIVATE_LIBGCC
 config LIB_UUID
 	bool

+config MEM_MAP_UPDATE_NOTIFY
+	bool "Get notified of any changes to the LMB memory map"
+	depends on EVENT && LMB && EFI_LOADER
+	default y
+	help
+	  Enable this option to get notification on any changes to the
+	  memory that is allocated or freed by the LMB module. This will
+	  allow different modules that allocate memory or maintain a memory
+	  map to have a synchronous view of available and allocated memory.
This needs to be select'd when it's going to be used, opting out of making sure memory reservations are obeyed isn't a good idea.

On Fri, 5 Jul 2024 at 22:51, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:05:34PM +0530, Sughosh Ganu wrote:
Add a Kconfig symbol to enable getting updates on any memory map changes that might be done by the LMB module. This notification mechanism can then be used to have a synchronous view of allocated and free memory.
This needs to be select'd when it's going to be used, opting out of making sure memory reservations are obeyed isn't a good idea.
+1 which begs the question, do we need the config option at all ?
Cheers /Ilias
-- Tom

On Mon, 22 Jul 2024 at 18:00, Ilias Apalodimas ilias.apalodimas@linaro.org wrote:
On Fri, 5 Jul 2024 at 22:51, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:05:34PM +0530, Sughosh Ganu wrote:
Add a Kconfig symbol to enable getting updates on any memory map changes that might be done by the LMB module. This notification mechanism can then be used to have a synchronous view of allocated and free memory.
This needs to be select'd when it's going to be used, opting out of making sure memory reservations are obeyed isn't a good idea.
+1 which begs the question, do we need the config option at all ?
The config symbol can be used for removing the code for platforms which do not support EFI ?
-sughosh

Hi Sughosh,
On Mon, 22 Jul 2024 at 15:59, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 22 Jul 2024 at 18:00, Ilias Apalodimas ilias.apalodimas@linaro.org wrote:
On Fri, 5 Jul 2024 at 22:51, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:05:34PM +0530, Sughosh Ganu wrote:
Add a Kconfig symbol to enable getting updates on any memory map changes that might be done by the LMB module. This notification mechanism can then be used to have a synchronous view of allocated and free memory.
This needs to be select'd when it's going to be used, opting out of making sure memory reservations are obeyed isn't a good idea.
+1 which begs the question, do we need the config option at all ?
The config symbol can be used for removing the code for platforms which do not support EFI ?
In that case, it shouldn't be a user config option. It should be a Kconfig symbol that is automatically set if EFI_LOADER is enabled.
Regards /Ilias

Hi Sughosh,
On Mon, 22 Jul 2024 at 13:59, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 22 Jul 2024 at 18:00, Ilias Apalodimas ilias.apalodimas@linaro.org wrote:
On Fri, 5 Jul 2024 at 22:51, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:05:34PM +0530, Sughosh Ganu wrote:
Add a Kconfig symbol to enable getting updates on any memory map changes that might be done by the LMB module. This notification mechanism can then be used to have a synchronous view of allocated and free memory.
This needs to be select'd when it's going to be used, opting out of making sure memory reservations are obeyed isn't a good idea.
+1 which begs the question, do we need the config option at all ?
The config symbol can be used for removing the code for platforms which do not support EFI ?
I am still of the so-far firm opinion that this can be done once, before booting, rather than maintaining two separate tables as we go.
Regards, Simon

On Tue, Jul 23, 2024 at 01:42:59PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 22 Jul 2024 at 13:59, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 22 Jul 2024 at 18:00, Ilias Apalodimas ilias.apalodimas@linaro.org wrote:
On Fri, 5 Jul 2024 at 22:51, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:05:34PM +0530, Sughosh Ganu wrote:
Add a Kconfig symbol to enable getting updates on any memory map changes that might be done by the LMB module. This notification mechanism can then be used to have a synchronous view of allocated and free memory.
This needs to be select'd when it's going to be used, opting out of making sure memory reservations are obeyed isn't a good idea.
+1 which begs the question, do we need the config option at all ?
The config symbol can be used for removing the code for platforms which do not support EFI ?
I am still of the so-far firm opinion that this can be done once, before booting, rather than maintaining two separate tables as we go.
Did you see the part in the thread where he explained the multiple entry points that would need to be kept in sync?

Hi Tom,
On Tue, 23 Jul 2024 at 08:20, Tom Rini trini@konsulko.com wrote:
On Tue, Jul 23, 2024 at 01:42:59PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 22 Jul 2024 at 13:59, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 22 Jul 2024 at 18:00, Ilias Apalodimas ilias.apalodimas@linaro.org wrote:
On Fri, 5 Jul 2024 at 22:51, Tom Rini trini@konsulko.com wrote:
This needs to be select'd when it's going to be used, opting out of making sure memory reservations are obeyed isn't a good idea.
+1 which begs the question, do we need the config option at all ?
The config symbol can be used for removing the code for platforms which do not support EFI ?
I am still of the so-far firm opinion that this can be done once, before booting, rather than maintaining two separate tables as we go.
Did you see the part in the thread where he explained the multiple entry points that would need to be kept in sync?
Yes, but I'm not sure what they are, nor why a shared function cannot be called twice from two different places.
Regards, Simon

On Wed, Jul 24, 2024 at 08:37:14AM -0600, Simon Glass wrote:
Hi Tom,
On Tue, 23 Jul 2024 at 08:20, Tom Rini trini@konsulko.com wrote:
On Tue, Jul 23, 2024 at 01:42:59PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 22 Jul 2024 at 13:59, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 22 Jul 2024 at 18:00, Ilias Apalodimas ilias.apalodimas@linaro.org wrote:
On Fri, 5 Jul 2024 at 22:51, Tom Rini trini@konsulko.com wrote:
This needs to be select'd when it's going to be used, opting out of making sure memory reservations are obeyed isn't a good idea.
+1 which begs the question, do we need the config option at all ?
The config symbol can be used for removing the code for platforms which do not support EFI ?
I am still of the so-far firm opinion that this can be done once, before booting, rather than maintaining two separate tables as we go.
Did you see the part in the thread where he explained the multiple entry points that would need to be kept in sync?
Yes, but I'm not sure what they are, nor why a shared function cannot be called twice from two different places.
Because we don't want to miss the third or fourth entry point down the road. That's why going the other direction makes more sense I believe, we won't have a future problem here because we designed with that in mind.

Hi Tom,
On Wed, 24 Jul 2024 at 08:52, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 08:37:14AM -0600, Simon Glass wrote:
Hi Tom,
On Tue, 23 Jul 2024 at 08:20, Tom Rini trini@konsulko.com wrote:
On Tue, Jul 23, 2024 at 01:42:59PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 22 Jul 2024 at 13:59, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 22 Jul 2024 at 18:00, Ilias Apalodimas ilias.apalodimas@linaro.org wrote:
+1 which begs the question, do we need the config option at all ?
The config symbol can be used for removing the code for platforms which do not support EFI ?
I am still of the so-far firm opinion that this can be done once, before booting, rather than maintaining two separate tables as we go.
Did you see the part in the thread where he explained the multiple entry points that would need to be kept in sync?
Yes, but I'm not sure what they are, nor why a shared function cannot be called twice from two different places.
Because we don't want to miss the third or fourth entry point down the road. That's why going the other direction makes more sense I believe, we won't have a future problem here because we designed with that in mind.
You might be right, but I don't even know what we are referring to here...what are the two 'entry points'?
My current belief is that we can set up the EFI memory table before booting, with a single pass through the table. Maintaining two independent tables as we go doesn't seem very useful. It is harder to test too.
Regards, Simon

On Wed, Jul 24, 2024 at 09:40:35AM -0600, Simon Glass wrote:
Hi Tom,
On Wed, 24 Jul 2024 at 08:52, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 08:37:14AM -0600, Simon Glass wrote:
Hi Tom,
On Tue, 23 Jul 2024 at 08:20, Tom Rini trini@konsulko.com wrote:
On Tue, Jul 23, 2024 at 01:42:59PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 22 Jul 2024 at 13:59, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The config symbol can be used for removing the code for platforms which do not support EFI ?
I am still of the so-far firm opinion that this can be done once, before booting, rather than maintaining two separate tables as we go.
Did you see the part in the thread where he explained the multiple entry points that would need to be kept in sync?
Yes, but I'm not sure what they are, nor why a shared function cannot be called twice from two different places.
Because we don't want to miss the third or fourth entry point down the road. That's why going the other direction makes more sense I believe, we won't have a future problem here because we designed with that in mind.
You might be right, but I don't even know what we are referring to here...what are the two 'entry points'?
I would have to refer back to Sughosh's explanation to repeat it, sorry.
My current belief is that we can set up the EFI memory table before booting, with a single pass through the table. Maintaining two independent tables as we go doesn't seem very useful. It is harder to test too.
You seem to be fixated on "booting" when at least part of the issue is re-entering the EFI_LOADER. And since EFI_LOADER needs X/Y/Z as well, we had much earlier rejected "make EFI_LOADER's memory model what everything else uses".

Hi Tom,
On Wed, 24 Jul 2024 at 16:47, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 09:40:35AM -0600, Simon Glass wrote:
Hi Tom,
On Wed, 24 Jul 2024 at 08:52, Tom Rini trini@konsulko.com wrote:
On Wed, Jul 24, 2024 at 08:37:14AM -0600, Simon Glass wrote:
Hi Tom,
On Tue, 23 Jul 2024 at 08:20, Tom Rini trini@konsulko.com wrote:
On Tue, Jul 23, 2024 at 01:42:59PM +0100, Simon Glass wrote:
I am still of the so-far firm opinion that this can be done once, before booting, rather than maintaining two separate tables as we go.
Did you see the part in the thread where he explained the multiple entry points that would need to be kept in sync?
Yes, but I'm not sure what they are, nor why a shared function cannot be called twice from two different places.
Because we don't want to miss the third or fourth entry point down the road. That's why going the other direction makes more sense I believe, we won't have a future problem here because we designed with that in mind.
You might be right, but I don't even know what we are referring to here...what are the two 'entry points'?
I would have to refer back to Sughosh's explanation to repeat it, sorry.
My current belief is that we can set up the EFI memory table before booting, with a single pass through the table. Maintaining two independent tables as we go doesn't seem very useful. It is harder to test too.
You seem to be fixated on "booting" when at least part of the issue is re-entering the EFI_LOADER. And since EFI_LOADER needs X/Y/Z as well, we had much earlier rejected "make EFI_LOADER's memory model what everything else uses".
I wouldn't say I am fixated. EFI_LOADER's purpose is to boot an OS.
What do you mean by re-entering the EFI_LOADER? Do you mean running one app, then coming back to U-Boot, then running another? I am not suggesting we reinit EFI in that case, if that is what you are worried about.
I sent a separate series to show some of the issues I've been concerned about for a few years now. Perhaps we should schedule a discussion to talk all this through one day?
Regards, Simon

Add a function to check whether a given address falls within the RAM used by U-Boot. This will be used to notify other modules when such an address gets allocated, so that it does not get re-allocated by some other module.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Have a common weak function for all platforms, sandbox included.

 common/board_r.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/common/board_r.c b/common/board_r.c
index 1a5bb98218..427688168c 100644
--- a/common/board_r.c
+++ b/common/board_r.c
@@ -72,6 +72,11 @@ DECLARE_GLOBAL_DATA_PTR;
 
 ulong monitor_flash_len;
 
+__weak bool __maybe_unused is_addr_in_ram(uintptr_t addr)
+{
+	return addr >= gd->ram_base && addr <= gd->ram_top;
+}
+
 __weak int board_flash_wp_on(void)
 {
 	/*

In U-Boot, LMB and EFI are the two primary modules that provide memory allocation and reservation APIs. Both modules operate on the same regions of memory for allocations. Use the LMB memory map update event to notify other interested listeners about a change in its memory map. This can then be used by the other module to keep track of available and used memory.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Notify addition of memory to the LMB memory map.
* Add a function lmb_notify() to check if notification has to be sent.

 lib/lmb.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 58 insertions(+), 3 deletions(-)
diff --git a/lib/lmb.c b/lib/lmb.c
index d2edb3525a..387ec2ac65 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -8,6 +8,7 @@
 
 #include <alist.h>
 #include <efi_loader.h>
+#include <event.h>
 #include <image.h>
 #include <mapmem.h>
 #include <lmb.h>
@@ -21,12 +22,36 @@
 
 DECLARE_GLOBAL_DATA_PTR;
 
+#define MAP_OP_RESERVE		(u8)0x1
+#define MAP_OP_FREE		(u8)0x2
+#define MAP_OP_ADD		(u8)0x3
+
 #define LMB_ALLOC_ANYWHERE	0
 #define LMB_ALIST_INITIAL_SIZE	4
 
 struct alist lmb_free_mem;
 struct alist lmb_used_mem;
 
+extern bool is_addr_in_ram(uintptr_t addr);
+
+static bool lmb_notify(enum lmb_flags flags)
+{
+	return !(flags & LMB_NONOTIFY);
+}
+
+static void lmb_map_update_notify(phys_addr_t addr, phys_size_t size,
+				  u8 op)
+{
+	struct event_lmb_map_update lmb_map = {0};
+
+	lmb_map.base = addr;
+	lmb_map.size = size;
+	lmb_map.op = op;
+
+	if (is_addr_in_ram((uintptr_t)addr))
+		event_notify(EVT_LMB_MAP_UPDATE, &lmb_map, sizeof(lmb_map));
+}
+
 static void lmb_dump_region(struct alist *lmb_rgn_lst, char *name)
 {
 	struct lmb_region *rgn = lmb_rgn_lst->data;
@@ -463,9 +488,17 @@ static long lmb_add_region(struct alist *lmb_rgn_lst, phys_addr_t base,
 /* This routine may be called with relocation disabled. */
 long lmb_add(phys_addr_t base, phys_size_t size)
 {
+	long ret;
 	struct alist *lmb_rgn_lst = &lmb_free_mem;
 
-	return lmb_add_region(lmb_rgn_lst, base, size);
+	ret = lmb_add_region(lmb_rgn_lst, base, size);
+	if (ret)
+		return ret;
+
+	if (CONFIG_IS_ENABLED(MEM_MAP_UPDATE_NOTIFY))
+		lmb_map_update_notify(base, size, MAP_OP_ADD);
+
+	return 0;
 }
 
 static long __lmb_free(phys_addr_t base, phys_size_t size)
@@ -521,7 +554,16 @@ static long __lmb_free(phys_addr_t base, phys_size_t size)
 
 long lmb_free(phys_addr_t base, phys_size_t size)
 {
-	return __lmb_free(base, size);
+	long ret;
+
+	ret = __lmb_free(base, size);
+	if (ret < 0)
+		return ret;
+
+	if (CONFIG_IS_ENABLED(MEM_MAP_UPDATE_NOTIFY))
+		lmb_map_update_notify(base, size, MAP_OP_FREE);
+
+	return 0;
 }
 
 long lmb_free_flags(phys_addr_t base, phys_size_t size,
@@ -532,9 +574,17 @@ long lmb_free_flags(phys_addr_t base, phys_size_t size,
 
 long lmb_reserve_flags(phys_addr_t base, phys_size_t size, enum lmb_flags flags)
 {
+	long ret = 0;
 	struct alist *lmb_rgn_lst = &lmb_used_mem;
 
-	return lmb_add_region_flags(lmb_rgn_lst, base, size, flags);
+	ret = lmb_add_region_flags(lmb_rgn_lst, base, size, flags);
+	if (ret < 0)
+		return -1;
+
+	if (CONFIG_IS_ENABLED(MEM_MAP_UPDATE_NOTIFY) && lmb_notify(flags))
+		lmb_map_update_notify(base, size, MAP_OP_RESERVE);
+
+	return ret;
 }
 
 long lmb_reserve(phys_addr_t base, phys_size_t size)
@@ -596,6 +646,11 @@ static phys_addr_t __lmb_alloc_base(phys_size_t size, ulong align,
 			if (lmb_add_region_flags(&lmb_used_mem, base,
 						 size, flags) < 0)
 				return 0;
+
+			if (CONFIG_IS_ENABLED(MEM_MAP_UPDATE_NOTIFY) &&
+			    lmb_notify(flags))
+				lmb_map_update_notify(base, size,
+						      MAP_OP_RESERVE);
 			return base;
 		}

There are events that can be used to notify other interested modules of any changes in available and occupied memory. These changes happen when a module allocates, reserves, or frees memory. Other interested modules should be notified of such changes in the memory map so that allocated memory does not get overwritten. Add an event handler in the EFI memory module to update the EFI memory map accordingly when such changes happen. As a consequence, any subsequent memory request will honour the updated memory map, and memory will only be allocated from available regions.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Handle the addition of memory to the LMB memory map.
* Pass the overlap_only_ram parameter to efi_add_memory_map_pg() based on the type of operation.

 lib/efi_loader/Kconfig      |  1 +
 lib/efi_loader/efi_memory.c | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/lib/efi_loader/Kconfig b/lib/efi_loader/Kconfig
index bdf5732974..2d90bcef2f 100644
--- a/lib/efi_loader/Kconfig
+++ b/lib/efi_loader/Kconfig
@@ -16,6 +16,7 @@ config EFI_LOADER
 	select CHARSET
 	# We need to send DM events, dynamically, in the EFI block driver
 	select DM_EVENT
+	select EVENT
 	select EVENT_DYNAMIC
 	select LIB_UUID
 	select LMB
diff --git a/lib/efi_loader/efi_memory.c b/lib/efi_loader/efi_memory.c
index 5691b5da03..bd12504f72 100644
--- a/lib/efi_loader/efi_memory.c
+++ b/lib/efi_loader/efi_memory.c
@@ -45,6 +45,10 @@ static LIST_HEAD(efi_mem);
 void *efi_bounce_buffer;
 #endif
 
+#define MAP_OP_RESERVE		(u8)0x1
+#define MAP_OP_FREE		(u8)0x2
+#define MAP_OP_ADD		(u8)0x3
+
 /**
  * struct efi_pool_allocation - memory block allocated from pool
  *
@@ -928,3 +932,33 @@ int efi_memory_init(void)
 
 	return 0;
 }
+
+#if CONFIG_IS_ENABLED(MEM_MAP_UPDATE_NOTIFY)
+static int lmb_mem_map_update_sync(void *ctx, struct event *event)
+{
+	u8 op;
+	u64 addr;
+	u64 pages;
+	efi_status_t status;
+	struct event_lmb_map_update *lmb_map = &event->data.lmb_map;
+
+	addr = (uintptr_t)map_sysmem(lmb_map->base, 0);
+	pages = efi_size_in_pages(lmb_map->size + (addr & EFI_PAGE_MASK));
+	op = lmb_map->op;
+	addr &= ~EFI_PAGE_MASK;
+
+	if (op != MAP_OP_RESERVE && op != MAP_OP_FREE && op != MAP_OP_ADD) {
+		log_debug("Invalid map update op received (%d)\n", op);
+		return -1;
+	}
+
+	status = efi_add_memory_map_pg(addr, pages,
+				       op == MAP_OP_RESERVE ?
+				       EFI_BOOT_SERVICES_DATA :
+				       EFI_CONVENTIONAL_MEMORY,
+				       op == MAP_OP_RESERVE ? true : false);
+
+	return status == EFI_SUCCESS ? 0 : -1;
+}
+EVENT_SPY_FULL(EVT_LMB_MAP_UPDATE, lmb_mem_map_update_sync);
+#endif /* MEM_MAP_UPDATE_NOTIFY */

Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
There are events that would be used to notify other interested modules of any changes in available and occupied memory. This would happen when a module allocates or reserves memory, or frees up memory. These changes in memory map should be notified to other interested modules so that the allocated memory does not get overwritten. Add an event handler in the EFI memory module to update the EFI memory map accordingly when such changes happen. As a consequence, any subsequent memory request would honour the updated memory map and only available memory would be allocated from.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Handle the addition of memory to the LMB memory map.
- Pass the overlap_only_ram parameter to the efi_add_memory_map_pg() based on the type of operation.
lib/efi_loader/Kconfig | 1 + lib/efi_loader/efi_memory.c | 34 ++++++++++++++++++++++++++++++++++ 2 files changed, 35 insertions(+)
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
As to the lmb allocations themselves, EFI can simply look through the lmb list and call efi_add_memory_map_pg() for each entry when it is ready to boot. There is no need to do it earlier.
Regards, Simon

hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
The EFI memory module is not allocating memory at all now. This patch is adding an event handler for updating the EFI memory map whenever the LMB memory map changes. All the EFI allocations are now being routed through the LMB APIs.
As to the lmb allocations themselves, EFI can simply call look through the lmb list and call efi_add_memory_map_pg() for each entry, when it is ready to boot. There is no need to do it earlier.
So in this case, rather than adding code in multiple places where the EFI memory module would have to get the LMB map and then update its own, I think it is easier to update the EFI memory map as and when the LMB map gets updated. Otherwise, we have a scenario where the EFI memory map would have to be updated as part of the EFI memory map dump function, as well as before the EFI boot. Any new code subsequently introduced with a similar requirement would then need to keep this point in mind (to get the memory map from LMB).
This is not an OS file-system kind of operation where performance is critical, nor is this event (LMB memory map update) going to happen very frequently. So I believe it is better to keep the EFI memory map updated along with the LMB one.
-sughosh

Hi Sughosh,
On Mon, 15 Jul 2024 at 10:39, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
EFI memory module is not allocating memory at all now. This patch is adding an event handler for updating the EFI memory map, whenever the LMB memory map changes. All the EFI allocations are now being routed through the LMB API's.
OK
As to the lmb allocations themselves, EFI can simply call look through the lmb list and call efi_add_memory_map_pg() for each entry, when it is ready to boot. There is no need to do it earlier.
So in this case, I believe that rather than adding code in multiple places where the EFI memory module would have to get the LMB map and then update it's own, I think it is easier to update the EFI memory map as and when the LMB map gets updated. Else, we have a scenario where the EFI memory map would have to be updated as part of the EFI memory map dump function, as well as before the EFI boot. Any new code that would be subsequently introduced that might have a similar requirement would then be needed to keep this point in mind(to get the memory map from LMB).
That doesn't hold water in my eyes. I actually like the idea of the EFI memory map being set up before booting. It should be done in a single function called from one place, just before booting. Well, I suppose it could be called from the memory-map-dump function too. But it should be pretty simple...just add some pre-defined things and then add the lmb records. You can even write a unit test for it.
This is not an OS file-system kind of an operation where performance is critical, nor is this event(LMB memory map update) going to happen very frequently. So I believe that it would be better to keep the EFI memory map updated along with the LMB one.
I really don't like that idea at all. One table is enough for use by U-Boot. The EFI one is needed for booting. Keeping them in sync as U-Boot is running is not necessary, just invites bugs and makes the whole thing harder to test.
Regards, Simon

On Mon, Jul 15, 2024 at 12:39:32PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:39, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
There are events that would be used to notify other interested modules of any changes in available and occupied memory. This would happen when a module allocates or reserves memory, or frees up memory. These changes in memory map should be notified to other interested modules so that the allocated memory does not get overwritten. Add an event handler in the EFI memory module to update the EFI memory map accordingly when such changes happen. As a consequence, any subsequent memory request would honour the updated memory map and only available memory would be allocated from.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Handle the addition of memory to the LMB memory map.
- Pass the overlap_only_ram parameter to the efi_add_memory_map_pg() based on the type of operation.
lib/efi_loader/Kconfig      |  1 +
lib/efi_loader/efi_memory.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
The EFI memory module is not allocating memory at all now. This patch adds an event handler that updates the EFI memory map whenever the LMB memory map changes. All EFI allocations are now routed through the LMB APIs.
OK
As to the lmb allocations themselves, EFI can simply look through the lmb list and call efi_add_memory_map_pg() for each entry, when it is ready to boot. There is no need to do it earlier.
So in this case, rather than adding code in multiple places where the EFI memory module would have to get the LMB map and then update its own, I believe it is easier to update the EFI memory map as and when the LMB map gets updated. Else, we have a scenario where the EFI memory map would have to be updated as part of the EFI memory map dump function, as well as before the EFI boot. Any new code subsequently introduced with a similar requirement would then need to keep this point in mind (to get the memory map from LMB).
That doesn't hold water in my eyes. I actually like the idea of the EFI memory map being set up before booting. It should be done in a single function called from one place, just before booting. Well, I suppose it could be called from the memory-map-dump function too. But it should be pretty simple...just add some pre-defined things and then add the lmb records. You can even write a unit test for it.
This is not an OS file-system kind of operation where performance is critical, nor is this event (an LMB memory map update) going to happen very frequently. So I believe it is better to keep the EFI memory map updated along with the LMB one.
I really don't like that idea at all. One table is enough for use by U-Boot. The EFI one is needed for booting. Keeping them in sync as U-Boot is running is not necessary, just invites bugs and makes the whole thing harder to test.
Doesn't that ignore the issue of EFI being re-entrant to us? Or no, because you're suggesting we only update the EFI map before entering the EFI loader, not strictly "booting the OS"? In which case, maybe that does end up being both cleaner and smaller? I'm not sure.

On Tue, 16 Jul 2024 at 00:35, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 15, 2024 at 12:39:32PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:39, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
There are events that would be used to notify other interested modules of any changes in available and occupied memory. This would happen when a module allocates or reserves memory, or frees up memory. These changes in memory map should be notified to other interested modules so that the allocated memory does not get overwritten. Add an event handler in the EFI memory module to update the EFI memory map accordingly when such changes happen. As a consequence, any subsequent memory request would honour the updated memory map and only available memory would be allocated from.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Handle the addition of memory to the LMB memory map.
- Pass the overlap_only_ram parameter to the efi_add_memory_map_pg() based on the type of operation.
lib/efi_loader/Kconfig      |  1 +
lib/efi_loader/efi_memory.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
The EFI memory module is not allocating memory at all now. This patch adds an event handler that updates the EFI memory map whenever the LMB memory map changes. All EFI allocations are now routed through the LMB APIs.
OK
As to the lmb allocations themselves, EFI can simply look through the lmb list and call efi_add_memory_map_pg() for each entry, when it is ready to boot. There is no need to do it earlier.
So in this case, rather than adding code in multiple places where the EFI memory module would have to get the LMB map and then update its own, I believe it is easier to update the EFI memory map as and when the LMB map gets updated. Else, we have a scenario where the EFI memory map would have to be updated as part of the EFI memory map dump function, as well as before the EFI boot. Any new code subsequently introduced with a similar requirement would then need to keep this point in mind (to get the memory map from LMB).
That doesn't hold water in my eyes. I actually like the idea of the EFI memory map being set up before booting. It should be done in a single function called from one place, just before booting. Well, I suppose it could be called from the memory-map-dump function too. But it should be pretty simple...just add some pre-defined things and then add the lmb records. You can even write a unit test for it.
This is not an OS file-system kind of operation where performance is critical, nor is this event (an LMB memory map update) going to happen very frequently. So I believe it is better to keep the EFI memory map updated along with the LMB one.
I really don't like that idea at all. One table is enough for use by U-Boot. The EFI one is needed for booting. Keeping them in sync as U-Boot is running is not necessary, just invites bugs and makes the whole thing harder to test.
Doesn't that ignore the issue of EFI being re-entrant to us? Or no, because you're suggesting we only update the EFI map before entering the EFI loader, not strictly "booting the OS"? In which case, maybe that does end up being both cleaner and smaller? I'm not sure.
And that is my concern here about having to update the EFI map at multiple call points, instead of keeping it updated -- I feel that approach is more prone to bugs. In addition, code paths added later that need an updated EFI map might miss this step. The only downside of the current design is that it might be slower, since the EFI map gets updated on every change to the LMB map. But I am not sure the LMB map changes are going to be that frequent.
-sughosh
-- Tom

Hi Sughosh,
On Tue, 16 Jul 2024 at 07:25, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Tue, 16 Jul 2024 at 00:35, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 15, 2024 at 12:39:32PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:39, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
There are events that would be used to notify other interested modules of any changes in available and occupied memory. This would happen when a module allocates or reserves memory, or frees up memory. These changes in memory map should be notified to other interested modules so that the allocated memory does not get overwritten. Add an event handler in the EFI memory module to update the EFI memory map accordingly when such changes happen. As a consequence, any subsequent memory request would honour the updated memory map and only available memory would be allocated from.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Handle the addition of memory to the LMB memory map.
- Pass the overlap_only_ram parameter to the efi_add_memory_map_pg() based on the type of operation.
lib/efi_loader/Kconfig      |  1 +
lib/efi_loader/efi_memory.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
The EFI memory module is not allocating memory at all now. This patch adds an event handler that updates the EFI memory map whenever the LMB memory map changes. All EFI allocations are now routed through the LMB APIs.
OK
As to the lmb allocations themselves, EFI can simply look through the lmb list and call efi_add_memory_map_pg() for each entry, when it is ready to boot. There is no need to do it earlier.
So in this case, rather than adding code in multiple places where the EFI memory module would have to get the LMB map and then update its own, I believe it is easier to update the EFI memory map as and when the LMB map gets updated. Else, we have a scenario where the EFI memory map would have to be updated as part of the EFI memory map dump function, as well as before the EFI boot. Any new code subsequently introduced with a similar requirement would then need to keep this point in mind (to get the memory map from LMB).
That doesn't hold water in my eyes. I actually like the idea of the EFI memory map being set up before booting. It should be done in a single function called from one place, just before booting. Well, I suppose it could be called from the memory-map-dump function too. But it should be pretty simple...just add some pre-defined things and then add the lmb records. You can even write a unit test for it.
This is not an OS file-system kind of operation where performance is critical, nor is this event (an LMB memory map update) going to happen very frequently. So I believe it is better to keep the EFI memory map updated along with the LMB one.
I really don't like that idea at all. One table is enough for use by U-Boot. The EFI one is needed for booting. Keeping them in sync as U-Boot is running is not necessary, just invites bugs and makes the whole thing harder to test.
Doesn't that ignore the issue of EFI being re-entrant to us? Or no, because you're suggesting we only update the EFI map before entering the EFI loader, not strictly "booting the OS"? In which case, maybe that does end up being both cleaner and smaller? I'm not sure.
And that is my concern here about having to update the EFI map at multiple call points, instead of keeping it updated -- I feel that approach is more prone to bugs. In addition, code paths added later that need an updated EFI map might miss this step. The only downside of the current design is that it might be slower, since the EFI map gets updated on every change to the LMB map. But I am not sure the LMB map changes are going to be that frequent.
I think it is worth exploring doing the conversion from lmb to EFI once, in one place. We have this syncing problem, to some extent, with partitions, which automatically update an EFI table when they are bound. The design of EFI was somewhat confusing from the start, in that it created its own tables to avoid relying on driver model. We are still paying the cost of that design decision. If we are saying that lmb holds the allocations, then it should be in lmb. The EFI ones should just be a slave to that, in that it should be possible to throw away the EFI ones and regenerate them from lmb.
Of course that isn't quite where we are today, but I bet it is close.
Regards, Simon

hi Simon,
On Tue, 16 Jul 2024 at 12:40, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Tue, 16 Jul 2024 at 07:25, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Tue, 16 Jul 2024 at 00:35, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 15, 2024 at 12:39:32PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:39, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
There are events that would be used to notify other interested modules of any changes in available and occupied memory. This would happen when a module allocates or reserves memory, or frees up memory. These changes in memory map should be notified to other interested modules so that the allocated memory does not get overwritten. Add an event handler in the EFI memory module to update the EFI memory map accordingly when such changes happen. As a consequence, any subsequent memory request would honour the updated memory map and only available memory would be allocated from.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Handle the addition of memory to the LMB memory map.
- Pass the overlap_only_ram parameter to the efi_add_memory_map_pg() based on the type of operation.
lib/efi_loader/Kconfig      |  1 +
lib/efi_loader/efi_memory.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
The EFI memory module is not allocating memory at all now. This patch adds an event handler that updates the EFI memory map whenever the LMB memory map changes. All EFI allocations are now routed through the LMB APIs.
OK
As to the lmb allocations themselves, EFI can simply look through the lmb list and call efi_add_memory_map_pg() for each entry, when it is ready to boot. There is no need to do it earlier.
So in this case, rather than adding code in multiple places where the EFI memory module would have to get the LMB map and then update its own, I believe it is easier to update the EFI memory map as and when the LMB map gets updated. Else, we have a scenario where the EFI memory map would have to be updated as part of the EFI memory map dump function, as well as before the EFI boot. Any new code subsequently introduced with a similar requirement would then need to keep this point in mind (to get the memory map from LMB).
That doesn't hold water in my eyes. I actually like the idea of the EFI memory map being set up before booting. It should be done in a single function called from one place, just before booting. Well, I suppose it could be called from the memory-map-dump function too. But it should be pretty simple...just add some pre-defined things and then add the lmb records. You can even write a unit test for it.
This is not an OS file-system kind of operation where performance is critical, nor is this event (an LMB memory map update) going to happen very frequently. So I believe it is better to keep the EFI memory map updated along with the LMB one.
I really don't like that idea at all. One table is enough for use by U-Boot. The EFI one is needed for booting. Keeping them in sync as U-Boot is running is not necessary, just invites bugs and makes the whole thing harder to test.
Doesn't that ignore the issue of EFI being re-entrant to us? Or no, because you're suggesting we only update the EFI map before entering the EFI loader, not strictly "booting the OS"? In which case, maybe that does end up being both cleaner and smaller? I'm not sure.
And that is my concern here about having to update the EFI map at multiple call points, instead of keeping it updated -- I feel that approach is more prone to bugs. In addition, code paths added later that need an updated EFI map might miss this step. The only downside of the current design is that it might be slower, since the EFI map gets updated on every change to the LMB map. But I am not sure the LMB map changes are going to be that frequent.
I think it is worth exploring doing the conversion from lmb to EFI once, in one place. We have this syncing problem, to some extent, with partitions, which automatically update an EFI table when they are bound. The design of EFI was somewhat confusing from the start, in that it created its own tables to avoid relying on driver model. We are still paying the cost of that design decision. If we are saying that lmb holds the allocations, then it should be in lmb. The EFI ones should just be a slave to that, in that it should be possible to throw away the EFI ones and regenerate them from lmb.
So if we talk specifically about maintaining the EFI memory map, what this patch does is exactly along the lines you are referring to -- the memory map is maintained by lmb and simply notified to the EFI memory module. The EFI memory map, when it comes to RAM, is indeed being made dependent on lmb. I think the only question is how EFI gets this information from lmb. This patch notifies the EFI memory module as and when the change happens in the lmb memory map. I think what you are suggesting is that this information should be obtained on a need basis? I have listed possible issues that could crop up with that approach.
-sughosh
Of course that isn't quite where we are today, but I bet it is close.
Regards, Simon

On Tue, Jul 16, 2024 at 11:55:10AM +0530, Sughosh Ganu wrote:
On Tue, 16 Jul 2024 at 00:35, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 15, 2024 at 12:39:32PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:39, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
There are events that would be used to notify other interested modules of any changes in available and occupied memory. This would happen when a module allocates or reserves memory, or frees up memory. These changes in memory map should be notified to other interested modules so that the allocated memory does not get overwritten. Add an event handler in the EFI memory module to update the EFI memory map accordingly when such changes happen. As a consequence, any subsequent memory request would honour the updated memory map and only available memory would be allocated from.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Handle the addition of memory to the LMB memory map.
- Pass the overlap_only_ram parameter to the efi_add_memory_map_pg() based on the type of operation.
lib/efi_loader/Kconfig      |  1 +
lib/efi_loader/efi_memory.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
The EFI memory module is not allocating memory at all now. This patch adds an event handler that updates the EFI memory map whenever the LMB memory map changes. All EFI allocations are now routed through the LMB APIs.
OK
As to the lmb allocations themselves, EFI can simply look through the lmb list and call efi_add_memory_map_pg() for each entry, when it is ready to boot. There is no need to do it earlier.
So in this case, rather than adding code in multiple places where the EFI memory module would have to get the LMB map and then update its own, I believe it is easier to update the EFI memory map as and when the LMB map gets updated. Else, we have a scenario where the EFI memory map would have to be updated as part of the EFI memory map dump function, as well as before the EFI boot. Any new code subsequently introduced with a similar requirement would then need to keep this point in mind (to get the memory map from LMB).
That doesn't hold water in my eyes. I actually like the idea of the EFI memory map being set up before booting. It should be done in a single function called from one place, just before booting. Well, I suppose it could be called from the memory-map-dump function too. But it should be pretty simple...just add some pre-defined things and then add the lmb records. You can even write a unit test for it.
This is not an OS file-system kind of operation where performance is critical, nor is this event (an LMB memory map update) going to happen very frequently. So I believe it is better to keep the EFI memory map updated along with the LMB one.
I really don't like that idea at all. One table is enough for use by U-Boot. The EFI one is needed for booting. Keeping them in sync as U-Boot is running is not necessary, just invites bugs and makes the whole thing harder to test.
Doesn't that ignore the issue of EFI being re-entrant to us? Or no, because you're suggesting we only update the EFI map before entering the EFI loader, not strictly "booting the OS"? In which case, maybe that does end up being both cleaner and smaller? I'm not sure.
And that is my concern here about having to update the EFI map at multiple call points, instead of keeping it updated -- I feel that approach is more prone to bugs. In addition, code paths added later that need an updated EFI map might miss this step. The only downside of the current design is that it might be slower, since the EFI map gets updated on every change to the LMB map. But I am not sure the LMB map changes are going to be that frequent.
To me the question really is: do we have a single entry point to the EFI loader that must always be used (and before which the EFI loader must know for sure that it has up-to-date memory reservations), or are there two or more such points? If there are two or more, then yes, your current approach is best, as we don't want to introduce problems down the line. If there is one (which is what I hope), then we can just make that entry point be where the resync happens.

On Tue, 16 Jul 2024 at 22:30, Tom Rini trini@konsulko.com wrote:
On Tue, Jul 16, 2024 at 11:55:10AM +0530, Sughosh Ganu wrote:
On Tue, 16 Jul 2024 at 00:35, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 15, 2024 at 12:39:32PM +0100, Simon Glass wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:39, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
There are events that would be used to notify other interested modules of any changes in available and occupied memory. This would happen when a module allocates or reserves memory, or frees up memory. These changes in memory map should be notified to other interested modules so that the allocated memory does not get overwritten. Add an event handler in the EFI memory module to update the EFI memory map accordingly when such changes happen. As a consequence, any subsequent memory request would honour the updated memory map and only available memory would be allocated from.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1:
- Handle the addition of memory to the LMB memory map.
- Pass the overlap_only_ram parameter to the efi_add_memory_map_pg() based on the type of operation.
lib/efi_loader/Kconfig      |  1 +
lib/efi_loader/efi_memory.c | 34 ++++++++++++++++++++++++++++++++++
2 files changed, 35 insertions(+)
This is getting complicated and I don't believe it is needed.
EFI should not be allocating memory 'in free space' until it starts up. For the very few (if any) cases where it does, it can do an lmb allocation.
The EFI memory module is not allocating memory at all now. This patch adds an event handler that updates the EFI memory map whenever the LMB memory map changes. All EFI allocations are now routed through the LMB APIs.
OK
As to the lmb allocations themselves, EFI can simply look through the lmb list and call efi_add_memory_map_pg() for each entry, when it is ready to boot. There is no need to do it earlier.
So in this case, rather than adding code in multiple places where the EFI memory module would have to get the LMB map and then update its own, I believe it is easier to update the EFI memory map as and when the LMB map gets updated. Else, we have a scenario where the EFI memory map would have to be updated as part of the EFI memory map dump function, as well as before the EFI boot. Any new code subsequently introduced with a similar requirement would then need to keep this point in mind (to get the memory map from LMB).
That doesn't hold water in my eyes. I actually like the idea of the EFI memory map being set up before booting. It should be done in a single function called from one place, just before booting. Well, I suppose it could be called from the memory-map-dump function too. But it should be pretty simple...just add some pre-defined things and then add the lmb records. You can even write a unit test for it.
This is not an OS file-system kind of operation where performance is critical, nor is this event (an LMB memory map update) going to happen very frequently. So I believe it is better to keep the EFI memory map updated along with the LMB one.
I really don't like that idea at all. One table is enough for use by U-Boot. The EFI one is needed for booting. Keeping them in sync as U-Boot is running is not necessary, just invites bugs and makes the whole thing harder to test.
Doesn't that ignore the issue of EFI being re-entrant to us? Or no, because you're suggesting we only update the EFI map before entering the EFI loader, not strictly "booting the OS"? In which case, maybe that does end up being both cleaner and smaller? I'm not sure.
And that is my concern here about having to update the EFI map at multiple call points, instead of keeping it updated -- I feel that approach is more prone to bugs. In addition, code paths added later that need an updated EFI map might miss this step. The only downside of the current design is that it might be slower, since the EFI map gets updated on every change to the LMB map. But I am not sure the LMB map changes are going to be that frequent.
To me the question really is: do we have a single entry point to the EFI loader that must always be used (and before which the EFI loader must know for sure that it has up-to-date memory reservations), or are there two or more such points? If there are two or more, then yes, your current approach is best, as we don't want to introduce problems down the line. If there is one (which is what I hope), then we can just make that entry point be where the resync happens.
There are a couple of places where the memory map gets used: the external get_memory_map API, and the 'efidebug memmap' command. But they do call the same internal function to get the memory map, so that function can be the common point for getting the current memory map from lmb. This will mean exposing the lmb structures to the efi memory module. I will put out an implementation and we can discuss it at that point. This will be part of the second batch of patches that I will be sending; I will work on getting the lmb patches out first.
-sughosh
-- Tom

The efi_add_known_memory() function for the TI K3 platforms adds only the EFI_CONVENTIONAL_MEMORY type. This memory is now handled through the LMB module -- lmb_add_memory() adds it to the memory map. Remove the definition of the now superfluous efi_add_known_memory() function.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
arch/arm/mach-k3/common.c | 11 -----------
1 file changed, 11 deletions(-)
diff --git a/arch/arm/mach-k3/common.c b/arch/arm/mach-k3/common.c
index eaa7d36176..a5c0170cd2 100644
--- a/arch/arm/mach-k3/common.c
+++ b/arch/arm/mach-k3/common.c
@@ -310,14 +310,3 @@ void setup_qos(void)
 		writel(qos_data[i].val, (uintptr_t)qos_data[i].reg);
 }
 #endif
-
-void efi_add_known_memory(void)
-{
-	if (IS_ENABLED(CONFIG_EFI_LOADER))
-		/*
-		 * Memory over ram_top can be used by various firmware
-		 * Declare to EFI only memory area below ram_top
-		 */
-		efi_add_memory_map(gd->ram_base, gd->ram_top - gd->ram_base,
-				   EFI_CONVENTIONAL_MEMORY);
-}

The EFI memory allocations are now being done through the LMB module, and hence the memory map is maintained by the LMB module. Use the lmb_add_memory() API function to add the usable RAM memory to the LMB's memory map.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
arch/arm/cpu/armv8/fsl-layerscape/cpu.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/arm/cpu/armv8/fsl-layerscape/cpu.c b/arch/arm/cpu/armv8/fsl-layerscape/cpu.c
index d2dbfdd08a..dacb95f1a8 100644
--- a/arch/arm/cpu/armv8/fsl-layerscape/cpu.c
+++ b/arch/arm/cpu/armv8/fsl-layerscape/cpu.c
@@ -10,6 +10,7 @@
 #include <env.h>
 #include <init.h>
 #include <hang.h>
+#include <lmb.h>
 #include <log.h>
 #include <net.h>
 #include <vsprintf.h>
@@ -1525,8 +1526,8 @@ int dram_init_banksize(void)
 	return 0;
 }

-#if CONFIG_IS_ENABLED(EFI_LOADER)
-void efi_add_known_memory(void)
+#if CONFIG_IS_ENABLED(LMB)
+void lmb_add_memory(void)
 {
 	int i;
 	phys_addr_t ram_start;
@@ -1548,8 +1549,7 @@ void efi_add_known_memory(void)
 		    gd->arch.resv_ram < ram_start + ram_size)
 			ram_size = gd->arch.resv_ram - ram_start;
 #endif
-		efi_add_memory_map(ram_start, ram_size,
-				   EFI_CONVENTIONAL_MEMORY);
+		lmb_add(ram_start, ram_size);
 	}
 }
 #endif

The EFI_CONVENTIONAL_MEMORY type is now being managed through the LMB module. Add a separate function, lmb_add_memory() to add the RAM memory to the LMB memory map. The efi_add_known_memory() function is now used for adding any other memory type to the EFI memory map.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
arch/x86/lib/e820.c | 47 ++++++++++++++++++++++++++++++++++-----------
1 file changed, 36 insertions(+), 11 deletions(-)
diff --git a/arch/x86/lib/e820.c b/arch/x86/lib/e820.c
index 122b4f7ca0..8b3ce8c6ec 100644
--- a/arch/x86/lib/e820.c
+++ b/arch/x86/lib/e820.c
@@ -4,6 +4,7 @@
  */

 #include <efi_loader.h>
+#include <lmb.h>
 #include <asm/e820.h>
 #include <asm/global_data.h>

@@ -41,15 +42,11 @@ void efi_add_known_memory(void)
 {
 	struct e820_entry e820[E820MAX];
 	unsigned int i, num;
-	u64 start, ram_top;
+	u64 start;
 	int type;

 	num = install_e820_map(ARRAY_SIZE(e820), e820);

-	ram_top = (u64)gd->ram_top & ~EFI_PAGE_MASK;
-	if (!ram_top)
-		ram_top = 0x100000000ULL;
-
 	for (i = 0; i < num; ++i) {
 		start = e820[i].addr;

@@ -72,13 +69,41 @@ void efi_add_known_memory(void)
 			break;
 		}

-		if (type == EFI_CONVENTIONAL_MEMORY) {
-			efi_add_conventional_memory_map(start,
-							start + e820[i].size,
-							ram_top);
-		} else {
+		if (type != EFI_CONVENTIONAL_MEMORY)
 			efi_add_memory_map(start, e820[i].size, type);
-		}
 	}
 }
 #endif /* CONFIG_IS_ENABLED(EFI_LOADER) */
+
+#if CONFIG_IS_ENABLED(LMB)
+void lmb_add_memory(void)
+{
+	struct e820_entry e820[E820MAX];
+	unsigned int i, num;
+	u64 ram_top;
+
+	num = install_e820_map(ARRAY_SIZE(e820), e820);
+
+	ram_top = (u64)gd->ram_top & ~EFI_PAGE_MASK;
+	if (!ram_top)
+		ram_top = 0x100000000ULL;
+
+	for (i = 0; i < num; ++i) {
+		if (e820[i].type == E820_RAM) {
+			u64 start, size, rgn_top;
+
+			start = e820[i].addr;
+			size = e820[i].size;
+			rgn_top = start + size;
+
+			if (start > ram_top)
+				continue;
+
+			if (rgn_top > ram_top)
+				size -= rgn_top - ram_top;
+
+			lmb_add(start, size);
+		}
+	}
+}
+#endif /* CONFIG_IS_ENABLED(LMB) */

The EFI_CONVENTIONAL_MEMORY type, which is the usable RAM, is now being managed by the LMB module. Remove the addition of this memory type to the EFI memory map. This memory now gets added to the EFI memory map as part of the LMB memory map update event handler.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
include/efi_loader.h | 12 +++--- lib/efi_loader/efi_memory.c | 75 +++---------------------------------- 2 files changed, 12 insertions(+), 75 deletions(-)
diff --git a/include/efi_loader.h b/include/efi_loader.h index 6c993e1a69..e5090afe2a 100644 --- a/include/efi_loader.h +++ b/include/efi_loader.h @@ -795,9 +795,6 @@ efi_status_t efi_get_memory_map(efi_uintn_t *memory_map_size, uint32_t *descriptor_version); /* Adds a range into the EFI memory map */ efi_status_t efi_add_memory_map(u64 start, u64 size, int memory_type); -/* Adds a conventional range into the EFI memory map */ -efi_status_t efi_add_conventional_memory_map(u64 ram_start, u64 ram_end, - u64 ram_top);
/* Called by board init to initialize the EFI drivers */ efi_status_t efi_driver_init(void); @@ -1183,9 +1180,14 @@ efi_status_t efi_console_get_u16_string efi_status_t efi_disk_get_device_name(const efi_handle_t handle, char *buf, int size);
/** - * efi_add_known_memory() - add memory banks to EFI memory map + * efi_add_known_memory() - add memory types to the EFI memory map * - * This weak function may be overridden for specific architectures. + * This function is to be used to adding different memory types other + * than EFI_CONVENTIONAL_MEMORY to the EFI memory map. The conventional + * memory is handled by the LMB module, and gets added to the memory + * map through the LMB module. + * + * This function may be overridden for specific architectures. */ void efi_add_known_memory(void);
diff --git a/lib/efi_loader/efi_memory.c b/lib/efi_loader/efi_memory.c index bd12504f72..3ceb670e79 100644 --- a/lib/efi_loader/efi_memory.c +++ b/lib/efi_loader/efi_memory.c @@ -793,82 +793,17 @@ efi_status_t efi_get_memory_map_alloc(efi_uintn_t *map_size, }
/** - * efi_add_conventional_memory_map() - add a RAM memory area to the map + * efi_add_known_memory() - add memory types to the EFI memory map * - * @ram_start: start address of a RAM memory area - * @ram_end: end address of a RAM memory area - * @ram_top: max address to be used as conventional memory - * Return: status code - */ -efi_status_t efi_add_conventional_memory_map(u64 ram_start, u64 ram_end, - u64 ram_top) -{ - u64 pages; - - /* Remove partial pages */ - ram_end &= ~EFI_PAGE_MASK; - ram_start = (ram_start + EFI_PAGE_MASK) & ~EFI_PAGE_MASK; - - if (ram_end <= ram_start) { - /* Invalid mapping */ - return EFI_INVALID_PARAMETER; - } - - pages = (ram_end - ram_start) >> EFI_PAGE_SHIFT; - - efi_add_memory_map_pg(ram_start, pages, - EFI_CONVENTIONAL_MEMORY, false); - - /* - * Boards may indicate to the U-Boot memory core that they - * can not support memory above ram_top. Let's honor this - * in the efi_loader subsystem too by declaring any memory - * above ram_top as "already occupied by firmware". - */ - if (ram_top < ram_start) { - /* ram_top is before this region, reserve all */ - efi_add_memory_map_pg(ram_start, pages, - EFI_BOOT_SERVICES_DATA, true); - } else if (ram_top < ram_end) { - /* ram_top is inside this region, reserve parts */ - pages = (ram_end - ram_top) >> EFI_PAGE_SHIFT; - - efi_add_memory_map_pg(ram_top, pages, - EFI_BOOT_SERVICES_DATA, true); - } - - return EFI_SUCCESS; -} - -/** - * efi_add_known_memory() - add memory banks to map + * This function is to be used to adding different memory types other + * than EFI_CONVENTIONAL_MEMORY to the EFI memory map. The conventional + * memory is handled by the LMB module, and gets added to the memory + * map through the LMB module. * * This function may be overridden for specific architectures. */ __weak void efi_add_known_memory(void) { - u64 ram_top = gd->ram_top & ~EFI_PAGE_MASK; - int i; - - /* - * ram_top is just outside mapped memory. So use an offset of one for - mapping the sandbox address. */ - ram_top = (uintptr_t)map_sysmem(ram_top - 1, 0) + 1; - - /* Fix for 32bit targets with ram_top at 4G */ - if (!ram_top) - ram_top = 0x100000000ULL; - - /* Add RAM */ - for (i = 0; i < CONFIG_NR_DRAM_BANKS; i++) { - u64 ram_end, ram_start; - - ram_start = (uintptr_t)map_sysmem(gd->bd->bi_dram[i].start, 0); - ram_end = ram_start + gd->bd->bi_dram[i].size; - - efi_add_conventional_memory_map(ram_start, ram_end, ram_top); - } }
/**

On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
The EFI_CONVENTIONAL_MEMORY type, which is the usable RAM memory is now being managed by the LMB module. Remove the addition of this memory type to the EFI memory map. This memory now gets added to the EFI memory map as part of the LMB memory map update event handler.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
include/efi_loader.h | 12 +++--- lib/efi_loader/efi_memory.c | 75 +++---------------------------------- 2 files changed, 12 insertions(+), 75 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org
This is definitely moving EFI in the right direction... using the existing U-Boot memory stuff rather than inventing its own.

Mark the EFI runtime memory region as reserved memory during board init so that it does not get allocated by the LMB module on subsequent memory requests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: New patch
lib/lmb.c | 41 ++++++++++++++++++++--------------------- 1 file changed, 20 insertions(+), 21 deletions(-)
diff --git a/lib/lmb.c b/lib/lmb.c index 387ec2ac65..6018f1de31 100644 --- a/lib/lmb.c +++ b/lib/lmb.c @@ -19,6 +19,7 @@ #include <asm/global_data.h> #include <asm/sections.h> #include <linux/kernel.h> +#include <linux/sizes.h>
DECLARE_GLOBAL_DATA_PTR;
@@ -212,33 +213,31 @@ void arch_lmb_reserve_generic(ulong sp, ulong end, ulong align) /** * efi_lmb_reserve() - add reservations for EFI memory * - * Add reservations for all EFI memory areas that are not - * EFI_CONVENTIONAL_MEMORY. + * Add reservations for EFI runtime services memory * - * Return: 0 on success, 1 on failure + * Return: None */ -static __maybe_unused int efi_lmb_reserve(void) +static __maybe_unused void efi_lmb_reserve(void) { - struct efi_mem_desc *memmap = NULL, *map; - efi_uintn_t i, map_size = 0; - efi_status_t ret; + phys_addr_t runtime_start, runtime_end; + unsigned long runtime_mask = EFI_PAGE_MASK;
- ret = efi_get_memory_map_alloc(&map_size, &memmap); - if (ret != EFI_SUCCESS) - return 1; +#if defined(__aarch64__) + /* + * Runtime Services must be 64KiB aligned according to the + * "AArch64 Platforms" section in the UEFI spec (2.7+). + */
- for (i = 0, map = memmap; i < map_size / sizeof(*map); ++map, ++i) { - if (map->type != EFI_CONVENTIONAL_MEMORY) { - lmb_reserve_flags(map_to_sysmem((void *)(uintptr_t) - map->physical_start), - map->num_pages * EFI_PAGE_SIZE, - map->type == EFI_RESERVED_MEMORY_TYPE - ? LMB_NOMAP : LMB_NONE); - } - } - efi_free_pool(memmap); + runtime_mask = SZ_64K - 1; +#endif
- return 0; + /* Reserve the EFI runtime services memory */ + runtime_start = (uintptr_t)__efi_runtime_start & ~runtime_mask; + runtime_end = (uintptr_t)__efi_runtime_stop; + runtime_end = (runtime_end + runtime_mask) & ~runtime_mask; + + lmb_reserve_flags(runtime_start, runtime_end - runtime_start, + LMB_NOOVERWRITE | LMB_NONOTIFY); }
static void lmb_reserve_common(void *fdt_blob)

Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Mark the EFI runtime memory region as reserved memory during board init so that it does not get allocated by the LMB module on subsequent memory requests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
lib/lmb.c | 41 ++++++++++++++++++++--------------------- 1 file changed, 20 insertions(+), 21 deletions(-)
I see again that this is getting circular. Can you look at what is actually allocated by EFI on init (i.e. before anything is booted)?
Regards, Simon

hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Mark the EFI runtime memory region as reserved memory during board init so that it does not get allocated by the LMB module on subsequent memory requests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
lib/lmb.c | 41 ++++++++++++++++++++--------------------- 1 file changed, 20 insertions(+), 21 deletions(-)
I see again that this is getting circular. Can you look at what is actually allocated by EFI on init (i.e. before anything is booted)?
This code is simply ensuring that the EFI runtime regions are marked as reserved by the LMB module to prevent that region from being allocated. Do you have any other way by which this can be communicated to the LMB module?
-sughosh

Hi Sughosh,
On Mon, 15 Jul 2024 at 10:42, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Mark the EFI runtime memory region as reserved memory during board init so that it does not get allocated by the LMB module on subsequent memory requests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
lib/lmb.c | 41 ++++++++++++++++++++--------------------- 1 file changed, 20 insertions(+), 21 deletions(-)
I see again that this is getting circular. Can you look at what is actually allocated by EFI on init (i.e. before anything is booted)?
This code is simply ensuring that the EFI runtime regions are marked as reserved by the LMB module to prevent that region from being allocated. Do you have any other way by which this can be communicated to the LMB module?
But this region is within the U-Boot code area, so it should be covered by the normal exclusion of that area?
Regards, Simon

hi Simon,
On Mon, 15 Jul 2024 at 17:09, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Mon, 15 Jul 2024 at 10:42, Sughosh Ganu sughosh.ganu@linaro.org wrote:
hi Simon,
On Sat, 13 Jul 2024 at 20:46, Simon Glass sjg@chromium.org wrote:
Hi Sughosh,
On Thu, 4 Jul 2024 at 08:38, Sughosh Ganu sughosh.ganu@linaro.org wrote:
Mark the EFI runtime memory region as reserved memory during board init so that it does not get allocated by the LMB module on subsequent memory requests.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Changes since V1: New patch
lib/lmb.c | 41 ++++++++++++++++++++--------------------- 1 file changed, 20 insertions(+), 21 deletions(-)
I see again that this is getting circular. Can you look at what is actually allocated by EFI on init (i.e. before anything is booted)?
This code is simply ensuring that the EFI runtime regions are marked as reserved by the LMB module to prevent that region from being allocated. Do you have any other way by which this can be communicated to the LMB module?
But this region is within the U-Boot code area, so it should be covered by the normal exclusion of that area?
Yes, that is the case for all arches today. Let me check if I can simply drop this function then.
-sughosh
Regards, Simon

With the addition of two events for notifying about changes to occupied and free memory, the output of the event_dump.py script has changed. Update the expected event log to incorporate this change.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1:
* Remove the line for EFI mem map update.
test/py/tests/test_event_dump.py | 1 + 1 file changed, 1 insertion(+)
diff --git a/test/py/tests/test_event_dump.py b/test/py/tests/test_event_dump.py index e282c67335..3537f0383c 100644 --- a/test/py/tests/test_event_dump.py +++ b/test/py/tests/test_event_dump.py @@ -19,6 +19,7 @@ def test_event_dump(u_boot_console): EVT_FT_FIXUP bootmeth_vbe_ft_fixup .*boot/vbe_request.c:.* EVT_FT_FIXUP bootmeth_vbe_simple_ft_fixup .*boot/vbe_simple_os.c:.* EVT_LAST_STAGE_INIT install_smbios_table .*lib/efi_loader/efi_smbios.c:.* +EVT_LMB_MAP_UPDATE lmb_mem_map_update_sync .*lib/efi_loader/efi_memory.c:.* EVT_MISC_INIT_F sandbox_early_getopt_check .*arch/sandbox/cpu/start.c:.* EVT_TEST h_adder_simple .*test/common/event.c:''' assert re.match(expect, out, re.MULTILINE) is not None

With the changes adding notifications for any changes to the LMB map, the size of the image exceeds the configured limit. Bump up the image size limit for now to get the platform to build.
This is not for committing.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
---
Changes since V1: None
configs/mx6sabresd_defconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/configs/mx6sabresd_defconfig b/configs/mx6sabresd_defconfig index 868f6b1551..7308ae2ec6 100644 --- a/configs/mx6sabresd_defconfig +++ b/configs/mx6sabresd_defconfig @@ -23,7 +23,7 @@ CONFIG_SPL_LIBDISK_SUPPORT=y CONFIG_PCI=y CONFIG_LTO=y CONFIG_HAS_BOARD_SIZE_LIMIT=y -CONFIG_BOARD_SIZE_LIMIT=715766 +CONFIG_BOARD_SIZE_LIMIT=718108 CONFIG_FIT=y CONFIG_SPL_FIT_PRINT=y CONFIG_SPL_LOAD_FIT=y

On Thu, Jul 04, 2024 at 01:05:44PM +0530, Sughosh Ganu wrote:
With the changes to add notifications for any changes to the LMB map, the size of the image exceeds the limit set. Bump up the image size limit for now to get the platform to build.
This is not for committing.
Signed-off-by: Sughosh Ganu sughosh.ganu@linaro.org
Please replace this with:
diff --git a/configs/mx6sabresd_defconfig b/configs/mx6sabresd_defconfig index 868f6b15513e..80f7cd60b5b8 100644 --- a/configs/mx6sabresd_defconfig +++ b/configs/mx6sabresd_defconfig @@ -27,6 +27,7 @@ CONFIG_BOARD_SIZE_LIMIT=715766 CONFIG_FIT=y CONFIG_SPL_FIT_PRINT=y CONFIG_SPL_LOAD_FIT=y +# CONFIG_BOOTMETH_VBE is not set CONFIG_SUPPORT_RAW_INITRD=y CONFIG_USE_BOOTCOMMAND=y CONFIG_BOOTCOMMAND="run findfdt;mmc dev ${mmcdev};if mmc rescan; then if run loadbootscript; then run bootscript; else if run loadimage; then run mmcboot; else run netboot; fi; fi; else run netboot; fi"

On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
The aim of this patch series is to fix the current state of incoherence between modules when it comes to memory usage. The primary issue that this series is trying to fix is that the EFI memory module which is responsible for allocating and freeing memory, does not have any visibility of the memory that is being used by the LMB module. This is further complicated by the fact that the LMB allocations are caller specific -- the LMB memory map is not global nor persistent. This means that the memory "allocated" by the LMB module might be relevant only for a given function. Hence one of the requirements for making the memory usage visible across modules is to make LMB allocations persistent and global, and then have means to communicate the use of memory across modules.
The first set of patches in this series work on making the LMB memory map persistent and global. This is being done keeping in mind the usage of LMB memory by platforms where the same memory region can be used to load multiple different images. What is not allowed is to overwrite memory that has been allocated by the other module, currently the EFI memory module. This is being achieved by introducing a new flag, LMB_NOOVERWRITE, which represents memory which cannot be re-requested once allocated.
A review comment on the earlier version was to do away with the static arrays for the LMB lists of free and used memory. This version uses the alloced list data structure for the LMB lists.
The second set of patches are making changes to the EFI memory module to make use of the LMB functions to allocate and free memory. A *_flags() version of LMB API's has been introduced for the same. The earlier version was using notification mechanism from both LMB and EFI modules to maintain memory coherence. This version makes use of the LMB API functions for the memory allocations. This is based on review comments of EFI maintainers.
On am64x_evm_a53, the last test in test/py/tests/test_net_boot.py fails due to:
...
TFTP from server 192.168.116.10; our IP address is 192.168.116.23
Filename 'pxelinux.cfg/default-arm-k3'.
Load address: 0x80100000
Loading: ################################################## 64 Bytes 8.8 KiB/s
done
Bytes transferred = 64 (40 hex)
1 pxe ready ethernet 0 port@1.bootdev.0 extlinux/extlinux.conf
** Booting bootflow 'port@1.bootdev.0' with pxe
Retrieving file: pxelinux.cfg/default-arm
am65_cpsw_nuss_port ethernet@8000000port@1: K3 CPSW: rflow_id_base: 16
link up on port 1, speed 1000, full duplex
Using ethernet@8000000port@1 device
TFTP from server 192.168.116.10; our IP address is 192.168.116.23
Filename 'pxelinux.cfg/default-arm'.
TFTP error: trying to overwrite reserved memory...
Couldn't retrieve pxelinux.cfg/default-arm
And note that the pxelinux.cfg files are created as defined by the example within the test. This test is also still fine on Pi 4.

On Mon, 8 Jul 2024 at 19:32, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
The aim of this patch series is to fix the current state of incoherence between modules when it comes to memory usage. The primary issue that this series is trying to fix is that the EFI memory module which is responsible for allocating and freeing memory, does not have any visibility of the memory that is being used by the LMB module. This is further complicated by the fact that the LMB allocations are caller specific -- the LMB memory map is not global nor persistent. This means that the memory "allocated" by the LMB module might be relevant only for a given function. Hence one of the requirements for making the memory usage visible across modules is to make LMB allocations persistent and global, and then have means to communicate the use of memory across modules.
The first set of patches in this series work on making the LMB memory map persistent and global. This is being done keeping in mind the usage of LMB memory by platforms where the same memory region can be used to load multiple different images. What is not allowed is to overwrite memory that has been allocated by the other module, currently the EFI memory module. This is being achieved by introducing a new flag, LMB_NOOVERWRITE, which represents memory which cannot be re-requested once allocated.
A review comment on the earlier version was to do away with the static arrays for the LMB lists of free and used memory. This version uses the alloced list data structure for the LMB lists.
The second set of patches are making changes to the EFI memory module to make use of the LMB functions to allocate and free memory. A *_flags() version of LMB API's has been introduced for the same. The earlier version was using notification mechanism from both LMB and EFI modules to maintain memory coherence. This version makes use of the LMB API functions for the memory allocations. This is based on review comments of EFI maintainers.
On am64x_evm_a53, the last test in test/py/tests/test_net_boot.py fails due to: ... TFTP from server 192.168.116.10; our IP address is 192.168.116.23 Filename 'pxelinux.cfg/default-arm-k3'. Load address: 0x80100000 Loading: ################################################## 64 Bytes 8.8 KiB/s done Bytes transferred = 64 (40 hex) 1 pxe ready ethernet 0 port@1.bootdev.0 extlinux/extlinux.conf ** Booting bootflow 'port@1.bootdev.0' with pxe Retrieving file: pxelinux.cfg/default-arm am65_cpsw_nuss_port ethernet@8000000port@1: K3 CPSW: rflow_id_base: 16 link up on port 1, speed 1000, full duplex Using ethernet@8000000port@1 device TFTP from server 192.168.116.10; our IP address is 192.168.116.23 Filename 'pxelinux.cfg/default-arm'.
TFTP error: trying to overwrite reserved memory... Couldn't retrieve pxelinux.cfg/default-arm
So this seems to be failing because the address used to load the pxe config file overlaps with an already reserved region of memory. Can you please check if modifying the address works?
And note that the pxelinux.cfg files are created as defined by the example within the test. This test is also still fine on Pi 4.
If this is working fine on the Pi 4, this is mostly to do with needing to tweak the load address.
-sughosh
-- Tom

On Mon, Jul 22, 2024 at 11:58:18AM +0530, Sughosh Ganu wrote:
On Mon, 8 Jul 2024 at 19:32, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
The aim of this patch series is to fix the current state of incoherence between modules when it comes to memory usage. The primary issue that this series is trying to fix is that the EFI memory module which is responsible for allocating and freeing memory, does not have any visibility of the memory that is being used by the LMB module. This is further complicated by the fact that the LMB allocations are caller specific -- the LMB memory map is not global nor persistent. This means that the memory "allocated" by the LMB module might be relevant only for a given function. Hence one of the requirements for making the memory usage visible across modules is to make LMB allocations persistent and global, and then have means to communicate the use of memory across modules.
The first set of patches in this series work on making the LMB memory map persistent and global. This is being done keeping in mind the usage of LMB memory by platforms where the same memory region can be used to load multiple different images. What is not allowed is to overwrite memory that has been allocated by the other module, currently the EFI memory module. This is being achieved by introducing a new flag, LMB_NOOVERWRITE, which represents memory which cannot be re-requested once allocated.
A review comment on the earlier version was to do away with the static arrays for the LMB lists of free and used memory. This version uses the alloced list data structure for the LMB lists.
The second set of patches are making changes to the EFI memory module to make use of the LMB functions to allocate and free memory. A *_flags() version of LMB API's has been introduced for the same. The earlier version was using notification mechanism from both LMB and EFI modules to maintain memory coherence. This version makes use of the LMB API functions for the memory allocations. This is based on review comments of EFI maintainers.
On am64x_evm_a53, the last test in test/py/tests/test_net_boot.py fails due to: ... TFTP from server 192.168.116.10; our IP address is 192.168.116.23 Filename 'pxelinux.cfg/default-arm-k3'. Load address: 0x80100000 Loading: ################################################## 64 Bytes 8.8 KiB/s done Bytes transferred = 64 (40 hex) 1 pxe ready ethernet 0 port@1.bootdev.0 extlinux/extlinux.conf ** Booting bootflow 'port@1.bootdev.0' with pxe Retrieving file: pxelinux.cfg/default-arm am65_cpsw_nuss_port ethernet@8000000port@1: K3 CPSW: rflow_id_base: 16 link up on port 1, speed 1000, full duplex Using ethernet@8000000port@1 device TFTP from server 192.168.116.10; our IP address is 192.168.116.23 Filename 'pxelinux.cfg/default-arm'.
TFTP error: trying to overwrite reserved memory... Couldn't retrieve pxelinux.cfg/default-arm
So this seems to be failing because the address used to load the pxe config file seems to be overlapping with an already reserved region of memory. Can you please check if modifying the address works?
I'm not sure what address you're thinking of modifying, but this isn't overwriting U-Boot itself, so it's a case that needs to work.

On Mon, 22 Jul 2024 at 23:03, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 22, 2024 at 11:58:18AM +0530, Sughosh Ganu wrote:
On Mon, 8 Jul 2024 at 19:32, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
The aim of this patch series is to fix the current state of incoherence between modules when it comes to memory usage. The primary issue that this series is trying to fix is that the EFI memory module which is responsible for allocating and freeing memory, does not have any visibility of the memory that is being used by the LMB module. This is further complicated by the fact that the LMB allocations are caller specific -- the LMB memory map is not global nor persistent. This means that the memory "allocated" by the LMB module might be relevant only for a given function. Hence one of the requirements for making the memory usage visible across modules is to make LMB allocations persistent and global, and then have means to communicate the use of memory across modules.
The first set of patches in this series work on making the LMB memory map persistent and global. This is being done keeping in mind the usage of LMB memory by platforms where the same memory region can be used to load multiple different images. What is not allowed is to overwrite memory that has been allocated by the other module, currently the EFI memory module. This is being achieved by introducing a new flag, LMB_NOOVERWRITE, which represents memory which cannot be re-requested once allocated.
A review comment on the earlier version was to do away with the static arrays for the LMB lists of free and used memory. This version uses the alloced list data structure for the LMB lists.
The second set of patches are making changes to the EFI memory module to make use of the LMB functions to allocate and free memory. A *_flags() version of LMB API's has been introduced for the same. The earlier version was using notification mechanism from both LMB and EFI modules to maintain memory coherence. This version makes use of the LMB API functions for the memory allocations. This is based on review comments of EFI maintainers.
On am64x_evm_a53, the last test in test/py/tests/test_net_boot.py fails due to: ... TFTP from server 192.168.116.10; our IP address is 192.168.116.23 Filename 'pxelinux.cfg/default-arm-k3'. Load address: 0x80100000 Loading: ################################################## 64 Bytes 8.8 KiB/s done Bytes transferred = 64 (40 hex) 1 pxe ready ethernet 0 port@1.bootdev.0 extlinux/extlinux.conf ** Booting bootflow 'port@1.bootdev.0' with pxe Retrieving file: pxelinux.cfg/default-arm am65_cpsw_nuss_port ethernet@8000000port@1: K3 CPSW: rflow_id_base: 16 link up on port 1, speed 1000, full duplex Using ethernet@8000000port@1 device TFTP from server 192.168.116.10; our IP address is 192.168.116.23 Filename 'pxelinux.cfg/default-arm'.
TFTP error: trying to overwrite reserved memory... Couldn't retrieve pxelinux.cfg/default-arm
So this seems to be failing because the address used to load the pxe config file seems to be overlapping with an already reserved region of memory. Can you please check if modifying the address works?
I'm not sure what address you're thinking of modifying but, this isn't overwriting U-Boot itself so it's a case that needs to work.
Can you please print the LMB memory map through bdinfo and share it with me? That will give some info on what is causing the issue. The thing is, with this patchset, if there is another reservation with a different flag (like LMB_NOMAP or LMB_NOOVERWRITE), this would cause the load to fail.
-sughosh
-- Tom

Hi Sughosh,
On Mon, 22 Jul 2024 at 18:37, Sughosh Ganu sughosh.ganu@linaro.org wrote:
On Mon, 22 Jul 2024 at 23:03, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 22, 2024 at 11:58:18AM +0530, Sughosh Ganu wrote:
On Mon, 8 Jul 2024 at 19:32, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
I too am interested in what it might be. Scripts in general can load things where they want, so we need to be careful.
Also, as mentioned I would really like to clean up the 'EFI' 'allocations' so that they only happen when booting an EFI payload. I am not too sure what the current state is, but I will see if I can take a look.
Regards, Simon

On Mon, Jul 22, 2024 at 11:07:45PM +0530, Sughosh Ganu wrote:
On Mon, 22 Jul 2024 at 23:03, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 22, 2024 at 11:58:18AM +0530, Sughosh Ganu wrote:
On Mon, 8 Jul 2024 at 19:32, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
Well hunh. I thought I had reproduced the issue before posting, but I just pushed the same tree (I'm fairly certain) over to my lab and the tests are passing now. So, let's just see what happens with the next iteration of the series; sorry for the noise.

On Tue, 23 Jul 2024 at 20:18, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 22, 2024 at 11:07:45PM +0530, Sughosh Ganu wrote:
On Mon, 22 Jul 2024 at 23:03, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 22, 2024 at 11:58:18AM +0530, Sughosh Ganu wrote:
On Mon, 8 Jul 2024 at 19:32, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
Okay, I will put out the LMB-only, non-RFC series once the CI has gone through fine. Btw, I hope you have seen my comment on IRC about having the SPL_LMB config symbol as a bool, instead of def_bool y. Thanks.
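For illustration, the suggestion amounts to roughly the following kind of Kconfig change. This is a sketch only: the prompt text and any `depends on` line are assumptions, not the actual symbol definition from the series.

```
# def_bool form: no prompt, so the symbol cannot be disabled in defconfigs
config SPL_LMB
	def_bool y

# plain bool form: user-visible prompt, still defaulting to y,
# but boards can now turn it off to save space
config SPL_LMB
	bool "Enable LMB module for SPL"
	default y
```

The practical difference is that `def_bool y` gives the symbol no prompt, so a board cannot opt out; a plain `bool` with `default y` keeps the same default while letting size-constrained SPL configurations disable it.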
-sughosh
-- Tom

On Tue, Jul 23, 2024 at 08:21:11PM +0530, Sughosh Ganu wrote:
On Tue, 23 Jul 2024 at 20:18, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 22, 2024 at 11:07:45PM +0530, Sughosh Ganu wrote:
On Mon, 22 Jul 2024 at 23:03, Tom Rini trini@konsulko.com wrote:
On Mon, Jul 22, 2024 at 11:58:18AM +0530, Sughosh Ganu wrote:
On Mon, 8 Jul 2024 at 19:32, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
Okay, I will put out the LMB only, non-rfc series once the CI has gone through fine. Btw, I hope you have seen my comment on irc about having the SPL_LMB config symbol as a bool, instead of def_bool y. Thanks.
I'll investigate that further once I can poke at the code, thanks.

On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
Please re-work so that the series is bisectable. For example, xilinx_zynqmp_r5 currently fails that check. And I found that while looking into why it grows by ~1500 bytes overall. This is likely due to CONFIG_EFI_LOADER=n, and so while the case where EFI_LOADER is enabled tends to be a size win (reduction) or a wash, we need to look at the CONFIG_EFI_LOADER=n case more. The alist code will be a little growth, and that's fine enough. But realloc and do_bdinfo are the two big growths at the top in this case.

Hi Sughosh,
On Mon, 8 Jul 2024 at 15:35, Tom Rini trini@konsulko.com wrote:
On Thu, Jul 04, 2024 at 01:04:56PM +0530, Sughosh Ganu wrote:
I don't believe coherent is the right word here. Perhaps 'more persistent' is better?
Regards, Simon
participants (4):
- Ilias Apalodimas
- Simon Glass
- Sughosh Ganu
- Tom Rini