[PATCH 00/19] vbe: Series part G

This includes the VBE ABrec (A/B/recovery) implementation as well as a number of patches needed to make it work:
- marking some code as used by SPL_RELOC
- selection of images from a FIT based on the boot phase
- removal of unwanted hash code which increases code-size too much
- a few Kconfig-related additions for VPL
Note: The goal for the next series (part H) is to enable VBE on rk3399-generic, i.e. a build that is able to boot on multiple rk3399-based boards with only the TPL phase differing per board.
Simon Glass (19):
  mbedtls: Add SHA symbols for VPL
  sandbox: Update sandbox_vpl to select sha1 and sha256
  mmc: Allow controlling DM_MMC for VPL builds
  lib: Allow crc16 code to be dropped
  spl: Adjust debugging and xPL symbols
  spl: Avoid including hash algorithms which are not wanted
  spl: Support selecting images based on phase in simple FIT
  vbe: Support selecting images based on phase in FIT
  spl: Allow spl_load() to be controlled in any xPL phase
  spl: Provide a way to mark code needed for relocation
  lib: Mark crc8 as relocation code
  lib: Mark lz4 as relocation code
  lib: Mark memcpy() and memmove() as relocation code
  lib: Mark gunzip as relocation code
  vbe: Support providing a linker script
  vbe: Provide VPL binman-symbols for the next phase
  vbe: Tidy up a few comments
  vbe: Allow VBE to disable adding loadables to the FDT
  vbe: Add an implementation of VBE-ABrec
 MAINTAINERS                    |   7 +
 boot/Kconfig                   |  73 +++++++++
 boot/Makefile                  |   4 +
 boot/image-fit.c               |  29 ++--
 boot/vbe_abrec.c               |  83 ++++++++++
 boot/vbe_abrec.h               | 115 ++++++++++++++
 boot/vbe_abrec_fw.c            | 276 +++++++++++++++++++++++++++++++++
 boot/vbe_common.c              |  24 +--
 boot/vbe_common.h              |  43 +++++
 common/hash.c                  |  17 +-
 common/spl/Kconfig.vpl         |  13 ++
 common/spl/spl.c               |  26 +++-
 common/spl/spl_fit.c           |  40 +++--
 common/spl/spl_reloc.c         |   2 +-
 configs/sandbox_vpl_defconfig  |   2 +
 drivers/mmc/dw_mmc.c           |   4 +-
 include/asm-generic/sections.h |  16 ++
 include/spl.h                  |  19 ++-
 include/spl_load.h             |   9 +-
 include/vbe.h                  |  21 +++
 lib/Kconfig                    |  10 ++
 lib/Makefile                   |  11 +-
 lib/crc8.c                     |   5 +-
 lib/gunzip.c                   |   9 +-
 lib/lz4.c                      |  37 +++--
 lib/lz4_wrapper.c              |   2 +-
 lib/mbedtls/Kconfig            |  40 +++++
 lib/string.c                   |   5 +-
 lib/zlib/inflate.c             |  18 ++-
 tools/Kconfig                  |   5 +
 30 files changed, 879 insertions(+), 86 deletions(-)
 create mode 100644 boot/vbe_abrec.c
 create mode 100644 boot/vbe_abrec.h
 create mode 100644 boot/vbe_abrec_fw.c

Add some Kconfig symbols to support SHA1 and other hash algorithms in VPL.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
lib/mbedtls/Kconfig | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+)
diff --git a/lib/mbedtls/Kconfig b/lib/mbedtls/Kconfig index 78167ffa252..81274786106 100644 --- a/lib/mbedtls/Kconfig +++ b/lib/mbedtls/Kconfig @@ -112,6 +112,46 @@ config SPL_MD5_LEGACY
endif # SPL
+if VPL + +config VPL_SHA1_LEGACY + bool "Enable SHA1 support in VPL with legacy crypto library" + depends on LEGACY_CRYPTO_BASIC && VPL_SHA1 + help + This option enables support of hashing using SHA1 algorithm + with legacy crypto library. + +config VPL_SHA256_LEGACY + bool "Enable SHA256 support in VPL with legacy crypto library" + depends on LEGACY_CRYPTO_BASIC && VPL_SHA256 + help + This option enables support of hashing using SHA256 algorithm + with legacy crypto library. + +config VPL_SHA512_LEGACY + bool "Enable SHA512 support in VPL with legacy crypto library" + depends on LEGACY_CRYPTO_BASIC && VPL_SHA512 + help + This option enables support of hashing using SHA512 algorithm + with legacy crypto library. + +config VPL_SHA384_LEGACY + bool "Enable SHA384 support in VPL with legacy crypto library" + depends on LEGACY_CRYPTO_BASIC && VPL_SHA384 + select VPL_SHA512_LEGACY + help + This option enables support of hashing using SHA384 algorithm + with legacy crypto library. + +config VPL_MD5_LEGACY + bool "Enable MD5 support in VPL with legacy crypto library" + depends on LEGACY_CRYPTO_BASIC && VPL_MD5 + help + This option enables support of hashing using MD5 algorithm + with legacy crypto library. + +endif # VPL + endif # LEGACY_CRYPTO_BASIC
config LEGACY_CRYPTO_CERT

Hi Simon,
On Sun, 26 Jan 2025 at 13:43, Simon Glass <sjg@chromium.org> wrote:
> Add some symbols for supporting SHA1 etc. for VPL.
>
> Signed-off-by: Simon Glass <sjg@chromium.org>
> [patch body snipped]
Do you mind rebasing this patch on top of my series below? https://lore.kernel.org/u-boot/20250127151657.648255-1-raymond.mao@linaro.or...
My series refactors the entire mbedtls Kconfig submenu, Makefile and default config file to adapt to xPL, so that users can have independent config options in U-Boot proper and in each xPL phase.
Regards, Raymond

These algorithms are used in VPL, so enable them.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
configs/sandbox_vpl_defconfig | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/configs/sandbox_vpl_defconfig b/configs/sandbox_vpl_defconfig index 46329395ba5..f0315f6ab33 100644 --- a/configs/sandbox_vpl_defconfig +++ b/configs/sandbox_vpl_defconfig @@ -252,6 +252,8 @@ CONFIG_FS_CBFS=y CONFIG_FS_CRAMFS=y # CONFIG_SPL_USE_TINY_PRINTF is not set CONFIG_CMD_DHRYSTONE=y +CONFIG_VPL_SHA1_LEGACY=y +CONFIG_VPL_SHA256_LEGACY=y CONFIG_RSA_VERIFY_WITH_PKEY=y CONFIG_TPM=y CONFIG_ZSTD=y

VPL may want to use driver model for MMC even if TPL does not. Update the rule in this driver to support that.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
drivers/mmc/dw_mmc.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/mmc/dw_mmc.c b/drivers/mmc/dw_mmc.c index e1110cace89..a51494380ce 100644 --- a/drivers/mmc/dw_mmc.c +++ b/drivers/mmc/dw_mmc.c @@ -724,7 +724,7 @@ static int dwmci_init(struct mmc *mmc) return 0; }
-#ifdef CONFIG_DM_MMC +#if CONFIG_IS_ENABLED(DM_MMC) int dwmci_probe(struct udevice *dev) { struct mmc *mmc = mmc_get_mmc_dev(dev); @@ -749,7 +749,7 @@ void dwmci_setup_cfg(struct mmc_config *cfg, struct dwmci_host *host, u32 max_clk, u32 min_clk) { cfg->name = host->name; -#ifndef CONFIG_DM_MMC +#if !CONFIG_IS_ENABLED(DM_MMC) cfg->ops = &dwmci_ops; #endif cfg->f_min = min_clk;
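For context, a minimal sketch (illustrative only, not part of the patch) of why the guard change matters: CONFIG_IS_ENABLED() resolves against the symbol for the phase being built, so VPL can enable driver model for MMC independently of TPL and SPL.

/*
 * Illustrative sketch, not from the patch: how the two guards differ.
 * A plain #ifdef always tests the U-Boot-proper symbol, so every xPL
 * phase inherits that choice.  CONFIG_IS_ENABLED() tests the
 * phase-specific symbol instead (CONFIG_VPL_DM_MMC when building VPL,
 * CONFIG_SPL_DM_MMC for SPL, and so on), which is what lets VPL use
 * driver model even if TPL does not.
 */
#ifdef CONFIG_DM_MMC		/* same answer in every phase */
/* driver-model code */
#endif

#if CONFIG_IS_ENABLED(DM_MMC)	/* answer depends on the phase being built */
/* driver-model code */
#endif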

The crc16 code is not necessarily needed in VPL, even if SPL uses it, so adjust the rules to allow it to be dropped.
Do the same for the hash API.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 common/hash.c | 17 +++++++++++------
 lib/Kconfig   | 10 ++++++++++
 lib/Makefile  |  3 +--
 tools/Kconfig |  5 +++++
 4 files changed, 27 insertions(+), 8 deletions(-)
diff --git a/common/hash.c b/common/hash.c index 8dd9da85768..0c45992d5c7 100644 --- a/common/hash.c +++ b/common/hash.c @@ -143,7 +143,8 @@ static int __maybe_unused hash_finish_sha512(struct hash_algo *algo, void *ctx, return 0; }
-static int hash_init_crc16_ccitt(struct hash_algo *algo, void **ctxp) +static int __maybe_unused hash_init_crc16_ccitt(struct hash_algo *algo, + void **ctxp) { uint16_t *ctx = malloc(sizeof(uint16_t)); *ctx = 0; @@ -151,16 +152,18 @@ static int hash_init_crc16_ccitt(struct hash_algo *algo, void **ctxp) return 0; }
-static int hash_update_crc16_ccitt(struct hash_algo *algo, void *ctx, - const void *buf, unsigned int size, - int is_last) +static int __maybe_unused hash_update_crc16_ccitt(struct hash_algo *algo, + void *ctx, const void *buf, + unsigned int size, + int is_last) { *((uint16_t *)ctx) = crc16_ccitt(*((uint16_t *)ctx), buf, size); return 0; }
-static int hash_finish_crc16_ccitt(struct hash_algo *algo, void *ctx, - void *dest_buf, int size) +static int __maybe_unused hash_finish_crc16_ccitt(struct hash_algo *algo, + void *ctx, void *dest_buf, + int size) { if (size < algo->digest_size) return -1; @@ -295,6 +298,7 @@ static struct hash_algo hash_algo[] = { #endif }, #endif +#if CONFIG_IS_ENABLED(CRC16) { .name = "crc16-ccitt", .digest_size = 2, @@ -304,6 +308,7 @@ static struct hash_algo hash_algo[] = { .hash_update = hash_update_crc16_ccitt, .hash_finish = hash_finish_crc16_ccitt, }, +#endif #if CONFIG_IS_ENABLED(CRC8) && IS_ENABLED(CONFIG_HASH_CRC8) { .name = "crc8", diff --git a/lib/Kconfig b/lib/Kconfig index c8ac99df78e..c228e4ca661 100644 --- a/lib/Kconfig +++ b/lib/Kconfig @@ -737,6 +737,16 @@ config VPL_CRC8 checksum with feedback to produce an 8-bit result. The code is small and it does not require a lookup table (unlike CRC32).
+config CRC16 + bool "Support CRC16" + default y + help + Enables CRC16 support. This is normally required. Two algorithms are + provided: + + - CCITT, with a polynomial x^16 + x^12 + x^5 + 1 + - standard, with polynomial x^16 + x^15 + x^2 + 1 (0x8005) + config SPL_CRC16 bool "Support CRC16 in SPL" depends on SPL diff --git a/lib/Makefile b/lib/Makefile index 31cfbb67aa0..3f0936b0a27 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -35,8 +35,6 @@ obj-$(CONFIG_CIRCBUF) += circbuf.o endif
obj-y += crc8.o -obj-y += crc16.o -obj-y += crc16-ccitt.o obj-$(CONFIG_ERRNO_STR) += errno_str.o obj-$(CONFIG_FIT) += fdtdec_common.o obj-$(CONFIG_TEST_FDTDEC) += fdtdec_test.o @@ -66,6 +64,7 @@ endif
obj-$(CONFIG_$(PHASE_)CRC8) += crc8.o obj-$(CONFIG_$(PHASE_)CRC16) += crc16.o +obj-$(CONFIG_$(PHASE_)CRC16) += crc16-ccitt.o
obj-y += crypto/
diff --git a/tools/Kconfig b/tools/Kconfig index 5c75af48fe3..01ff0fcf748 100644 --- a/tools/Kconfig +++ b/tools/Kconfig @@ -9,6 +9,11 @@ config MKIMAGE_DTC_PATH some cases the system dtc may not support all required features and the path to a different version should be given here.
+config TOOLS_CRC16 + def_bool y + help + Enable CRC16 support in the tools builds + config TOOLS_CRC32 def_bool y help
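For illustration, a minimal sketch (the helper name is hypothetical, and it assumes the usual crc16_ccitt() declaration from u-boot/crc.h) of a caller that now depends on the phase-specific CRC16 option being enabled, since crc16.o and crc16-ccitt.o are only built when it is selected:

#include <linux/types.h>
#include <u-boot/crc.h>

/* Hypothetical caller: needs CRC16 (or SPL_CRC16 / VPL_CRC16 etc.) enabled
 * for its phase, otherwise crc16-ccitt.o is no longer linked in */
static u16 checksum_block(const void *buf, int len)
{
	/* start value 0, matching the "crc16-ccitt" hash_algo entry */
	return crc16_ccitt(0, buf, len);
}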

The size of some malloc() fields has been reduced on 64-bit machines, but the spl_reloc code was not updated to match. Fix this to avoid a compiler warning.
Also update for the new xPL naming.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
common/spl/spl_reloc.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/common/spl/spl_reloc.c b/common/spl/spl_reloc.c index be8349b535b..324b98eaf98 100644 --- a/common/spl/spl_reloc.c +++ b/common/spl/spl_reloc.c @@ -154,7 +154,7 @@ int spl_reloc_jump(struct spl_image_info *image, spl_jump_to_image_t jump) rcode_func loader; int ret;
- log_debug("malloc usage %lx bytes (%ld KB of %d KB)\n", gd->malloc_ptr, + log_debug("malloc usage %x bytes (%d KB of %d KB)\n", gd->malloc_ptr, gd->malloc_ptr / 1024, CONFIG_VAL(SYS_MALLOC_F_LEN) / 1024);
if (*image->stack_prot != STACK_PROT_VALUE) {

Update the build rules so that hash algorithms are only included in an xPL build if they are requested for that particular phase. This helps to reduce code size.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
lib/Makefile | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/Makefile b/lib/Makefile index 3f0936b0a27..ccc6df6fc2d 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -74,10 +74,10 @@ obj-$(CONFIG_$(XPL_)RSA) += rsa/ obj-$(CONFIG_HASH) += hash-checksum.o obj-$(CONFIG_BLAKE2) += blake2/blake2b.o
-obj-$(CONFIG_$(XPL_)MD5_LEGACY) += md5.o -obj-$(CONFIG_$(XPL_)SHA1_LEGACY) += sha1.o -obj-$(CONFIG_$(XPL_)SHA256_LEGACY) += sha256.o -obj-$(CONFIG_$(XPL_)SHA512_LEGACY) += sha512.o +obj-$(CONFIG_$(PHASE_)MD5_LEGACY) += md5.o +obj-$(CONFIG_$(PHASE_)SHA1_LEGACY) += sha1.o +obj-$(CONFIG_$(PHASE_)SHA256_LEGACY) += sha256.o +obj-$(CONFIG_$(PHASE_)SHA512_LEGACY) += sha512.o
obj-$(CONFIG_CRYPT_PW) += crypt/ obj-$(CONFIG_$(XPL_)ASN1_DECODER_LEGACY) += asn1_decoder.o

At present the simple FIT-loader always loads images, ignoring whether they are intended for the next phase or not.
VBE packages up several images in the same FIT, some destined for VPL and some for SPL. Add logic to check the phase before loading an image: return -EPERM when the image is for a different phase and handle that case gracefully in the callers.
Fix an unnecessary re-computation of read_offset while here.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
common/spl/spl_fit.c | 34 +++++++++++++++++++++++++++------- 1 file changed, 27 insertions(+), 7 deletions(-)
diff --git a/common/spl/spl_fit.c b/common/spl/spl_fit.c index ac8462577ff..6c068a842d9 100644 --- a/common/spl/spl_fit.c +++ b/common/spl/spl_fit.c @@ -199,7 +199,9 @@ static int get_aligned_image_size(struct spl_load_info *info, int data_size, * the image gets loaded to the address pointed to by the * load_addr member in this struct, if load_addr is not 0 * - * Return: 0 on success or a negative error number. + * Return: 0 on success, -EPERM if this image is not the correct phase + * (for CONFIG_BOOTMETH_VBE_SIMPLE_FW), or another negative error number on + * other error. */ static int load_simple_fit(struct spl_load_info *info, ulong fit_offset, const struct spl_fit_info *ctx, int node, @@ -218,6 +220,25 @@ static int load_simple_fit(struct spl_load_info *info, ulong fit_offset, const void *fit = ctx->fit; bool external_data = false;
+ log_debug("starting\n"); + if (CONFIG_IS_ENABLED(BOOTMETH_VBE) && + xpl_get_phase(info) != IH_PHASE_NONE) { + enum image_phase_t phase; + int ret; + + ret = fit_image_get_phase(fit, node, &phase); + /* if the image is for any phase, let's use it */ + if (ret == -ENOENT || phase == xpl_get_phase(info)) { + log_debug("found\n"); + } else if (ret < 0) { + log_debug("err=%d\n", ret); + return ret; + } else { + log_debug("- phase mismatch, skipping this image\n"); + return -EPERM; + } + } + if (IS_ENABLED(CONFIG_SPL_FPGA) || (IS_ENABLED(CONFIG_SPL_OS_BOOT) && spl_decompression_enabled())) { if (fit_image_get_type(fit, node, &type)) @@ -278,10 +299,7 @@ static int load_simple_fit(struct spl_load_info *info, ulong fit_offset, log_debug("reading from offset %x / %lx size %lx to %p: ", offset, read_offset, size, src_ptr);
- if (info->read(info, - fit_offset + - get_aligned_image_offset(info, offset), size, - src_ptr) < length) + if (info->read(info, read_offset, size, src_ptr) < length) return -EIO;
debug("External data: dst=%p, offset=%x, size=%lx\n", @@ -456,7 +474,9 @@ static int spl_fit_append_fdt(struct spl_image_info *spl_image, image_info.load_addr = (ulong)tmpbuffer; ret = load_simple_fit(info, offset, ctx, node, &image_info); - if (ret < 0) + if (ret == -EPERM) + continue; + else if (ret < 0) break;
/* Make room in FDT for changes from the overlay */ @@ -817,7 +837,7 @@ int spl_load_simple_fit(struct spl_image_info *spl_image,
image_info.load_addr = 0; ret = load_simple_fit(info, offset, &ctx, node, &image_info); - if (ret < 0) { + if (ret < 0 && ret != -EPERM) { printf("%s: can't load image loadables index %d (ret = %d)\n", __func__, index, ret); return ret;

With SPL we want to specify the phase of the image to be loaded. Add support for this.
This implements a FIT feature added to the spec a few years ago; it entails a small code-size increase of about 70 bytes on Thumb-2.
Signed-off-by: Simon Glass <sjg@chromium.org>
Link: https://docs.u-boot.org/en/latest/usage/fit/index.html
---
boot/image-fit.c | 29 +++++++++++++++++++---------- 1 file changed, 19 insertions(+), 10 deletions(-)
diff --git a/boot/image-fit.c b/boot/image-fit.c index db7fb61bca9..40335bc3345 100644 --- a/boot/image-fit.c +++ b/boot/image-fit.c @@ -1908,24 +1908,30 @@ int fit_conf_get_prop_node(const void *fit, int noffset, const char *prop_name, count = fit_conf_get_prop_node_count(fit, noffset, prop_name); if (count < 0) return count; + log_debug("looking for %s (%s, image-count %d):\n", prop_name, + genimg_get_phase_name(image_ph_phase(sel_phase)), count);
/* check each image in the list */ for (i = 0; i < count; i++) { - enum image_phase_t phase; + enum image_phase_t phase = IH_PHASE_NONE; int ret, node;
node = fit_conf_get_prop_node_index(fit, noffset, prop_name, i); ret = fit_image_get_phase(fit, node, &phase); + log_debug("- %s (%s): ", fdt_get_name(fit, node, NULL), + genimg_get_phase_name(phase));
/* if the image is for any phase, let's use it */ - if (ret == -ENOENT) + if (ret == -ENOENT || phase == sel_phase) { + log_debug("found\n"); return node; - else if (ret < 0) + } else if (ret < 0) { + log_debug("err=%d\n", ret); return ret; - - if (phase == sel_phase) - return node; + } + log_debug("no match\n"); } + log_debug("- not found\n");
return -ENOENT; } @@ -2013,13 +2019,15 @@ int fit_get_node_from_config(struct bootm_headers *images, }
/** - * fit_get_image_type_property() - get property name for IH_TYPE_... + * fit_get_image_type_property() - get property name for sel_phase * * Return: the properly name where we expect to find the image in the * config node */ -static const char *fit_get_image_type_property(int type) +static const char *fit_get_image_type_property(int ph_type) { + int type = image_ph_type(ph_type); + /* * This is sort-of available in the uimage_type[] table in image.c * but we don't have access to the short name, and "fdt" is different @@ -2071,8 +2079,9 @@ int fit_image_load(struct bootm_headers *images, ulong addr, fit_uname = fit_unamep ? *fit_unamep : NULL; fit_uname_config = fit_uname_configp ? *fit_uname_configp : NULL; fit_base_uname_config = NULL; - prop_name = fit_get_image_type_property(image_type); - printf("## Loading %s from FIT Image at %08lx ...\n", prop_name, addr); + prop_name = fit_get_image_type_property(ph_type); + printf("## Loading %s (%s) from FIT Image at %08lx ...\n", + prop_name, genimg_get_phase_name(image_ph_phase(ph_type)), addr);
bootstage_mark(bootstage_id + BOOTSTAGE_SUB_FORMAT); ret = fit_check_format(fit, IMAGE_SIZE_INVAL);
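As a usage sketch, modelled on the vbe_common.c call appearing later in this series (the wrapper function here is hypothetical): the desired phase is folded into the type argument with image_ph(), and fit_image_load() then picks the configuration's firmware image marked for that phase.

#include <bootstage.h>
#include <image.h>
#include <linux/types.h>

/* Hypothetical wrapper showing how a caller asks for the SPL-phase image */
static int load_spl_phase_firmware(ulong addr, ulong *datap, ulong *lenp)
{
	struct bootm_headers images = {};
	const char *uname = NULL, *uname_config = NULL;

	return fit_image_load(&images, addr, &uname, &uname_config,
			      IH_ARCH_DEFAULT,
			      image_ph(IH_PHASE_SPL, IH_TYPE_FIRMWARE),
			      BOOTSTAGE_ID_FIT_SPL_START, FIT_LOAD_IGNORED,
			      datap, lenp);
}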

Some phases may wish to use full FIT-loading and others not, so allow this to be controlled.
Add some debugging while we are here.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
include/spl_load.h | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/include/spl_load.h b/include/spl_load.h index 935f7d336f2..525e0c9e86c 100644 --- a/include/spl_load.h +++ b/include/spl_load.h @@ -20,13 +20,15 @@ static inline int _spl_load(struct spl_image_info *spl_image, ulong base_offset, image_offset, overhead; int read, ret;
+ log_debug("\nloading hdr from %lx to %p\n", (ulong)offset, header); read = info->read(info, offset, ALIGN(sizeof(*header), spl_get_bl_len(info)), header); if (read < (int)sizeof(*header)) return -EIO;
if (image_get_magic(header) == FDT_MAGIC) { - if (IS_ENABLED(CONFIG_SPL_LOAD_FIT_FULL)) { + log_debug("Found FIT\n"); + if (CONFIG_IS_ENABLED(LOAD_FIT_FULL)) { void *buf;
/* @@ -48,9 +50,12 @@ static inline int _spl_load(struct spl_image_info *spl_image, return spl_parse_image_header(spl_image, bootdev, buf); }
- if (IS_ENABLED(CONFIG_SPL_LOAD_FIT)) + if (CONFIG_IS_ENABLED(LOAD_FIT)) { + log_debug("Simple loading\n"); return spl_load_simple_fit(spl_image, info, offset, header); + } + log_debug("No FIT support\n"); }
if (IS_ENABLED(CONFIG_SPL_LOAD_IMX_CONTAINER) &&

Add a section marker and linker symbols which can be used to mark relocation code, so that it can be collected by the linker, copied into a suitable place and executed when needed.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
include/asm-generic/sections.h | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/include/asm-generic/sections.h b/include/asm-generic/sections.h index 5b040d0acd0..90eba30b68d 100644 --- a/include/asm-generic/sections.h +++ b/include/asm-generic/sections.h @@ -67,6 +67,9 @@ extern char __text_start[]; /* This marks the text region which must be relocated */ extern char __image_copy_start[], __image_copy_end[];
+/* This marks the rcode region used for SPL relocation */ +extern char _rcode_start[], _rcode_end[]; + extern char __bss_end[]; extern char __rel_dyn_start[], __rel_dyn_end[]; extern char _image_binary_end[]; @@ -78,4 +81,17 @@ extern char __dtb[]; */ extern void _start(void);
+#ifndef USE_HOSTCC +#if CONFIG_IS_ENABLED(RELOC_LOADER) +#define __rcode __section(".text.rcode") +#define __rdata __section(".text.rdata") +#else +#define __rcode +#define __rdata +#endif +#else +#define __rcode +#define __rdata +#endif + #endif /* _ASM_GENERIC_SECTIONS_H_ */
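A usage sketch (the function itself is hypothetical) of what the following patches do for crc8, lz4, memcpy()/memmove() and gunzip: tagging a function with __rcode places it in .text.rcode, so the linker gathers it between _rcode_start and _rcode_end, the region that can be copied elsewhere and run during relocation.

#include <asm/sections.h>

/* Hypothetical example: this function remains usable while the main image
 * is being overwritten, because it lives in the relocated rcode region */
__rcode static int add_checksum(int sum, int value)
{
	return sum + value;
}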

Mark the crc8 code as needed by relocation. This is used as a simple check against corruption of the code when copying.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
lib/crc8.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/lib/crc8.c b/lib/crc8.c index 811e19917b4..bbb229c3892 100644 --- a/lib/crc8.c +++ b/lib/crc8.c @@ -6,11 +6,12 @@ #ifdef USE_HOSTCC #include <arpa/inet.h> #endif +#include <asm/sections.h> #include <u-boot/crc.h>
#define POLY (0x1070U << 3)
-static unsigned char _crc8(unsigned short data) +__rcode static unsigned char _crc8(unsigned short data) { int i;
@@ -23,7 +24,7 @@ static unsigned char _crc8(unsigned short data) return (unsigned char)(data >> 8); }
-unsigned int crc8(unsigned int crc, const unsigned char *vptr, int len) +__rcode unsigned int crc8(unsigned int crc, const unsigned char *vptr, int len) { int i;
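A hedged sketch (hypothetical helper, not part of the patch) of the kind of check this enables: because crc8() itself now lives in the rcode region, the relocation loader can checksum the copied code to detect corruption before jumping to it.

#include <asm/sections.h>
#include <u-boot/crc.h>

/* Hypothetical helper: compare checksums of the original and the copy */
__rcode static int copy_intact(const void *orig, const void *copy, int len)
{
	return crc8(0, orig, len) == crc8(0, copy, len);
}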

Mark the lz4 decompression code as needed by relocation. This is used to decompress the next-phase image.
Drop the 'safe' versions from SPL as they are not needed. Change the static arrays to local ones, to avoid a crash when trying to access the data from relocated code. Make this conditional to avoid a code-size increase when SPL_RELOC is not used.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 lib/lz4.c         | 37 ++++++++++++++++++++++++++-----------
 lib/lz4_wrapper.c |  2 +-
 2 files changed, 27 insertions(+), 12 deletions(-)
diff --git a/lib/lz4.c b/lib/lz4.c index 63955a0b178..c718659c590 100644 --- a/lib/lz4.c +++ b/lib/lz4.c @@ -33,15 +33,16 @@ #include <linux/bug.h> #include <asm/unaligned.h> #include <u-boot/lz4.h> +#include <asm/sections.h>
#define FORCE_INLINE inline __attribute__((always_inline))
-static FORCE_INLINE u16 LZ4_readLE16(const void *src) +__rcode static FORCE_INLINE u16 LZ4_readLE16(const void *src) { return get_unaligned_le16(src); }
-static FORCE_INLINE void LZ4_copy8(void *dst, const void *src) +__rcode static FORCE_INLINE void LZ4_copy8(void *dst, const void *src) { put_unaligned(get_unaligned((const u64 *)src), (u64 *)dst); } @@ -53,7 +54,7 @@ typedef int32_t S32; typedef uint64_t U64; typedef uintptr_t uptrval;
-static FORCE_INLINE void LZ4_write32(void *memPtr, U32 value) +__rcode static FORCE_INLINE void LZ4_write32(void *memPtr, U32 value) { put_unaligned(value, (U32 *)memPtr); } @@ -63,7 +64,7 @@ static FORCE_INLINE void LZ4_write32(void *memPtr, U32 value) **************************************/
/* customized version of memcpy, which may overwrite up to 7 bytes beyond dstEnd */ -static void LZ4_wildCopy(void* dstPtr, const void* srcPtr, void* dstEnd) +__rcode static void LZ4_wildCopy(void *dstPtr, const void *srcPtr, void *dstEnd) { BYTE* d = (BYTE*)dstPtr; const BYTE* s = (const BYTE*)srcPtr; @@ -111,6 +112,17 @@ typedef enum { decode_full_block = 0, partial_decode = 1 } earlyEnd_directive; #define assert(condition) ((void)0) #endif
+/* + * spl_reloc needs all necessary data to be set up within its code, since the + * code is relocated at runtime. Unfortunately this increase code-size slightly + * so only do it if spl_reloc is enabled + */ +#if CONFIG_IS_ENABLED(RELOC_LOADER) +#define STATIC +#else +#define STATIC static +#endif + /* * LZ4_decompress_generic() : * This generic decompression function covers all use cases. @@ -118,7 +130,7 @@ typedef enum { decode_full_block = 0, partial_decode = 1 } earlyEnd_directive; * Note that it is important for performance that this function really get inlined, * in order to remove useless branches during compilation optimization. */ -static FORCE_INLINE int LZ4_decompress_generic( +__rcode static FORCE_INLINE int LZ4_decompress_generic( const char * const src, char * const dst, int srcSize, @@ -141,6 +153,8 @@ static FORCE_INLINE int LZ4_decompress_generic( const size_t dictSize ) { + STATIC const unsigned int inc32table[8] = {0, 1, 2, 1, 0, 4, 4, 4}; + STATIC const int dec64table[8] = {0, 0, 0, -1, -4, 1, 2, 3}; const BYTE *ip = (const BYTE *) src; const BYTE * const iend = ip + srcSize;
@@ -149,8 +163,6 @@ static FORCE_INLINE int LZ4_decompress_generic( BYTE *cpy;
const BYTE * const dictEnd = (const BYTE *)dictStart + dictSize; - static const unsigned int inc32table[8] = {0, 1, 2, 1, 0, 4, 4, 4}; - static const int dec64table[8] = {0, 0, 0, -1, -4, 1, 2, 3};
const int safeDecode = (endOnInput == endOnInputSize); const int checkOffset = ((safeDecode) && (dictSize < (int)(64 * KB))); @@ -514,8 +526,9 @@ _output_error: return (int) (-(((const char *)ip) - src)) - 1; }
-int LZ4_decompress_safe(const char *source, char *dest, - int compressedSize, int maxDecompressedSize) +#ifndef CONFIG_SPL_BUILD +__rcode int LZ4_decompress_safe(const char *source, char *dest, + int compressedSize, int maxDecompressedSize) { return LZ4_decompress_generic(source, dest, compressedSize, maxDecompressedSize, @@ -523,11 +536,13 @@ int LZ4_decompress_safe(const char *source, char *dest, noDict, (BYTE *)dest, NULL, 0); }
-int LZ4_decompress_safe_partial(const char *src, char *dst, - int compressedSize, int targetOutputSize, int dstCapacity) +__rcode int LZ4_decompress_safe_partial(const char *src, char *dst, + int compressedSize, + int targetOutputSize, int dstCapacity) { dstCapacity = min(targetOutputSize, dstCapacity); return LZ4_decompress_generic(src, dst, compressedSize, dstCapacity, endOnInputSize, partial_decode, noDict, (BYTE *)dst, NULL, 0); } +#endif diff --git a/lib/lz4_wrapper.c b/lib/lz4_wrapper.c index 4d48e7b0e8b..b1204511170 100644 --- a/lib/lz4_wrapper.c +++ b/lib/lz4_wrapper.c @@ -15,7 +15,7 @@
#define LZ4F_BLOCKUNCOMPRESSED_FLAG 0x80000000U
-int ulz4fn(const void *src, size_t srcn, void *dst, size_t *dstn) +__rcode int ulz4fn(const void *src, size_t srcn, void *dst, size_t *dstn) { const void *end = dst + *dstn; const void *in = src;
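A short sketch (illustrative names, not from the patch) of the problem the STATIC change works around: a 'static const' table is referenced at its link-time address, which may no longer hold the data once the code has been copied elsewhere and the original image reused; under RELOC_LOADER the idea is that the tables become locals set up by the relocated code itself.

#include <asm/sections.h>

/* Illustrative only: the table is a local here, mirroring what the STATIC
 * macro in the patch does when RELOC_LOADER is enabled, so the relocated
 * copy of the function does not depend on a fixed static object */
__rcode static int lookup_dec(unsigned int idx)
{
	const int dec64table[8] = {0, 0, 0, -1, -4, 1, 2, 3};

	return dec64table[idx & 7];
}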

Mark these functions as needed by relocation. These functions are used to copy data while relocating the next-phase image.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
lib/string.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/lib/string.c b/lib/string.c index feae9519f2f..0e0900de8bf 100644 --- a/lib/string.c +++ b/lib/string.c @@ -21,6 +21,7 @@ #include <linux/string.h> #include <linux/ctype.h> #include <malloc.h> +#include <asm/sections.h>
/** * strncasecmp - Case insensitive, length-limited string comparison @@ -559,7 +560,7 @@ __used void * memset(void * s,int c,size_t count) * You should not use this function to access IO space, use memcpy_toio() * or memcpy_fromio() instead. */ -__used void * memcpy(void *dest, const void *src, size_t count) +__rcode __used void *memcpy(void *dest, const void *src, size_t count) { unsigned long *dl = (unsigned long *)dest, *sl = (unsigned long *)src; char *d8, *s8; @@ -593,7 +594,7 @@ __used void * memcpy(void *dest, const void *src, size_t count) * * Unlike memcpy(), memmove() copes with overlapping areas. */ -__used void * memmove(void * dest,const void *src,size_t count) +__rcode __used void *memmove(void *dest, const void *src, size_t count) { char *tmp, *s;

Mark the gunzip code as needed by relocation. This is used to decompress the next-phase image.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 lib/gunzip.c       |  9 +++++----
 lib/zlib/inflate.c | 18 ++++++++++--------
 2 files changed, 15 insertions(+), 12 deletions(-)
diff --git a/lib/gunzip.c b/lib/gunzip.c index e71d8d00ccb..52007715bda 100644 --- a/lib/gunzip.c +++ b/lib/gunzip.c @@ -15,6 +15,7 @@ #include <u-boot/crc.h> #include <watchdog.h> #include <u-boot/zlib.h> +#include <asm/sections.h>
#define HEADER0 '\x1f' #define HEADER1 '\x8b' @@ -43,7 +44,7 @@ void gzfree(void *x, void *addr, unsigned nb) free (addr); }
-int gzip_parse_header(const unsigned char *src, unsigned long len) +__rcode int gzip_parse_header(const unsigned char *src, unsigned long len) { int i, flags;
@@ -71,7 +72,7 @@ int gzip_parse_header(const unsigned char *src, unsigned long len) return i; }
-int gunzip(void *dst, int dstlen, unsigned char *src, unsigned long *lenp) +__rcode int gunzip(void *dst, int dstlen, unsigned char *src, unsigned long *lenp) { int offset = gzip_parse_header(src, *lenp);
@@ -274,8 +275,8 @@ out: /* * Uncompress blocks compressed with zlib without headers */ -int zunzip(void *dst, int dstlen, unsigned char *src, unsigned long *lenp, - int stoponerr, int offset) +__rcode int zunzip(void *dst, int dstlen, unsigned char *src, + unsigned long *lenp, int stoponerr, int offset) { z_stream s; int err = 0; diff --git a/lib/zlib/inflate.c b/lib/zlib/inflate.c index f7e81fc8b2a..b4c72cc2c5c 100644 --- a/lib/zlib/inflate.c +++ b/lib/zlib/inflate.c @@ -2,10 +2,12 @@ * Copyright (C) 1995-2005 Mark Adler * For conditions of distribution and use, see copyright notice in zlib.h */ +#include <asm/sections.h> + local void fixedtables OF((struct inflate_state FAR *state)); local int updatewindow OF((z_streamp strm, unsigned out));
-int ZEXPORT inflateReset(z_streamp strm) +__rcode int ZEXPORT inflateReset(z_streamp strm) { struct inflate_state FAR *state;
@@ -30,8 +32,8 @@ int ZEXPORT inflateReset(z_streamp strm) return Z_OK; }
-int ZEXPORT inflateInit2_(z_streamp strm, int windowBits, - int stream_size) +__rcode int ZEXPORT inflateInit2_(z_streamp strm, int windowBits, + int stream_size) { struct inflate_state FAR *state;
@@ -67,12 +69,12 @@ int ZEXPORT inflateInit2_(z_streamp strm, int windowBits, return inflateReset(strm); }
-int ZEXPORT inflateInit_(z_streamp strm, int stream_size) +__rcode int ZEXPORT inflateInit_(z_streamp strm, int stream_size) { return inflateInit2_(strm, DEF_WBITS, stream_size); }
-local void fixedtables(struct inflate_state FAR *state) +__rcode local void fixedtables(struct inflate_state FAR *state) { state->lencode = lenfix; state->lenbits = 9; @@ -94,7 +96,7 @@ local void fixedtables(struct inflate_state FAR *state) output will fall in the output data, making match copies simpler and faster. The advantage may be dependent on the size of the processor's data caches. */ -local int updatewindow(z_streamp strm, unsigned out) +__rcode local int updatewindow(z_streamp strm, unsigned int out) { struct inflate_state FAR *state; unsigned copy, dist; @@ -322,7 +324,7 @@ local int updatewindow(z_streamp strm, unsigned out) when flush is set to Z_FINISH, inflate() cannot return Z_OK. Instead it will return Z_BUF_ERROR if it has not reached the end of the stream. */ -int ZEXPORT inflate(z_streamp strm, int flush) +__rcode int ZEXPORT inflate(z_streamp strm, int flush) { struct inflate_state FAR *state; unsigned char FAR *next; /* next input */ @@ -924,7 +926,7 @@ int ZEXPORT inflate(z_streamp strm, int flush) return ret; }
-int ZEXPORT inflateEnd(z_streamp strm) +__rcode int ZEXPORT inflateEnd(z_streamp strm) { struct inflate_state FAR *state; if (strm == Z_NULL || strm->state == Z_NULL || strm->zfree == (free_func)0)

Allow a linker script to be provided for VPL as it is for other U-Boot phases.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
common/spl/Kconfig.vpl | 13 +++++++++++++ 1 file changed, 13 insertions(+)
diff --git a/common/spl/Kconfig.vpl b/common/spl/Kconfig.vpl index 97dfc630152..cf6b36c8e38 100644 --- a/common/spl/Kconfig.vpl +++ b/common/spl/Kconfig.vpl @@ -9,6 +9,19 @@ config VPL_BANNER_PRINT info. Disabling this option could be useful to reduce VPL boot time (e.g. approx. 6 ms faster, when output on i.MX6 with 115200 baud).
+config VPL_LDSCRIPT + string "Linker script for the VPL stage" + default "arch/arm/cpu/armv8/u-boot-spl.lds" if ARM64 + default "arch/$(ARCH)/cpu/u-boot-spl.lds" + help + The VPL stage will usually require a different linker-script + (as it runs from a different memory region) than the regular + U-Boot stage. Set this to the path of the linker-script to + be used for VPL. + + May be left empty to trigger the Makefile infrastructure to + fall back to the linker-script used for the SPL stage. + config VPL_BOARD_INIT bool "Call board-specific initialization in VPL" help

Add support for moving from TPL->VPL->SPL so that the VPL build can fit properly into the boot flow.
Use #ifdefs to avoid creating unwanted symbols which Binman would then try (and perhaps fail) to provide.
Add debugging to indicate the next phase.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
common/spl/spl.c | 26 ++++++++++++++++++++------ 1 file changed, 20 insertions(+), 6 deletions(-)
diff --git a/common/spl/spl.c b/common/spl/spl.c index 7cfbab06419..6b75910e243 100644 --- a/common/spl/spl.c +++ b/common/spl/spl.c @@ -50,8 +50,10 @@ u32 *boot_params_ptr = NULL;
#if CONFIG_IS_ENABLED(BINMAN_UBOOT_SYMBOLS) /* See spl.h for information about this */ +#if defined(CONFIG_SPL_BUILD) binman_sym_declare(ulong, u_boot_any, image_pos); binman_sym_declare(ulong, u_boot_any, size); +#endif
#ifdef CONFIG_TPL binman_sym_declare(ulong, u_boot_spl_any, image_pos); @@ -179,9 +181,15 @@ ulong spl_get_image_pos(void) if (xpl_next_phase() == PHASE_VPL) return binman_sym(ulong, u_boot_vpl_any, image_pos); #endif - return xpl_next_phase() == PHASE_SPL ? - binman_sym(ulong, u_boot_spl_any, image_pos) : - binman_sym(ulong, u_boot_any, image_pos); +#if defined(CONFIG_TPL) && !defined(CONFIG_VPL) + if (xpl_next_phase() == PHASE_SPL) + return binman_sym(ulong, u_boot_spl_any, image_pos); +#endif +#if defined(CONFIG_SPL_BUILD) + return binman_sym(ulong, u_boot_any, image_pos); +#endif + + return BINMAN_SYM_MISSING; }
ulong spl_get_image_size(void) @@ -263,14 +271,20 @@ void spl_set_header_raw_uboot(struct spl_image_info *spl_image) */ if (u_boot_pos && u_boot_pos != BINMAN_SYM_MISSING) { /* Binman does not support separated entry addresses */ - spl_image->entry_point = u_boot_pos; - spl_image->load_addr = u_boot_pos; + spl_image->entry_point = spl_get_image_text_base(); + spl_image->load_addr = spl_get_image_text_base(); + spl_image->size = spl_get_image_size(); + log_debug("Next load addr %lx\n", spl_image->load_addr); } else { spl_image->entry_point = CONFIG_SYS_UBOOT_START; spl_image->load_addr = CONFIG_TEXT_BASE; + log_debug("Default load addr %x (u_boot_pos=%lx)\n", + CONFIG_TEXT_BASE, u_boot_pos); } spl_image->os = IH_OS_U_BOOT; - spl_image->name = "U-Boot"; + spl_image->name = xpl_name(xpl_next_phase()); + log_debug("Next phase: %s at %lx size %lx\n", spl_image->name, + spl_image->load_addr, (ulong)spl_image->size); }
__weak int spl_parse_board_header(struct spl_image_info *spl_image,
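For reference, a minimal sketch of the binman-symbol pattern this patch reshuffles (the symbol name is one used above; the helper function is hypothetical): a phase declares the symbol, binman fills in its value when packaging the image, and an unwritten symbol reads back as BINMAN_SYM_MISSING.

#include <binman_sym.h>
#include <linux/types.h>

binman_sym_declare(ulong, u_boot_vpl_any, image_pos);

/* Hypothetical helper: look up where binman placed the next phase */
static ulong next_phase_pos(void)
{
	ulong pos = binman_sym(ulong, u_boot_vpl_any, image_pos);

	if (pos == BINMAN_SYM_MISSING)
		return 0;	/* no VPL packaged into this image */

	return pos;
}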

Move the comment block for the fit_image_load() call back to where it belongs. Also fix a debug statement so that it names the function actually called.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
boot/vbe_common.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/boot/vbe_common.c b/boot/vbe_common.c index 0d51fe762c3..009bbcd3e76 100644 --- a/boot/vbe_common.c +++ b/boot/vbe_common.c @@ -202,14 +202,6 @@ int vbe_read_fit(struct udevice *blk, ulong area_offset, ulong area_size, phase = IS_ENABLED(CONFIG_TPL_BUILD) ? IH_PHASE_NONE : IS_ENABLED(CONFIG_VPL_BUILD) ? IH_PHASE_SPL : IH_PHASE_U_BOOT;
- /* - * Load the image from the FIT. We ignore any load-address information - * so in practice this simply locates the image in the external-data - * region and returns its address and size. Since we only loaded the FIT - * itself, only a part of the image will be present, at best. - */ - fit_uname = NULL; - fit_uname_config = NULL; log_debug("loading FIT\n");
if (xpl_phase() == PHASE_SPL && !IS_ENABLED(CONFIG_SANDBOX)) { @@ -220,11 +212,19 @@ int vbe_read_fit(struct udevice *blk, ulong area_offset, ulong area_size, log_debug("doing SPL from %s blksz %lx log2blksz %x area_offset %lx + fdt_size %lx\n", blk->name, desc->blksz, desc->log2blksz, area_offset, ALIGN(size, 4)); ret = spl_load_simple_fit(image, &info, area_offset, buf); - log_debug("spl_load_abrec_fit() ret=%d\n", ret); + log_debug("spl_load_simple_fit() ret=%d\n", ret);
return ret; }
+ /* + * Load the image from the FIT. We ignore any load-address information + * so in practice this simply locates the image in the external-data + * region and returns its address and size. Since we only loaded the FIT + * itself, only a part of the image will be present, at best. + */ + fit_uname = NULL; + fit_uname_config = NULL; ret = fit_image_load(&images, addr, &fit_uname, &fit_uname_config, IH_ARCH_DEFAULT, image_ph(phase, IH_TYPE_FIRMWARE), BOOTSTAGE_ID_FIT_SPL_START, FIT_LOAD_IGNORED,

When VBE operates within VPL it does not want the FDT to be changed. Provide a way to disable this feature.
Move the FIT_IMAGE_TINY condition out of spl_fit_record_loadable() so that both conditions are together. This makes the code easier to understand.
Replace the existing fit_loaded member, which is no longer used.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 boot/vbe_common.c    |  1 +
 common/spl/spl_fit.c |  6 ++----
 include/spl.h        | 19 ++++++++++++++-----
 3 files changed, 17 insertions(+), 9 deletions(-)
diff --git a/boot/vbe_common.c b/boot/vbe_common.c index 009bbcd3e76..801ab9da045 100644 --- a/boot/vbe_common.c +++ b/boot/vbe_common.c @@ -208,6 +208,7 @@ int vbe_read_fit(struct udevice *blk, ulong area_offset, ulong area_size, struct spl_load_info info;
spl_load_init(&info, h_vbe_load_read, desc, desc->blksz); + xpl_set_fdt_update(&info, false); xpl_set_phase(&info, IH_PHASE_U_BOOT); log_debug("doing SPL from %s blksz %lx log2blksz %x area_offset %lx + fdt_size %lx\n", blk->name, desc->blksz, desc->log2blksz, area_offset, ALIGN(size, 4)); diff --git a/common/spl/spl_fit.c b/common/spl/spl_fit.c index 6c068a842d9..d663737089d 100644 --- a/common/spl/spl_fit.c +++ b/common/spl/spl_fit.c @@ -515,9 +515,6 @@ static int spl_fit_record_loadable(const struct spl_fit_info *ctx, int index, const char *name; int node;
- if (CONFIG_IS_ENABLED(FIT_IMAGE_TINY)) - return 0; - ret = spl_fit_get_image_name(ctx, "loadables", index, &name); if (ret < 0) return ret; @@ -863,7 +860,8 @@ int spl_load_simple_fit(struct spl_image_info *spl_image, spl_image->entry_point = image_info.entry_point;
/* Record our loadables into the FDT */ - if (spl_image->fdt_addr) + if (!CONFIG_IS_ENABLED(FIT_IMAGE_TINY) && + xpl_get_fdt_update(info) && spl_image->fdt_addr) spl_fit_record_loadable(&ctx, index, spl_image->fdt_addr, &image_info); diff --git a/include/spl.h b/include/spl.h index 7155e9c67aa..850c64d4b19 100644 --- a/include/spl.h +++ b/include/spl.h @@ -345,7 +345,7 @@ typedef ulong (*spl_load_reader)(struct spl_load_info *load, ulong sector, * @priv: Private data for the device * @bl_len: Block length for reading in bytes * @phase: Image phase to load - * @fit_loaded: true if the FIT has been loaded, except for external data + * @no_fdt_update: true to update the FDT with any loadables that are loaded */ struct spl_load_info { spl_load_reader read; @@ -355,7 +355,7 @@ struct spl_load_info { #endif #if CONFIG_IS_ENABLED(BOOTMETH_VBE) u8 phase; - u8 fit_loaded; + u8 fdt_update; #endif };
@@ -395,12 +395,20 @@ static inline enum image_phase_t xpl_get_phase(struct spl_load_info *info) #endif }
-static inline bool xpl_get_fit_loaded(struct spl_load_info *info) +static inline void xpl_set_fdt_update(struct spl_load_info *info, + bool fdt_update) { #if CONFIG_IS_ENABLED(BOOTMETH_VBE) - return info->fit_loaded; + info->fdt_update = fdt_update; +#endif +} + +static inline enum image_phase_t xpl_get_fdt_update(struct spl_load_info *info) +{ +#if CONFIG_IS_ENABLED(BOOTMETH_VBE) + return info->fdt_update; #else - return false; + return true; #endif }
@@ -415,6 +423,7 @@ static inline void spl_load_init(struct spl_load_info *load, load->priv = priv; spl_set_bl_len(load, bl_len); xpl_set_phase(load, IH_PHASE_NONE); + xpl_set_fdt_update(load, true); }
/*
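A usage sketch based on the vbe_common.c hunk above (the wrapper function is hypothetical): the load_info is initialised, told which phase's images to accept, and told not to record loadables in the FDT.

#include <image.h>
#include <spl.h>

/* Hypothetical wrapper mirroring the VPL setup in vbe_common.c */
static void setup_vbe_load_info(struct spl_load_info *info,
				spl_load_reader h_read, void *priv,
				uint bl_len)
{
	spl_load_init(info, h_read, priv, bl_len);
	xpl_set_phase(info, IH_PHASE_U_BOOT);
	xpl_set_fdt_update(info, false);	/* leave the FDT alone */
}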

So far only VBE-simple is implemented in U-Boot. This supports a single image which can be updated in situ.
It is often necessary to support two images (A and B) so that the board is not bricked if the update is interrupted or is bad.
In some cases, a non-updatable recovery image is desirable, so that the board can be returned to a known-good state in the event of a serious failure.
Introduce ABrec which provides these features. It supports three independent images and the logic to select the desired one on boot.
While we are here, fix a debug message so that it names the function actually called. Provide a MAINTAINERS entry for VBE.
Note that fwupdated only supports VBE-simple so far, but support for ABrec will appear in time.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 MAINTAINERS         |   7 ++
 boot/Kconfig        |  73 ++++++++++
 boot/Makefile       |   4 +
 boot/vbe_abrec.c    |  83 +++++++++++
 boot/vbe_abrec.h    | 115 ++++++++++++++++++
 boot/vbe_abrec_fw.c | 276 ++++++++++++++++++++++++++++++++++++++
 boot/vbe_common.c   |   5 +
 boot/vbe_common.h   |  43 +++++++
 include/vbe.h       |  21 ++++
 9 files changed, 627 insertions(+)
 create mode 100644 boot/vbe_abrec.c
 create mode 100644 boot/vbe_abrec.h
 create mode 100644 boot/vbe_abrec_fw.c
diff --git a/MAINTAINERS b/MAINTAINERS index 9ba0c98cef2..d44233875df 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1802,6 +1802,13 @@ M: Abdellatif El Khlifi abdellatif.elkhlifi@arm.com S: Maintained F: test/lib/uuid.c
+VBE +M: Simon Glass sjg@chromium.org +S: Maintained +F: boot/vbe* +F: common/spl_reloc.c +F: include/vbe.h + VIDEO M: Anatolij Gustschin agust@denx.de S: Maintained diff --git a/boot/Kconfig b/boot/Kconfig index 73106f7a617..415d9fdfb80 100644 --- a/boot/Kconfig +++ b/boot/Kconfig @@ -740,6 +740,20 @@ config BOOTMETH_VBE_SIMPLE firmware image in boot media such as MMC. It does not support any sort of rollback, recovery or A/B boot.
+config BOOTMETH_VBE_ABREC + bool "Bootdev support for VBE 'a/b/recovery' method" + imply SPL_CRC8 + imply VPL_CRC8 + help + Enables support for VBE 'abrec' boot. This allows updating one of an + A or B firmware image in boot media such as MMC. The new firmware is + tried and if it boots, it is copied to the other image, so that both + A and B have the same version. If neither firmware image passes the + verification step, a recovery image is booted. This method will + eventually provide rollback protection as well. + +if BOOTMETH_VBE_SIMPLE + config BOOTMETH_VBE_SIMPLE_OS bool "Bootdev support for VBE 'simple' method OS phase" default y @@ -798,6 +812,65 @@ config TPL_BOOTMETH_VBE_SIMPLE_FW
endif # BOOTMETH_VBE_SIMPLE
+if BOOTMETH_VBE_ABREC + +config SPL_BOOTMETH_VBE_ABREC + bool "Bootdev support for VBE 'abrec' method (SPL)" + depends on SPL + default y if VPL + help + Enables support for VBE 'abrec' boot. The SPL part of this + implementation simply loads U-Boot from the image selected by the + VPL phase. + +config TPL_BOOTMETH_VBE_ABREC + bool "Bootdev support for VBE 'abrec' method (TPL)" + depends on TPL + select TPL_FIT + default y + help + Enables support for VBE 'abrec' boot. The TPL part of this + implementation simply jumps to VPL after device init is completed. + +config VPL_BOOTMETH_VBE_ABREC + bool "Bootdev support for VBE 'abrec' method (VPL)" + depends on VPL + default y + help + Enables support for VBE 'abrec' boot. The VPL part of this + implementation selects which SPL to use (A, B or recovery) and then + boots into SPL. + +config SPL_BOOTMETH_VBE_ABREC_FW + bool "Bootdev support for VBE 'abrec' method firmware phase (SPL)" + depends on SPL + default y if VPL + help + Enables support for VBE 'abrec' boot. The SPL part of this + implementation simply loads U-Boot from the image selected by the + VPL phase. + +config TPL_BOOTMETH_VBE_ABREC_FW + bool "Bootdev support for VBE 'abrec' method firmware phase (TPL)" + depends on TPL + default y if VPL + help + Enables support for VBE 'abrec' boot. The TPL part of this + implementation simply jumps to VPL after device init is completed. + +config VPL_BOOTMETH_VBE_ABREC_FW + bool "Bootdev support for VBE 'abrec' method firmware phase (VPL)" + depends on VPL + default y + help + Enables support for VBE 'abrec' boot. The VPL part of this + implementation selects which SPL to use (A, B or recovery) and then + boots into SPL. + +endif # BOOTMETH_VBE_ABREC + +endif # BOOTMETH_VBE + config EXPO bool "Support for expos - groups of scenes displaying a UI" depends on VIDEO diff --git a/boot/Makefile b/boot/Makefile index 30529bac367..284ade3def0 100644 --- a/boot/Makefile +++ b/boot/Makefile @@ -71,3 +71,7 @@ obj-$(CONFIG_$(PHASE_)BOOTMETH_VBE_SIMPLE_FW) += vbe_simple_fw.o obj-$(CONFIG_$(PHASE_)BOOTMETH_VBE_SIMPLE_OS) += vbe_simple_os.o
obj-$(CONFIG_$(PHASE_)BOOTMETH_ANDROID) += bootmeth_android.o + +obj-$(CONFIG_$(PHASE_)BOOTMETH_VBE_ABREC) += vbe_abrec.o vbe_common.o +obj-$(CONFIG_$(PHASE_)BOOTMETH_VBE_ABREC_FW) += vbe_abrec_fw.o +obj-$(CONFIG_$(PHASE_)BOOTMETH_VBE_ABREC_OS) += vbe_abrec_os.o diff --git a/boot/vbe_abrec.c b/boot/vbe_abrec.c new file mode 100644 index 00000000000..6d0f622262d --- /dev/null +++ b/boot/vbe_abrec.c @@ -0,0 +1,83 @@ +// SPDX-License-Identifier: GPL-2.0+ +/* + * Verified Boot for Embedded (VBE) 'simple' method + * + * Copyright 2024 Google LLC + * Written by Simon Glass sjg@chromium.org + */ + +#define LOG_CATEGORY LOGC_BOOT + +#include <dm.h> +#include <memalign.h> +#include <mmc.h> +#include <dm/ofnode.h> +#include "vbe_abrec.h" + +int abrec_read_priv(ofnode node, struct abrec_priv *priv) +{ + memset(priv, '\0', sizeof(*priv)); + if (ofnode_read_u32(node, "area-start", &priv->area_start) || + ofnode_read_u32(node, "area-size", &priv->area_size) || + ofnode_read_u32(node, "version-offset", &priv->version_offset) || + ofnode_read_u32(node, "version-size", &priv->version_size) || + ofnode_read_u32(node, "state-offset", &priv->state_offset) || + ofnode_read_u32(node, "state-size", &priv->state_size)) + return log_msg_ret("read", -EINVAL); + ofnode_read_u32(node, "skip-offset", &priv->skip_offset); + priv->storage = strdup(ofnode_read_string(node, "storage")); + if (!priv->storage) + return log_msg_ret("str", -EINVAL); + + return 0; +} + +int abrec_read_nvdata(struct abrec_priv *priv, struct udevice *blk, + struct abrec_state *state) +{ + ALLOC_CACHE_ALIGN_BUFFER(u8, buf, MMC_MAX_BLOCK_LEN); + const struct vbe_nvdata *nvd = (struct vbe_nvdata *)buf; + uint flags; + int ret; + + ret = vbe_read_nvdata(blk, priv->area_start + priv->state_offset, + priv->state_size, buf); + if (ret == -EPERM) { + memset(buf, '\0', MMC_MAX_BLOCK_LEN); + log_warning("Starting with empty state\n"); + } else if (ret) { + return log_msg_ret("nv", ret); + } + + state->fw_vernum = nvd->fw_vernum; + flags = nvd->flags; + state->try_count = flags & VBEF_TRY_COUNT_MASK; + state->try_b = flags & VBEF_TRY_B; + state->recovery = flags & VBEF_RECOVERY; + state->pick = (flags & VBEF_PICK_MASK) >> VBEF_PICK_SHIFT; + + return 0; +} + +int abrec_read_state(struct udevice *dev, struct abrec_state *state) +{ + struct abrec_priv *priv = dev_get_priv(dev); + struct udevice *blk; + int ret; + + ret = vbe_get_blk(priv->storage, &blk); + if (ret) + return log_msg_ret("blk", ret); + + ret = vbe_read_version(blk, priv->area_start + priv->version_offset, + state->fw_version, MAX_VERSION_LEN); + if (ret) + return log_msg_ret("ver", ret); + log_debug("version=%s\n", state->fw_version); + + ret = abrec_read_nvdata(priv, blk, state); + if (ret) + return log_msg_ret("nvd", ret); + + return 0; +} diff --git a/boot/vbe_abrec.h b/boot/vbe_abrec.h new file mode 100644 index 00000000000..63c73297351 --- /dev/null +++ b/boot/vbe_abrec.h @@ -0,0 +1,115 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Verified Boot for Embedded (VBE) vbe-abrec common file + * + * Copyright 2024 Google LLC + * Written by Simon Glass sjg@chromium.org + */ + +#ifndef __VBE_ABREC_H +#define __VBE_ABREC_H + +#include <vbe.h> +#include <dm/ofnode_decl.h> + +#include "vbe_common.h" + +struct bootflow; +struct udevice; + +/** + * struct abrec_priv - information read from the device tree + * + * @area_start: Start offset of the VBE area in the device, in bytes + * @area_size: Total size of the VBE area + * @skip_offset: Size of an initial part of the device to skip, when using + 
* area_start. This is effectively added to area_start to calculate the + * actual start position on the device + * @state_offset: Offset from area_start of the VBE state, in bytes + * @state_size: Size of the state information + * @version_offset: Offset from from area_start of the VBE version info + * @version_size: Size of the version info + * @storage: Storage device to use, in the form <uclass><devnum>, e.g. "mmc1" + */ +struct abrec_priv { + u32 area_start; + u32 area_size; + u32 skip_offset; + u32 state_offset; + u32 state_size; + u32 version_offset; + u32 version_size; + const char *storage; +}; + +/** struct abrec_state - state information read from media + * + * The state on the media is converted into this more code-friendly structure. + * + * @fw_version: Firmware version string + * @fw_vernum: Firmware version number + * @try_count: Number of times the B firmware has been tried + * @try_b: true to try B firmware on the next boot + * @recovery: true to enter recovery firmware on the next boot + * @try_result: Result of trying to boot with the last firmware + * @pick: Firmware which was chosen in this boot + */ +struct abrec_state { + char fw_version[MAX_VERSION_LEN]; + u32 fw_vernum; + u8 try_count; + bool try_b; + bool recovery; + enum vbe_try_result try_result; + enum vbe_pick_t pick; +}; + +/** + * abrec_read_fw_bootflow() - Read a bootflow for firmware + * + * Locates and loads the firmware image (FIT) needed for the next phase. The FIT + * should ideally use external data, to reduce the amount of it that needs to be + * read. + * + * @bdev: bootdev device containing the firmwre + * @bflow: Place to put the created bootflow, on success + * @return 0 if OK, -ve on error + */ +int abrec_read_bootflow_fw(struct udevice *dev, struct bootflow *bflow); + +/** + * vbe_simple_read_state() - Read the VBE simple state information + * + * @dev: VBE bootmeth + * @state: Place to put the state + * @return 0 if OK, -ve on error + */ +int abrec_read_state(struct udevice *dev, struct abrec_state *state); + +/** + * abrec_read_nvdata() - Read non-volatile data from a block device + * + * Reads the ABrec VBE nvdata from a device. This function reads a single block + * from the device, so the nvdata cannot be larger than that. 
+ * + * @blk: Device to read from + * @offset: Offset to read, in bytes + * @size: Number of bytes to read + * @buf: Buffer to hold the data + * Return: 0 if OK, -E2BIG if @size > block size, -EBADF if the offset is not + * block-aligned, -EIO if an I/O error occurred, -EPERM if the header version is + * incorrect, the header size is invalid or the data fails its CRC check + */ +int abrec_read_nvdata(struct abrec_priv *priv, struct udevice *blk, + struct abrec_state *state); + +/** + * abrec_read_priv() - Read info from the devicetree + * + * @node: Node to read from + * @priv: Information to fill in + * Return 0 if OK, -EINVAL if something is wrong with the devicetree node + */ +int abrec_read_priv(ofnode node, struct abrec_priv *priv); + +#endif /* __VBE_ABREC_H */ diff --git a/boot/vbe_abrec_fw.c b/boot/vbe_abrec_fw.c new file mode 100644 index 00000000000..d52bd9ddff0 --- /dev/null +++ b/boot/vbe_abrec_fw.c @@ -0,0 +1,276 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Verified Boot for Embedded (VBE) loading firmware phases + * + * Copyright 2022 Google LLC + * Written by Simon Glass sjg@chromium.org + */ + +#define LOG_CATEGORY LOGC_BOOT + +#include <binman_sym.h> +#include <bloblist.h> +#include <bootdev.h> +#include <bootflow.h> +#include <bootmeth.h> +#include <bootstage.h> +#include <display_options.h> +#include <dm.h> +#include <image.h> +#include <log.h> +#include <mapmem.h> +#include <memalign.h> +#include <mmc.h> +#include <spl.h> +#include <vbe.h> +#include <dm/device-internal.h> +#include "vbe_abrec.h" +#include "vbe_common.h" + +binman_sym_declare(ulong, spl_a, image_pos); +binman_sym_declare(ulong, spl_b, image_pos); +binman_sym_declare(ulong, spl_recovery, image_pos); + +binman_sym_declare(ulong, spl_a, size); +binman_sym_declare(ulong, spl_b, size); +binman_sym_declare(ulong, spl_recovery, size); + +binman_sym_declare(ulong, u_boot_a, image_pos); +binman_sym_declare(ulong, u_boot_b, image_pos); +binman_sym_declare(ulong, u_boot_recovery, image_pos); + +binman_sym_declare(ulong, u_boot_a, size); +binman_sym_declare(ulong, u_boot_b, size); +binman_sym_declare(ulong, u_boot_recovery, size); + +binman_sym_declare(ulong, vpl, image_pos); +binman_sym_declare(ulong, vpl, size); + +static const char *const pick_names[] = {"A", "B", "Recovery"}; + +/** + * abrec_read_bootflow_fw() - Create a bootflow for firmware + * + * Locates and loads the firmware image (FIT) needed for the next phase. The FIT + * should ideally use external data, to reduce the amount of it that needs to be + * read. 
+ * + * @bdev: bootdev device containing the firmwre + * @meth: VBE abrec bootmeth + * @blow: Place to put the created bootflow, on success + * @return 0 if OK, -ve on error + */ +int abrec_read_bootflow_fw(struct udevice *dev, struct bootflow *bflow) +{ + struct udevice *media = dev_get_parent(bflow->dev); + struct udevice *meth = bflow->method; + struct abrec_priv *priv = dev_get_priv(meth); + ulong len, load_addr; + struct udevice *blk; + int ret; + + log_debug("media=%s\n", media->name); + ret = blk_get_from_parent(media, &blk); + if (ret) + return log_msg_ret("med", ret); + + ret = vbe_read_fit(blk, priv->area_start + priv->skip_offset, + priv->area_size, NULL, &load_addr, &len, &bflow->name); + if (ret) + return log_msg_ret("vbe", ret); + + /* set up the bootflow with the info we obtained */ + bflow->blk = blk; + bflow->buf = map_sysmem(load_addr, len); + bflow->size = len; + + return 0; +} + +static int abrec_run_vpl(struct udevice *blk, struct spl_image_info *image, + struct vbe_handoff *handoff) +{ + uint flags, tries, prev_result; + struct abrec_priv priv; + struct abrec_state state; + enum vbe_pick_t pick; + uint try_count; + ulong offset, size; + ulong ub_offset, ub_size; + ofnode node; + int ret; + + node = vbe_get_node(); + if (!ofnode_valid(node)) + return log_msg_ret("nod", -EINVAL); + + ret = abrec_read_priv(node, &priv); + if (ret) + return log_msg_ret("pri", ret); + + ret = abrec_read_nvdata(&priv, blk, &state); + if (ret) + return log_msg_ret("sta", ret); + + prev_result = state.try_result; + try_count = state.try_count; + + if (state.recovery) { + pick = VBEP_RECOVERY; + + /* if we are trying B but ran out of tries, use A */ + } else if ((prev_result == VBETR_TRYING) && !tries) { + pick = VBEP_A; + state.try_result = VBETR_BAD; + + /* if requested, try B */ + } else if (flags & VBEF_TRY_B) { + pick = VBEP_B; + + /* decrement the try count if not already zero */ + if (try_count) + try_count--; + state.try_result = VBETR_TRYING; + } else { + pick = VBEP_A; + } + state.try_count = try_count; + + switch (pick) { + case VBEP_A: + offset = binman_sym(ulong, spl_a, image_pos); + size = binman_sym(ulong, spl_a, size); + ub_offset = binman_sym(ulong, u_boot_a, image_pos); + ub_size = binman_sym(ulong, u_boot_a, size); + break; + case VBEP_B: + offset = binman_sym(ulong, spl_b, image_pos); + size = binman_sym(ulong, spl_b, size); + ub_offset = binman_sym(ulong, u_boot_b, image_pos); + ub_size = binman_sym(ulong, u_boot_b, size); + break; + case VBEP_RECOVERY: + offset = binman_sym(ulong, spl_recovery, image_pos); + size = binman_sym(ulong, spl_recovery, size); + ub_offset = binman_sym(ulong, u_boot_recovery, image_pos); + ub_size = binman_sym(ulong, u_boot_recovery, size); + break; + } + log_debug("pick=%d, offset=%lx size=%lx\n", pick, offset, size); + log_info("VBE: Firmware pick %s at %lx\n", pick_names[pick], offset); + + ret = vbe_read_fit(blk, offset, size, image, NULL, NULL, NULL); + if (ret) + return log_msg_ret("vbe", ret); + handoff->offset = ub_offset; + handoff->size = ub_size; + handoff->pick = pick; + image->load_addr = spl_get_image_text_base(); + image->entry_point = image->load_addr; + + return 0; +} + +static int abrec_run_spl(struct udevice *blk, struct spl_image_info *image, + struct vbe_handoff *handoff) +{ + int ret; + + log_info("VBE: Firmware pick %s at %lx\n", pick_names[handoff->pick], + handoff->offset); + ret = vbe_read_fit(blk, handoff->offset, handoff->size, image, NULL, + NULL, NULL); + if (ret) + return log_msg_ret("vbe", ret); + image->load_addr 
= spl_get_image_text_base(); + image->entry_point = image->load_addr; + + return 0; +} + +static int abrec_load_from_image(struct spl_image_info *image, + struct spl_boot_device *bootdev) +{ + struct vbe_handoff *handoff; + int ret; + + printf("load: %s\n", ofnode_read_string(ofnode_root(), "model")); + if (xpl_phase() != PHASE_VPL && xpl_phase() != PHASE_SPL && + xpl_phase() != PHASE_TPL) + return -ENOENT; + + ret = bloblist_ensure_size(BLOBLISTT_VBE, sizeof(struct vbe_handoff), + 0, (void **)&handoff); + if (ret) + return log_msg_ret("ro", ret); + + if (USE_BOOTMETH) { + struct udevice *meth, *bdev; + struct abrec_priv *priv; + struct bootflow bflow; + + vbe_find_first_device(&meth); + if (!meth) + return log_msg_ret("vd", -ENODEV); + log_debug("vbe dev %s\n", meth->name); + ret = device_probe(meth); + if (ret) + return log_msg_ret("probe", ret); + + priv = dev_get_priv(meth); + log_debug("abrec %s\n", priv->storage); + ret = bootdev_find_by_label(priv->storage, &bdev, NULL); + if (ret) + return log_msg_ret("bd", ret); + log_debug("bootdev %s\n", bdev->name); + + bootflow_init(&bflow, bdev, meth); + ret = bootmeth_read_bootflow(meth, &bflow); + log_debug("\nfw ret=%d\n", ret); + if (ret) + return log_msg_ret("rd", ret); + + /* jump to the image */ + image->flags = SPL_SANDBOXF_ARG_IS_BUF; + image->arg = bflow.buf; + image->size = bflow.size; + log_debug("Image: %s at %p size %x\n", bflow.name, bflow.buf, + bflow.size); + + /* this is not used from now on, so free it */ + bootflow_free(&bflow); + } else { + struct udevice *media; + struct udevice *blk; + + ret = uclass_get_device_by_seq(UCLASS_MMC, 1, &media); + if (ret) + return log_msg_ret("vdv", ret); + ret = blk_get_from_parent(media, &blk); + if (ret) + return log_msg_ret("med", ret); + + if (xpl_phase() == PHASE_TPL) { + ulong offset, size; + + offset = binman_sym(ulong, vpl, image_pos); + size = binman_sym(ulong, vpl, size); + log_debug("VPL at offset %lx size %lx\n", offset, size); + ret = vbe_read_fit(blk, offset, size, image, NULL, + NULL, NULL); + if (ret) + return log_msg_ret("vbe", ret); + } else if (xpl_phase() == PHASE_VPL) { + ret = abrec_run_vpl(blk, image, handoff); + } else { + ret = abrec_run_spl(blk, image, handoff); + } + } + + /* Record that VBE was used in this phase */ + handoff->phases |= 1 << xpl_phase(); + + return 0; +} +SPL_LOAD_IMAGE_METHOD("vbe_abrec", 5, BOOT_DEVICE_VBE, + abrec_load_from_image); diff --git a/boot/vbe_common.c b/boot/vbe_common.c index 801ab9da045..a86986d86e9 100644 --- a/boot/vbe_common.c +++ b/boot/vbe_common.c @@ -374,3 +374,8 @@ int vbe_read_fit(struct udevice *blk, ulong area_offset, ulong area_size,
return 0; } + +ofnode vbe_get_node(void) +{ + return ofnode_path("/bootstd/firmware0"); +} diff --git a/boot/vbe_common.h b/boot/vbe_common.h index 84117815a19..493cbdc3694 100644 --- a/boot/vbe_common.h +++ b/boot/vbe_common.h @@ -9,6 +9,8 @@ #ifndef __VBE_COMMON_H #define __VBE_COMMON_H
+#include <dm/ofnode_decl.h> +#include <linux/bitops.h> #include <linux/types.h>
struct spl_image_info; @@ -38,6 +40,40 @@ enum { NVD_HDR_VER_CUR = 1, /* current version */ };
+/** + * enum vbe_try_result - result of trying a firmware pick + * + * @VBETR_UNKNOWN: Unknown / invalid result + * @VBETR_TRYING: Firmware pick is being tried + * @VBETR_OK: Firmware pick is OK and can be used from now on + * @VBETR_BAD: Firmware pick is bad and should be removed + */ +enum vbe_try_result { + VBETR_UNKNOWN, + VBETR_TRYING, + VBETR_OK, + VBETR_BAD, +}; + +/** + * enum vbe_flags - flags controlling operation + * + * @VBEF_TRY_COUNT_MASK: mask for the 'try count' value + * @VBEF_TRY_B: Try the B slot + * @VBEF_RECOVERY: Use recovery slot + */ +enum vbe_flags { + VBEF_TRY_COUNT_MASK = 0x3, + VBEF_TRY_B = BIT(2), + VBEF_RECOVERY = BIT(3), + + VBEF_RESULT_SHIFT = 4, + VBEF_RESULT_MASK = 3 << VBEF_RESULT_SHIFT, + + VBEF_PICK_SHIFT = 6, + VBEF_PICK_MASK = 3 << VBEF_PICK_SHIFT, +}; + /** * struct vbe_nvdata - basic storage format for non-volatile data * @@ -134,4 +170,11 @@ int vbe_read_fit(struct udevice *blk, ulong area_offset, ulong area_size, struct spl_image_info *image, ulong *load_addrp, ulong *lenp, char **namep);
+/** + * vbe_get_node() - Get the node containing the VBE settings + * + * Return: VBE node (typically "/bootstd/firmware0") + */ +ofnode vbe_get_node(void); + #endif /* __VBE_ABREC_H */ diff --git a/include/vbe.h b/include/vbe.h index 56bff63362f..61bfa0e557d 100644 --- a/include/vbe.h +++ b/include/vbe.h @@ -10,6 +10,8 @@ #ifndef __VBE_H #define __VBE_H
+#include <linux/types.h> + /** * enum vbe_phase_t - current phase of VBE * @@ -25,13 +27,32 @@ enum vbe_phase_t { VBE_PHASE_OS, };
+/** + * enum vbe_pick_t - indicates which firmware is picked + * + * @VBEFT_A: Firmware A + * @VBEFT_B: Firmware B + * @VBEFT_RECOVERY: Recovery firmware + */ +enum vbe_pick_t { + VBEP_A, + VBEP_B, + VBEP_RECOVERY, +}; + /** * struct vbe_handoff - information about VBE progress * + * @offset: Offset of the FIT to use for SPL onwards + * @size: Size of the area containing the FIT * @phases: Indicates which phases used the VBE bootmeth (1 << PHASE_...) + * @pick: Indicates which firmware pick was used (enum vbe_pick_t) */ struct vbe_handoff { + ulong offset; + ulong size; u8 phases; + u8 pick; };
/**
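To summarise the narrative above, a condensed sketch of the selection logic that abrec_run_vpl() implements (simplified, using the types added by this patch; not a verbatim copy of the code): recovery always wins, a B image that has run out of tries falls back to A, and otherwise B is tried while the try flag is set.

#include <vbe.h>
#include "vbe_abrec.h"

/* Simplified restatement of the pick logic, for illustration only */
static enum vbe_pick_t choose_pick(const struct abrec_state *state)
{
	if (state->recovery)
		return VBEP_RECOVERY;
	if (state->try_result == VBETR_TRYING && !state->try_count)
		return VBEP_A;		/* B ran out of tries; go back to A */
	if (state->try_b)
		return VBEP_B;		/* give the new firmware a try */

	return VBEP_A;
}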

On Sun, Jan 26, 2025 at 11:43:10AM -0700, Simon Glass wrote:
> This includes the VBE ABrec (A/B/recovery) implementation as well as a number of patches needed to make it work:
> - marking some code as used by SPL_RELOC
> - selection of images from a FIT based on the boot phase
> - removal of unwanted hash code which increases code-size too much
> - a few Kconfig-related additions for VPL
>
> Note: The goal for the next series (part H) is to enable VBE on rk3399-generic, i.e. able to boot on multiple rk3399-based boards with only the TPL phase being different for each board.
Size-wise, this seems fine and it's nice to be able to drop crc16 from SPL on some platforms. I'll try to find time to look at the patches themselves soon as well.