[U-Boot] [PATCH v2 00/37] binman: Add CBFS support

CBFS (Coreboot Filesystem) is a simple ROM-based filesystem used to locate data needed during booting. It has some x86-specific features, such as defining a boot block at the top of the ROM image. At run time, boot loaders can locate files in the ROM by searching for a header and magic strings.
It is common to have multiple, separate CBFSs in the ROM.
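To illustrate the run-time lookup described above, here is a minimal Python sketch of scanning a ROM for CBFS file headers. It assumes the standard CBFS file-header layout (8-byte "LARCHIVE" magic followed by big-endian len, type, attributes_offset and offset fields); it is illustrative only and not part of this series:

```python
import struct

CBFS_FILE_MAGIC = b'LARCHIVE'  # marks the start of each CBFS file header

def scan_cbfs_files(rom, align=64):
    """Yield (name, data_offset, length) for each CBFS file found in 'rom'.

    Assumes the standard CBFS file header: 8-byte magic, then four
    big-endian 32-bit fields (len, type, attributes_offset, offset),
    followed by a NUL-terminated file name. 'offset' is measured from
    the start of the file header to the start of the file data.
    """
    pos = 0
    while pos + 24 <= len(rom):
        if rom[pos:pos + 8] != CBFS_FILE_MAGIC:
            pos += align  # headers are aligned, so step by the alignment
            continue
        length, ftype, attr_offset, offset = struct.unpack_from(
            '>IIII', rom, pos + 8)
        name = rom[pos + 24:rom.index(b'\0', pos + 24)].decode()
        yield name, pos + offset, length
        # Skip past this file's data, re-aligning for the next header
        pos = (pos + offset + length + align - 1) & ~(align - 1)
```

This mirrors what a boot loader does at run time: no index is needed, since each file is self-describing and found by its magic.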
Binman provides its own way of locating data, typically with a device tree built into the image, or linker symbols when space is short. But it is useful to be able to create images containing CBFSs with binman, since it allows the entire image structure to be defined in one place, rather than spread out over multiple cbfstool invocations, etc.
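For example, a CBFS can be described directly in binman's device-tree image description. The node below is loosely based on the tests added later in this series (the cbfs-type and cbfs-compress properties are introduced by the 'binman: Add support for CBFS entries' patch), so treat it as a sketch rather than final syntax:

```dts
binman {
	size = <0x100000>;
	cbfs {
		size = <0x100000>;
		u-boot {
			cbfs-type = "raw";
		};
		u-boot-dtb {
			cbfs-type = "raw";
			cbfs-compress = "lz4";
		};
	};
};
```

With this, the CBFS sits alongside any other binman entries, and the whole ROM layout stays in one file.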
The approach taken here is to create the CBFS directly in binman rather than calling out to cbfstool. The latter is quite slow when invoked many times, and the tool itself has no tests.
Also included is ifwitool, lifted from coreboot, which is needed to package images for Intel Apollo Lake SoCs. This seems easier than implementing the same functionality in binman.
Tests are provided for the new binman functionality, although ifwitool itself has no tests and is exercised only by invocation from binman.
This series does not support all CBFS features, just raw binaries and ELF files (called 'stages' in CBFS).
Changes in v2:
- Deal with travis's old lz4 version by skipping tests as necessary
- Install lzma tool in travis
- Skip use of cbfstool in tests if it is not available
Simon Glass (37):
  x86: Add ifwitool for Intel Integrated Firmware Image
  cbfs: Add an enum and comment for the magic number
  cbfs: Rename checksum to attributes_offset
  tools: Drop duplicate raise_on_error argument
  binman: Fix comment in bsection.GetEntries()
  binman: Correct two typos in function names in ftest
  binman: Add coverage tools info for Python 3
  patman: Add a way to set the search path for tools
  binman: Add a --toolpath option to set the tool search path
  binman: Add missing comments to bsection
  binman: Add missing comments to entry
  binman: Tidy up help for --indir
  binman: Use a better error for missing Intel descriptor
  binman: Detect skipped tests
  binman: Add a function to create a sample ELF file
  binman: Add a function to decode an ELF file
  binman: Ensure that coverage has access to site packages
  binman: Assume Intel descriptor is at the start of the image
  binman: Don't assume there is an ME region
  binman: Update entry.SetOffsetSize to be optional
  binman: Allow text directly in the node
  patman: Add functions to compress and decompress data
  binman: Use the tools.Decompress method
  binman: Drop unnecessary debug handling
  binman: Use tools compression function for blob handling
  binman: Correct comment in u_boot_spl_elf
  binman: Support ELF files for TPL
  binman: Fix up the _DoTestFile() function -u argument
  binman: Allow verbosity control when running tests
  binman: Allow preserving test directories
  binman: Pass the toolpath to tests
  patman: Add a function to write ifwitool
  binman: Add a utility library for coreboot CBFS
  binman: Add support for CBFS entries
  binman: Add support for Intel IFWI entries
  binman: Pad empty areas of the CBFS with files
  binman: Add support for fixed-offset files in CBFS
 .travis.yml                                   |    3 +-
 fs/cbfs/cbfs.c                                |    4 +-
 include/cbfs.h                                |   16 +-
 test/run                                      |    9 +-
 tools/Makefile                                |    3 +
 tools/binman/README                           |   25 +-
 tools/binman/README.entries                   |  204 +-
 tools/binman/binman.py                        |   52 +-
 tools/binman/bsection.py                      |   42 +-
 tools/binman/cbfs_util.py                     |  861 ++++++
 tools/binman/cbfs_util_test.py                |  625 +++++
 tools/binman/cmdline.py                       |    8 +-
 tools/binman/control.py                       |    3 +
 tools/binman/elf.py                           |  174 ++
 tools/binman/elf_test.py                      |   41 +
 tools/binman/entry.py                         |   38 +-
 tools/binman/etype/blob.py                    |   16 +-
 tools/binman/etype/cbfs.py                    |  205 ++
 tools/binman/etype/intel_descriptor.py        |   16 +-
 tools/binman/etype/intel_ifwi.py              |  100 +
 tools/binman/etype/intel_me.py                |    2 +
 tools/binman/etype/text.py                    |   23 +-
 tools/binman/etype/u_boot_spl_elf.py          |    2 +-
 tools/binman/etype/u_boot_tpl_elf.py          |   24 +
 tools/binman/ftest.py                         |  266 +-
 tools/binman/test/066_text.dts                |    5 +
 tools/binman/test/096_elf.dts                 |    2 +
 tools/binman/test/102_cbfs_raw.dts            |   20 +
 tools/binman/test/103_cbfs_raw_ppc.dts        |   21 +
 tools/binman/test/104_cbfs_stage.dts          |   19 +
 tools/binman/test/105_cbfs_raw_compress.dts   |   26 +
 tools/binman/test/106_cbfs_bad_arch.dts       |   15 +
 tools/binman/test/107_cbfs_no_size.dts        |   13 +
 tools/binman/test/108_cbfs_no_contents.dts    |   17 +
 tools/binman/test/109_cbfs_bad_compress.dts   |   18 +
 tools/binman/test/110_cbfs_name.dts           |   24 +
 tools/binman/test/111_x86-rom-ifwi.dts        |   29 +
 tools/binman/test/112_x86-rom-ifwi-nodesc.dts |   28 +
 tools/binman/test/113_x86-rom-ifwi-nodata.dts |   29 +
 tools/binman/test/114_cbfs_offset.dts         |   26 +
 tools/binman/test/fitimage.bin.gz             |  Bin 0 -> 8418 bytes
 tools/binman/test/ifwi.bin.gz                 |  Bin 0 -> 1884 bytes
 tools/ifwitool.c                              | 2314 +++++++++++++++++
 tools/patman/command.py                       |    4 +-
 tools/patman/tools.py                         |  141 +-
 45 files changed, 5434 insertions(+), 79 deletions(-)
 create mode 100644 tools/binman/cbfs_util.py
 create mode 100755 tools/binman/cbfs_util_test.py
 create mode 100644 tools/binman/etype/cbfs.py
 create mode 100644 tools/binman/etype/intel_ifwi.py
 create mode 100644 tools/binman/etype/u_boot_tpl_elf.py
 create mode 100644 tools/binman/test/102_cbfs_raw.dts
 create mode 100644 tools/binman/test/103_cbfs_raw_ppc.dts
 create mode 100644 tools/binman/test/104_cbfs_stage.dts
 create mode 100644 tools/binman/test/105_cbfs_raw_compress.dts
 create mode 100644 tools/binman/test/106_cbfs_bad_arch.dts
 create mode 100644 tools/binman/test/107_cbfs_no_size.dts
 create mode 100644 tools/binman/test/108_cbfs_no_contents.dts
 create mode 100644 tools/binman/test/109_cbfs_bad_compress.dts
 create mode 100644 tools/binman/test/110_cbfs_name.dts
 create mode 100644 tools/binman/test/111_x86-rom-ifwi.dts
 create mode 100644 tools/binman/test/112_x86-rom-ifwi-nodesc.dts
 create mode 100644 tools/binman/test/113_x86-rom-ifwi-nodata.dts
 create mode 100644 tools/binman/test/114_cbfs_offset.dts
 create mode 100644 tools/binman/test/fitimage.bin.gz
 create mode 100644 tools/binman/test/ifwi.bin.gz
 create mode 100644 tools/ifwitool.c

Some Intel SoCs from about 2016 onwards boot via an internal microcontroller using an 'IFWI' image. This is a special format which can hold firmware images; in U-Boot's case it holds u-boot-tpl.bin.
Add this tool, taken from coreboot, so that we can build bootable images for Apollo Lake SoCs.
This tool itself has no tests. Some amount of coverage will be provided by the binman tests that use it, so enable building the tool on sandbox.
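The IFWI format handled by ifwitool is organised around a Boot Partition Descriptor Table (BPDT). As a rough illustration of the layout defined by the structs in this patch (struct bpdt_header followed by descriptor_count bpdt_entry records, all fields little-endian), here is a Python sketch; it is illustrative only and not part of the tool:

```python
import struct

BPDT_SIGNATURE = 0x000055AA  # marks the start of a BPDT or S-BPDT

def parse_bpdt(data):
    """Parse a BPDT header plus its entries from the start of 'data'.

    Layout follows ifwitool's struct bpdt_header (signature,
    descriptor_count, bpdt_version, xor_redundant_block, ifwi_version,
    fit_tool_version) and struct bpdt_entry (type, flags, offset, size).
    Returns (bpdt_version, entries).
    """
    (signature, descriptor_count, bpdt_version, xor_redundant_block,
     ifwi_version, fit_tool_version) = struct.unpack_from('<IHHIIQ', data, 0)
    if signature != BPDT_SIGNATURE:
        raise ValueError('not a BPDT')
    entries = []
    pos = struct.calcsize('<IHHIIQ')  # 24-byte header
    for _ in range(descriptor_count):
        # Each entry: u16 type, u16 flags, u32 offset, u32 size (12 bytes)
        entries.append(struct.unpack_from('<HHII', data, pos))
        pos += 12
    return bpdt_version, entries
```

Entry offsets are relative to the start of the logical boot partition, which is why the tool tracks the IFWI start offset separately when printing file offsets.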
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/Makefile   |    3 +
 tools/ifwitool.c | 2314 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2317 insertions(+)
 create mode 100644 tools/ifwitool.c
diff --git a/tools/Makefile b/tools/Makefile
index 33e90a8025a..87d81a3d416 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -175,6 +175,9 @@ HOSTCFLAGS_mkexynosspl.o := -pedantic
 ifdtool-objs := $(LIBFDT_OBJS) ifdtool.o
 hostprogs-$(CONFIG_X86) += ifdtool
+ifwitool-objs := ifwitool.o
+hostprogs-$(CONFIG_X86)$(CONFIG_SANDBOX) += ifwitool
+
 hostprogs-$(CONFIG_MX23) += mxsboot
 hostprogs-$(CONFIG_MX28) += mxsboot
 HOSTCFLAGS_mxsboot.o := -pedantic
diff --git a/tools/ifwitool.c b/tools/ifwitool.c
new file mode 100644
index 00000000000..476f0a3c6fc
--- /dev/null
+++ b/tools/ifwitool.c
@@ -0,0 +1,2314 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * ifwitool, CLI utility for Integrated Firmware Image (IFWI) manipulation
+ *
+ * This is taken from the Coreboot project
+ */
+
+#include <assert.h>
+#include <stdbool.h>
+#include <getopt.h>
+#include "os_support.h"
+
+#define __packed		__attribute__((packed))
+#define KiB			1024
+#define ALIGN(x, a)		__ALIGN_MASK((x), (typeof(x))(a) - 1)
+#define __ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))
+#define ARRAY_SIZE(x)		(sizeof(x) / sizeof((x)[0]))
+
+/*
+ * min()/max()/clamp() macros that also do
+ * strict type-checking.. See the
+ * "unnecessary" pointer comparison.
+ */
+#define min(x, y) ({ \
+	typeof(x) _min1 = (x); \
+	typeof(y) _min2 = (y); \
+	(void)(&_min1 == &_min2); \
+	_min1 < _min2 ? _min1 : _min2; })
+
+#define max(x, y) ({ \
+	typeof(x) _max1 = (x); \
+	typeof(y) _max2 = (y); \
+	(void)(&_max1 == &_max2); \
+	_max1 > _max2 ? _max1 : _max2; })
+
+static int verbose = 1;
+
+/* Buffer and file I/O */
+struct buffer {
+	char *name;
+	char *data;
+	size_t offset;
+	size_t size;
+};
+
+#define ERROR(...) { fprintf(stderr, "E: " __VA_ARGS__); }
+#define INFO(...) { if (verbose > 0) fprintf(stderr, "INFO: " __VA_ARGS__); }
+#define DEBUG(...) { if (verbose > 1) fprintf(stderr, "DEBUG: " __VA_ARGS__); }
+
+/*
+ * BPDT is Boot Partition Descriptor Table. It is located at the start of a
+ * logical boot partition(LBP). It stores information about the critical
+ * sub-partitions present within the LBP.
+ *
+ * S-BPDT is Secondary Boot Partition Descriptor Table.
It is located after the + * critical sub-partitions and contains information about the non-critical + * sub-partitions present within the LBP. + * + * Both tables are identified by BPDT_SIGNATURE stored at the start of the + * table. + */ +#define BPDT_SIGNATURE (0x000055AA) + +/* Parameters passed in by caller */ +static struct param { + const char *file_name; + const char *subpart_name; + const char *image_name; + bool dir_ops; + const char *dentry_name; +} param; + +struct bpdt_header { + /* + * This is used to identify start of BPDT. It should always be + * BPDT_SIGNATURE. + */ + uint32_t signature; + /* Count of BPDT entries present */ + uint16_t descriptor_count; + /* Version - Currently supported = 1 */ + uint16_t bpdt_version; + /* Unused - Should be 0 */ + uint32_t xor_redundant_block; + /* Version of IFWI build */ + uint32_t ifwi_version; + /* Version of FIT tool used to create IFWI */ + uint64_t fit_tool_version; +} __packed; +#define BPDT_HEADER_SIZE (sizeof(struct bpdt_header)) + +struct bpdt_entry { + /* Type of sub-partition */ + uint16_t type; + /* Attributes of sub-partition */ + uint16_t flags; + /* Offset of sub-partition from beginning of LBP */ + uint32_t offset; + /* Size in bytes of sub-partition */ + uint32_t size; +} __packed; +#define BPDT_ENTRY_SIZE (sizeof(struct bpdt_entry)) + +struct bpdt { + struct bpdt_header h; + /* In practice, this could be an array of 0 to n entries */ + struct bpdt_entry e[0]; +} __packed; + +static inline size_t get_bpdt_size(struct bpdt_header *h) +{ + return (sizeof(*h) + BPDT_ENTRY_SIZE * h->descriptor_count); +} + +/* Minimum size in bytes allocated to BPDT in IFWI */ +#define BPDT_MIN_SIZE ((size_t)512) + +/* Header to define directory header for sub-partition */ +struct subpart_dir_header { + /* Should be SUBPART_DIR_MARKER */ + uint32_t marker; + /* Number of directory entries in the sub-partition */ + uint32_t num_entries; + /* Currenty supported - 1 */ + uint8_t header_version; + /* Currenty supported 
- 1 */ + uint8_t entry_version; + /* Length of directory header in bytes */ + uint8_t header_length; + /* + * 2s complement of 8-bit sum from first byte of header to last byte of + * last directory entry. + */ + uint8_t checksum; + /* ASCII short name of sub-partition */ + uint8_t name[4]; +} __packed; +#define SUBPART_DIR_HEADER_SIZE \ + (sizeof(struct subpart_dir_header)) +#define SUBPART_DIR_MARKER 0x44504324 +#define SUBPART_DIR_HEADER_VERSION_SUPPORTED 1 +#define SUBPART_DIR_ENTRY_VERSION_SUPPORTED 1 + +/* Structure for each directory entry for sub-partition */ +struct subpart_dir_entry { + /* Name of directory entry - Not guaranteed to be NULL-terminated */ + uint8_t name[12]; + /* Offset of entry from beginning of sub-partition */ + uint32_t offset; + /* Length in bytes of sub-directory entry */ + uint32_t length; + /* Must be zero */ + uint32_t rsvd; +} __packed; +#define SUBPART_DIR_ENTRY_SIZE \ + (sizeof(struct subpart_dir_entry)) + +struct subpart_dir { + struct subpart_dir_header h; + /* In practice, this could be an array of 0 to n entries */ + struct subpart_dir_entry e[0]; +} __packed; + +static inline size_t subpart_dir_size(struct subpart_dir_header *h) +{ + return (sizeof(*h) + SUBPART_DIR_ENTRY_SIZE * h->num_entries); +} + +struct manifest_header { + uint32_t header_type; + uint32_t header_length; + uint32_t header_version; + uint32_t flags; + uint32_t vendor; + uint32_t date; + uint32_t size; + uint32_t id; + uint32_t rsvd; + uint64_t version; + uint32_t svn; + uint64_t rsvd1; + uint8_t rsvd2[64]; + uint32_t modulus_size; + uint32_t exponent_size; + uint8_t public_key[256]; + uint32_t exponent; + uint8_t signature[256]; +} __packed; + +#define DWORD_SIZE 4 +#define MANIFEST_HDR_SIZE (sizeof(struct manifest_header)) +#define MANIFEST_ID_MAGIC (0x324e4d24) + +struct module { + uint8_t name[12]; + uint8_t type; + uint8_t hash_alg; + uint16_t hash_size; + uint32_t metadata_size; + uint8_t metadata_hash[32]; +} __packed; + +#define MODULE_SIZE 
(sizeof(struct module)) + +struct signed_pkg_info_ext { + uint32_t ext_type; + uint32_t ext_length; + uint8_t name[4]; + uint32_t vcn; + uint8_t bitmap[16]; + uint32_t svn; + uint8_t rsvd[16]; +} __packed; + +#define SIGNED_PKG_INFO_EXT_TYPE 0x15 +#define SIGNED_PKG_INFO_EXT_SIZE \ + (sizeof(struct signed_pkg_info_ext)) + +/* + * Attributes for various IFWI sub-partitions. + * LIES_WITHIN_BPDT_4K = Sub-Partition should lie within the same 4K block as + * BPDT. + * NON_CRITICAL_SUBPART = Sub-Partition entry should be present in S-BPDT. + * CONTAINS_DIR = Sub-Partition contains directory. + * AUTO_GENERATED = Sub-Partition is generated by the tool. + * MANDATORY_BPDT_ENTRY = Even if sub-partition is deleted, BPDT should contain + * an entry for it with size 0 and offset 0. + */ +enum subpart_attributes { + LIES_WITHIN_BPDT_4K = (1 << 0), + NON_CRITICAL_SUBPART = (1 << 1), + CONTAINS_DIR = (1 << 2), + AUTO_GENERATED = (1 << 3), + MANDATORY_BPDT_ENTRY = (1 << 4), +}; + +/* Type value for various IFWI sub-partitions */ +enum bpdt_entry_type { + SMIP_TYPE = 0, + CSE_RBE_TYPE = 1, + CSE_BUP_TYPE = 2, + UCODE_TYPE = 3, + IBB_TYPE = 4, + S_BPDT_TYPE = 5, + OBB_TYPE = 6, + CSE_MAIN_TYPE = 7, + ISH_TYPE = 8, + CSE_IDLM_TYPE = 9, + IFP_OVERRIDE_TYPE = 10, + DEBUG_TOKENS_TYPE = 11, + UFS_PHY_TYPE = 12, + UFS_GPP_TYPE = 13, + PMC_TYPE = 14, + IUNIT_TYPE = 15, + NVM_CONFIG_TYPE = 16, + UEP_TYPE = 17, + UFS_RATE_B_TYPE = 18, + MAX_SUBPARTS = 19, +}; + +/* + * There are two order requirements for an IFWI image: + * 1. Order in which the sub-partitions lie within the BPDT entries. + * 2. Order in which the sub-partitions lie within the image. + * + * header_order defines #1 i.e. the order in which the sub-partitions should + * appear in the BPDT entries. pack_order defines #2 i.e. the order in which + * sub-partitions appear in the IFWI image. pack_order controls the offset and + * thus sub-partitions would have increasing offsets as we loop over pack_order. 
+ */ +const enum bpdt_entry_type bpdt_header_order[MAX_SUBPARTS] = { + /* Order of the following entries is mandatory */ + CSE_IDLM_TYPE, + IFP_OVERRIDE_TYPE, + S_BPDT_TYPE, + CSE_RBE_TYPE, + UFS_PHY_TYPE, + UFS_GPP_TYPE, + /* Order of the following entries is recommended */ + UEP_TYPE, + NVM_CONFIG_TYPE, + UFS_RATE_B_TYPE, + IBB_TYPE, + SMIP_TYPE, + PMC_TYPE, + CSE_BUP_TYPE, + UCODE_TYPE, + DEBUG_TOKENS_TYPE, + IUNIT_TYPE, + CSE_MAIN_TYPE, + ISH_TYPE, + OBB_TYPE, +}; + +const enum bpdt_entry_type bpdt_pack_order[MAX_SUBPARTS] = { + /* Order of the following entries is mandatory */ + UFS_GPP_TYPE, + UFS_PHY_TYPE, + IFP_OVERRIDE_TYPE, + UEP_TYPE, + NVM_CONFIG_TYPE, + UFS_RATE_B_TYPE, + /* Order of the following entries is recommended */ + IBB_TYPE, + SMIP_TYPE, + CSE_RBE_TYPE, + PMC_TYPE, + CSE_BUP_TYPE, + UCODE_TYPE, + CSE_IDLM_TYPE, + DEBUG_TOKENS_TYPE, + S_BPDT_TYPE, + IUNIT_TYPE, + CSE_MAIN_TYPE, + ISH_TYPE, + OBB_TYPE, +}; + +/* Utility functions */ +enum ifwi_ret { + COMMAND_ERR = -1, + NO_ACTION_REQUIRED = 0, + REPACK_REQUIRED = 1, +}; + +struct dir_ops { + enum ifwi_ret (*dir_add)(int type); +}; + +static enum ifwi_ret ibbp_dir_add(int type); + +const struct subpart_info { + const char *name; + const char *readable_name; + uint32_t attr; + struct dir_ops dir_ops; +} subparts[MAX_SUBPARTS] = { + /* OEM SMIP */ + [SMIP_TYPE] = {"SMIP", "SMIP", CONTAINS_DIR, {NULL} }, + /* CSE RBE */ + [CSE_RBE_TYPE] = {"RBEP", "CSE_RBE", CONTAINS_DIR | + MANDATORY_BPDT_ENTRY, {NULL} }, + /* CSE BUP */ + [CSE_BUP_TYPE] = {"FTPR", "CSE_BUP", CONTAINS_DIR | + MANDATORY_BPDT_ENTRY, {NULL} }, + /* uCode */ + [UCODE_TYPE] = {"UCOD", "Microcode", CONTAINS_DIR, {NULL} }, + /* IBB */ + [IBB_TYPE] = {"IBBP", "Bootblock", CONTAINS_DIR, {ibbp_dir_add} }, + /* S-BPDT */ + [S_BPDT_TYPE] = {"S_BPDT", "S-BPDT", AUTO_GENERATED | + MANDATORY_BPDT_ENTRY, {NULL} }, + /* OBB */ + [OBB_TYPE] = {"OBBP", "OEM boot block", CONTAINS_DIR | + NON_CRITICAL_SUBPART, {NULL} }, + /* CSE Main */ + 
[CSE_MAIN_TYPE] = {"NFTP", "CSE_MAIN", CONTAINS_DIR | + NON_CRITICAL_SUBPART, {NULL} }, + /* ISH */ + [ISH_TYPE] = {"ISHP", "ISH", NON_CRITICAL_SUBPART, {NULL} }, + /* CSE IDLM */ + [CSE_IDLM_TYPE] = {"DLMP", "CSE_IDLM", CONTAINS_DIR | + MANDATORY_BPDT_ENTRY, {NULL} }, + /* IFP Override */ + [IFP_OVERRIDE_TYPE] = {"IFP_OVERRIDE", "IFP_OVERRIDE", + LIES_WITHIN_BPDT_4K | MANDATORY_BPDT_ENTRY, + {NULL} }, + /* Debug Tokens */ + [DEBUG_TOKENS_TYPE] = {"DEBUG_TOKENS", "Debug Tokens", 0, {NULL} }, + /* UFS Phy Configuration */ + [UFS_PHY_TYPE] = {"UFS_PHY", "UFS Phy", LIES_WITHIN_BPDT_4K | + MANDATORY_BPDT_ENTRY, {NULL} }, + /* UFS GPP LUN ID */ + [UFS_GPP_TYPE] = {"UFS_GPP", "UFS GPP", LIES_WITHIN_BPDT_4K | + MANDATORY_BPDT_ENTRY, {NULL} }, + /* PMC */ + [PMC_TYPE] = {"PMCP", "PMC firmware", CONTAINS_DIR, {NULL} }, + /* IUNIT */ + [IUNIT_TYPE] = {"IUNP", "IUNIT", NON_CRITICAL_SUBPART, {NULL} }, + /* NVM Config */ + [NVM_CONFIG_TYPE] = {"NVM_CONFIG", "NVM Config", 0, {NULL} }, + /* UEP */ + [UEP_TYPE] = {"UEP", "UEP", LIES_WITHIN_BPDT_4K | MANDATORY_BPDT_ENTRY, + {NULL} }, + /* UFS Rate B Config */ + [UFS_RATE_B_TYPE] = {"UFS_RATE_B", "UFS Rate B Config", 0, {NULL} }, +}; + +struct ifwi_image { + /* Data read from input file */ + struct buffer input_buff; + + /* BPDT header and entries */ + struct buffer bpdt; + size_t input_ifwi_start_offset; + size_t input_ifwi_end_offset; + + /* Subpartition content */ + struct buffer subpart_buf[MAX_SUBPARTS]; +} ifwi_image; + +/* Buffer and file I/O */ +static off_t get_file_size(FILE *f) +{ + off_t fsize; + + fseek(f, 0, SEEK_END); + fsize = ftell(f); + fseek(f, 0, SEEK_SET); + return fsize; +} + +static inline void *buffer_get(const struct buffer *b) +{ + return b->data; +} + +static inline size_t buffer_size(const struct buffer *b) +{ + return b->size; +} + +static inline size_t buffer_offset(const struct buffer *b) +{ + return b->offset; +} + +/* + * Shrink a buffer toward the beginning of its previous space. 
+ * Afterward, buffer_delete() remains the means of cleaning it up + */ +static inline void buffer_set_size(struct buffer *b, size_t size) +{ + b->size = size; +} + +/* Splice a buffer into another buffer. Note that it's up to the caller to + * bounds check the offset and size. The resulting buffer is backed by the same + * storage as the original, so although it is valid to buffer_delete() either + * one of them, doing so releases both simultaneously + */ +static void buffer_splice(struct buffer *dest, const struct buffer *src, + size_t offset, size_t size) +{ + dest->name = src->name; + dest->data = src->data + offset; + dest->offset = src->offset + offset; + dest->size = size; +} + +/* + * Shrink a buffer toward the end of its previous space. + * Afterward, buffer_delete() remains the means of cleaning it up + */ +static inline void buffer_seek(struct buffer *b, size_t size) +{ + b->offset += size; + b->size -= size; + b->data += size; +} + +/* Returns the start of the underlying buffer, with the offset undone */ +static inline void *buffer_get_original_backing(const struct buffer *b) +{ + if (!b) + return NULL; + return buffer_get(b) - buffer_offset(b); +} + +int buffer_create(struct buffer *buffer, size_t size, const char *name) +{ + buffer->name = strdup(name); + buffer->offset = 0; + buffer->size = size; + buffer->data = (char *)malloc(buffer->size); + if (!buffer->data) { + fprintf(stderr, "%s: Insufficient memory (0x%zx).\n", __func__, + size); + } + + return !buffer->data; +} + +int buffer_write_file(struct buffer *buffer, const char *filename) +{ + FILE *fp = fopen(filename, "wb"); + + if (!fp) { + perror(filename); + return -1; + } + assert(buffer && buffer->data); + if (fwrite(buffer->data, 1, buffer->size, fp) != buffer->size) { + fprintf(stderr, "incomplete write: %s\n", filename); + fclose(fp); + return -1; + } + fclose(fp); + return 0; +} + +void buffer_delete(struct buffer *buffer) +{ + assert(buffer); + if (buffer->name) { + free(buffer->name); + 
buffer->name = NULL; + } + if (buffer->data) { + free(buffer_get_original_backing(buffer)); + buffer->data = NULL; + } + buffer->offset = 0; + buffer->size = 0; +} + +int buffer_from_file(struct buffer *buffer, const char *filename) +{ + FILE *fp = fopen(filename, "rb"); + + if (!fp) { + perror(filename); + return -1; + } + buffer->offset = 0; + off_t file_size = get_file_size(fp); + + if (file_size < 0) { + fprintf(stderr, "could not determine size of %s\n", filename); + fclose(fp); + return -1; + } + buffer->size = file_size; + buffer->name = strdup(filename); + buffer->data = (char *)malloc(buffer->size); + assert(buffer->data); + if (fread(buffer->data, 1, buffer->size, fp) != buffer->size) { + fprintf(stderr, "incomplete read: %s\n", filename); + fclose(fp); + buffer_delete(buffer); + return -1; + } + fclose(fp); + return 0; +} + +static void alloc_buffer(struct buffer *b, size_t s, const char *n) +{ + if (buffer_create(b, s, n) == 0) + return; + + ERROR("Buffer allocation failure for %s (size = %zx).\n", n, s); + exit(-1); +} + +/* Little-Endian functions */ +static inline uint8_t read_ble8(const void *src) +{ + const uint8_t *s = src; + return *s; +} + +static inline uint8_t read_at_ble8(const void *src, size_t offset) +{ + const uint8_t *s = src; + + s += offset; + return read_ble8(s); +} + +static inline void write_ble8(void *dest, uint8_t val) +{ + *(uint8_t *)dest = val; +} + +static inline void write_at_ble8(void *dest, uint8_t val, size_t offset) +{ + uint8_t *d = dest; + + d += offset; + write_ble8(d, val); +} + +static inline uint8_t read_le8(const void *src) +{ + return read_ble8(src); +} + +static inline uint8_t read_at_le8(const void *src, size_t offset) +{ + return read_at_ble8(src, offset); +} + +static inline void write_le8(void *dest, uint8_t val) +{ + write_ble8(dest, val); +} + +static inline void write_at_le8(void *dest, uint8_t val, size_t offset) +{ + write_at_ble8(dest, val, offset); +} + +static inline uint16_t read_le16(const void 
*src) +{ + const uint8_t *s = src; + + return (((uint16_t)s[1]) << 8) | (((uint16_t)s[0]) << 0); +} + +static inline uint16_t read_at_le16(const void *src, size_t offset) +{ + const uint8_t *s = src; + + s += offset; + return read_le16(s); +} + +static inline void write_le16(void *dest, uint16_t val) +{ + write_le8(dest, val >> 0); + write_at_le8(dest, val >> 8, sizeof(uint8_t)); +} + +static inline void write_at_le16(void *dest, uint16_t val, size_t offset) +{ + uint8_t *d = dest; + + d += offset; + write_le16(d, val); +} + +static inline uint32_t read_le32(const void *src) +{ + const uint8_t *s = src; + + return (((uint32_t)s[3]) << 24) | (((uint32_t)s[2]) << 16) | + (((uint32_t)s[1]) << 8) | (((uint32_t)s[0]) << 0); +} + +static inline uint32_t read_at_le32(const void *src, size_t offset) +{ + const uint8_t *s = src; + + s += offset; + return read_le32(s); +} + +static inline void write_le32(void *dest, uint32_t val) +{ + write_le16(dest, val >> 0); + write_at_le16(dest, val >> 16, sizeof(uint16_t)); +} + +static inline void write_at_le32(void *dest, uint32_t val, size_t offset) +{ + uint8_t *d = dest; + + d += offset; + write_le32(d, val); +} + +static inline uint64_t read_le64(const void *src) +{ + uint64_t val; + + val = read_at_le32(src, sizeof(uint32_t)); + val <<= 32; + val |= read_le32(src); + return val; +} + +static inline uint64_t read_at_le64(const void *src, size_t offset) +{ + const uint8_t *s = src; + + s += offset; + return read_le64(s); +} + +static inline void write_le64(void *dest, uint64_t val) +{ + write_le32(dest, val >> 0); + write_at_le32(dest, val >> 32, sizeof(uint32_t)); +} + +static inline void write_at_le64(void *dest, uint64_t val, size_t offset) +{ + uint8_t *d = dest; + + d += offset; + write_le64(d, val); +} + +static inline void zero_n(void *dest, size_t n) +{ + memset(dest, 0, n); +} + +/* + * Read header/entry members in little-endian format. + * Returns the offset upto which the read was performed. 
+ */ +static size_t read_member(void *src, size_t offset, size_t size_bytes, + void *dst) +{ + switch (size_bytes) { + case 1: + *(uint8_t *)dst = read_at_le8(src, offset); + break; + case 2: + *(uint16_t *)dst = read_at_le16(src, offset); + break; + case 4: + *(uint32_t *)dst = read_at_le32(src, offset); + break; + case 8: + *(uint64_t *)dst = read_at_le64(src, offset); + break; + default: + ERROR("Read size not supported %zd\n", size_bytes); + exit(-1); + } + + return (offset + size_bytes); +} + +/* + * Convert to little endian format. + * Returns the offset upto which the fixup was performed. + */ +static size_t fix_member(void *data, size_t offset, size_t size_bytes) +{ + uint8_t *src = (uint8_t *)data + offset; + + switch (size_bytes) { + case 1: + write_at_le8(data, *(uint8_t *)src, offset); + break; + case 2: + write_at_le16(data, *(uint16_t *)src, offset); + break; + case 4: + write_at_le32(data, *(uint32_t *)src, offset); + break; + case 8: + write_at_le64(data, *(uint64_t *)src, offset); + break; + default: + ERROR("Write size not supported %zd\n", size_bytes); + exit(-1); + } + return (offset + size_bytes); +} + +static void print_subpart_dir(struct subpart_dir *s) +{ + if (verbose == 0) + return; + + size_t i; + + printf("%-25s 0x%-23.8x\n", "Marker", s->h.marker); + printf("%-25s %-25d\n", "Num entries", s->h.num_entries); + printf("%-25s %-25d\n", "Header Version", s->h.header_version); + printf("%-25s %-25d\n", "Entry Version", s->h.entry_version); + printf("%-25s 0x%-23x\n", "Header Length", s->h.header_length); + printf("%-25s 0x%-23x\n", "Checksum", s->h.checksum); + printf("%-25s ", "Name"); + for (i = 0; i < sizeof(s->h.name); i++) + printf("%c", s->h.name[i]); + + printf("\n"); + + printf("%-25s%-25s%-25s%-25s%-25s\n", "Entry #", "Name", "Offset", + "Length", "Rsvd"); + + printf("=========================================================================================================================\n"); + + for (i = 0; i < s->h.num_entries; 
i++) { + printf("%-25zd%-25.12s0x%-23x0x%-23x0x%-23x\n", i + 1, + s->e[i].name, s->e[i].offset, s->e[i].length, + s->e[i].rsvd); + } + + printf("=========================================================================================================================\n"); +} + +static void bpdt_print_header(struct bpdt_header *h, const char *name) +{ + if (verbose == 0) + return; + + printf("%-25s %-25s\n", "Header", name); + printf("%-25s 0x%-23.8x\n", "Signature", h->signature); + printf("%-25s %-25d\n", "Descriptor count", h->descriptor_count); + printf("%-25s %-25d\n", "BPDT Version", h->bpdt_version); + printf("%-25s 0x%-23x\n", "XOR checksum", h->xor_redundant_block); + printf("%-25s 0x%-23x\n", "IFWI Version", h->ifwi_version); + printf("%-25s 0x%-23llx\n", "FIT Tool Version", + (long long)h->fit_tool_version); +} + +static void bpdt_print_entries(struct bpdt_entry *e, size_t count, + const char *name) +{ + size_t i; + + if (verbose == 0) + return; + + printf("%s entries\n", name); + + printf("%-25s%-25s%-25s%-25s%-25s%-25s%-25s%-25s\n", "Entry #", + "Sub-Partition", "Name", "Type", "Flags", "Offset", "Size", + "File Offset"); + + printf("=========================================================================================================================================================================================================\n"); + + for (i = 0; i < count; i++) { + printf("%-25zd%-25s%-25s%-25d0x%-23.08x0x%-23x0x%-23x0x%-23zx\n", + i + 1, subparts[e[i].type].name, + subparts[e[i].type].readable_name, e[i].type, e[i].flags, + e[i].offset, e[i].size, + e[i].offset + ifwi_image.input_ifwi_start_offset); + } + + printf("=========================================================================================================================================================================================================\n"); +} + +static void bpdt_validate_header(struct bpdt_header *h, const char *name) +{ + assert(h->signature == BPDT_SIGNATURE); + + 
if (h->bpdt_version != 1) { + ERROR("Invalid header : %s\n", name); + exit(-1); + } + + DEBUG("Validated header : %s\n", name); +} + +static void bpdt_read_header(void *data, struct bpdt_header *h, + const char *name) +{ + size_t offset = 0; + + offset = read_member(data, offset, sizeof(h->signature), &h->signature); + offset = read_member(data, offset, sizeof(h->descriptor_count), + &h->descriptor_count); + offset = read_member(data, offset, sizeof(h->bpdt_version), + &h->bpdt_version); + offset = read_member(data, offset, sizeof(h->xor_redundant_block), + &h->xor_redundant_block); + offset = read_member(data, offset, sizeof(h->ifwi_version), + &h->ifwi_version); + read_member(data, offset, sizeof(h->fit_tool_version), + &h->fit_tool_version); + + bpdt_validate_header(h, name); + bpdt_print_header(h, name); +} + +static void bpdt_read_entries(void *data, struct bpdt *bpdt, const char *name) +{ + size_t i, offset = 0; + struct bpdt_entry *e = &bpdt->e[0]; + size_t count = bpdt->h.descriptor_count; + + for (i = 0; i < count; i++) { + offset = read_member(data, offset, sizeof(e[i].type), + &e[i].type); + offset = read_member(data, offset, sizeof(e[i].flags), + &e[i].flags); + offset = read_member(data, offset, sizeof(e[i].offset), + &e[i].offset); + offset = read_member(data, offset, sizeof(e[i].size), + &e[i].size); + } + + bpdt_print_entries(e, count, name); +} + +/* + * Given type of sub-partition, identify BPDT entry for it. + * Sub-Partition could lie either within BPDT or S-BPDT. 
+ */ +static struct bpdt_entry *__find_entry_by_type(struct bpdt_entry *e, + size_t count, int type) +{ + size_t i; + + for (i = 0; i < count; i++) { + if (e[i].type == type) + break; + } + + if (i == count) + return NULL; + + return &e[i]; +} + +static struct bpdt_entry *find_entry_by_type(int type) +{ + struct bpdt *b = buffer_get(&ifwi_image.bpdt); + + if (!b) + return NULL; + + struct bpdt_entry *curr = __find_entry_by_type(&b->e[0], + b->h.descriptor_count, + type); + + if (curr) + return curr; + + b = buffer_get(&ifwi_image.subpart_buf[S_BPDT_TYPE]); + if (!b) + return NULL; + + return __find_entry_by_type(&b->e[0], b->h.descriptor_count, type); +} + +/* + * Find sub-partition type given its name. If the name does not exist, returns + * -1. + */ +static int find_type_by_name(const char *name) +{ + int i; + + for (i = 0; i < MAX_SUBPARTS; i++) { + if ((strlen(subparts[i].name) == strlen(name)) && + (!strcmp(subparts[i].name, name))) + break; + } + + if (i == MAX_SUBPARTS) { + ERROR("Invalid sub-partition name %s.\n", name); + return -1; + } + + return i; +} + +/* + * Read the content of a sub-partition from input file and store it in + * ifwi_image.subpart_buf[SUB-PARTITION_TYPE]. + * + * Returns the maximum offset occupied by the sub-partitions. + */ +static size_t read_subpart_buf(void *data, size_t size, struct bpdt_entry *e, + size_t count) +{ + size_t i, type; + struct buffer *buf; + size_t max_offset = 0; + + for (i = 0; i < count; i++) { + type = e[i].type; + + if (type >= MAX_SUBPARTS) { + ERROR("Invalid sub-partition type %zd.\n", type); + exit(-1); + } + + if (buffer_size(&ifwi_image.subpart_buf[type])) { + ERROR("Multiple sub-partitions of type %zd(%s).\n", + type, subparts[type].name); + exit(-1); + } + + if (e[i].size == 0) { + INFO("Dummy sub-partition %zd(%s). 
Skipping.\n", type, + subparts[type].name); + continue; + } + + assert((e[i].offset + e[i].size) <= size); + + /* + * Sub-partitions in IFWI image are not in the same order as + * in BPDT entries. BPDT entries are in header_order whereas + * sub-partition offsets in the image are in pack_order. + */ + if ((e[i].offset + e[i].size) > max_offset) + max_offset = e[i].offset + e[i].size; + + /* + * S-BPDT sub-partition contains information about all the + * non-critical sub-partitions. Thus, size of S-BPDT + * sub-partition equals size of S-BPDT plus size of all the + * non-critical sub-partitions. Thus, reading whole of S-BPDT + * here would be redundant as the non-critical partitions are + * read and allocated buffers separately. Also, S-BPDT requires + * special handling for reading header and entries. + */ + if (type == S_BPDT_TYPE) + continue; + + buf = &ifwi_image.subpart_buf[type]; + + alloc_buffer(buf, e[i].size, subparts[type].name); + memcpy(buffer_get(buf), (uint8_t *)data + e[i].offset, + e[i].size); + } + + assert(max_offset); + return max_offset; +} + +/* + * Allocate buffer for bpdt header, entries and all sub-partition content. + * Returns offset in data where BPDT ends. + */ +static size_t alloc_bpdt_buffer(void *data, size_t size, size_t offset, + struct buffer *b, const char *name) +{ + struct bpdt_header bpdt_header; + + assert((offset + BPDT_HEADER_SIZE) < size); + bpdt_read_header((uint8_t *)data + offset, &bpdt_header, name); + + /* Buffer to read BPDT header and entries */ + alloc_buffer(b, get_bpdt_size(&bpdt_header), name); + + struct bpdt *bpdt = buffer_get(b); + + memcpy(&bpdt->h, &bpdt_header, BPDT_HEADER_SIZE); + + /* + * If no entries are present, maximum offset occupied is (offset + + * BPDT_HEADER_SIZE). 
+ */ + if (bpdt->h.descriptor_count == 0) + return (offset + BPDT_HEADER_SIZE); + + /* Read all entries */ + assert((offset + get_bpdt_size(&bpdt->h)) < size); + bpdt_read_entries((uint8_t *)data + offset + BPDT_HEADER_SIZE, bpdt, + name); + + /* Read all sub-partition content in subpart_buf */ + return read_subpart_buf(data, size, &bpdt->e[0], + bpdt->h.descriptor_count); +} + +static void parse_sbpdt(void *data, size_t size) +{ + struct bpdt_entry *s; + + s = find_entry_by_type(S_BPDT_TYPE); + if (!s) + return; + + assert(size > s->offset); + + alloc_bpdt_buffer(data, size, s->offset, + &ifwi_image.subpart_buf[S_BPDT_TYPE], + "S-BPDT"); +} + +static uint8_t calc_checksum(struct subpart_dir *s) +{ + size_t size = subpart_dir_size(&s->h); + uint8_t *data = (uint8_t *)s; + uint8_t checksum = 0; + size_t i; + uint8_t old_checksum = s->h.checksum; + + s->h.checksum = 0; + + for (i = 0; i < size; i++) + checksum += data[i]; + + s->h.checksum = old_checksum; + + /* 2s complement */ + return -checksum; +} + +static void validate_subpart_dir(struct subpart_dir *s, const char *name, + bool checksum_check) +{ + if (s->h.marker != SUBPART_DIR_MARKER || + s->h.header_version != SUBPART_DIR_HEADER_VERSION_SUPPORTED || + s->h.entry_version != SUBPART_DIR_ENTRY_VERSION_SUPPORTED || + s->h.header_length != SUBPART_DIR_HEADER_SIZE) { + ERROR("Invalid subpart_dir for %s.\n", name); + exit(-1); + } + + if (!checksum_check) + return; + + uint8_t checksum = calc_checksum(s); + + if (checksum != s->h.checksum) + ERROR("Invalid checksum for %s (Expected=0x%x, Actual=0x%x).\n", + name, checksum, s->h.checksum); +} + +static void validate_subpart_dir_without_checksum(struct subpart_dir *s, + const char *name) +{ + validate_subpart_dir(s, name, 0); +} + +static void validate_subpart_dir_with_checksum(struct subpart_dir *s, + const char *name) +{ + validate_subpart_dir(s, name, 1); +} + +static void parse_subpart_dir(struct buffer *subpart_dir_buf, + struct buffer *input_buf, const char 
*name) +{ + struct subpart_dir_header hdr; + size_t offset = 0; + uint8_t *data = buffer_get(input_buf); + size_t size = buffer_size(input_buf); + + /* Read Subpart_Dir header */ + assert(size >= SUBPART_DIR_HEADER_SIZE); + offset = read_member(data, offset, sizeof(hdr.marker), &hdr.marker); + offset = read_member(data, offset, sizeof(hdr.num_entries), + &hdr.num_entries); + offset = read_member(data, offset, sizeof(hdr.header_version), + &hdr.header_version); + offset = read_member(data, offset, sizeof(hdr.entry_version), + &hdr.entry_version); + offset = read_member(data, offset, sizeof(hdr.header_length), + &hdr.header_length); + offset = read_member(data, offset, sizeof(hdr.checksum), &hdr.checksum); + memcpy(hdr.name, data + offset, sizeof(hdr.name)); + offset += sizeof(hdr.name); + + validate_subpart_dir_without_checksum((struct subpart_dir *)&hdr, name); + + assert(size > subpart_dir_size(&hdr)); + alloc_buffer(subpart_dir_buf, subpart_dir_size(&hdr), "Subpart Dir"); + memcpy(buffer_get(subpart_dir_buf), &hdr, SUBPART_DIR_HEADER_SIZE); + + /* Read Subpart Dir entries */ + struct subpart_dir *subpart_dir = buffer_get(subpart_dir_buf); + struct subpart_dir_entry *e = &subpart_dir->e[0]; + uint32_t i; + + for (i = 0; i < hdr.num_entries; i++) { + memcpy(e[i].name, data + offset, sizeof(e[i].name)); + offset += sizeof(e[i].name); + offset = read_member(data, offset, sizeof(e[i].offset), + &e[i].offset); + offset = read_member(data, offset, sizeof(e[i].length), + &e[i].length); + offset = read_member(data, offset, sizeof(e[i].rsvd), + &e[i].rsvd); + } + + validate_subpart_dir_with_checksum(subpart_dir, name); + + print_subpart_dir(subpart_dir); +} + +/* Parse input image file to identify different sub-partitions */ +static int ifwi_parse(void) +{ + struct buffer *buff = &ifwi_image.input_buff; + const char *image_name = param.image_name; + + DEBUG("Parsing IFWI image...\n"); + + /* Read input file */ + if (buffer_from_file(buff, image_name)) { + ERROR("Failed to 
read input file %s.\n", image_name); + return -1; + } + + INFO("Buffer %p size 0x%zx\n", buff->data, buff->size); + + /* Look for BPDT signature at 4K intervals */ + size_t offset = 0; + void *data = buffer_get(buff); + + while (offset < buffer_size(buff)) { + if (read_at_le32(data, offset) == BPDT_SIGNATURE) + break; + offset += 4 * KiB; + } + + if (offset >= buffer_size(buff)) { + ERROR("Image does not contain BPDT!!\n"); + return -1; + } + + ifwi_image.input_ifwi_start_offset = offset; + INFO("BPDT starts at offset 0x%zx.\n", offset); + + data = (uint8_t *)data + offset; + size_t ifwi_size = buffer_size(buff) - offset; + + /* Read BPDT and sub-partitions */ + uintptr_t end_offset; + + end_offset = ifwi_image.input_ifwi_start_offset + + alloc_bpdt_buffer(data, ifwi_size, 0, &ifwi_image.bpdt, "BPDT"); + + /* Parse S-BPDT, if any */ + parse_sbpdt(data, ifwi_size); + + /* + * Store end offset of IFWI. Required for copying any trailing non-IFWI + * part of the image. + * ASSUMPTION: IFWI image always ends on a 4K boundary. + */ + ifwi_image.input_ifwi_end_offset = ALIGN(end_offset, 4 * KiB); + DEBUG("Parsing done.\n"); + + return 0; +} + +/* + * This function is used by repack to count the number of BPDT and S-BPDT + * entries that are present. It frees the current buffers used by the entries + * and allocates fresh buffers that can be used for repacking. Returns BPDT + * entries which are empty and need to be filled in. + */ +static void __bpdt_reset(struct buffer *b, size_t count, size_t size) +{ + size_t bpdt_size = BPDT_HEADER_SIZE + count * BPDT_ENTRY_SIZE; + + assert(size >= bpdt_size); + + /* + * If buffer does not have the required size, allocate a fresh buffer. 
+ */ + if (buffer_size(b) != size) { + struct buffer temp; + + alloc_buffer(&temp, size, b->name); + memcpy(buffer_get(&temp), buffer_get(b), buffer_size(b)); + buffer_delete(b); + *b = temp; + } + + struct bpdt *bpdt = buffer_get(b); + uint8_t *ptr = (uint8_t *)&bpdt->e[0]; + size_t entries_size = BPDT_ENTRY_SIZE * count; + + /* Zero out BPDT entries */ + memset(ptr, 0, entries_size); + /* Fill any pad-space with FF */ + memset(ptr + entries_size, 0xFF, size - bpdt_size); + + bpdt->h.descriptor_count = count; +} + +static void bpdt_reset(void) +{ + size_t i; + size_t bpdt_count = 0, sbpdt_count = 0, dummy_bpdt_count = 0; + + /* Count number of BPDT and S-BPDT entries */ + for (i = 0; i < MAX_SUBPARTS; i++) { + if (buffer_size(&ifwi_image.subpart_buf[i]) == 0) { + if (subparts[i].attr & MANDATORY_BPDT_ENTRY) { + bpdt_count++; + dummy_bpdt_count++; + } + continue; + } + + if (subparts[i].attr & NON_CRITICAL_SUBPART) + sbpdt_count++; + else + bpdt_count++; + } + + DEBUG("Count: BPDT = %zd, Dummy BPDT = %zd, S-BPDT = %zd\n", bpdt_count, + dummy_bpdt_count, sbpdt_count); + + /* Update BPDT if required */ + size_t bpdt_size = max(BPDT_MIN_SIZE, + BPDT_HEADER_SIZE + bpdt_count * BPDT_ENTRY_SIZE); + __bpdt_reset(&ifwi_image.bpdt, bpdt_count, bpdt_size); + + /* Update S-BPDT if required */ + bpdt_size = ALIGN(BPDT_HEADER_SIZE + sbpdt_count * BPDT_ENTRY_SIZE, + 4 * KiB); + __bpdt_reset(&ifwi_image.subpart_buf[S_BPDT_TYPE], sbpdt_count, + bpdt_size); +} + +/* Initialize BPDT entries in header order */ +static void bpdt_entries_init_header_order(void) +{ + int i, type; + size_t size; + + struct bpdt *bpdt, *sbpdt, *curr; + size_t bpdt_curr = 0, sbpdt_curr = 0, *count_ptr; + + bpdt = buffer_get(&ifwi_image.bpdt); + sbpdt = buffer_get(&ifwi_image.subpart_buf[S_BPDT_TYPE]); + + for (i = 0; i < MAX_SUBPARTS; i++) { + type = bpdt_header_order[i]; + size = buffer_size(&ifwi_image.subpart_buf[type]); + + if (size == 0 && !(subparts[type].attr & MANDATORY_BPDT_ENTRY)) + continue; + + 
if (subparts[type].attr & NON_CRITICAL_SUBPART) { + curr = sbpdt; + count_ptr = &sbpdt_curr; + } else { + curr = bpdt; + count_ptr = &bpdt_curr; + } + + assert(*count_ptr < curr->h.descriptor_count); + curr->e[*count_ptr].type = type; + curr->e[*count_ptr].flags = 0; + curr->e[*count_ptr].offset = 0; + curr->e[*count_ptr].size = size; + + (*count_ptr)++; + } +} + +static void pad_buffer(struct buffer *b, size_t size) +{ + size_t buff_size = buffer_size(b); + + assert(buff_size <= size); + + if (buff_size == size) + return; + + struct buffer temp; + + alloc_buffer(&temp, size, b->name); + uint8_t *data = buffer_get(&temp); + + memcpy(data, buffer_get(b), buff_size); + memset(data + buff_size, 0xFF, size - buff_size); + + *b = temp; +} + +/* Initialize offsets of entries using pack order */ +static void bpdt_entries_init_pack_order(void) +{ + int i, type; + struct bpdt_entry *curr; + size_t curr_offset, curr_end; + + curr_offset = max(BPDT_MIN_SIZE, buffer_size(&ifwi_image.bpdt)); + + /* + * There are two types of sub-partitions that need to be handled here: + * 1. Sub-partitions that lie within the same 4K as BPDT + * 2. Sub-partitions that lie outside the 4K of BPDT + * + * For sub-partitions of type # 1, there is no requirement on the start + * or end of the sub-partition. They need to be packed in without any + * holes left in between. If there is any empty space left after the end + * of the last sub-partition in 4K of BPDT, then that space needs to be + * padded with FF bytes, but the size of the last sub-partition remains + * unchanged. + * + * For sub-partitions of type # 2, both the start and end should be a + * multiple of 4K. If not, then it needs to be padded with FF bytes and + * size adjusted such that the sub-partition ends on 4K boundary. 
+ */ + + /* #1 Sub-partitions that lie within same 4K as BPDT */ + struct buffer *last_bpdt_buff = &ifwi_image.bpdt; + + for (i = 0; i < MAX_SUBPARTS; i++) { + type = bpdt_pack_order[i]; + curr = find_entry_by_type(type); + + if (!curr || curr->size == 0) + continue; + + if (!(subparts[type].attr & LIES_WITHIN_BPDT_4K)) + continue; + + curr->offset = curr_offset; + curr_offset = curr->offset + curr->size; + last_bpdt_buff = &ifwi_image.subpart_buf[type]; + DEBUG("type=%d, curr_offset=0x%zx, curr->offset=0x%x, curr->size=0x%x, buff_size=0x%zx\n", + type, curr_offset, curr->offset, curr->size, + buffer_size(&ifwi_image.subpart_buf[type])); + } + + /* Pad ff bytes if there is any empty space left in BPDT 4K */ + curr_end = ALIGN(curr_offset, 4 * KiB); + pad_buffer(last_bpdt_buff, + buffer_size(last_bpdt_buff) + (curr_end - curr_offset)); + curr_offset = curr_end; + + /* #2 Sub-partitions that lie outside of BPDT 4K */ + for (i = 0; i < MAX_SUBPARTS; i++) { + type = bpdt_pack_order[i]; + curr = find_entry_by_type(type); + + if (!curr || curr->size == 0) + continue; + + if (subparts[type].attr & LIES_WITHIN_BPDT_4K) + continue; + + assert(curr_offset == ALIGN(curr_offset, 4 * KiB)); + curr->offset = curr_offset; + curr_end = ALIGN(curr->offset + curr->size, 4 * KiB); + curr->size = curr_end - curr->offset; + + pad_buffer(&ifwi_image.subpart_buf[type], curr->size); + + curr_offset = curr_end; + DEBUG("type=%d, curr_offset=0x%zx, curr->offset=0x%x, curr->size=0x%x, buff_size=0x%zx\n", + type, curr_offset, curr->offset, curr->size, + buffer_size(&ifwi_image.subpart_buf[type])); + } + + /* + * Update size of S-BPDT to include size of all non-critical + * sub-partitions. + * + * Assumption: S-BPDT always lies at the end of IFWI image. 
+ */ + curr = find_entry_by_type(S_BPDT_TYPE); + assert(curr); + + assert(curr_offset == ALIGN(curr_offset, 4 * KiB)); + curr->size = curr_offset - curr->offset; +} + +/* Convert all members of BPDT to little-endian format */ +static void bpdt_fixup_write_buffer(struct buffer *buf) +{ + struct bpdt *s = buffer_get(buf); + + struct bpdt_header *h = &s->h; + struct bpdt_entry *e = &s->e[0]; + + size_t count = h->descriptor_count; + + size_t offset = 0; + + offset = fix_member(&h->signature, offset, sizeof(h->signature)); + offset = fix_member(&h->descriptor_count, offset, + sizeof(h->descriptor_count)); + offset = fix_member(&h->bpdt_version, offset, sizeof(h->bpdt_version)); + offset = fix_member(&h->xor_redundant_block, offset, + sizeof(h->xor_redundant_block)); + offset = fix_member(&h->ifwi_version, offset, sizeof(h->ifwi_version)); + offset = fix_member(&h->fit_tool_version, offset, + sizeof(h->fit_tool_version)); + + uint32_t i; + + for (i = 0; i < count; i++) { + offset = fix_member(&e[i].type, offset, sizeof(e[i].type)); + offset = fix_member(&e[i].flags, offset, sizeof(e[i].flags)); + offset = fix_member(&e[i].offset, offset, sizeof(e[i].offset)); + offset = fix_member(&e[i].size, offset, sizeof(e[i].size)); + } +} + +/* Write BPDT to output buffer after fixup */ +static void bpdt_write(struct buffer *dst, size_t offset, struct buffer *src) +{ + bpdt_fixup_write_buffer(src); + memcpy(buffer_get(dst) + offset, buffer_get(src), buffer_size(src)); +} + +/* + * Follows these steps to re-create image: + * 1. Write any non-IFWI prefix. + * 2. Write out BPDT header and entries. + * 3. Write sub-partition buffers to respective offsets. + * 4. Write any non-IFWI suffix. + * + * While performing the above steps, make sure that any empty holes are filled + * with FF. 
+ */ +static void ifwi_write(const char *image_name) +{ + struct bpdt_entry *s = find_entry_by_type(S_BPDT_TYPE); + + assert(s); + + size_t ifwi_start, ifwi_end, file_end; + + ifwi_start = ifwi_image.input_ifwi_start_offset; + ifwi_end = ifwi_start + ALIGN(s->offset + s->size, 4 * KiB); + file_end = ifwi_end + (buffer_size(&ifwi_image.input_buff) - + ifwi_image.input_ifwi_end_offset); + + struct buffer b; + + alloc_buffer(&b, file_end, "Final-IFWI"); + + uint8_t *input_data = buffer_get(&ifwi_image.input_buff); + uint8_t *output_data = buffer_get(&b); + + DEBUG("ifwi_start:0x%zx, ifwi_end:0x%zx, file_end:0x%zx\n", ifwi_start, + ifwi_end, file_end); + + /* Copy non-IFWI prefix, if any */ + memcpy(output_data, input_data, ifwi_start); + + DEBUG("Copied non-IFWI prefix (offset=0x0, size=0x%zx).\n", ifwi_start); + + struct buffer ifwi; + + buffer_splice(&ifwi, &b, ifwi_start, ifwi_end - ifwi_start); + uint8_t *ifwi_data = buffer_get(&ifwi); + + /* Copy sub-partitions using pack_order */ + struct bpdt_entry *curr; + struct buffer *subpart_buf; + int i, type; + + for (i = 0; i < MAX_SUBPARTS; i++) { + type = bpdt_pack_order[i]; + + if (type == S_BPDT_TYPE) + continue; + + curr = find_entry_by_type(type); + + if (!curr || !curr->size) + continue; + + subpart_buf = &ifwi_image.subpart_buf[type]; + + DEBUG("curr->offset=0x%x, curr->size=0x%x, type=%d, write_size=0x%zx\n", + curr->offset, curr->size, type, buffer_size(subpart_buf)); + + assert((curr->offset + buffer_size(subpart_buf)) <= + buffer_size(&ifwi)); + + memcpy(ifwi_data + curr->offset, buffer_get(subpart_buf), + buffer_size(subpart_buf)); + } + + /* Copy non-IFWI suffix, if any */ + if (ifwi_end != file_end) { + memcpy(output_data + ifwi_end, + input_data + ifwi_image.input_ifwi_end_offset, + file_end - ifwi_end); + DEBUG("Copied non-IFWI suffix (offset=0x%zx,size=0x%zx).\n", + ifwi_end, file_end - ifwi_end); + } + + /* + * Convert BPDT to little-endian format and write it to output buffer. 
+ * S-BPDT is written first and then BPDT. + */ + bpdt_write(&ifwi, s->offset, &ifwi_image.subpart_buf[S_BPDT_TYPE]); + bpdt_write(&ifwi, 0, &ifwi_image.bpdt); + + if (buffer_write_file(&b, image_name)) { + ERROR("File write error\n"); + exit(-1); + } + + buffer_delete(&b); + printf("Image written successfully to %s.\n", image_name); +} + +/* + * Calculate size and offset of each sub-partition again since it might have + * changed because of add/delete operation. Also, re-create BPDT and S-BPDT + * entries and write back the new IFWI image to file. + */ +static void ifwi_repack(void) +{ + bpdt_reset(); + bpdt_entries_init_header_order(); + bpdt_entries_init_pack_order(); + + struct bpdt *b = buffer_get(&ifwi_image.bpdt); + + bpdt_print_entries(&b->e[0], b->h.descriptor_count, "BPDT"); + + b = buffer_get(&ifwi_image.subpart_buf[S_BPDT_TYPE]); + bpdt_print_entries(&b->e[0], b->h.descriptor_count, "S-BPDT"); + + DEBUG("Repack done.. writing image.\n"); + ifwi_write(param.image_name); +} + +static void init_subpart_dir_header(struct subpart_dir_header *hdr, + size_t count, const char *name) +{ + memset(hdr, 0, sizeof(*hdr)); + + hdr->marker = SUBPART_DIR_MARKER; + hdr->num_entries = count; + hdr->header_version = SUBPART_DIR_HEADER_VERSION_SUPPORTED; + hdr->entry_version = SUBPART_DIR_ENTRY_VERSION_SUPPORTED; + hdr->header_length = SUBPART_DIR_HEADER_SIZE; + memcpy(hdr->name, name, sizeof(hdr->name)); +} + +static size_t init_subpart_dir_entry(struct subpart_dir_entry *e, + struct buffer *b, size_t offset) +{ + memset(e, 0, sizeof(*e)); + + assert(strlen(b->name) <= sizeof(e->name)); + strncpy((char *)e->name, (char *)b->name, sizeof(e->name)); + e->offset = offset; + e->length = buffer_size(b); + + return (offset + buffer_size(b)); +} + +static void init_manifest_header(struct manifest_header *hdr, size_t size) +{ + memset(hdr, 0, sizeof(*hdr)); + + hdr->header_type = 0x4; + assert((MANIFEST_HDR_SIZE % DWORD_SIZE) == 0); + hdr->header_length = MANIFEST_HDR_SIZE / 
DWORD_SIZE; + hdr->header_version = 0x10000; + hdr->vendor = 0x8086; + + struct tm *local_time; + time_t curr_time; + char buffer[11]; + + curr_time = time(NULL); + local_time = localtime(&curr_time); + strftime(buffer, sizeof(buffer), "0x%Y%m%d", local_time); + hdr->date = strtoul(buffer, NULL, 16); + + assert((size % DWORD_SIZE) == 0); + hdr->size = size / DWORD_SIZE; + hdr->id = MANIFEST_ID_MAGIC; +} + +static void init_signed_pkg_info_ext(struct signed_pkg_info_ext *ext, + size_t count, const char *name) +{ + memset(ext, 0, sizeof(*ext)); + + ext->ext_type = SIGNED_PKG_INFO_EXT_TYPE; + ext->ext_length = SIGNED_PKG_INFO_EXT_SIZE + count * MODULE_SIZE; + memcpy(ext->name, name, sizeof(ext->name)); +} + +static void subpart_dir_fixup_write_buffer(struct buffer *buf) +{ + struct subpart_dir *s = buffer_get(buf); + struct subpart_dir_header *h = &s->h; + struct subpart_dir_entry *e = &s->e[0]; + + size_t count = h->num_entries; + size_t offset = 0; + + offset = fix_member(&h->marker, offset, sizeof(h->marker)); + offset = fix_member(&h->num_entries, offset, sizeof(h->num_entries)); + offset = fix_member(&h->header_version, offset, + sizeof(h->header_version)); + offset = fix_member(&h->entry_version, offset, + sizeof(h->entry_version)); + offset = fix_member(&h->header_length, offset, + sizeof(h->header_length)); + offset = fix_member(&h->checksum, offset, sizeof(h->checksum)); + offset += sizeof(h->name); + + uint32_t i; + + for (i = 0; i < count; i++) { + offset += sizeof(e[i].name); + offset = fix_member(&e[i].offset, offset, sizeof(e[i].offset)); + offset = fix_member(&e[i].length, offset, sizeof(e[i].length)); + offset = fix_member(&e[i].rsvd, offset, sizeof(e[i].rsvd)); + } +} + +static void create_subpart(struct buffer *dst, struct buffer *info[], + size_t count, const char *name) +{ + struct buffer subpart_dir_buff; + size_t size = SUBPART_DIR_HEADER_SIZE + count * SUBPART_DIR_ENTRY_SIZE; + + alloc_buffer(&subpart_dir_buff, size, "subpart-dir"); + + struct 
subpart_dir_header *h = buffer_get(&subpart_dir_buff); + struct subpart_dir_entry *e = (struct subpart_dir_entry *)(h + 1); + + init_subpart_dir_header(h, count, name); + + size_t curr_offset = size; + size_t i; + + for (i = 0; i < count; i++) { + curr_offset = init_subpart_dir_entry(&e[i], info[i], + curr_offset); + } + + alloc_buffer(dst, curr_offset, name); + uint8_t *data = buffer_get(dst); + + for (i = 0; i < count; i++) { + memcpy(data + e[i].offset, buffer_get(info[i]), + buffer_size(info[i])); + } + + h->checksum = calc_checksum(buffer_get(&subpart_dir_buff)); + + struct subpart_dir *dir = buffer_get(&subpart_dir_buff); + + print_subpart_dir(dir); + + subpart_dir_fixup_write_buffer(&subpart_dir_buff); + memcpy(data, dir, buffer_size(&subpart_dir_buff)); + + buffer_delete(&subpart_dir_buff); +} + +static enum ifwi_ret ibbp_dir_add(int type) +{ + struct buffer manifest; + struct signed_pkg_info_ext *ext; + struct buffer ibbl; + struct buffer ibb; + +#define DUMMY_IBB_SIZE (4 * KiB) + + assert(type == IBB_TYPE); + + /* + * Entry # 1 - IBBP.man + * Contains manifest header and signed pkg info extension. 
+ */ + size_t size = MANIFEST_HDR_SIZE + SIGNED_PKG_INFO_EXT_SIZE; + + alloc_buffer(&manifest, size, "IBBP.man"); + + struct manifest_header *man_hdr = buffer_get(&manifest); + + init_manifest_header(man_hdr, size); + + ext = (struct signed_pkg_info_ext *)(man_hdr + 1); + + init_signed_pkg_info_ext(ext, 0, subparts[type].name); + + /* Entry # 2 - IBBL */ + if (buffer_from_file(&ibbl, param.file_name)) + return COMMAND_ERR; + + /* Entry # 3 - IBB */ + alloc_buffer(&ibb, DUMMY_IBB_SIZE, "IBB"); + memset(buffer_get(&ibb), 0xFF, DUMMY_IBB_SIZE); + + /* Create subpartition */ + struct buffer *info[] = { + &manifest, &ibbl, &ibb, + }; + create_subpart(&ifwi_image.subpart_buf[type], &info[0], + ARRAY_SIZE(info), subparts[type].name); + + return REPACK_REQUIRED; +} + +static enum ifwi_ret ifwi_raw_add(int type) +{ + if (buffer_from_file(&ifwi_image.subpart_buf[type], param.file_name)) + return COMMAND_ERR; + + printf("Sub-partition %s(%d) added from file %s.\n", param.subpart_name, + type, param.file_name); + return REPACK_REQUIRED; +} + +static enum ifwi_ret ifwi_dir_add(int type) +{ + if (!(subparts[type].attr & CONTAINS_DIR) || + !subparts[type].dir_ops.dir_add) { + ERROR("Sub-Partition %s(%d) does not support dir ops.\n", + subparts[type].name, type); + return COMMAND_ERR; + } + + if (!param.dentry_name) { + ERROR("%s: -e option required\n", __func__); + return COMMAND_ERR; + } + + enum ifwi_ret ret = subparts[type].dir_ops.dir_add(type); + + if (ret != COMMAND_ERR) + printf("Sub-partition %s(%d) entry %s added from file %s.\n", + param.subpart_name, type, param.dentry_name, + param.file_name); + else + ERROR("Sub-partition dir operation failed.\n"); + + return ret; +} + +static enum ifwi_ret ifwi_add(void) +{ + if (!param.file_name) { + ERROR("%s: -f option required\n", __func__); + return COMMAND_ERR; + } + + if (!param.subpart_name) { + ERROR("%s: -n option required\n", __func__); + return COMMAND_ERR; + } + + int type = find_type_by_name(param.subpart_name); + + if 
(type == -1) + return COMMAND_ERR; + + const struct subpart_info *curr_subpart = &subparts[type]; + + if (curr_subpart->attr & AUTO_GENERATED) { + ERROR("Cannot add auto-generated sub-partitions.\n"); + return COMMAND_ERR; + } + + if (buffer_size(&ifwi_image.subpart_buf[type])) { + ERROR("Image already contains sub-partition %s(%d).\n", + param.subpart_name, type); + return COMMAND_ERR; + } + + if (param.dir_ops) + return ifwi_dir_add(type); + + return ifwi_raw_add(type); +} + +static enum ifwi_ret ifwi_delete(void) +{ + if (!param.subpart_name) { + ERROR("%s: -n option required\n", __func__); + return COMMAND_ERR; + } + + int type = find_type_by_name(param.subpart_name); + + if (type == -1) + return COMMAND_ERR; + + const struct subpart_info *curr_subpart = &subparts[type]; + + if (curr_subpart->attr & AUTO_GENERATED) { + ERROR("Cannot delete auto-generated sub-partitions.\n"); + return COMMAND_ERR; + } + + if (buffer_size(&ifwi_image.subpart_buf[type]) == 0) { + printf("Image does not contain sub-partition %s(%d).\n", + param.subpart_name, type); + return NO_ACTION_REQUIRED; + } + + buffer_delete(&ifwi_image.subpart_buf[type]); + printf("Sub-Partition %s(%d) deleted.\n", subparts[type].name, type); + return REPACK_REQUIRED; +} + +static enum ifwi_ret ifwi_dir_extract(int type) +{ + if (!(subparts[type].attr & CONTAINS_DIR)) { + ERROR("Sub-Partition %s(%d) does not support dir ops.\n", + subparts[type].name, type); + return COMMAND_ERR; + } + + if (!param.dentry_name) { + ERROR("%s: -e option required.\n", __func__); + return COMMAND_ERR; + } + + struct buffer subpart_dir_buff; + + parse_subpart_dir(&subpart_dir_buff, &ifwi_image.subpart_buf[type], + subparts[type].name); + + uint32_t i; + struct subpart_dir *s = buffer_get(&subpart_dir_buff); + + for (i = 0; i < s->h.num_entries; i++) { + if (!strncmp((char *)s->e[i].name, param.dentry_name, + sizeof(s->e[i].name))) + break; + } + + if (i == s->h.num_entries) { + ERROR("Entry %s not found in subpartition for 
%s.\n", + param.dentry_name, param.subpart_name); + exit(-1); + } + + struct buffer dst; + + DEBUG("Splicing buffer at 0x%x size 0x%x\n", s->e[i].offset, + s->e[i].length); + buffer_splice(&dst, &ifwi_image.subpart_buf[type], s->e[i].offset, + s->e[i].length); + + if (buffer_write_file(&dst, param.file_name)) + return COMMAND_ERR; + + printf("Sub-Partition %s(%d), entry(%s) stored in %s.\n", + param.subpart_name, type, param.dentry_name, param.file_name); + + return NO_ACTION_REQUIRED; +} + +static enum ifwi_ret ifwi_raw_extract(int type) +{ + if (buffer_write_file(&ifwi_image.subpart_buf[type], param.file_name)) + return COMMAND_ERR; + + printf("Sub-Partition %s(%d) stored in %s.\n", param.subpart_name, type, + param.file_name); + + return NO_ACTION_REQUIRED; +} + +static enum ifwi_ret ifwi_extract(void) +{ + if (!param.file_name) { + ERROR("%s: -f option required\n", __func__); + return COMMAND_ERR; + } + + if (!param.subpart_name) { + ERROR("%s: -n option required\n", __func__); + return COMMAND_ERR; + } + + int type = find_type_by_name(param.subpart_name); + + if (type == -1) + return COMMAND_ERR; + + if (type == S_BPDT_TYPE) { + INFO("Tool does not support raw extract for %s\n", + param.subpart_name); + return NO_ACTION_REQUIRED; + } + + if (buffer_size(&ifwi_image.subpart_buf[type]) == 0) { + ERROR("Image does not contain sub-partition %s(%d).\n", + param.subpart_name, type); + return COMMAND_ERR; + } + + INFO("Extracting sub-partition %s(%d).\n", param.subpart_name, type); + if (param.dir_ops) + return ifwi_dir_extract(type); + + return ifwi_raw_extract(type); +} + +static enum ifwi_ret ifwi_print(void) +{ + verbose += 2; + + struct bpdt *b = buffer_get(&ifwi_image.bpdt); + + bpdt_print_header(&b->h, "BPDT"); + bpdt_print_entries(&b->e[0], b->h.descriptor_count, "BPDT"); + + b = buffer_get(&ifwi_image.subpart_buf[S_BPDT_TYPE]); + bpdt_print_header(&b->h, "S-BPDT"); + bpdt_print_entries(&b->e[0], b->h.descriptor_count, "S-BPDT"); + + if (param.dir_ops == 0) { 
+ verbose -= 2; + return NO_ACTION_REQUIRED; + } + + int i; + struct buffer subpart_dir_buf; + + for (i = 0; i < MAX_SUBPARTS ; i++) { + if (!(subparts[i].attr & CONTAINS_DIR) || + (buffer_size(&ifwi_image.subpart_buf[i]) == 0)) + continue; + + parse_subpart_dir(&subpart_dir_buf, &ifwi_image.subpart_buf[i], + subparts[i].name); + buffer_delete(&subpart_dir_buf); + } + + verbose -= 2; + + return NO_ACTION_REQUIRED; +} + +static enum ifwi_ret ifwi_raw_replace(int type) +{ + buffer_delete(&ifwi_image.subpart_buf[type]); + return ifwi_raw_add(type); +} + +static enum ifwi_ret ifwi_dir_replace(int type) +{ + if (!(subparts[type].attr & CONTAINS_DIR)) { + ERROR("Sub-Partition %s(%d) does not support dir ops.\n", + subparts[type].name, type); + return COMMAND_ERR; + } + + if (!param.dentry_name) { + ERROR("%s: -e option required.\n", __func__); + return COMMAND_ERR; + } + + struct buffer subpart_dir_buf; + + parse_subpart_dir(&subpart_dir_buf, &ifwi_image.subpart_buf[type], + subparts[type].name); + + uint32_t i; + struct subpart_dir *s = buffer_get(&subpart_dir_buf); + + for (i = 0; i < s->h.num_entries; i++) { + if (!strcmp((char *)s->e[i].name, param.dentry_name)) + break; + } + + if (i == s->h.num_entries) { + ERROR("Entry %s not found in subpartition for %s.\n", + param.dentry_name, param.subpart_name); + exit(-1); + } + + struct buffer b; + + if (buffer_from_file(&b, param.file_name)) { + ERROR("Failed to read %s\n", param.file_name); + exit(-1); + } + + struct buffer dst; + size_t dst_size = buffer_size(&ifwi_image.subpart_buf[type]) + + buffer_size(&b) - s->e[i].length; + size_t subpart_start = s->e[i].offset; + size_t subpart_end = s->e[i].offset + s->e[i].length; + + alloc_buffer(&dst, dst_size, ifwi_image.subpart_buf[type].name); + + uint8_t *src_data = buffer_get(&ifwi_image.subpart_buf[type]); + uint8_t *dst_data = buffer_get(&dst); + size_t curr_offset = 0; + + /* Copy data before the sub-partition entry */ + memcpy(dst_data + curr_offset, src_data, 
subpart_start); + curr_offset += subpart_start; + + /* Copy sub-partition entry */ + memcpy(dst_data + curr_offset, buffer_get(&b), buffer_size(&b)); + curr_offset += buffer_size(&b); + + /* Copy remaining data */ + memcpy(dst_data + curr_offset, src_data + subpart_end, + buffer_size(&ifwi_image.subpart_buf[type]) - subpart_end); + + /* Update sub-partition buffer */ + int offset = s->e[i].offset; + + buffer_delete(&ifwi_image.subpart_buf[type]); + ifwi_image.subpart_buf[type] = dst; + + /* Update length of entry in the subpartition */ + s->e[i].length = buffer_size(&b); + buffer_delete(&b); + + /* Adjust offsets of affected entries in subpartition */ + offset = s->e[i].offset - offset; + for (; i < s->h.num_entries; i++) + s->e[i].offset += offset; + + /* Re-calculate checksum */ + s->h.checksum = calc_checksum(s); + + /* Convert members to little-endian */ + subpart_dir_fixup_write_buffer(&subpart_dir_buf); + + memcpy(dst_data, buffer_get(&subpart_dir_buf), + buffer_size(&subpart_dir_buf)); + + buffer_delete(&subpart_dir_buf); + + printf("Sub-partition %s(%d) entry %s replaced from file %s.\n", + param.subpart_name, type, param.dentry_name, param.file_name); + + return REPACK_REQUIRED; +} + +static enum ifwi_ret ifwi_replace(void) +{ + if (!param.file_name) { + ERROR("%s: -f option required\n", __func__); + return COMMAND_ERR; + } + + if (!param.subpart_name) { + ERROR("%s: -n option required\n", __func__); + return COMMAND_ERR; + } + + int type = find_type_by_name(param.subpart_name); + + if (type == -1) + return COMMAND_ERR; + + const struct subpart_info *curr_subpart = &subparts[type]; + + if (curr_subpart->attr & AUTO_GENERATED) { + ERROR("Cannot replace auto-generated sub-partitions.\n"); + return COMMAND_ERR; + } + + if (buffer_size(&ifwi_image.subpart_buf[type]) == 0) { + ERROR("Image does not contain sub-partition %s(%d).\n", + param.subpart_name, type); + return COMMAND_ERR; + } + + if (param.dir_ops) + return ifwi_dir_replace(type); + + return 
ifwi_raw_replace(type); +} + +static enum ifwi_ret ifwi_create(void) +{ + /* + * Create peels off any non-IFWI content present in the input buffer and + * creates output file with only the IFWI present. + */ + + if (!param.file_name) { + ERROR("%s: -f option required\n", __func__); + return COMMAND_ERR; + } + + /* Peel off any non-IFWI prefix */ + buffer_seek(&ifwi_image.input_buff, + ifwi_image.input_ifwi_start_offset); + /* Peel off any non-IFWI suffix */ + buffer_set_size(&ifwi_image.input_buff, + ifwi_image.input_ifwi_end_offset - + ifwi_image.input_ifwi_start_offset); + + /* + * Adjust start and end offset of IFWI now that non-IFWI prefix is gone. + */ + ifwi_image.input_ifwi_end_offset -= ifwi_image.input_ifwi_start_offset; + ifwi_image.input_ifwi_start_offset = 0; + + param.image_name = param.file_name; + + return REPACK_REQUIRED; +} + +struct command { + const char *name; + const char *optstring; + enum ifwi_ret (*function)(void); +}; + +static const struct command commands[] = { + {"add", "f:n:e:dvh?", ifwi_add}, + {"create", "f:vh?", ifwi_create}, + {"delete", "f:n:vh?", ifwi_delete}, + {"extract", "f:n:e:dvh?", ifwi_extract}, + {"print", "dh?", ifwi_print}, + {"replace", "f:n:e:dvh?", ifwi_replace}, +}; + +static struct option long_options[] = { + {"subpart_dentry", required_argument, 0, 'e'}, + {"file", required_argument, 0, 'f'}, + {"help", required_argument, 0, 'h'}, + {"name", required_argument, 0, 'n'}, + {"dir_ops", no_argument, 0, 'd'}, + {"verbose", no_argument, 0, 'v'}, + {NULL, 0, 0, 0 } +}; + +static void usage(const char *name) +{ + printf("ifwitool: Utility for IFWI manipulation\n\n" + "USAGE:\n" + " %s [-h]\n" + " %s FILE COMMAND [PARAMETERS]\n\n" + "COMMANDs:\n" + " add -f FILE -n NAME [-d -e ENTRY]\n" + " create -f FILE\n" + " delete -n NAME\n" + " extract -f FILE -n NAME [-d -e ENTRY]\n" + " print [-d]\n" + " replace -f FILE -n NAME [-d -e ENTRY]\n" + "OPTIONs:\n" + " -f FILE : File to read/write/create/extract\n" + " -d : Perform 
directory operation\n" + " -e ENTRY: Name of directory entry to operate on\n" + " -v : Verbose level\n" + " -h : Help message\n" + " -n NAME : Name of sub-partition to operate on\n", + name, name + ); + + printf("\nNAME should be one of:\n"); + int i; + + for (i = 0; i < MAX_SUBPARTS; i++) + printf("%s(%s)\n", subparts[i].name, subparts[i].readable_name); + printf("\n"); +} + +int main(int argc, char **argv) +{ + if (argc < 3) { + usage(argv[0]); + return 1; + } + + param.image_name = argv[1]; + char *cmd = argv[2]; + + optind += 2; + + uint32_t i; + + for (i = 0; i < ARRAY_SIZE(commands); i++) { + if (strcmp(cmd, commands[i].name) != 0) + continue; + + int c; + + while (1) { + int option_index; + + c = getopt_long(argc, argv, commands[i].optstring, + long_options, &option_index); + + if (c == -1) + break; + + /* Filter out illegal long options */ + if (!strchr(commands[i].optstring, c)) { + ERROR("%s: invalid option -- '%c'\n", argv[0], + c); + c = '?'; + } + + switch (c) { + case 'n': + param.subpart_name = optarg; + break; + case 'f': + param.file_name = optarg; + break; + case 'd': + param.dir_ops = 1; + break; + case 'e': + param.dentry_name = optarg; + break; + case 'v': + verbose++; + break; + case 'h': + case '?': + usage(argv[0]); + return 1; + default: + break; + } + } + + if (ifwi_parse()) { + ERROR("%s: ifwi parsing failed\n", argv[0]); + return 1; + } + + enum ifwi_ret ret = commands[i].function(); + + if (ret == COMMAND_ERR) { + ERROR("%s: failed execution\n", argv[0]); + return 1; + } + + if (ret == REPACK_REQUIRED) + ifwi_repack(); + + return 0; + } + + ERROR("%s: invalid command\n", argv[0]); + return 1; +}

This field is not commented in the original file. Add a comment.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
include/cbfs.h | 11 +++++++++++ 1 file changed, 11 insertions(+)
diff --git a/include/cbfs.h b/include/cbfs.h index bd1bf75bbfc..f2ede25f517 100644 --- a/include/cbfs.h +++ b/include/cbfs.h @@ -40,6 +40,17 @@ enum cbfs_filetype { CBFS_TYPE_CMOS_LAYOUT = 0x01aa };
+enum { + CBFS_HEADER_MAGIC = 0x4f524243, +}; + +/** + * struct cbfs_header - header at the start of a CBFS region + * + * All fields use big-endian format. + * + * @magic: Magic number (CBFS_HEADER_MAGIC) + */ struct cbfs_header { u32 magic; u32 version;
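Since all CBFS header fields are stored big-endian, a boot loader scanning the ROM for this magic sees it as the ASCII bytes 'ORBC'. A quick Python check (illustrative only, not part of the patch):

```python
import struct

CBFS_HEADER_MAGIC = 0x4f524243

# The header is big-endian, so the magic appears in the ROM as
# the ASCII bytes b'ORBC'
rom_bytes = struct.pack('>L', CBFS_HEADER_MAGIC)
print(rom_bytes)  # b'ORBC'
```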

It seems that this field has been renamed in a later version of coreboot. Update it.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
fs/cbfs/cbfs.c | 4 ++-- include/cbfs.h | 5 +++-- 2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/fs/cbfs/cbfs.c b/fs/cbfs/cbfs.c index 7b2513cb24b..af4d3c5e564 100644 --- a/fs/cbfs/cbfs.c +++ b/fs/cbfs/cbfs.c @@ -55,7 +55,7 @@ static void swap_file_header(struct cbfs_fileheader *dest, memcpy(&dest->magic, &src->magic, sizeof(dest->magic)); dest->len = be32_to_cpu(src->len); dest->type = be32_to_cpu(src->type); - dest->checksum = be32_to_cpu(src->checksum); + dest->attributes_offset = be32_to_cpu(src->attributes_offset); dest->offset = be32_to_cpu(src->offset); }
@@ -108,7 +108,7 @@ static int file_cbfs_next_file(u8 *start, u32 size, u32 align, newNode->name = (char *)fileHeader + sizeof(struct cbfs_fileheader); newNode->name_length = name_len; - newNode->checksum = header.checksum; + newNode->attributes_offset = header.attributes_offset;
step = header.len; if (step % align) diff --git a/include/cbfs.h b/include/cbfs.h index f2ede25f517..b8d1dabbf63 100644 --- a/include/cbfs.h +++ b/include/cbfs.h @@ -65,7 +65,8 @@ struct cbfs_fileheader { u8 magic[8]; u32 len; u32 type; - u32 checksum; + /* offset to struct cbfs_file_attribute or 0 */ + u32 attributes_offset; u32 offset; } __packed;
@@ -76,7 +77,7 @@ struct cbfs_cachenode { u32 data_length; char *name; u32 name_length; - u32 checksum; + u32 attributes_offset; } __packed;
extern enum cbfs_result file_cbfs_result;
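The same big-endian conversion can be sketched in Python; this is an illustrative stand-in for swap_file_header(), not code from the patch (the 'LARCHIVE' file magic is assumed from the CBFS format):

```python
import struct

def swap_file_header(raw):
    """Decode a big-endian CBFS file header into a dict.

    Field layout mirrors struct cbfs_fileheader: an 8-byte magic,
    then len, type, attributes_offset and offset as 32-bit fields.
    """
    magic = raw[:8]
    length, ftype, attributes_offset, offset = struct.unpack(
        '>LLLL', raw[8:24])
    return {'magic': magic, 'len': length, 'type': ftype,
            'attributes_offset': attributes_offset, 'offset': offset}

# Build a sample header: 4 bytes of raw data, type 0x50, no attributes
raw = b'LARCHIVE' + struct.pack('>LLLL', 4, 0x50, 0, 0x38)
hdr = swap_file_header(raw)
print(hdr['attributes_offset'])  # 0: no file attributes present
```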

If kwargs contains raise_on_error then this function generates an error due to a duplicate argument. Fix this.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/patman/command.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/patman/command.py b/tools/patman/command.py index 14edcdaffd2..16299f3f5be 100644 --- a/tools/patman/command.py +++ b/tools/patman/command.py @@ -108,8 +108,8 @@ def RunPipe(pipe_list, infile=None, outfile=None, return result
def Output(*cmd, **kwargs): - raise_on_error = kwargs.get('raise_on_error', True) - return RunPipe([cmd], capture=True, raise_on_error=raise_on_error).stdout + kwargs['raise_on_error'] = kwargs.get('raise_on_error', True) + return RunPipe([cmd], capture=True, **kwargs).stdout
def OutputOneLine(*cmd, **kwargs): raise_on_error = kwargs.pop('raise_on_error', True)
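To see why forwarding kwargs alongside an explicit keyword fails, here is a minimal reproduction; run_pipe() is a stand-in for patman's RunPipe():

```python
def run_pipe(cmds, capture=False, raise_on_error=True):
    """Stand-in for patman's RunPipe(), returning its arguments."""
    return (cmds, capture, raise_on_error)

def output_buggy(*cmd, **kwargs):
    # get() leaves the key in kwargs, so passing it explicitly as well
    # duplicates the argument
    raise_on_error = kwargs.get('raise_on_error', True)
    return run_pipe([cmd], capture=True, raise_on_error=raise_on_error,
                    **kwargs)

def output_fixed(*cmd, **kwargs):
    # Set the default inside kwargs and forward it only once
    kwargs['raise_on_error'] = kwargs.get('raise_on_error', True)
    return run_pipe([cmd], capture=True, **kwargs)

try:
    output_buggy('ls', raise_on_error=False)
except TypeError as e:
    print('buggy:', e)  # multiple values for keyword argument
print('fixed:', output_fixed('ls', raise_on_error=False))
```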

This comment is out of date as it does not correctly describe the return value. Fix it.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/bsection.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/binman/bsection.py b/tools/binman/bsection.py index 03dfa2f805c..a3026718794 100644 --- a/tools/binman/bsection.py +++ b/tools/binman/bsection.py @@ -399,10 +399,10 @@ class Section(object): raise ValueError("%s: No such property '%s'" % (msg, prop_name))
def GetEntries(self): - """Get the number of entries in a section + """Get the dict of entries in a section
Returns: - Number of entries in a section + OrderedDict of entries in a section """ return self._entries

Two functions have incorrect names. Fix them.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/ftest.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index cc57ef3e04a..46f669e73b4 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -1758,7 +1758,7 @@ class TestFunctional(unittest.TestCase): TestFunctional._MakeInputFile('-boot', fd.read()) data = self._DoReadFile('096_elf.dts')
- def testElfStripg(self): + def testElfStrip(self): """Basic test of ELF entries""" self._SetupSplElf() with open(self.TestFile('bss_data'), 'rb') as fd: @@ -1784,7 +1784,7 @@ class TestFunctional(unittest.TestCase): <none> 00000003 00000004 u-boot-align ''', map_data)
- def testPacRefCode(self): + def testPackRefCode(self): """Test that an image with an Intel Reference code binary works""" data = self._DoReadFile('100_intel_refcode.dts') self.assertEqual(REFCODE_DATA, data[:len(REFCODE_DATA)])

Test coverage with Python 3 requires a new package. Add details about this.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/README | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/binman/README b/tools/binman/README index ac193f16cf7..decca47bbf3 100644 --- a/tools/binman/README +++ b/tools/binman/README @@ -699,7 +699,7 @@ implementations target 100% test coverage. Run 'binman -T' to check this.
To enable Python test coverage on Debian-type distributions (e.g. Ubuntu):
- $ sudo apt-get install python-coverage python-pytest + $ sudo apt-get install python-coverage python3-coverage python-pytest
Concurrent tests

Sometimes tools can be located by looking in other locations. Add a way to direct the search.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/patman/tools.py | 42 +++++++++++++++++++++++++++++++++++++----- 1 file changed, 37 insertions(+), 5 deletions(-)
diff --git a/tools/patman/tools.py b/tools/patman/tools.py index 8e9f22afe8a..0b3049f91f4 100644 --- a/tools/patman/tools.py +++ b/tools/patman/tools.py @@ -24,6 +24,8 @@ chroot_path = None # Search paths to use for Filename(), used to find files search_paths = []
+tool_search_paths = [] + # Tools and the packages that contain them, on debian packages = { 'lz4': 'liblz4-tool', @@ -154,26 +156,56 @@ def Align(pos, align): def NotPowerOfTwo(num): return num and (num & (num - 1))
-def PathHasFile(fname): +def SetToolPaths(toolpaths): + """Set the path to search for tools + + Args: + toolpaths: List of paths to search for tools executed by Run() + """ + global tool_search_paths + + tool_search_paths = toolpaths + +def PathHasFile(path_spec, fname): """Check if a given filename is in the PATH
Args: + path_spec: Value of PATH variable to check fname: Filename to check
Returns: True if found, False if not """ - for dir in os.environ['PATH'].split(':'): + for dir in path_spec.split(':'): if os.path.exists(os.path.join(dir, fname)): return True return False
def Run(name, *args, **kwargs): + """Run a tool with some arguments + + This runs a 'tool', which is a program used by binman to process files and + perhaps produce some output. Tools can be located on the PATH or in a + search path. + + Args: + name: Command name to run + args: Arguments to the tool + kwargs: Options to pass to command.run() + + Returns: + CommandResult object + """ try: - return command.Run(name, *args, cwd=outdir, capture=True, **kwargs) + env = None + if tool_search_paths: + env = dict(os.environ) + env['PATH'] = ':'.join(tool_search_paths) + ':' + env['PATH'] + return command.Run(name, *args, capture=True, + capture_stderr=True, env=env, **kwargs) except: - if not PathHasFile(name): - msg = "Plesae install tool '%s'" % name + if env and not PathHasFile(env['PATH'], name): + msg = "Please install tool '%s'" % name package = packages.get(name) if package: msg += " (e.g. from package '%s')" % package

Sometimes tools used by binman may not be in the normal PATH search path, such as when the tool is built by the U-Boot build itself (e.g. mkimage). Provide a way to specify an additional search path for tools. The flag can be used multiple times.
Update the help to describe this option.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/README | 10 ++++++++++ tools/binman/cmdline.py | 2 ++ tools/binman/control.py | 1 + 3 files changed, 13 insertions(+)
diff --git a/tools/binman/README b/tools/binman/README index decca47bbf3..28624fadb33 100644 --- a/tools/binman/README +++ b/tools/binman/README @@ -691,6 +691,16 @@ Not all properties can be provided this way. Only some entries support it, typically for filenames.
+External tools +-------------- + +Binman can make use of external command-line tools to handle processing of +entry contents or to generate entry contents. These tools are executed using +the 'tools' module's Run() method. The tools generally must exist on the PATH, +but the --toolpath option can be used to specify additional search paths to +use. This option can be specified multiple times to add more than one path. + + Code coverage -------------
diff --git a/tools/binman/cmdline.py b/tools/binman/cmdline.py index 3886d52b3a0..ee19c5e33fe 100644 --- a/tools/binman/cmdline.py +++ b/tools/binman/cmdline.py @@ -52,6 +52,8 @@ def ParseArgs(argv): default=False, help='run tests') parser.add_option('-T', '--test-coverage', action='store_true', default=False, help='run tests and check for 100% coverage') + parser.add_option('--toolpath', type='string', action='append', + help='Add a path to the directories containing tools') parser.add_option('-u', '--update-fdt', action='store_true', default=False, help='Update the binman node with offset/size info') parser.add_option('-v', '--verbosity', default=1, diff --git a/tools/binman/control.py b/tools/binman/control.py index 20186ee1980..df78848e13d 100644 --- a/tools/binman/control.py +++ b/tools/binman/control.py @@ -112,6 +112,7 @@ def Binman(options, args): try: tools.SetInputDirs(options.indir) tools.PrepareOutputDir(options.outdir, options.preserve) + tools.SetToolPaths(options.toolpath) state.SetEntryArgs(options.entry_arg)
# Get the device tree ready by compiling it and copying the compiled

Some functions lack comments in this file. Add comments to cover this functionality.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/bsection.py | 23 +++++++++++++++++++++++ 1 file changed, 23 insertions(+)
diff --git a/tools/binman/bsection.py b/tools/binman/bsection.py index a3026718794..e0e3707db41 100644 --- a/tools/binman/bsection.py +++ b/tools/binman/bsection.py @@ -452,13 +452,36 @@ class Section(object): source_entry.Raise("Cannot find entry for node '%s'" % node.name)
def ExpandSize(self, size): + """Change the size of an entry + + Args: + size: New size for entry + """ if size != self._size: self._size = size
def GetRootSkipAtStart(self): + """Get the skip-at-start value for the top-level section + + This is used to find out the starting offset for the root section that + contains this section. If this is a top-level section then it returns + the skip-at-start offset for this section. + + This is used to get the absolute position of the section within the image. + + Returns: + Integer skip-at-start value for the root section containing this + section + """ if self._parent_section: return self._parent_section.GetRootSkipAtStart() return self._skip_at_start
def GetImageSize(self): + """Get the size of the image containing this section + + Returns: + Image size as an integer number of bytes, which may be None if the + image size is dynamic and its sections have not yet been packed + """ return self._image._size

At present GetOffsets() lacks a function comment. Add one.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/entry.py | 15 +++++++++++++++ 1 file changed, 15 insertions(+)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index d842d89dd66..e8d0adec1e9 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -355,6 +355,21 @@ class Entry(object): return self.data
def GetOffsets(self): + """Get the offsets for siblings + + Some entry types can contain information about the position or size of + other entries. An example of this is the Intel Flash Descriptor, which + knows where the Intel Management Engine section should go. + + If this entry knows about the position of other entries, it can specify + this by returning values here + + Returns: + Dict: + key: Entry type + value: List containing position and size of the given entry + type. + """ return {}
def SetOffsetSize(self, pos, size):

Applied to u-boot-dm, thanks!

The current help is confusing. Adjust it to indicate what the flag actually does.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/cmdline.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/binman/cmdline.py b/tools/binman/cmdline.py index ee19c5e33fe..39b835666ea 100644 --- a/tools/binman/cmdline.py +++ b/tools/binman/cmdline.py @@ -35,7 +35,7 @@ def ParseArgs(argv): parser.add_option('-i', '--image', type='string', action='append', help='Image filename to build (if not specified, build all)') parser.add_option('-I', '--indir', action='append', - help='Add a path to a directory to use for input files') + help='Add a path to the list of directories to use for input files') parser.add_option('-H', '--full-help', action='store_true', default=False, help='Display the README file') parser.add_option('-m', '--map', action='store_true',

Applied to u-boot-dm, thanks!

FD is a bit confusing so write this out in full. Also avoid splitting the string so that people can grep for the error message more easily.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/etype/intel_descriptor.py | 2 +- tools/binman/ftest.py | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/tools/binman/etype/intel_descriptor.py b/tools/binman/etype/intel_descriptor.py index 6acbbd8b7a5..9deb8dcf42c 100644 --- a/tools/binman/etype/intel_descriptor.py +++ b/tools/binman/etype/intel_descriptor.py @@ -51,7 +51,7 @@ class Entry_intel_descriptor(Entry_blob): def GetOffsets(self): offset = self.data.find(FD_SIGNATURE) if offset == -1: - self.Raise('Cannot find FD signature') + self.Raise('Cannot find Intel Flash Descriptor (FD) signature') flvalsig, flmap0, flmap1, flmap2 = struct.unpack('<LLLL', self.data[offset:offset + 16]) frba = ((flmap0 >> 16) & 0xff) << 4 diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 46f669e73b4..8577adb5380 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -806,8 +806,8 @@ class TestFunctional(unittest.TestCase): TestFunctional._MakeInputFile('descriptor.bin', b'') with self.assertRaises(ValueError) as e: self._DoTestFile('031_x86-rom-me.dts') - self.assertIn("Node '/binman/intel-descriptor': Cannot find FD " - "signature", str(e.exception)) + self.assertIn("Node '/binman/intel-descriptor': Cannot find Intel Flash Descriptor (FD) signature", + str(e.exception))
def testPackX86RomBadDesc(self): """Test that the Intel requires a descriptor entry"""
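For reference, the parsing that follows the signature search looks roughly like this; find_frba() is a hypothetical helper and the signature value (0x0ff0a55a, stored little-endian) is assumed from the Intel Flash Descriptor format:

```python
import struct

FD_SIGNATURE = struct.pack('<L', 0x0ff0a55a)  # assumed signature value

def find_frba(data):
    """Locate the descriptor signature and return the Flash Region
    Base Address, much as Entry_intel_descriptor.GetOffsets() does."""
    offset = data.find(FD_SIGNATURE)
    if offset == -1:
        raise ValueError(
            'Cannot find Intel Flash Descriptor (FD) signature')
    flvalsig, flmap0, flmap1, flmap2 = struct.unpack(
        '<LLLL', data[offset:offset + 16])
    # FRBA is held in bits 16-23 of FLMAP0, in units of 16 bytes
    return ((flmap0 >> 16) & 0xff) << 4

# 16 bytes of padding, then the signature and the three FLMAP words
data = b'\xff' * 16 + FD_SIGNATURE + struct.pack('<LLL', 0x00040003, 0, 0)
print(hex(find_frba(data)))  # 0x40
```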

Applied to u-boot-dm, thanks!

If tests are skipped we should ideally exit with an error, since there may be a missing dependency. However at present this is not desirable since it breaks travis tests. For now, just report the skips.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/binman.py | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/tools/binman/binman.py b/tools/binman/binman.py index aad2e9c8bc4..9f8c5c99b79 100755 --- a/tools/binman/binman.py +++ b/tools/binman/binman.py @@ -104,9 +104,14 @@ def RunTests(debug, processes, args): print(test.id(), err) for test, err in result.failures: print(err, result.failures) + if result.skipped: + print('%d binman test%s SKIPPED:' % + (len(result.skipped), 's' if len(result.skipped) > 1 else '')) + for skip_info in result.skipped: + print('%s: %s' % (skip_info[0], skip_info[1])) if result.errors or result.failures: - print('binman tests FAILED') - return 1 + print('binman tests FAILED') + return 1 return 0
def GetEntryModules(include_testing=True):
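The reporting logic can be exercised with a throwaway test case; this is a sketch of how unittest exposes skips, not binman code:

```python
import io
import unittest

class Demo(unittest.TestCase):
    def test_skip(self):
        # Simulate a missing dependency, e.g. an old lz4 on travis
        self.skipTest('lz4 tool not available')

suite = unittest.TestLoader().loadTestsFromTestCase(Demo)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
if result.skipped:
    print('%d binman test%s SKIPPED:' %
          (len(result.skipped), 's' if len(result.skipped) > 1 else ''))
    for test, reason in result.skipped:
        print('%s: %s' % (test, reason))
```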

Applied to u-boot-dm, thanks!

It is useful to create an ELF file for testing purposes, with just the right attributes used by the test. Add a function to handle this, along with a test that it works correctly.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/elf.py | 98 ++++++++++++++++++++++++++++++++++++++++ tools/binman/elf_test.py | 20 ++++++++ 2 files changed, 118 insertions(+)
diff --git a/tools/binman/elf.py b/tools/binman/elf.py index 828681d76d0..e6dc6ec1a9b 100644 --- a/tools/binman/elf.py +++ b/tools/binman/elf.py @@ -5,11 +5,15 @@ # Handle various things related to ELF images #
+from __future__ import print_function + from collections import namedtuple, OrderedDict import command import os import re +import shutil import struct +import tempfile
import tools
@@ -128,3 +132,97 @@ def LookupAndWriteSymbols(elf_fname, entry, section): (msg, name, offset, value, len(value_bytes))) entry.data = (entry.data[:offset] + value_bytes + entry.data[offset + sym.size:]) + +def MakeElf(elf_fname, text, data): + """Make an elf file with the given data in a single section + + The output file has several sections, including '.text' and '.data', + containing the info provided in arguments. + + Args: + elf_fname: Output filename + text: Text (code) to put in the file's .text section + data: Data to put in the file's .data section + """ + outdir = tempfile.mkdtemp(prefix='binman.elf.') + s_file = os.path.join(outdir, 'elf.S') + + # Split the text into two parts so that we can make the entry point two + # bytes after the start of the text section + text_bytes1 = ['\t.byte\t%#x' % tools.ToByte(byte) for byte in text[:2]] + text_bytes2 = ['\t.byte\t%#x' % tools.ToByte(byte) for byte in text[2:]] + data_bytes = ['\t.byte\t%#x' % tools.ToByte(byte) for byte in data] + with open(s_file, 'w') as fd: + print('''/* Auto-generated C program to produce an ELF file for testing */ + +.section .text +.code32 +.globl _start +.type _start, @function +%s +_start: +%s +.ident "comment" + +.comm fred,8,4 + +.section .empty +.globl _empty +_empty: +.byte 1 + +.globl ernie +.data +.type ernie, @object +.size ernie, 4 +ernie: +%s +''' % ('\n'.join(text_bytes1), '\n'.join(text_bytes2), '\n'.join(data_bytes)), + file=fd) + lds_file = os.path.join(outdir, 'elf.lds') + + # Use a linker script to set the alignment and text address. + with open(lds_file, 'w') as fd: + print('''/* Auto-generated linker script to produce an ELF file for testing */ + +PHDRS +{ + text PT_LOAD ; + data PT_LOAD ; + empty PT_LOAD FLAGS ( 6 ) ; + note PT_NOTE ; +} + +SECTIONS +{ + . = 0xfef20000; + ENTRY(_start) + .text . 
: SUBALIGN(0) + { + *(.text) + } :text + .data : { + *(.data) + } :data + _bss_start = .; + .empty : { + *(.empty) + } :empty + .note : { + *(.comment) + } :note + .bss _bss_start (OVERLAY) : { + *(.bss) + } +} +''', file=fd) + # -static: Avoid requiring any shared libraries + # -nostdlib: Don't link with C library + # -Wl,--build-id=none: Don't generate a build ID, so that we just get the + # text section at the start + # -m32: Build for 32-bit x86 + # -T...: Specifies the link script, which sets the start address + stdout = command.Output('cc', '-static', '-nostdlib', '-Wl,--build-id=none', + '-m32','-T', lds_file, '-o', elf_fname, s_file) + shutil.rmtree(outdir) + diff --git a/tools/binman/elf_test.py b/tools/binman/elf_test.py index 42d94cbbbe2..3172982427d 100644 --- a/tools/binman/elf_test.py +++ b/tools/binman/elf_test.py @@ -5,9 +5,12 @@ # Test for the elf module
import os +import shutil import sys +import tempfile import unittest
+import command import elf import test_util import tools @@ -136,6 +139,23 @@ class TestElf(unittest.TestCase): elf.debug = False self.assertTrue(len(stdout.getvalue()) > 0)
+ def testMakeElf(self): + """Test for the MakeElf function""" + outdir = tempfile.mkdtemp(prefix='elf.') + expected_text = b'1234' + expected_data = b'wxyz' + elf_fname = os.path.join(outdir, 'elf') + bin_fname = os.path.join(outdir, 'elf') + + # Make an Elf file and then convert it to a flat binary file. This + # should produce the original data. + elf.MakeElf(elf_fname, expected_text, expected_data) + stdout = command.Output('objcopy', '-O', 'binary', elf_fname, bin_fname) + with open(bin_fname, 'rb') as fd: + data = fd.read() + self.assertEqual(expected_text + expected_data, data) + shutil.rmtree(outdir) +
if __name__ == '__main__': unittest.main()

Applied to u-boot-dm, thanks!

Add a function which decodes an ELF file, working out where in memory each part of the data should be written.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/README | 4 +++ tools/binman/elf.py | 76 ++++++++++++++++++++++++++++++++++++++++ tools/binman/elf_test.py | 21 +++++++++++ 3 files changed, 101 insertions(+)
diff --git a/tools/binman/README b/tools/binman/README index 28624fadb33..0ff30ef6fd9 100644 --- a/tools/binman/README +++ b/tools/binman/README @@ -181,6 +181,10 @@ the configuration of the Intel-format descriptor. Running binman --------------
+First install prerequisites, e.g. + + sudo apt-get install python-pyelftools python3-pyelftools + Type:
binman -b <board_name> diff --git a/tools/binman/elf.py b/tools/binman/elf.py index e6dc6ec1a9b..8147b3437dd 100644 --- a/tools/binman/elf.py +++ b/tools/binman/elf.py @@ -9,6 +9,7 @@ from __future__ import print_function
from collections import namedtuple, OrderedDict import command +import io import os import re import shutil @@ -17,11 +18,26 @@ import tempfile
import tools
+ELF_TOOLS = True +try: + from elftools.elf.elffile import ELFFile + from elftools.elf.sections import SymbolTableSection +except: # pragma: no cover + ELF_TOOLS = False + # This is enabled from control.py debug = False
Symbol = namedtuple('Symbol', ['section', 'address', 'size', 'weak'])
+# Information about an ELF file: +# data: Extracted program contents of ELF file (this would be loaded by an +# ELF loader when reading this file) +# load: Load address of code +# entry: Entry address of code +# memsize: Number of bytes in memory occupied by loading this ELF file +ElfInfo = namedtuple('ElfInfo', ['data', 'load', 'entry', 'memsize']) +
def GetSymbols(fname, patterns): """Get the symbols from an ELF file @@ -226,3 +242,63 @@ SECTIONS '-m32','-T', lds_file, '-o', elf_fname, s_file) shutil.rmtree(outdir)
+def DecodeElf(data, location): + """Decode an ELF file and return information about it + + Args: + data: Data from ELF file + location: Start address of data to return + + Returns: + ElfInfo object containing information about the decoded ELF file + """ + file_size = len(data) + with io.BytesIO(data) as fd: + elf = ELFFile(fd) + data_start = 0xffffffff; + data_end = 0; + mem_end = 0; + virt_to_phys = 0; + + for i in range(elf.num_segments()): + segment = elf.get_segment(i) + if segment['p_type'] != 'PT_LOAD' or not segment['p_memsz']: + skipped = 1 # To make code-coverage see this line + continue + start = segment['p_paddr'] + mend = start + segment['p_memsz'] + rend = start + segment['p_filesz'] + data_start = min(data_start, start) + data_end = max(data_end, rend) + mem_end = max(mem_end, mend) + if not virt_to_phys: + virt_to_phys = segment['p_paddr'] - segment['p_vaddr'] + + output = bytearray(data_end - data_start) + for i in range(elf.num_segments()): + segment = elf.get_segment(i) + if segment['p_type'] != 'PT_LOAD' or not segment['p_memsz']: + skipped = 1 # To make code-coverage see this line + continue + start = segment['p_paddr'] + offset = 0 + if start < location: + offset = location - start + start = location + # A legal ELF file can have a program header with non-zero length + # but zero-length file size and a non-zero offset which, added + # together, are greater than input->size (i.e. the total file size). + # So we need to not even test in the case that p_filesz is zero. + # Note: All of this code is commented out since we don't have a test + # case for it. + size = segment['p_filesz'] + #if not size: + #continue + #end = segment['p_offset'] + segment['p_filesz'] + #if end > file_size: + #raise ValueError('Underflow copying out the segment. 
File has %#x bytes left, segment end is %#x\n', + #file_size, end) + output[start - data_start:start - data_start + size] = ( + segment.data()[offset:]) + return ElfInfo(output, data_start, elf.header['e_entry'] + virt_to_phys, + mem_end - data_start) diff --git a/tools/binman/elf_test.py b/tools/binman/elf_test.py index 3172982427d..e2506377f26 100644 --- a/tools/binman/elf_test.py +++ b/tools/binman/elf_test.py @@ -156,6 +156,27 @@ class TestElf(unittest.TestCase): self.assertEqual(expected_text + expected_data, data) shutil.rmtree(outdir)
+ def testDecodeElf(self): + """Test for the DecodeElf function""" + if not elf.ELF_TOOLS: + self.skipTest('Python elftools not available') + outdir = tempfile.mkdtemp(prefix='elf.') + expected_text = b'1234' + expected_data = b'wxyz' + elf_fname = os.path.join(outdir, 'elf') + elf.MakeElf(elf_fname, expected_text, expected_data) + data = tools.ReadFile(elf_fname) + + load = 0xfef20000 + entry = load + 2 + expected = expected_text + expected_data + self.assertEqual(elf.ElfInfo(expected, load, entry, len(expected)), + elf.DecodeElf(data, 0)) + self.assertEqual(elf.ElfInfo(b'\0\0' + expected[2:], + load, entry, len(expected)), + elf.DecodeElf(data, load + 2)) + #shutil.rmtree(outdir) +
if __name__ == '__main__': unittest.main()

Applied to u-boot-dm, thanks!

Code coverage tests fail on binman due to dist-packages being dropped from the python path on Ubuntu 16.04. Add them in so that we can find the elffile module, which is required by binman.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/binman.py | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/tools/binman/binman.py b/tools/binman/binman.py index 9f8c5c99b79..05aeaecd8f3 100755 --- a/tools/binman/binman.py +++ b/tools/binman/binman.py @@ -11,9 +11,11 @@
from __future__ import print_function
+from distutils.sysconfig import get_python_lib import glob import multiprocessing import os +import site import sys import traceback import unittest @@ -28,6 +30,12 @@ sys.path.insert(0, 'scripts/dtc/pylibfdt') sys.path.insert(0, os.path.join(our_path, '../../build-sandbox_spl/scripts/dtc/pylibfdt'))
+# When running under python-coverage on Ubuntu 16.04, the dist-packages +# directories are dropped from the python path. Add them in so that we can find +# the elffile module. We could use site.getsitepackages() here but unfortunately +# that is not available in a virtualenv. +sys.path.append(get_python_lib()) + import cmdline import command use_concurrent = True

Applied to u-boot-dm, thanks!

At present binman requires that the Intel descriptor has an explicit offset. Generally this is 0 since the descriptor is at the start of the image. Add a default to handle this, so users don't need to specify the offset.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/bsection.py | 8 ++++++++ tools/binman/etype/intel_descriptor.py | 2 ++ 2 files changed, 10 insertions(+)
diff --git a/tools/binman/bsection.py b/tools/binman/bsection.py
index e0e3707db41..49b8ef3e3e0 100644
--- a/tools/binman/bsection.py
+++ b/tools/binman/bsection.py
@@ -477,6 +477,14 @@ class Section(object):
             return self._parent_section.GetRootSkipAtStart()
         return self._skip_at_start

+    def GetStartOffset(self):
+        """Get the start offset for this section
+
+        Returns:
+            The first available offset in this section (typically 0)
+        """
+        return self._skip_at_start
+
     def GetImageSize(self):
         """Get the size of the image containing this section

diff --git a/tools/binman/etype/intel_descriptor.py b/tools/binman/etype/intel_descriptor.py
index 9deb8dcf42c..661063457ed 100644
--- a/tools/binman/etype/intel_descriptor.py
+++ b/tools/binman/etype/intel_descriptor.py
@@ -47,6 +47,8 @@ class Entry_intel_descriptor(Entry_blob):
     def __init__(self, section, etype, node):
         Entry_blob.__init__(self, section, etype, node)
         self._regions = []
+        if self.offset is None:
+            self.offset = self.section.GetStartOffset()

     def GetOffsets(self):
         offset = self.data.find(FD_SIGNATURE)

Applied to u-boot-dm, thanks!

At present having a descriptor means that there is an ME (Intel Management Engine) entry as well. The descriptor provides the ME location and assumes that it is present.
For some SoCs this is not true. Before providing the location of a potentially non-existent entry, check if it is present.
Update the comment in the ME entry also.
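The sibling check can be sketched with toy stand-ins for binman's Section and Entry classes (illustrative names only, not binman's actual code): the descriptor only reports an ME region if an 'intel-me' entry actually exists in the same section.

```python
class Section:
    """Toy stand-in for binman's Section."""
    def __init__(self):
        self._entries = {}

    def GetEntries(self):
        return self._entries

class Entry:
    def __init__(self, section, name):
        self.section = section
        section.GetEntries()[name] = self

    def HasSibling(self, name):
        # True if an entry of this name exists in the same section
        return name in self.section.GetEntries()

section = Section()
desc = Entry(section, 'intel-descriptor')
print(desc.HasSibling('intel-me'))  # False: no ME entry yet
Entry(section, 'intel-me')
print(desc.HasSibling('intel-me'))  # True once the sibling exists
```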
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/README.entries            |  2 ++
 tools/binman/entry.py                  |  9 +++++++++
 tools/binman/etype/intel_descriptor.py | 10 +++++++---
 tools/binman/etype/intel_me.py         |  2 ++
 4 files changed, 20 insertions(+), 3 deletions(-)
diff --git a/tools/binman/README.entries b/tools/binman/README.entries
index 357946d6305..702fc9fda08 100644
--- a/tools/binman/README.entries
+++ b/tools/binman/README.entries
@@ -206,6 +206,8 @@ does not directly execute code in the ME binary.

 A typical filename is 'me.bin'.

+The position of this entry is generally set by the intel-descriptor entry.
+
 See README.x86 for information about x86 binary blobs.

diff --git a/tools/binman/entry.py b/tools/binman/entry.py
index e8d0adec1e9..7ead997e0fd 100644
--- a/tools/binman/entry.py
+++ b/tools/binman/entry.py
@@ -544,3 +544,12 @@ features to produce new behaviours.
             # the data grows. This should not fail, but check it to be sure.
             if not self.ObtainContents():
                 self.Raise('Cannot obtain contents when expanding entry')
+
+    def HasSibling(self, name):
+        """Check if there is a sibling of a given name
+
+        Returns:
+            True if there is an entry with this name in the the same section,
+                else False
+        """
+        return name in self.section.GetEntries()
diff --git a/tools/binman/etype/intel_descriptor.py b/tools/binman/etype/intel_descriptor.py
index 661063457ed..65ba2391e69 100644
--- a/tools/binman/etype/intel_descriptor.py
+++ b/tools/binman/etype/intel_descriptor.py
@@ -60,6 +60,10 @@ class Entry_intel_descriptor(Entry_blob):
         for i in range(MAX_REGIONS):
             self._regions.append(Region(self.data, frba, i))

-        # Set the offset for ME only, for now, since the others are not used
-        return {'intel-me': [self._regions[REGION_ME].base,
-                             self._regions[REGION_ME].size]}
+        # Set the offset for ME (Management Engine) only, for now, since the
+        # others are not used
+        info = {}
+        if self.HasSibling('intel-me'):
+            info['intel-me'] = [self._regions[REGION_ME].base,
+                                self._regions[REGION_ME].size]
+        return info
diff --git a/tools/binman/etype/intel_me.py b/tools/binman/etype/intel_me.py
index 247c5b33866..c932ec52225 100644
--- a/tools/binman/etype/intel_me.py
+++ b/tools/binman/etype/intel_me.py
@@ -22,6 +22,8 @@ class Entry_intel_me(Entry_blob):

     A typical filename is 'me.bin'.

+    The position of this entry is generally set by the intel-descriptor entry.
+
     See README.x86 for information about x86 binary blobs.
     """
     def __init__(self, section, etype, node):

Applied to u-boot-dm, thanks!

At present this function always sets both the offset and the size of entries. But in some cases we want to set only one or the other, for example with the forthcoming ifwi entry, where we only set the offset. Update the function to handle this.
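The None-means-unchanged semantics can be sketched in a few lines (a minimal illustration, not the binman class itself):

```python
class Entry:
    """Minimal sketch of the updated SetOffsetSize() semantics."""
    def __init__(self):
        self.offset = None
        self.size = None

    def SetOffsetSize(self, offset, size):
        # None means "leave the current value alone"
        if offset is not None:
            self.offset = offset
        if size is not None:
            self.size = size

entry = Entry()
entry.SetOffsetSize(0x100, 0x40)
entry.SetOffsetSize(0x200, None)  # update the offset only; size is untouched
print(hex(entry.offset), hex(entry.size))  # 0x200 0x40
```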
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/bsection.py |  7 ++++---
 tools/binman/entry.py    | 16 ++++++++++++----
 2 files changed, 16 insertions(+), 7 deletions(-)
diff --git a/tools/binman/bsection.py b/tools/binman/bsection.py
index 49b8ef3e3e0..3e3d369d5e4 100644
--- a/tools/binman/bsection.py
+++ b/tools/binman/bsection.py
@@ -236,14 +236,15 @@ class Section(object):

         Args:
             name: Entry name to update
-            offset: New offset
-            size: New size
+            offset: New offset, or None to leave alone
+            size: New size, or None to leave alone
         """
         entry = self._entries.get(name)
         if not entry:
             self._Raise("Unable to set offset/size for unknown entry '%s'" %
                         name)
-        entry.SetOffsetSize(self._skip_at_start + offset, size)
+        entry.SetOffsetSize(self._skip_at_start + offset if offset else None,
+                            size)

     def GetEntryOffsets(self):
         """Handle entries that want to set the offset/size of other entries
diff --git a/tools/binman/entry.py b/tools/binman/entry.py
index 7ead997e0fd..7356c49c626 100644
--- a/tools/binman/entry.py
+++ b/tools/binman/entry.py
@@ -368,13 +368,21 @@ class Entry(object):
             Dict:
                 key: Entry type
                 value: List containing position and size of the given entry
-                    type.
+                    type. Either can be None if not known
         """
         return {}

-    def SetOffsetSize(self, pos, size):
-        self.offset = pos
-        self.size = size
+    def SetOffsetSize(self, offset, size):
+        """Set the offset and/or size of an entry
+
+        Args:
+            offset: New offset, or None to leave alone
+            size: New size, or None to leave alone
+        """
+        if offset is not None:
+            self.offset = offset
+        if size is not None:
+            self.size = size

     def SetImagePos(self, image_pos):
         """Set the position in the image

Applied to u-boot-dm, thanks!

At present text entries use an indirect method to specify the text to use, with a label pointing to the text itself.
Allow the text to be directly written into the node. This is more convenient in cases where the text is constant.
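The lookup order can be sketched as plain Python (names here are illustrative, not binman's internal API): a direct 'text' property wins; otherwise the entry falls back to the 'text-label' indirection.

```python
def get_text(props, entry_args):
    """Sketch of the text-entry lookup order.

    props: properties found on the device-tree node
    entry_args: externally supplied entry arguments
    """
    value = props.get('text')
    if value:
        return value  # direct text wins
    label = props.get('text-label')
    if label:
        # indirect: the label names the property/arg holding the string
        return props.get(label) or entry_args.get(label)
    return None

print(get_text({'text': 'direct'}, {}))                        # 'direct'
print(get_text({'text-label': 'msg', 'msg': 'indirect'}, {}))  # 'indirect'
```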
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/README.entries    |  9 +++++++++
 tools/binman/etype/text.py     | 23 +++++++++++++++++++----
 tools/binman/ftest.py          |  2 +-
 tools/binman/test/066_text.dts |  5 +++++
 4 files changed, 34 insertions(+), 5 deletions(-)
diff --git a/tools/binman/README.entries b/tools/binman/README.entries
index 702fc9fda08..9cbdbbaadef 100644
--- a/tools/binman/README.entries
+++ b/tools/binman/README.entries
@@ -312,6 +312,8 @@ Properties / Entry arguments:
         that contains the string to place in the entry
     <xxx> (actual name is the value of text-label): contains the string to
         place in the entry.
+    <text>: The text to place in the entry (overrides the above mechanism).
+        This is useful when the text is constant.

 Example node:

@@ -334,6 +336,13 @@ It is also possible to put the string directly in the node:
         message = "a message directly in the node"
     };

+or just:
+
+    text {
+        size = <8>;
+        text = "some text directly in the node"
+    };
+
 The text is not itself nul-terminated. This can be achieved, if required,
 by setting the size of the entry to something larger than the text.

diff --git a/tools/binman/etype/text.py b/tools/binman/etype/text.py
index 9ee04d7c9d8..da1813a638e 100644
--- a/tools/binman/etype/text.py
+++ b/tools/binman/etype/text.py
@@ -22,6 +22,8 @@ class Entry_text(Entry):
             that contains the string to place in the entry
         <xxx> (actual name is the value of text-label): contains the string to
             place in the entry.
+        <text>: The text to place in the entry (overrides the above mechanism).
+            This is useful when the text is constant.

     Example node:

@@ -44,15 +46,28 @@ class Entry_text(Entry):
             message = "a message directly in the node"
         };

+    or just:
+
+        text {
+            size = <8>;
+            text = "some text directly in the node"
+        };
+
     The text is not itself nul-terminated. This can be achieved, if required,
     by setting the size of the entry to something larger than the text.
     """
     def __init__(self, section, etype, node):
         Entry.__init__(self, section, etype, node)
-        label, = self.GetEntryArgsOrProps([EntryArg('text-label', str)])
-        self.text_label = tools.ToStr(label) if type(label) != str else label
-        value, = self.GetEntryArgsOrProps([EntryArg(self.text_label, str)])
-        value = tools.ToBytes(value) if value is not None else value
+        value = fdt_util.GetString(self._node, 'text')
+        if value:
+            value = tools.ToBytes(value)
+        else:
+            label, = self.GetEntryArgsOrProps([EntryArg('text-label', str)])
+            self.text_label = label
+            if self.text_label:
+                value, = self.GetEntryArgsOrProps([EntryArg(self.text_label,
+                                                            str)])
+                value = tools.ToBytes(value) if value is not None else value
         self.value = value

     def ObtainContents(self):
diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py
index 8577adb5380..c74e12d13c8 100644
--- a/tools/binman/ftest.py
+++ b/tools/binman/ftest.py
@@ -1286,7 +1286,7 @@ class TestFunctional(unittest.TestCase):
         expected = (tools.ToBytes(TEXT_DATA) +
                     tools.GetBytes(0, 8 - len(TEXT_DATA)) +
                     tools.ToBytes(TEXT_DATA2) + tools.ToBytes(TEXT_DATA3) +
-                    b'some text')
+                    b'some text' + b'more text')
         self.assertEqual(expected, data)

     def testEntryDocs(self):
diff --git a/tools/binman/test/066_text.dts b/tools/binman/test/066_text.dts
index 59b1fed0ef8..f23a75ae929 100644
--- a/tools/binman/test/066_text.dts
+++ b/tools/binman/test/066_text.dts
@@ -24,5 +24,10 @@
 			text-label = "test-id4";
 			test-id4 = "some text";
 		};
+		/* Put text directly in the node */
+		text5 {
+			type = "text";
+			text = "more text";
+		};
 	};
 };

Applied to u-boot-dm, thanks!

Add utility functions to compress and decompress using lz4 and lzma algorithms. In the latter case these use the legacy lzma support favoured by coreboot's CBFS.
No tests are provided as these functions will be tested by the CBFS tests in a separate patch.
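The legacy '.lzma' ("alone") container used by cbfstool can also be produced with Python's stdlib lzma module via FORMAT_ALONE. A round-trip sketch for illustration only (binman itself shells out to the lzma_alone and lz4 tools, with different dictionary settings):

```python
import lzma

data = b'binman compression example ' * 32

# Legacy single-stream .lzma container, as favoured by coreboot's CBFS,
# rather than the newer .xz container format
comp = lzma.compress(data, format=lzma.FORMAT_ALONE)
orig = lzma.decompress(comp)
assert orig == data
print(len(comp) < len(data))  # True: repetitive data compresses well
```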
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/README   |  3 +-
 tools/patman/tools.py | 66 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 68 insertions(+), 1 deletion(-)
diff --git a/tools/binman/README b/tools/binman/README
index 0ff30ef6fd9..7eda244bbe2 100644
--- a/tools/binman/README
+++ b/tools/binman/README
@@ -183,7 +183,8 @@ Running binman

 First install prerequisites, e.g.

-	sudo apt-get install python-pyelftools python3-pyelftools
+	sudo apt-get install python-pyelftools python3-pyelftools lzma-alone \
+		liblz4-tool

 Type:

diff --git a/tools/patman/tools.py b/tools/patman/tools.py
index 0b3049f91f4..69d03d38608 100644
--- a/tools/patman/tools.py
+++ b/tools/patman/tools.py
@@ -374,3 +374,69 @@ def ToBytes(string):
     if sys.version_info[0] >= 3:
         return string.encode('utf-8')
     return string
+
+def Compress(indata, algo):
+    """Compress some data using a given algorithm
+
+    Note that for lzma this uses an old version of the algorithm, not that
+    provided by xz.
+
+    This requires 'lz4' and 'lzma_alone' tools. It also requires an output
+    directory to be previously set up, by calling PrepareOutputDir().
+
+    Args:
+        indata: Input data to compress
+        algo: Algorithm to use ('none', 'gzip', 'lz4' or 'lzma')
+
+    Returns:
+        Compressed data
+    """
+    if algo == 'none':
+        return indata
+    fname = GetOutputFilename('%s.comp.tmp' % algo)
+    WriteFile(fname, indata)
+    if algo == 'lz4':
+        data = Run('lz4', '--no-frame-crc', '-c', fname, binary=True)
+    # cbfstool uses a very old version of lzma
+    elif algo == 'lzma':
+        outfname = GetOutputFilename('%s.comp.otmp' % algo)
+        Run('lzma_alone', 'e', fname, outfname, '-lc1', '-lp0', '-pb0', '-d8')
+        data = ReadFile(outfname)
+    elif algo == 'gzip':
+        data = Run('gzip', '-c', fname, binary=True)
+    else:
+        raise ValueError("Unknown algorithm '%s'" % algo)
+    return data
+
+def Decompress(indata, algo):
+    """Decompress some data using a given algorithm
+
+    Note that for lzma this uses an old version of the algorithm, not that
+    provided by xz.
+
+    This requires 'lz4' and 'lzma_alone' tools. It also requires an output
+    directory to be previously set up, by calling PrepareOutputDir().
+
+    Args:
+        indata: Input data to decompress
+        algo: Algorithm to use ('none', 'gzip', 'lz4' or 'lzma')
+
+    Returns:
+        Compressed data
+    """
+    if algo == 'none':
+        return indata
+    fname = GetOutputFilename('%s.decomp.tmp' % algo)
+    with open(fname, 'wb') as fd:
+        fd.write(indata)
+    if algo == 'lz4':
+        data = Run('lz4', '-dc', fname, binary=True)
+    elif algo == 'lzma':
+        outfname = GetOutputFilename('%s.decomp.otmp' % algo)
+        Run('lzma_alone', 'd', fname, outfname)
+        data = ReadFile(outfname)
+    elif algo == 'gzip':
+        data = Run('gzip', '-cd', fname, binary=True)
+    else:
+        raise ValueError("Unknown algorithm '%s'" % algo)
+    return data

Applied to u-boot-dm, thanks!

Update the compression test to use the tools module to decompress the output data.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/ftest.py | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)
diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py
index c74e12d13c8..6ff871b3c16 100644
--- a/tools/binman/ftest.py
+++ b/tools/binman/ftest.py
@@ -59,7 +59,7 @@ BMPBLK_DATA = b'bmp'
 VBLOCK_DATA = b'vblk'
 FILES_DATA = (b"sorry I'm late\nOh, don't bother apologising, I'm " +
               b"sorry you're alive\n")
-COMPRESS_DATA = b'data to compress'
+COMPRESS_DATA = b'compress xxxxxxxxxxxxxxxxxxxxxx data'
 REFCODE_DATA = b'refcode'

@@ -1560,16 +1560,7 @@ class TestFunctional(unittest.TestCase):
         self._ResetDtbs()

     def _decompress(self, data):
-        out = os.path.join(self._indir, 'lz4.tmp')
-        with open(out, 'wb') as fd:
-            fd.write(data)
-        return tools.Run('lz4', '-dc', out, binary=True)
-        '''
-        try:
-            orig = lz4.frame.decompress(data)
-        except AttributeError:
-            orig = lz4.decompress(data)
-        '''
+        return tools.Decompress(data, 'lz4')

     def testCompress(self):
         """Test compression of blobs"""

Applied to u-boot-dm, thanks!

The -D option enables debug mode, but we only need to add -D to the command line once. Drop the duplicate code. Also drop the comment about enabling debugging since this can be done with -D.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/binman.py | 5 -----
 1 file changed, 5 deletions(-)
diff --git a/tools/binman/binman.py b/tools/binman/binman.py
index 05aeaecd8f3..bab98826dc6 100755
--- a/tools/binman/binman.py
+++ b/tools/binman/binman.py
@@ -71,8 +71,6 @@ def RunTests(debug, processes, args):
     sys.argv = [sys.argv[0]]
     if debug:
         sys.argv.append('-D')
-    if debug:
-        sys.argv.append('-D')

     # Run the entry tests first ,since these need to be the first to import the
     # 'entry' module.
@@ -151,9 +149,6 @@ def RunBinman(options, args):
     """
     ret_code = 0

-    # For testing: This enables full exception traces.
-    #options.debug = True
-
     if not options.debug:
         sys.tracebacklimit = 0

Applied to u-boot-dm, thanks!

Avoid duplicate code here by using the new compression function in the tools module.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/etype/blob.py | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)
diff --git a/tools/binman/etype/blob.py b/tools/binman/etype/blob.py
index f56a1f87688..a91e7847009 100644
--- a/tools/binman/etype/blob.py
+++ b/tools/binman/etype/blob.py
@@ -49,18 +49,10 @@ class Entry_blob(Entry):
         # new Entry method which can read in chunks. Then we could copy
         # the data in chunks and avoid reading it all at once. For now
         # this seems like an unnecessary complication.
-        data = tools.ReadFile(self._pathname)
-        if self._compress == 'lz4':
-            self._uncompressed_size = len(data)
-            '''
-            import lz4  # Import this only if needed (python-lz4 dependency)
-
-            try:
-                data = lz4.frame.compress(data)
-            except AttributeError:
-                data = lz4.compress(data)
-            '''
-            data = tools.Run('lz4', '-c', self._pathname, binary=True)
+        indata = tools.ReadFile(self._pathname)
+        if self._compress != 'none':
+            self._uncompressed_size = len(indata)
+        data = tools.Compress(indata, self._compress)
         self.SetContents(data)
         return True

Applied to u-boot-dm, thanks!

This comment mentions the wrong default filename. Fix it.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/README.entries          | 2 +-
 tools/binman/etype/u_boot_spl_elf.py | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/binman/README.entries b/tools/binman/README.entries
index 9cbdbbaadef..c26addcfe64 100644
--- a/tools/binman/README.entries
+++ b/tools/binman/README.entries
@@ -496,7 +496,7 @@ Entry: u-boot-spl-elf: U-Boot SPL ELF image
 -------------------------------------------

 Properties / Entry arguments:
-    - filename: Filename of SPL u-boot (default 'spl/u-boot')
+    - filename: Filename of SPL u-boot (default 'spl/u-boot-spl')

 This is the U-Boot SPL ELF image. It does not include a device tree but can
 be relocated to any address for execution.
diff --git a/tools/binman/etype/u_boot_spl_elf.py b/tools/binman/etype/u_boot_spl_elf.py
index da328ae15e1..24ee77237ed 100644
--- a/tools/binman/etype/u_boot_spl_elf.py
+++ b/tools/binman/etype/u_boot_spl_elf.py
@@ -12,7 +12,7 @@ class Entry_u_boot_spl_elf(Entry_blob):
     """U-Boot SPL ELF image

     Properties / Entry arguments:
-        - filename: Filename of SPL u-boot (default 'spl/u-boot')
+        - filename: Filename of SPL u-boot (default 'spl/u-boot-spl')

     This is the U-Boot SPL ELF image. It does not include a device tree but can
     be relocated to any address for execution.

Applied to u-boot-dm, thanks!

We currently support using the ELF file in U-Boot proper and SPL, but not TPL. Add this, as it is useful both with sandbox and for CBFS, to allow adding TPL as a 'stage'.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/README.entries          | 11 +++++++++++
 tools/binman/etype/u_boot_tpl_elf.py | 24 ++++++++++++++++++++++++
 tools/binman/ftest.py                |  2 ++
 tools/binman/test/096_elf.dts        |  2 ++
 4 files changed, 39 insertions(+)
 create mode 100644 tools/binman/etype/u_boot_tpl_elf.py
diff --git a/tools/binman/README.entries b/tools/binman/README.entries
index c26addcfe64..9a316763ace 100644
--- a/tools/binman/README.entries
+++ b/tools/binman/README.entries
@@ -574,6 +574,17 @@ process.

+Entry: u-boot-tpl-elf: U-Boot TPL ELF image
+-------------------------------------------
+
+Properties / Entry arguments:
+    - filename: Filename of TPL u-boot (default 'tpl/u-boot-tpl')
+
+This is the U-Boot TPL ELF image. It does not include a device tree but can
+be relocated to any address for execution.
+
+
+
 Entry: u-boot-tpl-with-ucode-ptr: U-Boot TPL with embedded microcode pointer
 ----------------------------------------------------------------------------

diff --git a/tools/binman/etype/u_boot_tpl_elf.py b/tools/binman/etype/u_boot_tpl_elf.py
new file mode 100644
index 00000000000..9cc1cc2c450
--- /dev/null
+++ b/tools/binman/etype/u_boot_tpl_elf.py
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: GPL-2.0+
+# Copyright (c) 2018 Google, Inc
+# Written by Simon Glass <sjg@chromium.org>
+#
+# Entry-type module for U-Boot TPL ELF image
+#
+
+from entry import Entry
+from blob import Entry_blob
+
+class Entry_u_boot_tpl_elf(Entry_blob):
+    """U-Boot TPL ELF image
+
+    Properties / Entry arguments:
+        - filename: Filename of TPL u-boot (default 'tpl/u-boot-tpl')
+
+    This is the U-Boot TPL ELF image. It does not include a device tree but can
+    be relocated to any address for execution.
+    """
+    def __init__(self, section, etype, node):
+        Entry_blob.__init__(self, section, etype, node)
+
+    def GetDefaultFilename(self):
+        return 'tpl/u-boot-tpl'
diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py
index 6ff871b3c16..9cec5f42fa3 100644
--- a/tools/binman/ftest.py
+++ b/tools/binman/ftest.py
@@ -1745,6 +1745,8 @@ class TestFunctional(unittest.TestCase):
     def testElf(self):
         """Basic test of ELF entries"""
         self._SetupSplElf()
+        with open(self.TestFile('bss_data'), 'rb') as fd:
+            TestFunctional._MakeInputFile('tpl/u-boot-tpl', fd.read())
         with open(self.TestFile('bss_data'), 'rb') as fd:
             TestFunctional._MakeInputFile('-boot', fd.read())
         data = self._DoReadFile('096_elf.dts')
diff --git a/tools/binman/test/096_elf.dts b/tools/binman/test/096_elf.dts
index df3440c3194..8e3f3f15ef0 100644
--- a/tools/binman/test/096_elf.dts
+++ b/tools/binman/test/096_elf.dts
@@ -10,5 +10,7 @@
 		};
 		u-boot-spl-elf {
 		};
+		u-boot-tpl-elf {
+		};
 	};
 };

Applied to u-boot-dm, thanks!

This should be -u, not -up, since we don't need to preserve the output directory in this case.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/ftest.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py
index 9cec5f42fa3..b1780854cfe 100644
--- a/tools/binman/ftest.py
+++ b/tools/binman/ftest.py
@@ -208,7 +208,7 @@ class TestFunctional(unittest.TestCase):
         if map:
             args.append('-m')
         if update_dtb:
-            args.append('-up')
+            args.append('-u')
         if not use_real_dtb:
             args.append('--fake-dtb')
         if verbosity is not None:

Applied to u-boot-dm, thanks!

At present the -v flag is ignored with tests, so that (for example) -v2 does not have any effect. Update binman to pass this flag through to tests so that they work just like running binman normally, except in a few special cases where we are actually testing behaviour with different levels of verbosity.
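The pass-through amounts to scanning the command line for an existing '-vN' flag and forwarding it. A standalone sketch of that scan (the real helper lives on the test class and reads sys.argv):

```python
def get_verbosity(argv):
    """Return ['-vN'] if a verbosity flag is present on the command line,
    else an empty list, ready to splice into a child invocation's args."""
    for arg in argv[1:]:
        if arg.startswith('-v'):
            return [arg]
    return []

print(get_verbosity(['binman', '-t', '-v2']))  # ['-v2']
print(get_verbosity(['binman', '-t']))         # []
```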
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/binman.py |  8 ++++++--
 tools/binman/ftest.py  | 17 ++++++++++++++++-
 2 files changed, 22 insertions(+), 3 deletions(-)
diff --git a/tools/binman/binman.py b/tools/binman/binman.py
index bab98826dc6..7c1dcfb65fc 100755
--- a/tools/binman/binman.py
+++ b/tools/binman/binman.py
@@ -46,11 +46,12 @@ except:
 import control
 import test_util

-def RunTests(debug, processes, args):
+def RunTests(debug, verbosity, processes, args):
     """Run the functional tests and any embedded doctests

     Args:
         debug: True to enable debugging, which shows a full stack trace on error
+        verbosity: Verbosity level to use
         args: List of positional args provided to binman. This can hold a test
             name to execute (as in 'binman -t testSections', for example)
         processes: Number of processes to use to run tests (None=same as #CPUs)
@@ -71,6 +72,8 @@ def RunTests(debug, processes, args):
     sys.argv = [sys.argv[0]]
     if debug:
         sys.argv.append('-D')
+    if verbosity:
+        sys.argv.append('-v%d' % verbosity)

     # Run the entry tests first ,since these need to be the first to import the
     # 'entry' module.
@@ -153,7 +156,8 @@ def RunBinman(options, args):
         sys.tracebacklimit = 0

     if options.test:
-        ret_code = RunTests(options.debug, options.processes, args[1:])
+        ret_code = RunTests(options.debug, options.verbosity, options.processes,
+                            args[1:])

     elif options.test_coverage:
         RunTestCoverage()
diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py
index b1780854cfe..f5e0b9b9742 100644
--- a/tools/binman/ftest.py
+++ b/tools/binman/ftest.py
@@ -153,6 +153,19 @@ class TestFunctional(unittest.TestCase):
         TestFunctional._MakeInputFile('spl/u-boot-spl.dtb', U_BOOT_SPL_DTB_DATA)
         TestFunctional._MakeInputFile('tpl/u-boot-tpl.dtb', U_BOOT_TPL_DTB_DATA)

+    def _GetVerbosity(self):
+        """Check if verbosity should be enabled
+
+        Returns:
+            list containing either:
+                - Verbosity flag (e.g. '-v2') if it is present on the cmd line
+                - nothing if the flag is not present
+        """
+        for arg in sys.argv[1:]:
+            if arg.startswith('-v'):
+                return [arg]
+        return []
+
     def _RunBinman(self, *args, **kwargs):
         """Run binman using the command line

@@ -213,6 +226,8 @@ class TestFunctional(unittest.TestCase):
             args.append('--fake-dtb')
         if verbosity is not None:
             args.append('-v%d' % verbosity)
+        else:
+            args += self._GetVerbosity()
         if entry_args:
             for arg, value in entry_args.items():
                 args.append('-a%s=%s' % (arg, value))
@@ -1471,7 +1486,7 @@ class TestFunctional(unittest.TestCase):
         expected = 'Skipping images: image1'

         # We should only get the expected message in verbose mode
-        for verbosity in (None, 2):
+        for verbosity in (0, 2):
             with test_util.capture_sys_output() as (stdout, stderr):
                 retcode = self._DoTestFile('006_dual_image.dts',
                                            verbosity=verbosity,

Applied to u-boot-dm, thanks!

Sometimes when debugging tests it is useful to keep the input and output directories so they can be examined later. Add an option for this and update the binman tests to support it. This affects both the test class and the tearDown() function called after each test.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2: None
 tools/binman/README     |  8 ++++++++
 tools/binman/binman.py  | 16 +++++++++++++---
 tools/binman/cmdline.py |  4 ++++
 tools/binman/ftest.py   | 28 +++++++++++++++++++++++++---
 4 files changed, 50 insertions(+), 6 deletions(-)
diff --git a/tools/binman/README b/tools/binman/README index 7eda244bbe2..2f4c7fec21e 100644 --- a/tools/binman/README +++ b/tools/binman/README @@ -731,6 +731,14 @@ Use '-P 1' to disable this. It is automatically disabled when code coverage is being used (-T) since they are incompatible.
+Debugging tests +--------------- + +Sometimes when debugging tests it is useful to keep the input and output +directories so they can be examined later. Use -X or --test-preserve-dirs for +this. + + Advanced Features / Technical docs ----------------------------------
diff --git a/tools/binman/binman.py b/tools/binman/binman.py index 7c1dcfb65fc..9878eb86d4f 100755 --- a/tools/binman/binman.py +++ b/tools/binman/binman.py @@ -46,15 +46,20 @@ except: import control import test_util
-def RunTests(debug, verbosity, processes, args): +def RunTests(debug, verbosity, processes, test_preserve_dirs, args): """Run the functional tests and any embedded doctests
Args: debug: True to enable debugging, which shows a full stack trace on error verbosity: Verbosity level to use + test_preserve_dirs: True to preserve the input directory used by tests + so that it can be examined afterwards (only useful for debugging + tests). If a single test is selected (in args[0]) it also preserves + the output directory for this test. Both directories are displayed + on the command line. + processes: Number of processes to use to run tests (None=same as #CPUs) args: List of positional args provided to binman. This can hold a test name to execute (as in 'binman -t testSections', for example) - processes: Number of processes to use to run tests (None=same as #CPUs) """ import elf_test import entry_test @@ -82,6 +87,11 @@ def RunTests(debug, verbosity, processes, args): loader = unittest.TestLoader() for module in (entry_test.TestEntry, ftest.TestFunctional, fdt_test.TestFdt, elf_test.TestElf, image_test.TestImage): + # Test the test module about our arguments, if it is interested + if hasattr(module, 'setup_test_args'): + setup_test_args = getattr(module, 'setup_test_args') + setup_test_args(preserve_indir=test_preserve_dirs, + preserve_outdirs=test_preserve_dirs and test_name is not None) if test_name: try: suite.addTests(loader.loadTestsFromName(test_name, module)) @@ -157,7 +167,7 @@ def RunBinman(options, args):
if options.test: ret_code = RunTests(options.debug, options.verbosity, options.processes, - args[1:]) + options.test_preserve_dirs, args[1:])
elif options.test_coverage: RunTestCoverage() diff --git a/tools/binman/cmdline.py b/tools/binman/cmdline.py index 39b835666ea..91e007e4e03 100644 --- a/tools/binman/cmdline.py +++ b/tools/binman/cmdline.py @@ -59,6 +59,10 @@ def ParseArgs(argv): parser.add_option('-v', '--verbosity', default=1, type='int', help='Control verbosity: 0=silent, 1=progress, 3=full, ' '4=debug') + parser.add_option('-X', '--test-preserve-dirs', action='store_true', + help='Preserve and display test-created input directories; also ' + 'preserve the output directory if a single test is run (pass test ' + 'name at the end of the command line')
parser.usage += """
diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index f5e0b9b9742..256d4a1c5d8 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -6,6 +6,8 @@ # # python -m unittest func_test.TestFunctional.testHelp
+from __future__ import print_function + import hashlib from optparse import OptionParser import os @@ -134,10 +136,27 @@ class TestFunctional(unittest.TestCase): @classmethod def tearDownClass(self): """Remove the temporary input directory and its contents""" - if self._indir: - shutil.rmtree(self._indir) + if self.preserve_indir: + print('Preserving input dir: %s' % self._indir) + else: + if self._indir: + shutil.rmtree(self._indir) self._indir = None
+ @classmethod + def setup_test_args(cls, preserve_indir=False, preserve_outdirs=False): + """Accept arguments controlling test execution + + Args: + preserve_indir: Preserve the shared input directory used by all + tests in this class. + preserve_outdirs: Preserve the output directories used by tests. Each + test has its own, so this is normally only useful when running a + single test. + """ + cls.preserve_indir = preserve_indir + cls.preserve_outdirs = preserve_outdirs + def setUp(self): # Enable this to turn on debugging output # tout.Init(tout.DEBUG) @@ -145,7 +164,10 @@ class TestFunctional(unittest.TestCase):
def tearDown(self): """Remove the temporary output directory""" - tools._FinaliseForTest() + if self.preserve_outdirs: + print('Preserving output dir: %s' % tools.outdir) + else: + tools._FinaliseForTest()
@classmethod def _ResetDtbs(self):

Sometimes when debugging tests it is useful to keep the input and output directories so they can be examined later. Add an option for this and update the binman tests to support it. This affects both the test class and the tearDown() function called after each test.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/README | 8 ++++++++ tools/binman/binman.py | 16 +++++++++++++--- tools/binman/cmdline.py | 4 ++++ tools/binman/ftest.py | 28 +++++++++++++++++++++++++--- 4 files changed, 50 insertions(+), 6 deletions(-)
Applied to u-boot-dm, thanks!

Tools like ifwitool may not be available in the PATH, but are available in the build. These tools may be needed by tests, so allow tests to use the --toolpath flag.
Also use this flag with travis.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
.travis.yml | 2 +- test/run | 9 ++++++--- tools/binman/binman.py | 12 +++++++++--- tools/binman/ftest.py | 8 +++++++- 4 files changed, 23 insertions(+), 8 deletions(-)
diff --git a/.travis.yml b/.travis.yml index 6662ca126ab..c883198abbf 100644 --- a/.travis.yml +++ b/.travis.yml @@ -146,7 +146,7 @@ script: if [[ -n "${TEST_PY_TOOLS}" ]]; then PYTHONPATH="${UBOOT_TRAVIS_BUILD_DIR}/scripts/dtc/pylibfdt" PATH="${UBOOT_TRAVIS_BUILD_DIR}/scripts/dtc:${PATH}" - ./tools/binman/binman -t && + ./tools/binman/binman --toolpath ${UBOOT_TRAVIS_BUILD_DIR}/tools -t && ./tools/patman/patman --test && ./tools/buildman/buildman -t && PYTHONPATH="${UBOOT_TRAVIS_BUILD_DIR}/scripts/dtc/pylibfdt" diff --git a/test/run b/test/run index 55a6649a9c5..b97647eba6f 100755 --- a/test/run +++ b/test/run @@ -33,12 +33,14 @@ run_test "sandbox_flattree" ./test/py/test.py --bd sandbox_flattree --build \ -k test_ut
# Set up a path to dtc (device-tree compiler) and libfdt.py, a library it -# provides and which is built by the sandbox_spl config. +# provides and which is built by the sandbox_spl config. Also set up the path +# to tools built by the build. DTC_DIR=build-sandbox_spl/scripts/dtc export PYTHONPATH=${DTC_DIR}/pylibfdt export DTC=${DTC_DIR}/dtc +TOOLS_DIR=build-sandbox_spl/tools
-run_test "binman" ./tools/binman/binman -t +run_test "binman" ./tools/binman/binman -t --toolpath ${TOOLS_DIR} run_test "patman" ./tools/patman/patman --test
[ "$1" == "quick" ] && skip=--skip-net-tests @@ -49,7 +51,8 @@ run_test "dtoc" ./tools/dtoc/dtoc -t # This needs you to set up Python test coverage tools. # To enable Python test coverage on Debian-type distributions (e.g. Ubuntu): # $ sudo apt-get install python-pytest python-coverage -run_test "binman code coverage" ./tools/binman/binman -T +export PATH=$PATH:${TOOLS_DIR} +run_test "binman code coverage" ./tools/binman/binman -T --toolpath ${TOOLS_DIR} run_test "dtoc code coverage" ./tools/dtoc/dtoc -T run_test "fdt code coverage" ./tools/dtoc/test_fdt -T
diff --git a/tools/binman/binman.py b/tools/binman/binman.py index 9878eb86d4f..52c03f68c6d 100755 --- a/tools/binman/binman.py +++ b/tools/binman/binman.py @@ -46,7 +46,7 @@ except: import control import test_util
-def RunTests(debug, verbosity, processes, test_preserve_dirs, args): +def RunTests(debug, verbosity, processes, test_preserve_dirs, args, toolpath): """Run the functional tests and any embedded doctests
Args: @@ -60,6 +60,7 @@ def RunTests(debug, verbosity, processes, test_preserve_dirs, args): processes: Number of processes to use to run tests (None=same as #CPUs) args: List of positional args provided to binman. This can hold a test name to execute (as in 'binman -t testSections', for example) + toolpath: List of paths to use for tools """ import elf_test import entry_test @@ -79,6 +80,9 @@ def RunTests(debug, verbosity, processes, test_preserve_dirs, args): sys.argv.append('-D') if verbosity: sys.argv.append('-v%d' % verbosity) + if toolpath: + for path in toolpath: + sys.argv += ['--toolpath', path]
# Run the entry tests first, since these need to be the first to import the # 'entry' module. @@ -91,7 +95,8 @@ def RunTests(debug, verbosity, processes, test_preserve_dirs, args): if hasattr(module, 'setup_test_args'): setup_test_args = getattr(module, 'setup_test_args') setup_test_args(preserve_indir=test_preserve_dirs, - preserve_outdirs=test_preserve_dirs and test_name is not None) + preserve_outdirs=test_preserve_dirs and test_name is not None, + toolpath=toolpath) if test_name: try: suite.addTests(loader.loadTestsFromName(test_name, module)) @@ -167,7 +172,8 @@ def RunBinman(options, args):
if options.test: ret_code = RunTests(options.debug, options.verbosity, options.processes, - options.test_preserve_dirs, args[1:]) + options.test_preserve_dirs, args[1:], + options.toolpath)
elif options.test_coverage: RunTestCoverage() diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 256d4a1c5d8..3455b8ccebd 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -144,7 +144,8 @@ class TestFunctional(unittest.TestCase): self._indir = None
@classmethod - def setup_test_args(cls, preserve_indir=False, preserve_outdirs=False): + def setup_test_args(cls, preserve_indir=False, preserve_outdirs=False, + toolpath=None): """Accept arguments controlling test execution
Args: @@ -153,9 +154,11 @@ class TestFunctional(unittest.TestCase): preserve_outdirs: Preserve the output directories used by tests. Each test has its own, so this is normally only useful when running a single test. + toolpath: List of paths to use for tools """ cls.preserve_indir = preserve_indir cls.preserve_outdirs = preserve_outdirs + cls.toolpath = toolpath
def setUp(self): # Enable this to turn on debugging output @@ -256,6 +259,9 @@ class TestFunctional(unittest.TestCase): if images: for image in images: args += ['-i', image] + if self.toolpath: + for path in self.toolpath: + args += ['--toolpath', path] return self._DoBinman(*args)
def _SetupDtb(self, fname, outfile='u-boot.dtb'):
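The idea behind --toolpath is to try the extra directories first and fall back to the normal PATH search. A rough standalone sketch of that lookup (a hypothetical helper for illustration; the real search lives in patman's tools module and may differ):

```python
import os

def find_tool(name, toolpath=None, search_path=None):
    """Find an executable, preferring directories given via --toolpath.

    Args:
        name: Tool name, e.g. 'ifwitool'
        toolpath: Extra directories to search first (list of str), or None
        search_path: Fallback search path; defaults to $PATH

    Returns:
        Full pathname of the tool, or None if not found
    """
    dirs = list(toolpath or [])
    if search_path is None:
        search_path = os.environ.get('PATH', '')
    dirs += search_path.split(os.pathsep)
    for dirname in dirs:
        fname = os.path.join(dirname, name)
        # Require a real, executable file
        if os.path.isfile(fname) and os.access(fname, os.X_OK):
            return fname
    return None
```

This matches the patch's scenario: ifwitool sits in the build's tools directory, so passing that directory via toolpath finds it even when it is not installed in PATH.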

Applied to u-boot-dm, thanks!

This tool has quite a few arguments and options, so put the functionality in a function so that we call it from one place and hopefully get it right.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/patman/tools.py | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+)
diff --git a/tools/patman/tools.py b/tools/patman/tools.py index 69d03d38608..e945b54fa28 100644 --- a/tools/patman/tools.py +++ b/tools/patman/tools.py @@ -3,6 +3,8 @@ # Copyright (c) 2016 Google, Inc #
+from __future__ import print_function + import command import glob import os @@ -440,3 +442,34 @@ def Decompress(indata, algo): else: raise ValueError("Unknown algorithm '%s'" % algo) return data + +CMD_CREATE, CMD_DELETE, CMD_ADD, CMD_REPLACE, CMD_EXTRACT = range(5) + +IFWITOOL_CMDS = { + CMD_CREATE: 'create', + CMD_DELETE: 'delete', + CMD_ADD: 'add', + CMD_REPLACE: 'replace', + CMD_EXTRACT: 'extract', + } + +def RunIfwiTool(ifwi_file, cmd, fname=None, subpart=None, entry_name=None): + """Run ifwitool with the given arguments: + + Args: + ifwi_file: IFWI file to operation on + cmd: Command to execute (CMD_...) + fname: Filename of file to add/replace/extract/create (None for + CMD_DELETE) + subpart: Name of sub-partition to operation on (None for CMD_CREATE) + entry_name: Name of directory entry to operate on, or None if none + """ + args = ['ifwitool', ifwi_file] + args.append(IFWITOOL_CMDS[cmd]) + if fname: + args += ['-f', fname] + if subpart: + args += ['-n', subpart] + if entry_name: + args += ['-d', '-e', entry_name] + Run(*args)

Applied to u-boot-dm, thanks!

Coreboot uses a simple flash-based filesystem called Coreboot Filesystem (CBFS) to organise files used during boot. This allows files to be named and their position in the flash to be set. It has special features for dealing with x86 devices which typically memory-map their SPI flash to the top of 32-bit address space and need a 'boot block' ending there.
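The top-of-ROM convention can be shown in miniature: the last 4 bytes of the image hold the offset of the master header, stored little-endian as a two's-complement value relative to the end of the ROM, so a reader recovers the absolute position by adding the ROM size and masking to 32 bits. A sketch using the constants from cbfs_util.py below, with an illustrative 1MB ROM size:

```python
import struct

HEADER_LEN = 0x20        # size of the master header, from cbfs_util.py
MIN_BOOTBLOCK_SIZE = 4   # room kept at the top for the header pointer

size = 0x100000          # example ROM size (1MB)
base_address = size - MIN_BOOTBLOCK_SIZE
header_offset = base_address - HEADER_LEN

# Writer side: store the header offset relative to the end of the ROM,
# little-endian, in the last 4 bytes
rel_offset = header_offset - size
pointer = struct.pack('<I', rel_offset & 0xffffffff)

# Reader side: unpack the pointer and recover the absolute position
rel, = struct.unpack('<I', pointer)
pos = (size + rel) & 0xffffffff
```

This is why the reader must know the ROM size in advance: the pointer only makes sense relative to the end of the image.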
Create a library to help create and read CBFS files. This includes a writer class, a reader class and associated other helpers. Only a subset of features are currently supported.
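To give a feel for the on-flash layout, here is a minimal standalone packing of a single raw-file record, mirroring FILE_HEADER_FORMAT and _pack_string() from the code below: a fixed 24-byte big-endian header with 'LARCHIVE' magic, then the filename NUL-padded to a 16-byte boundary, then the data (attributes and compression omitted):

```python
import struct

FILE_HEADER_FORMAT = b'>8sIIII'
FILE_HEADER_LEN = 0x18
FILE_MAGIC = b'LARCHIVE'
FILENAME_ALIGN = 16
TYPE_RAW = 0x50

def pack_string(name):
    """Pad a name to FILENAME_ALIGN with at least one NUL terminator"""
    val = name.encode('utf-8')
    pad_len = (len(val) + 1 + FILENAME_ALIGN - 1) // FILENAME_ALIGN * FILENAME_ALIGN
    return val + b'\0' * (pad_len - len(val))

def pack_raw_file(name, data):
    """Pack an uncompressed raw-file record with no attributes"""
    packed_name = pack_string(name)
    # 'offset' in the header is where the data starts, measured from the
    # start of the record: fixed header plus padded name
    hdr_len = FILE_HEADER_LEN + len(packed_name)
    hdr = struct.pack(FILE_HEADER_FORMAT, FILE_MAGIC, len(data),
                      TYPE_RAW, 0, hdr_len)
    return hdr + packed_name + data

record = pack_raw_file('u-boot', b'1234')
```

The writer class in the patch does the same thing with the extra machinery for stage files, compression attributes and alignment between records.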
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Deal with travis's old lz4 version by skipping tests as necessary - Install lzma tool in travis - Skip use of cbfstool in tests if it is not available
.travis.yml | 1 + tools/binman/binman.py | 4 +- tools/binman/cbfs_util.py | 720 +++++++++++++++++++++++++++++++++ tools/binman/cbfs_util_test.py | 540 +++++++++++++++++++++++++ 4 files changed, 1264 insertions(+), 1 deletion(-) create mode 100644 tools/binman/cbfs_util.py create mode 100755 tools/binman/cbfs_util_test.py
diff --git a/.travis.yml b/.travis.yml index c883198abbf..70d89d3e233 100644 --- a/.travis.yml +++ b/.travis.yml @@ -32,6 +32,7 @@ addons: - device-tree-compiler - lzop - liblz4-tool + - lzma-alone - libisl15 - clang-7 - srecord diff --git a/tools/binman/binman.py b/tools/binman/binman.py index 52c03f68c6d..613aad5c451 100755 --- a/tools/binman/binman.py +++ b/tools/binman/binman.py @@ -62,6 +62,7 @@ def RunTests(debug, verbosity, processes, test_preserve_dirs, args, toolpath): name to execute (as in 'binman -t testSections', for example) toolpath: List of paths to use for tools """ + import cbfs_util_test import elf_test import entry_test import fdt_test @@ -90,7 +91,8 @@ def RunTests(debug, verbosity, processes, test_preserve_dirs, args, toolpath): suite = unittest.TestSuite() loader = unittest.TestLoader() for module in (entry_test.TestEntry, ftest.TestFunctional, fdt_test.TestFdt, - elf_test.TestElf, image_test.TestImage): + elf_test.TestElf, image_test.TestImage, + cbfs_util_test.TestCbfs): # Test the test module about our arguments, if it is interested if hasattr(module, 'setup_test_args'): setup_test_args = getattr(module, 'setup_test_args') diff --git a/tools/binman/cbfs_util.py b/tools/binman/cbfs_util.py new file mode 100644 index 00000000000..197cff89509 --- /dev/null +++ b/tools/binman/cbfs_util.py @@ -0,0 +1,720 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2019 Google LLC +# Written by Simon Glass sjg@chromium.org + +"""Support for coreboot's CBFS format + +CBFS supports a header followed by a number of files, generally targeted at SPI +flash. + +The format is somewhat defined by documentation in the coreboot tree although +it is necessary to rely on the C structures and source code (mostly cbfstool) +to fully understand it. 
+ +Currently supported: raw and stage types with compression +""" + +from __future__ import print_function + +from collections import OrderedDict +import io +import struct +import sys + +import command +import elf +import tools + +# Set to True to enable printing output while working +DEBUG = False + +# Set to True to enable output from running cbfstool for debugging +VERBOSE = False + +# The master header, at the start of the CBFS +HEADER_FORMAT = '>IIIIIIII' +HEADER_LEN = 0x20 +HEADER_MAGIC = 0x4f524243 +HEADER_VERSION1 = 0x31313131 +HEADER_VERSION2 = 0x31313132 + +# The file header, at the start of each file in the CBFS +FILE_HEADER_FORMAT = b'>8sIIII' +FILE_HEADER_LEN = 0x18 +FILE_MAGIC = b'LARCHIVE' +FILENAME_ALIGN = 16 # Filename lengths are aligned to this + +# A stage header containing information about 'stage' files +# Yes this is correct: this header is in little-endian format +STAGE_FORMAT = '<IQQII' +STAGE_LEN = 0x1c + +# An attribute describing the compression used in a file +ATTR_COMPRESSION_FORMAT = '>IIII' +ATTR_COMPRESSION_LEN = 0x10 + +# Attribute tags +# Depending on how the header was initialised, it may be backed with 0x00 or +# 0xff. Support both. +FILE_ATTR_TAG_UNUSED = 0 +FILE_ATTR_TAG_UNUSED2 = 0xffffffff +FILE_ATTR_TAG_COMPRESSION = 0x42435a4c +FILE_ATTR_TAG_HASH = 0x68736148 +FILE_ATTR_TAG_POSITION = 0x42435350 # PSCB +FILE_ATTR_TAG_ALIGNMENT = 0x42434c41 # ALCB +FILE_ATTR_TAG_PADDING = 0x47444150 # PDNG + +# This is 'the size of bootblock reserved in firmware image (cbfs.txt)' +# Not much more info is available, but we set it to 4, due to this comment in +# cbfstool.c: +# This causes 4 bytes to be left out at the end of the image, for two reasons: +# 1. The cbfs master header pointer resides there +# 2. Some cbfs implementations assume that an image that resides below 4GB has +# a bootblock and get confused when the end of the image is at 4GB == 0.
+MIN_BOOTBLOCK_SIZE = 4 + +# Files start aligned to this boundary in the CBFS +ENTRY_ALIGN = 0x40 + +# CBFSs must declare an architecture since much of the logic is designed with +# x86 in mind. The effect of setting this value is not well documented, but in +# general x86 is used and this makes use of a boot block and an image that ends +# at the end of 32-bit address space. +ARCHITECTURE_UNKNOWN = 0xffffffff +ARCHITECTURE_X86 = 0x00000001 +ARCHITECTURE_ARM = 0x00000010 +ARCHITECTURE_AARCH64 = 0x0000aa64 +ARCHITECTURE_MIPS = 0x00000100 +ARCHITECTURE_RISCV = 0xc001d0de +ARCHITECTURE_PPC64 = 0x407570ff + +ARCH_NAMES = { + ARCHITECTURE_UNKNOWN : 'unknown', + ARCHITECTURE_X86 : 'x86', + ARCHITECTURE_ARM : 'arm', + ARCHITECTURE_AARCH64 : 'arm64', + ARCHITECTURE_MIPS : 'mips', + ARCHITECTURE_RISCV : 'riscv', + ARCHITECTURE_PPC64 : 'ppc64', + } + +# File types. Only supported ones are included here +TYPE_CBFSHEADER = 0x02 # Master header, HEADER_FORMAT +TYPE_STAGE = 0x10 # Stage, holding an executable, see STAGE_FORMAT +TYPE_RAW = 0x50 # Raw file, possibly compressed + +# Compression types +COMPRESS_NONE, COMPRESS_LZMA, COMPRESS_LZ4 = range(3) + +COMPRESS_NAMES = { + COMPRESS_NONE : 'none', + COMPRESS_LZMA : 'lzma', + COMPRESS_LZ4 : 'lz4', + } + +def find_arch(find_name): + """Look up an architecture name + + Args: + find_name: Architecture name to find + + Returns: + ARCHITECTURE_... value or None if not found + """ + for arch, name in ARCH_NAMES.items(): + if name == find_name: + return arch + return None + +def find_compress(find_name): + """Look up a compression algorithm name + + Args: + find_name: Compression algorithm name to find + + Returns: + COMPRESS_... value or None if not found + """ + for compress, name in COMPRESS_NAMES.items(): + if name == find_name: + return compress + return None + +def align_int(val, align): + """Align a value up to the given alignment + + Args: + val: Integer value to align + align: Integer alignment value (e.g. 
4 to align to 4-byte boundary) + + Returns: + integer value aligned to the required boundary, rounding up if necessary + """ + return int((val + align - 1) / align) * align + +def _pack_string(instr): + """Pack a string to the required aligned size by adding padding + + Args: + instr: String to process + + Returns: + String with required padding (at least one 0x00 byte) at the end + """ + val = tools.ToBytes(instr) + pad_len = align_int(len(val) + 1, FILENAME_ALIGN) + return val + tools.GetBytes(0, pad_len - len(val)) + + +class CbfsFile(object): + """Class to represent a single CBFS file + + This is used to hold the information about a file, including its contents. + Use the get_data() method to obtain the raw output for writing to CBFS. + + Properties: + name: Name of file + offset: Offset of file data from start of file header + data: Contents of file, uncompressed + data_len: Length of (possibly compressed) data in bytes + ftype: File type (TYPE_...) + compression: Compression type (COMPRESS_...) + memlen: Length of data in memory (typically the uncompressed length) + load: Load address in memory if known, else None + entry: Entry address in memory if known, else None. 
This is where + execution starts after the file is loaded + base_address: Base address to use for 'stage' files + """ + def __init__(self, name, ftype, data, compress=COMPRESS_NONE): + self.name = name + self.offset = None + self.data = data + self.ftype = ftype + self.compress = compress + self.memlen = len(data) + self.load = None + self.entry = None + self.base_address = None + self.data_len = 0 + + def decompress(self): + """Handle decompressing data if necessary""" + indata = self.data + if self.compress == COMPRESS_LZ4: + data = tools.Decompress(indata, 'lz4') + elif self.compress == COMPRESS_LZMA: + data = tools.Decompress(indata, 'lzma') + else: + data = indata + self.memlen = len(data) + self.data = data + self.data_len = len(indata) + + @classmethod + def stage(cls, base_address, name, data): + """Create a new stage file + + Args: + base_address: Int base address for memory-mapping of ELF file + name: String file name to put in CBFS (does not need to correspond + to the name that the file originally came from) + data: Contents of file + + Returns: + CbfsFile object containing the file information + """ + cfile = CbfsFile(name, TYPE_STAGE, data) + cfile.base_address = base_address + return cfile + + @classmethod + def raw(cls, name, data, compress): + """Create a new raw file + + Args: + name: String file name to put in CBFS (does not need to correspond + to the name that the file originally came from) + data: Contents of file + compress: Compression algorithm to use (COMPRESS_...) 
+ + Returns: + CbfsFile object containing the file information + """ + return CbfsFile(name, TYPE_RAW, data, compress) + + def get_data(self): + """Obtain the contents of the file, in CBFS format + + Returns: + bytes representing the contents of this file, packed and aligned + for directly inserting into the final CBFS output + """ + name = _pack_string(self.name) + hdr_len = len(name) + FILE_HEADER_LEN + attr_pos = 0 + content = b'' + attr = b'' + data = self.data + if self.ftype == TYPE_STAGE: + elf_data = elf.DecodeElf(data, self.base_address) + content = struct.pack(STAGE_FORMAT, self.compress, + elf_data.entry, elf_data.load, + len(elf_data.data), elf_data.memsize) + data = elf_data.data + elif self.ftype == TYPE_RAW: + orig_data = data + if self.compress == COMPRESS_LZ4: + data = tools.Compress(orig_data, 'lz4') + elif self.compress == COMPRESS_LZMA: + data = tools.Compress(orig_data, 'lzma') + attr = struct.pack(ATTR_COMPRESSION_FORMAT, + FILE_ATTR_TAG_COMPRESSION, ATTR_COMPRESSION_LEN, + self.compress, len(orig_data)) + else: + raise ValueError('Unknown type %#x when writing\n' % self.ftype) + if attr: + attr_pos = hdr_len + hdr_len += len(attr) + hdr = struct.pack(FILE_HEADER_FORMAT, FILE_MAGIC, + len(content) + len(data), + self.ftype, attr_pos, hdr_len) + return hdr + name + attr + content + data + + +class CbfsWriter(object): + """Class to handle writing a Coreboot File System (CBFS) + + Usage is something like: + + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', tools.ReadFile('u-boot.bin')) + ... + data = cbw.get_data() + + Attributes: + _master_name: Name of the file containing the master header + _size: Size of the filesystem, in bytes + _files: Ordered list of files in the CBFS, each a CbfsFile + _arch: Architecture of the CBFS (ARCHITECTURE_...) 
+ _bootblock_size: Size of the bootblock, typically at the end of the CBFS + _erase_byte: Byte to use for empty space in the CBFS + _align: Alignment to use for files, typically ENTRY_ALIGN + _base_address: Boot block offset in bytes from the start of CBFS. + Typically this is located at top of the CBFS. It is 0 when there is + no boot block + _header_offset: Offset of master header in bytes from start of CBFS + _contents_offset: Offset of first file header + _hdr_at_start: True if the master header is at the start of the CBFS, + instead of the end as normal for x86 + _add_fileheader: True to add a fileheader around the master header + """ + def __init__(self, size, arch=ARCHITECTURE_X86): + """Set up a new CBFS + + This sets up all properties to default values. Files can be added using + add_file_raw(), etc. + + Args: + size: Size of CBFS in bytes + arch: Architecture to declare for CBFS + """ + self._master_name = 'cbfs master header' + self._size = size + self._files = OrderedDict() + self._arch = arch + self._bootblock_size = 0 + self._erase_byte = 0xff + self._align = ENTRY_ALIGN + self._add_fileheader = False + if self._arch == ARCHITECTURE_X86: + # Allow 4 bytes for the header pointer. 
That holds the + two's-complement negative offset of the master header in bytes + measured from one byte past the end of the CBFS + self._base_address = self._size - max(self._bootblock_size, + MIN_BOOTBLOCK_SIZE) + self._header_offset = self._base_address - HEADER_LEN + self._contents_offset = 0 + self._hdr_at_start = False + else: + # For non-x86, different rules apply + self._base_address = 0 + self._header_offset = align_int(self._base_address + + self._bootblock_size, 4) + self._contents_offset = align_int(self._header_offset + + FILE_HEADER_LEN + + self._bootblock_size, self._align) + self._hdr_at_start = True + + def _skip_to(self, fd, offset): + """Write out pad bytes until a given offset + + Args: + fd: File object to write to + offset: Offset to write to + """ + if fd.tell() > offset: + raise ValueError('No space for data before offset %#x (current offset %#x)' % + (offset, fd.tell())) + fd.write(tools.GetBytes(self._erase_byte, offset - fd.tell())) + + def _align_to(self, fd, align): + """Write out pad bytes until a given alignment is reached + + This only aligns if the resulting output would not reach the end of the + CBFS, since we want to leave the last 4 bytes for the master-header + pointer. + + Args: + fd: File object to write to + align: Alignment to require (e.g.
4 means pad to next 4-byte + boundary) + """ + offset = align_int(fd.tell(), align) + if offset < self._size: + self._skip_to(fd, offset) + + def add_file_stage(self, name, data): + """Add a new stage file to the CBFS + + Args: + name: String file name to put in CBFS (does not need to correspond + to the name that the file originally came from) + data: Contents of file + + Returns: + CbfsFile object created + """ + cfile = CbfsFile.stage(self._base_address, name, data) + self._files[name] = cfile + return cfile + + def add_file_raw(self, name, data, compress=COMPRESS_NONE): + """Create a new raw file + + Args: + name: String file name to put in CBFS (does not need to correspond + to the name that the file originally came from) + data: Contents of file + compress: Compression algorithm to use (COMPRESS_...) + + Returns: + CbfsFile object created + """ + cfile = CbfsFile.raw(name, data, compress) + self._files[name] = cfile + return cfile + + def _write_header(self, fd, add_fileheader): + """Write out the master header to a CBFS + + Args: + fd: File object + add_fileheader: True to place the master header in a file header + record + """ + if fd.tell() > self._header_offset: + raise ValueError('No space for header at offset %#x (current offset %#x)' % + (self._header_offset, fd.tell())) + if not add_fileheader: + self._skip_to(fd, self._header_offset) + hdr = struct.pack(HEADER_FORMAT, HEADER_MAGIC, HEADER_VERSION2, + self._size, self._bootblock_size, self._align, + self._contents_offset, self._arch, 0xffffffff) + if add_fileheader: + name = _pack_string(self._master_name) + fd.write(struct.pack(FILE_HEADER_FORMAT, FILE_MAGIC, len(hdr), + TYPE_CBFSHEADER, 0, + FILE_HEADER_LEN + len(name))) + fd.write(name) + self._header_offset = fd.tell() + fd.write(hdr) + self._align_to(fd, self._align) + else: + fd.write(hdr) + + def get_data(self): + """Obtain the full contents of the CBFS + + This builds the CBFS with headers and all required files.
+ + Returns: + 'bytes' type containing the data + """ + fd = io.BytesIO() + + # The header can go at the start in some cases + if self._hdr_at_start: + self._write_header(fd, add_fileheader=self._add_fileheader) + self._skip_to(fd, self._contents_offset) + + # Write out each file + for cbf in self._files.values(): + fd.write(cbf.get_data()) + self._align_to(fd, self._align) + if not self._hdr_at_start: + self._write_header(fd, add_fileheader=self._add_fileheader) + + # Pad to the end and write a pointer to the CBFS master header + self._skip_to(fd, self._base_address or self._size - 4) + rel_offset = self._header_offset - self._size + fd.write(struct.pack('<I', rel_offset & 0xffffffff)) + + return fd.getvalue() + + +class CbfsReader(object): + """Class to handle reading a Coreboot File System (CBFS) + + Usage is something like: + cbfs = cbfs_util.CbfsReader(data) + cfile = cbfs.files['u-boot'] + self.WriteFile('u-boot.bin', cfile.data) + + Attributes: + files: Ordered list of CbfsFile objects + align: Alignment to use for files, typically ENTRY_ALIGN + stage_base_address: Base address to use when mapping ELF files into the + CBFS for TYPE_STAGE files. If this is larger than the code address + of the ELF file, then data at the start of the ELF file will not + appear in the CBFS. Currently there are no tests for behaviour as + documentation is sparse + magic: Integer magic number from master header (HEADER_MAGIC) + version: Version number of CBFS (HEADER_VERSION2) + rom_size: Size of CBFS + boot_block_size: Size of boot block + cbfs_offset: Offset of the first file in bytes from start of CBFS + arch: Architecture of CBFS file (ARCHITECTURE_...)
+ """ + def __init__(self, data, read=True): + self.align = ENTRY_ALIGN + self.arch = None + self.boot_block_size = None + self.cbfs_offset = None + self.files = OrderedDict() + self.magic = None + self.rom_size = None + self.stage_base_address = 0 + self.version = None + self.data = data + if read: + self.read() + + def read(self): + """Read all the files in the CBFS and add them to self.files""" + with io.BytesIO(self.data) as fd: + # First, get the master header + if not self._find_and_read_header(fd, len(self.data)): + raise ValueError('Cannot find master header') + fd.seek(self.cbfs_offset) + + # Now read in the files one at a time + while True: + cfile = self._read_next_file(fd) + if cfile: + self.files[cfile.name] = cfile + elif cfile is False: + break + + def _find_and_read_header(self, fd, size): + """Find and read the master header in the CBFS + + This looks at the pointer word at the very end of the CBFS. This is an + offset to the header relative to the size of the CBFS, which is assumed + to be known. Note that the offset is in *little endian* format. + + Args: + fd: File to read from + size: Size of file + + Returns: + True if header was found, False if not + """ + orig_pos = fd.tell() + fd.seek(size - 4) + rel_offset, = struct.unpack('<I', fd.read(4)) + pos = (size + rel_offset) & 0xffffffff + fd.seek(pos) + found = self._read_header(fd) + if not found: + print('Relative offset seems wrong, scanning whole image') + for pos in range(0, size - HEADER_LEN, 4): + fd.seek(pos) + found = self._read_header(fd) + if found: + break + fd.seek(orig_pos) + return found + + def _read_next_file(self, fd): + """Read the next file from a CBFS + + Args: + fd: File to read from + + Returns: + CbfsFile object, if found + None if no object found, but data was parsed (e.g. 
TYPE_CBFSHEADER) + False if at end of CBFS and reading should stop + """ + file_pos = fd.tell() + data = fd.read(FILE_HEADER_LEN) + if len(data) < FILE_HEADER_LEN: + print('File header at %x ran out of data' % file_pos) + return False + magic, size, ftype, attr, offset = struct.unpack(FILE_HEADER_FORMAT, + data) + if magic != FILE_MAGIC: + return False + pos = fd.tell() + name = self._read_string(fd) + if name is None: + print('String at %x ran out of data' % pos) + return False + + if DEBUG: + print('name', name) + + # If there are attribute headers present, read those + compress = self._read_attr(fd, file_pos, attr, offset) + if compress is None: + return False + + # Create the correct CbfsFile object depending on the type + cfile = None + fd.seek(file_pos + offset, io.SEEK_SET) + if ftype == TYPE_CBFSHEADER: + self._read_header(fd) + elif ftype == TYPE_STAGE: + data = fd.read(STAGE_LEN) + cfile = CbfsFile.stage(self.stage_base_address, name, b'') + (cfile.compress, cfile.entry, cfile.load, cfile.data_len, + cfile.memlen) = struct.unpack(STAGE_FORMAT, data) + cfile.data = fd.read(cfile.data_len) + elif ftype == TYPE_RAW: + data = fd.read(size) + cfile = CbfsFile.raw(name, data, compress) + cfile.decompress() + if DEBUG: + print('data', data) + else: + raise ValueError('Unknown type %#x when reading\n' % ftype) + if cfile: + cfile.offset = offset + + # Move past the padding to the start of a possible next file. If we are + # already at an alignment boundary, then there is no padding. + pad = (self.align - fd.tell() % self.align) % self.align + fd.seek(pad, io.SEEK_CUR) + return cfile + + @classmethod + def _read_attr(cls, fd, file_pos, attr, offset): + """Read attributes from the file + + CBFS files can have attributes which are things that cannot fit into the + header. The only attribute currently supported is compression. 
+ + Args: + fd: File to read from + file_pos: Position of file in fd + attr: Offset of attributes, 0 if none + offset: Offset of file data (used to indicate the end of the + attributes) + + Returns: + Compression to use for the file (COMPRESS_...) + """ + compress = COMPRESS_NONE + if not attr: + return compress + attr_size = offset - attr + fd.seek(file_pos + attr, io.SEEK_SET) + while attr_size: + pos = fd.tell() + hdr = fd.read(8) + if len(hdr) < 8: + print('Attribute tag at %x ran out of data' % pos) + return None + atag, alen = struct.unpack(">II", hdr) + data = hdr + fd.read(alen - 8) + if atag == FILE_ATTR_TAG_COMPRESSION: + # We don't currently use this information + atag, alen, compress, _decomp_size = struct.unpack( + ATTR_COMPRESSION_FORMAT, data) + else: + print('Unknown attribute tag %x' % atag) + attr_size -= len(data) + return compress + + def _read_header(self, fd): + """Read the master header + + Reads the header and stores the information obtained into the member + variables. 
+ + Args: + fd: File to read from + + Returns: + True if header was read OK, False if it is truncated or has the + wrong magic or version + """ + pos = fd.tell() + data = fd.read(HEADER_LEN) + if len(data) < HEADER_LEN: + print('Header at %x ran out of data' % pos) + return False + (self.magic, self.version, self.rom_size, self.boot_block_size, + self.align, self.cbfs_offset, self.arch, _) = struct.unpack( + HEADER_FORMAT, data) + return self.magic == HEADER_MAGIC and ( + self.version == HEADER_VERSION1 or + self.version == HEADER_VERSION2) + + @classmethod + def _read_string(cls, fd): + """Read a string from a file + + This reads a string and aligns the data to the next alignment boundary + + Args: + fd: File to read from + + Returns: + string read ('str' type) encoded to UTF-8, or None if we ran out of + data + """ + val = b'' + while True: + data = fd.read(FILENAME_ALIGN) + if len(data) < FILENAME_ALIGN: + return None + pos = data.find(b'\0') + if pos == -1: + val += data + else: + val += data[:pos] + break + return val.decode('utf-8') + + +def cbfstool(fname, *cbfs_args): + """Run cbfstool with provided arguments + + If the tool fails then this function raises an exception and prints out the + output and stderr. 
+ + Args: + fname: Filename of CBFS + *cbfs_args: List of arguments to pass to cbfstool + + Returns: + CommandResult object containing the results + """ + args = ('cbfstool', fname) + cbfs_args + result = command.RunPipe([args], capture=not VERBOSE, + capture_stderr=not VERBOSE, raise_on_error=False) + if result.return_code: + print(result.stderr, file=sys.stderr) + raise Exception("Failed to run (error %d): '%s'" % + (result.return_code, ' '.join(args))) diff --git a/tools/binman/cbfs_util_test.py b/tools/binman/cbfs_util_test.py new file mode 100755 index 00000000000..19086305af1 --- /dev/null +++ b/tools/binman/cbfs_util_test.py @@ -0,0 +1,540 @@ +#!/usr/bin/env python +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2019 Google LLC +# Written by Simon Glass sjg@chromium.org + +"""Tests for cbfs_util + +These create and read various CBFSs and compare the results with expected +values and with cbfstool +""" + +from __future__ import print_function + +import io +import os +import shutil +import struct +import tempfile +import unittest + +import cbfs_util +from cbfs_util import CbfsWriter +import elf +import test_util +import tools + +U_BOOT_DATA = b'1234' +U_BOOT_DTB_DATA = b'udtb' +COMPRESS_DATA = b'compress xxxxxxxxxxxxxxxxxxxxxx data' + + +class TestCbfs(unittest.TestCase): + """Test of cbfs_util classes""" + #pylint: disable=W0212 + @classmethod + def setUpClass(cls): + # Create a temporary directory for test files + cls._indir = tempfile.mkdtemp(prefix='cbfs_util.') + tools.SetInputDirs([cls._indir]) + + # Set up some useful data files + TestCbfs._make_input_file('u-boot.bin', U_BOOT_DATA) + TestCbfs._make_input_file('u-boot.dtb', U_BOOT_DTB_DATA) + TestCbfs._make_input_file('compress', COMPRESS_DATA) + + # Set up a temporary output directory, used by the tools library when + # compressing files + tools.PrepareOutputDir(None) + + cls.have_cbfstool = True + try: + tools.Run('which', 'cbfstool') + except: + cls.have_cbfstool = False + + cls.have_lz4 = True + 
try: + tools.Run('lz4', '--no-frame-crc', '-c', + tools.GetInputFilename('u-boot.bin')) + except: + cls.have_lz4 = False + + @classmethod + def tearDownClass(cls): + """Remove the temporary input directory and its contents""" + if cls._indir: + shutil.rmtree(cls._indir) + cls._indir = None + tools.FinaliseOutputDir() + + @classmethod + def _make_input_file(cls, fname, contents): + """Create a new test input file, creating directories as needed + + Args: + fname: Filename to create + contents: File contents to write in to the file + Returns: + Full pathname of file created + """ + pathname = os.path.join(cls._indir, fname) + tools.WriteFile(pathname, contents) + return pathname + + def _check_hdr(self, data, size, offset=0, arch=cbfs_util.ARCHITECTURE_X86): + """Check that the CBFS has the expected header + + Args: + data: Data to check + size: Expected ROM size + offset: Expected offset to first CBFS file + arch: Expected architecture + + Returns: + CbfsReader object containing the CBFS + """ + cbfs = cbfs_util.CbfsReader(data) + self.assertEqual(cbfs_util.HEADER_MAGIC, cbfs.magic) + self.assertEqual(cbfs_util.HEADER_VERSION2, cbfs.version) + self.assertEqual(size, cbfs.rom_size) + self.assertEqual(0, cbfs.boot_block_size) + self.assertEqual(cbfs_util.ENTRY_ALIGN, cbfs.align) + self.assertEqual(offset, cbfs.cbfs_offset) + self.assertEqual(arch, cbfs.arch) + return cbfs + + def _check_uboot(self, cbfs, ftype=cbfs_util.TYPE_RAW, offset=0x38, + data=U_BOOT_DATA): + """Check that the U-Boot file is as expected + + Args: + cbfs: CbfsReader object to check + ftype: Expected file type + offset: Expected offset of file + data: Expected data in file + + Returns: + CbfsFile object containing the file + """ + self.assertIn('u-boot', cbfs.files) + cfile = cbfs.files['u-boot'] + self.assertEqual('u-boot', cfile.name) + self.assertEqual(offset, cfile.offset) + self.assertEqual(data, cfile.data) + self.assertEqual(ftype, cfile.ftype) + self.assertEqual(cbfs_util.COMPRESS_NONE, 
cfile.compress) + self.assertEqual(len(data), cfile.memlen) + return cfile + + def _check_dtb(self, cbfs, offset=0x38, data=U_BOOT_DTB_DATA): + """Check that the U-Boot dtb file is as expected + + Args: + cbfs: CbfsReader object to check + offset: Expected offset of file + data: Expected data in file + """ + self.assertIn('u-boot-dtb', cbfs.files) + cfile = cbfs.files['u-boot-dtb'] + self.assertEqual('u-boot-dtb', cfile.name) + self.assertEqual(offset, cfile.offset) + self.assertEqual(U_BOOT_DTB_DATA, cfile.data) + self.assertEqual(cbfs_util.TYPE_RAW, cfile.ftype) + self.assertEqual(cbfs_util.COMPRESS_NONE, cfile.compress) + self.assertEqual(len(U_BOOT_DTB_DATA), cfile.memlen) + + def _check_raw(self, data, size, offset=0, arch=cbfs_util.ARCHITECTURE_X86): + """Check that two raw files are added as expected + + Args: + data: Data to check + size: Expected ROM size + offset: Expected offset to first CBFS file + arch: Expected architecture + """ + cbfs = self._check_hdr(data, size, offset=offset, arch=arch) + self._check_uboot(cbfs) + self._check_dtb(cbfs) + + def _get_expected_cbfs(self, size, arch='x86', compress=None): + """Get the file created by cbfstool for a particular scenario + + Args: + size: Size of the CBFS in bytes + arch: Architecture of the CBFS, as a string + compress: Compression to use, e.g. 
cbfs_util.COMPRESS_LZMA + + Returns: + Resulting CBFS file, or None if cbfstool is not available + """ + if not self.have_cbfstool or not self.have_lz4: + return None + cbfs_fname = os.path.join(self._indir, 'test.cbfs') + cbfs_util.cbfstool(cbfs_fname, 'create', '-m', arch, '-s', '%#x' % size) + cbfs_util.cbfstool(cbfs_fname, 'add', '-n', 'u-boot', '-t', 'raw', + '-c', compress and compress[0] or 'none', + '-f', tools.GetInputFilename( + compress and 'compress' or 'u-boot.bin')) + cbfs_util.cbfstool(cbfs_fname, 'add', '-n', 'u-boot-dtb', '-t', 'raw', + '-c', compress and compress[1] or 'none', + '-f', tools.GetInputFilename( + compress and 'compress' or 'u-boot.dtb')) + return cbfs_fname + + def _compare_expected_cbfs(self, data, cbfstool_fname): + """Compare against what cbfstool creates + + This compares what binman creates with what cbfstool creates for what + is purportedly the same thing. + + Args: + data: CBFS created by binman + cbfstool_fname: CBFS created by cbfstool + """ + if not self.have_cbfstool or not self.have_lz4: + return + expect = tools.ReadFile(cbfstool_fname) + if expect != data: + tools.WriteFile('/tmp/expect', expect) + tools.WriteFile('/tmp/actual', data) + print('diff -y <(xxd -g1 /tmp/expect) <(xxd -g1 /tmp/actual) | colordiff') + self.fail('cbfstool produced a different result') + + def test_cbfs_functions(self): + """Test global functions of cbfs_util""" + self.assertEqual(cbfs_util.ARCHITECTURE_X86, cbfs_util.find_arch('x86')) + self.assertIsNone(cbfs_util.find_arch('bad-arch')) + + self.assertEqual(cbfs_util.COMPRESS_LZMA, cbfs_util.find_compress('lzma')) + self.assertIsNone(cbfs_util.find_compress('bad-comp')) + + def test_cbfstool_failure(self): + """Test failure to run cbfstool""" + if not self.have_cbfstool: + self.skipTest('No cbfstool available') + try: + # In verbose mode this test fails since stderr is not captured. Fix + # this by turning off verbosity. 
+ old_verbose = cbfs_util.VERBOSE + cbfs_util.VERBOSE = False + with test_util.capture_sys_output() as (_stdout, stderr): + with self.assertRaises(Exception) as e: + cbfs_util.cbfstool('missing-file', 'bad-command') + finally: + cbfs_util.VERBOSE = old_verbose + self.assertIn('Unknown command', stderr.getvalue()) + self.assertIn('Failed to run', str(e.exception)) + + def test_cbfs_raw(self): + """Test base handling of a Coreboot Filesystem (CBFS)""" + size = 0xb0 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + cbw.add_file_raw('u-boot-dtb', U_BOOT_DTB_DATA) + data = cbw.get_data() + self._check_raw(data, size) + cbfs_fname = self._get_expected_cbfs(size=size) + self._compare_expected_cbfs(data, cbfs_fname) + + def test_cbfs_invalid_file_type(self): + """Check handling of an invalid file type when outputting a CBFS""" + size = 0xb0 + cbw = CbfsWriter(size) + cfile = cbw.add_file_raw('u-boot', U_BOOT_DATA) + + # Change the type manually before generating the CBFS, and make sure + # that the generator complains + cfile.ftype = 0xff + with self.assertRaises(ValueError) as e: + cbw.get_data() + self.assertIn('Unknown type 0xff when writing', str(e.exception)) + + def test_cbfs_invalid_file_type_on_read(self): + """Check handling of an invalid file type when reading the CBFS""" + size = 0xb0 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + + data = cbw.get_data() + + # Read in the first file header + cbr = cbfs_util.CbfsReader(data, read=False) + with io.BytesIO(data) as fd: + self.assertTrue(cbr._find_and_read_header(fd, len(data))) + pos = fd.tell() + hdr_data = fd.read(cbfs_util.FILE_HEADER_LEN) + magic, size, ftype, attr, offset = struct.unpack( + cbfs_util.FILE_HEADER_FORMAT, hdr_data) + + # Create a new CBFS with a change to the file type + ftype = 0xff + newdata = data[:pos] + newdata += struct.pack(cbfs_util.FILE_HEADER_FORMAT, magic, size, ftype, + attr, offset) + newdata += data[pos + cbfs_util.FILE_HEADER_LEN:] + + # 
Read in this CBFS and make sure that the reader complains + with self.assertRaises(ValueError) as e: + cbfs_util.CbfsReader(newdata) + self.assertIn('Unknown type 0xff when reading', str(e.exception)) + + def test_cbfs_no_space(self): + """Check handling of running out of space in the CBFS""" + size = 0x60 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + with self.assertRaises(ValueError) as e: + cbw.get_data() + self.assertIn('No space for header', str(e.exception)) + + def test_cbfs_no_space_skip(self): + """Check handling of running out of space in CBFS with file header""" + size = 0x70 + cbw = CbfsWriter(size) + cbw._add_fileheader = True + cbw.add_file_raw('u-boot', U_BOOT_DATA) + with self.assertRaises(ValueError) as e: + cbw.get_data() + self.assertIn('No space for data before offset', str(e.exception)) + + def test_cbfs_bad_header_ptr(self): + """Check handling of a bad master-header pointer""" + size = 0x70 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + data = cbw.get_data() + + # Add one to the pointer to make it invalid + newdata = data[:-4] + struct.pack('<I', cbw._header_offset + 1) + + # We should still be able to find the master header by searching + with test_util.capture_sys_output() as (stdout, _stderr): + cbfs = cbfs_util.CbfsReader(newdata) + self.assertIn('Relative offset seems wrong', stdout.getvalue()) + self.assertIn('u-boot', cbfs.files) + self.assertEqual(size, cbfs.rom_size) + + def test_cbfs_bad_header(self): + """Check handling of a bad master header""" + size = 0x70 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + data = cbw.get_data() + + # Drop most of the header and try reading the modified CBFS + newdata = data[:cbw._header_offset + 4] + + with test_util.capture_sys_output() as (stdout, _stderr): + with self.assertRaises(ValueError) as e: + cbfs_util.CbfsReader(newdata) + self.assertIn('Relative offset seems wrong', stdout.getvalue()) + self.assertIn('Cannot find master 
header', str(e.exception)) + + def test_cbfs_bad_file_header(self): + """Check handling of a bad file header""" + size = 0x70 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + data = cbw.get_data() + + # Read in the CBFS master header (only), then stop + cbr = cbfs_util.CbfsReader(data, read=False) + with io.BytesIO(data) as fd: + self.assertTrue(cbr._find_and_read_header(fd, len(data))) + pos = fd.tell() + + # Remove all but 4 bytes of the file header, and try to read the file + newdata = data[:pos + 4] + with test_util.capture_sys_output() as (stdout, _stderr): + with io.BytesIO(newdata) as fd: + fd.seek(pos) + self.assertEqual(False, cbr._read_next_file(fd)) + self.assertIn('File header at 0 ran out of data', stdout.getvalue()) + + def test_cbfs_bad_file_string(self): + """Check handling of an incomplete filename string""" + size = 0x70 + cbw = CbfsWriter(size) + cbw.add_file_raw('16-characters xx', U_BOOT_DATA) + data = cbw.get_data() + + # Read in the CBFS master header (only), then stop + cbr = cbfs_util.CbfsReader(data, read=False) + with io.BytesIO(data) as fd: + self.assertTrue(cbr._find_and_read_header(fd, len(data))) + pos = fd.tell() + + # Create a new CBFS with only the first 16 bytes of the file name, then + # try to read the file + newdata = data[:pos + cbfs_util.FILE_HEADER_LEN + 16] + with test_util.capture_sys_output() as (stdout, _stderr): + with io.BytesIO(newdata) as fd: + fd.seek(pos) + self.assertEqual(False, cbr._read_next_file(fd)) + self.assertIn('String at %x ran out of data' % + cbfs_util.FILE_HEADER_LEN, stdout.getvalue()) + + def test_cbfs_debug(self): + """Check debug output""" + size = 0x70 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + data = cbw.get_data() + + try: + cbfs_util.DEBUG = True + with test_util.capture_sys_output() as (stdout, _stderr): + cbfs_util.CbfsReader(data) + self.assertEqual('name u-boot\ndata %s\n' % U_BOOT_DATA, + stdout.getvalue()) + finally: + cbfs_util.DEBUG = False 
+ + def test_cbfs_bad_attribute(self): + """Check handling of bad attribute tag""" + if not self.have_lz4: + self.skipTest('lz4 --no-frame-crc not available') + size = 0x140 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', COMPRESS_DATA, + compress=cbfs_util.COMPRESS_LZ4) + data = cbw.get_data() + + # Search the CBFS for the expected compression tag + with io.BytesIO(data) as fd: + while True: + pos = fd.tell() + tag, = struct.unpack('>I', fd.read(4)) + if tag == cbfs_util.FILE_ATTR_TAG_COMPRESSION: + break + + # Create a new CBFS with the tag changed to something invalid + newdata = data[:pos] + struct.pack('>I', 0x123) + data[pos + 4:] + with test_util.capture_sys_output() as (stdout, _stderr): + cbfs_util.CbfsReader(newdata) + self.assertEqual('Unknown attribute tag 123\n', stdout.getvalue()) + + def test_cbfs_missing_attribute(self): + """Check handling of an incomplete attribute tag""" + if not self.have_lz4: + self.skipTest('lz4 --no-frame-crc not available') + size = 0x140 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', COMPRESS_DATA, + compress=cbfs_util.COMPRESS_LZ4) + data = cbw.get_data() + + # Read in the CBFS master header (only), then stop + cbr = cbfs_util.CbfsReader(data, read=False) + with io.BytesIO(data) as fd: + self.assertTrue(cbr._find_and_read_header(fd, len(data))) + pos = fd.tell() + + # Create a new CBFS with only the first 4 bytes of the compression tag, + # then try to read the file + tag_pos = pos + cbfs_util.FILE_HEADER_LEN + cbfs_util.FILENAME_ALIGN + newdata = data[:tag_pos + 4] + with test_util.capture_sys_output() as (stdout, _stderr): + with io.BytesIO(newdata) as fd: + fd.seek(pos) + self.assertEqual(False, cbr._read_next_file(fd)) + self.assertIn('Attribute tag at %x ran out of data' % tag_pos, + stdout.getvalue()) + + def test_cbfs_file_master_header(self): + """Check handling of a file containing a master header""" + size = 0x100 + cbw = CbfsWriter(size) + cbw._add_fileheader = True + cbw.add_file_raw('u-boot', 
U_BOOT_DATA) + data = cbw.get_data() + + cbr = cbfs_util.CbfsReader(data) + self.assertIn('u-boot', cbr.files) + self.assertEqual(size, cbr.rom_size) + + def test_cbfs_arch(self): + """Test on non-x86 architecture""" + size = 0x100 + cbw = CbfsWriter(size, arch=cbfs_util.ARCHITECTURE_PPC64) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + cbw.add_file_raw('u-boot-dtb', U_BOOT_DTB_DATA) + data = cbw.get_data() + self._check_raw(data, size, offset=0x40, + arch=cbfs_util.ARCHITECTURE_PPC64) + + # Compare against what cbfstool creates + cbfs_fname = self._get_expected_cbfs(size=size, arch='ppc64') + self._compare_expected_cbfs(data, cbfs_fname) + + def test_cbfs_stage(self): + """Tests handling of a Coreboot Filesystem (CBFS)""" + if not elf.ELF_TOOLS: + self.skipTest('Python elftools not available') + elf_fname = os.path.join(self._indir, 'cbfs-stage.elf') + elf.MakeElf(elf_fname, U_BOOT_DATA, U_BOOT_DTB_DATA) + + size = 0xb0 + cbw = CbfsWriter(size) + cbw.add_file_stage('u-boot', tools.ReadFile(elf_fname)) + + data = cbw.get_data() + cbfs = self._check_hdr(data, size) + load = 0xfef20000 + entry = load + 2 + + cfile = self._check_uboot(cbfs, cbfs_util.TYPE_STAGE, offset=0x28, + data=U_BOOT_DATA + U_BOOT_DTB_DATA) + + self.assertEqual(entry, cfile.entry) + self.assertEqual(load, cfile.load) + self.assertEqual(len(U_BOOT_DATA) + len(U_BOOT_DTB_DATA), + cfile.data_len) + + # Compare against what cbfstool creates + if self.have_cbfstool: + cbfs_fname = os.path.join(self._indir, 'test.cbfs') + cbfs_util.cbfstool(cbfs_fname, 'create', '-m', 'x86', '-s', + '%#x' % size) + cbfs_util.cbfstool(cbfs_fname, 'add-stage', '-n', 'u-boot', + '-f', elf_fname) + self._compare_expected_cbfs(data, cbfs_fname) + + def test_cbfs_raw_compress(self): + """Test base handling of compressing raw files""" + if not self.have_lz4: + self.skipTest('lz4 --no-frame-crc not available') + size = 0x140 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', COMPRESS_DATA, + compress=cbfs_util.COMPRESS_LZ4) + 
cbw.add_file_raw('u-boot-dtb', COMPRESS_DATA, + compress=cbfs_util.COMPRESS_LZMA) + data = cbw.get_data() + + cbfs = self._check_hdr(data, size) + self.assertIn('u-boot', cbfs.files) + cfile = cbfs.files['u-boot'] + self.assertEqual(cfile.name, 'u-boot') + self.assertEqual(cfile.offset, 56) + self.assertEqual(cfile.data, COMPRESS_DATA) + self.assertEqual(cfile.ftype, cbfs_util.TYPE_RAW) + self.assertEqual(cfile.compress, cbfs_util.COMPRESS_LZ4) + self.assertEqual(cfile.memlen, len(COMPRESS_DATA)) + + self.assertIn('u-boot-dtb', cbfs.files) + cfile = cbfs.files['u-boot-dtb'] + self.assertEqual(cfile.name, 'u-boot-dtb') + self.assertEqual(cfile.offset, 56) + self.assertEqual(cfile.data, COMPRESS_DATA) + self.assertEqual(cfile.ftype, cbfs_util.TYPE_RAW) + self.assertEqual(cfile.compress, cbfs_util.COMPRESS_LZMA) + self.assertEqual(cfile.memlen, len(COMPRESS_DATA)) + + cbfs_fname = self._get_expected_cbfs(size=size, compress=['lz4', 'lzma']) + self._compare_expected_cbfs(data, cbfs_fname) + + +if __name__ == '__main__': + unittest.main()
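The attribute tests in this file exercise cbfs_util's tag/length walker: each file attribute starts with a big-endian tag and a total length, so unknown tags can be skipped. As a standalone illustration of that parsing, here is a minimal sketch; the tag value and struct format are assumptions chosen to mirror the cbfs_util constants, not code taken from this patch:

```python
import struct

# Assumed values mirroring cbfs_util's constants
FILE_ATTR_TAG_COMPRESSION = 0x42435a4c
ATTR_COMPRESSION_FORMAT = '>IIII'  # tag, length, compression, decompressed size

def find_compression(attr_data):
    """Walk an attribute area and return the compression type (0 if none)."""
    compress = 0  # COMPRESS_NONE
    pos = 0
    while pos + 8 <= len(attr_data):
        atag, alen = struct.unpack_from('>II', attr_data, pos)
        if alen < 8:
            break  # malformed length; stop rather than loop forever
        if atag == FILE_ATTR_TAG_COMPRESSION:
            _, _, compress, _decomp_size = struct.unpack_from(
                ATTR_COMPRESSION_FORMAT, attr_data, pos)
        pos += alen  # the length field covers the whole attribute record
    return compress

# Build one compression attribute (type 1, decompressed size 0x40) and parse it
attr = struct.pack(ATTR_COMPRESSION_FORMAT,
                   FILE_ATTR_TAG_COMPRESSION, 16, 1, 0x40)
print(find_compression(attr))
```

Because every attribute carries its own length, a reader can step over tags it does not understand, which is what lets the 'unknown attribute tag' case in the tests produce a warning rather than a hard failure.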

Coreboot uses a simple flash-based filesystem called Coreboot Filesystem (CBFS) to organise files used during boot. This allows files to be named and their position in the flash to be set. It has special features for dealing with x86 devices, which typically memory-map their SPI flash to the top of the 32-bit address space and need a 'boot block' ending there.
Create a library to help create and read CBFS files. This includes a writer class, a reader class and other associated helpers. Only a subset of features is currently supported.
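The reader locates a CBFS by finding its master header, a fixed big-endian structure. A minimal sketch of packing and checking one follows; the format string, magic and version values here are assumptions mirroring typical CBFS definitions, not code from this patch:

```python
import struct

# Assumed layout mirroring cbfs_util's HEADER_FORMAT and related constants:
# magic, version, romsize, bootblocksize, align, offset, architecture, pad
HEADER_FORMAT = '>IIIIIIII'
HEADER_MAGIC = 0x4f524243    # 'ORBC'
HEADER_VERSION2 = 0x31313132

def make_header(rom_size, align=64, offset=0x40, arch=1):
    """Pack a CBFS master header for a ROM of the given size."""
    return struct.pack(HEADER_FORMAT, HEADER_MAGIC, HEADER_VERSION2,
                       rom_size, 0, align, offset, arch, 0xffffffff)

def read_header(data):
    """Check the magic/version of a header, returning (ok, rom_size)."""
    magic, version, rom_size = struct.unpack_from('>III', data)
    ok = magic == HEADER_MAGIC and version == HEADER_VERSION2
    return ok, rom_size

ok, rom_size = read_header(make_header(0x100000))
print(ok, hex(rom_size))
```

Validating the magic and version up front is what lets a reader scan a ROM for the header and reject false matches, as the library's _read_header does.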
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Deal with travis's old lz4 version by skipping tests as necessary - Install lzma tool in travis - Skip use of cbfstool in tests if it is not available
.travis.yml | 1 + tools/binman/binman.py | 4 +- tools/binman/cbfs_util.py | 720 +++++++++++++++++++++++++++++++++ tools/binman/cbfs_util_test.py | 540 +++++++++++++++++++++++++ 4 files changed, 1264 insertions(+), 1 deletion(-) create mode 100644 tools/binman/cbfs_util.py create mode 100755 tools/binman/cbfs_util_test.py
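One detail of the reader worth illustrating is filename handling: names are NUL-terminated but stored padded to a fixed alignment, so the reader consumes chunk-sized reads until it sees the terminator. The standalone sketch below follows the same logic as cbfs_util's _read_string, with the 16-byte alignment being an assumption:

```python
import io

FILENAME_ALIGN = 16  # assumed alignment of the stored name

def read_aligned_string(fd):
    """Read a NUL-terminated, chunk-aligned string; None if truncated."""
    val = b''
    while True:
        data = fd.read(FILENAME_ALIGN)
        if len(data) < FILENAME_ALIGN:
            return None  # ran out of data before finding the terminator
        pos = data.find(b'\0')
        if pos == -1:
            val += data  # name continues into the next chunk
        else:
            return (val + data[:pos]).decode('utf-8')

print(read_aligned_string(io.BytesIO(b'u-boot'.ljust(16, b'\0'))))
print(read_aligned_string(io.BytesIO(b'truncated')))
```

Reading in aligned chunks leaves the file position on an alignment boundary afterwards, which is why the 'incomplete filename' test can truncate the data mid-name and expect a clean failure.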
Applied to u-boot-dm, thanks!

Add support for putting CBFSs (Coreboot Filesystems) in an image. This allows binman to produce firmware images used by coreboot to boot.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Deal with travis's old lz4 version by skipping tests as necessary - Skip use of cbfstool in tests if it is not available
tools/binman/README.entries | 142 ++++++++++++++ tools/binman/control.py | 2 + tools/binman/etype/cbfs.py | 193 ++++++++++++++++++++ tools/binman/ftest.py | 110 +++++++++++ tools/binman/test/102_cbfs_raw.dts | 20 ++ tools/binman/test/103_cbfs_raw_ppc.dts | 21 +++ tools/binman/test/104_cbfs_stage.dts | 19 ++ tools/binman/test/105_cbfs_raw_compress.dts | 26 +++ tools/binman/test/106_cbfs_bad_arch.dts | 15 ++ tools/binman/test/107_cbfs_no_size.dts | 13 ++ tools/binman/test/108_cbfs_no_contents.dts | 17 ++ tools/binman/test/109_cbfs_bad_compress.dts | 18 ++ tools/binman/test/110_cbfs_name.dts | 24 +++ 13 files changed, 620 insertions(+) create mode 100644 tools/binman/etype/cbfs.py create mode 100644 tools/binman/test/102_cbfs_raw.dts create mode 100644 tools/binman/test/103_cbfs_raw_ppc.dts create mode 100644 tools/binman/test/104_cbfs_stage.dts create mode 100644 tools/binman/test/105_cbfs_raw_compress.dts create mode 100644 tools/binman/test/106_cbfs_bad_arch.dts create mode 100644 tools/binman/test/107_cbfs_no_size.dts create mode 100644 tools/binman/test/108_cbfs_no_contents.dts create mode 100644 tools/binman/test/109_cbfs_bad_compress.dts create mode 100644 tools/binman/test/110_cbfs_name.dts
diff --git a/tools/binman/README.entries b/tools/binman/README.entries index 9a316763ace..2e6aea1e84c 100644 --- a/tools/binman/README.entries +++ b/tools/binman/README.entries @@ -60,6 +60,148 @@ See cros_ec_rw for an example of this.
+Entry: cbfs: Entry containing a Coreboot Filesystem (CBFS) +---------------------------------------------------------- + +A CBFS provides a way to group files together. It has a simple directory +structure and allows the position of individual files to be set, since it is +designed to support execute-in-place in an x86 SPI-flash device. Where XIP +is not used, it supports compression and storing ELF files. + +CBFS is used by coreboot as its way of organising SPI-flash contents. + +The contents of the CBFS are defined by subnodes of the cbfs entry, e.g.: + + cbfs { + size = <0x100000>; + u-boot { + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-type = "raw"; + }; + }; + +This creates a CBFS 1MB in size with two files in it: u-boot.bin and u-boot.dtb. +Note that the size is required since binman does not support calculating it. +The contents of each entry are just what binman would normally provide if it +were not a CBFS node. A blob type can be used to import arbitrary files as +with the second subnode below: + + cbfs { + size = <0x100000>; + u-boot { + cbfs-name = "BOOT"; + cbfs-type = "raw"; + }; + + dtb { + type = "blob"; + filename = "u-boot.dtb"; + cbfs-type = "raw"; + cbfs-compress = "lz4"; + }; + }; + +This creates a CBFS 1MB in size with u-boot.bin (named "BOOT") and +u-boot.dtb (named "dtb", compressed with the lz4 algorithm). + + +Properties supported in the top-level CBFS node: + +cbfs-arch: + Defaults to "x86", but you can specify the architecture if needed. + + +Properties supported in the CBFS entry subnodes: + +cbfs-name: + This is the name of the file created in CBFS. It defaults to the entry + name (which is the node name), but you can override it with this + property. + +cbfs-type: + This is the CBFS file type. The following are supported: + + raw: + This is a 'raw' file, although compression is supported. It can be + used to store any file in CBFS. + + stage: + This is an ELF file that has been loaded (i.e. 
mapped to memory), so + appears in the CBFS as a flat binary. The input file must be an ELF + image, for example this puts "u-boot" (the ELF image) into a 'stage' + entry: + + cbfs { + size = <0x100000>; + u-boot-elf { + cbfs-name = "BOOT"; + cbfs-type = "stage"; + }; + }; + + You can use your own ELF file with something like: + + cbfs { + size = <0x100000>; + something { + type = "blob"; + filename = "cbfs-stage.elf"; + cbfs-type = "stage"; + }; + }; + + As mentioned, the file is converted to a flat binary, so it is + equivalent to adding "u-boot.bin", for example, but with the load and + start addresses specified by the ELF. At present there is no option + to add a flat binary with a load/start address, similar to the + 'add-flat-binary' option in cbfstool. + + +The current implementation supports only a subset of CBFS features. It does +not support other file types (e.g. payload), adding multiple files (like the +'files' entry with a pattern supported by binman), putting files at a +particular offset in the CBFS and a few other things. + +Of course binman can create images containing multiple CBFSs, simply by +defining these in the binman config: + + + binman { + size = <0x800000>; + cbfs { + offset = <0x100000>; + size = <0x100000>; + u-boot { + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-type = "raw"; + }; + }; + + cbfs2 { + offset = <0x700000>; + size = <0x100000>; + u-boot { + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-type = "raw"; + }; + image { + type = "blob"; + filename = "image.jpg"; + }; + }; + }; + +This creates an 8MB image with two CBFSs, one at offset 1MB, one at 7MB, +both of size 1MB. + + + Entry: cros-ec-rw: A blob entry which contains a Chromium OS read-write EC image --------------------------------------------------------------------------------
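Behind each of these subnodes, the generated CBFS stores the file data after a small per-file header. The sketch below shows that layout; the magic string, field order and type value are assumptions based on the coreboot format (the tests unpack the same five fields via FILE_HEADER_FORMAT), not definitions from this patch:

```python
import struct

FILE_HEADER_FORMAT = '>8sIIII'  # assumed: magic, size, type, attr, offset
FILE_MAGIC = b'LARCHIVE'        # assumed CBFS per-file magic
TYPE_RAW = 0x50                 # assumed value of the 'raw' file type

def make_file_header(size, ftype=TYPE_RAW, attr=0, offset=0x38):
    """Pack a CBFS file header for a file of the given data size."""
    return struct.pack(FILE_HEADER_FORMAT, FILE_MAGIC, size, ftype,
                       attr, offset)

hdr = make_file_header(len(b'1234'))
magic, size, ftype, attr, offset = struct.unpack(FILE_HEADER_FORMAT, hdr)
print(magic == FILE_MAGIC, size, hex(offset))
```

The offset field gives the distance from the header to the file data, which is how a reader skips the name and any attributes that sit between them.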
diff --git a/tools/binman/control.py b/tools/binman/control.py index df78848e13d..4a94afc8640 100644 --- a/tools/binman/control.py +++ b/tools/binman/control.py @@ -12,6 +12,7 @@ import os import sys import tools
+import cbfs_util import command import elf from image import Image @@ -108,6 +109,7 @@ def Binman(options, args):
tout.Init(options.verbosity) elf.debug = options.debug + cbfs_util.VERBOSE = options.verbosity > 2 state.use_fake_dtb = options.fake_dtb try: tools.SetInputDirs(options.indir) diff --git a/tools/binman/etype/cbfs.py b/tools/binman/etype/cbfs.py new file mode 100644 index 00000000000..513df217bc1 --- /dev/null +++ b/tools/binman/etype/cbfs.py @@ -0,0 +1,193 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright 2019 Google LLC +# Written by Simon Glass sjg@chromium.org +# +# Entry-type module for a Coreboot Filesystem (CBFS) +# + +from collections import OrderedDict + +import cbfs_util +from cbfs_util import CbfsWriter +from entry import Entry +import fdt_util + +class Entry_cbfs(Entry): + """Entry containing a Coreboot Filesystem (CBFS) + + A CBFS provides a way to group files together. It has a simple directory + structure and allows the position of individual files to be set, since it is + designed to support execute-in-place in an x86 SPI-flash device. Where XIP + is not used, it supports compression and storing ELF files. + + CBFS is used by coreboot as its way of organising SPI-flash contents. + + The contents of the CBFS are defined by subnodes of the cbfs entry, e.g.: + + cbfs { + size = <0x100000>; + u-boot { + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-type = "raw"; + }; + }; + + This creates a CBFS 1MB in size with two files in it: u-boot.bin and u-boot.dtb. + Note that the size is required since binman does not support calculating it. + The contents of each entry are just what binman would normally provide if it + were not a CBFS node. A blob type can be used to import arbitrary files as + with the second subnode below: + + cbfs { + size = <0x100000>; + u-boot { + cbfs-name = "BOOT"; + cbfs-type = "raw"; + }; + + dtb { + type = "blob"; + filename = "u-boot.dtb"; + cbfs-type = "raw"; + cbfs-compress = "lz4"; + }; + }; + + This creates a CBFS 1MB in size with u-boot.bin (named "BOOT") and + u-boot.dtb (named "dtb", compressed with the lz4 algorithm). 
+ + + Properties supported in the top-level CBFS node: + + cbfs-arch: + Defaults to "x86", but you can specify the architecture if needed. + + + Properties supported in the CBFS entry subnodes: + + cbfs-name: + This is the name of the file created in CBFS. It defaults to the entry + name (which is the node name), but you can override it with this + property. + + cbfs-type: + This is the CBFS file type. The following are supported: + + raw: + This is a 'raw' file, although compression is supported. It can be + used to store any file in CBFS. + + stage: + This is an ELF file that has been loaded (i.e. mapped to memory), so + appears in the CBFS as a flat binary. The input file must be an ELF + image, for example this puts "u-boot" (the ELF image) into a 'stage' + entry: + + cbfs { + size = <0x100000>; + u-boot-elf { + cbfs-name = "BOOT"; + cbfs-type = "stage"; + }; + }; + + You can use your own ELF file with something like: + + cbfs { + size = <0x100000>; + something { + type = "blob"; + filename = "cbfs-stage.elf"; + cbfs-type = "stage"; + }; + }; + + As mentioned, the file is converted to a flat binary, so it is + equivalent to adding "u-boot.bin", for example, but with the load and + start addresses specified by the ELF. At present there is no option + to add a flat binary with a load/start address, similar to the + 'add-flat-binary' option in cbfstool. + + + The current implementation supports only a subset of CBFS features. It does + not support other file types (e.g. payload), adding multiple files (like the + 'files' entry with a pattern supported by binman), putting files at a + particular offset in the CBFS and a few other things. 
+ + Of course binman can create images containing multiple CBFSs, simply by + defining these in the binman config: + + + binman { + size = <0x800000>; + cbfs { + offset = <0x100000>; + size = <0x100000>; + u-boot { + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-type = "raw"; + }; + }; + + cbfs2 { + offset = <0x700000>; + size = <0x100000>; + u-boot { + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-type = "raw"; + }; + image { + type = "blob"; + filename = "image.jpg"; + }; + }; + }; + + This creates an 8MB image with two CBFSs, one at offset 1MB, one at 7MB, + both of size 1MB. + """ + def __init__(self, section, etype, node): + Entry.__init__(self, section, etype, node) + self._cbfs_arg = fdt_util.GetString(node, 'cbfs-arch', 'x86') + self._cbfs_entries = OrderedDict() + self._ReadSubnodes() + + def ObtainContents(self): + arch = cbfs_util.find_arch(self._cbfs_arg) + if arch is None: + self.Raise("Invalid architecture '%s'" % self._cbfs_arg) + if self.size is None: + self.Raise("'cbfs' entry must have a size property") + cbfs = CbfsWriter(self.size, arch) + for entry in self._cbfs_entries.values(): + # First get the input data and put it in a file. If not available, + # try later. 
+ if not entry.ObtainContents(): + return False + data = entry.GetData() + if entry._type == 'raw': + cbfs.add_file_raw(entry._cbfs_name, data, entry._cbfs_compress) + elif entry._type == 'stage': + cbfs.add_file_stage(entry._cbfs_name, data) + data = cbfs.get_data() + self.SetContents(data) + return True + + def _ReadSubnodes(self): + """Read the subnodes to find out what should go in this CBFS""" + for node in self._node.subnodes: + entry = Entry.Create(self.section, node) + entry._cbfs_name = fdt_util.GetString(node, 'cbfs-name', entry.name) + entry._type = fdt_util.GetString(node, 'cbfs-type') + compress = fdt_util.GetString(node, 'cbfs-compress', 'none') + entry._cbfs_compress = cbfs_util.find_compress(compress) + if entry._cbfs_compress is None: + self.Raise("Invalid compression in '%s': '%s'" % + (node.name, compress)) + self._cbfs_entries[entry._cbfs_name] = entry diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 3455b8ccebd..14abfbf774f 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -18,6 +18,7 @@ import tempfile import unittest
import binman +import cbfs_util import cmdline import command import control @@ -133,6 +134,14 @@ class TestFunctional(unittest.TestCase):
TestFunctional._MakeInputFile('compress', COMPRESS_DATA)
+ # Travis-CI may have an old lz4 + self.have_lz4 = True + try: + tools.Run('lz4', '--no-frame-crc', '-c', + os.path.join(self._indir, 'u-boot.bin')) + except: + self.have_lz4 = False + @classmethod def tearDownClass(self): """Remove the temporary input directory and its contents""" @@ -160,6 +169,10 @@ class TestFunctional(unittest.TestCase): cls.preserve_outdirs = preserve_outdirs cls.toolpath = toolpath
+ def _CheckLz4(self): + if not self.have_lz4: + self.skipTest('lz4 --no-frame-crc not available') + def setUp(self): # Enable this to turn on debugging output # tout.Init(tout.DEBUG) @@ -1607,6 +1620,7 @@ class TestFunctional(unittest.TestCase):
def testCompress(self): """Test compression of blobs""" + self._CheckLz4() data, _, _, out_dtb_fname = self._DoReadFileDtb('083_compress.dts', use_real_dtb=True, update_dtb=True) dtb = fdt.Fdt(out_dtb_fname) @@ -1628,6 +1642,7 @@ class TestFunctional(unittest.TestCase):
def testFilesCompress(self): """Test bringing in multiple files and compressing them""" + self._CheckLz4() data = self._DoReadFile('085_files_compress.dts')
image = control.images['image'] @@ -1846,6 +1861,101 @@ class TestFunctional(unittest.TestCase): tools.GetBytes(0x26, 4) + U_BOOT_DATA + tools.GetBytes(0x26, 8))
+ def testCbfsRaw(self): + """Test base handling of a Coreboot Filesystem (CBFS) + + The exact contents of the CBFS is verified by similar tests in + cbfs_util_test.py. The tests here merely check that the files added to + the CBFS can be found in the final image. + """ + data = self._DoReadFile('102_cbfs_raw.dts') + size = 0xb0 + + cbfs = cbfs_util.CbfsReader(data) + self.assertEqual(size, cbfs.rom_size) + + self.assertIn('u-boot-dtb', cbfs.files) + cfile = cbfs.files['u-boot-dtb'] + self.assertEqual(U_BOOT_DTB_DATA, cfile.data) + + def testCbfsArch(self): + """Test on non-x86 architecture""" + data = self._DoReadFile('103_cbfs_raw_ppc.dts') + size = 0x100 + + cbfs = cbfs_util.CbfsReader(data) + self.assertEqual(size, cbfs.rom_size) + + self.assertIn('u-boot-dtb', cbfs.files) + cfile = cbfs.files['u-boot-dtb'] + self.assertEqual(U_BOOT_DTB_DATA, cfile.data) + + def testCbfsStage(self): + """Tests handling of a Coreboot Filesystem (CBFS)""" + if not elf.ELF_TOOLS: + self.skipTest('Python elftools not available') + elf_fname = os.path.join(self._indir, 'cbfs-stage.elf') + elf.MakeElf(elf_fname, U_BOOT_DATA, U_BOOT_DTB_DATA) + size = 0xb0 + + data = self._DoReadFile('104_cbfs_stage.dts') + cbfs = cbfs_util.CbfsReader(data) + self.assertEqual(size, cbfs.rom_size) + + self.assertIn('u-boot', cbfs.files) + cfile = cbfs.files['u-boot'] + self.assertEqual(U_BOOT_DATA + U_BOOT_DTB_DATA, cfile.data) + + def testCbfsRawCompress(self): + """Test handling of compressing raw files""" + self._CheckLz4() + data = self._DoReadFile('105_cbfs_raw_compress.dts') + size = 0x140 + + cbfs = cbfs_util.CbfsReader(data) + self.assertIn('u-boot', cbfs.files) + cfile = cbfs.files['u-boot'] + self.assertEqual(COMPRESS_DATA, cfile.data) + + def testCbfsBadArch(self): + """Test handling of a bad architecture""" + with self.assertRaises(ValueError) as e: + self._DoReadFile('106_cbfs_bad_arch.dts') + self.assertIn("Invalid architecture 'bad-arch'", str(e.exception)) + + def testCbfsNoSize(self): 
+ """Test handling of a missing size property""" + with self.assertRaises(ValueError) as e: + self._DoReadFile('107_cbfs_no_size.dts') + self.assertIn('entry must have a size property', str(e.exception)) + + def testCbfsNoCOntents(self): + """Test handling of a CBFS entry which does not provide contentsy""" + with self.assertRaises(ValueError) as e: + self._DoReadFile('108_cbfs_no_contents.dts') + self.assertIn('Could not complete processing of contents', + str(e.exception)) + + def testCbfsBadCompress(self): + """Test handling of a bad architecture""" + with self.assertRaises(ValueError) as e: + self._DoReadFile('109_cbfs_bad_compress.dts') + self.assertIn("Invalid compression in 'u-boot': 'invalid-algo'", + str(e.exception)) + + def testCbfsNamedEntries(self): + """Test handling of named entries""" + data = self._DoReadFile('110_cbfs_name.dts') + + cbfs = cbfs_util.CbfsReader(data) + self.assertIn('FRED', cbfs.files) + cfile1 = cbfs.files['FRED'] + self.assertEqual(U_BOOT_DATA, cfile1.data) + + self.assertIn('hello', cbfs.files) + cfile2 = cbfs.files['hello'] + self.assertEqual(U_BOOT_DTB_DATA, cfile2.data) +
if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/102_cbfs_raw.dts b/tools/binman/test/102_cbfs_raw.dts new file mode 100644 index 00000000000..779cbc121ad --- /dev/null +++ b/tools/binman/test/102_cbfs_raw.dts @@ -0,0 +1,20 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + size = <0xb0>; + u-boot { + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-type = "raw"; + }; + }; + }; +}; diff --git a/tools/binman/test/103_cbfs_raw_ppc.dts b/tools/binman/test/103_cbfs_raw_ppc.dts new file mode 100644 index 00000000000..df1caf092f4 --- /dev/null +++ b/tools/binman/test/103_cbfs_raw_ppc.dts @@ -0,0 +1,21 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + size = <0x100>; + cbfs-arch = "ppc64"; + u-boot { + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-type = "raw"; + }; + }; + }; +}; diff --git a/tools/binman/test/104_cbfs_stage.dts b/tools/binman/test/104_cbfs_stage.dts new file mode 100644 index 00000000000..215e2f287a4 --- /dev/null +++ b/tools/binman/test/104_cbfs_stage.dts @@ -0,0 +1,19 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + size = <0xb0>; + u-boot { + type = "blob"; + filename = "cbfs-stage.elf"; + cbfs-type = "stage"; + }; + }; + }; +}; diff --git a/tools/binman/test/105_cbfs_raw_compress.dts b/tools/binman/test/105_cbfs_raw_compress.dts new file mode 100644 index 00000000000..646168d84b4 --- /dev/null +++ b/tools/binman/test/105_cbfs_raw_compress.dts @@ -0,0 +1,26 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + size = <0x140>; + u-boot { + type = "text"; + text = "compress xxxxxxxxxxxxxxxxxxxxxx data"; + cbfs-type = "raw"; + cbfs-compress = "lz4"; + }; + u-boot-dtb { + type = "text"; + text = "compress 
xxxxxxxxxxxxxxxxxxxxxx data"; + cbfs-type = "raw"; + cbfs-compress = "lzma"; + }; + }; + }; +}; diff --git a/tools/binman/test/106_cbfs_bad_arch.dts b/tools/binman/test/106_cbfs_bad_arch.dts new file mode 100644 index 00000000000..4318d45a7d4 --- /dev/null +++ b/tools/binman/test/106_cbfs_bad_arch.dts @@ -0,0 +1,15 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + size = <0x100>; + cbfs-arch = "bad-arch"; + }; + }; +}; diff --git a/tools/binman/test/107_cbfs_no_size.dts b/tools/binman/test/107_cbfs_no_size.dts new file mode 100644 index 00000000000..3592f62f7e6 --- /dev/null +++ b/tools/binman/test/107_cbfs_no_size.dts @@ -0,0 +1,13 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + }; + }; +}; diff --git a/tools/binman/test/108_cbfs_no_contents.dts b/tools/binman/test/108_cbfs_no_contents.dts new file mode 100644 index 00000000000..623346760d2 --- /dev/null +++ b/tools/binman/test/108_cbfs_no_contents.dts @@ -0,0 +1,17 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + size = <0x100>; + _testing { + return-unknown-contents; + }; + }; + }; +}; diff --git a/tools/binman/test/109_cbfs_bad_compress.dts b/tools/binman/test/109_cbfs_bad_compress.dts new file mode 100644 index 00000000000..9695024ee9b --- /dev/null +++ b/tools/binman/test/109_cbfs_bad_compress.dts @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + size = <0xb0>; + u-boot { + cbfs-type = "raw"; + cbfs-compress = "invalid-algo"; + }; + }; + }; +}; diff --git a/tools/binman/test/110_cbfs_name.dts b/tools/binman/test/110_cbfs_name.dts new file mode 100644 index 00000000000..98c16f30b41 --- /dev/null +++ b/tools/binman/test/110_cbfs_name.dts @@ -0,0 +1,24 @@ +// 
SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + cbfs { + size = <0x100>; + u-boot { + cbfs-name = "FRED"; + cbfs-type = "raw"; + }; + + hello { + type = "blob"; + filename = "u-boot.dtb"; + cbfs-type = "raw"; + }; + }; + }; +};

Add support for putting CBFSs (Coreboot Filesystems) in an image. This allows binman to produce firmware images used by coreboot to boot.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Deal with travis's old lz4 version by skipping tests as necessary - Skip use of cbfstool in tests if it is not available
tools/binman/README.entries | 142 ++++++++++++++ tools/binman/control.py | 2 + tools/binman/etype/cbfs.py | 193 ++++++++++++++++++++ tools/binman/ftest.py | 110 +++++++++++ tools/binman/test/102_cbfs_raw.dts | 20 ++ tools/binman/test/103_cbfs_raw_ppc.dts | 21 +++ tools/binman/test/104_cbfs_stage.dts | 19 ++ tools/binman/test/105_cbfs_raw_compress.dts | 26 +++ tools/binman/test/106_cbfs_bad_arch.dts | 15 ++ tools/binman/test/107_cbfs_no_size.dts | 13 ++ tools/binman/test/108_cbfs_no_contents.dts | 17 ++ tools/binman/test/109_cbfs_bad_compress.dts | 18 ++ tools/binman/test/110_cbfs_name.dts | 24 +++ 13 files changed, 620 insertions(+) create mode 100644 tools/binman/etype/cbfs.py create mode 100644 tools/binman/test/102_cbfs_raw.dts create mode 100644 tools/binman/test/103_cbfs_raw_ppc.dts create mode 100644 tools/binman/test/104_cbfs_stage.dts create mode 100644 tools/binman/test/105_cbfs_raw_compress.dts create mode 100644 tools/binman/test/106_cbfs_bad_arch.dts create mode 100644 tools/binman/test/107_cbfs_no_size.dts create mode 100644 tools/binman/test/108_cbfs_no_contents.dts create mode 100644 tools/binman/test/109_cbfs_bad_compress.dts create mode 100644 tools/binman/test/110_cbfs_name.dts
Applied to u-boot-dm, thanks!
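For readers new to the format, a CBFS is essentially a sequence of file records, each starting with an 8-byte 'LARCHIVE' magic and a small big-endian header, which is why a boot loader can locate files simply by scanning for the magic. The sketch below illustrates this; the header layout follows coreboot's cbfs_file struct, but the master header and the alignment handling of a real CBFS are omitted for brevity:

```python
import struct

# coreboot's cbfs_file header: 8-byte magic, then four big-endian words:
# data length, file type, attributes offset and data offset
FILE_HEADER = '>8sIIII'
HDR_LEN = struct.calcsize(FILE_HEADER)   # 24 bytes
FILE_MAGIC = b'LARCHIVE'
TYPE_RAW = 0x50

def make_file(name, data):
    """Build one raw file record: header, NUL-terminated name, then data"""
    name_bytes = name.encode() + b'\0'
    pad = -len(name_bytes) % 4            # pad the name to a 4-byte boundary
    offset = HDR_LEN + len(name_bytes) + pad
    hdr = struct.pack(FILE_HEADER, FILE_MAGIC, len(data), TYPE_RAW, 0, offset)
    return hdr + name_bytes + b'\0' * pad + data

def scan(rom):
    """Locate files by searching for the magic, as a boot loader would"""
    files = {}
    pos = rom.find(FILE_MAGIC)
    while pos >= 0:
        _, size, _, _, offset = struct.unpack_from(FILE_HEADER, rom, pos)
        name = rom[pos + HDR_LEN:rom.index(b'\0', pos + HDR_LEN)].decode()
        files[name] = rom[pos + offset:pos + offset + size]
        pos = rom.find(FILE_MAGIC, pos + offset + size)
    return files

rom = make_file('u-boot', b'UBOOT') + make_file('u-boot-dtb', b'DTB')
files = scan(rom)
```

The CbfsReader used by the tests in this patch does this kind of parsing for real, with support for the master header, attributes and compression.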

An Integrated Firmware Image is used to hold various binaries used for booting with Apollolake and some later devices. Add support for this.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/etype/intel_descriptor.py | 6 +- tools/binman/etype/intel_ifwi.py | 100 ++++++++++++++++++ tools/binman/ftest.py | 55 ++++++++++ tools/binman/test/111_x86-rom-ifwi.dts | 29 +++++ tools/binman/test/112_x86-rom-ifwi-nodesc.dts | 28 +++++ tools/binman/test/113_x86-rom-ifwi-nodata.dts | 29 +++++ tools/binman/test/fitimage.bin.gz | Bin 0 -> 8418 bytes tools/binman/test/ifwi.bin.gz | Bin 0 -> 1884 bytes 8 files changed, 245 insertions(+), 2 deletions(-) create mode 100644 tools/binman/etype/intel_ifwi.py create mode 100644 tools/binman/test/111_x86-rom-ifwi.dts create mode 100644 tools/binman/test/112_x86-rom-ifwi-nodesc.dts create mode 100644 tools/binman/test/113_x86-rom-ifwi-nodata.dts create mode 100644 tools/binman/test/fitimage.bin.gz create mode 100644 tools/binman/test/ifwi.bin.gz
diff --git a/tools/binman/etype/intel_descriptor.py b/tools/binman/etype/intel_descriptor.py index 65ba2391e69..adea578080c 100644 --- a/tools/binman/etype/intel_descriptor.py +++ b/tools/binman/etype/intel_descriptor.py @@ -60,10 +60,12 @@ class Entry_intel_descriptor(Entry_blob): for i in range(MAX_REGIONS): self._regions.append(Region(self.data, frba, i))
- # Set the offset for ME (Management Engine) only, for now, since the - # others are not used + # Set the offset for ME (Management Engine) and IFWI (Integrated + # Firmware Image), for now, since the others are not used. info = {} if self.HasSibling('intel-me'): info['intel-me'] = [self._regions[REGION_ME].base, self._regions[REGION_ME].size] + if self.HasSibling('intel-ifwi'): + info['intel-ifwi'] = [self._regions[REGION_BIOS].base, None] return info diff --git a/tools/binman/etype/intel_ifwi.py b/tools/binman/etype/intel_ifwi.py new file mode 100644 index 00000000000..8c79b2dd291 --- /dev/null +++ b/tools/binman/etype/intel_ifwi.py @@ -0,0 +1,100 @@ +# SPDX-License-Identifier: GPL-2.0+ +# Copyright (c) 2016 Google, Inc +# Written by Simon Glass sjg@chromium.org +# +# Entry-type module for Intel Management Engine binary blob +# + +from collections import OrderedDict + +from entry import Entry +from blob import Entry_blob +import fdt_util +import tools + +class Entry_intel_ifwi(Entry_blob): + """Entry containing an Intel Integrated Firmware Image (IFWI) file + + Properties / Entry arguments: + - filename: Filename of file to read into entry. This is either the + IFWI file itself, or a file that can be converted into one using a + tool + - convert-fit: If present this indicates that the ifwitool should be + used to convert the provided file into a IFWI. + + This file contains code and data used by the SoC that is required to make + it work. It includes U-Boot TPL, microcode, things related to the CSE + (Converged Security Engine, the microcontroller that loads all the firmware) + and other items beyond the wit of man. + + A typical filename is 'ifwi.bin' for an IFWI file, or 'fitimage.bin' for a + file that will be converted to an IFWI. + + The position of this entry is generally set by the intel-descriptor entry. + + The contents of the IFWI are specified by the subnodes of the IFWI node. 
+ Each subnode describes an entry which is placed into the IFWI with a given + sub-partition (and optional entry name). + + See README.x86 for information about x86 binary blobs. + """ + def __init__(self, section, etype, node): + Entry_blob.__init__(self, section, etype, node) + self._convert_fit = fdt_util.GetBool(self._node, 'convert-fit') + self._ifwi_entries = OrderedDict() + self._ReadSubnodes() + + def ObtainContents(self): + """Get the contents for the IFWI + + Unfortunately we cannot create anything from scratch here, as Intel has + tools which create precursor binaries with lots of data and settings, + and these are not incorporated into binman. + + The first step is to get a file in the IFWI format. This is either + supplied directly or is extracted from a fitimage using the 'create' + subcommand. + + After that we delete the OBBP sub-partition and add each of the files + that we want in the IFWI file, one for each sub-entry of the IFWI node. + """ + self._pathname = tools.GetInputFilename(self._filename) + + # Create the IFWI file if needed + if self._convert_fit: + inname = self._pathname + outname = tools.GetOutputFilename('ifwi.bin') + tools.RunIfwiTool(inname, tools.CMD_CREATE, outname) + self._filename = 'ifwi.bin' + self._pathname = outname + else: + # Provide a different code path here to ensure we have test coverage + inname = self._pathname + + # Delete OBBP if it is there, then add the required new items. 
+ tools.RunIfwiTool(inname, tools.CMD_DELETE, subpart='OBBP') + + for entry in self._ifwi_entries.values(): + # First get the input data and put it in a file + if not entry.ObtainContents(): + return False + data = entry.GetData() + uniq = self.GetUniqueName() + input_fname = tools.GetOutputFilename('input.%s' % uniq) + tools.WriteFile(input_fname, data) + + tools.RunIfwiTool(inname, + tools.CMD_REPLACE if entry._ifwi_replace else tools.CMD_ADD, + input_fname, entry._ifwi_subpart, entry._ifwi_entry_name) + + self.ReadBlobContents() + return True + + def _ReadSubnodes(self): + """Read the subnodes to find out what should go in this IFWI""" + for node in self._node.subnodes: + entry = Entry.Create(self.section, node) + entry._ifwi_replace = fdt_util.GetBool(node, 'replace') + entry._ifwi_subpart = fdt_util.GetString(node, 'ifwi-subpart') + entry._ifwi_entry_name = fdt_util.GetString(node, 'ifwi-entry') + self._ifwi_entries[entry._ifwi_subpart] = entry diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 14abfbf774f..1355c4f55d3 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -27,6 +27,7 @@ import fdt import fdt_util import fmap_util import test_util +import gzip import state import tools import tout @@ -876,6 +877,9 @@ class TestFunctional(unittest.TestCase): def testPackX86RomMe(self): """Test that an x86 ROM with an ME region can be created""" data = self._DoReadFile('031_x86-rom-me.dts') + expected_desc = tools.ReadFile(self.TestFile('descriptor.bin')) + if data[:0x1000] != expected_desc: + self.fail('Expected descriptor binary at start of image') self.assertEqual(ME_DATA, data[0x1000:0x1000 + len(ME_DATA)])
def testPackVga(self): @@ -1956,6 +1960,57 @@ class TestFunctional(unittest.TestCase): cfile2 = cbfs.files['hello'] self.assertEqual(U_BOOT_DTB_DATA, cfile2.data)
+ def _SetupIfwi(self, fname): + """Set up to run an IFWI test + + Args: + fname: Filename of input file to provide (fitimage.bin or ifwi.bin) + """ + self._SetupSplElf() + + # Intel Integrated Firmware Image (IFWI) file + with gzip.open(self.TestFile('%s.gz' % fname), 'rb') as fd: + data = fd.read() + TestFunctional._MakeInputFile(fname, data) + + def _CheckIfwi(self, data): + """Check that an image with an IFWI contains the correct output + + Args: + data: Contents of output file + """ + expected_desc = tools.ReadFile(self.TestFile('descriptor.bin')) + if data[:0x1000] != expected_desc: + self.fail('Expected descriptor binary at start of image') + + # We expect to find the TPL in subpart IBBP, entry IBBL + image_fname = tools.GetOutputFilename('image.bin') + tpl_fname = tools.GetOutputFilename('tpl.out') + tools.RunIfwiTool(image_fname, tools.CMD_EXTRACT, fname=tpl_fname, + subpart='IBBP', entry_name='IBBL') + + tpl_data = tools.ReadFile(tpl_fname) + self.assertEqual(tpl_data[:len(U_BOOT_TPL_DATA)], U_BOOT_TPL_DATA) + + def testPackX86RomIfwi(self): + """Test that an x86 ROM with Integrated Firmware Image can be created""" + self._SetupIfwi('fitimage.bin') + data = self._DoReadFile('111_x86-rom-ifwi.dts') + self._CheckIfwi(data) + + def testPackX86RomIfwiNoDesc(self): + """Test that an x86 ROM with IFWI can be created from an ifwi.bin file""" + self._SetupIfwi('ifwi.bin') + data = self._DoReadFile('112_x86-rom-ifwi-nodesc.dts') + self._CheckIfwi(data) + + def testPackX86RomIfwiNoData(self): + """Test that an x86 ROM with IFWI handles missing data""" + self._SetupIfwi('ifwi.bin') + with self.assertRaises(ValueError) as e: + data = self._DoReadFile('113_x86-rom-ifwi-nodata.dts') + self.assertIn('Could not complete processing of contents', + str(e.exception))
if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/111_x86-rom-ifwi.dts b/tools/binman/test/111_x86-rom-ifwi.dts new file mode 100644 index 00000000000..63b5972cc8e --- /dev/null +++ b/tools/binman/test/111_x86-rom-ifwi.dts @@ -0,0 +1,29 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + sort-by-offset; + end-at-4gb; + size = <0x800000>; + intel-descriptor { + filename = "descriptor.bin"; + }; + + intel-ifwi { + offset-unset; + filename = "fitimage.bin"; + convert-fit; + + u-boot-tpl { + replace; + ifwi-subpart = "IBBP"; + ifwi-entry = "IBBL"; + }; + }; + }; +}; diff --git a/tools/binman/test/112_x86-rom-ifwi-nodesc.dts b/tools/binman/test/112_x86-rom-ifwi-nodesc.dts new file mode 100644 index 00000000000..21ec4654ffe --- /dev/null +++ b/tools/binman/test/112_x86-rom-ifwi-nodesc.dts @@ -0,0 +1,28 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + sort-by-offset; + end-at-4gb; + size = <0x800000>; + intel-descriptor { + filename = "descriptor.bin"; + }; + + intel-ifwi { + offset-unset; + filename = "ifwi.bin"; + + u-boot-tpl { + replace; + ifwi-subpart = "IBBP"; + ifwi-entry = "IBBL"; + }; + }; + }; +}; diff --git a/tools/binman/test/113_x86-rom-ifwi-nodata.dts b/tools/binman/test/113_x86-rom-ifwi-nodata.dts new file mode 100644 index 00000000000..62486fd990e --- /dev/null +++ b/tools/binman/test/113_x86-rom-ifwi-nodata.dts @@ -0,0 +1,29 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + sort-by-offset; + end-at-4gb; + size = <0x800000>; + intel-descriptor { + filename = "descriptor.bin"; + }; + + intel-ifwi { + offset-unset; + filename = "ifwi.bin"; + + _testing { + return-unknown-contents; + replace; + ifwi-subpart = "IBBP"; + ifwi-entry = "IBBL"; + }; + }; + }; +}; diff --git a/tools/binman/test/fitimage.bin.gz 
b/tools/binman/test/fitimage.bin.gz new file mode 100644 index 0000000000000000000000000000000000000000..0a9dcfc424845c89436f119eeda8983e12191364 GIT binary patch literal 8418 zcmb2|=HM`q;)`WqPRlIG%uP&B)l16EV|aVpI$I=Ag6%<mnt9=#BWpO4b~+vLOWC+N zGEDm5X){K_?udv=$0N5mnzpvuJ8;ilP_{_yCCgv&e=NC2ifq&M3zhD9#`K=wJm+M& zU)r}bMQPpNTX@)B&#nJbw@>iwic{C_dF=b~^?`Hc!()y2E=_)}AHVn9-x@!`?a$ax zzr0fzJx_kkYyR8jE{kOgS41Cs@VMwo&FeqCWk3Jr%+r+;%-1{H_bktB+NF)6b33x4 z?USzmm^aJ#Y{$DD#{X{4Ib_Ss!0@N1-23<Agx?9(x4wTguZTXCquni7ta;GiB<ODE z)vRjW>+epq8>v6)m>F|_ozktf7elWHSAVcnPA}eC<?nat<*ii`=}-Ttxo1gQUESUO z_-KLp!k&+;hO><;S6z<LY<_>{YtX#wvvQZDZ@j17wrsB+0|SF}?!$`zF6VB#UbHIP zbp7@7vLCOGW?OR4^M31Ud8#$@&lMHrGHJt8&lZ2&`sT)7<8$91@9(yk30u6kb@%qm z5nI>)HMz>Y{Cr8$<dW-YPwLI*-?}JaxA^bL8~yWo<NyA<v!^aK*dFLJh7aL4dU7p+ z3?OKTH+%bkGaH06N{)sALtRd);UbUqoo=hFH%8^2o?9L2vRV|BY#z)vd;6DtG=xS& zU^E0qLjWxVZg7p(A!tD|${h`X(GVC7fzc2c4S~@R7!85Z5Eu=C(GVC7fzc2c4S~@R z7!85Z5Eu=C(GVC7fzc2c4S~@RASnd?cT7Fnq181bW63Vz)mOj$$Z=mg?_|5t*296j zwUWZu&Rcyob?v?2)yx0xlsx%5z9#i=dhzn9QMaQQ85nX{xsU%=FZ!k3b?x!yy>a$i lx9Z%On{wd&ldm8bY};eMGaRUj;m<t(^-=eBGaRU9006rUF!lfd
literal 0 HcmV?d00001
diff --git a/tools/binman/test/ifwi.bin.gz b/tools/binman/test/ifwi.bin.gz new file mode 100644 index 0000000000000000000000000000000000000000..25d72892944d7de2813cede71ec96f581cece949 GIT binary patch literal 1884 zcmb2|=HQs3%ooeRoS9ahsh5<Q$ME*HwfErw5rzlCua);be)Q%-<dPjbMM?!jc5(FW zT3zyioug^9y<<4L`N5-S|1bS+cI~)|leg%8{W;&hn3$WZo-EM&@=hViuIokil0BmS zPfKcMoYYSi46T}d;-{<R#-OW<cWLG+J+pgj|6eoSD{$$?)LWG{;-4I^-P@OQ{ioCV zdHrEeGG8$o&OE33Jae7jk(_h=pY{mq^PF+6`Ra3i#d>ZAh64*sw_Dxbd0ww8D>=P* z^W@~OUw-I(Z)(?+JN{~)!PWBLwn@7MOY`Hd->*LY?P+wqeChi7+oxY`ZC%|RqCGwQ zUGTj-xATuq*m6Jp$^K}cjdFXdw_ILdTb}>t%TLSq^LW0RGXTN<Lk82Hf*3$hmviO+ zW;O_WlpGBKhJEGb0)b1`uAlfR%w}rv&%>pXfuSFnfC2L-b8%e#XaJ3dz-S1J%n+#C OAO70lJukxo2?hX1K&cV{
literal 0 HcmV?d00001

Applied to u-boot-dm, thanks!

When there is a lot of open space in a CBFS, it is normally padded with 'empty' files so that a sequential scan of the CBFS can skip from one file to the next without a break.
Add support for this.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: None
tools/binman/cbfs_util.py | 68 ++++++++++++++++++++++++++++++++-- tools/binman/cbfs_util_test.py | 23 +++++++++++- 2 files changed, 87 insertions(+), 4 deletions(-)
diff --git a/tools/binman/cbfs_util.py b/tools/binman/cbfs_util.py index 197cff89509..ec4a2e5a8c6 100644 --- a/tools/binman/cbfs_util.py +++ b/tools/binman/cbfs_util.py @@ -11,7 +11,8 @@ The format is somewhat defined by documentation in the coreboot tree although it is necessary to rely on the C structures and source code (mostly cbfstool) to fully understand it.
-Currently supported: raw and stage types with compression +Currently supported: raw and stage types with compression, padding empty areas + with empty files """
from __future__ import print_function @@ -102,6 +103,7 @@ ARCH_NAMES = { TYPE_CBFSHEADER = 0x02 # Master header, HEADER_FORMAT TYPE_STAGE = 0x10 # Stage, holding an executable, see STAGE_FORMAT TYPE_RAW = 0x50 # Raw file, possibly compressed +TYPE_EMPTY = 0xffffffff # Empty data
# Compression types COMPRESS_NONE, COMPRESS_LZMA, COMPRESS_LZ4 = range(3) @@ -152,6 +154,19 @@ def align_int(val, align): """ return int((val + align - 1) / align) * align
+def align_int_down(val, align): + """Align a value down to the given alignment + + Args: + val: Integer value to align + align: Integer alignment value (e.g. 4 to align to 4-byte boundary) + + Returns: + integer value aligned to the required boundary, rounding down if + necessary + """ + return int(val / align) * align + def _pack_string(instr): """Pack a string to the required aligned size by adding padding
@@ -184,6 +199,9 @@ class CbfsFile(object): entry: Entry address in memory if known, else None. This is where execution starts after the file is loaded base_address: Base address to use for 'stage' files + erase_byte: Erase byte to use for padding between the file header and + contents (used for empty files) + size: Size of the file in bytes (used for empty files) """ def __init__(self, name, ftype, data, compress=COMPRESS_NONE): self.name = name @@ -196,6 +214,8 @@ class CbfsFile(object): self.entry = None self.base_address = None self.data_len = 0 + self.erase_byte = None + self.size = None
def decompress(self): """Handle decompressing data if necessary""" @@ -242,6 +262,24 @@ class CbfsFile(object): """ return CbfsFile(name, TYPE_RAW, data, compress)
+ @classmethod + def empty(cls, space_to_use, erase_byte): + """Create a new empty file of a given size + + Args: + space_to_use: Size of available space, which must be at least as + large as the alignment size for this CBFS + erase_byte: Byte to use for contents of file (repeated through the + whole file) + + Returns: + CbfsFile object containing the file information + """ + cfile = CbfsFile('', TYPE_EMPTY, b'') + cfile.size = space_to_use - FILE_HEADER_LEN - FILENAME_ALIGN + cfile.erase_byte = erase_byte + return cfile + def get_data(self): """Obtain the contents of the file, in CBFS format
@@ -270,6 +308,8 @@ class CbfsFile(object): attr = struct.pack(ATTR_COMPRESSION_FORMAT, FILE_ATTR_TAG_COMPRESSION, ATTR_COMPRESSION_LEN, self.compress, len(orig_data)) + elif self.ftype == TYPE_EMPTY: + data = tools.GetBytes(self.erase_byte, self.size) else: raise ValueError('Unknown type %#x when writing\n' % self.ftype) if attr: @@ -357,6 +397,24 @@ class CbfsWriter(object): (offset, fd.tell())) fd.write(tools.GetBytes(self._erase_byte, offset - fd.tell()))
+ def _pad_to(self, fd, offset): + """Write out pad bytes and/or an empty file until a given offset + + Args: + fd: File object to write to + offset: Offset to write to + """ + self._align_to(fd, self._align) + upto = fd.tell() + if upto > offset: + raise ValueError('No space for data before pad offset %#x (current offset %#x)' % + (offset, upto)) + todo = align_int_down(offset - upto, self._align) + if todo: + cbf = CbfsFile.empty(todo, self._erase_byte) + fd.write(cbf.get_data()) + self._skip_to(fd, offset) + def _align_to(self, fd, align): + """Write out pad bytes until a given alignment is reached
@@ -416,7 +474,7 @@ class CbfsWriter(object): raise ValueError('No space for header at offset %#x (current offset %#x)' % (self._header_offset, fd.tell())) if not add_fileheader: - self._skip_to(fd, self._header_offset) + self._pad_to(fd, self._header_offset) hdr = struct.pack(HEADER_FORMAT, HEADER_MAGIC, HEADER_VERSION2, self._size, self._bootblock_size, self._align, self._contents_offset, self._arch, 0xffffffff) @@ -455,7 +513,7 @@ class CbfsWriter(object): self._write_header(fd, add_fileheader=self._add_fileheader)
# Pad to the end and write a pointer to the CBFS master header - self._skip_to(fd, self._base_address or self._size - 4) + self._pad_to(fd, self._base_address or self._size - 4) rel_offset = self._header_offset - self._size fd.write(struct.pack('<I', rel_offset & 0xffffffff))
@@ -596,6 +654,10 @@ class CbfsReader(object): cfile.decompress() if DEBUG: print('data', data) + elif ftype == TYPE_EMPTY: + # Just read the data and discard it, since it is only padding + fd.read(size) + cfile = CbfsFile('', TYPE_EMPTY, b'') else: raise ValueError('Unknown type %#x when reading\n' % ftype) if cfile: diff --git a/tools/binman/cbfs_util_test.py b/tools/binman/cbfs_util_test.py index 19086305af1..9bb6a298222 100755 --- a/tools/binman/cbfs_util_test.py +++ b/tools/binman/cbfs_util_test.py @@ -289,6 +289,16 @@ class TestCbfs(unittest.TestCase): self.assertIn('No space for header', str(e.exception))
def test_cbfs_no_space_skip(self): + """Check handling of running out of space in CBFS with file header""" + size = 0x5c + cbw = CbfsWriter(size, arch=cbfs_util.ARCHITECTURE_PPC64) + cbw._add_fileheader = True + cbw.add_file_raw('u-boot', U_BOOT_DATA) + with self.assertRaises(ValueError) as e: + cbw.get_data() + self.assertIn('No space for data before offset', str(e.exception)) + + def test_cbfs_no_space_pad(self): """Check handling of running out of space in CBFS with file header""" size = 0x70 cbw = CbfsWriter(size) @@ -296,7 +306,7 @@ class TestCbfs(unittest.TestCase): cbw.add_file_raw('u-boot', U_BOOT_DATA) with self.assertRaises(ValueError) as e: cbw.get_data() - self.assertIn('No space for data before offset', str(e.exception)) + self.assertIn('No space for data before pad offset', str(e.exception))
def test_cbfs_bad_header_ptr(self): """Check handling of a bad master-header pointer""" @@ -535,6 +545,17 @@ class TestCbfs(unittest.TestCase): cbfs_fname = self._get_expected_cbfs(size=size, compress=['lz4', 'lzma']) self._compare_expected_cbfs(data, cbfs_fname)
+ def test_cbfs_raw_space(self): + """Test files with unused space in the CBFS""" + size = 0xf0 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + cbw.add_file_raw('u-boot-dtb', U_BOOT_DTB_DATA) + data = cbw.get_data() + self._check_raw(data, size) + cbfs_fname = self._get_expected_cbfs(size=size) + self._compare_expected_cbfs(data, cbfs_fname) +
if __name__ == '__main__': unittest.main()

Applied to u-boot-dm, thanks!
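The mechanism can be sketched as follows. The gap is written out as a file of type 0xffffffff (TYPE_EMPTY) whose header records its length, so a reader stepping from header to header crosses the gap in a single hop instead of scanning byte by byte. This is a simplified model: the 24-byte header and single 16-byte name slot mirror the patch's constants, but the real writer also deals with alignment:

```python
import struct

FILE_HEADER = '>8sIIII'                 # magic, len, type, attr offset, offset
HDR_LEN = struct.calcsize(FILE_HEADER)  # 24 bytes
NAME_LEN = 16                           # one filename-alignment unit ('' name)
TYPE_RAW = 0x50
TYPE_EMPTY = 0xffffffff

def record(data, ftype):
    """One file record with an empty name slot"""
    hdr = struct.pack(FILE_HEADER, b'LARCHIVE', len(data), ftype, 0,
                      HDR_LEN + NAME_LEN)
    return hdr + b'\0' * NAME_LEN + data

def pad_to(image, offset, erase_byte=0xff):
    """Fill the space up to offset with a single 'empty' file"""
    space = offset - len(image)
    contents = bytes([erase_byte]) * (space - HDR_LEN - NAME_LEN)
    return image + record(contents, TYPE_EMPTY)

def walk(image):
    """Hop from record to record using the length field - no byte scanning"""
    pos, types = 0, []
    while pos < len(image):
        _, size, ftype, _, offset = struct.unpack_from(FILE_HEADER, image, pos)
        types.append(ftype)
        pos += offset + size
    return types

image = pad_to(record(b'UBOOT', TYPE_RAW), 0x80)
```

Here walk() visits exactly two records, the raw file and the empty padding file that fills the image out to 0x80 bytes.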

A feature of CBFS is that it allows files to be positioned at a particular offset (as with binman in general). This is useful to support execute-in-place (XIP) code, since it may not be relocatable.
Add a new cbfs-offset property to control this.
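The arithmetic behind this can be sketched as follows. To make the data start exactly at the requested cbfs-offset, the writer lays down the header and padded filename just before that offset. This is a simplified model (24-byte header, 16-byte filename alignment, no attributes); the real logic in cbfs_util.py handles more cases:

```python
HDR_LEN = 24      # size of the cbfs_file header
NAME_ALIGN = 16   # filenames are padded to this boundary

def header_start(cbfs_offset, name):
    """Where the file header must begin so the data lands at cbfs_offset"""
    # round the NUL-terminated name up to the alignment boundary
    name_space = (len(name) + 1 + NAME_ALIGN - 1) // NAME_ALIGN * NAME_ALIGN
    return cbfs_offset - HDR_LEN - name_space

def next_free(current, align=0x40):
    """Without cbfs-offset the file simply goes at the next aligned spot"""
    return (current + align - 1) // align * align

# for 'u-boot' the header starts 40 bytes (24 + 16) before the data
start = header_start(0x100000, 'u-boot')
```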
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Deal with travis's old lz4 version by skipping tests as necessary - Skip use of cbfstool in tests if it is not available
tools/binman/README.entries | 38 ++++++++ tools/binman/cbfs_util.py | 125 +++++++++++++++++++++----- tools/binman/cbfs_util_test.py | 82 +++++++++++++++-- tools/binman/etype/cbfs.py | 16 +++- tools/binman/ftest.py | 23 +++++ tools/binman/test/114_cbfs_offset.dts | 26 ++++++ 6 files changed, 276 insertions(+), 34 deletions(-) create mode 100644 tools/binman/test/114_cbfs_offset.dts
diff --git a/tools/binman/README.entries b/tools/binman/README.entries index 2e6aea1e84c..3241befc7f4 100644 --- a/tools/binman/README.entries +++ b/tools/binman/README.entries @@ -100,6 +100,7 @@ with the second subnode below: filename = "u-boot.dtb"; cbfs-type = "raw"; cbfs-compress = "lz4"; + cbfs-offset = <0x100000>; }; };
@@ -158,6 +159,15 @@ cbfs-type: to add a flat binary with a load/start address, similar to the 'add-flat-binary' option in cbfstool.
+cbfs-offset: + This is the offset of the file's data within the CBFS. It is used to + specify where the file should be placed in cases where a fixed position + is needed. Typical uses are for code which is not relocatable and must + execute in-place from a particular address. This works because SPI flash + is generally mapped into memory on x86 devices. The file header is + placed before this offset so that the data start lines up exactly with + the chosen offset. If this property is not provided, then the file is + placed in the next available spot.
The current implementation supports only a subset of CBFS features. It does not support other file types (e.g. payload), adding multiple files (like the @@ -334,6 +344,34 @@ See README.x86 for information about x86 binary blobs.
+Entry: intel-ifwi: Entry containing an Intel Integrated Firmware Image (IFWI) file +---------------------------------------------------------------------------------- + +Properties / Entry arguments: + - filename: Filename of file to read into entry. This is either the + IFWI file itself, or a file that can be converted into one using a + tool + - convert-fit: If present this indicates that ifwitool should be + used to convert the provided file into an IFWI. + +This file contains code and data used by the SoC that is required to make +it work. It includes U-Boot TPL, microcode, things related to the CSE +(Converged Security Engine, the microcontroller that loads all the firmware) +and other items beyond the wit of man. + +A typical filename is 'ifwi.bin' for an IFWI file, or 'fitimage.bin' for a +file that will be converted to an IFWI. + +The position of this entry is generally set by the intel-descriptor entry. + +The contents of the IFWI are specified by the subnodes of the IFWI node. +Each subnode describes an entry which is placed into the IFWI with a given +sub-partition (and optional entry name). + +See README.x86 for information about x86 binary blobs. + + + Entry: intel-me: Entry containing an Intel Management Engine (ME) file ----------------------------------------------------------------------
diff --git a/tools/binman/cbfs_util.py b/tools/binman/cbfs_util.py index ec4a2e5a8c6..1cdbcb2339e 100644 --- a/tools/binman/cbfs_util.py +++ b/tools/binman/cbfs_util.py @@ -12,7 +12,7 @@ it is necessary to rely on the C structures and source code (mostly cbfstool) to fully understand it.
Currently supported: raw and stage types with compression, padding empty areas - with empty files + with empty files, fixed-offset files """
from __future__ import print_function @@ -190,6 +190,8 @@ class CbfsFile(object): Properties: name: Name of file offset: Offset of file data from start of file header + cbfs_offset: Offset of file data in bytes from start of CBFS, or None to + place this file anywhere data: Contents of file, uncompressed data_len: Length of (possibly compressed) data in bytes ftype: File type (TYPE_...) @@ -203,9 +205,10 @@ contents (used for empty files) size: Size of the file in bytes (used for empty files) """ - def __init__(self, name, ftype, data, compress=COMPRESS_NONE): + def __init__(self, name, ftype, data, cbfs_offset, compress=COMPRESS_NONE): self.name = name self.offset = None + self.cbfs_offset = cbfs_offset self.data = data self.ftype = ftype self.compress = compress @@ -231,7 +234,7 @@ class CbfsFile(object): self.data_len = len(indata)
@classmethod - def stage(cls, base_address, name, data): + def stage(cls, base_address, name, data, cbfs_offset): """Create a new stage file
Args: @@ -239,28 +242,32 @@ class CbfsFile(object): name: String file name to put in CBFS (does not need to correspond to the name that the file originally came from) data: Contents of file + cbfs_offset: Offset of file data in bytes from start of CBFS, or + None to place this file anywhere
Returns: CbfsFile object containing the file information """ - cfile = CbfsFile(name, TYPE_STAGE, data) + cfile = CbfsFile(name, TYPE_STAGE, data, cbfs_offset) cfile.base_address = base_address return cfile
@classmethod - def raw(cls, name, data, compress): + def raw(cls, name, data, cbfs_offset, compress): """Create a new raw file
Args: name: String file name to put in CBFS (does not need to correspond to the name that the file originally came from) data: Contents of file + cbfs_offset: Offset of file data in bytes from start of CBFS, or + None to place this file anywhere compress: Compression algorithm to use (COMPRESS_...)
Returns: CbfsFile object containing the file information """ - return CbfsFile(name, TYPE_RAW, data, compress) + return CbfsFile(name, TYPE_RAW, data, cbfs_offset, compress)
@classmethod def empty(cls, space_to_use, erase_byte): @@ -275,12 +282,44 @@ class CbfsFile(object): Returns: CbfsFile object containing the file information """ - cfile = CbfsFile('', TYPE_EMPTY, b'') + cfile = CbfsFile('', TYPE_EMPTY, b'', None) cfile.size = space_to_use - FILE_HEADER_LEN - FILENAME_ALIGN cfile.erase_byte = erase_byte return cfile
- def get_data(self): + def calc_start_offset(self): + """Check if this file needs to start at a particular offset in CBFS + + Returns: + None if the file can be placed anywhere, or + the largest offset where the file could start (integer) + """ + if self.cbfs_offset is None: + return None + return self.cbfs_offset - self.get_header_len() + + def get_header_len(self): + """Get the length of headers required for a file + + This is the minimum length required before the actual data for this file + could start. It might start later if there is padding. + + Returns: + Total length of all non-data fields, in bytes + """ + name = _pack_string(self.name) + hdr_len = len(name) + FILE_HEADER_LEN + if self.ftype == TYPE_STAGE: + pass + elif self.ftype == TYPE_RAW: + hdr_len += ATTR_COMPRESSION_LEN + elif self.ftype == TYPE_EMPTY: + pass + else: + raise ValueError('Unknown file type %#x\n' % self.ftype) + return hdr_len + + def get_data(self, offset=None, pad_byte=None): """Obtain the contents of the file, in CBFS format
Returns: @@ -292,6 +331,7 @@ class CbfsFile(object): attr_pos = 0 content = b'' attr = b'' + pad = b'' data = self.data if self.ftype == TYPE_STAGE: elf_data = elf.DecodeElf(data, self.base_address) @@ -315,10 +355,33 @@ class CbfsFile(object): if attr: attr_pos = hdr_len hdr_len += len(attr) - hdr = struct.pack(FILE_HEADER_FORMAT, FILE_MAGIC, - len(content) + len(data), + if self.cbfs_offset is not None: + pad_len = self.cbfs_offset - offset - hdr_len + if pad_len < 0: # pragma: no cover + # Test coverage of this is not available since this should never + # happen. It indicates that get_header_len() provided an + # incorrect value (too small) so that we decided that we could + # put this file at the requested place, but in fact a previous + # file extends far enough into the CBFS that this is not + # possible. + raise ValueError("Internal error: CBFS file '%s': Requested offset %#x but current output position is %#x" % + (self.name, self.cbfs_offset, offset)) + pad = tools.GetBytes(pad_byte, pad_len) + hdr_len += pad_len + self.offset = len(content) + len(data) + hdr = struct.pack(FILE_HEADER_FORMAT, FILE_MAGIC, self.offset, self.ftype, attr_pos, hdr_len) - return hdr + name + attr + content + data + + # Do a sanity check of the get_header_len() function, to ensure that it + # stays in lockstep with this function + expected_len = self.get_header_len() + actual_len = len(hdr + name + attr) + if expected_len != actual_len: # pragma: no cover + # Test coverage of this is not available since this should never + # happen. It probably indicates that get_header_len() is broken. + raise ValueError("Internal error: CBFS file '%s': Expected headers of %#x bytes, got %#d" % + (self.name, expected_len, actual_len)) + return hdr + name + attr + pad + content + data
class CbfsWriter(object): @@ -431,34 +494,39 @@ class CbfsWriter(object): if offset < self._size: self._skip_to(fd, offset)
- def add_file_stage(self, name, data): + def add_file_stage(self, name, data, cbfs_offset=None): """Add a new stage file to the CBFS
Args: name: String file name to put in CBFS (does not need to correspond to the name that the file originally came from) data: Contents of file + cbfs_offset: Offset of this file's data within the CBFS, in bytes, + or None to place this file anywhere
Returns: CbfsFile object created """ - cfile = CbfsFile.stage(self._base_address, name, data) + cfile = CbfsFile.stage(self._base_address, name, data, cbfs_offset) self._files[name] = cfile return cfile
- def add_file_raw(self, name, data, compress=COMPRESS_NONE): + def add_file_raw(self, name, data, cbfs_offset=None, + compress=COMPRESS_NONE): """Create a new raw file
Args: name: String file name to put in CBFS (does not need to correspond to the name that the file originally came from) data: Contents of file + cbfs_offset: Offset of this file's data within the CBFS, in bytes, + or None to place this file anywhere compress: Compression algorithm to use (COMPRESS_...)
Returns: CbfsFile object created """ - cfile = CbfsFile.raw(name, data, compress) + cfile = CbfsFile.raw(name, data, cbfs_offset, compress) self._files[name] = cfile return cfile
@@ -507,7 +575,11 @@ class CbfsWriter(object):
# Write out each file for cbf in self._files.values(): - fd.write(cbf.get_data()) + # Place the file at its requested place, if any + offset = cbf.calc_start_offset() + if offset is not None: + self._pad_to(fd, align_int_down(offset, self._align)) + fd.write(cbf.get_data(fd.tell(), self._erase_byte)) self._align_to(fd, self._align) if not self._hdr_at_start: self._write_header(fd, add_fileheader=self._add_fileheader) @@ -639,25 +711,27 @@ class CbfsReader(object):
# Create the correct CbfsFile object depending on the type cfile = None - fd.seek(file_pos + offset, io.SEEK_SET) + cbfs_offset = file_pos + offset + fd.seek(cbfs_offset, io.SEEK_SET) if ftype == TYPE_CBFSHEADER: self._read_header(fd) elif ftype == TYPE_STAGE: data = fd.read(STAGE_LEN) - cfile = CbfsFile.stage(self.stage_base_address, name, b'') + cfile = CbfsFile.stage(self.stage_base_address, name, b'', + cbfs_offset) (cfile.compress, cfile.entry, cfile.load, cfile.data_len, cfile.memlen) = struct.unpack(STAGE_FORMAT, data) cfile.data = fd.read(cfile.data_len) elif ftype == TYPE_RAW: data = fd.read(size) - cfile = CbfsFile.raw(name, data, compress) + cfile = CbfsFile.raw(name, data, cbfs_offset, compress) cfile.decompress() if DEBUG: print('data', data) elif ftype == TYPE_EMPTY: # Just read the data and discard it, since it is only padding fd.read(size) - cfile = CbfsFile('', TYPE_EMPTY, b'') + cfile = CbfsFile('', TYPE_EMPTY, b'', cbfs_offset) else: raise ValueError('Unknown type %#x when reading\n' % ftype) if cfile: @@ -674,7 +748,8 @@ class CbfsReader(object): """Read attributes from the file
CBFS files can have attributes which are things that cannot fit into the - header. The only attribute currently supported is compression. + header. The only attributes currently supported are compression and the + unused tag.
Args: fd: File to read from @@ -703,6 +778,8 @@ class CbfsReader(object): # We don't currently use this information atag, alen, compress, _decomp_size = struct.unpack( ATTR_COMPRESSION_FORMAT, data) + elif atag == FILE_ATTR_TAG_UNUSED2: + break else: print('Unknown attribute tag %x' % atag) attr_size -= len(data) @@ -760,7 +837,7 @@ class CbfsReader(object): return val.decode('utf-8')
-def cbfstool(fname, *cbfs_args): +def cbfstool(fname, *cbfs_args, **kwargs): """Run cbfstool with provided arguments
If the tool fails then this function raises an exception and prints out the @@ -773,7 +850,9 @@ def cbfstool(fname, *cbfs_args): Returns: CommandResult object containing the results """ - args = ('cbfstool', fname) + cbfs_args + args = ['cbfstool', fname] + list(cbfs_args) + if kwargs.get('base') is not None: + args += ['-b', '%#x' % kwargs['base']] result = command.RunPipe([args], capture=not VERBOSE, capture_stderr=not VERBOSE, raise_on_error=False) if result.return_code: diff --git a/tools/binman/cbfs_util_test.py b/tools/binman/cbfs_util_test.py index 9bb6a298222..0fe4aa494ec 100755 --- a/tools/binman/cbfs_util_test.py +++ b/tools/binman/cbfs_util_test.py @@ -105,7 +105,7 @@ class TestCbfs(unittest.TestCase): return cbfs
def _check_uboot(self, cbfs, ftype=cbfs_util.TYPE_RAW, offset=0x38, - data=U_BOOT_DATA): + data=U_BOOT_DATA, cbfs_offset=None): """Check that the U-Boot file is as expected
Args: @@ -113,6 +113,7 @@ class TestCbfs(unittest.TestCase): ftype: Expected file type offset: Expected offset of file data: Expected data in file + cbfs_offset: Expected CBFS offset for file's data
Returns: CbfsFile object containing the file @@ -121,24 +122,30 @@ class TestCbfs(unittest.TestCase): cfile = cbfs.files['u-boot'] self.assertEqual('u-boot', cfile.name) self.assertEqual(offset, cfile.offset) + if cbfs_offset is not None: + self.assertEqual(cbfs_offset, cfile.cbfs_offset) self.assertEqual(data, cfile.data) self.assertEqual(ftype, cfile.ftype) self.assertEqual(cbfs_util.COMPRESS_NONE, cfile.compress) self.assertEqual(len(data), cfile.memlen) return cfile
- def _check_dtb(self, cbfs, offset=0x38, data=U_BOOT_DTB_DATA): + def _check_dtb(self, cbfs, offset=0x38, data=U_BOOT_DTB_DATA, + cbfs_offset=None): """Check that the U-Boot dtb file is as expected
Args: cbfs: CbfsReader object to check offset: Expected offset of file data: Expected data in file + cbfs_offset: Expected CBFS offset for file's data """ self.assertIn('u-boot-dtb', cbfs.files) cfile = cbfs.files['u-boot-dtb'] self.assertEqual('u-boot-dtb', cfile.name) self.assertEqual(offset, cfile.offset) + if cbfs_offset is not None: + self.assertEqual(cbfs_offset, cfile.cbfs_offset) self.assertEqual(U_BOOT_DTB_DATA, cfile.data) self.assertEqual(cbfs_util.TYPE_RAW, cfile.ftype) self.assertEqual(cbfs_util.COMPRESS_NONE, cfile.compress) @@ -157,13 +164,14 @@ class TestCbfs(unittest.TestCase): self._check_uboot(cbfs) self._check_dtb(cbfs)
- def _get_expected_cbfs(self, size, arch='x86', compress=None): + def _get_expected_cbfs(self, size, arch='x86', compress=None, base=None): """Get the file created by cbfstool for a particular scenario
Args: size: Size of the CBFS in bytes arch: Architecture of the CBFS, as a string compress: Compression to use, e.g. cbfs_util.COMPRESS_LZMA + base: Base address of file, or None to put it anywhere
Returns: Resulting CBFS file, or None if cbfstool is not available @@ -172,14 +180,18 @@ class TestCbfs(unittest.TestCase): return None cbfs_fname = os.path.join(self._indir, 'test.cbfs') cbfs_util.cbfstool(cbfs_fname, 'create', '-m', arch, '-s', '%#x' % size) + if base: + base = [(1 << 32) - size + b for b in base] cbfs_util.cbfstool(cbfs_fname, 'add', '-n', 'u-boot', '-t', 'raw', '-c', compress and compress[0] or 'none', '-f', tools.GetInputFilename( - compress and 'compress' or 'u-boot.bin')) + compress and 'compress' or 'u-boot.bin'), + base=base[0] if base else None) cbfs_util.cbfstool(cbfs_fname, 'add', '-n', 'u-boot-dtb', '-t', 'raw', '-c', compress and compress[1] or 'none', '-f', tools.GetInputFilename( - compress and 'compress' or 'u-boot.dtb')) + compress and 'compress' or 'u-boot.dtb'), + base=base[1] if base else None) return cbfs_fname
def _compare_expected_cbfs(self, data, cbfstool_fname): @@ -407,7 +419,7 @@ class TestCbfs(unittest.TestCase): self.skipTest('lz4 --no-frame-crc not available') size = 0x140 cbw = CbfsWriter(size) - cbw.add_file_raw('u-boot', COMPRESS_DATA, + cbw.add_file_raw('u-boot', COMPRESS_DATA, None, compress=cbfs_util.COMPRESS_LZ4) data = cbw.get_data()
@@ -431,7 +443,7 @@ class TestCbfs(unittest.TestCase): self.skipTest('lz4 --no-frame-crc not available') size = 0x140 cbw = CbfsWriter(size) - cbw.add_file_raw('u-boot', COMPRESS_DATA, + cbw.add_file_raw('u-boot', COMPRESS_DATA, None, compress=cbfs_util.COMPRESS_LZ4) data = cbw.get_data()
@@ -517,9 +529,9 @@ class TestCbfs(unittest.TestCase): self.skipTest('lz4 --no-frame-crc not available') size = 0x140 cbw = CbfsWriter(size) - cbw.add_file_raw('u-boot', COMPRESS_DATA, + cbw.add_file_raw('u-boot', COMPRESS_DATA, None, compress=cbfs_util.COMPRESS_LZ4) - cbw.add_file_raw('u-boot-dtb', COMPRESS_DATA, + cbw.add_file_raw('u-boot-dtb', COMPRESS_DATA, None, compress=cbfs_util.COMPRESS_LZMA) data = cbw.get_data()
@@ -556,6 +568,58 @@ class TestCbfs(unittest.TestCase): cbfs_fname = self._get_expected_cbfs(size=size) self._compare_expected_cbfs(data, cbfs_fname)
+ def test_cbfs_offset(self): + """Test a CBFS with files at particular offsets""" + size = 0x200 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA, 0x40) + cbw.add_file_raw('u-boot-dtb', U_BOOT_DTB_DATA, 0x140) + + data = cbw.get_data() + cbfs = self._check_hdr(data, size) + self._check_uboot(cbfs, ftype=cbfs_util.TYPE_RAW, offset=0x40, + cbfs_offset=0x40) + self._check_dtb(cbfs, offset=0x40, cbfs_offset=0x140) + + cbfs_fname = self._get_expected_cbfs(size=size, base=(0x40, 0x140)) + self._compare_expected_cbfs(data, cbfs_fname) + + def test_cbfs_invalid_file_type_header(self): + """Check handling of an invalid file type when outputting a header""" + size = 0xb0 + cbw = CbfsWriter(size) + cfile = cbw.add_file_raw('u-boot', U_BOOT_DATA, 0) + + # Change the type manually before generating the CBFS, and make sure + # that the generator complains + cfile.ftype = 0xff + with self.assertRaises(ValueError) as e: + cbw.get_data() + self.assertIn('Unknown file type 0xff', str(e.exception)) + + def test_cbfs_offset_conflict(self): + """Test a CBFS with files that want to overlap""" + size = 0x200 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA, 0x40) + cbw.add_file_raw('u-boot-dtb', U_BOOT_DTB_DATA, 0x80) + + with self.assertRaises(ValueError) as e: + cbw.get_data() + self.assertIn('No space for data before pad offset', str(e.exception)) + + def test_cbfs_check_offset(self): + """Test that we can discover the offset of a file after writing it""" + size = 0xb0 + cbw = CbfsWriter(size) + cbw.add_file_raw('u-boot', U_BOOT_DATA) + cbw.add_file_raw('u-boot-dtb', U_BOOT_DTB_DATA) + data = cbw.get_data() + + cbfs = cbfs_util.CbfsReader(data) + self.assertEqual(0x38, cbfs.files['u-boot'].cbfs_offset) + self.assertEqual(0x78, cbfs.files['u-boot-dtb'].cbfs_offset) +
if __name__ == '__main__': unittest.main() diff --git a/tools/binman/etype/cbfs.py b/tools/binman/etype/cbfs.py index 513df217bc1..49baa6a4f63 100644 --- a/tools/binman/etype/cbfs.py +++ b/tools/binman/etype/cbfs.py @@ -52,6 +52,7 @@ class Entry_cbfs(Entry): filename = "u-boot.dtb"; cbfs-type = "raw"; cbfs-compress = "lz4"; + cbfs-offset = <0x100000>; }; };
@@ -110,6 +111,15 @@ class Entry_cbfs(Entry): to add a flat binary with a load/start address, similar to the 'add-flat-binary' option in cbfstool.
+ cbfs-offset: + This is the offset of the file's data within the CBFS. It is used to + specify where the file should be placed in cases where a fixed position + is needed. Typical uses are for code which is not relocatable and must + execute in-place from a particular address. This works because SPI flash + is generally mapped into memory on x86 devices. The file header is + placed before this offset so that the data start lines up exactly with + the chosen offset. If this property is not provided, then the file is + placed in the next available spot.
The current implementation supports only a subset of CBFS features. It does not support other file types (e.g. payload), adding multiple files (like the @@ -172,9 +182,10 @@ class Entry_cbfs(Entry): return False data = entry.GetData() if entry._type == 'raw': - cbfs.add_file_raw(entry._cbfs_name, data, entry._cbfs_compress) + cbfs.add_file_raw(entry._cbfs_name, data, entry._cbfs_offset, + entry._cbfs_compress) elif entry._type == 'stage': - cbfs.add_file_stage(entry._cbfs_name, data) + cbfs.add_file_stage(entry._cbfs_name, data, entry._cbfs_offset) data = cbfs.get_data() self.SetContents(data) return True @@ -186,6 +197,7 @@ class Entry_cbfs(Entry): entry._cbfs_name = fdt_util.GetString(node, 'cbfs-name', entry.name) entry._type = fdt_util.GetString(node, 'cbfs-type') compress = fdt_util.GetString(node, 'cbfs-compress', 'none') + entry._cbfs_offset = fdt_util.GetInt(node, 'cbfs-offset') entry._cbfs_compress = cbfs_util.find_compress(compress) if entry._cbfs_compress is None: self.Raise("Invalid compression in '%s': '%s'" % diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 1355c4f55d3..5bde8aa30a1 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -2012,5 +2012,28 @@ class TestFunctional(unittest.TestCase): self.assertIn('Could not complete processing of contents', str(e.exception))
+ def testCbfsOffset(self): + """Test a CBFS with files at particular offsets + + Like all CBFS tests, this is just checking the logic that calls + cbfs_util. See cbfs_util_test for full tests (e.g. test_cbfs_offset()). + """ + data = self._DoReadFile('114_cbfs_offset.dts') + size = 0x200 + + cbfs = cbfs_util.CbfsReader(data) + self.assertEqual(size, cbfs.rom_size) + + self.assertIn('u-boot', cbfs.files) + cfile = cbfs.files['u-boot'] + self.assertEqual(U_BOOT_DATA, cfile.data) + self.assertEqual(0x40, cfile.cbfs_offset) + + self.assertIn('u-boot-dtb', cbfs.files) + cfile2 = cbfs.files['u-boot-dtb'] + self.assertEqual(U_BOOT_DTB_DATA, cfile2.data) + self.assertEqual(0x140, cfile2.cbfs_offset) + + if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/114_cbfs_offset.dts b/tools/binman/test/114_cbfs_offset.dts new file mode 100644 index 00000000000..7aa9d9d4bf3 --- /dev/null +++ b/tools/binman/test/114_cbfs_offset.dts @@ -0,0 +1,26 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + sort-by-offset; + end-at-4gb; + size = <0x200>; + cbfs { + size = <0x200>; + offset = <0xfffffe00>; + u-boot { + cbfs-offset = <0x40>; + cbfs-type = "raw"; + }; + u-boot-dtb { + cbfs-offset = <0x140>; + cbfs-type = "raw"; + }; + }; + }; +};
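As a worked check of the test device tree above: with end-at-4gb and a 0x200-byte image, the ROM is mapped so that it ends at 4GB, so a cbfs-offset maps to an absolute XIP address. This hypothetical helper mirrors the base computation used in the test's _get_expected_cbfs():

```python
def cbfs_offset_to_addr(cbfs_offset, rom_size):
    # The ROM ends at 4GB, so its first byte sits at 2**32 - rom_size;
    # an offset within the CBFS is relative to that base
    return (1 << 32) - rom_size + cbfs_offset

# cbfs-offset 0x40 in a 0x200-byte ROM maps to address 0xfffffe40
```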

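For reference, the 0x38 data offsets seen in the tests follow from the file-header layout. A rough model, with struct sizes assumed from cbfs_util (24-byte file header, 16-byte filename alignment, 16-byte compression attribute):

```python
FILE_HEADER_LEN = 24       # struct cbfs_file (assumed size)
FILENAME_ALIGN = 16        # name is NUL-terminated, padded to this
ATTR_COMPRESSION_LEN = 16  # compression attribute record (assumed size)

def header_len(name, has_compression_attr):
    # Round the NUL-terminated name up to the alignment boundary
    name_len = (len(name) + 1 + FILENAME_ALIGN - 1) & ~(FILENAME_ALIGN - 1)
    total = FILE_HEADER_LEN + name_len
    if has_compression_attr:  # raw files carry a compression attribute
        total += ATTR_COMPRESSION_LEN
    return total

# 'u-boot' as a raw file: 24 + 16 + 16 == 0x38, matching offset=0x38 above
```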