[PATCH v3 00/26] binman: rockchip: Migrate from rockchip SPL_FIT_GENERATOR script

At present rockchip 64-bit boards make use of a FIT-generator script written in Python. The script supports splitting an ELF file into several 'loadable' nodes in the FIT. Binman does not currently support this feature.
This series adds binman support for ELF splitting. This works by adding a new 'fit,operation' property to the FIT subnodes, allowing this new way of generating nodes.
Some other fixes and improvements are needed along the way.
A new, common binman description is added for 64-bit boards which includes the required u-boot.itb file.
The existing script is removed, so that only a few zynq boards are now using a SPL_FIT_GENERATOR script:
avnet_ultrazedev_cc_v1_0_ultrazedev_som_v1_0 xilinx_zynqmp_virt
Migration of those is hopefully in progress.
Note, however, that tools/k3_fit_atf.sh remains, used by a few boards that enable CONFIG_TI_SECURE_DEVICE, so this series is copied there too:
am335x_hs_evm am335x_hs_evm_uart am43xx_hs_evm am57xx_hs_evm am57xx_hs_evm_usb am65x_hs_evm_a53 am65x_hs_evm_r5 dra7xx_hs_evm dra7xx_hs_evm_usb j721e_hs_evm_a72 j721e_hs_evm_r5 k2e_hs_evm k2g_hs_evm k2hk_hs_evm k2l_hs_evm
Ivan Mikhaylov has sent a patch to help with these, but I need to take a look at the testing side. In any case they should really be using binman for the image generation.
Changes in v3:
- Fix 'whic' typo
- Drop unnecessary variable
- Add tests for an empty string/stringlist too
- Rename some more things to 'extend'
- Fix 'He' typo
- Drop the base_node argument and use self._node instead
- Rename function to _raise_subnode() so it is clear it is not like Raise()
- Fix 'segmnet' typo
- Use seq == 0 instead of 'not seq'
- Add an offset to the FIT description
Changes in v2:
- Add new patch to make GetArgs() more flexible
- Add new patch to remove remove_defconfig()
- Add new patch to use re.fullmatch() to avoid extra check
- Add new patch to correct Kconfig help for TPL_BINMAN_SYMBOLS
- Add new patch to tidy up implementation of AddStringList()
- Add new patch to rename load_segments() and module failure
- Add new patch to tweak collect_contents_to_file() and docs
- Add patch to rename ExpandToLimit to extend_to_limit
- Add patch to rename ExpandEntries to gen_entries
- Add new patch to refactor fit to generate output at the end
- Add a patch to change how faked blobs are created
- Add a patch to make fake blobs zero-sized by default
- Add a patch to allow mkimage to use a non-zero fake-blob size
- Rewrite this to use the new FIT entry-type implementation
- Rename op-tee to tee-os
- Rename op-tee to tee-os
- Drop use of .itb2
- Drop patches previously applied
- Add various suggestions from Alper Nebi Yasak
- Add patches to refactor binman's FIT support
Simon Glass (26):
  dtoc: Make GetArgs() more flexible
  moveconfig: Remove remove_defconfig()
  moveconfig: Use re.fullmatch() to avoid extra check
  spl: Correct Kconfig help for TPL_BINMAN_SYMBOLS
  dtoc: Tidy up implementation of AddStringList()
  elf: Rename load_segments() and module failure
  binman: Tweak collect_contents_to_file() and docs
  binman: Rename ExpandToLimit to extend_to_limit
  binman: Rename ExpandEntries to gen_entries
  binman: Refactor fit to generate output at the end
  binman: Rename tools parameter to btools
  binman: Change how faked blobs are created
  binman: Make fake blobs zero-sized by default
  binman: Allow mkimage to use a non-zero fake-blob size
  binman: Read the fit entries only once
  binman: Update fit to move node reading into the ReadNode() method
  binman: Fix some pylint warnings in fit
  binman: Add a consistent way to report errors with fit
  binman: Update fit to use node instead of subnode
  binman: Keep a separate list of entries for fit
  binman: Support splitting an ELF file into multiple nodes
  rockchip: evb-rk3288: Drop raw-image support
  rockchip: Include binman script in 64-bit boards
  rockchip: Support building the all output files in binman
  rockchip: Convert all boards to use binman
  rockchip: Drop the FIT generator script
 Makefile | 42 +-
 arch/arm/dts/px30-u-boot.dtsi | 2 +
 arch/arm/dts/rk3308-u-boot.dtsi | 2 +
 arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi | 2 +
 arch/arm/dts/rk3328-u-boot.dtsi | 2 +
 arch/arm/dts/rk3368-u-boot.dtsi | 1 +
 arch/arm/dts/rk3399-u-boot.dtsi | 5 +-
 arch/arm/dts/rk3568-u-boot.dtsi | 2 +
 arch/arm/dts/rockchip-u-boot.dtsi | 85 ++-
 arch/arm/mach-rockchip/Kconfig | 6 +
 arch/arm/mach-rockchip/make_fit_atf.py | 240 ---------
 boot/Kconfig | 3 +-
 common/spl/Kconfig | 6 +-
 configs/evb-rk3288_defconfig | 1 +
 tools/binman/binman.rst | 32 +-
 tools/binman/control.py | 2 +-
 tools/binman/elf.py | 6 +-
 tools/binman/elf_test.py | 18 +-
 tools/binman/entries.rst | 146 +++++
 tools/binman/entry.py | 51 +-
 tools/binman/etype/_testing.py | 2 +-
 tools/binman/etype/blob.py | 10 +-
 tools/binman/etype/blob_ext_list.py | 2 +-
 tools/binman/etype/blob_phase.py | 2 +-
 tools/binman/etype/files.py | 2 +-
 tools/binman/etype/fit.py | 508 ++++++++++++++----
 tools/binman/etype/gbb.py | 4 +-
 tools/binman/etype/intel_ifwi.py | 4 +-
 tools/binman/etype/mkimage.py | 19 +-
 tools/binman/etype/section.py | 24 +-
 tools/binman/etype/vblock.py | 4 +-
 tools/binman/ftest.py | 208 ++++++-
 ...88_expand_size.dts => 088_extend_size.dts} | 8 +-
 ...d_size_bad.dts => 089_extend_size_bad.dts} | 2 +-
 ..._entry_expand.dts => 121_entry_extend.dts} | 0
 ...d_twice.dts => 122_entry_extend_twice.dts} | 0
 ...ction.dts => 123_entry_extend_section.dts} | 0
 tools/binman/test/224_fit_bad_oper.dts | 2 -
 tools/binman/test/225_expand_size_bad.dts | 10 +
 tools/binman/test/226_fit_split_elf.dts | 67 +++
 tools/binman/test/227_fit_bad_dir.dts | 9 +
 tools/binman/test/228_fit_bad_dir_config.dts | 9 +
 tools/binman/test/229_mkimage_missing.dts | 18 +
 tools/dtoc/fdt.py | 4 +-
 tools/dtoc/fdt_util.py | 8 +-
 tools/dtoc/test/dtoc_test_simple.dts | 2 +
 tools/dtoc/test_fdt.py | 14 +-
 tools/moveconfig.py | 16 +-
 48 files changed, 1080 insertions(+), 532 deletions(-)
 delete mode 100755 arch/arm/mach-rockchip/make_fit_atf.py
 rename tools/binman/test/{088_expand_size.dts => 088_extend_size.dts} (88%)
 rename tools/binman/test/{089_expand_size_bad.dts => 089_extend_size_bad.dts} (90%)
 rename tools/binman/test/{121_entry_expand.dts => 121_entry_extend.dts} (100%)
 rename tools/binman/test/{122_entry_expand_twice.dts => 122_entry_extend_twice.dts} (100%)
 rename tools/binman/test/{123_entry_expand_section.dts => 123_entry_extend_section.dts} (100%)
 create mode 100644 tools/binman/test/225_expand_size_bad.dts
 create mode 100644 tools/binman/test/226_fit_split_elf.dts
 create mode 100644 tools/binman/test/227_fit_bad_dir.dts
 create mode 100644 tools/binman/test/228_fit_bad_dir_config.dts
 create mode 100644 tools/binman/test/229_mkimage_missing.dts

At present it is not possible to pass arguments which include spaces. Update the function to split the args only if the property is a single string; a string list is used as-is. This is a bit inconsistent, but might still be useful.
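For reference, a minimal standalone sketch of the new splitting rule (plain Python lists stand in for the devicetree property values, so this is illustrative only, not the dtoc code itself):

    def split_args(value):
        """Mimic the new GetArgs() behaviour on a list of strings.

        A single string is split on whitespace; a multi-element string list
        is used as-is, so individual entries may contain spaces.
        """
        if not value:
            return []
        if len(value) == 1:
            return value[0].split()
        return value

    print(split_args(['-n first second -p 123,456 -x']))
    # ['-n', 'first', 'second', '-p', '123,456', '-x']
    print(split_args(['-n first', 'second', '-p', '123,456', '-x']))
    # ['-n first', 'second', '-p', '123,456', '-x']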
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v3: - Fix 'whic' typo
Changes in v2: - Add new patch to make GetArgs() more flexible
 tools/dtoc/fdt_util.py | 8 ++++++--
 tools/dtoc/test/dtoc_test_simple.dts | 2 ++
 tools/dtoc/test_fdt.py | 6 +++++-
 3 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/tools/dtoc/fdt_util.py b/tools/dtoc/fdt_util.py
index c82e7747aa..57550624bb 100644
--- a/tools/dtoc/fdt_util.py
+++ b/tools/dtoc/fdt_util.py
@@ -192,8 +192,12 @@ def GetArgs(node, propname):
         value = GetStringList(node, propname)
     else:
         value = []
-    lists = [v.split() for v in value]
-    args = [x for l in lists for x in l]
+    if not value:
+        args = []
+    elif len(value) == 1:
+        args = value[0].split()
+    else:
+        args = value
     return args

 def GetBool(node, propname, default=False):
diff --git a/tools/dtoc/test/dtoc_test_simple.dts b/tools/dtoc/test/dtoc_test_simple.dts
index 2d321fb034..aef07efeae 100644
--- a/tools/dtoc/test/dtoc_test_simple.dts
+++ b/tools/dtoc/test/dtoc_test_simple.dts
@@ -63,5 +63,7 @@
 	orig-node {
 		orig = <1 23 4>;
 		args = "-n first", "second", "-p", "123,456", "-x";
+		args2 = "a space", "there";
+		args3 = "-n first second -p 123,456 -x";
 	};
 };
diff --git a/tools/dtoc/test_fdt.py b/tools/dtoc/test_fdt.py
index 28231e57b1..ea707f2f87 100755
--- a/tools/dtoc/test_fdt.py
+++ b/tools/dtoc/test_fdt.py
@@ -659,8 +659,12 @@ class TestFdtUtil(unittest.TestCase):
             ['multi-word', 'message'],
             fdt_util.GetArgs(self.node, 'stringarray'))
         self.assertEqual([], fdt_util.GetArgs(self.node, 'boolval'))
-        self.assertEqual(['-n', 'first', 'second', '-p', '123,456', '-x'],
+        self.assertEqual(['-n first', 'second', '-p', '123,456', '-x'],
                          fdt_util.GetArgs(node, 'args'))
+        self.assertEqual(['a space', 'there'],
+                         fdt_util.GetArgs(node, 'args2'))
+        self.assertEqual(['-n', 'first', 'second', '-p', '123,456', '-x'],
+                         fdt_util.GetArgs(node, 'args3'))
         with self.assertRaises(ValueError) as exc:
             fdt_util.GetArgs(self.node, 'missing')
         self.assertIn(

The remove_defconfig() helper is not necessary if simpler code is used. Use split() instead and drop the unnecessary [], so that a generator expression is passed to join().
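A standalone illustration of the replacement (the config names here are hypothetical):

    # str.split() with a non-matching separator returns the whole string, so
    # names without the suffix pass through unchanged
    for name in ('sandbox_defconfig', 'rk3399-evb'):
        print(name.split('_defconfig')[0])
    # sandbox
    # rk3399-evb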
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v2)
Changes in v2: - Add new patch to remove remove_defconfig()
 tools/moveconfig.py | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/tools/moveconfig.py b/tools/moveconfig.py
index d4a96ef45d..ecc6e16c6c 100755
--- a/tools/moveconfig.py
+++ b/tools/moveconfig.py
@@ -94,17 +94,6 @@ SIZES = {
 RE_REMOVE_DEFCONFIG = re.compile(r'(.*)_defconfig')

 ### helper functions ###
-def remove_defconfig(defc):
-    """Drop the _defconfig suffix on a string
-
-    Args:
-        defc (str): String to convert
-
-    Returns:
-        str: string with the '_defconfig' suffix removed
-    """
-    return RE_REMOVE_DEFCONFIG.match(defc)[1]
-
 def check_top_directory():
     """Exit if we are not at the top of source directory."""
     for fname in 'README', 'Licenses':
@@ -1668,7 +1657,7 @@ def do_find_config(config_list):
         print(f"Error: Not in Kconfig: %s" % ' '.join(adhoc))
     else:
         print(f'{len(out)} matches')
-        print(' '.join([remove_defconfig(item) for item in out]))
+        print(' '.join(item.split('_defconfig')[0] for item in out))


 def prefix_config(cfg):

Simplify the code by using the available function.
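A small sketch of the equivalence, with a made-up pattern and config name:

    import re

    re_match = re.compile(r'CONFIG_ROCKCHIP_RK33.*')
    cfg = 'CONFIG_ROCKCHIP_RK3399'

    # Old form: match, then check that the match spans the whole string
    m_cfg = re_match.match(cfg)
    old = bool(m_cfg and m_cfg.span()[1] == len(cfg))

    # New form: fullmatch() only succeeds if the whole string matches
    new = bool(re_match.fullmatch(cfg))

    assert old == new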
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v3: - Drop unnecessary variable
Changes in v2: - Add new patch to use re.fullmatch() to avoid extra check
 tools/moveconfig.py | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/tools/moveconfig.py b/tools/moveconfig.py
index ecc6e16c6c..84bc875fff 100755
--- a/tools/moveconfig.py
+++ b/tools/moveconfig.py
@@ -1607,8 +1607,7 @@ def defconfig_matches(configs, re_match):
         bool: True if any CONFIG matches the regex
     """
     for cfg in configs:
-        m_cfg = re_match.match(cfg)
-        if m_cfg and m_cfg.span()[1] == len(cfg):
+        if re_match.fullmatch(cfg):
             return True
     return False

Fix the help text, which should refer to TPL, not SPL.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v2)
Changes in v2: - Add new patch to correct Kconfig help for TPL_BINMAN_SYMBOLS
 common/spl/Kconfig | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/common/spl/Kconfig b/common/spl/Kconfig
index 9418d37b2e..dc319adeac 100644
--- a/common/spl/Kconfig
+++ b/common/spl/Kconfig
@@ -1334,16 +1334,16 @@ config TPL_SIZE_LIMIT
 	  If this value is zero, it is ignored.

 config TPL_BINMAN_SYMBOLS
-	bool "Declare binman symbols in SPL"
+	bool "Declare binman symbols in TPL"
 	depends on SPL_FRAMEWORK && BINMAN
 	default y
 	help
-	  This enables use of symbols in TPL which refer to U-Boot, enabling SPL
+	  This enables use of symbols in TPL which refer to U-Boot, enabling TPL
 	  to obtain the location of U-Boot simply by calling spl_get_image_pos()
 	  and spl_get_image_size().

 	  For this to work, you must have a U-Boot image in the binman image, so
-	  binman can update SPL with the location of it.
+	  binman can update TPL with the location of it.

 config TPL_FRAMEWORK
 	bool "Support TPL based upon the common SPL framework"

Refactor this to avoid a loop. Also add a test for an empty string.
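The loop collapses to a single join; a standalone sketch of the encoding:

    def encode_stringlist(val):
        """Encode a list of strings as NUL-terminated devicetree stringlist data."""
        return b'\0'.join(bytes(s, 'utf-8') for s in val) + b'\0' if val else b''

    assert encode_stringlist(['123', '456']) == b'123\x00456\x00'
    assert encode_stringlist([]) == b''   # empty list produces an empty property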
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v3: - Add tests for an empty string/stringlist too
Changes in v2: - Add new patch to tidy up implementation of AddStringList()
 tools/dtoc/fdt.py | 4 +---
 tools/dtoc/test_fdt.py | 8 ++++++++
 2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/tools/dtoc/fdt.py b/tools/dtoc/fdt.py
index c16909a876..d933972918 100644
--- a/tools/dtoc/fdt.py
+++ b/tools/dtoc/fdt.py
@@ -516,9 +516,7 @@ class Node:
         Returns:
             Prop added
         """
-        out = b''
-        for string in val:
-            out += bytes(string, 'utf-8') + b'\0'
+        out = b'\0'.join(bytes(s, 'utf-8') for s in val) + b'\0' if val else b''
         return self.AddData(prop_name, out)

     def AddInt(self, prop_name, val):
diff --git a/tools/dtoc/test_fdt.py b/tools/dtoc/test_fdt.py
index ea707f2f87..914ed6aed5 100755
--- a/tools/dtoc/test_fdt.py
+++ b/tools/dtoc/test_fdt.py
@@ -550,6 +550,12 @@ class TestProp(unittest.TestCase):
         data = self.fdt.getprop(self.node.Offset(), 'stringlist')
         self.assertEqual(b'123\x00456\0', data)

+        val = []
+        self.node.AddStringList('stringlist', val)
+        self.dtb.Sync(auto_resize=True)
+        data = self.fdt.getprop(self.node.Offset(), 'stringlist')
+        self.assertEqual(b'', data)
+
     def test_delete_node(self):
         """Test deleting a node"""
         old_offset = self.fdt.path_offset('/spl-test')
@@ -637,6 +643,7 @@ class TestFdtUtil(unittest.TestCase):
         self.assertEqual('message', fdt_util.GetString(self.node, 'stringval'))
         self.assertEqual('test',
                          fdt_util.GetString(self.node, 'missing', 'test'))
+        self.assertEqual('', fdt_util.GetString(self.node, 'boolval'))

         with self.assertRaises(ValueError) as e:
             self.assertEqual(3, fdt_util.GetString(self.node, 'stringarray'))
@@ -651,6 +658,7 @@ class TestFdtUtil(unittest.TestCase):
             fdt_util.GetStringList(self.node, 'stringarray'))
         self.assertEqual(['test'],
                          fdt_util.GetStringList(self.node, 'missing', ['test']))
+        self.assertEqual([], fdt_util.GetStringList(self.node, 'boolval'))

     def testGetArgs(self):
         node = self.dtb.GetNode('/orig-node')

Hi,
On Sat, 5 Mar 2022 at 20:19, Simon Glass <sjg@chromium.org> wrote:
Refactor this to avoid a loop. Also add a test for an empty string.
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Changes in v3:
- Add tests for an empty string/stringlist too
Changes in v2:
- Add new patch to tidy up implementation of AddStringList()
 tools/dtoc/fdt.py | 4 +---
 tools/dtoc/test_fdt.py | 8 ++++++++
 2 files changed, 9 insertions(+), 3 deletions(-)
This is actually missing the changes in fdt_util.py so I will add that back when applying. Alper, I think you noticed this.
Regards, Simon

Rename this function to make it clear that it only reads loadable segments. Also update the error message for a missing module to better match the message emitted by Python.
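For context, a rough sketch of what 'loadable segments' means here, using pyelftools; this is simplified from the binman implementation and omits error handling:

    import io

    from elftools.elf.elffile import ELFFile

    def read_loadable_segments(data):
        """Return ([(index, load_addr, contents)], entry) for PT_LOAD segments."""
        segments = []
        with io.BytesIO(data) as inf:
            elf = ELFFile(inf)
            entry = elf.header['e_entry']
            for i, segment in enumerate(elf.iter_segments()):
                if segment['p_type'] != 'PT_LOAD' or not segment['p_filesz']:
                    continue   # skip non-loadable or empty segments
                segments.append((i, segment['p_paddr'], segment.data()))
        return segments, entry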
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v2)
Changes in v2: - Add new patch to rename load_segments() and module failure
 tools/binman/elf.py | 6 +++---
 tools/binman/elf_test.py | 18 +++++++++---------
 2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/tools/binman/elf.py b/tools/binman/elf.py index 5e7d6ae7b9..016e9b2885 100644 --- a/tools/binman/elf.py +++ b/tools/binman/elf.py @@ -105,7 +105,7 @@ def GetSymbolFileOffset(fname, patterns): return addr - seg['p_vaddr'] + seg['p_offset']
if not ELF_TOOLS: - raise ValueError('Python elftools package is not available') + raise ValueError("Python: No module named 'elftools'")
syms = {} with open(fname, 'rb') as fd: @@ -371,7 +371,7 @@ def UpdateFile(infile, outfile, start_sym, end_sym, insert): tools.write_file(outfile, newdata) tout.info('Written to offset %#x' % syms[start_sym].offset)
-def read_segments(data): +def read_loadable_segments(data): """Read segments from an ELF file
Args: @@ -389,7 +389,7 @@ def read_segments(data): ValueError: elftools is not available """ if not ELF_TOOLS: - raise ValueError('Python elftools package is not available') + raise ValueError("Python: No module named 'elftools'") with io.BytesIO(data) as inf: try: elf = ELFFile(inf) diff --git a/tools/binman/elf_test.py b/tools/binman/elf_test.py index 5084838b91..555117d33a 100644 --- a/tools/binman/elf_test.py +++ b/tools/binman/elf_test.py @@ -243,7 +243,7 @@ class TestElf(unittest.TestCase): fname = self.ElfTestFile('embed_data') with self.assertRaises(ValueError) as e: elf.GetSymbolFileOffset(fname, ['embed_start', 'embed_end']) - self.assertIn('Python elftools package is not available', + self.assertIn("Python: No module named 'elftools'", str(e.exception)) finally: elf.ELF_TOOLS = old_val @@ -257,31 +257,31 @@ class TestElf(unittest.TestCase): offset = elf.GetSymbolFileOffset(fname, ['missing_sym']) self.assertEqual({}, offset)
- def test_read_segments(self): - """Test for read_segments()""" + def test_read_loadable_segments(self): + """Test for read_loadable_segments()""" if not elf.ELF_TOOLS: self.skipTest('Python elftools not available') fname = self.ElfTestFile('embed_data') - segments, entry = elf.read_segments(tools.read_file(fname)) + segments, entry = elf.read_loadable_segments(tools.read_file(fname))
def test_read_segments_fail(self): - """Test for read_segments() without elftools""" + """Test for read_loadable_segments() without elftools""" try: old_val = elf.ELF_TOOLS elf.ELF_TOOLS = False fname = self.ElfTestFile('embed_data') with self.assertRaises(ValueError) as e: - elf.read_segments(tools.read_file(fname)) - self.assertIn('Python elftools package is not available', + elf.read_loadable_segments(tools.read_file(fname)) + self.assertIn("Python: No module named 'elftools'", str(e.exception)) finally: elf.ELF_TOOLS = old_val
def test_read_segments_bad_data(self): - """Test for read_segments() with an invalid ELF file""" + """Test for read_loadable_segments() with an invalid ELF file""" fname = self.ElfTestFile('embed_data') with self.assertRaises(ValueError) as e: - elf.read_segments(tools.get_bytes(100, 100)) + elf.read_loadable_segments(tools.get_bytes(100, 100)) self.assertIn('Magic number does not match', str(e.exception))

Update the return value of this function, fix the 'create' typo and update the documentation for clarity.
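The point of returning None rather than False is to keep 'no data yet' distinct from legitimately empty data; a toy illustration of the pattern (not the binman code itself):

    def collect(parts):
        """Concatenate byte parts; (None, None) means 'not ready yet, retry later'."""
        data = b''
        for part in parts:
            if part is None:
                return None, None
            data += part
        return data, 'written'

    assert collect([b'spl', b'u-boot']) == (b'splu-boot', 'written')
    assert collect([b'spl', None]) == (None, None)
    assert collect([]) == (b'', 'written')   # empty output is valid, not a failure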
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v2)
Changes in v2: - Add new patch to tweak collect_contents_to_file() and docs
 tools/binman/binman.rst | 25 +++++++++++++++----------
 tools/binman/entry.py | 8 ++++----
 tools/binman/etype/mkimage.py | 2 +-
 3 files changed, 20 insertions(+), 15 deletions(-)
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 771645380e..0061e43659 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -1375,18 +1375,20 @@ Some entry types deal with data obtained from others. For example, };
This shows mkimage being passed a file consisting of SPL and U-Boot proper. It -is create by calling `Entry.collect_contents_to_file()`. Note that in this case, -the data is passed to mkimage for processing but does not appear separately in -the image. It may not appear at all, depending on what mkimage does. The -contents of the `mkimage` entry are entirely dependent on the processing done -by the entry, with the provided subnodes (`u-boot-spl` and `u-boot`) simply -providing the input data for that processing. +is created by calling `Entry.collect_contents_to_file()`. Note that in this +case, the data is passed to mkimage for processing but does not appear +separately in the image. It may not appear at all, depending on what mkimage +does. The contents of the `mkimage` entry are entirely dependent on the +processing done by the entry, with the provided subnodes (`u-boot-spl` and +`u-boot`) simply providing the input data for that processing.
Note that `Entry.collect_contents_to_file()` simply concatenates the data from the different entries together, with no control over alignment, etc. Another approach is to subclass `Entry_section` so that those features become available, such as `size` and `pad-byte`. Then the contents of the entry can be obtained by -calling `BuildSectionData()`. +calling `super().BuildSectionData()` in the entry's BuildSectionData() +implementation to get the input data, then write it to a file and process it +however is desired.
There are other ways to obtain data also, depending on the situation. If the entry type is simply signing data which exists elsewhere in the image, then @@ -1396,6 +1398,7 @@ is used by `Entry_vblock`, for example::
u_boot: u-boot { }; + vblock { content = <&u_boot &dtb>; keyblock = "firmware.keyblock"; @@ -1440,9 +1443,11 @@ The `soc-fw` node is a `blob-ext` (i.e. it reads in a named binary file) whereas a known blob type.
When adding new entry types you are encouraged to use subnodes to provide the -data for processing, unless the `content` approach is more suitable. Ad-hoc -properties and other methods of obtaining data are discouraged, since it adds to -confusion for users. +data for processing, unless the `content` approach is more suitable. Consider +whether the input entries are contained within (or consumed by) the entry, vs +just being 'referenced' by the entry. In the latter case, the `content` approach +makes more sense. Ad-hoc properties and other methods of obtaining data are +discouraged, since it adds to confusion for users.
History / Credits ----------------- diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 85dc339726..42320979a1 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -1138,16 +1138,16 @@ features to produce new behaviours.
Returns: Tuple: - bytes: Concatenated data from all the entries (or False) - str: Filename of file written (or False if no data) - str: Unique portion of filename (or False if no data) + bytes: Concatenated data from all the entries (or None) + str: Filename of file written (or None if no data) + str: Unique portion of filename (or None if no data) """ data = b'' for entry in entries: # First get the input data and put it in a file. If not available, # try later. if not entry.ObtainContents(): - return False, False, False + return None, None, None data += entry.GetData() uniq = self.GetUniqueName() fname = tools.get_output_filename(f'{prefix}.{uniq}') diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 8e74ebb9c0..c28141ff69 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -53,7 +53,7 @@ class Entry_mkimage(Entry): def ObtainContents(self): data, input_fname, uniq = self.collect_contents_to_file( self._mkimage_entries.values(), 'mkimage') - if data is False: + if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq) if self.mkimage.run_cmd('-d', input_fname, *self._args,

The word 'expand' is used for entries which generate subentries. It is also used for entries that can have an '_expanded' version, which is used to break out their contents.
Rather than talking about expanding an entry's size, use the term 'extending'. It is slightly more precise and avoids the above conflicts.
This change renders the old 'expand-size' property invalid, so add an error check for that.
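The compatibility check is a simple property test; a minimal standalone sketch:

    def check_old_props(props):
        """Reject the old property name so stale descriptions fail loudly."""
        if 'expand-size' in props:
            raise ValueError("Please use 'extend-size' instead of 'expand-size'")

    check_old_props({'extend-size': True})      # accepted
    # check_old_props({'expand-size': True})    # would raise ValueError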
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v3: - Rename some more things to 'extend'
Changes in v2: - Add patch to rename ExpandToLimit to extend_to_limit
 tools/binman/binman.rst | 4 +--
 tools/binman/entry.py | 10 +++---
 tools/binman/etype/section.py | 12 +++---
 tools/binman/ftest.py | 36 +++++++++++--------
 ...88_expand_size.dts => 088_extend_size.dts} | 8 ++---
 ...d_size_bad.dts => 089_extend_size_bad.dts} | 2 +-
 ..._entry_expand.dts => 121_entry_extend.dts} | 0
 ...d_twice.dts => 122_entry_extend_twice.dts} | 0
 ...ction.dts => 123_entry_extend_section.dts} | 0
 tools/binman/test/225_expand_size_bad.dts | 10 ++++++
 10 files changed, 51 insertions(+), 31 deletions(-)
 rename tools/binman/test/{088_expand_size.dts => 088_extend_size.dts} (88%)
 rename tools/binman/test/{089_expand_size_bad.dts => 089_extend_size_bad.dts} (90%)
 rename tools/binman/test/{121_entry_expand.dts => 121_entry_extend.dts} (100%)
 rename tools/binman/test/{122_entry_expand_twice.dts => 122_entry_extend_twice.dts} (100%)
 rename tools/binman/test/{123_entry_expand_section.dts => 123_entry_extend_section.dts} (100%)
 create mode 100644 tools/binman/test/225_expand_size_bad.dts
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 0061e43659..509fc8da6d 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -480,8 +480,8 @@ image-pos: for each entry. This makes it easy to find out exactly where the entry ended up in the image, regardless of parent sections, etc.
-expand-size: - Expand the size of this entry to fit available space. This space is only +extend-size: + Extend the size of this entry to fit available space. This space is only limited by the size of the image/section and the position of the next entry.
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 42320979a1..52ba7a81a0 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -106,7 +106,7 @@ class Entry(object): self.pad_after = 0 self.offset_unset = False self.image_pos = None - self.expand_size = False + self.extend_size = False self.compress = 'none' self.missing = False self.faked = False @@ -235,6 +235,8 @@ class Entry(object): """ if 'pos' in self._node.props: self.Raise("Please use 'offset' instead of 'pos'") + if 'expand-size' in self._node.props: + self.Raise("Please use 'extend-size' instead of 'expand-size'") self.offset = fdt_util.GetInt(self._node, 'offset') self.size = fdt_util.GetInt(self._node, 'size') self.orig_offset = fdt_util.GetInt(self._node, 'orig-offset') @@ -262,7 +264,7 @@ class Entry(object): self.align_size) self.align_end = fdt_util.GetInt(self._node, 'align-end') self.offset_unset = fdt_util.GetBool(self._node, 'offset-unset') - self.expand_size = fdt_util.GetBool(self._node, 'expand-size') + self.extend_size = fdt_util.GetBool(self._node, 'extend-size') self.missing_msg = fdt_util.GetString(self._node, 'missing-msg')
# This is only supported by blobs and sections at present @@ -774,8 +776,8 @@ features to produce new behaviours. name = '%s.%s' % (node.name, name) return name
- def ExpandToLimit(self, limit): - """Expand an entry so that it ends at the given offset limit""" + def extend_to_limit(self, limit): + """Extend an entry so that it ends at the given offset limit""" if self.offset + self.size < limit: self.size = limit - self.offset # Request the contents again, since changing the size requires that diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index 639b12d95b..8e8ee50bdf 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -386,7 +386,7 @@ class Entry_section(Entry): self._PackEntries() if self._sort: self._SortEntries() - self._ExpandEntries() + self._extend_entries()
data = self.BuildSectionData(True) self.SetContents(data) @@ -404,17 +404,17 @@ class Entry_section(Entry): offset = entry.Pack(offset) return offset
- def _ExpandEntries(self): - """Expand any entries that are permitted to""" + def _extend_entries(self): + """Extend any entries that are permitted to""" exp_entry = None for entry in self._entries.values(): if exp_entry: - exp_entry.ExpandToLimit(entry.offset) + exp_entry.extend_to_limit(entry.offset) exp_entry = None - if entry.expand_size: + if entry.extend_size: exp_entry = entry if exp_entry: - exp_entry.ExpandToLimit(self.size) + exp_entry.extend_to_limit(self.size)
def _SortEntries(self): """Sort entries by offset""" diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 8d41ab67c5..9b45cf2054 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -2028,9 +2028,9 @@ class TestFunctional(unittest.TestCase): self.assertIn("Node '/binman/files': Missing 'pattern' property", str(e.exception))
- def testExpandSize(self): - """Test an expanding entry""" - data, _, map_data, _ = self._DoReadFileDtb('088_expand_size.dts', + def testExtendSize(self): + """Test an extending entry""" + data, _, map_data, _ = self._DoReadFileDtb('088_extend_size.dts', map=True) expect = (tools.get_bytes(ord('a'), 8) + U_BOOT_DATA + MRC_DATA + tools.get_bytes(ord('b'), 1) + U_BOOT_DATA + @@ -2050,11 +2050,11 @@ class TestFunctional(unittest.TestCase): 00000020 00000020 00000008 fill2 ''', map_data)
- def testExpandSizeBad(self): - """Test an expanding entry which fails to provide contents""" + def testExtendSizeBad(self): + """Test an extending entry which fails to provide contents""" with test_util.capture_sys_output() as (stdout, stderr): with self.assertRaises(ValueError) as e: - self._DoReadFileDtb('089_expand_size_bad.dts', map=True) + self._DoReadFileDtb('089_extend_size_bad.dts', map=True) self.assertIn("Node '/binman/_testing': Cannot obtain contents when " 'expanding entry', str(e.exception))
@@ -2487,22 +2487,22 @@ class TestFunctional(unittest.TestCase): str(e.exception))
def testEntryExpand(self): - """Test expanding an entry after it is packed""" - data = self._DoReadFile('121_entry_expand.dts') + """Test extending an entry after it is packed""" + data = self._DoReadFile('121_entry_extend.dts') self.assertEqual(b'aaa', data[:3]) self.assertEqual(U_BOOT_DATA, data[3:3 + len(U_BOOT_DATA)]) self.assertEqual(b'aaa', data[-3:])
- def testEntryExpandBad(self): - """Test expanding an entry after it is packed, twice""" + def testEntryExtendBad(self): + """Test extending an entry after it is packed, twice""" with self.assertRaises(ValueError) as e: - self._DoReadFile('122_entry_expand_twice.dts') + self._DoReadFile('122_entry_extend_twice.dts') self.assertIn("Image '/binman': Entries changed size after packing", str(e.exception))
- def testEntryExpandSection(self): - """Test expanding an entry within a section after it is packed""" - data = self._DoReadFile('123_entry_expand_section.dts') + def testEntryExtendSection(self): + """Test extending an entry within a section after it is packed""" + data = self._DoReadFile('123_entry_extend_section.dts') self.assertEqual(b'aaa', data[:3]) self.assertEqual(U_BOOT_DATA, data[3:3 + len(U_BOOT_DATA)]) self.assertEqual(b'aaa', data[-3:]) @@ -5304,6 +5304,14 @@ fdt fdtmap Extract the devicetree blob from the fdtmap self.assertIn("Node '/binman/fit': Unknown operation 'unknown'", str(exc.exception))
+ def test_uses_expand_size(self): + """Test that the 'expand-size' property cannot be used anymore""" + with self.assertRaises(ValueError) as e: + data = self._DoReadFile('225_expand_size_bad.dts') + self.assertIn( + "Node '/binman/u-boot': Please use 'extend-size' instead of 'expand-size'", + str(e.exception)) +
if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/088_expand_size.dts b/tools/binman/test/088_extend_size.dts similarity index 88% rename from tools/binman/test/088_expand_size.dts rename to tools/binman/test/088_extend_size.dts index c8a01308ec..f352699e37 100644 --- a/tools/binman/test/088_expand_size.dts +++ b/tools/binman/test/088_extend_size.dts @@ -5,7 +5,7 @@ binman { size = <40>; fill { - expand-size; + extend-size; fill-byte = [61]; size = <0>; }; @@ -13,7 +13,7 @@ offset = <8>; }; section { - expand-size; + extend-size; pad-byte = <0x62>; intel-mrc { }; @@ -25,7 +25,7 @@ section2 { type = "section"; fill { - expand-size; + extend-size; fill-byte = [63]; size = <0>; }; @@ -35,7 +35,7 @@ }; fill2 { type = "fill"; - expand-size; + extend-size; fill-byte = [64]; size = <0>; }; diff --git a/tools/binman/test/089_expand_size_bad.dts b/tools/binman/test/089_extend_size_bad.dts similarity index 90% rename from tools/binman/test/089_expand_size_bad.dts rename to tools/binman/test/089_extend_size_bad.dts index edc0e5cf68..edc60e43fd 100644 --- a/tools/binman/test/089_expand_size_bad.dts +++ b/tools/binman/test/089_extend_size_bad.dts @@ -4,7 +4,7 @@ / { binman { _testing { - expand-size; + extend-size; return-contents-once; }; u-boot { diff --git a/tools/binman/test/121_entry_expand.dts b/tools/binman/test/121_entry_extend.dts similarity index 100% rename from tools/binman/test/121_entry_expand.dts rename to tools/binman/test/121_entry_extend.dts diff --git a/tools/binman/test/122_entry_expand_twice.dts b/tools/binman/test/122_entry_extend_twice.dts similarity index 100% rename from tools/binman/test/122_entry_expand_twice.dts rename to tools/binman/test/122_entry_extend_twice.dts diff --git a/tools/binman/test/123_entry_expand_section.dts b/tools/binman/test/123_entry_extend_section.dts similarity index 100% rename from tools/binman/test/123_entry_expand_section.dts rename to tools/binman/test/123_entry_extend_section.dts diff --git a/tools/binman/test/225_expand_size_bad.dts b/tools/binman/test/225_expand_size_bad.dts new file mode 100644 index 0000000000..d4ad9a6a1a --- /dev/null +++ b/tools/binman/test/225_expand_size_bad.dts @@ -0,0 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0+ +/dts-v1/; + +/ { + binman { + u-boot { + expand-size; + }; + }; +};

Leave the 'expand' term for use by entry types which have an expanded version of themselves. Rename this method to indicate that it generates subentries.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v2)
Changes in v2: - Add patch to rename ExpandEntries to gen_entries
 tools/binman/control.py | 2 +-
 tools/binman/entry.py | 4 ++--
 tools/binman/etype/blob_phase.py | 2 +-
 tools/binman/etype/files.py | 2 +-
 tools/binman/etype/section.py | 6 +++---
 5 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/tools/binman/control.py b/tools/binman/control.py index c9d7a08bbc..d4c8dc8920 100644 --- a/tools/binman/control.py +++ b/tools/binman/control.py @@ -507,7 +507,7 @@ def PrepareImagesAndDtbs(dtb_fname, select_images, update_fdt, use_expanded): # entry offsets remain the same. for image in images.values(): image.CollectBintools() - image.ExpandEntries() + image.gen_entries() if update_fdt: image.AddMissingProperties(True) image.ProcessFdt(dtb) diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 52ba7a81a0..da77236a8a 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -286,8 +286,8 @@ class Entry(object): """ return {}
- def ExpandEntries(self): - """Expand out entries which produce other entries + def gen_entries(self): + """Allow entries to generate other entries
Some entries generate subnodes automatically, from which sub-entries are then created. This method allows those to be added to the binman diff --git a/tools/binman/etype/blob_phase.py b/tools/binman/etype/blob_phase.py index 23e2ad69ba..b937158756 100644 --- a/tools/binman/etype/blob_phase.py +++ b/tools/binman/etype/blob_phase.py @@ -42,7 +42,7 @@ class Entry_blob_phase(Entry_section): self.dtb_file = dtb_file self.bss_pad = bss_pad
- def ExpandEntries(self): + def gen_entries(self): """Create the subnodes""" names = [self.root_fname + '-nodtb', self.root_fname + '-dtb'] if self.bss_pad: diff --git a/tools/binman/etype/files.py b/tools/binman/etype/files.py index 13838ece8f..2081bc727b 100644 --- a/tools/binman/etype/files.py +++ b/tools/binman/etype/files.py @@ -48,7 +48,7 @@ class Entry_files(Entry_section): self._require_matches = fdt_util.GetBool(self._node, 'require-matches')
- def ExpandEntries(self): + def gen_entries(self): files = tools.get_input_filename_glob(self._pattern) if self._require_matches and not files: self.Raise("Pattern '%s' matched no files" % self._pattern) diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index 8e8ee50bdf..286842323d 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -234,10 +234,10 @@ class Entry_section(Entry): todo) return True
- def ExpandEntries(self): - super().ExpandEntries() + def gen_entries(self): + super().gen_entries() for entry in self._entries.values(): - entry.ExpandEntries() + entry.gen_entries()
def AddMissingProperties(self, have_image_pos): """Add new properties to the device tree as needed for this entry"""

At present the fit implementation creates the output tree while scanning the FIT description. Then it updates the tree later when the data is known.
This works, but is a bit confusing, since it requires mixing the scanning code with the generation code, with a fix-up step at the end.
It is actually possible to do this in two phases, one to scan everything and the other to generate the FIT. Thus the FIT is generated in one pass, when everything is known.
Update the code accordingly. The only functional change is that the 'data' property for each node is now last instead of first, which is really a more natural position. Update the affected test to deal with this.
One wrinkle is that the calculated properties (image-pos, size and offset) are now added before the FIT is generated, so we must filter these out when copying properties from the binman description to the FIT.
Most of the change here is splitting out some of the code from the ReadEntries() implementation into _BuildInput(). So despite the large diff, most of the code is the same. It is not feasible to split this patch up, so far as I can tell.
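A much-simplified sketch of the two-phase structure (hypothetical names; the real code walks the binman device tree and emits the FIT with libfdt's sequential writer):

    class FitSketch:
        """Phase 1 records content providers; phase 2 emits everything at once."""

        def __init__(self):
            self.entries = {}        # FIT node path -> callable returning bytes

        def read_entries(self, providers):
            """Scan: remember which image nodes will need a 'data' property."""
            self.entries.update(providers)

        def build_input(self):
            """Generate: emit each node with its data, now that all is known."""
            return {path: {'data': get_data()}
                    for path, get_data in self.entries.items()}

    fit = FitSketch()
    fit.read_entries({'/images/kernel': lambda: b'kernel'})
    print(fit.build_input())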
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v3: - Fix 'He' typo
Changes in v2: - Add new patch to refactor fit to generate output at the end
 tools/binman/etype/fit.py | 178 ++++++++++++++-----------
 tools/binman/ftest.py | 13 +-
 tools/binman/test/224_fit_bad_oper.dts | 2 -
 3 files changed, 109 insertions(+), 84 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 2d4c5f6545..4931afc5ad 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -209,6 +209,81 @@ class Entry_fit(Entry_section): return oper
def ReadEntries(self): + def _add_entries(base_node, depth, node): + """Add entries for any nodes that need them + + Args: + base_node: Base Node of the FIT (with 'description' property) + depth: Current node depth (0 is the base 'fit' node) + node: Current node to process + + Here we only need to provide binman entries which are used to define + the 'data' for each image. We create an entry_Section for each. + """ + rel_path = node.path[len(base_node.path):] + in_images = rel_path.startswith('/images') + has_images = depth == 2 and in_images + if has_images: + # This node is a FIT subimage node (e.g. "/images/kernel") + # containing content nodes. We collect the subimage nodes and + # section entries for them here to merge the content subnodes + # together and put the merged contents in the subimage node's + # 'data' property later. + entry = Entry.Create(self.section, node, etype='section') + entry.ReadNode() + # The hash subnodes here are for mkimage, not binman. + entry.SetUpdateHash(False) + self._entries[rel_path] = entry + + for subnode in node.subnodes: + _add_entries(base_node, depth + 1, subnode) + + _add_entries(self._node, 0, self._node) + + def BuildSectionData(self, required): + """Build FIT entry contents + + This adds the 'data' properties to the input ITB (Image-tree Binary) + then runs mkimage to process it. + + Args: + required: True if the data must be present, False if it is OK to + return None + + Returns: + Contents of the section (bytes) + """ + data = self._BuildInput() + uniq = self.GetUniqueName() + input_fname = tools.get_output_filename('%s.itb' % uniq) + output_fname = tools.get_output_filename('%s.fit' % uniq) + tools.write_file(input_fname, data) + tools.write_file(output_fname, data) + + args = {} + ext_offset = self._fit_props.get('fit,external-offset') + if ext_offset is not None: + args = { + 'external': True, + 'pad': fdt_util.fdt32_to_cpu(ext_offset.value) + } + if self.mkimage.run(reset_timestamp=True, output_fname=output_fname, + **args) is None: + # Bintool is missing; just use empty data as the output + self.record_missing_bintool(self.mkimage) + return tools.get_bytes(0, 1024) + + return tools.read_file(output_fname) + + def _BuildInput(self): + """Finish the FIT by adding the 'data' properties to it + + Arguments: + fdt: FIT to update + + Returns: + New fdt contents (bytes) + """ def _process_prop(pname, prop): """Process special properties
@@ -236,9 +311,15 @@ class Entry_fit(Entry_section): val = val[1:].replace('DEFAULT-SEQ', str(seq + 1)) fsw.property_string(pname, val) return + elif pname.startswith('fit,'): + # Ignore these, which are commands for binman to process + return + elif pname in ['offset', 'size', 'image-pos']: + # Don't add binman's calculated properties + return fsw.property(pname, prop.bytes)
- def _scan_gen_fdt_nodes(subnode, depth, in_images): + def _gen_fdt_nodes(subnode, depth, in_images): """Generate FDT nodes
This creates one node for each member of self._fdts using the @@ -277,7 +358,7 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
- def _scan_node(subnode, depth, in_images): + def _gen_node(subnode, depth, in_images): """Generate nodes from a template
This creates one node for each member of self._fdts using the @@ -294,10 +375,10 @@ class Entry_fit(Entry_section): """ oper = self._get_operation(subnode) if oper == OP_GEN_FDT_NODES: - _scan_gen_fdt_nodes(subnode, depth, in_images) + _gen_fdt_nodes(subnode, depth, in_images)
- def _AddNode(base_node, depth, node): - """Add a node to the FIT + def _add_node(base_node, depth, node): + """Add nodes to the output FIT
Args: base_node: Base Node of the FIT (with 'description' property) @@ -307,104 +388,49 @@ class Entry_fit(Entry_section): There are two cases to deal with: - hash and signature nodes which become part of the FIT - binman entries which are used to define the 'data' for each - image + image, so don't appear in the FIT """ + # Copy over all the relevant properties for pname, prop in node.props.items(): - if not pname.startswith('fit,'): - _process_prop(pname, prop) + _process_prop(pname, prop)
rel_path = node.path[len(base_node.path):] in_images = rel_path.startswith('/images') + has_images = depth == 2 and in_images if has_images: - # This node is a FIT subimage node (e.g. "/images/kernel") - # containing content nodes. We collect the subimage nodes and - # section entries for them here to merge the content subnodes - # together and put the merged contents in the subimage node's - # 'data' property later. - entry = Entry.Create(self.section, node, etype='section') - entry.ReadNode() - # The hash subnodes here are for mkimage, not binman. - entry.SetUpdateHash(False) - self._entries[rel_path] = entry + entry = self._entries[rel_path] + data = entry.GetData() + fsw.property('data', bytes(data))
for subnode in node.subnodes: if has_images and not (subnode.name.startswith('hash') or subnode.name.startswith('signature')): # This subnode is a content node not meant to appear in # the FIT (e.g. "/images/kernel/u-boot"), so don't call - # fsw.add_node() or _AddNode() for it. + # fsw.add_node() or _add_node() for it. pass elif self.GetImage().generate and subnode.name.startswith('@'): - _scan_node(subnode, depth, in_images) + subnode_path = f'{rel_path}/{subnode.name}' + entry = self._entries.get(subnode_path) + _gen_node(subnode, depth, in_images) + if entry: + del self._entries[subnode_path] else: with fsw.add_node(subnode.name): - _AddNode(base_node, depth + 1, subnode) + _add_node(base_node, depth + 1, subnode)
# Build a new tree with all nodes and properties starting from the # entry node fsw = libfdt.FdtSw() fsw.finish_reservemap() with fsw.add_node(''): - _AddNode(self._node, 0, self._node) + _add_node(self._node, 0, self._node) fdt = fsw.as_fdt()
# Pack this new FDT and scan it so we can add the data later fdt.pack() - self._fdt = Fdt.FromData(fdt.as_bytearray()) - self._fdt.Scan() - - def BuildSectionData(self, required): - """Build FIT entry contents - - This adds the 'data' properties to the input ITB (Image-tree Binary) - then runs mkimage to process it. - - Args: - required: True if the data must be present, False if it is OK to - return None - - Returns: - Contents of the section (bytes) - """ - data = self._BuildInput(self._fdt) - uniq = self.GetUniqueName() - input_fname = tools.get_output_filename('%s.itb' % uniq) - output_fname = tools.get_output_filename('%s.fit' % uniq) - tools.write_file(input_fname, data) - tools.write_file(output_fname, data) - - args = {} - ext_offset = self._fit_props.get('fit,external-offset') - if ext_offset is not None: - args = { - 'external': True, - 'pad': fdt_util.fdt32_to_cpu(ext_offset.value) - } - if self.mkimage.run(reset_timestamp=True, output_fname=output_fname, - **args) is None: - # Bintool is missing; just use empty data as the output - self.record_missing_bintool(self.mkimage) - return tools.get_bytes(0, 1024) - - return tools.read_file(output_fname) - - def _BuildInput(self, fdt): - """Finish the FIT by adding the 'data' properties to it - - Arguments: - fdt: FIT to update - - Returns: - New fdt contents (bytes) - """ - for path, section in self._entries.items(): - node = fdt.GetNode(path) - data = section.GetData() - node.AddData('data', data) - - fdt.Sync(auto_resize=True) - data = fdt.GetContents() + data = fdt.as_bytearray() return data
def SetImagePos(self, image_pos): diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 9b45cf2054..ffe59ffba8 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -3780,6 +3780,7 @@ class TestFunctional(unittest.TestCase): dtb.Scan() props = self._GetPropTree(dtb, BASE_DTB_PROPS + REPACK_DTB_PROPS)
+ self.maxDiff = None self.assertEqual({ 'image-pos': 0, 'offset': 0, @@ -3793,19 +3794,19 @@ class TestFunctional(unittest.TestCase): 'fit:offset': 4, 'fit:size': 1840,
- 'fit/images/kernel:image-pos': 160, - 'fit/images/kernel:offset': 156, + 'fit/images/kernel:image-pos': 304, + 'fit/images/kernel:offset': 300, 'fit/images/kernel:size': 4,
- 'fit/images/kernel/u-boot:image-pos': 160, + 'fit/images/kernel/u-boot:image-pos': 304, 'fit/images/kernel/u-boot:offset': 0, 'fit/images/kernel/u-boot:size': 4,
- 'fit/images/fdt-1:image-pos': 456, - 'fit/images/fdt-1:offset': 452, + 'fit/images/fdt-1:image-pos': 552, + 'fit/images/fdt-1:offset': 548, 'fit/images/fdt-1:size': 6,
- 'fit/images/fdt-1/u-boot-spl-dtb:image-pos': 456, + 'fit/images/fdt-1/u-boot-spl-dtb:image-pos': 552, 'fit/images/fdt-1/u-boot-spl-dtb:offset': 0, 'fit/images/fdt-1/u-boot-spl-dtb:size': 6,
diff --git a/tools/binman/test/224_fit_bad_oper.dts b/tools/binman/test/224_fit_bad_oper.dts index cee801e2ea..8a8014ea33 100644 --- a/tools/binman/test/224_fit_bad_oper.dts +++ b/tools/binman/test/224_fit_bad_oper.dts @@ -21,7 +21,5 @@ }; }; }; - fdtmap { - }; }; };

The 'tools' parameter shadows the patman.tools module, so rename it to 'btools' to avoid a pylint warning.
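A self-contained illustration of the shadowing that pylint's redefined-outer-name check objects to (the module-level name here merely stands in for the imported patman.tools module):

    tools = 'imported module'          # stands in for 'from patman import tools'

    def add_bintools(tools):           # parameter hides the module-level name
        return sorted(tools)           # 'tools' is now the argument, not the module

    def add_bintools_fixed(btools):    # renamed parameter: no shadowing, no warning
        return sorted(btools)

    print(add_bintools({'mkimage': None}), add_bintools_fixed({'mkimage': None}))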
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v1)
 tools/binman/entry.py | 4 ++--
 tools/binman/etype/fit.py | 6 +++---
 tools/binman/etype/gbb.py | 4 ++--
 tools/binman/etype/intel_ifwi.py | 4 ++--
 tools/binman/etype/mkimage.py | 4 ++--
 tools/binman/etype/section.py | 4 ++--
 tools/binman/etype/vblock.py | 4 ++--
 7 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index da77236a8a..786c959911 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -1101,11 +1101,11 @@ features to produce new behaviours. """ pass
- def AddBintools(self, tools): + def AddBintools(self, btools): """Add the bintools used by this entry type
Args: - tools (dict of Bintool): + btools (dict of Bintool): """ pass
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 4931afc5ad..c1ed5dcb98 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -484,6 +484,6 @@ class Entry_fit(Entry_section): section.SetOffsetSize(offset, size) section.SetImagePos(self.image_pos)
- def AddBintools(self, tools): - super().AddBintools(tools) - self.mkimage = self.AddBintool(tools, 'mkimage') + def AddBintools(self, btools): + super().AddBintools(btools) + self.mkimage = self.AddBintool(btools, 'mkimage') diff --git a/tools/binman/etype/gbb.py b/tools/binman/etype/gbb.py index e32fae27ce..7394e4e5d3 100644 --- a/tools/binman/etype/gbb.py +++ b/tools/binman/etype/gbb.py @@ -99,5 +99,5 @@ class Entry_gbb(Entry):
return True
- def AddBintools(self, tools): - self.futility = self.AddBintool(tools, 'futility') + def AddBintools(self, btools): + self.futility = self.AddBintool(btools, 'futility') diff --git a/tools/binman/etype/intel_ifwi.py b/tools/binman/etype/intel_ifwi.py index 46bdf116e6..4fa7d636fe 100644 --- a/tools/binman/etype/intel_ifwi.py +++ b/tools/binman/etype/intel_ifwi.py @@ -143,5 +143,5 @@ class Entry_intel_ifwi(Entry_blob_ext): for entry in self._ifwi_entries.values(): entry.WriteSymbols(self)
- def AddBintools(self, tools): - self.ifwitool = self.AddBintool(tools, 'ifwitool') + def AddBintools(self, btools): + self.ifwitool = self.AddBintool(btools, 'ifwitool') diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index c28141ff69..9f4717e4ea 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -93,5 +93,5 @@ class Entry_mkimage(Entry): for entry in self._mkimage_entries.values(): entry.CheckFakedBlobs(faked_blobs_list)
- def AddBintools(self, tools): - self.mkimage = self.AddBintool(tools, 'mkimage') + def AddBintools(self, btools): + self.mkimage = self.AddBintool(btools, 'mkimage') diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index 286842323d..ac61d3de28 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -895,6 +895,6 @@ class Entry_section(Entry): for entry in self._entries.values(): entry.CheckAltFormats(alt_formats)
- def AddBintools(self, tools): + def AddBintools(self, btools): for entry in self._entries.values(): - entry.AddBintools(tools) + entry.AddBintools(btools) diff --git a/tools/binman/etype/vblock.py b/tools/binman/etype/vblock.py index a1de98290b..065b6ed2f6 100644 --- a/tools/binman/etype/vblock.py +++ b/tools/binman/etype/vblock.py @@ -97,5 +97,5 @@ class Entry_vblock(Entry_collection): data = self.GetVblock(True) return self.ProcessContentsUpdate(data)
- def AddBintools(self, tools): - self.futility = self.AddBintool(tools, 'futility') + def AddBintools(self, btools): + self.futility = self.AddBintool(btools, 'futility')

At present fake blobs are created but internally an empty blob is used. Change it to use the contents of the faked file. Also return whether the blob was faked, in case the caller needs to know that.
Add a TODO to put fake blobs in their own directory.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v2)
Changes in v2: - Add a patch to change how faked blobs are created
 tools/binman/binman.rst | 3 ++-
 tools/binman/entry.py | 9 ++++++---
 tools/binman/etype/blob.py | 7 ++++---
 tools/binman/etype/blob_ext_list.py | 2 +-
 4 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst
index 509fc8da6d..935839c433 100644
--- a/tools/binman/binman.rst
+++ b/tools/binman/binman.rst
@@ -1500,7 +1500,8 @@ Some ideas:
 - Figure out how to make Fdt support changing the node order, so that
   Node.AddSubnode() can support adding a node before another, existing node.
   Perhaps it should completely regenerate the flat tree?
-
+- Put faked files into a separate subdir and remove them on start-up, to avoid
+  seeing them as 'real' files on a subsequent run

 --
 Simon Glass <sjg@chromium.org>
diff --git a/tools/binman/entry.py b/tools/binman/entry.py
index 786c959911..9d499f07aa 100644
--- a/tools/binman/entry.py
+++ b/tools/binman/entry.py
@@ -999,15 +999,18 @@ features to produce new behaviours.
             fname (str): Filename to check

         Returns:
-            fname (str): Filename of faked file
+            tuple:
+                fname (str): Filename of faked file
+                bool: True if the blob was faked, False if not
         """
         if self.allow_fake and not pathlib.Path(fname).is_file():
             outfname = tools.get_output_filename(os.path.basename(fname))
             with open(outfname, "wb") as out:
                 out.truncate(1024)
             self.faked = True
-            return outfname
-        return fname
+            tout.info(f"Entry '{self._node.path}': Faked file '{outfname}'")
+            return outfname, True
+        return fname, False

     def CheckFakedBlobs(self, faked_blobs_list):
         """Check if any entries in this section have faked external blobs
diff --git a/tools/binman/etype/blob.py b/tools/binman/etype/blob.py
index 25ec5d26c9..89f089e740 100644
--- a/tools/binman/etype/blob.py
+++ b/tools/binman/etype/blob.py
@@ -41,10 +41,11 @@ class Entry_blob(Entry):
             self.external and self.section.GetAllowMissing())
         # Allow the file to be missing
         if not self._pathname:
-            self._pathname = self.check_fake_fname(self._filename)
-            self.SetContents(b'')
+            self._pathname, faked = self.check_fake_fname(self._filename)
             self.missing = True
-            return True
+            if not faked:
+                self.SetContents(b'')
+            return True

         self.ReadBlobContents()
         return True
diff --git a/tools/binman/etype/blob_ext_list.py b/tools/binman/etype/blob_ext_list.py
index 76ad32a1ee..f00202e9eb 100644
--- a/tools/binman/etype/blob_ext_list.py
+++ b/tools/binman/etype/blob_ext_list.py
@@ -37,7 +37,7 @@ class Entry_blob_ext_list(Entry_blob):
         missing = False
         pathnames = []
         for fname in self._filenames:
-            fname = self.check_fake_fname(fname)
+            fname, _ = self.check_fake_fname(fname)
             pathname = tools.get_input_filename(
                 fname, self.external and self.section.GetAllowMissing())
             # Allow the file to be missing

On x86 devices, having even a small amount of data can cause an overlap between regions. For example, bayleybay complains when the intel-vga region overlaps with u-boot-ucode:
  ImagePos    Offset      Size  Name
    <none>  00000000  00800000  main-section
    <none>  ff800000  00000080  intel-descriptor
    <none>  ff800400  00000080  intel-me
    <none>  fff00000  00098f24  u-boot-with-ucode-ptr
    <none>  fff98f24  00001aa0  u-boot-dtb-with-ucode
    <none>  fff9a9d0  0002a000  u-boot-ucode
    <none>  fffb0000  00000080  intel-vga
  ...
It is safer to use an empty file in most cases. Add an option to set the size for those uses that need it.
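A simplified sketch of the helper after this change and the following mkimage patch (names follow the series, but the binman Entry context is omitted):

    import os
    import pathlib

    def check_fake_fname(fname, allow_fake, outdir, size=0):
        """Fake a missing file of 'size' bytes; return (path, faked)."""
        if allow_fake and not pathlib.Path(fname).is_file():
            outfname = os.path.join(outdir, os.path.basename(fname))
            with open(outfname, 'wb') as out:
                out.truncate(size)   # zero-sized by default; mkimage callers pass 1024
            return outfname, True
        return fname, False

    # e.g. check_fake_fname('missing.bin', True, '/tmp', size=1024)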
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v2)
Changes in v2: - Add a patch to make fake blobs zero-sized by default
 tools/binman/entry.py | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py
index 9d499f07aa..21d3457788 100644
--- a/tools/binman/entry.py
+++ b/tools/binman/entry.py
@@ -990,13 +990,14 @@ features to produce new behaviours.
         if self.missing:
             missing_list.append(self)

-    def check_fake_fname(self, fname):
+    def check_fake_fname(self, fname, size=0):
         """If the file is missing and the entry allows fake blobs, fake it

         Sets self.faked to True if faked

         Args:
             fname (str): Filename to check
+            size (int): Size of fake file to create

         Returns:
             tuple:
@@ -1006,7 +1007,7 @@ features to produce new behaviours.
         if self.allow_fake and not pathlib.Path(fname).is_file():
             outfname = tools.get_output_filename(os.path.basename(fname))
             with open(outfname, "wb") as out:
-                out.truncate(1024)
+                out.truncate(size)
             self.faked = True
             tout.info(f"Entry '{self._node.path}': Faked file '{outfname}'")
             return outfname, True

Unfortunately mkimage gets upset with zero-sized files. Update the ObtainContents() method to support specifying the size, if a fake blob is created.
Signed-off-by: Simon Glass <sjg@chromium.org>
Reviewed-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
(no changes since v2)
Changes in v2: - Add a patch to allow mkimage to use a non-zero fake-blob size
 tools/binman/entry.py | 11 ++++++---
 tools/binman/etype/_testing.py | 2 +-
 tools/binman/etype/blob.py | 5 +++--
 tools/binman/etype/mkimage.py | 13 ++++++++++++-
 tools/binman/etype/section.py | 2 +-
 tools/binman/ftest.py | 10 ++++++++++
 tools/binman/test/229_mkimage_missing.dts | 18 ++++++++++++++++++
 7 files changed, 53 insertions(+), 8 deletions(-)
 create mode 100644 tools/binman/test/229_mkimage_missing.dts
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 21d3457788..18a7a35105 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -417,9 +417,13 @@ class Entry(object): self.SetContents(data) return size_ok
- def ObtainContents(self): + def ObtainContents(self, skip_entry=None, fake_size=0): """Figure out the contents of an entry.
+ Args: + skip_entry (Entry): Entry to skip when obtaining section contents + fake_size (int): Size of fake file to create if needed + Returns: True if the contents were found, False if another call is needed after the other entries are processed. @@ -1132,12 +1136,13 @@ features to produce new behaviours. """ self.update_hash = update_hash
- def collect_contents_to_file(self, entries, prefix): + def collect_contents_to_file(self, entries, prefix, fake_size=0): """Put the contents of a list of entries into a file
Args: entries (list of Entry): Entries to collect prefix (str): Filename prefix of file to write to + fake_size (int): Size of fake file to create if needed
If any entry does not have contents yet, this function returns False for the data. @@ -1152,7 +1157,7 @@ features to produce new behaviours. for entry in entries: # First get the input data and put it in a file. If not available, # try later. - if not entry.ObtainContents(): + if not entry.ObtainContents(fake_size=fake_size): return None, None, None data += entry.GetData() uniq = self.GetUniqueName() diff --git a/tools/binman/etype/_testing.py b/tools/binman/etype/_testing.py index 0800c25899..5089de3642 100644 --- a/tools/binman/etype/_testing.py +++ b/tools/binman/etype/_testing.py @@ -82,7 +82,7 @@ class Entry__testing(Entry): self.return_contents = True self.contents = b'aa'
- def ObtainContents(self): + def ObtainContents(self, fake_size=0): if self.return_unknown_contents or not self.return_contents: return False if self.return_contents_later: diff --git a/tools/binman/etype/blob.py b/tools/binman/etype/blob.py index 89f089e740..ceaefb07b7 100644 --- a/tools/binman/etype/blob.py +++ b/tools/binman/etype/blob.py @@ -35,13 +35,14 @@ class Entry_blob(Entry): super().__init__(section, etype, node) self._filename = fdt_util.GetString(self._node, 'filename', self.etype)
- def ObtainContents(self): + def ObtainContents(self, fake_size=0): self._filename = self.GetDefaultFilename() self._pathname = tools.get_input_filename(self._filename, self.external and self.section.GetAllowMissing()) # Allow the file to be missing if not self._pathname: - self._pathname, faked = self.check_fake_fname(self._filename) + self._pathname, faked = self.check_fake_fname(self._filename, + fake_size) self.missing = True if not faked: self.SetContents(b'') diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 9f4717e4ea..5f6def2287 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -51,8 +51,9 @@ class Entry_mkimage(Entry): self.ReadEntries()
def ObtainContents(self): + # Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file( - self._mkimage_entries.values(), 'mkimage') + self._mkimage_entries.values(), 'mkimage', 1024) if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq) @@ -73,6 +74,16 @@ class Entry_mkimage(Entry): entry.ReadNode() self._mkimage_entries[entry.name] = entry
+ def SetAllowMissing(self, allow_missing): + """Set whether a section allows missing external blobs + + Args: + allow_missing: True if allowed, False if not allowed + """ + self.allow_missing = allow_missing + for entry in self._mkimage_entries.values(): + entry.SetAllowMissing(allow_missing) + def SetAllowFakeBlob(self, allow_fake): """Set whether the sub nodes allows to create a fake blob
diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index ac61d3de28..ccac658c18 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -247,7 +247,7 @@ class Entry_section(Entry): for entry in self._entries.values(): entry.AddMissingProperties(have_image_pos)
- def ObtainContents(self, skip_entry=None): + def ObtainContents(self, fake_size=0, skip_entry=None): return self.GetEntryContents(skip_entry=skip_entry)
def GetPaddedDataForEntry(self, entry, entry_data): diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index ffe59ffba8..2b0abd8982 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -5313,6 +5313,16 @@ fdt fdtmap Extract the devicetree blob from the fdtmap "Node '/binman/u-boot': Please use 'extend-size' instead of 'expand-size'", str(e.exception))
+ def testMkimageMissingBlob(self): + """Test using mkimage to build an image""" + with test_util.capture_sys_output() as (stdout, stderr): + self._DoTestFile('229_mkimage_missing.dts', allow_missing=True, + allow_fake_blobs=True) + err = stderr.getvalue() + self.assertRegex( + err, + "Image '.*' has faked external blobs and is non-functional: .*") +
if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/229_mkimage_missing.dts b/tools/binman/test/229_mkimage_missing.dts new file mode 100644 index 0000000000..54a5a6c571 --- /dev/null +++ b/tools/binman/test/229_mkimage_missing.dts @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + mkimage { + args = "-n test -T script"; + + blob-ext { + filename = "missing.bin"; + }; + }; + }; +};

At present the entries are read twice, once by the Entry_section class and once by the FIT implementation. This is harmless but can be confusing when debugging. Fix it.
Signed-off-by: Simon Glass sjg@chromium.org Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com ---
(no changes since v1)
tools/binman/etype/fit.py | 1 - 1 file changed, 1 deletion(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index c1ed5dcb98..8c37e0a77f 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -185,7 +185,6 @@ class Entry_fit(Entry_section): self.mkimage = None
def ReadNode(self): - self.ReadEntries() super().ReadNode()
def _get_operation(self, subnode):

This node reading should not be done in the constructor. Move it into the ReadNode() method.
Signed-off-by: Simon Glass sjg@chromium.org Suggested-by: Alper Nebi Yasak alpernebiyasak@gmail.com ---
(no changes since v1)
tools/binman/etype/fit.py | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 8c37e0a77f..4c0e345a4e 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -168,12 +168,14 @@ class Entry_fit(Entry_section): super().__init__(section, etype, node) self._fit = None self._fit_props = {} + self._fdts = None + self.mkimage = None
+ def ReadNode(self): + super().ReadNode() for pname, prop in self._node.props.items(): if pname.startswith('fit,'): self._fit_props[pname] = prop - - self._fdts = None self._fit_list_prop = self._fit_props.get('fit,fdt-list') if self._fit_list_prop: fdts, = self.GetEntryArgsOrProps( @@ -182,10 +184,6 @@ class Entry_fit(Entry_section): self._fdts = fdts.split() self._fit_default_dt = self.GetEntryArgsOrProps([EntryArg('default-dt', str)])[0] - self.mkimage = None - - def ReadNode(self): - super().ReadNode()
def _get_operation(self, subnode): """Get the operation referenced by a subnode

On 06/03/2022 06:19, Simon Glass wrote:
This node reading should not be done in the constructor. Move it into the ReadNode() method.
Signed-off-by: Simon Glass sjg@chromium.org Suggested-by: Alper Nebi Yasak alpernebiyasak@gmail.com
(no changes since v1)
tools/binman/etype/fit.py | 10 ++++------ 1 file changed, 4 insertions(+), 6 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

Some warnings have crept in, so fix those that are easy to fix.
Signed-off-by: Simon Glass sjg@chromium.org Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com ---
(no changes since v1)
tools/binman/etype/fit.py | 50 +++++++++++++++++++-------------------- 1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 4c0e345a4e..0e8d2cf459 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -2,10 +2,9 @@ # Copyright (c) 2016 Google, Inc # Written by Simon Glass sjg@chromium.org # -# Entry-type module for producing a FIT -#
-from collections import defaultdict, OrderedDict +"""Entry-type module for producing a FIT""" + import libfdt
from binman.entry import Entry, EntryArg @@ -244,16 +243,16 @@ class Entry_fit(Entry_section): then runs mkimage to process it.
Args: - required: True if the data must be present, False if it is OK to - return None + required (bool): True if the data must be present, False if it is OK + to return None
Returns: - Contents of the section (bytes) + bytes: Contents of the section """ - data = self._BuildInput() + data = self._build_input() uniq = self.GetUniqueName() - input_fname = tools.get_output_filename('%s.itb' % uniq) - output_fname = tools.get_output_filename('%s.fit' % uniq) + input_fname = tools.get_output_filename(f'{uniq}.itb') + output_fname = tools.get_output_filename(f'{uniq}.fit') tools.write_file(input_fname, data) tools.write_file(output_fname, data)
@@ -272,14 +271,14 @@ class Entry_fit(Entry_section):
return tools.read_file(output_fname)
- def _BuildInput(self): + def _build_input(self): """Finish the FIT by adding the 'data' properties to it
Arguments: fdt: FIT to update
Returns: - New fdt contents (bytes) + bytes: New fdt contents """ def _process_prop(pname, prop): """Process special properties @@ -301,9 +300,9 @@ class Entry_fit(Entry_section): if not self._fit_default_dt: self.Raise("Generated 'default' node requires default-dt entry argument") if self._fit_default_dt not in self._fdts: - self.Raise("default-dt entry argument '%s' not found in fdt list: %s" % - (self._fit_default_dt, - ', '.join(self._fdts))) + self.Raise( + f"default-dt entry argument '{self._fit_default_dt}' " + f"not found in fdt list: {', '.join(self._fdts)}") seq = self._fdts.index(self._fit_default_dt) val = val[1:].replace('DEFAULT-SEQ', str(seq + 1)) fsw.property_string(pname, val) @@ -350,8 +349,8 @@ class Entry_fit(Entry_section): else: if self._fdts is None: if self._fit_list_prop: - self.Raise("Generator node requires '%s' entry argument" % - self._fit_list_prop.value) + self.Raise('Generator node requires ' + f"'{self._fit_list_prop.value}' entry argument") else: self.Raise("Generator node requires 'fit,fdt-list' property")
@@ -365,10 +364,10 @@ class Entry_fit(Entry_section): first.
Args: - subnode (None): Generator node to process - depth: Current node depth (0 is the base 'fit' node) - in_images: True if this is inside the 'images' node, so that - 'data' properties should be generated + subnode (Node): Generator node to process + depth (int): Current node depth (0 is the base 'fit' node) + in_images (bool): True if this is inside the 'images' node, so + that 'data' properties should be generated """ oper = self._get_operation(subnode) if oper == OP_GEN_FDT_NODES: @@ -378,9 +377,10 @@ class Entry_fit(Entry_section): """Add nodes to the output FIT
Args: - base_node: Base Node of the FIT (with 'description' property) - depth: Current node depth (0 is the base 'fit' node) - node: Current node to process + base_node (Node): Base Node of the FIT (with 'description' + property) + depth (int): Current node depth (0 is the base 'fit' node) + node (Node): Current node to process
There are two cases to deal with: - hash and signature nodes which become part of the FIT @@ -437,7 +437,7 @@ class Entry_fit(Entry_section): according to where they ended up in the packed FIT file.
Args: - image_pos: Position of this entry in the image + image_pos (int): Position of this entry in the image """ super().SetImagePos(image_pos)
@@ -476,7 +476,7 @@ class Entry_fit(Entry_section):
# This should never happen else: # pragma: no cover - self.Raise("%s: missing data properties" % (path)) + self.Raise(f'{path}: missing data properties')
section.SetOffsetSize(offset, size) section.SetImagePos(self.image_pos)

Add a new function to handle reporting errors within a particular subnode of the FIT description. This can be used to make the format of these errors consistent.
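As a rough standalone sketch (raise_subnode here is an invented stand-in, not binman's Raise() machinery), the error format reports the subnode path relative to the fit node::

    def raise_subnode(fit_path, subnode_path, msg):
        """Report an error against a FIT subnode, relative to the fit node"""
        rel_path = subnode_path[len(fit_path) + 1:]
        raise ValueError(f"Node '{fit_path}': subnode '{rel_path}': {msg}")

    # raise_subnode('/binman/fit', '/binman/fit/images/@fdt-SEQ',
    #               "Unknown operation 'unknown'")
    # -> ValueError: Node '/binman/fit': subnode 'images/@fdt-SEQ':
    #    Unknown operation 'unknown'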
Signed-off-by: Simon Glass sjg@chromium.org Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com ---
Changes in v3: - Drop the base_node argument and use self._node instead - Rename function to _raise_subnode() so it is clear it is not like Raise()
tools/binman/etype/fit.py | 31 +++++++++++++++++++++++-------- tools/binman/ftest.py | 2 +- 2 files changed, 24 insertions(+), 9 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 0e8d2cf459..169b55887c 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -184,11 +184,11 @@ class Entry_fit(Entry_section): self._fit_default_dt = self.GetEntryArgsOrProps([EntryArg('default-dt', str)])[0]
- def _get_operation(self, subnode): + def _get_operation(self, base_node, node): """Get the operation referenced by a subnode
Args: - subnode (Node): Subnode (of the FIT) to check + node (Node): Subnode (of the FIT) to check
Returns: int: Operation to perform @@ -196,12 +196,12 @@ class Entry_fit(Entry_section): Raises: ValueError: Invalid operation name """ - oper_name = subnode.props.get('fit,operation') + oper_name = node.props.get('fit,operation') if not oper_name: return OP_GEN_FDT_NODES oper = OPERATIONS.get(oper_name.value) - if not oper: - self.Raise(f"Unknown operation '{oper_name.value}'") + if oper is None: + self._raise_subnode(node, f"Unknown operation '{oper_name.value}'") return oper
def ReadEntries(self): @@ -271,6 +271,19 @@ class Entry_fit(Entry_section):
return tools.read_file(output_fname)
+ def _raise_subnode(self, node, msg): + """Raise an error with a particular FIT subnode + + Args: + node (Node): FIT subnode containing the error + msg (str): Message to report + + Raises: + ValueError, as requested + """ + rel_path = node.path[len(self._node.path) + 1:] + self.Raise(f"subnode '{rel_path}': {msg}") + def _build_input(self): """Finish the FIT by adding the 'data' properties to it
@@ -354,7 +367,7 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
- def _gen_node(subnode, depth, in_images): + def _gen_node(base_node, subnode, depth, in_images): """Generate nodes from a template
This creates one node for each member of self._fdts using the @@ -364,12 +377,14 @@ class Entry_fit(Entry_section): first.
Args: + base_node (Node): Base Node of the FIT (with 'description' + property) subnode (Node): Generator node to process depth (int): Current node depth (0 is the base 'fit' node) in_images (bool): True if this is inside the 'images' node, so that 'data' properties should be generated """ - oper = self._get_operation(subnode) + oper = self._get_operation(base_node, subnode) if oper == OP_GEN_FDT_NODES: _gen_fdt_nodes(subnode, depth, in_images)
@@ -410,7 +425,7 @@ class Entry_fit(Entry_section): elif self.GetImage().generate and subnode.name.startswith('@'): subnode_path = f'{rel_path}/{subnode.name}' entry = self._entries.get(subnode_path) - _gen_node(subnode, depth, in_images) + _gen_node(base_node, subnode, depth, in_images) if entry: del self._entries[subnode_path] else: diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 2b0abd8982..690fa96d07 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -5302,7 +5302,7 @@ fdt fdtmap Extract the devicetree blob from the fdtmap """Check handling of an FDT map when the section cannot be found""" with self.assertRaises(ValueError) as exc: self._DoReadFileDtb('224_fit_bad_oper.dts') - self.assertIn("Node '/binman/fit': Unknown operation 'unknown'", + self.assertIn("Node '/binman/fit': subnode 'images/@fdt-SEQ': Unknown operation 'unknown'", str(exc.exception))
def test_uses_expand_size(self):

It doesn't make sense to use 'subnode' as a function parameter since it is just a 'node' so far as the function is concerned. Update two functions to use 'node' instead.
Signed-off-by: Simon Glass sjg@chromium.org Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com ---
(no changes since v1)
tools/binman/etype/fit.py | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 169b55887c..d2e3b4dea8 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -328,7 +328,7 @@ class Entry_fit(Entry_section): return fsw.property(pname, prop.bytes)
- def _gen_fdt_nodes(subnode, depth, in_images): + def _gen_fdt_nodes(node, depth, in_images): """Generate FDT nodes
This creates one node for each member of self._fdts using the @@ -338,7 +338,7 @@ class Entry_fit(Entry_section): first.
Args: - subnode (None): Generator node to process + node (None): Generator node to process depth: Current node depth (0 is the base 'fit' node) in_images: True if this is inside the 'images' node, so that 'data' properties should be generated @@ -346,10 +346,10 @@ class Entry_fit(Entry_section): if self._fdts: # Generate nodes for each FDT for seq, fdt_fname in enumerate(self._fdts): - node_name = subnode.name[1:].replace('SEQ', str(seq + 1)) + node_name = node.name[1:].replace('SEQ', str(seq + 1)) fname = tools.get_input_filename(fdt_fname + '.dtb') with fsw.add_node(node_name): - for pname, prop in subnode.props.items(): + for pname, prop in node.props.items(): val = prop.bytes.replace( b'NAME', tools.to_bytes(fdt_fname)) val = val.replace( @@ -367,7 +367,7 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
- def _gen_node(base_node, subnode, depth, in_images): + def _gen_node(base_node, node, depth, in_images): """Generate nodes from a template
This creates one node for each member of self._fdts using the @@ -379,14 +379,14 @@ class Entry_fit(Entry_section): Args: base_node (Node): Base Node of the FIT (with 'description' property) - subnode (Node): Generator node to process + node (Node): Generator node to process depth (int): Current node depth (0 is the base 'fit' node) in_images (bool): True if this is inside the 'images' node, so that 'data' properties should be generated """ - oper = self._get_operation(base_node, subnode) + oper = self._get_operation(base_node, node) if oper == OP_GEN_FDT_NODES: - _gen_fdt_nodes(subnode, depth, in_images) + _gen_fdt_nodes(node, depth, in_images)
def _add_node(base_node, depth, node): """Add nodes to the output FIT

The current implementation sets up the FIT entries but then deletes the 'generator' ones so they don't appear in the final image.
This is a bit clumsy. We cannot build the image more than once, since the generator entries are lost during the first build. Binman requires that calling BuildSectionData() multiple times returns a valid result each time.
Keep a separate, private list which includes the generator nodes and use that where needed, to correct this problem. Ensure that the missing list includes removed generator entries too.
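The idea can be sketched on its own; FitSketch and its methods are invented names for illustration, not binman's classes::

    class FitSketch:
        """Minimal model: a public entry dict plus a private copy"""
        def __init__(self, entries):
            self._entries = dict(entries)        # used for packing the image
            self._priv_entries = dict(entries)   # keeps generator templates

        def build_section_data(self):
            # Hide '@...' generator templates from the packed output, but
            # only in the public dict, so nothing is lost for later builds
            for path in list(self._entries):
                if path.rsplit('/', 1)[-1].startswith('@'):
                    del self._entries[path]
            return sorted(self._entries)

        def all_entries(self):
            # Used by checks (e.g. for missing blobs) that must still see
            # the removed templates
            return sorted(self._priv_entries)

    fit = FitSketch({'images/u-boot': 0, 'images/@atf-SEQ': 1})
    assert fit.build_section_data() == fit.build_section_data()
    assert 'images/@atf-SEQ' in fit.all_entries()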
Signed-off-by: Simon Glass sjg@chromium.org Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com ---
(no changes since v1)
tools/binman/etype/fit.py | 33 ++++++++++++++++++++++++++++----- 1 file changed, 28 insertions(+), 5 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index d2e3b4dea8..49f89851b0 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -163,12 +163,17 @@ class Entry_fit(Entry_section): key: relative path to entry Node (from the base of the FIT) value: Entry_section object comprising the contents of this node + _priv_entries: Internal copy of _entries which includes 'generator' + entries which are used to create the FIT, but should not be + processed as real entries. This is set up once we have the + entries """ super().__init__(section, etype, node) self._fit = None self._fit_props = {} self._fdts = None self.mkimage = None + self._priv_entries = {}
def ReadNode(self): super().ReadNode() @@ -236,6 +241,10 @@ class Entry_fit(Entry_section):
_add_entries(self._node, 0, self._node)
+ # Keep a copy of all entries, including generator entries, since these are + # removed from self._entries later. + self._priv_entries = dict(self._entries) + def BuildSectionData(self, required): """Build FIT entry contents
@@ -411,11 +420,12 @@ class Entry_fit(Entry_section):
has_images = depth == 2 and in_images if has_images: - entry = self._entries[rel_path] + entry = self._priv_entries[rel_path] data = entry.GetData() fsw.property('data', bytes(data))
for subnode in node.subnodes: + subnode_path = f'{rel_path}/{subnode.name}' if has_images and not (subnode.name.startswith('hash') or subnode.name.startswith('signature')): # This subnode is a content node not meant to appear in @@ -423,11 +433,11 @@ class Entry_fit(Entry_section): # fsw.add_node() or _add_node() for it. pass elif self.GetImage().generate and subnode.name.startswith('@'): - subnode_path = f'{rel_path}/{subnode.name}' - entry = self._entries.get(subnode_path) _gen_node(base_node, subnode, depth, in_images) - if entry: - del self._entries[subnode_path] + # This is a generator (template) entry, so remove it from + # the list of entries used by PackEntries(), etc. Otherwise + # it will appear in the binman output + to_remove.append(subnode_path) else: with fsw.add_node(subnode.name): _add_node(base_node, depth + 1, subnode) @@ -436,10 +446,16 @@ class Entry_fit(Entry_section): # entry node fsw = libfdt.FdtSw() fsw.finish_reservemap() + to_remove = [] with fsw.add_node(''): _add_node(self._node, 0, self._node) fdt = fsw.as_fdt()
+ # Remove generator entries from the main list + for path in to_remove: + if path in self._entries: + del self._entries[path] + # Pack this new FDT and scan it so we can add the data later fdt.pack() data = fdt.as_bytearray() @@ -499,3 +515,10 @@ class Entry_fit(Entry_section): def AddBintools(self, btools): super().AddBintools(btools) self.mkimage = self.AddBintool(btools, 'mkimage') + + def CheckMissing(self, missing_list): + # We must use our private entry list for this since generator nodes + # which are removed from self._entries will otherwise not show up as + # missing + for entry in self._priv_entries.values(): + entry.CheckMissing(missing_list)

Some boards need to load an ELF file using the 'loadables' property, but the file has segments at different memory addresses. This means that it cannot be supplied as a flat binary.
Allow generating a separate node in the FIT for each segment in the ELF, with a different load address for each.
Also add checks that the fit,xxx directives are valid.
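For illustration, a sketch of the per-segment generation, assuming binman's elf.read_loadable_segments() helper as used in the patch below; split_elf_nodes itself is an invented name::

    from binman import elf

    def split_elf_nodes(elf_data, template_name):
        """Yield (node_name, props) for each loadable ELF segment

        Args:
            elf_data (bytes): Contents of the ELF file
            template_name (str): Template node name, e.g. '@atf-SEQ'
        """
        segments, entry = elf.read_loadable_segments(elf_data)
        for seq, load_addr, data in segments:
            props = {'load': load_addr, 'data': data}
            if seq == 0:
                props['entry'] = entry   # only the first node gets 'entry'
            yield template_name[1:].replace('SEQ', str(seq + 1)), props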
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v3: - Fix 'segmnet' typo - Use seq == 0 instead of 'not seq'
Changes in v2: - Rewrite this to use the new FIT entry-type implementation - Rename op-tee to tee-os
tools/binman/entries.rst | 146 ++++++++++++ tools/binman/etype/fit.py | 229 ++++++++++++++++++- tools/binman/ftest.py | 147 ++++++++++++ tools/binman/test/226_fit_split_elf.dts | 67 ++++++ tools/binman/test/227_fit_bad_dir.dts | 9 + tools/binman/test/228_fit_bad_dir_config.dts | 9 + 6 files changed, 597 insertions(+), 10 deletions(-) create mode 100644 tools/binman/test/226_fit_split_elf.dts create mode 100644 tools/binman/test/227_fit_bad_dir.dts create mode 100644 tools/binman/test/228_fit_bad_dir_config.dts
diff --git a/tools/binman/entries.rst b/tools/binman/entries.rst index 484cde5c80..be8de5560c 100644 --- a/tools/binman/entries.rst +++ b/tools/binman/entries.rst @@ -612,6 +612,9 @@ gen-fdt-nodes Generate FDT nodes as above. This is the default if there is no `fit,operation` property.
+split-elf + Split an ELF file into a separate node for each segment. + Generating nodes from an FDT list (gen-fdt-nodes) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -655,6 +658,149 @@ for each of your two files. Note that if no devicetree files are provided (with '-a of-list' as above) then no nodes will be generated.
+Generating nodes from an ELF file (split-elf) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This uses the node as a template to generate multiple nodes. The following +special properties are available: + +split-elf + Split an ELF file into a separate node for each segment. This uses the + node as a template to generate multiple nodes. The following special + properties are available: + + fit,load + Generates a `load = <...>` property with the load address of the + segment + + fit,entry + Generates a `entry = <...>` property with the entry address of the + ELF. This is only produced for the first entry + + fit,data + Generates a `data = <...>` property with the contents of the segment + + fit,loadables + Generates a `loadable = <...>` property with a list of the generated + nodes (including all nodes if this operation is used multiple times) + + +Here is an example showing ATF, TEE and a device tree all combined:: + + fit { + description = "test-desc"; + #address-cells = <1>; + fit,fdt-list = "of-list"; + + images { + u-boot { + description = "U-Boot (64-bit)"; + type = "standalone"; + os = "U-Boot"; + arch = "arm64"; + compression = "none"; + load = <CONFIG_SYS_TEXT_BASE>; + u-boot-nodtb { + }; + }; + @fdt-SEQ { + description = "fdt-NAME.dtb"; + type = "flat_dt"; + compression = "none"; + }; + @atf-SEQ { + fit,operation = "split-elf"; + description = "ARM Trusted Firmware"; + type = "firmware"; + arch = "arm64"; + os = "arm-trusted-firmware"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + atf-bl31 { + }; + }; + + @tee-SEQ { + fit,operation = "split-elf"; + description = "TEE"; + type = "tee"; + arch = "arm64"; + os = "tee"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + tee-os { + }; + }; + }; + + configurations { + default = "@config-DEFAULT-SEQ"; + @config-SEQ { + description = "conf-NAME.dtb"; + fdt = "fdt-SEQ"; + firmware = "u-boot"; + fit,loadables; + }; + }; + }; + +If ATF-BL31 is available, this generates a node for each segment in the +ELF file, for example:: + + images { + atf-1 { + data = <...contents of first segment...>; + data-offset = <0x00000000>; + entry = <0x00040000>; + load = <0x00040000>; + compression = "none"; + os = "arm-trusted-firmware"; + arch = "arm64"; + type = "firmware"; + description = "ARM Trusted Firmware"; + }; + atf-2 { + data = <...contents of second segment...>; + load = <0xff3b0000>; + compression = "none"; + os = "arm-trusted-firmware"; + arch = "arm64"; + type = "firmware"; + description = "ARM Trusted Firmware"; + }; + }; + +The same applies for OP-TEE if that is available. + +If each binary is not available, the relevant template node (@atf-SEQ or +@tee-SEQ) is removed from the output. + +This also generates a `config-xxx` node for each device tree in `of-list`. +Note that the U-Boot build system uses `-a of-list=$(CONFIG_OF_LIST)` +so you can use `CONFIG_OF_LIST` to define that list. In this example it is +set up for `firefly-rk3399` with a single device tree and the default set +with `-a default-dt=$(CONFIG_DEFAULT_DEVICE_TREE)`, so the resulting output +is:: + + configurations { + default = "config-1"; + config-1 { + loadables = "atf-1", "atf-2", "atf-3", "tee-1", "tee-2"; + description = "rk3399-firefly.dtb"; + fdt = "fdt-1"; + firmware = "u-boot"; + }; + }; + +U-Boot SPL can then load the firmware (U-Boot proper) and all the loadables +(ATF and TEE), then proceed with the boot. +
Entry: fmap: An entry which contains an Fmap section diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 49f89851b0..85d8cfcc2b 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -9,14 +9,16 @@ import libfdt
from binman.entry import Entry, EntryArg from binman.etype.section import Entry_section +from binman import elf from dtoc import fdt_util from dtoc.fdt import Fdt from patman import tools
# Supported operations, with the fit,operation property -OP_GEN_FDT_NODES = range(1) +OP_GEN_FDT_NODES, OP_SPLIT_ELF = range(2) OPERATIONS = { 'gen-fdt-nodes': OP_GEN_FDT_NODES, + 'split-elf': OP_SPLIT_ELF, }
class Entry_fit(Entry_section): @@ -112,6 +114,9 @@ class Entry_fit(Entry_section): Generate FDT nodes as above. This is the default if there is no `fit,operation` property.
+ split-elf + Split an ELF file into a separate node for each segment. + Generating nodes from an FDT list (gen-fdt-nodes) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -154,6 +159,149 @@ class Entry_fit(Entry_section):
Note that if no devicetree files are provided (with '-a of-list' as above) then no nodes will be generated. + + Generating nodes from an ELF file (split-elf) + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + This uses the node as a template to generate multiple nodes. The following + special properties are available: + + split-elf + Split an ELF file into a separate node for each segment. This uses the + node as a template to generate multiple nodes. The following special + properties are available: + + fit,load + Generates a `load = <...>` property with the load address of the + segment + + fit,entry + Generates a `entry = <...>` property with the entry address of the + ELF. This is only produced for the first entry + + fit,data + Generates a `data = <...>` property with the contents of the segment + + fit,loadables + Generates a `loadable = <...>` property with a list of the generated + nodes (including all nodes if this operation is used multiple times) + + + Here is an example showing ATF, TEE and a device tree all combined:: + + fit { + description = "test-desc"; + #address-cells = <1>; + fit,fdt-list = "of-list"; + + images { + u-boot { + description = "U-Boot (64-bit)"; + type = "standalone"; + os = "U-Boot"; + arch = "arm64"; + compression = "none"; + load = <CONFIG_SYS_TEXT_BASE>; + u-boot-nodtb { + }; + }; + @fdt-SEQ { + description = "fdt-NAME.dtb"; + type = "flat_dt"; + compression = "none"; + }; + @atf-SEQ { + fit,operation = "split-elf"; + description = "ARM Trusted Firmware"; + type = "firmware"; + arch = "arm64"; + os = "arm-trusted-firmware"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + atf-bl31 { + }; + }; + + @tee-SEQ { + fit,operation = "split-elf"; + description = "TEE"; + type = "tee"; + arch = "arm64"; + os = "tee"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + tee-os { + }; + }; + }; + + configurations { + default = "@config-DEFAULT-SEQ"; + @config-SEQ { + description = "conf-NAME.dtb"; + fdt = "fdt-SEQ"; + firmware = "u-boot"; + fit,loadables; + }; + }; + }; + + If ATF-BL31 is available, this generates a node for each segment in the + ELF file, for example:: + + images { + atf-1 { + data = <...contents of first segment...>; + data-offset = <0x00000000>; + entry = <0x00040000>; + load = <0x00040000>; + compression = "none"; + os = "arm-trusted-firmware"; + arch = "arm64"; + type = "firmware"; + description = "ARM Trusted Firmware"; + }; + atf-2 { + data = <...contents of second segment...>; + load = <0xff3b0000>; + compression = "none"; + os = "arm-trusted-firmware"; + arch = "arm64"; + type = "firmware"; + description = "ARM Trusted Firmware"; + }; + }; + + The same applies for OP-TEE if that is available. + + If each binary is not available, the relevant template node (@atf-SEQ or + @tee-SEQ) is removed from the output. + + This also generates a `config-xxx` node for each device tree in `of-list`. + Note that the U-Boot build system uses `-a of-list=$(CONFIG_OF_LIST)` + so you can use `CONFIG_OF_LIST` to define that list. 
In this example it is + set up for `firefly-rk3399` with a single device tree and the default set + with `-a default-dt=$(CONFIG_DEFAULT_DEVICE_TREE)`, so the resulting output + is:: + + configurations { + default = "config-1"; + config-1 { + loadables = "atf-1", "atf-2", "atf-3", "tee-1", "tee-2"; + description = "rk3399-firefly.dtb"; + fdt = "fdt-1"; + firmware = "u-boot"; + }; + }; + + U-Boot SPL can then load the firmware (U-Boot proper) and all the loadables + (ATF and TEE), then proceed with the boot. """ def __init__(self, section, etype, node): """ @@ -167,6 +315,7 @@ class Entry_fit(Entry_section): entries which are used to create the FIT, but should not be processed as real entries. This is set up once we have the entries + _loadables: List of generated split-elf nodes, each a node name """ super().__init__(section, etype, node) self._fit = None @@ -174,6 +323,7 @@ class Entry_fit(Entry_section): self._fdts = None self.mkimage = None self._priv_entries = {} + self._loadables = []
def ReadNode(self): super().ReadNode() @@ -337,7 +487,7 @@ class Entry_fit(Entry_section): return fsw.property(pname, prop.bytes)
- def _gen_fdt_nodes(node, depth, in_images): + def _gen_fdt_nodes(base_node, node, depth, in_images): """Generate FDT nodes
This creates one node for each member of self._fdts using the @@ -359,11 +509,20 @@ class Entry_fit(Entry_section): fname = tools.get_input_filename(fdt_fname + '.dtb') with fsw.add_node(node_name): for pname, prop in node.props.items(): - val = prop.bytes.replace( - b'NAME', tools.to_bytes(fdt_fname)) - val = val.replace( - b'SEQ', tools.to_bytes(str(seq + 1))) - fsw.property(pname, val) + if pname == 'fit,loadables': + val = '\0'.join(self._loadables) + '\0' + fsw.property('loadables', val.encode('utf-8')) + elif pname == 'fit,operation': + pass + elif pname.startswith('fit,'): + self._raise_subnode( + node, f"Unknown directive '{pname}'") + else: + val = prop.bytes.replace( + b'NAME', tools.to_bytes(fdt_fname)) + val = val.replace( + b'SEQ', tools.to_bytes(str(seq + 1))) + fsw.property(pname, val)
# Add data for 'images' nodes (but not 'config') if depth == 1 and in_images: @@ -376,7 +535,43 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
- def _gen_node(base_node, node, depth, in_images): + def _gen_split_elf(base_node, node, elf_data, missing): + """Add nodes for the ELF file, one per group of contiguous segments + + Args: + base_node (Node): Template node from the binman definition + node (Node): Node to replace (in the FIT being built) + data (bytes): ELF-format data to process (may be empty) + missing (bool): True if any of the data is missing + + """ + # If any pieces are missing, skip this. The missing entries will + # show an error + if not missing: + try: + segments, entry = elf.read_loadable_segments(elf_data) + except ValueError as exc: + self._raise_subnode(node, + f'Failed to read ELF file: {str(exc)}') + for (seq, start, data) in segments: + node_name = node.name[1:].replace('SEQ', str(seq + 1)) + with fsw.add_node(node_name): + loadables.append(node_name) + for pname, prop in node.props.items(): + if not pname.startswith('fit,'): + fsw.property(pname, prop.bytes) + elif pname == 'fit,load': + fsw.property_u32('load', start) + elif pname == 'fit,entry': + if seq == 0: + fsw.property_u32('entry', entry) + elif pname == 'fit,data': + fsw.property('data', bytes(data)) + elif pname != 'fit,operation': + self._raise_subnode( + node, f"Unknown directive '{pname}'") + + def _gen_node(base_node, node, depth, in_images, entry): """Generate nodes from a template
This creates one node for each member of self._fdts using the @@ -395,7 +590,18 @@ class Entry_fit(Entry_section): """ oper = self._get_operation(base_node, node) if oper == OP_GEN_FDT_NODES: - _gen_fdt_nodes(node, depth, in_images) + _gen_fdt_nodes(base_node, node, depth, in_images) + elif oper == OP_SPLIT_ELF: + # Entry_section.ObtainContents() either returns True or + # raises an exception. + data = None + missing_list = [] + entry.ObtainContents() + entry.Pack(0) + data = entry.GetData() + entry.CheckMissing(missing_list) + + _gen_split_elf(base_node, node, data, bool(missing_list))
def _add_node(base_node, depth, node): """Add nodes to the output FIT @@ -433,7 +639,8 @@ class Entry_fit(Entry_section): # fsw.add_node() or _add_node() for it. pass elif self.GetImage().generate and subnode.name.startswith('@'): - _gen_node(base_node, subnode, depth, in_images) + entry = self._priv_entries.get(subnode_path) + _gen_node(base_node, subnode, depth, in_images, entry) # This is a generator (template) entry, so remove it from # the list of entries used by PackEntries(), etc. Otherwise # it will appear in the binman output @@ -447,8 +654,10 @@ class Entry_fit(Entry_section): fsw = libfdt.FdtSw() fsw.finish_reservemap() to_remove = [] + loadables = [] with fsw.add_node(''): _add_node(self._node, 0, self._node) + self._loadables = loadables fdt = fsw.as_fdt()
# Remove generator entries from the main list diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 690fa96d07..7ce9f9a7d8 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -202,6 +202,13 @@ class TestFunctional(unittest.TestCase):
TestFunctional._MakeInputFile('env.txt', ENV_DATA)
+ # ELF file with two sections in different parts of memory, used for both + # ATF and OP_TEE + TestFunctional._MakeInputFile('bl31.elf', + tools.read_file(cls.ElfTestFile('elf_sections'))) + TestFunctional._MakeInputFile('tee.elf', + tools.read_file(cls.ElfTestFile('elf_sections'))) + cls.have_lz4 = comp_util.HAVE_LZ4
@classmethod @@ -5323,6 +5330,146 @@ fdt fdtmap Extract the devicetree blob from the fdtmap err, "Image '.*' has faked external blobs and is non-functional: .*")
+ def testFitSplitElf(self): + """Test an image with an FIT with an split-elf operation""" + entry_args = { + 'of-list': 'test-fdt1 test-fdt2', + 'default-dt': 'test-fdt2', + 'atf-bl31-path': 'bl31.elf', + 'tee-os-path': 'tee.elf', + } + test_subdir = os.path.join(self._indir, TEST_FDT_SUBDIR) + data = self._DoReadFileDtb( + '226_fit_split_elf.dts', + entry_args=entry_args, + extra_indirs=[test_subdir])[0] + + self.assertEqual(U_BOOT_NODTB_DATA, data[-len(U_BOOT_NODTB_DATA):]) + fit_data = data[len(U_BOOT_DATA):-len(U_BOOT_NODTB_DATA)] + + base_keys = {'description', 'type', 'arch', 'os', 'compression', + 'data', 'load'} + dtb = fdt.Fdt.FromData(fit_data) + dtb.Scan() + + elf_data = tools.read_file(os.path.join(self._indir, 'bl31.elf')) + segments, entry = elf.read_loadable_segments(elf_data) + + # We assume there are two segments + self.assertEquals(2, len(segments)) + + atf1 = dtb.GetNode('/images/atf-1') + _, start, data = segments[0] + self.assertEqual(base_keys | {'entry'}, atf1.props.keys()) + self.assertEqual(entry, + fdt_util.fdt32_to_cpu(atf1.props['entry'].value)) + self.assertEqual(start, + fdt_util.fdt32_to_cpu(atf1.props['load'].value)) + self.assertEqual(data, atf1.props['data'].bytes) + + atf2 = dtb.GetNode('/images/atf-2') + self.assertEqual(base_keys, atf2.props.keys()) + _, start, data = segments[1] + self.assertEqual(start, + fdt_util.fdt32_to_cpu(atf2.props['load'].value)) + self.assertEqual(data, atf2.props['data'].bytes) + + conf = dtb.GetNode('/configurations') + self.assertEqual({'default'}, conf.props.keys()) + + for subnode in conf.subnodes: + self.assertEqual({'description', 'fdt', 'loadables'}, + subnode.props.keys()) + self.assertEqual( + ['atf-1', 'atf-2', 'tee-1', 'tee-2'], + fdt_util.GetStringList(subnode, 'loadables')) + + def _check_bad_fit(self, dts): + """Check a bad FIT + + This runs with the given dts and returns the assertion raised + + Args: + dts (str): dts filename to use + + Returns: + str: Assertion string raised + """ + entry_args = { + 'of-list': 'test-fdt1 test-fdt2', + 'default-dt': 'test-fdt2', + 'atf-bl31-path': 'bl31.elf', + 'tee-os-path': 'tee.elf', + } + test_subdir = os.path.join(self._indir, TEST_FDT_SUBDIR) + with self.assertRaises(ValueError) as exc: + self._DoReadFileDtb(dts, entry_args=entry_args, + extra_indirs=[test_subdir])[0] + return str(exc.exception) + + def testFitSplitElfBadElf(self): + """Test a FIT split-elf operation with an invalid ELF file""" + TestFunctional._MakeInputFile('bad.elf', tools.get_bytes(100, 100)) + entry_args = { + 'of-list': 'test-fdt1 test-fdt2', + 'default-dt': 'test-fdt2', + 'atf-bl31-path': 'bad.elf', + 'tee-os-path': 'tee.elf', + } + test_subdir = os.path.join(self._indir, TEST_FDT_SUBDIR) + with self.assertRaises(ValueError) as exc: + self._DoReadFileDtb( + '226_fit_split_elf.dts', + entry_args=entry_args, + extra_indirs=[test_subdir])[0] + self.assertIn( + "Node '/binman/fit': subnode 'images/@atf-SEQ': Failed to read ELF file: Magic number does not match", + str(exc.exception)) + + def testFitSplitElfBadDirective(self): + """Test a FIT split-elf invalid fit,xxx directive in an image node""" + err = self._check_bad_fit('227_fit_bad_dir.dts') + self.assertIn( + "Node '/binman/fit': subnode 'images/@atf-SEQ': Unknown directive 'fit,something'", + err) + + def testFitSplitElfBadDirectiveConfig(self): + """Test a FIT split-elf with invalid fit,xxx directive in config""" + err = self._check_bad_fit('228_fit_bad_dir_config.dts') + self.assertEqual( + "Node '/binman/fit': subnode 
'configurations/@config-SEQ': Unknown directive 'fit,config'", + err) + + def checkFitSplitElf(self, **kwargs): + """Test an split-elf FIT with a missing ELF file""" + entry_args = { + 'of-list': 'test-fdt1 test-fdt2', + 'default-dt': 'test-fdt2', + 'atf-bl31-path': 'bl31.elf', + 'tee-os-path': 'missing.elf', + } + test_subdir = os.path.join(self._indir, TEST_FDT_SUBDIR) + with test_util.capture_sys_output() as (stdout, stderr): + self._DoTestFile( + '226_fit_split_elf.dts', entry_args=entry_args, + extra_indirs=[test_subdir], **kwargs) + err = stderr.getvalue() + return err + + def testFitSplitElfMissing(self): + """Test an split-elf FIT with a missing ELF file""" + err = self.checkFitSplitElf(allow_missing=True) + self.assertRegex( + err, + "Image '.*' is missing external blobs and is non-functional: .*") + + def testFitSplitElfFaked(self): + """Test an split-elf FIT with faked ELF file""" + err = self.checkFitSplitElf(allow_missing=True, allow_fake_blobs=True) + self.assertRegex( + err, + "Image '.*' is missing external blobs and is non-functional: .*") +
if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/226_fit_split_elf.dts b/tools/binman/test/226_fit_split_elf.dts new file mode 100644 index 0000000000..fab15338b2 --- /dev/null +++ b/tools/binman/test/226_fit_split_elf.dts @@ -0,0 +1,67 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + u-boot { + }; + fit { + description = "test-desc"; + #address-cells = <1>; + fit,fdt-list = "of-list"; + + images { + @fdt-SEQ { + description = "fdt-NAME.dtb"; + type = "flat_dt"; + compression = "none"; + }; + atf: @atf-SEQ { + fit,operation = "split-elf"; + description = "ARM Trusted Firmware"; + type = "firmware"; + arch = "arm64"; + os = "arm-trusted-firmware"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + atf-bl31 { + }; + }; + + @tee-SEQ { + fit,operation = "split-elf"; + description = "TEE"; + type = "tee"; + arch = "arm64"; + os = "tee"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + tee-os { + }; + }; + }; + + configurations { + default = "@config-DEFAULT-SEQ"; + config: @config-SEQ { + description = "conf-NAME.dtb"; + fdt = "fdt-SEQ"; + fit,loadables; + }; + }; + }; + + u-boot-nodtb { + }; + }; +}; diff --git a/tools/binman/test/227_fit_bad_dir.dts b/tools/binman/test/227_fit_bad_dir.dts new file mode 100644 index 0000000000..51f4816c4c --- /dev/null +++ b/tools/binman/test/227_fit_bad_dir.dts @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +#include "226_fit_split_elf.dts" + +&atf { + fit,something = "bad"; +}; diff --git a/tools/binman/test/228_fit_bad_dir_config.dts b/tools/binman/test/228_fit_bad_dir_config.dts new file mode 100644 index 0000000000..825a346c3e --- /dev/null +++ b/tools/binman/test/228_fit_bad_dir_config.dts @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +#include "226_fit_split_elf.dts" + +&config { + fit,config = "bad"; +};

On 06/03/2022 06:19, Simon Glass wrote:
Some boards need to load an ELF file using the 'loadables' property, but the file has segments at different memory addresses. This means that it cannot be supplied as a flat binary.
Allow generating a separate node in the FIT for each segment in the ELF, with a different load address for each.
Also add checks that the fit,xxx directives are valid.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v3:
- Fix 'segmnet' typo
- Use seq == 0 instead of 'not seq'
Changes in v2:
- Rewrite this to use the new FIT entry-type implementation
- Rename op-tee to tee-os
tools/binman/entries.rst | 146 ++++++++++++ tools/binman/etype/fit.py | 229 ++++++++++++++++++- tools/binman/ftest.py | 147 ++++++++++++ tools/binman/test/226_fit_split_elf.dts | 67 ++++++ tools/binman/test/227_fit_bad_dir.dts | 9 + tools/binman/test/228_fit_bad_dir_config.dts | 9 + 6 files changed, 597 insertions(+), 10 deletions(-) create mode 100644 tools/binman/test/226_fit_split_elf.dts create mode 100644 tools/binman/test/227_fit_bad_dir.dts create mode 100644 tools/binman/test/228_fit_bad_dir_config.dts
I still can't like this enough to add a Reviewed-by, but I guess you'll apply the series up to and maybe including this, so:
Acked-by: Alper Nebi Yasak alpernebiyasak@gmail.com

Hi Alper,
On Thu, 10 Mar 2022 at 12:36, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 06:19, Simon Glass wrote:
Some boards need to load an ELF file using the 'loadables' property, but the file has segments at different memory addresses. This means that it cannot be supplied as a flat binary.
Allow generating a separate node in the FIT for each segment in the ELF, with a different load address for each.
Also add checks that the fit,xxx directives are valid.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v3:
- Fix 'segmnet' typo
- Use seq == 0 instead of 'not seq'
Changes in v2:
- Rewrite this to use the new FIT entry-type implementation
- Rename op-tee to tee-os
tools/binman/entries.rst | 146 ++++++++++++ tools/binman/etype/fit.py | 229 ++++++++++++++++++- tools/binman/ftest.py | 147 ++++++++++++ tools/binman/test/226_fit_split_elf.dts | 67 ++++++ tools/binman/test/227_fit_bad_dir.dts | 9 + tools/binman/test/228_fit_bad_dir_config.dts | 9 + 6 files changed, 597 insertions(+), 10 deletions(-) create mode 100644 tools/binman/test/226_fit_split_elf.dts create mode 100644 tools/binman/test/227_fit_bad_dir.dts create mode 100644 tools/binman/test/228_fit_bad_dir_config.dts
I still can't like this enough to add a Reviewed-by, but I guess you'll apply the series up to and maybe including this, so:
Acked-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Yes OK I will do that in the next few days.
Then I will await your thoughts on next steps.
Regards, Simon

This board uses SPL_FIT so does not need to support loading a raw image. Drop it to avoid binman trying to insert a symbol which has no value.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
configs/evb-rk3288_defconfig | 1 + 1 file changed, 1 insertion(+)
diff --git a/configs/evb-rk3288_defconfig b/configs/evb-rk3288_defconfig index 97d4c14f65..6143c4d989 100644 --- a/configs/evb-rk3288_defconfig +++ b/configs/evb-rk3288_defconfig @@ -24,6 +24,7 @@ CONFIG_USE_PREBOOT=y CONFIG_DEFAULT_FDT_FILE="rk3288-evb-rk808.dtb" CONFIG_SILENT_CONSOLE=y CONFIG_DISPLAY_BOARDINFO_LATE=y +# CONFIG_SPL_RAW_IMAGE_SUPPORT is not set CONFIG_SPL_STACK_R=y CONFIG_SPL_STACK_R_MALLOC_SIMPLE_LEN=0x10000 CONFIG_SPL_OPTEE_IMAGE=y

On 06/03/2022 06:19, Simon Glass wrote:
This board uses SPL_FIT so does not need to support loading a raw image.
This sounds OK to me, but...
Drop it to avoid binman trying to insert a symbol which has no value.
I couldn't figure out how it leads to this in the code. I guess some ifdefs or optimization steps dropping the binman symbols from the ELF file?
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
configs/evb-rk3288_defconfig | 1 + 1 file changed, 1 insertion(+)
Anyway, I have no experience with this board or with loading raw images, I barely have a working knowledge of binman symbols, so I'm refraining from adding Reviewed-by etc. (in case you were waiting for that).

Hi Alper,
On Thu, 10 Mar 2022 at 12:36, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 06:19, Simon Glass wrote:
This board uses SPL_FIT so does not need to support loading a raw image.
This sounds OK to me, but...
Drop it to avoid binman trying to insert a symbol which has no value.
I couldn't figure out how it leads to this in the code. I guess some ifdefs or optimization steps dropping the binman symbols from the ELF file?
Well the raw-image method uses a symbol in SPL which holds the image-pos of U-Boot. If we disable that, then the symbol is not used and we don't have to set it. See spl_ram_load_image().
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
configs/evb-rk3288_defconfig | 1 + 1 file changed, 1 insertion(+)
Anyway, I have no experience with this board or with loading raw images, I barely have a working knowledge of binman symbols, so I'm refraining from adding Reviewed-by etc. (in case you were waiting for that).
Sure, thanks for looking at all this. I'll get it in shortly and we can figure out where to go from there.
Regards, Simon

On 12/03/2022 08:02, Simon Glass wrote:
On Thu, 10 Mar 2022 at 12:36, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 06:19, Simon Glass wrote:
This board uses SPL_FIT so does not need to support loading a raw image.
This sounds OK to me, but...
Drop it to avoid binman trying to insert a symbol which has no value.
I couldn't figure out how it leads to this in the code. I guess some ifdefs or optimization steps dropping the binman symbols from the ELF file?
Well the raw-image method uses a symbol in SPL which holds the image-pos of U-Boot. If we disable that, then the symbol is not used and we don't have to set it. See spl_ram_load_image().
Can you double-check this? Looks to me like the symbol isn't set both before/after this patch only because there's no image using u-boot-spl or u-boot-tpl entries. At u-boot-dm/fit-working, binman verbose output shows symbols being set as a later patch adds such entries.
But, I did get u-boot-any symbols to disappear by disabling CONFIG_{SPL,TPL}_BINMAN_SYMBOLS, and u-boot-spl symbols by editing common/spl/spl.c to make that config's ifdef include those symbols.

Hi Alper,
On Mon, 14 Mar 2022 at 15:32, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 12/03/2022 08:02, Simon Glass wrote:
On Thu, 10 Mar 2022 at 12:36, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 06:19, Simon Glass wrote:
This board uses SPL_FIT so does not need to support loading a raw image.
This sounds OK to me, but...
Drop it to avoid binman trying to insert a symbol which has no value.
I couldn't figure out how it leads to this in the code. I guess some ifdefs or optimization steps dropping the binman symbols from the ELF file?
Well the raw-image method uses a symbol in SPL which holds the image-pos of U-Boot. If we disable that, then the symbol is not used and we don't have to set it. See spl_ram_load_image().
Can you double-check this? Looks to me like the symbol isn't set both before/after this patch only because there's no image using u-boot-spl or u-boot-tpl entries. At u-boot-dm/fit-working, binman verbose output shows symbols being set as a later patch adds such entries.
Yes it is needed for the later patch. I have to put this patch before that one to avoid a breakage.
But, I did get u-boot-any symbols to disappear by disabling CONFIG_{SPL,TPL}_BINMAN_SYMBOLS, and u-boot-spl symbols by editing common/spl/spl.c to make that config's ifdef include those symbols.
Yes that would work.
Regards, Simon

Include the rockchip-u-boot.dtsi file with 64-bit boards and enable binman so that these boards can also use it, rather than using special Makefile rules and scripts.
This does not change the Makefile nor remove any scripts, but sets it up so that this is possible.
Signed-off-by: Simon Glass sjg@chromium.org Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com ---
(no changes since v1)
arch/arm/dts/px30-u-boot.dtsi | 2 ++ arch/arm/dts/rk3308-u-boot.dtsi | 2 ++ arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi | 2 ++ arch/arm/dts/rk3328-u-boot.dtsi | 2 ++ arch/arm/dts/rk3368-u-boot.dtsi | 1 + arch/arm/dts/rk3399-u-boot.dtsi | 5 +++-- arch/arm/dts/rk3568-u-boot.dtsi | 2 ++ arch/arm/mach-rockchip/Kconfig | 6 ++++++ 8 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/arch/arm/dts/px30-u-boot.dtsi b/arch/arm/dts/px30-u-boot.dtsi index f102b2aef4..462eaf68f8 100644 --- a/arch/arm/dts/px30-u-boot.dtsi +++ b/arch/arm/dts/px30-u-boot.dtsi @@ -3,6 +3,8 @@ * (C) Copyright 2019 Rockchip Electronics Co., Ltd */
+#include "rockchip-u-boot.dtsi" + / { aliases { mmc0 = &emmc; diff --git a/arch/arm/dts/rk3308-u-boot.dtsi b/arch/arm/dts/rk3308-u-boot.dtsi index 4bfad31fba..ab5bfc2ce9 100644 --- a/arch/arm/dts/rk3308-u-boot.dtsi +++ b/arch/arm/dts/rk3308-u-boot.dtsi @@ -3,6 +3,8 @@ *(C) Copyright 2019 Rockchip Electronics Co., Ltd */
+#include "rockchip-u-boot.dtsi" + / { aliases { mmc0 = &emmc; diff --git a/arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi b/arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi index 95f2652494..16c33735eb 100644 --- a/arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi +++ b/arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi @@ -3,6 +3,8 @@ * Copyright (c) 2020 Theobroma Systems Design und Consulting GmbH */
+#include "rockchip-u-boot.dtsi" + / { chosen { u-boot,spl-boot-order = &sdmmc; diff --git a/arch/arm/dts/rk3328-u-boot.dtsi b/arch/arm/dts/rk3328-u-boot.dtsi index 1633558264..d4a7540a92 100644 --- a/arch/arm/dts/rk3328-u-boot.dtsi +++ b/arch/arm/dts/rk3328-u-boot.dtsi @@ -3,6 +3,8 @@ * (C) Copyright 2019 Rockchip Electronics Co., Ltd */
+#include "rockchip-u-boot.dtsi" + / { aliases { mmc0 = &emmc; diff --git a/arch/arm/dts/rk3368-u-boot.dtsi b/arch/arm/dts/rk3368-u-boot.dtsi index 2767c2678d..b37da4e851 100644 --- a/arch/arm/dts/rk3368-u-boot.dtsi +++ b/arch/arm/dts/rk3368-u-boot.dtsi @@ -3,6 +3,7 @@ * Copyright (c) 2020 Theobroma Systems Design und Consulting GmbH */
+#include "rockchip-u-boot.dtsi" #include <dt-bindings/memory/rk3368-dmc.h>
/ { diff --git a/arch/arm/dts/rk3399-u-boot.dtsi b/arch/arm/dts/rk3399-u-boot.dtsi index 716b9a433a..fd4150e3f0 100644 --- a/arch/arm/dts/rk3399-u-boot.dtsi +++ b/arch/arm/dts/rk3399-u-boot.dtsi @@ -60,8 +60,9 @@
};
-#ifdef CONFIG_ROCKCHIP_SPI_IMAGE &binman { + multiple-images; +#ifdef CONFIG_ROCKCHIP_SPI_IMAGE rom { filename = "u-boot.rom"; size = <0x400000>; @@ -81,8 +82,8 @@ fdtmap { }; }; -}; #endif +};
&cru { u-boot,dm-pre-reloc; diff --git a/arch/arm/dts/rk3568-u-boot.dtsi b/arch/arm/dts/rk3568-u-boot.dtsi index 5a80dda275..fa9b6ae23b 100644 --- a/arch/arm/dts/rk3568-u-boot.dtsi +++ b/arch/arm/dts/rk3568-u-boot.dtsi @@ -3,6 +3,8 @@ * (C) Copyright 2021 Rockchip Electronics Co., Ltd */
+#include "rockchip-u-boot.dtsi" + / { aliases { mmc0 = &sdhci; diff --git a/arch/arm/mach-rockchip/Kconfig b/arch/arm/mach-rockchip/Kconfig index 92f35309e4..c8df65980f 100644 --- a/arch/arm/mach-rockchip/Kconfig +++ b/arch/arm/mach-rockchip/Kconfig @@ -15,6 +15,7 @@ config ROCKCHIP_PX30 select DEBUG_UART_BOARD_INIT imply ROCKCHIP_COMMON_BOARD imply SPL_ROCKCHIP_COMMON_BOARD + imply BINMAN help The Rockchip PX30 is a ARM-based SoC with a quad-core Cortex-A35 including NEON and GPU, Mali-400 graphics, several DDR3 options @@ -146,6 +147,7 @@ config ROCKCHIP_RK3308 imply SPL_SERIAL imply TPL_SERIAL imply SPL_SEPARATE_BSS + imply BINMAN help The Rockchip RK3308 is a ARM-based Soc which embedded with quad Cortex-A35 and highly integrated audio interfaces. @@ -167,6 +169,7 @@ config ROCKCHIP_RK3328 select ENABLE_ARM_SOC_BOOT0_HOOK select DEBUG_UART_BOARD_INIT select SYS_NS16550 + imply BINMAN help The Rockchip RK3328 is a ARM-based SoC with a quad-core Cortex-A53. including NEON and GPU, 1MB L2 cache, Mali-T7 graphics, two @@ -186,6 +189,7 @@ config ROCKCHIP_RK3368 imply SPL_SERIAL imply TPL_SERIAL imply TPL_ROCKCHIP_COMMON_BOARD + imply BINMAN help The Rockchip RK3368 is a ARM-based SoC with a octa-core (organised into a big and little cluster with 4 cores each) Cortex-A53 including @@ -244,6 +248,7 @@ config ROCKCHIP_RK3399 imply TPL_ROCKCHIP_COMMON_BOARD imply SYS_BOOTCOUNT_SINGLEWORD if BOOTCOUNT_LIMIT imply CMD_BOOTCOUNT if BOOTCOUNT_LIMIT + imply BINMAN help The Rockchip RK3399 is a ARM-based SoC with a dual-core Cortex-A72 and quad-core Cortex-A53. @@ -264,6 +269,7 @@ config ROCKCHIP_RK3568 select SYSCON select BOARD_LATE_INIT imply ROCKCHIP_COMMON_BOARD + imply BINMAN help The Rockchip RK3568 is a ARM-based SoC with quad-core Cortex-A55, including NEON and GPU, 512K L3 cache, Mali-G52 based graphics,

Add the required binman images to replace the Makefile rules which are currently used. This includes subsuming:
- tpl/u-boot-tpl-rockchip.bin if TPL is enabled
- idbloader.img if either or both of SPL and TPL are enabled
- u-boot.itb if SPL_FIT is enabled
- u-boot-rockchip.bin if SPL is used, either using u-boot.itb when SPL_FIT is enabled or u-boot.img when it isn't
Note that the intermediate files are dropped with binman, since it produces everything in one pass. This means that tpl/u-boot-tpl-rockchip.bin is not created, for example.
Note that for some 32-bit rk3288 boards, rockchip-optee.dtsi is included.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v3: - Add an offset to the FIT description
Changes in v2: - Rename op-tee to tee-os - Drop use of .itb2
arch/arm/dts/rockchip-u-boot.dtsi | 85 ++++++++++++++++++++++++++++++- 1 file changed, 83 insertions(+), 2 deletions(-)
diff --git a/arch/arm/dts/rockchip-u-boot.dtsi b/arch/arm/dts/rockchip-u-boot.dtsi index eae3ee715d..ff64ee2013 100644 --- a/arch/arm/dts/rockchip-u-boot.dtsi +++ b/arch/arm/dts/rockchip-u-boot.dtsi @@ -17,13 +17,94 @@ filename = "u-boot-rockchip.bin"; pad-byte = <0xff>;
- blob { - filename = "idbloader.img"; +#ifdef CONFIG_TPL + mkimage { + args = "-n", CONFIG_SYS_SOC, "-T", "rksd"; + + u-boot-tpl { + }; + }; + + u-boot-spl { }; +#elif defined(CONFIG_SPL) /* SPL only */ + mkimage { + args = "-n", CONFIG_SYS_SOC, "-T", "rksd"; + + u-boot-spl { + }; + }; +#endif +#if defined(CONFIG_SPL_FIT) && defined(CONFIG_ARM64) + fit: fit { + description = "FIT image for U-Boot with bl31 (TF-A)"; + #address-cells = <1>; + fit,fdt-list = "of-list"; + fit,external-offset = <CONFIG_FIT_EXTERNAL_OFFSET>; + offset = <CONFIG_SPL_PAD_TO>; + images { + u-boot { + description = "U-Boot (64-bit)"; + type = "standalone"; + os = "U-Boot"; + arch = "arm64"; + compression = "none"; + load = <CONFIG_SYS_TEXT_BASE>; + u-boot-nodtb { + }; + }; + + @atf-SEQ { + fit,operation = "split-elf"; + description = "ARM Trusted Firmware"; + type = "firmware"; + arch = "arm64"; + os = "arm-trusted-firmware"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + atf-bl31 { + }; + }; + @tee-SEQ { + fit,operation = "split-elf"; + description = "TEE"; + type = "tee"; + arch = "arm64"; + os = "tee"; + compression = "none"; + fit,load; + fit,entry; + fit,data;
+ tee-os { + }; + }; + + @fdt-SEQ { + description = "fdt-NAME"; + compression = "none"; + type = "flat_dt"; + }; + }; + + configurations { + default = "@config-DEFAULT-SEQ"; + @config-SEQ { + description = "NAME.dtb"; + fdt = "fdt-SEQ"; + firmware = "u-boot"; + fit,loadables; + }; + }; + }; +#else u-boot-img { offset = <CONFIG_SPL_PAD_TO>; }; +#endif /* CONFIG_ARM64 */ }; }; #endif
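For readers less familiar with the new generator nodes: @atf-SEQ and @tee-SEQ use the 'split-elf' operation, so binman emits one FIT node per loadable ELF segment and fills in the fit,load / fit,entry / fit,data placeholders from the ELF program headers. The result is conceptually the same set of nodes that make_fit_atf.py used to write into the .its, along these lines (node names, addresses and binary names below are illustrative only):

/* Illustrative only: actual values depend on the bl31 ELF being split, and
 * binman writes the FIT directly instead of generating .its source.
 */
atf_1 {
	description = "ARM Trusted Firmware";
	data = /incbin/("bl31_0x00040000.bin");
	type = "firmware";
	arch = "arm64";
	os = "arm-trusted-firmware";
	compression = "none";
	load = <0x00040000>;	/* p_paddr of the first PT_LOAD segment */
	entry = <0x00040000>;	/* ELF entry point, first segment only */
};

atf_2 {
	description = "ARM Trusted Firmware";
	data = /incbin/("bl31_0xff3b0000.bin");
	type = "firmware";
	arch = "arm64";
	os = "arm-trusted-firmware";
	compression = "none";
	load = <0xff3b0000>;	/* p_paddr of the second PT_LOAD segment */
};

The @fdt-SEQ and @config-SEQ nodes expand in the same way, once per device tree in of-list.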

Instead of the FIT-generator script, use binman to generate the FIT for arm64.
For 32-bit boards, use binman for all images, dropping the intermediate files.
With this change, only ZynqMP boards still use SPL_FIT_GENERATOR, so update the Kconfig default accordingly.
Clean up the Makefile to the extent possible. Unfortunately, two boards do not use SPL_FRAMEWORK, so the Makefile must still build u-boot.img for them:
evb-rk3036 kylin-rk3036
So a small remnant remains.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
(no changes since v1)
Makefile | 39 ++------------------------------------- boot/Kconfig | 2 +- 2 files changed, 3 insertions(+), 38 deletions(-)
diff --git a/Makefile b/Makefile index 9ef34ca4b7..8f149fa3a4 100644 --- a/Makefile +++ b/Makefile @@ -977,19 +977,8 @@ ifeq ($(CONFIG_MPC85xx)$(CONFIG_OF_SEPARATE),yy) INPUTS-y += u-boot-with-dtb.bin endif
-ifeq ($(CONFIG_ARCH_ROCKCHIP),y) -# On ARM64 this target is produced by binman so we don't need this dep -ifeq ($(CONFIG_ARM64),y) -ifeq ($(CONFIG_SPL),y) -# TODO: Get binman to generate this too -INPUTS-y += u-boot-rockchip.bin -endif -else -ifeq ($(CONFIG_SPL),y) -# Generate these inputs for binman which will create the output files -INPUTS-y += idbloader.img u-boot.img -endif -endif +ifeq ($(CONFIG_ARCH_ROCKCHIP)_$(CONFIG_SPL_FRAMEWORK),y_) +INPUTS-y += u-boot.img endif
INPUTS-$(CONFIG_X86) += u-boot-x86-start16.bin u-boot-x86-reset16.bin \ @@ -1474,30 +1463,6 @@ OBJCOPYFLAGS_u-boot-with-spl.bin = -I binary -O binary \ u-boot-with-spl.bin: $(SPL_IMAGE) $(SPL_PAYLOAD) FORCE $(call if_changed,pad_cat)
-ifeq ($(CONFIG_ARCH_ROCKCHIP),y) - -# TPL + SPL -ifeq ($(CONFIG_SPL)$(CONFIG_TPL),yy) -MKIMAGEFLAGS_u-boot-tpl-rockchip.bin = -n $(CONFIG_SYS_SOC) -T rksd -tpl/u-boot-tpl-rockchip.bin: tpl/u-boot-tpl.bin FORCE - $(call if_changed,mkimage) -idbloader.img: tpl/u-boot-tpl-rockchip.bin spl/u-boot-spl.bin FORCE - $(call if_changed,cat) -else -MKIMAGEFLAGS_idbloader.img = -n $(CONFIG_SYS_SOC) -T rksd -idbloader.img: spl/u-boot-spl.bin FORCE - $(call if_changed,mkimage) -endif - -ifeq ($(CONFIG_ARM64),y) -OBJCOPYFLAGS_u-boot-rockchip.bin = -I binary -O binary \ - --pad-to=$(CONFIG_SPL_PAD_TO) --gap-fill=0xff -u-boot-rockchip.bin: idbloader.img u-boot.itb FORCE - $(call if_changed,pad_cat) -endif # CONFIG_ARM64 - -endif # CONFIG_ARCH_ROCKCHIP - ifeq ($(CONFIG_ARCH_LPC32XX)$(CONFIG_SPL),yy) MKIMAGEFLAGS_lpc32xx-spl.img = -T lpc32xximage -a $(CONFIG_SPL_TEXT_BASE)
diff --git a/boot/Kconfig b/boot/Kconfig index b83a4e8400..ae1a37ae92 100644 --- a/boot/Kconfig +++ b/boot/Kconfig @@ -275,7 +275,7 @@ config SPL_FIT_SOURCE
config USE_SPL_FIT_GENERATOR bool "Use a script to generate the .its script" - default y if SPL_FIT && (!ARCH_SUNXI && !RISCV) + default y if SPL_FIT && ARCH_ZYNQMP
config SPL_FIT_GENERATOR string ".its file generator script for U-Boot FIT image"

This is not used anymore. Drop it.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
(no changes since v2)
Changes in v2: - Drop patches previously applied - Add various suggestions from Alper Nebi Yasak - Add patches to refactor binman's FIT support
Makefile | 3 - arch/arm/mach-rockchip/make_fit_atf.py | 240 ------------------------- boot/Kconfig | 1 - 3 files changed, 244 deletions(-) delete mode 100755 arch/arm/mach-rockchip/make_fit_atf.py
diff --git a/Makefile b/Makefile index 8f149fa3a4..c782614f7f 100644 --- a/Makefile +++ b/Makefile @@ -1353,9 +1353,6 @@ $(U_BOOT_ITS): $(subst ",,$(CONFIG_SPL_FIT_SOURCE)) else ifneq ($(CONFIG_USE_SPL_FIT_GENERATOR),) U_BOOT_ITS := u-boot.its -ifeq ($(CONFIG_SPL_FIT_GENERATOR),"arch/arm/mach-rockchip/make_fit_atf.py") -U_BOOT_ITS_DEPS += u-boot -endif $(U_BOOT_ITS): $(U_BOOT_ITS_DEPS) FORCE $(srctree)/$(CONFIG_SPL_FIT_GENERATOR) \ $(patsubst %,arch/$(ARCH)/dts/%.dtb,$(subst ",,$(CONFIG_OF_LIST))) > $@ diff --git a/arch/arm/mach-rockchip/make_fit_atf.py b/arch/arm/mach-rockchip/make_fit_atf.py deleted file mode 100755 index f3224d2555..0000000000 --- a/arch/arm/mach-rockchip/make_fit_atf.py +++ /dev/null @@ -1,240 +0,0 @@ -#!/usr/bin/env python3 -""" -# SPDX-License-Identifier: GPL-2.0+ -# -# A script to generate FIT image source for rockchip boards -# with ARM Trusted Firmware -# and multiple device trees (given on the command line) -# -# usage: $0 <dt_name> [<dt_name> [<dt_name] ...] -""" - -import os -import sys -import getopt -import logging -import struct - -DT_HEADER = """ -/* - * This is a generated file. - */ -/dts-v1/; - -/ { - description = "FIT image for U-Boot with bl31 (TF-A)"; - #address-cells = <1>; - - images { -""" - -DT_UBOOT = """ - uboot { - description = "U-Boot (64-bit)"; - data = /incbin/("u-boot-nodtb.bin"); - type = "standalone"; - os = "U-Boot"; - arch = "arm64"; - compression = "none"; - load = <0x%08x>; - }; - -""" - -DT_IMAGES_NODE_END = """ }; - -""" - -DT_END = "};" - -def append_bl31_node(file, atf_index, phy_addr, elf_entry): - # Append BL31 DT node to input FIT dts file. - data = 'bl31_0x%08x.bin' % phy_addr - file.write('\t\tatf_%d {\n' % atf_index) - file.write('\t\t\tdescription = "ARM Trusted Firmware";\n') - file.write('\t\t\tdata = /incbin/("%s");\n' % data) - file.write('\t\t\ttype = "firmware";\n') - file.write('\t\t\tarch = "arm64";\n') - file.write('\t\t\tos = "arm-trusted-firmware";\n') - file.write('\t\t\tcompression = "none";\n') - file.write('\t\t\tload = <0x%08x>;\n' % phy_addr) - if atf_index == 1: - file.write('\t\t\tentry = <0x%08x>;\n' % elf_entry) - file.write('\t\t};\n') - file.write('\n') - -def append_tee_node(file, atf_index, phy_addr, elf_entry): - # Append TEE DT node to input FIT dts file. - data = 'tee_0x%08x.bin' % phy_addr - file.write('\t\tatf_%d {\n' % atf_index) - file.write('\t\t\tdescription = "TEE";\n') - file.write('\t\t\tdata = /incbin/("%s");\n' % data) - file.write('\t\t\ttype = "tee";\n') - file.write('\t\t\tarch = "arm64";\n') - file.write('\t\t\tos = "tee";\n') - file.write('\t\t\tcompression = "none";\n') - file.write('\t\t\tload = <0x%08x>;\n' % phy_addr) - file.write('\t\t\tentry = <0x%08x>;\n' % elf_entry) - file.write('\t\t};\n') - file.write('\n') - -def append_fdt_node(file, dtbs): - # Append FDT nodes. 
- cnt = 1 - for dtb in dtbs: - dtname = os.path.basename(dtb) - file.write('\t\tfdt_%d {\n' % cnt) - file.write('\t\t\tdescription = "%s";\n' % dtname) - file.write('\t\t\tdata = /incbin/("%s");\n' % dtb) - file.write('\t\t\ttype = "flat_dt";\n') - file.write('\t\t\tcompression = "none";\n') - file.write('\t\t};\n') - file.write('\n') - cnt = cnt + 1 - -def append_conf_section(file, cnt, dtname, segments): - file.write('\t\tconfig_%d {\n' % cnt) - file.write('\t\t\tdescription = "%s";\n' % dtname) - file.write('\t\t\tfirmware = "atf_1";\n') - file.write('\t\t\tloadables = "uboot"') - if segments > 1: - file.write(',') - for i in range(1, segments): - file.write('"atf_%d"' % (i + 1)) - if i != (segments - 1): - file.write(',') - else: - file.write(';\n') - if segments <= 1: - file.write(';\n') - file.write('\t\t\tfdt = "fdt_%d";\n' % cnt) - file.write('\t\t};\n') - file.write('\n') - -def append_conf_node(file, dtbs, segments): - # Append configeration nodes. - cnt = 1 - file.write('\tconfigurations {\n') - file.write('\t\tdefault = "config_1";\n') - for dtb in dtbs: - dtname = os.path.basename(dtb) - append_conf_section(file, cnt, dtname, segments) - cnt = cnt + 1 - file.write('\t};\n') - file.write('\n') - -def generate_atf_fit_dts_uboot(fit_file, uboot_file_name): - segments = unpack_elf(uboot_file_name) - if len(segments) != 1: - raise ValueError("Invalid u-boot ELF image '%s'" % uboot_file_name) - index, entry, p_paddr, data = segments[0] - fit_file.write(DT_UBOOT % p_paddr) - -def generate_atf_fit_dts_bl31(fit_file, bl31_file_name, tee_file_name, dtbs_file_name): - segments = unpack_elf(bl31_file_name) - for index, entry, paddr, data in segments: - append_bl31_node(fit_file, index + 1, paddr, entry) - num_segments = len(segments) - - if tee_file_name: - tee_segments = unpack_elf(tee_file_name) - for index, entry, paddr, data in tee_segments: - append_tee_node(fit_file, num_segments + index + 1, paddr, entry) - num_segments = num_segments + len(tee_segments) - - append_fdt_node(fit_file, dtbs_file_name) - fit_file.write(DT_IMAGES_NODE_END) - append_conf_node(fit_file, dtbs_file_name, num_segments) - -def generate_atf_fit_dts(fit_file_name, bl31_file_name, tee_file_name, uboot_file_name, dtbs_file_name): - # Generate FIT script for ATF image. 
- if fit_file_name != sys.stdout: - fit_file = open(fit_file_name, "wb") - else: - fit_file = sys.stdout - - fit_file.write(DT_HEADER) - generate_atf_fit_dts_uboot(fit_file, uboot_file_name) - generate_atf_fit_dts_bl31(fit_file, bl31_file_name, tee_file_name, dtbs_file_name) - fit_file.write(DT_END) - - if fit_file_name != sys.stdout: - fit_file.close() - -def generate_atf_binary(bl31_file_name): - for index, entry, paddr, data in unpack_elf(bl31_file_name): - file_name = 'bl31_0x%08x.bin' % paddr - with open(file_name, "wb") as atf: - atf.write(data) - -def generate_tee_binary(tee_file_name): - if tee_file_name: - for index, entry, paddr, data in unpack_elf(tee_file_name): - file_name = 'tee_0x%08x.bin' % paddr - with open(file_name, "wb") as atf: - atf.write(data) - -def unpack_elf(filename): - with open(filename, 'rb') as file: - elf = file.read() - if elf[0:7] != b'\x7fELF\x02\x01\x01' or elf[18:20] != b'\xb7\x00': - raise ValueError("Invalid arm64 ELF file '%s'" % filename) - - e_entry, e_phoff = struct.unpack_from('<2Q', elf, 0x18) - e_phentsize, e_phnum = struct.unpack_from('<2H', elf, 0x36) - segments = [] - - for index in range(e_phnum): - offset = e_phoff + e_phentsize * index - p_type, p_flags, p_offset = struct.unpack_from('<LLQ', elf, offset) - if p_type == 1: # PT_LOAD - p_paddr, p_filesz = struct.unpack_from('<2Q', elf, offset + 0x18) - if p_filesz > 0: - p_data = elf[p_offset:p_offset + p_filesz] - segments.append((index, e_entry, p_paddr, p_data)) - return segments - -def main(): - uboot_elf = "./u-boot" - fit_its = sys.stdout - if "BL31" in os.environ: - bl31_elf=os.getenv("BL31"); - elif os.path.isfile("./bl31.elf"): - bl31_elf = "./bl31.elf" - else: - os.system("echo 'int main(){}' > bl31.c") - os.system("${CROSS_COMPILE}gcc -c bl31.c -o bl31.elf") - bl31_elf = "./bl31.elf" - logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG) - logging.warning(' BL31 file bl31.elf NOT found, resulting binary is non-functional') - logging.warning(' Please read Building section in doc/README.rockchip') - - if "TEE" in os.environ: - tee_elf = os.getenv("TEE") - elif os.path.isfile("./tee.elf"): - tee_elf = "./tee.elf" - else: - tee_elf = "" - - opts, args = getopt.getopt(sys.argv[1:], "o:u:b:t:h") - for opt, val in opts: - if opt == "-o": - fit_its = val - elif opt == "-u": - uboot_elf = val - elif opt == "-b": - bl31_elf = val - elif opt == "-t": - tee_elf = val - elif opt == "-h": - print(__doc__) - sys.exit(2) - - dtbs = args - - generate_atf_fit_dts(fit_its, bl31_elf, tee_elf, uboot_elf, dtbs) - generate_atf_binary(bl31_elf) - generate_tee_binary(tee_elf) - -if __name__ == "__main__": - main() diff --git a/boot/Kconfig b/boot/Kconfig index ae1a37ae92..1ae5b3a793 100644 --- a/boot/Kconfig +++ b/boot/Kconfig @@ -280,7 +280,6 @@ config USE_SPL_FIT_GENERATOR config SPL_FIT_GENERATOR string ".its file generator script for U-Boot FIT image" depends on USE_SPL_FIT_GENERATOR - default "arch/arm/mach-rockchip/make_fit_atf.py" if SPL_LOAD_FIT && ARCH_ROCKCHIP default "arch/arm/mach-zynqmp/mkimage_fit_atf.sh" if SPL_LOAD_FIT && ARCH_ZYNQMP help Specifies a (platform specific) script file to generate the FIT