[PATCH v2 00/25] binman: rockchip: Migrate from rockchip SPL_FIT_GENERATOR script

At present rockchip 64-bit boards make use of a FIT-generator script written in Python. The script supports splitting an ELF file into several 'loadable' nodes in the FIT. Binman does not currently support this feature.
This series adds binman support for ELF splitting. This works by adding a new 'fit,operation' property to the FIT subnodes, which selects this new way of generating nodes.
Some other fixes and improvements are needed along the way.
A new, common binman description is added for 64-bit boards which includes the required u-boot.itb file.
The existing script is removed, so that only a few zynq boards are still using an SPL_FIT_GENERATOR script:
avnet_ultrazedev_cc_v1_0_ultrazedev_som_v1_0 xilinx_zynqmp_virt
Migration of those is hopefully in progress.
Note, however, that tools/k3_fit_atf.sh remains; it is used by a few boards that enable CONFIG_TI_SECURE_DEVICE, so this series is copied there too:
am335x_hs_evm am335x_hs_evm_uart am43xx_hs_evm am57xx_hs_evm am57xx_hs_evm_usb am65x_hs_evm_a53 am65x_hs_evm_r5 dra7xx_hs_evm dra7xx_hs_evm_usb j721e_hs_evm_a72 j721e_hs_evm_r5 k2e_hs_evm k2g_hs_evm k2hk_hs_evm k2l_hs_evm
Ivan Mikhaylov has sent a patch to help with these, but I need to take a look at the testing side. In any case they should really be using binman for the image generation.
Changes in v2:
- Add new patch to make GetArgs() more flexible
- Add new patch to remove remove_defconfig()
- Add new patch to use re.fullmatch() to avoid extra check
- Add new patch to correct Kconfig help for TPL_BINMAN_SYMBOLS
- Add new patch to tidy up implementation of AddStringList()
- Add new patch to rename load_segments() and module failure
- Add new patch to tweak collect_contents_to_file() and docs
- Add patch to rename ExpandToLimit to extend_to_limit
- Add patch to rename ExpandEntries to gen_entries
- Add new patch to refactor fit to generate output at the end
- Add a patch to change how faked blobs are created
- Add a patch to make fake blobs zero-sized by default
- Add a patch to allow mkimage to use a non-zero fake-blob size
- Rewrite this to use the new FIT entry-type implementation
- Rename op-tee to tee-os
- Drop use of .itb2
- Drop patches previously applied
- Add various suggestions from Alper Nebi Yasak
- Add patches to refactor binman's FIT support
Simon Glass (25):
  dtoc: Make GetArgs() more flexible
  moveconfig: Remove remove_defconfig()
  moveconfig: Use re.fullmatch() to avoid extra check
  spl: Correct Kconfig help for TPL_BINMAN_SYMBOLS
  dtoc: Tidy up implementation of AddStringList()
  elf: Rename load_segments() and module failure
  binman: Tweak collect_contents_to_file() and docs
  binman: Rename ExpandToLimit to extend_to_limit
  binman: Rename ExpandEntries to gen_entries
  binman: Refactor fit to generate output at the end
  binman: Rename tools parameter to btools
  binman: Change how faked blobs are created
  binman: Make fake blobs zero-sized by default
  binman: Allow mkimage to use a non-zero fake-blob size
  binman: Read the fit entries only once
  binman: Fix some pylint warnings in fit
  binman: Add a consistent way to report errors with fit
  binman: Update fit to use node instead of subnode
  binman: Keep a separate list of entries for fit
  binman: Support splitting an ELF file into multiple nodes
  rockchip: evb-rk3288: Drop raw-image support
  rockchip: Include binman script in 64-bit boards
  rockchip: Support building the all output files in binman
  rockchip: Convert all boards to use binman
  rockchip: Drop the FIT generator script
 Makefile                                     | 42 +-
 arch/arm/dts/px30-u-boot.dtsi                | 2 +
 arch/arm/dts/rk3308-u-boot.dtsi              | 2 +
 arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi   | 2 +
 arch/arm/dts/rk3328-u-boot.dtsi              | 2 +
 arch/arm/dts/rk3368-u-boot.dtsi              | 1 +
 arch/arm/dts/rk3399-u-boot.dtsi              | 5 +-
 arch/arm/dts/rk3568-u-boot.dtsi              | 2 +
 arch/arm/dts/rockchip-u-boot.dtsi            | 84 +++-
 arch/arm/mach-rockchip/Kconfig               | 6 +
 arch/arm/mach-rockchip/make_fit_atf.py       | 240 ---------
 boot/Kconfig                                 | 3 +-
 common/spl/Kconfig                           | 6 +-
 configs/evb-rk3288_defconfig                 | 1 +
 tools/binman/binman.rst                      | 30 +-
 tools/binman/control.py                      | 2 +-
 tools/binman/elf.py                          | 6 +-
 tools/binman/elf_test.py                     | 18 +-
 tools/binman/entries.rst                     | 146 ++++++
 tools/binman/entry.py                        | 49 +-
 tools/binman/etype/_testing.py               | 2 +-
 tools/binman/etype/blob.py                   | 10 +-
 tools/binman/etype/blob_ext_list.py          | 2 +-
 tools/binman/etype/blob_phase.py             | 2 +-
 tools/binman/etype/files.py                  | 2 +-
 tools/binman/etype/fit.py                    | 501 ++++++++++++++-----
 tools/binman/etype/gbb.py                    | 4 +-
 tools/binman/etype/intel_ifwi.py             | 4 +-
 tools/binman/etype/mkimage.py                | 19 +-
 tools/binman/etype/section.py                | 22 +-
 tools/binman/etype/vblock.py                 | 4 +-
 tools/binman/ftest.py                        | 180 ++++++-
 tools/binman/test/088_expand_size.dts        | 8 +-
 tools/binman/test/089_expand_size_bad.dts    | 2 +-
 tools/binman/test/224_fit_bad_oper.dts       | 2 -
 tools/binman/test/225_expand_size_bad.dts    | 10 +
 tools/binman/test/226_fit_split_elf.dts      | 67 +++
 tools/binman/test/227_fit_bad_dir.dts        | 9 +
 tools/binman/test/228_fit_bad_dir_config.dts | 9 +
 tools/binman/test/229_mkimage_missing.dts    | 18 +
 tools/dtoc/fdt.py                            | 4 +-
 tools/dtoc/fdt_util.py                       | 8 +-
 tools/dtoc/test/dtoc_test_simple.dts         | 2 +
 tools/dtoc/test_fdt.py                       | 12 +-
 tools/moveconfig.py                          | 17 +-
 45 files changed, 1060 insertions(+), 509 deletions(-)
 delete mode 100755 arch/arm/mach-rockchip/make_fit_atf.py
 create mode 100644 tools/binman/test/225_expand_size_bad.dts
 create mode 100644 tools/binman/test/226_fit_split_elf.dts
 create mode 100644 tools/binman/test/227_fit_bad_dir.dts
 create mode 100644 tools/binman/test/228_fit_bad_dir_config.dts
 create mode 100644 tools/binman/test/229_mkimage_missing.dts

At present it is not possible to have arguments which include spaces. Update the function to split the args only if the property is a single string. This is a bit inconsistent, but might still be useful.
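As a rough standalone sketch of the new rule (the helper name split_args is made up; the real code is the GetArgs() hunk below):

    # Split on whitespace only when the property holds a single string; keep a
    # multi-string list verbatim; a missing/empty value gives an empty list.
    def split_args(value):
        if not value:
            return []
        if len(value) == 1:
            return value[0].split()
        return list(value)

    assert split_args([]) == []
    assert split_args(['-n first second -p 123,456 -x']) == [
        '-n', 'first', 'second', '-p', '123,456', '-x']
    assert split_args(['-n first', 'second', '-x']) == ['-n first', 'second', '-x']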
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v2:
- Add new patch to make GetArgs() more flexible
 tools/dtoc/fdt_util.py               | 8 ++++++--
 tools/dtoc/test/dtoc_test_simple.dts | 2 ++
 tools/dtoc/test_fdt.py               | 6 +++++-
 3 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/tools/dtoc/fdt_util.py b/tools/dtoc/fdt_util.py index c82e7747aa..57550624bb 100644 --- a/tools/dtoc/fdt_util.py +++ b/tools/dtoc/fdt_util.py @@ -192,8 +192,12 @@ def GetArgs(node, propname): value = GetStringList(node, propname) else: value = [] - lists = [v.split() for v in value] - args = [x for l in lists for x in l] + if not value: + args = [] + elif len(value) == 1: + args = value[0].split() + else: + args = value return args
def GetBool(node, propname, default=False): diff --git a/tools/dtoc/test/dtoc_test_simple.dts b/tools/dtoc/test/dtoc_test_simple.dts index 2d321fb034..aef07efeae 100644 --- a/tools/dtoc/test/dtoc_test_simple.dts +++ b/tools/dtoc/test/dtoc_test_simple.dts @@ -63,5 +63,7 @@ orig-node { orig = <1 23 4>; args = "-n first", "second", "-p", "123,456", "-x"; + args2 = "a space", "there"; + args3 = "-n first second -p 123,456 -x"; }; }; diff --git a/tools/dtoc/test_fdt.py b/tools/dtoc/test_fdt.py index 576d65b97e..c2013b9289 100755 --- a/tools/dtoc/test_fdt.py +++ b/tools/dtoc/test_fdt.py @@ -659,8 +659,12 @@ class TestFdtUtil(unittest.TestCase): ['multi-word', 'message'], fdt_util.GetArgs(self.node, 'stringarray')) self.assertEqual([], fdt_util.GetArgs(self.node, 'boolval')) - self.assertEqual(['-n', 'first', 'second', '-p', '123,456', '-x'], + self.assertEqual(['-n first', 'second', '-p', '123,456', '-x'], fdt_util.GetArgs(node, 'args')) + self.assertEqual(['a space', 'there'], + fdt_util.GetArgs(node, 'args2')) + self.assertEqual(['-n', 'first', 'second', '-p', '123,456', '-x'], + fdt_util.GetArgs(node, 'args3')) with self.assertRaises(ValueError) as exc: fdt_util.GetArgs(self.node, 'missing') self.assertIn(

On 24/02/2022 02:00, Simon Glass wrote:
At present it is not possible to have arguments whic include spaces.
whic -> which
Update the function to only split the args if the property is a single string. This is a bit inconsistent, but might still be useful.
Signed-off-by: Simon Glass sjg@chromium.org Suggested-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Changes in v2:
- Add new patch to make GetArgs() more flexible
tools/dtoc/fdt_util.py | 8 ++++++-- tools/dtoc/test/dtoc_test_simple.dts | 2 ++ tools/dtoc/test_fdt.py | 6 +++++- 3 files changed, 13 insertions(+), 3 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
The use cases and design need more thinking still, but later.

This helper is not necessary if simpler code is used. Use the split function instead and drop the unnecessary [] so that join() takes a generator expression rather than a list comprehension.
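As a standalone comparison (illustrative names, not the moveconfig.py code itself), the regex helper and the plain split give the same result for defconfig names:

    import re

    RE_REMOVE_DEFCONFIG = re.compile(r'(.*)_defconfig')

    def strip_with_regex(name):
        # Old approach: module-level regex plus a one-line helper function
        return RE_REMOVE_DEFCONFIG.match(name)[1]

    def strip_with_split(name):
        # New approach: inline split, no helper needed
        return name.split('_defconfig')[0]

    for name in ('sandbox_defconfig', 'evb-rk3288_defconfig'):
        assert strip_with_regex(name) == strip_with_split(name)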
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v2:
- Add new patch to remove remove_defconfig()
 tools/moveconfig.py | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)
diff --git a/tools/moveconfig.py b/tools/moveconfig.py index cff1e30658..dea60f4661 100755 --- a/tools/moveconfig.py +++ b/tools/moveconfig.py @@ -94,17 +94,6 @@ SIZES = { RE_REMOVE_DEFCONFIG = re.compile(r'(.*)_defconfig')
### helper functions ### -def remove_defconfig(defc): - """Drop the _defconfig suffix on a string - - Args: - defc (str): String to convert - - Returns: - str: string with the '_defconfig' suffix removed - """ - return RE_REMOVE_DEFCONFIG.match(defc)[1] - def check_top_directory(): """Exit if we are not at the top of source directory.""" for fname in 'README', 'Licenses': @@ -1671,7 +1660,7 @@ def do_find_config(config_list): print(f"Error: Not in Kconfig: %s" % ' '.join(adhoc)) else: print(f'{len(out)} matches') - print(' '.join([remove_defconfig(item) for item in out])) + print(' '.join(item.split('_defconfig')[0] for item in out))
def prefix_config(cfg):

On 24/02/2022 02:00, Simon Glass wrote:
This is not necessary if simpler code is used. Use the split function and drop the unnecessary []
Signed-off-by: Simon Glass sjg@chromium.org Suggested-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Changes in v2:
- Add new patch to remove remove_defconfig()
tools/moveconfig.py | 13 +------------ 1 file changed, 1 insertion(+), 12 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

Simplify the code by using the available function.
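A standalone example of the equivalence (hypothetical pattern and config name, not taken from moveconfig.py):

    import re

    re_match = re.compile(r'rk3399.*_defconfig')
    cfg = 'rk3399-evb_defconfig'

    # Old form: match() plus an explicit check that the match spans the string
    m_cfg = re_match.match(cfg)
    old_result = bool(m_cfg and m_cfg.span()[1] == len(cfg))

    # New form: fullmatch() folds the end-of-string check into the regex engine
    new_result = bool(re_match.fullmatch(cfg))

    assert old_result == new_result == True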
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2:
- Add new patch to use re.fullmatch() to avoid extra check
 tools/moveconfig.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/moveconfig.py b/tools/moveconfig.py index dea60f4661..4b0e2e250b 100755 --- a/tools/moveconfig.py +++ b/tools/moveconfig.py @@ -1610,8 +1610,8 @@ def defconfig_matches(configs, re_match): bool: True if any CONFIG matches the regex """ for cfg in configs: - m_cfg = re_match.match(cfg) - if m_cfg and m_cfg.span()[1] == len(cfg): + m_cfg = re_match.fullmatch(cfg) + if m_cfg: return True return False

On 24/02/2022 02:00, Simon Glass wrote:
Simplify the code by using the available function.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add new patch to use re.fullmatch() to avoid extra check
tools/moveconfig.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
diff --git a/tools/moveconfig.py b/tools/moveconfig.py index dea60f4661..4b0e2e250b 100755 --- a/tools/moveconfig.py +++ b/tools/moveconfig.py @@ -1610,8 +1610,8 @@ def defconfig_matches(configs, re_match): bool: True if any CONFIG matches the regex """ for cfg in configs:
m_cfg = re_match.match(cfg)
if m_cfg and m_cfg.span()[1] == len(cfg):
m_cfg = re_match.fullmatch(cfg)
if m_cfg:
Could skip the variable entirely.
return True return False

Fix the help which should refer to TPL, not SPL.
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v2:
- Add new patch to correct Kconfig help for TPL_BINMAN_SYMBOLS
 common/spl/Kconfig | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/common/spl/Kconfig b/common/spl/Kconfig index 9418d37b2e..dc319adeac 100644 --- a/common/spl/Kconfig +++ b/common/spl/Kconfig @@ -1334,16 +1334,16 @@ config TPL_SIZE_LIMIT If this value is zero, it is ignored.
config TPL_BINMAN_SYMBOLS - bool "Declare binman symbols in SPL" + bool "Declare binman symbols in TPL" depends on SPL_FRAMEWORK && BINMAN default y help - This enables use of symbols in TPL which refer to U-Boot, enabling SPL + This enables use of symbols in TPL which refer to U-Boot, enabling TPL to obtain the location of U-Boot simply by calling spl_get_image_pos() and spl_get_image_size().
For this to work, you must have a U-Boot image in the binman image, so - binman can update SPL with the location of it. + binman can update TPL with the location of it.
config TPL_FRAMEWORK bool "Support TPL based upon the common SPL framework"

On 24/02/2022 02:00, Simon Glass wrote:
Fix the help which should refer to TPL, not SPL.
Signed-off-by: Simon Glass sjg@chromium.org Suggested-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Changes in v2:
- Add new patch to correct Kconfig help for TPL_BINMAN_SYMBOLS
common/spl/Kconfig | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

Refactor this to avoid a loop. Also add a test for an empty list.
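A rough standalone sketch of the two encodings (simplified from the fdt.py hunk below): both emit each string NUL-terminated, and an empty list now yields empty bytes:

    def encode_loop(val):
        # Previous form: build up the bytes in a loop
        out = b''
        for string in val:
            out += bytes(string, 'utf-8') + b'\0'
        return out

    def encode_join(val):
        # New form: join with NULs and add the final terminator, or b'' if empty
        return b'\0'.join(bytes(s, 'utf-8') for s in val) + b'\0' if val else b''

    for val in ([], ['123', '456'], ['one']):
        assert encode_loop(val) == encode_join(val)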
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v2:
- Add new patch to tidy up implementation of AddStringList()
 tools/dtoc/fdt.py      | 4 +---
 tools/dtoc/test_fdt.py | 6 ++++++
 2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/tools/dtoc/fdt.py b/tools/dtoc/fdt.py index c16909a876..d933972918 100644 --- a/tools/dtoc/fdt.py +++ b/tools/dtoc/fdt.py @@ -516,9 +516,7 @@ class Node: Returns: Prop added """ - out = b'' - for string in val: - out += bytes(string, 'utf-8') + b'\0' + out = b'\0'.join(bytes(s, 'utf-8') for s in val) + b'\0' if val else b'' return self.AddData(prop_name, out)
def AddInt(self, prop_name, val): diff --git a/tools/dtoc/test_fdt.py b/tools/dtoc/test_fdt.py index c2013b9289..d33328c392 100755 --- a/tools/dtoc/test_fdt.py +++ b/tools/dtoc/test_fdt.py @@ -550,6 +550,12 @@ class TestProp(unittest.TestCase): data = self.fdt.getprop(self.node.Offset(), 'stringlist') self.assertEqual(b'123\x00456\0', data)
+ val = [] + self.node.AddStringList('stringlist', val) + self.dtb.Sync(auto_resize=True) + data = self.fdt.getprop(self.node.Offset(), 'stringlist') + self.assertEqual(b'', data) + def test_delete_node(self): """Test deleting a node""" old_offset = self.fdt.path_offset('/spl-test')

On 24/02/2022 02:00, Simon Glass wrote:
Refactor this to avoid a loop. Also add a test for an empty string.
Signed-off-by: Simon Glass sjg@chromium.org Suggested-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Changes in v2:
- Add new patch to tidy up implementaiton of AddStringList()
tools/dtoc/fdt.py | 4 +--- tools/dtoc/test_fdt.py | 6 ++++++ 2 files changed, 7 insertions(+), 3 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
diff --git a/tools/dtoc/fdt.py b/tools/dtoc/fdt.py index c16909a876..d933972918 100644 --- a/tools/dtoc/fdt.py +++ b/tools/dtoc/fdt.py @@ -516,9 +516,7 @@ class Node: Returns: Prop added """
out = b''
for string in val:
out += bytes(string, 'utf-8') + b'\0'
out = b'\0'.join(bytes(s, 'utf-8') for s in val) + b'\0' if val else b''
Meh, it turned out uglier than I thought it would be. Either is fine really, I didn't add "Reviewed-by"s for v1 only because I thought you'd need to send a v2 for the later FIT stuff anyway.
return self.AddData(prop_name, out) def AddInt(self, prop_name, val):
[...]

Rename this function to make it clear that it only reads loadable segments. Also update the error for a missing module to better match the message emitted by Python.
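A minimal sketch of the guard (an assumed shape, not the exact binman/elf.py code): the error text now mirrors what Python itself prints for a missing module:

    try:
        from elftools.elf.elffile import ELFFile
        ELF_TOOLS = True
    except ModuleNotFoundError:
        ELF_TOOLS = False

    def read_loadable_segments(data):
        """Return the PT_LOAD segments of an ELF image (sketch only)"""
        if not ELF_TOOLS:
            # Same wording as Python's own ModuleNotFoundError message
            raise ValueError("Python: No module named 'elftools'")
        # ...parse 'data' with ELFFile and collect segments of type PT_LOAD...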
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v2:
- Add new patch to rename load_segments() and module failure
 tools/binman/elf.py      |  6 +++---
 tools/binman/elf_test.py | 18 +++++++++---------
 2 files changed, 12 insertions(+), 12 deletions(-)
diff --git a/tools/binman/elf.py b/tools/binman/elf.py index 5e7d6ae7b9..016e9b2885 100644 --- a/tools/binman/elf.py +++ b/tools/binman/elf.py @@ -105,7 +105,7 @@ def GetSymbolFileOffset(fname, patterns): return addr - seg['p_vaddr'] + seg['p_offset']
if not ELF_TOOLS: - raise ValueError('Python elftools package is not available') + raise ValueError("Python: No module named 'elftools'")
syms = {} with open(fname, 'rb') as fd: @@ -371,7 +371,7 @@ def UpdateFile(infile, outfile, start_sym, end_sym, insert): tools.write_file(outfile, newdata) tout.info('Written to offset %#x' % syms[start_sym].offset)
-def read_segments(data): +def read_loadable_segments(data): """Read segments from an ELF file
Args: @@ -389,7 +389,7 @@ def read_segments(data): ValueError: elftools is not available """ if not ELF_TOOLS: - raise ValueError('Python elftools package is not available') + raise ValueError("Python: No module named 'elftools'") with io.BytesIO(data) as inf: try: elf = ELFFile(inf) diff --git a/tools/binman/elf_test.py b/tools/binman/elf_test.py index a67915bda6..4c881a9755 100644 --- a/tools/binman/elf_test.py +++ b/tools/binman/elf_test.py @@ -244,7 +244,7 @@ class TestElf(unittest.TestCase): fname = self.ElfTestFile('embed_data') with self.assertRaises(ValueError) as e: elf.GetSymbolFileOffset(fname, ['embed_start', 'embed_end']) - self.assertIn('Python elftools package is not available', + self.assertIn("Python: No module named 'elftools'", str(e.exception)) finally: elf.ELF_TOOLS = old_val @@ -258,31 +258,31 @@ class TestElf(unittest.TestCase): offset = elf.GetSymbolFileOffset(fname, ['missing_sym']) self.assertEqual({}, offset)
- def test_read_segments(self): - """Test for read_segments()""" + def test_read_loadable_segments(self): + """Test for read_loadable_segments()""" if not elf.ELF_TOOLS: self.skipTest('Python elftools not available') fname = self.ElfTestFile('embed_data') - segments, entry = elf.read_segments(tools.read_file(fname)) + segments, entry = elf.read_loadable_segments(tools.read_file(fname))
def test_read_segments_fail(self): - """Test for read_segments() without elftools""" + """Test for read_loadable_segments() without elftools""" try: old_val = elf.ELF_TOOLS elf.ELF_TOOLS = False fname = self.ElfTestFile('embed_data') with self.assertRaises(ValueError) as e: - elf.read_segments(tools.read_file(fname)) - self.assertIn('Python elftools package is not available', + elf.read_loadable_segments(tools.read_file(fname)) + self.assertIn("Python: No module named 'elftools'", str(e.exception)) finally: elf.ELF_TOOLS = old_val
def test_read_segments_bad_data(self): - """Test for read_segments() with an invalid ELF file""" + """Test for read_loadable_segments() with an invalid ELF file""" fname = self.ElfTestFile('embed_data') with self.assertRaises(ValueError) as e: - elf.read_segments(tools.get_bytes(100, 100)) + elf.read_loadable_segments(tools.get_bytes(100, 100)) self.assertIn('Magic number does not match', str(e.exception))

On 24/02/2022 02:00, Simon Glass wrote:
Rename this function to make it clear that it only reads loadable segments. Also update the error for missing module to better match the message emitted by Python.
Signed-off-by: Simon Glass sjg@chromium.org Suggested-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Changes in v2:
- Add new patch to rename load_segments() and module failure
tools/binman/elf.py | 6 +++--- tools/binman/elf_test.py | 18 +++++++++--------- 2 files changed, 12 insertions(+), 12 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

Update the return value of this function, fix the 'create' typo and update the documentation for clarity.
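A standalone illustration (hypothetical helper, not the binman code) of why None is the better sentinel: it stays distinct from legitimately empty content, which a truthiness test or False could blur:

    def collect(entries):
        """Concatenate entry data, or return (None, None, None) if any is missing"""
        data = b''
        for entry in entries:
            if entry is None:              # stand-in for ObtainContents() failing
                return None, None, None    # previously False, False, False
            data += entry
        return data, 'mkimage.input', 'uniq'

    data, fname, uniq = collect([b'spl', None])
    assert data is None                    # not ready yet; caller should retry later

    data, fname, uniq = collect([])
    assert data == b'' and data is not None   # empty but valid contents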
Signed-off-by: Simon Glass <sjg@chromium.org>
Suggested-by: Alper Nebi Yasak <alpernebiyasak@gmail.com>
---
Changes in v2:
- Add new patch to tweak collect_contents_to_file() and docs
 tools/binman/binman.rst       | 25 +++++++++++++++----------
 tools/binman/entry.py         |  8 ++++----
 tools/binman/etype/mkimage.py |  2 +-
 3 files changed, 20 insertions(+), 15 deletions(-)
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 771645380e..0061e43659 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -1375,18 +1375,20 @@ Some entry types deal with data obtained from others. For example, };
This shows mkimage being passed a file consisting of SPL and U-Boot proper. It -is create by calling `Entry.collect_contents_to_file()`. Note that in this case, -the data is passed to mkimage for processing but does not appear separately in -the image. It may not appear at all, depending on what mkimage does. The -contents of the `mkimage` entry are entirely dependent on the processing done -by the entry, with the provided subnodes (`u-boot-spl` and `u-boot`) simply -providing the input data for that processing. +is created by calling `Entry.collect_contents_to_file()`. Note that in this +case, the data is passed to mkimage for processing but does not appear +separately in the image. It may not appear at all, depending on what mkimage +does. The contents of the `mkimage` entry are entirely dependent on the +processing done by the entry, with the provided subnodes (`u-boot-spl` and +`u-boot`) simply providing the input data for that processing.
Note that `Entry.collect_contents_to_file()` simply concatenates the data from the different entries together, with no control over alignment, etc. Another approach is to subclass `Entry_section` so that those features become available, such as `size` and `pad-byte`. Then the contents of the entry can be obtained by -calling `BuildSectionData()`. +calling `super().BuildSectionData()` in the entry's BuildSectionData() +implementation to get the input data, then write it to a file and process it +however is desired.
There are other ways to obtain data also, depending on the situation. If the entry type is simply signing data which exists elsewhere in the image, then @@ -1396,6 +1398,7 @@ is used by `Entry_vblock`, for example::
u_boot: u-boot { }; + vblock { content = <&u_boot &dtb>; keyblock = "firmware.keyblock"; @@ -1440,9 +1443,11 @@ The `soc-fw` node is a `blob-ext` (i.e. it reads in a named binary file) whereas a known blob type.
When adding new entry types you are encouraged to use subnodes to provide the -data for processing, unless the `content` approach is more suitable. Ad-hoc -properties and other methods of obtaining data are discouraged, since it adds to -confusion for users. +data for processing, unless the `content` approach is more suitable. Consider +whether the input entries are contained within (or consumed by) the entry, vs +just being 'referenced' by the entry. In the latter case, the `content` approach +makes more sense. Ad-hoc properties and other methods of obtaining data are +discouraged, since it adds to confusion for users.
History / Credits ----------------- diff --git a/tools/binman/entry.py b/tools/binman/entry.py index bf68a85b24..de8526618d 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -1136,16 +1136,16 @@ features to produce new behaviours.
Returns: Tuple: - bytes: Concatenated data from all the entries (or False) - str: Filename of file written (or False if no data) - str: Unique portion of filename (or False if no data) + bytes: Concatenated data from all the entries (or None) + str: Filename of file written (or None if no data) + str: Unique portion of filename (or None if no data) """ data = b'' for entry in entries: # First get the input data and put it in a file. If not available, # try later. if not entry.ObtainContents(): - return False, False, False + return None, None, None data += entry.GetData() uniq = self.GetUniqueName() fname = tools.get_output_filename(f'{prefix}.{uniq}') diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 8e74ebb9c0..c28141ff69 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -53,7 +53,7 @@ class Entry_mkimage(Entry): def ObtainContents(self): data, input_fname, uniq = self.collect_contents_to_file( self._mkimage_entries.values(), 'mkimage') - if data is False: + if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq) if self.mkimage.run_cmd('-d', input_fname, *self._args,

On 24/02/2022 02:00, Simon Glass wrote:
Update the return value of this function, fix the 'create' typo and update the documentation for clarity.
Signed-off-by: Simon Glass sjg@chromium.org Suggested-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Changes in v2:
- Add new patch to tweak collect_contents_to_file() and docs
tools/binman/binman.rst | 25 +++++++++++++++---------- tools/binman/entry.py | 8 ++++---- tools/binman/etype/mkimage.py | 2 +- 3 files changed, 20 insertions(+), 15 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

The word 'expand' is used for entries which generate subentries. It is also used for entries that can have an '_expanded' version which is used to break out its contents.
Rather than talking about expanding an entry's size, use the term 'extending'. It is slightly more precise and avoids the above conflicts.
This change renders the old 'expand-size' property invalid, so add an error check for that.
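A small sketch of that check (FakeNode and raise_error are stand-ins for binman's Node and Entry.Raise()):

    def check_renamed_props(node, raise_error):
        # Reject properties that were renamed, pointing at the new name
        if 'pos' in node.props:
            raise_error("Please use 'offset' instead of 'pos'")
        if 'expand-size' in node.props:
            raise_error("Please use 'extend-size' instead of 'expand-size'")

    class FakeNode:
        props = {'expand-size': b''}

    def raise_error(msg):
        raise ValueError(msg)

    try:
        check_renamed_props(FakeNode(), raise_error)
    except ValueError as exc:
        assert 'extend-size' in str(exc)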
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2:
- Add patch to rename ExpandToLimit to extend_to_limit
 tools/binman/binman.rst                   |  2 +-
 tools/binman/entry.py                     |  8 +++++---
 tools/binman/etype/section.py             | 10 +++++-----
 tools/binman/ftest.py                     |  8 ++++++++
 tools/binman/test/088_expand_size.dts     |  8 ++++----
 tools/binman/test/089_expand_size_bad.dts |  2 +-
 tools/binman/test/225_expand_size_bad.dts | 10 ++++++++++
 7 files changed, 34 insertions(+), 14 deletions(-)
 create mode 100644 tools/binman/test/225_expand_size_bad.dts
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 0061e43659..85f8337a31 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -480,7 +480,7 @@ image-pos: for each entry. This makes it easy to find out exactly where the entry ended up in the image, regardless of parent sections, etc.
-expand-size: +extend-size: Expand the size of this entry to fit available space. This space is only limited by the size of the image/section and the position of the next entry. diff --git a/tools/binman/entry.py b/tools/binman/entry.py index de8526618d..a59eb56f14 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -233,6 +233,8 @@ class Entry(object): """ if 'pos' in self._node.props: self.Raise("Please use 'offset' instead of 'pos'") + if 'expand-size' in self._node.props: + self.Raise("Please use 'extend-size' instead of 'expand-size'") self.offset = fdt_util.GetInt(self._node, 'offset') self.size = fdt_util.GetInt(self._node, 'size') self.orig_offset = fdt_util.GetInt(self._node, 'orig-offset') @@ -260,7 +262,7 @@ class Entry(object): self.align_size) self.align_end = fdt_util.GetInt(self._node, 'align-end') self.offset_unset = fdt_util.GetBool(self._node, 'offset-unset') - self.expand_size = fdt_util.GetBool(self._node, 'expand-size') + self.expand_size = fdt_util.GetBool(self._node, 'extend-size') self.missing_msg = fdt_util.GetString(self._node, 'missing-msg')
# This is only supported by blobs and sections at present @@ -772,8 +774,8 @@ features to produce new behaviours. name = '%s.%s' % (node.name, name) return name
- def ExpandToLimit(self, limit): - """Expand an entry so that it ends at the given offset limit""" + def extend_to_limit(self, limit): + """Extent an entry so that it ends at the given offset limit""" if self.offset + self.size < limit: self.size = limit - self.offset # Request the contents again, since changing the size requires that diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index 25159074ba..0cfee5f3e3 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -385,7 +385,7 @@ class Entry_section(Entry): self._PackEntries() if self._sort: self._SortEntries() - self._ExpandEntries() + self._extend_entries()
data = self.BuildSectionData(True) self.SetContents(data) @@ -403,17 +403,17 @@ class Entry_section(Entry): offset = entry.Pack(offset) return offset
- def _ExpandEntries(self): - """Expand any entries that are permitted to""" + def _extend_entries(self): + """Extend any entries that are permitted to""" exp_entry = None for entry in self._entries.values(): if exp_entry: - exp_entry.ExpandToLimit(entry.offset) + exp_entry.extend_to_limit(entry.offset) exp_entry = None if entry.expand_size: exp_entry = entry if exp_entry: - exp_entry.ExpandToLimit(self.size) + exp_entry.extend_to_limit(self.size)
def _SortEntries(self): """Sort entries by offset""" diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 8f00db6945..62528ccbfa 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -5321,6 +5321,14 @@ fdt fdtmap Extract the devicetree blob from the fdtmap self.assertIn("Node '/binman/fit': Unknown operation 'unknown'", str(exc.exception))
+ def test_uses_expand_size(self): + """Test that the 'expand-size' property cannot be used anymore""" + with self.assertRaises(ValueError) as e: + data = self._DoReadFile('225_expand_size_bad.dts') + self.assertIn( + "Node '/binman/u-boot': Please use 'extend-size' instead of 'expand-size'", + str(e.exception)) +
if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/088_expand_size.dts b/tools/binman/test/088_expand_size.dts index c8a01308ec..f352699e37 100644 --- a/tools/binman/test/088_expand_size.dts +++ b/tools/binman/test/088_expand_size.dts @@ -5,7 +5,7 @@ binman { size = <40>; fill { - expand-size; + extend-size; fill-byte = [61]; size = <0>; }; @@ -13,7 +13,7 @@ offset = <8>; }; section { - expand-size; + extend-size; pad-byte = <0x62>; intel-mrc { }; @@ -25,7 +25,7 @@ section2 { type = "section"; fill { - expand-size; + extend-size; fill-byte = [63]; size = <0>; }; @@ -35,7 +35,7 @@ }; fill2 { type = "fill"; - expand-size; + extend-size; fill-byte = [64]; size = <0>; }; diff --git a/tools/binman/test/089_expand_size_bad.dts b/tools/binman/test/089_expand_size_bad.dts index edc0e5cf68..edc60e43fd 100644 --- a/tools/binman/test/089_expand_size_bad.dts +++ b/tools/binman/test/089_expand_size_bad.dts @@ -4,7 +4,7 @@ / { binman { _testing { - expand-size; + extend-size; return-contents-once; }; u-boot { diff --git a/tools/binman/test/225_expand_size_bad.dts b/tools/binman/test/225_expand_size_bad.dts new file mode 100644 index 0000000000..d4ad9a6a1a --- /dev/null +++ b/tools/binman/test/225_expand_size_bad.dts @@ -0,0 +1,10 @@ +// SPDX-License-Identifier: GPL-2.0+ +/dts-v1/; + +/ { + binman { + u-boot { + expand-size; + }; + }; +};

On 24/02/2022 02:00, Simon Glass wrote:
The word 'expand' is used for entries which generate subentries. It is also used for entries that can have an '_expanded' version which is used to break out its contents.
Rather than talking about expanding an entry's size, use the term 'extending'. It is slightly more precise and avoids the above conflicts.
This change renders the old 'expand-size' property invalid, so add an error check for that.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add patch to rename ExpandToLimit to extend_to_limit
tools/binman/binman.rst | 2 +- tools/binman/entry.py | 8 +++++--- tools/binman/etype/section.py | 10 +++++----- tools/binman/ftest.py | 8 ++++++++ tools/binman/test/088_expand_size.dts | 8 ++++---- tools/binman/test/089_expand_size_bad.dts | 2 +- tools/binman/test/225_expand_size_bad.dts | 10 ++++++++++ 7 files changed, 34 insertions(+), 14 deletions(-) create mode 100644 tools/binman/test/225_expand_size_bad.dts
I haven't ever used expand-size, but:
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 0061e43659..85f8337a31 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -480,7 +480,7 @@ image-pos: for each entry. This makes it easy to find out exactly where the entry ended up in the image, regardless of parent sections, etc.
-expand-size: +extend-size: Expand the size of this entry to fit available space. This space is only
Expand -> Extend
limited by the size of the image/section and the position of the next entry.
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index de8526618d..a59eb56f14 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -233,6 +233,8 @@ class Entry(object): """ if 'pos' in self._node.props: self.Raise("Please use 'offset' instead of 'pos'")
if 'expand-size' in self._node.props:
self.Raise("Please use 'extend-size' instead of 'expand-size'") self.offset = fdt_util.GetInt(self._node, 'offset') self.size = fdt_util.GetInt(self._node, 'size') self.orig_offset = fdt_util.GetInt(self._node, 'orig-offset')
@@ -260,7 +262,7 @@ class Entry(object): self.align_size) self.align_end = fdt_util.GetInt(self._node, 'align-end') self.offset_unset = fdt_util.GetBool(self._node, 'offset-unset')
self.expand_size = fdt_util.GetBool(self._node, 'expand-size')
self.expand_size = fdt_util.GetBool(self._node, 'extend-size')
Consider also changing the attribute name, I don't see many instances of it with git-grep. Maybe even the test dts names.
self.missing_msg = fdt_util.GetString(self._node, 'missing-msg') # This is only supported by blobs and sections at present
@@ -772,8 +774,8 @@ features to produce new behaviours. name = '%s.%s' % (node.name, name) return name
- def ExpandToLimit(self, limit):
"""Expand an entry so that it ends at the given offset limit"""
- def extend_to_limit(self, limit):
"""Extent an entry so that it ends at the given offset limit"""
Extent -> Extend
if self.offset + self.size < limit: self.size = limit - self.offset # Request the contents again, since changing the size requires that
[...]

Leave the 'expand' term for use by entry types which have an expanded version of themselves. Rename this method to indicate that it generates subentries.
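A hedged sketch of the renamed hook (class names here are illustrative; the real changes are in the section.py and blob_phase.py hunks below): the base method does nothing, a section recurses into its children, and generating entry types override it to create their sub-entries:

    class EntrySketch:
        def gen_entries(self):
            """Allow entries to generate other entries (formerly ExpandEntries)"""

    class SectionSketch(EntrySketch):
        def __init__(self, entries):
            self._entries = entries

        def gen_entries(self):
            # Recurse so that nested generators (e.g. blob-phase, files) run too
            super().gen_entries()
            for entry in self._entries.values():
                entry.gen_entries()

    SectionSketch({'u-boot': EntrySketch()}).gen_entries()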
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2:
- Add patch to rename ExpandEntries to gen_entries
 tools/binman/control.py          | 2 +-
 tools/binman/entry.py            | 4 ++--
 tools/binman/etype/blob_phase.py | 2 +-
 tools/binman/etype/files.py      | 2 +-
 tools/binman/etype/section.py    | 6 +++---
 5 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/tools/binman/control.py b/tools/binman/control.py index a179f78129..be2b422e62 100644 --- a/tools/binman/control.py +++ b/tools/binman/control.py @@ -501,7 +501,7 @@ def PrepareImagesAndDtbs(dtb_fname, select_images, update_fdt, use_expanded): # entry offsets remain the same. for image in images.values(): image.CollectBintools() - image.ExpandEntries() + image.gen_entries() if update_fdt: image.AddMissingProperties(True) image.ProcessFdt(dtb) diff --git a/tools/binman/entry.py b/tools/binman/entry.py index a59eb56f14..90a40bb274 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -284,8 +284,8 @@ class Entry(object): """ return {}
- def ExpandEntries(self): - """Expand out entries which produce other entries + def gen_entries(self): + """Allow entries to generate other entries
Some entries generate subnodes automatically, from which sub-entries are then created. This method allows those to be added to the binman diff --git a/tools/binman/etype/blob_phase.py b/tools/binman/etype/blob_phase.py index ed25e467a1..aba93b759b 100644 --- a/tools/binman/etype/blob_phase.py +++ b/tools/binman/etype/blob_phase.py @@ -39,7 +39,7 @@ class Entry_blob_phase(Entry_section): self.dtb_file = dtb_file self.bss_pad = bss_pad
- def ExpandEntries(self): + def gen_entries(self): """Create the subnodes""" names = [self.root_fname + '-nodtb', self.root_fname + '-dtb'] if self.bss_pad: diff --git a/tools/binman/etype/files.py b/tools/binman/etype/files.py index 0650a69c55..210a16d671 100644 --- a/tools/binman/etype/files.py +++ b/tools/binman/etype/files.py @@ -46,7 +46,7 @@ class Entry_files(Entry_section): self._require_matches = fdt_util.GetBool(self._node, 'require-matches')
- def ExpandEntries(self): + def gen_entries(self): files = tools.get_input_filename_glob(self._pattern) if self._require_matches and not files: self.Raise("Pattern '%s' matched no files" % self._pattern) diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index 0cfee5f3e3..f2d401d95b 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -233,10 +233,10 @@ class Entry_section(Entry): todo) return True
- def ExpandEntries(self): - super().ExpandEntries() + def gen_entries(self): + super().gen_entries() for entry in self._entries.values(): - entry.ExpandEntries() + entry.gen_entries()
def AddMissingProperties(self, have_image_pos): """Add new properties to the device tree as needed for this entry"""

On 24/02/2022 02:00, Simon Glass wrote:
Leave the 'expand' term for use by entry types which have an expanded version of themselves. Rename this method to indicate that it generates subentries.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add patch to rename ExpandEntries to gen_entries
tools/binman/control.py | 2 +- tools/binman/entry.py | 4 ++-- tools/binman/etype/blob_phase.py | 2 +- tools/binman/etype/files.py | 2 +- tools/binman/etype/section.py | 6 +++--- 5 files changed, 8 insertions(+), 8 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

At present the fit implementation creates the output tree while scanning the FIT description. Then it updates the tree later when the data is known.
This works, but is a bit confusing, since it requires mixing the scanning code with the generation code, with a fix-up step at the end.
It is actually possible to do this in two phases, one to scan everything and the other to generate the FIT. Thus the FIT is generated in one pass, when everything is known.
Update the code accordingly. The only functional change is that the 'data' property for each node is now last instead of first, which is really a more natural position. Update the affected test to deal with this.
One wrinkle is that the calculated properties (image-pos, size and offset) are now added before the FIT is generated, so we must filter these out when copying properties from the binman description to the FIT.
Most of the change here is splitting out some of the code from the ReadEntries() implementation into _BuildInput(). So despite the large diff, most of the code is the same. It is not feasible to split this patch up, so far as I can tell.
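As a standalone sketch of that filtering step (a plain function here, unlike the nested _process_prop() in the diff below): 'fit,...' directives drive binman itself and the calculated values are recomputed anyway, so neither belongs in the generated FIT:

    CALCULATED_PROPS = ('offset', 'size', 'image-pos')

    def copy_to_fit(pname):
        if pname.startswith('fit,'):
            return False          # binman directive, not FIT content
        if pname in CALCULATED_PROPS:
            return False          # calculated by binman; do not copy it over
        return True

    props = ['description', 'fit,external-offset', 'image-pos', 'arch', 'size']
    assert [p for p in props if copy_to_fit(p)] == ['description', 'arch']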
Signed-off-by: Simon Glass <sjg@chromium.org>
---
Changes in v2:
- Add new patch to refactor fit to generate output at the end
 tools/binman/etype/fit.py              | 178 ++++++++++++++-----------
 tools/binman/ftest.py                  |  13 +-
 tools/binman/test/224_fit_bad_oper.dts |   2 -
 3 files changed, 109 insertions(+), 84 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 2d4c5f6545..61c72780e9 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -209,6 +209,81 @@ class Entry_fit(Entry_section): return oper
def ReadEntries(self): + def _add_entries(base_node, depth, node): + """Add entries for any nodes that need them + + Args: + base_node: Base Node of the FIT (with 'description' property) + depth: Current node depth (0 is the base 'fit' node) + node: Current node to process + + He we only need to provide binman entries which are used to define + the 'data' for each image. We create an entry_Section for each. + """ + rel_path = node.path[len(base_node.path):] + in_images = rel_path.startswith('/images') + has_images = depth == 2 and in_images + if has_images: + # This node is a FIT subimage node (e.g. "/images/kernel") + # containing content nodes. We collect the subimage nodes and + # section entries for them here to merge the content subnodes + # together and put the merged contents in the subimage node's + # 'data' property later. + entry = Entry.Create(self.section, node, etype='section') + entry.ReadNode() + # The hash subnodes here are for mkimage, not binman. + entry.SetUpdateHash(False) + self._entries[rel_path] = entry + + for subnode in node.subnodes: + _add_entries(base_node, depth + 1, subnode) + + _add_entries(self._node, 0, self._node) + + def BuildSectionData(self, required): + """Build FIT entry contents + + This adds the 'data' properties to the input ITB (Image-tree Binary) + then runs mkimage to process it. + + Args: + required: True if the data must be present, False if it is OK to + return None + + Returns: + Contents of the section (bytes) + """ + data = self._BuildInput() + uniq = self.GetUniqueName() + input_fname = tools.get_output_filename('%s.itb' % uniq) + output_fname = tools.get_output_filename('%s.fit' % uniq) + tools.write_file(input_fname, data) + tools.write_file(output_fname, data) + + args = {} + ext_offset = self._fit_props.get('fit,external-offset') + if ext_offset is not None: + args = { + 'external': True, + 'pad': fdt_util.fdt32_to_cpu(ext_offset.value) + } + if self.mkimage.run(reset_timestamp=True, output_fname=output_fname, + **args) is None: + # Bintool is missing; just use empty data as the output + self.record_missing_bintool(self.mkimage) + return tools.get_bytes(0, 1024) + + return tools.read_file(output_fname) + + def _BuildInput(self): + """Finish the FIT by adding the 'data' properties to it + + Arguments: + fdt: FIT to update + + Returns: + New fdt contents (bytes) + """ def _process_prop(pname, prop): """Process special properties
@@ -236,9 +311,15 @@ class Entry_fit(Entry_section): val = val[1:].replace('DEFAULT-SEQ', str(seq + 1)) fsw.property_string(pname, val) return + elif pname.startswith('fit,'): + # Ignore these, which are commands for binman to process + return + elif pname in ['offset', 'size', 'image-pos']: + # Don't add binman's calculated properties + return fsw.property(pname, prop.bytes)
- def _scan_gen_fdt_nodes(subnode, depth, in_images): + def _gen_fdt_nodes(subnode, depth, in_images): """Generate FDT nodes
This creates one node for each member of self._fdts using the @@ -277,7 +358,7 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
- def _scan_node(subnode, depth, in_images): + def _gen_node(subnode, depth, in_images): """Generate nodes from a template
This creates one node for each member of self._fdts using the @@ -294,10 +375,10 @@ class Entry_fit(Entry_section): """ oper = self._get_operation(subnode) if oper == OP_GEN_FDT_NODES: - _scan_gen_fdt_nodes(subnode, depth, in_images) + _gen_fdt_nodes(subnode, depth, in_images)
- def _AddNode(base_node, depth, node): - """Add a node to the FIT + def _add_node(base_node, depth, node): + """Add nodes to the output FIT
Args: base_node: Base Node of the FIT (with 'description' property) @@ -307,104 +388,49 @@ class Entry_fit(Entry_section): There are two cases to deal with: - hash and signature nodes which become part of the FIT - binman entries which are used to define the 'data' for each - image + image, so don't appear in the FIT """ + # Copy over all the relevant properties for pname, prop in node.props.items(): - if not pname.startswith('fit,'): - _process_prop(pname, prop) + _process_prop(pname, prop)
rel_path = node.path[len(base_node.path):] in_images = rel_path.startswith('/images') + has_images = depth == 2 and in_images if has_images: - # This node is a FIT subimage node (e.g. "/images/kernel") - # containing content nodes. We collect the subimage nodes and - # section entries for them here to merge the content subnodes - # together and put the merged contents in the subimage node's - # 'data' property later. - entry = Entry.Create(self.section, node, etype='section') - entry.ReadNode() - # The hash subnodes here are for mkimage, not binman. - entry.SetUpdateHash(False) - self._entries[rel_path] = entry + entry = self._entries[rel_path] + data = entry.GetData() + fsw.property('data', bytes(data))
for subnode in node.subnodes: if has_images and not (subnode.name.startswith('hash') or subnode.name.startswith('signature')): # This subnode is a content node not meant to appear in # the FIT (e.g. "/images/kernel/u-boot"), so don't call - # fsw.add_node() or _AddNode() for it. + # fsw.add_node() or _add_node() for it. pass elif self.GetImage().generate and subnode.name.startswith('@'): - _scan_node(subnode, depth, in_images) + subnode_path = f'{rel_path}/{subnode.name}' + entry = self._entries.get(subnode_path) + _gen_node(subnode, depth, in_images) + if entry: + del self._entries[subnode_path] else: with fsw.add_node(subnode.name): - _AddNode(base_node, depth + 1, subnode) + _add_node(base_node, depth + 1, subnode)
# Build a new tree with all nodes and properties starting from the # entry node fsw = libfdt.FdtSw() fsw.finish_reservemap() with fsw.add_node(''): - _AddNode(self._node, 0, self._node) + _add_node(self._node, 0, self._node) fdt = fsw.as_fdt()
# Pack this new FDT and scan it so we can add the data later fdt.pack() - self._fdt = Fdt.FromData(fdt.as_bytearray()) - self._fdt.Scan() - - def BuildSectionData(self, required): - """Build FIT entry contents - - This adds the 'data' properties to the input ITB (Image-tree Binary) - then runs mkimage to process it. - - Args: - required: True if the data must be present, False if it is OK to - return None - - Returns: - Contents of the section (bytes) - """ - data = self._BuildInput(self._fdt) - uniq = self.GetUniqueName() - input_fname = tools.get_output_filename('%s.itb' % uniq) - output_fname = tools.get_output_filename('%s.fit' % uniq) - tools.write_file(input_fname, data) - tools.write_file(output_fname, data) - - args = {} - ext_offset = self._fit_props.get('fit,external-offset') - if ext_offset is not None: - args = { - 'external': True, - 'pad': fdt_util.fdt32_to_cpu(ext_offset.value) - } - if self.mkimage.run(reset_timestamp=True, output_fname=output_fname, - **args) is None: - # Bintool is missing; just use empty data as the output - self.record_missing_bintool(self.mkimage) - return tools.get_bytes(0, 1024) - - return tools.read_file(output_fname) - - def _BuildInput(self, fdt): - """Finish the FIT by adding the 'data' properties to it - - Arguments: - fdt: FIT to update - - Returns: - New fdt contents (bytes) - """ - for path, section in self._entries.items(): - node = fdt.GetNode(path) - data = section.GetData() - node.AddData('data', data) - - fdt.Sync(auto_resize=True) - data = fdt.GetContents() + data = fdt.as_bytearray() return data
def SetImagePos(self, image_pos): diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 62528ccbfa..4eac913d06 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -3780,6 +3780,7 @@ class TestFunctional(unittest.TestCase): dtb.Scan() props = self._GetPropTree(dtb, BASE_DTB_PROPS + REPACK_DTB_PROPS)
+ self.maxDiff = None self.assertEqual({ 'image-pos': 0, 'offset': 0, @@ -3793,19 +3794,19 @@ class TestFunctional(unittest.TestCase): 'fit:offset': 4, 'fit:size': 1840,
- 'fit/images/kernel:image-pos': 160, - 'fit/images/kernel:offset': 156, + 'fit/images/kernel:image-pos': 304, + 'fit/images/kernel:offset': 300, 'fit/images/kernel:size': 4,
- 'fit/images/kernel/u-boot:image-pos': 160, + 'fit/images/kernel/u-boot:image-pos': 304, 'fit/images/kernel/u-boot:offset': 0, 'fit/images/kernel/u-boot:size': 4,
- 'fit/images/fdt-1:image-pos': 456, - 'fit/images/fdt-1:offset': 452, + 'fit/images/fdt-1:image-pos': 552, + 'fit/images/fdt-1:offset': 548, 'fit/images/fdt-1:size': 6,
- 'fit/images/fdt-1/u-boot-spl-dtb:image-pos': 456, + 'fit/images/fdt-1/u-boot-spl-dtb:image-pos': 552, 'fit/images/fdt-1/u-boot-spl-dtb:offset': 0, 'fit/images/fdt-1/u-boot-spl-dtb:size': 6,
diff --git a/tools/binman/test/224_fit_bad_oper.dts b/tools/binman/test/224_fit_bad_oper.dts index cee801e2ea..8a8014ea33 100644 --- a/tools/binman/test/224_fit_bad_oper.dts +++ b/tools/binman/test/224_fit_bad_oper.dts @@ -21,7 +21,5 @@ }; }; }; - fdtmap { - }; }; };

On 24/02/2022 02:00, Simon Glass wrote:
At present the fit implementation creates the output tree while scanning the FIT description. Then it updates the tree later when the data is known.
This works, but is a bit confusing, since it requires mixing the scanning code with the generation code, with a fix-up step at the end.
It is actually possible to do this in two phases, one to scan everything and the other to generate the FIT. Thus the FIT is generated in one pass, when everything is known.
Doing it in one go makes sense to me as well. In general I like the way distinct processing actions/steps are being split into their own blocks or so, and I think this helps move things toward that.
Update the code accordingly. The only functional change is that the 'data' property for each node are now last instead of first, which is really a more natural position. Update the affected test to deal with this.
One wrinkle is that the calculated properties (image-pos, size and offset) are now added before the FIT is generated. so we must filter these out when copying properties from the binman description to the FIT.
Most of the change here is splitting out some of the code from the ReadEntries() implementation into _BuildInput(). So despite the large diff, most of the code is the same. It is not feasible to split this patch up, so far as I can tell.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add new patch to refactor fit to generate output at the end
tools/binman/etype/fit.py | 178 ++++++++++++++----------- tools/binman/ftest.py | 13 +- tools/binman/test/224_fit_bad_oper.dts | 2 - 3 files changed, 109 insertions(+), 84 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
I still wrote some weird ideas below, mostly for the future, since this patch is mostly moving code around which is fine as is.
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 2d4c5f6545..61c72780e9 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -209,6 +209,81 @@ class Entry_fit(Entry_section): return oper
def ReadEntries(self):
def _add_entries(base_node, depth, node):
"""Add entries for any nodes that need them
Args:
base_node: Base Node of the FIT (with 'description' property)
depth: Current node depth (0 is the base 'fit' node)
node: Current node to process
He we only need to provide binman entries which are used to define
He -> Here ?
the 'data' for each image. We create an entry_Section for each.
"""
rel_path = node.path[len(base_node.path):]
in_images = rel_path.startswith('/images')
has_images = depth == 2 and in_images
if has_images:
# This node is a FIT subimage node (e.g. "/images/kernel")
# containing content nodes. We collect the subimage nodes and
# section entries for them here to merge the content subnodes
# together and put the merged contents in the subimage node's
# 'data' property later.
entry = Entry.Create(self.section, node, etype='section')
entry.ReadNode()
I plan to change 'self.section' to 'self' here later, fixes extracting wrong contents for FIT subentries.
# The hash subnodes here are for mkimage, not binman.
entry.SetUpdateHash(False)
self._entries[rel_path] = entry
I also plan to change this to a single-level node name instead of the relative path, lets 'binman extract fit/u-boot' etc. run at all.
for subnode in node.subnodes:
_add_entries(base_node, depth + 1, subnode)
_add_entries(self._node, 0, self._node)
I think it's especially visible here what I meant by switching away from recursion: this recurses through every node but only does anything on immediate subnodes of "/images" (for now?).
- def BuildSectionData(self, required):
"""Build FIT entry contents
This adds the 'data' properties to the input ITB (Image-tree Binary)
then runs mkimage to process it.
Args:
required: True if the data must be present, False if it is OK to
return None
I forgot to handle 'required' while converting FIT to section...
Returns:
Contents of the section (bytes)
"""
data = self._BuildInput()
uniq = self.GetUniqueName()
input_fname = tools.get_output_filename('%s.itb' % uniq)
output_fname = tools.get_output_filename('%s.fit' % uniq)
tools.write_file(input_fname, data)
tools.write_file(output_fname, data)
args = {}
ext_offset = self._fit_props.get('fit,external-offset')
if ext_offset is not None:
args = {
'external': True,
'pad': fdt_util.fdt32_to_cpu(ext_offset.value)
}
if self.mkimage.run(reset_timestamp=True, output_fname=output_fname,
**args) is None:
I have an idea for the far future, to let /image/* nodes sometimes be Entry_collection to handle external offsets in binman so we can take mkimage completely out of this, but no clue how feasible/desirable that end goal is.
# Bintool is missing; just use empty data as the output
self.record_missing_bintool(self.mkimage)
return tools.get_bytes(0, 1024)
return tools.read_file(output_fname)
- def _BuildInput(self):
"""Finish the FIT by adding the 'data' properties to it
Arguments:
fdt: FIT to update
Returns:
New fdt contents (bytes)
""" def _process_prop(pname, prop): """Process special properties
@@ -236,9 +311,15 @@ class Entry_fit(Entry_section): val = val[1:].replace('DEFAULT-SEQ', str(seq + 1)) fsw.property_string(pname, val) return
elif pname.startswith('fit,'):
# Ignore these, which are commands for binman to process
return
elif pname in ['offset', 'size', 'image-pos']:
# Don't add binman's calculated properties
return
This is one of the things I was thinking of doing, thanks. I encountered the same issue when replacing a FIT entry with the same f"{uniq}.fit" that was used to build it, will try adding a test for that later.
fsw.property(pname, prop.bytes)
def _scan_gen_fdt_nodes(subnode, depth, in_images):
def _gen_fdt_nodes(subnode, depth, in_images): """Generate FDT nodes This creates one node for each member of self._fdts using the
[...]
diff --git a/tools/binman/test/224_fit_bad_oper.dts b/tools/binman/test/224_fit_bad_oper.dts index cee801e2ea..8a8014ea33 100644 --- a/tools/binman/test/224_fit_bad_oper.dts +++ b/tools/binman/test/224_fit_bad_oper.dts @@ -21,7 +21,5 @@ }; }; };
fdtmap {
};
This looked unrelated at first, but as I understand it, changing the order of things made the bad-operation error happen later and exposed a breakage from the fdtmap entry trying to parse the mock dtb data.
}; };

Hi Alper,
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
At present the fit implementation creates the output tree while scanning the FIT description. Then it updates the tree later when the data is known.
This works, but is a bit confusing, since it requires mixing the scanning code with the generation code, with a fix-up step at the end.
It is actually possible to do this in two phases, one to scan everything and the other to generate the FIT. Thus the FIT is generated in one pass, when everything is known.
Doing it in one go makes sense to me as well. In general I like the way distinct processing actions/steps are being split into their own blocks or so, and I think this helps move things toward that.
OK good.
Update the code accordingly. The only functional change is that the 'data' property for each node are now last instead of first, which is really a more natural position. Update the affected test to deal with this.
One wrinkle is that the calculated properties (image-pos, size and offset) are now added before the FIT is generated. so we must filter these out when copying properties from the binman description to the FIT.
Most of the change here is splitting out some of the code from the ReadEntries() implementation into _BuildInput(). So despite the large diff, most of the code is the same. It is not feasible to split this patch up, so far as I can tell.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add new patch to refactor fit to generate output at the end
tools/binman/etype/fit.py | 178 ++++++++++++++----------- tools/binman/ftest.py | 13 +- tools/binman/test/224_fit_bad_oper.dts | 2 - 3 files changed, 109 insertions(+), 84 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
I still wrote some weird ideas below, mostly for the future, since this patch is mostly moving code around which is fine as is.
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 2d4c5f6545..61c72780e9 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -209,6 +209,81 @@ class Entry_fit(Entry_section): return oper
def ReadEntries(self):
def _add_entries(base_node, depth, node):
"""Add entries for any nodes that need them
Args:
base_node: Base Node of the FIT (with 'description' property)
depth: Current node depth (0 is the base 'fit' node)
node: Current node to process
He we only need to provide binman entries which are used to define
He -> Here ?
the 'data' for each image. We create an entry_Section for each.
"""
rel_path = node.path[len(base_node.path):]
in_images = rel_path.startswith('/images')
has_images = depth == 2 and in_images
if has_images:
# This node is a FIT subimage node (e.g. "/images/kernel")
# containing content nodes. We collect the subimage nodes and
# section entries for them here to merge the content subnodes
# together and put the merged contents in the subimage node's
# 'data' property later.
entry = Entry.Create(self.section, node, etype='section')
entry.ReadNode()
I plan to change 'self.section' to 'self' here later; it fixes extracting the wrong contents for FIT subentries.
OK
# The hash subnodes here are for mkimage, not binman.
entry.SetUpdateHash(False)
self._entries[rel_path] = entry
I also plan to change this to a single-level node name instead of the relative path, which lets 'binman extract fit/u-boot' etc. run at all.
That would be good.
for subnode in node.subnodes:
_add_entries(base_node, depth + 1, subnode)
_add_entries(self._node, 0, self._node)
I think it's especially visible here what I meant by switching away from recursion: this recurses through every node but only does anything on immediate subnodes of "/images" (for now?).
Fair enough.
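As a minimal standalone sketch of the non-recursive alternative described above (images_node, subnodes and make_entry are placeholders, not the real binman API), only the immediate subnodes of /images would be visited:

def add_image_entries(images_node, make_entry):
    entries = {}
    for subnode in images_node.subnodes:    # e.g. /images/kernel, /images/fdt-1
        # make_entry stands in for Entry.Create() followed by ReadNode()
        entries[subnode.name] = make_entry(subnode)
    return entries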
- def BuildSectionData(self, required):
"""Build FIT entry contents
This adds the 'data' properties to the input ITB (Image-tree Binary)
then runs mkimage to process it.
Args:
required: True if the data must be present, False if it is OK to
return None
I forgot to handle 'required' while converting FIT to section...
Returns:
Contents of the section (bytes)
"""
data = self._BuildInput()
uniq = self.GetUniqueName()
input_fname = tools.get_output_filename('%s.itb' % uniq)
output_fname = tools.get_output_filename('%s.fit' % uniq)
tools.write_file(input_fname, data)
tools.write_file(output_fname, data)
args = {}
ext_offset = self._fit_props.get('fit,external-offset')
if ext_offset is not None:
args = {
'external': True,
'pad': fdt_util.fdt32_to_cpu(ext_offset.value)
}
if self.mkimage.run(reset_timestamp=True, output_fname=output_fname,
**args) is None:
I have an idea for the far future, to let /image/* nodes sometimes be Entry_collection to handle external offsets in binman so we can take mkimage completely out of this, but no clue how feasible/desirable that end goal is.
Well I would really prefer to avoid duplicating mkimage functionality. I think it is something to look at when binman is more widespread, but as you can see we are still sorting out the FIT issues.
# Bintool is missing; just use empty data as the output
self.record_missing_bintool(self.mkimage)
return tools.get_bytes(0, 1024)
return tools.read_file(output_fname)
- def _BuildInput(self):
"""Finish the FIT by adding the 'data' properties to it
Arguments:
fdt: FIT to update
Returns:
New fdt contents (bytes)
""" def _process_prop(pname, prop): """Process special properties
@@ -236,9 +311,15 @@ class Entry_fit(Entry_section): val = val[1:].replace('DEFAULT-SEQ', str(seq + 1)) fsw.property_string(pname, val) return
elif pname.startswith('fit,'):
# Ignore these, which are commands for binman to process
return
elif pname in ['offset', 'size', 'image-pos']:
# Don't add binman's calculated properties
return
This is one of the things I was thinking of doing, thanks. I encountered the same issue when replacing a FIT entry with the same f"{uniq}.fit" that was used to build it; I will try adding a test for that later.
Yes good.
fsw.property(pname, prop.bytes)
def _scan_gen_fdt_nodes(subnode, depth, in_images):
def _gen_fdt_nodes(subnode, depth, in_images): """Generate FDT nodes This creates one node for each member of self._fdts using the
[...]
diff --git a/tools/binman/test/224_fit_bad_oper.dts b/tools/binman/test/224_fit_bad_oper.dts index cee801e2ea..8a8014ea33 100644 --- a/tools/binman/test/224_fit_bad_oper.dts +++ b/tools/binman/test/224_fit_bad_oper.dts @@ -21,7 +21,5 @@ }; }; };
fdtmap {
};
This looked unrelated at first, but as I understand it, changing the order of things made the bad-operation error happen later and exposed a breakage from the fdtmap entry trying to parse the mock dtb data.
Yes, that's it.
Regards, Simon

The 'tools' parameter shadows the patman.tools library, so rename it to avoid a pylint warning.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/entry.py | 4 ++-- tools/binman/etype/fit.py | 6 +++--- tools/binman/etype/gbb.py | 4 ++-- tools/binman/etype/intel_ifwi.py | 4 ++-- tools/binman/etype/mkimage.py | 4 ++-- tools/binman/etype/section.py | 4 ++-- tools/binman/etype/vblock.py | 4 ++-- 7 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 90a40bb274..00a13c5839 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -1099,11 +1099,11 @@ features to produce new behaviours. """ pass
- def AddBintools(self, tools): + def AddBintools(self, btools): """Add the bintools used by this entry type
Args: - tools (dict of Bintool): + btools (dict of Bintool): """ pass
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 61c72780e9..2b82955226 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -484,6 +484,6 @@ class Entry_fit(Entry_section): section.SetOffsetSize(offset, size) section.SetImagePos(self.image_pos)
- def AddBintools(self, tools): - super().AddBintools(tools) - self.mkimage = self.AddBintool(tools, 'mkimage') + def AddBintools(self, btools): + super().AddBintools(btools) + self.mkimage = self.AddBintool(btools, 'mkimage') diff --git a/tools/binman/etype/gbb.py b/tools/binman/etype/gbb.py index e32fae27ce..7394e4e5d3 100644 --- a/tools/binman/etype/gbb.py +++ b/tools/binman/etype/gbb.py @@ -99,5 +99,5 @@ class Entry_gbb(Entry):
return True
- def AddBintools(self, tools): - self.futility = self.AddBintool(tools, 'futility') + def AddBintools(self, btools): + self.futility = self.AddBintool(btools, 'futility') diff --git a/tools/binman/etype/intel_ifwi.py b/tools/binman/etype/intel_ifwi.py index 46bdf116e6..4fa7d636fe 100644 --- a/tools/binman/etype/intel_ifwi.py +++ b/tools/binman/etype/intel_ifwi.py @@ -143,5 +143,5 @@ class Entry_intel_ifwi(Entry_blob_ext): for entry in self._ifwi_entries.values(): entry.WriteSymbols(self)
- def AddBintools(self, tools): - self.ifwitool = self.AddBintool(tools, 'ifwitool') + def AddBintools(self, btools): + self.ifwitool = self.AddBintool(btools, 'ifwitool') diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index c28141ff69..9f4717e4ea 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -93,5 +93,5 @@ class Entry_mkimage(Entry): for entry in self._mkimage_entries.values(): entry.CheckFakedBlobs(faked_blobs_list)
- def AddBintools(self, tools): - self.mkimage = self.AddBintool(tools, 'mkimage') + def AddBintools(self, btools): + self.mkimage = self.AddBintool(btools, 'mkimage') diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index f2d401d95b..16ee5ed723 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -894,6 +894,6 @@ class Entry_section(Entry): for entry in self._entries.values(): entry.CheckAltFormats(alt_formats)
- def AddBintools(self, tools): + def AddBintools(self, btools): for entry in self._entries.values(): - entry.AddBintools(tools) + entry.AddBintools(btools) diff --git a/tools/binman/etype/vblock.py b/tools/binman/etype/vblock.py index a1de98290b..065b6ed2f6 100644 --- a/tools/binman/etype/vblock.py +++ b/tools/binman/etype/vblock.py @@ -97,5 +97,5 @@ class Entry_vblock(Entry_collection): data = self.GetVblock(True) return self.ProcessContentsUpdate(data)
- def AddBintools(self, tools): - self.futility = self.AddBintool(tools, 'futility') + def AddBintools(self, btools): + self.futility = self.AddBintool(btools, 'futility')

On 24/02/2022 02:00, Simon Glass wrote:
The 'tools' parameter shadows the patman.tools library, so rename it to avoid a pylint warning.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
tools/binman/entry.py | 4 ++-- tools/binman/etype/fit.py | 6 +++--- tools/binman/etype/gbb.py | 4 ++-- tools/binman/etype/intel_ifwi.py | 4 ++-- tools/binman/etype/mkimage.py | 4 ++-- tools/binman/etype/section.py | 4 ++-- tools/binman/etype/vblock.py | 4 ++-- 7 files changed, 15 insertions(+), 15 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

At present fake blobs are created but internally an empty blob is used. Change it to use the contents of the faked file. Also return whether the blob was faked, in case the caller needs to know that.
Add a TODO to put fake blobs in their own directory.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Add a patch to change how faked blobs are created
tools/binman/binman.rst | 3 ++- tools/binman/entry.py | 9 ++++++--- tools/binman/etype/blob.py | 7 ++++--- tools/binman/etype/blob_ext_list.py | 2 +- 4 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 85f8337a31..a90364f2fb 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -1500,7 +1500,8 @@ Some ideas: - Figure out how to make Fdt support changing the node order, so that Node.AddSubnode() can support adding a node before another, existing node. Perhaps it should completely regenerate the flat tree? - +- Put faked files into a separate subdir and remove them on start-up, to avoid + seeing them as 'real' files on a subsequent run
-- Simon Glass sjg@chromium.org diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 00a13c5839..2fb0050da5 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -997,15 +997,18 @@ features to produce new behaviours. fname (str): Filename to check
Returns: - fname (str): Filename of faked file + tuple: + fname (str): Filename of faked file + bool: True if the blob was faked, False if not """ if self.allow_fake and not pathlib.Path(fname).is_file(): outfname = tools.get_output_filename(os.path.basename(fname)) with open(outfname, "wb") as out: out.truncate(1024) self.faked = True - return outfname - return fname + tout.info(f"Entry '{self._node.path}': Faked file '{outfname}'") + return outfname, True + return fname, False
def CheckFakedBlobs(self, faked_blobs_list): """Check if any entries in this section have faked external blobs diff --git a/tools/binman/etype/blob.py b/tools/binman/etype/blob.py index 25ec5d26c9..89f089e740 100644 --- a/tools/binman/etype/blob.py +++ b/tools/binman/etype/blob.py @@ -41,10 +41,11 @@ class Entry_blob(Entry): self.external and self.section.GetAllowMissing()) # Allow the file to be missing if not self._pathname: - self._pathname = self.check_fake_fname(self._filename) - self.SetContents(b'') + self._pathname, faked = self.check_fake_fname(self._filename) self.missing = True - return True + if not faked: + self.SetContents(b'') + return True
self.ReadBlobContents() return True diff --git a/tools/binman/etype/blob_ext_list.py b/tools/binman/etype/blob_ext_list.py index 76ad32a1ee..f00202e9eb 100644 --- a/tools/binman/etype/blob_ext_list.py +++ b/tools/binman/etype/blob_ext_list.py @@ -37,7 +37,7 @@ class Entry_blob_ext_list(Entry_blob): missing = False pathnames = [] for fname in self._filenames: - fname = self.check_fake_fname(fname) + fname, _ = self.check_fake_fname(fname) pathname = tools.get_input_filename( fname, self.external and self.section.GetAllowMissing()) # Allow the file to be missing

On 24/02/2022 02:00, Simon Glass wrote:
At present fake blobs are created but internally an empty blob is used. Change it to use the contents of the faked file. Also return whether the blob was faked, in case the caller needs to know that.
Add a TODO to put fake blobs in their own directory.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add a patch to change how faked blobs are created
tools/binman/binman.rst | 3 ++- tools/binman/entry.py | 9 ++++++--- tools/binman/etype/blob.py | 7 ++++--- tools/binman/etype/blob_ext_list.py | 2 +- 4 files changed, 13 insertions(+), 8 deletions(-)
I feel a bit uneasy about all this fake-file stuff, but can't figure out exactly why. I guess the patch doesn't make it that much worse.
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 85f8337a31..a90364f2fb 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -1500,7 +1500,8 @@ Some ideas:
- Figure out how to make Fdt support changing the node order, so that Node.AddSubnode() can support adding a node before another, existing node. Perhaps it should completely regenerate the flat tree?
+- Put faked files into a separate subdir and remove them on start-up, to avoid
- seeing them as 'real' files on a subsequent run
Do we need to create actual files, or is it a convenience thing for blob entry types (because they already read their contents from files)?
Also, maybe it's better to create them somewhere in the /tmp/binman.* directories so they disappear without effort.
-- Simon Glass sjg@chromium.org diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 00a13c5839..2fb0050da5 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -997,15 +997,18 @@ features to produce new behaviours. fname (str): Filename to check
Returns:
fname (str): Filename of faked file
tuple:
fname (str): Filename of faked file
bool: True if the blob was faked, False if not """ if self.allow_fake and not pathlib.Path(fname).is_file(): outfname = tools.get_output_filename(os.path.basename(fname)) with open(outfname, "wb") as out: out.truncate(1024) self.faked = True
return outfname
return fname
tout.info(f"Entry '{self._node.path}': Faked file '{outfname}'")
return outfname, True
return fname, False
Can't callers use self.faked for this?
I think I see an edge case when calling this multiple times for the same filename: only the first call would recognize it as a fake file, and only the first entry to call it would consider itself faked.
def CheckFakedBlobs(self, faked_blobs_list): """Check if any entries in this section have faked external blobs
diff --git a/tools/binman/etype/blob.py b/tools/binman/etype/blob.py index 25ec5d26c9..89f089e740 100644 --- a/tools/binman/etype/blob.py +++ b/tools/binman/etype/blob.py @@ -41,10 +41,11 @@ class Entry_blob(Entry): self.external and self.section.GetAllowMissing()) # Allow the file to be missing if not self._pathname:
self._pathname = self.check_fake_fname(self._filename)
self.SetContents(b'')
self._pathname, faked = self.check_fake_fname(self._filename) self.missing = True
return True
if not faked:
self.SetContents(b'')
return True self.ReadBlobContents() return True
[...]

Hi Alper,
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
At present fake blobs are created but internally an empty blob is used. Change it to use the contents of the faked file. Also return whether the blob was faked, in case the caller needs to know that.
Add a TODO to put fake blobs in their own directory.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add a patch to change how faked blobs are created
tools/binman/binman.rst | 3 ++- tools/binman/entry.py | 9 ++++++--- tools/binman/etype/blob.py | 7 ++++--- tools/binman/etype/blob_ext_list.py | 2 +- 4 files changed, 13 insertions(+), 8 deletions(-)
I feel a bit uneasy about all this fake-file stuff, but can't figure out exactly why. I guess the patch doesn't make it that much worse.
Well we have at least one major problem with it, which is that fake blobs need to exist only for the life of the binman run.
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
diff --git a/tools/binman/binman.rst b/tools/binman/binman.rst index 85f8337a31..a90364f2fb 100644 --- a/tools/binman/binman.rst +++ b/tools/binman/binman.rst @@ -1500,7 +1500,8 @@ Some ideas:
- Figure out how to make Fdt support changing the node order, so that Node.AddSubnode() can support adding a node before another, existing node. Perhaps it should completely regenerate the flat tree?
+- Put faked files into a separate subdir and remove them on start-up, to avoid
- seeing them as 'real' files on a subsequent run
Do we need to create actual files, or is it a convenience thing for blob entry types (because they already read their contents from files)?
We need the files for mkimage.
Also, maybe it's better to create them somewhere in the /tmp/binman.* directories so they disappear without effort.
Yes that would be better. I decided to stop this series where it is since that can be done in a follow-up. I did add a TODO though.
-- Simon Glass sjg@chromium.org diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 00a13c5839..2fb0050da5 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -997,15 +997,18 @@ features to produce new behaviours. fname (str): Filename to check
Returns:
fname (str): Filename of faked file
tuple:
fname (str): Filename of faked file
bool: True if the blob was faked, False if not """ if self.allow_fake and not pathlib.Path(fname).is_file(): outfname = tools.get_output_filename(os.path.basename(fname)) with open(outfname, "wb") as out: out.truncate(1024) self.faked = True
return outfname
return fname
tout.info(f"Entry '{self._node.path}': Faked file '{outfname}'")
return outfname, True
return fname, False
Can't callers use self.faked for this?
I think I see an edge case when calling this multiple times for the same filename: only the first call would recognize it as a fake file, and only the first entry to call it would consider itself faked.
Yes we could and yes there is an edge case. This function returns True if it created the fake, False if it had happened before. I decided to leave it as is since checking self.faked after a function that sets it seems like a side effect. No strong opinions on it though.
def CheckFakedBlobs(self, faked_blobs_list): """Check if any entries in this section have faked external blobs
diff --git a/tools/binman/etype/blob.py b/tools/binman/etype/blob.py index 25ec5d26c9..89f089e740 100644 --- a/tools/binman/etype/blob.py +++ b/tools/binman/etype/blob.py @@ -41,10 +41,11 @@ class Entry_blob(Entry): self.external and self.section.GetAllowMissing()) # Allow the file to be missing if not self._pathname:
self._pathname = self.check_fake_fname(self._filename)
self.SetContents(b'')
self._pathname, faked = self.check_fake_fname(self._filename) self.missing = True
return True
if not faked:
self.SetContents(b'')
return True self.ReadBlobContents() return True
[...]
Regards, Simon
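As a standalone illustration of that edge case (a simplified variant, not the real check_fake_fname()), two callers asking about the same missing file get different answers; only the first call creates the fake and reports it as faked:

import os
import pathlib
import tempfile

def check_fake(outdir, fname, size=0):
    # Fake the file only if it does not exist yet
    if not pathlib.Path(fname).is_file():
        outfname = os.path.join(outdir, os.path.basename(fname))
        with open(outfname, 'wb') as out:
            out.truncate(size)
        return outfname, True
    return fname, False

with tempfile.TemporaryDirectory() as outdir:
    missing = os.path.join(outdir, 'blob.bin')
    print(check_fake(outdir, missing))   # (..., True): this call creates the fake
    print(check_fake(outdir, missing))   # (..., False): file now exists, not seen as faked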

On 06/03/2022 06:08, Simon Glass wrote:
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
+- Put faked files into a separate subdir and remove them on start-up, to avoid
- seeing them as 'real' files on a subsequent run
Do we need to create actual files, or is it a convenience thing for blob entry types (because they already read their contents from files)?
We need the files for mkimage.
But the mkimage etype creates its own files to pass to mkimage anyway, using entry.GetData() in collect_contents_to_file() without reaching into subentries' files.
More specifically, what I'm thinking of is removing check_fake_fname() entirely, instead of managing/removing the files it creates. At a cursory glance, only blob and blob-ext-list seem to use that directly.
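For reference, the collect-contents-to-file idea amounts to something like this sketch (get_data() is a hypothetical accessor, not the binman API): concatenate the subentries' data and write a single input file for the tool, so the subentries' own files are never needed:

import tempfile

def collect_contents_to_file(entries, prefix):
    data = b''
    for entry in entries:
        chunk = entry.get_data()     # hypothetical accessor for an entry's contents
        if chunk is None:            # contents not available yet; caller retries later
            return None, None
        data += chunk
    tmp = tempfile.NamedTemporaryFile(prefix=prefix + '.', delete=False)
    tmp.write(data)
    tmp.close()
    return data, tmp.name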

Hi Alper,
On Thu, 10 Mar 2022 at 12:36, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 06:08, Simon Glass wrote:
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
+- Put faked files into a separate subdir and remove them on start-up, to avoid
- seeing them as 'real' files on a subsequent run
Do we need to create actual files, or is it a convenience thing for blob entry types (because they already read their contents from files)?
We need the files for mkimage.
But the mkimage etype creates its own files to pass to mkimage anyway, using entry.GetData() in collect_contents_to_file() without reaching into subentries' files.
More specifically, what I'm thinking of is removing check_fake_fname() entirely, instead of managing/removing the files it creates. At a cursory glance, only blob and blob-ext-list seem to use that directly.
OK, but if the files don't get created, then mkimage will fail, right?
Regards, Simon

On 12/03/2022 08:02, Simon Glass wrote:
On Thu, 10 Mar 2022 at 12:36, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 06:08, Simon Glass wrote:
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
Do we need to create actual files, or is it a convenience thing for blob entry types (because they already read their contents from files)?
We need the files for mkimage.
But the mkimage etype creates its own files to pass to mkimage anyway, using entry.GetData() in collect_contents_to_file() without reaching into subentries' files.
More specifically, what I'm thinking of is removing check_fake_fname() entirely, instead of managing/removing the files it creates. At a cursory glance, only blob and blob-ext-list seem to use that directly.
OK, but if the files don't get created, then mkimage will fail, right?
I don't think it will, but maybe it's better that I try removing it to see if anything breaks.

Hi Alper,
On Mon, 14 Mar 2022 at 15:31, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 12/03/2022 08:02, Simon Glass wrote:
On Thu, 10 Mar 2022 at 12:36, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 06:08, Simon Glass wrote:
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
Do we need to create actual files, or is it a convenience thing for blob entry types (because they already read their contents from files)?
We need the files for mkimage.
But the mkimage etype creates its own files to pass to mkimage anyway, using entry.GetData() in collect_contents_to_file() without reaching into subentries' files.
More specifically, what I'm thinking of is removing check_fake_fname() entirely, instead of managing/removing the files it creates. At a cursory glance, only blob and blob-ext-list seem to use that directly.
OK, but if the files don't get created, then mkimage will fail, right?
I don't think it will, but maybe it's better that I try removing it to see if anything breaks.
Yes, the reason for adding faked blobs was needing to run mkimage which expects files (in the .its or elsewhere) to be present.
Regards, Simon

On x86 devices having even a small amount of data can cause an overlap between regions. For example, bayleybay complains when the intel-vga region overlaps with u-boot-ucode:
ImagePos Offset Size Name <none> 00000000 00800000 main-section <none> ff800000 00000080 intel-descriptor <none> ff800400 00000080 intel-me <none> fff00000 00098f24 u-boot-with-ucode-ptr <none> fff98f24 00001aa0 u-boot-dtb-with-ucode <none> fff9a9d0 0002a000 u-boot-ucode <none> fffb0000 00000080 intel-vga ...
It is safer to use an empty file in most cases. Add an option to set the size for those uses that need it.
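The overlap being reported can be checked in a couple of lines; using the u-boot-ucode and intel-vga rows from the table above (the helper is written here for illustration, it is not binman code):

def overlaps(off_a, size_a, off_b, size_b):
    # Regions [off, off + size) overlap when each starts before the other ends
    return off_a < off_b + size_b and off_b < off_a + size_a

print(overlaps(0xfff9a9d0, 0x2a000, 0xfffb0000, 0x80))   # True: the regions collide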
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Add a patch to make fake blobs zero-sized by default
tools/binman/entry.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 2fb0050da5..e4c0fbe23d 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -988,13 +988,14 @@ features to produce new behaviours. if self.missing: missing_list.append(self)
- def check_fake_fname(self, fname): + def check_fake_fname(self, fname, size=0): """If the file is missing and the entry allows fake blobs, fake it
Sets self.faked to True if faked
Args: fname (str): Filename to check + size (int): Size of fake file to create
Returns: tuple: @@ -1004,7 +1005,7 @@ features to produce new behaviours. if self.allow_fake and not pathlib.Path(fname).is_file(): outfname = tools.get_output_filename(os.path.basename(fname)) with open(outfname, "wb") as out: - out.truncate(1024) + out.truncate(size) self.faked = True tout.info(f"Entry '{self._node.path}': Faked file '{outfname}'") return outfname, True
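The mechanism is plain file truncation: truncating a freshly opened file to a given size gives a file of exactly that size (typically zero-filled when extended), and the new default of 0 gives an empty file. A quick demonstration, independent of binman:

import os
import tempfile

for size in (0, 1024):
    with tempfile.NamedTemporaryFile('wb', delete=False) as out:
        out.truncate(size)
        name = out.name
    print(size, os.path.getsize(name))   # 0 -> 0 bytes, 1024 -> 1024 bytes
    os.unlink(name)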

On 24/02/2022 02:00, Simon Glass wrote:
On x86 devices having even a small amount of data can cause an overlap between regions. For example, bayleybay complains when the intel-vga region overlaps with u-boot-ucode:
ImagePos Offset Size Name <none> 00000000 00800000 main-section <none> ff800000 00000080 intel-descriptor <none> ff800400 00000080 intel-me <none> fff00000 00098f24 u-boot-with-ucode-ptr <none> fff98f24 00001aa0 u-boot-dtb-with-ucode <none> fff9a9d0 0002a000 u-boot-ucode <none> fffb0000 00000080 intel-vga ...
It is safer to use an empty file in most cases. Add an option to set the size for those uses that need it.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add a patch to make fake blobs zero-sized by default
tools/binman/entry.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/tools/binman/entry.py b/tools/binman/entry.py
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Later on, I think adding 'min-size' and 'max-size' properties for entries could be useful for a few things including this.
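Purely as a sketch of that idea (these properties do not exist in binman today), applying hypothetical 'min-size'/'max-size' limits to an entry size might look like:

def apply_size_limits(entry_name, size, min_size=0, max_size=None):
    # Hypothetical: pad up to min-size, error out above max-size
    if size < min_size:
        size = min_size
    if max_size is not None and size > max_size:
        raise ValueError(
            f"Entry '{entry_name}': size {size:#x} exceeds max-size {max_size:#x}")
    return size

print(apply_size_limits('intel-vga', 0, min_size=0x80))   # 128: padded to the minimum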

Hi Alper,
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
On x86 devices having even a small amount of data can cause an overlap between regions. For example, bayleybay complains when the intel-vga region overlaps with u-boot-ucode:
ImagePos Offset Size Name <none> 00000000 00800000 main-section <none> ff800000 00000080 intel-descriptor <none> ff800400 00000080 intel-me <none> fff00000 00098f24 u-boot-with-ucode-ptr <none> fff98f24 00001aa0 u-boot-dtb-with-ucode <none> fff9a9d0 0002a000 u-boot-ucode <none> fffb0000 00000080 intel-vga ...
It is safer to use an empty file in most cases. Add an option to set the size for those uses that need it.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add a patch to make fake blobs zero-sized by default
tools/binman/entry.py | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/tools/binman/entry.py b/tools/binman/entry.py
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
Later on, I think adding 'min-size' and 'max-size' properties for entries could be useful for a few things including this.
Yes I think that might become necessary.
Regards, Simon

Unfortunately mkimage gets upset with zero-sized files. Update the ObtainContents() method to support specifying the size, if a fake blob is created.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Add a patch to allow mkimage to use a non-zero fake-blob size
tools/binman/entry.py | 11 ++++++++--- tools/binman/etype/_testing.py | 2 +- tools/binman/etype/blob.py | 5 +++-- tools/binman/etype/mkimage.py | 13 ++++++++++++- tools/binman/etype/section.py | 2 +- tools/binman/ftest.py | 10 ++++++++++ tools/binman/test/229_mkimage_missing.dts | 18 ++++++++++++++++++ 7 files changed, 53 insertions(+), 8 deletions(-) create mode 100644 tools/binman/test/229_mkimage_missing.dts
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index e4c0fbe23d..249f117ace 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -415,9 +415,13 @@ class Entry(object): self.SetContents(data) return size_ok
- def ObtainContents(self): + def ObtainContents(self, skip_entry=None, fake_size=0): """Figure out the contents of an entry.
+ Args: + skip_entry (Entry): Entry to skip when obtaining section contents + fake_size (int): Size of fake file to create if needed + Returns: True if the contents were found, False if another call is needed after the other entries are processed. @@ -1130,12 +1134,13 @@ features to produce new behaviours. """ self.update_hash = update_hash
- def collect_contents_to_file(self, entries, prefix): + def collect_contents_to_file(self, entries, prefix, fake_size=0): """Put the contents of a list of entries into a file
Args: entries (list of Entry): Entries to collect prefix (str): Filename prefix of file to write to + fake_size (int): Size of fake file to create if needed
If any entry does not have contents yet, this function returns False for the data. @@ -1150,7 +1155,7 @@ features to produce new behaviours. for entry in entries: # First get the input data and put it in a file. If not available, # try later. - if not entry.ObtainContents(): + if not entry.ObtainContents(fake_size=fake_size): return None, None, None data += entry.GetData() uniq = self.GetUniqueName() diff --git a/tools/binman/etype/_testing.py b/tools/binman/etype/_testing.py index 0800c25899..5089de3642 100644 --- a/tools/binman/etype/_testing.py +++ b/tools/binman/etype/_testing.py @@ -82,7 +82,7 @@ class Entry__testing(Entry): self.return_contents = True self.contents = b'aa'
- def ObtainContents(self): + def ObtainContents(self, fake_size=0): if self.return_unknown_contents or not self.return_contents: return False if self.return_contents_later: diff --git a/tools/binman/etype/blob.py b/tools/binman/etype/blob.py index 89f089e740..ceaefb07b7 100644 --- a/tools/binman/etype/blob.py +++ b/tools/binman/etype/blob.py @@ -35,13 +35,14 @@ class Entry_blob(Entry): super().__init__(section, etype, node) self._filename = fdt_util.GetString(self._node, 'filename', self.etype)
- def ObtainContents(self): + def ObtainContents(self, fake_size=0): self._filename = self.GetDefaultFilename() self._pathname = tools.get_input_filename(self._filename, self.external and self.section.GetAllowMissing()) # Allow the file to be missing if not self._pathname: - self._pathname, faked = self.check_fake_fname(self._filename) + self._pathname, faked = self.check_fake_fname(self._filename, + fake_size) self.missing = True if not faked: self.SetContents(b'') diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 9f4717e4ea..5f6def2287 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -51,8 +51,9 @@ class Entry_mkimage(Entry): self.ReadEntries()
def ObtainContents(self): + # Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file( - self._mkimage_entries.values(), 'mkimage') + self._mkimage_entries.values(), 'mkimage', 1024) if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq) @@ -73,6 +74,16 @@ class Entry_mkimage(Entry): entry.ReadNode() self._mkimage_entries[entry.name] = entry
+ def SetAllowMissing(self, allow_missing): + """Set whether a section allows missing external blobs + + Args: + allow_missing: True if allowed, False if not allowed + """ + self.allow_missing = allow_missing + for entry in self._mkimage_entries.values(): + entry.SetAllowMissing(allow_missing) + def SetAllowFakeBlob(self, allow_fake): """Set whether the sub nodes allows to create a fake blob
diff --git a/tools/binman/etype/section.py b/tools/binman/etype/section.py index 16ee5ed723..fab42c20f7 100644 --- a/tools/binman/etype/section.py +++ b/tools/binman/etype/section.py @@ -246,7 +246,7 @@ class Entry_section(Entry): for entry in self._entries.values(): entry.AddMissingProperties(have_image_pos)
- def ObtainContents(self, skip_entry=None): + def ObtainContents(self, fake_size=0, skip_entry=None): return self.GetEntryContents(skip_entry=skip_entry)
def GetPaddedDataForEntry(self, entry, entry_data): diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 4eac913d06..106027d245 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -5330,6 +5330,16 @@ fdt fdtmap Extract the devicetree blob from the fdtmap "Node '/binman/u-boot': Please use 'extend-size' instead of 'expand-size'", str(e.exception))
+ def testMkimageMissingBlob(self): + """Test using mkimage to build an image""" + with test_util.capture_sys_output() as (stdout, stderr): + self._DoTestFile('229_mkimage_missing.dts', allow_missing=True, + allow_fake_blobs=True) + err = stderr.getvalue() + self.assertRegex( + err, + "Image '.*' has faked external blobs and is non-functional: .*") +
if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/229_mkimage_missing.dts b/tools/binman/test/229_mkimage_missing.dts new file mode 100644 index 0000000000..54a5a6c571 --- /dev/null +++ b/tools/binman/test/229_mkimage_missing.dts @@ -0,0 +1,18 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + mkimage { + args = "-n test -T script"; + + blob-ext { + filename = "missing.bin"; + }; + }; + }; +};

On 24/02/2022 02:00, Simon Glass wrote:
Unfortunately mkimage gets upset with zero-sized files. Update the ObtainContents() method to support specifying the size, if a fake blob is created.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add a patch to allow mkimage to use a non-zero fake-blob size
tools/binman/entry.py | 11 ++++++++--- tools/binman/etype/_testing.py | 2 +- tools/binman/etype/blob.py | 5 +++-- tools/binman/etype/mkimage.py | 13 ++++++++++++- tools/binman/etype/section.py | 2 +- tools/binman/ftest.py | 10 ++++++++++ tools/binman/test/229_mkimage_missing.dts | 18 ++++++++++++++++++ 7 files changed, 53 insertions(+), 8 deletions(-) create mode 100644 tools/binman/test/229_mkimage_missing.dts
I don't exactly like this, but can't think of a nice alternative either.
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
[...]
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 9f4717e4ea..5f6def2287 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -51,8 +51,9 @@ class Entry_mkimage(Entry): self.ReadEntries()
def ObtainContents(self):
# Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage')
self._mkimage_entries.values(), 'mkimage', 1024)
I kind of want to say that mkimage-the-etype should be able to handle here whatever it gets from subentries (maybe by writing a single-byte file itself), and mkimage-the-executable should be able to handle zero-size files, but I'm not confident in those opinions.
if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq)
[...]

Hi Alper,
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
Unfortunately mkimage gets upset with zero-sized files. Update the ObtainContents() method to support specifying the size, if a fake blob is created.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Add a patch to allow mkimage to use a non-zero fake-blob size
tools/binman/entry.py | 11 ++++++++--- tools/binman/etype/_testing.py | 2 +- tools/binman/etype/blob.py | 5 +++-- tools/binman/etype/mkimage.py | 13 ++++++++++++- tools/binman/etype/section.py | 2 +- tools/binman/ftest.py | 10 ++++++++++ tools/binman/test/229_mkimage_missing.dts | 18 ++++++++++++++++++ 7 files changed, 53 insertions(+), 8 deletions(-) create mode 100644 tools/binman/test/229_mkimage_missing.dts
I don't exactly like this, but can't think of a nice alternative either.
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
[...]
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 9f4717e4ea..5f6def2287 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -51,8 +51,9 @@ class Entry_mkimage(Entry): self.ReadEntries()
def ObtainContents(self):
# Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage')
self._mkimage_entries.values(), 'mkimage', 1024)
I kind of want to say that mkimage-the-etype should be able to handle here whatever it gets from subentries (maybe by writing a single-byte file itself), and mkimage-the-executable should be able to handle zero-size files, but I'm not confident in those opinions.
Well the entry has no problem with missing files, so that should be OK.
For mkimage I agree it is a strange restriction. Perhaps we should just change it? I don't see what problem it could create.
if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq)
[...]
Regards, Simon

On 06/03/2022 06:08, Simon Glass wrote:
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
def ObtainContents(self):
# Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage')
self._mkimage_entries.values(), 'mkimage', 1024)
I kind of want to say that mkimage-the-etype should be able to handle here whatever it gets from subentries (maybe by writing a single-byte file itself), and mkimage-the-executable should be able to handle zero-size files, but I'm not confident in those opinions.
Well the entry has no problem with missing files, so that should be OK.
What I meant is when the non-faked input data ends up being zero-sized. A bit contrived, but this still triggers the error:
mkimage { args = ...;
blob-ext { filename = "/dev/null"; /* or any other zero-size file */ }; };
which might end up happening e.g. via tee-os with TEE=/dev/null; I remember someone doing that for one of the blobs in a mail, but I can't find it or recall any details.
For mkimage I agree it is a strange restriction. Perhaps we should just change it? I don't see what problem it could create.
I don't know either.
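Not something binman does today, but the kind of guard being discussed could be as simple as a check before invoking the tool (hypothetical helper, message wording illustrative):

def check_mkimage_input(data, desc):
    # Fail early with a clear message rather than letting mkimage reject the file
    if not data:
        raise ValueError(f"{desc}: collected input data is empty; "
                         'mkimage cannot handle zero-sized input files')
    return data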

Hi Alper,
On Thu, 10 Mar 2022 at 12:36, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 06:08, Simon Glass wrote:
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
def ObtainContents(self):
# Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage')
self._mkimage_entries.values(), 'mkimage', 1024)
I kind of want to say that mkimage-the-etype should be able to handle here whatever it gets from subentries (maybe by writing a single-byte file itself), and mkimage-the-executable should be able to handle zero-size files, but I'm not confident in those opinions.
Well the entry has no problem with missing files, so that should be OK.
What I meant is when the non-faked input data ends up being zero-sized. A bit contrived, but this still triggers the error:
mkimage { args = ...; blob-ext { filename = "/dev/null"; /* or any other zero-size file */ }; };
which might end up happening e.g. via tee-os with TEE=/dev/null; I remember someone doing that for one of the blobs in a mail, but I can't find it or recall any details.
Hmm, OK. So long as the error message is useful, we should be OK. I suspect the /dev/null thing was for when TEE was missing, but binman handles that differently, so we should not use /dev/null anywhere with binman.
For mkimage I agree it is a strange restriction. Perhaps we should just change it? I don't see what problem it could create.
I don't know either.
I suppose one way to find out is to change it!
Regards, Simon

At present the entries are read twice, once by the Entry_section class and once by the FIT implementation. This is harmless but can be confusing when debugging. Fix it.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/etype/fit.py | 1 - 1 file changed, 1 deletion(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 2b82955226..7b0c94dfee 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -185,7 +185,6 @@ class Entry_fit(Entry_section): self.mkimage = None
def ReadNode(self): - self.ReadEntries() super().ReadNode()
def _get_operation(self, subnode):

On 24/02/2022 02:00, Simon Glass wrote:
At present the entries are read twice, once by the Entry_section class and once by the FIT implementation. This is harmless but can be confusing when debugging. Fix it.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
tools/binman/etype/fit.py | 1 - 1 file changed, 1 deletion(-)
This is actually necessary so that we can use 'self' as the section in Entry.Create(self, ...) when creating /image/* sections, which requires some section-related attributes set by super().ReadNode(). It was one of the things I was planning to fix as well, thanks!
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 2b82955226..7b0c94dfee 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -185,7 +185,6 @@ class Entry_fit(Entry_section): self.mkimage = None
def ReadNode(self):
self.ReadEntries() super().ReadNode()
The entire function can be removed, but I think some stuff from __init__() actually belongs here so I'm fine with keeping it.
def _get_operation(self, subnode):

Hi Alper,
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
At present the entries are read twice, once by the Entry_section class and once by the FIT implementation. This is harmless but can be confusing when debugging. Fix it.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
tools/binman/etype/fit.py | 1 - 1 file changed, 1 deletion(-)
This is actually necessary so that we can use 'self' as the section in Entry.Create(self, ...) when creating /image/* sections, which requires some section-related attributes set by super().ReadNode(). It was one of the things I was planning to fix as well, thanks!
OK good.
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 2b82955226..7b0c94dfee 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -185,7 +185,6 @@ class Entry_fit(Entry_section): self.mkimage = None
def ReadNode(self):
self.ReadEntries() super().ReadNode()
The entire function can be removed, but I think some stuff from __init__() actually belongs here so I'm fine with keeping it.
Ah yes. I added a patch to move things as you say.
def _get_operation(self, subnode):
Regards, Simon

Some warnings have crept in, so fix those that are easy to fix.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/etype/fit.py | 50 +++++++++++++++++++-------------------- 1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 7b0c94dfee..70966028e8 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -2,10 +2,9 @@ # Copyright (c) 2016 Google, Inc # Written by Simon Glass sjg@chromium.org # -# Entry-type module for producing a FIT -#
-from collections import defaultdict, OrderedDict +"""Entry-type module for producing a FIT""" + import libfdt
from binman.entry import Entry, EntryArg @@ -246,16 +245,16 @@ class Entry_fit(Entry_section): then runs mkimage to process it.
Args: - required: True if the data must be present, False if it is OK to - return None + required (bool): True if the data must be present, False if it is OK + to return None
Returns: - Contents of the section (bytes) + bytes: Contents of the section """ - data = self._BuildInput() + data = self._build_input() uniq = self.GetUniqueName() - input_fname = tools.get_output_filename('%s.itb' % uniq) - output_fname = tools.get_output_filename('%s.fit' % uniq) + input_fname = tools.get_output_filename(f'{uniq}.itb') + output_fname = tools.get_output_filename(f'{uniq}.fit') tools.write_file(input_fname, data) tools.write_file(output_fname, data)
@@ -274,14 +273,14 @@ class Entry_fit(Entry_section):
return tools.read_file(output_fname)
- def _BuildInput(self): + def _build_input(self): """Finish the FIT by adding the 'data' properties to it
Arguments: fdt: FIT to update
Returns: - New fdt contents (bytes) + bytes: New fdt contents """ def _process_prop(pname, prop): """Process special properties @@ -303,9 +302,9 @@ class Entry_fit(Entry_section): if not self._fit_default_dt: self.Raise("Generated 'default' node requires default-dt entry argument") if self._fit_default_dt not in self._fdts: - self.Raise("default-dt entry argument '%s' not found in fdt list: %s" % - (self._fit_default_dt, - ', '.join(self._fdts))) + self.Raise( + f"default-dt entry argument '{self._fit_default_dt}' " + f"not found in fdt list: {', '.join(self._fdts)}") seq = self._fdts.index(self._fit_default_dt) val = val[1:].replace('DEFAULT-SEQ', str(seq + 1)) fsw.property_string(pname, val) @@ -352,8 +351,8 @@ class Entry_fit(Entry_section): else: if self._fdts is None: if self._fit_list_prop: - self.Raise("Generator node requires '%s' entry argument" % - self._fit_list_prop.value) + self.Raise('Generator node requires ' + f"'{self._fit_list_prop.value}' entry argument") else: self.Raise("Generator node requires 'fit,fdt-list' property")
@@ -367,10 +366,10 @@ class Entry_fit(Entry_section): first.
Args: - subnode (None): Generator node to process - depth: Current node depth (0 is the base 'fit' node) - in_images: True if this is inside the 'images' node, so that - 'data' properties should be generated + subnode (Node): Generator node to process + depth (int): Current node depth (0 is the base 'fit' node) + in_images (bool): True if this is inside the 'images' node, so + that 'data' properties should be generated """ oper = self._get_operation(subnode) if oper == OP_GEN_FDT_NODES: @@ -380,9 +379,10 @@ class Entry_fit(Entry_section): """Add nodes to the output FIT
Args: - base_node: Base Node of the FIT (with 'description' property) - depth: Current node depth (0 is the base 'fit' node) - node: Current node to process + base_node (Node): Base Node of the FIT (with 'description' + property) + depth (int): Current node depth (0 is the base 'fit' node) + node (Node): Current node to process
There are two cases to deal with: - hash and signature nodes which become part of the FIT @@ -439,7 +439,7 @@ class Entry_fit(Entry_section): according to where they ended up in the packed FIT file.
Args: - image_pos: Position of this entry in the image + image_pos (int): Position of this entry in the image """ super().SetImagePos(image_pos)
@@ -478,7 +478,7 @@ class Entry_fit(Entry_section):
# This should never happen else: # pragma: no cover - self.Raise("%s: missing data properties" % (path)) + self.Raise(f'{path}: missing data properties')
section.SetOffsetSize(offset, size) section.SetImagePos(self.image_pos)

On 24/02/2022 02:00, Simon Glass wrote:
Some warnings have crept in, so fix those that are easy to fix.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
tools/binman/etype/fit.py | 50 +++++++++++++++++++-------------------- 1 file changed, 25 insertions(+), 25 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

Add a new function to handle reporting errors within a particular subnode of the FIT description. This can be used to make the format of these errors consistent.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/etype/fit.py | 33 +++++++++++++++++++++++++-------- tools/binman/ftest.py | 2 +- 2 files changed, 26 insertions(+), 9 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 70966028e8..50a9179f9f 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -186,11 +186,11 @@ class Entry_fit(Entry_section): def ReadNode(self): super().ReadNode()
- def _get_operation(self, subnode): + def _get_operation(self, base_node, node): """Get the operation referenced by a subnode
Args: - subnode (Node): Subnode (of the FIT) to check + node (Node): Subnode (of the FIT) to check
Returns: int: Operation to perform @@ -198,12 +198,13 @@ class Entry_fit(Entry_section): Raises: ValueError: Invalid operation name """ - oper_name = subnode.props.get('fit,operation') + oper_name = node.props.get('fit,operation') if not oper_name: return OP_GEN_FDT_NODES oper = OPERATIONS.get(oper_name.value) - if not oper: - self.Raise(f"Unknown operation '{oper_name.value}'") + if oper is None: + self._raise(base_node, node, + f"Unknown operation '{oper_name.value}'") return oper
def ReadEntries(self): @@ -273,6 +274,20 @@ class Entry_fit(Entry_section):
return tools.read_file(output_fname)
+ def _raise(self, base_node, node, msg): + """Raise an error with a particular FIT subnode + + Args: + base_node (Node): Base Node of the FIT (with 'description' property) + node (Node): FIT subnode containing the error + msg (str): Message to report + + Raises: + ValueError, as requested + """ + rel_path = node.path[len(base_node.path) + 1:] + self.Raise(f"subnode '{rel_path}': {msg}") + def _build_input(self): """Finish the FIT by adding the 'data' properties to it
@@ -356,7 +371,7 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
- def _gen_node(subnode, depth, in_images): + def _gen_node(base_node, subnode, depth, in_images): """Generate nodes from a template
This creates one node for each member of self._fdts using the @@ -366,12 +381,14 @@ class Entry_fit(Entry_section): first.
Args: + base_node (Node): Base Node of the FIT (with 'description' + property) subnode (Node): Generator node to process depth (int): Current node depth (0 is the base 'fit' node) in_images (bool): True if this is inside the 'images' node, so that 'data' properties should be generated """ - oper = self._get_operation(subnode) + oper = self._get_operation(base_node, subnode) if oper == OP_GEN_FDT_NODES: _gen_fdt_nodes(subnode, depth, in_images)
@@ -412,7 +429,7 @@ class Entry_fit(Entry_section): elif self.GetImage().generate and subnode.name.startswith('@'): subnode_path = f'{rel_path}/{subnode.name}' entry = self._entries.get(subnode_path) - _gen_node(subnode, depth, in_images) + _gen_node(base_node, subnode, depth, in_images) if entry: del self._entries[subnode_path] else: diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 106027d245..16463518aa 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -5319,7 +5319,7 @@ fdt fdtmap Extract the devicetree blob from the fdtmap """Check handling of an FDT map when the section cannot be found""" with self.assertRaises(ValueError) as exc: self._DoReadFileDtb('224_fit_bad_oper.dts') - self.assertIn("Node '/binman/fit': Unknown operation 'unknown'", + self.assertIn("Node '/binman/fit': subnode 'images/@fdt-SEQ': Unknown operation 'unknown'", str(exc.exception))
def test_uses_expand_size(self):

On 24/02/2022 02:00, Simon Glass wrote:
Add a new function to handle reporting errors within a particular subnode of the FIT description. This can be used to make the format of these errors consistent.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
tools/binman/etype/fit.py | 33 +++++++++++++++++++++++++-------- tools/binman/ftest.py | 2 +- 2 files changed, 26 insertions(+), 9 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
I don't like entry.Raise(msg) in general; if it were me, I would define Exception subclasses and e.g. raise CustomError(self, subnode) for the various error conditions, but that's beside the point.
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 70966028e8..50a9179f9f 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py [...] @@ -273,6 +274,20 @@ class Entry_fit(Entry_section):
return tools.read_file(output_fname)
- def _raise(self, base_node, node, msg):
"""Raise an error with a paticular FIT subnode
Args:
base_node (Node): Base Node of the FIT (with 'description' property)
node (Node): FIT subnode containing the error
msg (str): Message to report
Raises:
ValueError, as requested
"""
rel_path = node.path[len(base_node.path) + 1:]
Can base_node be anything except self._node here? Why not use that?
self.Raise(f"subnode '{rel_path}': {msg}")
The names are a bit inconsistent, but I guess you intend to change Entry.Raise() to Entry._raise() later on...
- def _build_input(self): """Finish the FIT by adding the 'data' properties to it
[...]

Hi Alper,
On Thu, 3 Mar 2022 at 14:16, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
Add a new function to handle reporting errors within a particular subnode of the FIT description. This can be used to make the format of these errors consistent.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
tools/binman/etype/fit.py | 33 +++++++++++++++++++++++++-------- tools/binman/ftest.py | 2 +- 2 files changed, 26 insertions(+), 9 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
I don't like entry.Raise(msg) in general; if it were me, I would define Exception subclasses and e.g. raise CustomError(self, subnode) for the various error conditions, but that's beside the point.
I have so far stuck with ValueError as I find the need to import classes to raise an exception a pain. But I suppose that would be OK.
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 70966028e8..50a9179f9f 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py [...] @@ -273,6 +274,20 @@ class Entry_fit(Entry_section):
return tools.read_file(output_fname)
- def _raise(self, base_node, node, msg):
"""Raise an error with a paticular FIT subnode
Args:
base_node (Node): Base Node of the FIT (with 'description' property)
node (Node): FIT subnode containing the error
msg (str): Message to report
Raises:
ValueError, as requested
"""
rel_path = node.path[len(base_node.path) + 1:]
Can base_node be anything except self._node here? Why not use that?
OK I fixed that.
self.Raise(f"subnode '{rel_path}': {msg}")
The names are a bit inconsistent, but I guess you intend to change Entry.Raise() to Entry._raise() later on...
Well I was planning to change Raise() to raise(), so I'll rename this raise_subnode()
- def _build_input(self): """Finish the FIT by adding the 'data' properties to it
[...]
Regards, Simon
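For what it is worth, the Exception-subclass alternative mentioned above could look roughly like this (a hypothetical class, reusing the message format from the updated test):

class FitSubnodeError(ValueError):
    """Error reported against a particular subnode of a FIT description"""
    def __init__(self, entry_path, rel_path, msg):
        super().__init__(f"Node '{entry_path}': subnode '{rel_path}': {msg}")
        self.rel_path = rel_path

# e.g. raise FitSubnodeError('/binman/fit', 'images/@fdt-SEQ',
#                            "Unknown operation 'unknown'")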

It doesn't make sense to use 'subnode' as a function parameter since it is just a 'node' so far as the function is concerned. Update two functions to use 'node' instead.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/etype/fit.py | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 50a9179f9f..e1b056f56e 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -332,7 +332,7 @@ class Entry_fit(Entry_section): return fsw.property(pname, prop.bytes)
- def _gen_fdt_nodes(subnode, depth, in_images): + def _gen_fdt_nodes(node, depth, in_images): """Generate FDT nodes
This creates one node for each member of self._fdts using the @@ -342,7 +342,7 @@ class Entry_fit(Entry_section): first.
Args: - subnode (None): Generator node to process + node (None): Generator node to process depth: Current node depth (0 is the base 'fit' node) in_images: True if this is inside the 'images' node, so that 'data' properties should be generated @@ -350,10 +350,10 @@ class Entry_fit(Entry_section): if self._fdts: # Generate nodes for each FDT for seq, fdt_fname in enumerate(self._fdts): - node_name = subnode.name[1:].replace('SEQ', str(seq + 1)) + node_name = node.name[1:].replace('SEQ', str(seq + 1)) fname = tools.get_input_filename(fdt_fname + '.dtb') with fsw.add_node(node_name): - for pname, prop in subnode.props.items(): + for pname, prop in node.props.items(): val = prop.bytes.replace( b'NAME', tools.to_bytes(fdt_fname)) val = val.replace( @@ -371,7 +371,7 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
- def _gen_node(base_node, subnode, depth, in_images): + def _gen_node(base_node, node, depth, in_images): """Generate nodes from a template
This creates one node for each member of self._fdts using the @@ -383,14 +383,14 @@ class Entry_fit(Entry_section): Args: base_node (Node): Base Node of the FIT (with 'description' property) - subnode (Node): Generator node to process + node (Node): Generator node to process depth (int): Current node depth (0 is the base 'fit' node) in_images (bool): True if this is inside the 'images' node, so that 'data' properties should be generated """ - oper = self._get_operation(base_node, subnode) + oper = self._get_operation(base_node, node) if oper == OP_GEN_FDT_NODES: - _gen_fdt_nodes(subnode, depth, in_images) + _gen_fdt_nodes(node, depth, in_images)
def _add_node(base_node, depth, node): """Add nodes to the output FIT

On 24/02/2022 02:00, Simon Glass wrote:
It doesn't make sense to use 'subnode' as a function parameter since it is just a 'node' so far as the function is concerned. Update two functions to use 'node' instead.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
tools/binman/etype/fit.py | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

The current implementation sets up the FIT entries but then deletes the 'generator' ones so they don't appear in the final image.
This is a bit clumsy. We cannot build the image more than once, since the generator entries are lost during the first build. Binman requires that calling BuildSectionData() multiple times returns a valid result each time.
Keep a separate, private list which includes the generator nodes and use that where needed, to correct this problem. Ensure that the missing list includes removed generator entries too.
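In outline (condensed from the diff below, not a drop-in replacement), the idea is to snapshot the entry dict before the generator entries are dropped and to consult that snapshot wherever the full set is needed:

# After the entries have been created from the FIT description:
self._priv_entries = dict(self._entries)   # includes @atf-SEQ, @fdt-SEQ, etc.

# BuildSectionData() removes generator (template) entries from
# self._entries only, so PackEntries() etc. ignore them, while the
# private copy keeps them available for repeated builds.

def CheckMissing(self, missing_list):
    # Use the private list so removed generator entries are still checked
    for entry in self._priv_entries.values():
        entry.CheckMissing(missing_list)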
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/etype/fit.py | 33 ++++++++++++++++++++++++++++----- 1 file changed, 28 insertions(+), 5 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index e1b056f56e..30b20a07a2 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -163,10 +163,15 @@ class Entry_fit(Entry_section): key: relative path to entry Node (from the base of the FIT) value: Entry_section object comprising the contents of this node + _priv_entries: Internal copy of _entries which includes 'generator' + entries which are used to create the FIT, but should not be + processed as real entries. This is set up once we have the + entries """ super().__init__(section, etype, node) self._fit = None self._fit_props = {} + self._priv_entries = {}
for pname, prop in self._node.props.items(): if pname.startswith('fit,'): @@ -239,6 +244,10 @@ class Entry_fit(Entry_section):
_add_entries(self._node, 0, self._node)
+ # Keep a copy of all entries, including generator entries, since these + # are removed from self._entries later. + self._priv_entries = dict(self._entries) + def BuildSectionData(self, required): """Build FIT entry contents
@@ -415,11 +424,12 @@ class Entry_fit(Entry_section):
has_images = depth == 2 and in_images if has_images: - entry = self._entries[rel_path] + entry = self._priv_entries[rel_path] data = entry.GetData() fsw.property('data', bytes(data))
for subnode in node.subnodes: + subnode_path = f'{rel_path}/{subnode.name}' if has_images and not (subnode.name.startswith('hash') or subnode.name.startswith('signature')): # This subnode is a content node not meant to appear in @@ -427,11 +437,11 @@ class Entry_fit(Entry_section): # fsw.add_node() or _add_node() for it. pass elif self.GetImage().generate and subnode.name.startswith('@'): - subnode_path = f'{rel_path}/{subnode.name}' - entry = self._entries.get(subnode_path) _gen_node(base_node, subnode, depth, in_images) - if entry: - del self._entries[subnode_path] + # This is a generator (template) entry, so remove it from + # the list of entries used by PackEntries(), etc. Otherwise + # it will appear in the binman output + to_remove.append(subnode_path) else: with fsw.add_node(subnode.name): _add_node(base_node, depth + 1, subnode) @@ -440,10 +450,16 @@ class Entry_fit(Entry_section): # entry node fsw = libfdt.FdtSw() fsw.finish_reservemap() + to_remove = [] with fsw.add_node(''): _add_node(self._node, 0, self._node) fdt = fsw.as_fdt()
+ # Remove generator entries from the main list + for path in to_remove: + if path in self._entries: + del self._entries[path] + # Pack this new FDT and scan it so we can add the data later fdt.pack() data = fdt.as_bytearray() @@ -503,3 +519,10 @@ class Entry_fit(Entry_section): def AddBintools(self, btools): super().AddBintools(btools) self.mkimage = self.AddBintool(btools, 'mkimage') + + def CheckMissing(self, missing_list): + # We must use our private entry list for this since generator nodes + # which are removed from self._entries will otherwise not show up as + # missing + for entry in self._priv_entries.values(): + entry.CheckMissing(missing_list)

On 24/02/2022 02:00, Simon Glass wrote:
The current implementation sets up the FIT entries but then deletes the 'generator' ones so they don't appear in the final image.
They still show up in the fdtmap if I add one to rockchip-u-boot.dtsi:
$ binman ls -i u-boot-rockchip.bin
Name                  Image-pos    Size  Entry-type    Offset
-------------------------------------------------------------
main-section                  0  103520  section            0
  mkimage                     0   1a000  mkimage            0
  fit                     1a000   e8d74  fit            1a000
    u-boot                1a644   b1898  section          644
      u-boot-nodtb        1a644   b1898  u-boot-nodtb       0
    @atf-SEQ                  0       0  section            0
      atf-bl31                0       0  atf-bl31           0
    @tee-SEQ                  0       0  section            0
      tee-os                  0       0  tee-os             0
    @fdt-SEQ                  0       0  section            0
  fdtmap                 102d74     79c  fdtmap        102d74
But not simple-bin.map:
ImagePos    Offset      Size  Name
00000000  00000000  00103520  main-section
00000000  00000000  0001a000    mkimage
0001a000  0001a000  000e8d74    fit
0001a644  00000644  000b1898      u-boot
0001a644  00000000  000b1898        u-boot-nodtb
00102d74  00102d74  0000079c    fdtmap
This happens in v1 as well, but I hadn't checked it then.
(The "fit" size and "fdtmap" offset are off by 0x10 too, but I'm not sure why yet.)
This is a bit clumsy. We cannot build the image more than once, since the generator entries are lost during the first build. Binman requires that calling BuildSectionData() multiple times returns a valid result each time.
I think the generator entries should be turned into concrete entries and be removed in _gen_entries(), so BuildSectionData() should work on the entries that were generated. This way the individual entries would show up in fdtmap and could be binman-extracted/replaced as well.
For FDTs we can generate blob entries for each file, for ELFs maybe we can split them into files like the script used to do (bl31_0x<addr>.bin) and do the same?
Maybe some new entry types, "data" for arbitrary data unrelated to a file whose contents we can set, or "elf-segment" that can extract a segment from an ELF file (but I don't like the information loss there).
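As a purely hypothetical illustration of that second idea (binman has no such entry type; the class name, the 'segment-index' property and the load_addr attribute are invented here), an 'elf-segment' etype might look roughly like this, reusing helpers that do exist in the tree (Entry, elf.read_loadable_segments, fdt_util, tools):

from binman.entry import Entry
from binman import elf
from dtoc import fdt_util
from patman import tools

class Entry_elf_segment(Entry):
    """One loadable segment extracted from an ELF file (sketch only)"""
    def __init__(self, section, etype, node):
        super().__init__(section, etype, node)
        self._filename = fdt_util.GetString(self._node, 'filename')
        self._index = fdt_util.GetInt(self._node, 'segment-index', 0)

    def ObtainContents(self):
        # Read the ELF and keep just the requested segment's data
        fname = tools.get_input_filename(self._filename)
        segments, _entry_addr = elf.read_loadable_segments(
            tools.read_file(fname))
        _seq, self.load_addr, data = segments[self._index]
        self.SetContents(data)
        return True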
Keep a separate, private list which includes the generator nodes and use that where needed, to correct this problem. Ensure that the missing list includes removed generator entries too.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
I'm more and more convinced that generator nodes need a thorough redesign, but the patch is consistent with the status quo, so:
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
tools/binman/etype/fit.py | 33 ++++++++++++++++++++++++++++----- 1 file changed, 28 insertions(+), 5 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index e1b056f56e..30b20a07a2 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -163,10 +163,15 @@ class Entry_fit(Entry_section): key: relative path to entry Node (from the base of the FIT) value: Entry_section object comprising the contents of this node
_priv_entries: Internal copy of _entries which includes 'generator'
entries which are used to create the FIT, but should not be
processed as real entries. This is set up once we have the
entries
Maybe this could be "_templates" or "_entry_generators" and only keep track of the generator entries.
""" super().__init__(section, etype, node) self._fit = None self._fit_props = {}
self._priv_entries = {} for pname, prop in self._node.props.items(): if pname.startswith('fit,'):
[...]

Hi Alper,
On Thu, 3 Mar 2022 at 14:17, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
The current implementation sets up the FIT entries but then deletes the 'generator' ones so they don't appear in the final image.
They still show up in the fdtmap if I add one to rockchip-u-boot.dtsi:
$ binman ls -i u-boot-rockchip.bin
Name                  Image-pos    Size  Entry-type    Offset
main-section                  0  103520  section            0
  mkimage                     0   1a000  mkimage            0
  fit                     1a000   e8d74  fit            1a000
    u-boot                1a644   b1898  section          644
      u-boot-nodtb        1a644   b1898  u-boot-nodtb       0
    @atf-SEQ                  0       0  section            0
      atf-bl31                0       0  atf-bl31           0
    @tee-SEQ                  0       0  section            0
      tee-os                  0       0  tee-os             0
    @fdt-SEQ                  0       0  section            0
  fdtmap                 102d74     79c  fdtmap        102d74
But not simple-bin.map:
ImagePos    Offset      Size  Name
00000000  00000000  00103520  main-section
00000000  00000000  0001a000    mkimage
0001a000  0001a000  000e8d74    fit
0001a644  00000644  000b1898      u-boot
0001a644  00000000  000b1898        u-boot-nodtb
00102d74  00102d74  0000079c    fdtmap
This happens in v1 as well, but I hadn't checked it then.
(The "fit" size and "fdtmap" offset are off by 0x10 too, but I'm not sure why yet.)
Well let's fix this after the series. We need tests for the map and so on.
This is a bit clumsy. We cannot build the image more than once, since the generator entries are lost during the first build. Binman requires that calling BuildSectionData() multiple times returns a valid result each time.
I think the generator entries should be turned into concrete entries and be removed in _gen_entries(), so BuildSectionData() should work on the entries that were generated. This way the individual entries would show up in fdtmap and could be binman-extracted/replaced as well.
That makes sense, but I'm not sure how to implement it. The split-elf thing needs the contents of the entries, which are not available at the start. They are likely available before BuildSectionData() is called, but not necessarily. So the template needs to hang around at least as long as that.
I think there is something we could do here, but it isn't quite clear to me. Perhaps we need an expand-nodes-based-on-contents phase in control.py? Eek...
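Sketching that idea very loosely (nothing like this exists in control.py today; ExpandFromContents(), AddEntry() and RemoveEntry() are invented names used only to illustrate the shape of such a phase):

def expand_generated_nodes(image):
    """Turn template (generator) entries into concrete entries once their
    input data is available, then drop the templates (hypothetical)."""
    for path, entry in list(image.GetEntries().items()):
        if not entry.name.startswith('@'):
            continue
        entry.ObtainContents()                        # data must exist by now
        for new_entry in entry.ExpandFromContents():  # invented hook
            image.AddEntry(new_entry)                 # invented helper
        image.RemoveEntry(path)                       # invented helper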
For FDTs we can generate blob entries for each file, for ELFs maybe we can split them into files like the script used to do (bl31_0x<addr>.bin) and do the same?
I'm not a big fan of adding files. Binman should be able to hold everything in memory. It does generate files for later inspection though, so yes we could add this feature.
Maybe some new entry types, "data" for arbitrary data unrelated to a file whose contents we can set, or "elf-segment" that can extract a segment from an ELF file (but I don't like the information loss there).
Yes, I quite like the idea of new entry types. In fact I was hoping to turn split-elf into an entry type (one that generated multiple entries). But I decided to stop before doing that, since we really need to get the code in there, fix the bugs and move forward with some of the ideas you and others have.
Keep a separate, private list which includes the generator nodes and use that where needed, to correct this problem. Ensure that the missing list includes removed generator entries too.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
I'm more and more convinced that generator nodes need a thorough redesign, but the patch is consistent with the status quo, so:
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com
tools/binman/etype/fit.py | 33 ++++++++++++++++++++++++++++----- 1 file changed, 28 insertions(+), 5 deletions(-)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index e1b056f56e..30b20a07a2 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -163,10 +163,15 @@ class Entry_fit(Entry_section): key: relative path to entry Node (from the base of the FIT) value: Entry_section object comprising the contents of this node
_priv_entries: Internal copy of _entries which includes 'generator'
entries which are used to create the FIT, but should not be
processed as real entries. This is set up once we have the
entries
Maybe this could be "_templates" or "_entry_generators" and only keep track of the generator entries.
Again I think we can revisit that. I feel it is a bit messy right now, too.
""" super().__init__(section, etype, node) self._fit = None self._fit_props = {}
self._priv_entries = {} for pname, prop in self._node.props.items(): if pname.startswith('fit,'):
[...]
Regards, Simon

On 06/03/2022 06:08, Simon Glass wrote:
On Thu, 3 Mar 2022 at 14:17, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
This is a bit clumsy. We cannot build the image more than once, since the generator entries are lost during the first build. Binman requires that calling BuildSectionData() multiple times returns a valid result each time.
I think the generator entries should be turned into concrete entries and be removed in _gen_entries(), so BuildSectionData() should work on the entries that were generated. This way the individual entries would show up in fdtmap and could be binman-extracted/replaced as well.
That makes sense, but I'm not sure how to implement it. The split-elf thing needs the contents of the entries, which are not available at the start. They are likely available before BuildSectionData() is called, but not necessarily. So the template needs to hang around at least as long as that.
I think there is something we could do here, but it isn't quite clear to me. Perhaps we need an expand-nodes-based-on-contents phase in control.py? Eek...
I can't think of anything proper. I guess it could need architectural changes, but I need to read more of the code and get a better grasp of things.
For FDTs we can generate blob entries for each file, for ELFs maybe we can split them into files like the script used to do (bl31_0x<addr>.bin) and do the same?
I'm not a big fan of adding files. Binman should be able to hold everything in memory. It does generate files for later inspection though, so yes we could add this feature.
Maybe some new entry types, "data" for arbitrary data unrelated to a file whose contents we can set, or "elf-segment" that can extract a segment from an ELF file (but I don't like the information loss there).
Yes, I quite like the idea of new entry types. In fact I was hoping to turn split-elf into an entry type (one that generated multiple entries). But I decided to stop before doing that, since we really need to get the code in there, fix the bugs and move forward with some of the ideas you and others have.
After this and the other fixes and such, I hope I can set aside some time and experiment more on the ideas I've been throwing at you.
Sorry if I ended up delaying this too much already...

Some boards need to load an ELF file using the 'loadables' property, but the file has segments at different memory addresses. This means that it cannot be supplied as a flat binary.
Allow generating a separate node in the FIT for each segment in the ELF, with a different load address for each.
Also add checks that the fit,xxx directives are valid.
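For readers unfamiliar with what "splitting" involves: conceptually it just walks the ELF program headers and pulls out each PT_LOAD segment together with its load address. A standalone sketch, assuming a 64-bit little-endian ELF (binman's own elf helper is the authoritative implementation and may differ in detail):

import struct

PT_LOAD = 1

def read_loadable_segments_sketch(data):
    """Return ([(seq, load_addr, segment_data), ...], entry_addr)"""
    if data[:4] != b'\x7fELF':
        raise ValueError('Magic number does not match')
    entry_addr = struct.unpack_from('<Q', data, 0x18)[0]
    phoff = struct.unpack_from('<Q', data, 0x20)[0]
    phentsize, phnum = struct.unpack_from('<HH', data, 0x36)
    segments = []
    for i in range(phnum):
        base = phoff + i * phentsize
        p_type = struct.unpack_from('<I', data, base)[0]
        p_offset, _vaddr, p_paddr, p_filesz = struct.unpack_from(
            '<4Q', data, base + 8)
        if p_type == PT_LOAD and p_filesz:
            # One FIT node is generated per loadable segment
            segments.append((len(segments), p_paddr,
                             data[p_offset:p_offset + p_filesz]))
    return segments, entry_addr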
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Rewrite this to use the new FIT entry-type implementation - Rename op-tee to tee-os
tools/binman/entries.rst | 146 ++++++++++++ tools/binman/etype/fit.py | 230 ++++++++++++++++++- tools/binman/ftest.py | 147 ++++++++++++ tools/binman/test/226_fit_split_elf.dts | 67 ++++++ tools/binman/test/227_fit_bad_dir.dts | 9 + tools/binman/test/228_fit_bad_dir_config.dts | 9 + 6 files changed, 598 insertions(+), 10 deletions(-) create mode 100644 tools/binman/test/226_fit_split_elf.dts create mode 100644 tools/binman/test/227_fit_bad_dir.dts create mode 100644 tools/binman/test/228_fit_bad_dir_config.dts
diff --git a/tools/binman/entries.rst b/tools/binman/entries.rst index 484cde5c80..a915f64d46 100644 --- a/tools/binman/entries.rst +++ b/tools/binman/entries.rst @@ -612,6 +612,9 @@ gen-fdt-nodes Generate FDT nodes as above. This is the default if there is no `fit,operation` property.
+split-elf + Split an ELF file into a separate node for each segment. + Generating nodes from an FDT list (gen-fdt-nodes) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -655,6 +658,149 @@ for each of your two files. Note that if no devicetree files are provided (with '-a of-list' as above) then no nodes will be generated.
+Generating nodes from an ELF file (split-elf) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This uses the node as a template to generate multiple nodes. The following +special properties are available: + +split-elf + Split an ELF file into a separate node for each segment. This uses the + node as a template to generate multiple nodes. The following special + properties are available: + + fit,load + Generates a `load = <...>` property with the load address of the + segmnet + + fit,entry + Generates a `entry = <...>` property with the entry address of the + ELF. This is only produced for the first entry + + fit,data + Generates a `data = <...>` property with the contents of the segment + + fit,loadables + Generates a `loadable = <...>` property with a list of the generated + nodes (including all nodes if this operation is used multiple times) + + +Here is an example showing ATF, TEE and a device tree all combined:: + + fit { + description = "test-desc"; + #address-cells = <1>; + fit,fdt-list = "of-list"; + + images { + u-boot { + description = "U-Boot (64-bit)"; + type = "standalone"; + os = "U-Boot"; + arch = "arm64"; + compression = "none"; + load = <CONFIG_SYS_TEXT_BASE>; + u-boot-nodtb { + }; + }; + @fdt-SEQ { + description = "fdt-NAME.dtb"; + type = "flat_dt"; + compression = "none"; + }; + @atf-SEQ { + fit,operation = "split-elf"; + description = "ARM Trusted Firmware"; + type = "firmware"; + arch = "arm64"; + os = "arm-trusted-firmware"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + atf-bl31 { + }; + }; + + @tee-SEQ { + fit,operation = "split-elf"; + description = "TEE"; + type = "tee"; + arch = "arm64"; + os = "tee"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + tee-os { + }; + }; + }; + + configurations { + default = "@config-DEFAULT-SEQ"; + @config-SEQ { + description = "conf-NAME.dtb"; + fdt = "fdt-SEQ"; + firmware = "u-boot"; + fit,loadables; + }; + }; + }; + +If ATF-BL31 is available, this generates a node for each segment in the +ELF file, for example:: + + images { + atf-1 { + data = <...contents of first segment...>; + data-offset = <0x00000000>; + entry = <0x00040000>; + load = <0x00040000>; + compression = "none"; + os = "arm-trusted-firmware"; + arch = "arm64"; + type = "firmware"; + description = "ARM Trusted Firmware"; + }; + atf-2 { + data = <...contents of second segment...>; + load = <0xff3b0000>; + compression = "none"; + os = "arm-trusted-firmware"; + arch = "arm64"; + type = "firmware"; + description = "ARM Trusted Firmware"; + }; + }; + +The same applies for OP-TEE if that is available. + +If each binary is not available, the relevant template node (@atf-SEQ or +@tee-SEQ) is removed from the output. + +This also generates a `config-xxx` node for each device tree in `of-list`. +Note that the U-Boot build system uses `-a of-list=$(CONFIG_OF_LIST)` +so you can use `CONFIG_OF_LIST` to define that list. In this example it is +set up for `firefly-rk3399` with a single device tree and the default set +with `-a default-dt=$(CONFIG_DEFAULT_DEVICE_TREE)`, so the resulting output +is:: + + configurations { + default = "config-1"; + config-1 { + loadables = "atf-1", "atf-2", "atf-3", "tee-1", "tee-2"; + description = "rk3399-firefly.dtb"; + fdt = "fdt-1"; + firmware = "u-boot"; + }; + }; + +U-Boot SPL can then load the firmware (U-Boot proper) and all the loadables +(ATF and TEE), then proceed with the boot. +
Entry: fmap: An entry which contains an Fmap section diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 30b20a07a2..63d552ed19 100644 --- a/tools/binman/etype/fit.py +++ b/tools/binman/etype/fit.py @@ -9,14 +9,16 @@ import libfdt
from binman.entry import Entry, EntryArg from binman.etype.section import Entry_section +from binman import elf from dtoc import fdt_util from dtoc.fdt import Fdt from patman import tools
# Supported operations, with the fit,operation property -OP_GEN_FDT_NODES = range(1) +OP_GEN_FDT_NODES, OP_SPLIT_ELF = range(2) OPERATIONS = { 'gen-fdt-nodes': OP_GEN_FDT_NODES, + 'split-elf': OP_SPLIT_ELF, }
class Entry_fit(Entry_section): @@ -112,6 +114,9 @@ class Entry_fit(Entry_section): Generate FDT nodes as above. This is the default if there is no `fit,operation` property.
+ split-elf + Split an ELF file into a separate node for each segment. + Generating nodes from an FDT list (gen-fdt-nodes) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -154,6 +159,149 @@ class Entry_fit(Entry_section):
Note that if no devicetree files are provided (with '-a of-list' as above) then no nodes will be generated. + + Generating nodes from an ELF file (split-elf) + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + This uses the node as a template to generate multiple nodes. The following + special properties are available: + + split-elf + Split an ELF file into a separate node for each segment. This uses the + node as a template to generate multiple nodes. The following special + properties are available: + + fit,load + Generates a `load = <...>` property with the load address of the + segmnet + + fit,entry + Generates a `entry = <...>` property with the entry address of the + ELF. This is only produced for the first entry + + fit,data + Generates a `data = <...>` property with the contents of the segment + + fit,loadables + Generates a `loadable = <...>` property with a list of the generated + nodes (including all nodes if this operation is used multiple times) + + + Here is an example showing ATF, TEE and a device tree all combined:: + + fit { + description = "test-desc"; + #address-cells = <1>; + fit,fdt-list = "of-list"; + + images { + u-boot { + description = "U-Boot (64-bit)"; + type = "standalone"; + os = "U-Boot"; + arch = "arm64"; + compression = "none"; + load = <CONFIG_SYS_TEXT_BASE>; + u-boot-nodtb { + }; + }; + @fdt-SEQ { + description = "fdt-NAME.dtb"; + type = "flat_dt"; + compression = "none"; + }; + @atf-SEQ { + fit,operation = "split-elf"; + description = "ARM Trusted Firmware"; + type = "firmware"; + arch = "arm64"; + os = "arm-trusted-firmware"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + atf-bl31 { + }; + }; + + @tee-SEQ { + fit,operation = "split-elf"; + description = "TEE"; + type = "tee"; + arch = "arm64"; + os = "tee"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + tee-os { + }; + }; + }; + + configurations { + default = "@config-DEFAULT-SEQ"; + @config-SEQ { + description = "conf-NAME.dtb"; + fdt = "fdt-SEQ"; + firmware = "u-boot"; + fit,loadables; + }; + }; + }; + + If ATF-BL31 is available, this generates a node for each segment in the + ELF file, for example:: + + images { + atf-1 { + data = <...contents of first segment...>; + data-offset = <0x00000000>; + entry = <0x00040000>; + load = <0x00040000>; + compression = "none"; + os = "arm-trusted-firmware"; + arch = "arm64"; + type = "firmware"; + description = "ARM Trusted Firmware"; + }; + atf-2 { + data = <...contents of second segment...>; + load = <0xff3b0000>; + compression = "none"; + os = "arm-trusted-firmware"; + arch = "arm64"; + type = "firmware"; + description = "ARM Trusted Firmware"; + }; + }; + + The same applies for OP-TEE if that is available. + + If each binary is not available, the relevant template node (@atf-SEQ or + @tee-SEQ) is removed from the output. + + This also generates a `config-xxx` node for each device tree in `of-list`. + Note that the U-Boot build system uses `-a of-list=$(CONFIG_OF_LIST)` + so you can use `CONFIG_OF_LIST` to define that list. 
In this example it is + set up for `firefly-rk3399` with a single device tree and the default set + with `-a default-dt=$(CONFIG_DEFAULT_DEVICE_TREE)`, so the resulting output + is:: + + configurations { + default = "config-1"; + config-1 { + loadables = "atf-1", "atf-2", "atf-3", "tee-1", "tee-2"; + description = "rk3399-firefly.dtb"; + fdt = "fdt-1"; + firmware = "u-boot"; + }; + }; + + U-Boot SPL can then load the firmware (U-Boot proper) and all the loadables + (ATF and TEE), then proceed with the boot. """ def __init__(self, section, etype, node): """ @@ -167,6 +315,7 @@ class Entry_fit(Entry_section): entries which are used to create the FIT, but should not be processed as real entries. This is set up once we have the entries + _loadables: List of generated split-elf nodes, each a node name """ super().__init__(section, etype, node) self._fit = None @@ -188,6 +337,8 @@ class Entry_fit(Entry_section): str)])[0] self.mkimage = None
+ self._loadables = [] + def ReadNode(self): super().ReadNode()
@@ -341,7 +492,7 @@ class Entry_fit(Entry_section): return fsw.property(pname, prop.bytes)
- def _gen_fdt_nodes(node, depth, in_images): + def _gen_fdt_nodes(base_node, node, depth, in_images): """Generate FDT nodes
This creates one node for each member of self._fdts using the @@ -363,11 +514,20 @@ class Entry_fit(Entry_section): fname = tools.get_input_filename(fdt_fname + '.dtb') with fsw.add_node(node_name): for pname, prop in node.props.items(): - val = prop.bytes.replace( - b'NAME', tools.to_bytes(fdt_fname)) - val = val.replace( - b'SEQ', tools.to_bytes(str(seq + 1))) - fsw.property(pname, val) + if pname == 'fit,loadables': + val = '\0'.join(self._loadables) + '\0' + fsw.property('loadables', val.encode('utf-8')) + elif pname == 'fit,operation': + pass + elif pname.startswith('fit,'): + self._raise(base_node, node, + f"Unknown directive '{pname}'") + else: + val = prop.bytes.replace( + b'NAME', tools.to_bytes(fdt_fname)) + val = val.replace( + b'SEQ', tools.to_bytes(str(seq + 1))) + fsw.property(pname, val)
# Add data for 'images' nodes (but not 'config') if depth == 1 and in_images: @@ -380,7 +540,43 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
- def _gen_node(base_node, node, depth, in_images): + def _gen_split_elf(base_node, node, elf_data, missing): + """Add nodes for the ELF file, one per group of contiguous segments + + Args: + node (Node): Template node from the binman definition + node (Node): Node to replace (in the FIT being built) + data (bytes): ELF-format data to process (may be empty) + missing (bool): True if any of the data is missing + + """ + # If any pieces are missing, skip this. The missing entries will + # show an error + if not missing: + try: + segments, entry = elf.read_loadable_segments(elf_data) + except ValueError as exc: + self._raise(base_node, node, + f'Failed to read ELF file: {str(exc)}') + for (seq, start, data) in segments: + node_name = node.name[1:].replace('SEQ', str(seq + 1)) + with fsw.add_node(node_name): + loadables.append(node_name) + for pname, prop in node.props.items(): + if not pname.startswith('fit,'): + fsw.property(pname, prop.bytes) + elif pname == 'fit,load': + fsw.property_u32('load', start) + elif pname == 'fit,entry': + if not seq: + fsw.property_u32('entry', entry) + elif pname == 'fit,data': + fsw.property('data', bytes(data)) + elif pname != 'fit,operation': + self._raise(base_node, node, + f"Unknown directive '{pname}'") + + def _gen_node(base_node, node, depth, in_images, entry): """Generate nodes from a template
This creates one node for each member of self._fdts using the @@ -399,7 +595,18 @@ class Entry_fit(Entry_section): """ oper = self._get_operation(base_node, node) if oper == OP_GEN_FDT_NODES: - _gen_fdt_nodes(node, depth, in_images) + _gen_fdt_nodes(base_node, node, depth, in_images) + elif oper == OP_SPLIT_ELF: + # Entry_section.ObtainContents() either returns True or + # raises an exception. + data = None + missing_list = [] + entry.ObtainContents() + entry.Pack(0) + data = entry.GetData() + entry.CheckMissing(missing_list) + + _gen_split_elf(base_node, node, data, bool(missing_list))
def _add_node(base_node, depth, node): """Add nodes to the output FIT @@ -437,7 +644,8 @@ class Entry_fit(Entry_section): # fsw.add_node() or _add_node() for it. pass elif self.GetImage().generate and subnode.name.startswith('@'): - _gen_node(base_node, subnode, depth, in_images) + entry = self._priv_entries.get(subnode_path) + _gen_node(base_node, subnode, depth, in_images, entry) # This is a generator (template) entry, so remove it from # the list of entries used by PackEntries(), etc. Otherwise # it will appear in the binman output @@ -451,8 +659,10 @@ class Entry_fit(Entry_section): fsw = libfdt.FdtSw() fsw.finish_reservemap() to_remove = [] + loadables = [] with fsw.add_node(''): _add_node(self._node, 0, self._node) + self._loadables = loadables fdt = fsw.as_fdt()
# Remove generator entries from the main list diff --git a/tools/binman/ftest.py b/tools/binman/ftest.py index 16463518aa..ec8e3d4d91 100644 --- a/tools/binman/ftest.py +++ b/tools/binman/ftest.py @@ -202,6 +202,13 @@ class TestFunctional(unittest.TestCase):
TestFunctional._MakeInputFile('env.txt', ENV_DATA)
+ # ELF file with two sections in different parts of memory, used for both + # ATF and OP_TEE + TestFunctional._MakeInputFile('bl31.elf', + tools.read_file(cls.ElfTestFile('elf_sections'))) + TestFunctional._MakeInputFile('tee.elf', + tools.read_file(cls.ElfTestFile('elf_sections'))) + cls.have_lz4 = comp_util.HAVE_LZ4
@classmethod @@ -5340,6 +5347,146 @@ fdt fdtmap Extract the devicetree blob from the fdtmap err, "Image '.*' has faked external blobs and is non-functional: .*")
+ def testFitSplitElf(self): + """Test an image with an FIT with an split-elf operation""" + entry_args = { + 'of-list': 'test-fdt1 test-fdt2', + 'default-dt': 'test-fdt2', + 'atf-bl31-path': 'bl31.elf', + 'tee-os-path': 'tee.elf', + } + test_subdir = os.path.join(self._indir, TEST_FDT_SUBDIR) + data = self._DoReadFileDtb( + '226_fit_split_elf.dts', + entry_args=entry_args, + extra_indirs=[test_subdir])[0] + + self.assertEqual(U_BOOT_NODTB_DATA, data[-len(U_BOOT_NODTB_DATA):]) + fit_data = data[len(U_BOOT_DATA):-len(U_BOOT_NODTB_DATA)] + + base_keys = {'description', 'type', 'arch', 'os', 'compression', + 'data', 'load'} + dtb = fdt.Fdt.FromData(fit_data) + dtb.Scan() + + elf_data = tools.read_file(os.path.join(self._indir, 'bl31.elf')) + segments, entry = elf.read_loadable_segments(elf_data) + + # We assume there are two segments + self.assertEquals(2, len(segments)) + + atf1 = dtb.GetNode('/images/atf-1') + _, start, data = segments[0] + self.assertEqual(base_keys | {'entry'}, atf1.props.keys()) + self.assertEqual(entry, + fdt_util.fdt32_to_cpu(atf1.props['entry'].value)) + self.assertEqual(start, + fdt_util.fdt32_to_cpu(atf1.props['load'].value)) + self.assertEqual(data, atf1.props['data'].bytes) + + atf2 = dtb.GetNode('/images/atf-2') + self.assertEqual(base_keys, atf2.props.keys()) + _, start, data = segments[1] + self.assertEqual(start, + fdt_util.fdt32_to_cpu(atf2.props['load'].value)) + self.assertEqual(data, atf2.props['data'].bytes) + + conf = dtb.GetNode('/configurations') + self.assertEqual({'default'}, conf.props.keys()) + + for subnode in conf.subnodes: + self.assertEqual({'description', 'fdt', 'loadables'}, + subnode.props.keys()) + self.assertEqual( + ['atf-1', 'atf-2', 'tee-1', 'tee-2'], + fdt_util.GetStringList(subnode, 'loadables')) + + def _check_bad_fit(self, dts): + """Check a bad FIT + + This runs with the given dts and returns the assertion raised + + Args: + dts (str): dts filename to use + + Returns: + str: Assertion string raised + """ + entry_args = { + 'of-list': 'test-fdt1 test-fdt2', + 'default-dt': 'test-fdt2', + 'atf-bl31-path': 'bl31.elf', + 'tee-os-path': 'tee.elf', + } + test_subdir = os.path.join(self._indir, TEST_FDT_SUBDIR) + with self.assertRaises(ValueError) as exc: + self._DoReadFileDtb(dts, entry_args=entry_args, + extra_indirs=[test_subdir])[0] + return str(exc.exception) + + def testFitSplitElfBadElf(self): + """Test a FIT split-elf operation with an invalid ELF file""" + TestFunctional._MakeInputFile('bad.elf', tools.get_bytes(100, 100)) + entry_args = { + 'of-list': 'test-fdt1 test-fdt2', + 'default-dt': 'test-fdt2', + 'atf-bl31-path': 'bad.elf', + 'tee-os-path': 'tee.elf', + } + test_subdir = os.path.join(self._indir, TEST_FDT_SUBDIR) + with self.assertRaises(ValueError) as exc: + self._DoReadFileDtb( + '226_fit_split_elf.dts', + entry_args=entry_args, + extra_indirs=[test_subdir])[0] + self.assertIn( + "Node '/binman/fit': subnode 'images/@atf-SEQ': Failed to read ELF file: Magic number does not match", + str(exc.exception)) + + def testFitSplitElfBadDirective(self): + """Test a FIT split-elf invalid fit,xxx directive in an image node""" + err = self._check_bad_fit('227_fit_bad_dir.dts') + self.assertIn( + "Node '/binman/fit': subnode 'images/@atf-SEQ': Unknown directive 'fit,something'", + err) + + def testFitSplitElfBadDirectiveConfig(self): + """Test a FIT split-elf with invalid fit,xxx directive in config""" + err = self._check_bad_fit('228_fit_bad_dir_config.dts') + self.assertEqual( + "Node '/binman/fit': subnode 
'configurations/@config-SEQ': Unknown directive 'fit,config'", + err) + + def checkFitSplitElf(self, **kwargs): + """Test an split-elf FIT with a missing ELF file""" + entry_args = { + 'of-list': 'test-fdt1 test-fdt2', + 'default-dt': 'test-fdt2', + 'atf-bl31-path': 'bl31.elf', + 'tee-os-path': 'missing.elf', + } + test_subdir = os.path.join(self._indir, TEST_FDT_SUBDIR) + with test_util.capture_sys_output() as (stdout, stderr): + self._DoTestFile( + '226_fit_split_elf.dts', entry_args=entry_args, + extra_indirs=[test_subdir], **kwargs) + err = stderr.getvalue() + return err + + def testFitSplitElfMissing(self): + """Test an split-elf FIT with a missing ELF file""" + err = self.checkFitSplitElf(allow_missing=True) + self.assertRegex( + err, + "Image '.*' is missing external blobs and is non-functional: .*") + + def testFitSplitElfFaked(self): + """Test an split-elf FIT with faked ELF file""" + err = self.checkFitSplitElf(allow_missing=True, allow_fake_blobs=True) + self.assertRegex( + err, + "Image '.*' is missing external blobs and is non-functional: .*") +
if __name__ == "__main__": unittest.main() diff --git a/tools/binman/test/226_fit_split_elf.dts b/tools/binman/test/226_fit_split_elf.dts new file mode 100644 index 0000000000..fab15338b2 --- /dev/null +++ b/tools/binman/test/226_fit_split_elf.dts @@ -0,0 +1,67 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +/ { + #address-cells = <1>; + #size-cells = <1>; + + binman { + u-boot { + }; + fit { + description = "test-desc"; + #address-cells = <1>; + fit,fdt-list = "of-list"; + + images { + @fdt-SEQ { + description = "fdt-NAME.dtb"; + type = "flat_dt"; + compression = "none"; + }; + atf: @atf-SEQ { + fit,operation = "split-elf"; + description = "ARM Trusted Firmware"; + type = "firmware"; + arch = "arm64"; + os = "arm-trusted-firmware"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + atf-bl31 { + }; + }; + + @tee-SEQ { + fit,operation = "split-elf"; + description = "TEE"; + type = "tee"; + arch = "arm64"; + os = "tee"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + tee-os { + }; + }; + }; + + configurations { + default = "@config-DEFAULT-SEQ"; + config: @config-SEQ { + description = "conf-NAME.dtb"; + fdt = "fdt-SEQ"; + fit,loadables; + }; + }; + }; + + u-boot-nodtb { + }; + }; +}; diff --git a/tools/binman/test/227_fit_bad_dir.dts b/tools/binman/test/227_fit_bad_dir.dts new file mode 100644 index 0000000000..51f4816c4c --- /dev/null +++ b/tools/binman/test/227_fit_bad_dir.dts @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +#include "226_fit_split_elf.dts" + +&atf { + fit,something = "bad"; +}; diff --git a/tools/binman/test/228_fit_bad_dir_config.dts b/tools/binman/test/228_fit_bad_dir_config.dts new file mode 100644 index 0000000000..825a346c3e --- /dev/null +++ b/tools/binman/test/228_fit_bad_dir_config.dts @@ -0,0 +1,9 @@ +// SPDX-License-Identifier: GPL-2.0+ + +/dts-v1/; + +#include "226_fit_split_elf.dts" + +&config { + fit,config = "bad"; +};

On 24/02/2022 02:00, Simon Glass wrote:
Some boards need to load an ELF file using the 'loadables' property, but the file has segments at different memory addresses. This means that it cannot be supplied as a flat binary.
Allow generating a separate node in the FIT for each segment in the ELF, with a different load address for each.
Also add checks that the fit,xxx directives are valid.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Rewrite this to use the new FIT entry-type implementation
- Rename op-tee to tee-os
tools/binman/entries.rst | 146 ++++++++++++ tools/binman/etype/fit.py | 230 ++++++++++++++++++- tools/binman/ftest.py | 147 ++++++++++++ tools/binman/test/226_fit_split_elf.dts | 67 ++++++ tools/binman/test/227_fit_bad_dir.dts | 9 + tools/binman/test/228_fit_bad_dir_config.dts | 9 + 6 files changed, 598 insertions(+), 10 deletions(-) create mode 100644 tools/binman/test/226_fit_split_elf.dts create mode 100644 tools/binman/test/227_fit_bad_dir.dts create mode 100644 tools/binman/test/228_fit_bad_dir_config.dts
[...]
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 30b20a07a2..63d552ed19 100644> > [...]
@@ -154,6 +159,149 @@ class Entry_fit(Entry_section):
Note that if no devicetree files are provided (with '-a of-list' as above) then no nodes will be generated.
- Generating nodes from an ELF file (split-elf)
- This uses the node as a template to generate multiple nodes. The following
- special properties are available:
- split-elf
Split an ELF file into a separate node for each segment. This uses the
node as a template to generate multiple nodes. The following special
properties are available:
fit,load
Generates a `load = <...>` property with the load address of the
segmnet
segmnet -> segment, missed it the first time around.
fit,entry
Generates a `entry = <...>` property with the entry address of the
ELF. This is only produced for the first entry
fit,data
Generates a `data = <...>` property with the contents of the segment
fit,loadables
Generates a `loadable = <...>` property with a list of the generated
nodes (including all nodes if this operation is used multiple times)
[...]
@@ -363,11 +514,20 @@ class Entry_fit(Entry_section): fname = tools.get_input_filename(fdt_fname + '.dtb') with fsw.add_node(node_name): for pname, prop in node.props.items():
val = prop.bytes.replace(
b'NAME', tools.to_bytes(fdt_fname))
val = val.replace(
b'SEQ', tools.to_bytes(str(seq + 1)))
fsw.property(pname, val)
if pname == 'fit,loadables':
val = '\0'.join(self._loadables) + '\0'
fsw.property('loadables', val.encode('utf-8'))
elif pname == 'fit,operation':
pass
elif pname.startswith('fit,'):
self._raise(base_node, node,
f"Unknown directive '{pname}'")
else:
val = prop.bytes.replace(
b'NAME', tools.to_bytes(fdt_fname))
val = val.replace(
b'SEQ', tools.to_bytes(str(seq + 1)))
fsw.property(pname, val)
I get two distinct "loadables" properties with this config fragment, and fdtget and mkimage -l only consider the first one:
@config-SEQ { description = "NAME.dtb"; fdt = "fdt-SEQ"; firmware = "atf-1"; loadables = "u-boot"; fit,loadables; };
$ fdtget simple-bin.fit.fit /configurations/config-1 -p
description
fdt
firmware
loadables
loadables

$ fdtget simple-bin.fit.fit /configurations/config-1 loadables
u-boot
$ mkimage -l simple-bin.fit.fit
FIT description: FIT image for U-Boot with bl31 (TF-A)
Created:         Thu Feb 24 17:36:36 2022
 Image 0 (u-boot)
  Description:  U-Boot (64-bit)
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Standalone Program
  Compression:  uncompressed
  Data Size:    727192 Bytes = 710.15 KiB = 0.69 MiB
  Architecture: AArch64
  Load Address: 0x00200000
  Entry Point:  unavailable
 Image 1 (atf-1)
  Description:  ARM Trusted Firmware
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Firmware
  Compression:  uncompressed
  Data Size:    139342 Bytes = 136.08 KiB = 0.13 MiB
  Architecture: AArch64
  OS:           ARM Trusted Firmware
  Load Address: 0x00040000
 Image 2 (atf-2)
  Description:  ARM Trusted Firmware
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Firmware
  Compression:  uncompressed
  Data Size:    8024 Bytes = 7.84 KiB = 0.01 MiB
  Architecture: AArch64
  OS:           ARM Trusted Firmware
  Load Address: 0xff3b0000
 Image 3 (atf-3)
  Description:  ARM Trusted Firmware
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Firmware
  Compression:  uncompressed
  Data Size:    8192 Bytes = 8.00 KiB = 0.01 MiB
  Architecture: AArch64
  OS:           ARM Trusted Firmware
  Load Address: 0xff8c0000
 Image 4 (fdt-1)
  Description:  fdt-rk3399-gru-bob
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Flat Device Tree
  Compression:  uncompressed
  Data Size:    69392 Bytes = 67.77 KiB = 0.07 MiB
  Architecture: Unknown Architecture
 Default Configuration: 'config-1'
 Configuration 0 (config-1)
  Description:  rk3399-gru-bob.dtb
  Kernel:       unavailable
  Firmware:     atf-1
  FDT:          fdt-1
  Loadables:    u-boot
I didn't test if my chromebook_kevin boots in this double-loadables configuration, though.
# Add data for 'images' nodes (but not 'config') if depth == 1 and in_images:
@@ -380,7 +540,43 @@ class Entry_fit(Entry_section): else: self.Raise("Generator node requires 'fit,fdt-list' property")
def _gen_node(base_node, node, depth, in_images):
def _gen_split_elf(base_node, node, elf_data, missing):
"""Add nodes for the ELF file, one per group of contiguous segments
Args:
node (Node): Template node from the binman definition
node -> base_node
node (Node): Node to replace (in the FIT being built)
data (bytes): ELF-format data to process (may be empty)
missing (bool): True if any of the data is missing
"""
# If any pieces are missing, skip this. The missing entries will
# show an error
if not missing:
try:
segments, entry = elf.read_loadable_segments(elf_data)
except ValueError as exc:
self._raise(base_node, node,
f'Failed to read ELF file: {str(exc)}')
for (seq, start, data) in segments:
node_name = node.name[1:].replace('SEQ', str(seq + 1))
with fsw.add_node(node_name):
loadables.append(node_name)
for pname, prop in node.props.items():
if not pname.startswith('fit,'):
fsw.property(pname, prop.bytes)
elif pname == 'fit,load':
fsw.property_u32('load', start)
elif pname == 'fit,entry':
if not seq:
I'd prefer seq == 0 as it's a number.
fsw.property_u32('entry', entry)
elif pname == 'fit,data':
fsw.property('data', bytes(data))
elif pname != 'fit,operation':
self._raise(base_node, node,
f"Unknown directive '{pname}'")
def _gen_node(base_node, node, depth, in_images, entry): """Generate nodes from a template This creates one node for each member of self._fdts using the
[...]

Hi Alper,
On Thu, 3 Mar 2022 at 14:17, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 24/02/2022 02:00, Simon Glass wrote:
Some boards need to load an ELF file using the 'loadables' property, but the file has segments at different memory addresses. This means that it cannot be supplied as a flat binary.
Allow generating a separate node in the FIT for each segment in the ELF, with a different load address for each.
Also add checks that the fit,xxx directives are valid.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Rewrite this to use the new FIT entry-type implementation
- Rename op-tee to tee-os
tools/binman/entries.rst | 146 ++++++++++++ tools/binman/etype/fit.py | 230 ++++++++++++++++++- tools/binman/ftest.py | 147 ++++++++++++ tools/binman/test/226_fit_split_elf.dts | 67 ++++++ tools/binman/test/227_fit_bad_dir.dts | 9 + tools/binman/test/228_fit_bad_dir_config.dts | 9 + 6 files changed, 598 insertions(+), 10 deletions(-) create mode 100644 tools/binman/test/226_fit_split_elf.dts create mode 100644 tools/binman/test/227_fit_bad_dir.dts create mode 100644 tools/binman/test/228_fit_bad_dir_config.dts
[...]
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py index 30b20a07a2..63d552ed19 100644> > [...]
@@ -154,6 +159,149 @@ class Entry_fit(Entry_section):
Note that if no devicetree files are provided (with '-a of-list' as above) then no nodes will be generated.
- Generating nodes from an ELF file (split-elf)
- This uses the node as a template to generate multiple nodes. The following
- special properties are available:
- split-elf
Split an ELF file into a separate node for each segment. This uses the
node as a template to generate multiple nodes. The following special
properties are available:
fit,load
Generates a `load = <...>` property with the load address of the
segmnet
segmnet -> segment, missed it the first time around.
fit,entry
Generates a `entry = <...>` property with the entry address of the
ELF. This is only produced for the first entry
fit,data
Generates a `data = <...>` property with the contents of the segment
fit,loadables
Generates a `loadable = <...>` property with a list of the generated
nodes (including all nodes if this operation is used multiple times)
[...]
@@ -363,11 +514,20 @@ class Entry_fit(Entry_section): fname = tools.get_input_filename(fdt_fname + '.dtb') with fsw.add_node(node_name): for pname, prop in node.props.items():
val = prop.bytes.replace(
b'NAME', tools.to_bytes(fdt_fname))
val = val.replace(
b'SEQ', tools.to_bytes(str(seq + 1)))
fsw.property(pname, val)
if pname == 'fit,loadables':
val = '\0'.join(self._loadables) + '\0'
fsw.property('loadables', val.encode('utf-8'))
elif pname == 'fit,operation':
pass
elif pname.startswith('fit,'):
self._raise(base_node, node,
f"Unknown directive '{pname}'")
else:
val = prop.bytes.replace(
b'NAME', tools.to_bytes(fdt_fname))
val = val.replace(
b'SEQ', tools.to_bytes(str(seq + 1)))
fsw.property(pname, val)
I get two distinct "loadables" properties with this config fragment, and fdtget and mkimage -l only consider the first one:
@config-SEQ { description = "NAME.dtb"; fdt = "fdt-SEQ"; firmware = "atf-1"; loadables = "u-boot"; fit,loadables; };
Yes that needs fixing, but I'll leave that case for now. It needs its own test.
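One possible shape for that fix, as an untested sketch rather than anything in this series: when handling 'fit,loadables' in _gen_fdt_nodes(), merge in any literal 'loadables' property from the template and skip emitting the literal one separately:

# Untested sketch of one possible fix, not part of this series
if pname == 'fit,loadables':
    existing = node.props.get('loadables')
    # prop.bytes holds NUL-terminated strings
    names = (existing.bytes.decode('utf-8').split('\0')[:-1]
             if existing else [])
    val = '\0'.join(names + self._loadables) + '\0'
    fsw.property('loadables', val.encode('utf-8'))
elif pname == 'loadables' and 'fit,loadables' in node.props:
    pass  # already merged into the generated property above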
$ fdtget simple-bin.fit.fit /configurations/config-1 -p
description
fdt
firmware
loadables
loadables

$ fdtget simple-bin.fit.fit /configurations/config-1 loadables
u-boot
$ mkimage -l simple-bin.fit.fit
FIT description: FIT image for U-Boot with bl31 (TF-A)
Created:         Thu Feb 24 17:36:36 2022
 Image 0 (u-boot)
  Description:  U-Boot (64-bit)
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Standalone Program
  Compression:  uncompressed
  Data Size:    727192 Bytes = 710.15 KiB = 0.69 MiB
  Architecture: AArch64
  Load Address: 0x00200000
  Entry Point:  unavailable
 Image 1 (atf-1)
  Description:  ARM Trusted Firmware
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Firmware
  Compression:  uncompressed
  Data Size:    139342 Bytes = 136.08 KiB = 0.13 MiB
  Architecture: AArch64
  OS:           ARM Trusted Firmware
  Load Address: 0x00040000
 Image 2 (atf-2)
  Description:  ARM Trusted Firmware
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Firmware
  Compression:  uncompressed
  Data Size:    8024 Bytes = 7.84 KiB = 0.01 MiB
  Architecture: AArch64
  OS:           ARM Trusted Firmware
  Load Address: 0xff3b0000
 Image 3 (atf-3)
  Description:  ARM Trusted Firmware
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Firmware
  Compression:  uncompressed
  Data Size:    8192 Bytes = 8.00 KiB = 0.01 MiB
  Architecture: AArch64
  OS:           ARM Trusted Firmware
  Load Address: 0xff8c0000
 Image 4 (fdt-1)
  Description:  fdt-rk3399-gru-bob
  Created:      Thu Feb 24 17:36:36 2022
  Type:         Flat Device Tree
  Compression:  uncompressed
  Data Size:    69392 Bytes = 67.77 KiB = 0.07 MiB
  Architecture: Unknown Architecture
 Default Configuration: 'config-1'
 Configuration 0 (config-1)
  Description:  rk3399-gru-bob.dtb
  Kernel:       unavailable
  Firmware:     atf-1
  FDT:          fdt-1
  Loadables:    u-boot
I didn't test if my chromebook_kevin boots in this double-loadables configuration, though.
I wish we could get the kevin patches applied. What on earth is going on there? I haven't been able to test any of this on kevin.
[..]
Regards, Simon

This board uses SPL_FIT so does not need to support loading a raw image. Drop it to avoid binman trying to insert a symbol which has no value.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
configs/evb-rk3288_defconfig | 1 + 1 file changed, 1 insertion(+)
diff --git a/configs/evb-rk3288_defconfig b/configs/evb-rk3288_defconfig index 97d4c14f65..6143c4d989 100644 --- a/configs/evb-rk3288_defconfig +++ b/configs/evb-rk3288_defconfig @@ -24,6 +24,7 @@ CONFIG_USE_PREBOOT=y CONFIG_DEFAULT_FDT_FILE="rk3288-evb-rk808.dtb" CONFIG_SILENT_CONSOLE=y CONFIG_DISPLAY_BOARDINFO_LATE=y +# CONFIG_SPL_RAW_IMAGE_SUPPORT is not set CONFIG_SPL_STACK_R=y CONFIG_SPL_STACK_R_MALLOC_SIMPLE_LEN=0x10000 CONFIG_SPL_OPTEE_IMAGE=y

Include the rockchip-u-boot.dtsi file with 64-bit boards and enable binman so that these boards can also use it, rather than using special Makefile rules and scripts.
This does not change the Makefile nor remove any scripts, but sets it up so that this is possible.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
arch/arm/dts/px30-u-boot.dtsi | 2 ++ arch/arm/dts/rk3308-u-boot.dtsi | 2 ++ arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi | 2 ++ arch/arm/dts/rk3328-u-boot.dtsi | 2 ++ arch/arm/dts/rk3368-u-boot.dtsi | 1 + arch/arm/dts/rk3399-u-boot.dtsi | 5 +++-- arch/arm/dts/rk3568-u-boot.dtsi | 2 ++ arch/arm/mach-rockchip/Kconfig | 6 ++++++ 8 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/arch/arm/dts/px30-u-boot.dtsi b/arch/arm/dts/px30-u-boot.dtsi index f102b2aef4..462eaf68f8 100644 --- a/arch/arm/dts/px30-u-boot.dtsi +++ b/arch/arm/dts/px30-u-boot.dtsi @@ -3,6 +3,8 @@ * (C) Copyright 2019 Rockchip Electronics Co., Ltd */
+#include "rockchip-u-boot.dtsi" + / { aliases { mmc0 = &emmc; diff --git a/arch/arm/dts/rk3308-u-boot.dtsi b/arch/arm/dts/rk3308-u-boot.dtsi index 4bfad31fba..ab5bfc2ce9 100644 --- a/arch/arm/dts/rk3308-u-boot.dtsi +++ b/arch/arm/dts/rk3308-u-boot.dtsi @@ -3,6 +3,8 @@ *(C) Copyright 2019 Rockchip Electronics Co., Ltd */
+#include "rockchip-u-boot.dtsi" + / { aliases { mmc0 = &emmc; diff --git a/arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi b/arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi index 95f2652494..16c33735eb 100644 --- a/arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi +++ b/arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi @@ -3,6 +3,8 @@ * Copyright (c) 2020 Theobroma Systems Design und Consulting GmbH */
+#include "rockchip-u-boot.dtsi" + / { chosen { u-boot,spl-boot-order = &sdmmc; diff --git a/arch/arm/dts/rk3328-u-boot.dtsi b/arch/arm/dts/rk3328-u-boot.dtsi index 1633558264..d4a7540a92 100644 --- a/arch/arm/dts/rk3328-u-boot.dtsi +++ b/arch/arm/dts/rk3328-u-boot.dtsi @@ -3,6 +3,8 @@ * (C) Copyright 2019 Rockchip Electronics Co., Ltd */
+#include "rockchip-u-boot.dtsi" + / { aliases { mmc0 = &emmc; diff --git a/arch/arm/dts/rk3368-u-boot.dtsi b/arch/arm/dts/rk3368-u-boot.dtsi index 2767c2678d..b37da4e851 100644 --- a/arch/arm/dts/rk3368-u-boot.dtsi +++ b/arch/arm/dts/rk3368-u-boot.dtsi @@ -3,6 +3,7 @@ * Copyright (c) 2020 Theobroma Systems Design und Consulting GmbH */
+#include "rockchip-u-boot.dtsi" #include <dt-bindings/memory/rk3368-dmc.h>
/ { diff --git a/arch/arm/dts/rk3399-u-boot.dtsi b/arch/arm/dts/rk3399-u-boot.dtsi index 716b9a433a..fd4150e3f0 100644 --- a/arch/arm/dts/rk3399-u-boot.dtsi +++ b/arch/arm/dts/rk3399-u-boot.dtsi @@ -60,8 +60,9 @@
};
-#ifdef CONFIG_ROCKCHIP_SPI_IMAGE &binman { + multiple-images; +#ifdef CONFIG_ROCKCHIP_SPI_IMAGE rom { filename = "u-boot.rom"; size = <0x400000>; @@ -81,8 +82,8 @@ fdtmap { }; }; -}; #endif +};
&cru { u-boot,dm-pre-reloc; diff --git a/arch/arm/dts/rk3568-u-boot.dtsi b/arch/arm/dts/rk3568-u-boot.dtsi index 5a80dda275..fa9b6ae23b 100644 --- a/arch/arm/dts/rk3568-u-boot.dtsi +++ b/arch/arm/dts/rk3568-u-boot.dtsi @@ -3,6 +3,8 @@ * (C) Copyright 2021 Rockchip Electronics Co., Ltd */
+#include "rockchip-u-boot.dtsi" + / { aliases { mmc0 = &sdhci; diff --git a/arch/arm/mach-rockchip/Kconfig b/arch/arm/mach-rockchip/Kconfig index 92f35309e4..c8df65980f 100644 --- a/arch/arm/mach-rockchip/Kconfig +++ b/arch/arm/mach-rockchip/Kconfig @@ -15,6 +15,7 @@ config ROCKCHIP_PX30 select DEBUG_UART_BOARD_INIT imply ROCKCHIP_COMMON_BOARD imply SPL_ROCKCHIP_COMMON_BOARD + imply BINMAN help The Rockchip PX30 is a ARM-based SoC with a quad-core Cortex-A35 including NEON and GPU, Mali-400 graphics, several DDR3 options @@ -146,6 +147,7 @@ config ROCKCHIP_RK3308 imply SPL_SERIAL imply TPL_SERIAL imply SPL_SEPARATE_BSS + imply BINMAN help The Rockchip RK3308 is a ARM-based Soc which embedded with quad Cortex-A35 and highly integrated audio interfaces. @@ -167,6 +169,7 @@ config ROCKCHIP_RK3328 select ENABLE_ARM_SOC_BOOT0_HOOK select DEBUG_UART_BOARD_INIT select SYS_NS16550 + imply BINMAN help The Rockchip RK3328 is a ARM-based SoC with a quad-core Cortex-A53. including NEON and GPU, 1MB L2 cache, Mali-T7 graphics, two @@ -186,6 +189,7 @@ config ROCKCHIP_RK3368 imply SPL_SERIAL imply TPL_SERIAL imply TPL_ROCKCHIP_COMMON_BOARD + imply BINMAN help The Rockchip RK3368 is a ARM-based SoC with a octa-core (organised into a big and little cluster with 4 cores each) Cortex-A53 including @@ -244,6 +248,7 @@ config ROCKCHIP_RK3399 imply TPL_ROCKCHIP_COMMON_BOARD imply SYS_BOOTCOUNT_SINGLEWORD if BOOTCOUNT_LIMIT imply CMD_BOOTCOUNT if BOOTCOUNT_LIMIT + imply BINMAN help The Rockchip RK3399 is a ARM-based SoC with a dual-core Cortex-A72 and quad-core Cortex-A53. @@ -264,6 +269,7 @@ config ROCKCHIP_RK3568 select SYSCON select BOARD_LATE_INIT imply ROCKCHIP_COMMON_BOARD + imply BINMAN help The Rockchip RK3568 is a ARM-based SoC with quad-core Cortex-A55, including NEON and GPU, 512K L3 cache, Mali-G52 based graphics,

On 24/02/2022 02:00, Simon Glass wrote:
Include the rockchip-u-boot.dtsi file with 64-bit boards and enable binman so that these boards can also use it, rather than using special Makefile rules and scripts.
This does not change the Makefile nor remove any scripts, but sets it up so that this is possible.
Signed-off-by: Simon Glass sjg@chromium.org
(no changes since v1)
arch/arm/dts/px30-u-boot.dtsi | 2 ++ arch/arm/dts/rk3308-u-boot.dtsi | 2 ++ arch/arm/dts/rk3326-odroid-go2-u-boot.dtsi | 2 ++ arch/arm/dts/rk3328-u-boot.dtsi | 2 ++ arch/arm/dts/rk3368-u-boot.dtsi | 1 + arch/arm/dts/rk3399-u-boot.dtsi | 5 +++-- arch/arm/dts/rk3568-u-boot.dtsi | 2 ++ arch/arm/mach-rockchip/Kconfig | 6 ++++++ 8 files changed, 20 insertions(+), 2 deletions(-)
Reviewed-by: Alper Nebi Yasak alpernebiyasak@gmail.com

Add the required binman images to replace the Makefile rules which are currently used. This includes subsuming:
- tpl/u-boot-tpl-rockchip.bin if TPL is enabled - idbloader.img if either or both of SPL and TPL are enabled - u-boot.itb if SPL_FIT is enabled - u-boot-rockchip.bin if SPL is used, either using u-boot.itb when SPL_FIT is enabled or u-boot.img when it isn't
Note that the intermediate files are dropped with binman, since it produces everything in one pass. This means that tpl/u-boot-tpl-rockchip.bin is not created, for example.
Note that for some 32-bit rk3288 boards, rockchip-optee.dtsi is included.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Rename op-tee to tee-os - Drop use of .itb2
arch/arm/dts/rockchip-u-boot.dtsi | 84 ++++++++++++++++++++++++++++++- 1 file changed, 82 insertions(+), 2 deletions(-)
diff --git a/arch/arm/dts/rockchip-u-boot.dtsi b/arch/arm/dts/rockchip-u-boot.dtsi index eae3ee715d..64e4466489 100644 --- a/arch/arm/dts/rockchip-u-boot.dtsi +++ b/arch/arm/dts/rockchip-u-boot.dtsi @@ -17,13 +17,93 @@ filename = "u-boot-rockchip.bin"; pad-byte = <0xff>;
- blob { - filename = "idbloader.img"; +#ifdef CONFIG_TPL + mkimage { + args = "-n", CONFIG_SYS_SOC, "-T", "rksd"; + + u-boot-tpl { + }; + }; + + u-boot-spl { }; +#elif defined(CONFIG_SPL) /* SPL only */ + mkimage { + args = "-n", CONFIG_SYS_SOC, "-T", "rksd"; + + u-boot-spl { + }; + }; +#endif +#if defined(CONFIG_SPL_FIT) && defined(CONFIG_ARM64) + fit: fit { + description = "FIT image for U-Boot with bl31 (TF-A)"; + #address-cells = <1>; + fit,fdt-list = "of-list"; + fit,external-offset = <CONFIG_FIT_EXTERNAL_OFFSET>; + images { + u-boot { + description = "U-Boot (64-bit)"; + type = "standalone"; + os = "U-Boot"; + arch = "arm64"; + compression = "none"; + load = <CONFIG_SYS_TEXT_BASE>; + u-boot-nodtb { + }; + };
+ @atf-SEQ { + fit,operation = "split-elf"; + description = "ARM Trusted Firmware"; + type = "firmware"; + arch = "arm64"; + os = "arm-trusted-firmware"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + atf-bl31 { + }; + }; + @tee-SEQ { + fit,operation = "split-elf"; + description = "TEE"; + type = "tee"; + arch = "arm64"; + os = "tee"; + compression = "none"; + fit,load; + fit,entry; + fit,data; + + tee-os { + }; + }; + + @fdt-SEQ { + description = "fdt-NAME"; + compression = "none"; + type = "flat_dt"; + }; + }; + + configurations { + default = "@config-DEFAULT-SEQ"; + @config-SEQ { + description = "NAME.dtb"; + fdt = "fdt-SEQ"; + firmware = "u-boot"; + fit,loadables; + }; + }; + }; +#else u-boot-img { offset = <CONFIG_SPL_PAD_TO>; }; +#endif /* CONFIG_ARM64 */ }; }; #endif

On Wed, Feb 23, 2022 at 6:04 PM Simon Glass sjg@chromium.org wrote:
Good Evening,
Add the required binman images to replace the Makefile rules which are currently used. This includes subsuming:
- tpl/u-boot-tpl-rockchip.bin if TPL is enabled
- idbloader.img if either or both of SPL and TPL are enabled
- u-boot.itb if SPL_FIT is enabled
- u-boot-rockchip.bin if SPL is used, either using u-boot.itb when SPL_FIT is enabled or u-boot.img when it isn't
Note that the intermediate files are dropped with binman, since it produces everything in one pass. This means that tpl/u-boot-tpl-rockchip.bin is not created, for example.
I love this series and look forward to it working. I've tested this with rk3399; unfortunately it does not produce a functional u-boot.itb. I originally thought it was working, but then I realized the image was being placed at an incorrect location.
Note that for some 32-bit rk3288 boards, rockchip-optee.dtsi is included.
Signed-off-by: Simon Glass sjg@chromium.org
Changes in v2:
- Rename op-tee to tee-os
- Drop use of .itb2
arch/arm/dts/rockchip-u-boot.dtsi | 84 ++++++++++++++++++++++++++++++- 1 file changed, 82 insertions(+), 2 deletions(-)
diff --git a/arch/arm/dts/rockchip-u-boot.dtsi b/arch/arm/dts/rockchip-u-boot.dtsi index eae3ee715d..64e4466489 100644 --- a/arch/arm/dts/rockchip-u-boot.dtsi +++ b/arch/arm/dts/rockchip-u-boot.dtsi @@ -17,13 +17,93 @@ filename = "u-boot-rockchip.bin"; pad-byte = <0xff>;
blob {
filename = "idbloader.img";
+#ifdef CONFIG_TPL
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
u-boot-tpl {
};
};
u-boot-spl { };
+#elif defined(CONFIG_SPL) /* SPL only */
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
u-boot-spl {
};
};
+#endif +#if defined(CONFIG_SPL_FIT) && defined(CONFIG_ARM64)
fit: fit {
description = "FIT image for U-Boot with bl31 (TF-A)";
#address-cells = <1>;
fit,fdt-list = "of-list";
fit,external-offset = <CONFIG_FIT_EXTERNAL_OFFSET>;
You need: offset = <CONFIG_SPL_PAD_TO>; here, or the image ends up at the wrong location.
images {
u-boot {
description = "U-Boot (64-bit)";
type = "standalone";
os = "U-Boot";
arch = "arm64";
compression = "none";
load = <CONFIG_SYS_TEXT_BASE>;
u-boot-nodtb {
};
};
@atf-SEQ {
fit,operation = "split-elf";
description = "ARM Trusted Firmware";
type = "firmware";
arch = "arm64";
os = "arm-trusted-firmware";
compression = "none";
fit,load;
fit,entry;
fit,data;
atf-bl31 {
};
};
@tee-SEQ {
fit,operation = "split-elf";
description = "TEE";
type = "tee";
arch = "arm64";
os = "tee";
compression = "none";
fit,load;
fit,entry;
fit,data;
tee-os {
};
};
@fdt-SEQ {
description = "fdt-NAME";
compression = "none";
type = "flat_dt";
};
};
configurations {
default = "@config-DEFAULT-SEQ";
@config-SEQ {
description = "NAME.dtb";
fdt = "fdt-SEQ";
firmware = "u-boot";
fit,loadables;
};
};
};
+#else u-boot-img { offset = <CONFIG_SPL_PAD_TO>; }; +#endif /* CONFIG_ARM64 */ }; };
#endif
2.35.1.574.g5d30c73bfb-goog
The image produced is not functional, however, because it swaps the firmware node with the loadable-1 node.

Working image:

 Configuration 0 (config_1)
  Description:  rk3399-pinephone-pro.dtb
  Kernel:       unavailable
  Firmware:     atf_1
  FDT:          fdt_1
  Loadables:    uboot
                atf_2
                atf_3

Non-working image produced from this:

 Configuration 0 (config-1)
  Description:  rk3399-rockpro64.dtb
  Kernel:       unavailable
  Firmware:     u-boot
  FDT:          fdt-1
  Loadables:    atf-1
                atf-2
                atf-3
Also, it still doesn't support passing two separate files to mkimage at the same time, required for proper support of rk33xx_SPI generation and for rk35xx in general. Please see my standalone patch for what I hacked together to get that working: http://patchwork.ozlabs.org/project/uboot/patch/20220301024826.1228290-1-pgw... Note: those images are not functional either due to this issue.
A wishlist item would be for it to produce the original idbloader.img and u-boot.itb files, at least for now.
Thanks! Very Respectfully, Peter Geis

On 03/03/2022 01:16, Peter Geis wrote:
On Wed, Feb 23, 2022 at 6:04 PM Simon Glass sjg@chromium.org wrote:
diff --git a/arch/arm/dts/rockchip-u-boot.dtsi b/arch/arm/dts/rockchip-u-boot.dtsi index eae3ee715d..64e4466489 100644 --- a/arch/arm/dts/rockchip-u-boot.dtsi +++ b/arch/arm/dts/rockchip-u-boot.dtsi @@ -17,13 +17,93 @@ filename = "u-boot-rockchip.bin"; pad-byte = <0xff>;
blob {
filename = "idbloader.img";
+#ifdef CONFIG_TPL
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
u-boot-tpl {
};
};
u-boot-spl { };
+#elif defined(CONFIG_SPL) /* SPL only */
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
u-boot-spl {
};
};
+#endif +#if defined(CONFIG_SPL_FIT) && defined(CONFIG_ARM64)
fit: fit {
description = "FIT image for U-Boot with bl31 (TF-A)";
#address-cells = <1>;
fit,fdt-list = "of-list";
fit,external-offset = <CONFIG_FIT_EXTERNAL_OFFSET>;
You need: offset = <CONFIG_SPL_PAD_TO>; here, or the image ends up at the wrong location.
Thanks for confirming this.
The image produced is not functional however, because it swaps the firmware node with the loadable-1 node. Working image: Configuration 0 (config_1) Description: rk3399-pinephone-pro.dtb Kernel: unavailable Firmware: atf_1 FDT: fdt_1 Loadables: uboot atf_2 atf_3
Non working image produced from this: Configuration 0 (config-1) Description: rk3399-rockpro64.dtb Kernel: unavailable Firmware: u-boot FDT: fdt-1 Loadables: atf-1 atf-2 atf-3
And also this. I encountered the same things on rk3399-gru-kevin, mentioned them in my reply [1] to v1 but I guess Simon missed it.
[1] My reply to v1 of this patch https://lore.kernel.org/u-boot/d50a7e13-7113-8c95-1861-cbc6c1000755@gmail.co...
Also, it still doesn't support passing two separate files to mkimage at the same time, required for proper support of rk33xx_SPI generation and for rk35xx in general.
I got v1 of this booting from SPI, but on a board that doesn't use TPL. I see in doc/boards/rockchip.rst that one would indeed pass TPL+SPL to mkimage as two different files. I don't know how they're processed or anything about the rksd/rkspi formats though.
Please see my standalone patch for what I hacked together to get that working: http://patchwork.ozlabs.org/project/uboot/patch/20220301024826.1228290-1-pgw... Note: those images are not functional either due to this issue.
A wishlist item would be for it to produce the original idbloader.img and u-boot.itb files, at least for now.
Thanks! Very Respectfully, Peter Geis

On Thu, Mar 3, 2022 at 4:17 PM Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 03/03/2022 01:16, Peter Geis wrote:
On Wed, Feb 23, 2022 at 6:04 PM Simon Glass sjg@chromium.org wrote:
diff --git a/arch/arm/dts/rockchip-u-boot.dtsi b/arch/arm/dts/rockchip-u-boot.dtsi index eae3ee715d..64e4466489 100644 --- a/arch/arm/dts/rockchip-u-boot.dtsi +++ b/arch/arm/dts/rockchip-u-boot.dtsi @@ -17,13 +17,93 @@ filename = "u-boot-rockchip.bin"; pad-byte = <0xff>;
blob {
filename = "idbloader.img";
+#ifdef CONFIG_TPL
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
u-boot-tpl {
};
};
u-boot-spl { };
+#elif defined(CONFIG_SPL) /* SPL only */
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
u-boot-spl {
};
};
+#endif +#if defined(CONFIG_SPL_FIT) && defined(CONFIG_ARM64)
fit: fit {
description = "FIT image for U-Boot with bl31 (TF-A)";
#address-cells = <1>;
fit,fdt-list = "of-list";
fit,external-offset = <CONFIG_FIT_EXTERNAL_OFFSET>;
You need: offset = <CONFIG_SPL_PAD_TO>; here, or the image ends up at the wrong location.
Thanks for confirming this.
The image produced is not functional however, because it swaps the firmware node with the loadable-1 node. Working image: Configuration 0 (config_1) Description: rk3399-pinephone-pro.dtb Kernel: unavailable Firmware: atf_1 FDT: fdt_1 Loadables: uboot atf_2 atf_3
Non working image produced from this: Configuration 0 (config-1) Description: rk3399-rockpro64.dtb Kernel: unavailable Firmware: u-boot FDT: fdt-1 Loadables: atf-1 atf-2 atf-3
And also this. I encountered the same things on rk3399-gru-kevin, mentioned them in my reply [1] to v1 but I guess Simon missed it.
[1] My reply to v1 of this patch https://lore.kernel.org/u-boot/d50a7e13-7113-8c95-1861-cbc6c1000755@gmail.co...
Yes, that's the solution I ended up with as well. Funnily enough, it seems loadables has a limit of five items, meaning the newest rk356x ATF images (which have four binaries), plus u-boot and optee, trigger an error.
Also, it still doesn't support passing two separate files to mkimage at the same time, required for proper support of rk33xx_SPI generation and for rk35xx in general.
I got v1 of this booting from SPI, but on a board that doesn't use TPL. I see in doc/boards/rockchip.rst that one would indeed pass TPL+SPL to mkimage as two different files. I don't know how they're processed or anything about the rksd/rkspi formats though.
RKSPI mode splits the image up into 2048-byte chunks and pads each chunk with 2048 bytes of blank space. For some reason the rk32/rk33 bootrom SPI code reads only 2048 bytes out of each 4096-byte chunk. Delightfully, this seems to have been fixed in rk35xx.
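A minimal sketch of the layout described above, assuming 2048-byte chunks each padded out to a 4096-byte page with a zero pad byte (the pad value is an assumption; this is an illustration, not mkimage's actual code):

    def rkspi_pad(data, chunk=2048, page=4096):
        """Pad every 2048-byte chunk of 'data' out to a full 4096-byte page"""
        out = bytearray()
        for pos in range(0, len(data), chunk):
            # The rk32/rk33 bootrom only reads the first 'chunk' bytes of each page
            out += data[pos:pos + chunk].ljust(page, b'\0')
        return bytes(out)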
Please see my standalone patch for what I hacked together to get that working: http://patchwork.ozlabs.org/project/uboot/patch/20220301024826.1228290-1-pgw... Note: those images are not functional either due to this issue.
A wishlist item would be for it to produce the original idbloader.img and u-boot.itb files, at least for now.
Thanks! Very Respectfully, Peter Geis

Hi Peter, Alper,
On Thu, 3 Mar 2022 at 15:34, Peter Geis pgwipeout@gmail.com wrote:
On Thu, Mar 3, 2022 at 4:17 PM Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 03/03/2022 01:16, Peter Geis wrote:
On Wed, Feb 23, 2022 at 6:04 PM Simon Glass sjg@chromium.org wrote:
diff --git a/arch/arm/dts/rockchip-u-boot.dtsi b/arch/arm/dts/rockchip-u-boot.dtsi index eae3ee715d..64e4466489 100644 --- a/arch/arm/dts/rockchip-u-boot.dtsi +++ b/arch/arm/dts/rockchip-u-boot.dtsi @@ -17,13 +17,93 @@ filename = "u-boot-rockchip.bin"; pad-byte = <0xff>;
blob {
filename = "idbloader.img";
+#ifdef CONFIG_TPL
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
u-boot-tpl {
};
};
u-boot-spl { };
+#elif defined(CONFIG_SPL) /* SPL only */
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
u-boot-spl {
};
};
+#endif +#if defined(CONFIG_SPL_FIT) && defined(CONFIG_ARM64)
fit: fit {
description = "FIT image for U-Boot with bl31 (TF-A)";
#address-cells = <1>;
fit,fdt-list = "of-list";
fit,external-offset = <CONFIG_FIT_EXTERNAL_OFFSET>;
You need: offset = <CONFIG_SPL_PAD_TO>; here, or the image ends up at the wrong location.
Thanks for confirming this.
The image produced is not functional however, because it swaps the firmware node with the loadable-1 node. Working image: Configuration 0 (config_1) Description: rk3399-pinephone-pro.dtb Kernel: unavailable Firmware: atf_1 FDT: fdt_1 Loadables: uboot atf_2 atf_3
Non working image produced from this: Configuration 0 (config-1) Description: rk3399-rockpro64.dtb Kernel: unavailable Firmware: u-boot FDT: fdt-1 Loadables: atf-1 atf-2 atf-3
And also this. I encountered the same things on rk3399-gru-kevin, mentioned them in my reply [1] to v1 but I guess Simon missed it.
[1] My reply to v1 of this patch https://lore.kernel.org/u-boot/d50a7e13-7113-8c95-1861-cbc6c1000755@gmail.co...
Yes, that's the solution I ended up with as well. Funnily enough, it seems loadables has a limit of five items, meaning the newest rk356x ATF images (which have four binaries), plus u-boot and optee, trigger an error.
Also, it still doesn't support passing two separate files to mkimage at the same time, required for proper support of rk33xx_SPI generation and for rk35xx in general.
I got v1 of this booting from SPI, but on a board that doesn't use TPL. I see in doc/boards/rockchip.rst that one would indeed pass TPL+SPL to mkimage as two different files. I don't know how they're processed or anything about the rksd/rkspi formats though.
RKSPI mode splits the image up into 2048-byte chunks and pads each chunk with 2048 bytes of blank space. For some reason the rk32/rk33 bootrom SPI code reads only 2048 bytes out of each 4096-byte chunk. Delightfully, this seems to have been fixed in rk35xx.
Please see my standalone patch for what I hacked together to get that working: http://patchwork.ozlabs.org/project/uboot/patch/20220301024826.1228290-1-pgw... Note: those images are not functional either due to this issue.
A wishlist item would be for it to produce the original idbloader.img and u-boot.itb files, at least for now.
Thanks for all the info. I look forward to kevin being supported...but I should try jerry as well.
I hope to get this series applied soon and review the various follow-on patches. It would be great to sort things out this week, as there are quite a few deficiencies in all of this, as you have found. But for now I will hold off on the rockchip-specific patches, until binman FIT is a bit better.
Regards, Simon

Instead of the bash script, use binman to generate the FIT for arm64.
For 32-bit boards, use binman for all images, dropping the intermediate files.
With this change, only Zynq is now using SPL_FIT_GENERATOR so update the Kconfig rule accordingly.
Clean up the Makefile to the extent possible. Unfortunately, two boards do not use SPL_FRAMEWORK so don't enable the u-boot.img rule:
evb-rk3036 kylin-rk3036
So a small remnant remains.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
Makefile | 39 ++------------------------------------- boot/Kconfig | 2 +- 2 files changed, 3 insertions(+), 38 deletions(-)
diff --git a/Makefile b/Makefile index 697cc51d67..7a4f95b898 100644 --- a/Makefile +++ b/Makefile @@ -976,19 +976,8 @@ ifeq ($(CONFIG_MPC85xx)$(CONFIG_OF_SEPARATE),yy) INPUTS-y += u-boot-with-dtb.bin endif
-ifeq ($(CONFIG_ARCH_ROCKCHIP),y) -# On ARM64 this target is produced by binman so we don't need this dep -ifeq ($(CONFIG_ARM64),y) -ifeq ($(CONFIG_SPL),y) -# TODO: Get binman to generate this too -INPUTS-y += u-boot-rockchip.bin -endif -else -ifeq ($(CONFIG_SPL),y) -# Generate these inputs for binman which will create the output files -INPUTS-y += idbloader.img u-boot.img -endif -endif +ifeq ($(CONFIG_ARCH_ROCKCHIP)_$(CONFIG_SPL_FRAMEWORK),y_) +INPUTS-y += u-boot.img endif
INPUTS-$(CONFIG_X86) += u-boot-x86-start16.bin u-boot-x86-reset16.bin \ @@ -1473,30 +1462,6 @@ OBJCOPYFLAGS_u-boot-with-spl.bin = -I binary -O binary \ u-boot-with-spl.bin: $(SPL_IMAGE) $(SPL_PAYLOAD) FORCE $(call if_changed,pad_cat)
-ifeq ($(CONFIG_ARCH_ROCKCHIP),y) - -# TPL + SPL -ifeq ($(CONFIG_SPL)$(CONFIG_TPL),yy) -MKIMAGEFLAGS_u-boot-tpl-rockchip.bin = -n $(CONFIG_SYS_SOC) -T rksd -tpl/u-boot-tpl-rockchip.bin: tpl/u-boot-tpl.bin FORCE - $(call if_changed,mkimage) -idbloader.img: tpl/u-boot-tpl-rockchip.bin spl/u-boot-spl.bin FORCE - $(call if_changed,cat) -else -MKIMAGEFLAGS_idbloader.img = -n $(CONFIG_SYS_SOC) -T rksd -idbloader.img: spl/u-boot-spl.bin FORCE - $(call if_changed,mkimage) -endif - -ifeq ($(CONFIG_ARM64),y) -OBJCOPYFLAGS_u-boot-rockchip.bin = -I binary -O binary \ - --pad-to=$(CONFIG_SPL_PAD_TO) --gap-fill=0xff -u-boot-rockchip.bin: idbloader.img u-boot.itb FORCE - $(call if_changed,pad_cat) -endif # CONFIG_ARM64 - -endif # CONFIG_ARCH_ROCKCHIP - ifeq ($(CONFIG_ARCH_LPC32XX)$(CONFIG_SPL),yy) MKIMAGEFLAGS_lpc32xx-spl.img = -T lpc32xximage -a $(CONFIG_SPL_TEXT_BASE)
diff --git a/boot/Kconfig b/boot/Kconfig index b83a4e8400..ae1a37ae92 100644 --- a/boot/Kconfig +++ b/boot/Kconfig @@ -275,7 +275,7 @@ config SPL_FIT_SOURCE
config USE_SPL_FIT_GENERATOR bool "Use a script to generate the .its script" - default y if SPL_FIT && (!ARCH_SUNXI && !RISCV) + default y if SPL_FIT && ARCH_ZYNQMP
config SPL_FIT_GENERATOR string ".its file generator script for U-Boot FIT image"

This is not used anymore. Drop it.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2:
- Drop patches previously applied
- Add various suggestions from Alper Nebi Yasak
- Add patches to refactor binman's FIT support
Makefile | 3 - arch/arm/mach-rockchip/make_fit_atf.py | 240 ------------------------- boot/Kconfig | 1 - 3 files changed, 244 deletions(-) delete mode 100755 arch/arm/mach-rockchip/make_fit_atf.py
diff --git a/Makefile b/Makefile index 7a4f95b898..f0b01a8f1b 100644 --- a/Makefile +++ b/Makefile @@ -1352,9 +1352,6 @@ $(U_BOOT_ITS): $(subst ",,$(CONFIG_SPL_FIT_SOURCE)) else ifneq ($(CONFIG_USE_SPL_FIT_GENERATOR),) U_BOOT_ITS := u-boot.its -ifeq ($(CONFIG_SPL_FIT_GENERATOR),"arch/arm/mach-rockchip/make_fit_atf.py") -U_BOOT_ITS_DEPS += u-boot -endif $(U_BOOT_ITS): $(U_BOOT_ITS_DEPS) FORCE $(srctree)/$(CONFIG_SPL_FIT_GENERATOR) \ $(patsubst %,arch/$(ARCH)/dts/%.dtb,$(subst ",,$(CONFIG_OF_LIST))) > $@ diff --git a/arch/arm/mach-rockchip/make_fit_atf.py b/arch/arm/mach-rockchip/make_fit_atf.py deleted file mode 100755 index f3224d2555..0000000000 --- a/arch/arm/mach-rockchip/make_fit_atf.py +++ /dev/null @@ -1,240 +0,0 @@ -#!/usr/bin/env python3 -""" -# SPDX-License-Identifier: GPL-2.0+ -# -# A script to generate FIT image source for rockchip boards -# with ARM Trusted Firmware -# and multiple device trees (given on the command line) -# -# usage: $0 <dt_name> [<dt_name> [<dt_name] ...] -""" - -import os -import sys -import getopt -import logging -import struct - -DT_HEADER = """ -/* - * This is a generated file. - */ -/dts-v1/; - -/ { - description = "FIT image for U-Boot with bl31 (TF-A)"; - #address-cells = <1>; - - images { -""" - -DT_UBOOT = """ - uboot { - description = "U-Boot (64-bit)"; - data = /incbin/("u-boot-nodtb.bin"); - type = "standalone"; - os = "U-Boot"; - arch = "arm64"; - compression = "none"; - load = <0x%08x>; - }; - -""" - -DT_IMAGES_NODE_END = """ }; - -""" - -DT_END = "};" - -def append_bl31_node(file, atf_index, phy_addr, elf_entry): - # Append BL31 DT node to input FIT dts file. - data = 'bl31_0x%08x.bin' % phy_addr - file.write('\t\tatf_%d {\n' % atf_index) - file.write('\t\t\tdescription = "ARM Trusted Firmware";\n') - file.write('\t\t\tdata = /incbin/("%s");\n' % data) - file.write('\t\t\ttype = "firmware";\n') - file.write('\t\t\tarch = "arm64";\n') - file.write('\t\t\tos = "arm-trusted-firmware";\n') - file.write('\t\t\tcompression = "none";\n') - file.write('\t\t\tload = <0x%08x>;\n' % phy_addr) - if atf_index == 1: - file.write('\t\t\tentry = <0x%08x>;\n' % elf_entry) - file.write('\t\t};\n') - file.write('\n') - -def append_tee_node(file, atf_index, phy_addr, elf_entry): - # Append TEE DT node to input FIT dts file. - data = 'tee_0x%08x.bin' % phy_addr - file.write('\t\tatf_%d {\n' % atf_index) - file.write('\t\t\tdescription = "TEE";\n') - file.write('\t\t\tdata = /incbin/("%s");\n' % data) - file.write('\t\t\ttype = "tee";\n') - file.write('\t\t\tarch = "arm64";\n') - file.write('\t\t\tos = "tee";\n') - file.write('\t\t\tcompression = "none";\n') - file.write('\t\t\tload = <0x%08x>;\n' % phy_addr) - file.write('\t\t\tentry = <0x%08x>;\n' % elf_entry) - file.write('\t\t};\n') - file.write('\n') - -def append_fdt_node(file, dtbs): - # Append FDT nodes. 
- cnt = 1 - for dtb in dtbs: - dtname = os.path.basename(dtb) - file.write('\t\tfdt_%d {\n' % cnt) - file.write('\t\t\tdescription = "%s";\n' % dtname) - file.write('\t\t\tdata = /incbin/("%s");\n' % dtb) - file.write('\t\t\ttype = "flat_dt";\n') - file.write('\t\t\tcompression = "none";\n') - file.write('\t\t};\n') - file.write('\n') - cnt = cnt + 1 - -def append_conf_section(file, cnt, dtname, segments): - file.write('\t\tconfig_%d {\n' % cnt) - file.write('\t\t\tdescription = "%s";\n' % dtname) - file.write('\t\t\tfirmware = "atf_1";\n') - file.write('\t\t\tloadables = "uboot"') - if segments > 1: - file.write(',') - for i in range(1, segments): - file.write('"atf_%d"' % (i + 1)) - if i != (segments - 1): - file.write(',') - else: - file.write(';\n') - if segments <= 1: - file.write(';\n') - file.write('\t\t\tfdt = "fdt_%d";\n' % cnt) - file.write('\t\t};\n') - file.write('\n') - -def append_conf_node(file, dtbs, segments): - # Append configeration nodes. - cnt = 1 - file.write('\tconfigurations {\n') - file.write('\t\tdefault = "config_1";\n') - for dtb in dtbs: - dtname = os.path.basename(dtb) - append_conf_section(file, cnt, dtname, segments) - cnt = cnt + 1 - file.write('\t};\n') - file.write('\n') - -def generate_atf_fit_dts_uboot(fit_file, uboot_file_name): - segments = unpack_elf(uboot_file_name) - if len(segments) != 1: - raise ValueError("Invalid u-boot ELF image '%s'" % uboot_file_name) - index, entry, p_paddr, data = segments[0] - fit_file.write(DT_UBOOT % p_paddr) - -def generate_atf_fit_dts_bl31(fit_file, bl31_file_name, tee_file_name, dtbs_file_name): - segments = unpack_elf(bl31_file_name) - for index, entry, paddr, data in segments: - append_bl31_node(fit_file, index + 1, paddr, entry) - num_segments = len(segments) - - if tee_file_name: - tee_segments = unpack_elf(tee_file_name) - for index, entry, paddr, data in tee_segments: - append_tee_node(fit_file, num_segments + index + 1, paddr, entry) - num_segments = num_segments + len(tee_segments) - - append_fdt_node(fit_file, dtbs_file_name) - fit_file.write(DT_IMAGES_NODE_END) - append_conf_node(fit_file, dtbs_file_name, num_segments) - -def generate_atf_fit_dts(fit_file_name, bl31_file_name, tee_file_name, uboot_file_name, dtbs_file_name): - # Generate FIT script for ATF image. 
- if fit_file_name != sys.stdout: - fit_file = open(fit_file_name, "wb") - else: - fit_file = sys.stdout - - fit_file.write(DT_HEADER) - generate_atf_fit_dts_uboot(fit_file, uboot_file_name) - generate_atf_fit_dts_bl31(fit_file, bl31_file_name, tee_file_name, dtbs_file_name) - fit_file.write(DT_END) - - if fit_file_name != sys.stdout: - fit_file.close() - -def generate_atf_binary(bl31_file_name): - for index, entry, paddr, data in unpack_elf(bl31_file_name): - file_name = 'bl31_0x%08x.bin' % paddr - with open(file_name, "wb") as atf: - atf.write(data) - -def generate_tee_binary(tee_file_name): - if tee_file_name: - for index, entry, paddr, data in unpack_elf(tee_file_name): - file_name = 'tee_0x%08x.bin' % paddr - with open(file_name, "wb") as atf: - atf.write(data) - -def unpack_elf(filename): - with open(filename, 'rb') as file: - elf = file.read() - if elf[0:7] != b'\x7fELF\x02\x01\x01' or elf[18:20] != b'\xb7\x00': - raise ValueError("Invalid arm64 ELF file '%s'" % filename) - - e_entry, e_phoff = struct.unpack_from('<2Q', elf, 0x18) - e_phentsize, e_phnum = struct.unpack_from('<2H', elf, 0x36) - segments = [] - - for index in range(e_phnum): - offset = e_phoff + e_phentsize * index - p_type, p_flags, p_offset = struct.unpack_from('<LLQ', elf, offset) - if p_type == 1: # PT_LOAD - p_paddr, p_filesz = struct.unpack_from('<2Q', elf, offset + 0x18) - if p_filesz > 0: - p_data = elf[p_offset:p_offset + p_filesz] - segments.append((index, e_entry, p_paddr, p_data)) - return segments - -def main(): - uboot_elf = "./u-boot" - fit_its = sys.stdout - if "BL31" in os.environ: - bl31_elf=os.getenv("BL31"); - elif os.path.isfile("./bl31.elf"): - bl31_elf = "./bl31.elf" - else: - os.system("echo 'int main(){}' > bl31.c") - os.system("${CROSS_COMPILE}gcc -c bl31.c -o bl31.elf") - bl31_elf = "./bl31.elf" - logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG) - logging.warning(' BL31 file bl31.elf NOT found, resulting binary is non-functional') - logging.warning(' Please read Building section in doc/README.rockchip') - - if "TEE" in os.environ: - tee_elf = os.getenv("TEE") - elif os.path.isfile("./tee.elf"): - tee_elf = "./tee.elf" - else: - tee_elf = "" - - opts, args = getopt.getopt(sys.argv[1:], "o:u:b:t:h") - for opt, val in opts: - if opt == "-o": - fit_its = val - elif opt == "-u": - uboot_elf = val - elif opt == "-b": - bl31_elf = val - elif opt == "-t": - tee_elf = val - elif opt == "-h": - print(__doc__) - sys.exit(2) - - dtbs = args - - generate_atf_fit_dts(fit_its, bl31_elf, tee_elf, uboot_elf, dtbs) - generate_atf_binary(bl31_elf) - generate_tee_binary(tee_elf) - -if __name__ == "__main__": - main() diff --git a/boot/Kconfig b/boot/Kconfig index ae1a37ae92..1ae5b3a793 100644 --- a/boot/Kconfig +++ b/boot/Kconfig @@ -280,7 +280,6 @@ config USE_SPL_FIT_GENERATOR config SPL_FIT_GENERATOR string ".its file generator script for U-Boot FIT image" depends on USE_SPL_FIT_GENERATOR - default "arch/arm/mach-rockchip/make_fit_atf.py" if SPL_LOAD_FIT && ARCH_ROCKCHIP default "arch/arm/mach-zynqmp/mkimage_fit_atf.sh" if SPL_LOAD_FIT && ARCH_ZYNQMP help Specifies a (platform specific) script file to generate the FIT

Good Evening,
I successfully tested your v2 patch series to create a bootable sdcard image out of the box for rockpro64-rk3399. Unfortunately, rk356x and rk3399-spi modes are broken, due to the inability to pass multiple images to mkimage at the same time. rk3399-spi mode is already supported manually, see: https://elixir.bootlin.com/u-boot/v2022.04-rc3/source/doc/board/rockchip/roc...
rk356x is currently only supported manually; the image built by the old Makefile method (u-boot-rockchip.bin) is non-functional.
Knowing absolutely nothing about python, I've hacked together something that works for splitting the image in the way mkimage expects. The file name passed to mkimage with the -d flag is: ./mkimage.simple-bin.mkimage.1:./mkimage.simple-bin.mkimage.2
I definitely don't expect this to be accepted as is, I just use it as an example of what we need to fully support this in binman. Adding the following allows me to build images automatically for rk356x:
    mkimage {
        args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
        mkimage,separate_files;

        ddrl {
            type = "blob-ext";
            filename = "rk3568_ddr_1560MHz_v1.12.bin";
        };

        u-boot-spl {
        };
    };
This is my first attempt to use in-reply-to, so I hope this works.
Thanks, Peter Geis
Signed-off-by: Peter Geis pgwipeout@gmail.com --- tools/binman/entry.py | 43 ++++++++++++++++++++++++++--------- tools/binman/etype/mkimage.py | 3 ++- 2 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 249f117ace56..48e552fc6af3 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -114,6 +114,8 @@ class Entry(object): self.bintools = {} self.missing_bintools = [] self.update_hash = True + self.fname_tmp = str() + self.index = 0
@staticmethod def FindEntryClass(etype, expanded): @@ -1134,7 +1136,7 @@ features to produce new behaviours. """ self.update_hash = update_hash
- def collect_contents_to_file(self, entries, prefix, fake_size=0): + def collect_contents_to_file(self, entries, prefix, fake_size=0, separate=False): """Put the contents of a list of entries into a file
Args: @@ -1152,13 +1154,32 @@ features to produce new behaviours. str: Unique portion of filename (or None if no data) """ data = b'' - for entry in entries: - # First get the input data and put it in a file. If not available, - # try later. - if not entry.ObtainContents(fake_size=fake_size): - return None, None, None - data += entry.GetData() - uniq = self.GetUniqueName() - fname = tools.get_output_filename(f'{prefix}.{uniq}') - tools.write_file(fname, data) - return data, fname, uniq + if separate is False: + for entry in entries: + # First get the input data and put it in a file. If not available, + # try later. + if not entry.ObtainContents(fake_size=fake_size): + return None, None, None + data += entry.GetData() + uniq = self.GetUniqueName() + fname = tools.get_output_filename(f'{prefix}.{uniq}') + tools.write_file(fname, data) + return data, fname, uniq + else: + for entry in entries: + self.index = (self.index + 1) + if (self.index > 2): + print('BINMAN Warn: mkimage only supports a maximum of two separate files') + break + # First get the input data and put it in a file. If not available, + # try later. + if not entry.ObtainContents(fake_size=fake_size): + return None, None, None + data = entry.GetData() + uniq = self.GetUniqueName() + fname = tools.get_output_filename(f'{prefix}.{uniq}.{self.index}') + tools.write_file(fname, data) + self.fname_tmp = [''.join(self.fname_tmp),fname] + fname = ':'.join(self.fname_tmp) + uniq = self.GetUniqueName() + return data, fname, uniq diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 5f6def2287f6..ce5f6b6b543a 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -46,6 +46,7 @@ class Entry_mkimage(Entry): def __init__(self, section, etype, node): super().__init__(section, etype, node) self._args = fdt_util.GetArgs(self._node, 'args') + self._mkimage_separate = fdt_util.GetBool(self._node, 'mkimage,separate_files') self._mkimage_entries = OrderedDict() self.align_default = None self.ReadEntries() @@ -53,7 +54,7 @@ class Entry_mkimage(Entry): def ObtainContents(self): # Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file( - self._mkimage_entries.values(), 'mkimage', 1024) + self._mkimage_entries.values(), 'mkimage', 1024, self._mkimage_separate) if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq)

On 01/03/2022 05:48, Peter Geis wrote:
Good Evening,
I successfully tested your v2 patch series to create a bootable sdcard image out of the box for rockpro64-rk3399. Unfortunately, rk356x and rk3399-spi modes are broken, due to the inability to pass multiple images to mkimage at the same time. rk3399-spi mode is already supported manually, see: https://elixir.bootlin.com/u-boot/v2022.04-rc3/source/doc/board/rockchip/roc...
rk356x is currently only supported manually, the image built by the old Makefile method is non functional. (u-boot-rockchip.bin)
Knowing absolutely nothing about python, I've hacked together something that works for splitting the image in the way mkimage expects. The file name passed to mkimage with the -d flag is: ./mkimage.simple-bin.mkimage.1:./mkimage.simple-bin.mkimage.2
I definitely don't expect this to be accepted as is, I just use it as an example of what we need to fully support this in binman. Adding the following allows me to build images automatically for rk356x:
mkimage { args = "-n", CONFIG_SYS_SOC, "-T", "rksd"; mkimage,separate_files;
Adding a property to toggle this sounds reasonable to me. The prefix might not be necessary, and I think dashes are preferred to underscores in property names.
ddrl { type = "blob-ext"; filename = "rk3568_ddr_1560MHz_v1.12.bin"; };
u-boot-spl { }; };
This is my first attempt to use in-reply-to, so I hope this works.
FYI, I see it as a reply to 00/25 of the series.
Thanks, Peter Geis
Signed-off-by: Peter Geis pgwipeout@gmail.com
tools/binman/entry.py | 43 ++++++++++++++++++++++++++--------- tools/binman/etype/mkimage.py | 3 ++- 2 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 249f117ace56..48e552fc6af3 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -114,6 +114,8 @@ class Entry(object): self.bintools = {} self.missing_bintools = [] self.update_hash = True
self.fname_tmp = str()
self.index = 0
@staticmethod def FindEntryClass(etype, expanded):
@@ -1134,7 +1136,7 @@ features to produce new behaviours. """ self.update_hash = update_hash
- def collect_contents_to_file(self, entries, prefix, fake_size=0):
def collect_contents_to_file(self, entries, prefix, fake_size=0, separate=False): """Put the contents of a list of entries into a file
Args:
@@ -1152,13 +1154,32 @@ features to produce new behaviours. str: Unique portion of filename (or None if no data) """ data = b''
for entry in entries:
# First get the input data and put it in a file. If not available,
# try later.
if not entry.ObtainContents(fake_size=fake_size):
return None, None, None
data += entry.GetData()
uniq = self.GetUniqueName()
fname = tools.get_output_filename(f'{prefix}.{uniq}')
tools.write_file(fname, data)
return data, fname, uniq
if separate is False:
for entry in entries:
# First get the input data and put it in a file. If not available,
# try later.
if not entry.ObtainContents(fake_size=fake_size):
return None, None, None
data += entry.GetData()
uniq = self.GetUniqueName()
fname = tools.get_output_filename(f'{prefix}.{uniq}')
tools.write_file(fname, data)
return data, fname, uniq
else:
for entry in entries:
self.index = (self.index + 1)
if (self.index > 2):
print('BINMAN Warn: mkimage only supports a maximum of two separate files')
break
# First get the input data and put it in a file. If not available,
# try later.
if not entry.ObtainContents(fake_size=fake_size):
return None, None, None
data = entry.GetData()
uniq = self.GetUniqueName()
fname = tools.get_output_filename(f'{prefix}.{uniq}.{self.index}')
tools.write_file(fname, data)
self.fname_tmp = [''.join(self.fname_tmp),fname]
fname = ':'.join(self.fname_tmp)
uniq = self.GetUniqueName()
return data, fname, uniq
I would keep this function as-is and call it multiple times in the mkimage etype code below (once per subentry), and also do the mkimage-specific checks and 'file1:file2' argument joining there as well.
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 5f6def2287f6..ce5f6b6b543a 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -46,6 +46,7 @@ class Entry_mkimage(Entry): def __init__(self, section, etype, node): super().__init__(section, etype, node) self._args = fdt_util.GetArgs(self._node, 'args')
self._mkimage_separate = fdt_util.GetBool(self._node, 'mkimage,separate_files') self._mkimage_entries = OrderedDict() self.align_default = None self.ReadEntries()
@@ -53,7 +54,7 @@ class Entry_mkimage(Entry): def ObtainContents(self): # Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage', 1024)
self._mkimage_entries.values(), 'mkimage', 1024, self._mkimage_separate) if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq)
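A rough, untested sketch of that suggestion: keep collect_contents_to_file() unchanged, call it once per subentry from the mkimage etype, and do the 'file1:file2' joining there. Names follow the patch above, not mainline:

    def ObtainContents(self):
        # Use a non-zero size for any fake files to keep mkimage happy
        if not self._mkimage_separate:
            data, input_fname, uniq = self.collect_contents_to_file(
                self._mkimage_entries.values(), 'mkimage', 1024)
            if data is None:
                return False
        else:
            # One temporary file per subentry, joined for mkimage's -d option
            input_fnames = []
            for index, entry in enumerate(self._mkimage_entries.values()):
                data, fname, uniq = self.collect_contents_to_file(
                    [entry], f'mkimage.{index}', 1024)
                if data is None:
                    return False
                input_fnames.append(fname)
            input_fname = ':'.join(input_fnames)
        output_fname = tools.get_output_filename('mkimage-out.%s' % uniq)
        ...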

On Thu, Mar 3, 2022 at 4:17 PM Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 01/03/2022 05:48, Peter Geis wrote:
Good Evening,
I successfully tested your v2 patch series to create a bootable sdcard image out of the box for rockpro64-rk3399. Unfortunately, rk356x and rk3399-spi modes are broken, due to the inability to pass multiple images to mkimage at the same time. rk3399-spi mode is already supported manually, see: https://elixir.bootlin.com/u-boot/v2022.04-rc3/source/doc/board/rockchip/roc...
rk356x is currently only supported manually, the image built by the old Makefile method is non functional. (u-boot-rockchip.bin)
Knowing absolutely nothing about python, I've hacked together something that works for splitting the image in the way mkimage expects. The file name passed to mkimage with the -d flag is: ./mkimage.simple-bin.mkimage.1:./mkimage.simple-bin.mkimage.2
I definitely don't expect this to be accepted as is, I just use it as an example of what we need to fully support this in binman. Adding the following allows me to build images automatically for rk356x:
mkimage { args = "-n", CONFIG_SYS_SOC, "-T", "rksd"; mkimage,separate_files;
Adding a property to toggle this sounds reasonable to me. The prefix might not be necessary, and I think dashes are preferred to underscores in property names.
Thanks! I thought including mkimage as a prefix was necessary to make it clear what was happening, but on second thought it's in the mkimage node.
ddrl { type = "blob-ext"; filename = "rk3568_ddr_1560MHz_v1.12.bin"; }; u-boot-spl { };
};
This is my first attempt to use in-reply-to, so I hope this works.
FYI, I see it as a reply to 00/25 of the series.
I appreciate the confirmation.
Thanks, Peter Geis
Signed-off-by: Peter Geis pgwipeout@gmail.com
tools/binman/entry.py | 43 ++++++++++++++++++++++++++--------- tools/binman/etype/mkimage.py | 3 ++- 2 files changed, 34 insertions(+), 12 deletions(-)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 249f117ace56..48e552fc6af3 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -114,6 +114,8 @@ class Entry(object): self.bintools = {} self.missing_bintools = [] self.update_hash = True
self.fname_tmp = str()
self.index = 0
@staticmethod def FindEntryClass(etype, expanded):
@@ -1134,7 +1136,7 @@ features to produce new behaviours. """ self.update_hash = update_hash
- def collect_contents_to_file(self, entries, prefix, fake_size=0):
def collect_contents_to_file(self, entries, prefix, fake_size=0, separate=False): """Put the contents of a list of entries into a file
Args:
@@ -1152,13 +1154,32 @@ features to produce new behaviours. str: Unique portion of filename (or None if no data) """ data = b''
for entry in entries:
# First get the input data and put it in a file. If not available,
# try later.
if not entry.ObtainContents(fake_size=fake_size):
return None, None, None
data += entry.GetData()
uniq = self.GetUniqueName()
fname = tools.get_output_filename(f'{prefix}.{uniq}')
tools.write_file(fname, data)
return data, fname, uniq
if separate is False:
for entry in entries:
# First get the input data and put it in a file. If not available,
# try later.
if not entry.ObtainContents(fake_size=fake_size):
return None, None, None
data += entry.GetData()
uniq = self.GetUniqueName()
fname = tools.get_output_filename(f'{prefix}.{uniq}')
tools.write_file(fname, data)
return data, fname, uniq
else:
for entry in entries:
self.index = (self.index + 1)
if (self.index > 2):
print('BINMAN Warn: mkimage only supports a maximum of two separate files')
break
# First get the input data and put it in a file. If not available,
# try later.
if not entry.ObtainContents(fake_size=fake_size):
return None, None, None
data = entry.GetData()
uniq = self.GetUniqueName()
fname = tools.get_output_filename(f'{prefix}.{uniq}.{self.index}')
tools.write_file(fname, data)
self.fname_tmp = [''.join(self.fname_tmp),fname]
fname = ':'.join(self.fname_tmp)
uniq = self.GetUniqueName()
return data, fname, uniq
I would keep this function as-is and call it multiple times in the mkimage etype code below (once per subentry), and also do the mkimage-specific checks and 'file1:file2' argument joining there as well.
Hmm, thinking about it I think I can pull this off. My first (non public) attempt was something similar to this, which just hardcoded the first file name.
I'm not kidding when I said all the python I know I learned to make this happen, so it's a learning event. Thanks for the ideas!
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 5f6def2287f6..ce5f6b6b543a 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -46,6 +46,7 @@ class Entry_mkimage(Entry): def __init__(self, section, etype, node): super().__init__(section, etype, node) self._args = fdt_util.GetArgs(self._node, 'args')
self._mkimage_separate = fdt_util.GetBool(self._node, 'mkimage,separate_files') self._mkimage_entries = OrderedDict() self.align_default = None self.ReadEntries()
@@ -53,7 +54,7 @@ class Entry_mkimage(Entry): def ObtainContents(self): # Use a non-zero size for any fake files to keep mkimage happy data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage', 1024)
self._mkimage_entries.values(), 'mkimage', 1024, self._mkimage_separate) if data is None: return False output_fname = tools.get_output_filename('mkimage-out.%s' % uniq)

mkimage has the ability to process two files at the same time. This is necessary for rk356x support as both TPL and SPL need to be hashed individually in the resulting header. It also eases support for rkspi as mkimage handles everything for making the requisite file for maskrom loading.
Add a new flag "separate-files" for mkimage handling to gather the files separately rather than combining them before passing them to mkimage.
Signed-off-by: Peter Geis pgwipeout@gmail.com --- Changelog: v2: I've managed to move this all into mkimage.py as per Alper's suggestion. I've added an example to the readme portion of the function. mkimage,separate_files is now separate-files.
tools/binman/etype/mkimage.py | 41 ++++++++++++++++++++++++++++++++--- 1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 5f6def2287f6..0a86f904a2b7 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -17,6 +17,7 @@ class Entry_mkimage(Entry): Properties / Entry arguments: - datafile: Filename for -d argument - args: Other arguments to pass + - separate-files: Boolean flag for passing files individually.
The data passed to mkimage is collected from subnodes of the mkimage node, e.g.:: @@ -42,20 +43,54 @@ class Entry_mkimage(Entry): }; };
+ This calls mkimage to create a rksd image with a standalone ram init + binary file and u-boot-spl.bin as individual input files. The output from + mkimage then becomes part of the image produced by binman. + + mkimage { + args = "-n", CONFIG_SYS_SOC, "-T", "rksd"; + separate-files; + + ddrl { + type = "blob-ext"; + filename = "rk3568_ddr_1560MHz_v1.12.bin"; + }; + + u-boot-spl { + }; + }; + """ def __init__(self, section, etype, node): super().__init__(section, etype, node) self._args = fdt_util.GetArgs(self._node, 'args') + self._separate = fdt_util.GetBool(self._node, 'separate-files') self._mkimage_entries = OrderedDict() self.align_default = None self.ReadEntries() + self.index = 0 + self.fname_tmp = str()
def ObtainContents(self): # Use a non-zero size for any fake files to keep mkimage happy - data, input_fname, uniq = self.collect_contents_to_file( - self._mkimage_entries.values(), 'mkimage', 1024) - if data is None: + if self._separate is False: + data, input_fname, uniq = self.collect_contents_to_file( + self._mkimage_entries.values(), 'mkimage', 1024) + if data is None: return False + else: + for entry in self._mkimage_entries.values(): + self.index = (self.index + 1) + if (self.index > 2): + print('BINMAN Warn: mkimage only supports a maximum of two separate files') + return False + input_fname = ['mkimage', str(self.index)] + data, input_fname, uniq = self.collect_contents_to_file( + [entry], ".".join(input_fname), 1024) + if data is None: + return False + self.fname_tmp = [''.join(self.fname_tmp),input_fname] + input_fname = ":".join(self.fname_tmp) output_fname = tools.get_output_filename('mkimage-out.%s' % uniq) if self.mkimage.run_cmd('-d', input_fname, *self._args, output_fname) is not None:

Hi Peter,
On Fri, 4 Mar 2022 at 12:56, Peter Geis pgwipeout@gmail.com wrote:
mkimage has the ability to process two files at the same time. This is necessary for rk356x support as both TPL and SPL need to be hashed individually in the resulting header. It also eases support for rkspi as mkimage handles everything for making the requisite file for maskrom loading.
This makes me wonder if we should move that functionality out of mkimage and into binman?
Add a new flag "separate_files" for mkimage handling to gather the files separately rather than combining them before passing them to mkimage.
Signed-off-by: Peter Geis pgwipeout@gmail.com
Changelog: v2: I've managed to move this all into mkimage.py as per Alper's suggestion. I've added an example to the readme portion of the function. mkimage,separate_files is now separate-files.
tools/binman/etype/mkimage.py | 41 ++++++++++++++++++++++++++++++++--- 1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 5f6def2287f6..0a86f904a2b7 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -17,6 +17,7 @@ class Entry_mkimage(Entry): Properties / Entry arguments: - datafile: Filename for -d argument - args: Other arguments to pass
- separate-files: Boolean flag for passing files individually.
The data passed to mkimage is collected from subnodes of the mkimage node, e.g.::
@@ -42,20 +43,54 @@ class Entry_mkimage(Entry): }; };
This calls mkimage to create a rksd image with a standalone ram init
binary file and u-boot-spl.bin as individual input files. The output from
mkimage then becomes part of the image produced by binman.
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
separate-files;
ddrl {
type = "blob-ext";
filename = "rk3568_ddr_1560MHz_v1.12.bin";
};
u-boot-spl {
};
};
""" def __init__(self, section, etype, node): super().__init__(section, etype, node) self._args = fdt_util.GetArgs(self._node, 'args')
self._separate = fdt_util.GetBool(self._node, 'separate-files') self._mkimage_entries = OrderedDict() self.align_default = None self.ReadEntries()
self.index = 0
self.fname_tmp = str()
def ObtainContents(self): # Use a non-zero size for any fake files to keep mkimage happy
data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage', 1024)
if data is None:
if self._separate is False:
data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage', 1024)
if data is None: return False
else:
for entry in self._mkimage_entries.values():
We can do:
for index, entry in enumerate(self._mkimage_entries.values()):
then you don't need self.index
self.index = (self.index + 1)
if (self.index > 2):
print('BINMAN Warn: mkimage only supports a maximum of two separate files')
Can we use self.Raise() here instead? It seems like a fatal error. Also this check should go in ReadNode() since we don't want to die this late in the picture if we know it is wrong upfront. (BTW I am moving the node-reading code to ReadNode() in my v3 series as suggested by Alper).
return False
input_fname = ['mkimage', str(self.index)]
data, input_fname, uniq = self.collect_contents_to_file(
[entry], ".".join(input_fname), 1024)
I suppose we can just use
data = entry.GetData()
here?
if data is None:
return False
self.fname_tmp = [''.join(self.fname_tmp),input_fname]
input_fname = ":".join(self.fname_tmp) output_fname = tools.get_output_filename('mkimage-out.%s' % uniq) if self.mkimage.run_cmd('-d', input_fname, *self._args, output_fname) is not None:
-- 2.25.1
Looks OK to me, needs a test or two.
Regards, Simon

On Sat, Mar 5, 2022 at 10:08 PM Simon Glass sjg@chromium.org wrote:
Hi Peter,
Good Morning,
On Fri, 4 Mar 2022 at 12:56, Peter Geis pgwipeout@gmail.com wrote:
mkimage has the ability to process two files at the same time. This is necessary for rk356x support as both TPL and SPL need to be hashed individually in the resulting header. It also eases support for rkspi as mkimage handles everything for making the requisite file for maskrom loading.
This makes me wonder if we should move that functionality out of mkimage and into binman?
Rockchip rk32 and rk33 maskrom loading from SPI has a bug that causes it to only load 2048 bytes out of each 4096-byte chunk. RKSPI splits the TPL/SPL (the portion directly loaded by maskrom) into 2048-byte chunks, then pads each chunk with blank space so the image can load correctly. While it could be moved to binman, as far as I'm aware this is a corner case and I don't know any other chip that would need this functionality.
Add a new flag "separate_files" for mkimage handling to gather the files separately rather than combining them before passing them to mkimage.
Signed-off-by: Peter Geis pgwipeout@gmail.com
Changelog: v2: I've managed to move this all into mkimage.py as per Alper's suggestion. I've added an example to the readme portion of the function. mkimage,separate_files is now separate-files.
tools/binman/etype/mkimage.py | 41 ++++++++++++++++++++++++++++++++--- 1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 5f6def2287f6..0a86f904a2b7 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -17,6 +17,7 @@ class Entry_mkimage(Entry): Properties / Entry arguments: - datafile: Filename for -d argument - args: Other arguments to pass
- separate-files: Boolean flag for passing files individually.
The data passed to mkimage is collected from subnodes of the mkimage node, e.g.::
@@ -42,20 +43,54 @@ class Entry_mkimage(Entry): }; };
This calls mkimage to create a rksd image with a standalone ram init
binary file and u-boot-spl.bin as individual input files. The output from
mkimage then becomes part of the image produced by binman.
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
separate-files;
ddrl {
type = "blob-ext";
filename = "rk3568_ddr_1560MHz_v1.12.bin";
};
u-boot-spl {
};
};
""" def __init__(self, section, etype, node): super().__init__(section, etype, node) self._args = fdt_util.GetArgs(self._node, 'args')
self._separate = fdt_util.GetBool(self._node, 'separate-files') self._mkimage_entries = OrderedDict() self.align_default = None self.ReadEntries()
self.index = 0
self.fname_tmp = str()
def ObtainContents(self): # Use a non-zero size for any fake files to keep mkimage happy
data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage', 1024)
if data is None:
if self._separate is False:
data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage', 1024)
if data is None: return False
else:
for entry in self._mkimage_entries.values():
We can do:
for index, entry in enumerate(self._mkimage_entries.values()):
then you don't need self.index
Thanks!
self.index = (self.index + 1)
if (self.index > 2):
print('BINMAN Warn: mkimage only supports a maximum of two separate files')
Can we use self.Raise() here instead? It seems like a fatal error. Also this check should go in ReadNode() since we don't want to die this late in the picture if we know it is wrong upfront. (BTW I am moving the node-reading code to ReadNode() in my v3 series as suggested by Alper).
Certainly, this really would be a fatal error.
return False
input_fname = ['mkimage', str(self.index)]
data, input_fname, uniq = self.collect_contents_to_file(
[entry], ".".join(input_fname), 1024)
I suppose we can just use
data = entry.GetData()
We don't actually use data directly here; collect_contents_to_file() collects the data into separate files and passes the file name back. data is just used to tell whether that function failed; the file names are what we care about. Really, as far as I can tell, collect_contents_to_file() doesn't need to return data at all, because input_fname and uniq would be returned as None as well and either could be used for this check. uniq is also used later on (I checked: each call returns the same value, so clobbering it in each iteration doesn't cause issues).
here?
if data is None:
return False
self.fname_tmp = [''.join(self.fname_tmp),input_fname]
input_fname = ":".join(self.fname_tmp) output_fname = tools.get_output_filename('mkimage-out.%s' % uniq) if self.mkimage.run_cmd('-d', input_fname, *self._args, output_fname) is not None:
-- 2.25.1
Looks OK to me, needs a test or two.
Regards, Simon
Honestly, if you can implement this better than I did in your series, please do. As I said previously, all the python I know now I learned to make this happen, so I imagine it is not optimal.
Very Respectfully, Peter

On 06/03/2022 17:44, Peter Geis wrote:
On Sat, Mar 5, 2022 at 10:08 PM Simon Glass sjg@chromium.org wrote:
On Fri, 4 Mar 2022 at 12:56, Peter Geis pgwipeout@gmail.com wrote:
mkimage has the ability to process two files at the same time. This is necessary for rk356x support as both TPL and SPL need to be hashed individually in the resulting header. It also eases support for rkspi as mkimage handles everything for making the requisite file for maskrom loading.
This makes me wonder if we should move that functionality out of mkimage and into binman?
I think we should have entry types for the formats mkimage supports, even if they're just stubs that run 'mkimage -T <type>'. Primarily because I don't see 'mkimage' as a meaningful 'type' of an entry, but I don't know if there's a common format to all the types it supports. Creating stub entries would also let us switch to native implementations one by one if we want that later.
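As a purely hypothetical sketch of such a stub (the module, class name and import path here are invented for illustration and do not exist in the tree):

    # Hypothetical etype that behaves like the generic mkimage etype but
    # hard-wires the Rockchip 'rksd' format, so image descriptions could say
    # 'rksd { ... };' without repeating the mkimage arguments.
    from binman.etype.mkimage import Entry_mkimage

    class Entry_rksd(Entry_mkimage):
        """Rockchip SD/MMC boot image, generated by running 'mkimage -T rksd'"""
        def __init__(self, section, etype, node):
            super().__init__(section, etype, node)
            self._args += ['-T', 'rksd']

A native implementation could later replace the mkimage call without changing the image descriptions that use the stub.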
Rockchip rk32 and rk33 maskrom loading from SPI has a bug that causes it to only load 2048 bytes out of each 4096-byte chunk. RKSPI splits the TPL/SPL (the portion directly loaded by maskrom) into 2048-byte chunks, then pads each chunk with blank space so the image can load correctly. While it could be moved to binman, as far as I'm aware this is a corner case and I don't know any other chip that would need this functionality.
Do you know if rk35xx chips have the same bug? Does rkspi also do the weird chunking for those when it maybe doesn't need to?
Add a new flag "separate_files" for mkimage handling to gather the files separately rather than combining them before passing them to mkimage.
Signed-off-by: Peter Geis pgwipeout@gmail.com
Changelog: v2: I've managed to move this all into mkimage.py as per Alper's suggestion. I've added an example to the readme portion of the function. mkimage,separate_files is now separate-files.
tools/binman/etype/mkimage.py | 41 ++++++++++++++++++++++++++++++++--- 1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py index 5f6def2287f6..0a86f904a2b7 100644 --- a/tools/binman/etype/mkimage.py +++ b/tools/binman/etype/mkimage.py @@ -17,6 +17,7 @@ class Entry_mkimage(Entry): Properties / Entry arguments: - datafile: Filename for -d argument - args: Other arguments to pass
- separate-files: Boolean flag for passing files individually.
The data passed to mkimage is collected from subnodes of the mkimage node, e.g.::
@@ -42,20 +43,54 @@ class Entry_mkimage(Entry): }; };
- This calls mkimage to create a rksd image with a standalone ram init
- binary file and u-boot-spl.bin as individual input files. The output from
- mkimage then becomes part of the image produced by binman.
mkimage {
args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
separate-files;
ddrl {
type = "blob-ext";
filename = "rk3568_ddr_1560MHz_v1.12.bin";
};
u-boot-spl {
};
};
- """ def __init__(self, section, etype, node): super().__init__(section, etype, node) self._args = fdt_util.GetArgs(self._node, 'args')
self._separate = fdt_util.GetBool(self._node, 'separate-files') self._mkimage_entries = OrderedDict() self.align_default = None self.ReadEntries()
self.index = 0
self.fname_tmp = str()
These two wouldn't be useful for anything except the function below, so should be defined there as local variables.
def ObtainContents(self): # Use a non-zero size for any fake files to keep mkimage happy
data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage', 1024)
if data is None:
if self._separate is False:
data, input_fname, uniq = self.collect_contents_to_file(
self._mkimage_entries.values(), 'mkimage', 1024)
if data is None: return False
else:
for entry in self._mkimage_entries.values():
We can do:
for index, entry in enumerate(self._mkimage_entries.values()):
then you don't need self.index
Thanks!
self.index = (self.index + 1)
if (self.index > 2):
print('BINMAN Warn: mkimage only supports a maximum of two separate files')
Can we use self.Raise() here instead? It seems like a fatal error. Also this check should go in ReadNode() since we don't want to die this late in the picture if we know it is wrong upfront. (BTW I am moving the node-reading code to ReadNode() in my v3 series as suggested by Alper).
Certainly, this really would be a fatal error.
(The "BINMAN Warn:" prefix should also be dropped with self.Raise())
return False
input_fname = ['mkimage', str(self.index)]
data, input_fname, uniq = self.collect_contents_to_file(
[entry], ".".join(input_fname), 1024)
This could have f"mkimage.{index}" as the prefix argument instead.
I suppose we can just use
data = entry.GetData()
We don't actually use data directly here, collect_contents_to_file collects the data into separate files and passes the file name back. data is just used to tell if that function failed, the file names are what we care about. Really as far as I can tell collect_contents_to_file doesn't need to pass data back at all, because input_fname and uniq would be returned False as well and either could be used for this check. uniq is also used later on (I checked, each time returns the same value so clobbering it in each iteration doesn't cause issues).
Yeah, I think collect_contents_to_file() is better here for the one-entry case as well. The alternative is doing exactly everything it does manually.
here?
if data is None:
return False
self.fname_tmp = [''.join(self.fname_tmp),input_fname]
input_fname = ":".join(self.fname_tmp)
Instead of these, I'd set input_fnames = [] before the for loop, use input_fnames.append(input_fname) inside the loop to collect the names, then ":".join(input_fnames) to build the argument.
output_fname = tools.get_output_filename('mkimage-out.%s' % uniq) if self.mkimage.run_cmd('-d', input_fname, *self._args, output_fname) is not None:
-- 2.25.1
Looks OK to me, needs a test or two.
Regards, Simon
Honestly, if you can implement this better than I did in your series, please do. As I said previously, all the python I know now I learned to make this happen, so I imagine it is not optimal.
Well, you're 90% there already (with the other 90% being the tests...), but if you don't feel like working on this, tell me and I can do so in a few days.
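Putting the review points above together, a possible shape for the check is sketched below (untested, using the names from the patch under review; whether it lives in ReadEntries() or the new ReadNode() depends on the v3 rework Simon mentions):

    def ReadEntries(self):
        ...  # read the subentries into self._mkimage_entries as before
        # Fail when the node is read, not deep inside ObtainContents()
        if self._separate and len(self._mkimage_entries) > 2:
            self.Raise('mkimage only supports a maximum of two separate files')

In the loop itself, enumerate() with an f'mkimage.{index}' prefix and an input_fnames list joined with ':' would then replace the hand-rolled self.index and self.fname_tmp juggling.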

On Thu, Mar 10, 2022 at 2:36 PM Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
On 06/03/2022 17:44, Peter Geis wrote:
On Sat, Mar 5, 2022 at 10:08 PM Simon Glass sjg@chromium.org wrote:
On Fri, 4 Mar 2022 at 12:56, Peter Geis pgwipeout@gmail.com wrote:
mkimage has the ability to process two files at the same time. This is necessary for rk356x support as both TPL and SPL need to be hashed individually in the resulting header. It also eases support for rkspi as mkimage handles everything for making the requisite file for maskrom loading.
This makes me wonder if we should move that functionality out of mkimage and into binman?
I think we should have entry types for the formats mkimage supports, even if they're just stubs that run 'mkimage -T <type>'. Primarily because I don't see 'mkimage' as a meaningful 'type' of an entry, but I don't know if there's a common format to all the types it supports. Creating stub entries would also let us switch to native implementations one by one if we want that later.
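As a rough illustration of what such a per-format stub might look like (the class and method names below are invented for this sketch and are not binman's real entry-type API):

    # Hypothetical sketch only: each supported format gets a thin subclass
    # that fixes the '-T <type>' argument and reuses generic mkimage handling.
    class Entry_mkimage_stub:
        """Generic behaviour: run 'mkimage -T <type>' on the collected data"""
        mkimage_type = None   # subclasses set this, e.g. 'rksd'

        def mkimage_args(self, extra_args):
            # '-T <type>' comes from the subclass; other args pass through
            return ['-T', self.mkimage_type] + list(extra_args)

    class Entry_rksd(Entry_mkimage_stub):
        """Stub entry for Rockchip SD/MMC boot images"""
        mkimage_type = 'rksd'

    print(Entry_rksd().mkimage_args(['-n', 'rk3568']))
    # ['-T', 'rksd', '-n', 'rk3568']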
Rockchip rk32 and rk33 maskrom loading from SPI has a bug that causes it to only load 2048 bytes out of each 4096-byte chunk. RKSPI splits the TPL/SPL (the portion directly loaded by maskrom) into 2048-byte chunks, then pads each chunk with blank space so the image can load correctly. While it could be moved to binman, as far as I'm aware this is a corner case and I don't know of any other chip that would need this functionality.
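For reference, the chunking described here boils down to something like the following standalone sketch (not the actual rkspi/mkimage code; the pad byte is assumed to be zero here):

    RKSPI_SECT_LEN = 2048   # bytes the maskrom actually loads per chunk
    CHUNK_LEN = 4096        # chunk size on the SPI flash

    def rkspi_chunk(payload):
        """Spread the payload so only the first half of each chunk is used"""
        out = bytearray()
        for off in range(0, len(payload), RKSPI_SECT_LEN):
            sect = payload[off:off + RKSPI_SECT_LEN]
            out += sect
            out += b'\0' * (CHUNK_LEN - len(sect))   # filler the maskrom skips
        return bytes(out)

    # A 6000-byte TPL/SPL becomes three 4096-byte chunks on flash
    assert len(rkspi_chunk(b'\xaa' * 6000)) == 3 * CHUNK_LEN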
Do you know if rk35xx chips have the same bug? Does rkspi also do the weird chunking for those when it maybe doesn't need to?
From my limited testing, no, rk35xx does not seem to have fixed this bug.
Thus this series gives me hope for "unified" images for SPI and MMC where the offsets are the same, and the only thing that changes is the TPL/SPL actually loaded in place.
Add a new flag "separate-files" for mkimage handling to gather the files separately rather than combining them before passing them to mkimage.
Signed-off-by: Peter Geis <pgwipeout@gmail.com>
Changelog:
v2:
- I've managed to move this all into mkimage.py as per Alper's suggestion.
- I've added an example to the readme portion of the function.
- mkimage,separate_files is now separate-files.
 tools/binman/etype/mkimage.py | 41 ++++++++++++++++++++++++++++++++---
 1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/tools/binman/etype/mkimage.py b/tools/binman/etype/mkimage.py
index 5f6def2287f6..0a86f904a2b7 100644
--- a/tools/binman/etype/mkimage.py
+++ b/tools/binman/etype/mkimage.py
@@ -17,6 +17,7 @@ class Entry_mkimage(Entry):
     Properties / Entry arguments:
         - datafile: Filename for -d argument
         - args: Other arguments to pass
+        - separate-files: Boolean flag for passing files individually.

     The data passed to mkimage is collected from subnodes of the mkimage
     node, e.g.::

@@ -42,20 +43,54 @@ class Entry_mkimage(Entry):
         };
     };
+    This calls mkimage to create a rksd image with a standalone ram init
+    binary file and u-boot-spl.bin as individual input files. The output from
+    mkimage then becomes part of the image produced by binman.
+
+        mkimage {
+            args = "-n", CONFIG_SYS_SOC, "-T", "rksd";
+            separate-files;
+
+            ddrl {
+                type = "blob-ext";
+                filename = "rk3568_ddr_1560MHz_v1.12.bin";
+            };
+
+            u-boot-spl {
+            };
+        };
     """
     def __init__(self, section, etype, node):
         super().__init__(section, etype, node)
         self._args = fdt_util.GetArgs(self._node, 'args')
+        self._separate = fdt_util.GetBool(self._node, 'separate-files')
         self._mkimage_entries = OrderedDict()
         self.align_default = None
         self.ReadEntries()
+        self.index = 0
+        self.fname_tmp = str()
These two wouldn't be useful for anything except the function below, so should be defined there as local variables.
Thanks!
     def ObtainContents(self):
         # Use a non-zero size for any fake files to keep mkimage happy
-        data, input_fname, uniq = self.collect_contents_to_file(
-            self._mkimage_entries.values(), 'mkimage', 1024)
-        if data is None:
-            return False
+        if self._separate is False:
+            data, input_fname, uniq = self.collect_contents_to_file(
+                self._mkimage_entries.values(), 'mkimage', 1024)
+            if data is None:
+                return False
+        else:
+            for entry in self._mkimage_entries.values():
We can do:

    for index, entry in enumerate(self._mkimage_entries.values()):

then you don't need self.index.
Thanks!
+                self.index = (self.index + 1)
+                if (self.index > 2):
+                    print('BINMAN Warn: mkimage only supports a maximum of two separate files')
Can we use self.Raise() here instead? It seems like a fatal error. Also, this check should go in ReadNode(), since we don't want to die this late in the process if we know it is wrong up front. (BTW I am moving the node-reading code to ReadNode() in my v3 series, as suggested by Alper.)
Certainly, this really would be a fatal error.
(The "BINMAN Warn:" prefix should also be dropped with self.Raise())
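For illustration, a minimal, self-contained sketch of the "fail early when the node is read" idea; the classes below are simplified stand-ins, not binman's real Entry/ReadNode() code:

    class Entry:
        def Raise(self, msg):
            raise ValueError('Node mkimage: %s' % msg)

    class MkimageEntry(Entry):
        def __init__(self, entry_names, separate):
            self._entries = list(entry_names)
            self._separate = separate
            self.read_node()

        def read_node(self):
            # Validate while reading the node, not when mkimage finally runs
            if self._separate and len(self._entries) > 2:
                self.Raise('mkimage only supports a maximum of two separate files')

    MkimageEntry(['ddrl', 'u-boot-spl'], separate=True)    # accepted
    # MkimageEntry(['a', 'b', 'c'], separate=True)         # would raise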
+                    return False
+                input_fname = ['mkimage', str(self.index)]
+                data, input_fname, uniq = self.collect_contents_to_file(
+                    [entry], ".".join(input_fname), 1024)
This could have f"mkimage.{index}" as the prefix argument instead.
I have no idea what that means.
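In other words, the f-string builds the same prefix string in one step; a quick illustration (just these lines, nothing binman-specific):

    index = 1
    prefix = ".".join(['mkimage', str(index)])   # 'mkimage.1' (current approach)
    prefix = f"mkimage.{index}"                  # 'mkimage.1' (suggested form)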
I suppose we can just use

    data = entry.GetData()

here?
We don't actually use data directly here; collect_contents_to_file() collects the data into separate files and passes the file name back. data is just used to tell whether that function failed; the file names are what we care about. Really, as far as I can tell, collect_contents_to_file() doesn't need to pass data back at all, because input_fname and uniq would be returned as False as well and either could be used for this check. uniq is also used later on (I checked: each call returns the same value, so clobbering it in each iteration doesn't cause issues).
Yeah, I think collect_contents_to_file() is better here for the one-entry case as well. The alternative is doing exactly everything it does manually.
Yeah, v1 actually implemented all of that in a modified collect_contents_to_file(), so this is much simpler already.
+                if data is None:
+                    return False
+                self.fname_tmp = [''.join(self.fname_tmp), input_fname]
+                input_fname = ":".join(self.fname_tmp)
Instead of these, I'd set input_fnames = [] before the for loop, use input_fnames.append(input_fname) inside the loop to collect the names, then ":".join(input_fnames) to build the argument.
Ah, there are the little Python details I don't know about yet.
         output_fname = tools.get_output_filename('mkimage-out.%s' % uniq)
         if self.mkimage.run_cmd('-d', input_fname, *self._args,
                                 output_fname) is not None:
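Putting the review suggestions together (enumerate() instead of self.index, the two-file check moved to node-reading time, an input_fnames list joined with ':'), the separate-files branch might read roughly like this. It is only a sketch of the method body based on the quoted patch, not the final implementation; the trailing mkimage invocation is assumed to stay as it is above:

    def ObtainContents(self):
        # Use a non-zero size for any fake files to keep mkimage happy
        if not self._separate:
            data, input_fname, uniq = self.collect_contents_to_file(
                self._mkimage_entries.values(), 'mkimage', 1024)
            if data is None:
                return False
        else:
            input_fnames = []
            for index, entry in enumerate(self._mkimage_entries.values()):
                data, fname, uniq = self.collect_contents_to_file(
                    [entry], f'mkimage.{index}', 1024)
                if data is None:
                    return False
                input_fnames.append(fname)
            input_fname = ':'.join(input_fnames)
        # ... then run mkimage with '-d', input_fname as above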
--
2.25.1
Looks OK to me, needs a test or two.
Regards, Simon
Honestly, if you can implement this better than I did in your series, please do. As I said previously, all the Python I know now, I learned to make this happen, so I imagine it is not optimal.
Well, you're 90% there already (with the other 90% being the tests...), but if you don't feel like working on this, tell me and I can do so in a few days.
If you would like to, please do! Otherwise, I will continue stumbling around with it once this series settles down.
Thanks, Peter