[PATCH v2 00/43] labgrid: Provide an integration with Labgrid

Labgrid provides access to a hardware lab in an automated way. It is possible to boot U-Boot on boards in the lab without physically touching them. It relies on relays, USB UARTs and SD muxes, among other things.
By way of background, about 4 years ago I wrote a thing called Labman[1] which allowed my lab of about 30 devices to be operated remotely, using tbot for the console and build integration. While it worked OK and I used it for many bisects, I didn't take it any further.
It turns out that there was already a program, called Labgrid, which I did not know about at the time (thank you, Tom, for telling me). It is more rounded than Labman and has a number of advantages:
- does not need udev rules, mostly
- has several existing users who rely on it
- supports multiple machines exporting their devices
It lacks a 'lab check' feature and a few other things, but these can be remedied.
On and off over the past several weeks I have been experimenting with Labgrid. I have managed to create an initial U-Boot integration (this series) by adding various features to Labgrid[2] and the U-Boot test hooks[3].
I hope that this might inspire others to set up boards and run tests automatically, rather than relying on infrequent, manual testing. Perhaps it may even be possible to have a number of labs available.
Included in the integration are a number of simple scripts which make it easy to connect to boards and run tests:
ub-int <target> Build and boot on a target, starting an interactive session
ub-cli <target> Build and boot on a target, ensure U-Boot starts and provide an interactive session from there
ub-smoke <target> Smoke test U-Boot to check that it boots to a prompt on a target
ub-bisect Bisect a git tree to locate a failure on a particular target
ub-pyt <target> <testspec> Run U-Boot pytests on a target
Some of these provide much simpler versions of the tbot[4] workflow which I have relied on for several years.
The goal here is to create some sort of script which can collect patches from the mailing list, apply them and test them on a selection of boards. I suspect that script already exists, so please let me know what you suggest.
I hope you find this interesting and take a look!
[1] https://github.com/sjg20/u-boot/tree/lab6a
[2] https://github.com/labgrid-project/labgrid/pull/1411
[3] https://github.com/sjg20/uboot-test-hooks/tree/labgrid
[4] https://tbot.tools/index.html
Changes in v2:
- Only disable echo if a terminal is detected
- Add new patch to update u-boot.cfg with CFG_... options
- Avoid running a docker image for skipped lab tests
Simon Glass (43):
  trace: Update test to tolerate different trace-cmd version
  binman: efi: Correct entry docs
  binman: Regenerate nxp docs
  binman: ti: Regenerate entry docs
  binman: Update the entrydocs header
  buildman: Make mrproper an argument to _reconfigure()
  buildman: Make mrproper an argument to _config_and_build()
  buildman: Make mrproper an argument to run_commit()
  buildman: Avoid rebuilding when --mrproper is used
  buildman: Add a flag to force mrproper on failure
  buildman: Retry the build for current source
  buildman: Add a way to limit the number of buildmans
  dm: core: Enhance comments on bind_drivers_pass()
  initcall: Correct use of relocation offset
  am33xx: Provide a function to set up the debug UART
  sunxi: Mark scp as optional
  google: Disable TPMv2 on most Chromebooks
  meson: Correct driver declaration for meson_axg_gpio
  test: Allow signaling that U-Boot is ready
  test: Make bootstd init run only on sandbox
  test: Use a constant for the test timeout
  test: Pass stderr to stdout
  test: Release board after tests complete
  log: Allow tests to pass with CONFIG_LOGF_FUNC_PAD set
  test: Allow connecting to a running board
  test: Decode exceptions only with sandbox
  test: Avoid failing skipped tests
  test: dm: Show failing driver name
  test: Check help output
  test: Create a common function to get the config
  test: Introduce the concept of a role
  test: Move the receive code into a function
  test: Separate out the exception handling
  test: Detect dead connections
  test: Tidy up remaining exceptions
  test: Introduce lab mode
  test: Improve handling of sending commands
  test: Fix mulptiplex_log typo
  test: Avoid double echo when starting up
  test: Try to shut down the lab console gracefully
  test: Add a section for closing the connection
  Update u-boot.cfg to include CFG also
  CI: Allow running tests on sjg lab
 .gitlab-ci.yml                                |ĊĊ 153 +++++++++++++++++
 arch/arm/dts/sunxi-u-boot.dtsi                |   1 +
 arch/arm/mach-omap2/am33xx/board.c            |  18 +-
 configs/chromebook_link64_defconfig           |   1 +
 configs/chromebook_link_defconfig             |   1 +
 configs/chromebook_samus_defconfig            |   1 +
 configs/chromebook_samus_tpl_defconfig        |   1 +
 configs/nyan-big_defconfig                    |   4 +-
 configs/snow_defconfig                        |   1 +
 drivers/core/lists.c                          |  16 ++
 drivers/pinctrl/meson/pinctrl-meson-axg-pmx.c |   2 +-
 drivers/pinctrl/meson/pinctrl-meson-axg.c     |   4 +-
 drivers/pinctrl/meson/pinctrl-meson-axg.h     |   2 +-
 drivers/pinctrl/meson/pinctrl-meson-g12a.c    |   4 +-
 lib/initcall.c                                |   6 +-
 scripts/Makefile.autoconf                     |   2 +-
 test/py/conftest.py                           |  86 ++++++++--
 test/py/tests/test_dm.py                      |   5 +-
 test/py/tests/test_help.py                    |   6 +-
 test/py/tests/test_log.py                     |  11 +-
 test/py/tests/test_trace.py                   |   6 +-
 test/py/tests/test_ut.py                      |   1 +
 test/py/u_boot_console_base.py                | 154 ++++++++++++------
 test/py/u_boot_console_exec_attach.py         |  31 +++-
 test/py/u_boot_console_sandbox.py             |   2 +-
 test/py/u_boot_spawn.py                       | 126 ++++++++++++--
 test/test-main.c                              |  16 +-
 tools/binman/entries.rst                      | 115 +++++++++----
 tools/binman/entry.py                         |   2 +-
 tools/binman/etype/efi_capsule.py             |  40 ++---
 tools/binman/etype/efi_empty_capsule.py       |  22 +--
 tools/binman/etype/ti_secure.py               |  45 ++---
 tools/buildman/builder.py                     |  18 +-
 tools/buildman/builderthread.py               |  44 +++--
 tools/buildman/buildman.rst                   |   8 +-
 tools/buildman/cmdline.py                     |   6 +-
 tools/buildman/control.py                     | 141 +++++++++++++++-
 tools/buildman/pyproject.toml                 |   6 +-
 tools/buildman/test.py                        | 121 ++++++++++++++
 tools/u_boot_pylib/terminal.py                |   7 +-
 40 files changed, 1009 insertions(+), 227 deletions(-)

Some versions of trace-cmd (or some machines?) show one less dot in the CPU list.
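As a rough illustration of the change, the updated pattern accepts both layouts. The sample lines below are made up purely to show the optional final character in the flags field::

    import re

    # Updated pattern from test_trace.py: the added '?' makes the last
    # character of the CPU/flags field optional
    RE_LINE = re.compile(
        r'.*0.....? \s*([0-9.]*): func.*[|](\s*)(\S.*)?([{};])$')

    # Hypothetical sample lines: one with the usual flags width, one with
    # one character fewer
    with_dot = ' u-boot-1     0.....   282.101375: funcgraph_entry:   0.000 us   |  initcall_is_event();'
    without_dot = ' u-boot-1     0....   282.101375: funcgraph_entry:   0.000 us   |  initcall_is_event();'
    assert RE_LINE.match(with_dot)
    assert RE_LINE.match(without_dot)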
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/tests/test_trace.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/test/py/tests/test_trace.py b/test/py/tests/test_trace.py index 7c5696ce747..f41d4cf71f0 100644 --- a/test/py/tests/test_trace.py +++ b/test/py/tests/test_trace.py @@ -12,7 +12,7 @@ import u_boot_utils as util TMPDIR = '/tmp/test_trace'
# Decode a function-graph line -RE_LINE = re.compile(r'.*0..... \s*([0-9.]*): func.*[|](\s*)(\S.*)?([{};])$') +RE_LINE = re.compile(r'.*0.....? \s*([0-9.]*): func.*[|](\s*)(\S.*)?([{};])$')
def collect_trace(cons):

Somehow the class documentation has got out of sync with the generated entries.rst file. Regenerating it causes errors, so correct these and regenerate the entries.rst file.
Signed-off-by: Simon Glass sjg@chromium.org Fixes: 809f28e7213 ("binman: capsule: Use dumped capsule header...") ---
(no changes since v1)
tools/binman/entries.rst | 58 ++++++++++++------------- tools/binman/etype/efi_capsule.py | 40 ++++++++--------- tools/binman/etype/efi_empty_capsule.py | 22 +++++----- 3 files changed, 61 insertions(+), 59 deletions(-)
diff --git a/tools/binman/entries.rst b/tools/binman/entries.rst index 86a3c02b485..531a92e2e0c 100644 --- a/tools/binman/entries.rst +++ b/tools/binman/entries.rst @@ -477,11 +477,11 @@ updating the EC on startup via software sync.
.. _etype_efi_capsule:
-Entry: capsule: Entry for generating EFI Capsule files ------------------------------------------------------- +Entry: efi-capsule: Generate EFI capsules +-----------------------------------------
-The parameters needed for generation of the capsules can be provided -as properties in the entry. +The parameters needed for generation of the capsules can +be provided as properties in the entry.
Properties / Entry arguments: - image-index: Unique number for identifying corresponding @@ -502,9 +502,9 @@ Properties / Entry arguments: file. Mandatory property for generating signed capsules. - oem-flags - OEM flags to be passed through capsule header.
- Since this is a subclass of Entry_section, all properties of the parent - class also apply here. Except for the properties stated as mandatory, the - rest of the properties are optional. +Since this is a subclass of Entry_section, all properties of the parent +class also apply here. Except for the properties stated as mandatory, the +rest of the properties are optional.
For more details on the description of the capsule format, and the capsule update functionality, refer Section 8.5 and Chapter 23 in the `UEFI @@ -517,17 +517,17 @@ provided as a subnode of the capsule entry. A typical capsule entry node would then look something like this::
capsule { - type = "efi-capsule"; - image-index = <0x1>; - /* Image GUID for testing capsule update */ - image-guid = SANDBOX_UBOOT_IMAGE_GUID; - hardware-instance = <0x0>; - private-key = "path/to/the/private/key"; - public-key-cert = "path/to/the/public-key-cert"; - oem-flags = <0x8000>; + type = "efi-capsule"; + image-index = <0x1>; + /* Image GUID for testing capsule update */ + image-guid = SANDBOX_UBOOT_IMAGE_GUID; + hardware-instance = <0x0>; + private-key = "path/to/the/private/key"; + public-key-cert = "path/to/the/public-key-cert"; + oem-flags = <0x8000>;
- u-boot { - }; + u-boot { + }; };
In the above example, the capsule payload is the U-Boot image. The @@ -541,8 +541,8 @@ payload using the blob-ext subnode.
.. _etype_efi_empty_capsule:
-Entry: efi-empty-capsule: Entry for generating EFI Empty Capsule files ----------------------------------------------------------------------- +Entry: efi-empty-capsule: Generate EFI empty capsules +-----------------------------------------------------
The parameters needed for generation of the empty capsules can be provided as properties in the entry. @@ -558,22 +558,22 @@ update functionality, refer Section 8.5 and Chapter 23 in the `UEFI specification`_. For more information on the empty capsule, refer the sections 2.3.2 and 2.3.3 in the `Dependable Boot specification`_.
-A typical accept empty capsule entry node would then look something -like this:: +A typical accept empty capsule entry node would then look something like +this::
empty-capsule { - type = "efi-empty-capsule"; - /* GUID of the image being accepted */ - image-type-id = SANDBOX_UBOOT_IMAGE_GUID; - capsule-type = "accept"; + type = "efi-empty-capsule"; + /* GUID of image being accepted */ + image-type-id = SANDBOX_UBOOT_IMAGE_GUID; + capsule-type = "accept"; };
-A typical revert empty capsule entry node would then look something -like this:: +A typical revert empty capsule entry node would then look something like +this::
empty-capsule { - type = "efi-empty-capsule"; - capsule-type = "revert"; + type = "efi-empty-capsule"; + capsule-type = "revert"; };
The empty capsules do not have any input payload image. diff --git a/tools/binman/etype/efi_capsule.py b/tools/binman/etype/efi_capsule.py index e3203717822..751f654bf31 100644 --- a/tools/binman/etype/efi_capsule.py +++ b/tools/binman/etype/efi_capsule.py @@ -36,23 +36,23 @@ class Entry_efi_capsule(Entry_section): be provided as properties in the entry.
Properties / Entry arguments: - - image-index: Unique number for identifying corresponding - payload image. Number between 1 and descriptor count, i.e. - the total number of firmware images that can be updated. Mandatory - property. - - image-guid: Image GUID which will be used for identifying the - updatable image on the board. Mandatory property. - - hardware-instance: Optional number for identifying unique - hardware instance of a device in the system. Default value of 0 - for images where value is not to be used. - - fw-version: Value of image version that can be put on the capsule - through the Firmware Management Protocol(FMP) header. - - monotonic-count: Count used when signing an image. - - private-key: Path to PEM formatted .key private key file. Mandatory - property for generating signed capsules. - - public-key-cert: Path to PEM formatted .crt public key certificate - file. Mandatory property for generating signed capsules. - - oem-flags - OEM flags to be passed through capsule header. + - image-index: Unique number for identifying corresponding + payload image. Number between 1 and descriptor count, i.e. + the total number of firmware images that can be updated. Mandatory + property. + - image-guid: Image GUID which will be used for identifying the + updatable image on the board. Mandatory property. + - hardware-instance: Optional number for identifying unique + hardware instance of a device in the system. Default value of 0 + for images where value is not to be used. + - fw-version: Value of image version that can be put on the capsule + through the Firmware Management Protocol(FMP) header. + - monotonic-count: Count used when signing an image. + - private-key: Path to PEM formatted .key private key file. Mandatory + property for generating signed capsules. + - public-key-cert: Path to PEM formatted .crt public key certificate + file. Mandatory property for generating signed capsules. + - oem-flags - OEM flags to be passed through capsule header.
Since this is a subclass of Entry_section, all properties of the parent class also apply here. Except for the properties stated as mandatory, the @@ -66,9 +66,9 @@ class Entry_efi_capsule(Entry_section): properties in the entry. The payload to be used in the capsule is to be provided as a subnode of the capsule entry.
- A typical capsule entry node would then look something like this + A typical capsule entry node would then look something like this::
- capsule { + capsule { type = "efi-capsule"; image-index = <0x1>; /* Image GUID for testing capsule update */ @@ -80,7 +80,7 @@ class Entry_efi_capsule(Entry_section):
u-boot { }; - }; + };
In the above example, the capsule payload is the U-Boot image. The capsule entry would read the contents of the payload and put them diff --git a/tools/binman/etype/efi_empty_capsule.py b/tools/binman/etype/efi_empty_capsule.py index 064bf9a77f0..1d99fbfb3bb 100644 --- a/tools/binman/etype/efi_empty_capsule.py +++ b/tools/binman/etype/efi_empty_capsule.py @@ -19,31 +19,33 @@ class Entry_efi_empty_capsule(Entry_section): be provided as properties in the entry.
Properties / Entry arguments: - - image-guid: Image GUID which will be used for identifying the - updatable image on the board. Mandatory for accept capsule. - - capsule-type - String to indicate type of capsule to generate. Valid - values are 'accept' and 'revert'. + - image-guid: Image GUID which will be used for identifying the + updatable image on the board. Mandatory for accept capsule. + - capsule-type - String to indicate type of capsule to generate. Valid + values are 'accept' and 'revert'.
For more details on the description of the capsule format, and the capsule update functionality, refer Section 8.5 and Chapter 23 in the `UEFI specification`_. For more information on the empty capsule, refer the sections 2.3.2 and 2.3.3 in the `Dependable Boot specification`_.
- A typical accept empty capsule entry node would then look something like this + A typical accept empty capsule entry node would then look something like + this::
- empty-capsule { + empty-capsule { type = "efi-empty-capsule"; /* GUID of image being accepted */ image-type-id = SANDBOX_UBOOT_IMAGE_GUID; capsule-type = "accept"; - }; + };
- A typical revert empty capsule entry node would then look something like this + A typical revert empty capsule entry node would then look something like + this::
- empty-capsule { + empty-capsule { type = "efi-empty-capsule"; capsule-type = "revert"; - }; + };
The empty capsules do not have any input payload image.

Regenerate the entries.rst file to include this recent addition.
Note that more docs are needed here, to actually describe the entry type.
Note also that the entry type needs Binman tests added.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/entries.rst | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+)
diff --git a/tools/binman/entries.rst b/tools/binman/entries.rst index 531a92e2e0c..ecccf77d112 100644 --- a/tools/binman/entries.rst +++ b/tools/binman/entries.rst @@ -1528,6 +1528,28 @@ byte.
+.. _etype_nxp_imx8mcst: + +Entry: nxp-imx8mcst: NXP i.MX8M CST .cfg file generator and cst invoker +----------------------------------------------------------------------- + +Properties / Entry arguments: + - nxp,loader-address - loader address (SPL text base) + + + +.. _etype_nxp_imx8mimage: + +Entry: nxp-imx8mimage: NXP i.MX8M imx8mimage .cfg file generator and mkimage invoker +------------------------------------------------------------------------------------ + +Properties / Entry arguments: + - nxp,boot-from - device to boot from (e.g. 'sd') + - nxp,loader-address - loader address (SPL text base) + - nxp,rom-version - BootROM version ('2' for i.MX8M Nano and Plus) + + + .. _etype_opensbi:
Entry: opensbi: RISC-V OpenSBI fw_dynamic blob

Correct formatting errors in the documentation.
Regenerate the entries.rst file to include this recent addition.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/entries.rst | 35 +++++++++++++++++++++++++ tools/binman/etype/ti_secure.py | 45 +++++++++++++++++---------------- 2 files changed, 58 insertions(+), 22 deletions(-)
diff --git a/tools/binman/entries.rst b/tools/binman/entries.rst index ecccf77d112..f59c0341840 100644 --- a/tools/binman/entries.rst +++ b/tools/binman/entries.rst @@ -1958,6 +1958,12 @@ Properties / Entry arguments: - content: List of phandles to entries to sign - keyfile: Filename of file containing key to sign binary with - sha: Hash function to be used for signing + - auth-in-place: This is an integer field that contains two pieces + of information: + + - Lower Byte - Remains 0x02 as per our use case + ( 0x02: Move the authenticated binary back to the header ) + - Upper Byte - The Host ID of the core owning the firewall
Output files: - input.<unique_name> - input file passed to openssl @@ -1966,6 +1972,35 @@ Output files: - cert.<unique_name> - output file generated by openssl (which is used as the entry contents)
+Depending on auth-in-place information in the inputs, we read the +firewall nodes that describe the configurations of firewall that TIFS +will be doing after reading the certificate. + +The syntax of the firewall nodes are as such:: + + firewall-257-0 { + id = <257>; /* The ID of the firewall being configured */ + region = <0>; /* Region number to configure */ + + control = /* The control register */ + <(FWCTRL_EN | FWCTRL_LOCK | FWCTRL_BG | FWCTRL_CACHE)>; + + permissions = /* The permission registers */ + <((FWPRIVID_ALL << FWPRIVID_SHIFT) | + FWPERM_SECURE_PRIV_RWCD | + FWPERM_SECURE_USER_RWCD | + FWPERM_NON_SECURE_PRIV_RWCD | + FWPERM_NON_SECURE_USER_RWCD)>; + + /* More defines can be found in k3-security.h */ + + start_address = /* The Start Address of the firewall */ + <0x0 0x0>; + end_address = /* The End Address of the firewall */ + <0xff 0xffffffff>; + }; + + openssl signs the provided data, using the TI templated config file and writes the signature in this entry. This allows verification that the data is genuine. diff --git a/tools/binman/etype/ti_secure.py b/tools/binman/etype/ti_secure.py index 704dcf8a381..420ee263e4f 100644 --- a/tools/binman/etype/ti_secure.py +++ b/tools/binman/etype/ti_secure.py @@ -53,10 +53,11 @@ class Entry_ti_secure(Entry_x509_cert): - keyfile: Filename of file containing key to sign binary with - sha: Hash function to be used for signing - auth-in-place: This is an integer field that contains two pieces - of information - Lower Byte - Remains 0x02 as per our use case - ( 0x02: Move the authenticated binary back to the header ) - Upper Byte - The Host ID of the core owning the firewall + of information: + + - Lower Byte - Remains 0x02 as per our use case + ( 0x02: Move the authenticated binary back to the header ) + - Upper Byte - The Host ID of the core owning the firewall
Output files: - input.<unique_name> - input file passed to openssl @@ -69,29 +70,29 @@ class Entry_ti_secure(Entry_x509_cert): firewall nodes that describe the configurations of firewall that TIFS will be doing after reading the certificate.
- The syntax of the firewall nodes are as such: + The syntax of the firewall nodes are as such::
- firewall-257-0 { - id = <257>; /* The ID of the firewall being configured */ - region = <0>; /* Region number to configure */ + firewall-257-0 { + id = <257>; /* The ID of the firewall being configured */ + region = <0>; /* Region number to configure */
- control = /* The control register */ - <(FWCTRL_EN | FWCTRL_LOCK | FWCTRL_BG | FWCTRL_CACHE)>; + control = /* The control register */ + <(FWCTRL_EN | FWCTRL_LOCK | FWCTRL_BG | FWCTRL_CACHE)>;
- permissions = /* The permission registers */ - <((FWPRIVID_ALL << FWPRIVID_SHIFT) | - FWPERM_SECURE_PRIV_RWCD | - FWPERM_SECURE_USER_RWCD | - FWPERM_NON_SECURE_PRIV_RWCD | - FWPERM_NON_SECURE_USER_RWCD)>; + permissions = /* The permission registers */ + <((FWPRIVID_ALL << FWPRIVID_SHIFT) | + FWPERM_SECURE_PRIV_RWCD | + FWPERM_SECURE_USER_RWCD | + FWPERM_NON_SECURE_PRIV_RWCD | + FWPERM_NON_SECURE_USER_RWCD)>;
- /* More defines can be found in k3-security.h */ + /* More defines can be found in k3-security.h */
- start_address = /* The Start Address of the firewall */ - <0x0 0x0>; - end_address = /* The End Address of the firewall */ - <0xff 0xffffffff>; - }; + start_address = /* The Start Address of the firewall */ + <0x0 0x0>; + end_address = /* The End Address of the firewall */ + <0xff 0xffffffff>; + };
openssl signs the provided data, using the TI templated config file and

Reduce the length of the underline for this header, to match the heading itself.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/binman/entry.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/binman/entry.py b/tools/binman/entry.py index 42e0b7b9145..2ed65800d22 100644 --- a/tools/binman/entry.py +++ b/tools/binman/entry.py @@ -812,7 +812,7 @@ class Entry(object): as missing """ print('''Binman Entry Documentation -=========================== +==========================
This file describes the entry types supported by binman. These entry types can be placed in an image one by one to build up a final firmware image. It is

Pass this in so the caller can change it independently of the member variable.
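The change is purely mechanical; as a sketch of the idea (hypothetical class, not the real BuilderThread code)::

    class ExampleBuilder:
        def __init__(self, mrproper=False):
            # Per-object default, set once at construction time
            self.mrproper = mrproper

        def reconfigure(self, mrproper):
            # The helper now acts on whatever value the caller passes in
            if mrproper:
                print('make mrproper')
            print('make <board>_defconfig')

        def build(self, force_clean=False):
            # The caller can combine or override the member variable
            self.reconfigure(self.mrproper or force_clean)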
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/buildman/builderthread.py | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/tools/buildman/builderthread.py b/tools/buildman/builderthread.py index a8599c0bb2a..5d4426bf0d1 100644 --- a/tools/buildman/builderthread.py +++ b/tools/buildman/builderthread.py @@ -240,7 +240,7 @@ class BuilderThread(threading.Thread): return args, cwd, src_dir
def _reconfigure(self, commit, brd, cwd, args, env, config_args, config_out, - cmd_list): + cmd_list, mrproper): """Reconfigure the build
Args: @@ -251,11 +251,12 @@ class BuilderThread(threading.Thread): env (dict): Environment strings config_args (list of str): defconfig arg for this board cmd_list (list of str): List to add the commands to, for logging + mrproper (bool): True to run mrproper first
Returns: CommandResult object """ - if self.mrproper: + if mrproper: result = self.make(commit, brd, 'mrproper', cwd, 'mrproper', *args, env=env) config_out.write(result.combined) @@ -419,7 +420,8 @@ class BuilderThread(threading.Thread): cmd_list = [] if do_config or adjust_cfg: result = self._reconfigure( - commit, brd, cwd, args, env, config_args, config_out, cmd_list) + commit, brd, cwd, args, env, config_args, config_out, cmd_list, + self.mrproper) do_config = False # No need to configure next time if adjust_cfg: cfgutil.adjust_cfg_file(cfg_file, adjust_cfg)

Pass this in so the caller can change it independently of the member variable.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/buildman/builderthread.py | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/tools/buildman/builderthread.py b/tools/buildman/builderthread.py index 5d4426bf0d1..ff63f9397e6 100644 --- a/tools/buildman/builderthread.py +++ b/tools/buildman/builderthread.py @@ -381,7 +381,7 @@ class BuilderThread(threading.Thread): commit = 'current' return commit
- def _config_and_build(self, commit_upto, brd, work_dir, do_config, + def _config_and_build(self, commit_upto, brd, work_dir, do_config, mrproper, config_only, adjust_cfg, commit, out_dir, out_rel_dir, result): """Do the build, configuring first if necessary @@ -391,6 +391,7 @@ class BuilderThread(threading.Thread): brd (Board): Board to create arguments for work_dir (str): Directory to which the source will be checked out do_config (bool): True to run a make <board>_defconfig on the source + mrproper (bool): True to run mrproper first config_only (bool): Only configure the source, do not build it adjust_cfg (list of str): See the cfgutil module and run_commit() commit (Commit): Commit only being built @@ -421,7 +422,7 @@ class BuilderThread(threading.Thread): if do_config or adjust_cfg: result = self._reconfigure( commit, brd, cwd, args, env, config_args, config_out, cmd_list, - self.mrproper) + mrproper) do_config = False # No need to configure next time if adjust_cfg: cfgutil.adjust_cfg_file(cfg_file, adjust_cfg) @@ -500,8 +501,9 @@ class BuilderThread(threading.Thread): if self.toolchain: commit = self._checkout(commit_upto, work_dir) result, do_config = self._config_and_build( - commit_upto, brd, work_dir, do_config, config_only, - adjust_cfg, commit, out_dir, out_rel_dir, result) + commit_upto, brd, work_dir, do_config, self.mrproper, + config_only, adjust_cfg, commit, out_dir, out_rel_dir, + result) result.already_done = False
result.toolchain = self.toolchain

Pass this in so the caller can change it independently of the member variable.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/buildman/builderthread.py | 18 ++++++++++-------- 1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/tools/buildman/builderthread.py b/tools/buildman/builderthread.py index ff63f9397e6..0a7ff2e083e 100644 --- a/tools/buildman/builderthread.py +++ b/tools/buildman/builderthread.py @@ -448,9 +448,9 @@ class BuilderThread(threading.Thread): result.cmd_list = cmd_list return result, do_config
- def run_commit(self, commit_upto, brd, work_dir, do_config, config_only, - force_build, force_build_failures, work_in_output, - adjust_cfg): + def run_commit(self, commit_upto, brd, work_dir, do_config, mrproper, + config_only, force_build, force_build_failures, + work_in_output, adjust_cfg): """Build a particular commit.
If the build is already done, and we are not forcing a build, we skip @@ -461,6 +461,7 @@ class BuilderThread(threading.Thread): brd (Board): Board to build work_dir (str): Directory to which the source will be checked out do_config (bool): True to run a make <board>_defconfig on the source + mrproper (bool): True to run mrproper first config_only (bool): Only configure the source, do not build it force_build (bool): Force a build even if one was previously done force_build_failures (bool): Force a bulid if the previous result @@ -501,7 +502,7 @@ class BuilderThread(threading.Thread): if self.toolchain: commit = self._checkout(commit_upto, work_dir) result, do_config = self._config_and_build( - commit_upto, brd, work_dir, do_config, self.mrproper, + commit_upto, brd, work_dir, do_config, mrproper, config_only, adjust_cfg, commit, out_dir, out_rel_dir, result) result.already_done = False @@ -692,7 +693,8 @@ class BuilderThread(threading.Thread): force_build = False for commit_upto in range(0, len(job.commits), job.step): result, request_config = self.run_commit(commit_upto, brd, - work_dir, do_config, self.builder.config_only, + work_dir, do_config, self.mrproper, + self.builder.config_only, force_build or self.builder.force_build, self.builder.force_build_failures, job.work_in_output, job.adjust_cfg) @@ -703,8 +705,8 @@ class BuilderThread(threading.Thread): # with a reconfig. if self.builder.force_config_on_failure: result, request_config = self.run_commit(commit_upto, - brd, work_dir, True, False, True, False, - job.work_in_output, job.adjust_cfg) + brd, work_dir, True, self.mrproper, False, True, + False, job.work_in_output, job.adjust_cfg) did_config = True if not self.builder.force_reconfig: do_config = request_config @@ -748,7 +750,7 @@ class BuilderThread(threading.Thread): else: # Just build the currently checked-out build result, request_config = self.run_commit(None, brd, work_dir, True, - self.builder.config_only, True, + self.mrproper, self.builder.config_only, True, self.builder.force_build_failures, job.work_in_output, job.adjust_cfg) result.commit_upto = 0

When this flag is enabled, 'make mrproper' is always used when reconfiguring, so there is no point in retrying a failed build with a reconfigure.
Update the logic to skip the retry in that case.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/buildman/builderthread.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/buildman/builderthread.py b/tools/buildman/builderthread.py index 0a7ff2e083e..c0b1067e3f7 100644 --- a/tools/buildman/builderthread.py +++ b/tools/buildman/builderthread.py @@ -700,7 +700,7 @@ class BuilderThread(threading.Thread): job.work_in_output, job.adjust_cfg) failed = result.return_code or result.stderr did_config = do_config - if failed and not do_config: + if failed and not do_config and not self.mrproper: # If our incremental build failed, try building again # with a reconfig. if self.builder.force_config_on_failure:

When a file is removed by a commit (e.g. include/common.h, yay!) it can cause incremental build failures, since one of the dependency files from a previous build may still mention the file.
Add an option to run 'make mrproper' automatically when a build fails. This can be used to automatically resolve the problem, without always adding the large overhead of 'make mrproper' to every build.
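A rough sketch of the retry logic being added (run_build is an assumed helper name; the failure check mirrors the 'return_code or stderr' test used in builderthread.py)::

    def build_with_fallback(run_build, fallback_mrproper):
        """Try an incremental build first; clean and retry only on failure"""
        result = run_build(mrproper=False)
        if (result.return_code or result.stderr) and fallback_mrproper:
            # Stale dependency files are discarded by 'make mrproper'
            result = run_build(mrproper=True)
        return result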
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/buildman/builder.py | 18 ++++++++++-------- tools/buildman/builderthread.py | 6 ++++-- tools/buildman/buildman.rst | 3 ++- tools/buildman/cmdline.py | 4 +++- tools/buildman/control.py | 1 + 5 files changed, 20 insertions(+), 12 deletions(-)
diff --git a/tools/buildman/builder.py b/tools/buildman/builder.py index f35175b4598..c4384f53e8d 100644 --- a/tools/buildman/builder.py +++ b/tools/buildman/builder.py @@ -256,14 +256,14 @@ class Builder: def __init__(self, toolchains, base_dir, git_dir, num_threads, num_jobs, gnu_make='make', checkout=True, show_unknown=True, step=1, no_subdirs=False, full_path=False, verbose_build=False, - mrproper=False, per_board_out_dir=False, - config_only=False, squash_config_y=False, - warnings_as_errors=False, work_in_output=False, - test_thread_exceptions=False, adjust_cfg=None, - allow_missing=False, no_lto=False, reproducible_builds=False, - force_build=False, force_build_failures=False, - force_reconfig=False, in_tree=False, - force_config_on_failure=False, make_func=None): + mrproper=False, fallback_mrproper=False, + per_board_out_dir=False, config_only=False, + squash_config_y=False, warnings_as_errors=False, + work_in_output=False, test_thread_exceptions=False, + adjust_cfg=None, allow_missing=False, no_lto=False, + reproducible_builds=False, force_build=False, + force_build_failures=False, force_reconfig=False, + in_tree=False, force_config_on_failure=False, make_func=None): """Create a new Builder object
Args: @@ -283,6 +283,7 @@ class Builder: PATH verbose_build: Run build with V=1 and don't use 'make -s' mrproper: Always run 'make mrproper' when configuring + fallback_mrproper: Run 'make mrproper' and retry on build failure per_board_out_dir: Build in a separate persistent directory per board rather than a thread-specific directory config_only: Only configure each build, don't build it @@ -352,6 +353,7 @@ class Builder: self.force_reconfig = force_reconfig self.in_tree = in_tree self.force_config_on_failure = force_config_on_failure + self.fallback_mrproper = fallback_mrproper
if not self.squash_config_y: self.config_filenames += EXTRA_CONFIG_FILENAMES diff --git a/tools/buildman/builderthread.py b/tools/buildman/builderthread.py index c0b1067e3f7..bbe2f6f0d24 100644 --- a/tools/buildman/builderthread.py +++ b/tools/buildman/builderthread.py @@ -705,8 +705,10 @@ class BuilderThread(threading.Thread): # with a reconfig. if self.builder.force_config_on_failure: result, request_config = self.run_commit(commit_upto, - brd, work_dir, True, self.mrproper, False, True, - False, job.work_in_output, job.adjust_cfg) + brd, work_dir, True, + self.mrproper or self.builder.fallback_mrproper, + False, True, False, job.work_in_output, + job.adjust_cfg) did_config = True if not self.builder.force_reconfig: do_config = request_config diff --git a/tools/buildman/buildman.rst b/tools/buildman/buildman.rst index aae2477b5c3..bd0482af5f7 100644 --- a/tools/buildman/buildman.rst +++ b/tools/buildman/buildman.rst @@ -995,7 +995,8 @@ By default, buildman doesn't execute 'make mrproper' prior to building the first commit for each board. This reduces the amount of work 'make' does, and hence speeds up the build. To force use of 'make mrproper', use -the -m flag. This flag will slow down any buildman invocation, since it increases the amount -of work done on any build. +of work done on any build. An alternative is to use the --fallback-mrproper +flag, which retries the build with 'make mrproper' only after a build failure.
One possible application of buildman is as part of a continual edit, build, edit, build, ... cycle; repeatedly applying buildman to the same change or diff --git a/tools/buildman/cmdline.py b/tools/buildman/cmdline.py index 03211bd5aa5..8dc5a8787b5 100644 --- a/tools/buildman/cmdline.py +++ b/tools/buildman/cmdline.py @@ -90,7 +90,9 @@ def add_upto_m(parser): parser.add_argument('--list-tool-chains', action='store_true', default=False, help='List available tool chains (use -v to see probing detail)') parser.add_argument('-m', '--mrproper', action='store_true', - default=False, help="Run 'make mrproper before reconfiguring") + default=False, help="Run 'make mrproper' before reconfiguring") + parser.add_argument('--fallback-mrproper', action='store_true', + default=False, help="Run 'make mrproper' and retry on build failure") parser.add_argument( '-M', '--allow-missing', action='store_true', default=False, help='Tell binman to allow missing blobs and generate fake ones as needed') diff --git a/tools/buildman/control.py b/tools/buildman/control.py index 8f6850c5211..f2dd87814c3 100644 --- a/tools/buildman/control.py +++ b/tools/buildman/control.py @@ -656,6 +656,7 @@ def do_buildman(args, toolchains=None, make_func=None, brds=None, no_subdirs=args.no_subdirs, full_path=args.full_path, verbose_build=args.verbose_build, mrproper=args.mrproper, + fallback_mrproper=args.fallback_mrproper, per_board_out_dir=args.per_board_out_dir, config_only=args.config_only, squash_config_y=not args.preserve_config_y,

Buildman retries a failed build when processing a branch, but does not do this when building the current source. It is useful to do this retry in both cases, so add the logic for it.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/buildman/builderthread.py | 8 ++++++++ 1 file changed, 8 insertions(+)
diff --git a/tools/buildman/builderthread.py b/tools/buildman/builderthread.py index bbe2f6f0d24..55658487abf 100644 --- a/tools/buildman/builderthread.py +++ b/tools/buildman/builderthread.py @@ -755,6 +755,14 @@ class BuilderThread(threading.Thread): self.mrproper, self.builder.config_only, True, self.builder.force_build_failures, job.work_in_output, job.adjust_cfg) + failed = result.return_code or result.stderr + if failed and not self.mrproper: + result, request_config = self.run_commit(None, brd, work_dir, + True, self.builder.fallback_mrproper, + self.builder.config_only, True, + self.builder.force_build_failures, + job.work_in_output, job.adjust_cfg) + result.commit_upto = 0 self._write_result(result, job.keep_outputs, job.work_in_output) self._send_result(result)

Buildman uses all available CPUs by default, so running more than one or two concurrent processes is not normally useful.
However in some CI cases we want to be able to run several jobs at once to save time. For example, in a lab situation we may want to run a test on 20 boards at a time, since only the build step actually takes much CPU.
Add an option which allows such a limit. When buildman starts up, it waits until the number of running processes goes below the limit, then claims a spot in the list. The list is maintained with a temporary file.
Note that the temp file is user-specific, since it is hard to create a locked temporary file which can be accessed by any user. In most cases, only one user is running jobs on a machine, so this should not matter.
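In outline, the mechanism looks something like this sketch (simplified, with assumed names; the real code in control.py below also handles lock-busting and a start timeout)::

    import os
    from filelock import FileLock

    def pid_alive(pid):
        """Return True if the PID still exists (signal 0 only probes)"""
        try:
            os.kill(pid, 0)
            return True
        except OSError:
            return False

    def try_claim_slot(running_fname, lock_fname, limit):
        """Claim a slot in the shared PID file, or return False if full"""
        with FileLock(lock_fname):
            procs = []
            if os.path.exists(running_fname):
                with open(running_fname, encoding='utf-8') as inf:
                    procs = [int(pid) for pid in inf.read().split()]
            # Drop PIDs which are no longer running
            procs = [pid for pid in procs if pid_alive(pid)]
            if len(procs) >= limit:
                return False
            procs.append(os.getpid())
            with open(running_fname, 'w', encoding='utf-8') as outf:
                outf.write(' '.join(str(pid) for pid in procs))
            return True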
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
tools/buildman/buildman.rst | 5 ++ tools/buildman/cmdline.py | 2 + tools/buildman/control.py | 140 ++++++++++++++++++++++++++++++++- tools/buildman/pyproject.toml | 6 +- tools/buildman/test.py | 121 ++++++++++++++++++++++++++++ tools/u_boot_pylib/terminal.py | 7 +- 6 files changed, 277 insertions(+), 4 deletions(-)
diff --git a/tools/buildman/buildman.rst b/tools/buildman/buildman.rst index bd0482af5f7..b8ff3bf1ab2 100644 --- a/tools/buildman/buildman.rst +++ b/tools/buildman/buildman.rst @@ -1286,6 +1286,11 @@ then buildman hangs. Failing to handle any eventuality is a bug in buildman and should be reported. But you can use -T0 to disable threading and hopefully figure out the root cause of the build failure.
+For situations where buildman is invoked from multiple running processes, it is +sometimes useful to have buildman wait until the others have finished. Use the +--process-limit option for this: --process-limit 1 will allow only one buildman +to process jobs at a time. + Build summary -------------
diff --git a/tools/buildman/cmdline.py b/tools/buildman/cmdline.py index 8dc5a8787b5..544a391a464 100644 --- a/tools/buildman/cmdline.py +++ b/tools/buildman/cmdline.py @@ -129,6 +129,8 @@ def add_after_m(parser): default=False, help="Use an O= (output) directory per board rather than per thread") parser.add_argument('--print-arch', action='store_true', default=False, help="Print the architecture for a board (ARCH=)") + parser.add_argument('--process-limit', type=int, + default=0, help='Limit to number of buildmans running at once') parser.add_argument('-r', '--reproducible-builds', action='store_true', help='Set SOURCE_DATE_EPOCH=0 to suuport a reproducible build') parser.add_argument('-R', '--regen-board-list', type=str, diff --git a/tools/buildman/control.py b/tools/buildman/control.py index f2dd87814c3..464835c5be5 100644 --- a/tools/buildman/control.py +++ b/tools/buildman/control.py @@ -7,10 +7,13 @@ This holds the main control logic for buildman, when not running tests. """
+import getpass import multiprocessing import os import shutil import sys +import tempfile +import time
from buildman import boards from buildman import bsettings @@ -21,10 +24,23 @@ from patman import gitutil from patman import patchstream from u_boot_pylib import command from u_boot_pylib import terminal -from u_boot_pylib.terminal import tprint +from u_boot_pylib import tools +from u_boot_pylib.terminal import print_clear, tprint
TEST_BUILDER = None
+# Space-separated list of buildman process IDs currently running jobs +RUNNING_FNAME = f'buildmanq.{getpass.getuser()}' + +# Lock file for access to RUNNING_FILE +LOCK_FNAME = f'{RUNNING_FNAME}.lock' + +# Wait time for access to lock (seconds) +LOCK_WAIT_S = 10 + +# Wait time to start running +RUN_WAIT_S = 300 + def get_plural(count): """Returns a plural 's' if count is not 1""" return 's' if count != 1 else '' @@ -578,6 +594,125 @@ def calc_adjust_cfg(adjust_cfg, reproducible_builds): return adjust_cfg
+def read_procs(tmpdir=tempfile.gettempdir()): + """Read the list of running buildman processes + + If the list is corrupted, returns an empty list + + Args: + tmpdir (str): Temporary directory to use (for testing only) + """ + running_fname = os.path.join(tmpdir, RUNNING_FNAME) + procs = [] + if os.path.exists(running_fname): + items = tools.read_file(running_fname, binary=False).split() + try: + procs = [int(x) for x in items] + except ValueError: # Handle invalid format + pass + return procs + + +def check_pid(pid): + """Check for existence of a unix PID + + https://stackoverflow.com/questions/568271/how-to-check-if-there-exists-a-pr... + + Args: + pid (int): PID to check + + Returns: + True if it exists, else False + """ + try: + os.kill(pid, 0) + except OSError: + return False + else: + return True + + +def write_procs(procs, tmpdir=tempfile.gettempdir()): + """Write the list of running buildman processes + + Args: + tmpdir (str): Temporary directory to use (for testing only) + """ + running_fname = os.path.join(tmpdir, RUNNING_FNAME) + tools.write_file(running_fname, ' '.join([str(p) for p in procs]), + binary=False) + + # Allow another user to access the file + os.chmod(running_fname, 0o666) + +def wait_for_process_limit(limit, tmpdir=tempfile.gettempdir(), + pid=os.getpid()): + """Wait until the number of buildman processes drops to the limit + + This uses FileLock to protect a 'running' file, which contains a list of + PIDs of running buildman processes. The number of PIDs in the file indicates + the number of running processes. + + When buildman starts up, it calls this function to wait until it is OK to + start the build. + + On exit, no attempt is made to remove the PID from the file, since other + buildman processes will notice that the PID is no-longer valid, and ignore + it. + + Two timeouts are provided: + LOCK_WAIT_S: length of time to wait for the lock; if this occurs, the + lock is busted / removed before trying again + RUN_WAIT_S: length of time to wait to be allowed to run; if this occurs, + the build starts, with the PID being added to the file. 
+ + Args: + limit (int): Maximum number of buildman processes, including this one; + must be > 0 + tmpdir (str): Temporary directory to use (for testing only) + pid (int): Current process ID (for testing only) + """ + from filelock import Timeout, FileLock + + running_fname = os.path.join(tmpdir, RUNNING_FNAME) + lock_fname = os.path.join(tmpdir, LOCK_FNAME) + lock = FileLock(lock_fname) + + # Allow another user to access the file + col = terminal.Color() + tprint('Waiting for other buildman processes...', newline=False, + colour=col.RED) + + claimed = False + deadline = time.time() + RUN_WAIT_S + while True: + try: + with lock.acquire(timeout=LOCK_WAIT_S): + os.chmod(lock_fname, 0o666) + procs = read_procs(tmpdir) + + # Drop PIDs which are not running + procs = list(filter(check_pid, procs)) + + # If we haven't hit the limit, add ourself + if len(procs) < limit: + tprint('done...', newline=False) + claimed = True + if time.time() >= deadline: + tprint('timeout...', newline=False) + claimed = True + if claimed: + write_procs(procs + [pid], tmpdir) + break + + except Timeout: + tprint('failed to get lock: busting...', newline=False) + os.remove(lock_fname) + + time.sleep(1) + tprint('starting build', newline=False) + print_clear() + def do_buildman(args, toolchains=None, make_func=None, brds=None, clean_dir=False, test_thread_exceptions=False): """The main control code for buildman @@ -677,5 +812,8 @@ def do_buildman(args, toolchains=None, make_func=None, brds=None,
TEST_BUILDER = builder
+ if args.process_limit: + wait_for_process_limit(args.process_limit) + return run_builder(builder, series.commits if series else None, brds.get_selected_dict(), args) diff --git a/tools/buildman/pyproject.toml b/tools/buildman/pyproject.toml index fe0f6421b53..68bfa45c3f4 100644 --- a/tools/buildman/pyproject.toml +++ b/tools/buildman/pyproject.toml @@ -8,7 +8,11 @@ version = "0.0.6" authors = [ { name="Simon Glass", email="sjg@chromium.org" }, ] -dependencies = ["u_boot_pylib >= 0.0.6", "patch-manager >= 0.0.6"] +dependencies = [ + "filelock >= 3.0.12", + "u_boot_pylib >= 0.0.6", + "patch-manager >= 0.0.6" +] description = "Buildman build tool for U-Boot" readme = "README.rst" requires-python = ">=3.7" diff --git a/tools/buildman/test.py b/tools/buildman/test.py index f92add7a7c5..d68395c2164 100644 --- a/tools/buildman/test.py +++ b/tools/buildman/test.py @@ -2,12 +2,14 @@ # Copyright (c) 2012 The Chromium OS Authors. #
+from filelock import FileLock import os import shutil import sys import tempfile import time import unittest +from unittest.mock import patch
from buildman import board from buildman import boards @@ -156,6 +158,11 @@ class TestBuild(unittest.TestCase): if not os.path.isdir(self.base_dir): os.mkdir(self.base_dir)
+ self.cur_time = 0 + self.valid_pids = [] + self.finish_time = None + self.finish_pid = None + def tearDown(self): shutil.rmtree(self.base_dir)
@@ -747,6 +754,120 @@ class TestBuild(unittest.TestCase): self.assertEqual([ ['MARY="mary"', 'Missing expected line: CONFIG_MARY="mary"']], result)
+ def get_procs(self): + running_fname = os.path.join(self.base_dir, control.RUNNING_FNAME) + items = tools.read_file(running_fname, binary=False).split() + return [int(x) for x in items] + + def get_time(self): + return self.cur_time + + def inc_time(self, amount): + self.cur_time += amount + + # Handle a process exiting + if self.finish_time == self.cur_time: + self.valid_pids = [pid for pid in self.valid_pids + if pid != self.finish_pid] + + def kill(self, pid, signal): + if pid not in self.valid_pids: + raise OSError('Invalid PID') + + def test_process_limit(self): + """Test wait_for_process_limit() function""" + tmpdir = self.base_dir + + with (patch('time.time', side_effect=self.get_time), + patch('time.sleep', side_effect=self.inc_time), + patch('os.kill', side_effect=self.kill)): + # Grab the process. Since there is no other profcess, this should + # immediately succeed + control.wait_for_process_limit(1, tmpdir=tmpdir, pid=1) + lines = terminal.get_print_test_lines() + self.assertEqual(0, self.cur_time) + self.assertEqual('Waiting for other buildman processes...', + lines[0].text) + self.assertEqual(self._col.RED, lines[0].colour) + self.assertEqual(False, lines[0].newline) + self.assertEqual(True, lines[0].bright) + + self.assertEqual('done...', lines[1].text) + self.assertEqual(None, lines[1].colour) + self.assertEqual(False, lines[1].newline) + self.assertEqual(True, lines[1].bright) + + self.assertEqual('starting build', lines[2].text) + self.assertEqual([1], control.read_procs(tmpdir)) + self.assertEqual(None, lines[2].colour) + self.assertEqual(False, lines[2].newline) + self.assertEqual(True, lines[2].bright) + + # Try again, with a different PID...this should eventually timeout + # and start the build anyway + self.cur_time = 0 + self.valid_pids = [1] + control.wait_for_process_limit(1, tmpdir=tmpdir, pid=2) + lines = terminal.get_print_test_lines() + self.assertEqual('Waiting for other buildman processes...', + lines[0].text) + self.assertEqual('timeout...', lines[1].text) + self.assertEqual(None, lines[1].colour) + self.assertEqual(False, lines[1].newline) + self.assertEqual(True, lines[1].bright) + self.assertEqual('starting build', lines[2].text) + self.assertEqual([1, 2], control.read_procs(tmpdir)) + self.assertEqual(control.RUN_WAIT_S, self.cur_time) + + # Check lock-busting + self.cur_time = 0 + self.valid_pids = [1, 2] + lock_fname = os.path.join(tmpdir, control.LOCK_FNAME) + lock = FileLock(lock_fname) + lock.acquire(timeout=1) + control.wait_for_process_limit(1, tmpdir=tmpdir, pid=3) + lines = terminal.get_print_test_lines() + self.assertEqual('Waiting for other buildman processes...', + lines[0].text) + self.assertEqual('failed to get lock: busting...', lines[1].text) + self.assertEqual(None, lines[1].colour) + self.assertEqual(False, lines[1].newline) + self.assertEqual(True, lines[1].bright) + self.assertEqual('timeout...', lines[2].text) + self.assertEqual('starting build', lines[3].text) + self.assertEqual([1, 2, 3], control.read_procs(tmpdir)) + self.assertEqual(control.RUN_WAIT_S, self.cur_time) + lock.release() + + # Check handling of dead processes. Here we have PID 2 as a running + # process, even though the PID file contains 1, 2 and 3. 
So we can + # add one more PID, to make 2 and 4 + self.cur_time = 0 + self.valid_pids = [2] + control.wait_for_process_limit(2, tmpdir=tmpdir, pid=4) + lines = terminal.get_print_test_lines() + self.assertEqual('Waiting for other buildman processes...', + lines[0].text) + self.assertEqual('done...', lines[1].text) + self.assertEqual('starting build', lines[2].text) + self.assertEqual([2, 4], control.read_procs(tmpdir)) + self.assertEqual(0, self.cur_time) + + # Try again, with PID 2 quitting at time 50. This allows the new + # build to start + self.cur_time = 0 + self.valid_pids = [2, 4] + self.finish_pid = 2 + self.finish_time = 50 + control.wait_for_process_limit(2, tmpdir=tmpdir, pid=5) + lines = terminal.get_print_test_lines() + self.assertEqual('Waiting for other buildman processes...', + lines[0].text) + self.assertEqual('done...', lines[1].text) + self.assertEqual('starting build', lines[2].text) + self.assertEqual([4, 5], control.read_procs(tmpdir)) + self.assertEqual(self.finish_time, self.cur_time) +
if __name__ == "__main__": unittest.main() diff --git a/tools/u_boot_pylib/terminal.py b/tools/u_boot_pylib/terminal.py index 40d79f8ac07..2cd5a54ab52 100644 --- a/tools/u_boot_pylib/terminal.py +++ b/tools/u_boot_pylib/terminal.py @@ -164,8 +164,11 @@ def print_clear(): global last_print_len
if last_print_len: - print('\r%s\r' % (' '* last_print_len), end='', flush=True) - last_print_len = None + if print_test_mode: + print_test_list.append(PrintLine(None, None, None, None)) + else: + print('\r%s\r' % (' '* last_print_len), end='', flush=True) + last_print_len = None
def set_print_test_mode(enable=True): """Go into test mode, where all printing is recorded"""

This part of driver model is a little subtle, so add some more comments to promote better understanding.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
drivers/core/lists.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+)
diff --git a/drivers/core/lists.c b/drivers/core/lists.c index 2839a9b7371..a84e6a98cb1 100644 --- a/drivers/core/lists.c +++ b/drivers/core/lists.c @@ -8,6 +8,7 @@
#define LOG_CATEGORY LOGC_DM
+#include <debug_uart.h> #include <errno.h> #include <log.h> #include <dm/device.h> @@ -50,6 +51,21 @@ struct uclass_driver *lists_uclass_lookup(enum uclass_id id) return NULL; }
+/** + * bind_drivers_pass() - Perform a pass of driver binding + * + * Work through the driver_info records binding a driver for each one. If the + * binding fails, continue binding others, but return the error. + * + * For OF_PLATDATA we must bind parent devices before their children. So only + * children of bound parents are bound on each call to this function. When a + * child is left unbound, -EAGAIN is returned, indicating that this function + * should be called again + * + * @parent: Parent device to use when binding each child device + * Return: 0 if OK, -EAGAIN if unbound children exist, -ENOENT if there is no + * driver for one of the devices, other -ve on other error + */ static int bind_drivers_pass(struct udevice *parent, bool pre_reloc_only) { struct driver_info *info =
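For illustration only, the repeated-pass scheme that the new comment describes reads roughly like this in Python (the real code is C, in drivers/core/lists.c and its caller)::

    def bind_all(devices, bind_one, max_passes=10):
        """Repeat binding passes until no device is waiting on its parent

        bind_one(dev) is assumed to return True when the device was bound and
        False when its parent has not been bound yet (the -EAGAIN case)
        """
        pending = list(devices)
        for _ in range(max_passes):
            pending = [dev for dev in pending if not bind_one(dev)]
            if not pending:
                return  # everything is bound
        raise RuntimeError(f'unbound devices remain: {pending}')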

The relocation offset can change in some initcall sequences. Handle this and make sure it is used for all debugging statements in initcall_run_list().
Update the trace test to match.
Signed-off-by: Simon Glass sjg@chromium.org Reviewed-by: Caleb Connolly caleb.connolly@linaro.org ---
(no changes since v1)
lib/initcall.c | 6 ++++-- test/py/tests/test_trace.py | 4 ++-- 2 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/lib/initcall.c b/lib/initcall.c index c8e2b0f6a38..2686b9aed5c 100644 --- a/lib/initcall.c +++ b/lib/initcall.c @@ -49,13 +49,14 @@ static int initcall_is_event(init_fnc_t func) */ int initcall_run_list(const init_fnc_t init_sequence[]) { - ulong reloc_ofs = calc_reloc_ofs(); + ulong reloc_ofs; const init_fnc_t *ptr; enum event_t type; init_fnc_t func; int ret = 0;
for (ptr = init_sequence; func = *ptr, func; ptr++) { + reloc_ofs = calc_reloc_ofs(); type = initcall_is_event(func);
if (type) { @@ -84,7 +85,8 @@ int initcall_run_list(const init_fnc_t init_sequence[]) sprintf(buf, "event %d/%s", type, event_type_name(type)); } else { - sprintf(buf, "call %p", func); + sprintf(buf, "call %p", + (char *)func - reloc_ofs); }
printf("initcall failed at %s (err=%dE)\n", buf, ret); diff --git a/test/py/tests/test_trace.py b/test/py/tests/test_trace.py index f41d4cf71f0..ec1e624722c 100644 --- a/test/py/tests/test_trace.py +++ b/test/py/tests/test_trace.py @@ -175,7 +175,7 @@ def check_funcgraph(cons, fname, proftool, map_fname, trace_dat): # Then look for this: # u-boot-1 0..... 282.101375: funcgraph_exit: 0.006 us | } # Then check for this: - # u-boot-1 0..... 282.101375: funcgraph_entry: 0.000 us | initcall_is_event(); + # u-boot-1 0..... 282.101375: funcgraph_entry: 0.000 us | calc_reloc_ofs();
expected_indent = None found_start = False @@ -199,7 +199,7 @@ def check_funcgraph(cons, fname, proftool, map_fname, trace_dat):
# The next function after initf_bootstage() exits should be # initcall_is_event() - assert upto == 'initcall_is_event()' + assert upto == 'calc_reloc_ofs()'
# Now look for initf_dm() and dm_timer_init() so we can check the bootstage # time

Since commit 0dba45864b2a ("arm: Init the debug UART") the debug UART is set up in _main() before early_system_init() is called.
Add a suitable board_debug_uart_init() function to set up the UART in SPL.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
arch/arm/mach-omap2/am33xx/board.c | 18 +++++++++++++++--- 1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/arch/arm/mach-omap2/am33xx/board.c b/arch/arm/mach-omap2/am33xx/board.c index 78c1e965c9f..84a60dedd72 100644 --- a/arch/arm/mach-omap2/am33xx/board.c +++ b/arch/arm/mach-omap2/am33xx/board.c @@ -490,9 +490,6 @@ void early_system_init(void) */ save_omap_boot_params(); #endif -#ifdef CONFIG_DEBUG_UART_OMAP - debug_uart_init(); -#endif
#ifdef CONFIG_SPL_BUILD spl_early_init(); @@ -533,3 +530,18 @@ static int am33xx_dm_post_init(void) return 0; } EVENT_SPY_SIMPLE(EVT_DM_POST_INIT_F, am33xx_dm_post_init); + +#ifdef CONFIG_DEBUG_UART_BOARD_INIT +void board_debug_uart_init(void) +{ + if (u_boot_first_phase()) { + hw_data_init(); + set_uart_mux_conf(); + setup_early_clocks(); + uart_soft_reset(); + + /* avoid uart gibberish by allowing the clocks to settle */ + mdelay(50); + } +} +#endif

The absence of this binary does not prevent the system from booting. Mark it optional so that U-Boot can be built without it.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
arch/arm/dts/sunxi-u-boot.dtsi | 1 + 1 file changed, 1 insertion(+)
diff --git a/arch/arm/dts/sunxi-u-boot.dtsi b/arch/arm/dts/sunxi-u-boot.dtsi index 0909a67883e..e1a9a7f5d4c 100644 --- a/arch/arm/dts/sunxi-u-boot.dtsi +++ b/arch/arm/dts/sunxi-u-boot.dtsi @@ -90,6 +90,7 @@ scp { filename = "scp.bin"; missing-msg = "scp-sunxi"; + optional; }; }; #endif

TPMv2 is not present on older Chromebooks, so disable the setting for these boards.
Signed-off-by: Simon Glass sjg@chromium.org Reviewed-by: Ilias Apalodimas ilias.apalodimas@linaro.org ---
(no changes since v1)
configs/chromebook_link64_defconfig | 1 + configs/chromebook_link_defconfig | 1 + configs/chromebook_samus_defconfig | 1 + configs/chromebook_samus_tpl_defconfig | 1 + configs/nyan-big_defconfig | 4 +--- configs/snow_defconfig | 1 + 6 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/configs/chromebook_link64_defconfig b/configs/chromebook_link64_defconfig index 7cf23b29e46..9583f87bf0f 100644 --- a/configs/chromebook_link64_defconfig +++ b/configs/chromebook_link64_defconfig @@ -80,6 +80,7 @@ CONFIG_SYS_NS16550=y CONFIG_SYS_NS16550_PORT_MAPPED=y CONFIG_SPI=y CONFIG_TPM_TIS_LPC=y +# CONFIG_TPM_V2 is not set CONFIG_USB_STORAGE=y CONFIG_USB_KEYBOARD=y CONFIG_FRAMEBUFFER_SET_VESA_MODE=y diff --git a/configs/chromebook_link_defconfig b/configs/chromebook_link_defconfig index a9f91dd9b26..75a7a488a92 100644 --- a/configs/chromebook_link_defconfig +++ b/configs/chromebook_link_defconfig @@ -70,6 +70,7 @@ CONFIG_SYS_NS16550_PORT_MAPPED=y CONFIG_SOUND=y CONFIG_SPI=y CONFIG_TPM_TIS_LPC=y +# CONFIG_TPM_V2 is not set CONFIG_USB_STORAGE=y CONFIG_USB_KEYBOARD=y CONFIG_VIDEO_COPY=y diff --git a/configs/chromebook_samus_defconfig b/configs/chromebook_samus_defconfig index 40cc449b9b3..8cdad8d2344 100644 --- a/configs/chromebook_samus_defconfig +++ b/configs/chromebook_samus_defconfig @@ -74,6 +74,7 @@ CONFIG_SOUND_I8254=y CONFIG_SOUND_RT5677=y CONFIG_SPI=y CONFIG_TPM_TIS_LPC=y +# CONFIG_TPM_V2 is not set CONFIG_USB_STORAGE=y CONFIG_USB_KEYBOARD=y CONFIG_VIDEO_COPY=y diff --git a/configs/chromebook_samus_tpl_defconfig b/configs/chromebook_samus_tpl_defconfig index 3e7298f16af..1be57560f89 100644 --- a/configs/chromebook_samus_tpl_defconfig +++ b/configs/chromebook_samus_tpl_defconfig @@ -96,6 +96,7 @@ CONFIG_SOUND_RT5677=y CONFIG_SPI=y CONFIG_TPL_SYSRESET=y CONFIG_TPM_TIS_LPC=y +# CONFIG_TPM_V2 is not set CONFIG_USB_STORAGE=y CONFIG_USB_KEYBOARD=y CONFIG_FRAMEBUFFER_SET_VESA_MODE=y diff --git a/configs/nyan-big_defconfig b/configs/nyan-big_defconfig index 4dec710cf8d..78fb7580da7 100644 --- a/configs/nyan-big_defconfig +++ b/configs/nyan-big_defconfig @@ -11,8 +11,6 @@ CONFIG_DEFAULT_DEVICE_TREE="tegra124-nyan-big" CONFIG_SPL_TEXT_BASE=0x80108000 CONFIG_SPL_STACK=0x800ffffc CONFIG_BOOTSTAGE_STASH_ADDR=0x83000000 -CONFIG_DEBUG_UART_BASE=0x70006000 -CONFIG_DEBUG_UART_CLOCK=408000000 CONFIG_TEGRA124=y CONFIG_TARGET_NYAN_BIG=y CONFIG_TEGRA_GPU=y @@ -75,7 +73,6 @@ CONFIG_DM_REGULATOR=y CONFIG_REGULATOR_AS3722=y CONFIG_DM_REGULATOR_FIXED=y CONFIG_PWM_TEGRA=y -CONFIG_DEBUG_UART_SHIFT=2 CONFIG_SYS_NS16550=y CONFIG_SOUND=y CONFIG_I2S=y @@ -83,6 +80,7 @@ CONFIG_I2S_TEGRA=y CONFIG_SOUND_MAX98090=y CONFIG_TEGRA114_SPI=y CONFIG_TPM_TIS_INFINEON=y +# CONFIG_TPM_V2 is not set CONFIG_USB=y CONFIG_USB_EHCI_HCD=y CONFIG_USB_EHCI_TEGRA=y diff --git a/configs/snow_defconfig b/configs/snow_defconfig index 3a617c6cf40..2c0757194bd 100644 --- a/configs/snow_defconfig +++ b/configs/snow_defconfig @@ -88,6 +88,7 @@ CONFIG_SOUND_MAX98095=y CONFIG_SOUND_WM8994=y CONFIG_EXYNOS_SPI=y CONFIG_TPM_TIS_INFINEON=y +# CONFIG_TPM_V2 is not set CONFIG_USB=y CONFIG_USB_XHCI_HCD=y CONFIG_USB_XHCI_DWC3=y

The meson_axg_gpio driver declaration should use the driver macros so that the driver appears in the linker list. Fix this.
Fixes: 8587839f19d ("pinctrl: meson: add axg support") Reviewed-by: Neil Armstrong neil.armstrong@linaro.org
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
drivers/pinctrl/meson/pinctrl-meson-axg-pmx.c | 2 +- drivers/pinctrl/meson/pinctrl-meson-axg.c | 4 ++-- drivers/pinctrl/meson/pinctrl-meson-axg.h | 2 +- drivers/pinctrl/meson/pinctrl-meson-g12a.c | 4 ++-- 4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/pinctrl/meson/pinctrl-meson-axg-pmx.c b/drivers/pinctrl/meson/pinctrl-meson-axg-pmx.c index 52c726cf038..15ebd574ef1 100644 --- a/drivers/pinctrl/meson/pinctrl-meson-axg-pmx.c +++ b/drivers/pinctrl/meson/pinctrl-meson-axg-pmx.c @@ -179,7 +179,7 @@ static const struct dm_gpio_ops meson_axg_gpio_ops = { .direction_output = meson_gpio_direction_output, };
-const struct driver meson_axg_gpio_driver = { +U_BOOT_DRIVER(meson_axg_gpio) = { .name = "meson-axg-gpio", .id = UCLASS_GPIO, .probe = meson_gpio_probe, diff --git a/drivers/pinctrl/meson/pinctrl-meson-axg.c b/drivers/pinctrl/meson/pinctrl-meson-axg.c index 94e09cd3f8a..ed3f92b2d75 100644 --- a/drivers/pinctrl/meson/pinctrl-meson-axg.c +++ b/drivers/pinctrl/meson/pinctrl-meson-axg.c @@ -939,7 +939,7 @@ struct meson_pinctrl_data meson_axg_periphs_pinctrl_data = { .num_groups = ARRAY_SIZE(meson_axg_periphs_groups), .num_funcs = ARRAY_SIZE(meson_axg_periphs_functions), .num_banks = ARRAY_SIZE(meson_axg_periphs_banks), - .gpio_driver = &meson_axg_gpio_driver, + .gpio_driver = DM_DRIVER_REF(meson_axg_gpio), .pmx_data = &meson_axg_periphs_pmx_banks_data, };
@@ -953,7 +953,7 @@ struct meson_pinctrl_data meson_axg_aobus_pinctrl_data = { .num_groups = ARRAY_SIZE(meson_axg_aobus_groups), .num_funcs = ARRAY_SIZE(meson_axg_aobus_functions), .num_banks = ARRAY_SIZE(meson_axg_aobus_banks), - .gpio_driver = &meson_axg_gpio_driver, + .gpio_driver = DM_DRIVER_REF(meson_axg_gpio), .pmx_data = &meson_axg_aobus_pmx_banks_data, };
diff --git a/drivers/pinctrl/meson/pinctrl-meson-axg.h b/drivers/pinctrl/meson/pinctrl-meson-axg.h index c8d2b3af036..a6581bab500 100644 --- a/drivers/pinctrl/meson/pinctrl-meson-axg.h +++ b/drivers/pinctrl/meson/pinctrl-meson-axg.h @@ -61,6 +61,6 @@ struct meson_pmx_axg_data { }
extern const struct pinctrl_ops meson_axg_pinctrl_ops; -extern const struct driver meson_axg_gpio_driver; +extern U_BOOT_DRIVER(meson_axg_gpio);
#endif /* __PINCTRL_MESON_AXG_H__ */ diff --git a/drivers/pinctrl/meson/pinctrl-meson-g12a.c b/drivers/pinctrl/meson/pinctrl-meson-g12a.c index 24f47f82558..67114df6824 100644 --- a/drivers/pinctrl/meson/pinctrl-meson-g12a.c +++ b/drivers/pinctrl/meson/pinctrl-meson-g12a.c @@ -1253,7 +1253,7 @@ static struct meson_pinctrl_data meson_g12a_periphs_pinctrl_data = { .num_groups = ARRAY_SIZE(meson_g12a_periphs_groups), .num_funcs = ARRAY_SIZE(meson_g12a_periphs_functions), .num_banks = ARRAY_SIZE(meson_g12a_periphs_banks), - .gpio_driver = &meson_axg_gpio_driver, + .gpio_driver = DM_DRIVER_REF(meson_axg_gpio), .pmx_data = &meson_g12a_periphs_pmx_banks_data, };
@@ -1267,7 +1267,7 @@ static struct meson_pinctrl_data meson_g12a_aobus_pinctrl_data = { .num_groups = ARRAY_SIZE(meson_g12a_aobus_groups), .num_funcs = ARRAY_SIZE(meson_g12a_aobus_functions), .num_banks = ARRAY_SIZE(meson_g12a_aobus_banks), - .gpio_driver = &meson_axg_gpio_driver, + .gpio_driver = DM_DRIVER_REF(meson_axg_gpio), .pmx_data = &meson_g12a_aobus_pmx_banks_data, };

When Labgrid is used, it can get U-Boot ready for running tests. It prints a message when it has done so.
Add logic to detect this message and accept it.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_base.py | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index 3e01be11029..e230caf37e1 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -22,6 +22,7 @@ pattern_stop_autoboot_prompt = re.compile('Hit any key to stop autoboot: ') pattern_unknown_command = re.compile('Unknown command '.*' - try 'help'') pattern_error_notification = re.compile('## Error: ') pattern_error_please_reset = re.compile('### ERROR ### Please RESET the board ###') +pattern_ready_prompt = re.compile('U-Boot is ready')
PAT_ID = 0 PAT_RE = 1 @@ -170,15 +171,15 @@ class ConsoleBase(object): self.bad_pattern_ids[m - 1]) self.u_boot_version_string = self.p.after while True: - m = self.p.expect([self.prompt_compiled, + m = self.p.expect([self.prompt_compiled, pattern_ready_prompt, pattern_stop_autoboot_prompt] + self.bad_patterns) - if m == 0: + if m == 0 or m == 1: break - if m == 1: + if m == 2: self.p.send(' ') continue raise Exception('Bad pattern found on console: ' + - self.bad_pattern_ids[m - 2]) + self.bad_pattern_ids[m - 3])
except Exception as ex: self.log.error(str(ex))

Tests for standard boot need disks to be set up, which can only be done on sandbox, since adjusting disks on real hardware is not currently supported. Mark the init function as sandbox-only.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/tests/test_ut.py | 1 + 1 file changed, 1 insertion(+)
diff --git a/test/py/tests/test_ut.py b/test/py/tests/test_ut.py index c169c835e38..58205066ec8 100644 --- a/test/py/tests/test_ut.py +++ b/test/py/tests/test_ut.py @@ -470,6 +470,7 @@ def test_ut_dm_init(u_boot_console): fh.write(data)
@pytest.mark.buildconfigspec('cmd_bootflow') +@pytest.mark.buildconfigspec('sandbox') def test_ut_dm_init_bootstd(u_boot_console): """Initialise data for bootflow tests"""

Declare a constant rather than open-coding the same value twice.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_base.py | 7 +++++-- 1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index e230caf37e1..e4f86f6af5b 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -27,6 +27,9 @@ pattern_ready_prompt = re.compile('U-Boot is ready') PAT_ID = 0 PAT_RE = 1
+# Timeout before expecting the console to be ready (in milliseconds) +TIMEOUT_MS = 30000 + bad_pattern_defs = ( ('spl_signon', pattern_u_boot_spl_signon), ('main_signon', pattern_u_boot_main_signon), @@ -397,7 +400,7 @@ class ConsoleBase(object): # Reset the console timeout value as some tests may change # its default value during the execution if not self.config.gdbserver: - self.p.timeout = 30000 + self.p.timeout = TIMEOUT_MS return try: self.log.start_section('Starting U-Boot') @@ -408,7 +411,7 @@ class ConsoleBase(object): # future, possibly per-test to be optimal. This works for 'help' # on board 'seaboard'. if not self.config.gdbserver: - self.p.timeout = 30000 + self.p.timeout = TIMEOUT_MS self.p.logfile_read = self.logstream if expect_reset: loop_num = 2

Some tests may write output to stderr. Ensure that this output is not dropped, by redirecting stderr to stdout.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_spawn.py | 2 ++ 1 file changed, 2 insertions(+)
diff --git a/test/py/u_boot_spawn.py b/test/py/u_boot_spawn.py index 7c48d96210e..7421da49aa9 100644 --- a/test/py/u_boot_spawn.py +++ b/test/py/u_boot_spawn.py @@ -10,6 +10,7 @@ import re import pty import signal import select +import sys import time import traceback
@@ -57,6 +58,7 @@ class Spawn: signal.signal(signal.SIGHUP, signal.SIG_DFL) if cwd: os.chdir(cwd) + sys.stderr = sys.stdout os.execvp(args[0], args) except: print('CHILD EXECEPTION:')

When testing on a board has finished, the lab may want to power it off or perform some other action. Add a new script which is called when tests are complete.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_exec_attach.py | 10 ++++++++++ 1 file changed, 10 insertions(+)
diff --git a/test/py/u_boot_console_exec_attach.py b/test/py/u_boot_console_exec_attach.py index 8dd8cc1230c..5f4916b7da2 100644 --- a/test/py/u_boot_console_exec_attach.py +++ b/test/py/u_boot_console_exec_attach.py @@ -70,3 +70,13 @@ class ConsoleExecAttach(ConsoleBase): raise
return s + + def close(self): + super().close() + + self.log.action('Releasing board') + args = [self.config.board_type, self.config.board_identity] + cmd = ['u-boot-test-release'] + args + runner = self.log.get_runner(cmd[0], sys.stdout) + runner.run(cmd) + runner.close()

The CONFIG_LOGF_FUNC_PAD setting pads out function names in log output. Adjust the test to handle this, since some boards enable it.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/tests/test_log.py | 11 +++++++---- 1 file changed, 7 insertions(+), 4 deletions(-)
diff --git a/test/py/tests/test_log.py b/test/py/tests/test_log.py index 140dcb9aa2b..79808674bbe 100644 --- a/test/py/tests/test_log.py +++ b/test/py/tests/test_log.py @@ -27,13 +27,16 @@ def test_log_format(u_boot_console):
cons = u_boot_console with cons.log.section('format'): - run_with_format('all', 'NOTICE.arch,file.c:123-func() msg') + pad = int(u_boot_console.config.buildconfig.get('config_logf_func_pad')) + padding = ' ' * (pad - len('func')) + + run_with_format('all', f'NOTICE.arch,file.c:123-{padding}func() msg') output = cons.run_command('log format') assert output == 'Log format: clFLfm'
- run_with_format('fm', 'func() msg') - run_with_format('clfm', 'NOTICE.arch,func() msg') - run_with_format('FLfm', 'file.c:123-func() msg') + run_with_format('fm', f'{padding}func() msg') + run_with_format('clfm', f'NOTICE.arch,{padding}func() msg') + run_with_format('FLfm', f'file.c:123-{padding}func() msg') run_with_format('lm', 'NOTICE. msg') run_with_format('m', 'msg')

Sometimes we know that the board is already running the right software, so provide an option to allow running of tests directly, without first resetting the board.
This saves time when re-running a test where only the Python code is changing.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/conftest.py | 3 +++ test/py/u_boot_console_base.py | 14 ++++++++++---- test/py/u_boot_console_exec_attach.py | 21 ++++++++++++--------- 3 files changed, 25 insertions(+), 13 deletions(-)
diff --git a/test/py/conftest.py b/test/py/conftest.py index fc9dd3a83f8..ca66b9d9e61 100644 --- a/test/py/conftest.py +++ b/test/py/conftest.py @@ -79,6 +79,8 @@ def pytest_addoption(parser): parser.addoption('--gdbserver', default=None, help='Run sandbox under gdbserver. The argument is the channel '+ 'over which gdbserver should communicate, e.g. localhost:1234') + parser.addoption('--no-prompt-wait', default=False, action='store_true', + help="Assume that U-Boot is ready and don't wait for a prompt")
def run_build(config, source_dir, build_dir, board_type, log): """run_build: Build U-Boot @@ -238,6 +240,7 @@ def pytest_configure(config): ubconfig.board_type = board_type ubconfig.board_identity = board_identity ubconfig.gdbserver = gdbserver + ubconfig.no_prompt_wait = config.getoption('no_prompt_wait') ubconfig.dtb = build_dir + '/arch/sandbox/dts/test.dtb'
env_vars = ( diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index e4f86f6af5b..a61eec31148 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -413,11 +413,17 @@ class ConsoleBase(object): if not self.config.gdbserver: self.p.timeout = TIMEOUT_MS self.p.logfile_read = self.logstream - if expect_reset: - loop_num = 2 + if self.config.no_prompt_wait: + # Send an empty command to set up the 'expect' logic. This has + # the side effect of ensuring that there was no partial command + # line entered + self.run_command(' ') else: - loop_num = 1 - self.wait_for_boot_prompt(loop_num = loop_num) + if expect_reset: + loop_num = 2 + else: + loop_num = 1 + self.wait_for_boot_prompt(loop_num = loop_num) self.at_prompt = True self.at_prompt_logevt = self.logstream.logfile.cur_evt except Exception as ex: diff --git a/test/py/u_boot_console_exec_attach.py b/test/py/u_boot_console_exec_attach.py index 5f4916b7da2..42fc15197b9 100644 --- a/test/py/u_boot_console_exec_attach.py +++ b/test/py/u_boot_console_exec_attach.py @@ -59,15 +59,18 @@ class ConsoleExecAttach(ConsoleBase): args = [self.config.board_type, self.config.board_identity] s = Spawn(['u-boot-test-console'] + args)
- try: - self.log.action('Resetting board') - cmd = ['u-boot-test-reset'] + args - runner = self.log.get_runner(cmd[0], sys.stdout) - runner.run(cmd) - runner.close() - except: - s.close() - raise + if self.config.no_prompt_wait: + self.log.action('Connecting to board without reset') + else: + try: + self.log.action('Resetting board') + cmd = ['u-boot-test-reset'] + args + runner = self.log.get_runner(cmd[0], sys.stdout) + runner.run(cmd) + runner.close() + except: + s.close() + raise
return s
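As an illustrative usage of the new option (the board name and directories are examples only), re-running a single Python test against a board that is already sitting at the U-Boot prompt might look like:

    ./test/py/test.py --board-type rpi_3 --build-dir build-rpi_3 \
        --no-prompt-wait -k test_help

Since the board is not reset, this only makes sense when it is already running a U-Boot that matches the build directory.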

When a real board fails we don't want to decode the exception. Reserve that behaviour for sandbox. Also avoid raising a new exception on failure - just re-raise the existing one.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_sandbox.py | 2 +- test/py/u_boot_spawn.py | 10 ++++++---- 2 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/test/py/u_boot_console_sandbox.py b/test/py/u_boot_console_sandbox.py index 27c6db8d719..7bc44c78b8b 100644 --- a/test/py/u_boot_console_sandbox.py +++ b/test/py/u_boot_console_sandbox.py @@ -58,7 +58,7 @@ class ConsoleSandbox(ConsoleBase): if self.use_dtb: cmd += ['-d', self.config.dtb] cmd += self.sandbox_flags - return Spawn(cmd, cwd=self.config.source_dir) + return Spawn(cmd, cwd=self.config.source_dir, decode_signal=True)
def restart_uboot_with_flags(self, flags, expect_reset=False, use_dtb=True): """Run U-Boot with the given command-line flags diff --git a/test/py/u_boot_spawn.py b/test/py/u_boot_spawn.py index 7421da49aa9..81a09a9d639 100644 --- a/test/py/u_boot_spawn.py +++ b/test/py/u_boot_spawn.py @@ -25,18 +25,20 @@ class Spawn: output: accumulated output from expect() """
- def __init__(self, args, cwd=None): + def __init__(self, args, cwd=None, decode_signal=False): """Spawn (fork/exec) the sub-process.
Args: args: array of processs arguments. argv[0] is the command to execute. cwd: the directory to run the process in, or None for no change. + decode_signal (bool): True to indicate the exception number when + something goes wrong
Returns: Nothing. """ - + self.decode_signal = decode_signal self.waited = False self.exit_code = 0 self.exit_info = '' @@ -199,12 +201,12 @@ class Spawn: # With sandbox, try to detect when U-Boot exits when it # shouldn't and explain why. This is much more friendly than # just dying with an I/O error - if err.errno == 5: # Input/output error + if self.decode_signal and err.errno == 5: # I/O error alive, _, info = self.checkalive() if alive: raise err raise ValueError('U-Boot exited with %s' % info) - raise err + raise if self.logfile_read: self.logfile_read.write(c) self.buf += c

When a test returns -EAGAIN this should not be considered a failure. Fix what seems to be a problem case, where the pytests see a failure when a test has merely been skipped.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/test-main.c | 16 +++++++++++----- 1 file changed, 11 insertions(+), 5 deletions(-)
diff --git a/test/test-main.c b/test/test-main.c index 3fa6f6e32ec..cda1a186390 100644 --- a/test/test-main.c +++ b/test/test-main.c @@ -448,7 +448,7 @@ static int ut_run_test(struct unit_test_state *uts, struct unit_test *test, static int ut_run_test_live_flat(struct unit_test_state *uts, struct unit_test *test) { - int runs; + int runs, ret;
if ((test->flags & UT_TESTF_OTHER_FDT) && !IS_ENABLED(CONFIG_SANDBOX)) return skip_test(uts); @@ -458,8 +458,11 @@ static int ut_run_test_live_flat(struct unit_test_state *uts, if (CONFIG_IS_ENABLED(OF_LIVE)) { if (!(test->flags & UT_TESTF_FLAT_TREE)) { uts->of_live = true; - ut_assertok(ut_run_test(uts, test, test->name)); - runs++; + ret = ut_run_test(uts, test, test->name); + if (ret != -EAGAIN) { + ut_assertok(ret); + runs++; + } } }
@@ -483,8 +486,11 @@ static int ut_run_test_live_flat(struct unit_test_state *uts, (!runs || ut_test_run_on_flattree(test)) && !(gd->flags & GD_FLG_FDT_CHANGED)) { uts->of_live = false; - ut_assertok(ut_run_test(uts, test, test->name)); - runs++; + ret = ut_run_test(uts, test, test->name); + if (ret != -EAGAIN) { + ut_assertok(ret); + runs++; + } }
return 0;

When a driver is not registered properly it is not clear which one it is. Adjust test_dm_compat() to show this.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/tests/test_dm.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/test/py/tests/test_dm.py b/test/py/tests/test_dm.py index 68d4ea12235..be94971e455 100644 --- a/test/py/tests/test_dm.py +++ b/test/py/tests/test_dm.py @@ -13,8 +13,11 @@ def test_dm_compat(u_boot_console): for line in response[:-1].split('\n')[2:])
response = u_boot_console.run_command('dm compat') + bad_drivers = set() for driver in drivers: - assert driver in response + if not driver in response: + bad_drivers.add(driver) + assert not bad_drivers
# check sorting - output looks something like this: # testacpi 0 [ ] testacpi_drv |-- acpi-test

The current test doesn't check anything about the output. If a bug results in junk before the output, this is not currently detected.
Add a check for the first line being the one expected.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/tests/test_help.py | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/test/py/tests/test_help.py b/test/py/tests/test_help.py index 153133cf28f..2325ff69229 100644 --- a/test/py/tests/test_help.py +++ b/test/py/tests/test_help.py @@ -7,7 +7,11 @@ import pytest def test_help(u_boot_console): """Test that the "help" command can be executed."""
- u_boot_console.run_command('help') + lines = u_boot_console.run_command('help') + if u_boot_console.config.buildconfig.get('config_cmd_2048', 'n') == 'y': + assert lines.splitlines()[0] == "2048 - The 2048 game" + else: + assert lines.splitlines()[0] == "? - alias for 'help'"
@pytest.mark.boardspec('sandbox') def test_help_no_devicetree(u_boot_console):

The board and directory settings are decoded in two places. Combine the code into a new function, before (in a future patch) expanding the number of items.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/conftest.py | 41 ++++++++++++++++++++++++++++------------- 1 file changed, 28 insertions(+), 13 deletions(-)
diff --git a/test/py/conftest.py b/test/py/conftest.py index ca66b9d9e61..6547c6922c6 100644 --- a/test/py/conftest.py +++ b/test/py/conftest.py @@ -117,14 +117,36 @@ def run_build(config, source_dir, build_dir, board_type, log): runner.close() log.status_pass('OK')
-def pytest_xdist_setupnodes(config, specs): - """Clear out any 'done' file from a previous build""" - global build_done_file - build_dir = config.getoption('build_dir') +def get_details(config): + """Obtain salient details about the board and directories to use + + Args: + config (pytest.Config): pytest configuration + + Returns: + tuple: + str: Board type (U-Boot build name) + str: Identity for the lab board + str: Build directory + str: Source directory + """ board_type = config.getoption('board_type') + board_identity = config.getoption('board_identity') + build_dir = config.getoption('build_dir') + source_dir = os.path.dirname(os.path.dirname(TEST_PY_DIR)) + default_build_dir = source_dir + '/build-' + board_type if not build_dir: - build_dir = source_dir + '/build-' + board_type + build_dir = default_build_dir + + return board_type, board_identity, build_dir, source_dir + +def pytest_xdist_setupnodes(config, specs): + """Clear out any 'done' file from a previous build""" + global build_done_file + + build_dir = get_details(config)[2] + build_done_file = Path(build_dir) / 'build.done' if build_done_file.exists(): os.remove(build_done_file) @@ -163,17 +185,10 @@ def pytest_configure(config): global console global ubconfig
- source_dir = os.path.dirname(os.path.dirname(TEST_PY_DIR)) + board_type, board_identity, build_dir, source_dir = get_details(config)
- board_type = config.getoption('board_type') board_type_filename = board_type.replace('-', '_') - - board_identity = config.getoption('board_identity') board_identity_filename = board_identity.replace('-', '_') - - build_dir = config.getoption('build_dir') - if not build_dir: - build_dir = source_dir + '/build-' + board_type mkdir_p(build_dir)
result_dir = config.getoption('result_dir')

In Labgrid there is the concept of a 'role', which is similar to the board ID in U-Boot's pytest subsystem.
The role indicates both the target and information about the U-Boot build to use. It can also provide any amount of other configuration. The information is obtained using the 'labgrid-client query' operation.
Make use of this in tests, so that only the role is required in gitlab and other situations. The board type and other things can be queried as needed.
Use a new 'u-boot-test-getrole' script to obtain the requested information.
With this it is possible to run lab tests in gitlab with just a single 'ROLE' variable for each board.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/conftest.py | 31 +++++++++++++++++++++++++++---- 1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/test/py/conftest.py b/test/py/conftest.py index 6547c6922c6..5de8d7b0e23 100644 --- a/test/py/conftest.py +++ b/test/py/conftest.py @@ -23,6 +23,7 @@ from pathlib import Path import pytest import re from _pytest.runner import runtestprotocol +import subprocess import sys
# Globals: The HTML log file, and the connection to the U-Boot console. @@ -79,6 +80,7 @@ def pytest_addoption(parser): parser.addoption('--gdbserver', default=None, help='Run sandbox under gdbserver. The argument is the channel '+ 'over which gdbserver should communicate, e.g. localhost:1234') + parser.addoption('--role', help='U-Boot board role (for Labgrid)') parser.addoption('--no-prompt-wait', default=False, action='store_true', help="Assume that U-Boot is ready and don't wait for a prompt")
@@ -130,12 +132,33 @@ def get_details(config): str: Build directory str: Source directory """ - board_type = config.getoption('board_type') - board_identity = config.getoption('board_identity') + role = config.getoption('role') build_dir = config.getoption('build_dir') + if role: + board_identity = role + cmd = ['u-boot-test-getrole', role, '--configure'] + env = os.environ.copy() + if build_dir: + env['U_BOOT_BUILD_DIR'] = build_dir + proc = subprocess.run(cmd, capture_output=True, encoding='utf-8', + env=env) + if proc.returncode: + raise ValueError(proc.stderr) + print('conftest: lab:', proc.stdout) + vals = {} + for line in proc.stdout.splitlines(): + item, value = line.split(' ', maxsplit=1) + k = item.split(':')[-1] + vals[k] = value + print('conftest: lab info:', vals) + board_type, default_build_dir, source_dir = (vals['board'], + vals['build_dir'], vals['source_dir']) + else: + board_type = config.getoption('board_type') + board_identity = config.getoption('board_identity')
- source_dir = os.path.dirname(os.path.dirname(TEST_PY_DIR)) - default_build_dir = source_dir + '/build-' + board_type + source_dir = os.path.dirname(os.path.dirname(TEST_PY_DIR)) + default_build_dir = source_dir + '/build-' + board_type if not build_dir: build_dir = default_build_dir
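To show the output format this parsing assumes, here is an illustrative sketch (not part of this patch; all values are made up) of a minimal u-boot-test-getrole hook. get_details() splits each line on the first space and takes the text after the last ':' in the first word as the key, so plain 'key value' lines are enough:

    #!/usr/bin/env python3
    # Hypothetical u-boot-test-getrole hook: report details for a role
    import sys

    def main():
        role = sys.argv[1]  # e.g. 'rpi3'; '--configure' may follow
        print('board rpi_3')                 # U-Boot build (defconfig) name
        print(f'build_dir /tmp/b/{role}')    # where U-Boot is built
        print('source_dir /home/ci/u-boot')  # source tree to use

    if __name__ == '__main__':
        main()

A real hook would query Labgrid (or whatever the lab uses) for these values rather than hard-coding them.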

There is quite a bit of code to deal with receiving data from the target, so move it into its own receive() function.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_spawn.py | 39 +++++++++++++++++++++++++++------------ 1 file changed, 27 insertions(+), 12 deletions(-)
diff --git a/test/py/u_boot_spawn.py b/test/py/u_boot_spawn.py index 81a09a9d639..cb0d8f702ba 100644 --- a/test/py/u_boot_spawn.py +++ b/test/py/u_boot_spawn.py @@ -139,6 +139,32 @@ class Spawn:
os.write(self.fd, data.encode(errors='replace'))
+ def receive(self, num_bytes): + """Receive data from the sub-process's stdin. + + Args: + num_bytes (int): Maximum number of bytes to read + + Returns: + str: The data received + + Raises: + ValueError if U-Boot died + """ + try: + c = os.read(self.fd, num_bytes).decode(errors='replace') + except OSError as err: + # With sandbox, try to detect when U-Boot exits when it + # shouldn't and explain why. This is much more friendly than + # just dying with an I/O error + if self.decode_signal and err.errno == 5: # I/O error + alive, _, info = self.checkalive() + if alive: + raise err + raise ValueError('U-Boot exited with %s' % info) + raise + return c + def expect(self, patterns): """Wait for the sub-process to emit specific data.
@@ -195,18 +221,7 @@ class Spawn: events = self.poll.poll(poll_maxwait) if not events: raise Timeout() - try: - c = os.read(self.fd, 1024).decode(errors='replace') - except OSError as err: - # With sandbox, try to detect when U-Boot exits when it - # shouldn't and explain why. This is much more friendly than - # just dying with an I/O error - if self.decode_signal and err.errno == 5: # I/O error - alive, _, info = self.checkalive() - if alive: - raise err - raise ValueError('U-Boot exited with %s' % info) - raise + c = self.receive(1024) if self.logfile_read: self.logfile_read.write(c) self.buf += c

The tests currently catch a very broad Exception in each case. This is thrown even in the event of a coding error.
We want to handle exceptions differently depending on their severity, so that we can avoid hour-long delays waiting for a board that is clearly broken.
As a first step, create some new exception types, separating out those which are simply an unexpected result from executing a command, from those which indicate some kind of hardware failure.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_base.py | 26 ++++++++++++++------------ test/py/u_boot_spawn.py | 11 +++++++++++ 2 files changed, 25 insertions(+), 12 deletions(-)
diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index a61eec31148..b656a1f38cd 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -14,6 +14,7 @@ import pytest import re import sys import u_boot_spawn +from u_boot_spawn import BootFail, Timeout, Unexpected
# Regexes for text we expect U-Boot to send to the console. pattern_u_boot_spl_signon = re.compile('(U-Boot SPL \d{4}\.\d{2}[^\r\n]*\))') @@ -164,13 +165,13 @@ class ConsoleBase(object): m = self.p.expect([pattern_u_boot_spl_signon] + self.bad_patterns) if m != 0: - raise Exception('Bad pattern found on SPL console: ' + + raise BootFail('Bad pattern found on SPL console: ' + self.bad_pattern_ids[m - 1]) env_spl_banner_times -= 1
m = self.p.expect([pattern_u_boot_main_signon] + self.bad_patterns) if m != 0: - raise Exception('Bad pattern found on console: ' + + raise BootFail('Bad pattern found on console: ' + self.bad_pattern_ids[m - 1]) self.u_boot_version_string = self.p.after while True: @@ -181,13 +182,9 @@ class ConsoleBase(object): if m == 2: self.p.send(' ') continue - raise Exception('Bad pattern found on console: ' + + raise BootFail('Bad pattern found on console: ' + self.bad_pattern_ids[m - 3])
- except Exception as ex: - self.log.error(str(ex)) - self.cleanup_spawn() - raise finally: self.log.timestamp()
@@ -253,7 +250,7 @@ class ConsoleBase(object): m = self.p.expect([chunk] + self.bad_patterns) if m != 0: self.at_prompt = False - raise Exception('Bad pattern found on console: ' + + raise BootFail('Bad pattern found on console: ' + self.bad_pattern_ids[m - 1]) if not wait_for_prompt: return @@ -263,14 +260,18 @@ class ConsoleBase(object): m = self.p.expect([self.prompt_compiled] + self.bad_patterns) if m != 0: self.at_prompt = False - raise Exception('Bad pattern found on console: ' + + raise BootFail('Missing prompt on console: ' + self.bad_pattern_ids[m - 1]) self.at_prompt = True self.at_prompt_logevt = self.logstream.logfile.cur_evt # Only strip \r\n; space/TAB might be significant if testing # indentation. return self.p.before.strip('\r\n') - except Exception as ex: + except Timeout as exc: + self.log.error(str(exc)) + self.cleanup_spawn() + raise + except BootFail as ex: self.log.error(str(ex)) self.cleanup_spawn() raise @@ -329,8 +330,9 @@ class ConsoleBase(object): text = re.escape(text) m = self.p.expect([text] + self.bad_patterns) if m != 0: - raise Exception('Bad pattern found on console: ' + - self.bad_pattern_ids[m - 1]) + raise Unexpected( + "Unexpected pattern found on console (exp '{text}': " + + self.bad_pattern_ids[m - 1])
def drain_console(self): """Read from and log the U-Boot console for a short time. diff --git a/test/py/u_boot_spawn.py b/test/py/u_boot_spawn.py index cb0d8f702ba..57ea845ad4c 100644 --- a/test/py/u_boot_spawn.py +++ b/test/py/u_boot_spawn.py @@ -17,6 +17,17 @@ import traceback class Timeout(Exception): """An exception sub-class that indicates that a timeout occurred."""
+class BootFail(Exception): + """An exception sub-class that indicates that a boot failure occurred. + + This is used when a bad pattern is seen when waiting for the boot prompt. + It is regarded as fatal, to avoid trying to boot the board again and again to no + avail. + """ + +class Unexpected(Exception): + """An exception sub-class that indicates that unexpected test was seen.""" + class Spawn: """Represents the stdio of a freshly created sub-process. Commands may be sent to the process, and responses waited for.

When the connection to a board dies, assume it is dead forever until some user action is taken. Skip all remaining tests. This avoids CI runs taking an hour, with hundreds of 30-second timeouts all to no avail.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/conftest.py | 19 +++++++++++++++++-- test/py/u_boot_spawn.py | 38 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 55 insertions(+), 2 deletions(-)
diff --git a/test/py/conftest.py b/test/py/conftest.py index 5de8d7b0e23..42af1abd72e 100644 --- a/test/py/conftest.py +++ b/test/py/conftest.py @@ -25,6 +25,7 @@ import re from _pytest.runner import runtestprotocol import subprocess import sys +from u_boot_spawn import BootFail, Timeout, Unexpected, handle_exception
# Globals: The HTML log file, and the connection to the U-Boot console. log = None @@ -280,6 +281,7 @@ def pytest_configure(config): ubconfig.gdbserver = gdbserver ubconfig.no_prompt_wait = config.getoption('no_prompt_wait') ubconfig.dtb = build_dir + '/arch/sandbox/dts/test.dtb' + ubconfig.connection_ok = True
env_vars = ( 'board_type', @@ -446,8 +448,21 @@ def u_boot_console(request): Returns: The fixture value. """ - - console.ensure_spawned() + if not ubconfig.connection_ok: + pytest.skip('Cannot get target connection') + return None + try: + console.ensure_spawned() + except OSError as err: + handle_exception(ubconfig, console, log, err, 'Lab failure', True) + except Timeout as err: + handle_exception(ubconfig, console, log, err, 'Lab timeout', True) + except BootFail as err: + handle_exception(ubconfig, console, log, err, 'Boot fail', True, + console.get_spawn_output()) + except Unexpected as err: + handle_exception(ubconfig, console, log, err, 'Unexpected test output', + False) return console
anchors = {} diff --git a/test/py/u_boot_spawn.py b/test/py/u_boot_spawn.py index 57ea845ad4c..62eb4118731 100644 --- a/test/py/u_boot_spawn.py +++ b/test/py/u_boot_spawn.py @@ -8,6 +8,7 @@ Logic to spawn a sub-process and interact with its stdio. import os import re import pty +import pytest import signal import select import sys @@ -28,6 +29,43 @@ class BootFail(Exception): class Unexpected(Exception): """An exception sub-class that indicates that unexpected test was seen."""
+ +def handle_exception(ubconfig, console, log, err, name, fatal, output=''): + """Handle an exception from the console + + Exceptions can occur when there is unexpected output or due to the board + crashing or hanging. Some exceptions are likely fatal, where retrying will + just chew up time to no avail. In those cases it is best to cause + further tests to be skipped. + + Args: + ubconfig (ArbitraryAttributeContainer): ubconfig object + log (Logfile): Place to log errors + console (ConsoleBase): Console to clean up, if fatal + err (Exception): Exception which was thrown + name (str): Name of problem, to log + fatal (bool): True to abort all tests + output (str): Extra output to report on boot failure. This can show the + target's console output as it tried to boot + """ + msg = f'{name}: ' + if fatal: + msg += 'Marking connection bad - no other tests will run' + else: + msg += 'Assuming that lab is healthy' + print(msg) + log.error(msg) + log.error(f'Error: {err}') + + if output: + msg += f'; output {output}' + + if fatal: + ubconfig.connection_ok = False + console.cleanup_spawn() + pytest.exit(msg) + + class Spawn: """Represents the stdio of a freshly created sub-process. Commands may be sent to the process, and responses waited for.

Use the new handle_exception() function in ConsoleBase as well.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_base.py | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index b656a1f38cd..dbf54eadd0f 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -14,7 +14,7 @@ import pytest import re import sys import u_boot_spawn -from u_boot_spawn import BootFail, Timeout, Unexpected +from u_boot_spawn import BootFail, Timeout, Unexpected, handle_exception
# Regexes for text we expect U-Boot to send to the console. pattern_u_boot_spl_signon = re.compile('(U-Boot SPL \d{4}\.\d{2}[^\r\n]*\))') @@ -268,12 +268,12 @@ class ConsoleBase(object): # indentation. return self.p.before.strip('\r\n') except Timeout as exc: - self.log.error(str(exc)) - self.cleanup_spawn() + handle_exception(self.config, self, self.log, exc, 'Lab failure', + True) raise - except BootFail as ex: - self.log.error(str(ex)) - self.cleanup_spawn() + except BootFail as exc: + handle_exception(self.config, self, self.log, exc, 'Boot fail', + True, self.get_spawn_output()) raise finally: self.log.timestamp()

There is quite a bit of code in pytest to try to start up U-Boot on a board, with timeouts, expects, etc.
This is tedious to maintain and is peripheral to the test system's purpose. It seems better to put this logic in the lab itself, where it can provide such support.
With Labgrid we can use the UbootStrategy class to get the board into a useful state, in whatever way it needs to. Then it can report to pytest by writing a suitable string along with the U-Boot version it detected.
Add support for detecting 'lab mode' and simply assume that all is well in that case. Collect the version string when Labgrid says it is ready.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_base.py | 68 ++++++++++++++++++++++++++-------- 1 file changed, 53 insertions(+), 15 deletions(-)
diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index dbf54eadd0f..cfde57649f1 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -23,13 +23,21 @@ pattern_stop_autoboot_prompt = re.compile('Hit any key to stop autoboot: ') pattern_unknown_command = re.compile('Unknown command '.*' - try 'help'') pattern_error_notification = re.compile('## Error: ') pattern_error_please_reset = re.compile('### ERROR ### Please RESET the board ###') -pattern_ready_prompt = re.compile('U-Boot is ready') +pattern_ready_prompt = re.compile('{lab ready in (.*)s: (.*)}') +pattern_lab_mode = re.compile('{lab mode.*}')
PAT_ID = 0 PAT_RE = 1
# Timeout before expecting the console to be ready (in milliseconds) -TIMEOUT_MS = 30000 +TIMEOUT_MS = 30000 # Standard timeout + +# Timeout for board preparation in lab mode. This needs to be enough to build +# U-Boot, write it to the board and then boot the board. Since this process is +# under the control of another program (e.g. Labgrid), it will fail sooner +# if something goes wrong. So use a very long timeout here to cover all possible +# situations. +TIMEOUT_PREPARE_MS = 3 * 60 * 1000
bad_pattern_defs = ( ('spl_signon', pattern_u_boot_spl_signon), @@ -117,6 +125,7 @@ class ConsoleBase(object):
self.at_prompt = False self.at_prompt_logevt = None + self.lab_mode = False
def get_spawn(self): # This is not called, ssubclass must define this. @@ -150,40 +159,69 @@ class ConsoleBase(object): self.p.close() self.logstream.close()
+ def set_lab_mode(self): + """Select lab mode + + This tells us that we will get a 'lab ready' message when the board is + ready for use. We don't need to look for signon messages. + """ + self.log.info(f'test.py: Lab mode is active') + self.p.timeout = TIMEOUT_PREPARE_MS + self.lab_mode = True + def wait_for_boot_prompt(self, loop_num = 1): """Wait for the boot up until command prompt. This is for internal use only. """ try: + self.log.info('Waiting for U-Boot to be ready') bcfg = self.config.buildconfig config_spl_serial = bcfg.get('config_spl_serial', 'n') == 'y' env_spl_skipped = self.config.env.get('env__spl_skipped', False) env_spl_banner_times = self.config.env.get('env__spl_banner_times', 1)
- while loop_num > 0: + while not self.lab_mode and loop_num > 0: loop_num -= 1 while config_spl_serial and not env_spl_skipped and env_spl_banner_times > 0: - m = self.p.expect([pattern_u_boot_spl_signon] + - self.bad_patterns) - if m != 0: + m = self.p.expect([pattern_u_boot_spl_signon, + pattern_lab_mode] + self.bad_patterns) + if m == 1: + self.set_lab_mode() + break + elif m != 0: raise BootFail('Bad pattern found on SPL console: ' + - self.bad_pattern_ids[m - 1]) + self.bad_pattern_ids[m - 1]) env_spl_banner_times -= 1
- m = self.p.expect([pattern_u_boot_main_signon] + self.bad_patterns) - if m != 0: - raise BootFail('Bad pattern found on console: ' + - self.bad_pattern_ids[m - 1]) - self.u_boot_version_string = self.p.after + if not self.lab_mode: + m = self.p.expect([pattern_u_boot_main_signon, + pattern_lab_mode] + self.bad_patterns) + if m == 1: + self.set_lab_mode() + elif m != 0: + raise BootFail('Bad pattern found on console: ' + + self.bad_pattern_ids[m - 1]) + if not self.lab_mode: + self.u_boot_version_string = self.p.after while True: m = self.p.expect([self.prompt_compiled, pattern_ready_prompt, pattern_stop_autoboot_prompt] + self.bad_patterns) - if m == 0 or m == 1: + if m == 0: + self.log.info(f'Found ready prompt {m}') + break + elif m == 1: + m = pattern_ready_prompt.search(self.p.after) + self.u_boot_version_string = m.group(2) + self.log.info(f'Lab: Board is ready') + self.p.timeout = TIMEOUT_MS break if m == 2: + self.log.info(f'Found autoboot prompt {m}') self.p.send(' ') continue - raise BootFail('Bad pattern found on console: ' + - self.bad_pattern_ids[m - 3]) + if not self.lab_mode: + raise BootFail('Missing prompt / ready message on console: ' + + self.bad_pattern_ids[m - 3]) + self.log.info(f'U-Boot is ready')
finally: self.log.timestamp()
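For illustration, the console messages these new patterns expect would look something like the following (the exact wording is up to the lab; only the '{lab mode' prefix and the '{lab ready in <seconds>s: <version>}' shape matter to the regexes above):

    {lab mode: building and writing U-Boot}
    {lab ready in 42.3s: U-Boot 2024.07-rc4-00123-gdeadbeef (Jun 21 2024 - 13:51:21 -0600)}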

We expect commands to be echoed and this should happen quite quickly, since U-Boot is sitting at the prompt waiting for a command.
Reduce the timeout for this situation. Try to produce a more useful error message when something goes wrong. Also handle the case where the connection has gone away since the last command was issued.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_base.py | 35 ++++++++++++++++++++-------------- 1 file changed, 21 insertions(+), 14 deletions(-)
diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index cfde57649f1..eda48cd35f7 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -31,6 +31,7 @@ PAT_RE = 1
# Timeout before expecting the console to be ready (in milliseconds) TIMEOUT_MS = 30000 # Standard timeout +TIMEOUT_CMD_MS = 10000 # Command-echo timeout
# Timeout for board preparation in lab mode. This needs to be enough to build # U-Boot, write it to the board and then boot the board. Since this process is @@ -274,22 +275,28 @@ class ConsoleBase(object):
try: self.at_prompt = False + if not self.p: + raise BootFail( + f"Lab failure: Connection lost when sending command '{cmd}'") + if send_nl: cmd += '\n' - while cmd: - # Limit max outstanding data, so UART FIFOs don't overflow - chunk = cmd[:self.max_fifo_fill] - cmd = cmd[self.max_fifo_fill:] - self.p.send(chunk) - if not wait_for_echo: - continue - chunk = re.escape(chunk) - chunk = chunk.replace('\\n', '[\r\n]') - m = self.p.expect([chunk] + self.bad_patterns) - if m != 0: - self.at_prompt = False - raise BootFail('Bad pattern found on console: ' + - self.bad_pattern_ids[m - 1]) + rem = cmd # Remaining to be sent + with self.temporary_timeout(TIMEOUT_CMD_MS): + while rem: + # Limit max outstanding data, so UART FIFOs don't overflow + chunk = rem[:self.max_fifo_fill] + rem = rem[self.max_fifo_fill:] + self.p.send(chunk) + if not wait_for_echo: + continue + chunk = re.escape(chunk) + chunk = chunk.replace('\\n', '[\r\n]') + m = self.p.expect([chunk] + self.bad_patterns) + if m != 0: + self.at_prompt = False + raise BootFail(f"Failed to get echo on console (cmd '{cmd}':rem '{rem}'): " + + self.bad_pattern_ids[m - 1]) if not wait_for_prompt: return if wait_for_reboot:

Fix a typo in a comment.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_base.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index eda48cd35f7..1b00821c5e9 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -97,7 +97,7 @@ class ConsoleBase(object): Can only usefully be called by sub-classes.
Args: - log: A mulptiplex_log.Logfile object, to which the U-Boot output + log: A multiplexed_log.Logfile object, to which the U-Boot output will be logged. config: A configuration data structure, as built by conftest.py. max_fifo_fill: The maximum number of characters to send to U-Boot

There is a very annoying bug at present where the terminal echoes part of the first command sent to the board. This happens because the terminal is still set to echo for a period until Labgrid starts up and can change this.
Fix this by disabling echo (and other terminal features) as soon as the spawn happens.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Only disable echo if a terminal is detected
test/py/u_boot_spawn.py | 13 +++++++++++++ 1 file changed, 13 insertions(+)
diff --git a/test/py/u_boot_spawn.py b/test/py/u_boot_spawn.py index 62eb4118731..c0ff0813554 100644 --- a/test/py/u_boot_spawn.py +++ b/test/py/u_boot_spawn.py @@ -12,6 +12,7 @@ import pytest import signal import select import sys +import termios import time import traceback
@@ -117,11 +118,23 @@ class Spawn: finally: os._exit(255)
+ old = None try: + if os.isatty(sys.stdout.fileno()): + new = termios.tcgetattr(self.fd) + old = new + new[3] = new[3] & ~(termios.ICANON | termios.ISIG) + new[3] = new[3] & ~termios.ECHO + new[6][termios.VMIN] = 0 + new[6][termios.VTIME] = 0 + termios.tcsetattr(self.fd, termios.TCSANOW, new) + self.poll = select.poll() self.poll.register(self.fd, select.POLLIN | select.POLLPRI | select.POLLERR | select.POLLHUP | select.POLLNVAL) except: + if old: + termios.tcsetattr(self.fd, termios.TCSANOW, old) self.close() raise

Send the Labgrid quit characters to ask it to exit gracefully. This typically allows it to power off the board being used.
If that doesn't work, try the less graceful approach.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_spawn.py | 17 +++++++++++++++-- 1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/test/py/u_boot_spawn.py b/test/py/u_boot_spawn.py index c0ff0813554..ec1fa465047 100644 --- a/test/py/u_boot_spawn.py +++ b/test/py/u_boot_spawn.py @@ -16,6 +16,9 @@ import termios import time import traceback
+# Character to send (twice) to exit the terminal +EXIT_CHAR = 0x1d # FS (Ctrl + ]) + class Timeout(Exception): """An exception sub-class that indicates that a timeout occurred."""
@@ -304,15 +307,25 @@ class Spawn: None.
Returns: - Nothing. + str: Type of closure completed """ + self.send(chr(EXIT_CHAR) * 2)
+ # Wait about 10 seconds for Labgrid to close and power off the board + for _ in range(100): + if not self.isalive(): + return 'normal' + time.sleep(0.1) + + # That didn't work, so try closing the PTY os.close(self.fd) for _ in range(100): if not self.isalive(): - break + return 'break' time.sleep(0.1)
+ return 'timeout' + def get_expect_output(self): """Return the output read by expect()

This can take a while and involve multiple steps (e.g. turning the board back off). Add a section for it and show the output.
Signed-off-by: Simon Glass sjg@chromium.org ---
(no changes since v1)
test/py/u_boot_console_base.py | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/test/py/u_boot_console_base.py b/test/py/u_boot_console_base.py index 1b00821c5e9..95096fc123c 100644 --- a/test/py/u_boot_console_base.py +++ b/test/py/u_boot_console_base.py @@ -157,7 +157,10 @@ class ConsoleBase(object): """
if self.p: - self.p.close() + self.log.start_section('Stopping U-Boot') + close_type = self.p.close() + self.log.info(f'Close type: {close_type}') + self.log.end_section('Stopping U-Boot') self.logstream.close()
def set_lab_mode(self):

Some configuration is now in variables with a CFG_ prefix. Add these to the .cfg file so that we can see everything in one place. Sort the options so they are easier to find and compare.
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Add new patch to update u-boot.cfg with CFG_... options
scripts/Makefile.autoconf | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/Makefile.autoconf b/scripts/Makefile.autoconf index b42f9b525fe..65ff11ea508 100644 --- a/scripts/Makefile.autoconf +++ b/scripts/Makefile.autoconf @@ -71,7 +71,7 @@ quiet_cmd_autoconf = GEN $@ quiet_cmd_u_boot_cfg = CFG $@ cmd_u_boot_cfg = \ $(CPP) $(c_flags) $2 -DDO_DEPS_ONLY -dM include/config.h > $@.tmp && { \ - grep 'define CONFIG_' $@.tmp | \ + egrep 'define (CONFIG_|CFG_)' $@.tmp | sort | \ sed '/define CONFIG_IS_ENABLED(/d;/define CONFIG_IF_ENABLED_INT(/d;/define CONFIG_VAL(/d;' > $@; \ rm $@.tmp; \ } || { \
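With this change u-boot.cfg contains both kinds of option, sorted, for example (option names are real, values are illustrative only):

    #define CFG_EXTRA_ENV_SETTINGS "bootdelay=2\0"
    #define CFG_SYS_SDRAM_BASE 0x40000000
    #define CONFIG_BOOTCOMMAND "run distro_bootcmd"

Since the output is sorted, the CFG_ lines typically appear ahead of the CONFIG_ ones, making it easier to compare the two sets.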

Add a way to run tests on boards in a real hardware lab. This is at a very early, experimental stage. There are only 23 boards, and 3 of those (bob, ff3399, samus) are broken! A fourth fails due to problems with the TPM tests.
To try this, assuming you have gitlab access, set SJG_LAB=1, e.g.:
git push -o ci.variable="SJG_LAB=1" dm HEAD:try
Signed-off-by: Simon Glass sjg@chromium.org ---
Changes in v2: - Avoid running a docker image for skipped lab tests
.gitlab-ci.yml | 153 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 153 insertions(+)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 165f765a833..75c18a0f2f7 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -17,6 +17,7 @@ stages: - testsuites - test.py - world build + - sjg-lab
.buildman_and_testpy_template: &buildman_and_testpy_dfn stage: test.py @@ -482,3 +483,155 @@ coreboot test.py: TEST_PY_TEST_SPEC: "not sleep" TEST_PY_ID: "--id qemu" <<: *buildman_and_testpy_dfn + +.lab_template: &lab_dfn + stage: sjg-lab + rules: + - if: $SJG_LAB == "1" + when: always + - when: manual + tags: [ 'lab' ] + script: + - if [[ -z "${SJG_LAB}" ]]; then + exit 0; + fi + # Environment: + # SRC - source tree + # OUT - output directory for builds + - export SRC="$(pwd)" + - export OUT="${SRC}/build/${BOARD}" + - export PATH=$PATH:~/bin + - export PATH=$PATH:/vid/software/devel/ubtest/u-boot-test-hooks/bin + + # Load it on the device + - ret=0 + - echo "role ${ROLE}" + - export strategy="-s uboot -e off" + # export verbose="-v" + - ${SRC}/test/py/test.py --role ${ROLE} --build-dir "${OUT}" + --capture=tee-sys -k "not bootstd"|| ret=$? + - U_BOOT_BOARD_IDENTITY="${ROLE}" u-boot-test-release || true + - if [[ $ret -ne 0 ]]; then + exit $ret; + fi + artifacts: + when: always + paths: + - "build/${BOARD}/test-log.html" + - "build/${BOARD}/multiplexed_log.css" + expire_in: 1 week + +rpi3: + variables: + ROLE: rpi3 + <<: *lab_dfn + +opi_pc: + variables: + ROLE: opi_pc + <<: *lab_dfn + +pcduino3_nano: + variables: + ROLE: pcduino3_nano + <<: *lab_dfn + +samus: + variables: + ROLE: samus + <<: *lab_dfn + +link: + variables: + ROLE: link + <<: *lab_dfn + +jerry: + variables: + ROLE: jerry + <<: *lab_dfn + +minnowmax: + variables: + ROLE: minnowmax + <<: *lab_dfn + +opi_pc2: + variables: + ROLE: opi_pc2 + <<: *lab_dfn + +bpi: + variables: + ROLE: bpi + <<: *lab_dfn + +rpi2: + variables: + ROLE: rpi2 + <<: *lab_dfn + +bob: + variables: + ROLE: bob + <<: *lab_dfn + +ff3399: + variables: + ROLE: ff3399 + <<: *lab_dfn + +coral: + variables: + ROLE: coral + <<: *lab_dfn + +rpi3z: + variables: + ROLE: rpi3z + <<: *lab_dfn + +bbb: + variables: + ROLE: bbb + <<: *lab_dfn + +kevin: + variables: + ROLE: kevin + <<: *lab_dfn + +pine64: + variables: + ROLE: pine64 + <<: *lab_dfn + +c4: + variables: + ROLE: c4 + <<: *lab_dfn + +rpi4: + variables: + ROLE: rpi4 + <<: *lab_dfn + +rpi0: + variables: + ROLE: rpi0 + <<: *lab_dfn + +snow: + variables: + ROLE: snow + <<: *lab_dfn + +pcduino3: + variables: + ROLE: pcduino3 + <<: *lab_dfn + +nyan-big: + variables: + ROLE: nyan-big + <<: *lab_dfn

On Fri, Jun 21, 2024 at 01:51:21PM -0600, Simon Glass wrote:
Can you please split this up into 3-4 series so that they can be reviewed and merged? Thanks.

Hi Tom,
OK will do.
One I have already split out ("Bug-fixes for a few boards"), so I'll split out a few more.
Regards, Simon