[PATCH 0/3] test: Try to deal with some co-dependent tests

Tests are supposed to be independent. With driver model tests, the environment is reset before each test, which ensures that.
With Python tests there is no reset of the board between tests, since we want to run all the tests as quickly as possible and without needing the external scripts running constantly.
In principle the Python tests can be independent if they each put the world back the way they found it, but it turns out that some are not. This means that some tests cannot be run unless another test is run first. It also means that tests cannot be run in parallel, e.g. on sandbox.
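To illustrate the principle, a fully self-contained test might look like the sketch below (hypothetical, not part of this series; the variable and test names are made up). It creates what it needs and restores the environment before returning:

# Hypothetical sketch of a self-contained test; names are illustrative only.
import pytest

@pytest.mark.buildconfigspec('cmd_echo')
def test_example_self_contained(u_boot_console):
    """Set up, check and restore state within a single test."""
    u_boot_console.run_command('setenv ut_example_var 1')
    try:
        response = u_boot_console.run_command('echo $ut_example_var')
        assert response.strip() == '1'
    finally:
        # Put the world back: delete the variable so later tests are unaffected
        u_boot_console.run_command('setenv ut_example_var')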
This series fixes some of them. Those that remain:
test_gpt_swap_partitions - not sure?
test_pinmux_status - not sure?
test_sqfs_load - cannot be run more than once!
test_bind_unbind_with_uclass - relies on previous test
The last one would be much better done as a C test, so it doesn't have to deal with the changing driver tree. There isn't a lot of value in running the test on a real board, since sandbox should find any bugs in driver model or the 'bind' command.
If the above can be resolved we can enable parallel tests. On my test machine (32 threads) it reduces the time from 38 seconds to 7.5s
To use this feature: pip3 install pytest-xdist
test/py/test.py -B sandbox --build-dir /tmp/xx -q -k 'not slow' -n32
Simon Glass (3):
  test: Allow vboot tests to run in parallel
  test: Allow hush tests to run in parallel
  test: Allow tpm2 tests to run in parallel
 test/py/tests/test_hush_if_test.py | 20 ++++--------
 test/py/tests/test_tpm2.py         | 52 ++++++++++++++++++++++++++----
 test/py/tests/test_vboot.py        | 30 +++++++++--------
 3 files changed, 69 insertions(+), 33 deletions(-)

Update the tests to use separate working directories, so we can run them in parallel. It also makes it possible to see the individual output files after the tests have completed.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 test/py/tests/test_vboot.py | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)
diff --git a/test/py/tests/test_vboot.py b/test/py/tests/test_vboot.py
index e45800d94c0..fb43c6ef35a 100644
--- a/test/py/tests/test_vboot.py
+++ b/test/py/tests/test_vboot.py
@@ -24,22 +24,23 @@ For configuration verification:
 Tests run with both SHA1 and SHA256 hashing.
 """

+import os
 import struct
 import pytest
 import u_boot_utils as util
 import vboot_forge

 TESTDATA = [
-    ['sha1', '', None, False],
-    ['sha1', '', '-E -p 0x10000', False],
-    ['sha1', '-pss', None, False],
-    ['sha1', '-pss', '-E -p 0x10000', False],
-    ['sha256', '', None, False],
-    ['sha256', '', '-E -p 0x10000', False],
-    ['sha256', '-pss', None, False],
-    ['sha256', '-pss', '-E -p 0x10000', False],
-    ['sha256', '-pss', None, True],
-    ['sha256', '-pss', '-E -p 0x10000', True],
+    ['sha1-basic', 'sha1', '', None, False],
+    ['sha1-pad', 'sha1', '', '-E -p 0x10000', False],
+    ['sha1-pss', 'sha1', '-pss', None, False],
+    ['sha1-pss-pad', 'sha1', '-pss', '-E -p 0x10000', False],
+    ['sha256-basic', 'sha256', '', None, False],
+    ['sha256-pad', 'sha256', '', '-E -p 0x10000', False],
+    ['sha256-pss', 'sha256', '-pss', None, False],
+    ['sha256-pss-pad', 'sha256', '-pss', '-E -p 0x10000', False],
+    ['sha256-pss-required', 'sha256', '-pss', None, True],
+    ['sha256-pss-pad-required', 'sha256', '-pss', '-E -p 0x10000', True],
 ]

 @pytest.mark.boardspec('sandbox')
@@ -48,8 +49,9 @@ TESTDATA = [
 @pytest.mark.requiredtool('fdtget')
 @pytest.mark.requiredtool('fdtput')
 @pytest.mark.requiredtool('openssl')
-@pytest.mark.parametrize("sha_algo,padding,sign_options,required", TESTDATA)
-def test_vboot(u_boot_console, sha_algo, padding, sign_options, required):
+@pytest.mark.parametrize("name,sha_algo,padding,sign_options,required",
+                         TESTDATA)
+def test_vboot(u_boot_console, name, sha_algo, padding, sign_options, required):
     """Test verified boot signing with mkimage and verification with 'bootm'.

     This works using sandbox only as it needs to update the device tree used
@@ -331,7 +333,9 @@ def test_vboot(u_boot_console, sha_algo, padding, sign_options, required):
         run_bootm(sha_algo, 'multi required key', '', False)

     cons = u_boot_console
-    tmpdir = cons.config.result_dir + '/'
+    tmpdir = os.path.join(cons.config.result_dir, name) + '/'
+    if not os.path.exists(tmpdir):
+        os.mkdir(tmpdir)
     datadir = cons.config.source_dir + '/test/py/tests/vboot/'
     fit = '%stest.fit' % tmpdir
     mkimage = cons.config.build_dir + '/tools/mkimage'
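For clarity, the per-test working directory used above can be summarised by this standalone sketch (the helper name is made up for illustration). The 'name' value is the first element of each TESTDATA entry, so each parametrised case writes into its own directory and parallel runs do not clash:

# Hypothetical helper mirroring the tmpdir handling in the patch above.
import os

def make_test_dir(result_dir, name):
    """Return a directory unique to this test case, creating it if needed."""
    tmpdir = os.path.join(result_dir, name) + '/'
    if not os.path.exists(tmpdir):
        os.mkdir(tmpdir)
    return tmpdir

# Usage (illustrative): tmpdir = make_test_dir(cons.config.result_dir, 'sha256-pss-pad')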

The -z tests don't really need to be part of the main set. Separate them out so we can drop the test setup/cleanup functions and thus run all tests in parallel.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 test/py/tests/test_hush_if_test.py | 20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/test/py/tests/test_hush_if_test.py b/test/py/tests/test_hush_if_test.py
index d117921a6ac..37c1608bb22 100644
--- a/test/py/tests/test_hush_if_test.py
+++ b/test/py/tests/test_hush_if_test.py
@@ -119,11 +119,6 @@ subtests = (
     ('test ! ! aaa != aaa -o ! ! bbb = bbb', True),
     ('test ! ! aaa = aaa -o ! ! bbb != bbb', True),
     ('test ! ! aaa = aaa -o ! ! bbb = bbb', True),
-
-    # -z operator.
-
-    ('test -z "$ut_var_nonexistent"', True),
-    ('test -z "$ut_var_exists"', False),
 )

 def exec_hush_if(u_boot_console, expr, result):
@@ -141,12 +136,6 @@ def exec_hush_if(u_boot_console, expr, result):
     response = u_boot_console.run_command(cmd)
     assert response.strip() == str(result).lower()

-def test_hush_if_test_setup(u_boot_console):
-    """Set up environment variables used during the "if" tests."""
-
-    u_boot_console.run_command('setenv ut_var_nonexistent')
-    u_boot_console.run_command('setenv ut_var_exists 1')
-
 @pytest.mark.buildconfigspec('cmd_echo')
 @pytest.mark.parametrize('expr,result', subtests)
 def test_hush_if_test(u_boot_console, expr, result):
@@ -154,9 +143,12 @@ def test_hush_if_test(u_boot_console, expr, result):

     exec_hush_if(u_boot_console, expr, result)

-def test_hush_if_test_teardown(u_boot_console):
-    """Clean up environment variables used during the "if" tests."""
-
+def test_hush_z(u_boot_console):
+    """Test the -z operator"""
+    u_boot_console.run_command('setenv ut_var_nonexistent')
+    u_boot_console.run_command('setenv ut_var_exists 1')
+    exec_hush_if(u_boot_console, 'test -z "$ut_var_nonexistent"', True)
+    exec_hush_if(u_boot_console, 'test -z "$ut_var_exists"', False)
     u_boot_console.run_command('setenv ut_var_exists')

 # We might test this on real filesystems via UMS, DFU, 'save', etc.
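An alternative way to keep the -z checks order-independent (not what this patch does) would be a pytest fixture that creates and removes the variables. A sketch with hypothetical names, reusing the exec_hush_if() helper defined in this file:

# Hypothetical alternative using a pytest fixture; not part of this patch.
import pytest

@pytest.fixture
def ut_hush_vars(u_boot_console):
    """Create the environment variables used by the -z checks, then remove them."""
    u_boot_console.run_command('setenv ut_var_nonexistent')
    u_boot_console.run_command('setenv ut_var_exists 1')
    yield
    u_boot_console.run_command('setenv ut_var_exists')

def test_hush_z_fixture(u_boot_console, ut_hush_vars):
    # exec_hush_if() is the helper defined earlier in test_hush_if_test.py.
    exec_hush_if(u_boot_console, 'test -z "$ut_var_nonexistent"', True)
    exec_hush_if(u_boot_console, 'test -z "$ut_var_exists"', False)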

These tests currently run in a particular sequence, with some of them depending on the actions of earlier tests.
Add a check for sandbox and reset to a known state at the start of each test, so that all tests can run in parallel.
Signed-off-by: Simon Glass <sjg@chromium.org>
---
 test/py/tests/test_tpm2.py | 52 +++++++++++++++++++++++++++++++++-----
 1 file changed, 46 insertions(+), 6 deletions(-)
diff --git a/test/py/tests/test_tpm2.py b/test/py/tests/test_tpm2.py
index 70f906da511..49e7ebfa5bf 100644
--- a/test/py/tests/test_tpm2.py
+++ b/test/py/tests/test_tpm2.py
@@ -40,10 +40,14 @@ def force_init(u_boot_console, force=False):
     u_boot_console.run_command('tpm2 clear TPM2_RH_PLATFORM')
     u_boot_console.run_command('echo --- end of init ---')

+def is_sandbox(cons):
+    # Array slice removes leading/trailing quotes.
+    sys_arch = cons.config.buildconfig.get('config_sys_arch', '"sandbox"')[1:-1]
+    return sys_arch == 'sandbox'
+
 @pytest.mark.buildconfigspec('cmd_tpm_v2')
 def test_tpm2_init(u_boot_console):
     """Init the software stack to use TPMv2 commands."""
-
     u_boot_console.run_command('tpm2 init')
     output = u_boot_console.run_command('echo $?')
     assert output.endswith('0')
@@ -54,17 +58,43 @@ def test_tpm2_startup(u_boot_console):

     Initiate the TPM internal state machine.
     """
+    u_boot_console.run_command('tpm2 startup TPM2_SU_CLEAR')
+    output = u_boot_console.run_command('echo $?')
+    assert output.endswith('0')
+
+def tpm2_sandbox_init(u_boot_console):
+    """Put sandbox back into a known state so we can run a test
+
+    This allows all tests to run in parallel, since no test depends on another.
+    """
+    u_boot_console.restart_uboot()
+    u_boot_console.run_command('tpm2 init')
+    output = u_boot_console.run_command('echo $?')
+    assert output.endswith('0')

     u_boot_console.run_command('tpm2 startup TPM2_SU_CLEAR')
     output = u_boot_console.run_command('echo $?')
     assert output.endswith('0')

+    u_boot_console.run_command('tpm2 self_test full')
+    output = u_boot_console.run_command('echo $?')
+    assert output.endswith('0')
+
 @pytest.mark.buildconfigspec('cmd_tpm_v2')
-def test_tpm2_self_test_full(u_boot_console):
+def test_tpm2_sandbox_self_test_full(u_boot_console):
     """Execute a TPM2_SelfTest (full) command.

     Ask the TPM to perform all self tests to also enable full capabilities.
     """
+    if is_sandbox(u_boot_console):
+        u_boot_console.restart_uboot()
+        u_boot_console.run_command('tpm2 init')
+        output = u_boot_console.run_command('echo $?')
+        assert output.endswith('0')
+
+        u_boot_console.run_command('tpm2 startup TPM2_SU_CLEAR')
+        output = u_boot_console.run_command('echo $?')
+        assert output.endswith('0')

     u_boot_console.run_command('tpm2 self_test full')
     output = u_boot_console.run_command('echo $?')
@@ -77,7 +107,8 @@ def test_tpm2_continue_self_test(u_boot_console):
     Ask the TPM to finish its self tests (alternative to the full test) in
     order to enter a fully operational state.
     """
-
+    if is_sandbox(u_boot_console):
+        tpm2_sandbox_init(u_boot_console)
     u_boot_console.run_command('tpm2 self_test continue')
     output = u_boot_console.run_command('echo $?')
     assert output.endswith('0')
@@ -94,6 +125,8 @@ def test_tpm2_clear(u_boot_console):
     not have a password set, otherwise this test will fail. ENDORSEMENT and
     PLATFORM hierarchies are also available.
     """
+    if is_sandbox(u_boot_console):
+        tpm2_sandbox_init(u_boot_console)

     u_boot_console.run_command('tpm2 clear TPM2_RH_LOCKOUT')
     output = u_boot_console.run_command('echo $?')
@@ -112,7 +145,8 @@ def test_tpm2_change_auth(u_boot_console):
     Use the LOCKOUT hierarchy for this. ENDORSEMENT and PLATFORM hierarchies
     are also available.
     """
-
+    if is_sandbox(u_boot_console):
+        tpm2_sandbox_init(u_boot_console)
     force_init(u_boot_console)

     u_boot_console.run_command('tpm2 change_auth TPM2_RH_LOCKOUT unicorn')
@@ -136,6 +170,8 @@ def test_tpm2_get_capability(u_boot_console):
     There is no expected default values because it would depend on the chip
     used. We can still save them in order to check they have changed later.
     """
+    if is_sandbox(u_boot_console):
+        tpm2_sandbox_init(u_boot_console)

     force_init(u_boot_console)
     ram = u_boot_utils.find_ram_base(u_boot_console)
@@ -158,7 +194,8 @@ def test_tpm2_dam_parameters(u_boot_console):
     the authentication, otherwise the lockout will be engaged after the first
     failed authentication attempt.
     """
-
+    if is_sandbox(u_boot_console):
+        tpm2_sandbox_init(u_boot_console)
     force_init(u_boot_console)
     ram = u_boot_utils.find_ram_base(u_boot_console)
@@ -181,6 +218,8 @@ def test_tpm2_pcr_read(u_boot_console):

     Perform a PCR read of the 0th PCR. Must be zero.
     """
+    if is_sandbox(u_boot_console):
+        tpm2_sandbox_init(u_boot_console)

     force_init(u_boot_console)
     ram = u_boot_utils.find_ram_base(u_boot_console)
@@ -208,7 +247,8 @@ def test_tpm2_pcr_extend(u_boot_console):
     No authentication mechanism is used here, not protecting against packet
     replay, yet.
     """
-
+    if is_sandbox(u_boot_console):
+        tpm2_sandbox_init(u_boot_console)
     force_init(u_boot_console)
     ram = u_boot_utils.find_ram_base(u_boot_console)
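For a future TPMv2 test, the intended usage of the helpers added above would presumably look like this (hypothetical test; only commands already used in this file appear):

# Hypothetical example of a new test using is_sandbox()/tpm2_sandbox_init()
# from this patch so it can run in any order; the test name is made up.
import pytest

@pytest.mark.buildconfigspec('cmd_tpm_v2')
def test_tpm2_example(u_boot_console):
    """Start from a known TPM state so the test does not depend on ordering."""
    if is_sandbox(u_boot_console):
        tpm2_sandbox_init(u_boot_console)
    u_boot_console.run_command('tpm2 self_test continue')
    output = u_boot_console.run_command('echo $?')
    assert output.endswith('0')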

On 2/8/21 5:05 AM, Simon Glass wrote:
Tests are supposed to be independent. With driver model tests, the environment is reset before each test, which ensures that.
With Python tests there is no reset of the board between tests, since we want to run all the tests as quickly as possible and without needing the external scripts running constantly.
In principle the Python tests can be independent if they each put the world back the way they found it, but it turns out that some are not. This means that some tests cannot be run unless another test is run first. It also means that tests cannot be run in parallel, e.g. on sandbox.
This series fixes some of them. Those that remain:
test_gpt_swap_partitions - not sure?
test_pinmux_status - not sure?
test_sqfs_load - cannot be run more than once!
test_bind_unbind_with_uclass - relies on previous test
The last one would be much better done as a C test, so it doesn't have to deal with the changing driver tree. There isn't a lot of value in running the test on a real board, since sandbox should find any bugs in driver model or the 'bind' command.
If the above can be resolved we can enable parallel tests. On my test machine (32 threads) it reduces the time from 38 seconds to 7.5s
To use this feature: pip3 install pytest-xdist
test/py/test.py -B sandbox --build-dir /tmp/xx -q -k 'not slow' -n32
Thanks for looking into parallelization.
What I am missing in this series is a patch for doc/develop/py_testing.rst describing how parallelization of Python tests is controlled.
I have seen that test/py/tests/test_fs/test_basic.py test_fs1() is always failing on my machine because the test file 2.5GB.file is truncated. It is not truncated if I add some waiting time.
Could this be caused by parallelization?
Package 'python3-pytest-xdist' is not installed on my system.
Best regards
Heinrich
Simon Glass (3):
  test: Allow vboot tests to run in parallel
  test: Allow hush tests to run in parallel
  test: Allow tpm2 tests to run in parallel
 test/py/tests/test_hush_if_test.py | 20 ++++--------
 test/py/tests/test_tpm2.py         | 52 ++++++++++++++++++++++++++----
 test/py/tests/test_vboot.py        | 30 +++++++++--------
 3 files changed, 69 insertions(+), 33 deletions(-)

Hi Heinrich,
On Mon, 8 Feb 2021 at 00:25, Heinrich Schuchardt <xypron.glpk@gmx.de> wrote:
On 2/8/21 5:05 AM, Simon Glass wrote:
Tests are supposed to be independent. With driver model tests, the environment is reset before each test, which ensures that.
With Python tests there is no reset of the board between tests, since we want to run all the tests as quickly as possible and without needing the external scripts running constantly.
In principle the Python tests can be independent if they each put the world back the way they found it, but it turns out that some are not. This means that some tests cannot be run unless another test is run first. It also means that tests cannot be run in parallel, e.g. on sandbox.
This series fixes some of them. Those that remain:
test_gpt_swap_partitions - not sure?
test_pinmux_status - not sure?
test_sqfs_load - cannot be run more than once!
test_bind_unbind_with_uclass - relies on previous test
The last one would be much better done as a C test, so it doesn't have to deal with the changing driver tree. There isn't a lot of value in running the test on a real board, since sandbox should find any bugs in driver model or the 'bind' command.
If the above can be resolved we can enable parallel tests. On my test machine (32 threads) it reduces the time from 38 seconds to 7.5s
To use this feature: pip3 install pytest-xdist
test/py/test.py -B sandbox --build-dir /tmp/xx -q -k 'not slow' -n32
Thanks for looking into parallelization.
What I am missing in this series is a patch for doc/develop/py_testing.rst describing how parallelization of Python tests is controlled.
I don't think we can mention that yet, as it doesn't actually work.
I have seen that test/py/tests/test_fs/test_basic.py test_fs1() is always failing on my machine because the test file 2.5GB.file is truncated. It is not truncated if I add some waiting time.
Could this be caused by parallelization?
Not at present since we don't use it. I seldom run those tests. I wonder whether they work once and then not again, i.e. something needs to be reset at the start of that test?
Also I have not even tried to parallelise those tests. For me they use too much memory.
Package 'python3-pytest-xdist' is not installed on my system.
OK. Then the -n flag is not available.
Regards,
Simon