[U-Boot] [PATCH V2 1/7] test/py: Implement pytest infrastructure

This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
See README.md for more details!
Signed-off-by: Stephen Warren <swarren@wwwdotorg.org>
Signed-off-by: Stephen Warren <swarren@nvidia.com>
---
v2: Many fixes and tweaks have been squashed in. Separated out some of
the tests into separate commits, and added some more tests.
---
 test/py/.gitignore                   |   1 +
 test/py/README.md                    | 300 +++++++++++++++++++++++++++++++
 test/py/conftest.py                  | 278 ++++++++++++++++++++++++++++
 test/py/multiplexed_log.css          |  76 +++++++++
 test/py/multiplexed_log.py           | 193 ++++++++++++++++++++++
 test/py/pytest.ini                   |   9 ++
 test/py/test.py                      |  24 +++
 test/py/test_000_version.py          |  13 ++
 test/py/test_help.py                 |   6 +
 test/py/test_unknown_cmd.py          |   8 +
 test/py/uboot_console_base.py        | 185 +++++++++++++++++++++
 test/py/uboot_console_exec_attach.py |  36 +++++
 test/py/uboot_console_sandbox.py     |  31 ++++
 test/py/ubspawn.py                   |  97 +++++++++++
 14 files changed, 1257 insertions(+)
 create mode 100644 test/py/.gitignore
 create mode 100644 test/py/README.md
 create mode 100644 test/py/conftest.py
 create mode 100644 test/py/multiplexed_log.css
 create mode 100644 test/py/multiplexed_log.py
 create mode 100644 test/py/pytest.ini
 create mode 100755 test/py/test.py
 create mode 100644 test/py/test_000_version.py
 create mode 100644 test/py/test_help.py
 create mode 100644 test/py/test_unknown_cmd.py
 create mode 100644 test/py/uboot_console_base.py
 create mode 100644 test/py/uboot_console_exec_attach.py
 create mode 100644 test/py/uboot_console_sandbox.py
 create mode 100644 test/py/ubspawn.py
diff --git a/test/py/.gitignore b/test/py/.gitignore new file mode 100644 index 000000000000..0d20b6487c61 --- /dev/null +++ b/test/py/.gitignore @@ -0,0 +1 @@ +*.pyc diff --git a/test/py/README.md b/test/py/README.md new file mode 100644 index 000000000000..23a403eb8d88 --- /dev/null +++ b/test/py/README.md @@ -0,0 +1,300 @@ +# U-Boot pytest suite + +## Introduction + +This tool aims to test U-Boot by executing U-Boot shell commands using the +console interface. A single top-level script exists to execute or attach to the +U-Boot console, run the entire script of tests against it, and summarize the +results. Advantages of this approach are: + +- Testing is performed in the same way a user or script would interact with + U-Boot; there can be no disconnect. +- There is no need to write or embed test-related code into U-Boot itself. + It is asserted that writing test-related code in Python is simpler and more + flexible than writing it all in C. +- It is reasonably simple to interact with U-Boot in this way. + +## Requirements + +The test suite is implemented using pytest. Interaction with the U-Boot console +involves executing some binary and interacting with its stdin/stdout. You will +need to implement various "hook" scripts that are called by the test suite at +the appropriate time. + +On Debian or Debian-like distributions, the following packages are required. +Similar package names should exist in other distributions. + +| Package | Version tested (Ubuntu 14.04) | +| -------------- | ----------------------------- | +| python | 2.7.5-5ubuntu3 | +| python-pytest | 2.5.1-1 | + +The test script supports either: + +- Executing a sandbox port of U-Boot on the local machine as a sub-process, + and interacting with it over stdin/stdout. +- Executing external "hook" scripts to flash a U-Boot binary onto a + physical board, attach to the board's console stream, and reset the board. + Further details are described later.
+ +### Using `virtualenv` to provide requirements + +Older distributions (e.g. Ubuntu 10.04) may not provide all the required +packages, or may provide versions that are too old to run the test suite. One +can use the Python `virtualenv` script to locally install more up-to-date +versions of the required packages without interfering with the OS installation. +For example: + +```bash +$ cd /path/to/u-boot +$ sudo apt-get install python python-virtualenv +$ virtualenv venv +$ . ./venv/bin/activate +$ pip install pytest +``` + +## Testing sandbox + +To run the test suite on the sandbox port (U-Boot built as a native user-space +application), simply execute: + +``` +./test/py/test.py --bd sandbox --build +``` + +The `--bd` option tells the test suite which board type is being tested. This +lets the test suite know which features the board has, and hence exactly what +can be tested. + +The `--build` option tells the test script to compile U-Boot itself. +Alternatively, you may +omit this option and build U-Boot yourself, in whatever way you choose, before +running the test script. + +The test script will attach to U-Boot, execute all valid tests for the board, +then print a summary of the test process. A complete log of the test session +will be written to `${build_dir}/test-log.html`. This is best viewed in a web +browser, but may be read directly as plain text, perhaps with the aid of the +`html2text` utility. + +## Command-line options + +- `--board-type`, `--bd`, `-B` set the type of the board to be tested. For + example, `sandbox` or `seaboard`. +- `--board-identity`, `--id` set the identity of the board to be tested. + This allows differentiation between multiple instances of the same type of + physical board that are attached to the same host machine. This parameter is + not interpreted by the test script in any way, but rather is simply passed + to the hook scripts described below, and may be used in any site-specific + way deemed necessary.
+- `--build` indicates that the test script should compile U-Boot itself + before running the tests. If using this option, make sure that any + environment variables required by the build process are already set, such as + `$CROSS_COMPILE`. +- `--build-dir` sets the directory containing the compiled U-Boot binaries. + If omitted, this is `${source_dir}/build-${board_type}`. +- `--result-dir` sets the directory to write results, such as log files, + into. If omitted, the build directory is used. +- `--persistent-data-dir` sets the directory used to store persistent test + data. This is test data that may be re-used across test runs, such as + file-system images. + +`pytest` also implements a number of its own command-line options. Please see +`pytest` documentation for complete details. Execute `py.test --version` for +a brief summary. Note that U-Boot's test.py script passes all command-line +arguments directly to `pytest` for processing. + +## Testing real hardware + +The tools and techniques used to interact with real hardware will vary +radically between different host and target systems, and the whims of the user. +For this reason, the test suite does not attempt to directly interact with real +hardware in any way. Rather, it executes a standardized set of "hook" scripts +via `$PATH`. These scripts implement certain actions on behalf of the test +suite. This keeps the test suite simple and isolated from system variances +unrelated to U-Boot features. + +### Hook scripts + +#### Environment variables + +The following environment variables are set when running hook scripts: + +- `UBOOT_BOARD_TYPE` the board type being tested. +- `UBOOT_BOARD_IDENTITY` the board identity being tested, or `na` if none was + specified. +- `UBOOT_SOURCE_DIR` the U-Boot source directory. +- `UBOOT_TEST_PY_DIR` the full path to `test/py/` in the source directory. +- `UBOOT_BUILD_DIR` the U-Boot build directory. +- `UBOOT_RESULT_DIR` the test result directory.
+- `UBOOT_PERSISTENT_DATA_DIR` the test persistent data directory. + +#### `uboot-test-console` + +This script provides access to the U-Boot console. The script's stdin/stdout +should be connected to the board's console. This process should continue to run +indefinitely, until killed. The test suite will run this script in parallel +with all other hooks. + +This script may be implemented e.g. by exec()ing `cu`, `conmux`, etc. + +If you are able to run U-Boot under a hardware simulator such as qemu, then +you would likely spawn that simulator from this script. However, note that +`uboot-test-reset` may be called multiple times per test script run, and must +cause U-Boot to start execution from scratch each time. Hopefully your +simulator includes a virtual reset button! If not, you can launch the +simulator from `uboot-test-reset` instead, while arranging for this console +process to always communicate with the current simulator instance. + +#### `uboot-test-flash` + +Prior to running the test suite against a board, some arrangement must be made +so that the board executes the particular U-Boot binary to be tested. Often, +this involves writing the U-Boot binary to the board's flash ROM. The test +suite calls this hook script for that purpose. + +This script should perform the entire flashing process synchronously; the +script should only exit once flashing is complete, and a board reset will +cause the newly flashed U-Boot binary to be executed. + +It is conceivable that this script will do nothing. This might be useful in +the following cases: + +- Some other process has already written the desired U-Boot binary into the + board's flash prior to running the test suite. +- The board allows U-Boot to be downloaded directly into RAM, and executed + from there. Use of this feature will reduce wear on the board's flash, so + may be preferable if available, and if cold boot testing of U-Boot is not + required.
If this feature is used, the `uboot-test-reset` script should + perform this download, since the board could conceivably be reset multiple + times in a single test run. + +It is up to the user to determine if those situations exist, and to code this +hook script appropriately. + +This script will typically be implemented by calling out to some SoC- or +board-specific vendor flashing utility. + +#### `uboot-test-reset` + +Whenever the test suite needs to reset the target board, this script is +executed. This is guaranteed to happen at least once, prior to executing the +first test function. If any test fails, the test infrastructure will execute +this script again to restore U-Boot to an operational state before running the +next test function. + +This script will likely be implemented by communicating with some form of +relay or electronic switch attached to the board's reset signal. + +The semantics of this script require that when it is executed, U-Boot will +start running from scratch. If the U-Boot binary to be tested has been written +to flash, pulsing the board's reset signal is likely all this script need do. +However, in some scenarios, this script may perform other actions. For +example, it may call out to some SoC- or board-specific vendor utility in order +to download the U-Boot binary directly into RAM and execute it. This would +avoid the need for `uboot-test-flash` to actually write U-Boot to flash, thus +saving wear on the flash chip(s). + +### Board-type-specific configuration + +Each board has a different configuration and behaviour. Many of these +differences can be automatically detected by parsing the `.config` file in the +build directory. However, some differences can't yet be handled automatically. + +For each board, an optional Python module `uboot_board_${board_type}` may exist +to provide board-specific information to the test script. Any global value +defined in these modules is available for use by any test function.
The data +contained in these scripts must be purely derived from U-Boot source code. +Hence, these configuration files are part of the U-Boot source tree too. + +### Execution environment configuration + +Each user's hardware setup may enable testing different subsets of the features +implemented by a particular board's configuration of U-Boot. For example, a +U-Boot configuration may support USB device mode and USB Mass Storage, but this +can only be tested if a USB cable is connected between the board and the host +machine running the test script. + +For each board, optional Python modules `uboot_boardenv_${board_type}` and +`uboot_boardenv_${board_type}_${board_identity}` may exist to provide +board-specific and board-identity-specific information to the test script. Any +global value defined in these modules is available for use by any test +function. The data contained in these is specific to a particular user's +hardware configuration. Hence, these configuration files are not part of the +U-Boot source tree, and should be installed outside of the source tree. Users +should set `$PYTHONPATH` prior to running the test script to allow these +modules to be loaded. + +### Board module parameter usage + +The test scripts rely on the following variables being defined by the board +module: + +- None at present. + +### U-Boot `.config` feature usage + +The test scripts rely on various U-Boot `.config` features, either directly in +order to test those features, or indirectly in order to query information from +the running U-Boot instance in order to test other features. + +One example is that testing of the `md` command requires knowledge of a RAM +address to use for the test. This data is parsed from the output of the +`bdinfo` command, and hence relies on CONFIG_CMD_BDI being enabled. 
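As a rough illustration, the following sketch mirrors how the suite turns `.config` into a lookup table that `@pytest.mark.buildconfigspec(...)` checks against. It is written in Python 3 syntax for brevity (the suite itself is Python 2), and the sample `.config` contents are invented:

```python
# Sketch of the .config parsing done in conftest.py: the Kconfig output
# is fed to an INI parser by prepending a dummy "[root]" section header.
# The .config contents below are invented for illustration.
import configparser
import io

dot_config = "CONFIG_CMD_BDI=y\nCONFIG_CMD_MEMORY=y\n"

parser = configparser.RawConfigParser()
parser.read_file(io.StringIO("[root]\n" + dot_config))
# Keys are lower-cased by the parser: "CONFIG_CMD_BDI" -> "config_cmd_bdi"
buildconfig = dict(parser.items("root"))

def feature_enabled(option):
    # A @pytest.mark.buildconfigspec("cmd_bdi") marker skips the test
    # unless "config_cmd_bdi" is present and truthy in buildconfig.
    return bool(buildconfig.get("config_" + option.lower(), None))

print(feature_enabled("cmd_bdi"))   # enabled in the sample .config
print(feature_enabled("cmd_usb"))   # absent, so such a test would be skipped
```

This is why marker arguments are written without the `CONFIG_` prefix and are case-insensitive.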
+ +For a complete list of dependencies, please search the test scripts for +instances of: + +- `buildconfig.get(...` +- `@pytest.mark.buildconfigspec(...` + +### Complete invocation example + +Assuming that you have installed the hook scripts into $HOME/ubtest/bin, and +any required environment configuration Python modules into $HOME/ubtest/py, +then you would likely invoke the test script as follows: + +If U-Boot has already been built: + +```bash +PATH=$HOME/ubtest/bin:$PATH \ + PYTHONPATH=${HOME}/ubtest/py:${PYTHONPATH} \ + ./test/py/test.py --bd seaboard +``` + +If you want the test script to compile U-Boot for you too, then you likely +need to set `$CROSS_COMPILE` to allow this, and invoke the test script as +follows: + +```bash +CROSS_COMPILE=arm-none-eabi- \ + PATH=$HOME/ubtest/bin:$PATH \ + PYTHONPATH=${HOME}/ubtest/py:${PYTHONPATH} \ + ./test/py/test.py --bd seaboard --build +``` + +## Writing tests + +Please refer to the pytest documentation for details of writing pytest tests. +Details specific to the U-Boot test suite are described below. + +A test fixture named `uboot_console` should be used by each test function. This +provides the means to interact with the U-Boot console, and retrieve board and +environment configuration information. + +The function `uboot_console.run_command()` executes a shell command on the +U-Boot console, and returns all output from that command. This allows +validation or interpretation of the command output. This function validates +that certain strings are not seen on the U-Boot console. These include shell +error messages and the U-Boot sign-on message (in order to detect unexpected +board resets). See the source of `uboot_console_base.py` for a complete list of +"bad" strings. Some test scenarios are expected to trigger these strings. Use +`uboot_console.disable_check()` to temporarily disable checking for specific +strings. See `test_unknown_cmd.py` for an example.
+ +Board- and board-environment configuration values may be accessed as sub-fields +of the `uboot_console.config` object, for example +`uboot_console.config.ram_base`. + +Build configuration values (from `.config`) may be accessed via the dictionary +`uboot_console.config.buildconfig`, with keys equal to the Kconfig variable +names. diff --git a/test/py/conftest.py b/test/py/conftest.py new file mode 100644 index 000000000000..b6efe03a60f8 --- /dev/null +++ b/test/py/conftest.py @@ -0,0 +1,278 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +import atexit +import errno +import os +import os.path +import pexpect +import pytest +from _pytest.runner import runtestprotocol +import ConfigParser +import StringIO +import sys + +log = None +console = None + +def mkdir_p(path): + try: + os.makedirs(path) + except OSError as exc: + if exc.errno == errno.EEXIST and os.path.isdir(path): + pass + else: + raise + +def pytest_addoption(parser): + parser.addoption("--build-dir", default=None, + help="U-Boot build directory (O=)") + parser.addoption("--result-dir", default=None, + help="U-Boot test result/tmp directory") + parser.addoption("--persistent-data-dir", default=None, + help="U-Boot test persistent generated data directory") + parser.addoption("--board-type", "--bd", "-B", default="sandbox", + help="U-Boot board type") + parser.addoption("--board-identity", "--id", default="na", + help="U-Boot board identity/instance") + parser.addoption("--build", default=False, action="store_true", + help="Compile U-Boot before running tests") + +def pytest_configure(config): + global log + global console + global ubconfig + + test_py_dir = os.path.dirname(os.path.abspath(__file__)) + source_dir = os.path.dirname(os.path.dirname(test_py_dir)) + + board_type = config.getoption("board_type") + board_type_fn = board_type.replace("-", "_") + + board_identity = config.getoption("board_identity") 
+ board_identity_fn = board_identity.replace("-", "_") + + build_dir = config.getoption("build_dir") + if not build_dir: + build_dir = source_dir + "/build-" + board_type + mkdir_p(build_dir) + + result_dir = config.getoption("result_dir") + if not result_dir: + result_dir = build_dir + mkdir_p(result_dir) + + persistent_data_dir = config.getoption("persistent_data_dir") + if not persistent_data_dir: + persistent_data_dir = build_dir + "/persistent-data" + mkdir_p(persistent_data_dir) + + import multiplexed_log + log = multiplexed_log.Logfile(result_dir + "/test-log.html") + + if config.getoption("build"): + if build_dir != source_dir: + o_opt = "O=%s" % build_dir + else: + o_opt = "" + cmds = ( + ["make", o_opt, "-s", board_type + "_defconfig"], + ["make", o_opt, "-s", "-j8"], + ) + runner = log.get_runner("make", sys.stdout) + for cmd in cmds: + runner.run(cmd, cwd=source_dir) + runner.close() + + class ArbitraryAttrContainer(object): + pass + + ubconfig = ArbitraryAttrContainer() + ubconfig.brd = dict() + ubconfig.env = dict() + + modules = [ + (ubconfig.brd, "uboot_board_" + board_type_fn), + (ubconfig.env, "uboot_boardenv_" + board_type_fn), + (ubconfig.env, "uboot_boardenv_" + board_type_fn + "_" + + board_identity_fn), + ] + for (sub_config, mod_name) in modules: + try: + mod = __import__(mod_name) + except ImportError: + continue + sub_config.update(mod.__dict__) + + ubconfig.buildconfig = dict() + + for conf_file in (".config", "include/autoconf.mk"): + dot_config = build_dir + "/" + conf_file + if not os.path.exists(dot_config): + raise Exception(conf_file + " does not exist; " + + "try passing --build option?") + + with open(dot_config, "rt") as f: + ini_str = "[root]\n" + f.read() + ini_sio = StringIO.StringIO(ini_str) + parser = ConfigParser.RawConfigParser() + parser.readfp(ini_sio) + ubconfig.buildconfig.update(parser.items("root")) + + ubconfig.test_py_dir = test_py_dir + ubconfig.source_dir = source_dir + ubconfig.build_dir = build_dir + 
ubconfig.result_dir = result_dir + ubconfig.persistent_data_dir = persistent_data_dir + ubconfig.board_type = board_type + ubconfig.board_identity = board_identity + + env_vars = ( + "board_type", + "board_identity", + "source_dir", + "test_py_dir", + "build_dir", + "result_dir", + "persistent_data_dir", + ) + for v in env_vars: + os.environ["UBOOT_" + v.upper()] = getattr(ubconfig, v) + + if board_type == "sandbox": + import uboot_console_sandbox + console = uboot_console_sandbox.ConsoleSandbox(log, ubconfig) + else: + import uboot_console_exec_attach + console = uboot_console_exec_attach.ConsoleExecAttach(log, ubconfig) + +def pytest_generate_tests(metafunc): + subconfigs = { + "brd": console.config.brd, + "env": console.config.env, + } + for fn in metafunc.fixturenames: + parts = fn.split("__") + if len(parts) < 2: + continue + if parts[0] not in subconfigs: + continue + subconfig = subconfigs[parts[0]] + vals = [] + val = subconfig.get(fn, []) + if val: + vals = (val, ) + else: + vals = subconfig.get(fn + "s", []) + metafunc.parametrize(fn, vals) + +@pytest.fixture(scope="session") +def uboot_console(request): + return console + +tests_not_run = set() +tests_failed = set() +tests_skipped = set() +tests_passed = set() + +def pytest_itemcollected(item): + tests_not_run.add(item.name) + +def cleanup(): + if console: + console.close() + if log: + log.status_pass("%d passed" % len(tests_passed)) + if tests_skipped: + log.status_skipped("%d skipped" % len(tests_skipped)) + for test in tests_skipped: + log.status_skipped("... " + test) + if tests_failed: + log.status_fail("%d failed" % len(tests_failed)) + for test in tests_failed: + log.status_fail("... " + test) + if tests_not_run: + log.status_fail("%d not run" % len(tests_not_run)) + for test in tests_not_run: + log.status_fail("... 
" + test) + log.close() +atexit.register(cleanup) + +def setup_boardspec(item): + mark = item.get_marker("boardspec") + if not mark: + return + required_boards = [] + for board in mark.args: + if board.startswith("!"): + if ubconfig.board_type == board[1:]: + pytest.skip("board not supported") + return + else: + required_boards.append(board) + if required_boards and ubconfig.board_type not in required_boards: + pytest.skip("board not supported") + +def setup_buildconfigspec(item): + mark = item.get_marker("buildconfigspec") + if not mark: + return + for option in mark.args: + if not ubconfig.buildconfig.get("config_" + option.lower(), None): + pytest.skip(".config feature not enabled") + +def pytest_runtest_setup(item): + log.start_section(item.name) + setup_boardspec(item) + setup_buildconfigspec(item) + +def pytest_runtest_protocol(item, nextitem): + reports = runtestprotocol(item, nextitem=nextitem) + failed = None + skipped = None + for report in reports: + if report.outcome == "failed": + failed = report + break + if report.outcome == "skipped": + if not skipped: + skipped = report + + if failed: + tests_failed.add(item.name) + elif skipped: + tests_skipped.add(item.name) + else: + tests_passed.add(item.name) + tests_not_run.remove(item.name) + + try: + if failed: + msg = "FAILED:\n" + str(failed.longrepr) + log.status_fail(msg) + elif skipped: + msg = "SKIPPED:\n" + str(skipped.longrepr) + log.status_skipped(msg) + else: + log.status_pass("OK") + except: + # If something went wrong with logging, it's better to let the test + # process continue, which may report other exceptions that triggered + # the logging issue (e.g. console.log wasn't created). Hence, just + # squash the exception. If the test setup failed due to e.g. syntax + # error somewhere else, this won't be seen. However, once that issue + # is fixed, if this exception still exists, it will then be logged as + # part of the test's stdout. 
+ import traceback + print "Exception occurred while logging runtest status:" + traceback.print_exc() + # FIXME: Can we force a test failure here? + + log.end_section(item.name) + + if failed: + console.cleanup_spawn() + + return reports diff --git a/test/py/multiplexed_log.css b/test/py/multiplexed_log.css new file mode 100644 index 000000000000..96d87ebe034b --- /dev/null +++ b/test/py/multiplexed_log.css @@ -0,0 +1,76 @@ +/* + * Copyright (c) 2015 Stephen Warren + * + * SPDX-License-Identifier: GPL-2.0 + */ + +body { + background-color: black; + color: #ffffff; +} + +.implicit { + color: #808080; +} + +.section { + border-style: solid; + border-color: #303030; + border-width: 0px 0px 0px 5px; + padding-left: 5px +} + +.section-header { + background-color: #303030; + margin-left: -5px; + margin-top: 5px; +} + +.section-trailer { + display: none; +} + +.stream { + border-style: solid; + border-color: #303030; + border-width: 0px 0px 0px 5px; + padding-left: 5px +} + +.stream-header { + background-color: #303030; + margin-left: -5px; + margin-top: 5px; +} + +.stream-trailer { + display: none; +} + +.error { + color: #ff0000 +} + +.warning { + color: #ffff00 +} + +.info { + color: #808080 +} + +.action { + color: #8080ff +} + +.status-pass { + color: #00ff00 +} + +.status-skipped { + color: #ffff00 +} + +.status-fail { + color: #ff0000 +} diff --git a/test/py/multiplexed_log.py b/test/py/multiplexed_log.py new file mode 100644 index 000000000000..58b9a9c50ecf --- /dev/null +++ b/test/py/multiplexed_log.py @@ -0,0 +1,193 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. 
+# +# SPDX-License-Identifier: GPL-2.0 + +import cgi +import os.path +import shutil +import subprocess + +mod_dir = os.path.dirname(os.path.abspath(__file__)) + +class LogfileStream(object): + def __init__(self, logfile, name, chained_file): + self.logfile = logfile + self.name = name + self.chained_file = chained_file + + def close(self): + pass + + def write(self, data, implicit=False): + self.logfile.write(self, data, implicit) + if self.chained_file: + self.chained_file.write(data) + + def flush(self): + self.logfile.flush() + if self.chained_file: + self.chained_file.flush() + +class RunAndLog(object): + def __init__(self, logfile, name, chained_file): + self.logfile = logfile + self.name = name + self.chained_file = chained_file + + def close(self): + pass + + def run(self, cmd, cwd=None): + msg = "+" + " ".join(cmd) + "\n" + if self.chained_file: + self.chained_file.write(msg) + self.logfile.write(self, msg) + + try: + p = subprocess.Popen(cmd, cwd=cwd, + stdin=None, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) + (output, stderr) = p.communicate() + status = p.returncode + except subprocess.CalledProcessError as cpe: + output = cpe.output + status = cpe.returncode + self.logfile.write(self, output) + if status: + if self.chained_file: + self.chained_file.write(output) + raise Exception("command failed; exit code " + str(status)) + +class SectionCtxMgr(object): + def __init__(self, log, marker): + self.log = log + self.marker = marker + + def __enter__(self): + self.log.start_section(self.marker) + + def __exit__(self, extype, value, traceback): + self.log.end_section(self.marker) + +class Logfile(object): + def __init__(self, fn): + self.f = open(fn, "wt") + self.last_stream = None + self.linebreak = True + self.blocks = [] + self.cur_evt = 1 + shutil.copy(mod_dir + "/multiplexed_log.css", os.path.dirname(fn)) + self.f.write("""\ +<html> +<head> +<link rel="stylesheet" type="text/css" href="multiplexed_log.css"> +</head> +<body> +<tt> +""") + + def 
close(self): + self.f.write("""\ +</tt> +</body> +</html> +""") + self.f.close() + + def _escape(self, data): + data = data.replace(chr(13), "") + data = "".join((c in self._nonprint) and ("%%%02x" % ord(c)) or + c for c in data) + data = cgi.escape(data) + data = data.replace(" ", "&nbsp;") + self.linebreak = data[-1:] == "\n" + data = data.replace(chr(10), "<br/>\n") + return data + + def _terminate_stream(self): + self.cur_evt += 1 + if not self.last_stream: + return + if not self.linebreak: + self.f.write("<br/>\n") + self.f.write('<div class="stream-trailer" id="' + + self.last_stream.name + '">End stream: ' + + self.last_stream.name + '</div>\n') + self.f.write("</div>\n") + self.last_stream = None + + def _note(self, note_type, msg): + self._terminate_stream() + self.f.write('<div class="' + note_type + '">\n') + self.f.write(self._escape(msg)) + self.f.write("<br/>\n") + self.f.write("</div>\n") + self.linebreak = True + + def start_section(self, marker): + self._terminate_stream() + self.blocks.append(marker) + blk_path = "/".join(self.blocks) + self.f.write('<div class="section" id="' + blk_path + '">\n') + self.f.write('<div class="section-header" id="' + blk_path + + '">Section: ' + blk_path + '</div>\n') + + def end_section(self, marker): + if (not self.blocks) or (marker != self.blocks[-1]): + raise Exception('Block nesting mismatch: "%s" "%s"' % + (marker, "/".join(self.blocks))) + self._terminate_stream() + blk_path = "/".join(self.blocks) + self.f.write('<div class="section-trailer" id="section-trailer-' + + blk_path + '">End section: ' + blk_path + '</div>\n') + self.f.write("</div>\n") + self.blocks.pop() + + def section(self, marker): + return SectionCtxMgr(self, marker) + + def error(self, msg): + self._note("error", msg) + + def warning(self, msg): + self._note("warning", msg) + + def info(self, msg): + self._note("info", msg) + + def action(self, msg): + self._note("action", msg) + + def status_pass(self, msg): + self._note("status-pass", msg) +
+ def status_skipped(self, msg): + self._note("status-skipped", msg) + + def status_fail(self, msg): + self._note("status-fail", msg) + + def get_stream(self, name, chained_file=None): + return LogfileStream(self, name, chained_file) + + def get_runner(self, name, chained_file=None): + return RunAndLog(self, name, chained_file) + + _nonprint = ("%" + "".join(chr(c) for c in range(0, 32) if c != 10) + + "".join(chr(c) for c in range(127, 256))) + + def write(self, stream, data, implicit=False): + if stream != self.last_stream: + self._terminate_stream() + self.f.write('<div class="stream" id="%s">\n' % stream.name) + self.f.write('<div class="stream-header" id="' + stream.name + + '">Stream: ' + stream.name + '</div>\n') + if implicit: + self.f.write('<span class="implicit">') + self.f.write(self._escape(data)) + if implicit: + self.f.write("</span>") + self.last_stream = stream + + def flush(self): + self.f.flush() diff --git a/test/py/pytest.ini b/test/py/pytest.ini new file mode 100644 index 000000000000..1bdff810d36e --- /dev/null +++ b/test/py/pytest.ini @@ -0,0 +1,9 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +[pytest] +markers = + boardspec: U-Boot: Describes the set of boards a test can/can't run on. + buildconfigspec: U-Boot: Describes Kconfig/config-header constraints. diff --git a/test/py/test.py b/test/py/test.py new file mode 100755 index 000000000000..7768216a2335 --- /dev/null +++ b/test/py/test.py @@ -0,0 +1,24 @@ +#!/usr/bin/env python + +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+# +# SPDX-License-Identifier: GPL-2.0 + +import os +import os.path +import sys + +sys.argv.pop(0) + +args = ["py.test", os.path.dirname(__file__)] +args.extend(sys.argv) + +try: + os.execvp("py.test", args) +except: + import traceback + traceback.print_exc() + print >>sys.stderr, """ +exec(py.test) failed; perhaps you are missing some dependencies? +See test/py/README.md for the list.""" diff --git a/test/py/test_000_version.py b/test/py/test_000_version.py new file mode 100644 index 000000000000..360c8fd726e0 --- /dev/null +++ b/test/py/test_000_version.py @@ -0,0 +1,13 @@ +# Copyright (c) 2015 Stephen Warren +# +# SPDX-License-Identifier: GPL-2.0 + +# pytest runs tests in the order of their module path, which is related to the +# filename containing the test. This file is named such that it is sorted +# first, simply as a very basic sanity check of the functionality of the U-Boot +# command prompt. + +def test_version(uboot_console): + with uboot_console.disable_check("main_signon"): + response = uboot_console.run_command("version") + uboot_console.validate_main_signon_in_text(response) diff --git a/test/py/test_help.py b/test/py/test_help.py new file mode 100644 index 000000000000..3cc896ee7af8 --- /dev/null +++ b/test/py/test_help.py @@ -0,0 +1,6 @@ +# Copyright (c) 2015 Stephen Warren +# +# SPDX-License-Identifier: GPL-2.0 + +def test_help(uboot_console): + uboot_console.run_command("help") diff --git a/test/py/test_unknown_cmd.py b/test/py/test_unknown_cmd.py new file mode 100644 index 000000000000..ba12de56a294 --- /dev/null +++ b/test/py/test_unknown_cmd.py @@ -0,0 +1,8 @@ +# Copyright (c) 2015 Stephen Warren +# +# SPDX-License-Identifier: GPL-2.0 + +def test_unknown_command(uboot_console): + with uboot_console.disable_check("unknown_command"): + response = uboot_console.run_command("non_existent_cmd") + assert("Unknown command 'non_existent_cmd' - try 'help'" in response) diff --git a/test/py/uboot_console_base.py b/test/py/uboot_console_base.py new file
mode 100644 index 000000000000..9f13fead2e7e --- /dev/null +++ b/test/py/uboot_console_base.py @@ -0,0 +1,185 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +import multiplexed_log +import os +import pytest +import re +import sys + +pattern_uboot_spl_signon = re.compile("(U-Boot SPL \d{4}\.\d{2}-[^\r\n]*)") +pattern_uboot_main_signon = re.compile("(U-Boot \d{4}\.\d{2}-[^\r\n]*)") +pattern_stop_autoboot_prompt = re.compile("Hit any key to stop autoboot: ") +pattern_unknown_command = re.compile("Unknown command '.*' - try 'help'") +pattern_error_notification = re.compile("## Error: ") + +class ConsoleDisableCheck(object): + def __init__(self, console, check_type): + self.console = console + self.check_type = check_type + + def __enter__(self): + self.console.disable_check_count[self.check_type] += 1 + + def __exit__(self, extype, value, traceback): + self.console.disable_check_count[self.check_type] -= 1 + +class ConsoleBase(object): + def __init__(self, log, config, max_fifo_fill): + self.log = log + self.config = config + self.max_fifo_fill = max_fifo_fill + + self.logstream = self.log.get_stream("console", sys.stdout) + + # Array slice removes leading/trailing quotes + self.prompt = self.config.buildconfig["config_sys_prompt"][1:-1] + self.prompt_escaped = re.escape(self.prompt) + self.p = None + self.disable_check_count = { + "spl_signon": 0, + "main_signon": 0, + "unknown_command": 0, + "error_notification": 0, + } + + self.at_prompt = False + self.at_prompt_logevt = None + self.ram_base = None + + def close(self): + if self.p: + self.p.close() + self.logstream.close() + + def run_command(self, cmd, wait_for_echo=True, send_nl=True, wait_for_prompt=True): + self.ensure_spawned() + + if self.at_prompt and \ + self.at_prompt_logevt != self.logstream.logfile.cur_evt: + self.logstream.write(self.prompt, implicit=True) + + bad_patterns = [] + bad_pattern_ids = [] + if 
(self.disable_check_count["spl_signon"] == 0 and + self.uboot_spl_signon): + bad_patterns.append(self.uboot_spl_signon_escaped) + bad_pattern_ids.append("SPL signon") + if self.disable_check_count["main_signon"] == 0: + bad_patterns.append(self.uboot_main_signon_escaped) + bad_pattern_ids.append("U-Boot main signon") + if self.disable_check_count["unknown_command"] == 0: + bad_patterns.append(pattern_unknown_command) + bad_pattern_ids.append("Unknown command") + if self.disable_check_count["error_notification"] == 0: + bad_patterns.append(pattern_error_notification) + bad_pattern_ids.append("Error notification") + try: + self.at_prompt = False + if send_nl: + cmd += "\n" + while cmd: + # Limit max outstanding data, so UART FIFOs don't overflow + chunk = cmd[:self.max_fifo_fill] + cmd = cmd[self.max_fifo_fill:] + self.p.send(chunk) + if not wait_for_echo: + continue + chunk = re.escape(chunk) + chunk = chunk.replace("\\n", "[\r\n]") + m = self.p.expect([chunk] + bad_patterns) + if m != 0: + self.at_prompt = False + raise Exception("Bad pattern found on console: " + + bad_pattern_ids[m - 1]) + if not wait_for_prompt: + return + m = self.p.expect([self.prompt_escaped] + bad_patterns) + if m != 0: + self.at_prompt = False + raise Exception("Bad pattern found on console: " + + bad_pattern_ids[m - 1]) + self.at_prompt = True + self.at_prompt_logevt = self.logstream.logfile.cur_evt + # Only strip \r\n; space/TAB might be significant if testing + # indentation. + return self.p.before.strip("\r\n") + except Exception as ex: + self.log.error(str(ex)) + self.cleanup_spawn() + raise + + def ctrlc(self): + self.run_command(chr(3), wait_for_echo=False, send_nl=False) + + def ensure_spawned(self): + if self.p: + return + try: + self.at_prompt = False + self.log.action("Starting U-Boot") + self.p = self.get_spawn() + # Real targets can take a long time to scroll large amounts of + # text if LCD is enabled. 
This value may need tweaking in the + # future, possibly per-test to be optimal. This works for "help" + # on board "seaboard". + self.p.timeout = 30000 + self.p.logfile_read = self.logstream + if self.config.buildconfig.get("CONFIG_SPL", False) == "y": + self.p.expect([pattern_uboot_spl_signon]) + self.uboot_spl_signon = self.p.after + self.uboot_spl_signon_escaped = re.escape(self.p.after) + else: + self.uboot_spl_signon = None + self.p.expect([pattern_uboot_main_signon]) + self.uboot_main_signon = self.p.after + self.uboot_main_signon_escaped = re.escape(self.p.after) + while True: + match = self.p.expect([self.prompt_escaped, + pattern_stop_autoboot_prompt]) + if match == 1: + self.p.send(chr(3)) # CTRL-C + continue + break + self.at_prompt = True + self.at_prompt_logevt = self.logstream.logfile.cur_evt + except Exception as ex: + self.log.error(str(ex)) + self.cleanup_spawn() + raise + + def cleanup_spawn(self): + try: + if self.p: + self.p.close() + except: + pass + self.p = None + + def validate_main_signon_in_text(self, text): + assert(self.uboot_main_signon in text) + + def disable_check(self, check_type): + return ConsoleDisableCheck(self, check_type) + + def find_ram_base(self): + if self.config.buildconfig.get("config_cmd_bdi", "n") != "y": + pytest.skip("bdinfo command not supported") + if self.ram_base == -1: + pytest.skip("Previously failed to find RAM bank start") + if self.ram_base is not None: + return self.ram_base + + with self.log.section("find_ram_base"): + response = self.run_command("bdinfo") + for l in response.split("\n"): + if "-> start" in l: + self.ram_base = int(l.split("=")[1].strip(), 16) + break + if self.ram_base is None: + self.ram_base = -1 + raise Exception("Failed to find RAM bank start in `bdinfo`") + + return self.ram_base diff --git a/test/py/uboot_console_exec_attach.py b/test/py/uboot_console_exec_attach.py new file mode 100644 index 000000000000..0267ae4dc070 --- /dev/null +++ b/test/py/uboot_console_exec_attach.py @@ 
-0,0 +1,36 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +from ubspawn import Spawn +from uboot_console_base import ConsoleBase + +def cmdline(app, args): + return app + ' "' + '" "'.join(args) + '"' + +class ConsoleExecAttach(ConsoleBase): + def __init__(self, log, config): + # The max_fifo_fill value might need tweaking per-board/-SoC? + # 1 would be safe anywhere, but is very slow (a pexpect issue?). + # 16 is a common FIFO size. + # HW flow control would mean this could be infinite. + super(ConsoleExecAttach, self).__init__(log, config, max_fifo_fill=16) + + self.log.action("Flashing U-Boot") + cmd = ["uboot-test-flash", config.board_type, config.board_identity] + runner = self.log.get_runner(cmd[0]) + runner.run(cmd) + runner.close() + + def get_spawn(self): + args = [self.config.board_type, self.config.board_identity] + s = Spawn(["uboot-test-console"] + args) + + self.log.action("Resetting board") + cmd = ["uboot-test-reset"] + args + runner = self.log.get_runner(cmd[0]) + runner.run(cmd) + runner.close() + + return s diff --git a/test/py/uboot_console_sandbox.py b/test/py/uboot_console_sandbox.py new file mode 100644 index 000000000000..67fcbde11f73 --- /dev/null +++ b/test/py/uboot_console_sandbox.py @@ -0,0 +1,31 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. 
+# +# SPDX-License-Identifier: GPL-2.0 + +import time +from ubspawn import Spawn +from uboot_console_base import ConsoleBase + +class ConsoleSandbox(ConsoleBase): + def __init__(self, log, config): + super(ConsoleSandbox, self).__init__(log, config, max_fifo_fill=1024) + + def get_spawn(self): + return Spawn([self.config.build_dir + "/u-boot"]) + + def kill(self, sig): + self.ensure_spawned() + self.log.action("kill %d" % sig) + self.p.kill(sig) + + def validate_exited(self): + p = self.p + self.p = None + for i in xrange(100): + ret = not p.isalive() + if ret: + break + time.sleep(0.1) + p.close() + return ret diff --git a/test/py/ubspawn.py b/test/py/ubspawn.py new file mode 100644 index 000000000000..3a668bc5cb65 --- /dev/null +++ b/test/py/ubspawn.py @@ -0,0 +1,97 @@ +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +import os +import re +import pty +import select +import time + +class Timeout(Exception): + pass + +class Spawn(object): + def __init__(self, args): + self.waited = False + self.buf = "" + self.logfile_read = None + self.before = "" + self.after = "" + self.timeout = None + + (self.pid, self.fd) = pty.fork() + if self.pid == 0: + try: + os.execvp(args[0], args) + except: + print "CHILD EXCEPTION:" + import traceback + traceback.print_exc() + finally: + os._exit(255) + + self.poll = select.poll() + self.poll.register(self.fd, select.POLLIN | select.POLLPRI | select.POLLERR | select.POLLHUP | select.POLLNVAL) + + def kill(self, sig): + os.kill(self.pid, sig) + + def isalive(self): + if self.waited: + return False + + w = os.waitpid(self.pid, os.WNOHANG) + if w[0] == 0: + return True + + self.waited = True + return False + + def send(self, data): + os.write(self.fd, data) + + def expect(self, patterns): + for pi in xrange(len(patterns)): + if type(patterns[pi]) == type(""): + patterns[pi] = re.compile(patterns[pi]) + + try: + while True: + earliest_m = None + earliest_pi = None + for pi in 
xrange(len(patterns)): + pattern = patterns[pi] + m = pattern.search(self.buf) + if not m: + continue + if earliest_m and m.start() > earliest_m.start(): + continue + earliest_m = m + earliest_pi = pi + if earliest_m: + pos = earliest_m.start() + posafter = earliest_m.end() + 1 + self.before = self.buf[:pos] + self.after = self.buf[pos:posafter] + self.buf = self.buf[posafter:] + return earliest_pi + events = self.poll.poll(self.timeout) + if not events: + raise Timeout() + c = os.read(self.fd, 1024) + if not c: + raise EOFError() + if self.logfile_read: + self.logfile_read.write(c) + self.buf += c + finally: + if self.logfile_read: + self.logfile_read.flush() + + def close(self): + os.close(self.fd) + for i in xrange(100): + if not self.isalive(): + break + time.sleep(0.1)
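The `expect()` loop above selects whichever pattern matches earliest in the buffered console output, not the first pattern in the list, which is what lets `run_command()` pass its "bad patterns" alongside the expected prompt. A rough standalone sketch of that selection logic follows (`earliest_match` is a hypothetical helper name; the in-tree code does this inline, in Python 2, and also mutates the caller's pattern list, which this sketch avoids):

```python
import re

def earliest_match(buf, patterns):
    """Return (index, match) for the pattern matching earliest in buf,
    mirroring the selection loop in Spawn.expect(); (None, None) if no
    pattern matches. On a tie, the later pattern in the list wins, as in
    the original (`>` rather than `>=` in the comparison)."""
    compiled = [re.compile(p) if isinstance(p, str) else p for p in patterns]
    earliest_m = None
    earliest_pi = None
    for pi, pattern in enumerate(compiled):
        m = pattern.search(buf)
        if not m:
            continue
        if earliest_m and m.start() > earliest_m.start():
            continue
        earliest_m = m
        earliest_pi = pi
    return earliest_pi, earliest_m
```

Note that "earliest in the buffer" rather than "first in the list" matters: if a bad pattern appears in the output before the prompt, it must win even though the prompt is listed first.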

Test the sandbox port's implementation of the reset command and SIGINT handling. These should both cause the U-Boot process to exit gracefully.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com --- test/py/test_sandbox_exit.py | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) create mode 100644 test/py/test_sandbox_exit.py
diff --git a/test/py/test_sandbox_exit.py b/test/py/test_sandbox_exit.py new file mode 100644 index 000000000000..7359a73715cd --- /dev/null +++ b/test/py/test_sandbox_exit.py @@ -0,0 +1,20 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +import pytest +import signal + +@pytest.mark.boardspec("sandbox") +@pytest.mark.buildconfigspec("reset") +def test_reset(uboot_console): + uboot_console.run_command("reset", wait_for_prompt=False) + assert(uboot_console.validate_exited()) + uboot_console.ensure_spawned() + +@pytest.mark.boardspec("sandbox") +def test_ctrlc(uboot_console): + uboot_console.kill(signal.SIGINT) + assert(uboot_console.validate_exited()) + uboot_console.ensure_spawned()
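Both tests rely on `validate_exited()` (from the sandbox console patch), which polls `isalive()` up to 100 times at 0.1 s intervals rather than blocking indefinitely. A minimal sketch of that polling pattern, using `subprocess` for illustration (`wait_for_exit` is a hypothetical name; shown in Python 3, whereas the patches are Python 2):

```python
import subprocess
import sys
import time

def wait_for_exit(proc, tries=100, delay=0.1):
    # Mirror ConsoleSandbox.validate_exited(): poll for process death with
    # a bounded number of retries, so a hung target can't stall the test run.
    for _ in range(tries):
        if proc.poll() is not None:
            return True
        time.sleep(delay)
    return False

# A process that exits immediately is detected within the polling budget:
p = subprocess.Popen([sys.executable, "-c", "pass"])
assert wait_for_exit(p)
```

The bounded loop is the point: a test for graceful exit must itself fail within a known time if the process never dies.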

On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
Test the sandbox port's implementation of the reset command and SIGINT handling. These should both cause the U-Boot process to exit gracefully.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
test/py/test_sandbox_exit.py | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) create mode 100644 test/py/test_sandbox_exit.py
Reviewed-by: Simon Glass sjg@chromium.org Tested on chromebook_link, sandbox Tested-by: Simon Glass sjg@chromium.org

This tests basic environment variable functionality.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com --- test/py/test_env.py | 121 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 test/py/test_env.py
diff --git a/test/py/test_env.py b/test/py/test_env.py new file mode 100644 index 000000000000..3af0176c4523 --- /dev/null +++ b/test/py/test_env.py @@ -0,0 +1,121 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +import pytest + +# FIXME: This might be useful for other tests; +# perhaps refactor it into ConsoleBase or some other state object? +class StateTestEnv(object): + def __init__(self, uboot_console): + self.uboot_console = uboot_console + self.get_env() + self.set_var = self.get_non_existent_var() + + def get_env(self): + response = self.uboot_console.run_command("printenv") + self.env = {} + for l in response.splitlines(): + if not "=" in l: + continue + (var, value) = l.strip().split("=") + self.env[var] = value + + def get_existent_var(self): + for var in self.env: + return var + + def get_non_existent_var(self): + n = 0 + while True: + var = "test_env_" + str(n) + if var not in self.env: + return var + n += 1 + +@pytest.fixture(scope="module") +def state_test_env(uboot_console): + return StateTestEnv(uboot_console) + +def unset_var(state_test_env, var): + state_test_env.uboot_console.run_command("setenv " + var) + if var in state_test_env.env: + del state_test_env.env[var] + +def set_var(state_test_env, var, value): + state_test_env.uboot_console.run_command("setenv " + var + " \"" + value + "\"") + state_test_env.env[var] = value + +def validate_empty(state_test_env, var): + response = state_test_env.uboot_console.run_command("echo $" + var) + assert response == "" + +def validate_set(state_test_env, var, value): + # echo does not preserve leading, internal, or trailing whitespace in the + # value. printenv does, and hence allows more complete testing. 
+ response = state_test_env.uboot_console.run_command("printenv " + var) + assert response == (var + "=" + value) + +def test_env_echo_exists(state_test_env): + """Echo a variable that exists""" + var = state_test_env.get_existent_var() + value = state_test_env.env[var] + validate_set(state_test_env, var, value) + +def test_env_echo_non_existent(state_test_env): + """Echo a variable that doesn't exist""" + var = state_test_env.set_var + validate_empty(state_test_env, var) + +def test_env_printenv_non_existent(state_test_env): + """Check printenv error message""" + var = state_test_env.set_var + c = state_test_env.uboot_console + with c.disable_check("error_notification"): + response = c.run_command("printenv " + var) + assert(response == "## Error: \"" + var + "\" not defined") + +def test_env_unset_non_existent(state_test_env): + """Unset a nonexistent variable""" + var = state_test_env.get_non_existent_var() + unset_var(state_test_env, var) + validate_empty(state_test_env, var) + +def test_env_set_non_existent(state_test_env): + """Set a new variable""" + var = state_test_env.set_var + value = "foo" + set_var(state_test_env, var, value) + validate_set(state_test_env, var, value) + +def test_env_set_existing(state_test_env): + """Set an existing variable""" + var = state_test_env.set_var + value = "bar" + set_var(state_test_env, var, value) + validate_set(state_test_env, var, value) + +def test_env_unset_existing(state_test_env): + """Unset a variable""" + var = state_test_env.set_var + unset_var(state_test_env, var) + validate_empty(state_test_env, var) + +def test_env_expansion_spaces(state_test_env): + var_space = None + var_test = None + try: + var_space = state_test_env.get_non_existent_var() + set_var(state_test_env, var_space, " ") + + var_test = state_test_env.get_non_existent_var() + value = " 1${%(var_space)s}${%(var_space)s} 2 " % locals() + set_var(state_test_env, var_test, value) + value = " 1 2 " + validate_set(state_test_env, var_test, value) + 
finally: + if var_space: + unset_var(state_test_env, var_space) + if var_test: + unset_var(state_test_env, var_test)
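The `get_non_existent_var()` probing above simply counts upward until it finds an unused name, so tests never clobber a variable the board actually uses. A standalone sketch of that logic (`first_free_name` is a hypothetical name; shown in Python 3, operating on a plain dict like `self.env`):

```python
def first_free_name(existing, prefix="test_env_"):
    # Mirror StateTestEnv.get_non_existent_var(): probe prefix0, prefix1, ...
    # until a name not present in the environment dict is found.
    n = 0
    while True:
        var = prefix + str(n)
        if var not in existing:
            return var
        n += 1
```

Since each probe only does a dict membership test, this terminates after at most len(existing) + 1 iterations.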

On 2.12.2015 23:18, Stephen Warren wrote:
This tests basic environment variable functionality.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
test/py/test_env.py | 121 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 test/py/test_env.py
diff --git a/test/py/test_env.py b/test/py/test_env.py new file mode 100644 index 000000000000..3af0176c4523 --- /dev/null +++ b/test/py/test_env.py @@ -0,0 +1,121 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0
+import pytest
+# FIXME: This might be useful for other tests; +# perhaps refactor it into ConsoleBase or some other state object? +class StateTestEnv(object):
- def __init__(self, uboot_console):
self.uboot_console = uboot_console
self.get_env()
self.set_var = self.get_non_existent_var()
- def get_env(self):
response = self.uboot_console.run_command("printenv")
self.env = {}
for l in response.splitlines():
if not "=" in l:
continue
(var, value) = l.strip().split("=")
Please keep in mind that I haven't written anything in Python before. This is failing on my testing platform: on MicroBlaze I have a variable defined like "console=console=ttyUL0,115200\0", and this script is not able to handle it properly. I expect it is because of the two '=' on the same line.
Thanks, Michal

On 12/18/2015 06:50 AM, Michal Simek wrote:
On 2.12.2015 23:18, Stephen Warren wrote:
This tests basic environment variable functionality.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
test/py/test_env.py | 121 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 test/py/test_env.py
diff --git a/test/py/test_env.py b/test/py/test_env.py new file mode 100644 index 000000000000..3af0176c4523 --- /dev/null +++ b/test/py/test_env.py @@ -0,0 +1,121 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0
+import pytest
+# FIXME: This might be useful for other tests; +# perhaps refactor it into ConsoleBase or some other state object? +class StateTestEnv(object):
- def __init__(self, uboot_console):
self.uboot_console = uboot_console
self.get_env()
self.set_var = self.get_non_existent_var()
- def get_env(self):
response = self.uboot_console.run_command("printenv")
self.env = {}
for l in response.splitlines():
if not "=" in l:
continue
(var, value) = l.strip().split("=")
Please keep in mind that I haven't written anything in Python before. This is failing on my testing platform: on MicroBlaze I have a variable defined like "console=console=ttyUL0,115200\0", and this script is not able to handle it properly. I expect it is because of the two '=' on the same line.
Ah yes. Try:
- (var, value) = l.strip().split("=")
+ (var, value) = l.strip().split("=", 1)

On 18.12.2015 19:09, Stephen Warren wrote:
On 12/18/2015 06:50 AM, Michal Simek wrote:
On 2.12.2015 23:18, Stephen Warren wrote:
This tests basic environment variable functionality.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
test/py/test_env.py | 121 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 test/py/test_env.py
diff --git a/test/py/test_env.py b/test/py/test_env.py new file mode 100644 index 000000000000..3af0176c4523 --- /dev/null +++ b/test/py/test_env.py @@ -0,0 +1,121 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0
+import pytest
+# FIXME: This might be useful for other tests; +# perhaps refactor it into ConsoleBase or some other state object? +class StateTestEnv(object):
- def __init__(self, uboot_console):
self.uboot_console = uboot_console
self.get_env()
self.set_var = self.get_non_existent_var()
- def get_env(self):
response = self.uboot_console.run_command("printenv")
self.env = {}
for l in response.splitlines():
if not "=" in l:
continue
(var, value) = l.strip().split("=")
Please keep in mind that I haven't written anything in Python before. This is failing on my testing platform: on MicroBlaze I have a variable defined like "console=console=ttyUL0,115200\0", and this script is not able to handle it properly. I expect it is because of the two '=' on the same line.
Ah yes. Try:
- (var, value) = l.strip().split("=")
- (var, value) = l.strip().split("=", 1)
That works for me.
Thanks, Michal
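The maxsplit argument is what makes the fix work: `split("=", 1)` separates only at the first '=', so any further '=' characters stay inside the value, which handles entries like the MicroBlaze `console=console=ttyUL0,115200` case. A short demonstration:

```python
line = "console=console=ttyUL0,115200"

# A plain split() yields three fields, so two-target unpacking fails:
try:
    var, value = line.strip().split("=")
    raise AssertionError("expected ValueError")
except ValueError:
    pass

# maxsplit=1 splits only at the first '=', keeping later ones in the value:
var, value = line.strip().split("=", 1)
assert var == "console"
assert value == "console=ttyUL0,115200"
```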

HI Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
This tests basic environment variable functionality.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
test/py/test_env.py | 121 ++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 test/py/test_env.py
diff --git a/test/py/test_env.py b/test/py/test_env.py new file mode 100644 index 000000000000..3af0176c4523 --- /dev/null +++ b/test/py/test_env.py @@ -0,0 +1,121 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0
+import pytest
+# FIXME: This might be useful for other tests; +# perhaps refactor it into ConsoleBase or some other state object? +class StateTestEnv(object):
- def __init__(self, uboot_console):
self.uboot_console = uboot_console
self.get_env()
self.set_var = self.get_non_existent_var()
- def get_env(self):
response = self.uboot_console.run_command("printenv")
self.env = {}
for l in response.splitlines():
if not "=" in l:
continue
(var, value) = l.strip().split("=")
self.env[var] = value
- def get_existent_var(self):
for var in self.env:
return var
- def get_non_existent_var(self):
n = 0
while True:
var = "test_env_" + str(n)
if var not in self.env:
return var
n += 1
+@pytest.fixture(scope="module") +def state_test_env(uboot_console):
- return StateTestEnv(uboot_console)
+def unset_var(state_test_env, var):
- state_test_env.uboot_console.run_command("setenv " + var)
- if var in state_test_env.env:
del state_test_env.env[var]
+def set_var(state_test_env, var, value):
- state_test_env.uboot_console.run_command("setenv " + var + " \"" + value + "\"")
How about 'setenv %s "%s"' % (var, value)
It seems much easier to read. Similarly elsewhere.
- state_test_env.env[var] = value
+def validate_empty(state_test_env, var):
- response = state_test_env.uboot_console.run_command("echo $" + var)
- assert response == ""
+def validate_set(state_test_env, var, value):
What does this function do? Function comment.
- # echo does not preserve leading, internal, or trailing whitespace in the
- # value. printenv does, and hence allows more complete testing.
- response = state_test_env.uboot_console.run_command("printenv " + var)
- assert response == (var + "=" + value)
+def test_env_echo_exists(state_test_env):
- """Echo a variable that exists"""
- var = state_test_env.get_existent_var()
- value = state_test_env.env[var]
- validate_set(state_test_env, var, value)
+def test_env_echo_non_existent(state_test_env):
- """Echo a variable that doesn't exist"""
- var = state_test_env.set_var
- validate_empty(state_test_env, var)
+def test_env_printenv_non_existent(state_test_env):
- """Check printenv error message"""
- var = state_test_env.set_var
- c = state_test_env.uboot_console
- with c.disable_check("error_notification"):
response = c.run_command("printenv " + var)
- assert(response == "## Error: \"" + var + "\" not defined")
+def test_env_unset_non_existent(state_test_env):
- """Unset a nonexistent variable"""
- var = state_test_env.get_non_existent_var()
- unset_var(state_test_env, var)
- validate_empty(state_test_env, var)
+def test_env_set_non_existent(state_test_env):
- """Set a new variable"""
- var = state_test_env.set_var
- value = "foo"
- set_var(state_test_env, var, value)
- validate_set(state_test_env, var, value)
+def test_env_set_existing(state_test_env):
- """Set an existing variable"""
- var = state_test_env.set_var
- value = "bar"
- set_var(state_test_env, var, value)
- validate_set(state_test_env, var, value)
+def test_env_unset_existing(state_test_env):
- """Unset a variable"""
- var = state_test_env.set_var
- unset_var(state_test_env, var)
- validate_empty(state_test_env, var)
+def test_env_expansion_spaces(state_test_env):
Function comment
- var_space = None
- var_test = None
- try:
var_space = state_test_env.get_non_existent_var()
set_var(state_test_env, var_space, " ")
var_test = state_test_env.get_non_existent_var()
value = " 1${%(var_space)s}${%(var_space)s} 2 " % locals()
set_var(state_test_env, var_test, value)
value = " 1 2 "
validate_set(state_test_env, var_test, value)
- finally:
if var_space:
unset_var(state_test_env, var_space)
if var_test:
unset_var(state_test_env, var_test)
-- 2.6.3
Regards, Simon
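Simon's suggestion is purely about readability; both spellings build the identical command string. A quick comparison (the variable name and value here are made up for illustration):

```python
var = "test_env_0"
value = "some value"

# Concatenation, as in the patch (escaped quotes around the value):
cmd_concat = "setenv " + var + " \"" + value + "\""

# %-formatting, as Simon suggests; the quoting is visible at a glance:
cmd_fmt = 'setenv %s "%s"' % (var, value)

assert cmd_concat == cmd_fmt == 'setenv test_env_0 "some value"'
```

The quotes around the value matter either way: without them, a value containing spaces would be split into multiple setenv arguments by the U-Boot shell.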

This tests whether md/mw work, and affect each other.
Command repeat is also tested.
test/cmd_repeat.sh is removed, since the new Python-based test does everything it used to.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com --- test/cmd_repeat.sh | 29 ----------------------------- test/py/test_md.py | 29 +++++++++++++++++++++++++++++ 2 files changed, 29 insertions(+), 29 deletions(-) delete mode 100755 test/cmd_repeat.sh create mode 100644 test/py/test_md.py
diff --git a/test/cmd_repeat.sh b/test/cmd_repeat.sh deleted file mode 100755 index 990e79900f47..000000000000 --- a/test/cmd_repeat.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/sh - -# Test for U-Boot cli including command repeat - -BASE="$(dirname $0)" -. $BASE/common.sh - -run_test() { - ./${OUTPUT_DIR}/u-boot <<END -setenv ctrlc_ignore y -md 0 - -reset -END -} -check_results() { - echo "Check results" - - grep -q 00000100 ${tmp} || fail "Command did not repeat" -} - -echo "Test CLI repeat" -echo -tmp="$(tempfile)" -build_uboot -run_test >${tmp} -check_results ${tmp} -rm ${tmp} -echo "Test passed" diff --git a/test/py/test_md.py b/test/py/test_md.py new file mode 100644 index 000000000000..2e67ed0a1de2 --- /dev/null +++ b/test/py/test_md.py @@ -0,0 +1,29 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +import pytest + +@pytest.mark.buildconfigspec("cmd_memory") +def test_md(uboot_console): + ram_base = uboot_console.find_ram_base() + addr = "%08x" % ram_base + val = "a5f09876" + expected_response = addr + ": " + val + response = uboot_console.run_command("md " + addr + " 10") + assert(not (expected_response in response)) + uboot_console.run_command("mw " + addr + " " + val) + response = uboot_console.run_command("md " + addr + " 10") + assert(expected_response in response) + +@pytest.mark.buildconfigspec("cmd_memory") +def test_md_repeat(uboot_console): + ram_base = uboot_console.find_ram_base() + addr_base = "%08x" % ram_base + words = 0x10 + addr_repeat = "%08x" % (ram_base + (words * 4)) + uboot_console.run_command("md %s %x" % (addr_base, words)) + response = uboot_console.run_command("") + expected_response = addr_repeat + ": " + assert(expected_response in response)
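test_md_repeat works because an empty command repeats the previous `md`, continuing at the address just past the previous dump; since `md` prints 4-byte words, the repeated dump starts at `ram_base + words * 4`. A sketch of that address arithmetic (the `ram_base` value here is a made-up example; the real value comes from `find_ram_base()` via `bdinfo`):

```python
ram_base = 0x80000000  # hypothetical RAM bank start, as found via `bdinfo`

words = 0x10
addr_base = "%08x" % ram_base
# `md` dumped `words` 4-byte words, so the repeated (empty) command
# continues at ram_base + words * 4:
addr_repeat = "%08x" % (ram_base + (words * 4))

assert addr_base == "80000000"
assert addr_repeat == "80000040"
```

The test then only needs to check that the repeated output contains a line starting with `addr_repeat + ": "`.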

On 2.12.2015 23:18, Stephen Warren wrote:
This tests whether md/mw work, and affect each other.
Command repeat is also tested.
test/cmd_repeat.sh is removed, since the new Python-based test does everything it used to.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
test/cmd_repeat.sh | 29 ----------------------------- test/py/test_md.py | 29 +++++++++++++++++++++++++++++ 2 files changed, 29 insertions(+), 29 deletions(-) delete mode 100755 test/cmd_repeat.sh create mode 100644 test/py/test_md.py
diff --git a/test/cmd_repeat.sh b/test/cmd_repeat.sh deleted file mode 100755 index 990e79900f47..000000000000 --- a/test/cmd_repeat.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/sh
-# Test for U-Boot cli including command repeat
-BASE="$(dirname $0)" -. $BASE/common.sh
-run_test() {
- ./${OUTPUT_DIR}/u-boot <<END
-setenv ctrlc_ignore y -md 0
-reset -END -} -check_results() {
- echo "Check results"
- grep -q 00000100 ${tmp} || fail "Command did not repeat"
-}
-echo "Test CLI repeat" -echo -tmp="$(tempfile)" -build_uboot -run_test >${tmp} -check_results ${tmp} -rm ${tmp} -echo "Test passed" diff --git a/test/py/test_md.py b/test/py/test_md.py new file mode 100644 index 000000000000..2e67ed0a1de2 --- /dev/null +++ b/test/py/test_md.py @@ -0,0 +1,29 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0
+import pytest
+@pytest.mark.buildconfigspec("cmd_memory") +def test_md(uboot_console):
- ram_base = uboot_console.find_ram_base()
- addr = "%08x" % ram_base
- val = "a5f09876"
- expected_response = addr + ": " + val
I would add this here. uboot_console.run_command("mw " + addr + " 0 10")
The reason is that with JTAG I don't need to do a board reset, so DDR stays in the same state. Also, I expect some boards may have just a CPU reset pin, so the original values can stay in memory. That's why clearing before the test would be good.
Thanks, Michal

On 12/18/2015 06:51 AM, Michal Simek wrote:
On 2.12.2015 23:18, Stephen Warren wrote:
This tests whether md/mw work, and affect each other.
Command repeat is also tested.
test/cmd_repeat.sh is removed, since the new Python-based test does
diff --git a/test/py/test_md.py b/test/py/test_md.py
+@pytest.mark.buildconfigspec("cmd_memory") +def test_md(uboot_console):
- ram_base = uboot_console.find_ram_base()
- addr = "%08x" % ram_base
- val = "a5f09876"
- expected_response = addr + ": " + val
I would add this here. uboot_console.run_command("mw " + addr + " 0 10")
The reason is that with JTAG I don't need to do a board reset, so DDR stays in the same state. Also, I expect some boards may have just a CPU reset pin, so the original values can stay in memory. That's why clearing before the test would be good.
Yes, that makes sense. Another alternative might be to read the current value, modify it (e.g. invert it), and write it back. Still, your proposal is better since it'll yield the same command and result pattern each time, so I'll take that.
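For reference, the read-modify alternative mentioned above amounts to bitwise-inverting a word read back from RAM, masked to its width, so the value written is guaranteed to differ from whatever was there. A sketch (`inverted_word` is a hypothetical name; this approach was not adopted, since clearing the memory first gives a stable command/result pattern):

```python
def inverted_word(current, bits=32):
    # Invert a word read back from RAM, masked to `bits` width, so the
    # value written is guaranteed to differ from the pre-existing contents.
    return ~current & ((1 << bits) - 1)

assert inverted_word(0x00000000) == 0xffffffff
assert inverted_word(0xa5f09876) == 0x5a0f6789
```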

Hi Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
This tests whether md/mw work, and affect each-other.
Command repeat is also tested.
test/cmd_repeat.sh is removed, since the new Python-based test does everything it used to.
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
test/cmd_repeat.sh | 29 ----------------------------- test/py/test_md.py | 29 +++++++++++++++++++++++++++++ 2 files changed, 29 insertions(+), 29 deletions(-) delete mode 100755 test/cmd_repeat.sh create mode 100644 test/py/test_md.py
Reviewed-by: Simon Glass sjg@chromium.org
But please add a little comment on each test function.
diff --git a/test/cmd_repeat.sh b/test/cmd_repeat.sh deleted file mode 100755 index 990e79900f47..000000000000 --- a/test/cmd_repeat.sh +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/sh
-# Test for U-Boot cli including command repeat
-BASE="$(dirname $0)" -. $BASE/common.sh
-run_test() {
./${OUTPUT_DIR}/u-boot <<END
-setenv ctrlc_ignore y -md 0
-reset -END -} -check_results() {
echo "Check results"
grep -q 00000100 ${tmp} || fail "Command did not repeat"
-}
-echo "Test CLI repeat" -echo -tmp="$(tempfile)" -build_uboot -run_test >${tmp} -check_results ${tmp} -rm ${tmp} -echo "Test passed" diff --git a/test/py/test_md.py b/test/py/test_md.py new file mode 100644 index 000000000000..2e67ed0a1de2 --- /dev/null +++ b/test/py/test_md.py @@ -0,0 +1,29 @@ +# Copyright (c) 2015 Stephen Warren +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0
+import pytest
+
+@pytest.mark.buildconfigspec("cmd_memory")
+def test_md(uboot_console):
+    ram_base = uboot_console.find_ram_base()
+    addr = "%08x" % ram_base
+    val = "a5f09876"
+    expected_response = addr + ": " + val
+    response = uboot_console.run_command("md " + addr + " 10")
+    assert(not (expected_response in response))
+    uboot_console.run_command("mw " + addr + " " + val)
+    response = uboot_console.run_command("md " + addr + " 10")
+    assert(expected_response in response)
+
+@pytest.mark.buildconfigspec("cmd_memory")
+def test_md_repeat(uboot_console):
+    ram_base = uboot_console.find_ram_base()
+    addr_base = "%08x" % ram_base
+    words = 0x10
+    addr_repeat = "%08x" % (ram_base + (words * 4))
+    uboot_console.run_command("md %s %x" % (addr_base, words))
+    response = uboot_console.run_command("")
+    expected_response = addr_repeat + ": "
+    assert(expected_response in response)
-- 2.6.3
Regards, Simon

From: Stephen Warren swarren@nvidia.com
This tests the following features of the U-Boot shell: - Execution of a directly entered command. - Compound commands (; delimiter). - Quoting of arguments containing spaces. - Executing commands from environment variables.
Signed-off-by: Stephen Warren swarren@nvidia.com --- test/py/test_shell_basics.py | 35 +++++++++++++++++++++++++++++++++++ 1 file changed, 35 insertions(+) create mode 100644 test/py/test_shell_basics.py
diff --git a/test/py/test_shell_basics.py b/test/py/test_shell_basics.py
new file mode 100644
index 000000000000..f47f840432e0
--- /dev/null
+++ b/test/py/test_shell_basics.py
@@ -0,0 +1,35 @@
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+
+def test_shell_execute(uboot_console):
+    """Test any shell command"""
+    response = uboot_console.run_command("echo hello")
+    assert response.strip() == "hello"
+
+def test_shell_semicolon_two(uboot_console):
+    """Test two shell commands separated by a semicolon"""
+    cmd = "echo hello; echo world"
+    response = uboot_console.run_command(cmd)
+    # This validation method ignores the exact whitespace between the strings
+    assert response.index("hello") < response.index("world")
+
+def test_shell_semicolon_three(uboot_console):
+    """Test three shell commands separated by semicolons"""
+    cmd = "setenv list 1; setenv list ${list}2; setenv list ${list}3; " + \
+        "echo ${list}"
+    response = uboot_console.run_command(cmd)
+    assert response.strip() == "123"
+    uboot_console.run_command("setenv list")
+
+def test_shell_run(uboot_console):
+    """Test the 'run' shell command"""
+    uboot_console.run_command("setenv foo 'setenv monty 1; setenv python 2'")
+    uboot_console.run_command("run foo")
+    response = uboot_console.run_command("echo $monty")
+    assert response.strip() == "1"
+    response = uboot_console.run_command("echo $python")
+    assert response.strip() == "2"
+    uboot_console.run_command("setenv foo")
+    uboot_console.run_command("setenv monty")
+    uboot_console.run_command("setenv python")

On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
From: Stephen Warren swarren@nvidia.com
This tests the following features of the U-Boot shell:
- Execution of a directly entered command.
- Compound commands (; delimiter).
- Quoting of arguments containing spaces.
- Executing commands from environment variables.
Signed-off-by: Stephen Warren swarren@nvidia.com
test/py/test_shell_basics.py | 35 +++++++++++++++++++++++++++++++++++ 1 file changed, 35 insertions(+) create mode 100644 test/py/test_shell_basics.py
Reviewed-by: Simon Glass sjg@chromium.org
These sorts of tests are very valuable I think.

From: Stephen Warren swarren@nvidia.com
Migrate almost all tests from command_ut.c into the Python test system. This allows the tests to be run against any U-Boot binary that supports the "if" command (i.e. where hush is enabled), without requiring that binary to be permanently bloated with the code from command_ut.
Some tests in command_ut.c can only be executed from C code, since they test internal (more unit-level) features of various U-Boot APIs. The migrated tests can all operate directly from the U-Boot console.
Signed-off-by: Stephen Warren swarren@nvidia.com --- test/command_ut.c | 133 ---------------------------------------- test/py/test_hush_if_test.py | 141 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 141 insertions(+), 133 deletions(-) create mode 100644 test/py/test_hush_if_test.py
diff --git a/test/command_ut.c b/test/command_ut.c index 926573a39543..35bd35ae2e30 100644 --- a/test/command_ut.c +++ b/test/command_ut.c @@ -20,21 +20,6 @@ static int do_ut_cmd(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]) printf("%s: Testing commands\n", __func__); run_command("env default -f -a", 0);
- /* run a single command */ - run_command("setenv single 1", 0); - assert(!strcmp("1", getenv("single"))); - - /* make sure that compound statements work */ -#ifdef CONFIG_SYS_HUSH_PARSER - run_command("if test -n ${single} ; then setenv check 1; fi", 0); - assert(!strcmp("1", getenv("check"))); - run_command("setenv check", 0); -#endif - - /* commands separated by ; */ - run_command_list("setenv list 1; setenv list ${list}1", -1, 0); - assert(!strcmp("11", getenv("list"))); - /* commands separated by \n */ run_command_list("setenv list 1\n setenv list ${list}1", -1, 0); assert(!strcmp("11", getenv("list"))); @@ -43,11 +28,6 @@ static int do_ut_cmd(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]) run_command_list("setenv list 1${list}\n", -1, 0); assert(!strcmp("111", getenv("list")));
- /* three commands in a row */ - run_command_list("setenv list 1\n setenv list ${list}2; " - "setenv list ${list}3", -1, 0); - assert(!strcmp("123", getenv("list"))); - /* a command string with \0 in it. Stuff after \0 should be ignored */ run_command("setenv list", 0); run_command_list(test_cmd, sizeof(test_cmd), 0); @@ -66,13 +46,6 @@ static int do_ut_cmd(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]) assert(run_command_list("false", -1, 0) == 1); assert(run_command_list("echo", -1, 0) == 0);
- run_command("setenv foo 'setenv monty 1; setenv python 2'", 0); - run_command("run foo", 0); - assert(getenv("monty") != NULL); - assert(!strcmp("1", getenv("monty"))); - assert(getenv("python") != NULL); - assert(!strcmp("2", getenv("python"))); - #ifdef CONFIG_SYS_HUSH_PARSER run_command("setenv foo 'setenv black 1\nsetenv adder 2'", 0); run_command("run foo", 0); @@ -80,112 +53,6 @@ static int do_ut_cmd(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]) assert(!strcmp("1", getenv("black"))); assert(getenv("adder") != NULL); assert(!strcmp("2", getenv("adder"))); - - /* Test the 'test' command */ - -#define HUSH_TEST(name, expr, expected_result) \ - run_command("if test " expr " ; then " \ - "setenv " #name "_" #expected_result " y; else " \ - "setenv " #name "_" #expected_result " n; fi", 0); \ - assert(!strcmp(#expected_result, getenv(#name "_" #expected_result))); \ - setenv(#name "_" #expected_result, NULL); - - /* Basic operators */ - HUSH_TEST(streq, "aaa = aaa", y); - HUSH_TEST(streq, "aaa = bbb", n); - - HUSH_TEST(strneq, "aaa != bbb", y); - HUSH_TEST(strneq, "aaa != aaa", n); - - HUSH_TEST(strlt, "aaa < bbb", y); - HUSH_TEST(strlt, "bbb < aaa", n); - - HUSH_TEST(strgt, "bbb > aaa", y); - HUSH_TEST(strgt, "aaa > bbb", n); - - HUSH_TEST(eq, "123 -eq 123", y); - HUSH_TEST(eq, "123 -eq 456", n); - - HUSH_TEST(ne, "123 -ne 456", y); - HUSH_TEST(ne, "123 -ne 123", n); - - HUSH_TEST(lt, "123 -lt 456", y); - HUSH_TEST(lt_eq, "123 -lt 123", n); - HUSH_TEST(lt, "456 -lt 123", n); - - HUSH_TEST(le, "123 -le 456", y); - HUSH_TEST(le_eq, "123 -le 123", y); - HUSH_TEST(le, "456 -le 123", n); - - HUSH_TEST(gt, "456 -gt 123", y); - HUSH_TEST(gt_eq, "123 -gt 123", n); - HUSH_TEST(gt, "123 -gt 456", n); - - HUSH_TEST(ge, "456 -ge 123", y); - HUSH_TEST(ge_eq, "123 -ge 123", y); - HUSH_TEST(ge, "123 -ge 456", n); - - HUSH_TEST(z, "-z """, y); - HUSH_TEST(z, "-z "aaa"", n); - - HUSH_TEST(n, "-n "aaa"", y); - HUSH_TEST(n, "-n """, n); - - /* Inversion of simple 
tests */ - HUSH_TEST(streq_inv, "! aaa = aaa", n); - HUSH_TEST(streq_inv, "! aaa = bbb", y); - - HUSH_TEST(streq_inv_inv, "! ! aaa = aaa", y); - HUSH_TEST(streq_inv_inv, "! ! aaa = bbb", n); - - /* Binary operators */ - HUSH_TEST(or_0_0, "aaa != aaa -o bbb != bbb", n); - HUSH_TEST(or_0_1, "aaa != aaa -o bbb = bbb", y); - HUSH_TEST(or_1_0, "aaa = aaa -o bbb != bbb", y); - HUSH_TEST(or_1_1, "aaa = aaa -o bbb = bbb", y); - - HUSH_TEST(and_0_0, "aaa != aaa -a bbb != bbb", n); - HUSH_TEST(and_0_1, "aaa != aaa -a bbb = bbb", n); - HUSH_TEST(and_1_0, "aaa = aaa -a bbb != bbb", n); - HUSH_TEST(and_1_1, "aaa = aaa -a bbb = bbb", y); - - /* Inversion within binary operators */ - HUSH_TEST(or_0_0_inv, "! aaa != aaa -o ! bbb != bbb", y); - HUSH_TEST(or_0_1_inv, "! aaa != aaa -o ! bbb = bbb", y); - HUSH_TEST(or_1_0_inv, "! aaa = aaa -o ! bbb != bbb", y); - HUSH_TEST(or_1_1_inv, "! aaa = aaa -o ! bbb = bbb", n); - - HUSH_TEST(or_0_0_inv_inv, "! ! aaa != aaa -o ! ! bbb != bbb", n); - HUSH_TEST(or_0_1_inv_inv, "! ! aaa != aaa -o ! ! bbb = bbb", y); - HUSH_TEST(or_1_0_inv_inv, "! ! aaa = aaa -o ! ! bbb != bbb", y); - HUSH_TEST(or_1_1_inv_inv, "! ! aaa = aaa -o ! ! 
bbb = bbb", y); - - setenv("ut_var_nonexistent", NULL); - setenv("ut_var_exists", "1"); - HUSH_TEST(z_varexp_quoted, "-z "$ut_var_nonexistent"", y); - HUSH_TEST(z_varexp_quoted, "-z "$ut_var_exists"", n); - setenv("ut_var_exists", NULL); - - run_command("setenv ut_var_space " "", 0); - assert(!strcmp(getenv("ut_var_space"), " ")); - run_command("setenv ut_var_test $ut_var_space", 0); - assert(!getenv("ut_var_test")); - run_command("setenv ut_var_test "$ut_var_space"", 0); - assert(!strcmp(getenv("ut_var_test"), " ")); - run_command("setenv ut_var_test " 1${ut_var_space}${ut_var_space} 2 "", 0); - assert(!strcmp(getenv("ut_var_test"), " 1 2 ")); - setenv("ut_var_space", NULL); - setenv("ut_var_test", NULL); - -#ifdef CONFIG_SANDBOX - /* File existence */ - HUSH_TEST(e, "-e hostfs - creating_this_file_breaks_uboot_unit_test", n); - run_command("sb save hostfs - creating_this_file_breaks_uboot_unit_test 0 1", 0); - HUSH_TEST(e, "-e hostfs - creating_this_file_breaks_uboot_unit_test", y); - /* Perhaps this could be replaced by an "rm" shell command one day */ - assert(!os_unlink("creating_this_file_breaks_uboot_unit_test")); - HUSH_TEST(e, "-e hostfs - creating_this_file_breaks_uboot_unit_test", n); -#endif #endif
assert(run_command("", 0) == 0); diff --git a/test/py/test_hush_if_test.py b/test/py/test_hush_if_test.py new file mode 100644 index 000000000000..e39e53613500 --- /dev/null +++ b/test/py/test_hush_if_test.py @@ -0,0 +1,141 @@ +# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved. +# +# SPDX-License-Identifier: GPL-2.0 + +import os +import os.path +import pytest + +subtests = ( + # Base if functionality + + ("true", True), + ("false", False), + + # Basic operators + + ("test aaa = aaa", True), + ("test aaa = bbb", False), + + ("test aaa != bbb", True), + ("test aaa != aaa", False), + + ("test aaa < bbb", True), + ("test bbb < aaa", False), + + ("test bbb > aaa", True), + ("test aaa > bbb", False), + + ("test 123 -eq 123", True), + ("test 123 -eq 456", False), + + ("test 123 -ne 456", True), + ("test 123 -ne 123", False), + + ("test 123 -lt 456", True), + ("test 123 -lt 123", False), + ("test 456 -lt 123", False), + + ("test 123 -le 456", True), + ("test 123 -le 123", True), + ("test 456 -le 123", False), + + ("test 456 -gt 123", True), + ("test 123 -gt 123", False), + ("test 123 -gt 456", False), + + ("test 456 -ge 123", True), + ("test 123 -ge 123", True), + ("test 123 -ge 456", False), + + ("test -z """, True), + ("test -z "aaa"", False), + + ("test -n "aaa"", True), + ("test -n """, False), + + # Inversion of simple tests + + ("test ! aaa = aaa", False), + ("test ! aaa = bbb", True), + ("test ! ! aaa = aaa", True), + ("test ! ! aaa = bbb", False), + + # Binary operators + + ("test aaa != aaa -o bbb != bbb", False), + ("test aaa != aaa -o bbb = bbb", True), + ("test aaa = aaa -o bbb != bbb", True), + ("test aaa = aaa -o bbb = bbb", True), + + ("test aaa != aaa -a bbb != bbb", False), + ("test aaa != aaa -a bbb = bbb", False), + ("test aaa = aaa -a bbb != bbb", False), + ("test aaa = aaa -a bbb = bbb", True), + + # Inversion within binary operators + + ("test ! aaa != aaa -o ! bbb != bbb", True), + ("test ! aaa != aaa -o ! bbb = bbb", True), + ("test ! 
aaa = aaa -o ! bbb != bbb", True), + ("test ! aaa = aaa -o ! bbb = bbb", False), + + ("test ! ! aaa != aaa -o ! ! bbb != bbb", False), + ("test ! ! aaa != aaa -o ! ! bbb = bbb", True), + ("test ! ! aaa = aaa -o ! ! bbb != bbb", True), + ("test ! ! aaa = aaa -o ! ! bbb = bbb", True), + + # -z operator + + ("test -z "$ut_var_nonexistent"", True), + ("test -z "$ut_var_exists"", False), +) + +def exec_hush_if(uboot_console, expr, result): + cmd = "if " + expr + "; then echo true; else echo false; fi" + response = uboot_console.run_command(cmd) + assert response.strip() == str(result).lower() + +@pytest.mark.buildconfigspec("sys_hush_parser") +def test_hush_if_test_setup(uboot_console): + uboot_console.run_command("setenv ut_var_nonexistent") + uboot_console.run_command("setenv ut_var_exists 1") + +@pytest.mark.buildconfigspec("sys_hush_parser") +@pytest.mark.parametrize("expr,result", subtests) +def test_hush_if_test(uboot_console, expr, result): + exec_hush_if(uboot_console, expr, result) + +@pytest.mark.buildconfigspec("sys_hush_parser") +def test_hush_if_test_teardown(uboot_console): + uboot_console.run_command("setenv ut_var_exists") + +@pytest.mark.buildconfigspec("sys_hush_parser") +# We might test this on real filesystems via UMS, DFU, "save", etc. +# Of those, only UMS currently allows file removal though. +@pytest.mark.boardspec("sandbox") +def test_hush_if_test_host_file_exists(uboot_console): + test_file = uboot_console.config.result_dir + \ + "/creating_this_file_breaks_uboot_tests" + + try: + os.unlink(test_file) + except: + pass + assert not os.path.exists(test_file) + + expr = "test -e hostfs - " + test_file + exec_hush_if(uboot_console, expr, False) + + try: + with file(test_file, "wb"): + pass + assert os.path.exists(test_file) + + expr = "test -e hostfs - " + test_file + exec_hush_if(uboot_console, expr, True) + finally: + os.unlink(test_file) + + expr = "test -e hostfs - " + test_file + exec_hush_if(uboot_console, expr, False)

Hi Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
From: Stephen Warren swarren@nvidia.com
Migrate almost all tests from command_ut.c into the Python test system. This allows the tests to be run against any U-Boot binary that supports the "if" command (i.e. where hush is enabled), without requiring that binary to be permanently bloated with the code from command_ut.
Some tests in command_ut.c can only be executed from C code, since they test internal (more unit-level) features of various U-Boot APIs. The migrated tests can all operate directly from the U-Boot console.
This seems to migrate more than just the 'if' tests suggested by the commit subject. Perhaps it should be split into two?
Is there any point in running these tests on real hardware? It takes forever due to needing to reset each time. Do we need to reset?
They fail on my board due I think to a problem with the printenv parsing:
Section: test_env_echo_exists Stream: console => printenv printenv baudrate=115200 bootargs=root=/dev/sdb3 init=/sbin/init rootwait ro bootcmd=ext2load scsi 0:3 01000000 /boot/vmlinuz; zboot 01000000 bootfile=bzImage consoledev=ttyS0 fdtcontroladdr=acd3dbc0 hostname=x86 loadaddr=0x1000000 netdev=eth0 nfsboot=setenv bootargs root=/dev/nfs rw nfsroot=$serverip:$rootpath ip=$ipaddr:$serverip:$gatewayip:$netmask:$hostname:$netdev:off console=$consoledev,$baudrate $othbootargs;tftpboot $loadaddr $bootfile;zboot $loadaddr othbootargs=acpi=off pciconfighost=1 ramboot=setenv bootargs root=/dev/ram rw ip=$ipaddr:$serverip:$gatewayip:$netmask:$hostname:$netdev:off console=$consoledev,$baudrate $othbootargs;tftpboot $loadaddr $bootfile;tftpboot $ramdiskaddr $ramdiskfile;zboot $loadaddr 0 $ramdiskaddr $filesize ramdiskaddr=0x2000000 ramdiskfile=initramfs.gz rootpath=/opt/nfsroot scsidevs=1 stderr=vga,serial stdin=usbkbd,i8042-kbd,serial stdout=serial
Environment size: 927/4092 bytes => FAILED: uboot_console = <uboot_console_exec_attach.ConsoleExecAttach object at 0x7feaa0f1a290>
@pytest.fixture(scope="module") def state_test_env(uboot_console):
return StateTestEnv(uboot_console)
test/py/test_env.py:39: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_env.StateTestEnv object at 0x7feaa0d4b7d0> uboot_console = <uboot_console_exec_attach.ConsoleExecAttach object at 0x7feaa0f1a290>
def __init__(self, uboot_console): self.uboot_console = uboot_console
self.get_env()
test/py/test_env.py:13: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <test_env.StateTestEnv object at 0x7feaa0d4b7d0>
def get_env(self): response = self.uboot_console.run_command("printenv") self.env = {} for l in response.splitlines(): if not "=" in l: continue
(var, value) = l.strip().split("=")
E ValueError: too many values to unpack
test/py/test_env.py:22: ValueError
Signed-off-by: Stephen Warren swarren@nvidia.com
test/command_ut.c | 133 ---------------------------------------- test/py/test_hush_if_test.py | 141 +++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 141 insertions(+), 133 deletions(-) create mode 100644 test/py/test_hush_if_test.py
diff --git a/test/command_ut.c b/test/command_ut.c index 926573a39543..35bd35ae2e30 100644 --- a/test/command_ut.c +++ b/test/command_ut.c @@ -20,21 +20,6 @@ static int do_ut_cmd(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]) printf("%s: Testing commands\n", __func__); run_command("env default -f -a", 0);
-	/* run a single command */
-	run_command("setenv single 1", 0);
-	assert(!strcmp("1", getenv("single")));
-
-	/* make sure that compound statements work */
-#ifdef CONFIG_SYS_HUSH_PARSER
-	run_command("if test -n ${single} ; then setenv check 1; fi", 0);
-	assert(!strcmp("1", getenv("check")));
-	run_command("setenv check", 0);
-#endif
-
-	/* commands separated by ; */
-	run_command_list("setenv list 1; setenv list ${list}1", -1, 0);
-	assert(!strcmp("11", getenv("list")));
-
/* commands separated by \n */ run_command_list("setenv list 1\n setenv list ${list}1", -1, 0); assert(!strcmp("11", getenv("list")));
@@ -43,11 +28,6 @@ static int do_ut_cmd(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]) run_command_list("setenv list 1${list}\n", -1, 0); assert(!strcmp("111", getenv("list")));
-	/* three commands in a row */
-	run_command_list("setenv list 1\n setenv list ${list}2; "
-			"setenv list ${list}3", -1, 0);
-	assert(!strcmp("123", getenv("list")));
-
/* a command string with \0 in it. Stuff after \0 should be ignored */ run_command("setenv list", 0); run_command_list(test_cmd, sizeof(test_cmd), 0);
@@ -66,13 +46,6 @@ static int do_ut_cmd(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]) assert(run_command_list("false", -1, 0) == 1); assert(run_command_list("echo", -1, 0) == 0);
-	run_command("setenv foo 'setenv monty 1; setenv python 2'", 0);
-	run_command("run foo", 0);
-	assert(getenv("monty") != NULL);
-	assert(!strcmp("1", getenv("monty")));
-	assert(getenv("python") != NULL);
-	assert(!strcmp("2", getenv("python")));
-
#ifdef CONFIG_SYS_HUSH_PARSER run_command("setenv foo 'setenv black 1\nsetenv adder 2'", 0); run_command("run foo", 0); @@ -80,112 +53,6 @@ static int do_ut_cmd(cmd_tbl_t *cmdtp, int flag, int argc, char * const argv[]) assert(!strcmp("1", getenv("black"))); assert(getenv("adder") != NULL); assert(!strcmp("2", getenv("adder")));
-
-	/* Test the 'test' command */
-
-#define HUSH_TEST(name, expr, expected_result) \
-	run_command("if test " expr " ; then " \
-		"setenv " #name "_" #expected_result " y; else " \
-		"setenv " #name "_" #expected_result " n; fi", 0); \
-	assert(!strcmp(#expected_result, getenv(#name "_" #expected_result))); \
-	setenv(#name "_" #expected_result, NULL);
-
-	/* Basic operators */
-	HUSH_TEST(streq, "aaa = aaa", y);
-	HUSH_TEST(streq, "aaa = bbb", n);
-
-	HUSH_TEST(strneq, "aaa != bbb", y);
-	HUSH_TEST(strneq, "aaa != aaa", n);
-
-	HUSH_TEST(strlt, "aaa < bbb", y);
-	HUSH_TEST(strlt, "bbb < aaa", n);
-
-	HUSH_TEST(strgt, "bbb > aaa", y);
-	HUSH_TEST(strgt, "aaa > bbb", n);
-
-	HUSH_TEST(eq, "123 -eq 123", y);
-	HUSH_TEST(eq, "123 -eq 456", n);
-
-	HUSH_TEST(ne, "123 -ne 456", y);
-	HUSH_TEST(ne, "123 -ne 123", n);
-
-	HUSH_TEST(lt, "123 -lt 456", y);
-	HUSH_TEST(lt_eq, "123 -lt 123", n);
-	HUSH_TEST(lt, "456 -lt 123", n);
-
-	HUSH_TEST(le, "123 -le 456", y);
-	HUSH_TEST(le_eq, "123 -le 123", y);
-	HUSH_TEST(le, "456 -le 123", n);
-
-	HUSH_TEST(gt, "456 -gt 123", y);
-	HUSH_TEST(gt_eq, "123 -gt 123", n);
-	HUSH_TEST(gt, "123 -gt 456", n);
-
-	HUSH_TEST(ge, "456 -ge 123", y);
-	HUSH_TEST(ge_eq, "123 -ge 123", y);
-	HUSH_TEST(ge, "123 -ge 456", n);
-
-	HUSH_TEST(z, "-z \"\"", y);
-	HUSH_TEST(z, "-z \"aaa\"", n);
-
-	HUSH_TEST(n, "-n \"aaa\"", y);
-	HUSH_TEST(n, "-n \"\"", n);
-
-	/* Inversion of simple tests */
-	HUSH_TEST(streq_inv, "! aaa = aaa", n);
-	HUSH_TEST(streq_inv, "! aaa = bbb", y);
-
-	HUSH_TEST(streq_inv_inv, "! ! aaa = aaa", y);
-	HUSH_TEST(streq_inv_inv, "! ! aaa = bbb", n);
-
-	/* Binary operators */
-	HUSH_TEST(or_0_0, "aaa != aaa -o bbb != bbb", n);
-	HUSH_TEST(or_0_1, "aaa != aaa -o bbb = bbb", y);
-	HUSH_TEST(or_1_0, "aaa = aaa -o bbb != bbb", y);
-	HUSH_TEST(or_1_1, "aaa = aaa -o bbb = bbb", y);
-
-	HUSH_TEST(and_0_0, "aaa != aaa -a bbb != bbb", n);
-	HUSH_TEST(and_0_1, "aaa != aaa -a bbb = bbb", n);
-	HUSH_TEST(and_1_0, "aaa = aaa -a bbb != bbb", n);
-	HUSH_TEST(and_1_1, "aaa = aaa -a bbb = bbb", y);
-
-	/* Inversion within binary operators */
-	HUSH_TEST(or_0_0_inv, "! aaa != aaa -o ! bbb != bbb", y);
-	HUSH_TEST(or_0_1_inv, "! aaa != aaa -o ! bbb = bbb", y);
-	HUSH_TEST(or_1_0_inv, "! aaa = aaa -o ! bbb != bbb", y);
-	HUSH_TEST(or_1_1_inv, "! aaa = aaa -o ! bbb = bbb", n);
-
-	HUSH_TEST(or_0_0_inv_inv, "! ! aaa != aaa -o ! ! bbb != bbb", n);
-	HUSH_TEST(or_0_1_inv_inv, "! ! aaa != aaa -o ! ! bbb = bbb", y);
-	HUSH_TEST(or_1_0_inv_inv, "! ! aaa = aaa -o ! ! bbb != bbb", y);
-	HUSH_TEST(or_1_1_inv_inv, "! ! aaa = aaa -o ! ! bbb = bbb", y);
-
-	setenv("ut_var_nonexistent", NULL);
-	setenv("ut_var_exists", "1");
-	HUSH_TEST(z_varexp_quoted, "-z \"$ut_var_nonexistent\"", y);
-	HUSH_TEST(z_varexp_quoted, "-z \"$ut_var_exists\"", n);
-	setenv("ut_var_exists", NULL);
-
-	run_command("setenv ut_var_space \" \"", 0);
-	assert(!strcmp(getenv("ut_var_space"), " "));
-	run_command("setenv ut_var_test $ut_var_space", 0);
-	assert(!getenv("ut_var_test"));
-	run_command("setenv ut_var_test \"$ut_var_space\"", 0);
-	assert(!strcmp(getenv("ut_var_test"), " "));
-	run_command("setenv ut_var_test \" 1${ut_var_space}${ut_var_space} 2 \"", 0);
-	assert(!strcmp(getenv("ut_var_test"), " 1 2 "));
-	setenv("ut_var_space", NULL);
-	setenv("ut_var_test", NULL);
-
-#ifdef CONFIG_SANDBOX
-	/* File existence */
-	HUSH_TEST(e, "-e hostfs - creating_this_file_breaks_uboot_unit_test", n);
-	run_command("sb save hostfs - creating_this_file_breaks_uboot_unit_test 0 1", 0);
-	HUSH_TEST(e, "-e hostfs - creating_this_file_breaks_uboot_unit_test", y);
-	/* Perhaps this could be replaced by an "rm" shell command one day */
-	assert(!os_unlink("creating_this_file_breaks_uboot_unit_test"));
-	HUSH_TEST(e, "-e hostfs - creating_this_file_breaks_uboot_unit_test", n);
-#endif
Are you able to drop the os.h header?
#endif
assert(run_command("", 0) == 0);
diff --git a/test/py/test_hush_if_test.py b/test/py/test_hush_if_test.py
new file mode 100644
index 000000000000..e39e53613500
--- /dev/null
+++ b/test/py/test_hush_if_test.py
@@ -0,0 +1,141 @@
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+
+import os
+import os.path
+import pytest
+
+subtests = (
+    # Base if functionality
+
+    ("true", True),
+    ("false", False),
+
+    # Basic operators
+
+    ("test aaa = aaa", True),
+    ("test aaa = bbb", False),
+
+    ("test aaa != bbb", True),
+    ("test aaa != aaa", False),
+
+    ("test aaa < bbb", True),
+    ("test bbb < aaa", False),
+
+    ("test bbb > aaa", True),
+    ("test aaa > bbb", False),
+
+    ("test 123 -eq 123", True),
+    ("test 123 -eq 456", False),
+
+    ("test 123 -ne 456", True),
+    ("test 123 -ne 123", False),
+
+    ("test 123 -lt 456", True),
+    ("test 123 -lt 123", False),
+    ("test 456 -lt 123", False),
+
+    ("test 123 -le 456", True),
+    ("test 123 -le 123", True),
+    ("test 456 -le 123", False),
+
+    ("test 456 -gt 123", True),
+    ("test 123 -gt 123", False),
+    ("test 123 -gt 456", False),
+
+    ("test 456 -ge 123", True),
+    ("test 123 -ge 123", True),
+    ("test 123 -ge 456", False),
+
+    ("test -z \"\"", True),
+    ("test -z \"aaa\"", False),
+
+    ("test -n \"aaa\"", True),
+    ("test -n \"\"", False),
+
+    # Inversion of simple tests
+
+    ("test ! aaa = aaa", False),
+    ("test ! aaa = bbb", True),
+    ("test ! ! aaa = aaa", True),
+    ("test ! ! aaa = bbb", False),
+
+    # Binary operators
+
+    ("test aaa != aaa -o bbb != bbb", False),
+    ("test aaa != aaa -o bbb = bbb", True),
+    ("test aaa = aaa -o bbb != bbb", True),
+    ("test aaa = aaa -o bbb = bbb", True),
+
+    ("test aaa != aaa -a bbb != bbb", False),
+    ("test aaa != aaa -a bbb = bbb", False),
+    ("test aaa = aaa -a bbb != bbb", False),
+    ("test aaa = aaa -a bbb = bbb", True),
+
+    # Inversion within binary operators
+
+    ("test ! aaa != aaa -o ! bbb != bbb", True),
+    ("test ! aaa != aaa -o ! bbb = bbb", True),
+    ("test ! aaa = aaa -o ! bbb != bbb", True),
+    ("test ! aaa = aaa -o ! bbb = bbb", False),
+
+    ("test ! ! aaa != aaa -o ! ! bbb != bbb", False),
+    ("test ! ! aaa != aaa -o ! ! bbb = bbb", True),
+    ("test ! ! aaa = aaa -o ! ! bbb != bbb", True),
+    ("test ! ! aaa = aaa -o ! ! bbb = bbb", True),
+
+    # -z operator
+
+    ("test -z \"$ut_var_nonexistent\"", True),
+    ("test -z \"$ut_var_exists\"", False),
+
+def exec_hush_if(uboot_console, expr, result):
+    cmd = "if " + expr + "; then echo true; else echo false; fi"
+    response = uboot_console.run_command(cmd)
+    assert response.strip() == str(result).lower()
+
+@pytest.mark.buildconfigspec("sys_hush_parser")
+def test_hush_if_test_setup(uboot_console):
+    uboot_console.run_command("setenv ut_var_nonexistent")
+    uboot_console.run_command("setenv ut_var_exists 1")
+
+@pytest.mark.buildconfigspec("sys_hush_parser")
+@pytest.mark.parametrize("expr,result", subtests)
+def test_hush_if_test(uboot_console, expr, result):
+    exec_hush_if(uboot_console, expr, result)
+
+@pytest.mark.buildconfigspec("sys_hush_parser")
+def test_hush_if_test_teardown(uboot_console):
+    uboot_console.run_command("setenv ut_var_exists")
+
+@pytest.mark.buildconfigspec("sys_hush_parser")
+# We might test this on real filesystems via UMS, DFU, "save", etc.
+# Of those, only UMS currently allows file removal though.
+@pytest.mark.boardspec("sandbox")
+def test_hush_if_test_host_file_exists(uboot_console):
+    test_file = uboot_console.config.result_dir + \
+        "/creating_this_file_breaks_uboot_tests"
+
+    try:
+        os.unlink(test_file)
+    except:
+        pass
+    assert not os.path.exists(test_file)
+
+    expr = "test -e hostfs - " + test_file
+    exec_hush_if(uboot_console, expr, False)
+
+    try:
+        with file(test_file, "wb"):
+            pass
+        assert os.path.exists(test_file)
+
+        expr = "test -e hostfs - " + test_file
+        exec_hush_if(uboot_console, expr, True)
+    finally:
+        os.unlink(test_file)
+
+    expr = "test -e hostfs - " + test_file
+    exec_hush_if(uboot_console, expr, False)
-- 2.6.3
Following up on our previous discussion: with your new series the tests become slightly slower, but not enough to matter. Arguably the tests are a bit harder to adjust, and harder to debug (how do I run gdb?). Any ideas on that?
Also how will we run the existing command unit tests? There should definitely be a way to do that.
Regards, Simon

On 12/19/2015 03:24 PM, Simon Glass wrote:
Hi Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
From: Stephen Warren swarren@nvidia.com
Migrate almost all tests from command_ut.c into the Python test system. This allows the tests to be run against any U-Boot binary that supports the "if" command (i.e. where hush is enabled), without requiring that binary to be permanently bloated with the code from command_ut.
Some tests in command_ut.c can only be executed from C code, since they test internal (more unit-level) features of various U-Boot APIs. The migrated tests can all operate directly from the U-Boot console.
This seems to migrate more than just the 'if' tests suggested by the commit subject. Perhaps it should be split into two?
Is there any point in running these tests on real hardware? It takes forever due to needing to reset each time. Do we need to reset?
No, that should not be needed.
They fail on my board due I think to a problem with the printenv parsing:
Section: test_env_echo_exists Stream: console => printenv
...
othbootargs=acpi=off
Yes, I hadn't tested with variable values that contained an =. Michal found this already. The following patch to test/py/test_env.py class StateTestEnv function get_env() should fix this:
-            (var, value) = l.strip().split("=")
+            (var, value) = l.strip().split("=", 1)
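A quick illustration of why the maxsplit argument fixes the failure Simon hit: environment values such as "othbootargs=acpi=off" contain their own "=", so an unbounded split yields more than two fields and the tuple unpacking raises ValueError.

```python
# One of the failing lines from Simon's printenv output.
line = "othbootargs=acpi=off"

# Unbounded split yields three fields, so "(var, value) = ..." raises
# "ValueError: too many values to unpack":
assert line.split("=") == ["othbootargs", "acpi", "off"]

# Splitting only on the first "=" keeps the value intact:
var, value = line.strip().split("=", 1)
assert var == "othbootargs"
assert value == "acpi=off"
```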
...
Also how will we run the existing command unit tests? There should definitely be a way to do that.
We could add a test that runs "ut xxx" very easily. Of course, the result would only be an aggregate of "all unit tests passed" vs "some unit test failed" at present. Perhaps that could be improved later though.

From: Stephen Warren swarren@nvidia.com
This test invokes the "ums" command in U-Boot, and validates that a USB storage device is enumerated on the test host system, and can be read from.
Signed-off-by: Stephen Warren swarren@nvidia.com --- test/py/test_ums.py | 75 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 75 insertions(+) create mode 100644 test/py/test_ums.py
diff --git a/test/py/test_ums.py b/test/py/test_ums.py
new file mode 100644
index 000000000000..55bcc7ccb703
--- /dev/null
+++ b/test/py/test_ums.py
@@ -0,0 +1,75 @@
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+
+import os
+import pytest
+import time
+
+"""
+Note: This test relies on:
+
+a) boardenv_* to contain configuration values to define which USB ports are
+available for testing. Without this, this test will be automatically skipped.
+For example:
+
+env__usb_dev_ports = (
+    {"tgt_usb_ctlr": "0",
+     "host_dev_link": "/dev/disk/by-path/pci-0000:00:14.0-usb-0:13:1.0-scsi-0:0:0:0"},
+)
+
+env__block_devs = (
+    {"type": "mmc", "id": "0"},  # eMMC; always present
+    {"type": "mmc", "id": "1"},  # SD card; present since I plugged one in
+)
+
+b) udev rules to set permissions on devices nodes, so that sudo is not
+required. For example:
+
+ACTION=="add", SUBSYSTEM=="block", SUBSYSTEMS=="usb", KERNELS=="3-13", MODE:="666"
+
+(You may wish to change the group ID instead of setting the permissions wide
+open. All that matters is that the user ID running the test can access the
+device.)
+"""
+
+def open_ums_device(host_dev_link):
+    try:
+        return open(host_dev_link, "rb")
+    except:
+        return None
+
+def wait_for_ums_device(host_dev_link):
+    for i in xrange(100):
+        fh = open_ums_device(host_dev_link)
+        if fh:
+            return fh
+        time.sleep(0.1)
+    raise Exception("UMS device did not appear")
+
+def wait_for_ums_device_gone(host_dev_link):
+    for i in xrange(100):
+        fh = open_ums_device(host_dev_link)
+        if not fh:
+            return
+        fh.close()
+        time.sleep(0.1)
+    raise Exception("UMS device did not disappear")
+
+@pytest.mark.buildconfigspec("cmd_usb_mass_storage")
+def test_ums(uboot_console, env__usb_dev_port, env__block_devs):
+    tgt_usb_ctlr = env__usb_dev_port["tgt_usb_ctlr"]
+    host_dev_link = env__usb_dev_port["host_dev_link"]
+
+    # We're interested in testing USB device mode on each port, not the cross-
+    # product of that with each device. So, just pick the first entry in the
+    # device list here. We'll test each block device somewhere else.
+    tgt_dev_type = env__block_devs[0]["type"]
+    tgt_dev_id = env__block_devs[0]["id"]
+
+    cmd = "ums %s %s %s" % (tgt_usb_ctlr, tgt_dev_type, tgt_dev_id)
+    # Run the constructed command (rather than a hard-coded "ums 0 mmc 0",
+    # which would ignore the configured controller and block device).
+    uboot_console.run_command(cmd, wait_for_prompt=False)
+    fh = wait_for_ums_device(host_dev_link)
+    fh.read(4096)
+    fh.close()
+    uboot_console.ctrlc()
+    wait_for_ums_device_gone(host_dev_link)

Hi Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
From: Stephen Warren swarren@nvidia.com
This test invokes the "ums" command in U-Boot, and validates that a USB storage device is enumerated on the test host system, and can be read from.
Signed-off-by: Stephen Warren swarren@nvidia.com
test/py/test_ums.py | 75 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 75 insertions(+) create mode 100644 test/py/test_ums.py
Reviewed-by: Simon Glass sjg@chromium.org
Is the intent to replace or augment the existing ums tests?
Regards, Simon

On 12/19/2015 03:24 PM, Simon Glass wrote:
Hi Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
From: Stephen Warren swarren@nvidia.com
This test invokes the "ums" command in U-Boot, and validates that a USB storage device is enumerated on the test host system, and can be read from.
Signed-off-by: Stephen Warren swarren@nvidia.com
test/py/test_ums.py | 75 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 75 insertions(+) create mode 100644 test/py/test_ums.py
Reviewed-by: Simon Glass sjg@chromium.org
Is the intent to replace or augment the existing ums tests?
Eventually replace, although I haven't yet implemented everything that the existing tests do; the existing test does actual disk IO, whereas this test mostly just covers USB device and disk enumeration.
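The "actual disk IO" coverage of the existing test could later be regained with a small host-side read/write check over the exported device node. A minimal sketch, assuming the helper name, block size, and offset are mine (not part of the posted patch):

```python
# Hypothetical helper sketching the disk-IO coverage mentioned above: write a
# random block through the UMS device node, read it back, and compare.
# Not part of the posted patch; names and sizes are illustrative only.
import os

def check_ums_rw(host_dev_path, size=4096, offset=0):
    """Write `size` random bytes at `offset`, then read them back and verify."""
    pattern = os.urandom(size)
    with open(host_dev_path, "r+b") as fh:
        fh.seek(offset)
        fh.write(pattern)
        fh.flush()
        os.fsync(fh.fileno())  # make sure the data really reaches the device
        fh.seek(offset)
        readback = fh.read(size)
    return readback == pattern
```

In the real test this would run between wait_for_ums_device() and ctrlc(), against a scratch region of the exported block device.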

Hello Stephen,
Am 02.12.2015 um 23:18 schrieb Stephen Warren:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
See README.md for more details!
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
Nice work!
I am working on another Python approach, which is useful not only for testing U-Boot; it also works with Linux or other console-based tests, see:
[1] tbot https://github.com/hsdenx/tbot
With tbot it is possible to connect to a lab PC over SSH, so there is no need to have the hardware physically accessible. Currently there is only one lab, the DENX lab in Munich (I live in Hungary and tbot runs at my home).
It should be really easy to add other labs, if others want to join; see: https://github.com/hsdenx/tbot/blob/master/doc/howto_add_new_lab.txt
I set up nightly builds using buildbot [2] http://buildbot.net/ on my Raspberry Pi at home (low DSL speed, especially if my kids play games, so if you cannot see the webpage, press F5 again ;-)
[3] http://xeidos.ddns.net/buildbot/waterfall
For every build there is a logfile, including the shell log. The logfile is generated by tbot. It is possible to define different log levels, as tbot uses the "logging" Python module. Each testcase can add logfile messages.
Testcases can call other testcases, so you can write very small testcases and combine them into bigger ones ...
As tbot defines board states, it is possible to switch between U-Boot and Linux tests, and tbot will automatically switch to the board state the testcase is defined for ... a testcase only has to call https://github.com/hsdenx/tbot/blob/master/src/common/tbotlib.py#L677
So it is, for example, easy to set a date in U-Boot, switch to board state Linux, check the date there, set another date in Linux, then switch back to U-Boot and check the time again ... (I had such a testcase, but not for the current version of tbot)
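The state-switching idea above can be sketched in a few lines of plain Python. This is a toy model only; the class and method names are illustrative and do not match tbot's real API (which uses e.g. tb.set_board_state()):

```python
# Toy model of the automatic board-state switching described above.
# Purely illustrative; tbot's real implementation drives a real console.
class Board(object):
    STATES = ("u-boot", "linux")

    def __init__(self):
        self.state = None
        self.transitions = []

    def set_board_state(self, target):
        # In tbot this actually reboots/boots the board into the requested
        # state; here we only record the transition.
        if target not in self.STATES:
            raise ValueError("unknown board state: %s" % target)
        if target != self.state:
            self.transitions.append((self.state, target))
            self.state = target

def date_roundtrip(board):
    """Outline of the date testcase described above."""
    board.set_board_state("u-boot")   # set the date in U-Boot here
    board.set_board_state("linux")    # check the date in Linux here
    board.set_board_state("u-boot")   # and verify it again in U-Boot
```

The point is that a testcase declares the state it needs, and the framework performs whatever reset/boot sequence gets the board there.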
It would be great to have a lot of labs accessible with board(s) in it for u-boot/linux tests and make there automated nightly builds ...
Currently there are only basic tbot testcases, but some good examples of what is possible: - smartweb dfu test: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_smartweb.py - compile U-Boot source from current mainline - install it on the board - run DFU tests with dfu-util installed on the PC in the lab
- ari_ubi https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_aristainetos2.py - compile current mainline source for the aristainetos2 board - install it on the board - run the U-Boot ubi/ubifs tests on the NAND and SPI NOR
- tqm5200s https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_tqm5200s_try_cur_... - compile current mainline source for the tqm5200s board - install it on the board with a BDI - boot U-Boot (currently fails; a bugfix is posted on the ML)
- an interesting testcase here: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_git_bisect.py This testcase checks out a git tree and starts a "git bisect", running a testcase on the board to detect whether the current version is good or bad. At the end you get the info which commit broke things ... this is how the failure for the tqm5200s was detected, see: http://xeidos.ddns.net/buildbot/builders/tqm5200s/builds/3/steps/shell/logs/... If I find time I want to start this tc automatically in the nightly build when a testcase for a board fails. So we would get breaking commits, tested on real boards, for free every night ...
- there are also Linux tests on [3] ...
- register check (used on the mcx board for pinmux, and on the ccu1 board for pinmux and GPMC registers) https://github.com/hsdenx/tbot/blob/master/src/tc/tc_lx_check_reg_file.py checks the registers defined in a register file against the current settings (using devmem2 for this task); register file example: https://github.com/hsdenx/tbot/blob/master/src/files/ccu1_pinmux_scm.reg create this file with the testcase: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_lx_create_reg_file.py
- https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_sirius_dds.py The sirius dds board had problems with ubi/ubifs and power cuts. This testcase executes 50 power cuts; if no error is detected it succeeds, otherwise it fails. The testcase does the following: - go into state u-boot - start Linux with ubifs as the rootfs, line 40 (as it is an old testcase; this would better be "tb.set_board_state("linux")") - wait until the userspace app SiriusApplicat is started - wait a random 3-10 seconds - power off the board - wait 3 seconds for the board to really power off - loop this 50 times. If there is an error in any of these steps, the testcase ends with an error.
This testcase needs 56 lines (more than half of which are comments).
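The control flow of that power-cut loop can be sketched generically. The callables and names below are mine, not tbot's:

```python
# Generic sketch of the 50-power-cut loop described above. The board actions
# are injected as callables so only the control flow is shown; names are
# illustrative, not tbot API.
import random

def power_cut_test(boot_linux, wait_for_app, power_off,
                   cycles=50, sleep=lambda s: None):
    for i in range(cycles):
        boot_linux()                      # boot Linux with ubifs rootfs
        wait_for_app("SiriusApplicat")    # wait for the userspace app
        sleep(random.uniform(3, 10))      # wait a random 3-10 seconds
        power_off()                       # cut the power
        sleep(3)                          # let the board really power down
    # any exception raised by a step above fails the testcase
```

Injecting the actions also makes the loop itself testable without hardware.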
The basic framework is running; time for adding more testcases is needed ...
bye, Heiko

On 12/02/2015 11:47 PM, Heiko Schocher wrote:
Hello Stephen,
Am 02.12.2015 um 23:18 schrieb Stephen Warren:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot
itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
See README.md for more details!
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
Nice work!
I am working on another python approach, not only good for testing u-boot, also works with linux, or other console based tests, see:
[1] tbot https://github.com/hsdenx/tbot
That looks nice too.
I assume the scope there is too large to aim at inclusion into the U-Boot source tree, since it also aims at Linux testing?

Hello Stephen,
Am 07.12.2015 um 22:51 schrieb Stephen Warren:
On 12/02/2015 11:47 PM, Heiko Schocher wrote:
Hello Stephen,
Am 02.12.2015 um 23:18 schrieb Stephen Warren:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot
itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
See README.md for more details!
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
Nice work!
I am working on another python approach, not only good for testing u-boot, also works with linux, or other console based tests, see:
[1] tbot https://github.com/hsdenx/tbot
That looks nice too.
Thanks! Users welcome ;-)
I assume the scope there is too large to aim at inclusion into the U-Boot source tree, since it also aims at Linux testing?
Yes, tbot has a larger scope ... not only U-Boot/Linux, but anything that can be tested via a console ... the board states are currently u-boot and linux, yes, but I have had no other task to test ... other states can hopefully be added easily ... and one big benefit is that you do not need to have the board in your hands ...
I hope to get more users/labs/boards which can be integrated into some nightly build ... that's the reason why I add tbot results to patch comments from time to time ...
bye, Heiko

On 12/02/2015 03:18 PM, Stephen Warren wrote:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
I finally got permission to publish these. Examples are at: https://github.com/swarren/uboot-test-hooks

On 9.12.2015 17:32, Stephen Warren wrote:
On 12/02/2015 03:18 PM, Stephen Warren wrote:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot
itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
I finally got permission to publish these. Examples are at: https://github.com/swarren/uboot-test-hooks
Interesting. What's the normal setup you have for the board? I see from your description that you use a Numato USB relay - I expect one with more channels for reset. Then you are able to control the boot mode. Is that also via the same relay? How do you power up the board?
Thanks, Michal

On 12/16/2015 08:11 AM, Michal Simek wrote:
On 9.12.2015 17:32, Stephen Warren wrote:
On 12/02/2015 03:18 PM, Stephen Warren wrote:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot
itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
I finally got permission to publish these. Examples are at: https://github.com/swarren/uboot-test-hooks
Interesting. What's the normal setup you have for the board? I see from your description that you use a Numato USB relay - I expect one with more channels for reset. Then you are able to control the boot mode. Is that also via the same relay? How do you power up the board?
In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
For Tegra, there are two important signals: reset and "force recovery". Each of these has a separate relay, so the system currently uses 2 relays per target board. The numato relay board I own has 8 relays, although there are a number of different models.
On Tegra, when reset is pulsed:
- If force-recovery is connected, the SoC enters USB recovery mode. In this state, SW can be downloaded over USB into RAM and executed.
- If force-recovery is not connected, the SoC boots normally, from SW stored in flash (eMMC, SPI, ...)
The example scripts always use recovery mode to download U-Boot into RAM rather than writing it to flash first and then resetting. This saves wear cycles on the flash, but does mean the download happens in the "reset" rather than "flash" script, which may make the examples a bit different than for some other SoCs.
Finally, the example scripts support two boards: my home/laptop dev setup, which uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup, which uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
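Sketched in Python, the reset-script sequencing described above might look like this. The channel numbers, delays, and the set_relay callback are assumptions, not taken from the published hook scripts:

```python
# Sketch of "pulse reset while force-recovery is held", per the description
# above. set_relay(channel, closed) abstracts the Numato/PM342 control;
# channel numbers and delays are illustrative assumptions.
import time

def reset_into_recovery(set_relay, recovery_ch=0, reset_ch=1, delay=0.1):
    set_relay(recovery_ch, True)    # hold force-recovery asserted
    time.sleep(delay)
    set_relay(reset_ch, True)       # press reset ...
    time.sleep(delay)
    set_relay(reset_ch, False)      # ... and release it
    time.sleep(delay)
    set_relay(recovery_ch, False)   # release force-recovery; the SoC is now
                                    # in USB recovery mode, ready for download
```

A cold "normal" boot would be the same sequence with the force-recovery steps omitted.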

Hi Stephen,
2015-12-16 17:27 GMT+01:00 Stephen Warren swarren@wwwdotorg.org:
On 12/16/2015 08:11 AM, Michal Simek wrote:
On 9.12.2015 17:32, Stephen Warren wrote:
On 12/02/2015 03:18 PM, Stephen Warren wrote:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot
itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
I finally got permission to publish these. Examples are at: https://github.com/swarren/uboot-test-hooks
Interesting. What's the normal setup you have for the board? I see from your description that you use a Numato USB relay - I expect one with more channels for reset. Then you are able to control the boot mode. Is that also via the same relay? How do you power up the board?
In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
ok.
For Tegra, there are two important signals: reset and "force recovery".
Do you mean that both of these signals are simply brought out of the chip?
Each of these has a separate relay, so the system currently uses 2 relays per target board. The numato relay board I own has 8 relays, although there are a number of different models.
ok
On Tegra, when reset is pulsed:
- If force-recovery is connected, the SoC enters USB recovery mode. In
this state, SW can be downloaded over USB into RAM and executed.
Is this a boot ROM feature? For Xilinx boards, JTAG is available all the time, which means the download can be done via JTAG instead.
- If force-recovery is not connected, the SoC boots normally, from SW
stored in flash (eMMC, SPI, ...)
The example scripts always use recovery mode to download U-Boot into RAM rather than writing it to flash first and then resetting. This saves wear cycles on the flash, but does mean the download happens in the "reset" rather than "flash" script, which may make the examples a bit different than for some other SoCs.
Are you testing all boot modes? I expect these need to be tested too. Do you use SPL? If yes, are you going to test it in this way?
Finally, the example scripts support two boards; my home/laptop dev setup that uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup that uses our "PM342" debug board to controll the signals. The latter works logically the same as the numato relay board, except it contains electronic switches driven by an FTDI chip.
I expect this is an FTDI chip on the target, right?
thanks, Michal

On 12/16/2015 10:43 AM, Michal Simek wrote:
Hi Stephen,
2015-12-16 17:27 GMT+01:00 Stephen Warren <swarren@wwwdotorg.org mailto:swarren@wwwdotorg.org>:
On 12/16/2015 08:11 AM, Michal Simek wrote: [earlier quoting snipped] In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
ok.
For Tegra, there are two important signals: reset and "force recovery".
Do you mean that both of these signals are simply brought out of the chip?
Yes. Reset is typically driven into the PMIC, and the signal to request force recovery is driven into Tegra itself.
Typically there are push-buttons on development boards to control those two signals. I've simply wired my relays across those buttons to simulate the button press.
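For the relay side, the Numato USB boards expose a serial port that accepts short text commands. A minimal "press the button" helper might look like the sketch below; the command syntax follows Numato's documented serial protocol, but the helper itself (and the write callback standing in for a serial port) is mine:

```python
# Hypothetical helper for pulsing one relay channel on a Numato USB relay
# board. The "relay on/off/read <n>" command strings follow Numato's
# documented serial protocol; write() stands in for a serial port's write.
import time

def numato_command(action, channel):
    if action not in ("on", "off", "read"):
        raise ValueError("unknown relay action: %s" % action)
    return "relay %s %d\r" % (action, channel)

def press_button(write, channel, hold=0.2):
    """Simulate a push-button press by closing, then opening, one relay."""
    write(numato_command("on", channel))
    time.sleep(hold)
    write(numato_command("off", channel))
```

With the relay wired across the board's reset or force-recovery button, press_button() is all a hook script needs.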
Each of these has a separate relay, so the system currently uses 2 relays per target board. The numato relay board I own has 8 relays, although there are a number of different models.
ok
On Tegra, when reset is pulsed: - If force-recovery is connected, the SoC enters USB recovery mode. In this state, SW can be downloaded over USB into RAM and executed.
Is this a boot ROM feature?
Yes.
For Xilinx boards, JTAG is available all the time. It means the download can be done via JTAG instead.
That sounds plausible. The only issue might be general system state; can you reset everything to POR defaults via JTAG before the download? If not, perhaps e.g. the eMMC controller was partially initialized by previous code, which might interfere with assumptions made by the new code that's downloaded?
- If force-recovery is not connected, the SoC boots normally, from SW stored in flash (eMMC, SPI, ...) The example scripts always use recovery mode to download U-Boot into RAM rather than writing it to flash first and then resetting. This saves wear cycles on the flash, but does mean the download happens in the "reset" rather than "flash" script, which may make the examples a bit different than for some other SoCs.
Are you testing all boot modes? I expect these need to be tested too. Do you use SPL? If yes, are you going to test it in this way?
With those example scripts, cold boot isn't being tested. However, (a) I could define a new board ID (or pick up environment variables) to cause that to be tested sometimes (b) I don't recall having seen any differences between cold boot and recovery mode boot in the past; we get a lot of quicker/lower-wear test coverage this way without too much additional risk.
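Option (a) could be as simple as an environment variable consulted by the hook scripts; the variable name and the mode strings below are hypothetical, not existing code:

```python
# Hypothetical boot-mode selection for the hook scripts: choose cold boot
# from flash vs. download via USB recovery mode based on an environment
# variable. The variable name and mode strings are assumptions.
import os

def choose_boot_mode(environ=None):
    environ = os.environ if environ is None else environ
    mode = environ.get("UBOOT_TEST_BOOT_MODE", "recovery")
    if mode not in ("recovery", "flash"):
        raise ValueError("unsupported boot mode: %s" % mode)
    return mode
```

The reset script would then branch on the returned mode, defaulting to the lower-wear recovery-mode download.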
SPL is in use. However, SPL on Tegra has a bit of a different job than it has on some other chips. The boot ROM always initializes SDRAM, and SPL actually runs on a different CPU and primarily has the job of booting the main CPU where the main U-Boot binary runs. For more information, see:
ftp://download.nvidia.com/tegra-public-appnotes/index.html
Finally, the example scripts support two boards: my home/laptop dev setup, which uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup, which uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I expect this is an FTDI chip on the target, right?
It's actually a separate common debug board. Most/all of our development boards (and perhaps some production boards) have a standardized connector into which the common debug board plugs.
thanks, Michal
-- Michal Simek, Ing. (M.Eng), OpenPGP -> KeyID: FE3D1F91 w: www.monstr.eu http://www.monstr.eu p: +42-0-721842854 Maintainer of Linux kernel - Microblaze cpu - http://www.monstr.eu/fdt/ Maintainer of Linux kernel - Xilinx Zynq ARM architecture Microblaze U-BOOT custodian and responsible for u-boot arm zynq platform

2015-12-16 19:09 GMT+01:00 Stephen Warren swarren@wwwdotorg.org:
On 12/16/2015 10:43 AM, Michal Simek wrote:
Hi Stephen,
2015-12-16 17:27 GMT+01:00 Stephen Warren <swarren@wwwdotorg.org mailto:swarren@wwwdotorg.org>:
On 12/16/2015 08:11 AM, Michal Simek wrote: [earlier quoting snipped] In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
ok.
For Tegra, there are two important signals: reset and "force recovery".
Do you mean that these both signals are just connected out of chip?
Yes. Reset is typically driven into the PMIC, and the signal to request force recovery is driven into Tegra itself.
Typically there are push-buttons on development boards to control those two signals. I've simply wired my relays across those buttons to simulate the button press.
ok I see.
Each of these has a separate relay, so the system currently uses 2 relays per target board. The numato relay board I own has 8 relays, although there are a number of different models.
ok
On Tegra, when reset is pulsed: - If force-recovery is connected, the SoC enters USB recovery mode. In this state, SW can be downloaded over USB into RAM and executed.
Is this bootrom feature?
Yes.
For Xilinx boards, JTAG is available all the time. It means the download can be done via JTAG instead.
That sounds plausible. The only issue might be general system state; can you reset everything to POR defaults via JTAG before the download?
There is a CPU reset and a SoC reset on the board which can be used, but I have an IP power switch. It means I can control the power, which ensures a correct state.
If not, perhaps e.g. the eMMC controller was partially initialized by previous code, which might interfere with assumptions made by the new code that's downloaded?
I think my power switch solves this without any problem.
- If force-recovery is not connected, the SoC boots normally, from SW stored in flash (eMMC, SPI, ...) The example scripts always use recovery mode to download U-Boot into RAM rather than writing it to flash first and then resetting. This saves wear cycles on the flash, but does mean the download happens in the "reset" rather than "flash" script, which may make the examples a bit different than for some other SoCs.
Are you testing all boot modes? I expect these need to be tested too. Do you use SPL? If yes, are you going to test it in this way?
With those example scripts, cold boot isn't being tested. However, (a) I could define a new board ID (or pick up environment variables) to cause that to be tested sometimes (b) I don't recall having seen any differences between cold boot and recovery mode boot in the past; we get a lot of quicker/lower-wear test coverage this way without too much additional risk.
ok.
SPL is in use. However, SPL on Tegra has a bit of a different job than it has on some other chips. The boot ROM always initializes SDRAM, and SPL actually runs on a different CPU and primarily has the job of booting the main CPU where the main U-Boot binary runs. For more information, see:
ftp://download.nvidia.com/tegra-public-appnotes/index.html
Interesting.
Finally, the example scripts support two boards: my home/laptop dev setup, which uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup, which uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I expect this is an FTDI chip on the target, right?
It's actually a separate common debug board. Most/all of our development boards (and perhaps some production boards) have a standardized connector into which the common debug board plugs.
ok. I think my setup is not that far from what you are using, and I expect that other SoCs will be very similar. Do you have any other testcases which you are running but haven't sent?
I will try to find some time for testing this week. If not, then next year.
Thanks, Michal

On 12/16/2015 11:32 AM, Michal Simek wrote:
2015-12-16 19:09 GMT+01:00 Stephen Warren <swarren@wwwdotorg.org mailto:swarren@wwwdotorg.org>:
On 12/16/2015 10:43 AM, Michal Simek wrote: [earlier quoting snipped] Do you mean that both of these signals are simply brought out of the chip? Yes.
Reset is typically driven into the PMIC, and the signal to request force recovery is driven into Tegra itself. Typically there are push-buttons on development boards to control those two signals. I've simply wired my relays across those buttons to simulate the button press.
ok I see.
Each of these has a separate relay, so the system currently uses 2 relays per target board. The Numato relay board I own has 8 relays, although there are a number of different models.
ok
On Tegra, when reset is pulsed:
- If force-recovery is connected, the SoC enters USB recovery mode. In this state, SW can be downloaded over USB into RAM and executed.
Is this a boot ROM feature?
Yes.
For Xilinx boards, JTAG is available all the time. It means the download can be done via JTAG instead.
That sounds plausible. The only issue might be general system state; can you reset everything to POR defaults via JTAG before the download?
There are CPU reset and SoC reset on the board which can be used, but I have an IP power switch. It means I can handle power, which ensures a correct state.
If not, perhaps e.g. the eMMC controller was partially initialized by previous code, which might interfere with assumptions made by the new code that's downloaded?
I think my power switch solves this without any problem.
- If force-recovery is not connected, the SoC boots normally, from SW stored in flash (eMMC, SPI, ...)
The example scripts always use recovery mode to download U-Boot into RAM rather than writing it to flash first and then resetting. This saves wear cycles on the flash, but does mean the download happens in the "reset" rather than "flash" script, which may make the examples a bit different than for some other SoCs.
Are you testing all boot modes? Because I expect these need to be tested too. Do you use SPL? If yes, are you going to test it in this way?
With those example scripts, cold boot isn't being tested. However, (a) I could define a new board ID (or pick up environment variables) to cause that to be tested sometimes; (b) I don't recall having seen any differences between cold boot and recovery mode boot in the past; we get a lot of quicker/lower-wear test coverage this way without too much additional risk.
ok.
SPL is in use. However, SPL on Tegra has a bit of a different job than it has on some other chips. The boot ROM always initializes SDRAM, and SPL actually runs on a different CPU and primarily has the job of booting the main CPU where the main U-Boot binary runs. For more information, see: ftp://download.nvidia.com/tegra-public-appnotes/index.html
Interesting.
Finally, the example scripts support two boards; my home/laptop dev setup that uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup that uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I expect this is an FTDI chip on the target, right?
It's actually a separate common debug board. Most/all of our development boards (and perhaps some production boards) have a standardized connector into which the common debug board plugs.
ok. I think my setup is not that far from what you are using, and I expect that other SoCs will be very similar. Do you have any other testcases which you are running and haven't sent?
Not at present.
As an FYI, I typically publish my local work-in-progress branch at: git://github.com/swarren/u-boot.git tegra_dev
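For anyone wiring up a similar rig, the reset-pulse described above could be driven by a small hook helper like the sketch below. The "relay on N" / "relay off N" command strings follow the Numato USB relay module serial protocol as I understand it; the channel number, hold time, and the idea of passing in a serial write function are illustrative assumptions, not part of the published hook scripts.

```python
import time

def relay_cmd(state, channel):
    """Build one Numato-style relay command, e.g. b'relay on 0\r'."""
    return ('relay %s %d\r' % (state, channel)).encode()

def pulse_reset(write, channel=0, hold_secs=0.1):
    """Pulse a relay wired across the board's reset button.

    'write' is any callable that sends bytes to the relay board's
    serial port (e.g. serial.Serial(...).write with pyserial).
    """
    write(relay_cmd('on', channel))   # press the button
    time.sleep(hold_secs)             # hold it briefly
    write(relay_cmd('off', channel))  # release it
```

The same helper, with a different channel, would drive the force-recovery relay.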

Hi Stephen,
Finally, the example scripts support two boards; my home/laptop dev setup that uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup that
uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I expect this is FTDI chip on the target right? It's actually a separate common debug board. Most/all of our development boards (and perhaps some production boards) have a standardized connector into which the common debug board plugs.
ok. I think my setup is not that far from what you are using, and I expect that other SoCs will be very similar. Do you have any other testcases which you are running and haven't sent?
Not at present.
As an FYI, I typically publish my local work-in-progress branch at: git://github.com/swarren/u-boot.git tegra_dev
I have looked at your patches, and it was no problem to get them working on Microblaze and Zynq boards. I use kermit without any problem. I used cu on Microblaze.
- What I do miss is power-off functionality, because it is not practical to keep the board always on. Turning it on can be solved via the reset script.
- Then, place the tests into a separate folder for better separation.
- I see that the output log doesn't handle tabs correctly - output from the i2c bus, for example.
- Is there any way to handle timeouts or hangs? For example, to recognize whether "sleep 60" fails or just takes long. It means being able to set up timeouts would be good.
I will have more comments when I spend more time with it, but it looks pretty good for a start.
Thanks, Michal

On 12/18/2015 07:50 AM, Michal Simek wrote:
Hi Stephen,
Finally, the example scripts support two boards; my home/laptop dev setup that uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup that
uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I expect this is FTDI chip on the target right? It's actually a separate common debug board. Most/all of our development boards (and perhaps some production boards) have a standardized connector into which the common debug board plugs.
ok. I think my setup is not that far from what you are using, and I expect that other SoCs will be very similar. Do you have any other testcases which you are running and haven't sent?
Not at present.
As an FYI, I typically publish my local work-in-progress branch at: git://github.com/swarren/u-boot.git tegra_dev
I have looked at your patches, and it was no problem to get them working on Microblaze and Zynq boards. I use kermit without any problem. I used cu on Microblaze.
Great!
- What I do miss is power-off functionality, because it is not practical
to keep the board always on. Turning it on can be solved via the reset script.
Yes, I would expect that the flash or reset script would turn the board on. It should be easy to add an extra hook script at the end which turns the board off. Or, whatever automation system you use to invoke test.py could simply do that right after running test.py.
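The "turn the board off right after test.py" idea could look roughly like the sketch below. The runner callable and the hook name "u-boot-test-power-off" are hypothetical stand-ins for whatever a given lab setup uses; the --board-type/--board-identity arguments mirror the options this series adds to test.py.

```python
def run_tests_then_power_off(run, board_type, board_identity):
    """Run test.py, then always power the board off afterwards.

    'run' is any callable taking an argv list and returning an exit
    code (e.g. subprocess.call); the power-off hook name is assumed.
    """
    try:
        return run(['./test.py', '--board-type', board_type,
                    '--board-identity', board_identity])
    finally:
        # Power off even if the test run failed or raised.
        run(['u-boot-test-power-off', board_type, board_identity])
```

In practice 'run' would be subprocess.call, and the hook would toggle your relay or IP power switch.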
- Then place tests to separate folder for better separation.
You mean e.g. test/py/tests/ ?
- I see that output log doesn't handle tabs correctly - output from i2c
bus for example.
OK. I can easily make the logging code replace a TAB with something else, e.g. a chain of spaces, although it will mean keeping track of the output character count since the last newline, which will be a bit more painful. Let me look into this.
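For concreteness, the column-tracking approach described here would look roughly like this sketch (illustrative only; as noted later in the thread, the eventual fix used <pre> instead):

```python
def expand_tabs(text, tab_width=8):
    """Replace each TAB with spaces up to the next tab stop,
    tracking the output column since the last newline."""
    out = []
    col = 0  # column since the last newline
    for ch in text:
        if ch == '\t':
            pad = tab_width - (col % tab_width)
            out.append(' ' * pad)
            col += pad
        elif ch == '\n':
            out.append(ch)
            col = 0
        else:
            out.append(ch)
            col += 1
    return ''.join(out)
```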
- Is there any way to handle timeouts or hangs? For example, to
recognize whether "sleep 60" fails or just takes long. It means being able to set up timeouts would be good.
ubspawn.py:expect() does have a timeout capability, and uboot_console_base.py:ensure_spawned() sets this to 30s by default. There isn't currently any example of modifying or saving/restoring the timeout, or running commands that are expected to have a timeout, although either should be pretty easy to add. I expect the result would look something like this in a test:
with uboot_console.push_timeout(60000 + some_margin):
    uboot_console.run_command("sleep 60")
    # Perhaps the actual time taken should be validated here too

with uboot_console.timeout_is_expected(10000):
    # code that is expected to time out
    # Perhaps the following command would be integrated into the
    # timeout_is_expected() implementation, since I think it's the only
    # way you could recover from this situation?
    uboot_console.ctrlc()
... both modelled after the existing uboot_console.disable_check() code.
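For concreteness, a push_timeout() along those lines could be a small context manager. This is a sketch only, assuming a console object with a millisecond 'timeout' attribute as ubspawn.py uses; it is not the actual implementation:

```python
import contextlib

class ConsoleTimeouts:
    """Toy model of the console object's timeout handling."""
    def __init__(self, timeout=30000):
        self.timeout = timeout  # milliseconds, the 30s default

    @contextlib.contextmanager
    def push_timeout(self, new_timeout):
        """Temporarily raise (or lower) the expect() timeout."""
        saved = self.timeout
        self.timeout = new_timeout
        try:
            yield
        finally:
            self.timeout = saved  # restore even if the command failed
```

A test would then wrap a slow command in the context manager so the saved timeout is restored on any exit path, exactly like disable_check() does for console checks.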
I will have more comments when I spend more time with it but it looks pretty good for start.
Thanks.

On 12/18/2015 11:33 AM, Stephen Warren wrote:
On 12/18/2015 07:50 AM, Michal Simek wrote:
...
- I see that output log doesn't handle tabs correctly - output from i2c
bus for example.
OK. I can easily make the logging code replace a TAB with something else, e.g. a chain of spaces, although it will mean keeping track of the output character count since the last newline, which will be a bit more painful. Let me look into this.
It looks like the <pre> tag handles TABs already, so I converted to use that. Take a look at:
https://github.com/swarren/u-boot/commit/ee09f5ddf2c646a6ba2b0dddbdf4ef59001...
"test/py: support TABs in log"
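The <pre>-based fix works because browsers render TABs inside <pre> with proper tab stops, so the log writer only needs to HTML-escape the console output and wrap it. A minimal sketch of that idea (the function name is illustrative, not the actual multiplexed_log.py API):

```python
import html

def format_log_chunk(text):
    """Wrap escaped console output in <pre> so TABs render correctly."""
    return '<pre>%s</pre>' % html.escape(text)
```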

On 18.12.2015 23:36, Stephen Warren wrote:
On 12/18/2015 11:33 AM, Stephen Warren wrote:
On 12/18/2015 07:50 AM, Michal Simek wrote:
...
- I see that output log doesn't handle tabs correctly - output from i2c
bus for example.
OK. I can easily make the logging code replace a TAB with something else, e.g. a chain of spaces, although it will mean keeping track of the output character count since the last newline, which will be a bit more painful. Let me look into this.
It looks like the <pre> tag handles TABs already, so I converted to use that. Take a look at:
https://github.com/swarren/u-boot/commit/ee09f5ddf2c646a6ba2b0dddbdf4ef59001...
"test/py: support TABs in log"
That works. Feel free to add my Tested-by: Michal Simek michal.simek@xilinx.com
Thanks, Michal

On 18.12.2015 19:33, Stephen Warren wrote:
On 12/18/2015 07:50 AM, Michal Simek wrote:
Hi Stephen,
Finally, the example scripts support two boards; my home/laptop dev setup that uses a Numato relay board to control the
signals to the board I use there, and my work desktop dev setup that uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I expect this is FTDI chip on the target right? It's actually a separate common debug board. Most/all of our development boards (and perhaps some production boards) have a standardized connector into which the common debug board plugs.
ok. I think my setup is not that far from what you are using, and I expect that other SoCs will be very similar. Do you have any other testcases which you are running and haven't sent?
Not at present.
As an FYI, I typically publish my local work-in-progress branch at: git://github.com/swarren/u-boot.git tegra_dev
I have looked at your patches, and it was no problem to get them working on Microblaze and Zynq boards. I use kermit without any problem. I used cu on Microblaze.
Great!
btw: Is there any reason that you don't allow to clone your git repos?
- What I do miss is power-off functionality, because it is not practical
to keep the board always on. Turning it on can be solved via the reset script.
Yes, I would expect that the flash or reset script would turn the board on. It should be easy to add an extra hook script at the end which turns the board off. Or, whatever automation system you use to invoke test.py could simply do that right after running test.py.
- Then place tests to separate folder for better separation.
You mean e.g. test/py/tests/ ?
yes.
- Is there any way to handle timeouts or hangs? For example, to
recognize whether "sleep 60" fails or just takes long. It means being able to set up timeouts would be good.
ubspawn.py:expect() does have a timeout capability, and uboot_console_base.py:ensure_spawned() sets this to 30s by default. There isn't currently any example of modifying or saving/restoring the timeout, or running commands that are expected to have a timeout, although either should be pretty easy to add. I expect the result would look something like this in a test:
with uboot_console.push_timeout(60000 + some_margin):
    uboot_console.run_command("sleep 60")
    # Perhaps the actual time taken should be validated here too

with uboot_console.timeout_is_expected(10000):
    # code that is expected to time out
    # Perhaps the following command would be integrated into the
    # timeout_is_expected() implementation, since I think it's the only
    # way you could recover from this situation?
    uboot_console.ctrlc()
... both modelled after the existing uboot_console.disable_check() code.
I think this should be part of the sleep testing.
I will have more comments when I spend more time with it, but it looks pretty good for a start.
Then I see incorrect timeout reporting with tftpboot:
Loading: *%08################################################################# ###################### 2.4 MiB/s
Regarding the board-identity parameter: if not set up, you are using "na", but I think CONFIG_IDENT_STRING can be used instead. Also, I would like to have this parameter available in tests, because for ethernet testing it will be good to have several folders with golden images for testing.
Also, is there a way to run one particular test, for easier development? I know that I can simply remove all the other test files, but a better option would be useful.
Thanks, Michal

On 01/04/2016 05:41 AM, Michal Simek wrote:
On 18.12.2015 19:33, Stephen Warren wrote:
On 12/18/2015 07:50 AM, Michal Simek wrote:
Hi Stephen,
Finally, the example scripts support two boards; my home/laptop dev setup that uses a Numato relay board to control the
signals to the board I use there, and my work desktop dev setup that uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I expect this is FTDI chip on the target right? It's actually a separate common debug board. Most/all of our development boards (and perhaps some production boards) have a standardized connector into which the common debug board plugs.
ok. I think my setup is not that far from what you are using, and I expect that other SoCs will be very similar. Do you have any other testcases which you are running and haven't sent?
Not at present.
As an FYI, I typically publish my local work-in-progress branch at: git://github.com/swarren/u-boot.git tegra_dev
I have looked at your patches, and it was no problem to get them working on Microblaze and Zynq boards. I use kermit without any problem. I used cu on Microblaze.
Great!
btw: Is there any reason that you don't allow to clone your git repos?
Hmm. git protocol doesn't seem to work on github any more; try cloning over SSH if you have a github ID (git@github.com:swarren/u-boot.git) or HTTPS otherwise (https://github.com/swarren/u-boot.git).
...
I will have more comments when I spend more time with it but it looks pretty good for start.
Then I see incorrect timeout reporting with tftpboot
Loading: *%08################################################################# ###################### 2.4 MiB/s
This looks like another case where an individual test needs to adjust the timeout used to wait for the prompt.
Regarding the board-identity parameter: if not set up, you are using "na", but I think CONFIG_IDENT_STRING can be used instead.
I believe those two things are different. The test system's concept of board identity refers to the physical instance of the board (e.g. if you have 3 identical boards in order to test N branches/commits in parallel) whereas CONFIG_IDENT_STRING is something built into the U-Boot binary to identify the type (not instance) of board if I understand correctly.
Also, I would like to have this parameter available in tests, because for ethernet testing it will be good to have several folders with golden images for testing.
I believe you can access uboot_console.config.board_identity. However, that would make the tests depend on your particular environment, so it's probably not a good idea to use that parameter at all.
Also, is there a way to run one particular test, for easier development? I know that I can simply remove all the other test files, but a better option would be useful.
If you pass "-k testname" to the script, it'll only run test(s) that match that string. That's a standard pytest option.

Hello Stephen,
Am 16.12.2015 um 17:27 schrieb Stephen Warren:
On 12/16/2015 08:11 AM, Michal Simek wrote:
On 9.12.2015 17:32, Stephen Warren wrote:
On 12/02/2015 03:18 PM, Stephen Warren wrote:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot
itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
I finally got permission to publish these. Examples are at: https://github.com/swarren/uboot-test-hooks
Interesting. What's the normal setup which you have for the board? I see from your description that you use numato usb relay - I expect one with more channels for reset. Then you are able to control boot mode. Is it also via the same relay? How do you power up the board?
In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
Maybe you give tbot (I mentioned it in this thread) a chance?
There, these things are automated; also you can do Linux (and other console based) tests ... I currently added a testcase [1] which adds patches from patchwork, which are in a user's ToDo list, to a git tree! In this case it is a u-boot git tree ... it checks them with checkpatch, compiles it, and tries the new image on the board and calls testcases ... fully automated in a now weekly build [2] ... (only weekly, but that's a setup parameter in buildbot, as I have tbot and buildbot running on a raspberry pi) and don't forget, I have the board not where I run tbot; the boards are ~1000km from me ... So, it is possible to add a U-Boot test setup on a server, and test boards all over the world ...
For Tegra, there are two important signals: reset and "force recovery". Each of these has a separate relay, so the system currently uses 2 relays per target board. The numato relay board I own has 8 relays, although there are a number of different models.
On Tegra, when reset is pulsed:
- If force-recovery is connected, the SoC enters USB recovery mode. In this state, SW can be
downloaded over USB into RAM and executed.
- If force-recovery is not connected, the SoC boots normally, from SW stored in flash (eMMC, SPI, ...)
The example scripts always use recovery mode to download U-Boot into RAM rather than writing it to flash first and then resetting. This saves wear cycles on the flash, but does mean the download happens in the "reset" rather than "flash" script, which may make the examples a bit different than for some other SoCs.
Finally, the example scripts support two boards; my home/laptop dev setup that uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup that uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I separated such functions into a separate Python class, so such setup specific things should be easy to add for other (in tbot called "lab") setups ...
Currently I have only a bdi testcase, which flashes a new image into the board, when it is broken ... but such relay things can be added of course ...
bye, Heiko
[1] get patchlist from patchwork: https://github.com/hsdenx/tbot/commit/0bcaf4877e7aad4df2039913dcb6e85303a0b1...
apply them to git tree: https://github.com/hsdenx/tbot/commit/b7e2de3731252b518754cc3f71dc782559b0ca...
and use this on the smartweb board: https://github.com/hsdenx/tbot/commit/56a1ac18e5730ae9ffa7686acfc52877272be9...
[2] weekly started testcase for the smartweb board: http://xeidos.ddns.net/buildbot/builders/smartweb_dfu log with the testcases from [1] http://xeidos.ddns.net/buildbot/builders/smartweb_dfu/builds/32/steps/shell/...

Hi Heiko,
On 16 December 2015 at 22:45, Heiko Schocher hs@denx.de wrote:
Hello Stephen,
Am 16.12.2015 um 17:27 schrieb Stephen Warren:
On 12/16/2015 08:11 AM, Michal Simek wrote:
On 9.12.2015 17:32, Stephen Warren wrote:
On 12/02/2015 03:18 PM, Stephen Warren wrote:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot
itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
I finally got permission to publish these. Examples are at: https://github.com/swarren/uboot-test-hooks
Interesting. What's the normal setup which you have for the board? I see from your description that you use numato usb relay - I expect one with more channels for reset. Then you are able to control boot mode. Is it also via the same relay? How do you power up the board?
In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
Maybe you give tbot (I mentioned it in this thread) a chance?
There, these things are automated; also you can do Linux (and other console based) tests ... I currently added a testcase [1] which adds patches from patchwork, which are in a user's ToDo list, to a git tree! In this case it is a u-boot git tree ... it checks them with checkpatch, compiles it, and tries the new image on the board and calls testcases ... fully automated in a now weekly build [2] ... (only weekly, but that's a setup parameter in buildbot, as I have tbot and buildbot running on a raspberry pi) and don't forget, I have the board not where I run tbot; the boards are ~1000km from me ... So, it is possible to add a U-Boot test setup on a server, and test boards all over the world ...
This sounds like a great development.
How can we get this so that it can be used by U-Boot people? Do you think you could add a README to the mainline, or some scripts to aid setting it up? I would be interested in setting up a few boards that run continuous testing, and I suspect others would also if it were easier.
For Tegra, there are two important signals: reset and "force recovery". Each of these has a separate relay, so the system currently uses 2 relays per target board. The numato relay board I own has 8 relays, although there are a number of different models.
On Tegra, when reset is pulsed:
- If force-recovery is connected, the SoC enters USB recovery mode. In
this state, SW can be downloaded over USB into RAM and executed.
- If force-recovery is not connected, the SoC boots normally, from SW
stored in flash (eMMC, SPI, ...)
The example scripts always use recovery mode to download U-Boot into RAM rather than writing it to flash first and then resetting. This saves wear cycles on the flash, but does mean the download happens in the "reset" rather than "flash" script, which may make the examples a bit different than for some other SoCs.
Finally, the example scripts support two boards; my home/laptop dev setup that uses a Numato relay board to control the signals to the board I use there, and my work desktop dev setup that uses our "PM342" debug board to control the signals. The latter works logically the same as the Numato relay board, except it contains electronic switches driven by an FTDI chip.
I separated such functions into a separate Python class, so such setup specific things should be easy to add for other (in tbot called "lab") setups ...
Currently I have only a bdi testcase, which flashes a new image into the board, when it is broken ... but such relay things can be added of course ...
bye, Heiko [1] get patchlist from patchwork
https://github.com/hsdenx/tbot/commit/0bcaf4877e7aad4df2039913dcb6e85303a0b1... apply them to git tree:
https://github.com/hsdenx/tbot/commit/b7e2de3731252b518754cc3f71dc782559b0ca... and use this on the smartweb board:
https://github.com/hsdenx/tbot/commit/56a1ac18e5730ae9ffa7686acfc52877272be9...
[2] weekly started testcase for the smartweb board: http://xeidos.ddns.net/buildbot/builders/smartweb_dfu log with the testcases from [1]
http://xeidos.ddns.net/buildbot/builders/smartweb_dfu/builds/32/steps/shell/...
Regards, Simon

Hello Simon,
Am 15.01.2016 um 00:12 schrieb Simon Glass:
Hi Heiko,
On 16 December 2015 at 22:45, Heiko Schocher hs@denx.de wrote:
Hello Stephen,
Am 16.12.2015 um 17:27 schrieb Stephen Warren:
On 12/16/2015 08:11 AM, Michal Simek wrote:
On 9.12.2015 17:32, Stephen Warren wrote:
On 12/02/2015 03:18 PM, Stephen Warren wrote:
[...]
In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
Maybe you give tbot (I mentioned it in this thread) a chance?
There, these things are automated; also you can do Linux (and other console based) tests ... I currently added a testcase [1] which adds patches from patchwork, which are in a user's ToDo list, to a git tree! In this case it is a u-boot git tree ... it checks them with checkpatch, compiles it, and tries the new image on the board and calls testcases ... fully automated in a now weekly build [2] ... (only weekly, but that's a setup parameter in buildbot, as I have tbot and buildbot running on a raspberry pi) and don't forget, I have the board not where I run tbot; the boards are ~1000km from me ... So, it is possible to add a U-Boot test setup on a server, and test boards all over the world ...
This sounds like a great development.
Thanks!
How can we get this so that it can be used by U-Boot people? Do you think you could add a README to the mainline, or some scripts to aid setting it up? I would be interested in setting up a few boards that run continuous testing, and I suspect others would also if it were easier.
Yes, good idea!
I think about preparing ASAP a patch for U-Boot, creating:
u-boot:/tools/tbot/README -> common infos about tbot
u-boot:/tools/tbot/README.install -> HowTo install / use it
u-boot:/tools/tbot/README.create_a_testcase
u-boot:/tools/tbot/README-ToDo -> my current list of ToDos
Is this Ok?
As a motivation for using tbot ;-) I just created a video from tbot running the smartweb testcase: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_smartweb.py
https://www.youtube.com/watch?v=ZwUA0QNDnP4
Keep in mind, that I run tbot for the video here on my laptop at my home in hungary, the smartweb board is in munich/germany.
I use this testcase also in my weekly buildbot setup on my raspberry pi [2]. Tbot logs (not only U-Boot but also Linux tests) can be found there for interested people. (Remark: Wolfgang said the logs are unreadable, because they are filled with a lot of unnecessary developer output. Yes, he is correct!! I have on my ToDo list to add a new logging level, where only board input/output is printed ... the loglevel tbot uses is defined in the board config file https://github.com/hsdenx/tbot/blob/master/tbot_smartweb.cfg#L13 )
What is done in the smartweb testcase:
- rm old u-boot code, checkout current U-Boot master @ 00:14
- set a toolchain @ 00:18
- get all patchwork patches from my patchwork ToDo list, calling testcase: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_workfd_get_patchwork_nu... @ 00:18
- add some patchwork patches I have in a python list (this list is set up in the board config file: https://github.com/hsdenx/tbot/blob/master/tbot_smartweb.cfg#L34 ) (Heiko speculating: it would be nice if tbot removed a patchwork patch number from this list if it detects that this patch is already in mainline ...)
- apply local patches (if there are any)
- apply the patchwork patches https://github.com/hsdenx/tbot/blob/master/src/tc/tc_workfd_apply_patchwork_... currently not failing when checkpatch detects errors/warnings @ 00:59
- compile U-Boot @ 02:35
- update SPL on the board @ 03:26
- update U-Boot on the board @ 03:55
- start the DFU testcase on the board @ 04:13 https://github.com/hsdenx/tbot/blob/master/src/tc/tc_ub_dfu.py
This testcase starts the "dfu" U-Boot command, which waits until Ctrl-C is pressed Then I start on the lab PC the Userspace tool "dfu-util" which communicates over USB with the smartweb board ... and do some dfu up and downloads.
- at the end save the SPL and U-Boot bins, so I always have the latest working bins [1]
- power off the board
bye, Heiko
[1] saving the latest working bins is interesting, because if a current U-Boot does not work on the board, I:
- can restore the board with a debugger through a testcase, using the latest working bins.
- and/or I can start a testcase which starts a "git bisect" testcase: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_git_bisect.py
This testcase needs 3 variables:
tb.board_git_bisect_get_source_tc: name of the testcase which gets/switches into the source tree where you want to start a git bisect session
tb.board_git_bisect_call_tc: name of the testcase which gets started when "git bisect" waits for good or bad ... This testcase must find out if the current source is good or bad.
tb.board_git_bisect_good_commit: last working bins (therefore I save the bins -> so I have the commit ;-)
This testcase is independent from U-Boot ... you can also use it in a linux tree or other source code ...
I used this testcase for example here: http://xeidos.ddns.net/buildbot/builders/tqm5200s/builds/3/steps/shell/logs/... at the end it calls "git bisect log" (search for this string), and it found the first bad commit, and I did nothing else than start this testcase :-D
[2]http://xeidos.ddns.net/buildbot/tgrid http://xeidos.ddns.net/buildbot/builders/smartweb_dfu (If you do not see a webpage, reload it ... my DSL upload speed is not the fastest, also if my kids play ps4 games, it is busy)
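The search that the git-bisect testcase automates can be sketched as a plain binary search. This is a toy model only: commits are ordered oldest-first with some good prefix followed by bad commits, and is_bad() stands in for whatever tb.board_git_bisect_call_tc does on the real board (build, flash, run):

```python
def first_bad_commit(commits, is_bad):
    """Return the first bad commit, assuming commits is a good prefix
    followed by a non-empty bad suffix (so commits[-1] is bad)."""
    lo, hi = 0, len(commits) - 1  # invariant: first bad is in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid          # first bad is at mid or earlier
        else:
            lo = mid + 1      # commits[mid] is good; look later
    return commits[lo]
```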

Hi Heiko,
On 15 January 2016 at 23:29, Heiko Schocher hs@denx.de wrote:
Hello Simon,
Am 15.01.2016 um 00:12 schrieb Simon Glass:
Hi Heiko,
On 16 December 2015 at 22:45, Heiko Schocher hs@denx.de wrote:
Hello Stephen,
Am 16.12.2015 um 17:27 schrieb Stephen Warren:
On 12/16/2015 08:11 AM, Michal Simek wrote:
On 9.12.2015 17:32, Stephen Warren wrote:
On 12/02/2015 03:18 PM, Stephen Warren wrote:
[...]
In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
Maybe you give tbot (I mentioned it in this thread) a chance?
There, these things are automated; also you can do Linux (and other console based) tests ... I currently added a testcase [1] which adds patches from patchwork, which are in a user's ToDo list, to a git tree! In this case it is a u-boot git tree ... it checks them with checkpatch, compiles it, and tries the new image on the board and calls testcases ... fully automated in a now weekly build [2] ... (only weekly, but that's a setup parameter in buildbot, as I have tbot and buildbot running on a raspberry pi) and don't forget, I have the board not where I run tbot; the boards are ~1000km from me ... So, it is possible to add a U-Boot test setup on a server, and test boards all over the world ...
This sounds like a great development.
Thanks!
How can we get this so that it can be used by U-Boot people? Do you think you could add a README to the mainline, or some scripts to aid setting it up? I would be interested in setting up a few boards that run continuous testing, and I suspect others would also if it were easier.
Yes, good idea!
I am thinking about preparing a patch for U-Boot ASAP, creating: u-boot:/tools/tbot/README -> general info about tbot; u-boot:/tools/tbot/README.install -> how to install / use it; u-boot:/tools/tbot/README.create_a_testcase; u-boot:/tools/tbot/README-ToDo -> my current ToDo list
Is this Ok?
Sounds great.
As a motivation for using tbot ;-) I just created a video from tbot running the smartweb testcase: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_smartweb.py
OK that helps explain it. It looks like it uses DFU to write new images to the boards - is that right?
Keep in mind, that I run tbot for the video here on my laptop at my home in hungary, the smartweb board is in munich/germany.
I use this testcase also in my weekly buildbot setup on my Raspberry Pi [2]. Tbot logs (not only U-Boot but also Linux tests) can be found there for interested people. (Remark: Wolfgang said the logs are unreadable, because they are filled with a lot of unnecessary developer output. Yes, he is correct!! It is on my ToDo list to add a new logging level where only board input/output is printed ... the loglevel tbot uses is defined in the board config file https://github.com/hsdenx/tbot/blob/master/tbot_smartweb.cfg#L13 )
Yes that's my main comment.
What is done in the smartweb testcase:
- rm old u-boot code, checkout current U-Boot master @ 00:14
- set a toolchain @ 00:18
- get all patchwork patches from my patchwork ToDo list calling testcase:
https://github.com/hsdenx/tbot/blob/master/src/tc/tc_workfd_get_patchwork_nu... @ 00:18
- adding some patchwork patches I have in a Python list (this list is set up in the board config file: https://github.com/hsdenx/tbot/blob/master/tbot_smartweb.cfg#L34 ). (Heiko speculating: it would be nice if tbot removed a patchwork patch number from this list when it detects that the patch is already in mainline ...)
- apply local patches (If there are)
- apply the patchwork patches
https://github.com/hsdenx/tbot/blob/master/src/tc/tc_workfd_apply_patchwork_... currently not failing, when checkpatch detects errors/warnings @ 00:59
compile U-Boot @ 02:35
update SPL on the board @ 03:26
update U-Boot on the board @ 03:55
start DFU testcase on the board @ 04:13 https://github.com/hsdenx/tbot/blob/master/src/tc/tc_ub_dfu.py
This testcase starts the "dfu" U-Boot command, which waits until Ctrl-C is pressed. Then I start the userspace tool "dfu-util" on the lab PC, which
communicates over USB with the smartweb board ... and does some DFU uploads and downloads.
at the end, save the SPL and U-Boot bins, so I always have the latest working bins [1]
power off the board
bye, Heiko
Well I'll await your patches to the list and then see if I can get it working. What sort of hardware do I need for the power / reset control?
[1] saving the latest working bins is interesting, because if a current U-Boot does not work on the board, I:
can restore the board with a debugger through a testcase, using the latest working bins.
and/or I can start a testcase which starts "git bisect" testcase: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_git_bisect.py
This testcase needs 3 variables: tb.board_git_bisect_get_source_tc: name of the testcase which gets/switches into the source tree in which you want to start a git bisect session; tb.board_git_bisect_call_tc: name of the testcase which gets started when "git bisect" waits for good or bad ... this testcase must find out if the current source is good or bad; tb.board_git_bisect_good_commit: the last working commit (this is why I save the bins -> so I have the commit ;-)
This testcase is independent of U-Boot ... you can also use it in a Linux tree or other source code ...
I used this testcase for example here:
http://xeidos.ddns.net/buildbot/builders/tqm5200s/builds/3/steps/shell/logs/... At the end it calls "git bisect log" (search for this string), and it found the first bad commit, and I did nothing other than start this testcase :-D
[2]http://xeidos.ddns.net/buildbot/tgrid http://xeidos.ddns.net/buildbot/builders/smartweb_dfu (If you do not see a webpage, reload it ... my DSL upload speed is not the fastest, and if my kids are playing PS4 games, it is busy) -- DENX Software Engineering GmbH, Managing Director: Wolfgang Denk HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Regards, Simon

Hello Simon,
Am 19.01.2016 um 04:42 schrieb Simon Glass:
Hi Heiko,
On 15 January 2016 at 23:29, Heiko Schocher hs@denx.de wrote:
Hello Simon,
Am 15.01.2016 um 00:12 schrieb Simon Glass:
Hi Heiko,
On 16 December 2015 at 22:45, Heiko Schocher hs@denx.de wrote:
Hello Stephen,
Am 16.12.2015 um 17:27 schrieb Stephen Warren:
On 12/16/2015 08:11 AM, Michal Simek wrote:
On 9.12.2015 17:32, Stephen Warren wrote:
On 12/02/2015 03:18 PM, Stephen Warren wrote:
[...]
In my current setup I leave the board on all the time (or rather, manually turn on the power when I'm about to run the tests). Automating control of the power source is a step I'll take later.
Maybe you give tbot (I mentioned it in this thread) a chance?
There, these things are automated, and you can also run Linux (and other console-based) tests ... I recently added a testcase [1] which applies patches from patchwork, taken from a user's ToDo list, to a git tree! In this case it is a U-Boot git tree ... it checks them with checkpatch, compiles them, tries the new image on the board and calls testcases ... fully automated in a now weekly build [2] ... (only weekly, but that's a setup parameter in buildbot, as I have tbot and buildbot running on a Raspberry Pi). And don't forget, the boards are not where I run tbot; they are ~1000km from me ... So it is possible to set up U-Boot testing on a server, and test boards all over the world ...
This sounds like a great development.
Thanks!
How can we get this so that it can be used by U-Boot people? Do you think you could add a README to the mainline, or some scripts to aid setting it up? I would be interested in setting up a few boards that run continuous testing, and I suspect others would also if it were easier.
Yes, good idea!
I am thinking about preparing a patch for U-Boot ASAP, creating: u-boot:/tools/tbot/README -> general info about tbot; u-boot:/tools/tbot/README.install -> how to install / use it; u-boot:/tools/tbot/README.create_a_testcase; u-boot:/tools/tbot/README-ToDo -> my current ToDo list
Is this Ok?
Sounds great.
Ok, sorry, I have not had time yet to make it ...
As a motivation for using tbot ;-) I just created a video from tbot running the smartweb testcase: https://github.com/hsdenx/tbot/blob/master/src/tc/tc_board_smartweb.py
OK that helps explain it. It looks like it uses DFU to write new images to the boards - is that right?
No, the board gets the new images through tftp ... I only test U-Boot DFU functionality on this board.
Keep in mind, that I run tbot for the video here on my laptop at my home in hungary, the smartweb board is in munich/germany.
I use this testcase also in my weekly buildbot setup on my Raspberry Pi [2]. Tbot logs (not only U-Boot but also Linux tests) can be found there for interested people. (Remark: Wolfgang said the logs are unreadable, because they are filled with a lot of unnecessary developer output. Yes, he is correct!! It is on my ToDo list to add a new logging level where only board input/output is printed ... the loglevel tbot uses is defined in the board config file https://github.com/hsdenx/tbot/blob/master/tbot_smartweb.cfg#L13 )
Yes that's my main comment.
Yes, on my todo list ... BTW: is the console output in the video helpful?
What is done in the smartweb testcase:
- rm old u-boot code, checkout current U-Boot master @ 00:14
- set a toolchain @ 00:18
- get all patchwork patches from my patchwork ToDo list calling testcase:
https://github.com/hsdenx/tbot/blob/master/src/tc/tc_workfd_get_patchwork_nu... @ 00:18
- adding some patchwork patches I have in a Python list (this list is set up in the board config file: https://github.com/hsdenx/tbot/blob/master/tbot_smartweb.cfg#L34 ). (Heiko speculating: it would be nice if tbot removed a patchwork patch number from this list when it detects that the patch is already in mainline ...)
- apply local patches (If there are)
- apply the patchwork patches
https://github.com/hsdenx/tbot/blob/master/src/tc/tc_workfd_apply_patchwork_... currently not failing, when checkpatch detects errors/warnings @ 00:59
compile U-Boot @ 02:35
update SPL on the board @ 03:26
update U-Boot on the board @ 03:55
start DFU testcase on the board @ 04:13 https://github.com/hsdenx/tbot/blob/master/src/tc/tc_ub_dfu.py
This testcase starts the "dfu" U-Boot command, which waits until Ctrl-C is pressed. Then I start the userspace tool "dfu-util" on the lab PC, which
communicates over USB with the smartweb board ... and does some DFU uploads and downloads.
at the end, save the SPL and U-Boot bins, so I always have the latest working bins [1]
power off the board
bye, Heiko
Well I'll await your patches to the list and then see if I can get it working. What sort of hardware do I need for the power / reset control?
Ok.
You can use what you want for powering on/off the board!
If you can power on/off the board through a linux shell command, you are ready for tbot ...
Also you need a shell command for getting the current power state (on or off)
And you need to tell tbot, how you get access to the boards console ...
I separated these tasks into testcases, so you need to write a tbot testcase for each of them. If you want to access the serial console with kermit, this should work already ...
But let me try to find time for writing a patch ...
bye, Heiko

Hi Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
See README.md for more details!
Signed-off-by: Stephen Warren swarren@wwwdotorg.org Signed-off-by: Stephen Warren swarren@nvidia.com
v2: Many fixes and tweaks have been squashed in. Separated out some of the tests into separate commits, and added some more tests.
 test/py/.gitignore                   |   1 +
 test/py/README.md                    | 300 +++++++++++++++++++++++++++++++++++
 test/py/conftest.py                  | 278 ++++++++++++++++++++++++++++++++
 test/py/multiplexed_log.css          |  76 +++++++++
 test/py/multiplexed_log.py           | 193 ++++++++++++++++++++++
 test/py/pytest.ini                   |   9 ++
 test/py/test.py                      |  24 +++
 test/py/test_000_version.py          |  13 ++
 test/py/test_help.py                 |   6 +
 test/py/test_unknown_cmd.py          |   8 +
 test/py/uboot_console_base.py        | 185 +++++++++++++++++++++
 test/py/uboot_console_exec_attach.py |  36 +++++
 test/py/uboot_console_sandbox.py     |  31 ++++
 test/py/ubspawn.py                   |  97 +++++++++++
 14 files changed, 1257 insertions(+)
 create mode 100644 test/py/.gitignore
 create mode 100644 test/py/README.md
 create mode 100644 test/py/conftest.py
 create mode 100644 test/py/multiplexed_log.css
 create mode 100644 test/py/multiplexed_log.py
 create mode 100644 test/py/pytest.ini
 create mode 100755 test/py/test.py
 create mode 100644 test/py/test_000_version.py
 create mode 100644 test/py/test_help.py
 create mode 100644 test/py/test_unknown_cmd.py
 create mode 100644 test/py/uboot_console_base.py
 create mode 100644 test/py/uboot_console_exec_attach.py
 create mode 100644 test/py/uboot_console_sandbox.py
 create mode 100644 test/py/ubspawn.py
This is a huge step forward for testing in U-Boot. Congratulations on putting this together!
Tested on chromebook_link, sandbox Tested-by: Simon Glass sjg@chromium.org
I've made various comments in the series as I think it needs a little tuning. I'm also interested in how we can arrange for the existing unit tests to be run (and results supported) by this framework.
One concern I have is about the ease of running and writing tests. It is pretty easy at present to run a particular driver model test:
./u-boot -d test.dtb -c "ut dm uclass"
and we can run this in gdb and figure out where things are going wrong (I do this quite a bit). Somehow we need to preserve this ease of use. The tests should be accessible. I'm not sure how you intend to make that work.
diff --git a/test/py/.gitignore b/test/py/.gitignore
new file mode 100644
index 000000000000..0d20b6487c61
--- /dev/null
+++ b/test/py/.gitignore
@@ -0,0 +1 @@
+*.pyc
diff --git a/test/py/README.md b/test/py/README.md
new file mode 100644
index 000000000000..23a403eb8d88
--- /dev/null
+++ b/test/py/README.md
@@ -0,0 +1,300 @@
+# U-Boot pytest suite
+## Introduction
+This tool aims to test U-Boot by executing U-Boot shell commands using the +console interface. A single top-level script exists to execute or attach to the +U-Boot console, run the entire script of tests against it, and summarize the +results. Advantages of this approach are:
+- Testing is performed in the same way a user or script would interact with
+  U-Boot; there can be no disconnect.
+- There is no need to write or embed test-related code into U-Boot itself.
+  It is asserted that writing test-related code in Python is simpler and more
+  flexible than writing it all in C.
+- It is reasonably simple to interact with U-Boot in this way.
+## Requirements
+The test suite is implemented using pytest. Interaction with the U-Boot console +involves executing some binary and interacting with its stdin/stdout. You will +need to implement various "hook" scripts that are called by the test suite at +the appropriate time.
+On Debian or Debian-like distributions, the following packages are required. +Similar package names should exist in other distributions.
+| Package        | Version tested (Ubuntu 14.04) |
+| -------------- | ----------------------------- |
+| python         | 2.7.5-5ubuntu3                |
+| python-pytest  | 2.5.1-1                       |
+The test script supports either:
+- Executing a sandbox port of U-Boot on the local machine as a sub-process,
+  and interacting with it over stdin/stdout.
+- Executing an external "hook" script to flash a U-Boot binary onto a
+  physical board, attach to the board's console stream, and reset the board.
+  Further details are described later.
+### Using `virtualenv` to provide requirements
+Older distributions (e.g. Ubuntu 10.04) may not provide all the required +packages, or may provide versions that are too old to run the test suite. One +can use the Python `virtualenv` script to locally install more up-to-date +versions of the required packages without interfering with the OS installation. +For example:
+```bash
+$ cd /path/to/u-boot
+$ sudo apt-get install python python-virtualenv
+$ virtualenv venv
+$ . ./venv/bin/activate
+$ pip install pytest
+```
+## Testing sandbox
+To run the testsuite on the sandbox port (U-Boot built as a native user-space +application), simply execute:
+```
+./test/py/test.py --bd sandbox --build
+```
+The `--bd` option tells the test suite which board type is being tested. This +lets the test suite know which features the board has, and hence exactly what +can be tested.
Can we use -b to fit in with buildman and patman?
+The `--build` option tells the test script to compile U-Boot. Alternatively,
+you may omit this option and build U-Boot yourself, in whatever way you
+choose, before running the test script.
+The test script will attach to U-Boot, execute all valid tests for the board, +then print a summary of the test process. A complete log of the test session +will be written to `${build_dir}/test-log.html`. This is best viewed in a web +browser, but may be read directly as plain text, perhaps with the aid of the +`html2text` utility.
+## Command-line options
+- `--board-type`, `--bd`, `-B` set the type of the board to be tested. For
+  example, `sandbox` or `seaboard`.
-b?
+- `--board-identity`, `--id` set the identity of the board to be tested.
+  This allows differentiation between multiple instances of the same type of
+  physical board that are attached to the same host machine. This parameter is
+  not interpreted by the test script in any way, but rather is simply passed
+  to the hook scripts described below, and may be used in any site-specific
+  way deemed necessary.
+- `--build` indicates that the test script should compile U-Boot itself
+  before running the tests. If using this option, make sure that any
+  environment variables required by the build process are already set, such as
+  `$CROSS_COMPILE`.
+- `--build-dir` sets the directory containing the compiled U-Boot binaries.
+  If omitted, this is `${source_dir}/build-${board_type}`.
-d?
+- `--result-dir` sets the directory to write results, such as log files,
+  into. If omitted, the build directory is used.
-r?
+- `--persistent-data-dir` sets the directory used to store persistent test
+  data. This is test data that may be re-used across test runs, such as file-
+  system images.
-d?
+`pytest` also implements a number of its own command-line options. Please see +`pytest` documentation for complete details. Execute `py.test --version` for +a brief summary. Note that U-Boot's test.py script passes all command-line +arguments directly to `pytest` for processing.
+## Testing real hardware
+The tools and techniques used to interact with real hardware will vary +radically between different host and target systems, and the whims of the user. +For this reason, the test suite does not attempt to directly interact with real +hardware in any way. Rather, it executes a standardized set of "hook" scripts +via `$PATH`. These scripts implement certain actions on behalf of the test +suite. This keeps the test suite simple and isolated from system variances +unrelated to U-Boot features.
+### Hook scripts
+#### Environment variables
+The following environment variables are set when running hook scripts:
+- `UBOOT_BOARD_TYPE` the board type being tested.
Shouldn't these be U_BOOT_BOARD_TYPE, etc.?
+- `UBOOT_BOARD_IDENTITY` the board identity being tested, or `na` if none was
- specified.
+- `UBOOT_SOURCE_DIR` the U-Boot source directory.
+- `UBOOT_TEST_PY_DIR` the full path to `test/py/` in the source directory.
+- `UBOOT_BUILD_DIR` the U-Boot build directory.
+- `UBOOT_RESULT_DIR` the test result directory.
+- `UBOOT_PERSISTENT_DATA_DIR` the test persistent data directory.
+#### `uboot-test-console`
+This script provides access to the U-Boot console. The script's stdin/stdout +should be connected to the board's console. This process should continue to run +indefinitely, until killed. The test suite will run this script in parallel +with all other hooks.
+This script may be implemented e.g. by exec()ing `cu`, `conmux`, etc.
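As an illustration, a minimal `uboot-test-console` hook might look like the sketch below. This is purely site-specific glue, not part of the patch; the device path naming and baud rate are assumptions about one particular lab setup.

```shell
#!/bin/sh
# Hypothetical uboot-test-console hook sketch. The device path below is a
# site-specific assumption; adapt it to your own serial adapter naming.
DEV="/dev/serial/by-id/board-${UBOOT_BOARD_TYPE}-${UBOOT_BOARD_IDENTITY}"
# Attach this process's stdin/stdout to the board console; cu keeps
# running until the test suite kills it.
exec cu -l "$DEV" -s 115200
```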
+If you are able to run U-Boot under a hardware simulator such as qemu, then +you would likely spawn that simulator from this script. However, note that +`uboot-test-reset` may be called multiple times per test script run, and must
How about u-boot-test-reset, etc.?
+cause U-Boot to start execution from scratch each time. Hopefully your +simulator includes a virtual reset button! If not, you can launch the +simulator from `uboot-test-reset` instead, while arranging for this console +process to always communicate with the current simulator instance.
+#### `uboot-test-flash`
+Prior to running the test suite against a board, some arrangement must be made +so that the board executes the particular U-Boot binary to be tested. Often, +this involves writing the U-Boot binary to the board's flash ROM. The test +suite calls this hook script for that purpose.
+This script should perform the entire flashing process synchronously; the +script should only exit once flashing is complete, and a board reset will +cause the newly flashed U-Boot binary to be executed.
+It is conceivable that this script will do nothing. This might be useful in +the following cases:
+- Some other process has already written the desired U-Boot binary into the
+  board's flash prior to running the test suite.
+- The board allows U-Boot to be downloaded directly into RAM, and executed
+  from there. Use of this feature will reduce wear on the board's flash, so
+  may be preferable if available, and if cold boot testing of U-Boot is not
+  required. If this feature is used, the `uboot-test-reset` script should
+  perform this download, since the board could conceivably be reset multiple
+  times in a single test run.
+It is up to the user to determine if those situations exist, and to code this +hook script appropriately.
+This script will typically be implemented by calling out to some SoC- or +board-specific vendor flashing utility.
+#### `uboot-test-reset`
+Whenever the test suite needs to reset the target board, this script is
+executed. This is guaranteed to happen at least once, prior to executing the
+first test function. If any test fails, the test infrastructure will execute
+this script again to restore U-Boot to an operational state before running the
+next test function.
+This script will likely be implemented by communicating with some form of +relay or electronic switch attached to the board's reset signal.
+The semantics of this script require that when it is executed, U-Boot will +start running from scratch. If the U-Boot binary to be tested has been written +to flash, pulsing the board's reset signal is likely all this script need do. +However, in some scenarios, this script may perform other actions. For +example, it may call out to some SoC- or board-specific vendor utility in order +to download the U-Boot binary directly into RAM and execute it. This would +avoid the need for `uboot-test-flash` to actually write U-Boot to flash, thus +saving wear on the flash chip(s).

+### Board-type-specific configuration
+Each board has a different configuration and behaviour. Many of these +differences can be automatically detected by parsing the `.config` file in the +build directory. However, some differences can't yet be handled automatically.
+For each board, an optional Python module `uboot_board_${board_type}` may exist +to provide board-specific information to the test script. Any global value +defined in these modules is available for use by any test function. The data +contained in these scripts must be purely derived from U-Boot source code. +Hence, these configuration files are part of the U-Boot source tree too.
+### Execution environment configuration
+Each user's hardware setup may enable testing different subsets of the features +implemented by a particular board's configuration of U-Boot. For example, a +U-Boot configuration may support USB device mode and USB Mass Storage, but this +can only be tested if a USB cable is connected between the board and the host +machine running the test script.
+For each board, optional Python modules `uboot_boardenv_${board_type}` and +`uboot_boardenv_${board_type}_${board_identity}` may exist to provide +board-specific and board-identity-specific information to the test script. Any +global value defined in these modules is available for use by any test +function. The data contained in these is specific to a particular user's +hardware configuration. Hence, these configuration files are not part of the +U-Boot source tree, and should be installed outside of the source tree. Users +should set `$PYTHONPATH` prior to running the test script to allow these +modules to be loaded.
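For illustration, such a module is plain Python containing only global assignments. Every name and value below is a hypothetical example of site-specific data, not something the test suite itself defines:

```python
# Hypothetical uboot_boardenv_seaboard_na.py: site-specific test
# environment data. All variable names here are illustrative only.
usb_cable_connected = False   # no USB cable between host and board
dhcp_server_present = True    # the test network runs a DHCP server
ram_base = 0x80000000         # RAM address usable for memory tests
```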
+### Board module parameter usage
+The test scripts rely on the following variables being defined by the board +module:
+- None at present.
+### U-Boot `.config` feature usage
+The test scripts rely on various U-Boot `.config` features, either directly in +order to test those features, or indirectly in order to query information from +the running U-Boot instance in order to test other features.
+One example is that testing of the `md` command requires knowledge of a RAM +address to use for the test. This data is parsed from the output of the +`bdinfo` command, and hence relies on CONFIG_CMD_BDI being enabled.
+For a complete list of dependencies, please search the test scripts for +instances of:
+- `buildconfig.get(...`
+- `@pytest.mark.buildconfigspec(...`
+### Complete invocation example
+Assuming that you have installed the hook scripts into $HOME/ubtest/bin, and +any required environment configuration Python modules into $HOME/ubtest/py, +then you would likely invoke the test script as follows:
+If U-Boot has already been built:
+```bash
+PATH=$HOME/ubtest/bin:$PATH \
+    PYTHONPATH=${HOME}/ubtest/py:${PYTHONPATH} \
+    ./test/py/test.py --bd seaboard
+```
+If you want the test script to compile U-Boot for you too, then you likely
+need to set `$CROSS_COMPILE` to allow this, and invoke the test script as
+follows:
+```bash
+CROSS_COMPILE=arm-none-eabi- \
+    PATH=$HOME/ubtest/bin:$PATH \
+    PYTHONPATH=${HOME}/ubtest/py:${PYTHONPATH} \
+    ./test/py/test.py --bd seaboard --build
+```
+## Writing tests
+Please refer to the pytest documentation for details of writing pytest tests. +Details specific to the U-Boot test suite are described below.
+A test fixture named `uboot_console` should be used by each test function. This +provides the means to interact with the U-Boot console, and retrieve board and +environment configuration information.
+The function `uboot_console.run_command()` executes a shell command on the +U-Boot console, and returns all output from that command. This allows +validation or interpretation of the command output. This function validates +that certain strings are not seen on the U-Boot console. These include shell +error messages and the U-Boot sign-on message (in order to detect unexpected +board resets). See the source of `uboot_console_base.py` for a complete list of +"bad" strings. Some test scenarios are expected to trigger these strings. Use +`uboot_console.disable_check()` to temporarily disable checking for specific +strings. See `test_unknown_cmd.py` for an example.
+Board- and board-environment configuration values may be accessed as sub-fields +of the `uboot_console.config` object, for example +`uboot_console.config.ram_base`.
+Build configuration values (from `.config`) may be accessed via the dictionary
+`uboot_console.config.buildconfig`, with keys equal to the Kconfig variable
+names.
diff --git a/test/py/conftest.py b/test/py/conftest.py
new file mode 100644
index 000000000000..b6efe03a60f8
--- /dev/null
+++ b/test/py/conftest.py
@@ -0,0 +1,278 @@
+# Copyright (c) 2015 Stephen Warren
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+import atexit
+import errno
+import os
+import os.path
+import pexpect
+import pytest
+from _pytest.runner import runtestprotocol
+import ConfigParser
+import StringIO
+import sys
+log = None
+console = None
+def mkdir_p(path):
+    try:
+        os.makedirs(path)
+    except OSError as exc:
+        if exc.errno == errno.EEXIST and os.path.isdir(path):
+            pass
+        else:
+            raise
+def pytest_addoption(parser):
+    parser.addoption("--build-dir", default=None,
+        help="U-Boot build directory (O=)")
You seem to use double quote consistently throughout rather than a single quote. That is different from the existing Python in the U-Boot tree. It might be worth swapping it out for consistency.
- parser.addoption("--result-dir", default=None,
help="U-Boot test result/tmp directory")
- parser.addoption("--persistent-data-dir", default=None,
help="U-Boot test persistent generated data directory")
- parser.addoption("--board-type", "--bd", "-B", default="sandbox",
help="U-Boot board type")
- parser.addoption("--board-identity", "--id", default="na",
help="U-Boot board identity/instance")
- parser.addoption("--build", default=False, action="store_true",
help="Compile U-Boot before running tests")
+def pytest_configure(config):
This series should have function comments throughout on non-trivial functions - e.g. purpose of the function and a description of the parameters and return value.
+    global log
+    global console
+    global ubconfig
+    test_py_dir = os.path.dirname(os.path.abspath(__file__))
+    source_dir = os.path.dirname(os.path.dirname(test_py_dir))
+    board_type = config.getoption("board_type")
+    board_type_fn = board_type.replace("-", "_")
+    board_identity = config.getoption("board_identity")
+    board_identity_fn = board_identity.replace("-", "_")
+    build_dir = config.getoption("build_dir")
+    if not build_dir:
+        build_dir = source_dir + "/build-" + board_type
+    mkdir_p(build_dir)
+    result_dir = config.getoption("result_dir")
+    if not result_dir:
+        result_dir = build_dir
+    mkdir_p(result_dir)
+    persistent_data_dir = config.getoption("persistent_data_dir")
+    if not persistent_data_dir:
+        persistent_data_dir = build_dir + "/persistent-data"
+    mkdir_p(persistent_data_dir)
+    import multiplexed_log
+    log = multiplexed_log.Logfile(result_dir + "/test-log.html")
+    if config.getoption("build"):
+        if build_dir != source_dir:
+            o_opt = "O=%s" % build_dir
+        else:
+            o_opt = ""
+        cmds = (
+            ["make", o_opt, "-s", board_type + "_defconfig"],
+            ["make", o_opt, "-s", "-j8"],
+        )
+        runner = log.get_runner("make", sys.stdout)
+        for cmd in cmds:
+            runner.run(cmd, cwd=source_dir)
+        runner.close()
+    class ArbitraryAttrContainer(object):
+        pass
+    ubconfig = ArbitraryAttrContainer()
+    ubconfig.brd = dict()
+    ubconfig.env = dict()
+    modules = [
+        (ubconfig.brd, "uboot_board_" + board_type_fn),
+        (ubconfig.env, "uboot_boardenv_" + board_type_fn),
+        (ubconfig.env, "uboot_boardenv_" + board_type_fn + "_" +
+            board_identity_fn),
+    ]
+    for (sub_config, mod_name) in modules:
+        try:
+            mod = __import__(mod_name)
+        except ImportError:
+            continue
+        sub_config.update(mod.__dict__)
+    ubconfig.buildconfig = dict()
+    for conf_file in (".config", "include/autoconf.mk"):
+        dot_config = build_dir + "/" + conf_file
+        if not os.path.exists(dot_config):
+            raise Exception(conf_file + " does not exist; " +
+                "try passing --build option?")
+        with open(dot_config, "rt") as f:
+            ini_str = "[root]\n" + f.read()
+            ini_sio = StringIO.StringIO(ini_str)
+            parser = ConfigParser.RawConfigParser()
+            parser.readfp(ini_sio)
+            ubconfig.buildconfig.update(parser.items("root"))
+    ubconfig.test_py_dir = test_py_dir
+    ubconfig.source_dir = source_dir
+    ubconfig.build_dir = build_dir
+    ubconfig.result_dir = result_dir
+    ubconfig.persistent_data_dir = persistent_data_dir
+    ubconfig.board_type = board_type
+    ubconfig.board_identity = board_identity
+    env_vars = (
+        "board_type",
+        "board_identity",
+        "source_dir",
+        "test_py_dir",
+        "build_dir",
+        "result_dir",
+        "persistent_data_dir",
+    )
+    for v in env_vars:
+        os.environ["UBOOT_" + v.upper()] = getattr(ubconfig, v)
+    if board_type == "sandbox":
+        import uboot_console_sandbox
+        console = uboot_console_sandbox.ConsoleSandbox(log, ubconfig)
+    else:
+        import uboot_console_exec_attach
+        console = uboot_console_exec_attach.ConsoleExecAttach(log, ubconfig)
+def pytest_generate_tests(metafunc):
+    subconfigs = {
+        "brd": console.config.brd,
+        "env": console.config.env,
+    }
+    for fn in metafunc.fixturenames:
+        parts = fn.split("__")
+        if len(parts) < 2:
+            continue
+        if parts[0] not in subconfigs:
+            continue
+        subconfig = subconfigs[parts[0]]
+        vals = []
+        val = subconfig.get(fn, [])
+        if val:
+            vals = (val, )
+        else:
+            vals = subconfig.get(fn + "s", [])
+        metafunc.parametrize(fn, vals)
+@pytest.fixture(scope="session")
+def uboot_console(request):
+    return console
+tests_not_run = set()
+tests_failed = set()
+tests_skipped = set()
+tests_passed = set()
+def pytest_itemcollected(item):
- tests_not_run.add(item.name)
+def cleanup():
- if console:
console.close()
- if log:
log.status_pass("%d passed" % len(tests_passed))
if tests_skipped:
log.status_skipped("%d skipped" % len(tests_skipped))
for test in tests_skipped:
log.status_skipped("... " + test)
if tests_failed:
log.status_fail("%d failed" % len(tests_failed))
for test in tests_failed:
log.status_fail("... " + test)
if tests_not_run:
log.status_fail("%d not run" % len(tests_not_run))
for test in tests_not_run:
log.status_fail("... " + test)
log.close()
+atexit.register(cleanup)
+def setup_boardspec(item):
- mark = item.get_marker("boardspec")
- if not mark:
return
- required_boards = []
- for board in mark.args:
if board.startswith("!"):
if ubconfig.board_type == board[1:]:
pytest.skip("board not supported")
return
else:
required_boards.append(board)
- if required_boards and ubconfig.board_type not in required_boards:
pytest.skip("board not supported")
+def setup_buildconfigspec(item):
- mark = item.get_marker("buildconfigspec")
- if not mark:
return
- for option in mark.args:
if not ubconfig.buildconfig.get("config_" + option.lower(), None):
pytest.skip(".config feature not enabled")
+def pytest_runtest_setup(item):
- log.start_section(item.name)
- setup_boardspec(item)
- setup_buildconfigspec(item)
+def pytest_runtest_protocol(item, nextitem):
- reports = runtestprotocol(item, nextitem=nextitem)
- failed = None
- skipped = None
- for report in reports:
if report.outcome == "failed":
failed = report
break
if report.outcome == "skipped":
if not skipped:
skipped = report
- if failed:
tests_failed.add(item.name)
- elif skipped:
tests_skipped.add(item.name)
- else:
tests_passed.add(item.name)
- tests_not_run.remove(item.name)
- try:
if failed:
msg = "FAILED:\n" + str(failed.longrepr)
log.status_fail(msg)
elif skipped:
msg = "SKIPPED:\n" + str(skipped.longrepr)
log.status_skipped(msg)
else:
log.status_pass("OK")
- except:
# If something went wrong with logging, it's better to let the test
# process continue, which may report other exceptions that triggered
# the logging issue (e.g. console.log wasn't created). Hence, just
# squash the exception. If the test setup failed due to e.g. syntax
# error somewhere else, this won't be seen. However, once that issue
# is fixed, if this exception still exists, it will then be logged as
# part of the test's stdout.
import traceback
print "Exception occurred while logging runtest status:"
traceback.print_exc()
# FIXME: Can we force a test failure here?
- log.end_section(item.name)
- if failed:
console.cleanup_spawn()
- return reports
diff --git a/test/py/multiplexed_log.css b/test/py/multiplexed_log.css
new file mode 100644
index 000000000000..96d87ebe034b
--- /dev/null
+++ b/test/py/multiplexed_log.css
@@ -0,0 +1,76 @@
+/*
- Copyright (c) 2015 Stephen Warren
- SPDX-License-Identifier: GPL-2.0
- */
+body {
- background-color: black;
- color: #ffffff;
+}
+.implicit {
- color: #808080;
+}
+.section {
- border-style: solid;
- border-color: #303030;
- border-width: 0px 0px 0px 5px;
- padding-left: 5px
+}
+.section-header {
- background-color: #303030;
- margin-left: -5px;
- margin-top: 5px;
+}
+.section-trailer {
- display: none;
+}
+.stream {
- border-style: solid;
- border-color: #303030;
- border-width: 0px 0px 0px 5px;
- padding-left: 5px
+}
+.stream-header {
- background-color: #303030;
- margin-left: -5px;
- margin-top: 5px;
+}
+.stream-trailer {
- display: none;
+}
+.error {
- color: #ff0000
+}
+.warning {
- color: #ffff00
+}
+.info {
- color: #808080
+}
+.action {
- color: #8080ff
+}
+.status-pass {
- color: #00ff00
+}
+.status-skipped {
- color: #ffff00
+}
+.status-fail {
- color: #ff0000
+}

diff --git a/test/py/multiplexed_log.py b/test/py/multiplexed_log.py
new file mode 100644
index 000000000000..58b9a9c50ecf
--- /dev/null
+++ b/test/py/multiplexed_log.py
@@ -0,0 +1,193 @@
+# Copyright (c) 2015 Stephen Warren
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+import cgi
+import os.path
+import shutil
+import subprocess
+mod_dir = os.path.dirname(os.path.abspath(__file__))
+class LogfileStream(object):
- def __init__(self, logfile, name, chained_file):
self.logfile = logfile
self.name = name
self.chained_file = chained_file
- def close(self):
pass
- def write(self, data, implicit=False):
self.logfile.write(self, data, implicit)
if self.chained_file:
self.chained_file.write(data)
- def flush(self):
self.logfile.flush()
if self.chained_file:
self.chained_file.flush()
+class RunAndLog(object):
- def __init__(self, logfile, name, chained_file):
self.logfile = logfile
self.name = name
self.chained_file = chained_file
- def close(self):
pass
- def run(self, cmd, cwd=None):
msg = "+" + " ".join(cmd) + "\n"
if self.chained_file:
self.chained_file.write(msg)
self.logfile.write(self, msg)
try:
p = subprocess.Popen(cmd, cwd=cwd,
stdin=None, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
(output, stderr) = p.communicate()
status = p.returncode
except subprocess.CalledProcessError as cpe:
output = cpe.output
status = cpe.returncode
self.logfile.write(self, output)
if status:
if self.chained_file:
self.chained_file.write(output)
raise Exception("command failed; exit code " + str(status))
+class SectionCtxMgr(object):
- def __init__(self, log, marker):
self.log = log
self.marker = marker
- def __enter__(self):
self.log.start_section(self.marker)
- def __exit__(self, extype, value, traceback):
self.log.end_section(self.marker)
+class Logfile(object):
- def __init__(self, fn):
self.f = open(fn, "wt")
self.last_stream = None
self.linebreak = True
self.blocks = []
self.cur_evt = 1
shutil.copy(mod_dir + "/multiplexed_log.css", os.path.dirname(fn))
self.f.write("""\
+<html>
+<head>
+<link rel="stylesheet" type="text/css" href="multiplexed_log.css">
+</head>
+<body>
+<tt>
+""")
- def close(self):
self.f.write("""\
+</tt>
+</body>
+</html>
+""")
self.f.close()
- def _escape(self, data):
data = data.replace(chr(13), "")
data = "".join((c in self._nonprint) and ("%%%02x" % ord(c)) or
c for c in data)
data = cgi.escape(data)
data = data.replace(" ", " ")
self.linebreak = data[-1:] == "\n"
data = data.replace(chr(10), "<br/>\n")
return data
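The escaping pipeline in _escape() can be reproduced standalone. A Python 3 sketch (using html.escape in place of the Python 2 cgi.escape used above):

```python
import html

# Characters rendered as %xx hex codes: '%' itself (the escape char),
# control characters other than newline, and all codes >= 127.
NONPRINT = ("%" + "".join(chr(c) for c in range(0, 32) if c != 10) +
            "".join(chr(c) for c in range(127, 256)))

def escape_for_log(data):
    data = data.replace("\r", "")
    data = "".join("%%%02x" % ord(c) if c in NONPRINT else c
                   for c in data)
    data = html.escape(data)          # cgi.escape in the Python 2 original
    data = data.replace(" ", "&nbsp;")
    return data.replace("\n", "<br/>\n")
```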
- def _terminate_stream(self):
self.cur_evt += 1
if not self.last_stream:
return
if not self.linebreak:
self.f.write("<br/>\n")
self.f.write("<div class=\"stream-trailer\" id=\"" +
self.last_stream.name + "\">End stream: " +
self.last_stream.name + "</div>\n")
self.f.write("</div>\n")
self.last_stream = None
- def _note(self, note_type, msg):
self._terminate_stream()
self.f.write("<div class=\"" + note_type + "\">\n")
self.f.write(self._escape(msg))
self.f.write("<br/>\n")
self.f.write("</div>\n")
self.linebreak = True
- def start_section(self, marker):
self._terminate_stream()
self.blocks.append(marker)
blk_path = "/".join(self.blocks)
self.f.write("<div class=\"section\" id=\"" + blk_path + "\">\n")
self.f.write("<div class=\"section-header\" id=\"" + blk_path +
"\">Section: " + blk_path + "</div>\n")
- def end_section(self, marker):
if (not self.blocks) or (marker != self.blocks[-1]):
raise Exception("Block nesting mismatch: \"%s\" \"%s\"" %
(marker, "/".join(self.blocks)))
self._terminate_stream()
blk_path = "/".join(self.blocks)
self.f.write("<div class=\"section-trailer\" id=\"section-trailer-" +
blk_path + "\">End section: " + blk_path + "</div>\n")
self.f.write("</div>\n")
self.blocks.pop()
- def section(self, marker):
return SectionCtxMgr(self, marker)
- def error(self, msg):
self._note("error", msg)
- def warning(self, msg):
self._note("warning", msg)
- def info(self, msg):
self._note("info", msg)
- def action(self, msg):
self._note("action", msg)
- def status_pass(self, msg):
self._note("status-pass", msg)
- def status_skipped(self, msg):
self._note("status-skipped", msg)
- def status_fail(self, msg):
self._note("status-fail", msg)
- def get_stream(self, name, chained_file=None):
return LogfileStream(self, name, chained_file)
- def get_runner(self, name, chained_file=None):
return RunAndLog(self, name, chained_file)
- _nonprint = ("%" + "".join(chr(c) for c in range(0, 32) if c != 10) +
"".join(chr(c) for c in range(127, 256)))
- def write(self, stream, data, implicit=False):
if stream != self.last_stream:
self._terminate_stream()
self.f.write("<div class=\"stream\" id=\"%s\">\n" % stream.name)
self.f.write("<div class=\"stream-header\" id=\"" + stream.name +
"\">Stream: " + stream.name + "</div>\n")
if implicit:
self.f.write("<span class=\"implicit\">")
self.f.write(self._escape(data))
if implicit:
self.f.write("</span>")
self.last_stream = stream
- def flush(self):
self.f.flush()
diff --git a/test/py/pytest.ini b/test/py/pytest.ini
new file mode 100644
index 000000000000..1bdff810d36e
--- /dev/null
+++ b/test/py/pytest.ini
@@ -0,0 +1,9 @@
+# Copyright (c) 2015 Stephen Warren
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+[pytest]
+markers =
- boardspec: U-Boot: Describes the set of boards a test can/can't run on.
- buildconfigspec: U-Boot: Describes Kconfig/config-header constraints.
diff --git a/test/py/test.py b/test/py/test.py
new file mode 100755
index 000000000000..7768216a2335
--- /dev/null
+++ b/test/py/test.py
@@ -0,0 +1,24 @@
+#!/usr/bin/env python
+# Copyright (c) 2015 Stephen Warren
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+import os
+import os.path
+import sys
+sys.argv.pop(0)
+args = ["py.test", os.path.dirname(__file__)]
+args.extend(sys.argv)
+try:
- os.execvp("py.test", args)
+except:
- import traceback
- traceback.print_exc()
- print >>sys.stderr, """
+exec(py.test) failed; perhaps you are missing some dependencies?
+See test/py/README.md for the list."""

diff --git a/test/py/test_000_version.py b/test/py/test_000_version.py
new file mode 100644
index 000000000000..360c8fd726e0
--- /dev/null
+++ b/test/py/test_000_version.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2015 Stephen Warren
+#
+# SPDX-License-Identifier: GPL-2.0
+# pytest runs tests in the order of their module path, which is related to the
+# filename containing the test. This file is named such that it is sorted
+# first, simply as a very basic sanity check of the functionality of the U-Boot
+# command prompt.
+def test_version(uboot_console):
- with uboot_console.disable_check("main_signon"):
response = uboot_console.run_command("version")
- uboot_console.validate_main_signon_in_text(response)
diff --git a/test/py/test_help.py b/test/py/test_help.py
new file mode 100644
index 000000000000..3cc896ee7af8
--- /dev/null
+++ b/test/py/test_help.py
@@ -0,0 +1,6 @@
+# Copyright (c) 2015 Stephen Warren
+#
+# SPDX-License-Identifier: GPL-2.0
+def test_help(uboot_console):
- uboot_console.run_command("help")
diff --git a/test/py/test_unknown_cmd.py b/test/py/test_unknown_cmd.py
new file mode 100644
index 000000000000..ba12de56a294
--- /dev/null
+++ b/test/py/test_unknown_cmd.py
@@ -0,0 +1,8 @@
+# Copyright (c) 2015 Stephen Warren
+#
+# SPDX-License-Identifier: GPL-2.0
+def test_unknown_command(uboot_console):
- with uboot_console.disable_check("unknown_command"):
response = uboot_console.run_command("non_existent_cmd")
- assert("Unknown command 'non_existent_cmd' - try 'help'" in response)
diff --git a/test/py/uboot_console_base.py b/test/py/uboot_console_base.py
new file mode 100644
index 000000000000..9f13fead2e7e
--- /dev/null
+++ b/test/py/uboot_console_base.py
@@ -0,0 +1,185 @@
+# Copyright (c) 2015 Stephen Warren
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
+import multiplexed_log
+import os
+import pytest
+import re
+import sys
+pattern_uboot_spl_signon = re.compile("(U-Boot SPL \d{4}\.\d{2}-[^\r\n]*)")
+pattern_uboot_main_signon = re.compile("(U-Boot \d{4}\.\d{2}-[^\r\n]*)")
+pattern_stop_autoboot_prompt = re.compile("Hit any key to stop autoboot: ")
+pattern_unknown_command = re.compile("Unknown command '.*' - try 'help'")
+pattern_error_notification = re.compile("## Error: ")
+class ConsoleDisableCheck(object):
- def __init__(self, console, check_type):
self.console = console
self.check_type = check_type
- def __enter__(self):
self.console.disable_check_count[self.check_type] += 1
- def __exit__(self, extype, value, traceback):
self.console.disable_check_count[self.check_type] -= 1
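The counter-based scheme above composes naturally under nesting, since a check is suppressed whenever its counter is non-zero. A self-contained sketch of the same idea:

```python
class DisableCheck(object):
    # Context manager that bumps a named counter on entry and restores it
    # on exit; a check is suppressed whenever its counter is non-zero, so
    # nested disables of the same check compose correctly.
    def __init__(self, counts, check_type):
        self.counts = counts
        self.check_type = check_type
    def __enter__(self):
        self.counts[self.check_type] += 1
    def __exit__(self, extype, value, traceback):
        self.counts[self.check_type] -= 1

counts = {"main_signon": 0}
with DisableCheck(counts, "main_signon"):
    inside = counts["main_signon"]
after = counts["main_signon"]
```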
+class ConsoleBase(object):
- def __init__(self, log, config, max_fifo_fill):
self.log = log
self.config = config
self.max_fifo_fill = max_fifo_fill
self.logstream = self.log.get_stream("console", sys.stdout)
# Array slice removes leading/trailing quotes
self.prompt = self.config.buildconfig["config_sys_prompt"][1:-1]
self.prompt_escaped = re.escape(self.prompt)
self.p = None
self.disable_check_count = {
"spl_signon": 0,
"main_signon": 0,
"unknown_command": 0,
"error_notification": 0,
}
self.at_prompt = False
self.at_prompt_logevt = None
self.ram_base = None
- def close(self):
if self.p:
self.p.close()
self.logstream.close()
- def run_command(self, cmd, wait_for_echo=True, send_nl=True, wait_for_prompt=True):
self.ensure_spawned()
if self.at_prompt and \
self.at_prompt_logevt != self.logstream.logfile.cur_evt:
self.logstream.write(self.prompt, implicit=True)
bad_patterns = []
bad_pattern_ids = []
if (self.disable_check_count["spl_signon"] == 0 and
self.uboot_spl_signon):
bad_patterns.append(self.uboot_spl_signon_escaped)
bad_pattern_ids.append("SPL signon")
if self.disable_check_count["main_signon"] == 0:
bad_patterns.append(self.uboot_main_signon_escaped)
bad_pattern_ids.append("U-Boot main signon")
if self.disable_check_count["unknown_command"] == 0:
bad_patterns.append(pattern_unknown_command)
bad_pattern_ids.append("Unknown command")
if self.disable_check_count["error_notification"] == 0:
bad_patterns.append(pattern_error_notification)
bad_pattern_ids.append("Error notification")
try:
self.at_prompt = False
if send_nl:
cmd += "\n"
while cmd:
# Limit max outstanding data, so UART FIFOs don't overflow
chunk = cmd[:self.max_fifo_fill]
cmd = cmd[self.max_fifo_fill:]
self.p.send(chunk)
if not wait_for_echo:
continue
chunk = re.escape(chunk)
chunk = chunk.replace("\\\n", "[\r\n]")
m = self.p.expect([chunk] + bad_patterns)
if m != 0:
self.at_prompt = False
raise Exception("Bad pattern found on console: " +
bad_pattern_ids[m - 1])
if not wait_for_prompt:
return
m = self.p.expect([self.prompt_escaped] + bad_patterns)
if m != 0:
self.at_prompt = False
raise Exception("Bad pattern found on console: " +
bad_pattern_ids[m - 1])
self.at_prompt = True
self.at_prompt_logevt = self.logstream.logfile.cur_evt
# Only strip \r\n; space/TAB might be significant if testing
# indentation.
return self.p.before.strip("\r\n")
except Exception as ex:
self.log.error(str(ex))
self.cleanup_spawn()
raise
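The echo-paced send loop above is the key flow-control trick: at most max_fifo_fill characters are outstanding at any time. Its chunking can be sketched in isolation (the command string below is just an example):

```python
def fifo_chunks(cmd, max_fifo_fill):
    # Yield at most max_fifo_fill characters at a time; the caller waits
    # for each chunk to be echoed before sending the next, so the
    # target's UART RX FIFO is never overrun.
    while cmd:
        yield cmd[:max_fifo_fill]
        cmd = cmd[max_fifo_fill:]

chunks = list(fifo_chunks("md.b 0 40\n", 4))
```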
- def ctrlc(self):
self.run_command(chr(3), wait_for_echo=False, send_nl=False)
- def ensure_spawned(self):
if self.p:
return
try:
self.at_prompt = False
self.log.action("Starting U-Boot")
self.p = self.get_spawn()
# Real targets can take a long time to scroll large amounts of
# text if LCD is enabled. This value may need tweaking in the
# future, possibly per-test to be optimal. This works for "help"
# on board "seaboard".
self.p.timeout = 30000
self.p.logfile_read = self.logstream
Also I have found that tests fail on chromebook_link because it cannot keep up with the pace of keyboard input. I'm not sure what the solution is - maybe the best thing is to implement buffering in the serial uclass, assuming that fixes it. For now I disabled LCD output.
I think it would be worth adding a test that checks for the banner and the prompt, so we know that other test failures are not due to this problem.
if self.config.buildconfig.get("CONFIG_SPL", False) == "y":
self.p.expect([pattern_uboot_spl_signon])
self.uboot_spl_signon = self.p.after
self.uboot_spl_signon_escaped = re.escape(self.p.after)
else:
self.uboot_spl_signon = None
self.p.expect([pattern_uboot_main_signon])
self.uboot_main_signon = self.p.after
self.uboot_main_signon_escaped = re.escape(self.p.after)
while True:
match = self.p.expect([self.prompt_escaped,
pattern_stop_autoboot_prompt])
if match == 1:
self.p.send(chr(3)) # CTRL-C
continue
break
self.at_prompt = True
self.at_prompt_logevt = self.logstream.logfile.cur_evt
except Exception as ex:
self.log.error(str(ex))
self.cleanup_spawn()
raise
- def cleanup_spawn(self):
try:
if self.p:
self.p.close()
except:
pass
self.p = None
- def validate_main_signon_in_text(self, text):
assert(self.uboot_main_signon in text)
- def disable_check(self, check_type):
return ConsoleDisableCheck(self, check_type)
- def find_ram_base(self):
if self.config.buildconfig.get("config_cmd_bdi", "n") != "y":
pytest.skip("bdinfo command not supported")
if self.ram_base == -1:
pytest.skip("Previously failed to find RAM bank start")
if self.ram_base is not None:
return self.ram_base
with self.log.section("find_ram_base"):
response = self.run_command("bdinfo")
for l in response.split("\n"):
if "-> start" in l:
self.ram_base = int(l.split("=")[1].strip(), 16)
break
if self.ram_base is None:
self.ram_base = -1
raise Exception("Failed to find RAM bank start in `bdinfo`")
return self.ram_base
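The bdinfo parsing above can be exercised on canned text; the sample output below is hypothetical (real bdinfo output varies per board), but the "-> start" line format matches what the code expects:

```python
def ram_base_from_bdinfo(text):
    # Scan `bdinfo` output for the first RAM bank's "-> start" line and
    # parse the hex value after the '='.
    for line in text.split("\n"):
        if "-> start" in line:
            return int(line.split("=")[1].strip(), 16)
    return None

# Hypothetical bdinfo excerpt.
sample = ("DRAM bank   = 0x00000000\n"
          "-> start    = 0x80000000\n"
          "-> size     = 0x40000000\n")
base = ram_base_from_bdinfo(sample)
```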
diff --git a/test/py/uboot_console_exec_attach.py b/test/py/uboot_console_exec_attach.py
new file mode 100644
index 000000000000..0267ae4dc070
--- /dev/null
+++ b/test/py/uboot_console_exec_attach.py
@@ -0,0 +1,36 @@
+# Copyright (c) 2015 Stephen Warren
+# Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+#
+# SPDX-License-Identifier: GPL-2.0
It would be useful to have a short description at the top of each file / class explaining what it is for.
+from ubspawn import Spawn
+from uboot_console_base import ConsoleBase
+def cmdline(app, args):
- return app + ' "' + '" "'.join(args) + '"'
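The helper above naively wraps each argument in double quotes (note it does not escape quotes embedded in the arguments themselves). A quick demonstration; the board type/identity values are hypothetical:

```python
def cmdline(app, args):
    # Wrap each argument in double quotes; embedded quotes in the
    # arguments are NOT escaped by this simple scheme.
    return app + ' "' + '" "'.join(args) + '"'

cmd = cmdline("uboot-test-flash", ["seaboard", "na"])
```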
+class ConsoleExecAttach(ConsoleBase):
- def __init__(self, log, config):
# The max_fifo_fill value might need tweaking per-board/-SoC?
# 1 would be safe anywhere, but is very slow (a pexpect issue?).
# 16 is a common FIFO size.
# HW flow control would mean this could be infinite.
super(ConsoleExecAttach, self).__init__(log, config, max_fifo_fill=16)
self.log.action("Flashing U-Boot")
cmd = ["uboot-test-flash", config.board_type, config.board_identity]
runner = self.log.get_runner(cmd[0])
runner.run(cmd)
runner.close()
[snip]
Regards, Simon
On 12/19/2015 03:24 PM, Simon Glass wrote:
Hi Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
This tool aims to test U-Boot by executing U-Boot shell commands using the console interface. A single top-level script exists to execute or attach to the U-Boot console, run the entire script of tests against it, and summarize the results. Advantages of this approach are:
- Testing is performed in the same way a user or script would interact with U-Boot; there can be no disconnect.
- There is no need to write or embed test-related code into U-Boot itself. It is asserted that writing test-related code in Python is simpler and more flexible than writing it all in C.
- It is reasonably simple to interact with U-Boot in this way.
A few simple tests are provided as examples. Soon, we should convert as many as possible of the other tests in test/* and test/cmd_ut.c too.
In the future, I hope to publish (out-of-tree) the hook scripts, relay control utilities, and udev rules I will use for my own HW setup.
See README.md for more details!
This is a huge step forward for testing in U-Boot. Congratulations on putting this together!
Tested on chromebook_link, sandbox Tested-by: Simon Glass sjg@chromium.org
I've made various comments in the series as I think it needs a little tuning. I'm also interested in how we can arrange for the existing unit tests to be run (and results supported) by this framework.
One concern I have is about the ease of running and writing tests. It is pretty easy at present to run a particular driver model test:
./u-boot -d test.dtb -c "ut dm uclass"
and we can run this in gdb and figure out where things are going wrong (I do this quite a bit). Somehow we need to preserve this ease of use. The tests should be accessible. I'm not sure how you intend to make that work.
You can select which individual tests to run by passing "-k testname" on the command-line. Assuming Python-level tests have the desired granularity, that should be enough for test selection.
There's currently no support for running U-Boot under gdb. The acts of running U-Boot under gdb, disabling stdout redirection, and removing timeouts would be pretty easy to add. However, I'm not sure how you'd be able to interact with the gdb console and U-Boot output without interfering with the test script's need to capture the shell output in order to run the tests and interpret the results. Perhaps simplest would be to add some mechanism to pause the test process so that you could manually attach a gdb process externally at the appropriate time, perhaps along with the test process automatically spawning another shell/terminal/... to host the gdb (in which case it could be made to automatically attach to the correct process).
diff --git a/test/py/.gitignore b/test/py/.gitignore
+## Testing sandbox
+To run the testsuite on the sandbox port (U-Boot built as a native user-space
+application), simply execute:
+```
+./test/py/test.py --bd sandbox --build
+```
+The `--bd` option tells the test suite which board type is being tested. This
+lets the test suite know which features the board has, and hence exactly what
+can be tested.
Can we use -b to fit in with buildman and patman?
pytest reserves all lower-case single-letter options for itself, so we cannot use any of those. I suppose we could write a wrapper script to translate from -b to --board etc., but that would be a bit messy and prevent access to the shadowed pytest options.
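If such a wrapper were ever written, the translation itself would be trivial; a hypothetical sketch (the option mapping here is invented, and as noted it would shadow pytest's own short options):

```python
# Hypothetical short-option translation; pytest reserves lower-case
# single-letter options for itself, so a wrapper would have to rewrite
# them before handing argv on to py.test.
SHORT_OPTS = {"-b": "--bd"}

def translate(argv):
    return [SHORT_OPTS.get(arg, arg) for arg in argv]

args = translate(["-b", "sandbox", "--build"])
```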
diff --git a/test/py/conftest.py b/test/py/conftest.py
+def pytest_configure(config):
This series should have function comments throughout on non-trivial functions - e.g. purpose of the function and a description of the parameters and return value.
I can see this makes sense in some cases, but this particular function is a standard pytest function and already documented on http://pytest.org/. Would you still want additional documentation even for such functions?
diff --git a/test/py/uboot_console_base.py b/test/py/uboot_console_base.py
+class ConsoleBase(object):
- def ensure_spawned(self):
if self.p:
return
try:
self.at_prompt = False
self.log.action("Starting U-Boot")
self.p = self.get_spawn()
# Real targets can take a long time to scroll large amounts of
# text if LCD is enabled. This value may need tweaking in the
# future, possibly per-test to be optimal. This works for "help"
# on board "seaboard".
self.p.timeout = 30000
self.p.logfile_read = self.logstream
Also I have found that tests fail on chromebook_link because it cannot keep up with the pace of keyboard input. I'm not sure what the solution is - maybe the best thing is to implement buffering in the serial uclass, assuming that fixes it. For now I disabled LCD output.
I think it would be worth adding a test that checks for the banner and the prompt, so we know that other test failures are not due to this problem.
I had the same problem on Seaboard. I solved that by having the "expect" code wait for command echo bit-by-bit so that the host's TX side could never overflow the target's RX side FIFOs. Perhaps try lowering the max_fifo_fill value in test/py/uboot_console_exec_attach.py; does that solve the issue?
On 12/19/2015 03:24 PM, Simon Glass wrote:
Hi Stephen,
On 2 December 2015 at 15:18, Stephen Warren swarren@wwwdotorg.org wrote:
I think it would be worth adding a test that checks for the banner and the prompt, so we know that other test failures are not due to this problem.
test_000_version.py already does this. It should always be the first test run, hence its odd filename.