[PATCH 0/5] patman: test_util: Prettify test report outputs for python tools

These are a few changes to the test utilities shared by binman, patman, buildman and dtoc to make their printed test results cleaner. Simon has sent a patch making test_fdt use these utils [1]; this series applies on top of that and affects its output as well.
[1] dtoc: Update fdt tests to use test_util https://patchwork.ozlabs.org/project/uboot/patch/20220319000150.1722595-1-sj...
As a comparison, here's how the report currently looks when I add three bad tests to patman (one failing, one erroring, one skipped). Lines are wrapped with backslashes at the ends for convenience; most things are otherwise jammed onto one line:
Printing text block to stderr:
- test_error *
Printing text block to stdout: ++++++++++++++++
- test_failure +
++++++++++++++++ <unittest.result.TestResult run=48 errors=1 failures=1> patman.func_test.TestFunctional.test_error \ testtools.testresult.real._StringException: \ RuntimeError: Uncaught error during test
testtools.testresult.real._StringException: \ AssertionError: False is not true
[(<subunit.RemotedTestCase \ description='patman.func_test.TestFunctional.test_failure'>, \ 'testtools.testresult.real._StringException: \ AssertionError: False is not true\n\n')] 1 patman test SKIPPED: patman.func_test.TestFunctional.test_skipped \ (subunit.RemotedTestCase): Example skip reason
patman tests FAILED
This quickly gets much worse as the number of test failures grows.
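For reference, the three bad tests behind this comparison are roughly equivalent to the following sketch (reconstructed from the output above, not the exact code I added):

import sys
import unittest

class TestFunctional(unittest.TestCase):
    def test_error(self):
        # Print to stderr, then raise an uncaught exception
        print('Printing text block to stderr:\n- test_error *', file=sys.stderr)
        raise RuntimeError('Uncaught error during test')

    def test_failure(self):
        # Print to stdout, then fail an assertion
        print('Printing text block to stdout:\n++++++++++++++++\n'
              '- test_failure +\n++++++++++++++++')
        self.assertTrue(False)

    def test_skipped(self):
        self.skipTest('Example skip reason')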
Here's what I get after this series:
======================== Running patman tests ========================
ERROR: patman.func_test.TestFunctional.test_error (subunit.RemotedTestCase) patman.func_test.TestFunctional.test_error
testtools.testresult.real._StringException: stderr: {{{ Printing text block to stderr:
- test_error *
}}}
RuntimeError: Uncaught error during test
====================================================================== FAIL: patman.func_test.TestFunctional.test_failure (subunit.RemotedTestCase) patman.func_test.TestFunctional.test_failure
testtools.testresult.real._StringException: stdout: {{{ Printing text block to stdout: ++++++++++++++++
- test_failure +
++++++++++++++++ }}}
AssertionError: False is not true
====================================================================== SKIP: patman.func_test.TestFunctional.test_skipped (subunit.RemotedTestCase) patman.func_test.TestFunctional.test_skipped
Example skip reason
Ran 48 tests in 4.469s
FAILED (failures=1, errors=1, skipped=1)
There are still some weird parts in this, like the stdout/stderr being quoted with braces and the test descriptions being the same as the test names, but these are quirks of the libraries used to run the tests concurrently. The report looks slightly better when the tests are run non-concurrently.
Alper Nebi Yasak (5):
  patman: test_util: Fix printing results for failed tests
  patman: test_util: Handle nonexistent tests while loading tests
  patman: test_util: Use unittest text runner to print test results
  patman: test_util: Customize unittest test results for more info
  patman: test_util: Print test stdout/stderr within test summaries
 tools/binman/main.py | 8 +-
 tools/buildman/main.py | 8 +-
 tools/concurrencytest/concurrencytest.py | 83 ++++++++++++-
 tools/dtoc/main.py | 9 +-
 tools/dtoc/test_fdt.py | 8 +-
 tools/patman/main.py | 8 +-
 tools/patman/test_util.py | 150 +++++++++++++++--------
 7 files changed, 197 insertions(+), 77 deletions(-)

When printing a python tool's test results, the entire list of failed tests and their tracebacks are reprinted for every failed test. This makes the test output quite unreadable. Fix the loop to print failures and tracebacks one at a time.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
---
 tools/patman/test_util.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tools/patman/test_util.py b/tools/patman/test_util.py
index c60eb3628e23..8b2220dbbafc 100644
--- a/tools/patman/test_util.py
+++ b/tools/patman/test_util.py
@@ -127,7 +127,7 @@ def report_result(toolname:str, test_name: str, result: unittest.TestResult):
     for test, err in result.errors:
         print(test.id(), err)
     for test, err in result.failures:
-        print(err, result.failures)
+        print(test.id(), err)
     if result.skipped:
         print('%d %s test%s SKIPPED:' % (len(result.skipped), toolname,
               's' if len(result.skipped) > 1 else ''))

On Sat, 2 Apr 2022 at 11:06, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
When printing a python tool's test results, the entire list of failed tests and their tracebacks are reprinted for every failed test. This makes the test output quite unreadable. Fix the loop to print failures and tracebacks one at a time.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
 tools/patman/test_util.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Simon Glass sjg@chromium.org

On Sat, 2 Apr 2022 at 11:06, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
When printing a python tool's test results, the entire list of failed tests and their tracebacks are reprinted for every failed test. This makes the test output quite unreadable. Fix the loop to print failures and tracebacks one at a time.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
 tools/patman/test_util.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Reviewed-by: Simon Glass sjg@chromium.org
Applied to u-boot-dm, thanks!

It's possible to request a specific test to run when trying to run a python tool's tests. If we request a nonexistent test, the unittest loaders generate a fake test that reports this as an error. However, we get these fake tests even when the test exists, because test_util can load tests from multiple places one by one and the test we want only exists in one.
The test_util helpers currently remove these fake tests when printing test results, but that's more of a workaround than a proper solution. Instead, don't even try to load the missing tests.
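To illustrate the loader behaviour this relies on, here is a standalone sketch (not part of the patch):

import unittest

class ExampleTests(unittest.TestCase):
    def test_exists(self):
        pass

loader = unittest.TestLoader()

# Since Python 3.5, asking for a test that doesn't exist returns a
# synthetic test that raises the AttributeError when run, rather than
# raising at load time:
suite = loader.loadTestsFromName('test_missing', ExampleTests)
result = unittest.TestResult()
suite.run(result)
print(result.errors)   # one error mentioning "has no attribute 'test_missing'"

# Checking the available names first avoids loading that synthetic test:
if 'test_missing' in loader.getTestCaseNames(ExampleTests):
    suite = loader.loadTestsFromName('test_missing', ExampleTests)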
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
---
 tools/patman/test_util.py | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)
diff --git a/tools/patman/test_util.py b/tools/patman/test_util.py
index 8b2220dbbafc..a4c2a2c3c0b2 100644
--- a/tools/patman/test_util.py
+++ b/tools/patman/test_util.py
@@ -110,19 +110,6 @@ def report_result(toolname:str, test_name: str, result: unittest.TestResult):
         test_name: Name of test that was run, or None for all
         result: A unittest.TestResult object containing the results
     """
-    # Remove errors which just indicate a missing test. Since Python v3.5 If an
-    # ImportError or AttributeError occurs while traversing name then a
-    # synthetic test that raises that error when run will be returned. These
-    # errors are included in the errors accumulated by result.errors.
-    if test_name:
-        errors = []
-
-        for test, err in result.errors:
-            if ("has no attribute '%s'" % test_name) not in err:
-                errors.append((test, err))
-            result.testsRun -= 1
-        result.errors = errors
-
     print(result)
     for test, err in result.errors:
         print(test.id(), err)
@@ -184,10 +171,12 @@ def run_test_suites(result, debug, verbosity, test_preserve_dirs, processes,
                   preserve_outdirs=test_preserve_dirs and test_name is not None,
                   toolpath=toolpath, verbosity=verbosity)
         if test_name:
-            try:
+            # Since Python v3.5 If an ImportError or AttributeError occurs
+            # while traversing a name then a synthetic test that raises that
+            # error when run will be returned. Check that the requested test
+            # exists, otherwise these errors are included in the results.
+            if test_name in loader.getTestCaseNames(module):
                 suite.addTests(loader.loadTestsFromName(test_name, module))
-            except AttributeError:
-                continue
         else:
             suite.addTests(loader.loadTestsFromTestCase(module))
     if use_concurrent and processes != 1:

On Sat, 2 Apr 2022 at 11:06, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
It's possible to request a specific test to run when trying to run a python tool's tests. If we request a nonexistent test, the unittest loaders generate a fake test that reports this as an error. However, we get these fake tests even when the test exists, because test_util can load tests from multiple places one by one and the test we want only exists in one.
The test_util helpers currently remove these fake tests when printing test results, but that's more of a workaround than a proper solution. Instead, don't even try to load the missing tests.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
 tools/patman/test_util.py | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

On Sat, 2 Apr 2022 at 11:06, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
It's possible to request a specific test to run when trying to run a python tool's tests. If we request a nonexistent test, the unittest loaders generate a fake test that reports this as an error. However, we get these fake tests even when the test exists, because test_util can load tests from multiple places one by one and the test we want only exists in one.
The test_util helpers currently remove these fake tests when printing test results, but that's more of a workaround than a proper solution. Instead, don't even try to load the missing tests.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
 tools/patman/test_util.py | 21 +++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org
Applied to u-boot-dm, thanks!

The python tools' test utilities handle printing test results, but the output is quite bare compared to an ordinary unittest run. Delegate printing the results to a unittest text runner, which gives us niceties like clear separation between each test's result and how long it took to run the test suite.
Unfortunately it does not print info for skipped tests by default, but this can be handled later by a custom test result subclass. It also does not print the tool name; manually print a heading that includes the toolname so that the outputs of each tool's tests are distinguishable in the CI output.
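In outline, the change amounts to something like this (a simplified sketch, not the exact code; see the diff below):

import sys
import unittest

def run_tool_suite(toolname, suite, verbosity=1):
    """Hand the collected suite to a text runner and print a per-tool
    heading, then return the result object to the caller."""
    runner = unittest.TextTestRunner(stream=sys.stdout, verbosity=verbosity)
    print(f' Running {toolname} tests '.center(70, '='))
    return runner.run(suite)

# Callers then derive the exit code from the result, e.g.:
#   result = run_tool_suite('binman', suite)
#   return 0 if result.wasSuccessful() else 1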
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
---
 tools/binman/main.py | 8 ++----
 tools/buildman/main.py | 8 ++----
 tools/dtoc/main.py | 9 +++---
 tools/dtoc/test_fdt.py | 8 +++---
 tools/patman/main.py | 8 ++----
 tools/patman/test_util.py | 58 ++++++++++++++-------------------------
 6 files changed, 38 insertions(+), 61 deletions(-)
diff --git a/tools/binman/main.py b/tools/binman/main.py
index 9392b59adb15..d2eea1c4bb76 100755
--- a/tools/binman/main.py
+++ b/tools/binman/main.py
@@ -14,7 +14,6 @@ import site
 import sys
 import traceback
-import unittest
 
 # Get the absolute path to this file at run-time
 our_path = os.path.dirname(os.path.realpath(__file__))
@@ -80,19 +79,18 @@ def RunTests(debug, verbosity, processes, test_preserve_dirs, args, toolpath):
     from binman import image_test
     import doctest
 
-    result = unittest.TestResult()
     test_name = args and args[0] or None
 
     # Run the entry tests first ,since these need to be the first to import the
     # 'entry' module.
-    test_util.run_test_suites(
-        result, debug, verbosity, test_preserve_dirs, processes, test_name,
+    result = test_util.run_test_suites(
+        'binman', debug, verbosity, test_preserve_dirs, processes, test_name,
         toolpath, [bintool_test.TestBintool, entry_test.TestEntry,
                    ftest.TestFunctional, fdt_test.TestFdt, elf_test.TestElf,
                    image_test.TestImage, cbfs_util_test.TestCbfs,
                    fip_util_test.TestFip])
 
-    return test_util.report_result('binman', test_name, result)
+    return (0 if result.wasSuccessful() else 1)
 
 def RunTestCoverage(toolpath):
     """Run the tests and check that we get 100% coverage"""
diff --git a/tools/buildman/main.py b/tools/buildman/main.py
index 3b6af240802d..67c560c48d37 100755
--- a/tools/buildman/main.py
+++ b/tools/buildman/main.py
@@ -11,7 +11,6 @@ import os
 import re
 import sys
-import unittest
 
 # Bring in the patman libraries
 our_path = os.path.dirname(os.path.realpath(__file__))
@@ -34,19 +33,18 @@ def RunTests(skip_net_tests, verboose, args):
     from buildman import test
     import doctest
 
-    result = unittest.TestResult()
     test_name = args and args[0] or None
     if skip_net_tests:
         test.use_network = False
 
     # Run the entry tests first ,since these need to be the first to import the
     # 'entry' module.
-    test_util.run_test_suites(
-        result, False, verboose, False, None, test_name, [],
+    result = test_util.run_test_suites(
+        'buildman', False, verboose, False, None, test_name, [],
         [test.TestBuild, func_test.TestFunctional,
          'buildman.toolchain', 'patman.gitutil'])
 
-    return test_util.report_result('buildman', test_name, result)
+    return (0 if result.wasSuccessful() else 1)
 
 options, args = cmdline.ParseArgs()
 
diff --git a/tools/dtoc/main.py b/tools/dtoc/main.py
index fac9db9c786f..5508759d4d56 100755
--- a/tools/dtoc/main.py
+++ b/tools/dtoc/main.py
@@ -24,7 +24,6 @@ from argparse import ArgumentParser
 import os
 import sys
-import unittest
 
 # Bring in the patman libraries
 our_path = os.path.dirname(os.path.realpath(__file__))
@@ -49,18 +48,18 @@ def run_tests(processes, args):
     from dtoc import test_src_scan
     from dtoc import test_dtoc
 
-    result = unittest.TestResult()
     sys.argv = [sys.argv[0]]
     test_name = args.files and args.files[0] or None
 
     test_dtoc.setup()
 
-    test_util.run_test_suites(
-        result, debug=True, verbosity=1, test_preserve_dirs=False,
+    result = test_util.run_test_suites(
+        toolname='dtoc', debug=True, verbosity=1, test_preserve_dirs=False,
         processes=processes, test_name=test_name, toolpath=[],
         class_and_module_list=[test_dtoc.TestDtoc,test_src_scan.TestSrcScan])
 
-    return test_util.report_result('binman', test_name, result)
+    return (0 if result.wasSuccessful() else 1)
+
 
 def RunTestCoverage():
     """Run the tests and check that we get 100% coverage"""
diff --git a/tools/dtoc/test_fdt.py b/tools/dtoc/test_fdt.py
index 3859af8d032b..3baf4437cdd3 100755
--- a/tools/dtoc/test_fdt.py
+++ b/tools/dtoc/test_fdt.py
@@ -784,13 +784,13 @@ def RunTests(args):
     Returns:
         Return code, 0 on success
     """
-    result = unittest.TestResult()
     test_name = args and args[0] or None
-    test_util.run_test_suites(
-        result, False, False, False, None, test_name, None,
+    result = test_util.run_test_suites(
+        'test_fdt', False, False, False, None, test_name, None,
         [TestFdt, TestNode, TestProp, TestFdtUtil])
 
-    return test_util.report_result('fdt', test_name, result)
+    return (0 if result.wasSuccessful() else 1)
+
 
 if __name__ != '__main__':
     sys.exit(1)
diff --git a/tools/patman/main.py b/tools/patman/main.py
index 2a2ac4570931..66d4806c8d8f 100755
--- a/tools/patman/main.py
+++ b/tools/patman/main.py
@@ -12,7 +12,6 @@ import shutil
 import sys
 import traceback
-import unittest
 
 if __name__ == "__main__":
     # Allow 'from patman import xxx to work'
@@ -134,13 +133,12 @@ import doctest
     from patman import func_test
 
-    result = unittest.TestResult()
-    test_util.run_test_suites(
-        result, False, False, False, None, None, None,
+    result = test_util.run_test_suites(
+        'patman', False, False, False, None, None, None,
         [test_checkpatch.TestPatch, func_test.TestFunctional,
          'gitutil', 'settings', 'terminal'])
 
-    sys.exit(test_util.report_result('patman', args.testname, result))
+    sys.exit(0 if result.wasSuccessful() else 1)
 
 # Process commits, produce patches files, check them, email them
 elif args.cmd == 'send':
diff --git a/tools/patman/test_util.py b/tools/patman/test_util.py
index a4c2a2c3c0b2..ba8f87f75f03 100644
--- a/tools/patman/test_util.py
+++ b/tools/patman/test_util.py
@@ -102,36 +102,12 @@ def capture_sys_output():
         sys.stdout, sys.stderr = old_out, old_err
 
 
-def report_result(toolname:str, test_name: str, result: unittest.TestResult):
-    """Report the results from a suite of tests
-
-    Args:
-        toolname: Name of the tool that ran the tests
-        test_name: Name of test that was run, or None for all
-        result: A unittest.TestResult object containing the results
-    """
-    print(result)
-    for test, err in result.errors:
-        print(test.id(), err)
-    for test, err in result.failures:
-        print(test.id(), err)
-    if result.skipped:
-        print('%d %s test%s SKIPPED:' % (len(result.skipped), toolname,
-              's' if len(result.skipped) > 1 else ''))
-        for skip_info in result.skipped:
-            print('%s: %s' % (skip_info[0], skip_info[1]))
-    if result.errors or result.failures:
-        print('%s tests FAILED' % toolname)
-        return 1
-    return 0
-
-
-def run_test_suites(result, debug, verbosity, test_preserve_dirs, processes,
+def run_test_suites(toolname, debug, verbosity, test_preserve_dirs, processes,
                     test_name, toolpath, class_and_module_list):
     """Run a series of test suites and collect the results
 
     Args:
-        result: A unittest.TestResult object to add the results to
+        toolname: Name of the tool that ran the tests
        debug: True to enable debugging, which shows a full stack trace on error
        verbosity: Verbosity level to use (0-4)
        test_preserve_dirs: True to preserve the input directory used by tests
@@ -145,11 +121,6 @@ def run_test_suites(result, debug, verbosity, test_preserve_dirs, processes,
         class_and_module_list: List of test classes (type class) and module
            names (type str) to run
     """
-    for module in class_and_module_list:
-        if isinstance(module, str) and (not test_name or test_name == module):
-            suite = doctest.DocTestSuite(module)
-            suite.run(result)
-
     sys.argv = [sys.argv[0]]
     if debug:
         sys.argv.append('-D')
@@ -161,6 +132,19 @@ def run_test_suites(result, debug, verbosity, test_preserve_dirs, processes,
 
     suite = unittest.TestSuite()
     loader = unittest.TestLoader()
+    runner = unittest.TextTestRunner(
+        stream=sys.stdout,
+        verbosity=(1 if verbosity is None else verbosity),
+    )
+
+    if use_concurrent and processes != 1:
+        suite = ConcurrentTestSuite(suite,
+                fork_for_tests(processes or multiprocessing.cpu_count()))
+
+    for module in class_and_module_list:
+        if isinstance(module, str) and (not test_name or test_name == module):
+            suite.addTests(doctest.DocTestSuite(module))
+
     for module in class_and_module_list:
         if isinstance(module, str):
             continue
@@ -179,9 +163,9 @@ def run_test_suites(result, debug, verbosity, test_preserve_dirs, processes,
             suite.addTests(loader.loadTestsFromName(test_name, module))
         else:
             suite.addTests(loader.loadTestsFromTestCase(module))
-    if use_concurrent and processes != 1:
-        concurrent_suite = ConcurrentTestSuite(suite,
-                fork_for_tests(processes or multiprocessing.cpu_count()))
-        concurrent_suite.run(result)
-    else:
-        suite.run(result)
+
+    print(f" Running {toolname} tests ".center(70, "="))
+    result = runner.run(suite)
+    print()
+
+    return result

On Sat, 2 Apr 2022 at 11:06, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
The python tools' test utilities handle printing test results, but the output is quite bare compared to an ordinary unittest run. Delegate printing the results to a unittest text runner, which gives us niceties like clear separation between each test's result and how long it took to run the test suite.
Unfortunately it does not print info for skipped tests by default, but this can be handled later by a custom test result subclass. It also does not print the tool name; manually print a heading that includes the toolname so that the outputs of each tool's tests are distinguishable in the CI output.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
 tools/binman/main.py | 8 ++----
 tools/buildman/main.py | 8 ++----
 tools/dtoc/main.py | 9 +++---
 tools/dtoc/test_fdt.py | 8 +++---
 tools/patman/main.py | 8 ++----
 tools/patman/test_util.py | 58 ++++++++++++++-------------------------
 6 files changed, 38 insertions(+), 61 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

On Sat, 2 Apr 2022 at 11:06, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
The python tools' test utilities handle printing test results, but the output is quite bare compared to an ordinary unittest run. Delegate printing the results to a unittest text runner, which gives us niceties like clear separation between each test's result and how long it took to run the test suite.
Unfortunately it does not print info for skipped tests by default, but this can be handled later by a custom test result subclass. It also does not print the tool name; manually print a heading that includes the toolname so that the outputs of each tool's tests are distinguishable in the CI output.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
 tools/binman/main.py | 8 ++----
 tools/buildman/main.py | 8 ++----
 tools/dtoc/main.py | 9 +++---
 tools/dtoc/test_fdt.py | 8 +++---
 tools/patman/main.py | 8 ++----
 tools/patman/test_util.py | 58 ++++++++++++++-------------------------
 6 files changed, 38 insertions(+), 61 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org
Applied to u-boot-dm, thanks!

By default, unittest test summaries only print extended info about tests that failed or couldn't run due to an error. Use a custom text result class to print info about more cases: skipped tests, expected failures and unexpected successes.
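The core of the idea, as a minimal sketch (the class in the actual diff below also handles verbosity and skip formatting):

import sys
import unittest

class ExtraInfoTextResult(unittest.TextTestResult):
    """Also summarize skips, expected failures and unexpected successes
    after the usual FAIL/ERROR lists."""
    def printErrors(self):
        super().printErrors()   # FAIL and ERROR lists
        self.printErrorList('SKIP', self.skipped)
        self.printErrorList('XFAIL', self.expectedFailures)
        self.printErrorList('XPASS', [
            (test, 'Test was expected to fail, but succeeded.\n')
            for test in self.unexpectedSuccesses])

# Plugged into the runner via its resultclass argument:
runner = unittest.TextTestRunner(stream=sys.stdout, verbosity=2,
                                 resultclass=ExtraInfoTextResult)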
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
---
This could be squashed into the previous patch, but makes the diff ugly.
 tools/patman/test_util.py | 46 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)
diff --git a/tools/patman/test_util.py b/tools/patman/test_util.py
index ba8f87f75f03..130d91409140 100644
--- a/tools/patman/test_util.py
+++ b/tools/patman/test_util.py
@@ -102,6 +102,51 @@ def capture_sys_output():
         sys.stdout, sys.stderr = old_out, old_err
 
 
+class FullTextTestResult(unittest.TextTestResult):
+    """A test result class that can print extended text results to a stream
+
+    This is meant to be used by a TestRunner as a result class. Like
+    TextTestResult, this prints out the names of tests as they are run,
+    errors as they occur, and a summary of the results at the end of the
+    test run. Beyond those, this prints information about skipped tests,
+    expected failures and unexpected successes.
+
+    Args:
+        stream: A file-like object to write results to
+        descriptions (bool): True to print descriptions with test names
+        verbosity (int): Detail of printed output per test as they run
+            Test stdout and stderr always get printed when buffering
+            them is disabled by the test runner. In addition to that,
+            0: Print nothing
+            1: Print a dot per test
+            2: Print test names
+    """
+    def __init__(self, stream, descriptions, verbosity):
+        self.verbosity = verbosity
+        super().__init__(stream, descriptions, verbosity)
+
+    def printErrors(self):
+        "Called by TestRunner after test run to summarize the tests"
+        # The parent class doesn't keep unexpected successes in the same
+        # format as the rest. Adapt it to what printErrorList expects.
+        unexpected_successes = [
+            (test, 'Test was expected to fail, but succeeded.\n')
+            for test in self.unexpectedSuccesses
+        ]
+
+        super().printErrors() # FAIL and ERROR
+        self.printErrorList('SKIP', self.skipped)
+        self.printErrorList('XFAIL', self.expectedFailures)
+        self.printErrorList('XPASS', unexpected_successes)
+
+    def addSkip(self, test, reason):
+        """Called when a test is skipped."""
+        # Add empty line to keep spacing consistent with other results
+        if not reason.endswith('\n'):
+            reason += '\n'
+        super().addSkip(test, reason)
+
+
 def run_test_suites(toolname, debug, verbosity, test_preserve_dirs, processes,
                     test_name, toolpath, class_and_module_list):
     """Run a series of test suites and collect the results
@@ -135,6 +180,7 @@ def run_test_suites(toolname, debug, verbosity, test_preserve_dirs, processes,
     runner = unittest.TextTestRunner(
         stream=sys.stdout,
         verbosity=(1 if verbosity is None else verbosity),
+        resultclass=FullTextTestResult,
     )
 
     if use_concurrent and processes != 1:

By default, unittest test summaries only print extended info about tests that failed or couldn't run due to an error. Use a custom text result class to print info about more cases: skipped tests, expected failures and unexpected successes.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
---
This could be squashed into the previous patch, but makes the diff ugly.
 tools/patman/test_util.py | 46 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)
Applied to u-boot-dm, thanks!

While running tests for a python tool, the tests' outputs get printed in whatever order they happen to run, without any indication as to which output belongs to which test. Unittest supports capturing these outputs and printing them as part of the test summaries, but when a failure or error occurs it switches back to printing as the tests run. Testtools and subunit tests can do the same as their parts inherit from unittest, but they don't outright expose this functionality.
On the unittest side, enable output buffering for the custom test result class. Try to avoid ugly outputs by not printing stdout/stderr before the test summary for low verbosity levels and for successful tests.
On the subunit side, implement a custom TestProtocolClient that enables the same underlying functionality and injects the captured streams as additional test details. This causes them to be merged into their test's error traceback message, which is later rebuilt into an exception and passed to our unittest report class.
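Roughly, the two halves look like this (a sketch only; it assumes buffering has already replaced sys.stdout/sys.stderr with StringIO objects, as unittest's buffer mode does):

import sys
import unittest
from testtools.content import text_content

# unittest side: the runner captures each test's stdout/stderr and replays
# them in the failure summary instead of as the tests run.
runner = unittest.TextTestRunner(stream=sys.stdout, buffer=True)

# subunit/testtools side (hypothetical helper): while buffering is active,
# sys.stdout and sys.stderr are StringIO objects, so their contents can be
# attached to the test outcome as extra details.
def captured_output_details():
    details = {}
    stdout = sys.stdout.getvalue()
    if stdout:
        details['stdout'] = text_content(stdout)
    stderr = sys.stderr.getvalue()
    if stderr:
        details['stderr'] = text_content(stderr)
    return details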
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
---
The way I had to do this for concurrencytest feels hacky, so I am ambivalent towards this patch. I'm actually fine with re-running a failing binman test alone with --debug -v6 to get better, isolated output from it.
 tools/concurrencytest/concurrencytest.py | 83 +++++++++++++++++++++++-
 tools/patman/test_util.py | 33 +++++++++-
 2 files changed, 112 insertions(+), 4 deletions(-)
diff --git a/tools/concurrencytest/concurrencytest.py b/tools/concurrencytest/concurrencytest.py
index 5e88b94f4156..1c4f03f37e5b 100644
--- a/tools/concurrencytest/concurrencytest.py
+++ b/tools/concurrencytest/concurrencytest.py
@@ -31,6 +31,7 @@ from subunit.test_results import AutoTimingTestResultDecorator
 
 from testtools import ConcurrentTestSuite, iterate_tests
+from testtools.content import TracebackContent, text_content
 
 
 _all__ = [
@@ -43,11 +44,81 @@ CPU_COUNT = cpu_count()
 
 
-def fork_for_tests(concurrency_num=CPU_COUNT):
+class BufferingTestProtocolClient(TestProtocolClient):
+    """A TestProtocolClient which can buffer the test outputs
+
+    This class captures the stdout and stderr output streams of the
+    tests as it runs them, and includes the output texts in the subunit
+    stream as additional details.
+
+    Args:
+        stream: A file-like object to write a subunit stream to
+        buffer (bool): True to capture test stdout/stderr outputs and
+            include them in the test details
+    """
+    def __init__(self, stream, buffer=True):
+        super().__init__(stream)
+        self.buffer = buffer
+
+    def _addOutcome(self, outcome, test, error=None, details=None,
+            error_permitted=True):
+        """Report a test outcome to the subunit stream
+
+        The parent class uses this function as a common implementation
+        for various methods that report successes, errors, failures, etc.
+
+        This version automatically upgrades the error tracebacks to the
+        new 'details' format by wrapping them in a Content object, so
+        that we can include the captured test output in the test result
+        details.
+
+        Args:
+            outcome: A string describing the outcome - used as the
+                event name in the subunit stream.
+            test: The test case whose outcome is to be reported
+            error: Standard unittest positional argument form - an
+                exc_info tuple.
+            details: New Testing-in-python drafted API; a dict from
+                string to subunit.Content objects.
+            error_permitted: If True then one and only one of error or
+                details must be supplied. If False then error must not
+                be supplied and details is still optional.
+        """
+        if details is None:
+            details = {}
+
+        # Parent will raise an exception if error_permitted is False but
+        # error is not None. We want that exception in that case, so
+        # don't touch error when error_permitted is explicitly False.
+        if error_permitted and error is not None:
+            # Parent class prefers error over details
+            details['traceback'] = TracebackContent(error, test)
+            error_permitted = False
+            error = None
+
+        if self.buffer:
+            stdout = sys.stdout.getvalue()
+            if stdout:
+                details['stdout'] = text_content(stdout)
+
+            stderr = sys.stderr.getvalue()
+            if stderr:
+                details['stderr'] = text_content(stderr)
+
+        return super()._addOutcome(outcome, test, error=error,
+            details=details, error_permitted=error_permitted)
+
+
+def fork_for_tests(concurrency_num=CPU_COUNT, buffer=False):
     """Implementation of `make_tests` used to construct `ConcurrentTestSuite`.
 
     :param concurrency_num: number of processes to use.
     """
+    if buffer:
+        test_protocol_client_class = BufferingTestProtocolClient
+    else:
+        test_protocol_client_class = TestProtocolClient
+
     def do_fork(suite):
         """Take suite and start up multiple runners by forking (Unix only).
 
@@ -76,7 +147,7 @@ def do_fork(suite):
                 # child actually gets keystrokes for pdb etc).
                 sys.stdin.close()
                 subunit_result = AutoTimingTestResultDecorator(
-                    TestProtocolClient(stream)
+                    test_protocol_client_class(stream)
                 )
                 process_suite.run(subunit_result)
             except:
@@ -93,7 +164,13 @@ def do_fork(suite):
         else:
             os.close(c2pwrite)
             stream = os.fdopen(c2pread, 'rb')
-            test = ProtocolTestCase(stream)
+            # If we don't pass the second argument here, it defaults
+            # to sys.stdout.buffer down the line. But if we don't
+            # pass it *now*, it may be resolved after sys.stdout is
+            # replaced with a StringIO (to capture tests' outputs)
+            # which doesn't have a buffer attribute and can end up
+            # occasionally causing a 'broken-runner' error.
+            test = ProtocolTestCase(stream, sys.stdout.buffer)
             result.append(test)
     return result
 return do_fork
diff --git a/tools/patman/test_util.py b/tools/patman/test_util.py
index 130d91409140..c27e0b39e5fa 100644
--- a/tools/patman/test_util.py
+++ b/tools/patman/test_util.py
@@ -15,6 +15,7 @@
 
 from io import StringIO
 
+buffer_outputs = True
 use_concurrent = True
 try:
     from concurrencytest.concurrencytest import ConcurrentTestSuite
@@ -120,6 +121,7 @@ class FullTextTestResult(unittest.TextTestResult):
             0: Print nothing
             1: Print a dot per test
             2: Print test names
+            3: Print test names, and buffered outputs for failing tests
     """
     def __init__(self, stream, descriptions, verbosity):
         self.verbosity = verbosity
@@ -139,12 +141,39 @@ def printErrors(self):
         self.printErrorList('XFAIL', self.expectedFailures)
         self.printErrorList('XPASS', unexpected_successes)
 
+    def addError(self, test, err):
+        """Called when an error has occurred."""
+        super().addError(test, err)
+        self._mirrorOutput &= self.verbosity >= 3
+
+    def addFailure(self, test, err):
+        """Called when a test has failed."""
+        super().addFailure(test, err)
+        self._mirrorOutput &= self.verbosity >= 3
+
+    def addSubTest(self, test, subtest, err):
+        """Called at the end of a subtest."""
+        super().addSubTest(test, subtest, err)
+        self._mirrorOutput &= self.verbosity >= 3
+
+    def addSuccess(self, test):
+        """Called when a test has completed successfully"""
+        super().addSuccess(test)
+        # Don't print stdout/stderr for successful tests
+        self._mirrorOutput = False
+
     def addSkip(self, test, reason):
         """Called when a test is skipped."""
         # Add empty line to keep spacing consistent with other results
         if not reason.endswith('\n'):
             reason += '\n'
         super().addSkip(test, reason)
+        self._mirrorOutput &= self.verbosity >= 3
+
+    def addExpectedFailure(self, test, err):
+        """Called when an expected failure/error occurred."""
+        super().addExpectedFailure(test, err)
+        self._mirrorOutput &= self.verbosity >= 3
 
 
 def run_test_suites(toolname, debug, verbosity, test_preserve_dirs, processes,
@@ -180,12 +209,14 @@ def run_test_suites(toolname, debug, verbosity, test_preserve_dirs, processes,
     runner = unittest.TextTestRunner(
         stream=sys.stdout,
         verbosity=(1 if verbosity is None else verbosity),
+        buffer=buffer_outputs,
         resultclass=FullTextTestResult,
     )
 
     if use_concurrent and processes != 1:
         suite = ConcurrentTestSuite(suite,
-                fork_for_tests(processes or multiprocessing.cpu_count()))
+                fork_for_tests(processes or multiprocessing.cpu_count(),
+                    buffer=buffer_outputs))
 
     for module in class_and_module_list:
         if isinstance(module, str) and (not test_name or test_name == module):

On Sat, 2 Apr 2022 at 11:06, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
While running tests for a python tool, the tests' outputs get printed in whatever order they happen to run, without any indication as to which output belongs to which test. Unittest supports capturing these outputs and printing them as part of the test summaries, but when a failure or error occurs it switches back to printing as the tests run. Testtools and subunit tests can do the same as their parts inherit from unittest, but they don't outright expose this functionality.
On the unittest side, enable output buffering for the custom test result class. Try to avoid ugly outputs by not printing stdout/stderr before the test summary for low verbosity levels and for successful tests.
On the subunit side, implement a custom TestProtocolClient that enables the same underlying functionality and injects the captured streams as additional test details. This causes them to be merged into their test's error traceback message, which is later rebuilt into an exception and passed to our unittest report class.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
The way I had to do this for concurrencytest feels hacky, so I am ambivalent towards this patch. I'm actually fine with re-running a failing binman test alone with --debug -v6 to get better, isolated output from it.
Yes that's what I tend to do, but I think we should try this patch. We can always drop it in a future release if it causes problems.
 tools/concurrencytest/concurrencytest.py | 83 +++++++++++++++++++++++-
 tools/patman/test_util.py | 33 +++++++++-
 2 files changed, 112 insertions(+), 4 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

On Sat, 2 Apr 2022 at 11:06, Alper Nebi Yasak alpernebiyasak@gmail.com wrote:
While running tests for a python tool, the tests' outputs get printed in whatever order they happen to run, without any indication as to which output belongs to which test. Unittest supports capturing these outputs and printing them as part of the test summaries, but when a failure or error occurs it switches back to printing as the tests run. Testtools and subunit tests can do the same as their parts inherit from unittest, but they don't outright expose this functionality.
On the unittest side, enable output buffering for the custom test result class. Try to avoid ugly outputs by not printing stdout/stderr before the test summary for low verbosity levels and for successful tests.
On the subunit side, implement a custom TestProtocolClient that enables the same underlying functionality and injects the captured streams as additional test details. This causes them to be merged into their test's error traceback message, which is later rebuilt into an exception and passed to our unittest report class.
Signed-off-by: Alper Nebi Yasak alpernebiyasak@gmail.com
The way I had to do this for concurrencytest feels hacky, so I am ambivalent towards this patch. I'm actually fine with re-running a failing binman test alone with --debug -v6 to get better, isolated output from it.
Yes that's what I tend to do, but I think we should try this patch. We can always drop it in a future release if it causes problems.
 tools/concurrencytest/concurrencytest.py | 83 +++++++++++++++++++++++-
 tools/patman/test_util.py | 33 +++++++++-
 2 files changed, 112 insertions(+), 4 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org
Applied to u-boot-dm, thanks!