
On Tue, Jan 24, 2023 at 11:55:32AM -0700, Simon Glass wrote:
Hi Tom,
On Tue, 24 Jan 2023 at 11:28, Tom Rini trini@konsulko.com wrote:
On Tue, Jan 24, 2023 at 11:15:15AM -0700, Simon Glass wrote:
Hi Tom,
On Tue, 24 Jan 2023 at 08:49, Tom Rini trini@konsulko.com wrote:
On Tue, Jan 24, 2023 at 03:56:00AM -0700, Simon Glass wrote:
Hi Tom,
On Mon, 23 Jan 2023 at 18:39, Tom Rini trini@konsulko.com wrote:
On Tue, Jan 17, 2023 at 10:47:22AM -0700, Simon Glass wrote:
> At present this test sets up a partition table on mmc1. But this is used
> by the bootstd tests, so it is not possible to run those after this test
> has run, without restarting the Python test harness.
>
> This is inconvenient when running tests repeatedly with 'ut dm'. Move the
> test to use mmc2, which is not used by anything.
>
> Signed-off-by: Simon Glass sjg@chromium.org
When I run tests like this:

TEST="not sleep and not event_dump and not bootmgr and not extension"
./tools/buildman/buildman -o /tmp/sandbox -P --board sandbox
./test/py/test.py --bd $ARGS --build-dir /tmp/sandbox -k "$TEST" -ra

I get:

========================================== FAILURES ===========================================
_________________________________ test_ut[ut_dm_dm_test_part] _________________________________
test/py/tests/test_ut.py:341: in test_ut
    assert output.endswith('Failures: 0')
E   assert False
E    +  where False = <built-in method endswith of str object at 0x2776470>('Failures: 0')
E    +    where <built-in method endswith of str object at 0x2776470> = 'Test: dm_test_part: part.c\r\r\n** No device specified **\r\r\nCouldn't find partition mmc <NULL>\r\r\n** No device ...: 0 == do_test(uts, 2, "1:2", 0): Expected 0x0 (0), got 0x1 (1)\r\r\nTest dm_test_part failed 4 times\r\r\nFailures: 4'.endswith
------------------------------------ Captured stdout call -------------------------------------
=> ut dm dm_test_part
Test: dm_test_part: part.c
** No device specified **
Couldn't find partition mmc <NULL>
** No device specified **
Couldn't find partition mmc
** No partition table - mmc 0 **
Couldn't find partition mmc 0
Could not find "test1" partition
** Bad device specification mmc #test1 **
** Bad device specification mmc #test1 **
Couldn't find partition mmc #test1
** Bad partition specification mmc 1:0 **
Couldn't find partition mmc 1:0
** Invalid partition 2 **
Couldn't find partition mmc 1:2
test/dm/part.c:20, do_test(): expected == part_get_info_by_dev_and_name_or_num("mmc", part_str, &mmc_dev_desc, &part_info, whole): Expected 0x2 (2), got 0xfffffffe (-2)
test/dm/part.c:82, dm_test_part(): 0 == do_test(uts, 2, "1:2", 0): Expected 0x0 (0), got 0x1 (1)
Test: dm_test_part: part.c (flat tree)
** No device specified **
Couldn't find partition mmc <NULL>
** No device specified **
Couldn't find partition mmc
** No partition table - mmc 0 **
Couldn't find partition mmc 0
Could not find "test1" partition
** Bad device specification mmc #test1 **
** Bad device specification mmc #test1 **
Couldn't find partition mmc #test1
** Bad partition specification mmc 1:0 **
Couldn't find partition mmc 1:0
** Invalid partition 2 **
Couldn't find partition mmc 1:2
test/dm/part.c:20, do_test(): expected == part_get_info_by_dev_and_name_or_num("mmc", part_str, &mmc_dev_desc, &part_info, whole): Expected 0x2 (2), got 0xfffffffe (-2)
test/dm/part.c:82, dm_test_part(): 0 == do_test(uts, 2, "1:2", 0): Expected 0x0 (0), got 0x1 (1)
Test dm_test_part failed 4 times
Failures: 4
=>
Oh dear. I believe this is an ordering problem. These two patches need to be applied together because the first one changes mmc1.img and the second one relies on that change:
  part: Add a function to find the first bootable partition
  dm: part: Update test to use mmc2
I have them quite far apart in the series.
I can merge them into one commit, perhaps?
Further tests down the series also fail / introduce failures, but this is the first one, and is why I thought v2 had more problems.
For some reason I'm not seeing this, but it could be another ordering thing. I don't see failures in CI, but it only tests the whole series.
Let me know what you'd like me to do. I don't currently have a way to run tests on every commit, so my testing there is a bit random. I normally just build-test the series and then check with CI.
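As an aside on running tests on every commit: one way to do it is a small driver script along these lines (a hypothetical sketch, not anything in the U-Boot tree; `git rebase -i --exec '<cmd>'` achieves the same thing with no custom code):

```python
import subprocess

def commits_in_range(range_spec):
    """Return the commit hashes in range_spec, oldest first, or [] on error."""
    try:
        out = subprocess.run(['git', 'rev-list', '--reverse', range_spec],
                             capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        # Not a git repo, bad range, or git missing
        return []
    return out.split()

def run_on_each_commit(range_spec, cmd):
    """Check out each commit in turn and run cmd, stopping at the first failure.

    Destructive: detaches HEAD. A sketch only -- in practice
    'git rebase -i --exec cmd' does this without hand-rolled checkouts.
    """
    for commit in commits_in_range(range_spec):
        subprocess.run(['git', 'checkout', '--quiet', commit], check=True)
        if subprocess.run(cmd, shell=True).returncode != 0:
            print('First failing commit:', commit)
            return commit
    return None
```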
Well, your ordering comment is interesting. I bisect'd down to this patch being a problem because with the full series applied I see:

========================================== FAILURES ===========================================
___________________________________ test_ut_dm_init_bootstd ___________________________________
test/py/tests/test_ut.py:320: in test_ut_dm_init_bootstd
    setup_bootflow_image(u_boot_console)
test/py/tests/test_ut.py:226: in setup_bootflow_image
    fname, mnt = setup_image(cons, mmc_dev, 0xc, second_part=True)
test/py/tests/test_ut.py:45: in setup_image
    u_boot_utils.run_and_log(cons, 'sudo sfdisk %s' % fname,
test/py/u_boot_utils.py:180: in run_and_log
    output = runner.run(cmd, ignore_errors=ignore_errors, stdin=stdin)
test/py/multiplexed_log.py:182: in run
    raise exception
E   ValueError: Exit code: 1
------------------------------------ Captured stdout call -------------------------------------
+qemu-img create /home/trini/work/u-boot/u-boot/mmc1.img 20M
Formatting '/home/trini/work/u-boot/u-boot/mmc1.img', fmt=raw size=20971520
+sudo sfdisk /home/trini/work/u-boot/u-boot/mmc1.img
Checking that no-one is using this disk right now ...
OK
Disk /home/trini/work/u-boot/u-boot/mmc1.img: 20 MiB, 20971520 bytes, 40960 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Created a new DOS disklabel with disk identifier 0xab7e7834.
/home/trini/work/u-boot/u-boot/mmc1.img1: Created a new partition 1 of type 'W95 FAT32 (LBA)' and of size 18 MiB.
/home/trini/work/u-boot/u-boot/mmc1.img2: All space for primary partitions is in use.
I see this on my Ubuntu 20.04 machine but it doesn't happen on 22.04. Is it a bug in sfdisk? What version are you using?
Works: 2.37.2-4ubuntu3
Fails: 2.34-0.1ubuntu9.3
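For anyone hitting this later, a quick way to check which sfdisk is installed before running the sandbox tests (a minimal sketch; the parsing assumes the usual "sfdisk from util-linux X.Y.Z" banner and returns None if sfdisk is missing):

```python
import re
import shutil
import subprocess

def sfdisk_version():
    """Return the installed sfdisk version as a tuple of ints, or None.

    None means sfdisk is not installed or the banner was unrecognised.
    """
    if not shutil.which('sfdisk'):
        return None
    out = subprocess.run(['sfdisk', '--version'],
                         capture_output=True, text=True).stdout
    # Typical banner: "sfdisk from util-linux 2.37.2"
    m = re.search(r'(\d+)\.(\d+)(?:\.(\d+))?', out)
    if not m:
        return None
    return tuple(int(g or 0) for g in m.groups())

if __name__ == '__main__':
    ver = sfdisk_version()
    # Per this thread: 2.34 fails, 2.37.2 works; the exact fix version
    # is not confirmed here, so treat anything below 2.37 as suspect.
    print('sfdisk version:', ver)
```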
Do you see this in CI at all?
Ah, so that's the fun part of this then. Yes, CI is fine since it's on 22.04, but my workstation is on 20.04, and so fails. A quick try at running the sandbox tests in Docker on my workstation has the series passing. So the series is fine, but I need to adjust my local flow a little bit now.
OK ta.
Yes, I actually only found that out when I tried it on another machine this morning. It was a surprise, but a bit of fiddling made me suspect it is a bug, e.g. that it doesn't allow partitions as small as 2MB?
One of the first hits I see on the error message is https://github.com/util-linux/util-linux/issues/851, which leads me to think yes, there's an sfdisk bug, and it's been fixed. So the reasonable path forward here, I think, is to just note that CI uses Ubuntu 22.04 and that any failures caused by host tool problems are up to users to deal with. Since I let my workstation lag behind LTS upgrades for just a little bit, I'll just do the upgrade I was going to do in April, to 22.04.x, now'ish.
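For the record, the failure can be probed without the U-Boot harness at all. The sketch below approximates what setup_image does: a sparse 20 MiB image, an 18 MiB FAT partition, then a second partition in the ~1 MiB that remains. The exact sfdisk script U-Boot feeds in may differ, so this is an approximation, not the real test:

```python
import os
import shutil
import subprocess
import tempfile

def try_small_second_partition():
    """Try to reproduce the sfdisk failure discussed in this thread.

    Returns True if sfdisk accepts the layout, False if it rejects it
    (as the buggy 2.34 does with "All space for primary partitions is
    in use"), or None if sfdisk is not installed. Hypothetical
    reproduction; U-Boot's actual sfdisk input may differ.
    """
    if not shutil.which('sfdisk'):
        return None
    with tempfile.NamedTemporaryFile(suffix='.img', delete=False) as f:
        f.truncate(20 * 1024 * 1024)   # sparse 20 MiB image, like mmc1.img
        fname = f.name
    try:
        # Partition 1: 18 MiB W95 FAT32 (LBA); partition 2: the remainder
        script = 'type=c, size=18M\ntype=c\n'
        result = subprocess.run(['sfdisk', fname], input=script,
                                capture_output=True, text=True)
        return result.returncode == 0
    finally:
        os.unlink(fname)
```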