[U-Boot] Sharing a hardware lab

Hi Tom,
I have been meaning to have a crack at setting up a little hardware lab for a while.
I made some progress recently and hooked up a rpi_3 with sdwire for USB/SD, ykush for power and a little computer to control it. It builds U-Boot, sticks it on the SD card and runs pytest.
I pushed a tree here and hopefully you can see the 'hwlab' thing at the end:
https://gitlab.denx.de/u-boot/custodians/u-boot-dm/pipelines/148
So far it is just running the 'help' test. It seems to hang with serial console problems if I try to do more. It is not 100% reliable yet. I based it on Stephen's test hooks:
https://github.com/sglass68/uboot-test-hooks
Is it possible to share this so that others can use the lab when they push trees? Is it as simple as adding to the .gitlab-ci.yml file as I have done here?
https://gitlab.denx.de/u-boot/custodians/u-boot-dm/blob/gitlab-working/.gitl...
I also got tbot going in a similar way, to test booting into Linux. Should we look at integrating that at the same time? It should be fairly easy to do.
I have quite a lot of random boards and in principle it should not be too hard to hook up some more of them, with sufficient SDwires, hubs and patience.
Regards, Simon

Hello Simon,
On 30.11.2019 at 05:23, Simon Glass wrote:
Hi Tom,
I have been meaning to have a crack at setting up a little hardware lab for a while.
I made some progress recently and hooked up a rpi_3 with sdwire for USB/SD, ykush for power and a little computer to control it. It builds U-Boot, sticks it on the SD card and runs pytest.
I pushed a tree here and hopefully you can see the 'hwlab' thing at the end:
https://gitlab.denx.de/u-boot/custodians/u-boot-dm/pipelines/148
I get a "404 Page not Found" error here ...
So far it is just running the 'help' test. It seems to hang with serial console problems if I try to do more. It is not 100% reliable yet. I based it on Stephen's test hooks:
https://github.com/sglass68/uboot-test-hooks
Is it possible to share this so that others can use the lab when they push trees? Is it as simple as adding to the .gitlab-ci.yml file as I have done here?
https://gitlab.denx.de/u-boot/custodians/u-boot-dm/blob/gitlab-working/.gitl...
Hmm... yes, it should be possible, apart from the fact that we have more than one U-Boot custodian git repo, so more than one GitLab runner may want to access this lab ... and I think there is no way to handle this in GitLab ... or is there?
The only way I see is to "lock" the board in the lab and wait until you get the lock ... which should not be too hard ...
I also got tbot going in a similar way, to test booting into Linux.
Great to hear!
Hmm... maybe off topic ... I am not sure what exactly your hardware setup is, especially your "little computer"?
I use, for example, a Raspberry Pi 3 as tbot's "lab host", which controls:
- DUT power on/off through Pi GPIO and a simple relay [1]
- DUT boot mode (if possible), also through Pi GPIO and a relay [1]
- the console to the DUT (USB-to-serial adapter)
- the Pi also runs DHCP, NFS and TFTP servers
- optionally, tbot itself can of course run on the lab host
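Power switching through a Pi GPIO like this can be as small as a few sysfs writes. Here is a minimal sketch; the pin number, the active-high relay wiring, and the configurable `base` path are illustration-only assumptions (newer kernels prefer libgpiod over the legacy sysfs GPIO interface used here):

```python
import os

class SysfsGpio:
    """Drive a relay via the legacy sysfs GPIO interface.

    `base` defaults to /sys/class/gpio; it is a parameter here only so
    the class can be exercised without real hardware.
    """

    def __init__(self, pin: int, base: str = "/sys/class/gpio"):
        self.pin = pin
        self.gpio_dir = os.path.join(base, f"gpio{pin}")
        if not os.path.isdir(self.gpio_dir):
            # Ask the kernel to create the gpioN directory.
            with open(os.path.join(base, "export"), "w") as f:
                f.write(str(pin))
        with open(os.path.join(self.gpio_dir, "direction"), "w") as f:
            f.write("out")

    def set(self, value: int) -> None:
        with open(os.path.join(self.gpio_dir, "value"), "w") as f:
            f.write(str(value))

    # Assumes the relay is wired active-high: 1 = DUT powered.
    def power_on(self) -> None:
        self.set(1)

    def power_off(self) -> None:
        self.set(0)
```

A lab-host script would then just instantiate `SysfsGpio(17)` (or whichever pin the relay is on) and call `power_on()` / `power_off()` around each test run.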
Now you can use tbot to control power and connect to the console.
You can also use this setup for daily work, as tbot has "interactive" testcases:
http://tbot.tools/modules/machine_shell.html#interactive-access
so you simply call:
tbot (some args) interactive_uboot
and can then type your U-Boot commands in the U-Boot command shell.
If you want to integrate more boards, you can easily add them to your lab host. If you have no GPIOs left on your Pi, simply clone the SD card from the Raspberry Pi, put it into a new one, and you have a second lab host setup ... I put this setup into plastic boxes, so I can take them away, for example to customers ...
If your board has the ability to switch boot mode, you do not need a debugger to recover it: just switch the boot mode and reinstall a working U-Boot ... of course in a Python tbot testcase. (And you can test the different boot modes with it.)
As a last resort, you can recover a board with a debugger: if your debugger has a command-line interface (for example, the BDI2000 has a telnet interface), it is possible to define a tbot machine for it and write tbot testcases for your debugger...
I use this, for example, for the socrates mpc85xx board in U-Boot mainline.
Should we look at integrating that at the same time? It should be fairly easy to do.
I added Harald to cc, who did the tbot rewrite, as we discussed/plan to integrate test.py into tbot; maybe he can say more here about what is currently done.
I have integrated tbot into GitLab/Jenkins and also into Buildbot for some customers, so this step should be easy, apart from the fact that we have to discuss how to handle the case where multiple GitLab runners want to start a test on one board; see above.
I have quite a lot of random boards and in principle it should not be too hard to hook up some more of them, with sufficient SDwires, hubs and patience.
Yes, but don't underestimate the effort of testing. I learned when setting up automated tests that there is always something breaking ... look at my old setup [2]; I had no time to reactivate it again, and there is also a test.py integration for some boards there [3].
bye, Heiko

[1] gpio relay
https://www.amazon.de/dp/B01EQAJP2I/?coliid=I27THS7EQ7A7LG&colid=1SIRH51...
[2] http://xeidos.ddns.net/tests/test_db_auslesen.php
[3] http://xeidos.ddns.net/tbot/id_1035/test-log.html

On Fri, Nov 29, 2019 at 09:23:43PM -0700, Simon Glass wrote:
Is it possible to share this so that others can use the lab when they push trees? Is it as simple as adding to the .gitlab-ci.yml file as I have done here?
https://gitlab.denx.de/u-boot/custodians/u-boot-dm/blob/gitlab-working/.gitl...
I also got tbot going in a similar way, to test booting into Linux. Should we look at integrating that at the same time? It should be fairly easy to do.
I have quite a lot of random boards and in principle it should not be too hard to hook up some more of them, with sufficient SDwires, hubs and patience.
There are two parts to this. The first, I think, is that we need some good examples of how to have one private CI job poll/monitor other public jobs and run. I believe some labs do this today. This would be helpful, as at least personally I'm kicking off my hardware tests manually, because as best I can tell there isn't a way to include an optional stage/portion of a CI job.
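Such a poller can be sketched in a few lines. This is a hedged sketch, not from the thread: it assumes the standard GitLab REST pipelines endpoint, and the instance URL, project id, and "test only pipelines we haven't seen" policy are placeholders:

```python
import json
import urllib.request

# Example instance; the project id would come from the lab's config.
GITLAB_API = "https://gitlab.denx.de/api/v4"

def fetch_pipelines(project_id: int, ref: str = "master") -> list:
    """Fetch successful pipelines for one ref via the GitLab REST API."""
    url = f"{GITLAB_API}/projects/{project_id}/pipelines?status=success&ref={ref}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def new_pipelines(pipelines: list, seen: set) -> list:
    """Return pipelines whose id we have not hardware-tested yet, oldest first."""
    fresh = [p for p in pipelines if p["id"] not in seen]
    return sorted(fresh, key=lambda p: p["id"])
```

A cron job on the lab host could call `fetch_pipelines()`, pass the result through `new_pipelines()` with a persisted `seen` set, and kick off the board tests for each returned `sha`.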
The second part is that, long term, we most likely need to develop some LAVA experience, as that will give us easier access to various kernelci labs and in turn let us be included in them, once the overall SoC and lab support allows testing firmware.

Hi Tom,
On Wed, 4 Dec 2019 at 15:30, Tom Rini trini@konsulko.com wrote:
Bumping this thread as I have now hooked up about 8 mostly ARM and x86 boards and have tbot and pytest automation mostly working for them.
There are two parts to this. The first, I think, is that we need some good examples of how to have one private CI job poll/monitor other public jobs and run. I believe some labs do this today. This would be helpful, as at least personally I'm kicking off my hardware tests manually, because as best I can tell there isn't a way to include an optional stage/portion of a CI job.
So the model here is that people with a lab 'watch' various repos? I think that would be useful. Stephen Warren does this I think, but I'm not sure how the builds are kicked off.
But what about a full public lab? E.g. is it possible to add some of the boards I have here to every build that people do?
The second part is that, long term, we most likely need to develop some LAVA experience, as that will give us easier access to various kernelci labs and in turn let us be included in them, once the overall SoC and lab support allows testing firmware.
I wonder if these are set up for replacing firmware? It specifically mentions boards getting bricked, so I suspect not.
Regards, Simon

On 2/5/20 7:10 AM, Simon Glass wrote:
So the model here is that people with a lab 'watch' various repos? I think that would be useful. Stephen Warren does this I think, but I'm not sure how the builds are kicked off.
Yes, my Jenkins instance directly polls the relevant git repos/branches for changes, then does a full build and test cycle, so it is independent of anything else.
Well actually, I mirror the git repos locally and Jenkins polls the mirrors, so that when I run n builds, the upstream git servers only get hit once for the mirroring operation, and not once per build, but that's an implementation detail.
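The mirror-and-poll approach can be sketched as below. This is illustrative only, not Stephen's actual setup; the helper names and the idea of comparing `git ls-remote` hashes between polls are assumptions:

```python
import subprocess
from typing import Optional

def remote_head(repo: str, ref: str = "HEAD") -> str:
    """Return the commit hash `ref` points to in `repo` (URL or local path)."""
    out = subprocess.run(
        ["git", "ls-remote", repo, ref],
        check=True, capture_output=True, text=True,
    ).stdout
    return out.split()[0] if out.strip() else ""

def update_mirror(mirror_path: str) -> None:
    """Fetch all refs into a local mirror created with `git clone --mirror`.

    Builds then clone from `mirror_path`, so the upstream server is hit
    only once per poll, not once per build.
    """
    subprocess.run(["git", "-C", mirror_path, "remote", "update"], check=True)

def poll(repo: str, last: str) -> Optional[str]:
    """Return the new head hash if `repo` moved since `last`, else None."""
    head = remote_head(repo)
    return head if head and head != last else None
```

A scheduler would call `poll()` against the mirror on a timer and trigger a build-and-test cycle whenever it returns a new hash.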

Hi Stephen,
On Wed, 5 Feb 2020 at 11:21, Stephen Warren swarren@wwwdotorg.org wrote:
On 2/5/20 7:10 AM, Simon Glass wrote:
So the model here is that people with a lab 'watch' various repos? I think that would be useful. Stephen Warren does this I think, but I'm not sure how the builds are kicked off.
Yes, my Jenkins instance directly polls the relevant git repos/branches for changes, then does a full build and test cycle, so it is independent of anything else.
Well actually, I mirror the git repos locally and Jenkins polls the mirrors, so that when I run n builds, the upstream git servers only get hit once for the mirroring operation, and not once per build, but that's an implementation detail.
OK thanks, that explains it.
Tom, I added a kea-sandbox runner - is it possible to try that as a public group runner?
Regards, Simon

On Wed, Feb 05, 2020 at 11:21:41AM -0700, Stephen Warren wrote:
Yes, my Jenkins instance directly polls the relevant git repos/branches for changes, then does a full build and test cycle, so it is independent of anything else.
Well actually, I mirror the git repos locally and Jenkins polls the mirrors, so that when I run n builds, the upstream git servers only get hit once for the mirroring operation, and not once per build, but that's an implementation detail.
I'm of the opinion that labs that poll are probably a better idea than trying to make something public and visible to the world. Thanks!

Hi Tom,
On Fri, 7 Feb 2020 at 15:22, Tom Rini trini@konsulko.com wrote:
I'm of the opinion that labs that poll are probably a better idea than trying to make something public and visible to the world. Thanks!
It's certainly a start.
But I think there is real value in adding this to gitlab because the committers can actually see and diagnose the failure themselves and it becomes part of the normal test flow, rather than something that is noticed later.
Regards, Simon

Hello Simon,
On 05.02.2020 at 15:10, Simon Glass wrote:
Bumping this thread as I have now hooked up about 8 mostly ARM and x86 boards and have tbot and pytest automation mostly working for them.
Great news!
I added Harald Seiler to cc, as he did the tbot port to Python 3.6.
Do you have your tbot integration online somewhere?
I ask because integrating pytest into tbot is on our to-do list, and maybe we can share work?
But what about a full public lab? E.g. is it possible to add some of the boards I have here to every build that people do?
Here the hard game begins, I think: what happens if two builds are triggered in parallel and both want to test a board in the same lab at the same time?
If you trigger the test with tbot, it should be easy to add a locking mechanism to tbot's lab-specific function power_check() [1].
Maybe in this power_check() function tbot waits until it gets the board... The locking mechanism itself is lab-specific.
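Such a per-board lock could be a simple advisory file lock. A minimal sketch; the lock directory and the exact way this would hook into power_check() are assumptions, not part of tbot:

```python
import fcntl
import os

def acquire_board_lock(board: str, lock_dir: str = "/var/lock/hwlab",
                       blocking: bool = True):
    """Take an exclusive advisory lock for one board; returns the lock file.

    The lock is released when the returned file object is closed or the
    process exits, so a crashed test run cannot leave the board locked
    forever.  With blocking=True the caller waits until the board is free;
    with blocking=False a busy board raises BlockingIOError immediately.
    """
    os.makedirs(lock_dir, exist_ok=True)
    f = open(os.path.join(lock_dir, board + ".lock"), "w")
    flags = fcntl.LOCK_EX | (0 if blocking else fcntl.LOCK_NB)
    try:
        fcntl.flock(f, flags)
    except BlockingIOError:
        f.close()
        raise
    return f
```

A lab-specific power_check() override could call `acquire_board_lock(board_name)` before powering the board on and hold the returned file for the duration of the test.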
The second part is that long term, we need to most likely develop some LAVA experience as that will get us easier access to various kernelci labs and in turn be included in kernelci labs, when the overall SoC and lab support being able to test firmware.
I wonder if these are set up for replacing firmware? It specifically mentions boards getting bricked, so I suspect not.
Unfortunately I have not yet had time to look into LAVA or kernelci.
bye, Heiko
[1] https://github.com/Rahix/tbot/blob/master/tbot/machine/board/board.py#L58

Hi Heiko,
On Wed, 12 Feb 2020 at 01:50, Heiko Schocher hs@denx.de wrote:
Do you have your tbot integration online somewhere?
https://github.com/sglass68/ubtest
But it is tbot only. It would be good if there were a way to upstream this stuff.
For pytest I am sending upstream to:
https://github.com/swarren/uboot-test-hooks
BTW I have not yet got tbot to build U-Boot and write it onto the board. Do you have examples for that?
Here the hard game begins, I think: what happens if two builds are triggered in parallel and both want to test a board in the same lab at the same time?
The gitlab-runner thing seems to handle that.
Unfortunately I have not yet had time to look into LAVA or kernelci.
I haven't much, but I do wonder if we could add firmware testing to it.
Regards, Simon

Hello Simon,
On 12.02.2020 at 18:14, Simon Glass wrote:
BTW I have not yet got tbot to build U-Boot and write it onto the board. Do you have examples for that?
We have them on our GitLab server, but only privately, as far as I know.
@Harald: Do you have some good examples somewhere online?
Maybe the docs help here too:
http://tbot.tools/modules/tc.html#u-boot
and see [1]
The gitlab-runner thing seems to handle that.
Ah, so all GitLab instances need to use the same GitLab runner, OK.
bye, Heiko
[1] tbot U-Boot build example for the aristainetos board
Add this to your board config:
class aristainetosUBootBuilder(lab.UBootBuilder):
    name = "aristainetos"
    defconfig = "aristainetos2_defconfig"
    toolchain = "linaro-gnueabi"
    remote = "git@gitlab.denx.de:abb/aristainetos-uboot.git"

    def do_checkout(self, target: linux.Path, clean: bool) -> git.GitRepository:
        return git.GitRepository(
            target=target, url=self.remote, clean=clean, rev="aristainetos-denx"
        )

class aristainetosUBoot(lab.UBootMachine):
    name = "ari-ub"
    prompt = "=> "
    autoboot_prompt = None
    build = aristainetosUBootBuilder()
in your lab config (just an example) you need:
class UBootBuilder(uboot.UBootBuilder):
    if tbot.selectable.LabHost.name in ["pollux", "hercules"]:
        remote = "/home/git/u-boot.git"

    def do_configure(self, bh: BH, repo: git.GitRepository[BH]) -> None:
        super().do_configure(bh, repo)

        tbot.log.message("Patching U-Boot config ...")

        # Add local-version tbot
        kconfig.set_string_value(repo / ".config", "CONFIG_LOCALVERSION", "-tbot")

        # Tab completion
        kconfig.enable(repo / ".config", "CONFIG_AUTO_COMPLETE")
        [...]

class PolluxLab(connector.ParamikoConnector, linux.Bash, linux.Lab, linux.Builder):
    [...]
    def toolchains(self) -> typing.Dict[str, linux.build.Toolchain]:
        return {
            "bootlin-armv5-eabi": linux.build.EnvSetBootlinToolchain(
                arch="armv5-eabi",
                libc="glibc",
                typ="stable",
                date="2018.11-1",
            ),
            "linaro-gnueabi": linux.build.EnvSetLinaroToolchain(
                host_arch="i686",
                arch="arm-linux-gnueabi",
                date="2018.05",
                gcc_vers="7.3",
                gcc_subvers="1",
            ),
            "generic-armv7a": linux.build.EnvScriptToolchain(
                linux.Path(
                    self,
                    "/home/hs/toolchain/linaro/gcc-linaro-7.2.1-2017.11-i686_arm-linux-gnueabi",
                )
            ),
        }
To use a toolchain from bootlin or linaro, please add the attached patch to tbot. If you have not installed the toolchain, this class downloads it. (This patch is development state only!)
Now, if you want to build on the same machine, simply add:
    def build(self) -> linux.Builder:
        return self
If you want to build on other linux machines:
    def build(self) -> linux.Builder:
        if "pollux-build" in tbot.flags:
            return builders.PolluxSSH(self)
        elif "xpert-build" in tbot.flags:
            return builders.XpertSSH(self)
        elif "hercules-build" in tbot.flags:
            return builders.HerculesSSH(self)
        elif "threadripper-build" in tbot.flags:
            return builders.ThreadripperSSH(self)
and define these build machines in, for example, labs/builders.py; then you can select through tbot command-line flags which build machine you want to use ...
Maybe we should add such an example to our doc ...

Hi Heiko,
Thanks for the hints! I pushed your things here:
https://github.com/sglass68/tbot/tree/simon
Then I try: tbot -l kea.py -b pcduino3.py uboot_build
and get an error:
tbot starting ...
type <class 'module'>
├─Calling uboot_build ...
│ └─Fail. (0.000s)
├─Exception:
│ Traceback (most recent call last):
│   File "/home/sglass/.local/lib/python3.6/site-packages/tbot-0.8.0-py3.6.egg/tbot/main.py", line 318, in main
│     func(**params)
│   File "/home/sglass/.local/lib/python3.6/site-packages/tbot-0.8.0-py3.6.egg/tbot/decorators.py", line 103, in wrapped
│     result = tc(*args, **kwargs)
│   File "/home/sglass/.local/lib/python3.6/site-packages/tbot-0.8.0-py3.6.egg/tbot/tc/uboot/build.py", line 271, in _build
│     builder = UBootBuilder._get_selected_builder()
│   File "/home/sglass/.local/lib/python3.6/site-packages/tbot-0.8.0-py3.6.egg/tbot/tc/uboot/build.py", line 160, in _get_selected_builder
│     builder = getattr(tbot.selectable.UBootMachine, "build")
│ AttributeError: type object 'UBootMachine' has no attribute 'build'
I'm a bit lost in all the classes and functions. A working example or a patch would be great!
I've pushed all my scripts here:
https://github.com/sglass68/ubtest
The top commit is your patches.
Regards, Simon
On Wed, 12 Feb 2020 at 22:49, Heiko Schocher hs@denx.de wrote:
Hello Simon,
Am 12.02.2020 um 18:14 schrieb Simon Glass:
Hi Heiko,
On Wed, 12 Feb 2020 at 01:50, Heiko Schocher hs@denx.de wrote:
Hello Simon,
Am 05.02.2020 um 15:10 schrieb Simon Glass:
Hi Tom,
On Wed, 4 Dec 2019 at 15:30, Tom Rini trini@konsulko.com wrote:
On Fri, Nov 29, 2019 at 09:23:43PM -0700, Simon Glass wrote:
[...]
Bumping this thread as I have now hooked up about 8 mostly ARM and x86 boards and have tbot and pytest automation mostly working for them.
Great news!
added Harald Seiler to cc, as he did the tbot port to python3.6.
Do you have somewhere your tbot integration online?
https://github.com/sglass68/ubtest
But it is tbot only. It would be good if there were a way to upstream this stuff.
For pytest I am sending upstream to:
https://github.com/swarren/uboot-test-hooks
BTW I have not yet got tbot to build U-Boot and write it onto the board. Do you have examples for that?
We have them on our gitlab server, but only private, as far as I know.
@Harald: Do you have some good examples somewhere online?
Maybe the doc helps here too:
http://tbot.tools/modules/tc.html#u-boot
and see [1]
[...]
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany Phone: +49-8142-66989-52 Fax: +49-8142-66989-80 Email: hs@denx.de

Hello Simon,
On Sun, 2020-02-23 at 19:34 -0700, Simon Glass wrote:
Hi Heiko,
Thanks for the hints! I pushed your things here:
https://github.com/sglass68/tbot/tree/simon
Then I try: tbot -l kea.py -b pcduino3.py uboot_build
and get an error:
[...]
I'm a bit lost in all the classes and functions. A working example or a patch would be great!
I've pushed all my scripts here:
https://github.com/sglass68/ubtest
The top commit is your patches.
I think you mixed a few things up while adding Heiko's stuff. Instead of your last commit, try the attached patch. It is untested, of course, but should point you in the right direction; here is a short summary of what it adds:
1. Your `kea` lab needs to be marked as a build-host. This means it needs the `toolchains` property which returns what toolchains are available. I added a dummy armv7-a toolchain which might need a bit more adjustment to work for you.
2. I created a UBootBuilder for pcduino3. This builder just specifies what defconfig to build and what toolchain to use (which needs to be defined in the selected lab).
Heiko's builder config is a bit more elaborate and also does some patching after applying the defconfig. This is of course also possible if you want to.
3. I added a U-Boot config for your board which is needed because its `build` property specifies which U-Boot builder to use. This will make the `uboot_build` testcase work properly. Later you might want to adjust this U-Boot config to actually work so you can make use of it for flashing the new U-Boot binary.
Some more links to documentation:
- Build-host config: https://tbot.tools/modules/machine_linux.html#builder - UBootBuilder class: https://tbot.tools/modules/tc.html#tbot.tc.uboot.UBootBuilder
Hope this helps!

Hi Harald,
On Mon, 24 Feb 2020 at 06:27, Harald Seiler hws@denx.de wrote:
[...]
Yes it helps a lot, thank you. I now have it building U-Boot:
tbot -l kea.py -b pcduino3.py -p clean=False uboot_build interactive_uboot
I sent a tiny patch to tbot to fix an error I had, and I made a few minor changes to what you sent.
https://github.com/sglass68/ubtest/commit/7da7a3794d505e970e0e21a9b6ed3a7e5f...
So my next question is how to load U-Boot onto the board. I can see methods for poweron/poweroff but not for actually writing to the board. Is there a built-in structure for this? I cannot find it.
I am hoping that the Pcduino3 class can define a method like 'load_uboot()' to write U-Boot to the board?
Regards, Simon
[..]

Hi Simon,
On Sat, 2020-03-21 at 13:07 -0600, Simon Glass wrote:
Hi Harald,
On Mon, 24 Feb 2020 at 06:27, Harald Seiler hws@denx.de wrote:
[...]
Yes it helps a lot, thank you. I now have it building U-Boot:
tbot -l kea.py -b pcduino3.py -p clean=False uboot_build interactive_uboot
I sent a tiny patch to tbot to fix an error I had, and I made a few minor changes to what you sent.
https://github.com/sglass68/ubtest/commit/7da7a3794d505e970e0e21a9b6ed3a7e5f...
Huh, actually I messed up in the patch I sent you. Please apply the following diff:
diff --git a/kea.py b/kea.py
index a3ca968dc41e..f65d91b67f1d 100644
--- a/kea.py
+++ b/kea.py
@@ -39,7 +39,7 @@ class KeaLab(
         # toolchain is identified by a (unique) string.  For pcduino3 in this
         # example, a toolchain named `armv7-a` is defined.
         return {
-            "armv7-a": ArmV7Toolchain,
+            "armv7-a": ArmV7Toolchain(),
         }

     def build(self):
This will make everything work as intended and the patch you needed in tbot is also no longer necessary (Passing a class instead of an instance will not bind the method's `self` argument so you got an error from a weird place. Maybe I should add a check against this kind of mistake ...).
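The class-vs-instance mistake described here is easy to reproduce in plain Python; this toy example (Toolchain/enable are illustrative names, not the tbot API) shows why the error surfaces so far from the actual bug:

```python
class Toolchain:
    def enable(self, env: dict) -> None:
        env["CC"] = "arm-linux-gnueabi-gcc"

# Correct: store an *instance*, so `enable` is a bound method.
good = {"armv7-a": Toolchain()}
env = {}
good["armv7-a"].enable(env)
assert env["CC"] == "arm-linux-gnueabi-gcc"

# Buggy: store the *class*; `enable` is then unbound, so the dict passed
# as the only argument becomes `self` and Python complains that `env`
# is missing -- a confusing error far away from the real mistake.
bad = {"armv7-a": Toolchain}
failed = False
try:
    bad["armv7-a"].enable({})
except TypeError:
    failed = True
assert failed
```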
So my next question is how to load U-Boot onto the board. I can see methods for poweron/poweroff but not for actually writing to the board. Is there a built-in structure for this? I cannot find it.
I am hoping that the Pcduino3 class can define a method like 'load_uboot()' to write U-Boot to the board?
I added something similar to this in our DENX internal tbot configurations but did not yet publish it anywhere (maybe I should add it to tbot_contrib?). Just for you, here is what I did:
1. In the board configs, each U-Boot class defines a `flash()` method:
from tbot.tc import shell, git
class P2020RdbUBoot(board.Connector, board.UBootShell):
    name = "p2020rdb-uboot"
    prompt = "=> "
    build = P2020RdbUBootBuilder()

    def flash(self, repo: git.GitRepository) -> None:
        self.env("autoload", "no")
        self.exec0("dhcp")
        self.env("serverip", "192.168.1.1")

        tftpdir = self.host.fsroot / "tftpboot" / "p2020rdb" / "tbot2"
        if not tftpdir.is_dir():
            self.host.exec0("mkdir", "-p", tftpdir)

        name = "u-boot-with-spl.bin"
        tbot.log.message(f"Fetching {name} ...")
        shell.copy(repo / name, tftpdir / name)

        addr = 0x10000000
        self.exec0("tftp", hex(addr), "p2020rdb/tbot2/" + name)
        size = int(self.env("filesize"), 16)
        tbot.log.message(f"Download succeeded:\n    U-Boot: {size} bytes")

        tbot.log.message("Flashing ...")
        self.exec0("nand", "device", "0")
        self.exec0("nand", "erase.spread", "0", hex(size))
        self.exec0("nand", "write", hex(addr), "0", hex(size))
As you can see, it gets the repository (as a GitRepository [1]) where U-Boot was built, passed as an argument, and copies the relevant binaries to a tftp directory on the lab-host (using shell.copy() [2]). Then it downloads and flashes U-Boot like you would manually.
Alternatively, I could imagine the implementation connecting to a hardware debugger and uploading the binary to flash that way.
2. I defined a few testcases for flashing. Namely, `uboot_build_and_flash` (build U-Boot and then flash the new version) and `uboot_bare_flash` (don't rebuild, just flash the artifacts from the last build). These then use the board-specific U-Boot builder and flash method.
I also added a feature where you can specify `-fsafe` on the tbot command-line. This will, in case something goes wrong while flashing, drop you onto the interactive U-Boot command-line instead of powering off the board and reporting a failure.
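The `-fsafe` behaviour boils down to a small wrap-and-fallback pattern. A generic, tbot-free sketch of the idea (run_with_fallback is a made-up name, not from the actual testcases):

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def run_with_fallback(action: Callable[[], T],
                      on_failure: Callable[[], None],
                      safe: bool) -> T:
    """Run `action` (e.g. flashing U-Boot).

    If it raises and `safe` is set (the `-fsafe` case), invoke
    `on_failure` (e.g. drop to an interactive U-Boot shell) before
    re-raising, instead of just powering off and reporting a failure.
    """
    try:
        return action()
    except Exception:
        if safe:
            on_failure()
        raise
```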
I uploaded the testcases here:
https://gitlab.denx.de/snippets/10
[1]: http://tbot.tools/modules/tc.html#tbot.tc.git.GitRepository [2]: http://tbot.tools/modules/tc.html#tbot.tc.shell.copy

Dear Harald,
In message dbf4c70f7dc3992c0603318e47b262175fa66cc4.camel@denx.de you wrote:
I added something similar to this in our DENX internal tbot configurations but did not yet publish it anywhere (maybe I should add it to tbot_contrib?). Just
Yes, please do!
Best regards,
Wolfgang Denk

Hi Harald,
On Sun, 22 Mar 2020 at 03:56, Harald Seiler hws@denx.de wrote:
Hi Simon,
[...]
OK that fixes it, thanks.
Actually I can call 'buildman -A <board>' to get the toolchain prefix for a board, so perhaps I should figure out how to work that in somehow and just have a generic toolchain, perhaps.
So my next question is how to load U-Boot onto the board. I can see methods for poweron/poweroff but not for actually writing to the board. Is there a built-in structure for this? I cannot find it.
I am hoping that the Pcduino3 class can define a method like 'load_uboot()' to write U-Boot to the board?
I added something similar to this in our DENX internal tbot configurations but did not yet publish it anywhere (maybe I should add it to tbot_contrib?). Just for you, here is what I did:
To me this should be a core feature, not contrib.
I wonder if there is an upstream for tbot labs? I think it would be useful to have a directory containing people's labs like we do for pytest. Then we can end up with common functions for doing things.
In the board configs, each U-Boot class defines a `flash()` method:
[...]
Thanks very much. That explained it for me. I tried this out and ended up moving it to the board class instead of the U-Boot class, since I don't know what state the board is in. I just want to turn it off and program it.
I think a lot of my confusion comes from all the classes and inheritance, and things like selectable.py where objects are updated outside the module. Plus all the decorators, getattr/setattr...
I defined a few testcases for flashing. Namely, `uboot_build_and_flash` (build U-Boot and then flash the new version) and `uboot_bare_flash` (don't rebuild, just flash the artifacts from the last build). These then use the board-specific U-Boot builder and flash method.
I also added a feature where you can specify `-fsafe` on the tbot command-line. This will, in case something goes wrong while flashing, drop you onto the interactive U-Boot command-line instead of powering off the board and reporting a failure.
I uploaded the testcases here:
https://gitlab.denx.de/snippets/10
OK thanks. I am slowly getting the hang of it.
My current problem is how to execute Python code on the host (the machine that has the hardware attached). I end up having to call host.exec(), but if I want to open a file and modify it, I cannot do that easily remotely. Is it possible to run a tbot function on the lab machine?
Regards, Simon

Hello Simon,
On Sun, 2020-03-22 at 12:42 -0600, Simon Glass wrote:
Hi Harald,
On Sun, 22 Mar 2020 at 03:56, Harald Seiler hws@denx.de wrote:
Hi Simon,
On Sat, 2020-03-21 at 13:07 -0600, Simon Glass wrote:
[...]
OK that fixes it, thanks.
Actually I can call 'buildman -A <board>' to get the toolchain prefix for a board, so perhaps I should figure out how to work that in somehow and just have a generic toolchain, perhaps.
That's not too difficult, actually. I suggest creating a generic UBootBuilder class from which your individual board's builders inherit:
import abc
import contextlib

class BuildmanUBootBuilder(uboot.UBootBuilder):
    @property
    @abc.abstractmethod
    def buildman_board(self) -> str:
        raise Exception("abstract method")

    # Overload `do_toolchain` to customize toolchain behavior
    @contextlib.contextmanager
    def do_toolchain(self, bh):
        prefix = bh.exec0("buildman", "-A", self.buildman_board)

        with bh.subshell():
            # Setup toolchain here
            bh.env("ARCH", "arm")

            yield None

# and for each board

class MyBoardUBootBuilder(BuildmanUBootBuilder):
    buildman_board = "tegra"
I have never used buildman myself so I am not really sure about the details of what needs to be done here ... But if this would be a useful feature I could look into supporting it directly in the UBootBuilder class.
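As a sketch of how the buildman output might be wired in: a small standalone helper that turns the prefix printed by `buildman -A` into something exportable as CROSS_COMPILE. The helper name and the sample output string are assumptions for illustration, not buildman or tbot API:

```python
def cross_compile_from_buildman(output: str) -> str:
    # `buildman -A <board>` prints the toolchain prefix for the board,
    # e.g. "arm-linux-gnueabi-".  Strip surrounding whitespace so the
    # result can be exported directly as CROSS_COMPILE.
    prefix = output.strip()
    if not prefix:
        raise ValueError("buildman returned no toolchain prefix")
    return prefix

# Inside a do_toolchain() implementation one might then do (hypothetical):
#     prefix = cross_compile_from_buildman(bh.exec0("buildman", "-A", board))
#     bh.env("CROSS_COMPILE", prefix)

print(cross_compile_from_buildman("arm-linux-gnueabi-\n"))
```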
So my next question is how to load U-Boot onto the board. I can see methods for poweron/poweroff but not for actually writing to the board. Is there a built-in structure for this? I cannot find it.
I am hoping that the Pcduino3 class can define a method like 'load_uboot()' to write U-Boot to the board?
I added something similar to this in our DENX internal tbot configurations but did not yet publish it anywhere (maybe I should add it to tbot_contrib?). Just for you, here is what I did:
To me this should be a core feature, not contrib.
I guess the split of what belongs where is not yet clearly defined and I still need to work on separating the different components better. For me, core is just the code for interacting with machines; anything that looks like a testcase should be separate from that. Right now I've got tbot_contrib for this but I guess the naming might not be optimal ...
By the way, are you subscribed to tbot@lists.denx.de? I'll be posting updates regarding this (and other things) over there.
I wonder if there is an upstream for tbot labs? I think it would be useful to have a directory containing people's labs like we do for pytest. Then we can end up with common functions for doing things.
Maybe it makes sense to add a place in upstream tbot for these as well. Thanks for the idea!
In the board configs, each U-Boot class defines a `flash()` method:
    from tbot.tc import shell, git

    class P2020RdbUBoot(board.Connector, board.UBootShell):
        name = "p2020rdb-uboot"
        prompt = "=> "
        build = P2020RdbUBootBuilder()

        def flash(self, repo: git.GitRepository) -> None:
            self.env("autoload", "no")
            self.exec0("dhcp")
            self.env("serverip", "192.168.1.1")

            tftpdir = self.host.fsroot / "tftpboot" / "p2020rdb" / "tbot2"
            if not tftpdir.is_dir():
                self.host.exec0("mkdir", "-p", tftpdir)

            name = "u-boot-with-spl.bin"
            tbot.log.message(f"Fetching {name} ...")
            shell.copy(repo / name, tftpdir / name)

            addr = 0x10000000
            self.exec0("tftp", hex(addr), "p2020rdb/tbot2/" + name)
            size = int(self.env("filesize"), 16)
            tbot.log.message(f"Download succeeded:\n    U-Boot: {size} bytes")

            tbot.log.message("Flashing ...")
            self.exec0("nand", "device", "0")
            self.exec0("nand", "erase.spread", "0", hex(size))
            self.exec0("nand", "write", hex(addr), "0", hex(size))
As you can see, it gets the repository (as a GitRepository [1]) where U-Boot was built, passed as an argument, and copies the relevant binaries to a tftp directory on the lab-host (using shell.copy() [2]). Then it downloads and flashes U-Boot like you would manually.
Alternatively, I could imagine the implementation connecting to a hardware debugger and uploading the binary to flash that way.
Thanks very much. That explained it for me. I tried this out and ended up moving it to the board class instead of the U-Boot class, since I don't know what state the board is in. I just want to turn it off and program it.
During `flash()` (in my implementation), the board is powered on and waiting for commands at the U-Boot prompt. I know this won't work for the case where U-Boot is failing to boot. Alternatively, I could change it so the implementation decides on its own whether it needs U-Boot running. E.g.:
    class MyBoardUBoot(board.Connector, board.UBootShell):
        @classmethod
        def flash(cls, board: board.Board, repo: git.GitRepository):
            # Board is powered on.  An implementation using a debugger could
            # now flash U-Boot.
            with board.host.run("arm-linux-gnu-gdb", elf) as gdb:
                gdb.read_until_prompt("(gdb) ")
                gdb.sendline("target remote localhost:3333")
                gdb.read_until_prompt("(gdb) ")
                gdb.sendline("load")
                gdb.read_until_prompt("(gdb) ")
                gdb.sendline("quit")
                gdb.terminate0()

            # An implementation like mine above initializes the UBoot machine
            # and then continues like before:
            with cls(board) as ub:
                ub.exec0("dhcp")
                ub.exec0("nand", "update", ...)
Hope this makes sense ...
I think a lot of my confusion is all the classes and inheritance and things like selectable.py where objects are updated outside the module. Plus all the decorators, getattr/setattr...
Ideally I'd want all that to stay hidden from the user. I know that right now I have not really achieved that and the code is really messy in some places because of it.
I'd love some feedback on the things that aren't explained well in the documentation or places where features are lacking and you needed to dig into the internals.
I defined a few testcases for flashing. Namely, `uboot_build_and_flash` (build U-Boot and then flash the new version) and `uboot_bare_flash` (don't rebuild, just flash the artifacts from the last build). These then use the board-specific U-Boot builder and flash method.
I also added a feature where you can specify `-fsafe` on the tbot command-line. This will, in case something goes wrong while flashing, drop you onto the interactive U-Boot command-line instead of powering off the board and reporting a failure.
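The described `-fsafe` behavior boils down to a fallback around the flash step; roughly sketched as a standalone toy below (all names here are hypothetical, this is not the actual tbot implementation):

```python
def flash_with_fallback(flash, interactive, safe=False):
    # Sketch of the described `-fsafe` behavior: if flashing fails and
    # `safe` is set, drop into an interactive session instead of
    # powering off the board and reporting a failure.
    try:
        flash()
        return "flashed"
    except Exception:
        if safe:
            interactive()  # e.g. hand the user the U-Boot shell
            return "interactive"
        raise  # default: propagate the failure

def broken_flash():
    raise RuntimeError("tftp timeout")

print(flash_with_fallback(broken_flash, lambda: None, safe=True))
```

The appeal of the flag is exactly this branch: a failed flash on real hardware often needs a human at the console, so keeping the board powered and interactive beats a bare traceback.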
I uploaded the testcases here:
https://gitlab.denx.de/snippets/10
OK thanks. I am slowly getting the hang of it.
My current problem is how to execute Python code on the host (the machine that has the hardware attached). I end up having to call host.exec(), but if I want to open a file and modify it, I cannot do that easily remotely. Is it possible to run a tbot function on the lab machine?
The idea is that you shouldn't ever need to run Python code on the lab-host (so we don't have to make assumptions about the environment of the lab-host). What things do you need to do that can't be done right now? I'd argue that tbot should provide API to do them without manual (Python) scripting on the lab-host.
For accessing files in particular, I just added some methods (`read_text()` [1], `write_text()` [2], `read_bytes()` [3], and `write_bytes` [4]) a few days ago. Maybe they can help?
[1]: http://tbot.tools/modules/machine_linux.html#tbot.machine.linux.Path.read_te... [2]: http://tbot.tools/modules/machine_linux.html#tbot.machine.linux.Path.write_t... [3]: http://tbot.tools/modules/machine_linux.html#tbot.machine.linux.Path.read_by... [4]: http://tbot.tools/modules/machine_linux.html#tbot.machine.linux.Path.write_b...
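As a sketch of the read/modify/write pattern these methods enable: the tbot usage in the comment is hypothetical (path and machine name invented), while the runnable part uses Python's own `pathlib`, whose API these methods mirror:

```python
# Hypothetical tbot usage, where `lh` is a lab-host machine:
#     cfg = linux.Path(lh, "/srv/tftp/pxelinux.cfg/default")
#     text = cfg.read_text()
#     cfg.write_text(text.replace("ttyS0", "ttyAMA0"))
#
# The same pattern with pathlib, which these methods are modeled on:
import pathlib
import tempfile

cfg = pathlib.Path(tempfile.mkdtemp()) / "default"
cfg.write_text("console=ttyS0,115200\n")

text = cfg.read_text()
cfg.write_text(text.replace("ttyS0", "ttyAMA0"))

print(cfg.read_text())
```

This keeps the edit logic in the testcase running locally, so no Python needs to execute on the lab-host itself.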

Hi Harald,
On Mon, 23 Mar 2020 at 04:30, Harald Seiler hws@denx.de wrote:
Hello Simon,
[...]
Thanks for all of this.
I will follow up on the tbot mailing list once I have figured out the toolchain stuff.
Regards, Simon
participants (6)
- Harald Seiler
- Heiko Schocher
- Simon Glass
- Stephen Warren
- Tom Rini
- Wolfgang Denk