[U-Boot] [ANN] U-Boot v2019.07-rc4 released

Hey all,
It's release day and here is v2019.07-rc4. At this point, I know we have some regression fixes for i.MX that are coming, and I'm expecting a fix for the build-time failure on tinker-rk3288.
To repeat myself about DM migration deadlines: first, let me say again that DM is not required for SPL. This comes up often enough that I want to repeat it here. Next, if there is active progress on converting things, we'll hold off on pulling the code out. This is why, for example, we haven't yet pulled out a lot of deprecated SPI code; some of it is still being converted, so I need to update the series I posted after the last -rc so that it removes still fewer drivers.
In terms of a changelog, git log --merges v2019.07-rc3..v2019.07-rc4 continues to improve in quality. If you're sending me a PR, please include a few lines or words of summary and I'll be sure to put it into the merge commit.
As I mentioned with -rc3, as this cycle comes closer to an end it's time to decide whether we're going to keep this 3-month cycle or go back to 2 months. After the last release I did get some feedback, and the overall balance is still in the 3-month bucket.
I'm planning on doing -rc5 on June 24th with the release scheduled on July 8th. Thanks all!

On 6/11/19 3:31 AM, Tom Rini wrote:
> It's release day and here is v2019.07-rc4. [...]
> To repeat myself about DM migration deadlines: [...] if there is active progress on converting things, we'll hold off on pulling the code out. [...]
Do you have a list of the platforms that are currently in the danger zone?
> As I mentioned with -rc3, as this cycle comes closer to an end it's time to decide whether we're going to keep this 3-month cycle or go back to 2 months. [...]
I vote for 3 months; I very much like it.
> I'm planning on doing -rc5 on June 24th with the release scheduled on July 8th. Thanks all!
Great

On Tue, Jun 11, 2019 at 04:56:24AM +0200, Marek Vasut wrote:
> On 6/11/19 3:31 AM, Tom Rini wrote:
> > [...]
> Do you have a list of the platforms that are currently in the danger zone?
In what way? Uncaught size overflow? No, but I hope a lot of maintainers take advantage of this new feature so we can catch this problem and do something about the root cause. DM migration stuff? That doesn't cause the board to get dropped, but to lose functionality. It's too late in this cycle for the SPI conversion series now, but the Freescale/NXP driver was updated, so it's not nearly as many boards as it looked like before.

On 6/11/19 1:15 PM, Tom Rini wrote:
> On Tue, Jun 11, 2019 at 04:56:24AM +0200, Marek Vasut wrote:
> > Do you have a list of the platforms that are currently in the danger zone?
> In what way? Uncaught size overflow? No, but I hope a lot of maintainers take advantage of this new feature so we can catch this problem and do something about the root cause. DM migration stuff?
Yes, DM migration stuff; it would be nice to have a list of boards which still generate warnings.
> That doesn't cause the board to get dropped, but to lose functionality. [...]

Hi Tom,
> As I mentioned with -rc3, as this cycle comes closer to an end it's time to decide whether we're going to keep this 3-month cycle or go back to 2 months.
I do prefer the 3-month cycle. IMHO, with a 3-month cycle we have time to fix things, and adding new code is done in the merge window.
Best regards,
Lukasz Majewski
--
DENX Software Engineering GmbH, Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr. 5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-59, Fax: (+49)-8142-66989-80, Email: lukma@denx.de

Hello,
On 11.06.19 at 03:31, Tom Rini wrote:
> It's release day and here is v2019.07-rc4. [...]
I've noticed that the Poplar board regresses with v2019.07-rc4 compared to v2019.04. It doesn't even show a U-Boot version banner:
INFO:    Boot BL33 from 0x37000000 for 0 Bytes
NOTICE:  BL31: v2.1(debug):
NOTICE:  BL31: Built : 16:00:26, Jun 16 2019
INFO:    ARM GICv2 driver initialized
INFO:    BL31: Initializing runtime services
WARNING: BL31: cortex_a53: CPU workaround for 819472 was missing!
WARNING: BL31: cortex_a53: CPU workaround for 824069 was missing!
WARNING: BL31: cortex_a53: CPU workaround for 827319 was missing!
INFO:    BL31: cortex_a53: CPU workaround for 855873 was applied
INFO:    BL31: Init
INFO:    BL31: Preparing for EL3 exit to normal world
INFO:    Entry point address = 0x37000000
INFO:    SPSR = 0x3c9
I'm using TF-A v2.1 with the pending eMMC fix from https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/1230 but same issue without that patch.
I don't see any report or fix for this yet - is this related to DM?
Thanks, Andreas

On Sun, Jun 16, 2019 at 07:36:08PM +0200, Andreas Färber wrote:
> I've noticed that the Poplar board regresses with v2019.07-rc4 compared to v2019.04. It doesn't even show a U-Boot version banner:
> [TF-A BL31 boot log snipped]
> I'm using TF-A v2.1 with the pending eMMC fix from https://review.trustedfirmware.org/c/TF-A/trusted-firmware-a/+/1230 but same issue without that patch.
> I don't see any report or fix for this yet - is this related to DM?
Thanks for reporting, Andreas. I just ran a git bisect, which points to the commit below.
3a7c45f6a772 ("simple-bus: add DM_FLAG_PRE_RELOC flag to simple-bus driver")
I'm not sure whether this indicates the above commit is problematic or the Poplar platform is buggy in its DM support, though.
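For reference, a bisect between the two versions reported above runs along these lines (the build-and-boot step at each point is whatever the board needs):

    $ git bisect start
    $ git bisect bad v2019.07-rc4
    $ git bisect good v2019.04
    # build, boot the board, then mark each step until the culprit is found:
    $ git bisect good    # or: git bisect bad
    $ git bisect reset   # restore the tree when done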
Shawn

Hi,
On Mon, Jun 17, 2019 at 10:17 AM Shawn Guo shawn.guo@linaro.org wrote:
> On Sun, Jun 16, 2019 at 07:36:08PM +0200, Andreas Färber wrote:
> > I've noticed that the Poplar board regresses with v2019.07-rc4 compared to v2019.04. [...]
> Thanks for reporting, Andreas. I just ran a git bisect, which points to the commit below.
> 3a7c45f6a772 ("simple-bus: add DM_FLAG_PRE_RELOC flag to simple-bus driver")
> I'm not sure whether this indicates the above commit is problematic or the Poplar platform is buggy in its DM support, though.
Please increase CONFIG_SYS_MALLOC_F_LEN and have a try. The above commit makes the simple-bus driver available before relocation, which requires a little more memory.
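For illustration, such a fix is usually a one-line bump in the board's defconfig; the value below is only a guess, not necessarily the actual Poplar change:

    # configs/poplar_defconfig: give the pre-relocation malloc pool more room
    CONFIG_SYS_MALLOC_F_LEN=0x2000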
Regards, Bin

On Mon, Jun 17, 2019 at 10:19:14AM +0800, Bin Meng wrote:
> On Mon, Jun 17, 2019 at 10:17 AM Shawn Guo shawn.guo@linaro.org wrote:
> > I'm not sure whether this indicates the above commit is problematic or the Poplar platform is buggy in its DM support, though.
> Please increase CONFIG_SYS_MALLOC_F_LEN and have a try. The above commit makes the simple-bus driver available before relocation, which requires a little more memory.
Ah, thanks much for the input, Bin. It worked! I will send a fix for it shortly.
Shawn

Hi Tom,
On Tue, 11 Jun 2019 at 02:33, Tom Rini trini@konsulko.com wrote:
> As I mentioned with -rc3, as this cycle comes closer to an end it's time to decide whether we're going to keep this 3-month cycle or go back to 2 months. After the last release I did get some feedback, and the overall balance is still in the 3-month bucket.
I vote for 2 months. I find that the current release disappears into a black hole for about a month, and patches sit on the list for what seems like an eternity. I have the option of doing a -next branch, but it seems that very few do this.
I'd like to better understand the benefits of the 3-month timeline.
Regards, Simon

On 6/22/19 4:55 PM, Simon Glass wrote:
> I vote for 2 months. I find that the current release disappears into a black hole for about a month, and patches sit on the list for what seems like an eternity. [...]
> I'd like to better understand the benefits of the 3-month timeline.
Stability. With the 2-month cycle there was no time to stabilize the release and fix bugs; everyone was just lobbing patches upstream all the time. The result was always a release that wasn't quite well done, had rough edges and unfixed bugs, and wasn't something you could deploy.
Now we have a one-month window where we only accept bugfixes and where things slow down. That removes some stress from the maintainers too, and it lets us take a step back and think about the bigger project questions, rather than just dealing with the onslaught of patches.

Hi Simon,
On 22.06.19 at 16:55, Simon Glass wrote:
> I'd like to better understand the benefits of the 3-month timeline.
It takes time to learn about a release, package and build it, test it on various hardware, investigate and report errors, wait for feedback and fixes, and rinse and repeat with the next -rc. Many people don't do this as their main job.
If we shorten the release cycle, newer boards will get out faster (which is good), but the overall quality of boards not actively worked on (because they were working well enough before) will decay, which is bad. The only way to counteract that would be to test automatically on real hardware rather than just building, and doing that for all these masses of boards seems unrealistic.
Regards, Andreas

Hi,
On Sat, 22 Jun 2019 at 16:10, Andreas Färber afaerber@suse.de wrote:
> It takes time to learn about a release, package and build it, test it on various hardware, investigate and report errors, wait for feedback and fixes, and rinse and repeat with the next -rc. Many people don't do this as their main job.
> If we shorten the release cycle, newer boards will get out faster (which is good), but the overall quality of boards not actively worked on will decay, which is bad. [...]
Here I think you are talking about distributions. But why not just take every second release?
I have certainly had the experience of getting a board out of the cupboard and finding that the latest U-Boot doesn't work, nor the one before, nor the three before that.
Are we actually seeing an improvement in regressions? I feel that testing is the only way to get that.
Perhaps we should select a small subset of boards which do get tested, and actually have custodians build/test on those for every rc?
Regards, Simon

Hi,
On 22.06.19 at 20:15, Simon Glass wrote:
> Here I think you are talking about distributions. But why not just take every second release?
You're missing my point: What good is it to do a release when you yourself consider it of such poor quality that you advise others not to take it?
That has nothing per-se to do with who uses that release and whether you build it in OBS or locally.
> I have certainly had the experience of getting a board out of the cupboard and finding that the latest U-Boot doesn't work, nor the one before, nor the three before that.
> Are we actually seeing an improvement in regressions?
I don't understand that question. The proposal, as I understood it, is to shorten the release cycle by one month, doing six releases instead of four. How could there be an improvement by leaving it as it is? My fear is that the change will make it _worse_, i.e. no improvement but rather a risk of more regressions by switching to _two_ months.
In this very same -rc4 thread I reported one such regression, and luckily a patch was quickly prepared to address it. It's not yet merged despite being tested - review also takes time.
> I feel that testing is the only way to get that.
Agreed. And my point was that testing takes time. Increasing the release frequency shortens the time for testing of each release.
> Perhaps we should select a small subset of boards which do get tested, and actually have custodians build/test on those for every rc?
Many custodians are volunteers. You can't force them to test boards at a pace you dictate to them. Also, more frequent releases increase the chances of a custodian/maintainer being on vacation during a release.
And a working, say, BeagleBone doesn't make the user of a random other board any happier. ;-)
Regards, Andreas

Hi Andreas,
On Sat, 22 Jun 2019 at 20:08, Andreas Färber afaerber@suse.de wrote:
> > Here I think you are talking about distributions. But why not just take every second release?
> You're missing my point: What good is it to do a release when you yourself consider it of such poor quality that you advise others not to take it?
Who said that?
Your point, I thought, was that people didn't have time to test it?
> [...]
> Many custodians are volunteers. You can't force them to test boards at a pace you dictate to them. [...]
Another question: how much do people care about the latest and greatest features? Perhaps we should be merging patches frequently, but creating a release branch separate from master?
The current process seems very, very slow to me.
Regards, Simon

Hi Simon,
On 22.06.19 at 21:14, Simon Glass wrote:
> > You're missing my point: What good is it to do a release when you yourself consider it of such poor quality that you advise others not to take it?
> Who said that?
You, quoted above. In response to my concerns about decreasing quality you suggested taking only every second release. That doesn't improve the quality of either. It implies that one release may have such bad quality that people should skip it, and yet it does nothing to improve the next.
> Your point, I thought, was that people didn't have time to test it?
Not quite; I was talking about the full build-test-report-fix cycle taking its time. Bugs in one -rc don't necessarily get detected and fixed in time for the next -rc.
And I fail to see how your suggestion of skipping releases gives them more time to test before the U-Boot release. That would rather be an argument for slowing the U-Boot release cycle down beyond 3 months (which I'm not asking for).
Regards, Andreas

Hi Andreas,
On Sat, 22 Jun 2019 at 20:49, Andreas Färber afaerber@suse.de wrote:
> > Who said that?
> You, quoted above. In response to my concerns about decreasing quality you suggested taking only every second release. That doesn't improve the quality of either. It implies that one release may have such bad quality that people should skip it, and yet it does nothing to improve the next.
Actually, I did not say that I consider the release of such poor quality, nor did I advise others not to take it. I suspect this is a misunderstanding of "But why not just take every second release?".
My point was that if people don't have time to test every release, then just put in the time to test every second release.
I am actually not sure how much a bit of extra time helps with stability. Have the last two releases been more reliable on hardware because people found time to test them during the 9-week -rc phase, where the previous 6-week one did not allow it?
> > Your point, I thought, was that people didn't have time to test it?
> Not quite; I was talking about the full build-test-report-fix cycle taking its time. Bugs in one -rc don't necessarily get detected and fixed in time for the next -rc.
That doesn't change unless the distance between -rcs increases. But I think your point is that there are more -rc releases, so more time to find and fix things.
> And I fail to see how your suggestion of skipping releases gives them more time to test before the U-Boot release. [...]
It would mean testing only every second release, which halves the time you spend testing - a significant reduction in load. You would just need to schedule testing time over (say) 6 weeks, 3 times a year instead of 6 times.
I think an automated test setup is the best way to do this. Marek asked who would run it. Perhaps people who have an interest in each board and want to spend less time on manual testing? We already have the technology to do this, with pytest and tbot.
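As a rough sketch, the existing harness can already drive real hardware once per-board console and power hooks are in place (board name illustrative; see test/py/README.md for the hook interface):

    # build U-Boot and run the Python test suite against a real board
    ./test/py/test.py --bd poplar --build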
Regards, Simon

On 6/24/19 3:56 PM, Simon Glass wrote:
> Actually, I did not say that I consider the release of such poor quality, nor did I advise others not to take it. I suspect this is a misunderstanding of "But why not just take every second release?".
> My point was that if people don't have time to test every release, then just put in the time to test every second release.
So what would be the point of releasing the untested intermediate release at all? I'm sure people can just grab u-boot/master or -rc2 just fine.
[...]

Hi Marek,
On Mon, 24 Jun 2019 at 11:10, Marek Vasut marex@denx.de wrote:
> > My point was that if people don't have time to test every release, then just put in the time to test every second release.
> So what would be the point of releasing the untested intermediate release at all? I'm sure people can just grab u-boot/master or -rc2 just fine.
Because (I contend) these releases do actually attract testing effort and are stable in most cases. I think this is the 90/10 rule - we are adding a road-block in the project for the 10% of boards that are super, super important...so important that no one can actually find time to test them :-)
Regards, Simon

On 6/22/19 8:15 PM, Simon Glass wrote:
> Are we actually seeing an improvement in regressions? I feel that testing is the only way to get that.
> Perhaps we should select a small subset of boards which do get tested, and actually have custodians build/test on those for every rc?
What I have been doing before all my recent pull requests is to boot both an arm32 board (Orange Pi) and an aarch64 board (Pine A64 LTS) via bootefi and GRUB. To make this easier I am using a Raspberry Pi with a relay board and a Tizen SD-Wire card (https://wiki.tizen.org/SDWire) controlling the system under test, cf. https://pbs.twimg.com/media/D5ugi3iX4AAh1bn.jpg:large. What would be needed is scripts to automate the testing, including all the Python tests.
It would make sense to have such test automation for all of our architectures similar to what Kernel CI (https://kernelci.org/) does.
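A minimal sketch of what such a script could look like, assuming the sd-mux-ctrl tool from the SDWire project; the serial number, SD device path and relay command are placeholders:

    # reflash the SD card and power-cycle the device under test
    sd-mux-ctrl --device-serial=sd-wire-01 --ts       # hand the SD card to the test server
    dd if=u-boot-sd.img of=/dev/sdX bs=1M conv=fsync  # write the freshly built image
    sd-mux-ctrl --device-serial=sd-wire-01 --dut      # hand the SD card back to the board
    relay-ctl cycle                                   # hypothetical power-cycle via the relay board
    # then attach to the serial console and run the Python tests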
Regards
Heinrich

On 6/22/19 9:12 PM, Heinrich Schuchardt wrote:
> What I have been doing before all my recent pull requests is to boot both an arm32 board (Orange Pi) and an aarch64 board (Pine A64 LTS) via bootefi and GRUB. [...]
> It would make sense to have such test automation for all of our architectures, similar to what Kernel CI (https://kernelci.org/) does.
So who's gonna set it up and host it?

On Sat, Jun 22, 2019 at 09:43:42PM +0200, Marek Vasut wrote:
> On 6/22/19 9:12 PM, Heinrich Schuchardt wrote:
> > It would make sense to have such test automation for all of our architectures, similar to what Kernel CI (https://kernelci.org/) does.
> So who's gonna set it up and host it?
My hope is that we can make use of the GitLab CI features to carefully (!!!!) expose some labs and setups.
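As a sketch of the shape this could take, each lab would register its own GitLab runner and only pick up jobs tagged for it (job name, tag and board are made up):

    # .gitlab-ci.yml fragment: on-hardware tests run only on a runner
    # that lives inside a trusted board lab
    poplar-on-hardware:
      stage: test
      tags:
        - lab-poplar
      script:
        - ./test/py/test.py --bd poplar --build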

On 24/06/2019 17:29, Tom Rini wrote:
> On Sat, Jun 22, 2019 at 09:43:42PM +0200, Marek Vasut wrote:
> > So who's gonna set it up and host it?
> My hope is that we can make use of the GitLab CI features to carefully (!!!!) expose some labs and setups.
Yes, GitLab CI could send jobs to LAVA instances to run physical boot tests; we (BayLibre) are planning to investigate this, re-using our KernelCI infrastructure.
Neil

On Tue, Jun 25, 2019 at 01:10:26PM +0200, Neil Armstrong wrote:
> On 24/06/2019 17:29, Tom Rini wrote:
> > My hope is that we can make use of the GitLab CI features to carefully (!!!!) expose some labs and setups.
> Yes, GitLab CI could send jobs to LAVA instances to run physical boot tests; we (BayLibre) are planning to investigate this, re-using our KernelCI infrastructure.
That seems like overkill, possibly. How hard would it be to have LAVA kick off our test.py code? In the .gitlab-ci.yml I posted, I migrated the logic we have for Travis to run our tests. I wonder how hard it would be to have test.py "check out" machines from LAVA?

On 25.06.2019 15:04, Tom Rini wrote:
> That seems like overkill, possibly. How hard would it be to have LAVA kick off our test.py code? In the .gitlab-ci.yml I posted, I migrated the logic we have for Travis to run our tests. I wonder how hard it would be to have test.py "check out" machines from LAVA?
Isn't it possible to kick off LAVA from GitLab webhooks?

On 30/06/2019 12:34, Matwey V. Kornilov wrote:
> Isn't it possible to kick off LAVA from GitLab webhooks?
Not sure, but you can definitely generate and submit jobs to LAVA from GitLab CI, as Collabora does for panfrost: https://gitlab.freedesktop.org/mesa/mesa/blob/master/src/gallium/drivers/pan...
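For instance, a CI step can generate a LAVA job definition and push it with lavacli (a sketch; the instance URL, credentials and job file are placeholders):

    # submit a generated LAVA job from a GitLab CI script step
    lavacli --uri https://user:token@lava.example.org/RPC2 jobs submit u-boot-boot-test.yaml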
Neil

On Jun 22, 2019, at 2:43 PM, Marek Vasut marex@denx.de wrote:
> On 6/22/19 9:12 PM, Heinrich Schuchardt wrote:
> > It would make sense to have such test automation for all of our architectures, similar to what Kernel CI (https://kernelci.org/) does.
> So who's gonna set it up and host it?
I just got the infrastructure going to do this for the HiFive Unleashed (RiscV port), but that’s only one board right now.
I’d propose that one of the responsibilities of being a custodian/maintainer for a board and/or arch is a commitment to run a *simple* automated testing framework on a set of boards.
I’ve looked into KernelCI enough to see that it seems rather complex to get up and running. We need a dead-simple setup (a few Debian packages? A container? An SD card image for a BeagleBone?) that can collect serial console output and power cycle a board.
Eventually maybe we should have a Tizen SDWire or something like that; however, that requires some real money for board development, since I can’t seem to find anywhere to buy an SDWire.
With the HiFive Unleashed in SiFive’s test lab, we use OpenOCD for JTAG; all I need is one USB cable and I can load U-Boot via JTAG, boot a recovery image, and reload the SD card, so the SDWire is not really necessary for boards that have an easy JTAG setup.
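(For boards where JTAG is available, that load-and-run step can be scripted around OpenOCD; a sketch, where the board config file and the load address are placeholders that depend entirely on the board:)

    #!/usr/bin/env python3
    # Sketch: load U-Boot into RAM over JTAG with OpenOCD instead of
    # swapping boot media. Config file and load address are placeholders.
    import subprocess

    OPENOCD_CFG = "board/myboard.cfg"   # placeholder board config
    LOAD_ADDR = "0x80000000"            # placeholder RAM load address

    subprocess.run([
        "openocd", "-f", OPENOCD_CFG,
        "-c", "init; halt",
        "-c", f"load_image u-boot.bin {LOAD_ADDR}",
        "-c", f"resume {LOAD_ADDR}",
        "-c", "shutdown",
    ], check=True)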

On 7/2/19 6:04 PM, Troy Benjegerdes wrote:
I’ve looked into KernelCI enough to see that it seems rather complex to get up and running. We need a dead-simple setup (a few Debian packages? A container? An SD card image for a BeagleBone?) that can collect serial console output and power cycle a board.
My proposal would be that you take a few exceptionally weird, disparate boards and try to implement dead-simple CI around those. You'll start hitting all kinds of weird issues in the process, and the dead-simple will quickly start turning complicated :)
You can, however, explore U-Boot's test.py, tbot, or whatever other test frameworks there are, and try to come up with an easy setup howto.
Eventually maybe we should have a Tizen SDWire or something like that; however, that requires some real money for board development, since I can’t seem to find anywhere to buy an SDWire.
Not all boards boot from SD cards, though; some boot from hard-to-recover, non-replaceable boot media.
With the HiFive Unleashed in SiFive’s test lab, we use OpenOCD for JTAG; all I need is one USB cable and I can load U-Boot via JTAG, boot a recovery image, and reload the SD card, so the SDWire is not really necessary for boards that have an easy JTAG setup.
BTW, booting from JTAG and cold-booting from boot media may trigger different bugs. You most certainly want to install the bootloader and try both cold and warm boot.
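(A test sequence along those lines might look like the sketch below, reusing the power-cycling idea from the earlier harness; the serial device and U-Boot prompt string are assumptions about a particular board:)

    #!/usr/bin/env python3
    # Sketch: exercise both boot paths. Cold boot from the installed boot
    # media (power-cycle), then a warm boot via "reset" at the U-Boot
    # prompt. Device path and prompt string are board-specific assumptions.
    import time
    import serial              # pyserial

    def expect_prompt(console, prompt=b"=> ", timeout=60):
        buf = b""
        end = time.time() + timeout
        while prompt not in buf and time.time() < end:
            buf += console.read(256)
        assert prompt in buf, "U-Boot prompt not seen"
        return buf

    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as console:
        # (toggle the relay here for the cold boot, as in the earlier sketch)
        expect_prompt(console)         # cold boot reached U-Boot
        console.write(b"reset\n")      # request a warm boot
        expect_prompt(console)         # warm boot reached U-Boot again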

Hi Troy,
On Tue, 2 Jul 2019 at 10:04, Troy Benjegerdes troy.benjegerdes@sifive.com wrote:
I’d propose that one of the responsibilities of being a custodian/maintainer for a board and/or arch is a commitment to run a *simple* automated testing framework on a set of boards.
SGTM, and I feel we should work towards a shared solution, ideally in the U-Boot tree, to make this easy for people. Much exists already.
Eventually maybe we should have a Tizen SDWire or something like that; however, that requires some real money for board development, since I can’t seem to find anywhere to buy an SDWire.
Me neither.
So where can we buy this magic board?
Regards, Simon

On Wed, Jul 03, 2019 at 09:59:22AM -0600, Simon Glass wrote:
So where can we buy this magic board?
You can't, exactly. It's documented at https://wiki.tizen.org/SDWire and Heinrich might have a bit more to say, but it's one of those things where we may want to see what one or two production runs would cost, and have a US and an EU person who can resell at cost.

On Jul 3, 2019, at 11:04 AM, Tom Rini trini@konsulko.com wrote:
You can't, exactly. It's documented at https://wiki.tizen.org/SDWire and Heinrich might have a bit more to say, but it's one of those things where we may want to see what one or two production runs would cost, and have a US and an EU person who can resell at cost.
-- Tom
What are the chances I can get KiCad sources for the board ;)

On 7/3/19 6:22 PM, Troy Benjegerdes wrote:
What are the chances I can get KiCad sources for the board ;)
CAD files are available online at https://git.tizen.org/cgit/tools/testlab/sd-mux/tree/doc/hardware/SDWire
Adam Malinowski, who designed the board, currently sells it at ca. EUR 45.00 incl. VAT.
If you have a suggestion for production in larger lot sizes, please contact him.
Best regards
Heinrich

Hello Heinrich,
On 22.06.2019 at 21:12, Heinrich Schuchardt wrote:
What I have been doing before all my recent pull requests is to boot both an arm32 (Orange Pi) and an aarch64 (Pine A64 LTS) board via bootefi and GRUB. To make this easier I am using a Raspberry Pi with a relay board and a Tizen SD-Wire card (https://wiki.tizen.org/SDWire) controlling the system under test, cf. https://pbs.twimg.com/media/D5ugi3iX4AAh1bn.jpg:large What is still needed are scripts to automate the testing, including all the Python tests.
It would make sense to have such test automation for all of our architectures similar to what Kernel CI (https://kernelci.org/) does.
Yes ... my dream is also that we have a lot of boards on which we can test automagically. My approach was tbot, with which I ran weekly automated command-line tests (U-Boot and Linux); see example results from my old tbot version:
http://xeidos.ddns.net/tests/test_db_auslesen.php#987
(I intentionally link to the latest good test, as I have not yet found time to move my test setup to the new tbot; see below.)
The above webpage is created by a tbot generator which fills a MySQL database; the webpage itself is a simple PHP script. I also thought about writing a generator to connect to KernelCI ... but time is the missing resource.
For the above example, tbot and the webserver run on a Raspberry Pi; the Raspberry Pi also serves as the "lab host" which controls the board under test, so hardware costs are low. It is easy to adapt tbot to your hardware setup, see [2].
But my tbot was a hack, as my Python skills are not the greatest, so luckily Harald (added to Cc) found time to rewrite it completely from scratch (many thanks!), see [1] and [2].
OK, here and there it is missing functionality from the old version, but it is much cleaner ... please take a look at it; it is open source, so feel free to help and improve tbot!
I use tbot for my daily work, as tbot also has an interactive mode [3], with which you can power on your board and connect to, for example, the U-Boot command line. No need anymore to know where your board is, how to power it on, or how to connect to the console. (Background: I mostly have no real physical access to the hardware I work on.)
So you can do your development work with tbot, and at the end you immediately have an automated setup you can integrate into your CI. As tbot is a command-line tool, this is mostly easy to do.
Also, it is possible to run U-Boot's pytest framework [4], and of course you can call this from tbot [5].
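(To give a flavour, a tbot testcase is plain Python; a minimal sketch along the lines of the tbot documentation, written from memory, so the exact API calls may differ:)

    import tbot

    @tbot.testcase
    def uboot_version() -> None:
        # tbot powers the board and attaches the console according to
        # the lab/board configuration; exec0 runs a command and returns
        # its output, failing the testcase on a non-zero exit code.
        with tbot.acquire_lab() as lh:
            with tbot.acquire_board(lh) as b:
                with tbot.acquire_uboot(b) as ub:
                    out = ub.exec0("version")
                    assert "U-Boot" in out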
bye, Heiko
[1] https://github.com/Rahix/tbot [2] https://rahix.de/tbot/ [3] https://rahix.de/tbot/getting-started.html#interactive [4] http://git.denx.de/?p=u-boot.git;a=blob;f=test/README;h=4bc9ca3a6ae9de0e3ed6... [5] result from pytest framework called from tbot http://xeidos.ddns.net/tbot/id_985/test-log.html

On 22.06.2019 22:12, Heinrich Schuchardt wrote:
What I have been doing before all my recent pull requests is to boot both an arm32 (Orange Pi) and an aarch64 (Pine A64 LTS) board via bootefi and GRUB. To make this easier I am using a Raspberry Pi with a relay board and a Tizen SD-Wire card (https://wiki.tizen.org/SDWire) controlling the system under test, cf. https://pbs.twimg.com/media/D5ugi3iX4AAh1bn.jpg:large What is still needed are scripts to automate the testing, including all the Python tests.
Is it possible to buy an SDWire somewhere?
participants (12)
- Andreas Färber
- Bin Meng
- Heiko Schocher
- Heinrich Schuchardt
- Lukasz Majewski
- Marek Vasut
- Matwey V. Kornilov
- Neil Armstrong
- Shawn Guo
- Simon Glass
- Tom Rini
- Troy Benjegerdes