[U-Boot] livetime of boards

Hello all,
We have a real problem: we have a lot of old boards which are unmaintained in U-Boot, and we have no way to find out whether these boards are still used or tested ...
So the question arises: should we introduce a column in boards.cfg which shows the "livetime" of a board's support in U-Boot?
If we introduce this, we initialize the livetime column in boards.cfg with a value of n for all current boards, and for every newly added board (meaning the entry stays valid for n releases). I think n = 4 would be a good starting point.
Every release cycle, the livetime value gets decremented by one (this could be done by a script sitting in the tools directory?).
livetime == 0 -> e-mail to the board maintainer that the board's livetime is ending; please test and send an update if the board support should remain in U-Boot.
livetime == -1 -> the board gets deleted in the next release
If a board maintainer gets the above e-mail, or of course whenever he actually tries a current release on real hardware, he can send a patch which resets the livetime column back to n ...
So the hope is that after n releases we have only really tested and used boards in U-Boot ...
What do others think?
Short ToDo list:
- make an initial patch for boards.cfg
- make a script which decrements the "livetime" column in boards.cfg, sends an e-mail to the board maintainer for the "livetime == 0" case, and creates a delete patch for boards.cfg for the "livetime == -1" case?
- add a doc/README.livetime
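To make the decrement step a bit more concrete, here is a minimal sketch in Python, assuming a hypothetical boards.cfg layout where the target name is the first field and the livetime counter is the last whitespace-separated field of each non-comment line (this column does not exist yet, so the field positions are illustrative only):

    #!/usr/bin/env python
    # hypothetical tools/decrement_livetime.py - decrement the "livetime"
    # counter in boards.cfg and list boards that need attention
    import sys

    def decrement(path):
        out, expired, removable = [], [], []
        for line in open(path):
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                out.append(line)
                continue
            fields = line.split()
            board = fields[0]                # assumed: target name comes first
            livetime = int(fields[-1]) - 1   # assumed: counter is last column
            fields[-1] = str(livetime)
            if livetime == 0:
                expired.append(board)        # time to mail the maintainer
            elif livetime < 0:
                removable.append(board)      # candidate for removal
            out.append("  ".join(fields))
        open(path, "w").write("\n".join(out) + "\n")
        return expired, removable

    if __name__ == "__main__":
        expired, removable = decrement(sys.argv[1] if len(sys.argv) > 1 else "boards.cfg")
        print("boards at end of livetime: " + ", ".join(expired))
        print("boards to delete: " + ", ".join(removable))

The real script would of course have to follow whatever column layout we finally agree on, and would also send the e-mails and produce the delete patch.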
bye, heiko

On Tue, Nov 05, 2013 at 02:05:28PM +0100, Heiko Schocher wrote:
Hello all,
We have a real problem: we have a lot of old boards which are unmaintained in U-Boot, and we have no way to find out whether these boards are still used or tested ...
We also have a feature: lots of hardware support, because lots of things don't change drastically or frequently. That's not to say that I wouldn't be willing to drop old platforms (even ones I have sentimental feelings towards), and I would certainly like to see more, and more frequent, sanity testing.
This problem comes up when we talk about doing big changes, and I think that's the right time to talk about things. And I think the answer should be: we try to convert things forward, and when it's not obvious whether things will still work correctly, or how to do it, that's when we need to make a hard push on the board maintainers to find some time to work on things.
Being a U-Boot board maintainer is not a near-zero effort when someone steps up to own a board, but to be frank, it also shouldn't be a frequent, high-touch area either.
So, the question raises, should we introduce a column in boards.cfg, which shows the "livetime" of a board support in U-Boot?
I sense a lot of conflicting patches.

Dear Tom,
In message 20131105203736.GM5925@bill-the-cat you wrote:
We have a real problem: we have a lot of old boards which are unmaintained in U-Boot, and we have no way to find out whether these boards are still used or tested ...
We also have a feature: lots of hardware support, because lots of things don't change drastically or frequently. That's not to say that I wouldn't be willing to drop old platforms (even ones I have sentimental feelings towards), and I would certainly like to see more, and more frequent, sanity testing.
I think Heiko's idea of documenting test reports is pretty cool - but of course we need to discuss in detail how to implement this, and also decide whether we use this for (semi-automatic) code cleanup (by removing boards that have not been tested for a long time).
This problem comes up when we talk about doing big changes, and I think that's the right time to talk about things. And I think the answer should be: we try to convert things forward, and when it's not obvious whether things will still work correctly, or how to do it, that's when we need to make a hard push on the board maintainers to find some time to work on things.
Agreed. And here, information on how recently (or maybe even how frequently) a board has been tested (build tested, run on actual hardware) would come in really handy. We can probably automate build testing one way or another, but for actual runtime tests we will always depend on the board maintainers or board users.
So the question arises: should we introduce a column in boards.cfg which shows the "livetime" of a board's support in U-Boot?
I sense a lot of conflicting patches.
Again I agree. Also, I fear that boards.cfg is becoming more and more unreadable as we add even more stuff. If I see this correctly, the maximum line length in boards.cfg already exceeds 360 characters :-(
So instead of adding this information to boards.cfg we could probably use separate files for such information. We could provide tools to make test reports really easy, say something like
scripts/build_test
scripts/run_test
which the user would just call with a "passed" or "failed" argument; the scripts could then auto-detect which configuration and which exact U-Boot version were in use, and send an email. Whether that would be a patch against the source code or something that gets auto-added to a wiki page is just an implementation detail. But if we had something like this, we could get a much better understanding of how actively boards are being tested.
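A rough sketch of what such a run_test helper could look like (the board name is passed on the command line rather than auto-detected, the report text is invented here, and actually mailing it - and where to - is left open):

    #!/usr/bin/env python
    # hypothetical scripts/run_test - report a run-time test result
    import subprocess, sys

    def git(*args):
        return subprocess.check_output(("git",) + args).decode().strip()

    if __name__ == "__main__":
        if len(sys.argv) != 3 or sys.argv[2] not in ("passed", "failed"):
            sys.exit("usage: run_test <board> passed|failed")
        board, result = sys.argv[1], sys.argv[2]
        commit = git("rev-parse", "HEAD")        # exact commit under test
        version = git("describe", "--always")    # nearest tag / release
        report = ("Board %s: run-time test %s\n"
                  "U-Boot version %s, commit %s\n" % (board, result, version, commit))
        print(report)   # pipe into sendmail / git send-email as preferred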
So when you're once again doing some change that requires touching files for some other boards, you could simply check that database. If you see that 3 out of the last 5 releases have reported successful run-time tests you will probably decide to accept the needed effort, but when you see the last test report is more than 5 years old, you will probably decide instead to initiate a code removal process.
Best regards,
Wolfgang Denk

Hello Wolfgang, Tom,
On 06.11.2013 08:50, Wolfgang Denk wrote:
Dear Tom,
In message 20131105203736.GM5925@bill-the-cat you wrote:
We have a real problem: we have a lot of old boards which are unmaintained in U-Boot, and we have no way to find out whether these boards are still used or tested ...
We also have a feature: lots of hardware support, because lots of things don't change drastically or frequently. That's not to say that I wouldn't be willing to drop old platforms (even ones I have sentimental feelings towards), and I would certainly like to see more, and more frequent, sanity testing.
Yep, that's the reason I had in mind. We must have more real tests on real hardware (or at least get back the info that this is done somewhere ...) ... that's the idea behind forcing the board maintainers to give such info back to us.
I think Heiko's idea of documenting test reports is pretty cool - but of course we need to discuss in detail how to implement this, and also decide whether we use this for (semi-automatic) code cleanup (by removing boards that have not been tested for a long time).
I vote for removing boards for which we get no such test info back. The board support code is still available, just not for current releases.
If the code does not change drastically, such a "compile it, try it on the hardware, give feedback to mainline" cycle should not cost the board maintainer much time, and maybe we add a script which the board maintainer just has to call to send a "board tested" e-mail to the mailing list ...
Maybe a lot of board maintainers already do these tests, and this info is important for mainline, but we have no mechanism to collect it!
And if code changes drastically (which it currently does and will do in the future, I think), a board maintainer must decide whether it makes sense to sync the board with current mainline or whether the board gets dropped from mainline ...
And I currently have this problem with the i2c subsystem ... a lot of boards to convert to the new framework, and I cannot decide which are really maintained ... or whether the converted boards really still work. (And I think not only the i2c subsystem has this problem.)
This problem comes up when we talk about doing big changes, and I think that's the right time to talk about things. And I think the answer should be: we try to convert things forward, and when it's not obvious whether things will still work correctly, or how to do it, that's when we need to make a hard push on the board maintainers to find some time to work on things.
Agreed. And here, information on how recently (or maybe even how frequently) a board has been tested (build tested, run on actual hardware) would come in really handy. We can probably automate build testing one way or another, but for actual runtime tests we will always depend on the board maintainers or board users.
Hmm, is it always obvious whether changes are big? Do we really get feedback when doing such big changes? I am just reworking the i2c framework, and I have not had much feedback from board maintainers for the boards I changed ... Does this change really work on all boards I converted? I have no chance to get this info. But if we have such a livetime feature, I can be sure that after n releases all boards in mainline are tested ... automatically ... I want such a feature ...
So the question arises: should we introduce a column in boards.cfg which shows the "livetime" of a board's support in U-Boot?
I sense a lot of conflicting patches.
Again I agree. Also, I fear that boards.cfg is becoming more and more unreadable as we add even more stuff. If I see this correctly, the maximum line length in boards.cfg already exceeds 360 characters :-(
Right, boards.cfg gets unhandy ... Hmm, what about the "Status" column? Instead of "Active" it would be more informative to have the livetime counter there, and a single digit saves some characters ;-)
But you are right that this approach leads to a lot of conflicting patches ... but I think we just pooled all board information in boards.cfg, so it would be the right place in my eyes ...
Maybe we could get the information "a board has been tested with current mainline" in the form of an e-mail with a text like "Board xy tested with commit mm. Please update livetime" ... and we could add a script which updates the livetime for this board, so we can prevent conflicting patches ... ?
So instead of adding this information to boards.cfg we could probably use separate files for such information. We could provide tools to make test reports really easy, say something like
scripts/build_test
scripts/run_test
which the user would just call with a "passed" or "failed" argument; the scripts could then auto-detect which configuration and which exact U-Boot version were in use, and send an email. Whether that would be a patch against the source code or something that gets auto-added to a wiki page is just an implementation detail. But if we had something like this, we could get a much better understanding of how actively boards are being tested.
Yes, that also sounds good. I want to see the test information in the source code, not somewhere on a wiki ...
So when you're once again doing some change that requires touching files for some other boards, you could simply check that database. If you see that 3 out of the last 5 releases have reported successful run-time tests you will probably decide to accept the needed effort,
Hmm, that works if you have to touch only a few (< 5) boards. But if you have to touch > 5 boards, this gets unhandy ...
but when you see the last test report is more than 5 years old, you will probably decide instead to initiate a code removal process.
If we decide to delete older boards after n release cycles without test reports, we don't have to decide anything or look in a database. We can be sure we have only "good and working" boards ... we just do the necessary work for new features ... and we can be sure that we get test reports back within n release cycles ...
So let us first decide whether we want to go this way ...
bye, Heiko

Hello everyone,
On 11/07/2013 09:17 AM, Heiko Schocher wrote:
On 06.11.2013 08:50, Wolfgang Denk wrote:
In message 20131105203736.GM5925@bill-the-cat you wrote:
<snip problem description>
Full ACK, we need some way to track which boards are working with the current ToT, or at least on a release basis.
So the question arises: should we introduce a column in boards.cfg which shows the "livetime" of a board's support in U-Boot?
I sense a lot of conflicting patches.
Again I agree. Also, I fear that boards.cfg is becoming more and more unreadable as we add even more stuff. If I see this correctly, the maximum line length in boards.cfg already exceeds 360 characters :-(
Right, boards.cfg gets unhandy ... Hmm, what about the "Status" column? Instead of "Active" it would be more informative to have the livetime counter there, and a single digit saves some characters ;-)
I can't understand the status field at all, just for the record ;)
But you are right that this approach leads to a lot of conflicting patches ... but I think we just pooled all board information in boards.cfg, so it would be the right place in my eyes ...
Maybe we could get the information "a board has been tested with current mainline" in the form of an e-mail with a text like "Board xy tested with commit mm. Please update livetime" ... and we could add a script which updates the livetime for this board, so we can prevent conflicting patches ... ?
I agree here with Tom. Besides the possibility of conflicting patches I see another problem here: we will get a lot of patches just for increasing the tested counter for a single board. These patches need to be handled in some way. If we shift to some integrated system (Gerrit comes to mind) this could be easier than today, but it will tie up resources anyway. Therefore I think it is a bad idea to store such frequently changing information in the source code repository.
So instead of adding this information to boards.cfg we could probably use separate files for such information. We could provide tools to make test reports really easy, say something like
scripts/build_test
scripts/run_test
which the user would just call with a "passed" or "failed" argument; the scripts could then auto-detect which configuration and which exact U-Boot version were in use, and send an email. Whether that would be a patch against the source code or something that gets auto-added to a wiki page is just an implementation detail. But if we had something like this, we could get a much better understanding of how actively boards are being tested.
Yes, that also sounds good. I want to see the test information in the source code, not somewhere on a wiki ...
I think a place other than the source code repository would be best for gathering such frequently changing information. Why not use some wiki or other web service for that purpose?
I don't want to search a web page for the information 'board X has not been tested since ...' either. But we could easily write some scripts and add them to the source code repository to provide it.
So when you're once again doing some change that requires touching files for some other boards, you could simply check that database. If you see that 3 out of the last 5 releases have reported successful run-time tests you will probably decide to accept the needed effort,
Hmm, that works if you have to touch only a few (< 5) boards. But if you have to touch > 5 boards, this gets unhandy ...
How about:
MAKEALL --check-boards -s at91
;)
but when you see the last test report is more than 5 years old, you will probably decide instead to initiate a code removal process.
Why not save the SHA1 together with the build-/runtime-tested information? Then we could easily build helper scripts to query that database for when a board was last tested.
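For example, if the reports ended up in a simple flat-file "database" (one line per report: board, SHA1, date, result - a format invented here just for illustration), such a helper could be as small as:

    #!/usr/bin/env python
    # hypothetical scripts/last_tested - show the newest test report for a board
    # from a flat file with lines like:  <board> <sha1> <date> <passed|failed>
    import sys

    def last_tested(board, db="test-reports.txt"):
        last = None
        for line in open(db):
            fields = line.split()
            if len(fields) >= 4 and fields[0] == board:
                last = fields        # file is append-only, so keep the newest
        return last

    if __name__ == "__main__":
        rec = last_tested(sys.argv[1])
        if rec:
            print("%s last tested at commit %s on %s: %s" % tuple(rec[:4]))
        else:
            print("no test report found for %s" % sys.argv[1])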
If we decide to delete older boards after n release cycles without test reports, we don't have to decide anything or look in a database. We can be sure we have only "good and working" boards ... we just do the necessary work for new features ... and we can be sure that we get test reports back within n release cycles ...
So let us first decide whether we want to go this way ...
Yes, we should introduce some mechanism to check when a specific board was last runtime tested. But I fear the overhead of patches that update a tested counter.
Best regards
Andreas Bießmann

Hello Andreas,
On 07.11.2013 10:37, Andreas Bießmann wrote:
Hello everyone,
On 11/07/2013 09:17 AM, Heiko Schocher wrote:
On 06.11.2013 08:50, Wolfgang Denk wrote:
In message 20131105203736.GM5925@bill-the-cat you wrote:
<snip problem description>
Full ACK, we need some way to track which boards are working with the current ToT, or at least on a release basis.
So the question arises: should we introduce a column in boards.cfg which shows the "livetime" of a board's support in U-Boot?
I sense a lot of conflicting patches.
Again I agree. Also, I fear that boards.cfg is becoming more and more unreadable as we add even more stuff. If I see this correctly, the maximum line length in boards.cfg already exceeds 360 characters :-(
Right, boards.cfg gets unhandy ... Hmm, what about the "Status" column? Instead of "Active" it would be more informative to have the livetime counter there, and a single digit saves some characters ;-)
I can't understand the status field at all, just for the record ;)
Hmm.. good question ...
But you are right that this approach leads to a lot of conflicting patches ... but I think we just pooled all board information in boards.cfg, so it would be the right place in my eyes ...
Maybe we could get the information "a board has been tested with current mainline" in the form of an e-mail with a text like "Board xy tested with commit mm. Please update livetime" ... and we could add a script which updates the livetime for this board, so we can prevent conflicting patches ... ?
I agree here with Tom. Besides the possibility of conflicting patches I see another problem here: we will get a lot of patches just for increasing the tested counter for a single board. These patches need to be handled in some way. If we shift to some integrated system (Gerrit comes to mind) this could be easier than today, but it will tie up resources anyway. Therefore I think it is a bad idea to store such frequently changing information in the source code repository.
I see this info changing just once, when releasing a new U-Boot version.
I no longer envision a patch for updating the livetime counter of every single board; instead we should have a script which a board maintainer can call after he has tested a board, and which sends an e-mail in a special format to the U-Boot ML, saying:
------------------------------------
subject: livetime: board name
Tested-by: ...
with commit ...
------------------------------------
On the mail server a script scans all incoming e-mails for this subject (is this possible?) and collects the info about which boards such e-mails arrived for. When releasing a new U-Boot version, this collected info can be used to update the livetime counter through another script, say "collect_livetime_info" (this script can also automatically send e-mails to the maintainers of boards which have reached end of livetime, output a list of boards to delete, ...).
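Just to illustrate, the collecting side could start out as something as simple as scanning an mbox of such reports and remembering the newest one per board (the subject format above and the mbox path are the only assumptions here):

    #!/usr/bin/env python
    # hypothetical collect_livetime_info - scan an mbox of test-report mails
    # (subject "livetime: <board>") and list which boards got a report
    import mailbox, re, sys

    def collect(path):
        tested = {}
        for msg in mailbox.mbox(path):
            m = re.match(r"livetime:\s*(\S+)", msg.get("Subject", ""))
            if m:
                # remember that at least one report arrived for this board
                tested[m.group(1)] = msg.get("Date", "unknown")
        return tested

    if __name__ == "__main__":
        for board, date in sorted(collect(sys.argv[1]).items()):
            print("%-20s last report: %s" % (board, date))

The output could then feed the script that resets the livetime counters in boards.cfg before a release.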
So the release maintainer (currently Tom) should, before releasing a new U-Boot version, first call this "collect_livetime_info" script, and he gets:
-> one livetime counter patch for the current release
-> one list of boards which reach end of life
-> one list of boards which should be deleted
All this info is "release info", I think, and fits fully into the commit for the new release ...
... maybe "deleting boards" can be done automatically, but this is not a trivial job ...
So, with such a solution, I see no big additional cost for adding such a feature (except the task "deleting old boards", which is maybe not trivial)
Don't get me wrong, if we find another solution I am happy too ... just thinking out loud ...
So instead of adding this information to boards.cfg we could probably use separate files for such information. We could provide tools to make test reports really easy, say something like
scripts/build_test
scripts/run_test
which the user would just call with a "passed" or "failed" argument; the scripts could then auto-detect which configuration and which exact U-Boot version were in use, and send an email. Whether that would be a patch against the source code or something that gets auto-added to a wiki page is just an implementation detail. But if we had something like this, we could get a much better understanding of how actively boards are being tested.
Yes, that also sounds good. I want to see the test information in the source code, not somewhere on a wiki ...
I think a place other than the source code repository would be best for gathering such frequently changing information. Why not use some wiki or other web service for that purpose?
See the explanation above; I see this info not changing frequently, just with every new U-Boot release ... and it can be almost completely automated (except for the "delete old boards" case) ...
I don't want to search a web page for the information 'board X has not been tested since ...' either. But we could easily write some scripts and add them to the source code repository to provide it.
Ok, fine with me too. I am just thinking about this problem and how we can fix it ;-)
So when you're once again doing some change that requires touching files for some other boards, you could simply check that database. If you see that 3 out of the last 5 releases have reported successful run-time tests you will probably decide to accept the needed effort,
Hmm, that works if you have to touch only a few (< 5) boards. But if you have to touch > 5 boards, this gets unhandy ...
How about:
MAKEALL --check-boards -s at91
;)
;-)
but when you see the last test report is more than 5 years old, you will probably decide instead to initiate a code removal process.
Why not save the SHA1 together with the build-/runtime-tested information? Then we could easily build helper scripts to query that database for when a board was last tested.
Ok, also a good idea.
If we decide to delete older boards after n release cycles without test reports, we don't have to decide anything or look in a database. We can be sure we have only "good and working" boards ... we just do the necessary work for new features ... and we can be sure that we get test reports back within n release cycles ...
So let us first decide whether we want to go this way ...
Yes, we should introduce some mechanism to check when a specific board was last runtime tested. But I fear the overhead of patches that update a tested counter.
With "decide" I meant: do we want to delete "old boards"? With this, we would not need a "MAKEALL --check-boards -s at91" when we introduce new features, as all boards in mainline would be in well-tested shape ...
Ok, two decisions:
- Do we want to collect board testing information?
- Do we want to delete old boards automatically if we do not get test reports within some time interval?
bye, Heiko

Hi Heiko,
On Thu, 07 Nov 2013 11:39:08 +0100, Heiko Schocher hs@denx.de wrote:
Right, boards.cfg gets unhandy ... Hmm, what about the "Status" column? Instead of "Active" it would be more informative to have the livetime counter there, and a single digit saves some characters ;-)
I'm not sure we need to save characters from boards.cfg.
Note also that one goal of boards.cfg is to not have multiple files around that have to remain consistent.
Still :
I can't understand the status field at all, just for the record ;)
Hmm.. good question ...
There has been some discussion on this already; indeed, the status field is not really aptly named or defined and should be reworked.
We could, for instance, have a "Last tested" field instead of the "Active" field.
Ok, two decisions:
- Do we want to collect board testing information?
I'd say we might want to, if only because it tells us whether a board maintainer is still active or not, instead of finding out much later.
- Do we want to delete old boards automatically if we do not get test reports within some time interval?
I don't, at least, not without a long enough period during which the board remains in "unmaintained" state. E.g., for each release, the list of "unmaintained" boards is produced along with the release notes, and boards which are still "unmaintained" when nearing the next release (roughly 90 days later) get deleted right before the new release.
bye, Heiko
Amicalement,

Hello Albert,
On 07.11.2013 12:13, Albert ARIBAUD wrote:
Hi Heiko,
On Thu, 07 Nov 2013 11:39:08 +0100, Heiko Schocher hs@denx.de wrote:
Right, boards.cfg gets unhandy ... Hmm, what about the "Status" column? Instead of "Active" it would be more informative to have the livetime counter there, and a single digit saves some characters ;-)
I'm not sure we need to save characters from boards.cfg.
Note also that one goal of boards.cfg is to not have multiple files around that have to remain consistent.
Yep, exactly. That's why I think we could collect this in boards.cfg.
Still :
I can't understand the status field at all, just for the record ;)
Hmm.. good question ...
There has been some discussion on this already; indeed, the status field is not really aptly named or defined and should be reworked.
We could, for instance, have a "Last tested" field instead of the "Active" field.
Yep.
Ok, two decisions:
- Do we want to collect board testing information?
I'd say we might want to, if only because it tells us whether a board maintainer is still active or not, instead of finding out much later.
Yes.
- Do we want to delete old boards automatically if we do not get test reports within some time interval?
I don't, at least, not without a long enough period during which the board remains in "unmaintained" state. E.g., for each release, the list of "unmaintained" boards is produced along with the release notes, and boards which are still "unmaintained" when nearing the next release (roughly 90 days later) get deleted right before the new release.
I proposed:
livetime = 4 release cycles without testing
If livetime == 0 -> e-mail to the board maintainer: please test, or the board gets deleted.
If livetime == -1 -> the board gets deleted ...
bye, Heiko

Dear Heiko Schocher,
In message 527B7CB0.6040707@denx.de you wrote:
Note also that one goal of boards.cfg is to not have multiple files around that have to remain consistent.
Yep, exactly. That's why I think we could collect this in boards.cfg.
No, we really want to have a database here which collects more than just a timestamp. I would like to be able to see the history, as well as information on which exact commit ID has been tested (and I strongly vote for allowing testing at any time, not only for an end-of-release-cycle version). We should probably also collect information about the build environment (at least the versions of make, gcc, binutils), etc.
This by far exceeds what could be done in boards.cfg, and it exceeds anything that could be run over the mailing list.
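Just to sketch what a single record in such a database might contain (the field names and values here are only a suggestion, not a defined format):

    # one hypothetical test-report record
    report = {
        "board":     "<target name from boards.cfg>",
        "commit":    "<exact SHA1 that was tested>",
        "type":      "runtime",          # or "build"
        "result":    "pass",             # or "fail", to mark broken support
        "tester":    "<name and e-mail>",
        "date":      "<ISO date of the test>",
        "toolchain": {"gcc": "<version>", "binutils": "<version>", "make": "<version>"},
    }

With the history of such records kept per board, the queries discussed in this thread (last tested commit, frequency of testing, known-broken state) become simple lookups.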
Best regards,
Wolfgang Denk

Hello Wolfgang,
On 07.11.2013 13:06, Wolfgang Denk wrote:
Dear Heiko Schocher,
In message 527B7CB0.6040707@denx.de you wrote:
Note also that one goal of boards.cfg is to not have multiple files around that have to remain consistent.
Yep, exactly. That's why I think we could collect this in boards.cfg.
No, we really want to have a database here which collects more than just a timestamp. I would like to be able to see the history, as well as information on which exact commit ID has been tested (and I strongly vote for allowing testing at any time, not only for an end-of-release-cycle version). We should probably also collect information about the build environment (at least the versions of make, gcc, binutils), etc.
This by far exceeds what could be done in boards.cfg, and it exceeds anything that could be run over the mailing list.
Ok, if we need all that information, it goes beyond boards.cfg, correct.
bye, Heiko

Hello Heiko,
On 11/07/2013 11:39 AM, Heiko Schocher wrote:
On 07.11.2013 10:37, Andreas Bießmann wrote:
On 11/07/2013 09:17 AM, Heiko Schocher wrote:
On 06.11.2013 08:50, Wolfgang Denk wrote:
In message 20131105203736.GM5925@bill-the-cat you wrote:
<snip>
But you are right that this approach leads to a lot of conflicting patches ... but I think we just pooled all board information in boards.cfg, so it would be the right place in my eyes ...
Maybe we could get the information "a board has been tested with current mainline" in the form of an e-mail with a text like "Board xy tested with commit mm. Please update livetime" ... and we could add a script which updates the livetime for this board, so we can prevent conflicting patches ... ?
I agree here with Tom. Besides the possibility of conflicting patches I see another problem here: we will get a lot of patches just for increasing the tested counter for a single board. These patches need to be handled in some way. If we shift to some integrated system (Gerrit comes to mind) this could be easier than today, but it will tie up resources anyway. Therefore I think it is a bad idea to store such frequently changing information in the source code repository.
I see this info changing just once, when releasing a new U-Boot version.
The saved information on how often a board was runtime tested, together with the exact SHA1 of u-boot/master, could be quite useful. In the end just the last tested commit will be interesting, but it could also give some indication of how often that specific board is used. The information must not be generated by a board maintainer ... the maintainer could then see if he needs to pull out a board or if someone else ran the test before.
If we save this in the repository, we do not have this information in time. If we send the information to a list, we need to parse it or use some other tool to provide the information. Besides that, we will pollute the list with status updates about boards being tested. It could then be hard to find real patches in that information flood.
<snip mail proposal>
So the release maintainer (currently Tom) should, before releasing a new U-Boot version, first call this "collect_livetime_info" script, and he gets:
-> one livetime counter patch for the current release
-> one list of boards which reach end of life
-> one list of boards which should be deleted
Good idea, but the information could also be saved on a website or in another database. It should be easy for the tester to fill in and easy to query for whoever is interested.
All this info is "release info", I think, and fits fully into the commit for the new release ...
I also think that should be done on release only.
... maybe "deleting boards" can be done automatically, but this is not a trivial job ...
I think deleting should then be done in the next release, to give the board maintainer some time to check the boards. On a new release, the board maintainer should be mailed that the board will be removed in the next release. We should also store this somewhere in the code (status in boards.cfg?).
Next question is what to do if the mail bounces ;)
So, with such a solution, I see no big additional cost for adding such a feature (except the task "deleting old boards", which is maybe not trivial)
Don't get me wrong, if we find another solution I am happy too ... just thinking out loud ...
Me too.
<snip>
If we decide to delete older boards after n release cycles without test reports, we don't have to decide anything or look in a database. We can be sure we have only "good and working" boards ... we just do the necessary work for new features ... and we can be sure that we get test reports back within n release cycles ...
So let us first decide whether we want to go this way ...
Yes, we should introduce some mechanism to check when a specific board was last runtime tested. But I fear the overhead of patches that update a tested counter.
With "decide" I meant: do we want to delete "old boards"? With this, we would not need a "MAKEALL --check-boards -s at91" when we introduce new features, as all boards in mainline would be in well-tested shape ...
Ok, two decisions:
- Do we want to collect board testing information?
I think we should do that in one way or another.
- Do we want to delete old boards automatically if we do not get test reports within some time interval?
And we should delete 'unmaintained' boards; when exactly is to be discussed. I'm currently fiddling with at91 gpio and asking myself whether I should adapt all the boards or just let them fail ...
Best regards
Andreas Bießmann

Hello Andreas,
On 07.11.2013 12:24, Andreas Bießmann wrote:
Hello Heiko,
On 11/07/2013 11:39 AM, Heiko Schocher wrote:
On 07.11.2013 10:37, Andreas Bießmann wrote:
On 11/07/2013 09:17 AM, Heiko Schocher wrote:
On 06.11.2013 08:50, Wolfgang Denk wrote:
In message 20131105203736.GM5925@bill-the-cat you wrote:
<snip>
But you are right that this approach leads to a lot of conflicting patches ... but I think we just pooled all board information in boards.cfg, so it would be the right place in my eyes ...
Maybe we could get the information "a board has been tested with current mainline" in the form of an e-mail with a text like "Board xy tested with commit mm. Please update livetime" ... and we could add a script which updates the livetime for this board, so we can prevent conflicting patches ... ?
I agree here with Tom. Besides the possibility of conflicting patches I see another problem here: we will get a lot of patches just for increasing the tested counter for a single board. These patches need to be handled in some way. If we shift to some integrated system (Gerrit comes to mind) this could be easier than today, but it will tie up resources anyway. Therefore I think it is a bad idea to store such frequently changing information in the source code repository.
I see this info changing just once, when releasing a new U-Boot version.
The saved information on how often a board was runtime tested, together with the exact SHA1 of u-boot/master, could be quite useful. In the end just the last tested commit will be interesting, but it could also give some indication of how often that specific board is used. The information must not be generated by a board maintainer ... the maintainer could then see if he needs to pull out a board or if someone else ran the test before.
If we save this in the repository, we do not have this information in time. If we send the information to a list, we need to parse it or use some other tool to provide the information. Besides that, we will pollute the list with status updates about boards being tested. It could then be hard to find real patches in that information flood.
Hmm ... I hope we get a lot of such e-mails ... and I think this is not a big problem ... Or, if we do get a lot of such e-mails, maybe we open a u-boot-testing list?
<snip mail proposal>
So the release maintainer (currently Tom) should, before releasing a new U-Boot version, first call this "collect_livetime_info" script, and he gets:
-> one livetime counter patch for the current release
-> one list of boards which reach end of life
-> one list of boards which should be deleted
Good idea, but the information could also be saved on a website or in another database. It should be easy for the tester to fill in and easy to query for whoever is interested.
Ok, if we have this info, we can show it wherever we want ...
All this info is "release info", I think, and fits fully into the commit for the new release ...
I also think that should be done on release only.
Yep! But collecting this info can be done all the time.
... maybe "deleting boards" can be done automatically, but this is not a trivial job ...
I think deleting should then be done in the next release, to give the board maintainer some time to check the boards. On a new release, the board maintainer should be mailed that the board will be removed in the next release. We should also store this somewhere in the code (status in boards.cfg?).
See my proposal for the livetime counter:
livetime init value n (n = 4)
livetime gets decremented on every new release
livetime is set back to n if a test report arrives during the release cycle
livetime == 0 -> e-mail to the board maintainer: the board has reached end of life in mainline, please send a test report.
livetime == -1 -> the board gets deleted
So all the info is available in boards.cfg ...
Next question is what to do if the mail bounces ;)
The board gets deleted, as the board maintainer didn't send an update patch for boards.cfg ...
So, with such a solution, I see no big additional cost for adding such a feature (except the task "deleting old boards", which is maybe not trivial)
Don't get me wrong, if we find another solution I am happy too ... just thinking out loud ...
Me too.
<snip>
If we decide to delete older boards after n release cycles without test reports, we don't have to decide anything or look in a database. We can be sure we have only "good and working" boards ... we just do the necessary work for new features ... and we can be sure that we get test reports back within n release cycles ...
So let us first decide whether we want to go this way ...
Yes, we should introduce some mechanism to check when a specific board was last runtime tested. But I fear the overhead of patches that update a tested counter.
With "decide" I meant: do we want to delete "old boards"? With this, we would not need a "MAKEALL --check-boards -s at91" when we introduce new features, as all boards in mainline would be in well-tested shape ...
Ok, two decisions:
- Do we want to collect board testing information?
I think we should do that in one way or another.
Yep.
- Do we want to delete old boards automatically if we do not get test reports within some time interval?
And we should delete 'unmaintained' boards; when exactly is to be discussed. I'm currently fiddling with at91 gpio and asking myself whether I should adapt all the boards or just let them fail ...
You will not have this problem if we decide to delete old boards!
bye, Heiko

Dear Heiko,
In message 527B7EE6.4030307@denx.de you wrote:
Hmm ... I hope we get a lot of such e-mails ... and I think this is not a big problem ... Or, if we do get a lot of such e-mails, maybe we open a u-boot-testing list?
NAK. Mailing lists are good for some kinds of information - especially for information that is read by humans.
All you want to do here is feed a database with data. This is not what mailing lists were made for, so we should really use a more appropriate interface.
See my proposal for the livetime counter:
livetime init value n (n = 4)
livetime gets decremented on every new release
livetime is set back to n if a test report arrives during the release cycle
livetime == 0 -> e-mail to the board maintainer: the board has reached end of life in mainline, please send a test report.
livetime == -1 -> the board gets deleted
This is too simple in one respect (as you are not including information that may be vital, like which exact commit ID has been tested, or which tool chain has been used to build it).
Also, we should probably not only allow positive test reports, but also allow reporting failures (to mark board support as broken).
All this cannot be done in boards.cfg.
And there is no need to, as long as we provide tools to query for any information we might be interested in.
Best regards,
Wolfgang Denk

Hello Wolfgang,
On 07.11.2013 13:12, Wolfgang Denk wrote:
Dear Heiko,
In message 527B7EE6.4030307@denx.de you wrote:
Hmm ... I hope we get a lot of such e-mails ... and I think this is not a big problem ... Or, if we do get a lot of such e-mails, maybe we open a u-boot-testing list?
NAK. Mailing lists are good for some kinds of information - especially for information that is read by humans.
All you want to do here is feed a database with data. This is not what mailing lists were made for, so we should really use a more appropriate interface.
Ok. But this status report can be in a readable text format ;-)
See my proposal for the livetime counter:
livetime init value n (n = 4)
livetime gets decremented on every new release
livetime is set back to n if a test report arrives during the release cycle
livetime == 0 -> e-mail to the board maintainer: the board has reached end of life in mainline, please send a test report.
livetime == -1 -> the board gets deleted
This is too simple in one respect (as you are not including information that may be vital, like which exact commit ID has been tested, or which tool chain has been used to build it).
Ok.
Also, we should probably not only allow positive test reports, but also allow reporting failures (to mark board support as broken).
Yep, this would set livetime = 0 -> e-mail to the board maintainer ...
All this cannot be done in boards.cfg.
And there is no need to as long as we provide tools to query for any information we might be interested in.
Ok, then we have to define that approach ...
bye, Heiko

Dear Heiko Schocher,
In message 527B8C7F.6060503@denx.de you wrote:
All you want to do here is feed a database with data. This is not what mailing lists were made for, so we should really use a more appropriate interface.
Ok. But this status report can be in a readable text format ;-)
Yes, it can. So do you want to see 250 "test passed" messages for the BeagleBone or the Raspberry Pi for every few commits that get merged? I don't. In a database, we can automatically filter redundant information.
Yes, you can use a mailing list for submitting such information, but I doubt that it would be efficient. And I definitely do not want to see this on the current U-Boot ML.
Best regards,
Wolfgang Denk

Hello Wolfgang,
On 07.11.2013 20:19, Wolfgang Denk wrote:
Dear Heiko Schocher,
In message 527B8C7F.6060503@denx.de you wrote:
All you want to do here is feed a database with data. This is not what mailing lists were made for, so we should really use a more appropriate interface.
Ok. But this status report can be in a readable text format ;-)
Yes, it can. So do you want to see 250 "test passed" messages for the BeagleBone or the Raspberry Pi for every few commits that get merged? I don't. In a database, we can automatically filter redundant information.
Ok, that's a good point!
Yes, you can use a mailing list for submitting such information, but I doubt that it would be efficient. And I definitely do not want to see this on the current U-Boot ML.
Ok, let us discuss the way we collect such information, once we have decided that we want test reports ...
bye, Heiko

Dear Heiko,
In message 527C7716.8060308@denx.de you wrote:
Yes, you can use a mailing list for submitting such information, but I doubt that it would be efficient. And I definitely do not want to see this on the current U-Boot ML.
Ok, let us discuss the way we collect such information, once we have decided that we want test reports ...
I think there is a general agreement that we do want such reports.
We just need to get a better understanding of what exactly we want, and then of how we can get it.
Best regards,
Wolfgang Denk

Dear "Andreas Bießmann",
In message 527B7883.1080302@gmail.com you wrote:
The saved information on how often a board was runtime tested, together with the exact SHA1 of u-boot/master, could be quite useful. In the end just the last tested commit will be interesting, but it could also give some indication of how often that specific board is used. The information must not be generated by a board maintainer ... the
s/must not/need not/ (faux ami in action, I guess)
maintainer could then see if he needs to pull out a board or if someone else ran the test before.
I fully agree - everybody should be able to provide such test information. Actually it would be a big help to board maintainers as well if they would get test reports from actual users of the hardware.
If we save this in the repository, we do not have this information in time. If we send the information to a list, we need to parse it or use some other tool to provide the information. Besides that, we will pollute the list with status updates about boards being tested. It could then be hard to find real patches in that information flood.
Agreed too. I doubt that a mailing list makes sense for collecting such data. It would probably be more efficient to provide a web-based service for this. It just has to be easy to submit reports and to query the status of boards.
-> one livetime counter patch for the current release
-> one list of boards which reach end of life
-> one list of boards which should be deleted
Good idea, but the information could also be saved on a website or in another database. It should be easy for the tester to fill in and easy to query for whoever is interested.
Agreed. I definitely do not want to see such traffic on the regular U-Boot mailing list.
All this info is "release info", I think, and fits fully into the commit for the new release ...
I also think that should be done on release only.
Why? To me it makes a lot of sense to also collect information on intermediate snapshots.
I think deleting should then be done in the next release, to give the board maintainer some time to check the boards. On a new release, the board maintainer should be mailed that the board will be removed in the next release. We should also store this somewhere in the code (status in boards.cfg?).
Next question is what to do if the mail bounces ;)
Mail bounces (and the maintainer's new address cannot be found, and no other user volunteers to take over maintenance) => the board is unmaintained => the board gets removed.
- Do we want to delete old boards automatically if we do not get test reports within some time interval?
And we should delete 'unmaintained' boards; when exactly is to be discussed. I'm currently fiddling with at91 gpio and asking myself whether I should adapt all the boards or just let them fail ...
I hesitate to automatically remove existing boards. Why would we want to do that? To reduce efforts, right? So I vote to keep boards as long as they are either maintained, or they at least "do not hurt". If a board just builds fine and does not cause any additional effort we should keep it, no matter whether there is an active maintainer or test reports or not. Only when a board becomes a pain to somebody - say, because it develops build errors, or it would require effort to adapt it to some new feature - _then_ we would check if this is one of the "precious" boards we want to keep or if it is just old cruft nobody cares about anyway. And only then would I remove it.
Best regards,
Wolfgang Denk

Dear Wolfgang Denk,
On 11/07/2013 01:01 PM, Wolfgang Denk wrote:
In message 527B7883.1080302@gmail.com you wrote:
The saved information on how often a board was runtime tested, together with the exact SHA1 of u-boot/master, could be quite useful. In the end just the last tested commit will be interesting, but it could also give some indication of how often that specific board is used. The information must not be generated by a board maintainer ... the
s/must not/need not/ (faux ami in action, I guess)
You're right
<snip>
All this info is "release info", I think, and fits fully into the commit for the new release ...
I also think that should be done on release only.
Why? To me it makes a lot of sense to also collect information on intermediate snapshots.
Well, I think we should query that database on release, transform the testing information into some information that is maybe stored in the released code, and use it to trigger board maintainers to do tests. Maybe the proposed livetime counter is the right tooling here.
We should introduce some service to gather testing information, which should allow quite high throughput. But we should also install some measures to see the 'liveliness' of a board's tests in the released source code.
I think deleting should then be done in the next release, to give the board maintainer some time to check the boards. On a new release, the board maintainer should be mailed that the board will be removed in the next release. We should also store this somewhere in the code (status in boards.cfg?).
Next question is what to do if the mail bounces ;)
Mail bounces (and the maintainer's new address cannot be found, and no other user volunteers to take over maintenance) => the board is unmaintained => the board gets removed.
Sounds good, but it doesn't fit with your next statement ...
- Do we want to delete old boards automatically if we do not get test reports within some time interval?
And we should delete 'unmaintained' boards; when exactly is to be discussed. I'm currently fiddling with at91 gpio and asking myself whether I should adapt all the boards or just let them fail ...
I hesitate to automatically remove existing boards. Why would we want to do that? To reduce efforts, right? So I vote to keep boards as long as they are either maintained, or they at least "do not hurt". If a board just builds fine and does not cause any additional effort we should keep it, no matter whether there is an active maintainer or test reports or not. Only when a board becomes a pain to somebody - say, because it develops build errors, or it would require effort to adapt it to some new feature - _then_ we would check if this is one of the "precious" boards we want to keep or if it is just old cruft nobody cares about anyway. And only then would I remove it.
Sounds good.
Best regards
Andreas Bießmann

Hello Wolfgang,
On 07.11.2013 13:01, Wolfgang Denk wrote:
Dear "Andreas Bießmann",
In message 527B7883.1080302@gmail.com you wrote:
[...]
maintainer could then see if he needs to pull out a board or if someone else ran the test before.
I fully agree - everybody should be able to provide such test information. Actually it would be a big help to board maintainers as well if they would get test reports from actual users of the hardware.
Yep.
If we save this in the repository, we do not have this information in time. If we send the information to a list, we need to parse it or use some other tool to provide the information. Besides that, we will pollute the list with status updates about boards being tested. It could then be hard to find real patches in that information flood.
Agreed too. I doubt that a mailing list makes sense for collecting such data. It would probably be more efficient to provide a web-based service for this. It just has to be easy to submit reports and to query the status of boards.
Ok, what would such a web-based service look like?
-> one livetime counter patch for the current release
-> one list of boards which reach end of life
-> one list of boards which should be deleted
Good idea, but the information could also be saved on a website or in another database. It should be easy for the tester to fill in and easy to query for whoever is interested.
Agreed. I definitely do not want to see such traffic on the regular U-Boot mailing list.
Oh, ok ... then we must look for another way.
All this info is "release info", I think, and fits fully into the commit for the new release ...
I also think that should be done on release only.
Why? To me it makes a lot of sense to also collect information on intermediate snapshots.
Hmm, I thought we would get a "test report" (however it looks in the end) based on a commit ID in u-boot/master ... isn't this enough?
I think deleting should then be done in the next release, to give the board maintainer some time to check the boards. On a new release, the board maintainer should be mailed that the board will be removed in the next release. We should also store this somewhere in the code (status in boards.cfg?).
Next question is what to do if the mail bounces ;)
Mail bounces (and the maintainer's new address cannot be found, and no other user volunteers to take over maintenance) => the board is unmaintained => the board gets removed.
Full Ack.
- Do we want to delete old boards automatically if we do not get test reports within some time interval?
And we should delete 'unmaintained' boards; when exactly is to be discussed. I'm currently fiddling with at91 gpio and asking myself whether I should adapt all the boards or just let them fail ...
I hesitate to automatically remove existing boards. Why would we want to do that? To reduce efforts, right? So I vote to keep boards as long as they are either maintained, or they at least "do not hurt".
Why the "do not hurt" case? Is it really good to have a lot of boards which compile cleanly, but where we do not know if the code really works?
I prefer to have only boards in current mainline which really work, or are at least maintained ... if a board maintainer did the work to bring a board into mainline, he should be interested in it staying in mainline. If this interest is lost and nobody else volunteers ... the board is useless in mainline.
Code for old boards is not lost.
If a board just builds fine and does not cause any additional effort we should keep it, no matter whether there is an active maintainer or test reports or not. Only when a board becomes a pain to somebody - say, because it develops build errors, or it would require effort to adapt it to some new feature - _then_ we would check if this is one of the "precious" boards we want to keep or if it is just old cruft nobody cares about anyway. And only then would I remove it.
Ok, that is a way to go if we have, for all boards, information about their maintenance state ... but think of, for example, the new i2c framework: how many boards use I2C? I now have to check all these boards and decide whether to delete or convert them ... very time-consuming, frustrating work ... and maybe, for a lot of "do not hurt" boards, only a waste of time ... :-(
If we have a clean mainline state where I know all boards are working, the decision is clear: convert all ... and board maintainers will test them ... and we always have a clean compile state for mainline.
And if we have this test/delete cycle, I can be sure that after a defined time all boards in mainline are working!
bye, Heiko

Dear Heiko,
In message 527B8BA7.2070605@denx.de you wrote:
Agreed too. I doubt that a mailing list makes sense for collecting such data. It would probably be more efficient to provide a web-based service for this. It just has to be easy to submit reports and to query the status of boards.
Ok, what would such a web-based service look like?
I have no idea. Not yet. Let's first define what we want to have, then think about how to implement it.
I also think that should be done on release only.
Why? To me it makes a lot of sense to also collect information on intermediate snapshots.
Hmm, I thought we would get a "test report" (however it looks in the end) based on a commit ID in u-boot/master ... isn't this enough?
Yes, this is perfectly fine. I just want to allow this at _any_ time, not only once per release (near the end of the release cycle). Especially for releases where bigger changes get merged it may be valuable information to know when the code stopped working.
I hesitate to automatically remove existing boards. Why would we want to do that? To reduce efforts, right? So I vote to keep boards as long as they are either maintained, or they at least "do not hurt".
Why the "not hurt case"? Is it really good to have a lot of boards, which compile clean, but we do not know, if the code really works?
Well, one reason is efficiency. If the code builds fine, and does not cause efforts druing any of the ongoing work, it is more efficient to just keep it than to actively remove it, which would require active work. I. e. I want to reduce the work load on the maintainer(s) such that they only have to become active if such action saves even more effort. Actually it may even seem more efficient in some cases to perform minor fixing even of unmaintained boards than to remove them.
I prefer to have in current mainline only boards, which really work or at least maintained... if a board maintainer did the work to bring it into mainline, it should be interested in "stay in" mainline. If this interest is lost and no other volunteers ... board is useless in mainline.
But should we not also try to minimize efforts, especially on the custodians? If a board does not cause any trouble, we should not have to invent additional efforts for it.
Ok, that is a way to go if we have, for all boards, information about their maintenance state ... but think of, for example, the new i2c framework: how many boards use I2C? I now have to check all these boards and decide whether to delete or convert them ... very time-consuming, frustrating work ... and maybe, for a lot of "do not hurt" boards, only a waste of time ... :-(
I don't really understand this argument. The I2C code is either hardware independent (say, the command line interface code), or it is platform specific (say, the driver code for the I2C controller on a specific SoC), or it is board dependent (say, some specific twiddling with I2C devices to perform some magic operations on a board).
In the first two cases the work will have to be done in any case (except for the really rare case that there are only old, unmaintained boards left using this SoC). And for the board-specific code you can check whether you need to adapt it by checking the board's test status. If the board is unmaintained, then we enter the "it hurts" branch, and drop the board.
If we have a clean mainline state where I know all boards are working, the decision is clear: convert all ... and board maintainers will test them ... and we always have a clean compile state for mainline.
Again: you don't need any knowledge about boards that are not affected by your I2C changes.
And if we have this test/delete cycle, I can be sure that after a defined time all boards in mainline are working!
Yes, but the cost is additional efforts, and being more aggressive than necessary. I dislike both.
Best regards,
Wolfgang Denk

Hello Wolfgang,
On 07.11.2013 20:15, Wolfgang Denk wrote:
Dear Heiko,
In message 527B8BA7.2070605@denx.de you wrote:
Agreed too. I doubt that a mailing list makes sense for collecting such data. It would probably be more efficient to provide a web-based service for this. It just has to be easy to submit reports, and to query the status of boards.
Ok, how would such a web-based service look?
I have no idea. Not yet. Let's first define what we want to have, then think about how to implement it.
Ok.
I also think that this should be done at release time only.
Why? To me it makes a lot of sense to also collect information on intermediate snapshots.
Hmm.. I thought we would get a "test report" (however that will look in the end) based on a commit id in u-boot/master ... isn't this enough?
Yes, this is perfectly fine. I just want to allow this at _any_ time, not only once per release (near the end of the release cycle). Especially for releases where bigger changes get merged it may be precious information to know when the code stopped working.
With "once", I only meant, we once update the "livetimer" and collect all the time test reports!
I hesitate to automatically remove existing boards. Why would we want to do that? To reduce efforts, right? So I vote to keep boards as long as they are either maintained, or they at least "do not hurt".
Why the "not hurt case"? Is it really good to have a lot of boards, which compile clean, but we do not know, if the code really works?
Well, one reason is efficiency. If the code builds fine, and does not cause efforts during any of the ongoing work, it is more efficient to just keep it than to actively remove it, which would require active work. I.e. I want to reduce the work load on the maintainer(s) such that they only have to become active if such action saves even more effort. Actually it may even seem more efficient in some cases to perform minor fixing even of unmaintained boards than to remove them.
But in the long term I think we have more work with old, unmaintained boards than with removing them. And if we remove boards actively, we maybe put more pressure on the board maintainers (for example companies who have an interest that the board stays in mainline) to really test them ;-)
I prefer to have in current mainline only boards which really work, or which are at least maintained... If a board maintainer did the work to bring a board into mainline, he should be interested in keeping it there. If this interest is lost and nobody else volunteers ... the board is useless in mainline.
But should we not also try to minimize efforts, especially on the custodians? If a board does not cause any trouble, we should not have to invent additional efforts for it.
Hmm... I am not really sure if it is more work to remove a board or to fix compile issues ...
Ok, that is a way to go if we have, for all boards, the information about which maintenance state they are in ... but think of, for example, the new I2C framework: how many boards use I2C? I now have to check all these boards and decide whether to delete or convert them ... very time consuming, frustrating work ... and maybe, for a lot of "do not hurt" boards, only a waste of time ... :-(
I don't really understand this argument. The I2C code is either hardware independent (say, the command line interface code), or it is platform specific (say, the driver code for the I2C controller on a specific SoC), or it is board dependent (say, some specific twiddling with I2C devices to perform some magic operations on a board).
In the first two cases the work will have to be done in any case (except for the really rare case that there are only old, unmaintained boards left using this SoC). And for the board specific code you can check if you need to adapt it by checking the board's test status. If the board is unmaintained, then we enter the "it hurts" branch, and drop the board.
Hmm... :
pollux:u-boot hs [master] $ grep -lr HARD_I2C arch/powerpc/
[...]
arch/powerpc/cpu/mpc8xx/i2c.c
[...]
arch/powerpc/cpu/mpc8260/i2c.c
arch/powerpc/cpu/mpc5xxx/i2c.c
arch/powerpc/cpu/mpc824x/drivers/i2c/i2c.c
arch/powerpc/cpu/mpc512x/i2c.c
pollux:u-boot hs [master] $
Is it worth moving the drivers to drivers/i2c/ and converting them to the new framework? Or is it better to delete them and the boards which use them ... Ok, if we had such a "livetimer" without deleting old boards, you are right, I could now do the following:
- search the boards which use the drivers
- search the state of each board
- decide: drop or convert ...
On which criteria would this decision be made? I have no idea if these boards exist anymore ...
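Just to illustrate the first of those steps, a minimal shell sketch (assuming the relevant symbol is CONFIG_HARD_I2C, that board config headers live under include/configs/, and that the header name can be matched against boards.cfg; all of this is only a rough heuristic, not an exact mapping):

    # list boards whose config header enables CONFIG_HARD_I2C and show
    # their boards.cfg entry, so the maintenance state can be looked up
    for cfg in include/configs/*.h; do
        if grep -q 'CONFIG_HARD_I2C' "$cfg"; then
            board=$(basename "$cfg" .h)
            echo "=== $board"
            grep -w "$board" boards.cfg || echo "    (no boards.cfg entry found)"
        fi
    done

Such a list only answers "who uses the drivers"; it still leaves open the hard question of whether anyone can actually test those boards.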
If we have only good boards, I have only one way -> convert. No time wasted for the above steps ...
So, yes, I think we do not need to delete the boards, just mark them broken ...
My hope with "delete old boards from mainline" is that we get more test reports from board maintainers, as they (should) have an interest that the board support stays in mainline. If they know that the board is only marked as broken, old or whatever ... they maybe do not reserve the time for testing a board once a year ...
If we have a clean mainline state where I know all boards are working, the decision is clear: convert all ... and board maintainers will test it ... and we always have a clean compile state for mainline
Again: you don't need any knowledge about boards that are not affected by your I2C changes.
And if we have this test/delete cycle, I can be sure that after a defined time all boards in mainline are working!
Yes, but the cost is additional efforts, and being more aggressive than necessary. I dislike both.
I dislike additional efforts too (but the question is: does removing old boards take more effort than fixing them all the time, maybe uselessly?). Is it really aggressive to want a test report from board maintainers once a year? They get a lot of effort from mainline for free...
bye, Heiko

Dear Heiko,
In message 527C767D.3070700@denx.de you wrote:
Yes, this is perfectly fine. I just want to allow this at _any_ time, not only once per release (near the end of the release cycle). Especially for releases where bigger changes get merged it may be precious information to know when the code stopped working.
With "once", I only meant, we once update the "livetimer" and collect all the time test reports!
But with your suggestion, the only storage for such information is the life timer entry in the boards.cfg file. Or would you suggest storing additional information elsewhere? Then we don't need the (then redundant) counter in boards.cfg ...
Well, one reason is efficiency. If the code builds fine, and does not cause efforts during any of the ongoing work, it is more efficient to just keep it than to actively remove it, which would require active work. I.e. I want to reduce the work load on the maintainer(s) such that they only have to become active if such action saves even more effort. Actually it may even seem more efficient in some cases to perform minor fixing even of unmaintained boards than to remove them.
But in the long term I think we have more work with old, unmaintained boards than with removing them. And if we remove boards actively, we maybe
You misunderstand. My intention is to keep them as long as they do NOT cause any efforts. If they do, they are candidates for checking for removal.
put more pressure on the board maintainers (for example companies who have an interest that the board stays in mainline) to really test them ;-)
Yes, but you create more work for all of us.
But should we not also try to minimize efforts, especially on the custodians? If a board does not cause any trouble, we should not have to invent additional efforts for it.
Hmm... I am not really sure if it is more work to remove a board or to fix compile issues ...
If we have to fix compilation issues, that means that the board _IS_ a candidate for removal. I suggested keeping them as long as they do "not cause any trouble"!
I don't really understand this argument. The I2C code is either hardware independent (say, the command line interface code), or it is platform specific (say, the driver code for the I2C controller on a specific SoC), or it is board dependent (say, some specific twiddling with I2C devices to perform some magic operations on a board).
In the first two cases the work will have to be done in any case (except for the really rare case that there are only old, unmaintained boards left using this SoC). And for the board specific code you can check if you need to adapt it by checking the board's test status. If the board is unmaintained, then we enter the "it hurts" branch, and drop the board.
Hmm... :
pollux:u-boot hs [master] $ grep -lr HARD_I2C arch/powerpc/
[...]
arch/powerpc/cpu/mpc8xx/i2c.c
[...]
arch/powerpc/cpu/mpc8260/i2c.c
arch/powerpc/cpu/mpc5xxx/i2c.c
arch/powerpc/cpu/mpc824x/drivers/i2c/i2c.c
arch/powerpc/cpu/mpc512x/i2c.c
pollux:u-boot hs [master] $
This is all architecture support. None of these files are board specific. None of them would be removed if you scrap old boards.
Is it worth moving the drivers to drivers/i2c/ and converting them to the new framework? Or is it better to delete them and the boards which use them ... Ok, if we had such a "livetimer" without deleting old boards, you are right, I could now do the following:
How do you know which boards use these drivers?
- search the boards which use the drivers
- search the state of each board
- decide: drop or convert ... On which criteria would this decision be made? I have no idea if these boards exist anymore ...
There is no conflict with what I wrote - if you can find that only "old" boards are using these drivers, then both the boards and the drivers are candidates for removal. But you need to find this correlation first, and verify that there are no "good" boards left that use them, too.
All of this is totally different from the mechanism for determining which boards are "good" or "old".
If we have only good boards, I have only one way -> convert. No time wasted for the above steps ...
It does not really matter when you have to check if there are "good" users of this code left. You have to do this in both cases - either when removing the boards automatically, or when touching that code.
With your solution, you will probably have to do it twice, as you cannot be really sure that the guy who removed the board support verified that he was the last user of some global architecture support code.
I dislike additional efforts too (but the question is: does removing old boards take more effort than fixing them all the time, maybe uselessly?).
Can you please re-read what I wrote? I say we keep them as long as they do NOT need ANY fixing. If they do, this is a totally new game.
Best regards,
Wolfgang Denk

Hello Wolfgang,
On 08.11.2013 07:20, Wolfgang Denk wrote:
Dear Heiko,
In message 527C767D.3070700@denx.de you wrote:
Yes, this is perfectly fine. I just want to allow this at _any_ time, not only once per release (near the end of the release cycle). Especially for releases where bigger changes get merged it may be precious information to know when the code stopped working.
With "once", I only meant, we once update the "livetimer" and collect all the time test reports!
But with your suggestion, the only storage for such information is the life timer entry in the boards.cfg file. Or would you suggest storing additional information elsewhere? Then we don't need the (then redundant) counter in boards.cfg ...
Yes, if we store more information than the "livetime", we need another approach. But if we store this somewhere else, we must take care that the number and names of the boards in the external database stay in sync with the boards in mainline.
Well, one reason is efficiency. If the code builds fine, and does not cause efforts during any of the ongoing work, it is more efficient to just keep it than to actively remove it, which would require active work. I.e. I want to reduce the work load on the maintainer(s) such that they only have to become active if such action saves even more effort. Actually it may even seem more efficient in some cases to perform minor fixing even of unmaintained boards than to remove them.
But in the long term I think we have more work with old, unmaintained boards than with removing them. And if we remove boards actively, we maybe
You misunderstand. My intention is to keep them as long as they do NOT cause any efforts. If they do, they are candidates for checking for removal.
Ah, ok!
put more pressure on the board maintainers (for example companies who have an interest that the board stays in mainline) to really test them ;-)
Yes, but you create more work for all of us.
That would be bad.
But should we not also try to minimize efforts, especially on the custodians? If a board does not cause any trouble, we should not have to invent additional efforts for it.
Hmm... I am not really sure if it is more work to remove a board or to fix compile issues ...
If we have to fix compilation issues, that means that the board _IS_ a candidate for removal. I suggested keeping them as long as they do "not cause any trouble"!
Ok, got it, that sounds good. So, if we introduce new features, we have to check which boards are affected:
- board is in maintained state -> convert to new feature
- board is in unmaintained state -> drop it ...
I don't really understand this argument. The I2C code is either hardware independent (say, the command line interface code), or it is platform specific (say, the driver code for the I2C controller on a specific SoC), or it is board dependent (say, some specific twiddling with I2C devices to perform some magic operations on a board).
In the first two cases the work will have to be done in any case (except for the really rare case that there are only old, unmaintained boards left using this SoC). And for the board specific code you can check if you need to adapt it by checking the board's test status. If the board is unmaintained, then we enter the "it hurts" branch, and drop the board.
Hmm... :
pollux:u-boot hs [master] $ grep -lr HARD_I2C arch/powerpc/
[...]
arch/powerpc/cpu/mpc8xx/i2c.c
[...]
arch/powerpc/cpu/mpc8260/i2c.c
arch/powerpc/cpu/mpc5xxx/i2c.c
arch/powerpc/cpu/mpc824x/drivers/i2c/i2c.c
arch/powerpc/cpu/mpc512x/i2c.c
pollux:u-boot hs [master] $
This is all architecture support. None of these files are board specific. None of them would be removed if you scrap old boards.
Hmm.. a U-Boot rule is to remove unused code, isn't it? If we remove all boards which use these drivers, we remove the drivers too ...
Is it worth moving the drivers to drivers/i2c/ and converting them to the new framework? Or is it better to delete them and the boards which use them ... Ok, if we had such a "livetimer" without deleting old boards, you are right, I could now do the following:
How do you know which boards use these drivers?
I must find out ...
- search the boards which use the drivers
- search the state of each board
- decide: drop or convert ... On which criteria would this decision be made? I have no idea if these boards exist anymore ...
There is no conflict with what I wrote - if you can find that only "old" boards are using these drivers, then both the boards and the drivers are candidates for removal. But you need to find this correlation first, and verify that there are no "good" boards left that use them, too.
Yes.
But there is a difference! With old boards in mainline, I must do the above steps: "search the state, and decide what to do". With only "good" boards in mainline, I do not need these steps.
But I agree with you, if we have the rule: "unmaintained board and compile error" -> remove it
This decision is easy and I am fine with it.
All of this is totally different from the mechanism for determining which boards are "good" or "old".
If we have only good boards, I have only one way -> convert. No time wasted for the above steps ...
It does not really matter when you have to check if there are "good" users of this code left. You have to do this in both cases - either when removing the boards automatically, or when touching that code.
With your solution, you will probably have to do it twice, as you cannot be really sure that the guy who removed the board support verified that he was the last user of some global architecture support code.
Yes, for the driver code. As I said in this discussion, removing boards is not a trivial job.
I dislike additional efforts too (but the question is: does removing old boards take more effort than fixing them all the time, maybe uselessly?).
Can you please re-read what I wrote? I say we keep them as long as they do NOT need ANY fixing. If they do, this is a totally new game.
I got it now ;-)
bye, Heiko

On Thu, Nov 07, 2013 at 10:37:24AM +0100, Andreas Bießmann wrote:
Hello all together,
On 11/07/2013 09:17 AM, Heiko Schocher wrote:
On 06.11.2013 08:50, Wolfgang Denk wrote:
[snip]
So when you're once again doing some change that requires touching files for some other boards, you could simply check that database. If you see that 3 out of the last 5 releases have reported successful run-time tests you will probably decide to accept the needed efforts,
Hmm.. that works if you have to touch some (< 5) boards. But if you have to touch > 5 boards, this gets unwieldy...
How about:
MAKEALL --check-boards -s at91
;)
I feel this is the hard part of the problem, and what we're glossing over. What has to be tested by the board maintainer? What are we going to leave to their discretion? Will am335x_evm not count if I don't dig up the NOR cape for it?

Dear Tom Rini,
On 11/07/2013 02:31 PM, Tom Rini wrote:
On Thu, Nov 07, 2013 at 10:37:24AM +0100, Andreas Bießmann wrote:
Hello all together,
On 11/07/2013 09:17 AM, Heiko Schocher wrote:
On 06.11.2013 08:50, Wolfgang Denk wrote:
[snip]
So when you're once again doing some change that requires touching files for some other boards, you could simply check that database. If you see that 3 out of the last 5 releases have reported successful run-time tests you will probably decide to accept the needed efforts,
Hmm.. that works if you have to touch some (< 5) boards. But if you have to touch > 5 boards, this gets unwieldy...
How about:
MAKEALL --check-boards -s at91
;)
I feel this is the hard part of the problem, and what we're glossing over. What has to be tested by the board maintainer? What are we going to leave to their discretion? Will am335x_evm not count if I don't dig up the NOR cape for it?
For the time being I'd be glad to see reports of (un)successful boots with the configured bootm command.
But I see your point, there is another input vector for the tests. I think this could only be defined on a per-board basis. To pick up your example, I think it is worth knowing that someone tested that the am335x_evm boots via NAND. At least for the maintainer, so he can skip that and just test the NOR booting.
Best regards
Andreas Bießmann

Dear Tom,
In message 20131107133159.GR5925@bill-the-cat you wrote:
I feel this is the hard part of the problem, and what we're glossing over. What has to be tested by the board maintainer? What are we going to leave to their discretion? Will am335x_evm not count if I don't dig up the NOR cape for it?
Good question. Eventually this is something that develops over time.
Initially, we might be satisfied with a very basic "it works" message, which may just mean that this specific version booted on the actual hardware.
In the long run, we might provide a more detailed questionnaire to the reporter. I could for example imagine a tool that parses the board's config file and then provides some checkboxes - if there is NOR flash configured on the board, ask if NOR has been tested; similar for network, MMC, USB, ... other features.
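A minimal sketch of what such a tool could look like (purely illustrative; the script name and the feature-to-CONFIG_* mapping below are assumptions and would need to be refined over time):

    #!/bin/sh
    # test-checklist.sh <board> - hypothetical helper, not part of U-Boot
    # print a test checklist derived from the board's config header
    board=$1
    cfg=include/configs/${board}.h
    for entry in "NOR flash:CONFIG_FLASH_CFI_DRIVER" \
                 "network:CONFIG_CMD_NET" \
                 "MMC:CONFIG_CMD_MMC" \
                 "USB:CONFIG_CMD_USB"; do
        feature=${entry%%:*}
        symbol=${entry#*:}
        if grep -q "$symbol" "$cfg"; then
            echo "[ ] $feature tested?"
        fi
    done

A reporter would then only have to tick the boxes that apply to the version they actually ran.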
One day we might even have more developers using automatic test tools, so we could generate information on a per-command basis.
I know that I'm just dreaming, but we should try to just be open for any such future extensions, even if we start really small now.
I think we all agree that _any_ kind of test information will be better than none.
Best regards,
Wolfgang Denk

On Thu, Nov 07, 2013 at 08:26:57PM +0100, Wolfgang Denk wrote:
Dear Tom,
In message 20131107133159.GR5925@bill-the-cat you wrote:
I feel this is the hard part of the problem, and what we're glossing over. What has to be tested by the board maintainer? What are we going to leave to their discretion? Will am335x_evm not count if I don't dig up the NOR cape for it?
Good question. Eventually this is something that develops over time.
Initially, we might be satisfied with a very basic "it works" message, which may just mean that this specific version booted on the actual hardware.
In the long run, we might provide a more detailed questionnaire to the reporter. I could for example imagine a tool that parses the board's config file and then provides some checkboxes - if there is NOR flash configured on the board, ask if NOR has been tested; similar for network, MMC, USB, ... other features.
One day we might even have more developers using automatic test tools, so we could generate information on a per-command basis.
I know that I'm just dreaming, but we should try to just be open for any such future extensions, even if we start really small now.
I think we all agree that _any_ kind of test information will be better than none.
What we need to be careful of here is making sure whatever we grow is both useful and not overly complicated. What I honestly wonder about is automated testing for commands (crc32 pops to mind only because I just fixed things) but otherwise having things broken down into a front end where people select what they did "Booted a ___ into ___ via ___", provide some output from a command (maybe add just a touch more info to 'version') and cover non-boot testing with copy/paste'able drop-downs.
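One possible shape for such a submitted report, as a free-form sketch only (not an agreed format; all field names and placeholders here are made up):

    # hypothetical shape of one submitted report (not an agreed format)
    cat <<EOF
    board:     am335x_evm
    version:   <output of 'version' plus the git commit id>
    booted:    SPL -> U-Boot -> Linux kernel
    via:       MMC (SD card)
    non-boot:  crc32, mmc read/write (picked from drop-downs)
    tester:    <name / e-mail>
    EOF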
I know automated testing is The Thing, but given N frameworks, every one of them has issues because, frankly, every SoC family has its own quirks about how to boot and load, and what is and is not even feasible, especially for the bootloader.

Dear Tom,
In message 20131107205133.GT5925@bill-the-cat you wrote:
What we need to be careful of here is making sure whatever we grow is both useful and not overly complicated. What I honestly wonder about is automated testing for commands (crc32 pops to mind only because I just fixed things) but otherwise having things broken down into a front end where people select what they did "Booted a ___ into ___ via ___", provide some output from a command (maybe add just a touch more info to 'version') and cover non-boot testing with copy/paste'able drop-downs.
I know automated testing is The Thing, but given N frameworks, every one of them has issues because, frankly, every SoC family has its own quirks about how to boot and load, and what is and is not even feasible, especially for the bootloader.
Agreed. In the end, you will probably define a specific set of test cases that can be run automatically on a board. We will never have 100% coverage, nor any generic set of tests that fits all boards.
OK, a pretty large sub-set of common functionality can be tested in the sandbox, which is a great achievement ...
I still firmly believe that the approach we took a long time ago, i.e. to use a test framework (DUTS) in combination with a (today wiki-based) documentation framework (DULG), is something that can meet a wide range of requirements.
Unfortunately, we haven't yet been able to find a test framework that makes it easy for the average user to add new test cases. I'm not even talking here about the core of the framework, which has to deal with things like attaching to the console of a board, controlling power and/or reset state, detecting if a JTAG debugger is attached, and such things. In the current state, our code is expect based (and thus implemented in Tcl), and it's really a PITA to add new test cases to it.
But it would require significant effort to rewrite this in a more modern context...
Best regards,
Wolfgang Denk

Hello Wolfgang,
On 07.11.2013 20:26, Wolfgang Denk wrote:
Dear Tom,
In message 20131107133159.GR5925@bill-the-cat you wrote:
I feel this is the hard part of the problem, and what we're glossing over. What has to be tested by the board maintainer? What are we going to leave to their discretion? Will am335x_evm not count if I don't dig up the NOR cape for it?
Good question. Eventually this is something that develops over time.
Yes, let us go step by step.
Initially, we might be satisfied with a very basic "it works" message, which may just mean that this specific version booted on the actual hardware.
In the long run, we might provide a more detailed questionnaire to the reporter. I could for example imagine a tool that parses the board's config file and then provides some checkboxes - if there is NOR flash configured on the board, ask if NOR has been tested; similar for network, MMC, USB, ... other features.
Cool... and we could maybe start test scripts based on this information... (Whoa... I am sure I am sitting in front of my PC and not sleeping anymore)
One day we might even have more developers using automatic test tools, so we could generate information on a per-command basis.
I know that I'm just dreaming, but we should try to just be open for any such future extensions, even if we start really small now.
Maybe we add a directory under tools/ where we collect such tests? But this would be another big discussion...
I think we all agree that _any_ kind of test information will be better than none.
Yes.
bye, Heiko

Hello Tom,
On 07.11.2013 14:31, Tom Rini wrote:
On Thu, Nov 07, 2013 at 10:37:24AM +0100, Andreas Bießmann wrote:
Hello all together,
On 11/07/2013 09:17 AM, Heiko Schocher wrote:
On 06.11.2013 08:50, Wolfgang Denk wrote:
[snip]
So when you're once again doing some change that requires touching files for some other boards, you could simply check that database. If you see that 3 out of the last 5 releases have reported successful run-time tests you will probably decide to accept the needed efforts,
Hmm.. that works if you have to touch some (< 5) boards. But if you have to touch > 5 boards, this gets unwieldy...
How about:
MAKEALL --check-boards -s at91
;)
I feel this is the hard part of the problem, and what we're glossing over. What has to be tested by the board maintainer? What are we going to leave to their discretion? Will am335x_evm not count if I don't dig up the NOR cape for it?
Good point! What gets tested on a board can only be decided by the person who executes the test on the board. We must trust that it hopefully covers all functionality ...
Hmm... would we want to introduce test scripts to U-Boot which should be executed on boards? Hehe... I see, this discussion goes well beyond my quick idea of introducing just a small and fast livetimer mechanism ;-)
bye, Heiko
participants (5)
- Albert ARIBAUD
- Andreas Bießmann
- Heiko Schocher
- Tom Rini
- Wolfgang Denk