[U-Boot] Notes from the U-Boot BOF Meeting in Geneva 2012/07/12

Hi,
as promised, here are my expanded notes from the BoF meeting at LSM2012 last week. It was a pleasure to get some core developers into one room at the same time and discuss controversial topics without the health of any attendee being in jeopardy at any time, so thanks again to everybody for making this meeting as successful as it was!
[Note that this text expresses my understanding of the topics and thus may sometimes be slightly imprecise or bluntly wrong, so feel free to correct me where due :) ]
* Tips on how to use JTAG to debug custom hardware that does not boot
Some experiences on how to recover bricked hardware with the help of JTAG debuggers were exchanged. Some recurring problems one should be aware of in this area are:
- Breakpoints on uninitialized RAM
- How to cope with relocation (e.g. reload symbols in gdb with an offset to the ELF symbol table as described in the wiki; see the example below)
- Different system architectures, with different responsibilities for setting up RAM
- What is the BootROM doing before U-Boot is running? Analyze the system at hand with respect to its expected boot sequence before hooking into it with a debugger.
General rules for such details seem hard to come by, as the early phases of booting vary greatly across the range of hardware supported by U-Boot.
* U-Boot History visualized
The history of the U-Boot repository was shown visualized with the help of the gource tool:
http://code.google.com/p/gource/
Unfortunately the movie shown was 2.4GiB in size and was found to be unsuitable for putting up on our webspace. We will look into producing a smaller version to publish.
It is visually clear from the short movie that the U-Boot community and the amount of interaction have grown substantially over time. The sheer number of source files also grew in relation to that ;)
* Patman
As Simon Glass, the author of the 'patman' tool, was physically present, he agreed to give a short introduction to this tool, recently added to the U-Boot repository:
http://git.denx.de/?p=u-boot.git;a=tree;f=tools/patman
In short, this tool makes it possible to elegantly manage entire patch series over multiple iterations and to easily send them to the mailing list. Controlled through meta-data embedded in the git commit messages (and stripped before submission), it helps formally verify patch series with checkpatch, track changes between submissions and address the relevant mail recipients.
It was agreed that the help to be gained from the tool is substantial and that it should get more exposure in the documentation and among contributors.
* Conflict resolution: setting up a moderator procedure for unhappy submitters
A recent occurrence on the mailing list, where contributors were sent through multiple rounds of patch submissions for non-obvious reasons, served as grounds for discussing non-technical aspects of patch submissions. It was especially discussed that the privilege of being a custodian in U-Boot also brings obligations that should be defined more clearly. Essentially we would like to see somewhat "enforceable" rights for the "motor" of the project, i.e. for patch submitters.
For this we should extend our documentation to clearly cover the responsibilities of custodians. For example, when NAKing a patch, a custodian should be required to make constructive (testable) suggestions on how to get the patch accepted. Submitters can also enforce this by waiting for clear and precise suggestions for rework.
It was also found to be a good idea to document people willing to act as "moderators" for unhappy contributors, i.e. people who can look into problems and try to mediate. (I would volunteer to be such a moderator.)
The current lack of any reaction whatsoever was identified as a very discouraging sign for contributors. One thing we could do is declare a "soft" time limit (two weeks) within which patches need to be looked at. After this time limit, one could designate "backup custodians" to push patches to, or merge patches into some "-staging" branch. (What to do with that branch then?) For this to work, custodians would need to announce "off-times", which currently is not general consensus.
* Patchwork pros / cons
While talking about the patch submission process it became clear that the handling through patchwork still has some unsolved problems, the most obvious being the large backlog of patches in the archive.
It was agreed that automatically marking changes as accepted when they hit mainline would be a good thing. There is some info available on our wiki, although it is not clear who uses it:
http://www.denx.de/wiki/view/U-Boot/Patches#pwclient and http://www.mail-archive.com/patchwork@lists.ozlabs.org/msg00057.html
A tool to automatically mark patches as superseded when newer versions arrive was also suggested as very valuable.
Unanswered questions:
- What can patman do automatically?
- Can gerrit help the workflow?
During the discussion it became apparent that many people believe that patches outside the "regular" merge window are handled inefficiently, ending up as large piles in patchwork. It was agreed that rather than an "accept / don't accept" division, it should be an "accept into mainline / accept into -next branches" split, hopefully keeping the flow of incoming patches more uninterrupted. For this to work, every custodian would open his own "-next" branch and start merging patches from the mailing list, resulting in patchwork becoming cleaner during "bug fix" phases.
It was discussed whether to do some "automatic" merging of these per-custodian trees into a central next, but the majority of people believed that the patch handling process should remain as unchanged as possible, in keeping with the "principle of least surprise".
* Continuous integration
Discussing such automatic merges, the need for continuous integration became apparent. As a mid- to long-term goal we would like to see automatic builds shifting the requirement to "do MAKEALL" from individual posters to (cheap) machine time. One obstacle on this path is the complexity of automatic builds for different architectures, i.e. which toolchains to use and how to use them correctly.
It was agreed that collecting Mike's cross-toolchains on denx.de and making MAKEALL easier to run would be a good thing. In particular, MAKEALL should gain more switches for toolchain selection and for log and build directories; turning the current interpretation of environment variables into switches would be more in line with good Unix practice.
Simon Glass mentioned that he is working on "buildman" which he will present on the mailing list in due time.
(I probably forgot to explicitly mention the eldk-switch tool, which also has a "database" to set up the ELDK environment for known targets:
http://git.denx.de/?p=eldk-switch.git;a=summary
Maybe it should be extended to cover other toolchains as well?)
* Sandbox
For people not aware of the "user-space U-Boot", i.e. the sandbox configuration, Simon Glass described actual use cases for it in his work. He uses it as an excellent test bed for generic code and for tests that are not easy to set up on real hardware. During the discussion it became obvious that it may be a very good enabling tool for the "driver model" changes to come. It was also found to be a worthwhile aim to integrate it into the DUTS framework. (I will look into that when time allows.)
* U-Boot driver model
Having some more time on our hands, Marek Vasut was asked to interactively present some aspects of his and his team's work on a driver model. In addition to the key concepts shown in his presentation on Thursday, it became clear that it would be extremely helpful if the Linux and U-Boot driver models were as close as possible, to ease porting _and_ maintenance of drivers from the former to the latter. It appears that the Linux driver model lacks the late initialization needed by the U-Boot "initialize hardware only when needed" design goal; achieving this was one motivation not to simply "clone" the Linux driver model. Although this and other details were discussed very intensely, it was more than obvious that a lot of current problems and scalability limits of our code base can be alleviated with a solid driver model. All participants agreed to review more closely the documents available in Marek's git repo:
git://git.denx.de/u-boot-marex.git (dm branch, doc/driver-model/)
Discussion of the driver model is currently hosted on a separate mailing list:
http://lists.denx.de/mailman/listinfo/u-boot-dm
Maybe the discussion should be moved completely to the "regular" users mailing list.
* Followup Meetings
Most participants expressed interest in further meetings of the U-Boot developers in the future, probably once a year. We will actively look for occasions more closely tied to the embedded community for this.
Ok, this was actually agreed on after the first couple of beers, but still I believe it fits in here nicely ;)
* Organizational info
Attendees (in alphabetical order of first names):
Anatolij Gustschin, Detlev Zundel, Fabio Estevam, François Revol (Haiku project), Marek Vasut, Mike Frysinger, Simon Glass, Stefan Roese, Stefano Babic, Tom Rini, Wolfgang Grandegger
I'm sorry that I missed the name of one other participant who uses U-Boot on a re-implemented Atari ST (Hatari?) project and gave some interesting "user" feedback on several topics.
Cheers Detlev

Hi Detlev,
On Tue, Jul 17, 2012 at 7:30 AM, Detlev Zundel dzu@denx.de wrote:
- Conflict resolution: setting up a moderator procedure for unhappy submitters
A recent occurrence on the mailing list, where contributors were sent through multiple rounds of patch submissions for non-obvious reasons, served as grounds for discussing non-technical aspects of patch submissions. It was especially discussed that the privilege of being a custodian in U-Boot also brings obligations that should be defined more clearly. Essentially we would like to see somewhat "enforceable" rights for the "motor" of the project, i.e. for patch submitters.
For this we should extend our documentation to clearly cover the responsibilities of custodians. For example, when NAKing a patch, a custodian should be required to make constructive (testable) suggestions on how to get the patch accepted. Submitters can also enforce this by waiting for clear and precise suggestions for rework.
It was also found to be a good idea to document people willing to act as "moderators" for unhappy contributors, i.e. people who can look into problems and try to mediate. (I would volunteer to be such a moderator.)
For some first-time submitters, it can be a very daunting (and discouraging) process to get patches accepted - especially if they are submitting a large and complex patch set to introduce a new board or arch.
I have in the past acted as a mentor for first-time submitters. This involved taking the submission off-list for a couple of revisions. This gives the submitter some room for error and an opportunity to have the submission process explained in direct relation to their patches, without the often blunt and confusing feedback that arises on the list. Of course this has its own set of problems, as it can result in strange revision numbers when the patches come back to the list.
I'm more than happy to continue doing this on the understanding that it can lead to some 'jitter' in the submission process.
The current lack of any reaction whatsoever was identified as a very discouraging sign for contributors. One thing we could do is declare a "soft" time limit (two weeks) within which patches need to be looked at. After this time limit, one could designate "backup custodians" to push patches to, or merge patches into some "-staging" branch. (What to do with that branch then?) For this to work, custodians would need to announce "off-times", which currently is not general consensus.
I like this idea in principle. For it to work, I think there need to be two or three 'top-tier' custodians who will assign 'stale' patches to another custodian. And maybe one top-top-tier custodian (Hi Wolfgang ;)) in case a patch goes very stale.
I think a couple of ideas could be employed:
- If a custodian is backlogged (or otherwise unable to immediately review a patch), they should issue a quick reply with an estimated date when they will be able to review it. At least then the submitter is not left in the dark.
- If a custodian simply does not have time to review a patch, they should request that another custodian take ownership. If no other custodian responds, the top-tier custodians can take action.
- If a patch is submitted outside the merge window, or requires major rework such that the custodian does not think it will make the current merge, a clear message needs to be provided to the submitter (the recent ZFS patches are a good example).
- If the patch is a 'miscellaneous' patch (i.e. has no clear custodian), then whichever custodian 'claims' it should send a message to the list stating so. That custodian is then responsible for communicating with the submitter regarding the status of the patch (via the list, or offline if required - see my mentoring comments above).
Whatever we do, each patch needs to have clear indications of:
- Which custodian is responsible for it
- When it will get reviewed
- What the review state is (approved, awaiting changes, rejected etc.)
- When it will get merged (current release or next release)
- Patchwork pros / cons
While talking about the patch submission process it became clear that the handling through patchwork still has some unsolved problems, the most obvious being the large backlog of patches in the archive.
I haven't had a chance to do my usual patchwork housekeeping for a while. One annoying thing about patchwork is that old revisions do not get automatically marked as superseded.
It was agreed that automatically marking changes as accepted when they hit mainline would be a good thing. There is some info available on our wiki, although it is not clear who uses it:
http://www.denx.de/wiki/view/U-Boot/Patches#pwclient and http://www.mail-archive.com/patchwork@lists.ozlabs.org/msg00057.html
A tool to automatically mark patches as superseded when newer versions arrive was also suggested as very valuable.
As I said :)
Also, if one (and only one) maintainer is Cc'd on a patch, it would be nice if it were automatically assigned to them. The same goes for tags in the patch subject - there should be a way to automatically assign a fair number of patches.
Unanswered questions:
- What can patman do automatically?
- Can gerrit help the workflow?
During the discussion it became apparent that many people believe that patches outside the "regular" merge window are handled inefficiently, ending up as large piles in patchwork. It was agreed that rather than an "accept / don't accept" division, it should be an "accept into mainline / accept into -next branches" split, hopefully keeping the flow of incoming patches more uninterrupted. For this to work, every custodian would open his own "-next" branch and start merging patches from the mailing list, resulting in patchwork becoming cleaner during "bug fix" phases.
And this is where patchwork fails us - the list of states is extremely limiting.
It was discussed whether to do some "automatic" merging of these per-custodian trees into a central next, but the majority of people believed that the patch handling process should remain as unchanged as possible, in keeping with the "principle of least surprise".
I agree that automatic merging is a 'Bad Thing(tm)'. But one thing I notice (and I don't know if this is a recent thing) is that there seems to be zero merge activity up to the closing of the merge window and then a rash of merging just prior to the RCs. I favour a more continuous merge strategy.
- Continuous integration
Discussing such automatic merges, the need for continuous integration became apparent. As a mid- to long-term goal we would like to see automatic builds shifting the requirement to "do MAKEALL" from individual posters to (cheap) machine time. One obstacle on this path is the complexity of automatic builds for different architectures, i.e. which toolchains to use and how to use them correctly.
'Bring out your nightly builds!'
I very much like the idea of continuous integration - the number of times a build breaks only to be picked up weeks after the offending commit is getting a bit annoying.
It was agreed that collecting Mike's cross-toolchains on denx.de and making MAKEALL easier to run would be a good thing. In particular, MAKEALL should gain more switches for toolchain selection and for log and build directories; turning the current interpretation of environment variables into switches would be more in line with good Unix practice.
Simon Glass mentioned that he is working on "buildman" which he will present on the mailing list in due time.
Looking forward to it
Regards,
Graeme

On Tuesday 17 July 2012 01:11:01 Graeme Russ wrote:
It was discussed whether to do some "automatic" merging of these per-custodian trees into a central next, but the majority of people believed that the patch handling process should remain as unchanged as possible, in keeping with the "principle of least surprise".
I agree that automatic merging is a 'Bad Thing(tm)'. But one thing I notice (and I don't know if this is a recent thing) is that there seems to be zero merge activity up to the closing of the merge window and then a rash of merging just prior to the RCs. I favour a more continuous merge strategy.
I favored the automatic merging at the conference mainly because of one reason:
To detect potential merge conflicts as early as possible, and to send the result of this automated merge to the list (or a new list).
In combination with (automated) nightly builds this not only catches merge conflicts but also build problems. All this should be pretty easy to automate, and it moves the detection of those problems closer to the submission of the patches, so we (and the original patch authors) don't have to figure out what a patch was all about weeks later.
Just my 0.02$. :)
Thanks, Stefan
--
DENX Software Engineering GmbH, MD: Wolfgang Denk & Detlev Zundel
HRB 165235 Munich, Office: Kirchenstr. 5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-0  Fax: (+49)-8142-66989-80  Email: office@denx.de

Hi Stefan,
On 07/17/2012 08:37 PM, Stefan Roese wrote:
On Tuesday 17 July 2012 01:11:01 Graeme Russ wrote:
It was discussed whether to do some "automatic" merging of these per-custodian trees into a central next, but the majority of people believed that the patch handling process should remain as unchanged as possible, in keeping with the "principle of least surprise".
I agree that automatic merging is a 'Bad Thing(tm)'. But one thing I notice (and I don't know if this is a recent thing) is that there seems to be zero merge activity up to the closing of the merge window and then a rash of merging just prior to the RCs. I favour a more continuous merge strategy.
I favored the automatic merging at the conference mainly because of one reason:
To detect potential merge conflicts as early as possible, and to send the result of this automated merge to the list (or a new list).
100% agree with the first sentence. I'm worried about how much extra traffic an auto-build would cause if it were mailing the list as well.
In combination with (automated) nightly builds this not only catches merge conflicts but also build problems. All this should be pretty easy to automate, and it moves the detection of those problems closer to the submission of the patches, so we (and the original patch authors) don't have to figure out what a patch was all about weeks later.
I think U-Boot has reached the point where purely manual patch management is no longer cutting the mustard.
Maybe it's time to seriously look at a gerrit + jenkins based solution?
Regards,
Graeme

On 07/17/2012 10:10 PM, Graeme Russ wrote:
Maybe it's time to seriously look at a gerrit + jenkins based solution?
Here's a good demo video:
http://alblue.bandlem.com/2011/02/gerrit-git-review-with-jenkins-ci.html

Dear Graeme,
In message 5005562E.6070903@gmail.com you wrote:
I think U-Boot has reached the point where purely manual patch management is no longer cutting the mustard.
100% agreed. The problem I see is that we haven't found a tool that provides the needed interfaces to deal with the amount of patches we have to handle.
Patchwork has a number of serious problems, as I see it:
- It chokes on a lot of (or all?) base64-encoded messages that have special characters in the committer's name (and/or the commit message). In my opinion this renders the tool more or less worthless, if you cannot rely on it really covering all patches.
- Our main communication is e-mail based, and this has proven to be a very efficient way to do the work we have to do - so we need a tool that integrates with this tooling. When I send an e-mail reply to a submitter requesting changes to his patch, I want to be able to use the same e-mail message to automatically change the patch status in PW. Unfortunately, no such way exists.
- PW identifies patches based on the hash value over the commit body. It appears to search oldest-first, and stop on the first hit. This causes problems with resubmitted patches. Assume someone submits a patch, and I ask for changes in the commit message (better documentation, etc.); I mark the patch as "changes requested". The submitter sends a new version, with an improved commit message. I mark the old patch as "superseded", and the new one as "under review". When I finally want to apply the new patch, I usually do this from my mailer, which results in using the hash to locate the patch. Result: 1) The old patch gets applied instead of the new one. 2) The old patch gets marked as applied, while the new one remains in the "under review" state.
These are just the most aggravating bugs; I've discussed all of these on the PW mailing list, and with JK. Nothing has happened since.
For me PW is more or less dead.
Maybe it's time to seriously look at a gerrit + jenkins based solution?
I am not sure that gerrit will solve any of the problems we have. I may be missing it, but for example I don't see any integration into a mostly e-mail based work flow. From what I have seen so far (which is not much, I admit) it appears we would again add another tool that in the first place requires additional steps which interrupt the work flow. Speaking for myself, this is a killing point.
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
Best regards,
Wolfgang Denk

Hi Wolfgang,
On Wed, Jul 18, 2012 at 5:21 PM, Wolfgang Denk wd@denx.de wrote:
Dear Graeme,
In message 5005562E.6070903@gmail.com you wrote:
I think U-Boot has reached the point where purely manual patch management is no longer cutting the mustard.
100% agreed. The problem I see is that we haven't found a tool that provides the needed interfaces to deal with the amount of patches we have to handle.
Patchwork has a number of serious problems, as I see it:
[snip PW issue list]
For me PW is more or less dead.
The problem is, without 'a tool' it is practically impossible to keep track of the status of all the patches submitted to the list. Despite its flaws, Patchwork has at least helped in this regard.
Maybe it's time to seriously look at a gerrit + jenkins based solution?
I am not sure that gerrit will solve any of the problems we have. I may be missing it, but for example I don't see any integration into a mostly e-mail based work flow. From what I have seen so far (which is not much, I admit) it appears we would again add another tool that in the first place requires additional steps which interrupt the work flow. Speaking for myself, this is a killing point.
There are a few things I don't like about gerrit:
- Not based on an email-centric workflow
- Need to 'drill-down' to get to the actual patch
- UI is overly verbose
But there are other things I do like:
- Maintains the revision history of each patch
- Keeps track of review status
- Keeps track of which branch the patch is against
Patchwork is GPL'd and, in my personal opinion, gets fairly close to what we might need. Maybe we could take Patchwork and modify it to suit our needs?
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
OK, so we already have a fair number of in-house tools that have been developed to get the job done. We have checkpatch.pl, patman, buildman (in development), and Marek's build framework. Why don't we look at integrating these? A modified Patchwork could:
- Automatically run checkpatch and test if the patch applies
- Notify the build framework to trigger a build-test
- Apply patches to repos when the maintainer sends an 'Accepted-by:' to the mailing list
- Re-run apply and build tests when a maintainer issues a pull request
- Re-run the apply and build tests on all 'staged' patches when patches are committed or branches are merged
In short, we have three options:
- Modify our workflow so we can use existing tools
- Modify existing tools and/or create new tools to match our existing workflow
- A bit of both
And remember, Linus wrote git because no other tool was available that exactly suited his needs
Regards,
Graeme

Dear Graeme Russ,
[...]
Maybe it's time to seriously look at a gerrit + jenkins based solution?
I am not sure that gerrit will solve any of the problems we have. I may be missing it, but for example I don't see any integration into a mostly e-mail based work flow. From what I have seen so far (which is not much, I admit) it appears we would again add another tool that in the first place requires additional steps which interrupt the work flow. Speaking for myself, this is a killing point.
There are a few things I don't like about gerrit:
- Not based on an email-centric workflow
+1
- Need to 'drill-down' to get to the actual patch
- UI is overly verbose
Add:
- it's Java crap, prone to breakage
- it's overengineered
And as for Jenkins -- with all that plugin infrastructure, it's so vast it can even make coffee and bake a cake, damn it!
But there are other things I do like:
- Maintains the revision history of each patch
If you follow some rules though :/
- Keeps track of review status
Not so usable tho
- Keeps track of which branch the patch is against
Yes?
Patchwork is GPL'd and, in my personal opinion, gets fairly close to what we might need. Maybe we could take Patchwork and modify it to suit our needs?
Maybe ... where're the sources?
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
OK, so we already have a fair number of in-house tools that have been developed to get the job done. We have checkpatch.pl, patman, buildman (in development), and Marek's build framework. Why don't we look at integrating these? A modified Patchwork could:
- Automatically run checkpatch and test if the patch applies
But based on tags in the email header, so it'd know against which tree. This is doable, yes!
- Notify the build framework to trigger a build-test
Which might schedule a vast MAKEALL across all arches, effectively clogging it very soon.
- Apply patches to repos when the maintainer sends an 'Accepted-by:' to the mailing list
Such email can be forged!
- Re-run apply and build tests when a maintainer issues a pull request
You mean when the maintainer clicks "Submit pull RQ of this branch" ... then it rebuilds it, and only after it passes does it submit the pull request?
- Re-run the apply and build tests on all 'staged' patches when patches are committed or branches are merged
Um, what do you mean here?
In short, we have three options:
- Modify our workflow so we can use existing tools
- Modify existing tools and/or create new tools to match our existing workflow
- A bit of both
And remember, Linus wrote git because no other tool was available that exactly suited his needs
Regards,
Graeme
Best regards, Marek Vasut

Hi Marek,
On Sun, Jul 22, 2012 at 12:46 AM, Marek Vasut marex@denx.de wrote:
Dear Graeme Russ,
Patchwork is GPL'd and, in my personal opinion, gets fairly close to what we might need. Maybe we could take Patchwork and modify it to suit our needs?
Maybe ... where're the sources?
git clone git://ozlabs.org/home/jk/git/patchwork
OK, so we already have a fair number of in-house tools that have been developed to get the job done. We have checkpatch.pl, patman, buildman (in development), and Marek's build framework. Why don't we look at integrating these? A modified Patchwork could:
- Automatically run checkpatch and test if the patch applies
But based on tags in the email header, so it'd know against which tree. This is doable, yes!
- Notify the build framework to trigger a build-test
Which might schedule a vast MAKEALL across all arches, effectively clogging it very soon.
Yes, I know. Hmmm, maybe every 24 hours the auto-build infrastructure:
- Runs a MAKEALL on the mainline repo (if any patches have been committed)
- Runs a MAKEALL after applying all patches meeting pre-determined conditions. For example:
  - All patches passing the automated 'checkpatch'
  - All patches which have been ack'd or tested
  - All patches applied to sub-repos (i.e. do a git pull of each sub-repo)
- If the mainline MAKEALL is clean but the 'patched' MAKEALL is not, use git bisect to identify the first patch that breaks the build (see the sketch below)
- Apply patches to repos when the maintainer sends an 'Accepted-by:' to the mailing list
Such email can be forged!
- Re-run apply and build tests when a maintainer issues a pull request
You mean when the maintainer clicks "Submit pull RQ of this branch" ... then it rebuilds it, and only after it passes does it submit the pull request?
Yes - But see above. If the build infrastructure is building with all the repos applied we will get instant feedback that a repo is out-of-step with mainline rather than waiting for Wolfgang to pull.
- Re-run the apply and build tests on all 'staged' patches when patches are committed or branches are merged
Um, what do you mean here?
Well, we effectively have:
a) Mainline
b) Mainline + patches applied to repos
c) Mainline + patches applied to repos + unapplied ack'd patches
c) Mainline + patches applied to repos + unapplied ack'd patches + unack'd patches
My thought would be to build-test a, b and c every 24 hours and, if c passes MAKEALL, do a git-apply test on the outstanding patches (and maybe a MAKEALL)
Regards,
Graeme

Dear Graeme Russ,
Hi Marek,
On Sun, Jul 22, 2012 at 12:46 AM, Marek Vasut marex@denx.de wrote:
Dear Graeme Russ,
Patchwork is GPL'd and, in my personal opinion, gets fairly close to what we might need. Maybe we could take Patchwork and modify it to suit our needs?
Maybe ... where're the sources?
git clone git://ozlabs.org/home/jk/git/patchwork
Yep, found it recently ... I can't do python and I don't have the capacity to learn it now, any volunteers?
OK, so we already have a fair number of in-house tools that have been developed to get the job done. We have checkpatch.pl, patman, buildman (in development), and Marek's build framework. Why don't we look at integrating these? A modified Patchwork could:
- Automatically run checkpatch and test if the patch applies
But based on tags in the email header, so it'd know against which tree. This is doable, yes!
- Notify the build framework to trigger a build-test
Which might schedule vast MAKEALL across all arches, effectivelly clogging it very soon.
Yes, I know. Hmmm, maybe every 24 hours the auto-build infrastructure:
- Runs a MAKEALL on the mainline repo (if any patches have been committed)
Certainly ... it takes 16 hours to do so on my dedicated machine though (more now, since I started building sparc too ;-D ). But WD has some pretty badass machines that can do it really quick :-)
- Runs a MAKEALL after applying all patches meeting pre-determined conditions. For example:
- All patches passing the automated 'checkpatch'
- All patches which have been ack'd or tested
Or simply build it again and again, with patches that passed checkpatch applied and do one big build of the main repo every 24 hours if patches were added.
And if the patches passed compile-testing, we can mark them somehow.
- All patches applied to sub-repos (i.e. do a git pull of each sub-repo)
- If the mainline MAKEALL is clean but the 'patched' MAKEALL is not, use git bisect to identify the first patch that breaks the build
Hm yeah ... a server farm is needed here. Badass machines and a badass cluster are two different things ;-)
- Apply patches to repos when the maintainer sends an 'Accepted-by:' to the mailing list
Such email can be forged!
- Re-run apply and build tests when a maintainer issues a pull request
You mean when the maintainer clicks "Submit pull RQ of this branch" ... then it rebuilds it, and only after it passes does it submit the pull request?
Yes - But see above. If the build infrastructure is building with all the repos applied we will get instant feedback that a repo is out-of-step with mainline rather than waiting for Wolfgang to pull.
Uh, I think my english decoder got clogged somewhere in here ... can you ignore/abort/retry please ? ;-)
- Re-run the apply and build tests on all 'staged' patches when patches are committed or branches are merged
Um, what do you mean here?
Well, we effectively have:
a) Mainline
b) Mainline + patches applied to repos
c) Mainline + patches applied to repos + unapplied ack'd patches
c) Mainline + patches applied to repos + unapplied ack'd patches + unack'd patches
My thought would be to build-test a, b and c every 24 hours and, if c passes MAKEALL, do a git-apply test on the outstanding patches (and maybe a MAKEALL)
You have c) twice in there (nit :) ). Still, we'd need a pretty badass build setup for that, right? But indeed, such an algo sounds nice.
Regards,
Graeme
Best regards, Marek Vasut

Hi Marek,
On Mon, Jul 23, 2012 at 11:47 AM, Marek Vasut marex@denx.de wrote:
Dear Graeme Russ,
Yes - But see above. If the build infrastructure is building with all the repos applied we will get instant feedback that a repo is out-of-step with mainline rather than waiting for Wolfgang to pull.
Uh, I think my english decoder got clogged somewhere in here ... can you ignore/abort/retry please ? ;-)
Sometimes after Wolfgang pulls a repo (or applies patches directly to mainline), a subsequent pull of another repo will have a merge conflict. It would be nice to catch these early.
<random thoughts> What I am thinking of is a patch tracker (not manager) which basically has an internal queue of unapplied (to mainline) patches. When a patch gets submitted, it will be sanity-checked (checkpatch). If the sanity checks pass (or are overruled), then a git-apply test is run. If this passes, the patch gets added to the queue. The mailing list gets informed that the patch has been 'provisionally accepted' and has been queued for formal review.
If a patch gets NACK'd, or the auto-build infrastructure determines that the patch breaks the build, the patch gets removed from the queue. When a patch gets removed, the mailing list gets informed that the patch has been removed from the queue and that a new revision needs to be resubmitted.
If a new revision of a patch is submitted, the patch tracker attempts to replace the old patch at the same location. If the new patch cannot be applied, it (and the old patch) gets removed from the queue.
I'm thinking that the patch tracker can keep track of which repo the patch belongs to. If the patch tracker had a non-mailing list interface that triggered the patch tracker to apply the patch to the corresponding repo, that would be great.
So any time a patch is committed to mainline or a repo, the patch tracker would remove that patch from the queue and then redo the git-apply test on each patch left in the queue. </random thoughts>
Regards,
Graeme

Dear Graeme Russ,
Hi Marek,
On Mon, Jul 23, 2012 at 11:47 AM, Marek Vasut marex@denx.de wrote:
Dear Graeme Russ,
Yes - But see above. If the build infrastructure is building with all the repos applied we will get instant feedback that a repo is out-of-step with mainline rather than waiting for Wolfgang to pull.
Uh, I think my english decoder got clogged somewhere in here ... can you ignore/abort/retry please ? ;-)
Sometimes after Wolfgang pulls a repo (or applies patches directly to mainline), a subsequent pull of another repo will have a merge conflict. It would be nice to catch these early.
Oh, that's true
<random thoughts>
Ok, this tag proved to be very dangerous in the past when produced by you ;-D
What I am thinking of is a patch tracker (not manager) which basically has an internal queue of unapplied (to mainline) patches. When a patch gets submitted, it will be sanity-checked (checkpatch). If the sanity checks pass (or are overruled), then a git-apply test is run. If this passes, the patch gets added to the queue. The mailing list gets informed that the patch has been 'provisionally accepted' and has been queued for formal review.
Mm mm, nice :)
If a patch gets NACK'd, or the auto-build infrastructure determines that the patch breaks the build, the patch gets removed from the queue. When a patch gets removed, the mailing list gets informed that the patch has been removed from the queue and that a new revision needs to be resubmitted.
If a new revision of a patch is submitted, the patch tracker attempts to replace the old patch at the same location. If the new patch cannot be applied, it (and the old patch) gets removed from the queue.
Here is the point where you'll need the deus ex machina to intervene ... since some patches can't be processed automatically so easily.
I'm thinking that the patch tracker can keep track of which repo the patch belongs to. If the patch tracker had a non-mailing list interface that triggered the patch tracker to apply the patch to the corresponding repo, that would be great.
Certainly.
So any time a patch is committed to mainline or a repo, the patch tracker would remove that patch from the queue and then redo the git-apply test on each patch left in the queue. </random thoughts>
Wowzie, I survived the section this time :-)
But then, how shall we go about it? Any python gurus around?
Regards,
Graeme
[...]
Best regards, Marek Vasut

Marek wrote loads of stuff then wrote...
But then, how shall we go about it? Any python gurus around?
I wouldn't class myself as a "guru", as that should be a title bestowed on you by others, but I know a fair amount of Python and might be able to have a go at implementing some of this stuff once we know exactly what we are after.
Andy.

Dear Graeme Russ,
In message CALButCKeTeWHbgspHM6=Wyw+7mXWvq6mJH8WxHyFLzVxbE2ByA@mail.gmail.com you wrote:
What I am thinking of is a patch tracker (not manager) which basically has an internal queue of unapplied (to mainline) patches. When a patch gets submitted, it will be sanity-checked (checkpatch). If the sanity checks pass (or are overruled), then a git-apply test is run. If this passes, the patch gets added to the queue. The mailing list gets informed that the patch has been 'provisionally accepted' and has been queued for formal review.
This is something PatchWork should be able to do.
I'm thinking that the patch tracker can keep track of which repo the patch belongs to. If the patch tracker had a non-mailing list interface that triggered the patch tracker to apply the patch to the corresponding repo, that would be great.
Sorry, I cannot parse the last sentence.
So any time a patch is committed to mainline or a repo, the patch tracker would remove that patch from the queue and then redo the git-apply test on each patch left in the queue.
Sounds like a suggestion for an extension to PW to me...
On the other hand, any such tool will fail in all cases where the users do not strictly follow the rules. Trivial example: when people post a new version of a patch without proper In-reply-to: and/or References: headers, no such tool will be able to associate the submission with the older patch(es) the new posting is supposed to supersede. Just look at what PW looks like...
Best regards,
Wolfgang Denk

Dear Marek Vasut,
In message 201207230347.31993.marex@denx.de you wrote:
Yes, I know. Hmmm, maybe every 24 hours the auto-build infrastructure:
- Runs a MAKEALL on the mainline repo (if any patches have been committed)
Certainly ... it takes 16 hours to do so on my dedicated machine though (more now, since I started building sparc too ;-D ). But WD has some pretty badass machines that can do it really quick :-)
Not nearly quick enough for all arches and all boards and all repos...
- All patches applied to sub-repos (i.e. do a git pull of each sub-repo)
- If the mainline MAKEALL is clean but the 'patched' MAKEALL is not, use git bisect to identify the first patch that breaks the build
Hm yeah ... a server farm is needed here. Badass machines and a badass cluster are two different things ;-)
Sponsors are needed to pay for such an infrastructure...
You have c) twice in there (nit :) ). Still, we'd need a pretty badass build setup for that, right? But indeed, such an algo sounds nice.
I wish we had unlimited resources...
But then - U-Boot is _much_ smaller than Linux, and they do not do anything like that, yet they manage to keep going. What are we doing wrong / differently?
Best regards,
Wolfgang Denk

On Mon, Jul 23, 2012 at 08:20:15AM +0200, Wolfgang Denk wrote:
Dear Marek Vasut,
In message 201207230347.31993.marex@denx.de you wrote:
Yes, I know. Hmmm, maybe every 24 hours the auto-build infrastructure:
- Runs a MAKEALL on the mainline repo (if any patches have been committed)
Certainly ... it takes 16 hours to do so on my dedicated machine though (more now, since I started building sparc too ;-D ). But WD has some pretty badass machines that can do it really quick :-)
Not nearly quick enough for all arches and all boards and all repos...
- All patches applied to sub-repos (i.e. do a git pull of each sub-repo)
- If the mainline MAKEALL is clean but the 'patched' MAKEALL is not, use git bisect to identify the first patch that breaks the build
Hm yeah ... a server farm is needed here. Badass machines and a badass cluster are two different things ;-)
Sponsors are needed to pay for such an infrastructure...
You have c) twice in there (nit :) ). Still, we'd need a pretty badass build setup for that, right? But indeed, such an algo sounds nice.
I wish we had unlimited resources...
But then - U-Boot is _much_ smaller than Linux, and they do not do anything like that, yet they manage to keep going. What are we doing wrong / differently?
I think it boils down to community size. There are a lot of people building and testing random combinations. There's just not much of that today for U-Boot.

Dear Tom Rini,
On Mon, Jul 23, 2012 at 08:20:15AM +0200, Wolfgang Denk wrote:
Dear Marek Vasut,
In message 201207230347.31993.marex@denx.de you wrote:
Yes, I know. Hmmm, maybe every 24 hours the auto-build infrastructure:
- Runs a MAKEALL on the mainline repo (if any patches have been committed)
Certainly ... it takes 16 hours to do so on my dedicated machine though (more now, since I started building sparc too ;-D ). But WD has some pretty badass machines that can do it really quick :-)
Not nearly quick enough for all arches and all boards and all repos...
- All patches applied to sub-repos (i.e. do a git pull of each sub-repo)
- If the mainline MAKEALL is clean but the 'patched' MAKEALL is not, use git bisect to identify the first patch that breaks the build
Hm yeah ... a server farm is needed here. Badass machines and a badass cluster are two different things ;-)
Sponsors are needed to pay for such an infrastructure...
You have c) twice in there (nit :) ). Still, we'd need a pretty badass build setup for that, right? But indeed, such an algo sounds nice.
I wish we had unlimited resources...
But then - U-Boot is _much_ smaller than Linux, and they do not do anything like that, yet they manage to keep going. What are we doing wrong / differently?
I think it boils down to community size. There are a lot of people building and testing random combinations. There's just not much of that today for U-Boot.
Yea ... I think our community is kinda broken :-(
Best regards, Marek Vasut

Dear Graeme,
In message CALButCJ7CG+H-8rOBBiXSdb8_BS13oucgF8QGN53+M0sBrWRCw@mail.gmail.com you wrote:
Yes, I know. Hmmm, maybe every 24 hours the auto-build infrastructure:
- Runs a MAKEALL on the mainline repo (if any patches have been committed)
- Runs a MAKEALL after applying all patches meeting pre-determined conditions. For example:
- All patches passing the automated 'checkpatch'
- All patches which have been ack'd or tested
- All patches applied to sub-repos (i.e. do a git pull of each sub-repo)
- If the mainline MAKEALL is clean but the 'patched' MAKEALL is not, use git bisect to identify the first patch that breaks the build
I dare predict that this will never work. We have some 40+ custodian repos, and automatically pulling all of them at an arbitrary point in time will probably always cause some kind of merge conflict.
Yes - But see above. If the build infrastructure is building with all the repos applied we will get instant feedback that a repo is out-of-step with mainline rather than waiting for Wolfgang to pull.
For some repos it is more or less the "normal" situation to be out of sync with mainline.
Well, we effectively have:
a) Mainline
b) Mainline + patches applied to repos
...to a large number (> 40) of repos, all in very different states.
c) Mainline + patches applied to repos + unapplied ack'd patches
c) Mainline + patches applied to repos + unapplied ack'd patches + unack'd patches
My thought would be to build-test a, b and c every 24 hours and, if c passes MAKEALL, do a git-apply test on the outstanding patches (and maybe a MAKEALL)
Running a full MAKEALL for all architectures and boards, for all (> 40) repositories, every 24 hours, requires more CPU and I/O cycles than we can currently afford.
We probably need a much more intelligent test build system for such patches:
1) Try to find out which architecture(s) and/or boards are affected:
   - Does it modify files in arch/FOO? Then ARCH=FOO is affected.
   - Does it modify driver BAR? This driver is enabled for the following boards: ...
2) Build a small subset (2...5 boards) of the affected boards.
3) Terminate the build after the first board that shows new errors or warnings.
Only patches that pass all these tests should be advanced to the next stage, where they get applied to some tree.
Best regards,
Wolfgang Denk

On Mon, Jul 23, 2012 at 08:16:12AM +0200, Wolfgang Denk wrote:
[snip]
Running a full MAKEALL for all architectures and boards, for all (> 40) repositories, every 24 hours, requires more CPU and I/O cycles than we can currently afford.
MAKEALL is indeed consuming. But I wanted to follow up here on something I've talked a little about on IRC. On my local box I've cut the time it takes for MAKEALL -a powerpc down from 60 min to 33 min by using BUILD_NBUILDS=6 BUILD_NCPUS=1 on a 6-processor box (grep -c processor /proc/cpuinfo). I've seen similar reductions on my TI-parts-only builds. Not that this solves all of our problems, but it should help.

Dear Tom Rini,
On Mon, Jul 23, 2012 at 08:16:12AM +0200, Wolfgang Denk wrote:
[snip]
Running a full MAKEALL for all architectures and boards, for all (> 40) repositories, every 24 hours, requires more CPU and I/O cycles than we can currently afford.
MAKEALL is indeed consuming. But I wanted to follow up here on something I've talked a little about on IRC. On my local box I've cut the time it takes for MAKEALL -a powerpc down from 60 min to 33 min by using BUILD_NBUILDS=6 BUILD_NCPUS=1 on a 6-processor box (grep -c processor /proc/cpuinfo). I've seen similar reductions on my TI-parts-only builds. Not that this solves all of our problems, but it should help.
So, WD and I discussed this on Jabber (yay, so many communication protocols involved ;-) ) and we figured out that some boards will simply be broken by the DM. So we'll be able to carve those out and trim down the build times.
Best regards, Marek Vasut

On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic on top of a bash script to do the building and testing.

Dear Tom Rini,
On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic
Not really - what about the warning logic? I.e. I actually need Jenkins to do tristate results. How, I didn't figure out.
on top of a bash script to do the building and testing.
So in the end, Jenkins is just an executor, bringing in a pile of Java overhead and possible random breakage. I use MAKEALL in my script, which does exactly what I need ... and even tests MAKEALL ;-)
Best regards, Marek Vasut

Hi All,
On 07/21/2012 11:27 AM, Marek Vasut wrote:
Dear Tom Rini,
On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic
Not really - what about the warning logic? I.e. I actually need Jenkins to do tristate results. How, I didn't figure out.
on top of a bash script to do the building and testing.
So in the end, Jenkins is just an executor, bringing in a pile of Java overhead and possible random breakage. I use MAKEALL in my script, which does exactly what I need ... and even tests MAKEALL ;-)
I don't think a protracted "tool x doesn't do this" and "tool y doesn't do that" is going to get us anywhere.
What we need to do is define exactly what we want out of the patch management, automated build, etc. tools. We can then see if there are any existing tools which fit our needs. If no existing tools fit, look at the ones that come closest and investigate what would be required to get them to a state where they would.
We already know that git is a perfect fit for source code management, and the mailing list is how we will continue to submit, review and discuss patches. So that gives a good starting point.
Patch Management:
- Integrate with the existing email workflow. It must pick up patches from the mailing list, and any output it generates must get posted to the mailing list
- Reliably track revisions of patches (mark superseded versions as such)
- Automatically run sanity checks (checkpatch, test apply, etc.)
- Track which repo patches belong to
- Rerun sanity checks on unapplied patches when new patches are applied to the associated repo
- Track patch pre-requisite requirements (need to specify such requirements in the patch itself)
- Track ack'd, nack'd, tested, etc. posted to the mailing list
- Group multi-patch sets and retain the 0/x patch as it usually contains relevant information
Automatic Build:
- Nightly MAKEALL with output sent to the mailing list (only needs to run if a new patch has been applied)
- MAKEALL against each repo
- Automatic build test of patches which pass the sanity checks of the patch management tool. This one is really tricky, as a MAKEALL for each patch posted to the ML is going to require too many CPU cycles. We need a way to determine which configurations a particular patch is going to impact and only test against those
Regards,
Graeme

On Sat, Jul 21, 2012 at 02:28:45PM +1000, Graeme Russ wrote:
[snip]
I don't think a protracted "tool x doesn't do this" and "tool y doesn't do that" is going to get us anywhere.
Agreed, even if I did just reply to Marek :)
What we need to do is define exactly what we want out of the patch management, automated build, etc. tools. We can then see if there are any existing tools which fit our needs. If no existing tools fit, look at the ones that come closest and investigate what would be required to get them to a state where they would.
We already know that git is a perfect fit for source code management, and the mailing list is how we will continue to submit, review and discuss patches. So that gives a good starting point.
Patch Management:
- Integrate with the existing email workflow. It must pick up patches from the mailing list, and any output it generates must get posted to the mailing list
- Reliably track revisions of patches (mark superseded versions as such)
- Automatically run sanity checks (checkpatch, test apply, etc.)
- Track which repo patches belong to
Also track which maintainer(s) a patch belongs to, and allow people to opt in to some notification about patches being assigned to them.
- Rerun sanity checks on unapplied patches when new patches are applied to the associated repo
- Track patch pre-requisite requirements (need to specify such requirements in the patch itself)
- Track ack'd, nack'd, tested, etc. posted to the mailing list
- Group multi-patch sets and retain the 0/x patch as it usually contains relevant information
Automatic Build:
- Nightly MAKEALL with output sent to the mailing list (only needs to run if a new patch has been applied)
- MAKEALL against each repo
- Automatic build test of patches which pass the sanity checks of the patch management tool. This one is really tricky, as a MAKEALL for each patch posted to the ML is going to require too many CPU cycles. We need a way to determine which configurations a particular patch is going to impact and only test against those
I would phrase the last one a little differently: allow a job to be submitted that consists of repository X and patches 1-N. For a given repository we can say "here is the full build list of targets, and here is a representative short one."

On Sat, Jul 21, 2012 at 03:27:30AM +0200, Marek Vasut wrote:
Dear Tom Rini,
On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic
Not really - what about the warning logic? I.e. I actually need Jenkins to do tristate results. How, I didn't figure out.
Yes, you can have the build go "yellow" for warnings.
on top of a bash script to do the building and testing.
So in the end, Jenkins is just an executor, bringing in a pile of Java overhead and possible random breakage. I use MAKEALL in my script, which does exactly what I need ... and even tests MAKEALL ;-)
Yes, you're writing your own overhead and dealing with bugs in that rather than using an existing project :)

Dear Tom Rini,
On Sat, Jul 21, 2012 at 03:27:30AM +0200, Marek Vasut wrote:
Dear Tom Rini,
On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic
Not really, what about the warning logic? I.e. I actually need Jenkins to report tristate results, and I haven't figured out how.
Yes, you can have the build go "yellow" for warnings.
How?
on top of a bash script to do the building and testing.
So in the end, jenkins is just an executor, bringing in a pile of java overhead and possible random breakage. I use MAKEALL in my script, which does exactly what I need ... and even tests MAKEALL ;-)
Yes, you're writing your own overhead and dealing with bugs in that rather than using an existing project :)
Well, jenkins just crashed so badly I didn't manage to get it back in a working state, you know ...
Best regards, Marek Vasut

On 07/23/2012 10:17 AM, Marek Vasut wrote:
Dear Tom Rini,
On Sat, Jul 21, 2012 at 03:27:30AM +0200, Marek Vasut wrote:
Dear Tom Rini,
On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic
Not really, what about the warning logic? I.e. I actually need Jenkins to report tristate results, and I haven't figured out how.
Yes, you can have the build go "yellow" for warnings.
How?
Post build stuff and promoted builds. I'm hopeful once I get a few patch series polished up and posted for v2012.11 I can go back and give my Jenkins instance some more attention.
on top of a bash script to do the building and testing.
So in the end, jenkins is just an executor, bringing in a pile of java overhead and possible random breakage. I use MAKEALL in my script, which does exactly what I need ... and even tests MAKEALL ;-)
Yes, you're writing your own overhead and dealing with bugs in that rather than using an existing project :)
Well, jenkins just crashed so badly I didn't manage to get it back in a working state, you know ...
Never had that happen, sorry :)
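For illustration of the tristate / "yellow" idea discussed above (this is not Jenkins' actual mechanism): a build wrapper only needs to fold compiler output into three states, which any CI front end could then map to green / yellow / red. A minimal sketch:

import subprocess

def build_state(cmd, cwd):
    """Classify one build as 'pass', 'warn' or 'fail' (tristate)."""
    proc = subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)
    if proc.returncode != 0:
        return "fail"   # red: the build broke
    if "warning:" in (proc.stdout + proc.stderr):
        return "warn"   # yellow: built, but the compiler complained
    return "pass"       # green

# e.g. build_state(["./MAKEALL", "omap3_beagle"], "/srv/builds/u-boot")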

Dear Tom Rini,
On 07/23/2012 10:17 AM, Marek Vasut wrote:
Dear Tom Rini,
On Sat, Jul 21, 2012 at 03:27:30AM +0200, Marek Vasut wrote:
Dear Tom Rini,
On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic
Not really, what about the warning logic? I.e. I actually need Jenkins to report tristate results, and I haven't figured out how.
Yes, you can have the build go "yellow" for warnings.
How?
Post build stuff and promoted builds. I'm hopeful once I get a few patch series polished up and posted for v2012.11 I can go back and give my Jenkins instance some more attention.
But then, do we really need to poke into this now? Maybe we should look more into the PW first
on top of a bash script to do the building and testing.
So in the end, jenkins is just an executor, bringing in a pile of java overhead and possible random breakage. I use MAKEALL in my script, which does exactly what I need ... and even tests MAKEALL ;-)
Yes, you're writing your own overhead and dealing with bugs in that rather than using an existing project :)
Well, jenkins just crashed so badly I didn't manage to get it back in a working state, you know ...
Never had that happen, sorry :)
"Never happened to me" or almost like "Works for me" kind of bloody sentence ;-)
Best regards, Marek Vasut

On 07/23/2012 11:11 AM, Marek Vasut wrote:
Dear Tom Rini,
On 07/23/2012 10:17 AM, Marek Vasut wrote:
Dear Tom Rini,
On Sat, Jul 21, 2012 at 03:27:30AM +0200, Marek Vasut wrote:
Dear Tom Rini,
On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic
Not really, what about the warning logic? I.e. I actually need Jenkins to report tristate results, and I haven't figured out how.
Yes, you can have the build go "yellow" for warnings.
How?
Post build stuff and promoted builds. I'm hopeful once I get a few patch series polished up and posted for v2012.11 I can go back and give my Jenkins instance some more attention.
But then, do we really need to poke into this now? Maybe we should look more into the PW first
Me or the community? I'm going to poke Jenkins regardless since it's something I've got set up already (and I've contributed a few changes and a plugin, in a past life). Is it the right answer for the community? TBD :) Is it the first thing we should address? No, the patch collection / management problem should indeed come first.

Dear Tom Rini,
On 07/23/2012 11:11 AM, Marek Vasut wrote:
Dear Tom Rini,
On 07/23/2012 10:17 AM, Marek Vasut wrote:
Dear Tom Rini,
On Sat, Jul 21, 2012 at 03:27:30AM +0200, Marek Vasut wrote:
Dear Tom Rini,
On Wed, Jul 18, 2012 at 09:21:40AM +0200, Wolfgang Denk wrote:
[snip]
And Jenkins... well, we have been using this for some time internally to run test builds for U-Boot. I can tell you a thing or two about it, and Marek has his own story to tell about his experiences when he added to the build matrix.
As is, we try hard to get rid of Jenkins, because it does not scale well to the type of builds we want to be able to do. Marek even started setting up his own test build framework...
I told Marek on IRC that I don't understand this, given a lot of the things I've made Jenkins do before and that at the end of the day it's $whatever-pass/fail-logic
Not really, what about the warning logic? I.e. I actually need Jenkins to report tristate results, and I haven't figured out how.
Yes, you can have the build go "yellow" for warnings.
How?
Post build stuff and promoted builds. I'm hopeful once I get a few patch series polished up and posted for v2012.11 I can go back and give my Jenkins instance some more attention.
But then, do we really need to poke into this now? Maybe we should look more into the PW first
Me or the community? I'm going to poke Jenkins regardless since it's something I've got set up already (and I've contributed a few changes and a plugin, in a past life). Is it the right answer for the community? TBD :) Is it the first thing we should address? No, the patch collection / management problem should indeed come first.
Just what I had in mind :)
Best regards, Marek Vasut

Dear Graeme Russ,
In message CALButCJAu4r8wdTpYC7XPfhq_uJFbhQpGUeiNnEx-jHJjK1nUg@mail.gmail.com you wrote:
Currently the lack of any reaction whatsoever was identified as a very discouraging sign for contributors. One thing we could do is to declare a "soft" time-limit (two weeks) within which patches need to be looked at. After this time-limit, one could declare "backup-custodians" to push patches to or merge patches into some "-staging" branch. (What to do with that branch then?) For this to work, custodians would need to announce "off-times", which is currently not the general consensus.
Declaring a time limit would not help much. The fact that patches receive no response is not a result of disinterest, but usually of lack of time. In this situation the chances that someone monitors the list for time-outs are small, and even if he spots these, there is still no resource available for processing the patch.
The idea of "backup-custodians" is not new either. I have requested repeatedly that the existing custodians (who have write permission to patchwork and to the u-boot-staging tree) help me with processing the load of "generic" patches that end up on my table. The amount of patches going this way is minimal. In the last 3 months, only Stefan Roese used this tree, and only for a single pull request.
But this is where help would really be efficient: more people are needed to pick up, review, and especially test the load of generic patches. Things like new file system support are independent of specific hardware (which is why this stuff ends up on my table), so everyone can help here. On the other hand, this stuff is usually quite test-intensive, so I really need more help with that.
I like this idea in principle. In order to work, I think there need to be two or three 'top-tier' custodians who will arbitrarily assign 'stale' patches to another custodian. And maybe one top-top-tier custodian (Hi Wolfgang ;)) in case the patch goes very stale.
Assigning work does not help if there are no resources to process the assignments. We already have assignments: usually on me. But I cannot handle all this load.
It was agreed that automatically marking changes as accepted when they hit mainline would be a good thing. There is some info available on our wiki, although it is not clear who uses it:
http://www.denx.de/wiki/view/U-Boot/Patches#pwclient and http://www.mail-archive.com/patchwork@lists.ozlabs.org/msg00057.html
A tool to automatically mark patches as superseded when a newer version is posted was also suggested to be very valuable.
I regularly use this method to update patch status, but the results are not really satisfactory, especially due to the shortcomings (read: bugs) of PW itself.
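A rough sketch of the suggested auto-supersede helper, driving the pwclient tool from the wiki page linked above. The notion of "same series, higher revision" is deliberately simplified and hypothetical:

import re
import subprocess

def normalize(subject):
    # "[PATCH v3 1/4] foo" and "[PATCH v4 1/4] foo" should compare equal
    return re.sub(r"\[PATCH[^\]]*\]", "[PATCH]", subject).strip()

def supersede_older(patches):
    """patches: list of (patchwork_id, subject) tuples, oldest first."""
    latest = {}
    for pid, subject in patches:
        key = normalize(subject)
        if key in latest:
            # An older revision of the same patch exists: mark it superseded.
            subprocess.run(["pwclient", "update", "-s", "Superseded",
                            str(latest[key])], check=True)
        latest[key] = pid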
Also, if one (and only one) maintainer is Cc'd on a patch, it would be nice if it was automatically assigned to them. Same goes for tags in the patch subject - there should be a way to automatically assign a fair number of patches.
This can probably be done. I think initial assignment of new patches can be done based on X-Patchwork mail headers. We just need tools that generate these - i.e. a wrapper around git-send-email?
During the discussion it became clear that many people believe that patches outside the "regular" merge window are handled inefficiently, ending up as large piles in patchwork. It was agreed that rather than being an
I disagree. There is no real difference between patches submitted during or outside a MW. They all end up on the large pile :-(
"accept / don't accept" division, it should be a "accept into mainline / accept into -next branches" split hopefully keeping the flow of incoming patches more uninterrupted. For this to work, every custodian would open his own "-next" branch and start merging patches from the mailing list resulting in patchwork becoming cleaner during "bug fix" phases.
And this is where patchwork fails us - The list of states is extremely limiting
I think we can define new states if needed. But this does not fix any of the issues that make PW such a PITA to use.
Discussing such automatic merges, the need for continuous integration became apparent. As mid- to long-term goals we would like to see automatic builds shifting the requirement to "do MAKEALL" from individual posters to (cheap) machine time. One obstacle on this way is the complexity of automatic builds for different architectures, i.e. which toolchain to use and how to use them correctly.
'Bring out your nightly builds!'
I very much like the idea of continuous integration - The number of time that a build is breaking only to be picked up weeks after the offending commit is getting a bit annoying.
I'm not so sure what makes sense here. I have often dreamed of automatic testing (checkpatch + MAKEALL) of all submitted patches. But looking at the current situation, we not only have big problems because many patches have non-standard requirements (they apply only against a specific tree, and require N previous patches to be applied first), but we would probably auto-reject 95+% of all submissions. I don't know if this would be encouraging for submitters...
Simon Glass mentioned that he is working on "buildman" which he will present on the mailing list in due time.
Maybe Simon and Marek could put their heads together?
Best regards,
Wolfgang Denk

On Wednesday 18 July 2012 03:41:39 Wolfgang Denk wrote:
Graeme Russ wrote:
Also, if one (and only one) maintainer is Cc'd on a patch, it would be nice if it was automatically assigned to them. Same goes for tags in the patch subject - there should be a way to automatically assign a fair number of patches.
This can probably be done. I think initial assignment of new patches can be done based on X-Patchwork mail headers. We just need tools that generate these - i.e. a wrapper around git-send-email?
If there are headers available, then it sounds like something we could integrate into patman. -mike
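A sketch of that wrapper idea, whether it lives in patman or a thin script: git format-patch already has an --add-header option, so the wrapper only needs to look up the maintainer and inject a header patchwork could key on. The header name X-Patchwork-Assignee and the maintainer lookup are assumptions, not existing patchwork features:

import subprocess

def format_with_assignment(rev_range, maintainer_email, outdir="out"):
    """Generate patches carrying an extra header for patchwork to key on."""
    # --add-header is an existing git format-patch option; the header name
    # used here is only a placeholder.
    subprocess.run(["git", "format-patch", "-o", outdir,
                    "--add-header=X-Patchwork-Assignee: " + maintainer_email,
                    rev_range], check=True)

# e.g. format_with_assignment("origin/master..HEAD", "custodian@example.com"),
# then send the files from 'out' with git send-email as usual.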

Dear Wolfgang Denk,
Dear Graeme Russ,
In message <CALButCJAu4r8wdTpYC7XPfhq_uJFbhQpGUeiNnEx-jHJjK1nUg@mail.gmail.com> you wrote:
Currently the lack of any reaction whatsoever was identified as a very discouraging sign for contributors. One thing we could do is to declare a "soft" time-limit (two weeks) within which patches need to be looked at. After this time-limit, one could declare "backup-custodians" to push patches to or merge patches into some "-staging" branch. (What to do with that branch then?) For this to work, custodians would need to announce "off-times", which is currently not the general consensus.
Declaring a time limit would not help much. The fact that patches receive no response is not a result of disinterest, but usually of lack of time. In this situation the chances that someone monitors the list for time-outs are small, and even if he spots these, there is still no resource available for processing the patch.
The idea of "backup-custodians" is not new either. I have requested repeatedly that the existing custodians (who have write permission to patchwork and to the u-boot-staging tree) help me with processing the load of "generic" patches that end up on my table. The amount of patches going this way is minimal. In the last 3 months, only Stefan Roese used this tree, and only for a single pull request.
But this is where help would really be efficient: more people are needed to pick up, review, and especially test the load of generic patches. Things like new file system support are independent of specific hardware (which is why this stuff ends up on my table), so everyone can help here. On the other hand, this stuff is usually quite test-intensive, so I really need more help with that.
Ain't this the stuff that can be automated?
I like this idea in principle. In order to work, I think there need to be two or three 'top-tier' custodians who will arbitrarily assign 'stale' patches to another custodian. And maybe one top-top-tier custodian (Hi Wolfgang ;)) in case the patch goes very stale.
Assigning work does not help if there are no resources to process the assignments. We already have assignments: usually on me. But I cannot handle all this load.
+1 on this one. I'm choking even on the small amount of patches I have; I've already asked SR how he manages to process so many of them.
[...]
I think we can define new states if needed. But this does not fix any of the issues that make PW such a PITA to use.
One more thing that'd be cool would be if patchwork (or any other tool) could push the Acked patches directly into the respective git tree. Maybe even issue a pull request when appropriate (but that'd have to be requested by the custodian).
Discussing such automatic merges, the need for continuous integration became apparent. As mid- to long-term goals we would like to see automatic builds shifting the requirement to "do MAKEALL" from individual posters to (cheap) machine time. One obstacle on this way is the complexity of automatic builds for different architectures, i.e. which toolchain to use and how to use them correctly.
'Bring out your nightly builds!'
I can adjust my automatic testing system to also store the compiled results, effectively giving us nightly builds. But then, there are many other tools needed to generate all those weirdo flashable images (u-boot.imx, u-boot.sb etc).
The other problem is how to find the boards that actually need a rebuild on a per-patch basis. And for generic patches, we'll need to do MAKEALL across all architectures anyway, which takes a bit of time.
I very much like the idea of continuous integration - The number of time that a build is breaking only to be picked up weeks after the offending commit is getting a bit annoying.
I'm not so sure what makes sense here. I have often dreamed of automatic testing (checkpatch + MAKEALL) of all submitted patches. But looking at the current situation, we not only have big problems because many patches have non-standard requirements (they apply only against a specific tree, and require N previous patches to be applied first), but we would probably auto-reject 95+% of all submissions. I don't know if this would be encouraging for submitters...
Gerrit can do that, but running MAKEALL on each pushed set would still need a vast amount of computing power.
Simon Glass mentioned that he is working on "buildman" which he will present on the mailing list in due time.
Maybe Simon and Marek could put their heads together?
buildman? Is there any news on that?
Best regards,
Wolfgang Denk
Best regards, Marek Vasut

On Sat, Jul 21, 2012 at 04:40:27PM +0200, Marek Vasut wrote:
[snip]
The other problem is how to find the boards that actually need a rebuild on a per-patch basis. And for generic patches, we'll need to do MAKEALL across all architectures anyway, which takes a bit of time.
I think we (custodians) need to define representative sets. For TI stuff, you don't need to hit every omap3 board, but just a handful to be good enough. And while my PowerPC is rusty, I know we don't need to build as many combinations of boards as we do to be happy in an every-patch build (building just nightly, or once changes are live on wd's master branch, would be enough).
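A sketch of how such representative sets could be wired up, mapping the paths a patch touches to a short board list. The mappings themselves are invented examples, not an agreed list:

import subprocess

# Assumption: custodians maintain a table like this for their areas.
REPRESENTATIVE_SETS = {
    "drivers/usb/": ["omap3_beagle", "mx28evk"],
    "arch/powerpc/": ["MPC8349EMDS"],
    "fs/": ["sandbox"],  # generic code still wants a wide nightly MAKEALL
}

def boards_for_patch(patch_path):
    """Derive a short build list from the files a patch touches."""
    # git apply --numstat prints "added<TAB>deleted<TAB>path" per file
    out = subprocess.run(["git", "apply", "--numstat", patch_path],
                         capture_output=True, text=True, check=True).stdout
    touched = [line.split("\t")[2] for line in out.splitlines() if line]
    boards = set()
    for path in touched:
        for prefix, targets in REPRESENTATIVE_SETS.items():
            if path.startswith(prefix):
                boards.update(targets)
    return sorted(boards)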

Dear Tom Rini,
On Sat, Jul 21, 2012 at 04:40:27PM +0200, Marek Vasut wrote:
[snip]
The other problem is how to find the boards that actually need a rebuild on a per-patch basis. And for generic patches, we'll need to do MAKEALL across all architectures anyway, which takes a bit of time.
I think we (custodians) need to define representative sets. For TI stuff, you don't need to hit every omap3 board, but just a handful to be good enough. And while my PowerPC is rusty, I know we don't need to build as many combinations of boards as we do to be happy in an every-patch build (building just nightly, or once changes are live on wd's master branch, would be enough).
Good idea ... USB is generally a properly isolated part, yet it can impact crazy boards, like MPC83xx (well, I still have my shallow grave around, which I left just by WD's sheer grace ;-) ).
Best regards, Marek Vasut

On Mon, 16 Jul 2012 23:30:49 +0200 Detlev Zundel dzu@denx.de wrote:
as promised, here are my expanded notes from the BoF meeting at LSM2012 last week. It was a pleasure to get some core developers into one room
Any word on Kconfig support?
Kim

Dear Kim Phillips,
On Mon, 16 Jul 2012 23:30:49 +0200
Detlev Zundel dzu@denx.de wrote:
as promised, here are my expanded notes from the BoF meeting at LSM2012 last week. It was a pleasure to get some core developers into one room
Any word on Kconfig support?
Kim
I was digging in it for a bit, but then Graeme took over.
Best regards, Marek Vasut

Hi Kim & Marek,
On 07/21/2012 07:09 AM, Marek Vasut wrote:
Dear Kim Phillips,
On Mon, 16 Jul 2012 23:30:49 +0200
Detlev Zundel dzu@denx.de wrote:
as promised, here are my expanded notes from the BoF meeting at LSM2012 last week. It was a pleasure to get some core developers into one room
Any word on Kconfig support?
Kim
I was digging in it for a bit, but then Graeme took over.
I have started on it - I've ported over the Kbuild infrastructure into a dedicated 'kbuild' makefile which is called from the main makefile. This keeps modifications to the existing makefile very minimal
Now it's just a case of building all the Kconfig files, which is, to say the least, a massive task. I have a lot of other things going on, so unfortunately progress is slow
Regards,
Graeme

On 07/20/2012 04:34 PM, Graeme Russ wrote:
Hi Kim & Marek,
On 07/21/2012 07:09 AM, Marek Vasut wrote:
Dear Kim Phillips,
On Mon, 16 Jul 2012 23:30:49 +0200
Detlev Zundel dzu@denx.de wrote:
as promised, here are my expanded notes from the BoF meeting at LSM2012 last week. It was a pleasure to get some core developers into one room
Any word on Kconfig support?
Kim
I was digging in it for a bit, but then Graeme took over.
I have started on it - I've ported over the Kbuild infrastructure into a dedicated 'kbuild' makefile which is called from the main makefile. This keeps modifications to the existing makefile very minimal
Now it's just a case of building all the Kconfig files, which is, to say the least, a massive task. I have a lot of other things going on, so unfortunately progress is slow
How about a transitional phase, where a symbol can either be a kconfig symbol or handled the old way via the board config file? That way, custodians/maintainers could handle the conversion of their own symbols (with the expectation that this be done within a reasonable timeframe).
-Scott

Hi Scott,
On 07/21/2012 07:40 AM, Scott Wood wrote:
On 07/20/2012 04:34 PM, Graeme Russ wrote:
Hi Kim & Marek,
On 07/21/2012 07:09 AM, Marek Vasut wrote:
Dear Kim Phillips,
On Mon, 16 Jul 2012 23:30:49 +0200
Detlev Zundel dzu@denx.de wrote:
as promised, here are my expanded notes from the BoF meeting at LSM2012 last week. It was a pleasure to get some core developers into one room
Any word on Kconfig support?
Kim
I was digging in it for a bit, but then Graeme took over.
I have started on it - I've ported over the Kbuild infrastructure into a dedicated 'kbuild' makefile which is called from the main makefile. This keeps modifications to the existing makefile very minimal
Now it's just a case of building all the Kconfig files, which is, to say the least, a massive task. I have a lot of other things going on, so unfortunately progress is slow
How about a transitional phase, where a symbol can either be a kconfig symbol or handled the old way via the board config file? That way, custodians/maintainers could handle the conversion of their own symbols (with the expectation that this be done within a reasonable timeframe).
Well, the plan is to have 'make xyz_config' produce the equivalent .config file (which it pretty much does anyway) - You can then use Kconfig to tweak the board config
Or use Kconfig to create a config from scratch
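A very rough sketch of that "equivalent .config" idea: scrape the simple CONFIG_ #defines out of a board config header and emit them in .config syntax. A real converter would need the C preprocessor; this only covers the trivial cases, and the file names are examples:

import re
import sys

def header_to_dotconfig(header_path, out_path=".config"):
    """Emit Kconfig-style lines for plain #defines in a board header."""
    text = open(header_path).read()
    defines = re.findall(r"^#define[ \t]+(CONFIG_\w+)(?:[ \t]+(.*))?$",
                         text, re.M)
    with open(out_path, "w") as out:
        for name, value in defines:
            value = (value or "").strip()
            if value in ("", "1"):
                out.write("%s=y\n" % name)   # boolean option
            else:
                out.write("%s=%s\n" % (name, value))

if __name__ == "__main__":
    header_to_dotconfig(sys.argv[1])  # e.g. include/configs/omap3_beagle.h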
Regards,
Graeme

Dear Graeme Russ,
Hi Scott,
On 07/21/2012 07:40 AM, Scott Wood wrote:
On 07/20/2012 04:34 PM, Graeme Russ wrote:
Hi Kim & Marek,
On 07/21/2012 07:09 AM, Marek Vasut wrote:
Dear Kim Phillips,
On Mon, 16 Jul 2012 23:30:49 +0200
Detlev Zundel dzu@denx.de wrote:
as promised, here are my expanded notes from the BoF meeting at LSM2012 last week. It was a pleasure to get some core developers into one room
Any word on Kconfig support?
Kim
I was digging in it for a bit, but then Graeme took over.
I have started on it - I've ported over the Kbuild infrastructure into a dedicated 'kbuild' makefile which is called from the main makefile. This keeps modifications to the existing makefile very minimal
Now it's just a case of building all the Kconfig files, which is, to say the least, a massive task. I have a lot of other things going on, so unfortunately progress is slow
How about a transitional phase, where a symbol can either be a kconfig symbol or handled the old way via the board config file? That way, custodians/maintainers could handle the conversion of their own symbols (with the expectation that this be done within a reasonable timeframe).
Well, the plan is to have 'make xyz_config' produce the equivalent .config file (which it pretty much does anyway) - You can then use Kconfig to tweak the board config
Or use Kconfig to create a config from scratch
Maybe you can push your current patchset somewhere so others can hack on it too?
Regards,
Graeme
Best regards, Marek Vasut

Hi Graeme,
On Fri, Jul 20, 2012 at 2:34 PM, Graeme Russ graeme.russ@gmail.com wrote:
Hi Kim & Marek,
On 07/21/2012 07:09 AM, Marek Vasut wrote:
Dear Kim Phillips,
On Mon, 16 Jul 2012 23:30:49 +0200
Detlev Zundel dzu@denx.de wrote:
as promised, here are my expanded notes from the BoF meeting at LSM2012 last week. It was a pleasure to get some core developers into one room
Any word on Kconfig support?
Kim
I was digging in it for a bit, but then Graeme took over.
I have started on it - I've ported over the Kbuild infrastructure into a dedicated 'kbuild' makefile which is called from the main makefile. This keeps modifications to the existing makefile very minimal
Now it's just a case of building all the Kconfig files, which is, to say the least, a massive task. I have a lot of other things going on, so unfortunately progress is slow
I wonder how you got on with that? Any work-in-progress that could be used as a base? Want some help? It seems like a useful feature.
Regards, Simon
Regards,
Graeme

Dear Simon,
In message CAPnjgZ089gS0gLHBVBVARpX=awcyRUUmSLGNiR4HFkwZZLsL-g@mail.gmail.com you wrote:
I have started on it - I've ported over the Kbuild infrastructure into a dedicated 'kbuild' makefile which is called from the main makefile. This keeps modifications to the existing makefile very minimal
Now it's just a case of building all the Kconfig files, which is, to say the least, a massive task. I have a lot of other things going on, so unfortunately progress is slow
I wonder how you got on with that? Any work-in-progress that could be used as a base? Want some help? It seems like a useful feature.
I also wonder if this has to be a one-step change-it-all-at-once operation? Maybe we can add the infrastructure in a neutral way, and then start moving code to the Kconfig files step by step - similar to what we did with moving Makefile rules out into boards.cfg ?
Best regards,
Wolfgang Denk

Hi Wolfgang,
On Mon, Feb 18, 2013 at 8:59 PM, Wolfgang Denk wd@denx.de wrote:
Dear Simon,
In message CAPnjgZ089gS0gLHBVBVARpX=awcyRUUmSLGNiR4HFkwZZLsL-g@mail.gmail.com you wrote:
I have started on it - I've ported over the Kbuild infrastructure into a dedicated 'kbuild' makefile which is called from the main makefile. This keeps modifications to the existing makefile very minimal
Now it's just a case of building all the Kconfig files, which is, to say the least, a massive task. I have a lot of other things going on, so unfortunately progress is slow
I wonder how you got on with that? Any work-in-progress that could be used as a base? Want some help? It seems like a useful feature.
I also wonder if this has to be a one-step change-it-all-at-once operation? Maybe we can add the infrastructure in a neutral way, and then start moving code to the Kconfig files step by step - similar to what we did with moving Makefile rules out into boards.cfg ?
Alas I do not have access to the code I was working on (study is buried in stuff) and I really didn't get that far anyway. But, I got as far as knowing it is possible to run both the current Makefile infrastructure and the KConfig infrastructure in parallel. The trick is to create additional Makefiles (Makefile.kc for example). You just need to add stubs for the KConfig targets (menuconfig, xconfig, etc).
Once the core KConfig Make infrastructure is in place, it's simply (?) a case of building all the KConfig files
Regards,
Graeme

Dear Graeme Russ,
Hi Wolfgang,
On Mon, Feb 18, 2013 at 8:59 PM, Wolfgang Denk wd@denx.de wrote:
Dear Simon,
In message <CAPnjgZ089gS0gLHBVBVARpX=awcyRUUmSLGNiR4HFkwZZLsL-g@mail.gmail.com> you wrote:
I have started on it - I've ported over the Kbuild infrastructure into a dedicated 'kbuild' makefile which is called from the main makefile. This keeps modifications to the existing makefile very minimal
Now it's just a case of building all the Kconfig files, which is, to say the least, a massive task. I have a lot of other things going on, so unfortunately progress is slow
I wonder how you got on with that? Any work-in-progress that could be used as a base? Want some help? It seems like a useful feature.
I also wonder if this has to be a one-step change-it-all-at-once operation? Maybe we can add the infrastructure in a neutral way, and then start moving code to the Kconfig files step by step - similar to what we did with moving Makefile rules out into boards.cfg ?
Alas I do not have access to the code I was working on (study is buried in stuff) and I really didn't get that far anyway. But, I got as far as knowing it is possible to run both the current Makefile infrastructure and the KConfig infrastructure in parallel. The trick is to create additional Makefiles (Makefile.kc for example). You just need to add stubs for the KConfig targets (menuconfig, xconfig, etc).
Once the core KConfig Make infrastructure is in place, it's simply (?) a case of building all the KConfig files
True, we can add Kconfig first and Kbuild afterwards.
Best regards, Marek Vasut
participants (12)
- Andy Pont
- Detlev Zundel
- Graeme Russ
- Kim Phillips
- Marek Vasut
- Marek Vasut
- Mike Frysinger
- Scott Wood
- Simon Glass
- Stefan Roese
- Tom Rini
- Wolfgang Denk