
Dear Tom,
In message <20131107205133.GT5925@bill-the-cat> you wrote:
> What we need to be careful of here is making sure whatever we grow is both useful and not overly complicated. What I honestly wonder about is automated testing for commands (crc32 pops to mind only because I just fixed things) but otherwise having things broken down into a front end where people select what they did "Booted a ___ into ___ via ___", provide some output from a command (maybe add just a touch more info to 'version') and cover non-boot testing with copy/paste'able drop-downs.
> I know automated testing is The Thing, but given N frameworks, every one of them has issues because, frankly, every SoC family has its own quirks about how to boot and load, and about what is and is not even feasible, especially for the bootloader.
Agreed. In the end, you will probably define a specific set of test cases that can be run automatically on a board. We will never have 100% coverage, nor any generic set of tests that fits all boards.
OK, a pretty large subset of the common functionality can be tested in the sandbox, which is a great achievement ...
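Commands whose results can be computed on the host, like the crc32 command Tom mentions, are a natural fit for such sandbox testing: compute a reference value on the host and compare it against the console output. A minimal sketch of the comparison step, assuming a console line format like "crc32 for ... ==> xxxxxxxx" (the exact format and any harness around it are assumptions, not part of any existing framework):

```python
# Hypothetical sketch: check a crc32 console line against a host-side
# reference value. zlib.crc32 uses the same CRC-32 polynomial as the
# usual bootloader crc32 command.
import re
import zlib

def reference_crc32(data: bytes) -> int:
    """Host-side reference CRC-32 of the given data."""
    return zlib.crc32(data) & 0xFFFFFFFF

def check_crc_output(console_line: str, data: bytes) -> bool:
    """Compare the 8-hex-digit value after '==>' in a console line
    against the reference CRC computed on the host."""
    m = re.search(r"==>\s*([0-9a-fA-F]{8})", console_line)
    return m is not None and int(m.group(1), 16) == reference_crc32(data)

# The standard CRC-32 check value for "123456789" is cbf43926:
print(check_crc_output("crc32 for 00000000 ... 00000008 ==> cbf43926",
                       b"123456789"))  # True
```

The point is only that the expected value comes from an independent host implementation, so the test catches regressions in the command itself rather than comparing the command against its own earlier output.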
I still firmly believe that the approach we took a long time ago, i.e. using a test framework (DUTS) in combination with a (today wiki-based) documentation framework (DULG), is something that can meet a wide range of requirements.
Unfortunately, we have not yet found a test framework that makes it easy for the average user to add new test cases. I'm not even talking here about the core of the framework, which has to deal with things like attaching to the console of a board, controlling power and/or reset state, detecting whether a JTAG debugger is attached, and such things. In its current state, our code is expect-based (and thus implemented in tcl), and it's a real PITA to add new test cases to it.
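What would make adding cases easy is a thin declarative layer on top of that core: a test case is just a command plus an expected output pattern, and the console/power/reset machinery hides behind a small interface. A sketch of what such a layer could look like (all names here are illustrative, not DUTS APIs):

```python
# Hypothetical sketch of a declarative test-case layer. The expect/tcl
# core (console attach, power and reset control) would live behind the
# Console interface; users only write TestCase entries.
import re
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    command: str   # command typed at the bootloader prompt
    expect: str    # regex that must appear in the output

class Console:
    """Stand-in for the real console/expect layer."""
    def run(self, command: str) -> str:
        raise NotImplementedError

def run_cases(console: Console, cases: list[TestCase]) -> dict[str, bool]:
    """Run each case and record whether its expected pattern matched."""
    results = {}
    for case in cases:
        output = console.run(case.command)
        results[case.name] = re.search(case.expect, output) is not None
    return results

# With a fake console, a 'version' test case would look like:
class FakeConsole(Console):
    def run(self, command: str) -> str:
        return "U-Boot 2013.10 (Nov 07 2013)" if command == "version" else ""

cases = [TestCase("version", "version", r"U-Boot \d{4}\.\d{2}")]
print(run_cases(FakeConsole(), cases))  # {'version': True}
```

Whether the core stays in tcl or moves elsewhere, the key design point is that adding a test case should mean adding data, not writing expect scripts.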
But it would require significant effort to rewrite this in a more modern context...
Best regards,
Wolfgang Denk