[U-Boot] [PATCH] sandbox_flattree: Switch to TPMv2 support

In order to have the test.py tests for TPMv2 run automatically we need to have one of our sandbox builds use TPMv2 rather than TPMv1. Switch sandbox_flattree over to this style of TPM.
Cc: Simon Glass <sjg@chromium.org>
Signed-off-by: Tom Rini <trini@konsulko.com>
---
I'm tempted to switch the main sandbox target over instead as I don't quite see where we're running the tpm1.x tests automatically. Would that be a better idea?
---
 configs/sandbox_flattree_defconfig | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/configs/sandbox_flattree_defconfig b/configs/sandbox_flattree_defconfig
index 08072e8f0312..ca67de1be8aa 100644
--- a/configs/sandbox_flattree_defconfig
+++ b/configs/sandbox_flattree_defconfig
@@ -51,8 +51,8 @@ CONFIG_CMD_BOOTSTAGE=y
 CONFIG_CMD_PMIC=y
 CONFIG_CMD_REGULATOR=y
 CONFIG_CMD_TPM=y
-CONFIG_CMD_TPM_TEST=y
 CONFIG_CMD_EXT4_WRITE=y
+CONFIG_CMD_LOG=y
 CONFIG_MAC_PARTITION=y
 CONFIG_AMIGA_PARTITION=y
 CONFIG_OF_CONTROL=y
@@ -156,7 +156,8 @@ CONFIG_SYSRESET=y
 CONFIG_TIMER=y
 CONFIG_TIMER_EARLY=y
 CONFIG_SANDBOX_TIMER=y
-CONFIG_TPM_TIS_SANDBOX=y
+# CONFIG_TPM_V1 is not set
+CONFIG_TPM_V2=y
 CONFIG_TPM2_TIS_SANDBOX=y
 CONFIG_USB=y
 CONFIG_DM_USB=y

+Miquel due to sandbox TPM issue
Hi Tom,
On 25 May 2018 at 06:27, Tom Rini <trini@konsulko.com> wrote:
> In order to have the test.py tests for TPMv2 run automatically we need to have one of our sandbox builds use TPMv2 rather than TPMv1. Switch sandbox_flattree over to this style of TPM.
The problem seems to be that the sandbox driver is only built with either TPMv1 or TPMv2. It needs to be able to build with both, so we can run tests with both.
It really doesn't make any sense to have build-time branches for sandbox.
We currently have:
sandbox - should be used for most tests
sandbox64 - special build that forces a 64-bit host
sandbox_flattree - builds with dev_read_...() functions defined as inline. We need this build so that we can test those inline functions, and we cannot build with both the inline functions and the non-inline functions since they are named the same
sandbox_noblk - builds without CONFIG_BLK, which means the legacy block drivers are used. We cannot use both the legacy and driver-model block drivers since they implement the same functions
sandbox_spl - builds sandbox with SPL support, so you can run spl/u-boot-spl and it will start up and then load ./u-boot. We could probably remove this and add SPL support to the vanilla sandbox build, since people can still run ./u-boot directly
At present there are unnecessary config differences between these builds. This is explained by the fact that it is a pain for people to have to add configs separately to each defconfig. But we should probably make them more common. I will take a look.
What do you think about dropping sandbox_spl and making sandbox build SPL? It does take slightly longer to build, perhaps 25% more.
> Cc: Simon Glass <sjg@chromium.org>
> Signed-off-by: Tom Rini <trini@konsulko.com>
> I'm tempted to switch the main sandbox target over instead as I don't quite see where we're running the tpm1.x tests automatically. Would that be a better idea?
Miquel, can we adjust the code to build both TPMv1 and v2 for sandbox, and select at run-time?
Regards, Simon

On Fri, Jun 01, 2018 at 09:25:19AM -0600, Simon Glass wrote:
> The problem seems to be that the sandbox driver is only built with either TPMv1 or TPMv2. It needs to be able to build with both, so we can run tests with both.
Right. But we don't have any run-time automatic tests for TPMv1, as the 'tpm test' command needs to be run manually, at least today (unless I'm missing something under test/py/tests/). And we can't (functionally, in real use) have both TPM types available. Perhaps we should make TPMv2 the default for sandbox? All of the TPMv1 code will still be build-time exercised by the platforms that have a TPMv1.
> At present there are unnecessary config differences between these builds. This is explained by the fact that it is a pain for people to have to add configs separately to each defconfig. But we should probably make them more common. I will take a look.
OK.
> What do you think about dropping sandbox_spl and making sandbox build SPL? It does take slightly longer to build, perhaps 25% more.
That's fine with me.
> Miquel, can we adjust the code to build both TPMv1 and v2 for sandbox, and select at run-time?
I thought we had talked about that before and couldn't do it easily? One thing I am a bit wary of is adding indirection for build coverage's sake.

Hi Tom,
On 1 June 2018 at 11:55, Tom Rini <trini@konsulko.com> wrote:
> Right. But we don't have any run-time automatic tests for TPMv1, as the 'tpm test' command needs to be run manually, at least today (unless I'm missing something under test/py/tests/). And we can't (functionally, in real use) have both TPM types available. Perhaps we should make TPMv2 the default for sandbox? All of the TPMv1 code will still be build-time exercised by the platforms that have a TPMv1.
I'll take a look at this. It should actually be quite easy to have two TPMs in sandbox, one v1 and one v2. At least I don't know of any impediment.
> I thought we had talked about that before and couldn't do it easily? One thing I am a bit wary of is adding indirection for build coverage's sake.
Yes, I am hoping that it is just different drivers with the same API, but perhaps I am going to be disappointed.
Regards, Simon

Hello,
Sorry for the delay.
On Sat, 2 Jun 2018 10:15:17 -0600, Simon Glass <sjg@chromium.org> wrote:
> Yes, I am hoping that it is just different drivers with the same API, but perhaps I am going to be disappointed.
Indeed, both versions share the same 'architecture', but quite a few structures/functions are defined differently for each TPM flavour, in different files. What makes the magic work are the #ifdef TPM_V1 / #else / #endif blocks around the includes, which make the two flavours mutually exclusive.
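For illustration, the pattern is roughly the following (the header names here are approximate):

#ifdef CONFIG_TPM_V1
#include <tpm-v1.h>	/* v1 structures, constants and helpers */
#else
#include <tpm-v2.h>	/* v2 definitions reusing the same names */
#endif

Since both headers define the same identifiers, only one flavour can be active in any given build.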
The choice not to support both flavours at the same time was made in the second series, where I clearly separated the v1 code from the v2 code. Trying to compile both with just some Kconfig hacks would simply not work, IMHO.
My apologies for not being helpful at all... As Tom said, there are no tests running on the v1 code, so maybe it's better to exercise the v2 code in sandbox and let people compile-test the former on their own?
Regards, Miquèl

Hi Miquel,
On 6 June 2018 at 23:38, Miquel Raynal <miquel.raynal@bootlin.com> wrote:
> The choice not to support both flavours at the same time was made in the second series, where I clearly separated the v1 code from the v2 code. Trying to compile both with just some Kconfig hacks would simply not work, IMHO.
I had a play with this and it does not seem too tricky.
With a bit of fiddling I got it to build except for this:
/home/sjg/c/src/third_party/u-boot/files/cmd/tpm-v2.c:324: multiple definition of `get_tpm_commands'
I think if you adjust it to check the driver version (v1 or v2), then you can use either the v1 or v2 command set. You could move the get_tpm_commands function into the uclass so it can check the driver. As to whether the driver is v1 or v2, I wonder if the driver could set a 'version' flag in tpm_chip_priv()?
I really don't like the idea of having mutually exclusive code in driver model, so it would be good to fix this.
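A rough sketch of what I have in mind is below; the enum, the 'version' field and the command-table type are invented names for illustration, not the existing API:

#include <common.h>
#include <dm.h>

/* Invented discriminator; each driver would set it at probe time */
enum tpm_version {
	TPM_VERSION_1,
	TPM_VERSION_2,
};

struct tpm_chip_priv {
	enum tpm_version version;
	/* ...existing fields unchanged... */
};

struct tpm_command;				/* invented table type */
extern const struct tpm_command *tpm1_commands;	/* v1 command set */
extern const struct tpm_command *tpm2_commands;	/* v2 command set */

/* Moved into the uclass: pick the command set from the device */
static const struct tpm_command *get_tpm_commands(struct udevice *dev)
{
	struct tpm_chip_priv *priv = dev_get_uclass_priv(dev);

	return priv->version == TPM_VERSION_2 ? tpm2_commands
					      : tpm1_commands;
}

That way both command sets can be built in, and each device says which one it speaks.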
Regards, Simon

Hi Simon,
On Thu, 7 Jun 2018 16:25:28 -0800, Simon Glass <sjg@chromium.org> wrote:
> I had a play with this and it does not seem too tricky.
>
> With a bit of fiddling I got it to build except for this:
>
> /home/sjg/c/src/third_party/u-boot/files/cmd/tpm-v2.c:324: multiple definition of `get_tpm_commands'
That's one problem. I'm pretty sure at some point we will need to declare tpm_chip_priv differently depending on the version. Using two structures discriminated by an enumeration could be the way to handle it.
Another point is that, by doing so, you embed twice the code and symbols actually needed. Is avoiding mutually exclusive code better than enlarging the U-Boot binary?
> I think if you adjust it to check the driver version (v1 or v2), then you can use either the v1 or v2 command set. You could move the get_tpm_commands function into the uclass so it can check the driver.
It means patching all drivers if we want to do it cleanly.
> As to whether the driver is v1 or v2, I wonder if the driver could set a 'version' flag in tpm_chip_priv()?
Probably.
> I really don't like the idea of having mutually exclusive code in driver model, so it would be good to fix this.
I'll be away for the next few weeks, so I won't work on it before the end of June. Can you share a diff of your changes?
Thanks, Miquèl

Hi Miquel,
On 7 June 2018 at 23:36, Miquel Raynal <miquel.raynal@bootlin.com> wrote:
> That's one problem. I'm pretty sure at some point we will need to declare tpm_chip_priv differently depending on the version. Using two structures discriminated by an enumeration could be the way to handle it.
I think you can just remove the #ifdef from inside struct tpm_chip_priv - it's not really a nice thing to do anyway.
> Another point is that, by doing so, you embed twice the code and symbols actually needed. Is avoiding mutually exclusive code better than enlarging the U-Boot binary?
The sandbox binary is enormous since it enables as many features as it can. We can always create a minimal sandbox if it becomes useful, but for now sandbox is mostly for testing.
> It means patching all drivers if we want to do it cleanly.
Yes, you could have a field that needs to be set so that the uclass knows which version it is. Alternatively, if you want to save a patch, you could have an is_v2 bool, which defaults to 0 for v1 drivers.
> I'll be away for the next few weeks, so I won't work on it before the end of June. Can you share a diff of your changes?
Yes, I pushed a patch to the u-boot-dm branch 'tpm-working'.
Regards, Simon

Hi Simon,
On Fri, 8 Jun 2018 13:59:07 -0800, Simon Glass <sjg@chromium.org> wrote:
> I think you can just remove the #ifdef from inside struct tpm_chip_priv - it's not really a nice thing to do anyway.
Ok.
> The sandbox binary is enormous since it enables as many features as it can. We can always create a minimal sandbox if it becomes useful, but for now sandbox is mostly for testing.
I meant for all the platforms. But you are right that this double selection should only happen for Sandbox.
> Yes, you could have a field that needs to be set so that the uclass knows which version it is. Alternatively, if you want to save a patch, you could have an is_v2 bool, which defaults to 0 for v1 drivers.
Maybe an enum would be better. 0 would still mean a v1 driver.
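For illustration (the field itself is hypothetical):

enum tpm_version {
	TPM_V1 = 0,	/* zero: unconverted v1 drivers need no change */
	TPM_V2,
};

Driver-model private data is zero-initialised when allocated, so a driver that never sets the field would automatically be treated as v1.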
> Yes, I pushed a patch to the u-boot-dm branch 'tpm-working'.
Thanks!
Regards, Miquèl

Hi Simon,
> I really don't like the idea of having mutually exclusive code in driver model, so it would be good to fix this.
> Yes, I pushed a patch to the u-boot-dm branch 'tpm-working'.
I think I got a proper patch for the above request.
One thing I could not figure out: there is one U_BOOT_CMD() definition per file (v1 and v2). Because I now have to call them 'tpm' and 'tpm2', this creates two commands instead of one. I have all the logic in place for the driver to tell which stack it wants to use, but the user still has to use one command or the other, because both are now defined (and we cannot define U_BOOT_CMD() twice with 'tpm' as the command name).
Do you have a solution in mind?
Otherwise, people already using TPMv2 (if any) will have to change their scripts to use 'tpm2' instead of 'tpm' (the tests in test/py/tests will also change). I really dislike making this change just for compile testing, when I had both stacks hidden behind a single command...
What do you think?
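To show the kind of single front end I mean, a sketch only; the helper and table names here are invented, not what the patch actually does:

#include <common.h>
#include <command.h>

/* Invented: per-version sub-command tables and sizes, defined next
 * to the v1/v2 command implementations */
extern cmd_tbl_t tpm1_commands[];
extern cmd_tbl_t tpm2_commands[];
extern const int tpm1_commands_count;
extern const int tpm2_commands_count;
int tpm_is_v2(void);	/* invented: ask the TPM device its version */

static int do_tpm(cmd_tbl_t *cmdtp, int flag, int argc,
		  char * const argv[])
{
	int v2 = tpm_is_v2();
	cmd_tbl_t *tbl = v2 ? tpm2_commands : tpm1_commands;
	int len = v2 ? tpm2_commands_count : tpm1_commands_count;
	cmd_tbl_t *cp;

	if (argc < 2)
		return CMD_RET_USAGE;

	/* look up the sub-command in whichever table applies */
	cp = find_cmd_tbl(argv[1], tbl, len);
	if (!cp)
		return CMD_RET_USAGE;

	return cp->cmd(cmdtp, flag, argc - 1, argv + 1);
}

U_BOOT_CMD(tpm, CONFIG_SYS_MAXARGS, 1, do_tpm,
	   "Issue a TPM command", "(sub-commands as before)");

That way 'tpm' would keep working for both stacks and neither scripts nor the tests would need to change.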
Here is the patch: https://github.com/miquelraynal/u-boot/commit/549822f82d797d6125a25af207eae1...
Thanks, Miquèl

Hi Miquel,
Miquel Raynal <miquel.raynal@bootlin.com> wrote on Fri, 13 Jul 2018 23:35:23 +0200:
> Otherwise, people already using TPMv2 (if any) will have to change their scripts to use 'tpm2' instead of 'tpm' (the tests in test/py/tests will also change). I really dislike making this change just for compile testing, when I had both stacks hidden behind a single command...
> Here is the patch: https://github.com/miquelraynal/u-boot/commit/549822f82d797d6125a25af207eae1...
Actually it will still work for people using 'tpm' instead of 'tpm2' if CONFIG_TPM_V1 is not set in their configuration.
I fixed a compilation issue in the above commit. I think the patch is ready; I will send it after a few more tests.
Thanks, Miquèl
Participants (3): Miquel Raynal, Simon Glass, Tom Rini