
Hi,
On 2021-10-31 17:45, Z.Q. Hou wrote:
-----Original Message-----
From: Marc Zyngier [mailto:maz@kernel.org]
Sent: 2021-10-29 5:09
To: Michael Walle michael@walle.cc
Cc: u-boot@lists.denx.de; Vladimir Oltean vladimir.oltean@nxp.com; Z.Q. Hou zhiqiang.hou@nxp.com; Bharat Gooty bharat.gooty@broadcom.com; Rayagonda Kokatanur rayagonda.kokatanur@broadcom.com; Simon Glass sjg@chromium.org; Priyanka Jain priyanka.jain@nxp.com; Tom Rini trini@konsulko.com
Subject: Re: [PATCH 2/2] Revert "arch: arm: use dt and UCLASS_SYSCON to get gic lpi details"
On Wed, 27 Oct 2021 17:54:54 +0100, Michael Walle michael@walle.cc wrote:
Stop using the device tree as a source for ad-hoc information.
This reverts commit 2ae7adc659f7fca9ea65df4318e5bca2b8274310.
Signed-off-by: Michael Walle michael@walle.cc
 arch/arm/Kconfig                        |  2 -
 arch/arm/cpu/armv8/fsl-layerscape/soc.c | 27 +++++++++-
 arch/arm/include/asm/gic-v3.h           |  4 +-
 arch/arm/lib/gic-v3-its.c               | 66 +++----------------------
 4 files changed, 36 insertions(+), 63 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 02f8306f15..86c1ebde05 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -82,8 +82,6 @@ config GICV3
 config GIC_V3_ITS
 	bool "ARM GICV3 ITS"
-	select REGMAP
-	select SYSCON
 	select IRQ
 	help
 	  ARM GICV3 Interrupt translation service (ITS).
diff --git a/arch/arm/cpu/armv8/fsl-layerscape/soc.c b/arch/arm/cpu/armv8/fsl-layerscape/soc.c
index c0e100d21c..a08ed3f544 100644
--- a/arch/arm/cpu/armv8/fsl-layerscape/soc.c
+++ b/arch/arm/cpu/armv8/fsl-layerscape/soc.c
Why is this FSL specific?
@@ -41,11 +41,36 @@ DECLARE_GLOBAL_DATA_PTR;
 #endif

 #ifdef CONFIG_GIC_V3_ITS
+#define PENDTABLE_MAX_SZ	ALIGN(BIT(ITS_MAX_LPI_NRBITS), SZ_64K)
+#define PROPTABLE_MAX_SZ	ALIGN(BIT(ITS_MAX_LPI_NRBITS) / 8, SZ_64K)
This looks completely wrong.
The pending table needs one bit per LPI, and the property table one byte per LPI. Here, you have it the other way around.
It's a typo, will fix it after the revert patch is applied.
Please keep me on CC.
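FWIW, swapping the two sizes as Marc describes would presumably give something like this (a sketch only, not the final fix):

/* sketch: one byte per LPI for the property table, one bit per LPI
 * for the pending table, both rounded up to a 64 KiB boundary */
#define PROPTABLE_MAX_SZ	ALIGN(BIT(ITS_MAX_LPI_NRBITS), SZ_64K)
#define PENDTABLE_MAX_SZ	ALIGN(BIT(ITS_MAX_LPI_NRBITS) / 8, SZ_64K)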
..
Finally, ITS_MAX_LPI_NRBITS is hardcoded to 16, while it can actually vary from 14 to 32 (and even further limited by some hypervisors), depending on the implementation. Granted, this was broken before this patch, and in most cases, 64k is more than enough.
This is only for Layerscape platforms, so hardcoding it to 16 bits works.
But why is this Layerscape-only? Can't we make it work on any platform? If I understand Marc correctly, it should be possible.
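For a generic version, the supported ID width could presumably be read from GICD_TYPER at runtime instead of being hardcoded; a rough sketch (the helper name and base-address handling are made up for illustration, this is not existing U-Boot code):

#include <asm/io.h>
#include <linux/types.h>

#define GICD_TYPER		0x0004
/* bits [23:19] of GICD_TYPER hold (number of interrupt ID bits - 1) */
#define GICD_TYPER_ID_BITS(r)	((((r) >> 19) & 0x1f) + 1)

/* hypothetical helper: query how many interrupt ID bits the GIC implements */
static unsigned int gic_num_id_bits(void __iomem *gicd_base)
{
	u32 typer = readl(gicd_base + GICD_TYPER);

	return GICD_TYPER_ID_BITS(typer);
}

The LPI table sizes could then be derived from that (possibly capped) value instead of a fixed ITS_MAX_LPI_NRBITS.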
..
- ret = gic_lpi_tables_init(gic_lpi_base, cpu_numcores());
This really should fetch the number of CPUs from the DT rather than some SoC-specific black magic...
Currently, most Layerscape platforms' DTS files don't have cpu nodes. On Layerscape platforms the number of implemented cores can be read from the GUTS register.
So this would be the perfect time to actually sync the device trees with Linux.
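Once the cpu nodes are there, counting cores from the DT could look roughly like this (a sketch against U-Boot's ofnode API; dt_cpu_count() is a made-up helper, not existing code):

#include <dm/ofnode.h>
#include <linux/errno.h>
#include <linux/string.h>

/* hypothetical helper: count cores by walking /cpus in the device tree
 * instead of reading a SoC-specific register */
static int dt_cpu_count(void)
{
	ofnode cpus = ofnode_path("/cpus");
	ofnode node;
	int count = 0;

	if (!ofnode_valid(cpus))
		return -ENOENT;

	ofnode_for_each_subnode(node, cpus) {
		const char *type = ofnode_read_string(node, "device_type");

		if (type && !strcmp(type, "cpu"))
			count++;
	}

	return count;
}

The call in soc.c could then pass dt_cpu_count() instead of cpu_numcores().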
-michael