[PATCH v2 0/7] arm: cache: cp15: don't map reserved region with no-map property

Hi,
It is the v2 series of [1].
This v2 series is built on / can be applied on top of 2 previous series:
- [2] for the STM32MP parts, which added dram_bank_mmu_setup
- [3] for the LMB impacts
After the v1 remarks, I propose the separate series [2] for DACR support.
On the STM32MP15x platform we can use OP-TEE, loaded in DDR in a region protected by a firewall. This region is reserved in the device tree with the "no-map" property.
Sometimes the platform boot fails in U-Boot on a Cortex-A7 access to this region (it depends on the binary, and the issue can change with the compiler version or with code alignment); the firewall then raises an error, for example:
E/TC:0 tzc_it_handler:19 TZC permission failure
E/TC:0 dump_fail_filter:420 Permission violation on filter 0
E/TC:0 dump_fail_filter:425 Violation @0xde5c6bf0, non-secure privileged read, AXI ID 5c0
E/TC:0 Panic
After investigation, the forbidden access is a speculative request performed by the Cortex-A7, because all the DDR is mapped as MEMORY with the CACHEABLE property.
The issue is solved only when the region reserved by OP-TEE is no longer mapped in U-Boot, as is already done in the Linux kernel.
Tested on a DK2 board with OP-TEE 3.12 / TF-A 2.4:
With a hard-coded address for the OP-TEE reserved memory, the error no longer occurs:
void dram_bank_mmu_setup(int bank)
{
....
	for (i = start >> MMU_SECTION_SHIFT;
	     i < (start >> MMU_SECTION_SHIFT) + (size >> MMU_SECTION_SHIFT);
	     i++) {
		option = DCACHE_DEFAULT_OPTION;
		if (i >= 0xde0)	/* sections >= 0xde0 (@0xde000000): OP-TEE reserved region */
			option = INVALID_ENTRY;
		set_section_dcache(i, option);
	}
}
Just by changing the test from 0xde0 to 0xdf0, the OP-TEE memory protected by the firewall is mapped cacheable again and the error occurs.
I think this can be a general issue for the ARM architecture: the no-map tag in the device tree should be respected by U-Boot, so I propose a generic solution in arch/arm/lib/cache-cp15.c::dram_bank_mmu_setup().
This v2 series is composed of 7 patches:
- 1..3/7: preliminary steps to support flags in the lmb library (as done in memblock.c in Linux)
- 4/7: unit test of the added feature in the lmb library
- 5/7: save the no-map flag in lmb when the device tree is parsed
- 6/7: update the stm32mp MMU support
- 7/7: update the generic behavior for "no-map" regions in arch/arm/lib/cache-cp15.c::dram_bank_mmu_setup()
I can drop this last patch if the feature is not required by other ARM architectures; the weak function is already updated for the STM32MP architecture in patch 6 (in arch/arm/mach-stm32mp/cpu.c), which corrects the initial problem.
See also [4], which handles the same speculative access issue on armv8 for areas with the Executable attribute.
[1] http://patchwork.ozlabs.org/project/uboot/list/?series=206296&state=*
[2] http://patchwork.ozlabs.org/project/uboot/list/?series=228202&state=*
[3] http://patchwork.ozlabs.org/project/uboot/list/?series=227570&state=*
[4] http://patchwork.ozlabs.org/project/uboot/patch/20200903000106.5016-1-marek....
Regards,
Patrick
Changes in v2:
- remove unnecessary comments in lmb.h
- rebase on latest lmb patches
- NEW: update the stm32mp specific MMU setup functions
Patrick Delaunay (7):
  lmb: Add support of flags for no-map properties
  lmb: add lmb_is_reserved_flags
  lmb: add lmb_dump_region() function
  test: lmb: add test for lmb_reserve_flags
  image-fdt: save no-map parameter of reserve-memory
  stm32mp: don't map the reserved region with no-map property
  arm: cache: cp15: don't map the reserved region with no-map property
 arch/arm/include/asm/system.h |  3 ++
 arch/arm/lib/cache-cp15.c     | 19 ++++++-
 arch/arm/mach-stm32mp/cpu.c   | 16 +++++-
 common/image-fdt.c            | 23 ++++++---
 include/lmb.h                 | 21 ++++++++
 lib/lmb.c                     | 94 +++++++++++++++++++++++++----------
 test/lib/lmb.c                | 89 +++++++++++++++++++++++++++++++++
 7 files changed, 226 insertions(+), 39 deletions(-)

Add a "flags" field to lmb_property to save the "no-map" property of reserved regions, and a new function lmb_reserve_flags() to reserve memory with a given flags value.
The default allocation uses flags = LMB_NONE.
Adjacent reserved memory regions are merged only when they have the same flags value.
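As a quick illustration of that merge rule (example code only, using just the API added by this patch; addresses are arbitrary):

#include <lmb.h>

static void flags_merge_example(void)
{
	struct lmb lmb;

	lmb_init(&lmb);
	lmb_add(&lmb, 0x40000000, 0x20000000);	/* one DRAM bank */

	/* adjacent reservations with the same flags are coalesced */
	lmb_reserve_flags(&lmb, 0x40000000, 0x10000, LMB_NOMAP);
	lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP);

	/* an adjacent reservation with a different flags value stays separate */
	lmb_reserve_flags(&lmb, 0x40020000, 0x10000, LMB_NONE);

	/*
	 * lmb.reserved.cnt == 2:
	 *   [0x40000000-0x4001ffff] LMB_NOMAP
	 *   [0x40020000-0x4002ffff] LMB_NONE
	 */
}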
This patch is partially based on the flags support done in the Linux kernel's mm/memblock.c (previously lmb.c); this is why LMB_NOMAP = 0x4: it is aligned with the MEMBLOCK_NOMAP value.
Signed-off-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
---
Changes in v2:
- remove unnecessary comments in lmb.h
- rebase on latest lmb patches
 include/lmb.h | 20 ++++++++++++++++++++
 lib/lmb.c     | 52 ++++++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 63 insertions(+), 9 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 97be24ed66..4987e61bf3 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -12,6 +12,16 @@
  * Copyright (C) 2001 Peter Bergner, IBM Corp.
  */
 
+/**
+ * enum lmb_flags - definition of memory region attributes
+ * @LMB_NONE: no special request
+ * @LMB_NOMAP: don't add to mmu configuration
+ */
+enum lmb_flags {
+	LMB_NONE = 0x0,
+	LMB_NOMAP = 0x4,
+};
+
 /**
  * struct lmb_property - Description of one region.
  *
@@ -21,6 +31,7 @@
 struct lmb_property {
 	phys_addr_t base;
 	phys_size_t size;
+	enum lmb_flags flags;
 };
 
 /**
@@ -63,6 +74,8 @@ extern void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
 				       phys_size_t size, void *fdt_blob);
 extern long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size);
 extern long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size);
+extern long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base,
+			      phys_size_t size, enum lmb_flags flags);
 extern phys_addr_t lmb_alloc(struct lmb *lmb, phys_size_t size, ulong align);
 extern phys_addr_t lmb_alloc_base(struct lmb *lmb, phys_size_t size, ulong align,
 				  phys_addr_t max_addr);
@@ -86,6 +99,13 @@ lmb_size_bytes(struct lmb_region *type, unsigned long region_nr)
 void board_lmb_reserve(struct lmb *lmb);
 void arch_lmb_reserve(struct lmb *lmb);
 
+/* Low level functions */
+
+static inline bool lmb_is_nomap(struct lmb_property *m)
+{
+	return !!(m->flags & LMB_NOMAP);
+}
+
 #endif /* __KERNEL__ */
 
 #endif /* _LINUX_LMB_H */
diff --git a/lib/lmb.c b/lib/lmb.c
index c97be0a064..5975f6b6e1 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -25,6 +25,8 @@ void lmb_dump_all_force(struct lmb *lmb)
 		       (unsigned long long)lmb->memory.region[i].base);
 		printf("	   .size   = 0x%llx\n",
 		       (unsigned long long)lmb->memory.region[i].size);
+		printf("	   .flags  = 0x%x\n",
+		       lmb->memory.region[i].flags);
 	}
 
 	printf("\n    reserved.cnt	   = 0x%lx\n", lmb->reserved.cnt);
@@ -33,6 +35,8 @@ void lmb_dump_all_force(struct lmb *lmb)
 		       (unsigned long long)lmb->reserved.region[i].base);
 		printf("	     .size   = 0x%llx\n",
 		       (unsigned long long)lmb->reserved.region[i].size);
+		printf("	     .flags  = 0x%x\n",
+		       lmb->reserved.region[i].flags);
 	}
 }
 
@@ -81,6 +85,7 @@ static void lmb_remove_region(struct lmb_region *rgn, unsigned long r)
 	for (i = r; i < rgn->cnt - 1; i++) {
 		rgn->region[i].base = rgn->region[i + 1].base;
 		rgn->region[i].size = rgn->region[i + 1].size;
+		rgn->region[i].flags = rgn->region[i + 1].flags;
 	}
 	rgn->cnt--;
 }
@@ -141,7 +146,8 @@ void lmb_init_and_reserve_range(struct lmb *lmb, phys_addr_t base,
 }
 
 /* This routine called with relocation disabled. */
-static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, phys_size_t size)
+static long lmb_add_region_flags(struct lmb_region *rgn, phys_addr_t base,
+				 phys_size_t size, enum lmb_flags flags)
 {
 	unsigned long coalesced = 0;
 	long adjacent, i;
@@ -149,6 +155,7 @@ static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, phys_size_t
 	if (rgn->cnt == 0) {
 		rgn->region[0].base = base;
 		rgn->region[0].size = size;
+		rgn->region[0].flags = flags;
 		rgn->cnt = 1;
 		return 0;
 	}
@@ -157,18 +164,27 @@ static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, phys_size_t
 	for (i = 0; i < rgn->cnt; i++) {
 		phys_addr_t rgnbase = rgn->region[i].base;
 		phys_size_t rgnsize = rgn->region[i].size;
+		phys_size_t rgnflags = rgn->region[i].flags;
 
-		if ((rgnbase == base) && (rgnsize == size))
-			/* Already have this region, so we're done */
-			return 0;
+		if (rgnbase == base && rgnsize == size) {
+			if (flags == rgnflags)
+				/* Already have this region, so we're done */
+				return 0;
+			else
+				return -1; /* regions with new flags */
+		}
 
 		adjacent = lmb_addrs_adjacent(base, size, rgnbase, rgnsize);
 		if (adjacent > 0) {
+			if (flags != rgnflags)
+				break;
 			rgn->region[i].base -= size;
 			rgn->region[i].size += size;
 			coalesced++;
 			break;
 		} else if (adjacent < 0) {
+			if (flags != rgnflags)
+				break;
 			rgn->region[i].size += size;
 			coalesced++;
 			break;
@@ -179,8 +195,10 @@ static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, phys_size_t
 	}
 
 	if ((i < rgn->cnt - 1) && lmb_regions_adjacent(rgn, i, i + 1)) {
-		lmb_coalesce_regions(rgn, i, i + 1);
-		coalesced++;
+		if (rgn->region[i].flags == rgn->region[i + 1].flags) {
+			lmb_coalesce_regions(rgn, i, i + 1);
+			coalesced++;
+		}
 	}
 
 	if (coalesced)
@@ -193,9 +211,11 @@ static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, phys_size_t
 		if (base < rgn->region[i].base) {
 			rgn->region[i + 1].base = rgn->region[i].base;
 			rgn->region[i + 1].size = rgn->region[i].size;
+			rgn->region[i + 1].flags = rgn->region[i].flags;
 		} else {
 			rgn->region[i + 1].base = base;
 			rgn->region[i + 1].size = size;
+			rgn->region[i + 1].flags = flags;
 			break;
 		}
 	}
@@ -203,6 +223,7 @@ static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, phys_size_t
 	if (base < rgn->region[0].base) {
 		rgn->region[0].base = base;
 		rgn->region[0].size = size;
+		rgn->region[0].flags = flags;
 	}
 
 	rgn->cnt++;
@@ -210,6 +231,12 @@ static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base, phys_size_t
 	return 0;
 }
 
+static long lmb_add_region(struct lmb_region *rgn, phys_addr_t base,
+			   phys_size_t size)
+{
+	return lmb_add_region_flags(rgn, base, size, LMB_NONE);
+}
+
 /* This routine may be called with relocation disabled. */
 long lmb_add(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 {
@@ -264,14 +291,21 @@ long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size)
 	 * beginging of the hole and add the region after hole.
 	 */
 	rgn->region[i].size = base - rgn->region[i].base;
-	return lmb_add_region(rgn, end + 1, rgnend - end);
+	return lmb_add_region_flags(rgn, end + 1, rgnend - end,
+				    rgn->region[i].flags);
 }
 
-long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+long lmb_reserve_flags(struct lmb *lmb, phys_addr_t base, phys_size_t size,
+		       enum lmb_flags flags)
 {
 	struct lmb_region *_rgn = &(lmb->reserved);
 
-	return lmb_add_region(_rgn, base, size);
+	return lmb_add_region_flags(_rgn, base, size, flags);
+}
+
+long lmb_reserve(struct lmb *lmb, phys_addr_t base, phys_size_t size)
+{
+	return lmb_reserve_flags(lmb, base, size, LMB_NONE);
 }
 
 static long lmb_overlaps_region(struct lmb_region *rgn, phys_addr_t base,

Add a new function lmb_is_reserved_flags() to check if an address is reserved with specific flags.
This function can be used to check whether an address has been reserved with the no-map flag, with:
lmb_is_reserved_flags(lmb, addr, LMB_NOMAP);
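For illustration, with a no-map reservation in place (a sketch using illustrative addresses, borrowed from the OP-TEE example in the cover letter), the expected results are:

static void is_reserved_example(void)
{
	struct lmb lmb;

	lmb_init(&lmb);
	lmb_add(&lmb, 0xc0000000, 0x40000000);			/* DDR bank */
	lmb_reserve_flags(&lmb, 0xde000000, 0x1000000, LMB_NOMAP);

	lmb_is_reserved(&lmb, 0xde000000);			/* 1: reserved */
	lmb_is_reserved_flags(&lmb, 0xde000000, LMB_NOMAP);	/* 1: reserved with no-map */
	lmb_is_reserved_flags(&lmb, 0xc0000000, LMB_NOMAP);	/* 0: not reserved */
}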
Signed-off-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
---
(no changes since v1)
 include/lmb.h |  1 +
 lib/lmb.c     | 10 ++++++++--
 2 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/include/lmb.h b/include/lmb.h
index 4987e61bf3..59db94501d 100644
--- a/include/lmb.h
+++ b/include/lmb.h
@@ -85,6 +85,7 @@ extern phys_addr_t lmb_alloc_addr(struct lmb *lmb, phys_addr_t base,
 				  phys_size_t size);
 extern phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr);
 extern int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr);
+extern int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags);
 extern long lmb_free(struct lmb *lmb, phys_addr_t base, phys_size_t size);
 
 extern void lmb_dump_all(struct lmb *lmb);
diff --git a/lib/lmb.c b/lib/lmb.c
index 5975f6b6e1..af398c5144 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -440,7 +440,7 @@ phys_size_t lmb_get_free_size(struct lmb *lmb, phys_addr_t addr)
 	return 0;
 }
 
-int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr)
+int lmb_is_reserved_flags(struct lmb *lmb, phys_addr_t addr, int flags)
 {
 	int i;
 
@@ -448,11 +448,17 @@ int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr)
 		phys_addr_t upper = lmb->reserved.region[i].base +
 			lmb->reserved.region[i].size - 1;
 		if ((addr >= lmb->reserved.region[i].base) && (addr <= upper))
-			return 1;
+			return !!((lmb->reserved.region[i].flags & flags)
+				  == flags);
 	}
 	return 0;
 }
 
+int lmb_is_reserved(struct lmb *lmb, phys_addr_t addr)
+{
+	return lmb_is_reserved_flags(lmb, addr, LMB_NONE);
+}
+
 __weak void board_lmb_reserve(struct lmb *lmb)
 {
 	/* please define platform specific board_lmb_reserve() */

Add a new lmb_dump_region() function to simplify lmb_dump_all_force(). This patch is based on the Linux memblock dump function.
An example of bdinfo output is:
.....
fdt_size    = 0x000146a0
FB base     = 0xfdd00000
lmb_dump_all:
 memory.cnt  = 0x1
 memory[0]	[0xc0000000-0xffffffff], 0x40000000 bytes flags: 0
 reserved.cnt  = 0x6
 reserved[0]	[0x10000000-0x10045fff], 0x00046000 bytes flags: 4
 reserved[1]	[0x30000000-0x3003ffff], 0x00040000 bytes flags: 4
 reserved[2]	[0x38000000-0x3800ffff], 0x00010000 bytes flags: 4
 reserved[3]	[0xe8000000-0xefffffff], 0x08000000 bytes flags: 4
 reserved[4]	[0xfbaea344-0xfdffffff], 0x02515cbc bytes flags: 0
 reserved[5]	[0xfe000000-0xffffffff], 0x02000000 bytes flags: 4
arch_number = 0x00000000
TLB addr    = 0xfdff0000
....
Signed-off-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
---
(no changes since v1)
 lib/lmb.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)
diff --git a/lib/lmb.c b/lib/lmb.c
index af398c5144..b24eda89fb 100644
--- a/lib/lmb.c
+++ b/lib/lmb.c
@@ -14,32 +14,32 @@
 
 #define LMB_ALLOC_ANYWHERE	0
 
-void lmb_dump_all_force(struct lmb *lmb)
+static void lmb_dump_region(struct lmb_region *rgn, char *name)
 {
-	unsigned long i;
+	unsigned long long base, size, end;
+	enum lmb_flags flags;
+	int i;
 
-	printf("lmb_dump_all:\n");
-	printf("    memory.cnt  = 0x%lx\n", lmb->memory.cnt);
-	for (i = 0; i < lmb->memory.cnt; i++) {
-		printf("    memory.reg[0x%lx].base   = 0x%llx\n", i,
-		       (unsigned long long)lmb->memory.region[i].base);
-		printf("	   .size   = 0x%llx\n",
-		       (unsigned long long)lmb->memory.region[i].size);
-		printf("	   .flags  = 0x%x\n",
-		       lmb->memory.region[i].flags);
-	}
+	printf(" %s.cnt  = 0x%lx\n", name, rgn->cnt);
 
-	printf("\n    reserved.cnt	   = 0x%lx\n", lmb->reserved.cnt);
-	for (i = 0; i < lmb->reserved.cnt; i++) {
-		printf("    reserved.reg[0x%lx].base = 0x%llx\n", i,
-		       (unsigned long long)lmb->reserved.region[i].base);
-		printf("	     .size   = 0x%llx\n",
-		       (unsigned long long)lmb->reserved.region[i].size);
-		printf("	     .flags  = 0x%x\n",
-		       lmb->reserved.region[i].flags);
+	for (i = 0; i < rgn->cnt; i++) {
+		base = rgn->region[i].base;
+		size = rgn->region[i].size;
+		end = base + size - 1;
+		flags = rgn->region[i].flags;
+
+		printf(" %s[%d]\t[0x%llx-0x%llx], 0x%08llx bytes flags: %x\n",
+		       name, i, base, end, size, flags);
 	}
 }
 
+void lmb_dump_all_force(struct lmb *lmb)
+{
+	printf("lmb_dump_all:\n");
+	lmb_dump_region(&lmb->memory, "memory");
+	lmb_dump_region(&lmb->reserved, "reserved");
+}
+
 void lmb_dump_all(struct lmb *lmb)
 {
 #ifdef DEBUG

Add a test to check the management of reserved regions with flags.
Signed-off-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
---
(no changes since v1)
 test/lib/lmb.c | 89 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 89 insertions(+)
diff --git a/test/lib/lmb.c b/test/lib/lmb.c
index 644ee78758..d7bd826190 100644
--- a/test/lib/lmb.c
+++ b/test/lib/lmb.c
@@ -659,3 +659,92 @@ static int lib_test_lmb_get_free_size(struct unit_test_state *uts)
 
 DM_TEST(lib_test_lmb_get_free_size,
	UT_TESTF_SCAN_PDATA | UT_TESTF_SCAN_FDT);
+
+static int lib_test_lmb_flags(struct unit_test_state *uts)
+{
+	const phys_addr_t ram = 0x40000000;
+	const phys_size_t ram_size = 0x20000000;
+	struct lmb lmb;
+	long ret;
+
+	lmb_init(&lmb);
+
+	ret = lmb_add(&lmb, ram, ram_size);
+	ut_asserteq(ret, 0);
+
+	/* reserve, same flag */
+	ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP);
+	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+		   0, 0, 0, 0);
+
+	/* reserve again, same flag */
+	ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NOMAP);
+	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+		   0, 0, 0, 0);
+
+	/* reserve again, new flag */
+	ret = lmb_reserve_flags(&lmb, 0x40010000, 0x10000, LMB_NONE);
+	ut_asserteq(ret, -1);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x10000,
+		   0, 0, 0, 0);
+
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+
+	/* merge after */
+	ret = lmb_reserve_flags(&lmb, 0x40020000, 0x10000, LMB_NOMAP);
+	ut_asserteq(ret, 1);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40010000, 0x20000,
+		   0, 0, 0, 0);
+
+	/* merge before */
+	ret = lmb_reserve_flags(&lmb, 0x40000000, 0x10000, LMB_NOMAP);
+	ut_asserteq(ret, 1);
+	ASSERT_LMB(&lmb, ram, ram_size, 1, 0x40000000, 0x30000,
+		   0, 0, 0, 0);
+
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+
+	ret = lmb_reserve_flags(&lmb, 0x40030000, 0x10000, LMB_NONE);
+	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000,
+		   0x40030000, 0x10000, 0, 0);
+
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
+
+	/* test that old API use LMB_NONE */
+	ret = lmb_reserve(&lmb, 0x40040000, 0x10000);
+	ut_asserteq(ret, 1);
+	ASSERT_LMB(&lmb, ram, ram_size, 2, 0x40000000, 0x30000,
+		   0x40030000, 0x20000, 0, 0);
+
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
+
+	ret = lmb_reserve_flags(&lmb, 0x40070000, 0x10000, LMB_NOMAP);
+	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000,
+		   0x40030000, 0x20000, 0x40070000, 0x10000);
+
+	ret = lmb_reserve_flags(&lmb, 0x40050000, 0x10000, LMB_NOMAP);
+	ut_asserteq(ret, 0);
+	ASSERT_LMB(&lmb, ram, ram_size, 4, 0x40000000, 0x30000,
+		   0x40030000, 0x20000, 0x40050000, 0x10000);
+
+	/* merge with 2 adjacent regions */
+	ret = lmb_reserve_flags(&lmb, 0x40060000, 0x10000, LMB_NOMAP);
+	ut_asserteq(ret, 2);
+	ASSERT_LMB(&lmb, ram, ram_size, 3, 0x40000000, 0x30000,
+		   0x40030000, 0x20000, 0x40050000, 0x30000);
+
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[0]), 1);
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[1]), 0);
+	ut_asserteq(lmb_is_nomap(&lmb.reserved.region[2]), 1);
+
+	return 0;
+}
+
+DM_TEST(lib_test_lmb_flags,
+	UT_TESTF_SCAN_PDATA | UT_TESTF_SCAN_FDT);

Save the no-map information present in the 'reserved-memory' nodes so that these regions can be handled correctly when the MMU is configured by the board, avoiding speculative access.
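For reference, the kind of node this handles looks like the following device tree fragment (a sketch: node name, address and size are illustrative, borrowed from the OP-TEE example in the cover letter):

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		optee@de000000 {
			reg = <0xde000000 0x01000000>;
			no-map;
		};
	};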
Signed-off-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
---
(no changes since v1)
 common/image-fdt.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/common/image-fdt.c b/common/image-fdt.c
index 707b44a69d..efd29fdb7b 100644
--- a/common/image-fdt.c
+++ b/common/image-fdt.c
@@ -74,18 +74,20 @@ static const image_header_t *image_get_fdt(ulong fdt_addr)
 #endif
 
 static void boot_fdt_reserve_region(struct lmb *lmb, uint64_t addr,
-				    uint64_t size)
+				    uint64_t size, enum lmb_flags flags)
 {
 	long ret;
 
-	ret = lmb_reserve(lmb, addr, size);
+	ret = lmb_reserve_flags(lmb, addr, size, flags);
 	if (ret >= 0) {
-		debug("   reserving fdt memory region: addr=%llx size=%llx\n",
-		      (unsigned long long)addr, (unsigned long long)size);
+		debug("   reserving fdt memory region: addr=%llx size=%llx flags=%x\n",
+		      (unsigned long long)addr,
+		      (unsigned long long)size, flags);
 	} else {
 		puts("ERROR: reserving fdt memory region failed ");
-		printf("(addr=%llx size=%llx)\n",
-		       (unsigned long long)addr, (unsigned long long)size);
+		printf("(addr=%llx size=%llx flags=%x)\n",
+		       (unsigned long long)addr,
+		       (unsigned long long)size, flags);
 	}
 }
 
@@ -105,6 +107,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
 	int i, total, ret;
 	int nodeoffset, subnode;
 	struct fdt_resource res;
+	enum lmb_flags flags;
 
 	if (fdt_check_header(fdt_blob) != 0)
 		return;
@@ -114,7 +117,7 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
 	for (i = 0; i < total; i++) {
 		if (fdt_get_mem_rsv(fdt_blob, i, &addr, &size) != 0)
 			continue;
-		boot_fdt_reserve_region(lmb, addr, size);
+		boot_fdt_reserve_region(lmb, addr, size, LMB_NONE);
 	}
 
 	/* process reserved-memory */
@@ -126,9 +129,13 @@ void boot_fdt_add_mem_rsv_regions(struct lmb *lmb, void *fdt_blob)
 			ret = fdt_get_resource(fdt_blob, subnode,
 					       "reg", 0, &res);
 			if (!ret && fdtdec_get_is_enabled(fdt_blob, subnode)) {
+				flags = LMB_NONE;
+				if (fdtdec_get_bool(fdt_blob, subnode,
+						    "no-map"))
+					flags = LMB_NOMAP;
 				addr = res.start;
 				size = res.end - res.start + 1;
-				boot_fdt_reserve_region(lmb, addr, size);
+				boot_fdt_reserve_region(lmb, addr, size, flags);
 			}
 
 			subnode = fdt_next_subnode(fdt_blob, subnode);

Stop mapping the reserved regions carrying the "no-map" property: mark the corresponding TLB entries as invalid entries (=0) to avoid speculative access. A zero first-level descriptor is a fault entry, so the section is never translated and cannot be prefetched.
This patch fixes an issue where speculative read accesses to the secure DDR area reserved for OP-TEE were caught by the firewall.
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
---
Changes in v2:
- NEW: update the stm32mp specific MMU setup functions
 arch/arm/mach-stm32mp/cpu.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/arm/mach-stm32mp/cpu.c b/arch/arm/mach-stm32mp/cpu.c
index 030066dc7c..2e0d709fed 100644
--- a/arch/arm/mach-stm32mp/cpu.c
+++ b/arch/arm/mach-stm32mp/cpu.c
@@ -12,6 +12,7 @@
 #include <env.h>
 #include <init.h>
 #include <log.h>
+#include <lmb.h>
 #include <misc.h>
 #include <net.h>
 #include <asm/io.h>
@@ -220,6 +221,9 @@ void dram_bank_mmu_setup(int bank)
 	int	i;
 	phys_addr_t start;
 	phys_size_t size;
+	struct lmb lmb;
+	bool use_lmb = false;
+	enum dcache_option option;
 
 	if (IS_ENABLED(CONFIG_SPL_BUILD)) {
 		start = ALIGN_DOWN(STM32_SYSRAM_BASE, MMU_SECTION_SIZE);
@@ -228,6 +232,10 @@ void dram_bank_mmu_setup(int bank)
 		/* bd->bi_dram is available only after relocation */
 		start = bd->bi_dram[bank].start;
 		size = bd->bi_dram[bank].size;
+		if (IS_ENABLED(CONFIG_LMB)) {
+			use_lmb = true;
+			lmb_init_and_reserve(&lmb, bd, (void *)gd->fdt_blob);
+		}
 	} else {
 		/* mark cacheable and executable the beggining of the DDR */
 		start = STM32_DDR_BASE;
@@ -236,8 +244,12 @@ void dram_bank_mmu_setup(int bank)
 
 	for (i = start >> MMU_SECTION_SHIFT;
 	     i < (start >> MMU_SECTION_SHIFT) + (size >> MMU_SECTION_SHIFT);
-	     i++)
-		set_section_dcache(i, DCACHE_DEFAULT_OPTION);
+	     i++) {
+		option = DCACHE_DEFAULT_OPTION;
+		if (use_lmb && lmb_is_reserved_flags(&lmb, i << MMU_SECTION_SHIFT, LMB_NOMAP))
+			option = INVALID_ENTRY;
+		set_section_dcache(i, option);
+	}
 }
 /*
  * initialize the MMU and activate cache in SPL or in U-Boot pre-reloc stage

Stop mapping the reserved regions carrying the "no-map" property: mark the corresponding TLB entries as invalid entries (=0) to avoid speculative access.
This patch fixes a potential issue when speculative access is performed by the ARM core.
Signed-off-by: Patrick Delaunay <patrick.delaunay@st.com>
Signed-off-by: Patrick Delaunay <patrick.delaunay@foss.st.com>
---
(no changes since v1)
 arch/arm/include/asm/system.h |  3 +++
 arch/arm/lib/cache-cp15.c     | 19 +++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
index 11fceec4d2..c63ed07f2c 100644
--- a/arch/arm/include/asm/system.h
+++ b/arch/arm/include/asm/system.h
@@ -444,6 +444,7 @@ static inline void set_cr(unsigned int val)
 
 /* options available for data cache on each page */
 enum dcache_option {
+	INVALID_ENTRY = 0,
 	DCACHE_OFF = TTB_SECT | TTB_SECT_MAIR(0) | TTB_SECT_XN_MASK,
 	DCACHE_WRITETHROUGH = TTB_SECT | TTB_SECT_MAIR(1),
 	DCACHE_WRITEBACK = TTB_SECT | TTB_SECT_MAIR(2),
@@ -474,6 +475,7 @@ enum dcache_option {
  *  1  1  1	 Outer/Inner Write-Back, Read-Allocate Write-Allocate
  */
 enum dcache_option {
+	INVALID_ENTRY = 0,
 	DCACHE_OFF = TTB_SECT_DOMAIN(0) | TTB_SECT_XN_MASK | TTB_SECT,
 	DCACHE_WRITETHROUGH = TTB_SECT_DOMAIN(0) | TTB_SECT | TTB_SECT_C_MASK,
 	DCACHE_WRITEBACK = DCACHE_WRITETHROUGH | TTB_SECT_B_MASK,
@@ -483,6 +485,7 @@ enum dcache_option {
 #define TTB_SECT_AP		(3 << 10)
 /* options available for data cache on each page */
 enum dcache_option {
+	INVALID_ENTRY = 0,
 	DCACHE_OFF = 0x12,
 	DCACHE_WRITETHROUGH = 0x1a,
 	DCACHE_WRITEBACK = 0x1e,
diff --git a/arch/arm/lib/cache-cp15.c b/arch/arm/lib/cache-cp15.c
index 8a49e5217c..8a354d364d 100644
--- a/arch/arm/lib/cache-cp15.c
+++ b/arch/arm/lib/cache-cp15.c
@@ -6,6 +6,7 @@
 
 #include <common.h>
 #include <cpu_func.h>
+#include <lmb.h>
 #include <log.h>
 #include <asm/system.h>
 #include <asm/cache.h>
@@ -101,18 +102,32 @@ void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
 __weak void dram_bank_mmu_setup(int bank)
 {
 	struct bd_info *bd = gd->bd;
+	struct lmb lmb;
 	int	i;
 
 	/* bd->bi_dram is available only after relocation */
 	if ((gd->flags & GD_FLG_RELOC) == 0)
 		return;
 
+	/*
+	 * don't allow cache on reserved memory tagged 'no-map' in DT
+	 * => avoid speculative access to "secure" data
+	 */
+	if (IS_ENABLED(CONFIG_LMB))
+		lmb_init_and_reserve(&lmb, bd, (void *)gd->fdt_blob);
+
 	debug("%s: bank: %d\n", __func__, bank);
 	for (i = bd->bi_dram[bank].start >> MMU_SECTION_SHIFT;
 	     i < (bd->bi_dram[bank].start >> MMU_SECTION_SHIFT) +
		 (bd->bi_dram[bank].size >> MMU_SECTION_SHIFT);
-	     i++)
-		set_section_dcache(i, DCACHE_DEFAULT_OPTION);
+	     i++) {
+		if (IS_ENABLED(CONFIG_LMB) &&
+		    lmb_is_reserved_flags(&lmb, i << MMU_SECTION_SHIFT,
+					  LMB_NOMAP))
+			set_section_dcache(i, INVALID_ENTRY);
+		else
+			set_section_dcache(i, DCACHE_DEFAULT_OPTION);
+	}
 }
 
 /* to activate the MMU we need to set up virtual memory: use 1M areas */

Hi,
On 2/8/21 2:26 PM, Patrick Delaunay wrote:
> No more map the reserved region with "no-map" property by marking the
> corresponding TLB entries with invalid entry (=0) to avoid speculative
> access.
>
> This patch fixes potential issue when predictive access is done by ARM
> core.
>
> [...]
After more tests on the stm32mp15x platform, patch [6/7] introduces a
performance issue at boot time.
The device tree parsing done in lmb_init_and_reserve(), to find the
reserved memory regions with the "no-map" tag, is performed with the
data cache deactivated (as it is called before MMU and data cache
activation).
This parsing increases the device boot time (several seconds lost on stm32mp15).
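To make the cost concrete, the chain that runs uncached here is roughly the following (a sketch; function names from U-Boot and this series, lmb_init_and_reserve() reaching the FDT walk via lmb_reserve_common()):

	dram_bank_mmu_setup(bank)	/* runs before dcache_enable() */
	  -> lmb_init_and_reserve(&lmb, bd, gd->fdt_blob)
	       -> boot_fdt_add_mem_rsv_regions(lmb, fdt_blob)
	            -> walks each /reserved-memory subnode with
	               fdt_get_resource() and fdtdec_get_bool("no-map"),
	               all with the data cache still disabled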
So I need to send an updated version of patch [6/7].
But I will also drop patch [7/7], to avoid increasing the boot time on
the other ARM platforms using cp15.
Regards
Patrick