[U-Boot] [PATCH v1 0/6] Convert FSL LayerScape ARMv8 SoCs to use common MMU code

To use the common MMU code, non-identical (virtual != physical) mapping needs to be supported. A minor change in the MMU framework is required to support splitting blocks. With these changes, using the common code is straightforward. Attention is needed to where the tables are placed for early boot and for the secure and non-secure RAM cases.
York Sun (6):
  armv8: Move secure_ram variable out of generic global data
  armv8: Add tlb_allocated to arch global data
  armv8: mmu: house cleaning
  armv8: mmu: split block if necessary
  armv8: mmu: Add support of non-identical mapping
  armv8: layerscape: Convert to use common MMU framework
 README                                         |    3 +-
 arch/arm/cpu/armv8/cache_v8.c                  |  112 ++++----
 arch/arm/cpu/armv8/fsl-layerscape/cpu.c        |  365 ++++---------------------
 arch/arm/cpu/armv8/s32v234/cpu.c               |   12 +-
 arch/arm/cpu/armv8/zynqmp/cpu.c                |   21 +-
 arch/arm/include/asm/arch-fsl-layerscape/cpu.h |  310 ++++++++++++---------
 arch/arm/include/asm/armv8/mmu.h               |    5 +-
 arch/arm/include/asm/global_data.h             |   15 +
 arch/arm/mach-exynos/mmu-arm64.c               |    9 +-
 arch/arm/mach-meson/board.c                    |    6 +-
 arch/arm/mach-snapdragon/sysmap-apq8016.c      |    6 +-
 arch/arm/mach-sunxi/board.c                    |    6 +-
 arch/arm/mach-tegra/arm64-mmu.c                |    6 +-
 arch/arm/mach-uniphier/arm64/mem_map.c         |    6 +-
 board/armltd/vexpress64/vexpress64.c           |    6 +-
 board/cavium/thunderx/thunderx.c               |    9 +-
 board/freescale/ls1043aqds/ddr.c               |   15 +-
 board/freescale/ls1043ardb/ddr.c               |   15 +-
 board/freescale/ls2080a/ddr.c                  |   15 +-
 board/freescale/ls2080aqds/ddr.c               |   15 +-
 board/freescale/ls2080ardb/ddr.c               |   15 +-
 board/hisilicon/hikey/hikey.c                  |    6 +-
 board/raspberrypi/rpi/rpi.c                    |    6 +-
 cmd/bdinfo.c                                   |    4 +-
 common/board_f.c                               |   11 +-
 include/asm-generic/global_data.h              |   14 -
 26 files changed, 439 insertions(+), 574 deletions(-)

The secure_ram variable was put in the generic global data structure, but only ARMv8 uses it. Move it to the ARM-specific data structure.
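For reference, a minimal usage sketch (the helper name report_secure_ram() is made up; it only shows how consumers are expected to read the relocated field and its flags):

#include <common.h>
#include <asm/global_data.h>

DECLARE_GLOBAL_DATA_PTR;

#ifdef CONFIG_SYS_MEM_RESERVE_SECURE
/* Sketch: only trust gd->arch.secure_ram after dram_init_banksize()
 * has recalculated it for the real RAM layout.
 */
static void report_secure_ram(void)
{
	if (gd->arch.secure_ram & MEM_RESERVE_SECURE_MAINTAINED) {
		phys_addr_t base = gd->arch.secure_ram &
				   MEM_RESERVE_SECURE_ADDR_MASK;

		debug("Secure memory at 0x%llx\n", (unsigned long long)base);
	}
}
#endif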
Signed-off-by: York Sun <york.sun@nxp.com>
---
 README                                  |    3 ++-
 arch/arm/cpu/armv8/fsl-layerscape/cpu.c |   20 ++++++++++----------
 arch/arm/include/asm/global_data.h      |   14 ++++++++++++++
 board/freescale/ls1043aqds/ddr.c        |   15 ++++++++-------
 board/freescale/ls1043ardb/ddr.c        |   15 ++++++++-------
 board/freescale/ls2080a/ddr.c           |   15 ++++++++-------
 board/freescale/ls2080aqds/ddr.c        |   15 ++++++++-------
 board/freescale/ls2080ardb/ddr.c        |   15 ++++++++-------
 cmd/bdinfo.c                            |    4 ++--
 common/board_f.c                        |    2 +-
 include/asm-generic/global_data.h       |   14 --------------
 11 files changed, 69 insertions(+), 63 deletions(-)
diff --git a/README b/README index 03bed18..1a5f121 100644 --- a/README +++ b/README @@ -3783,10 +3783,11 @@ Configuration Settings: You only need to set this if address zero isn't writeable
- CONFIG_SYS_MEM_RESERVE_SECURE + Only implemented for ARMv8 for now. If defined, the size of CONFIG_SYS_MEM_RESERVE_SECURE memory is substracted from total RAM and won't be reported to OS. This memory can be used as secure memory. A variable - gd->secure_ram is used to track the location. In systems + gd->arch.secure_ram is used to track the location. In systems the RAM base is not zero, or RAM is divided into banks, this variable needs to be recalcuated to get the address.
diff --git a/arch/arm/cpu/armv8/fsl-layerscape/cpu.c b/arch/arm/cpu/armv8/fsl-layerscape/cpu.c index 8062106..a397f5d 100644 --- a/arch/arm/cpu/armv8/fsl-layerscape/cpu.c +++ b/arch/arm/cpu/armv8/fsl-layerscape/cpu.c @@ -289,8 +289,8 @@ static inline int final_secure_ddr(u64 *level0_table, * These tables are in DRAM. Sub tables are added to enable cache for * QBMan and OCRAM. * - * Put the MMU table in secure memory if gd->secure_ram is valid. - * OCRAM will be not used for this purpose so gd->secure_ram can't be 0. + * Put the MMU table in secure memory if gd->arch.secure_ram is valid. + * OCRAM will be not used for this purpose so gd->arch.secure_ram can't be 0. * * Level 1 table 0 contains 512 entries for each 1GB from 0 to 512GB. * Level 1 table 1 contains 512 entries for each 1GB from 512GB to 1TB. @@ -321,13 +321,13 @@ static inline void final_mmu_setup(void)
if (el == 3) { /* - * Only use gd->secure_ram if the address is recalculated + * Only use gd->arch.secure_ram if the address is recalculated * Align to 4KB for MMU table */ - if (gd->secure_ram & MEM_RESERVE_SECURE_MAINTAINED) - level0_table = (u64 *)(gd->secure_ram & ~0xfff); + if (gd->arch.secure_ram & MEM_RESERVE_SECURE_MAINTAINED) + level0_table = (u64 *)(gd->arch.secure_ram & ~0xfff); else - printf("MMU warning: gd->secure_ram is not maintained, disabled.\n"); + printf("MMU warning: gd->arch.secure_ram is not maintained, disabled.\n"); } #endif level1_table0 = level0_table + 512; @@ -374,7 +374,7 @@ static inline void final_mmu_setup(void) } /* Set the secure memory to secure in MMU */ #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - if (el == 3 && gd->secure_ram & MEM_RESERVE_SECURE_MAINTAINED) { + if (el == 3 && gd->arch.secure_ram & MEM_RESERVE_SECURE_MAINTAINED) { #ifdef CONFIG_FSL_LSCH3 level2_table_secure = level2_table1 + 512; #elif defined(CONFIG_FSL_LSCH2) @@ -382,10 +382,10 @@ static inline void final_mmu_setup(void) #endif if (!final_secure_ddr(level0_table, level2_table_secure, - gd->secure_ram & ~0x3)) { - gd->secure_ram |= MEM_RESERVE_SECURE_SECURED; + gd->arch.secure_ram & ~0x3)) { + gd->arch.secure_ram |= MEM_RESERVE_SECURE_SECURED; debug("Now MMU table is in secured memory at 0x%llx\n", - gd->secure_ram & ~0x3); + gd->arch.secure_ram & ~0x3); } else { printf("MMU warning: Failed to secure DDR\n"); } diff --git a/arch/arm/include/asm/global_data.h b/arch/arm/include/asm/global_data.h index 77d2653..2d76cd4 100644 --- a/arch/arm/include/asm/global_data.h +++ b/arch/arm/include/asm/global_data.h @@ -44,6 +44,20 @@ struct arch_global_data { unsigned long tlb_emerg; #endif #endif +#ifdef CONFIG_SYS_MEM_RESERVE_SECURE +#define MEM_RESERVE_SECURE_SECURED 0x1 +#define MEM_RESERVE_SECURE_MAINTAINED 0x2 +#define MEM_RESERVE_SECURE_ADDR_MASK (~0x3) + /* + * Secure memory addr + * This variable needs maintenance if the RAM base is not zero, + * or if RAM splits into non-consecutive banks. It also has a + * flag indicating the secure memory is marked as secure by MMU. + * Flags used: 0x1 secured + * 0x2 maintained + */ + phys_addr_t secure_ram; +#endif
#ifdef CONFIG_OMAP_COMMON u32 omap_boot_device; diff --git a/board/freescale/ls1043aqds/ddr.c b/board/freescale/ls1043aqds/ddr.c index 0fd835d..d4540d0 100644 --- a/board/freescale/ls1043aqds/ddr.c +++ b/board/freescale/ls1043aqds/ddr.c @@ -128,7 +128,7 @@ phys_size_t initdram(int board_type) void dram_init_banksize(void) { /* - * gd->secure_ram tracks the location of secure memory. + * gd->arch.secure_ram tracks the location of secure memory. * It was set as if the memory starts from 0. * The address needs to add the offset of its bank. */ @@ -139,16 +139,17 @@ void dram_init_banksize(void) gd->bd->bi_dram[1].size = gd->ram_size - CONFIG_SYS_DDR_BLOCK1_SIZE; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[1].start + - gd->secure_ram - - CONFIG_SYS_DDR_BLOCK1_SIZE; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[1].start + + gd->arch.secure_ram - + CONFIG_SYS_DDR_BLOCK1_SIZE; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif } else { gd->bd->bi_dram[0].size = gd->ram_size; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[0].start + gd->secure_ram; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[0].start + + gd->arch.secure_ram; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif } } diff --git a/board/freescale/ls1043ardb/ddr.c b/board/freescale/ls1043ardb/ddr.c index 1e2fd2e..61b1cc4 100644 --- a/board/freescale/ls1043ardb/ddr.c +++ b/board/freescale/ls1043ardb/ddr.c @@ -189,7 +189,7 @@ phys_size_t initdram(int board_type) void dram_init_banksize(void) { /* - * gd->secure_ram tracks the location of secure memory. + * gd->arch.secure_ram tracks the location of secure memory. * It was set as if the memory starts from 0. * The address needs to add the offset of its bank. */ @@ -200,16 +200,17 @@ void dram_init_banksize(void) gd->bd->bi_dram[1].size = gd->ram_size - CONFIG_SYS_DDR_BLOCK1_SIZE; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[1].start + - gd->secure_ram - - CONFIG_SYS_DDR_BLOCK1_SIZE; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[1].start + + gd->arch.secure_ram - + CONFIG_SYS_DDR_BLOCK1_SIZE; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif } else { gd->bd->bi_dram[0].size = gd->ram_size; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[0].start + gd->secure_ram; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[0].start + + gd->arch.secure_ram; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif } } diff --git a/board/freescale/ls2080a/ddr.c b/board/freescale/ls2080a/ddr.c index 1827ddc..e6130ec 100644 --- a/board/freescale/ls2080a/ddr.c +++ b/board/freescale/ls2080a/ddr.c @@ -177,7 +177,7 @@ void dram_init_banksize(void) #endif
/* - * gd->secure_ram tracks the location of secure memory. + * gd->arch.secure_ram tracks the location of secure memory. * It was set as if the memory starts from 0. * The address needs to add the offset of its bank. */ @@ -188,16 +188,17 @@ void dram_init_banksize(void) gd->bd->bi_dram[1].size = gd->ram_size - CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[1].start + - gd->secure_ram - - CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[1].start + + gd->arch.secure_ram - + CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif } else { gd->bd->bi_dram[0].size = gd->ram_size; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[0].start + gd->secure_ram; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[0].start + + gd->arch.secure_ram; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif }
diff --git a/board/freescale/ls2080aqds/ddr.c b/board/freescale/ls2080aqds/ddr.c index fcb0366..9c6f477 100644 --- a/board/freescale/ls2080aqds/ddr.c +++ b/board/freescale/ls2080aqds/ddr.c @@ -177,7 +177,7 @@ void dram_init_banksize(void) #endif
/* - * gd->secure_ram tracks the location of secure memory. + * gd->arch.secure_ram tracks the location of secure memory. * It was set as if the memory starts from 0. * The address needs to add the offset of its bank. */ @@ -188,16 +188,17 @@ void dram_init_banksize(void) gd->bd->bi_dram[1].size = gd->ram_size - CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[1].start + - gd->secure_ram - - CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[1].start + + gd->arch.secure_ram - + CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif } else { gd->bd->bi_dram[0].size = gd->ram_size; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[0].start + gd->secure_ram; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[0].start + + gd->arch.secure_ram; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif }
diff --git a/board/freescale/ls2080ardb/ddr.c b/board/freescale/ls2080ardb/ddr.c index a04d21b..ecd1e71 100644 --- a/board/freescale/ls2080ardb/ddr.c +++ b/board/freescale/ls2080ardb/ddr.c @@ -177,7 +177,7 @@ void dram_init_banksize(void) #endif
/* - * gd->secure_ram tracks the location of secure memory. + * gd->arch.secure_ram tracks the location of secure memory. * It was set as if the memory starts from 0. * The address needs to add the offset of its bank. */ @@ -188,16 +188,17 @@ void dram_init_banksize(void) gd->bd->bi_dram[1].size = gd->ram_size - CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[1].start + - gd->secure_ram - - CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[1].start + + gd->arch.secure_ram - + CONFIG_SYS_LS2_DDR_BLOCK1_SIZE; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif } else { gd->bd->bi_dram[0].size = gd->ram_size; #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - gd->secure_ram = gd->bd->bi_dram[0].start + gd->secure_ram; - gd->secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; + gd->arch.secure_ram = gd->bd->bi_dram[0].start + + gd->arch.secure_ram; + gd->arch.secure_ram |= MEM_RESERVE_SECURE_MAINTAINED; #endif }
diff --git a/cmd/bdinfo.c b/cmd/bdinfo.c index 1c4bed9..f2435ab 100644 --- a/cmd/bdinfo.c +++ b/cmd/bdinfo.c @@ -385,9 +385,9 @@ static int do_bdinfo(cmd_tbl_t *cmdtp, int flag, int argc, }
#ifdef CONFIG_SYS_MEM_RESERVE_SECURE - if (gd->secure_ram & MEM_RESERVE_SECURE_SECURED) { + if (gd->arch.secure_ram & MEM_RESERVE_SECURE_SECURED) { print_num("Secure ram", - gd->secure_ram & MEM_RESERVE_SECURE_ADDR_MASK); + gd->arch.secure_ram & MEM_RESERVE_SECURE_ADDR_MASK); } #endif #if defined(CONFIG_CMD_NET) && !defined(CONFIG_DM_ETH) diff --git a/common/board_f.c b/common/board_f.c index d405b5b..0fc96bd 100644 --- a/common/board_f.c +++ b/common/board_f.c @@ -339,7 +339,7 @@ static int setup_dest_addr(void) * Record secure memory location. Need recalcuate if memory splits * into banks, or the ram base is not zero. */ - gd->secure_ram = gd->ram_size; + gd->arch.secure_ram = gd->ram_size; #endif /* * Subtract specified amount of memory to hide so that it won't diff --git a/include/asm-generic/global_data.h b/include/asm-generic/global_data.h index 0abcbe4..a6d1d2a 100644 --- a/include/asm-generic/global_data.h +++ b/include/asm-generic/global_data.h @@ -55,20 +55,6 @@ typedef struct global_data {
unsigned long relocaddr; /* Start address of U-Boot in RAM */ phys_size_t ram_size; /* RAM size */ -#ifdef CONFIG_SYS_MEM_RESERVE_SECURE -#define MEM_RESERVE_SECURE_SECURED 0x1 -#define MEM_RESERVE_SECURE_MAINTAINED 0x2 -#define MEM_RESERVE_SECURE_ADDR_MASK (~0x3) - /* - * Secure memory addr - * This variable needs maintenance if the RAM base is not zero, - * or if RAM splits into non-consecutive banks. It also has a - * flag indicating the secure memory is marked as secure by MMU. - * Flags used: 0x1 secured - * 0x2 maintained - */ - phys_addr_t secure_ram; -#endif unsigned long mon_len; /* monitor len */ unsigned long irq_sp; /* irq stack pointer */ unsigned long start_addr_sp; /* start_addr_stackpointer */

When secure RAM is used, the MMU tables have to be put into secure RAM. To use the common MMU code, gd->arch.tlb_addr is used to hold the page table pointer. To save the originally allocated memory for later use, a tlb_allocated variable is added to the arch global data structure.
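A rough sketch of how the two fields interact (select_tlb_addr() is an illustrative name; the real logic lives in board_f.c and in the LayerScape final_mmu_setup() later in this series):

#include <common.h>
#include <asm/global_data.h>

DECLARE_GLOBAL_DATA_PTR;

/* Sketch: board_f.c saves the reserved table location in
 * gd->arch.tlb_allocated; the platform may later point gd->arch.tlb_addr
 * into secure RAM, or fall back to the saved allocation.
 */
static phys_addr_t select_tlb_addr(void)
{
#ifdef CONFIG_SYS_MEM_RESERVE_SECURE
	if (gd->arch.secure_ram & MEM_RESERVE_SECURE_MAINTAINED)
		return gd->arch.secure_ram & ~0xfff;	/* 4KB aligned */

	return gd->arch.tlb_allocated;
#else
	return gd->arch.tlb_addr;
#endif
}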
Signed-off-by: York Sun <york.sun@nxp.com>
---
 arch/arm/include/asm/global_data.h |    1 +
 common/board_f.c                   |    9 +++++++++
 2 files changed, 10 insertions(+)
diff --git a/arch/arm/include/asm/global_data.h b/arch/arm/include/asm/global_data.h index 2d76cd4..1055017 100644 --- a/arch/arm/include/asm/global_data.h +++ b/arch/arm/include/asm/global_data.h @@ -57,6 +57,7 @@ struct arch_global_data { * 0x2 maintained */ phys_addr_t secure_ram; + unsigned long tlb_allocated; #endif
#ifdef CONFIG_OMAP_COMMON diff --git a/common/board_f.c b/common/board_f.c index 0fc96bd..a1138b0 100644 --- a/common/board_f.c +++ b/common/board_f.c @@ -432,6 +432,15 @@ static int reserve_mmu(void) gd->arch.tlb_addr = gd->relocaddr; debug("TLB table from %08lx to %08lx\n", gd->arch.tlb_addr, gd->arch.tlb_addr + gd->arch.tlb_size); + +#ifdef CONFIG_SYS_MEM_RESERVE_SECURE + /* + * Record allocated tlb_addr in case gd->tlb_addr to be overwritten + * with location within secure ram. + */ + gd->arch.tlb_allocated = gd->arch.tlb_addr; +#endif + return 0; } #endif

Make setup_pgtables() and get_tcr() available to platform code to customize MMU tables. Remove an unintentional call to create_table().
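With the two symbols exported, platform code can roughly do the following (platform_mmu_setup() is a made-up name; this mirrors how the LayerScape early MMU setup uses them later in this series):

#include <common.h>
#include <asm/armv8/mmu.h>
#include <asm/system.h>

DECLARE_GLOBAL_DATA_PTR;

/* Sketch: build page tables from mem_map and turn the MMU on. Assumes
 * gd->arch.tlb_addr, tlb_fillptr and tlb_size already describe a buffer.
 */
static void platform_mmu_setup(void)
{
	unsigned int el = current_el();

	setup_pgtables();
	set_ttbr_tcr_mair(el, gd->arch.tlb_addr,
			  get_tcr(el, NULL, NULL), MEMORY_ATTRIBUTES);
	set_sctlr(get_sctlr() | CR_M);
}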
Signed-off-by: York Sun <york.sun@nxp.com>
---
 arch/arm/cpu/armv8/cache_v8.c    |   13 ++++++++-----
 arch/arm/include/asm/armv8/mmu.h |    2 ++
 2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/arm/cpu/armv8/cache_v8.c b/arch/arm/cpu/armv8/cache_v8.c index 1615542..b8867a7 100644 --- a/arch/arm/cpu/armv8/cache_v8.c +++ b/arch/arm/cpu/armv8/cache_v8.c @@ -35,7 +35,7 @@ DECLARE_GLOBAL_DATA_PTR; * off: FFF */
-static u64 get_tcr(int el, u64 *pips, u64 *pva_bits) +u64 get_tcr(int el, u64 *pips, u64 *pva_bits) { u64 max_addr = 0; u64 ips, va_bits; @@ -349,10 +349,13 @@ __weak u64 get_page_table_size(void) return size; }
-static void setup_pgtables(void) +void setup_pgtables(void) { int i;
+ if (!gd->arch.tlb_fillptr || !gd->arch.tlb_addr) + panic("Page table pointer not setup."); + /* * Allocate the first level we're on with invalidate entries. * If the starting level is 0 (va_bits >= 39), then this is our @@ -363,9 +366,6 @@ static void setup_pgtables(void) /* Now add all MMU table entries one after another to the table */ for (i = 0; mem_map[i].size || mem_map[i].attrs; i++) add_map(&mem_map[i]); - - /* Create the same thing once more for our emergency page table */ - create_table(); }
static void setup_all_pgtables(void) @@ -527,6 +527,9 @@ void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
debug("start=%lx size=%lx\n", (ulong)start, (ulong)size);
+ if (!gd->arch.tlb_emerg) + panic("Emergency page table not setup."); + /* * We can not modify page tables that we're currently running on, * so we first need to switch to the "emergency" page tables where diff --git a/arch/arm/include/asm/armv8/mmu.h b/arch/arm/include/asm/armv8/mmu.h index 0d08ed3..b7b4706 100644 --- a/arch/arm/include/asm/armv8/mmu.h +++ b/arch/arm/include/asm/armv8/mmu.h @@ -141,6 +141,8 @@ struct mm_region { };
extern struct mm_region *mem_map; +void setup_pgtables(void); +u64 get_tcr(int el, u64 *pips, u64 *pva_bits); #endif
#endif /* _ASM_ARMV8_MMU_H_ */

When page tables are created, allow a later table to be created on top of a previous block entry. The block-splitting feature already works with the current code. This patch only rearranges the code order and adds one condition for calling split_block().
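For illustration, a made-up map like the one below now behaves as expected: the second, smaller region falls inside the 1GB block created for the first region, so add_map() splits that block into a sub-table instead of leaving the old entry in place.

#include <asm/armv8/mmu.h>

static struct mm_region example_map[] = {
	{
		/* 1GB of DRAM, covered by a single level-1 block */
		.base = 0x80000000UL,
		.size = 0x40000000UL,
		.attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) |
			 PTE_BLOCK_INNER_SHARE
	}, {
		/* 2MB device window inside the block above; this entry
		 * now triggers split_block() on the level-1 entry
		 */
		.base = 0x88000000UL,
		.size = 0x00200000UL,
		.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
			 PTE_BLOCK_NON_SHARE
	},
	{ /* List terminator */ },
};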
Signed-off-by: York Sun <york.sun@nxp.com>
---
 arch/arm/cpu/armv8/cache_v8.c |   70 +++++++++++++++++++++++--------------------
 1 file changed, 38 insertions(+), 32 deletions(-)
diff --git a/arch/arm/cpu/armv8/cache_v8.c b/arch/arm/cpu/armv8/cache_v8.c index b8867a7..8604035 100644 --- a/arch/arm/cpu/armv8/cache_v8.c +++ b/arch/arm/cpu/armv8/cache_v8.c @@ -167,6 +167,37 @@ static void set_pte_table(u64 *pte, u64 *table) *pte = PTE_TYPE_TABLE | (ulong)table; }
+/* Splits a block PTE into table with subpages spanning the old block */ +static void split_block(u64 *pte, int level) +{ + u64 old_pte = *pte; + u64 *new_table; + u64 i = 0; + /* level describes the parent level, we need the child ones */ + int levelshift = level2shift(level + 1); + + if (pte_type(pte) != PTE_TYPE_BLOCK) + panic("PTE %p (%llx) is not a block. Some driver code wants to " + "modify dcache settings for an range not covered in " + "mem_map.", pte, old_pte); + + new_table = create_table(); + debug("Splitting pte %p (%llx) into %p\n", pte, old_pte, new_table); + + for (i = 0; i < MAX_PTE_ENTRIES; i++) { + new_table[i] = old_pte | (i << levelshift); + + /* Level 3 block PTEs have the table type */ + if ((level + 1) == 3) + new_table[i] |= PTE_TYPE_TABLE; + + debug("Setting new_table[%lld] = %llx\n", i, new_table[i]); + } + + /* Set the new table into effect */ + set_pte_table(pte, new_table); +} + /* Add one mm_region map entry to the page tables */ static void add_map(struct mm_region *map) { @@ -188,6 +219,8 @@ static void add_map(struct mm_region *map)
for (level = 1; level < 4; level++) { pte = find_pte(addr, level); + if (!pte) + panic("pte not found\n"); blocksize = 1ULL << level2shift(level); debug("Checking if pte fits for addr=%llx size=%llx " "blocksize=%llx\n", addr, size, blocksize); @@ -199,48 +232,21 @@ static void add_map(struct mm_region *map) addr += blocksize; size -= blocksize; break; - } else if ((pte_type(pte) == PTE_TYPE_FAULT)) { + } else if (pte_type(pte) == PTE_TYPE_FAULT) { /* Page doesn't fit, create subpages */ debug("Creating subtable for addr 0x%llx " "blksize=%llx\n", addr, blocksize); new_table = create_table(); set_pte_table(pte, new_table); + } else if (pte_type(pte) == PTE_TYPE_BLOCK) { + debug("Split block into subtable for addr 0x%llx blksize=0x%llx\n", + addr, blocksize); + split_block(pte, level); } } } }
-/* Splits a block PTE into table with subpages spanning the old block */ -static void split_block(u64 *pte, int level) -{ - u64 old_pte = *pte; - u64 *new_table; - u64 i = 0; - /* level describes the parent level, we need the child ones */ - int levelshift = level2shift(level + 1); - - if (pte_type(pte) != PTE_TYPE_BLOCK) - panic("PTE %p (%llx) is not a block. Some driver code wants to " - "modify dcache settings for an range not covered in " - "mem_map.", pte, old_pte); - - new_table = create_table(); - debug("Splitting pte %p (%llx) into %p\n", pte, old_pte, new_table); - - for (i = 0; i < MAX_PTE_ENTRIES; i++) { - new_table[i] = old_pte | (i << levelshift); - - /* Level 3 block PTEs have the table type */ - if ((level + 1) == 3) - new_table[i] |= PTE_TYPE_TABLE; - - debug("Setting new_table[%lld] = %llx\n", i, new_table[i]); - } - - /* Set the new table into effect */ - set_pte_table(pte, new_table); -} - enum pte_type { PTE_INVAL, PTE_BLOCK,

Introduce separate virtual and physical addresses in the mapping table. This change has no impact on existing boards because they all use identical mapping.
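All entries converted below keep .virt equal to .phys. For a board that does want a non-identity mapping, an entry would look like this made-up example:

#include <asm/armv8/mmu.h>

static struct mm_region example_mem_map[] = {
	{
		/* Identity-mapped peripheral window (virt == phys) */
		.virt = 0x00000000UL,
		.phys = 0x00000000UL,
		.size = 0x40000000UL,
		.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
			 PTE_BLOCK_NON_SHARE
	}, {
		/* Non-identity mapping: DRAM above the 32-bit physical
		 * boundary appears at a low virtual address
		 */
		.virt = 0x80000000UL,
		.phys = 0x880000000UL,
		.size = 0x80000000UL,
		.attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) |
			 PTE_BLOCK_INNER_SHARE
	},
	{ /* List terminator */ },
};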
Signed-off-by: York Sun <york.sun@nxp.com>
---
 arch/arm/cpu/armv8/cache_v8.c             |   37 +++++++++++++++++--------------
 arch/arm/cpu/armv8/s32v234/cpu.c          |   12 ++++++----
 arch/arm/cpu/armv8/zynqmp/cpu.c           |   21 ++++++++++++------
 arch/arm/include/asm/armv8/mmu.h          |    3 ++-
 arch/arm/mach-exynos/mmu-arm64.c          |    9 ++++----
 arch/arm/mach-meson/board.c               |    6 +++--
 arch/arm/mach-snapdragon/sysmap-apq8016.c |    6 +++--
 arch/arm/mach-sunxi/board.c               |    6 +++--
 arch/arm/mach-tegra/arm64-mmu.c           |    6 +++--
 arch/arm/mach-uniphier/arm64/mem_map.c    |    6 +++--
 board/armltd/vexpress64/vexpress64.c      |    6 +++--
 board/cavium/thunderx/thunderx.c          |    9 +++++---
 board/hisilicon/hikey/hikey.c             |    6 +++--
 board/raspberrypi/rpi/rpi.c               |    6 +++--
 14 files changed, 86 insertions(+), 53 deletions(-)
diff --git a/arch/arm/cpu/armv8/cache_v8.c b/arch/arm/cpu/armv8/cache_v8.c index 8604035..ac909a1 100644 --- a/arch/arm/cpu/armv8/cache_v8.c +++ b/arch/arm/cpu/armv8/cache_v8.c @@ -44,7 +44,7 @@ u64 get_tcr(int el, u64 *pips, u64 *pva_bits)
/* Find the largest address we need to support */ for (i = 0; mem_map[i].size || mem_map[i].attrs; i++) - max_addr = max(max_addr, mem_map[i].base + mem_map[i].size); + max_addr = max(max_addr, mem_map[i].virt + mem_map[i].size);
/* Calculate the maximum physical (and thus virtual) address */ if (max_addr > (1ULL << 44)) { @@ -202,7 +202,8 @@ static void split_block(u64 *pte, int level) static void add_map(struct mm_region *map) { u64 *pte; - u64 addr = map->base; + u64 virt = map->virt; + u64 phys = map->phys; u64 size = map->size; u64 attrs = map->attrs | PTE_TYPE_BLOCK | PTE_BLOCK_AF; u64 blocksize; @@ -210,37 +211,39 @@ static void add_map(struct mm_region *map) u64 *new_table;
while (size) { - pte = find_pte(addr, 0); + pte = find_pte(virt, 0); if (pte && (pte_type(pte) == PTE_TYPE_FAULT)) { - debug("Creating table for addr 0x%llx\n", addr); + debug("Creating table for virt 0x%llx\n", virt); new_table = create_table(); set_pte_table(pte, new_table); }
for (level = 1; level < 4; level++) { - pte = find_pte(addr, level); + pte = find_pte(virt, level); if (!pte) panic("pte not found\n"); + blocksize = 1ULL << level2shift(level); - debug("Checking if pte fits for addr=%llx size=%llx " - "blocksize=%llx\n", addr, size, blocksize); - if (size >= blocksize && !(addr & (blocksize - 1))) { + debug("Checking if pte fits for virt=%llx size=%llx blocksize=%llx\n", + virt, size, blocksize); + if (size >= blocksize && !(virt & (blocksize - 1))) { /* Page fits, create block PTE */ - debug("Setting PTE %p to block addr=%llx\n", - pte, addr); - *pte = addr | attrs; - addr += blocksize; + debug("Setting PTE %p to block virt=%llx\n", + pte, virt); + *pte = phys | attrs; + virt += blocksize; + phys += blocksize; size -= blocksize; break; } else if (pte_type(pte) == PTE_TYPE_FAULT) { /* Page doesn't fit, create subpages */ - debug("Creating subtable for addr 0x%llx " - "blksize=%llx\n", addr, blocksize); + debug("Creating subtable for virt 0x%llx blksize=%llx\n", + virt, blocksize); new_table = create_table(); set_pte_table(pte, new_table); } else if (pte_type(pte) == PTE_TYPE_BLOCK) { - debug("Split block into subtable for addr 0x%llx blksize=0x%llx\n", - addr, blocksize); + debug("Split block into subtable for virt 0x%llx blksize=0x%llx\n", + virt, blocksize); split_block(pte, level); } } @@ -271,7 +274,7 @@ static int count_required_pts(u64 addr, int level, u64 maxaddr)
for (i = 0; mem_map[i].size || mem_map[i].attrs; i++) { struct mm_region *map = &mem_map[i]; - u64 start = map->base; + u64 start = map->virt; u64 end = start + map->size;
/* Check if the PTE would overlap with the map */ diff --git a/arch/arm/cpu/armv8/s32v234/cpu.c b/arch/arm/cpu/armv8/s32v234/cpu.c index dac12a2..5c97e0e 100644 --- a/arch/arm/cpu/armv8/s32v234/cpu.c +++ b/arch/arm/cpu/armv8/s32v234/cpu.c @@ -32,24 +32,28 @@ u32 cpu_mask(void)
static struct mm_region s32v234_mem_map[] = { { - .base = S32V234_IRAM_BASE, + .virt = S32V234_IRAM_BASE, + .phys = S32V234_IRAM_BASE, .size = S32V234_IRAM_SIZE, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_OUTER_SHARE }, { - .base = S32V234_DRAM_BASE1, + .virt = S32V234_DRAM_BASE1, + .phys = S32V234_DRAM_BASE1, .size = S32V234_DRAM_SIZE1, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_OUTER_SHARE }, { - .base = S32V234_PERIPH_BASE, + .virt = S32V234_PERIPH_BASE, + .phys = S32V234_PERIPH_BASE, .size = S32V234_PERIPH_SIZE, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE /* TODO: Do we need these? */ /* | PTE_BLOCK_PXN | PTE_BLOCK_UXN */ }, { - .base = S32V234_DRAM_BASE2, + .virt = S32V234_DRAM_BASE2, + .phys = S32V234_DRAM_BASE2, .size = S32V234_DRAM_SIZE2, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL_NC) | PTE_BLOCK_OUTER_SHARE diff --git a/arch/arm/cpu/armv8/zynqmp/cpu.c b/arch/arm/cpu/armv8/zynqmp/cpu.c index 509f0aa..b0f1295 100644 --- a/arch/arm/cpu/armv8/zynqmp/cpu.c +++ b/arch/arm/cpu/armv8/zynqmp/cpu.c @@ -18,40 +18,47 @@ DECLARE_GLOBAL_DATA_PTR;
static struct mm_region zynqmp_mem_map[] = { { - .base = 0x0UL, + .virt = 0x0UL, + .phys = 0x0UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE }, { - .base = 0x80000000UL, + .virt = 0x80000000UL, + .phys = 0x80000000UL, .size = 0x70000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, { - .base = 0xf8000000UL, + .virt = 0xf8000000UL, + .phys = 0xf8000000UL, .size = 0x07e00000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, { - .base = 0xffe00000UL, + .virt = 0xffe00000UL, + .phys = 0xffe00000UL, .size = 0x00200000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE }, { - .base = 0x400000000UL, + .virt = 0x400000000UL, + .phys = 0x400000000UL, .size = 0x200000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, { - .base = 0x600000000UL, + .virt = 0x600000000UL, + .phys = 0x600000000UL, .size = 0x800000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE }, { - .base = 0xe00000000UL, + .virt = 0xe00000000UL, + .phys = 0xe00000000UL, .size = 0xf200000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | diff --git a/arch/arm/include/asm/armv8/mmu.h b/arch/arm/include/asm/armv8/mmu.h index b7b4706..aa0f3c4 100644 --- a/arch/arm/include/asm/armv8/mmu.h +++ b/arch/arm/include/asm/armv8/mmu.h @@ -135,7 +135,8 @@ static inline void set_ttbr_tcr_mair(int el, u64 table, u64 tcr, u64 attr) }
struct mm_region { - u64 base; + u64 virt; + u64 phys; u64 size; u64 attrs; }; diff --git a/arch/arm/mach-exynos/mmu-arm64.c b/arch/arm/mach-exynos/mmu-arm64.c index ba6d99d..2381422 100644 --- a/arch/arm/mach-exynos/mmu-arm64.c +++ b/arch/arm/mach-exynos/mmu-arm64.c @@ -13,21 +13,20 @@ DECLARE_GLOBAL_DATA_PTR; #ifdef CONFIG_EXYNOS7420 static struct mm_region exynos7420_mem_map[] = { { - .base = 0x10000000UL, + .virt = 0x10000000UL, + .phys = 0x10000000UL, .size = 0x10000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN, }, { - .base = 0x40000000UL, + .virt = 0x40000000UL, + .phys = 0x40000000UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE, }, { /* List terminator */ - .base = 0, - .size = 0, - .attrs = 0, }, };
diff --git a/arch/arm/mach-meson/board.c b/arch/arm/mach-meson/board.c index 64fa3c1..1dd53e2 100644 --- a/arch/arm/mach-meson/board.c +++ b/arch/arm/mach-meson/board.c @@ -48,12 +48,14 @@ void reset_cpu(ulong addr)
static struct mm_region gxbb_mem_map[] = { { - .base = 0x0UL, + .virt = 0x0UL, + .phys = 0x0UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE }, { - .base = 0x80000000UL, + .virt = 0x80000000UL, + .phys = 0x80000000UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | diff --git a/arch/arm/mach-snapdragon/sysmap-apq8016.c b/arch/arm/mach-snapdragon/sysmap-apq8016.c index ef0db2a..580b9c7 100644 --- a/arch/arm/mach-snapdragon/sysmap-apq8016.c +++ b/arch/arm/mach-snapdragon/sysmap-apq8016.c @@ -11,13 +11,15 @@
static struct mm_region apq8016_mem_map[] = { { - .base = 0x0UL, /* Peripheral block */ + .virt = 0x0UL, /* Peripheral block */ + .phys = 0x0UL, /* Peripheral block */ .size = 0x8000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, { - .base = 0x80000000UL, /* DDR */ + .virt = 0x80000000UL, /* DDR */ + .phys = 0x80000000UL, /* DDR */ .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE diff --git a/arch/arm/mach-sunxi/board.c b/arch/arm/mach-sunxi/board.c index bd15b9b..2195264 100644 --- a/arch/arm/mach-sunxi/board.c +++ b/arch/arm/mach-sunxi/board.c @@ -46,13 +46,15 @@ struct fel_stash fel_stash __attribute__((section(".data"))); static struct mm_region sunxi_mem_map[] = { { /* SRAM, MMIO regions */ - .base = 0x0UL, + .virt = 0x0UL, + .phys = 0x0UL, .size = 0x40000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE }, { /* RAM */ - .base = 0x40000000UL, + .virt = 0x40000000UL, + .phys = 0x40000000UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE diff --git a/arch/arm/mach-tegra/arm64-mmu.c b/arch/arm/mach-tegra/arm64-mmu.c index 501c4f0..7b1d258 100644 --- a/arch/arm/mach-tegra/arm64-mmu.c +++ b/arch/arm/mach-tegra/arm64-mmu.c @@ -14,13 +14,15 @@
static struct mm_region tegra_mem_map[] = { { - .base = 0x0UL, + .virt = 0x0UL, + .phys = 0x0UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, { - .base = 0x80000000UL, + .virt = 0x80000000UL, + .phys = 0x80000000UL, .size = 0xff80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE diff --git a/arch/arm/mach-uniphier/arm64/mem_map.c b/arch/arm/mach-uniphier/arm64/mem_map.c index 74ef919..67bc4f1 100644 --- a/arch/arm/mach-uniphier/arm64/mem_map.c +++ b/arch/arm/mach-uniphier/arm64/mem_map.c @@ -10,14 +10,16 @@
static struct mm_region uniphier_mem_map[] = { { - .base = 0x00000000, + .virt = 0x00000000, + .phys = 0x00000000, .size = 0x80000000, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, { - .base = 0x80000000, + .virt = 0x80000000, + .phys = 0x80000000, .size = 0xc0000000, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE diff --git a/board/armltd/vexpress64/vexpress64.c b/board/armltd/vexpress64/vexpress64.c index 973b579..e34af6c 100644 --- a/board/armltd/vexpress64/vexpress64.c +++ b/board/armltd/vexpress64/vexpress64.c @@ -31,13 +31,15 @@ U_BOOT_DEVICE(vexpress_serials) = {
static struct mm_region vexpress64_mem_map[] = { { - .base = 0x0UL, + .virt = 0x0UL, + .phys = 0x0UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, { - .base = 0x80000000UL, + .virt = 0x80000000UL, + .phys = 0x80000000UL, .size = 0xff80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE diff --git a/board/cavium/thunderx/thunderx.c b/board/cavium/thunderx/thunderx.c index 9131a38..960ca53 100644 --- a/board/cavium/thunderx/thunderx.c +++ b/board/cavium/thunderx/thunderx.c @@ -45,16 +45,19 @@ DECLARE_GLOBAL_DATA_PTR;
static struct mm_region thunderx_mem_map[] = { { - .base = 0x000000000000UL, + .virt = 0x000000000000UL, + .phys = 0x000000000000UL, .size = 0x40000000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_NON_SHARE, }, { - .base = 0x800000000000UL, + .virt = 0x800000000000UL, + .phys = 0x800000000000UL, .size = 0x40000000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE, }, { - .base = 0x840000000000UL, + .virt = 0x840000000000UL, + .phys = 0x840000000000UL, .size = 0x40000000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE, diff --git a/board/hisilicon/hikey/hikey.c b/board/hisilicon/hikey/hikey.c index 7abc678..72d6334 100644 --- a/board/hisilicon/hikey/hikey.c +++ b/board/hisilicon/hikey/hikey.c @@ -93,12 +93,14 @@ U_BOOT_DEVICE(hikey_seriala) = {
static struct mm_region hikey_mem_map[] = { { - .base = 0x0UL, + .virt = 0x0UL, + .phys = 0x0UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE }, { - .base = 0x80000000UL, + .virt = 0x80000000UL, + .phys = 0x80000000UL, .size = 0x80000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE | diff --git a/board/raspberrypi/rpi/rpi.c b/board/raspberrypi/rpi/rpi.c index c45ddb1..fbfbf6c 100644 --- a/board/raspberrypi/rpi/rpi.c +++ b/board/raspberrypi/rpi/rpi.c @@ -234,12 +234,14 @@ static const struct rpi_model *model; #ifdef CONFIG_ARM64 static struct mm_region bcm2837_mem_map[] = { { - .base = 0x00000000UL, + .virt = 0x00000000UL, + .phys = 0x00000000UL, .size = 0x3f000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_INNER_SHARE }, { - .base = 0x3f000000UL, + .virt = 0x3f000000UL, + .phys = 0x3f000000UL, .size = 0x01000000UL, .attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE |

Drop the platform code that creates static MMU tables. Use the common framework to create MMU tables at run time. Tested on LS2080ARDB with both secure and non-secure RAM scenarios.
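The converted final_mmu_setup() boils down to roughly the following sketch (secure-RAM placement and error handling omitted; the _sketch suffix is only to avoid clashing with the real function, and the usual includes plus DECLARE_GLOBAL_DATA_PTR are assumed):

/* Sketch: rebuild the tables from final_map after relocation. */
static inline void final_mmu_setup_sketch(void)
{
	u64 tlb_addr_save = gd->arch.tlb_addr;
	unsigned int el = current_el();

	mem_map = final_map;			/* switch from early_map */
	gd->arch.tlb_fillptr = tlb_addr_save;

	setup_pgtables();			/* normal tables */

	gd->arch.tlb_addr = gd->arch.tlb_fillptr;
	gd->arch.tlb_emerg = gd->arch.tlb_addr;
	setup_pgtables();			/* emergency tables */
	gd->arch.tlb_addr = tlb_addr_save;

	flush_dcache_range(gd->arch.tlb_addr,
			   gd->arch.tlb_addr + gd->arch.tlb_size);
	set_ttbr_tcr_mair(el, gd->arch.tlb_addr, get_tcr(el, NULL, NULL),
			  MEMORY_ATTRIBUTES);
}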
Signed-off-by: York Sun <york.sun@nxp.com>
CC: Alison Wang <alison.wang@nxp.com>, Prabhakar Kushwaha <prabhakar.kushwaha@nxp.com>
---
 arch/arm/cpu/armv8/fsl-layerscape/cpu.c        |  359 ++++---------------------
 arch/arm/include/asm/arch-fsl-layerscape/cpu.h |  310 ++++++++++++---------
 2 files changed, 237 insertions(+), 432 deletions(-)
diff --git a/arch/arm/cpu/armv8/fsl-layerscape/cpu.c b/arch/arm/cpu/armv8/fsl-layerscape/cpu.c index a397f5d..4790686 100644 --- a/arch/arm/cpu/armv8/fsl-layerscape/cpu.c +++ b/arch/arm/cpu/armv8/fsl-layerscape/cpu.c @@ -26,13 +26,7 @@
DECLARE_GLOBAL_DATA_PTR;
-static struct mm_region layerscape_mem_map[] = { - { - /* List terminator */ - 0, - } -}; -struct mm_region *mem_map = layerscape_mem_map; +struct mm_region *mem_map = early_map;
void cpu_name(char *name) { @@ -56,233 +50,35 @@ void cpu_name(char *name) }
#ifndef CONFIG_SYS_DCACHE_OFF -static void set_pgtable_section(u64 *page_table, u64 index, u64 section, - u64 memory_type, u64 attribute) -{ - u64 value; - - value = section | PTE_TYPE_BLOCK | PTE_BLOCK_AF; - value |= PMD_ATTRINDX(memory_type); - value |= attribute; - page_table[index] = value; -} - -static void set_pgtable_table(u64 *page_table, u64 index, u64 *table_addr) -{ - u64 value; - - value = (u64)table_addr | PTE_TYPE_TABLE; - page_table[index] = value; -} - -/* - * Set the block entries according to the information of the table. - */ -static int set_block_entry(const struct sys_mmu_table *list, - struct table_info *table) -{ - u64 block_size = 0, block_shift = 0; - u64 block_addr, index; - int j; - - if (table->entry_size == BLOCK_SIZE_L1) { - block_size = BLOCK_SIZE_L1; - block_shift = SECTION_SHIFT_L1; - } else if (table->entry_size == BLOCK_SIZE_L2) { - block_size = BLOCK_SIZE_L2; - block_shift = SECTION_SHIFT_L2; - } else { - return -EINVAL; - } - - block_addr = list->phys_addr; - index = (list->virt_addr - table->table_base) >> block_shift; - - for (j = 0; j < (list->size >> block_shift); j++) { - set_pgtable_section(table->ptr, - index, - block_addr, - list->memory_type, - list->attribute); - block_addr += block_size; - index++; - } - - return 0; -} - -/* - * Find the corresponding table entry for the list. - */ -static int find_table(const struct sys_mmu_table *list, - struct table_info *table, u64 *level0_table) -{ - u64 index = 0, level = 0; - u64 *level_table = level0_table; - u64 temp_base = 0, block_size = 0, block_shift = 0; - - while (level < 3) { - if (level == 0) { - block_size = BLOCK_SIZE_L0; - block_shift = SECTION_SHIFT_L0; - } else if (level == 1) { - block_size = BLOCK_SIZE_L1; - block_shift = SECTION_SHIFT_L1; - } else if (level == 2) { - block_size = BLOCK_SIZE_L2; - block_shift = SECTION_SHIFT_L2; - } - - index = 0; - while (list->virt_addr >= temp_base) { - index++; - temp_base += block_size; - } - - temp_base -= block_size; - - if ((level_table[index - 1] & PTE_TYPE_MASK) == - PTE_TYPE_TABLE) { - level_table = (u64 *)(level_table[index - 1] & - ~PTE_TYPE_MASK); - level++; - continue; - } else { - if (level == 0) - return -EINVAL; - - if ((list->phys_addr + list->size) > - (temp_base + block_size * NUM_OF_ENTRY)) - return -EINVAL; - - /* - * Check the address and size of the list member is - * aligned with the block size. - */ - if (((list->phys_addr & (block_size - 1)) != 0) || - ((list->size & (block_size - 1)) != 0)) - return -EINVAL; - - table->ptr = level_table; - table->table_base = temp_base - - ((index - 1) << block_shift); - table->entry_size = block_size; - - return 0; - } - } - return -EINVAL; -} - /* * To start MMU before DDR is available, we create MMU table in SRAM. * The base address of SRAM is CONFIG_SYS_FSL_OCRAM_BASE. We use three * levels of translation tables here to cover 40-bit address space. * We use 4KB granule size, with 40 bits physical address, T0SZ=24 - * Level 0 IA[39], table address @0 - * Level 1 IA[38:30], table address @0x1000, 0x2000 - * Level 2 IA[29:21], table address @0x3000, 0x4000 - * Address above 0x5000 is free for other purpose. + * Address above EARLY_PGTABLE_SIZE (0x5000) is free for other purpose. + * Note, the debug print in cache_v8.c is not usable for debugging + * these early MMU tables because UART is not yet available. 
*/ static inline void early_mmu_setup(void) { - unsigned int el, i; - u64 *level0_table = (u64 *)CONFIG_SYS_FSL_OCRAM_BASE; - u64 *level1_table0 = (u64 *)(CONFIG_SYS_FSL_OCRAM_BASE + 0x1000); - u64 *level1_table1 = (u64 *)(CONFIG_SYS_FSL_OCRAM_BASE + 0x2000); - u64 *level2_table0 = (u64 *)(CONFIG_SYS_FSL_OCRAM_BASE + 0x3000); - u64 *level2_table1 = (u64 *)(CONFIG_SYS_FSL_OCRAM_BASE + 0x4000); - - struct table_info table = {level0_table, 0, BLOCK_SIZE_L0}; + unsigned int el = current_el();
- /* Invalidate all table entries */ - memset(level0_table, 0, 0x5000); + /* global data is already setup, no allocation yet */ + gd->arch.tlb_addr = CONFIG_SYS_FSL_OCRAM_BASE; + gd->arch.tlb_fillptr = gd->arch.tlb_addr; + gd->arch.tlb_size = EARLY_PGTABLE_SIZE;
- /* Fill in the table entries */ - set_pgtable_table(level0_table, 0, level1_table0); - set_pgtable_table(level0_table, 1, level1_table1); - set_pgtable_table(level1_table0, 0, level2_table0); + /* Create early page tables */ + setup_pgtables();
-#ifdef CONFIG_FSL_LSCH3 - set_pgtable_table(level1_table0, - CONFIG_SYS_FLASH_BASE >> SECTION_SHIFT_L1, - level2_table1); -#elif defined(CONFIG_FSL_LSCH2) - set_pgtable_table(level1_table0, 1, level2_table1); -#endif - /* Find the table and fill in the block entries */ - for (i = 0; i < ARRAY_SIZE(early_mmu_table); i++) { - if (find_table(&early_mmu_table[i], - &table, level0_table) == 0) { - /* - * If find_table() returns error, it cannot be dealt - * with here. Breakpoint can be added for debugging. - */ - set_block_entry(&early_mmu_table[i], &table); - /* - * If set_block_entry() returns error, it cannot be - * dealt with here too. - */ - } - } - - el = current_el(); - - set_ttbr_tcr_mair(el, (u64)level0_table, LAYERSCAPE_TCR, + /* point TTBR to the new table */ + set_ttbr_tcr_mair(el, gd->arch.tlb_addr, + get_tcr(el, NULL, NULL) & + ~(TCR_ORGN_MASK | TCR_IRGN_MASK), MEMORY_ATTRIBUTES); - set_sctlr(get_sctlr() | CR_M); -}
-#ifdef CONFIG_SYS_MEM_RESERVE_SECURE -/* - * Called from final mmu setup. The phys_addr is new, non-existing - * address. A new sub table is created @level2_table_secure to cover - * size of CONFIG_SYS_MEM_RESERVE_SECURE memory. - */ -static inline int final_secure_ddr(u64 *level0_table, - u64 *level2_table_secure, - phys_addr_t phys_addr) -{ - int ret = -EINVAL; - struct table_info table = {}; - struct sys_mmu_table ddr_entry = { - 0, 0, BLOCK_SIZE_L1, MT_NORMAL, - PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS - }; - u64 index; - - /* Need to create a new table */ - ddr_entry.virt_addr = phys_addr & ~(BLOCK_SIZE_L1 - 1); - ddr_entry.phys_addr = phys_addr & ~(BLOCK_SIZE_L1 - 1); - ret = find_table(&ddr_entry, &table, level0_table); - if (ret) - return ret; - index = (ddr_entry.virt_addr - table.table_base) >> SECTION_SHIFT_L1; - set_pgtable_table(table.ptr, index, level2_table_secure); - table.ptr = level2_table_secure; - table.table_base = ddr_entry.virt_addr; - table.entry_size = BLOCK_SIZE_L2; - ret = set_block_entry(&ddr_entry, &table); - if (ret) { - printf("MMU error: could not fill non-secure ddr block entries\n"); - return ret; - } - ddr_entry.virt_addr = phys_addr; - ddr_entry.phys_addr = phys_addr; - ddr_entry.size = CONFIG_SYS_MEM_RESERVE_SECURE; - ddr_entry.attribute = PTE_BLOCK_OUTER_SHARE; - ret = find_table(&ddr_entry, &table, level0_table); - if (ret) { - printf("MMU error: could not find secure ddr table\n"); - return ret; - } - ret = set_block_entry(&ddr_entry, &table); - if (ret) - printf("MMU error: could not set secure ddr block entry\n"); - - return ret; + set_sctlr(get_sctlr() | CR_M); } -#endif
/* * The final tables look similar to early tables, but different in detail. @@ -291,113 +87,58 @@ static inline int final_secure_ddr(u64 *level0_table, * * Put the MMU table in secure memory if gd->arch.secure_ram is valid. * OCRAM will be not used for this purpose so gd->arch.secure_ram can't be 0. - * - * Level 1 table 0 contains 512 entries for each 1GB from 0 to 512GB. - * Level 1 table 1 contains 512 entries for each 1GB from 512GB to 1TB. - * Level 2 table 0 contains 512 entries for each 2MB from 0 to 1GB. - * - * For LSCH3: - * Level 2 table 1 contains 512 entries for each 2MB from 32GB to 33GB. - * For LSCH2: - * Level 2 table 1 contains 512 entries for each 2MB from 1GB to 2GB. - * Level 2 table 2 contains 512 entries for each 2MB from 20GB to 21GB. */ static inline void final_mmu_setup(void) { + u64 tlb_addr_save = gd->arch.tlb_addr; unsigned int el = current_el(); - unsigned int i; - u64 *level0_table = (u64 *)gd->arch.tlb_addr; - u64 *level1_table0; - u64 *level1_table1; - u64 *level2_table0; - u64 *level2_table1; -#ifdef CONFIG_FSL_LSCH2 - u64 *level2_table2; -#endif - struct table_info table = {NULL, 0, BLOCK_SIZE_L0}; - #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - u64 *level2_table_secure; - - if (el == 3) { - /* - * Only use gd->arch.secure_ram if the address is recalculated - * Align to 4KB for MMU table - */ - if (gd->arch.secure_ram & MEM_RESERVE_SECURE_MAINTAINED) - level0_table = (u64 *)(gd->arch.secure_ram & ~0xfff); - else - printf("MMU warning: gd->arch.secure_ram is not maintained, disabled.\n"); - } -#endif - level1_table0 = level0_table + 512; - level1_table1 = level1_table0 + 512; - level2_table0 = level1_table1 + 512; - level2_table1 = level2_table0 + 512; -#ifdef CONFIG_FSL_LSCH2 - level2_table2 = level2_table1 + 512; + int index; #endif - table.ptr = level0_table;
- /* Invalidate all table entries */ - memset(level0_table, 0, PGTABLE_SIZE); - - /* Fill in the table entries */ - set_pgtable_table(level0_table, 0, level1_table0); - set_pgtable_table(level0_table, 1, level1_table1); - set_pgtable_table(level1_table0, 0, level2_table0); -#ifdef CONFIG_FSL_LSCH3 - set_pgtable_table(level1_table0, - CONFIG_SYS_FSL_QBMAN_BASE >> SECTION_SHIFT_L1, - level2_table1); -#elif defined(CONFIG_FSL_LSCH2) - set_pgtable_table(level1_table0, 1, level2_table1); - set_pgtable_table(level1_table0, - CONFIG_SYS_FSL_QBMAN_BASE >> SECTION_SHIFT_L1, - level2_table2); -#endif - - /* Find the table and fill in the block entries */ - for (i = 0; i < ARRAY_SIZE(final_mmu_table); i++) { - if (find_table(&final_mmu_table[i], - &table, level0_table) == 0) { - if (set_block_entry(&final_mmu_table[i], - &table) != 0) { - printf("MMU error: could not set block entry for %p\n", - &final_mmu_table[i]); - } + mem_map = final_map;
- } else { - printf("MMU error: could not find the table for %p\n", - &final_mmu_table[i]); - } - } - /* Set the secure memory to secure in MMU */ #ifdef CONFIG_SYS_MEM_RESERVE_SECURE - if (el == 3 && gd->arch.secure_ram & MEM_RESERVE_SECURE_MAINTAINED) { -#ifdef CONFIG_FSL_LSCH3 - level2_table_secure = level2_table1 + 512; -#elif defined(CONFIG_FSL_LSCH2) - level2_table_secure = level2_table2 + 512; -#endif - if (!final_secure_ddr(level0_table, - level2_table_secure, - gd->arch.secure_ram & ~0x3)) { + if (gd->arch.secure_ram & MEM_RESERVE_SECURE_MAINTAINED) { + if (el == 3) { + /* + * Only use gd->arch.secure_ram if the address is + * recalculated. Align to 4KB for MMU table. + */ + /* put page tables in secure ram */ + index = ARRAY_SIZE(final_map) - 2; + gd->arch.tlb_addr = gd->arch.secure_ram & ~0xfff; + final_map[index].virt = gd->arch.secure_ram & ~0x3; + final_map[index].phys = final_map[index].virt; + final_map[index].size = CONFIG_SYS_MEM_RESERVE_SECURE; + final_map[index].attrs = PTE_BLOCK_OUTER_SHARE; gd->arch.secure_ram |= MEM_RESERVE_SECURE_SECURED; - debug("Now MMU table is in secured memory at 0x%llx\n", - gd->arch.secure_ram & ~0x3); + tlb_addr_save = gd->arch.tlb_addr; } else { - printf("MMU warning: Failed to secure DDR\n"); + /* Use allocated (board_f.c) memory for TLB */ + tlb_addr_save = gd->arch.tlb_allocated; } } #endif
+ /* Reset the fill ptr */ + gd->arch.tlb_fillptr = tlb_addr_save; + + /* Create normal system page tables */ + setup_pgtables(); + + /* Create emergency page tables */ + gd->arch.tlb_addr = gd->arch.tlb_fillptr; + gd->arch.tlb_emerg = gd->arch.tlb_addr; + setup_pgtables(); + gd->arch.tlb_addr = tlb_addr_save; + /* flush new MMU table */ - flush_dcache_range((ulong)level0_table, - (ulong)level0_table + gd->arch.tlb_size); + flush_dcache_range(gd->arch.tlb_addr, + gd->arch.tlb_addr + gd->arch.tlb_size);
/* point TTBR to the new table */ - set_ttbr_tcr_mair(el, (u64)level0_table, LAYERSCAPE_TCR_FINAL, + set_ttbr_tcr_mair(el, gd->arch.tlb_addr, get_tcr(el, NULL, NULL), MEMORY_ATTRIBUTES); /* * MMU is already enabled, just need to invalidate TLB to load the diff --git a/arch/arm/include/asm/arch-fsl-layerscape/cpu.h b/arch/arm/include/asm/arch-fsl-layerscape/cpu.h index df877dd..5f5b8ba 100644 --- a/arch/arm/include/asm/arch-fsl-layerscape/cpu.h +++ b/arch/arm/include/asm/arch-fsl-layerscape/cpu.h @@ -19,29 +19,6 @@ static struct cpu_type cpu_type_list[] = {
#ifndef CONFIG_SYS_DCACHE_OFF
-#define SECTION_SHIFT_L0 39UL -#define SECTION_SHIFT_L1 30UL -#define SECTION_SHIFT_L2 21UL -#define BLOCK_SIZE_L0 0x8000000000 -#define BLOCK_SIZE_L1 0x40000000 -#define BLOCK_SIZE_L2 0x200000 -#define NUM_OF_ENTRY 512 -#define TCR_EL2_PS_40BIT (2 << 16) - -#define LAYERSCAPE_VA_BITS (40) -#define LAYERSCAPE_TCR (TCR_TG0_4K | \ - TCR_EL2_PS_40BIT | \ - TCR_SHARED_NON | \ - TCR_ORGN_NC | \ - TCR_IRGN_NC | \ - TCR_T0SZ(LAYERSCAPE_VA_BITS)) -#define LAYERSCAPE_TCR_FINAL (TCR_TG0_4K | \ - TCR_EL2_PS_40BIT | \ - TCR_SHARED_OUTER | \ - TCR_ORGN_WBWA | \ - TCR_IRGN_WBWA | \ - TCR_T0SZ(LAYERSCAPE_VA_BITS)) - #ifdef CONFIG_FSL_LSCH3 #define CONFIG_SYS_FSL_CCSR_BASE 0x00000000 #define CONFIG_SYS_FSL_CCSR_SIZE 0x10000000 @@ -101,174 +78,261 @@ static struct cpu_type cpu_type_list[] = { #define CONFIG_SYS_FSL_DRAM_SIZE3 0x7800000000 /* 480GB */ #endif
-struct sys_mmu_table { - u64 virt_addr; - u64 phys_addr; - u64 size; - u64 memory_type; - u64 attribute; -}; - -struct table_info { - u64 *ptr; - u64 table_base; - u64 entry_size; -}; - -static const struct sys_mmu_table early_mmu_table[] = { +#define EARLY_PGTABLE_SIZE 0x5000 +static struct mm_region early_map[] = { #ifdef CONFIG_FSL_LSCH3 { CONFIG_SYS_FSL_CCSR_BASE, CONFIG_SYS_FSL_CCSR_BASE, - CONFIG_SYS_FSL_CCSR_SIZE, MT_DEVICE_NGNRNE, - PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, + CONFIG_SYS_FSL_CCSR_SIZE, + PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | + PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN + }, { CONFIG_SYS_FSL_OCRAM_BASE, CONFIG_SYS_FSL_OCRAM_BASE, - CONFIG_SYS_FSL_OCRAM_SIZE, MT_NORMAL, PTE_BLOCK_NON_SHARE }, + CONFIG_SYS_FSL_OCRAM_SIZE, + PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_NON_SHARE + }, { CONFIG_SYS_FSL_QSPI_BASE1, CONFIG_SYS_FSL_QSPI_BASE1, - CONFIG_SYS_FSL_QSPI_SIZE1, MT_NORMAL, PTE_BLOCK_NON_SHARE}, + CONFIG_SYS_FSL_QSPI_SIZE1, + PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_NON_SHARE}, /* For IFC Region #1, only the first 4MB is cache-enabled */ { CONFIG_SYS_FSL_IFC_BASE1, CONFIG_SYS_FSL_IFC_BASE1, - CONFIG_SYS_FSL_IFC_SIZE1_1, MT_NORMAL, PTE_BLOCK_NON_SHARE }, + CONFIG_SYS_FSL_IFC_SIZE1_1, + PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_NON_SHARE + }, { CONFIG_SYS_FSL_IFC_BASE1 + CONFIG_SYS_FSL_IFC_SIZE1_1, CONFIG_SYS_FSL_IFC_BASE1 + CONFIG_SYS_FSL_IFC_SIZE1_1, CONFIG_SYS_FSL_IFC_SIZE1 - CONFIG_SYS_FSL_IFC_SIZE1_1, - MT_DEVICE_NGNRNE, PTE_BLOCK_NON_SHARE }, + PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE + }, { CONFIG_SYS_FLASH_BASE, CONFIG_SYS_FSL_IFC_BASE1, - CONFIG_SYS_FSL_IFC_SIZE1, MT_DEVICE_NGNRNE, PTE_BLOCK_NON_SHARE }, + CONFIG_SYS_FSL_IFC_SIZE1, + PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE + }, { CONFIG_SYS_FSL_DRAM_BASE1, CONFIG_SYS_FSL_DRAM_BASE1, - CONFIG_SYS_FSL_DRAM_SIZE1, MT_NORMAL, - PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS }, + CONFIG_SYS_FSL_DRAM_SIZE1, + PTE_BLOCK_MEMTYPE(MT_NORMAL) | + PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS + }, /* Map IFC region #2 up to CONFIG_SYS_FLASH_BASE for NAND boot */ { CONFIG_SYS_FSL_IFC_BASE2, CONFIG_SYS_FSL_IFC_BASE2, CONFIG_SYS_FLASH_BASE - CONFIG_SYS_FSL_IFC_BASE2, - MT_DEVICE_NGNRNE, PTE_BLOCK_NON_SHARE }, + PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE + }, { CONFIG_SYS_FSL_DCSR_BASE, CONFIG_SYS_FSL_DCSR_BASE, - CONFIG_SYS_FSL_DCSR_SIZE, MT_DEVICE_NGNRNE, - PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, + CONFIG_SYS_FSL_DCSR_SIZE, + PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | + PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN + }, { CONFIG_SYS_FSL_DRAM_BASE2, CONFIG_SYS_FSL_DRAM_BASE2, - CONFIG_SYS_FSL_DRAM_SIZE2, MT_NORMAL, - PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS }, + CONFIG_SYS_FSL_DRAM_SIZE2, + PTE_BLOCK_MEMTYPE(MT_NORMAL) | + PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS + }, #elif defined(CONFIG_FSL_LSCH2) { CONFIG_SYS_FSL_CCSR_BASE, CONFIG_SYS_FSL_CCSR_BASE, - CONFIG_SYS_FSL_CCSR_SIZE, MT_DEVICE_NGNRNE, - PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, + CONFIG_SYS_FSL_CCSR_SIZE, + PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | + PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN + }, { CONFIG_SYS_FSL_OCRAM_BASE, CONFIG_SYS_FSL_OCRAM_BASE, - CONFIG_SYS_FSL_OCRAM_SIZE, MT_NORMAL, PTE_BLOCK_NON_SHARE }, + CONFIG_SYS_FSL_OCRAM_SIZE, + PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_NON_SHARE + }, { CONFIG_SYS_FSL_DCSR_BASE, CONFIG_SYS_FSL_DCSR_BASE, - CONFIG_SYS_FSL_DCSR_SIZE, MT_DEVICE_NGNRNE, - PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN }, + CONFIG_SYS_FSL_DCSR_SIZE, + 
PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | + PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN + }, { CONFIG_SYS_FSL_QSPI_BASE, CONFIG_SYS_FSL_QSPI_BASE, - CONFIG_SYS_FSL_QSPI_SIZE, MT_DEVICE_NGNRNE, PTE_BLOCK_NON_SHARE }, + CONFIG_SYS_FSL_QSPI_SIZE, + PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE + }, { CONFIG_SYS_FSL_IFC_BASE, CONFIG_SYS_FSL_IFC_BASE, - CONFIG_SYS_FSL_IFC_SIZE, MT_DEVICE_NGNRNE, PTE_BLOCK_NON_SHARE }, + CONFIG_SYS_FSL_IFC_SIZE, + PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE + }, { CONFIG_SYS_FSL_DRAM_BASE1, CONFIG_SYS_FSL_DRAM_BASE1, - CONFIG_SYS_FSL_DRAM_SIZE1, MT_NORMAL, - PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS }, + CONFIG_SYS_FSL_DRAM_SIZE1, + PTE_BLOCK_MEMTYPE(MT_NORMAL) | + PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS + }, { CONFIG_SYS_FSL_DRAM_BASE2, CONFIG_SYS_FSL_DRAM_BASE2, - CONFIG_SYS_FSL_DRAM_SIZE2, MT_NORMAL, - PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS }, + CONFIG_SYS_FSL_DRAM_SIZE2, + PTE_BLOCK_MEMTYPE(MT_NORMAL) | + PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS + }, #endif + {}, /* list terminator */ };
-static const struct sys_mmu_table final_mmu_table[] = {
+static struct mm_region final_map[] = {
 #ifdef CONFIG_FSL_LSCH3
 	{ CONFIG_SYS_FSL_CCSR_BASE, CONFIG_SYS_FSL_CCSR_BASE,
-	  CONFIG_SYS_FSL_CCSR_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_CCSR_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_OCRAM_BASE, CONFIG_SYS_FSL_OCRAM_BASE,
-	  CONFIG_SYS_FSL_OCRAM_SIZE, MT_NORMAL, PTE_BLOCK_NON_SHARE },
+	  CONFIG_SYS_FSL_OCRAM_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_NON_SHARE
+	},
 	{ CONFIG_SYS_FSL_DRAM_BASE1, CONFIG_SYS_FSL_DRAM_BASE1,
-	  CONFIG_SYS_FSL_DRAM_SIZE1, MT_NORMAL,
-	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS },
+	  CONFIG_SYS_FSL_DRAM_SIZE1,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) |
+	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS
+	},
 	{ CONFIG_SYS_FSL_QSPI_BASE1, CONFIG_SYS_FSL_QSPI_BASE1,
-	  CONFIG_SYS_FSL_QSPI_SIZE1, MT_NORMAL, PTE_BLOCK_NON_SHARE},
+	  CONFIG_SYS_FSL_QSPI_SIZE1,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_NON_SHARE
+	},
 	{ CONFIG_SYS_FSL_QSPI_BASE2, CONFIG_SYS_FSL_QSPI_BASE2,
-	  CONFIG_SYS_FSL_QSPI_SIZE2, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_QSPI_SIZE2,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_IFC_BASE2, CONFIG_SYS_FSL_IFC_BASE2,
-	  CONFIG_SYS_FSL_IFC_SIZE2, MT_DEVICE_NGNRNE, PTE_BLOCK_NON_SHARE },
+	  CONFIG_SYS_FSL_IFC_SIZE2,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE
+	},
 	{ CONFIG_SYS_FSL_DCSR_BASE, CONFIG_SYS_FSL_DCSR_BASE,
-	  CONFIG_SYS_FSL_DCSR_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_DCSR_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_MC_BASE, CONFIG_SYS_FSL_MC_BASE,
-	  CONFIG_SYS_FSL_MC_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_MC_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_NI_BASE, CONFIG_SYS_FSL_NI_BASE,
-	  CONFIG_SYS_FSL_NI_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_NI_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	/* For QBMAN portal, only the first 64MB is cache-enabled */
 	{ CONFIG_SYS_FSL_QBMAN_BASE, CONFIG_SYS_FSL_QBMAN_BASE,
-	  CONFIG_SYS_FSL_QBMAN_SIZE_1, MT_NORMAL,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN | PTE_BLOCK_NS },
+	  CONFIG_SYS_FSL_QBMAN_SIZE_1,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN | PTE_BLOCK_NS
+	},
 	{ CONFIG_SYS_FSL_QBMAN_BASE + CONFIG_SYS_FSL_QBMAN_SIZE_1,
 	  CONFIG_SYS_FSL_QBMAN_BASE + CONFIG_SYS_FSL_QBMAN_SIZE_1,
 	  CONFIG_SYS_FSL_QBMAN_SIZE - CONFIG_SYS_FSL_QBMAN_SIZE_1,
-	  MT_DEVICE_NGNRNE, PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_PCIE1_PHYS_ADDR, CONFIG_SYS_PCIE1_PHYS_ADDR,
-	  CONFIG_SYS_PCIE1_PHYS_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_PCIE1_PHYS_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_PCIE2_PHYS_ADDR, CONFIG_SYS_PCIE2_PHYS_ADDR,
-	  CONFIG_SYS_PCIE2_PHYS_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_PCIE2_PHYS_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_PCIE3_PHYS_ADDR, CONFIG_SYS_PCIE3_PHYS_ADDR,
-	  CONFIG_SYS_PCIE3_PHYS_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_PCIE3_PHYS_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 #ifdef CONFIG_LS2080A
 	{ CONFIG_SYS_PCIE4_PHYS_ADDR, CONFIG_SYS_PCIE4_PHYS_ADDR,
-	  CONFIG_SYS_PCIE4_PHYS_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_PCIE4_PHYS_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 #endif
 	{ CONFIG_SYS_FSL_WRIOP1_BASE, CONFIG_SYS_FSL_WRIOP1_BASE,
-	  CONFIG_SYS_FSL_WRIOP1_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_WRIOP1_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_AIOP1_BASE, CONFIG_SYS_FSL_AIOP1_BASE,
-	  CONFIG_SYS_FSL_AIOP1_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_AIOP1_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_PEBUF_BASE, CONFIG_SYS_FSL_PEBUF_BASE,
-	  CONFIG_SYS_FSL_PEBUF_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_PEBUF_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_DRAM_BASE2, CONFIG_SYS_FSL_DRAM_BASE2,
-	  CONFIG_SYS_FSL_DRAM_SIZE2, MT_NORMAL,
-	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS },
+	  CONFIG_SYS_FSL_DRAM_SIZE2,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) |
+	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS
+	},
 #elif defined(CONFIG_FSL_LSCH2)
 	{ CONFIG_SYS_FSL_BOOTROM_BASE, CONFIG_SYS_FSL_BOOTROM_BASE,
-	  CONFIG_SYS_FSL_BOOTROM_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_BOOTROM_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_CCSR_BASE, CONFIG_SYS_FSL_CCSR_BASE,
-	  CONFIG_SYS_FSL_CCSR_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_CCSR_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_OCRAM_BASE, CONFIG_SYS_FSL_OCRAM_BASE,
-	  CONFIG_SYS_FSL_OCRAM_SIZE, MT_NORMAL, PTE_BLOCK_NON_SHARE },
+	  CONFIG_SYS_FSL_OCRAM_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) | PTE_BLOCK_NON_SHARE
+	},
 	{ CONFIG_SYS_FSL_DCSR_BASE, CONFIG_SYS_FSL_DCSR_BASE,
-	  CONFIG_SYS_FSL_DCSR_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_DCSR_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_QSPI_BASE, CONFIG_SYS_FSL_QSPI_BASE,
-	  CONFIG_SYS_FSL_QSPI_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_QSPI_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_IFC_BASE, CONFIG_SYS_FSL_IFC_BASE,
-	  CONFIG_SYS_FSL_IFC_SIZE, MT_DEVICE_NGNRNE, PTE_BLOCK_NON_SHARE },
+	  CONFIG_SYS_FSL_IFC_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) | PTE_BLOCK_NON_SHARE
+	},
 	{ CONFIG_SYS_FSL_DRAM_BASE1, CONFIG_SYS_FSL_DRAM_BASE1,
-	  CONFIG_SYS_FSL_DRAM_SIZE1, MT_NORMAL,
-	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS },
+	  CONFIG_SYS_FSL_DRAM_SIZE1,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) |
+	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS
+	},
 	{ CONFIG_SYS_FSL_QBMAN_BASE, CONFIG_SYS_FSL_QBMAN_BASE,
-	  CONFIG_SYS_FSL_QBMAN_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_FSL_QBMAN_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_DRAM_BASE2, CONFIG_SYS_FSL_DRAM_BASE2,
-	  CONFIG_SYS_FSL_DRAM_SIZE2, MT_NORMAL,
-	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS },
+	  CONFIG_SYS_FSL_DRAM_SIZE2,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) |
+	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS
+	},
 	{ CONFIG_SYS_PCIE1_PHYS_ADDR, CONFIG_SYS_PCIE1_PHYS_ADDR,
-	  CONFIG_SYS_PCIE1_PHYS_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_PCIE1_PHYS_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_PCIE2_PHYS_ADDR, CONFIG_SYS_PCIE2_PHYS_ADDR,
-	  CONFIG_SYS_PCIE2_PHYS_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_PCIE2_PHYS_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_PCIE3_PHYS_ADDR, CONFIG_SYS_PCIE3_PHYS_ADDR,
-	  CONFIG_SYS_PCIE3_PHYS_SIZE, MT_DEVICE_NGNRNE,
-	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN },
+	  CONFIG_SYS_PCIE3_PHYS_SIZE,
+	  PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
+	  PTE_BLOCK_NON_SHARE | PTE_BLOCK_PXN | PTE_BLOCK_UXN
+	},
 	{ CONFIG_SYS_FSL_DRAM_BASE3, CONFIG_SYS_FSL_DRAM_BASE3,
-	  CONFIG_SYS_FSL_DRAM_SIZE3, MT_NORMAL,
-	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS },
+	  CONFIG_SYS_FSL_DRAM_SIZE3,
+	  PTE_BLOCK_MEMTYPE(MT_NORMAL) |
+	  PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS
+	},
 #endif
-};
+#ifdef CONFIG_SYS_MEM_RESERVE_SECURE
+	{},	/* space holder for secure mem */
+#endif
+	{},
+};
+#endif /* !CONFIG_SYS_DCACHE_OFF */
 int fsl_qoriq_core_to_cluster(unsigned int core);
 u32 cpu_mask(void);
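For reference, a minimal sketch of how one converted entry decomposes, assuming the mm_region field names this series ends up with (virt, phys, size, attrs) and the attribute macros from asm/armv8/mmu.h; the address and size below are placeholders, not LS2080A values:

	#include <asm/armv8/mmu.h>

	/*
	 * One identity-mapped device window.  The memory type now travels
	 * inside attrs via PTE_BLOCK_MEMTYPE() instead of being a separate
	 * struct member, and the list ends with an all-zero terminator.
	 */
	static struct mm_region example_map[] = {
		{
			.virt  = 0x01000000UL,	/* placeholder device base */
			.phys  = 0x01000000UL,	/* identity mapping: virt == phys */
			.size  = 0x0f000000UL,
			.attrs = PTE_BLOCK_MEMTYPE(MT_DEVICE_NGNRNE) |
				 PTE_BLOCK_NON_SHARE |
				 PTE_BLOCK_PXN | PTE_BLOCK_UXN
		},
		{},	/* terminator, like the trailing {} in final_map */
	};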

On 25.06.16 01:46, York Sun wrote:
Drop the platform code that creates static MMU tables. Use the common framework to create MMU tables at run time. Tested on LS2080ARDB with secure and non-secure RAM scenarios.
Signed-off-by: York Sun york.sun@nxp.com
Alison Wang <alison.wang@nxp.com>, Prabhakar Kushwaha <prabhakar.kushwaha@nxp.com>
What is this? :)
Alex
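
The commit message above drops the per-SoC table code in favour of building the tables at run time. A minimal sketch of the pattern the converted boards follow, assuming the generic mem_map hook consumed by the common cache_v8.c code and purely illustrative addresses:

	#include <asm/armv8/mmu.h>

	static struct mm_region soc_mem_map[] = {
		{
			/* DDR: cacheable, outer shareable, non-secure */
			.virt  = 0x80000000UL,
			.phys  = 0x80000000UL,
			.size  = 0x80000000UL,
			.attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) |
				 PTE_BLOCK_OUTER_SHARE | PTE_BLOCK_NS
		},
		{},	/* terminator */
	};

	/* The common ARMv8 code walks this pointer when the data cache is
	 * enabled (dcache_enable() ends up in mmu_setup()), so a simple
	 * identity-mapped SoC needs no table-building code of its own. */
	struct mm_region *mem_map = soc_mem_map;

The layerscape conversion is more involved than this sketch because of the early-boot tables and the secure/non-secure RAM cases noted in the cover letter.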

On 07/18/2016 11:44 PM, Alexander Graf wrote:
On 25.06.16 01:46, York Sun wrote:
Drop the platform code that creates static MMU tables. Use the common framework to create MMU tables at run time. Tested on LS2080ARDB with secure and non-secure RAM scenarios.
Signed-off-by: York Sun york.sun@nxp.com
Alison Wang <alison.wang@nxp.com>, Prabhakar Kushwaha <prabhakar.kushwaha@nxp.com>
What is this? :)
This comes from a patman tag. I must have used the wrong format, so patman sent it out. Will fix when merging.
Series-to: U-boot
Series-version: 1
Series-cc: Alexander Graf agraf@suse.de, Alison Wang alison.wang@nxp.com, Prabhakar Kushwaha prabhakar.kushwaha@nxp.com
York

On 25.06.16 01:46, York Sun wrote:
To use the common MMU code, non-identical mapping needs to be supported. A minor change in the MMU framework is required to support splitting blocks. With these changes, using the common code is straightforward. Attention is needed to where the tables live for early boot and for secure and non-secure RAM situations.
Reviewed-by: Alexander Graf agraf@suse.de
Alex
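
As a rough illustration of the two framework changes the cover letter refers to, assuming the virt/phys/size/attrs layout used elsewhere in this series and entirely made-up addresses: a non-identical mapping is simply an entry whose virt and phys differ, and because the 128 MB window below only partly covers a 1 GB block, the framework has to break that block into finer-grained 2 MB entries, which is what the "split block if necessary" patch allows.

	#include <asm/armv8/mmu.h>

	/*
	 * Hypothetical non-identity mapping: keep a carve-out at a fixed
	 * virtual window even though its physical location depends on the
	 * amount of DDR fitted.  Addresses and size are illustrative only.
	 */
	static struct mm_region remap_example[] = {
		{
			.virt  = 0xf0000000UL,		/* fixed virtual window */
			.phys  = 0x880000000ULL,	/* actual location in DDR */
			.size  = 0x8000000UL,		/* 128 MB */
			.attrs = PTE_BLOCK_MEMTYPE(MT_NORMAL) |
				 PTE_BLOCK_OUTER_SHARE
		},
		{},	/* terminator */
	};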

On 06/24/2016 04:46 PM, York Sun wrote:
To use the common MMU code, non-identical mapping needs to be supported. A minor change in the MMU framework is required to support splitting blocks. With these changes, using the common code is straightforward. Attention is needed to where the tables live for early boot and for secure and non-secure RAM situations.
York Sun (6):
  armv8: Move secure_ram variable out of generic global data
  armv8: Add tlb_allocated to arch global data
  armv8: mmu: house cleaning
  armv8: mmu: split block if necessary
  armv8: mmu: Add support of non-identical mapping
  armv8: layerscape: Convert to use common MMU framework
This set has been applied to fsl-qoriq master. Awaiting upstream.
York