[U-Boot] [PATCH v5 00/11] ARMv7: add PSCI support to U-Boot

Hi,
Marc is rather busy, so I've taken it upon myself to rebase this series onto the latest master. v4 would have been applied except for a warning it caused on aarch64, which I have (trivially) resolved this time around. The only other change is s/OBJCFLAGS/OBJCOPYFLAGS/ due to changes in the underlying code.
I'd like to get this into the next merge window since it is the basis for sunxi smp support.
I have been testing on a cubietruck (sun7i/A20). I've also build-tested with "./MAKEALL -v armltd", which builds a variety of 32- and 64-bit boards.
The code is also available at git://gitorious.org/ijc/u-boot.git psci-v5
The sunxi patches which build on this are at git://gitorious.org/ijc/u-boot.git psci-a20-v5
Cheers, Ian.
Marc's original blurb:
PSCI is an ARM standard that provides a generic interface that supervisory software can use to manage power in the following situations:
- Core idle management
- CPU hotplug
- big.LITTLE migration models
- System shutdown and reset
It basically allows the kernel to offload these tasks to the firmware, and rely on common kernel side code that just calls into PSCI.
More importantly, it gives a way to ensure that CPUs enter the kernel at the appropriate exception level (ie HYP mode, to allow the use of the virtualization extensions), even across events like CPUs being powered off/on or suspended.
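For readers who have not seen PSCI before, the OS side boils down to loading a function ID (discovered via the device tree with this 0.1 binding) and its arguments into registers and issuing an "smc". The sketch below is purely illustrative and is not part of this series; the helper name and calling details are assumptions based on the PSCI 0.1 SMC interface.

/* Illustrative only: roughly how an OS issues a PSCI 0.1 call over SMC.
 * The function ID comes from the device tree (see the virt-dt patch at
 * the end of the series); arguments go in r1-r3, result comes back in r0.
 * Needs a compiler targeting ARMv7-A with the security extension. */
static int psci_call(unsigned int fn, unsigned int arg0,
		     unsigned int arg1, unsigned int arg2)
{
	register unsigned int r0 asm("r0") = fn;
	register unsigned int r1 asm("r1") = arg0;
	register unsigned int r2 asm("r2") = arg1;
	register unsigned int r3 asm("r3") = arg2;

	asm volatile("smc #0"
		     : "+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3)
		     :
		     : "memory");

	return r0;
}

/* e.g. psci_call(cpu_on_fn, target_cpu_id, entry_point, 0); */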
The main idea here is to turn some of the existing U-Boot code into a separate section that can live in secure RAM (or a reserved page of memory), containing a secure monitor that will implement the PSCI operations. This code will still be alive when U-Boot is long gone, hence the need for a piece of memory that will not be touched by the OS.
This patch series contains 3 parts:
- the first four patches are just bug fixes
- the next two refactor the HYP/non-secure code to allow relocation in secure memory
- the last four contain the generic PSCI code and DT infrastructure
This implements the original 0.1 spec, as nobody implements the new 0.2 version so far. I plan to update this support to 0.2 once there is an official binding available (and support in the kernel).
Most of the development has been done on an Allwinner A20 SoC, which is the main user of this code at the moment. I hope new SoCs will be using this method in the future (my primary goal for this series being to avoid more stupid SMP code from creeping up in the Linux kernel). As instructed, I've removed the A20 support code and made it a separate series, as there is now an effort to mainline this code (see Ian Campbell patch series).
With these three series applied, the A20 now boots in HYP mode, Linux finds the secondary CPU without any SMP code present in the kernel, and runs KVM out of the box. The Xen/ARM guys managed to do the same fairly easily, as did at least one XVizor user.
This code has also been tested on a VExpress TC2, running KVM with all 5 CPUs, in order to make sure there was no obvious regression.

From: Marc Zyngier <marc.zyngier@arm.com>
Having the switch to non-secure in the "prep" phase is causing all kinds of trouble, as that stage can be called multiple times.
Instead, move the switch to non-secure to the last possible phase, when there is no turning back anymore.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
v5: [ijc] Continue to call do_nonsec_virt_switch for arm64 too
---
 arch/arm/lib/bootm.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/arm/lib/bootm.c b/arch/arm/lib/bootm.c index 47ee070..304210e 100644 --- a/arch/arm/lib/bootm.c +++ b/arch/arm/lib/bootm.c @@ -242,7 +242,6 @@ static void boot_prep_linux(bootm_headers_t *images) printf("FDT and ATAGS support not compiled in - hanging\n"); hang(); } - do_nonsec_virt_switch(); }
/* Subcommand: GO */ @@ -260,8 +259,10 @@ static void boot_jump_linux(bootm_headers_t *images, int flag)
announce_and_cleanup(fake);
- if (!fake) + if (!fake) { + do_nonsec_virt_switch(); kernel_entry(images->ft_addr); + } #else unsigned long machid = gd->bd->bi_arch_number; char *s; @@ -287,8 +288,10 @@ static void boot_jump_linux(bootm_headers_t *images, int flag) else r2 = gd->bd->bi_boot_params;
- if (!fake) + if (!fake) { + do_nonsec_virt_switch(); kernel_entry(0, machid, r2); + } #endif }

From: Marc Zyngier <marc.zyngier@arm.com>
A CP15 instruction can be reordered with respect to the instructions around it, so an isb is required to make sure it takes effect in program order.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/cpu/armv7/nonsec_virt.S | 1 +
 1 file changed, 1 insertion(+)
diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S index 6367e09..12de5c2 100644 --- a/arch/arm/cpu/armv7/nonsec_virt.S +++ b/arch/arm/cpu/armv7/nonsec_virt.S @@ -46,6 +46,7 @@ _secure_monitor: #endif
mcr p15, 0, r1, c1, c1, 0 @ write SCR (with NS bit set) + isb
#ifdef CONFIG_ARMV7_VIRT mrceq p15, 0, r0, c12, c0, 1 @ get MVBAR value

From: Marc Zyngier <marc.zyngier@arm.com>
Before switching to non-secure, make sure that CNTVOFF is set to zero on all CPUs. Otherwise, a kernel running in non-secure state without HYP enabled (and hence using the virtual timers) may observe timers that are not synchronized, effectively seeing time go backward...
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/cpu/armv7/nonsec_virt.S | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S index 12de5c2..b5c946f 100644 --- a/arch/arm/cpu/armv7/nonsec_virt.S +++ b/arch/arm/cpu/armv7/nonsec_virt.S @@ -38,10 +38,10 @@ _secure_monitor: bic r1, r1, #0x4e @ clear IRQ, FIQ, EA, nET bits orr r1, r1, #0x31 @ enable NS, AW, FW bits
-#ifdef CONFIG_ARMV7_VIRT mrc p15, 0, r0, c0, c1, 1 @ read ID_PFR1 and r0, r0, #CPUID_ARM_VIRT_MASK @ mask virtualization bits cmp r0, #(1 << CPUID_ARM_VIRT_SHIFT) +#ifdef CONFIG_ARMV7_VIRT orreq r1, r1, #0x100 @ allow HVC instruction #endif
@@ -52,7 +52,14 @@ _secure_monitor: mrceq p15, 0, r0, c12, c0, 1 @ get MVBAR value mcreq p15, 4, r0, c12, c0, 0 @ write HVBAR #endif + bne 1f
+ @ Reset CNTVOFF to 0 before leaving monitor mode + mrc p15, 0, r0, c0, c1, 1 @ read ID_PFR1 + ands r0, r0, #CPUID_ARM_GENTIMER_MASK @ test arch timer bits + movne r0, #0 + mcrrne p15, 4, r0, r0, c14 @ Reset CNTVOFF to zero +1: movs pc, lr @ return to non-secure SVC
_hyp_trap:
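Some background on why a stale CNTVOFF matters (not part of the patch): the virtual counter a non-HYP, non-secure kernel uses is CNTVCT = CNTPCT - CNTVOFF, so CPUs that leave monitor mode with different CNTVOFF values disagree on virtual time. A minimal guest-side illustration, for reference only:

/* Illustration only: reading the virtual counter from non-secure state.
 * If CNTVOFF differs between CPUs, the values returned here are not
 * consistent across CPUs, which is what the patch above prevents. */
static unsigned long long read_cntvct(void)
{
	unsigned int lo, hi;

	asm volatile("mrrc p15, 1, %0, %1, c14" : "=r" (lo), "=r" (hi));

	return ((unsigned long long)hi << 32) | lo;
}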

From: Marc Zyngier <marc.zyngier@arm.com>
In order to be able to use the various mode constants (far more readable than random hex values), add the missing HYP and A values.
Also update arch/arm/lib/interrupts.c to display HYP instead of an unknown value.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/include/asm/proc-armv/ptrace.h | 2 ++
 arch/arm/lib/interrupts.c | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/arch/arm/include/asm/proc-armv/ptrace.h b/arch/arm/include/asm/proc-armv/ptrace.h index 21aef58..71df5a9 100644 --- a/arch/arm/include/asm/proc-armv/ptrace.h +++ b/arch/arm/include/asm/proc-armv/ptrace.h @@ -38,12 +38,14 @@ struct pt_regs { #define IRQ_MODE 0x12 #define SVC_MODE 0x13 #define ABT_MODE 0x17 +#define HYP_MODE 0x1a #define UND_MODE 0x1b #define SYSTEM_MODE 0x1f #define MODE_MASK 0x1f #define T_BIT 0x20 #define F_BIT 0x40 #define I_BIT 0x80 +#define A_BIT 0x100 #define CC_V_BIT (1 << 28) #define CC_C_BIT (1 << 29) #define CC_Z_BIT (1 << 30) diff --git a/arch/arm/lib/interrupts.c b/arch/arm/lib/interrupts.c index 758b013..f6b7c03 100644 --- a/arch/arm/lib/interrupts.c +++ b/arch/arm/lib/interrupts.c @@ -103,7 +103,7 @@ void show_regs (struct pt_regs *regs) "UK12_26", "UK13_26", "UK14_26", "UK15_26", "USER_32", "FIQ_32", "IRQ_32", "SVC_32", "UK4_32", "UK5_32", "UK6_32", "ABT_32", - "UK8_32", "UK9_32", "UK10_32", "UND_32", + "UK8_32", "UK9_32", "HYP_32", "UND_32", "UK12_32", "UK13_32", "UK14_32", "SYS_32", };

From: Marc Zyngier <marc.zyngier@arm.com>
In anticipation of refactoring the HYP/non-secure code to run from secure RAM, add a new linker section that will contain that code.
Nothing is using it just yet.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
v5: [ijc] s/OBJCFLAGS/OBJCOPYFLAGS/
---
 arch/arm/config.mk | 2 +-
 arch/arm/cpu/u-boot.lds | 30 ++++++++++++++++++++++++++++++
 arch/arm/lib/sections.c | 2 ++
 3 files changed, 33 insertions(+), 1 deletion(-)
diff --git a/arch/arm/config.mk b/arch/arm/config.mk index 66ecc2e..2bdfca5 100644 --- a/arch/arm/config.mk +++ b/arch/arm/config.mk @@ -113,7 +113,7 @@ endif ifdef CONFIG_ARM64 OBJCOPYFLAGS += -j .text -j .rodata -j .data -j .u_boot_list -j .rela.dyn else -OBJCOPYFLAGS += -j .text -j .rodata -j .hash -j .data -j .got.plt -j .u_boot_list -j .rel.dyn +OBJCOPYFLAGS += -j .text -j .secure_text -j .rodata -j .hash -j .data -j .got.plt -j .u_boot_list -j .rel.dyn endif
ifneq ($(CONFIG_IMX_CONFIG),) diff --git a/arch/arm/cpu/u-boot.lds b/arch/arm/cpu/u-boot.lds index a7728e0..7336162 100644 --- a/arch/arm/cpu/u-boot.lds +++ b/arch/arm/cpu/u-boot.lds @@ -7,6 +7,8 @@ * SPDX-License-Identifier: GPL-2.0+ */
+#include <config.h> + OUTPUT_FORMAT("elf32-littlearm", "elf32-littlearm", "elf32-littlearm") OUTPUT_ARCH(arm) ENTRY(_start) @@ -23,6 +25,34 @@ SECTIONS *(.text*) }
+#if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT) || defined(CONFIG_ARMV7_PSCI) + +#ifndef CONFIG_ARMV7_SECURE_BASE +#define CONFIG_ARMV7_SECURE_BASE +#endif + + .__secure_start : { + . = ALIGN(0x1000); + *(.__secure_start) + } + + .secure_text CONFIG_ARMV7_SECURE_BASE : + AT(ADDR(.__secure_start) + SIZEOF(.__secure_start)) + { + *(._secure.text) + } + + . = LOADADDR(.__secure_start) + + SIZEOF(.__secure_start) + + SIZEOF(.secure_text); + + __secure_end_lma = .; + .__secure_end : AT(__secure_end_lma) { + *(.__secure_end) + LONG(0x1d1071c); /* Must output something to reset LMA */ + } +#endif + . = ALIGN(4); .rodata : { *(SORT_BY_ALIGNMENT(SORT_BY_NAME(.rodata*))) }
diff --git a/arch/arm/lib/sections.c b/arch/arm/lib/sections.c index 5b30bcb..a1205c3 100644 --- a/arch/arm/lib/sections.c +++ b/arch/arm/lib/sections.c @@ -25,4 +25,6 @@ char __image_copy_start[0] __attribute__((section(".__image_copy_start"))); char __image_copy_end[0] __attribute__((section(".__image_copy_end"))); char __rel_dyn_start[0] __attribute__((section(".__rel_dyn_start"))); char __rel_dyn_end[0] __attribute__((section(".__rel_dyn_end"))); +char __secure_start[0] __attribute__((section(".__secure_start"))); +char __secure_end[0] __attribute__((section(".__secure_end"))); char _end[0] __attribute__((section(".__end")));
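As a pointer to how the section ends up being used (the follow-up patches in this series do exactly this), code is dropped into it with a .pushsection directive. The fragment below is only a sketch; the function name is made up:

	/* assumes <linux/linkage.h> for ENTRY/ENDPROC */
	.pushsection ._secure.text, "ax"

ENTRY(my_secure_helper)			@ illustrative name only
	@ monitor/secure-side code that must outlive U-Boot goes here
	movs	pc, lr
ENDPROC(my_secure_helper)

	.popsection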

From: Marc Zyngier <marc.zyngier@arm.com>
The current non-sec switching code suffers from one major issue: it cannot run in secure RAM, as a large part of u-boot still needs to be run while we're switched to non-secure.
This patch reworks the whole HYP/non-secure strategy by:
- making sure the secure code is the *last* thing u-boot executes before entering the payload
- performing an exception return from secure mode directly into the payload
- allowing the code to be dynamically relocated to secure RAM before switching to non-secure.
This involves quite a bit of horrible code, especially as u-boot relocation is quite primitive.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/cpu/armv7/nonsec_virt.S | 161 +++++++++++++++++++--------------------
 arch/arm/cpu/armv7/virt-v7.c | 59 +++++---------
 arch/arm/include/asm/armv7.h | 10 ++-
 arch/arm/include/asm/secure.h | 26 +++++++
 arch/arm/lib/bootm.c | 22 +++---
 5 files changed, 138 insertions(+), 140 deletions(-)
 create mode 100644 arch/arm/include/asm/secure.h
diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S index b5c946f..2a43e3c 100644 --- a/arch/arm/cpu/armv7/nonsec_virt.S +++ b/arch/arm/cpu/armv7/nonsec_virt.S @@ -10,10 +10,13 @@ #include <linux/linkage.h> #include <asm/gic.h> #include <asm/armv7.h> +#include <asm/proc-armv/ptrace.h>
.arch_extension sec .arch_extension virt
+ .pushsection ._secure.text, "ax" + .align 5 /* the vector table for secure state and HYP mode */ _monitor_vectors: @@ -22,51 +25,86 @@ _monitor_vectors: adr pc, _secure_monitor .word 0 .word 0 - adr pc, _hyp_trap + .word 0 .word 0 .word 0
+.macro is_cpu_virt_capable tmp + mrc p15, 0, \tmp, c0, c1, 1 @ read ID_PFR1 + and \tmp, \tmp, #CPUID_ARM_VIRT_MASK @ mask virtualization bits + cmp \tmp, #(1 << CPUID_ARM_VIRT_SHIFT) +.endm + /* * secure monitor handler * U-boot calls this "software interrupt" in start.S * This is executed on a "smc" instruction, we use a "smc #0" to switch * to non-secure state. - * We use only r0 and r1 here, due to constraints in the caller. + * r0, r1, r2: passed to the callee + * ip: target PC */ _secure_monitor: - mrc p15, 0, r1, c1, c1, 0 @ read SCR - bic r1, r1, #0x4e @ clear IRQ, FIQ, EA, nET bits - orr r1, r1, #0x31 @ enable NS, AW, FW bits + mrc p15, 0, r5, c1, c1, 0 @ read SCR + bic r5, r5, #0x4e @ clear IRQ, FIQ, EA, nET bits + orr r5, r5, #0x31 @ enable NS, AW, FW bits
- mrc p15, 0, r0, c0, c1, 1 @ read ID_PFR1 - and r0, r0, #CPUID_ARM_VIRT_MASK @ mask virtualization bits - cmp r0, #(1 << CPUID_ARM_VIRT_SHIFT) + mov r6, #SVC_MODE @ default mode is SVC + is_cpu_virt_capable r4 #ifdef CONFIG_ARMV7_VIRT - orreq r1, r1, #0x100 @ allow HVC instruction + orreq r5, r5, #0x100 @ allow HVC instruction + moveq r6, #HYP_MODE @ Enter the kernel as HYP #endif
- mcr p15, 0, r1, c1, c1, 0 @ write SCR (with NS bit set) + mcr p15, 0, r5, c1, c1, 0 @ write SCR (with NS bit set) isb
-#ifdef CONFIG_ARMV7_VIRT - mrceq p15, 0, r0, c12, c0, 1 @ get MVBAR value - mcreq p15, 4, r0, c12, c0, 0 @ write HVBAR -#endif bne 1f
@ Reset CNTVOFF to 0 before leaving monitor mode - mrc p15, 0, r0, c0, c1, 1 @ read ID_PFR1 - ands r0, r0, #CPUID_ARM_GENTIMER_MASK @ test arch timer bits - movne r0, #0 - mcrrne p15, 4, r0, r0, c14 @ Reset CNTVOFF to zero + mrc p15, 0, r4, c0, c1, 1 @ read ID_PFR1 + ands r4, r4, #CPUID_ARM_GENTIMER_MASK @ test arch timer bits + movne r4, #0 + mcrrne p15, 4, r4, r4, c14 @ Reset CNTVOFF to zero 1: - movs pc, lr @ return to non-secure SVC - -_hyp_trap: - mrs lr, elr_hyp @ for older asm: .byte 0x00, 0xe3, 0x0e, 0xe1 - mov pc, lr @ do no switch modes, but - @ return to caller - + mov lr, ip + mov ip, #(F_BIT | I_BIT | A_BIT) @ Set A, I and F + tst lr, #1 @ Check for Thumb PC + orrne ip, ip, #T_BIT @ Set T if Thumb + orr ip, ip, r6 @ Slot target mode in + msr spsr_cxfs, ip @ Set full SPSR + movs pc, lr @ ERET to non-secure + +ENTRY(_do_nonsec_entry) + mov ip, r0 + mov r0, r1 + mov r1, r2 + mov r2, r3 + smc #0 +ENDPROC(_do_nonsec_entry) + +.macro get_cbar_addr addr +#ifdef CONFIG_ARM_GIC_BASE_ADDRESS + ldr \addr, =CONFIG_ARM_GIC_BASE_ADDRESS +#else + mrc p15, 4, \addr, c15, c0, 0 @ read CBAR + bfc \addr, #0, #15 @ clear reserved bits +#endif +.endm + +.macro get_gicd_addr addr + get_cbar_addr \addr + add \addr, \addr, #GIC_DIST_OFFSET @ GIC dist i/f offset +.endm + +.macro get_gicc_addr addr, tmp + get_cbar_addr \addr + is_cpu_virt_capable \tmp + movne \tmp, #GIC_CPU_OFFSET_A9 @ GIC CPU offset for A9 + moveq \tmp, #GIC_CPU_OFFSET_A15 @ GIC CPU offset for A15/A7 + add \addr, \addr, \tmp +.endm + +#ifndef CONFIG_ARMV7_PSCI /* * Secondary CPUs start here and call the code for the core specific parts * of the non-secure and HYP mode transition. The GIC distributor specific @@ -74,31 +112,21 @@ _hyp_trap: * Then they go back to wfi and wait to be woken up by the kernel again. */ ENTRY(_smp_pen) - mrs r0, cpsr - orr r0, r0, #0xc0 - msr cpsr, r0 @ disable interrupts - ldr r1, =_start - mcr p15, 0, r1, c12, c0, 0 @ set VBAR + cpsid i + cpsid f
bl _nonsec_init - mov r12, r0 @ save GICC address -#ifdef CONFIG_ARMV7_VIRT - bl _switch_to_hyp -#endif - - ldr r1, [r12, #GICC_IAR] @ acknowledge IPI - str r1, [r12, #GICC_EOIR] @ signal end of interrupt
adr r0, _smp_pen @ do not use this address again b smp_waitloop @ wait for IPIs, board specific ENDPROC(_smp_pen) +#endif
/* * Switch a core to non-secure state. * * 1. initialize the GIC per-core interface * 2. allow coprocessor access in non-secure modes - * 3. switch the cpu mode (by calling "smc #0") * * Called from smp_pen by secondary cores and directly by the BSP. * Do not assume that the stack is available and only use registers @@ -108,38 +136,23 @@ ENDPROC(_smp_pen) * though, but we check this in C before calling this function. */ ENTRY(_nonsec_init) -#ifdef CONFIG_ARM_GIC_BASE_ADDRESS - ldr r2, =CONFIG_ARM_GIC_BASE_ADDRESS -#else - mrc p15, 4, r2, c15, c0, 0 @ read CBAR - bfc r2, #0, #15 @ clear reserved bits -#endif - add r3, r2, #GIC_DIST_OFFSET @ GIC dist i/f offset + get_gicd_addr r3 + mvn r1, #0 @ all bits to 1 str r1, [r3, #GICD_IGROUPRn] @ allow private interrupts
- mrc p15, 0, r0, c0, c0, 0 @ read MIDR - ldr r1, =MIDR_PRIMARY_PART_MASK - and r0, r0, r1 @ mask out variant and revision + get_gicc_addr r3, r1
- ldr r1, =MIDR_CORTEX_A7_R0P0 & MIDR_PRIMARY_PART_MASK - cmp r0, r1 @ check for Cortex-A7 - - ldr r1, =MIDR_CORTEX_A15_R0P0 & MIDR_PRIMARY_PART_MASK - cmpne r0, r1 @ check for Cortex-A15 - - movne r1, #GIC_CPU_OFFSET_A9 @ GIC CPU offset for A9 - moveq r1, #GIC_CPU_OFFSET_A15 @ GIC CPU offset for A15/A7 - add r3, r2, r1 @ r3 = GIC CPU i/f addr - - mov r1, #1 @ set GICC_CTLR[enable] + mov r1, #3 @ Enable both groups str r1, [r3, #GICC_CTLR] @ and clear all other bits mov r1, #0xff str r1, [r3, #GICC_PMR] @ set priority mask register
+ mrc p15, 0, r0, c1, c1, 2 movw r1, #0x3fff - movt r1, #0x0006 - mcr p15, 0, r1, c1, c1, 2 @ NSACR = all copros to non-sec + movt r1, #0x0004 + orr r0, r0, r1 + mcr p15, 0, r0, c1, c1, 2 @ NSACR = all copros to non-sec
/* The CNTFRQ register of the generic timer needs to be * programmed in secure state. Some primary bootloaders / firmware @@ -157,21 +170,9 @@ ENTRY(_nonsec_init)
adr r1, _monitor_vectors mcr p15, 0, r1, c12, c0, 1 @ set MVBAR to secure vectors - - mrc p15, 0, ip, c12, c0, 0 @ save secure copy of VBAR - isb - smc #0 @ call into MONITOR mode - - mcr p15, 0, ip, c12, c0, 0 @ write non-secure copy of VBAR - - mov r1, #1 - str r1, [r3, #GICC_CTLR] @ enable non-secure CPU i/f - add r2, r2, #GIC_DIST_OFFSET - str r1, [r2, #GICD_CTLR] @ allow private interrupts
mov r0, r3 @ return GICC address - bx lr ENDPROC(_nonsec_init)
@@ -183,18 +184,10 @@ ENTRY(smp_waitloop) ldr r1, [r1] cmp r0, r1 @ make sure we dont execute this code beq smp_waitloop @ again (due to a spurious wakeup) - mov pc, r1 + mov r0, r1 + b _do_nonsec_entry ENDPROC(smp_waitloop) .weak smp_waitloop #endif
-ENTRY(_switch_to_hyp) - mov r0, lr - mov r1, sp @ save SVC copy of LR and SP - isb - hvc #0 @ for older asm: .byte 0x70, 0x00, 0x40, 0xe1 - mov sp, r1 - mov lr, r0 @ restore SVC copy of LR and SP - - bx lr -ENDPROC(_switch_to_hyp) + .popsection diff --git a/arch/arm/cpu/armv7/virt-v7.c b/arch/arm/cpu/armv7/virt-v7.c index 2cd604f..6500030 100644 --- a/arch/arm/cpu/armv7/virt-v7.c +++ b/arch/arm/cpu/armv7/virt-v7.c @@ -13,17 +13,10 @@ #include <asm/armv7.h> #include <asm/gic.h> #include <asm/io.h> +#include <asm/secure.h>
unsigned long gic_dist_addr;
-static unsigned int read_cpsr(void) -{ - unsigned int reg; - - asm volatile ("mrs %0, cpsr\n" : "=r" (reg)); - return reg; -} - static unsigned int read_id_pfr1(void) { unsigned int reg; @@ -72,6 +65,18 @@ static unsigned long get_gicd_base_address(void) #endif }
+static void relocate_secure_section(void) +{ +#ifdef CONFIG_ARMV7_SECURE_BASE + size_t sz = __secure_end - __secure_start; + + memcpy((void *)CONFIG_ARMV7_SECURE_BASE, __secure_start, sz); + flush_dcache_range(CONFIG_ARMV7_SECURE_BASE, + CONFIG_ARMV7_SECURE_BASE + sz + 1); + invalidate_icache_all(); +#endif +} + static void kick_secondary_cpus_gic(unsigned long gicdaddr) { /* kick all CPUs (except this one) by writing to GICD_SGIR */ @@ -83,35 +88,7 @@ void __weak smp_kick_all_cpus(void) kick_secondary_cpus_gic(gic_dist_addr); }
-int armv7_switch_hyp(void) -{ - unsigned int reg; - - /* check whether we are in HYP mode already */ - if ((read_cpsr() & 0x1f) == 0x1a) { - debug("CPU already in HYP mode\n"); - return 0; - } - - /* check whether the CPU supports the virtualization extensions */ - reg = read_id_pfr1(); - if ((reg & CPUID_ARM_VIRT_MASK) != 1 << CPUID_ARM_VIRT_SHIFT) { - printf("HYP mode: Virtualization extensions not implemented.\n"); - return -1; - } - - /* call the HYP switching code on this CPU also */ - _switch_to_hyp(); - - if ((read_cpsr() & 0x1F) != 0x1a) { - printf("HYP mode: switch not successful.\n"); - return -1; - } - - return 0; -} - -int armv7_switch_nonsec(void) +int armv7_init_nonsec(void) { unsigned int reg; unsigned itlinesnr, i; @@ -147,11 +124,13 @@ int armv7_switch_nonsec(void) for (i = 1; i <= itlinesnr; i++) writel((unsigned)-1, gic_dist_addr + GICD_IGROUPRn + 4 * i);
- smp_set_core_boot_addr((unsigned long)_smp_pen, -1); +#ifndef CONFIG_ARMV7_PSCI + smp_set_core_boot_addr((unsigned long)secure_ram_addr(_smp_pen), -1); smp_kick_all_cpus(); +#endif
/* call the non-sec switching code on this CPU also */ - _nonsec_init(); - + relocate_secure_section(); + secure_ram_addr(_nonsec_init)(); return 0; } diff --git a/arch/arm/include/asm/armv7.h b/arch/arm/include/asm/armv7.h index 395444e..11476dd 100644 --- a/arch/arm/include/asm/armv7.h +++ b/arch/arm/include/asm/armv7.h @@ -78,13 +78,17 @@ void v7_outer_cache_inval_range(u32 start, u32 end);
#if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT)
-int armv7_switch_nonsec(void); -int armv7_switch_hyp(void); +int armv7_init_nonsec(void);
/* defined in assembly file */ unsigned int _nonsec_init(void); +void _do_nonsec_entry(void *target_pc, unsigned long r0, + unsigned long r1, unsigned long r2); void _smp_pen(void); -void _switch_to_hyp(void); + +extern char __secure_start[]; +extern char __secure_end[]; + #endif /* CONFIG_ARMV7_NONSEC || CONFIG_ARMV7_VIRT */
#endif /* ! __ASSEMBLY__ */ diff --git a/arch/arm/include/asm/secure.h b/arch/arm/include/asm/secure.h new file mode 100644 index 0000000..effdb18 --- /dev/null +++ b/arch/arm/include/asm/secure.h @@ -0,0 +1,26 @@ +#ifndef __ASM_SECURE_H +#define __ASM_SECURE_H + +#include <config.h> + +#ifdef CONFIG_ARMV7_SECURE_BASE +/* + * Warning, horror ahead. + * + * The target code lives in our "secure ram", but u-boot doesn't know + * that, and has blindly added reloc_off to every relocation + * entry. Gahh. Do the opposite conversion. This hack also prevents + * GCC from generating code veeners, which u-boot doesn't relocate at + * all... + */ +#define secure_ram_addr(_fn) ({ \ + DECLARE_GLOBAL_DATA_PTR; \ + void *__fn = _fn; \ + typeof(_fn) *__tmp = (__fn - gd->reloc_off); \ + __tmp; \ + }) +#else +#define secure_ram_addr(_fn) (_fn) +#endif + +#endif diff --git a/arch/arm/lib/bootm.c b/arch/arm/lib/bootm.c index 304210e..a08586f 100644 --- a/arch/arm/lib/bootm.c +++ b/arch/arm/lib/bootm.c @@ -20,6 +20,7 @@ #include <libfdt.h> #include <fdt_support.h> #include <asm/bootm.h> +#include <asm/secure.h> #include <linux/compiler.h>
#if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT) @@ -184,27 +185,17 @@ static void setup_end_tag(bd_t *bd)
__weak void setup_board_tags(struct tag **in_params) {}
+#ifdef CONFIG_ARM64 static void do_nonsec_virt_switch(void) { -#if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT) - if (armv7_switch_nonsec() == 0) -#ifdef CONFIG_ARMV7_VIRT - if (armv7_switch_hyp() == 0) - debug("entered HYP mode\n"); -#else - debug("entered non-secure state\n"); -#endif -#endif - -#ifdef CONFIG_ARM64 smp_kick_all_cpus(); flush_dcache_all(); /* flush cache before swtiching to EL2 */ armv8_switch_to_el2(); #ifdef CONFIG_ARMV8_SWITCH_TO_EL1 armv8_switch_to_el1(); #endif -#endif } +#endif
/* Subcommand: PREP */ static void boot_prep_linux(bootm_headers_t *images) @@ -289,8 +280,13 @@ static void boot_jump_linux(bootm_headers_t *images, int flag) r2 = gd->bd->bi_boot_params;
if (!fake) { - do_nonsec_virt_switch(); +#if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT) + armv7_init_nonsec(); + secure_ram_addr(_do_nonsec_entry)(kernel_entry, + 0, machid, r2); +#else kernel_entry(0, machid, r2); +#endif } #endif }

From: Marc Zyngier <marc.zyngier@arm.com>
Implement core support for PSCI. As this is generic code, it doesn't implement anything really useful (all the functions return Not Implemented).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/cpu/armv7/Makefile | 4 ++
 arch/arm/cpu/armv7/psci.S | 102 ++++++++++++++++++++++++++++++++++++++++++++
 arch/arm/include/asm/psci.h | 35 +++++++++++++++
 3 files changed, 141 insertions(+)
 create mode 100644 arch/arm/cpu/armv7/psci.S
 create mode 100644 arch/arm/include/asm/psci.h
diff --git a/arch/arm/cpu/armv7/Makefile b/arch/arm/cpu/armv7/Makefile index 232118d..735c4ad 100644 --- a/arch/arm/cpu/armv7/Makefile +++ b/arch/arm/cpu/armv7/Makefile @@ -23,6 +23,10 @@ obj-y += nonsec_virt.o obj-y += virt-v7.o endif
+ifneq ($(CONFIG_ARMV7_PSCI),) +obj-y += psci.o +endif + obj-$(CONFIG_KONA) += kona-common/ obj-$(CONFIG_OMAP_COMMON) += omap-common/ obj-$(CONFIG_SYS_ARCH_TIMER) += arch_timer.o diff --git a/arch/arm/cpu/armv7/psci.S b/arch/arm/cpu/armv7/psci.S new file mode 100644 index 0000000..bf11a34 --- /dev/null +++ b/arch/arm/cpu/armv7/psci.S @@ -0,0 +1,102 @@ +/* + * Copyright (C) 2013,2014 - ARM Ltd + * Author: Marc Zyngier marc.zyngier@arm.com + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see http://www.gnu.org/licenses/. + */ + +#include <config.h> +#include <linux/linkage.h> +#include <asm/psci.h> + + .pushsection ._secure.text, "ax" + + .arch_extension sec + + .align 5 + .globl _psci_vectors +_psci_vectors: + b default_psci_vector @ reset + b default_psci_vector @ undef + b _smc_psci @ smc + b default_psci_vector @ pabort + b default_psci_vector @ dabort + b default_psci_vector @ hyp + b default_psci_vector @ irq + b psci_fiq_enter @ fiq + +ENTRY(psci_fiq_enter) + movs pc, lr +ENDPROC(psci_fiq_enter) +.weak psci_fiq_enter + +ENTRY(default_psci_vector) + movs pc, lr +ENDPROC(default_psci_vector) +.weak default_psci_vector + +ENTRY(psci_cpu_suspend) +ENTRY(psci_cpu_off) +ENTRY(psci_cpu_on) +ENTRY(psci_migrate) + mov r0, #ARM_PSCI_RET_NI @ Return -1 (Not Implemented) + mov pc, lr +ENDPROC(psci_migrate) +ENDPROC(psci_cpu_on) +ENDPROC(psci_cpu_off) +ENDPROC(psci_cpu_suspend) +.weak psci_cpu_suspend +.weak psci_cpu_off +.weak psci_cpu_on +.weak psci_migrate + +_psci_table: + .word ARM_PSCI_FN_CPU_SUSPEND + .word psci_cpu_suspend + .word ARM_PSCI_FN_CPU_OFF + .word psci_cpu_off + .word ARM_PSCI_FN_CPU_ON + .word psci_cpu_on + .word ARM_PSCI_FN_MIGRATE + .word psci_migrate + .word 0 + .word 0 + +_smc_psci: + push {r4-r7,lr} + + @ Switch to secure + mrc p15, 0, r7, c1, c1, 0 + bic r4, r7, #1 + mcr p15, 0, r4, c1, c1, 0 + isb + + adr r4, _psci_table +1: ldr r5, [r4] @ Load PSCI function ID + ldr r6, [r4, #4] @ Load target PC + cmp r5, #0 @ If reach the end, bail out + moveq r0, #ARM_PSCI_RET_INVAL @ Return -2 (Invalid) + beq 2f + cmp r0, r5 @ If not matching, try next entry + addne r4, r4, #8 + bne 1b + + blx r6 @ Execute PSCI function + + @ Switch back to non-secure +2: mcr p15, 0, r7, c1, c1, 0 + + pop {r4-r7, lr} + movs pc, lr @ Return to the kernel + + .popsection diff --git a/arch/arm/include/asm/psci.h b/arch/arm/include/asm/psci.h new file mode 100644 index 0000000..704b4b0 --- /dev/null +++ b/arch/arm/include/asm/psci.h @@ -0,0 +1,35 @@ +/* + * Copyright (C) 2013 - ARM Ltd + * Author: Marc Zyngier marc.zyngier@arm.com + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see http://www.gnu.org/licenses/. + */ + +#ifndef __ARM_PSCI_H__ +#define __ARM_PSCI_H__ + +/* PSCI interface */ +#define ARM_PSCI_FN_BASE 0x95c1ba5e +#define ARM_PSCI_FN(n) (ARM_PSCI_FN_BASE + (n)) + +#define ARM_PSCI_FN_CPU_SUSPEND ARM_PSCI_FN(0) +#define ARM_PSCI_FN_CPU_OFF ARM_PSCI_FN(1) +#define ARM_PSCI_FN_CPU_ON ARM_PSCI_FN(2) +#define ARM_PSCI_FN_MIGRATE ARM_PSCI_FN(3) + +#define ARM_PSCI_RET_SUCCESS 0 +#define ARM_PSCI_RET_NI (-1) +#define ARM_PSCI_RET_INVAL (-2) +#define ARM_PSCI_RET_DENIED (-3) + +#endif /* __ARM_PSCI_H__ */
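All four handlers are weak and simply return Not Implemented, so a platform port is expected to override them with code living in ._secure.text. Purely as an illustration of the shape such an override takes (the register addresses and wake-up mechanism below are invented, not taken from any real SoC):

@ Hypothetical board override of the weak psci_cpu_on. Assumes it sits in
@ ._secure.text and that <asm/psci.h> and <linux/linkage.h> are included.
@ On entry (PSCI 0.1): r1 = target CPU, r2 = entry point, return in r0.
ENTRY(psci_cpu_on)
	ldr	r3, =0xdeadbee0		@ made-up "secondary boot address" register
	str	r2, [r3]		@ tell the core where to enter the kernel
	dsb
	ldr	r3, =0xdeadbee4		@ made-up "CPU power-up" register
	mov	r0, #1
	str	r0, [r3]
	mov	r0, #ARM_PSCI_RET_SUCCESS
	mov	pc, lr			@ back to the _smc_psci dispatcher
ENDPROC(psci_cpu_on)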

From: Marc Zyngier <marc.zyngier@arm.com>
Allow the switch to a second stage secure monitor just before switching to non-secure.
This allows a resident piece of firmware to be active once the kernel has been entered (the u-boot monitor is dead anyway, its pages being reused).
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/cpu/armv7/nonsec_virt.S | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/arch/arm/cpu/armv7/nonsec_virt.S b/arch/arm/cpu/armv7/nonsec_virt.S index 2a43e3c..745670e 100644 --- a/arch/arm/cpu/armv7/nonsec_virt.S +++ b/arch/arm/cpu/armv7/nonsec_virt.S @@ -44,10 +44,19 @@ _monitor_vectors: * ip: target PC */ _secure_monitor: +#ifdef CONFIG_ARMV7_PSCI + ldr r5, =_psci_vectors @ Switch to the next monitor + mcr p15, 0, r5, c12, c0, 1 + isb + + @ Obtain a secure stack, and configure the PSCI backend + bl psci_arch_init +#endif + mrc p15, 0, r5, c1, c1, 0 @ read SCR - bic r5, r5, #0x4e @ clear IRQ, FIQ, EA, nET bits + bic r5, r5, #0x4a @ clear IRQ, EA, nET bits orr r5, r5, #0x31 @ enable NS, AW, FW bits - + @ FIQ preserved for secure mode mov r6, #SVC_MODE @ default mode is SVC is_cpu_virt_capable r4 #ifdef CONFIG_ARMV7_VIRT
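psci_arch_init is left to the platform; at the very least it has to give monitor mode a usable stack, since nothing has set SP up in that mode yet. A rough, hypothetical sketch (the stack placement and sizes are invented):

@ Hypothetical psci_arch_init: carve a per-CPU secure stack out of the
@ secure region. It is called with 'bl' before any stack exists, so it
@ must not push; r4/r5 are reloaded by the monitor afterwards.
ENTRY(psci_arch_init)
	mrc	p15, 0, r4, c0, c0, 5	@ MPIDR
	and	r4, r4, #3		@ CPU id (assumes at most 4 cores)
	mov	r5, r4, lsl #10		@ 1 KiB of stack per CPU (made up)
	ldr	sp, =(CONFIG_ARMV7_SECURE_BASE + 0x10000) @ made-up stack top
	sub	sp, sp, r5
	bx	lr
ENDPROC(psci_arch_init)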

From: Ma Haijun <mahaijuns@gmail.com>
Some architectures need extra device tree setup. Instead of adding yet another hook, convert arch_fixup_memory_node to be a generic FDT fixup function.
[maz: collapsed 3 patches into one, rewrote commit message]
Signed-off-by: Ma Haijun <mahaijuns@gmail.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/lib/bootm-fdt.c | 2 +-
 arch/arm/lib/bootm.c | 2 +-
 common/image-fdt.c | 7 +++++--
 include/common.h | 6 +++---
 4 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/arch/arm/lib/bootm-fdt.c b/arch/arm/lib/bootm-fdt.c index e40691d..8394e15 100644 --- a/arch/arm/lib/bootm-fdt.c +++ b/arch/arm/lib/bootm-fdt.c @@ -20,7 +20,7 @@
DECLARE_GLOBAL_DATA_PTR;
-int arch_fixup_memory_node(void *blob) +int arch_fixup_fdt(void *blob) { bd_t *bd = gd->bd; int bank; diff --git a/arch/arm/lib/bootm.c b/arch/arm/lib/bootm.c index a08586f..178e8fb 100644 --- a/arch/arm/lib/bootm.c +++ b/arch/arm/lib/bootm.c @@ -359,7 +359,7 @@ void boot_prep_vxworks(bootm_headers_t *images) if (images->ft_addr) { off = fdt_path_offset(images->ft_addr, "/memory"); if (off < 0) { - if (arch_fixup_memory_node(images->ft_addr)) + if (arch_fixup_fdt(images->ft_addr)) puts("## WARNING: fixup memory failed!\n"); } } diff --git a/common/image-fdt.c b/common/image-fdt.c index 9fc7481..96061a8 100644 --- a/common/image-fdt.c +++ b/common/image-fdt.c @@ -450,7 +450,7 @@ __weak int ft_verify_fdt(void *fdt) return 1; }
-__weak int arch_fixup_memory_node(void *blob) +__weak int arch_fixup_fdt(void *blob) { return 0; } @@ -467,7 +467,10 @@ int image_setup_libfdt(bootm_headers_t *images, void *blob, puts(" - must RESET the board to recover.\n"); return -1; } - arch_fixup_memory_node(blob); + if (arch_fixup_fdt(blob) < 0) { + puts("ERROR: arch specific fdt fixup failed"); + return -1; + } if (IMAGE_OF_BOARD_SETUP) ft_board_setup(blob, gd->bd); fdt_fixup_ethernet(blob); diff --git a/include/common.h b/include/common.h index 2e5a6d3..c9dc400 100644 --- a/include/common.h +++ b/include/common.h @@ -320,14 +320,14 @@ int arch_early_init_r(void); void board_show_dram(ulong size);
/** - * arch_fixup_memory_node() - Write arch-specific memory information to fdt + * arch_fixup_fdt() - Write arch-specific information to fdt * - * Defined in arch/$(ARCH)/lib/bootm.c + * Defined in arch/$(ARCH)/lib/bootm-fdt.c * * @blob: FDT blob to write to * @return 0 if ok, or -ve FDT_ERR_... on failure */ -int arch_fixup_memory_node(void *blob); +int arch_fixup_fdt(void *blob);
/* common/flash.c */ void flash_perror (int);
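To make the new contract concrete: an architecture's arch_fixup_fdt() may now perform any FDT fixup it needs, and a negative return value makes image_setup_libfdt() refuse to boot rather than being silently ignored. The sketch below is hypothetical (not from this series) and assumes DECLARE_GLOBAL_DATA_PTR plus <fdt_support.h> are in scope:

/* Hypothetical arch_fixup_fdt() for some other architecture, showing the
 * generic hook and the new error handling. fdt_fixup_memory() is an
 * existing U-Boot helper; the board-info fields used are an assumption. */
int arch_fixup_fdt(void *blob)
{
	int ret;

	ret = fdt_fixup_memory(blob, gd->bd->bi_memstart, gd->bd->bi_memsize);
	if (ret < 0)
		return ret;	/* image_setup_libfdt() will now abort the boot */

	/* other arch-specific nodes/properties would be added here */
	return 0;
}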

From: Marc Zyngier <marc.zyngier@arm.com>
Generate the PSCI node in the device tree.
Also add a reserve section for the "secure" code that lives in normal RAM, so that the kernel knows it'd better not trip on it.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/cpu/armv7/Makefile | 1 +
 arch/arm/cpu/armv7/virt-dt.c | 100 +++++++++++++++++++++++++++++++++++++++++++
 arch/arm/include/asm/armv7.h | 1 +
 arch/arm/lib/bootm-fdt.c | 12 +++++-
 4 files changed, 112 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm/cpu/armv7/virt-dt.c
diff --git a/arch/arm/cpu/armv7/Makefile b/arch/arm/cpu/armv7/Makefile index 735c4ad..703ce8c 100644 --- a/arch/arm/cpu/armv7/Makefile +++ b/arch/arm/cpu/armv7/Makefile @@ -21,6 +21,7 @@ endif ifneq ($(CONFIG_ARMV7_NONSEC)$(CONFIG_ARMV7_VIRT),) obj-y += nonsec_virt.o obj-y += virt-v7.o +obj-y += virt-dt.o endif
ifneq ($(CONFIG_ARMV7_PSCI),) diff --git a/arch/arm/cpu/armv7/virt-dt.c b/arch/arm/cpu/armv7/virt-dt.c new file mode 100644 index 0000000..0b0d6a7 --- /dev/null +++ b/arch/arm/cpu/armv7/virt-dt.c @@ -0,0 +1,100 @@ +/* + * Copyright (C) 2013 - ARM Ltd + * Author: Marc Zyngier marc.zyngier@arm.com + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program. If not, see http://www.gnu.org/licenses/. + */ + +#include <common.h> +#include <stdio_dev.h> +#include <linux/ctype.h> +#include <linux/types.h> +#include <asm/global_data.h> +#include <libfdt.h> +#include <fdt_support.h> +#include <asm/armv7.h> +#include <asm/psci.h> + +static int fdt_psci(void *fdt) +{ +#ifdef CONFIG_ARMV7_PSCI + int nodeoff; + int tmp; + + nodeoff = fdt_path_offset(fdt, "/cpus"); + if (nodeoff < 0) { + printf("couldn't find /cpus\n"); + return nodeoff; + } + + /* add 'enable-method = "psci"' to each cpu node */ + for (tmp = fdt_first_subnode(fdt, nodeoff); + tmp >= 0; + tmp = fdt_next_subnode(fdt, tmp)) { + const struct fdt_property *prop; + int len; + + prop = fdt_get_property(fdt, tmp, "device_type", &len); + if (!prop) + continue; + if (len < 4) + continue; + if (strcmp(prop->data, "cpu")) + continue; + + fdt_setprop_string(fdt, tmp, "enable-method", "psci"); + } + + nodeoff = fdt_path_offset(fdt, "/psci"); + if (nodeoff < 0) { + nodeoff = fdt_path_offset(fdt, "/"); + if (nodeoff < 0) + return nodeoff; + + nodeoff = fdt_add_subnode(fdt, nodeoff, "psci"); + if (nodeoff < 0) + return nodeoff; + } + + tmp = fdt_setprop_string(fdt, nodeoff, "compatible", "arm,psci"); + if (tmp) + return tmp; + tmp = fdt_setprop_string(fdt, nodeoff, "method", "smc"); + if (tmp) + return tmp; + tmp = fdt_setprop_u32(fdt, nodeoff, "cpu_suspend", ARM_PSCI_FN_CPU_SUSPEND); + if (tmp) + return tmp; + tmp = fdt_setprop_u32(fdt, nodeoff, "cpu_off", ARM_PSCI_FN_CPU_OFF); + if (tmp) + return tmp; + tmp = fdt_setprop_u32(fdt, nodeoff, "cpu_on", ARM_PSCI_FN_CPU_ON); + if (tmp) + return tmp; + tmp = fdt_setprop_u32(fdt, nodeoff, "migrate", ARM_PSCI_FN_MIGRATE); + if (tmp) + return tmp; +#endif + return 0; +} + +int armv7_update_dt(void *fdt) +{ +#ifndef CONFIG_ARMV7_SECURE_BASE + /* secure code lives in RAM, keep it alive */ + fdt_add_mem_rsv(fdt, (unsigned long)__secure_start, + __secure_end - __secure_start); +#endif + + return fdt_psci(fdt); +} diff --git a/arch/arm/include/asm/armv7.h b/arch/arm/include/asm/armv7.h index 11476dd..323f282 100644 --- a/arch/arm/include/asm/armv7.h +++ b/arch/arm/include/asm/armv7.h @@ -79,6 +79,7 @@ void v7_outer_cache_inval_range(u32 start, u32 end); #if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT)
int armv7_init_nonsec(void); +int armv7_update_dt(void *fdt);
/* defined in assembly file */ unsigned int _nonsec_init(void); diff --git a/arch/arm/lib/bootm-fdt.c b/arch/arm/lib/bootm-fdt.c index 8394e15..d4f1578 100644 --- a/arch/arm/lib/bootm-fdt.c +++ b/arch/arm/lib/bootm-fdt.c @@ -17,13 +17,14 @@
#include <common.h> #include <fdt_support.h> +#include <asm/armv7.h>
DECLARE_GLOBAL_DATA_PTR;
int arch_fixup_fdt(void *blob) { bd_t *bd = gd->bd; - int bank; + int bank, ret; u64 start[CONFIG_NR_DRAM_BANKS]; u64 size[CONFIG_NR_DRAM_BANKS];
@@ -32,5 +33,12 @@ int arch_fixup_fdt(void *blob) size[bank] = bd->bi_dram[bank].size; }
- return fdt_fixup_memory_banks(blob, start, size, CONFIG_NR_DRAM_BANKS); + ret = fdt_fixup_memory_banks(blob, start, size, CONFIG_NR_DRAM_BANKS); +#if defined(CONFIG_ARMV7_NONSEC) || defined(CONFIG_ARMV7_VIRT) + if (ret) + return ret; + + ret = armv7_update_dt(blob); +#endif + return ret; }
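For reference, with the function IDs defined in asm/psci.h the fixup leaves the kernel with roughly the following (shown in DTS form purely for illustration); each cpu node additionally gains enable-method = "psci":

/* Approximate result of the fixup as device tree source (illustrative) */
psci {
	compatible  = "arm,psci";
	method      = "smc";
	cpu_suspend = <0x95c1ba5e>;
	cpu_off     = <0x95c1ba5f>;
	cpu_on      = <0x95c1ba60>;
	migrate     = <0x95c1ba61>;
};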

From: Marc Zyngier <marc.zyngier@arm.com>
Having a form of whitelist to check whether we know of a CPU core before obtaining CBAR is a bit silly.
It doesn't scale (what about A12, A17, and others I don't know about?), and the GIC address is actually a property of the SoC, not the core.
So either it works and everybody is happy, or it doesn't and the u-boot port to this SoC is providing the real address via a configuration option.
The result of the above is that this code doesn't need to exist and is thus forcefully removed.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Ian Campbell <ijc@hellion.org.uk>
---
 arch/arm/cpu/armv7/virt-v7.c | 17 -----------------
 1 file changed, 17 deletions(-)
diff --git a/arch/arm/cpu/armv7/virt-v7.c b/arch/arm/cpu/armv7/virt-v7.c index 6500030..651ca40 100644 --- a/arch/arm/cpu/armv7/virt-v7.c +++ b/arch/arm/cpu/armv7/virt-v7.c @@ -30,25 +30,8 @@ static unsigned long get_gicd_base_address(void) #ifdef CONFIG_ARM_GIC_BASE_ADDRESS return CONFIG_ARM_GIC_BASE_ADDRESS + GIC_DIST_OFFSET; #else - unsigned midr; unsigned periphbase;
- /* check whether we are an Cortex-A15 or A7. - * The actual HYP switch should work with all CPUs supporting - * the virtualization extension, but we need the GIC address, - * which we know only for sure for those two CPUs. - */ - asm("mrc p15, 0, %0, c0, c0, 0\n" : "=r"(midr)); - switch (midr & MIDR_PRIMARY_PART_MASK) { - case MIDR_CORTEX_A9_R0P1: - case MIDR_CORTEX_A15_R0P0: - case MIDR_CORTEX_A7_R0P0: - break; - default: - printf("nonsec: could not determine GIC address.\n"); - return -1; - } - /* get the GIC base address from the CBAR register */ asm("mrc p15, 4, %0, c15, c0, 0\n" : "=r" (periphbase));
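In other words, a port either relies on CBAR (the code takes CBAR with its low 15 bits cleared and adds GIC_DIST_OFFSET) or names the address explicitly in its board configuration. A hypothetical example, with a made-up address:

/* Hypothetical board config header entry: state the GIC base explicitly
 * when CBAR is absent or unreliable (the address below is illustrative). */
#define CONFIG_ARM_GIC_BASE_ADDRESS	0x2c000000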

Hi Ian,
On Sat, 12 Jul 2014 14:23:41 +0100, Ian Campbell <ijc@hellion.org.uk> wrote:
Hi,
Marc is rather busy, so I've taken it upon myself to rebase this series onto the latest master. v4 would have been applied except for a warning it caused on aarch64, which I have (trivially) resolved this time around. The only other change is s/OBJCFLAGS/OBJCOPYFLAGS/ due to changes in the underlying code.
I'd like to get this into the next merge window since it is the basis for sunxi smp support.
I have been testing on a cubietruck (sun7i/A20). I've also build-tested with "./MAKEALL -v armltd", which builds a variety of 32- and 64-bit boards.
The code is also available at git://gitorious.org/ijc/u-boot.git psci-v5
The sunxi patches which build on this are at git://gitorious.org/ijc/u-boot.git psci-a20-v5
Cheers, Ian.
Marc's original blurb:
PSCI is an ARM standard that provides a generic interface that supervisory software can use to manage power in the following situations:
- Core idle management
- CPU hotplug
- big.LITTLE migration models
- System shutdown and reset
It basically allows the kernel to offload these tasks to the firmware, and rely on common kernel side code that just calls into PSCI.
More importantly, it gives a way to ensure that CPUs enter the kernel at the appropriate exception level (ie HYP mode, to allow the use of the virtualization extensions), even across events like CPUs being powered off/on or suspended.
The main idea here is to turn some of the existing U-Boot code into a separate section that can live in secure RAM (or a reserved page of memory), containing a secure monitor that will implement the PSCI operations. This code will still be alive when U-Boot is long gone, hence the need for a piece of memory that will not be touched by the OS.
This patch series contains 3 parts:
- the first four patches are just bug fixes
- the next two refactor the HYP/non-secure code to allow relocation in secure memory
- the last four contain the generic PSCI code and DT infrastructure
This implements the original 0.1 spec, as nobody implements the new 0.2 version so far. I plan to update this support to 0.2 once there is an official binding available (and support in the kernel).
Most of the development has been done on an Allwinner A20 SoC, which is the main user of this code at the moment. I hope new SoCs will be using this method in the future (my primary goal for this series being to avoid more stupid SMP code from creeping up in the Linux kernel). As instructed, I've removed the A20 support code and made it a separate series, as there is now an effort to mainline this code (see Ian Campbell patch series).
With these three series applied, the A20 now boots in HYP mode, Linux finds the secondary CPU without any SMP code present in the kernel, and runs KVM out of the box. The Xen/ARM guys managed to do the same fairly easily, as did at least one XVizor user.
This code has also been tested on a VExpress TC2, running KVM with all 5 CPUs, in order to make sure there was no obvious regression.
Whole series applied to u-boot-arm/master, thanks!
Amicalement,