
Hi,
+void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
+				     enum dcache_option option)
+{
+	u64 *page_table = arch_get_page_table();
+	u64 upto, end;
+
+	if (page_table == NULL)
+		return;
+
+	end = ALIGN(start + size, (1 << MMU_SECTION_SHIFT)) >>
+			MMU_SECTION_SHIFT;
+	start = start >> MMU_SECTION_SHIFT;
+	for (upto = start; upto < end; upto++) {
+		page_table[upto] &= ~PMD_ATTRINDX_MASK;
+		page_table[upto] |= PMD_ATTRINDX(option);
+	}
These writes might not be visible to the page table walkers immediately, and the TLBs might still contain stale values for a while afterwards. That could render the cache maintenance useless (as speculative fetches could still occur due to cacheable attributes still being in place).
You need a DSB to ensure that the writes are visible to the page table walkers (with a compiler barrier to ensure that the writes actually occur before the DSB), and some TLB maintenance (complete with another DSB) to ensure that the TLBs don't contain stale values by the time you get to the cache maintenance afterwards.
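Something along the following lines, placed between the attribute-update loop and the flush_dcache_range() call, should do. This is only a sketch: __asm_invalidate_tlb_all() stands in for whatever TLB invalidation helper is appropriate at the exception level U-Boot runs at (TLBI by VA over just the affected range would also work, and would be cheaper):

	/* DSB with a "memory" clobber: the clobber is the compiler
	 * barrier, the DSB makes the PTE writes visible to the walkers. */
	asm volatile("dsb sy" : : : "memory");
	/* Discard any stale translations the TLBs may still hold. */
	__asm_invalidate_tlb_all();
	/* Wait for the invalidation to complete, then resynchronise. */
	asm volatile("dsb sy" : : : "memory");
	asm volatile("isb");

With that in place, the flush_dcache_range() below can't be undermined by speculative fetches made under the old (cacheable) attributes.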
Also minor nit, but s/upto/i/?
+	start = start << MMU_SECTION_SHIFT;
+	end = end << MMU_SECTION_SHIFT;
+	flush_dcache_range(start, end);
+}
+
 #else	/* CONFIG_SYS_DCACHE_OFF */
 void invalidate_dcache_all(void)
@@ -170,6 +197,11 @@ int dcache_status(void)
 	return 0;
 }
+
+void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
+				     enum dcache_option option)
+{
+}
#endif /* CONFIG_SYS_DCACHE_OFF */
 #ifndef CONFIG_SYS_ICACHE_OFF

diff --git a/arch/arm/include/asm/system.h b/arch/arm/include/asm/system.h
index 760e8ab..868ea54 100644
--- a/arch/arm/include/asm/system.h
+++ b/arch/arm/include/asm/system.h
@@ -15,9 +15,15 @@
 #define CR_EE		(1 << 25)	/* Exception (Big) Endian	*/
 #define PGTABLE_SIZE	(0x10000)
+
+/* 2MB granularity */
+#define MMU_SECTION_SHIFT	21
Do we only expect a 4K translation granule for now? A shift of 21 (2MB sections) matches level-2 blocks with the 4K granule only; with a 64K granule, level-2 blocks are 512MB.
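If other granules are ever on the cards, it might be worth keying the shift off the granule configuration. A hypothetical sketch (CONFIG_SYS_GRANULE_64K is an invented symbol here, not an existing config option):

	/* Hypothetical: pick the section size to match the granule's
	 * level-2 block size. */
	#ifdef CONFIG_SYS_GRANULE_64K
	#define MMU_SECTION_SHIFT	29	/* 512MB blocks, 64K granule */
	#else
	#define MMU_SECTION_SHIFT	21	/* 2MB blocks, 4K granule */
	#endif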
Thanks,
Mark.