[U-Boot] [RFC PATCH] ARM: cache: cp15: Align addresses when initial page_table setup is flushed

The change made in commit "arm: Show cache warnings in U-Boot proper only" (SHA1: bcc53bf095893fbdae531a9a7b5d4ef4a125a7fc)
has revealed that during the initial setup of MMU regions in the mmu_set_region_dcache_behaviour() function some addresses are not aligned to the platform cache line size.
As a result, the following warning messages showed up at early boot:
CACHE: Misaligned operation at range [8fff0000, 8fff0004]
CACHE: Misaligned operation at range [8fff0024, 8fff0028]
These were caused by an attempt to update single page_table (gd->arch.tlb_addr) entries with the proper TLB cache settings. Since a TLB section covers a large area (up to 2 MiB), only a very small amount of cache data had to be updated, often much smaller than a single cache line (e.g. 32 or 64 bytes).
This patch squashes the warning by properly aligning the start and end addresses. In fact it does what the cache hardware would do anyway (flush whole data cache lines). Even without this patch everything worked, because the TLB table sections had been initialized to default values earlier.
Signed-off-by: Lukasz Majewski <l.majewski@majess.pl>
---
 arch/arm/lib/cache-cp15.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/arm/lib/cache-cp15.c b/arch/arm/lib/cache-cp15.c
index 1121dc3..913c554 100644
--- a/arch/arm/lib/cache-cp15.c
+++ b/arch/arm/lib/cache-cp15.c
@@ -62,6 +62,7 @@ void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
 				     enum dcache_option option)
 {
 	u32 *page_table = (u32 *)gd->arch.tlb_addr;
+	u32 align_start_addr, align_end_addr;
 	unsigned long upto, end;

 	end = ALIGN(start + size, MMU_SECTION_SIZE) >> MMU_SECTION_SHIFT;
@@ -70,7 +71,14 @@ void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
 		      option);
 	for (upto = start; upto < end; upto++)
 		set_section_dcache(upto, option);
-	mmu_page_table_flush((u32)&page_table[start], (u32)&page_table[end]);
+
+	align_start_addr = (u32)&page_table[start]
+		& ~(CONFIG_SYS_CACHELINE_SIZE - 1);
+	align_end_addr = ALIGN((u32)&page_table[end],
+			       CONFIG_SYS_CACHELINE_SIZE);
+	debug("%s: align_start_addr: 0x%x align end_addr: 0x%x\n", __func__,
+	      align_start_addr, align_end_addr);
+	mmu_page_table_flush(align_start_addr, align_end_addr);
 }

 __weak void dram_bank_mmu_setup(int bank)

On 08/09/2016 10:41 AM, Lukasz Majewski wrote:
> [...]
> This patch squashes this warning by properly aligning start and end addresses. In fact it does what cache HW would do anyway (flush the whole data cache lines). Even without this patch it all worked, because TLB table sections were initialized to default values earlier.
I presume this works because the MMU table has only one writer, which is the CPU, yes?

Dear all,
> [...]
> +	align_start_addr = (u32)&page_table[start]
> +		& ~(CONFIG_SYS_CACHELINE_SIZE - 1);
> +	align_end_addr = ALIGN((u32)&page_table[end],
> +			       CONFIG_SYS_CACHELINE_SIZE);
This patch is an RFC on purpose :-).
It requires CONFIG_SYS_CACHELINE_SIZE to be defined for the target platform. However, some older boards (like Samsung's smdk2410 or the Siemens axm) do not have this define, which means I would need to add it for each such board.
The other option is to read the cache line size from the coprocessor (like get_ccsidr() in arch/arm/cpu/armv7/cache_v7.c). Such an approach would require some common code extraction.
Best regards, Łukasz Majewski

On Tue, Aug 9, 2016 at 5:41 AM, Lukasz Majewski l.majewski@majess.pl wrote:
> [...]
Stefan has also sent a patch for this: https://patchwork.ozlabs.org/patch/656470/

Hi Fabio,
> On Tue, Aug 9, 2016 at 5:41 AM, Lukasz Majewski <l.majewski@majess.pl> wrote:
>> [...]
> Stefan has also sent a patch for this: https://patchwork.ozlabs.org/patch/656470/
I see that I wasn't the only one.
Both patches are identical, Stefan was first :-)
My concern, as I wrote in the comment to my patch, is that when I ran build tests some other boards broke, since they do not define CONFIG_SYS_CACHELINE_SIZE.
Best regards, Łukasz Majewski

Hi Lukasz,
On Wed, Aug 10, 2016 at 5:15 AM, Lukasz Majewski l.majewski@majess.pl wrote:
> I see that I wasn't the only one.
> Both patches are identical, Stefan was first :-)
> My concern is that, as I've written with comment to my patch, that when I was running build tests some other boards were broken since they didn't define CONFIG_SYS_CACHELINE_SIZE.
For which boards did you see build failure with this patch?
Thanks

Hi Fabio,
> Hi Lukasz,
> [...]
> For which boards did you see build failure with this patch?
Branch: master
SHA1: f60d0603edca472c4458b30956f38c6c1a836d66
Siemens: axm board
./tools/buildman/buildman.py --branch=HEAD siemens --detail --verbose --show_errors --force-build --count=1 --output-dir=./BUILD/
04: ARM: cache: cp15: Align addresses when initial page_table setup is flushed
       arm:  +   axm
+../arch/arm/lib/cache-cp15.c: In function 'mmu_set_region_dcache_behaviour':
+../arch/arm/lib/cache-cp15.c:76:7: error: 'CONFIG_SYS_CACHELINE_SIZE' undeclared (first use in this function)
+   & ~(CONFIG_SYS_CACHELINE_SIZE - 1);
Samsung: smdk2410
01: ARM: cache: cp15: Align addresses when initial page_table setup is flushed
       arm:  +   smdk2410
+../arch/arm/lib/cache-cp15.c: In function 'mmu_set_region_dcache_behaviour':
+../arch/arm/lib/cache-cp15.c:76:7: error: 'CONFIG_SYS_CACHELINE_SIZE' undeclared (first use in this function)
+   & ~(CONFIG_SYS_CACHELINE_SIZE - 1);
Probably more boards are affected too.
Thanks
Best regards, Łukasz Majewski
participants (3): Fabio Estevam, Lukasz Majewski, Marek Vasut