
On Fri, Feb 29, 2008 at 06:10:10PM +0100, Rafal Jaworowski wrote:
> Not really, unfortunately: the 85xx still lacks flushing the d-cache
> before disabling it. I was going to fix this by refactoring the existing
> d-cache disabling/flushing routines into common code in
> lib_ppc/ppccache.S (as most existing implementations are just copy/paste
> of the same thing) and have 85xx use it too, but haven't had time to
> clean it up yet. If anyone is willing to do it sooner, I won't complain :)
The implementations for other CPUs, such as 86xx, are a bit questionable: they arbitrarily use the cache line size times 65536 as the amount to flush, and inefficiently iterate 4 bytes at a time rather than one cache line at a time.
Here's an 85xx implementation from an as-yet-unmerged Linux tree (replace KERNELBASE with something appropriate for U-Boot) that dynamically determines the cache size and cache block size. Note that it assumes at most 8 ways.
_GLOBAL(flush_disable_L1)
	mfspr	r3, SPRN_L1CFG0

	rlwinm	r5, r3, 9, 3	/* Extract cache block size */
	twlgti	r5, 1		/* Only 32 and 64 byte cache blocks
				 * are currently defined. */
	li	r4, 32
	subfic	r6, r5, 2	/* r6 = log2(1KiB / cache block size) -
				 *      log2(number of ways) */
	slw	r5, r4, r5	/* r5 = cache block size */

	rlwinm	r7, r3, 0, 0xff	/* Extract number of KiB in the cache */
	mulli	r7, r7, 13	/* An 8-way cache will require 13
				 * loads per way. */
	slw	r7, r7, r6

	lis	r4, KERNELBASE@h
	mtctr	r7

1:	lwz	r0, 0(r4)	/* Load... */
	add	r4, r4, r5
	bdnz	1b

	msync
	lis	r4, KERNELBASE@h
	mtctr	r7

1:	dcbf	0, r4		/* ...and flush. */
	add	r4, r4, r5
	bdnz	1b

	mfspr	r4, SPRN_L1CSR0	/* Invalidate and disable d-cache */
	li	r5, 2
	rlwimi	r4, r5, 0, 3
	msync
	isync
	mtspr	SPRN_L1CSR0, r4
	isync

1:	mfspr	r4, SPRN_L1CSR0	/* Wait for the invalidate to finish */
	andi.	r4, r4, 2
	bne	1b
	rlwinm	r4, r3, 2, 3	/* Extract cache type */
	twlgti	r4, 1		/* Only 0 (Harvard) and 1 (Unified)
				 * are currently defined. */
	andi.	r4, r4, 1	/* If it's unified, we're done. */
	bnelr
	mfspr	r4, SPRN_L1CSR1	/* Otherwise, invalidate the i-cache */
	li	r5, 2
	rlwimi	r4, r5, 0, 3
	msync
	isync
	mtspr	SPRN_L1CSR1, r4
	isync

1:	mfspr	r4, SPRN_L1CSR1	/* Wait for the invalidate to finish */
	andi.	r4, r4, 2
	bne	1b
	blr
_GLOBAL(invalidate_enable_L1)
	mfspr	r4, SPRN_L1CSR0	/* Invalidate d-cache */
	ori	r4, r4, 2
	msync
	isync
	mtspr	SPRN_L1CSR0, r4
	isync

1:	mfspr	r4, SPRN_L1CSR0	/* Wait for the invalidate to finish */
	andi.	r5, r4, 2
	bne	1b

	ori	r4, r4, 1	/* Enable d-cache */
	msync
	isync
	mtspr	SPRN_L1CSR0, r4
	isync
	mfspr	r3, SPRN_L1CFG0
	rlwinm	r4, r3, 2, 3	/* Extract cache type */
	twlgti	r4, 1		/* Only 0 (Harvard) and 1 (Unified)
				 * are currently defined. */
	andi.	r4, r4, 1	/* If it's unified, we're done. */
	bnelr
	mfspr	r4, SPRN_L1CSR1	/* Otherwise, do the i-cache as well */
	ori	r4, r4, 2
	msync
	isync
	mtspr	SPRN_L1CSR1, r4
	isync
1:	mfspr	r4, SPRN_L1CSR1	/* Wait for the invalidate to finish */
	andi.	r4, r4, 2
	bne	1b

	ori	r4, r4, 1	/* Enable i-cache */
	msync
	isync
	mtspr	SPRN_L1CSR1, r4
	isync

	blr

/* r3 = virtual address of L2 controller */
_GLOBAL(flush_disable_L2)
	/* It's a write-through cache, so only invalidation is needed. */
	lwz	r4, 0(r3)
	li	r5, 1
	rlwimi	r4, r5, 30, 0xc0000000
	stw	r4, 0(r3)

	/* Wait for the invalidate to finish */
1:	lwz	r4, 0(r3)
	andis.	r4, r4, 0x4000
	bne	1b

	blr

/* r3 = virtual address of L2 controller */
_GLOBAL(invalidate_enable_L2)
	lwz	r4, 0(r3)
	li	r5, 3
	rlwimi	r4, r5, 30, 0xc0000000
	stw	r4, 0(r3)

	/* Wait for the invalidate to finish */
1:	lwz	r4, 0(r3)
	andis.	r4, r4, 0x4000
	bne	1b

	blr