
Hi Aneesh,
On Thu, Jun 21, 2012 at 2:55 PM, Sricharan R <r.sricharan@ti.com> wrote:
Hi, [snip..]
On 06/15/2012 07:48 AM, R, Sricharan wrote:
Hi,
On Fri, Jun 15, 2012 at 12:31 AM, Tom Rini <trini@ti.com> wrote:
If we are built with D-CACHE enabled but have run 'dcache off' and then attempt to flush unaligned regions, we spam the console with warnings that aren't true (as the cache was off).
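For illustration, a guard of roughly this shape is what the description above implies. This is only a sketch against U-Boot's flush_dcache_range()/dcache_status() helpers, not the actual patch:

#include <common.h>

void flush_dcache_range(unsigned long start, unsigned long stop)
{
        /*
         * Sketch: if the D-cache has been turned off, skip the alignment
         * checks and the clean-by-MVA loop entirely, so we do not print
         * misleading alignment warnings for a cache that is disabled.
         */
        if (!dcache_status())
                return;

        /* ... existing alignment checks and cache-line clean loop ... */
}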
Today we do cache maintenance operations after the dcache is turned off. One example: before jumping to the kernel, we try to invalidate the caches with the cache already turned off. So with this patch those maintenance calls will do nothing, which is not correct.
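The call site being referred to looks, in simplified form, like U-Boot's armv7 cleanup_before_linux(); the point is that the final invalidate is issued with the D-cache already off, so an early return on !dcache_status() would silently skip it. Sketch only, with details trimmed:

#include <common.h>

int cleanup_before_linux(void)
{
        disable_interrupts();

        /* Turn off and invalidate the I-cache. */
        icache_disable();
        invalidate_icache_all();

        /*
         * Turn off the D-cache, then invalidate it.  The invalidate below
         * runs while the cache is already disabled, which is exactly the
         * maintenance-after-off case discussed in this thread.
         */
        dcache_disable();
        invalidate_dcache_all();

        return 0;
}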
Ah yes. But shouldn't we be doing these same operations as part of turning the cache off?
The problem is that when turning off the dcache, we flush it and then turn the cache and MMU off. But these operations do not happen atomically in a single call, so there is a chance of valid entries being present in the cache even after it is OFF.
I think this is what we need to fix. Otherwise, Tom's change looks good to me. How about an invalidate in dcache_disable() or something like that?
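One way that suggestion might look, as a sketch only. cache_disable() and its CR_C argument follow the cp15 implementation, but the exact ordering here is illustrative rather than the real cache-cp15.c code:

#include <common.h>
#include <asm/system.h>

void dcache_disable(void)
{
        /* Clean everything while the cache is still on, so dirty lines
         * reach memory before we stop allocating. */
        flush_dcache_all();

        /* Clear the C bit so the D-cache stops being searched and
         * allocated into. */
        cache_disable(CR_C);

        /* Invalidate again after the cache is off, dropping any lines
         * that were filled speculatively in the window above; callers
         * then no longer need their own maintenance after disabling. */
        invalidate_dcache_all();
}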
Correct. So if the cleanup sequence is fine, then OK: there should be no need to check whether the caches are OFF inside the maintenance functions. The kernel does not appear to do such checks either. The real issue is that we should have assembly-level functions that do the cleanup routine, flush, invalidate, and turn OFF the caches/MMU, atomically.
Actually, with something like speculation happening in the background, the only way of ensuring that we do not have any valid entries is to do an invalidate-all after turning off the caches/MMU. So this means that we will always have a maintenance operation after turning the cache off.
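Spelling that window out (dcache_off_sequence is a made-up name for illustration, and in practice this sequence would want to live in assembly so nothing touches memory between the steps, as noted above):

#include <common.h>
#include <asm/system.h>

static void dcache_off_sequence(void)
{
        /* Write back all dirty lines while the cache is still on. */
        flush_dcache_all();

        /*
         * Between the flush above and the disable below, speculative
         * accesses can still allocate clean lines into the cache.
         */

        /* Turn off the D-cache and the MMU. */
        set_cr(get_cr() & ~(CR_C | CR_M));

        /*
         * Drop whatever was allocated in that window.  This necessarily
         * runs with the cache already off, so an early return on
         * !dcache_status() in the maintenance code would defeat it.
         */
        invalidate_dcache_all();
}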
Thanks, Sricharan