
On 2/25/2017 1:25 AM, Rush, Jason A. wrote:
R, Vignesh wrote:
On 2/24/2017 12:55 AM, Marek Vasut wrote:
On 02/23/2017 08:22 PM, Rush, Jason A. wrote:
Marek Vasut wrote:
On 02/22/2017 06:37 PM, Rush, Jason A. wrote:
Marek Vasut wrote:
On 02/21/2017 05:50 PM, Rush, Jason A. wrote:
[...]
You could try reverting my commits:
commit 57897c13de03ac0136d64641a3eab526c6810387
    spi: cadence_qspi_apb: Use 32 bit indirect write transaction when possible
commit b63b46313ed29e9b0c36b3d6b9407f6eade40c8f
    spi: cadence_qspi_apb: Use 32 bit indirect read transaction when possible
When I reverted these two commits and added my patch for the indirect trigger_address, it works correctly.
Oops, these patches are required because the Cadence QSPI controller (I am not sure whether all versions of the IP are affected or only newer versions) has a limitation that the external master is only permitted to issue 32-bit data interface reads until the last word of an indirect transfer.
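To illustrate that limitation, here is a minimal host-side sketch (not the actual driver code) of draining an indirect read using only 32-bit accesses to the data port, with any trailing partial word copied out of the final 32-bit read. The name indirect_read_32bit and the ahb_port parameter (standing in for the memory-mapped trigger address) are my own for illustration:

```c
#include <stdint.h>
#include <string.h>

/*
 * Drain 'len' bytes of an indirect read using only 32-bit reads of
 * the data port, as the controller requires. 'ahb_port' stands in
 * for the memory-mapped indirect trigger address; here it is just
 * a word array so the sketch runs on a host.
 */
static void indirect_read_32bit(const uint32_t *ahb_port, uint8_t *buf,
				size_t len)
{
	size_t words = len / 4;
	size_t rem = len % 4;
	size_t i;

	for (i = 0; i < words; i++) {
		uint32_t w = ahb_port[i];	/* 32-bit data interface read */
		memcpy(buf + i * 4, &w, 4);
	}

	if (rem) {
		/* Last word of the transfer: read 32 bits, keep the tail */
		uint32_t w = ahb_port[words];
		memcpy(buf + words * 4, &w, rem);
	}
}
```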
Also, when I disabled the dcache (dcache off) as Marek suggested, it works correctly when running from the master branch (again with my indirect trigger_address patch).
Just so I understand correctly: with latest master (no patches reverted) + your patch for the indirect trigger_address + dcache off, you don't see any problem?
But there were other patches by others in v2017.01-rc1, like "spi: cadence_qspi: Fix CS timings", which may have an impact.
I left all the other commits in except the two Vignesh suggested reverting, so it seems to be related to those two commits and caching. As another data point, I can load and boot Linux with caching on from another source (MMC), so I don't think it's a problem with memory/caching in general.
Any suggestions on how to proceed from here?
My patches use the common bounce_buffer implementation, which does dcache flush/invalidate, and if the dcache has issues then I guess those operations may be causing data corruption. Could you do a bit more research for me?
1. As a hack, could you just disable the dcache operations in the bounce_buffer implementation? Here is the diff:
diff --git a/common/bouncebuf.c b/common/bouncebuf.c
index 054d9e0302cc..2878b9eed1ae 100644
--- a/common/bouncebuf.c
+++ b/common/bouncebuf.c
@@ -55,21 +55,21 @@ int bounce_buffer_start(struct bounce_buffer *state, void *data,
 	 * Flush data to RAM so DMA reads can pick it up,
 	 * and any CPU writebacks don't race with DMA writes
 	 */
-	flush_dcache_range((unsigned long)state->bounce_buffer,
-			   (unsigned long)(state->bounce_buffer) +
-			   state->len_aligned);
+//	flush_dcache_range((unsigned long)state->bounce_buffer,
+//			   (unsigned long)(state->bounce_buffer) +
+//			   state->len_aligned);
 
 	return 0;
 }
 
 int bounce_buffer_stop(struct bounce_buffer *state)
 {
-	if (state->flags & GEN_BB_WRITE) {
-		/* Invalidate cache so that CPU can see any newly DMA'd data */
-		invalidate_dcache_range((unsigned long)state->bounce_buffer,
-					(unsigned long)(state->bounce_buffer) +
-					state->len_aligned);
-	}
+//	if (state->flags & GEN_BB_WRITE) {
+//		/* Invalidate cache so that CPU can see any newly DMA'd data */
+//		invalidate_dcache_range((unsigned long)state->bounce_buffer,
+//					(unsigned long)(state->bounce_buffer) +
+//					state->len_aligned);
+//	}
 
 	if (state->bounce_buffer == state->user_buffer)
 		return 0;
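For context, here is a simplified host-side sketch of the bounce-buffer pattern the diff above is hacking on: copy in before a DMA read of memory, flush so the data reaches RAM, then invalidate and copy out after a DMA write. The cache operations are replaced with no-op stubs so the sketch compiles anywhere; the struct and function names are simplified versions of the real common/bouncebuf.c API, not the actual code:

```c
#include <stdlib.h>
#include <string.h>

#define GEN_BB_READ	(1 << 0)	/* DMA will read from the buffer */
#define GEN_BB_WRITE	(1 << 1)	/* DMA will write to the buffer */

struct bb {
	void *user_buffer;
	void *bounce_buffer;
	size_t len;
	unsigned int flags;
};

/* Stand-ins for the real cache maintenance calls */
static void flush_dcache_stub(void *p, size_t n)      { (void)p; (void)n; }
static void invalidate_dcache_stub(void *p, size_t n) { (void)p; (void)n; }

static int bb_start(struct bb *s, void *data, size_t len, unsigned int flags)
{
	s->user_buffer = data;
	s->len = len;
	s->flags = flags;
	s->bounce_buffer = malloc(len);	/* real code aligns this buffer */
	if (!s->bounce_buffer)
		return -1;
	if (flags & GEN_BB_READ)	/* DMA will read: copy data in */
		memcpy(s->bounce_buffer, data, len);
	/* Push data to RAM so DMA sees it and writebacks don't race */
	flush_dcache_stub(s->bounce_buffer, len);
	return 0;
}

static int bb_stop(struct bb *s)
{
	if (s->flags & GEN_BB_WRITE) {
		/* Drop stale cache lines, then hand DMA'd data back */
		invalidate_dcache_stub(s->bounce_buffer, s->len);
		memcpy(s->user_buffer, s->bounce_buffer, s->len);
	}
	free(s->bounce_buffer);
	return 0;
}
```

With the flush/invalidate stubbed out (as in the diff), any corruption that disappears would point at the cache maintenance path rather than the transfer itself.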
2. If that works, I guess there is some issue w.r.t. CQSPI and dcache on your platform. I suggest you revert my above two patches and try the non-bounce-buffer version of my changes here: https://patchwork.ozlabs.org/patch/693069/. This patch takes care of the indirect write. I don't have a similar patch for the indirect read, but that wasn't required.