
On Wed, Apr 01, 2015 at 04:21:50PM +0100, Andy Pont wrote:
> Hi David,
<snipped for brevity>
>  	for (i = 0; i < len; i++) {
>  		/* wait till TX register is empty (TXS == 1) */
> +		start = get_timer(0);
>  		while (!(readl(&ds->regs->channel[ds->slave.cs].chstat) &
>  			 OMAP3_MCSPI_CHSTAT_TXS)) {
> -			if (--timeout <= 0) {
> +			if (get_timer(start) > SPI_WAIT_TIMEOUT) {
>  				printf("SPI TXS timed out, status=0x%08x\n",
>  				       readl(&ds->regs->channel[ds->slave.cs].chstat));
>  				return -1;
> I have a couple of questions...
> Firstly, when in SPL, is there access to the get_timer() function?
We call timer_init() from board_init_r() in SPL, prior to diving down into loading (or checking for Falcon vs. regular boot), so this is safe.
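To expand on that a bit: the new path is just the standard get_timer() polling idiom, and the only thing it needs is a running timer, which board_init_r() has already set up by the time we touch SPI. A rough sketch of the idiom (not the driver code itself; wait_ready() is a made-up stand-in for the CHSTAT poll and the timeout value is only illustrative):

#include <common.h>	/* get_timer(), ulong */

#define EXAMPLE_TIMEOUT_MS	10	/* illustrative value only */

/*
 * wait_ready() stands in for the real status poll, i.e. the
 * OMAP3_MCSPI_CHSTAT_TXS check in the driver.
 */
static int wait_with_timeout(int (*wait_ready)(void))
{
	ulong start = get_timer(0);	/* millisecond timestamp */

	while (!wait_ready()) {
		if (get_timer(start) > EXAMPLE_TIMEOUT_MS)
			return -1;	/* bail out after ~10ms of wall time */
	}

	return 0;
}

The point of the conversion is that --timeout counts loop iterations, so the effective bound moves around with CPU clock and compiler flags, while get_timer() is in milliseconds no matter how fast the poll loop spins.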
> Secondly, when loading Linux directly from SPI (Falcon mode), we want to maximise the throughput and save every CPU cycle we possibly can. Adding yet another function call into the for loop, and hence calling it a couple of million times, seems on the face of it like it is going to slow things down.
I'd like to see measurements to prove me wrong, but this seems both like a bad idea (it is optimizing by being incorrect, whereas the patch gives us a correct timeout check like other drivers have) and, I would think, really unlikely to be noticeable. Since we will be going through the same code paths in both regular U-Boot and SPL, trying to time things (by loading a big file) would be easy enough, I think.
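If it helps, here is roughly how I would take that measurement; it is only a sketch, the probe arguments, offset and size are placeholders, and error handling is minimal:

#include <common.h>
#include <spi.h>
#include <spi_flash.h>

/*
 * Illustrative only: time a large SPI flash read so the cost of the
 * per-iteration get_timer() check can be compared before and after
 * the change.  Bus/cs/speed/mode, offset and size are placeholders.
 */
static void time_big_read(void *buf)
{
	struct spi_flash *flash;
	ulong start, elapsed;
	int ret;

	flash = spi_flash_probe(0, 0, 1000000, SPI_MODE_3);
	if (!flash)
		return;

	start = get_timer(0);
	ret = spi_flash_read(flash, 0, 4 << 20, buf);	/* 4 MiB read */
	elapsed = get_timer(start);			/* elapsed time in ms */

	printf("4 MiB read %s in %lu ms\n", ret ? "failed" : "ok", elapsed);
	spi_flash_free(flash);
}

From the U-Boot prompt the same number should also fall out of wrapping an 'sf read' of a few MiB in the 'time' command, assuming those commands are enabled for the board.

Thanks!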