
On Tue, Jan 24, 2012 at 11:59:40PM +0100, Colin Cross wrote:
> On Tue, Jan 24, 2012 at 2:43 PM, Stephen Warren <swarren@nvidia.com> wrote:
> > Colin Cross wrote at Tuesday, January 24, 2012 3:33 PM:
> > > On Tue, Jan 24, 2012 at 2:08 PM, Stephen Warren <swarren@nvidia.com> wrote:
> > > > Peter De Schrijver wrote at Tuesday, January 24, 2012 2:53 AM:
> > > > > What about the peripheral resets which are also handled by the CAR? Peripheral clock nodes also offer assert and deassert methods for the reset signal associated with them. Those methods are used when power-gating domains, for example. Should we model this in the same binding?
> > > > In most cases, I think the resets are handled purely within clock enable and disable, so there's no need to explicitly manage them or represent them in the device tree. Do you agree here?
> > > clk_enable could force a deasserted reset, but clk_disable cannot imply asserting reset. The clocks need to be turned off for power management without resetting the registers.
> > Yes, I just spoke to someone off-list, and he said the same thing. He went on to say that therefore even the reset removal with clk_enable was questionable, and that drivers should explicitly manage reset removal themselves. Does that seem a reasonable stance? It'd certainly take away the somewhat asymmetric nature of reset removal just magically working when you first enable the clocks. We'd presumably then want a common reset infrastructure and binding to match that change?
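For reference, a common reset binding did later settle in mainline around a `resets` consumer property and a provider `#reset-cells` property. A hedged sketch of what that looks like for a Tegra peripheral follows; the node names, unit addresses, and index values here are illustrative, not taken from this thread:

```dts
/* Provider: the CAR exposes resets via a one-cell specifier. */
tegra_car: clock@60006000 {
        compatible = "nvidia,tegra20-car";
        reg = <0x60006000 0x1000>;
        #clock-cells = <1>;
        #reset-cells = <1>;
};

/* Consumer: a peripheral names its reset line by phandle + index. */
i2c@7000c000 {
        clocks = <&tegra_car 12>;      /* index is illustrative */
        resets = <&tegra_car 12>;
        reset-names = "i2c";
};
```

On Tegra the clock and reset indices happen to line up per peripheral in the CAR, which is why the same phandle can serve both properties.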
> In 99% of drivers reset is irrelevant, as long as the block is out of reset by the time the driver starts writing to registers. clk_enable is a decent place to ensure that - it always has to be done before writing to the registers, and it maps nicely to the reset bits on Tegra. It would be nice if those drivers didn't need to deassert reset explicitly, especially if that requires a Tegra-specific API.
I think this could be solved by moving the drivers to runtime PM and handling the clock and reset bits in Tegra platform-device-specific code. IIRC that's the way it's handled on OMAP?
Cheers,
Peter.