
On 25/10/2022 18.32, Tim Harvey wrote:
> Greetings,
>
> I've noticed a regression since the merge of the cyclic framework use
> for the watchdog on my imx8m boards:
>
>   cyclic_register for watchdog@30280000 failed
>   WDT:   Failed to start watchdog@30280000
>
> A bisect led me to the following 3 sequential patches:
>
>   29caf9305b6f cyclic: Use schedule() instead of WATCHDOG_RESET()
>   ^^^ bad
>   881d4108257a cyclic: Introduce schedule() function
>   ^^^ unbuildable
>   c2fd0ca1a822 watchdog: Integrate watchdog triggering into the cyclic framework
>   ^^^ unbootable
Can you provide some more details on "unbootable" and "unbuildable"? I.e., what build error do you get for the middle patch, and what do you see on the console with the "unbootable" image?
Also, the .configs used in each case might be interesting, certainly all lines containing CYCLIC, WATCHDOG or WDT.
One thing I notice is that wdt_stop() should probably also unregister the cyclic function; there's really no point in having the cyclic framework keep calling wdt_cyclic() only for it to bail out at "if (!priv->running)". Moreover, it's almost certainly a bug if the device is later started again via another wdt_start(), because then we end up with two cyclic instances registered for the same device.
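Roughly what I have in mind, as a sketch only (I'm going from memory on the member and helper names used in wdt-uclass.c, i.e. ->cyclic, ->running, device_get_ops(), dev_get_uclass_priv() and cyclic_unregister(), so double-check against master):

/* Sketch of wdt_stop() in drivers/watchdog/wdt-uclass.c. */
int wdt_stop(struct udevice *dev)
{
	const struct wdt_ops *ops = device_get_ops(dev);
	struct wdt_priv *priv = dev_get_uclass_priv(dev);
	int ret;

	if (!ops->stop)
		return -ENOSYS;

	ret = ops->stop(dev);
	if (ret)
		return ret;

	priv->running = false;

	/*
	 * Drop the cyclic instance so the framework stops calling
	 * wdt_cyclic() for this device, and so a later wdt_start() can
	 * register a fresh one instead of stacking a second registration.
	 */
	if (priv->cyclic) {
		cyclic_unregister(priv->cyclic);
		priv->cyclic = NULL;
	}

	return 0;
}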
I also think the cyclic API is somewhat misdesigned. The storage for the cyclic metadata should be provided by the caller (e.g. in the watchdog case simply embedded in struct wdt_priv), so that cyclic_register() can never fail. Otherwise we have this awkward situation where ops->start() has already been called and succeeded, but then perhaps we fail to register the cyclic instance; should we then do wdt_stop(), and what if _that_ then fails?
This would also allow the callback to be a little more type-safe: let the callback take a "struct cyclic_info *" as its argument, and then the callback can use e.g. container_of() to get at the containing wdt_priv.
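Something like the sketch below is what I'm thinking of. To be clear, the cyclic_register()/cyclic_unregister() prototypes shown are the *proposed* ones, not what include/cyclic.h has today, and the wdt_priv/wdt_cyclic names just follow the existing wdt-uclass naming:

/*
 * Sketch of how drivers/watchdog/wdt-uclass.c could look if the cyclic
 * API were changed so that the caller provides the storage and the
 * callback receives the struct cyclic_info pointer, i.e. something like
 *
 *   void cyclic_register(struct cyclic_info *c,
 *                        void (*func)(struct cyclic_info *),
 *                        uint64_t delay_us, const char *name);
 *   void cyclic_unregister(struct cyclic_info *c);
 *
 * None of this exists today; names are for illustration only.
 */
#include <cyclic.h>
#include <dm.h>
#include <wdt.h>
#include <linux/kernel.h>	/* container_of() */

struct wdt_priv {
	/* ... other existing members (timeout, next_reset, ...) ... */
	u32 reset_period;		/* ms between device resets */
	bool running;
	struct cyclic_info cyclic;	/* embedded: no allocation, cannot fail */
};

static void wdt_cyclic(struct cyclic_info *c)
{
	/* Type-safe: recover the containing wdt_priv, no void *ctx cast. */
	struct wdt_priv *priv = container_of(c, struct wdt_priv, cyclic);

	if (!priv->running)
		return;

	/* ... same "is it time for the next reset?" logic as today ... */
}

int wdt_start(struct udevice *dev, u64 timeout_ms, ulong flags)
{
	const struct wdt_ops *ops = device_get_ops(dev);
	struct wdt_priv *priv = dev_get_uclass_priv(dev);
	int ret;

	if (!ops->start)
		return -ENOSYS;

	ret = ops->start(dev, timeout_ms, flags);
	if (ret)
		return ret;

	/*
	 * Registration cannot fail with caller-provided storage, so there
	 * is no "ops->start() succeeded but cyclic_register() failed"
	 * state left to unwind.
	 */
	cyclic_register(&priv->cyclic, wdt_cyclic,
			priv->reset_period * 1000, dev->name);
	priv->running = true;

	return 0;
}

The price is that struct cyclic_info can no longer be opaque to callers, since they need its size to embed it, but that seems small compared to eliminating the error path entirely.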
Rasmus