
Hi Stefan,
On Thu, 13 Jul 2023 at 08:17, Stefan Roese <sr@denx.de> wrote:
This patch adds the new Kconfig option CONFIG_CYCLIC_RATELIMIT_US which defines the min allowed time after with a new call into the cyclic infrastructure is allowed. This results in a ratelimiting of the all functions hooked into the cyclic interface. As it's been noticed that on some platforms, that high frequent calls to schedule() (replaced from WATCHDOG_RESET) may lead to a performance degration.
When a high frequent calling into schedule() is detected, a warning is logged only once to help indentify this frantic caller.
Signed-off-by: Stefan Roese <sr@denx.de>
Cc: Simon Glass <sjg@chromium.org>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
 common/Kconfig                    | 11 +++++++++++
 common/cyclic.c                   | 27 +++++++++++++++++++++++++--
 include/asm-generic/global_data.h |  1 +
 3 files changed, 37 insertions(+), 2 deletions(-)
diff --git a/common/Kconfig b/common/Kconfig
index 42baca20a618..0611cce301a5 100644
--- a/common/Kconfig
+++ b/common/Kconfig
@@ -601,6 +601,17 @@ config CYCLIC_MAX_CPU_TIME_US
 	  takes longer than this duration this function will get unregistered
 	  automatically.
+config CYCLIC_RATELIMIT_US
int "Sets the min time/delay in us after a new call into schedule is done"
default 100
Should the default rather be 1000?
+	help
+	  The min allowed time after with a new call into the cyclic
+	  infrastructure is allowed. This results in a ratelimiting of the
Nit: 'ratelimiting' should be two words: 'rate limiting'.
+	  all functions hooked into the cyclic interface.
+	  As it's been noticed that on some platforms, that high frequent
+	  calls to schedule() (replaced from WATCHDOG_RESET) may lead to a
+	  performance degration.
Typo: 'degration' should be 'degradation'. The same typo also appears below, in the cyclic.c code comment (and in the commit message).
endif # CYCLIC
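As an aside, for readers not familiar with the cyclic framework: the "functions hooked into the cyclic interface" that this option throttles are the callbacks registered via cyclic_register(). A minimal sketch of such a hook follows; it assumes the current cyclic_register(func, delay_us, name, ctx) signature and uses made-up names (blink_cb, blink_init), so treat it purely as illustration:

#include <cyclic.h>
#include <linux/errno.h>

/* Hypothetical callback, serviced from schedule() via cyclic_run() */
static void blink_cb(void *ctx)
{
	/* e.g. toggle a LED or poll some slow device */
}

static int blink_init(void)
{
	struct cyclic_info *c;

	/* Ask the cyclic framework to call blink_cb() every 100 ms */
	c = cyclic_register(blink_cb, 100 * 1000, "blink", NULL);
	if (!c)
		return -ENOMEM;

	return 0;
}

With CONFIG_CYCLIC_RATELIMIT_US in place, schedule() only enters cyclic_run() (and thus only gets a chance to service such callbacks) at most once per configured window, instead of on every single schedule() call.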
 config EVENT

diff --git a/common/cyclic.c b/common/cyclic.c
index a49bfc88f5c0..c945ae55c965 100644
--- a/common/cyclic.c
+++ b/common/cyclic.c
@@ -12,6 +12,7 @@
 #include <log.h>
 #include <malloc.h>
 #include <time.h>
+#include <linux/bug.h>
 #include <linux/errno.h>
 #include <linux/list.h>
 #include <asm/global_data.h>
@@ -109,8 +110,30 @@ void schedule(void)
	 * schedule() might get called very early before the cyclic IF is
	 * ready. Make sure to only call cyclic_run() when it's initalized.
	 */
-	if (gd)
-		cyclic_run();
+	if (gd) {
+		uint64_t now;
This should be u64 here, I think.
+		/*
+		 * Add some ratelimiting to not call into the cyclic IF too
+		 * ofter. As it's been noticed that on some platforms, that
+		 * high frequent calls to schedule() (replaced from
+		 * WATCHDOG_RESET) may lead to a performance degration.
+		 */
+		/* Do not reset the watchdog too often */
Please join this with the comment block above.
+		now = timer_get_us();
+		if (time_after_eq64(now, gd->cyclic_next_call)) {
+			cyclic_run();
+			gd->cyclic_next_call = now + CONFIG_CYCLIC_RATELIMIT_US;
+		} else {
+			/*
+			 * Throw a warning (only once) to help identifying
+			 * frantic callers
+			 */
+			WARN_ONCE(1, "schedule() called very often, now = %lld us next call at %lld us, dt = %lld us\n",
+				  now, gd->cyclic_next_call,
+				  gd->cyclic_next_call - now);
+		}
+	}
 }
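To make my two comments above concrete (u64, and merging the two comment blocks), here is roughly the shape I had in mind. It is an untested sketch built only from the helpers the patch already uses (timer_get_us(), time_after_eq64(), WARN_ONCE()) and the includes already present in common/cyclic.c, not something to copy verbatim:

#include <cyclic.h>
#include <time.h>
#include <linux/bug.h>
#include <asm/global_data.h>

DECLARE_GLOBAL_DATA_PTR;

void schedule(void)
{
	/*
	 * schedule() might get called very early before the cyclic IF is
	 * ready. Make sure to only call cyclic_run() when it's initialized.
	 */
	if (gd) {
		u64 now;

		/*
		 * Rate limit the calls into the cyclic IF, i.e. do not
		 * reset the watchdog too often: on some platforms, very
		 * frequent calls to schedule() (the WATCHDOG_RESET
		 * replacement) have been seen to degrade performance.
		 */
		now = timer_get_us();
		if (time_after_eq64(now, gd->cyclic_next_call)) {
			cyclic_run();
			gd->cyclic_next_call = now + CONFIG_CYCLIC_RATELIMIT_US;
		} else {
			/* Warn only once to help identify frantic callers */
			WARN_ONCE(1, "schedule() called very often, now = %llu us, next call at %llu us\n",
				  now, gd->cyclic_next_call);
		}
	}
}

I also dropped the dt value from the warning in this sketch, since it can be derived from the other two numbers; that part is purely cosmetic, keep it if you find it useful.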
 int cyclic_unregister_all(void)

diff --git a/include/asm-generic/global_data.h b/include/asm-generic/global_data.h
index a1e1b9d64005..af26ae4dfc65 100644
--- a/include/asm-generic/global_data.h
+++ b/include/asm-generic/global_data.h
@@ -484,6 +484,7 @@ struct global_data {
	 * @cyclic_list: list of registered cyclic functions
	 */
	struct hlist_head cyclic_list;
+	uint64_t cyclic_next_call;
 #endif
	/**
	 * @dmtag_list: List of DM tags
--
2.41.0
Regards,
Simon