
Currently the fitImage data area is resized in 1 kiB steps. This works well when bundling smaller images below some 1 MiB, but when bundling large images into the fitImage, it makes binman spend an extreme amount of time and CPU just spinning in pylibfdt FdtSw.check_space() until the data area grows large enough for the large image to fit. Increase the default step to 64 kiB, which is a reasonable compromise: the U-Boot blobs are somewhere in the 64 kiB...1 MiB range, the DT blobs are just short of 64 kiB, and so are the other blobs. This reduces the binman runtime with a 32 MiB blob from 2.3 minutes to 5 seconds.
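The quadratic cost is easy to see outside of binman as well. The following is a minimal sketch, not part of this patch; the node names and the 8 MiB blob size are arbitrary, and only pylibfdt is assumed:

  import time
  import libfdt

  def build_fit(blob, inc_size):
      # FdtSw.check_space() grows the buffer by INC_SIZE bytes each time
      # a write fails with FDT_ERR_NOSPACE and then retries the write, so
      # a small step means many resize-and-retry rounds for a large blob.
      fsw = libfdt.FdtSw()
      fsw.INC_SIZE = inc_size
      fsw.finish_reservemap()
      with fsw.add_node(''):
          with fsw.add_node('images'):
              with fsw.add_node('test'):
                  fsw.property('data', blob)
      return fsw.as_fdt()

  blob = bytes(8 * 1024 * 1024)  # stand-in for a large rand.bin
  for step in (1024, 65536):
      start = time.time()
      build_fit(blob, step)
      print('INC_SIZE=%6d: %.1f s' % (step, time.time() - start))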
The following can be used to trigger the problem if rand.bin is some 32 MiB:
"
/ {
	itb {
		fit {
			images {
				test {
					compression = "none";
					description = "none";
					type = "flat_dt";

					blob {
						filename = "rand.bin";
						type = "blob-ext";
					};
				};
			};

			configurations {
				binman_configuration: config {
					loadables = "test";
				};
			};
		};
	};
};
"
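Assuming the snippet above is saved as test.dts (with a /dts-v1/; header prepended), one possible way to reproduce the slowdown is shown below; the exact binman invocation may differ depending on the setup:

  dtc -I dts -O dtb -o test.dtb test.dts
  binman build -d test.dtb -O out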
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: Alper Nebi Yasak <alpernebiyasak@gmail.com>
Cc: Simon Glass <sjg@chromium.org>
---
 tools/binman/etype/fit.py | 1 +
 1 file changed, 1 insertion(+)
diff --git a/tools/binman/etype/fit.py b/tools/binman/etype/fit.py
index 12306623af6..ad43fce18ec 100644
--- a/tools/binman/etype/fit.py
+++ b/tools/binman/etype/fit.py
@@ -658,6 +658,7 @@ class Entry_fit(Entry_section):
         # Build a new tree with all nodes and properties starting from the
         # entry node
         fsw = libfdt.FdtSw()
+        fsw.INC_SIZE = 65536
         fsw.finish_reservemap()
         to_remove = []
         loadables = []