[PATCH 0/9] Enable arm64 host support in Gitlab

Hey all,
First, thanks to Simon Glass and also Linaro, we now have access to a few fast arm64 host machines in our Gitlab instance, to use as CI runners. This series finishes the work, started by Simon and pushed further by me earlier, that enables arm64 hosts to be used for most things now.
The first notable change, especially if you use this on your own Gitlab instance, is that "DEFAULT_TAG" is now unused and we instead have:

- DEFAULT_ALL_TAG:
- DEFAULT_ARM64_TAG:
- DEFAULT_AMD64_TAG:
- DEFAULT_FAST_AMD64_TAG:
This lets us say that some jobs can be run on all runners: they are small enough that anything we'd connect to CI is fast enough, and they do not depend on the underlying host architecture. Next we have tags for any arm64 host, or any amd64 host. Finally, we have a tag for fast amd64 hosts. The last three exist because a few jobs need to run on amd64 hosts and so we have to restrict them there, but we have also now reworked the world build jobs to build (almost) everything in a single job, and on the fast amd64 machines this is still as quick as the old way was, in practice.
To reach this point, we say that the Xtensa jobs can only run on amd64 hosts. Our targets only work with the binary-only toolchain, so this is a reasonable limit, and we exclude them from the world build jobs. We also need to ensure the right toolchain is used regardless of what the host architecture is, and that we don't use the host toolchain by accident. Finally, because some of these changes needed to be worked out in the linter, fix some of the general warnings that it notes as well.
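As a sketch of how these tag variables are meant to be consumed (the job name below is hypothetical and not part of this series), a job pins itself to a class of runners via its tags, and jobs without an explicit entry inherit the "all" default:

```yaml
# Hypothetical job: restrict scheduling to the fast amd64 runners only
build something heavy:
  tags:
    - ${DEFAULT_FAST_AMD64_TAG}

# Jobs that set no tags: inherit the default, i.e. ${DEFAULT_ALL_TAG}
```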

The xtensa architecture is interesting in that the platforms we support are only valid with the binary-only toolchains, as the DC233C instruction set requires those toolchains (and not the FSF instruction set). Only install the binary toolchain on amd64 hosts and only run the tests on them as well.
Signed-off-by: Tom Rini <trini@konsulko.com>
---
 .gitlab-ci.yml          | 8 +++++++-
 tools/docker/Dockerfile | 4 +++-
 2 files changed, 10 insertions(+), 2 deletions(-)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 57037e243ecb..2671c3bb1061 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -144,12 +144,14 @@ build all PowerPC platforms:
         exit $ret;
       fi;
 
+# We exclude xtensa here due to not being able to build on aarch64
+# hosts but covering all platforms in the pytest section.
 build all other platforms:
   extends: .world_build
   script:
     - ret=0;
       git config --global --add safe.directory "${CI_PROJECT_DIR}";
-      ./tools/buildman/buildman -o /tmp -PEWM -x arm,powerpc || ret=$?;
+      ./tools/buildman/buildman -o /tmp -PEWM -x arm,powerpc,xtensa || ret=$?;
       if [[ $ret -ne 0 ]]; then
         ./tools/buildman/buildman -o /tmp -seP;
         exit $ret;
@@ -451,6 +453,8 @@ qemu-xtensa-dc233c test.py:
   variables:
     TEST_PY_BD: "qemu-xtensa-dc233c"
     TEST_PY_TEST_SPEC: "not sleep and not efi"
+  tags:
+    - all
   <<: *buildman_and_testpy_dfn
 
 r2dplus_i82557c test.py:
@@ -514,6 +518,8 @@ xtfpga test.py:
     TEST_PY_BD: "xtfpga"
     TEST_PY_TEST_SPEC: "not sleep"
     TEST_PY_ID: "--id qemu"
+  tags:
+    - all
   <<: *buildman_and_testpy_dfn
 
 coreboot test.py:

diff --git a/tools/docker/Dockerfile b/tools/docker/Dockerfile
index ce1ad7cb23a2..8723834e6e5c 100644
--- a/tools/docker/Dockerfile
+++ b/tools/docker/Dockerfile
@@ -320,7 +320,9 @@ RUN virtualenv -p /usr/bin/python3 /tmp/venv && \
 # Create the buildman config file
 RUN /bin/echo -e "[toolchain]\nroot = /usr" > ~/.buildman
 RUN /bin/echo -e "kernelorg = /opt/gcc-13.2.0-nolibc/*" >> ~/.buildman
-RUN /bin/echo -e "\n[toolchain-prefix]\nxtensa = /opt/2020.07/xtensa-dc233c-elf/bin/xtensa-dc233c-elf-" >> ~/.buildman;
+RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
+	/bin/echo -e "\n[toolchain-prefix]\nxtensa = /opt/2020.07/xtensa-dc233c-elf/bin/xtensa-dc233c-elf-" >> ~/.buildman; \
+	fi
 RUN /bin/echo -e "\n[toolchain-alias]\nsh = sh2" >> ~/.buildman
 RUN /bin/echo -e "\nx86 = i386" >> ~/.buildman;
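A side note on the `ret=0; … || ret=$?` idiom used throughout these CI scripts: it records a failing buildman exit status without aborting the script, so the error-summary step can still run before the job exits. A standalone sketch of the pattern, using `false` as a stand-in for the buildman invocation:

```shell
#!/bin/bash
# Sketch of the CI error-capture idiom; `false` stands in for buildman here.
ret=0
false || ret=$?              # capture the failure without aborting the script
if [[ $ret -ne 0 ]]; then
  # in CI this is where "./tools/buildman/buildman -o /tmp -seP" would
  # print a summary of the errors before exiting with $ret
  echo "build failed with status $ret"
fi
```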

On Sun, 8 Dec 2024 at 12:08, Tom Rini trini@konsulko.com wrote:
The xtensa architecture is interesting in that the platforms we support are only valid on the binary-only toolchains as the DC233C instruction set requires those toolchains (and not the FSF instruction set). Only install the binary toolchain on amd64 hosts and only run the tests on them as well.
Signed-off-by: Tom Rini trini@konsulko.com
 .gitlab-ci.yml          | 8 +++++++-
 tools/docker/Dockerfile | 4 +++-
 2 files changed, 10 insertions(+), 2 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

On Sun, Dec 08, 2024 at 11:07:24AM -0600, Tom Rini wrote:
The xtensa architecture is interesting in that the platforms we support are only valid on the binary-only toolchains as the DC233C instruction set requires those toolchains (and not the FSF instruction set). Only install the binary toolchain on amd64 hosts and only run the tests on them as well.
Signed-off-by: Tom Rini <trini@konsulko.com>
Reviewed-by: Simon Glass <sjg@chromium.org>
For the series, applied to u-boot/next, thanks!

Remove the rest of the places where we hard-code the version of the toolchain we're using.
Signed-off-by: Tom Rini <trini@konsulko.com>
---
 tools/docker/Dockerfile | 52 ++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 26 deletions(-)
diff --git a/tools/docker/Dockerfile b/tools/docker/Dockerfile
index 8723834e6e5c..38a9c4b9ec69 100644
--- a/tools/docker/Dockerfile
+++ b/tools/docker/Dockerfile
@@ -156,11 +156,11 @@ RUN git clone git://git.savannah.gnu.org/grub.git /tmp/grub && \
     mkdir -p /opt/grub && \
     ./configure --target=aarch64 --with-platform=efi \
     CC=gcc \
-    TARGET_CC=/opt/gcc-13.2.0-nolibc/aarch64-linux/bin/aarch64-linux-gcc \
-    TARGET_OBJCOPY=/opt/gcc-13.2.0-nolibc/aarch64-linux/bin/aarch64-linux-objcopy \
-    TARGET_STRIP=/opt/gcc-13.2.0-nolibc/aarch64-linux/bin/aarch64-linux-strip \
-    TARGET_NM=/opt/gcc-13.2.0-nolibc/aarch64-linux/bin/aarch64-linux-nm \
-    TARGET_RANLIB=/opt/gcc-13.2.0-nolibc/aarch64-linux/bin/aarch64-linux-ranlib && \
+    TARGET_CC=/opt/gcc-${TCVER}-nolibc/aarch64-linux/bin/aarch64-linux-gcc \
+    TARGET_OBJCOPY=/opt/gcc-${TCVER}-nolibc/aarch64-linux/bin/aarch64-linux-objcopy \
+    TARGET_STRIP=/opt/gcc-${TCVER}-nolibc/aarch64-linux/bin/aarch64-linux-strip \
+    TARGET_NM=/opt/gcc-${TCVER}-nolibc/aarch64-linux/bin/aarch64-linux-nm \
+    TARGET_RANLIB=/opt/gcc-${TCVER}-nolibc/aarch64-linux/bin/aarch64-linux-ranlib && \
     make -j$(nproc) && \
     ./grub-mkimage -O arm64-efi -o /opt/grub/grubaa64.efi --prefix= -d \
     grub-core cat chain configfile echo efinet ext2 fat halt help linux \
@@ -170,11 +170,11 @@ RUN git clone git://git.savannah.gnu.org/grub.git /tmp/grub && \
     make clean && \
     ./configure --target=arm --with-platform=efi \
     CC=gcc \
-    TARGET_CC=/opt/gcc-13.2.0-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-gcc \
-    TARGET_OBJCOPY=/opt/gcc-13.2.0-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-objcopy \
-    TARGET_STRIP=/opt/gcc-13.2.0-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-strip \
-    TARGET_NM=/opt/gcc-13.2.0-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-nm \
-    TARGET_RANLIB=/opt/gcc-13.2.0-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-ranlib && \
+    TARGET_CC=/opt/gcc-${TCVER}-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-gcc \
+    TARGET_OBJCOPY=/opt/gcc-${TCVER}-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-objcopy \
+    TARGET_STRIP=/opt/gcc-${TCVER}-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-strip \
+    TARGET_NM=/opt/gcc-${TCVER}-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-nm \
+    TARGET_RANLIB=/opt/gcc-${TCVER}-nolibc/arm-linux-gnueabi/bin/arm-linux-gnueabi-ranlib && \
     make -j$(nproc) && \
     ./grub-mkimage -O arm-efi -o /opt/grub/grubarm.efi --prefix= -d \
     grub-core cat chain configfile echo efinet ext2 fat halt help linux \
@@ -184,11 +184,11 @@ RUN git clone git://git.savannah.gnu.org/grub.git /tmp/grub && \
     make clean && \
     ./configure --target=riscv64 --with-platform=efi \
     CC=gcc \
-    TARGET_CC=/opt/gcc-13.2.0-nolibc/riscv64-linux/bin/riscv64-linux-gcc \
-    TARGET_OBJCOPY=/opt/gcc-13.2.0-nolibc/riscv64-linux/bin/riscv64-linux-objcopy \
-    TARGET_STRIP=/opt/gcc-13.2.0-nolibc/riscv64-linux/bin/riscv64-linux-strip \
-    TARGET_NM=/opt/gcc-13.2.0-nolibc/riscv64-linux/bin/riscv64-linux-nm \
-    TARGET_RANLIB=/opt/gcc-13.2.0-nolibc/riscv64-linux/bin/riscv64-linux-ranlib && \
+    TARGET_CC=/opt/gcc-${TCVER}-nolibc/riscv64-linux/bin/riscv64-linux-gcc \
+    TARGET_OBJCOPY=/opt/gcc-${TCVER}-nolibc/riscv64-linux/bin/riscv64-linux-objcopy \
+    TARGET_STRIP=/opt/gcc-${TCVER}-nolibc/riscv64-linux/bin/riscv64-linux-strip \
+    TARGET_NM=/opt/gcc-${TCVER}-nolibc/riscv64-linux/bin/riscv64-linux-nm \
+    TARGET_RANLIB=/opt/gcc-${TCVER}-nolibc/riscv64-linux/bin/riscv64-linux-ranlib && \
     make -j$(nproc) && \
     ./grub-mkimage -O riscv64-efi -o /opt/grub/grubriscv64.efi --prefix= -d \
     grub-core cat chain configfile echo efinet ext2 fat halt help linux \
@@ -198,22 +198,22 @@ RUN git clone git://git.savannah.gnu.org/grub.git /tmp/grub && \
     make clean && \
     ./configure --target=i386 --with-platform=efi \
     CC=gcc \
-    TARGET_CC=/opt/gcc-13.2.0-nolibc/i386-linux/bin/i386-linux-gcc \
-    TARGET_OBJCOPY=/opt/gcc-13.2.0-nolibc/i386-linux/bin/i386-linux-objcopy \
-    TARGET_STRIP=/opt/gcc-13.2.0-nolibc/i386-linux/bin/i386-linux-strip \
-    TARGET_NM=/opt/gcc-13.2.0-nolibc/i386-linux/bin/i386-linux-nm \
-    TARGET_RANLIB=/opt/gcc-13.2.0-nolibc/i386-linux/bin/i386-linux-ranlib && \
+    TARGET_CC=/opt/gcc-${TCVER}-nolibc/i386-linux/bin/i386-linux-gcc \
+    TARGET_OBJCOPY=/opt/gcc-${TCVER}-nolibc/i386-linux/bin/i386-linux-objcopy \
+    TARGET_STRIP=/opt/gcc-${TCVER}-nolibc/i386-linux/bin/i386-linux-strip \
+    TARGET_NM=/opt/gcc-${TCVER}-nolibc/i386-linux/bin/i386-linux-nm \
+    TARGET_RANLIB=/opt/gcc-${TCVER}-nolibc/i386-linux/bin/i386-linux-ranlib && \
     make -j$(nproc) && \
     ./grub-mkimage -O i386-efi -o /opt/grub/grub_x86.efi --prefix= -d \
     grub-core normal echo lsefimmap lsefi lsefisystab efinet tftp minicmd && \
     make clean && \
     ./configure --target=x86_64 --with-platform=efi \
     CC=gcc \
-    TARGET_CC=/opt/gcc-13.2.0-nolibc/x86_64-linux/bin/x86_64-linux-gcc \
-    TARGET_OBJCOPY=/opt/gcc-13.2.0-nolibc/x86_64-linux/bin/x86_64-linux-objcopy \
-    TARGET_STRIP=/opt/gcc-13.2.0-nolibc/x86_64-linux/bin/x86_64-linux-strip \
-    TARGET_NM=/opt/gcc-13.2.0-nolibc/x86_64-linux/bin/x86_64-linux-nm \
-    TARGET_RANLIB=/opt/gcc-13.2.0-nolibc/x86_64-linux/bin/x86_64-linux-ranlib && \
+    TARGET_CC=/opt/gcc-${TCVER}-nolibc/x86_64-linux/bin/x86_64-linux-gcc \
+    TARGET_OBJCOPY=/opt/gcc-${TCVER}-nolibc/x86_64-linux/bin/x86_64-linux-objcopy \
+    TARGET_STRIP=/opt/gcc-${TCVER}-nolibc/x86_64-linux/bin/x86_64-linux-strip \
+    TARGET_NM=/opt/gcc-${TCVER}-nolibc/x86_64-linux/bin/x86_64-linux-nm \
+    TARGET_RANLIB=/opt/gcc-${TCVER}-nolibc/x86_64-linux/bin/x86_64-linux-ranlib && \
     make -j$(nproc) && \
     ./grub-mkimage -O x86_64-efi -o /opt/grub/grub_x64.efi --prefix= -d \
     grub-core normal echo lsefimmap lsefi lsefisystab efinet tftp minicmd && \
@@ -319,7 +319,7 @@ RUN virtualenv -p /usr/bin/python3 /tmp/venv && \
 
 # Create the buildman config file
 RUN /bin/echo -e "[toolchain]\nroot = /usr" > ~/.buildman
-RUN /bin/echo -e "kernelorg = /opt/gcc-13.2.0-nolibc/*" >> ~/.buildman
+RUN /bin/echo -e "kernelorg = /opt/gcc-${TCVER}-nolibc/*" >> ~/.buildman
 RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
 	/bin/echo -e "\n[toolchain-prefix]\nxtensa = /opt/2020.07/xtensa-dc233c-elf/bin/xtensa-dc233c-elf-" >> ~/.buildman; \
 	fi

On Sun, 8 Dec 2024 at 12:08, Tom Rini trini@konsulko.com wrote:
Remove the rest of the places where we hard-code the version of the toolchain we're using.
Signed-off-by: Tom Rini trini@konsulko.com
 tools/docker/Dockerfile | 52 ++++++++++++++++++++---------------------
 1 file changed, 26 insertions(+), 26 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

First, introduce DEFAULT_ALL_TAG, DEFAULT_ARM64_TAG, DEFAULT_AMD64_TAG and DEFAULT_FAST_AMD64_TAG and remove the previous DEFAULT_TAG (anyone making use of that will need to adjust their jobs). This allows us to say that some jobs can run on amd64 or arm64 hosts under the "all" tag, while some jobs must run on amd64 (the Xtensa jobs due to binary-only toolchains, and sandbox for now).

Then we rework the world build stage to only run on our very fast amd64 hosts, or our arm64 hosts (which are also very fast). This should result in a similar overall build time, but a much more consistent one, as we won't have the two big world jobs possibly running on our slower build nodes.
Signed-off-by: Tom Rini <trini@konsulko.com>
---
 .gitlab-ci.yml | 74 ++++++++++++++++++--------------------------------
 1 file changed, 27 insertions(+), 47 deletions(-)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 2671c3bb1061..7912eeee4401 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -1,14 +1,17 @@
 # SPDX-License-Identifier: GPL-2.0+
 
 variables:
-  DEFAULT_TAG: ""
+  DEFAULT_ALL_TAG: "all"
+  DEFAULT_ARM64_TAG: "arm64"
+  DEFAULT_AMD64_TAG: "amd64"
+  DEFAULT_FAST_AMD64_TAG: "fast amd64"
   MIRROR_DOCKER: docker.io
   SJG_LAB: ""
   PLATFORM: linux/amd64,linux/arm64
 
 default:
   tags:
-    - ${DEFAULT_TAG}
+    - ${DEFAULT_ALL_TAG}
 
 # Grab our configured image. The source for this is found
 # in the u-boot tree at tools/docker/Dockerfile
@@ -102,56 +105,21 @@ stages:
       junit: results.xml
     expire_in: 1 week
 
-.world_build:
+build all platforms in a single job:
   stage: world build
   rules:
     - when: always
-
-build all 32bit ARM platforms:
-  extends: .world_build
-  script:
-    - ret=0;
-      git config --global --add safe.directory "${CI_PROJECT_DIR}";
-      pip install -r tools/buildman/requirements.txt;
-      ./tools/buildman/buildman -o /tmp -PEWM arm -x aarch64 || ret=$?;
-      if [[ $ret -ne 0 ]]; then
-        ./tools/buildman/buildman -o /tmp -seP;
-        exit $ret;
-      fi;
-
-build all 64bit ARM platforms:
-  extends: .world_build
+  parallel:
+    matrix:
+      - HOST: "arm64"
+      - HOST: "fast amd64"
+  tags:
+    - ${HOST}
   script:
-    - virtualenv -p /usr/bin/python3 /tmp/venv
-    - . /tmp/venv/bin/activate
     - ret=0;
       git config --global --add safe.directory "${CI_PROJECT_DIR}";
       pip install -r tools/buildman/requirements.txt;
-      ./tools/buildman/buildman -o /tmp -PEWM aarch64 || ret=$?;
-      if [[ $ret -ne 0 ]]; then
-        ./tools/buildman/buildman -o /tmp -seP;
-        exit $ret;
-      fi;
-
-build all PowerPC platforms:
-  extends: .world_build
-  script:
-    - ret=0;
-      git config --global --add safe.directory "${CI_PROJECT_DIR}";
-      ./tools/buildman/buildman -o /tmp -P -E -W powerpc || ret=$?;
-      if [[ $ret -ne 0 ]]; then
-        ./tools/buildman/buildman -o /tmp -seP;
-        exit $ret;
-      fi;
-
-# We exclude xtensa here due to not being able to build on aarch64
-# hosts but covering all platforms in the pytest section.
-build all other platforms:
-  extends: .world_build
-  script:
-    - ret=0;
-      git config --global --add safe.directory "${CI_PROJECT_DIR}";
-      ./tools/buildman/buildman -o /tmp -PEWM -x arm,powerpc,xtensa || ret=$?;
+      ./tools/buildman/buildman -o /tmp -PEWM -x xtensa || ret=$?;
       if [[ $ret -ne 0 ]]; then
         ./tools/buildman/buildman -o /tmp -seP;
         exit $ret;
@@ -200,6 +168,8 @@ Build tools-only and envtools:
 
 Run binman, buildman, dtoc, Kconfig and patman testsuites:
   extends: .testsuites
+  tags:
+    - ${DEFAULT_AMD64_TAG}
   script:
     - git config --global user.name "GitLab CI Runner";
       git config --global user.email trini@konsulko.com;
@@ -259,22 +229,30 @@ Check packing of Python tools:
 
 # Test sandbox with test.py
 sandbox test.py:
+  tags:
+    - ${DEFAULT_AMD64_TAG}
   variables:
     TEST_PY_BD: "sandbox"
   <<: *buildman_and_testpy_dfn
 
 sandbox with clang test.py:
+  tags:
+    - ${DEFAULT_AMD64_TAG}
   variables:
     TEST_PY_BD: "sandbox"
     OVERRIDE: "-O clang-17"
   <<: *buildman_and_testpy_dfn
 
 sandbox64 test.py:
+  tags:
+    - ${DEFAULT_AMD64_TAG}
   variables:
     TEST_PY_BD: "sandbox64"
   <<: *buildman_and_testpy_dfn
 
 sandbox64 with clang test.py:
+  tags:
+    - ${DEFAULT_AMD64_TAG}
   variables:
     TEST_PY_BD: "sandbox64"
     OVERRIDE: "-O clang-17"
@@ -329,6 +307,8 @@ evb-ast2600 test.py:
   <<: *buildman_and_testpy_dfn
 
 sandbox_flattree test.py:
+  tags:
+    - ${DEFAULT_AMD64_TAG}
   variables:
     TEST_PY_BD: "sandbox_flattree"
   <<: *buildman_and_testpy_dfn
@@ -454,7 +434,7 @@ qemu-xtensa-dc233c test.py:
     TEST_PY_BD: "qemu-xtensa-dc233c"
     TEST_PY_TEST_SPEC: "not sleep and not efi"
   tags:
-    - all
+    - ${DEFAULT_AMD64_TAG}
   <<: *buildman_and_testpy_dfn
 
 r2dplus_i82557c test.py:
@@ -519,7 +499,7 @@ xtfpga test.py:
     TEST_PY_TEST_SPEC: "not sleep"
     TEST_PY_ID: "--id qemu"
   tags:
-    - all
+    - ${DEFAULT_AMD64_TAG}
   <<: *buildman_and_testpy_dfn
coreboot test.py:

Hi Tom,
On Sun, 8 Dec 2024 at 12:08, Tom Rini trini@konsulko.com wrote:
First, introduce DEFAULT_ALL_TAG, DEFAULT_ARM64_TAG, DEFAULT_AMD64_TAG and DEFAULT_FAST_AMD64_TAG and remove the previous DEFAULT_TAG (as anyone making use of that will need to adjust their jobs). This allows us to say that some jobs can run on amd64 or arm64 hosts under the all tag, while some jobs must run on amd64 (the Xtensa jobs due to binary-only toolchains and sandbox for now) Then we rework the world build stage to only run on our very fast amd64 hosts, or our arm64 hosts (which are also very fast). This should result in a similar overall build time but also a much more consistent overall build time as we won't have the two big world jobs possibly run on our slower build nodes.
Signed-off-by: Tom Rini trini@konsulko.com
 .gitlab-ci.yml | 74 ++++++++++++++++++--------------------------------
 1 file changed, 27 insertions(+), 47 deletions(-)
I'd prefer fast_amd64 to avoid spaces
What is the run time for this new, unified 'world' build? I just wonder whether splitting it might make the runs go faster?
Regards, Simon

On Mon, Dec 09, 2024 at 08:00:42AM -0700, Simon Glass wrote:
Hi Tom,
On Sun, 8 Dec 2024 at 12:08, Tom Rini trini@konsulko.com wrote:
First, introduce DEFAULT_ALL_TAG, DEFAULT_ARM64_TAG, DEFAULT_AMD64_TAG and DEFAULT_FAST_AMD64_TAG and remove the previous DEFAULT_TAG (as anyone making use of that will need to adjust their jobs). This allows us to say that some jobs can run on amd64 or arm64 hosts under the all tag, while some jobs must run on amd64 (the Xtensa jobs due to binary-only toolchains and sandbox for now) Then we rework the world build stage to only run on our very fast amd64 hosts, or our arm64 hosts (which are also very fast). This should result in a similar overall build time but also a much more consistent overall build time as we won't have the two big world jobs possibly run on our slower build nodes.
Signed-off-by: Tom Rini trini@konsulko.com
 .gitlab-ci.yml | 74 ++++++++++++++++++--------------------------------
 1 file changed, 27 insertions(+), 47 deletions(-)
I'd prefer fast_amd64 to avoid spaces
I don't see a reason to avoid spaces here.
What is the run time for this new, unified 'world' build? I just wonder whether splitting it might make the runs go faster?
It's about the same, with a worst case of ~40-45min, but it's more consistent now because the older / slower amd64 hosts are excluded.

When validating our current pipeline, a warning is produced about a lack of a default workflow. For how we use it, we can add a simple default of "always".
Signed-off-by: Tom Rini <trini@konsulko.com>
---
 .gitlab-ci.yml | 4 ++++
 1 file changed, 4 insertions(+)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 7912eeee4401..ae9120655b0c 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -13,6 +13,10 @@ default:
   tags:
     - ${DEFAULT_ALL_TAG}
 
+workflow:
+  rules:
+    - when: always
+
 # Grab our configured image. The source for this is found
 # in the u-boot tree at tools/docker/Dockerfile
 image: ${MIRROR_DOCKER}/trini/u-boot-gitlab-ci-runner:jammy-20240808-03Dec2024

On Sun, 8 Dec 2024 at 12:08, Tom Rini trini@konsulko.com wrote:
When validating our current pipeline, a warning is produced about a lack of a default workflow. For how we use it, we can add a simple default of "always".
Signed-off-by: Tom Rini trini@konsulko.com
 .gitlab-ci.yml | 4 ++++
 1 file changed, 4 insertions(+)
Reviewed-by: Simon Glass sjg@chromium.org

In the test.py stage of the build we mark the pytest results as artifacts to save, so that they can be used for reports. This however leads to all of the artifacts being downloaded (and then not used) in later stages. Optimize this out by using an empty dependencies list here ("dependencies" being the keyword that says which jobs' artifacts are needed).
Signed-off-by: Tom Rini <trini@konsulko.com>
---
 .gitlab-ci.yml | 2 ++
 1 file changed, 2 insertions(+)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index ae9120655b0c..696d3cb1bd74 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -111,6 +111,7 @@ stages:
 
 build all platforms in a single job:
   stage: world build
+  dependencies: []
   rules:
     - when: always
   parallel:
@@ -521,6 +522,7 @@ coreboot test.py:
     - if: $SJG_LAB != "1"
       when: manual
       allow_failure: true
+  dependencies: []
   tags: [ 'lab' ]
   script:
     - if [[ -z "${SJG_LAB}" ]]; then

On Sun, 8 Dec 2024 at 12:08, Tom Rini trini@konsulko.com wrote:
In the test.py stage of the build we mark the pytest results as artifacts to save, so that they can be used for reports. This however leads to all of the artifacts being downloaded (and then not used) in later stages. Optimize this out by using an empty list of dependencies here (which is the keyword for which artifacts are needed).
Signed-off-by: Tom Rini trini@konsulko.com
 .gitlab-ci.yml | 2 ++
 1 file changed, 2 insertions(+)
Reviewed-by: Simon Glass sjg@chromium.org

We do not want to use the host toolchain for building our platforms in CI (it is both too old, and would be inconsistent with our CI practices). To do this we need to set the toolchain-prefix so that we don't end up guessing "/opt/.../aarch64-linux-aarch64-linux-" as the prefix.
Link: https://source.denx.de/u-boot/custodians/u-boot-dm/-/issues/32
Signed-off-by: Tom Rini <trini@konsulko.com>
---
 tools/docker/Dockerfile | 3 +++
 1 file changed, 3 insertions(+)
diff --git a/tools/docker/Dockerfile b/tools/docker/Dockerfile
index 38a9c4b9ec69..5d77e2b62696 100644
--- a/tools/docker/Dockerfile
+++ b/tools/docker/Dockerfile
@@ -323,6 +323,9 @@ RUN /bin/echo -e "kernelorg = /opt/gcc-${TCVER}-nolibc/*" >> ~/.buildman
 RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
 	/bin/echo -e "\n[toolchain-prefix]\nxtensa = /opt/2020.07/xtensa-dc233c-elf/bin/xtensa-dc233c-elf-" >> ~/.buildman; \
 	fi
+RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
+	/bin/echo -e "\n[toolchain-prefix]\naarch64 = /opt/gcc-${TCVER}-nolibc/aarch64-linux/bin/aarch64-linux-" >> ~/.buildman; \
+	fi
 RUN /bin/echo -e "\n[toolchain-alias]\nsh = sh2" >> ~/.buildman
 RUN /bin/echo -e "\nx86 = i386" >> ~/.buildman;
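For reference, on an arm64 image this should leave a ~/.buildman along these lines (a sketch of the resulting file, not a verbatim dump; note that ${TCVER} is expanded by Docker at image build time, so the real file contains the literal version):

```ini
[toolchain]
root = /usr
kernelorg = /opt/gcc-${TCVER}-nolibc/*

[toolchain-prefix]
aarch64 = /opt/gcc-${TCVER}-nolibc/aarch64-linux/bin/aarch64-linux-
```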

On Sun, 8 Dec 2024 at 12:09, Tom Rini trini@konsulko.com wrote:
We do not want to use the host toolchain for building our platforms in CI (it is both too old, and would be inconsistent with our CI practices). To do this we need to set the toolchain-prefix so that we don't end up guessing "/opt/.../aarch64-linux-aarch64-linux-" as the prefix.
Link: https://source.denx.de/u-boot/custodians/u-boot-dm/-/issues/32 Signed-off-by: Tom Rini trini@konsulko.com
 tools/docker/Dockerfile | 3 +++
 1 file changed, 3 insertions(+)
Reviewed-by: Simon Glass sjg@chromium.org

We should always look in our downloaded toolchains first and then for host-provided toolchains.
Signed-off-by: Tom Rini <trini@konsulko.com>
---
 tools/docker/Dockerfile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/tools/docker/Dockerfile b/tools/docker/Dockerfile
index 5d77e2b62696..348605a2b6a3 100644
--- a/tools/docker/Dockerfile
+++ b/tools/docker/Dockerfile
@@ -318,8 +318,8 @@ RUN virtualenv -p /usr/bin/python3 /tmp/venv && \
     rm -rf /tmp/venv /tmp/*-requirements.txt
 
 # Create the buildman config file
-RUN /bin/echo -e "[toolchain]\nroot = /usr" > ~/.buildman
-RUN /bin/echo -e "kernelorg = /opt/gcc-${TCVER}-nolibc/*" >> ~/.buildman
+RUN /bin/echo -e "[toolchain]\nkernelorg = /opt/gcc-${TCVER}-nolibc/*" > ~/.buildman
+RUN /bin/echo -e "root = /usr" >> ~/.buildman
 RUN if [ "$TARGETPLATFORM" = "linux/amd64" ]; then \
 	/bin/echo -e "\n[toolchain-prefix]\nxtensa = /opt/2020.07/xtensa-dc233c-elf/bin/xtensa-dc233c-elf-" >> ~/.buildman; \
 	fi
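The net effect on the generated file is the ordering of the [toolchain] entries (a sketch; ${TCVER} is expanded at image build time):

```ini
[toolchain]
kernelorg = /opt/gcc-${TCVER}-nolibc/*
root = /usr
```

Per the commit message, the downloaded kernel.org cross toolchains now come first, ahead of whatever the distro installed under /usr.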

On Sun, 8 Dec 2024 at 12:08, Tom Rini trini@konsulko.com wrote:
We should always look in our downloaded toolchains first and then for host-provided toolchains.
Signed-off-by: Tom Rini trini@konsulko.com
 tools/docker/Dockerfile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

First, try and be slightly clearer about what "buildx" is with respect to the docker build process.
Second, now that we build the container for both amd64 and arm64, we should document how to make a docker "builder" that has multiple nodes. With this one node should be amd64 and one node arm64, and with reasonably fast arm64 hardware this will be much quicker than using QEMU.
Signed-off-by: Tom Rini <trini@konsulko.com>
---
Cc: Simon Glass <sjg@chromium.org>
---
 doc/build/docker.rst | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/doc/build/docker.rst b/doc/build/docker.rst
index 5896dd5ac4a6..01ed35050908 100644
--- a/doc/build/docker.rst
+++ b/doc/build/docker.rst
@@ -4,21 +4,29 @@ GitLab CI / U-Boot runner container
 In order to have a reproducible and portable build environment for CI we use a
 container for building in. This means that developers can also reproduce the
 CI environment, to a large degree at least, locally. This file is located in
 the tools/docker directory.
 
 The docker image supports both amd64 and arm64. Ensure that the
-'docker-buildx' Debian package is installed (or the equivalent on another
-distribution).
+`buildx` Docker CLI plugin is installed. This is often available in your
+distribution via the 'docker-buildx' or 'docker-buildx-plugin' package.
 
 You will need a multi-platform container, otherwise this error is shown::
 
     ERROR: Multi-platform build is not supported for the docker driver.
     Switch to a different driver, or turn on the containerd image store,
     and try again.
 
-You can add one with::
+You can add a simple one with::
 
     sudo docker buildx create --name multiarch --driver docker-container --use
 
-Building is supported on both amd64 (i.e. 64-bit x86) and arm64 machines. While
-both amd64 and arm64 happen in parallel, the non-native part will take
-considerably longer as it must use QEMU to emulate the foreign code.
+This will result in a builder that will use QEMU for the non-native
+architectures requested in a build. While both amd64 and arm64 happen in
+parallel, the non-native part will take considerably longer as it must use QEMU
+to emulate the foreign code. An alternative, if you have access to reasonably
+fast amd64 (i.e. 64-bit x86) and arm64 machines is::
+
+    sudo docker buildx create --name multiarch-multinode --node localNode --bootstrap --use
+    sudo docker buildx create --name multiarch-multinode --append --node remoteNode --bootstrap ssh://user@host
+
+And this will result in a builder named multiarch-multinode that will build
+each platform natively on each node.
To build the image yourself::

On Sun, 8 Dec 2024 at 12:08, Tom Rini trini@konsulko.com wrote:
First, try and be slightly clearer about what "buildx" is with respect to the docker build process.
Second, now that we build the container for both amd64 and arm64, we should document how to make a docker "builder" that has multiple nodes. With this one node should be amd64 and one node arm64, and with reasonably fast arm64 hardware this will be much quicker than using QEMU.
Signed-off-by: Tom Rini trini@konsulko.com
Cc: Simon Glass sjg@chromium.org
 doc/build/docker.rst | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org
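To actually produce the multi-platform image with a builder like the one this patch documents, the invocation looks roughly like this (the image tag is illustrative; the context directory matches the tree layout mentioned in the patch):

```
# Each node of the multiarch-multinode builder handles its native architecture
docker buildx build \
    --platform linux/amd64,linux/arm64 \
    -t u-boot-gitlab-ci-runner:local \
    tools/docker/
```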

Bring us up to the current Ubuntu "Jammy" tag.
Signed-off-by: Tom Rini <trini@konsulko.com>
---
 .azure-pipelines.yml    | 2 +-
 .gitlab-ci.yml          | 2 +-
 tools/docker/Dockerfile | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/.azure-pipelines.yml b/.azure-pipelines.yml
index c577a724c82b..df3f82074af0 100644
--- a/.azure-pipelines.yml
+++ b/.azure-pipelines.yml
@@ -2,7 +2,7 @@ variables:
   windows_vm: windows-2022
   ubuntu_vm: ubuntu-24.04
   macos_vm: macOS-14
-  ci_runner_image: trini/u-boot-gitlab-ci-runner:jammy-20240808-03Dec2024
+  ci_runner_image: trini/u-boot-gitlab-ci-runner:jammy-20240911.1-08Dec2024
   # Add '-u 0' options for Azure pipelines, otherwise we get "permission
   # denied" error when it tries to "useradd -m -u 1001 vsts_azpcontainer",
   # since our $(ci_runner_image) user is not root.
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 696d3cb1bd74..a7bae035319c 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -19,7 +19,7 @@ workflow:
 
 # Grab our configured image. The source for this is found
 # in the u-boot tree at tools/docker/Dockerfile
-image: ${MIRROR_DOCKER}/trini/u-boot-gitlab-ci-runner:jammy-20240808-03Dec2024
+image: ${MIRROR_DOCKER}/trini/u-boot-gitlab-ci-runner:jammy-20240911.1-08Dec2024
 
 # We run some tests in different order, to catch some failures quicker.
 stages:

diff --git a/tools/docker/Dockerfile b/tools/docker/Dockerfile
index 348605a2b6a3..d2848ab85f35 100644
--- a/tools/docker/Dockerfile
+++ b/tools/docker/Dockerfile
@@ -2,7 +2,7 @@
 # This Dockerfile is used to build an image containing basic stuff to be used
 # to build U-Boot and run our test suites.
 
-FROM ubuntu:jammy-20240808
+FROM ubuntu:jammy-20240911.1
 LABEL org.opencontainers.image.authors="Tom Rini <trini@konsulko.com>"
 LABEL org.opencontainers.image.description=" This image is for building U-Boot inside a container"

On Sun, 8 Dec 2024 at 12:09, Tom Rini trini@konsulko.com wrote:
Bring us up to the current Ubuntu "Jammy" tag.
Signed-off-by: Tom Rini trini@konsulko.com
 .azure-pipelines.yml    | 2 +-
 .gitlab-ci.yml          | 2 +-
 tools/docker/Dockerfile | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)
Reviewed-by: Simon Glass sjg@chromium.org

Upon further consideration, we should have both DEFAULT_FAST_ARM64_TAG and DEFAULT_ARM64_TAG values available. This will allow us to later run a matrix of some jobs, such as sandbox, on any arm64 host and still keep the world build to only fast arm64 hosts.
Signed-off-by: Tom Rini <trini@konsulko.com>
---
This is inspired in part by the fact that I was able to get one of the Always
Free Ampere instances on Oracle Cloud. Being only 4 CPUs it won't be good
enough for the world build, but it is fine for the "all" jobs, as well as
sandbox, I hope.
---
 .gitlab-ci.yml | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index a7bae035319c..cec67c981428 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -3,6 +3,7 @@
 variables:
   DEFAULT_ALL_TAG: "all"
   DEFAULT_ARM64_TAG: "arm64"
+  DEFAULT_FAST_ARM64_TAG: "fast arm64"
   DEFAULT_AMD64_TAG: "amd64"
   DEFAULT_FAST_AMD64_TAG: "fast amd64"
   MIRROR_DOCKER: docker.io
@@ -116,7 +117,7 @@ build all platforms in a single job:
     - when: always
   parallel:
     matrix:
-      - HOST: "arm64"
+      - HOST: "fast arm64"
       - HOST: "fast amd64"
   tags:
     - ${HOST}

On Sat, 21 Dec 2024 10:45:52 -0600, Tom Rini wrote:
Upon further consideration, we should have both DEFAULT_FAST_ARM64_TAG and DEFAULT_ARM64_TAG values available. This will allow us to later run a matrix of some jobs, such as sandbox, on any arm64 host and still keep the world build to only fast arm64 hosts.
Applied to u-boot/master, thanks!

Hi Tom,
On Sun, 8 Dec 2024 at 12:07, Tom Rini trini@konsulko.com wrote:
Hey all,
First, thanks to Simon Glass and also Linaro, we now have access to a few fast arm64 host machines in our Gitlab instance, to use as CI runners. This series finishes the work that I pushed earlier and Simon had started that enables arm64 hosts to be used for most things now.
The first notable change, especially if you use this on your own Gitlab instance is that "DEFAULT_TAG" is now unused and we instead have:
- DEFAULT_ALL_TAG:
- DEFAULT_ARM64_TAG:
- DEFAULT_AMD64_TAG:
- DEFAULT_FAST_AMD64_TAG:
This lets us say that some jobs can be run on all runners, because they are small enough that anything we'd connect to CI is fast enough and it also does not depend on the underlying host architecture. Next we have tags for any arm64 host, or any amd64 host. Finally, we have a tag for fast amd64 hosts. What these last three are for is that we have a few jobs that need to run on amd64 hosts and so we have to restrict them there, but we also have now reworked the world build jobs to build (almost) everything in a single job and on the fast amd64 machines this is still as quick as the old way was, in practice.
To reach this point, we say that the Xtensa jobs can only run on amd64 hosts. Our targets only work with the binary-only toolchain and so this is a reasonable limit and we exclude them from the world build jobs. We also need to deal with ensuring the right toolchain is used regardless what the host architecture is and that we don't use the host toolchain by accident. Finally, because some of these changes needed to be worked out in the linter, fix some of the general warnings that notes as well.
I haven't tried out this series. Does this handle running multiple test.py runs at once? For me that ends up providing a large improvement in CI times (down to about 35 mins with just two fast runners).
Regards, Simon

On Mon, Dec 09, 2024 at 08:00:33AM -0700, Simon Glass wrote:
Hi Tom,
On Sun, 8 Dec 2024 at 12:07, Tom Rini trini@konsulko.com wrote:
Hey all,
First, thanks to Simon Glass and also Linaro, we now have access to a few fast arm64 host machines in our Gitlab instance, to use as CI runners. This series finishes the work that I pushed earlier and Simon had started that enables arm64 hosts to be used for most things now.
The first notable change, especially if you use this on your own Gitlab instance is that "DEFAULT_TAG" is now unused and we instead have:
- DEFAULT_ALL_TAG:
- DEFAULT_ARM64_TAG:
- DEFAULT_AMD64_TAG:
- DEFAULT_FAST_AMD64_TAG:
This lets us say that some jobs can be run on all runners, because they are small enough that anything we'd connect to CI is fast enough and it also does not depend on the underlying host architecture. Next we have tags for any arm64 host, or any amd64 host. Finally, we have a tag for fast amd64 hosts. What these last three are for is that we have a few jobs that need to run on amd64 hosts and so we have to restrict them there, but we also have now reworked the world build jobs to build (almost) everything in a single job and on the fast amd64 machines this is still as quick as the old way was, in practice.
To reach this point, we say that the Xtensa jobs can only run on amd64 hosts. Our targets only work with the binary-only toolchain and so this is a reasonable limit and we exclude them from the world build jobs. We also need to deal with ensuring the right toolchain is used regardless what the host architecture is and that we don't use the host toolchain by accident. Finally, because some of these changes needed to be worked out in the linter, fix some of the general warnings that notes as well.
I haven't tried out this series. Does this handle running multiple test.py runs at once? For me that ends up providing a large improvement in CI times (down to about 35 mins with just two fast runners).
That's a per-runner configuration option and so orthogonal to this series. Given the breakdown between "all" jobs and "fast" jobs, throwing a bunch of the small and short "all" jobs on a runner that is also doing a world build should only slow everything down by a few minutes, which is within the overall noise.
participants (2): Simon Glass, Tom Rini