Create custom BalenaOS for Raspberry Pi 4

This article documents the procedure and logs from building my own custom BalenaOS. I'm writing it so I can do it again next time.


My goal is to create my own BalenaOS for the Raspberry Pi CM4 and get it showing up on the Balena dashboard. Basically, I just followed the official documentation provided by Balena.

These are my machine environments:

  • Host
    - Thinkpad T14s Gen 1
    - Linux Ubuntu 22.04.2 LTS
  • Target
    - Raspberry Pi CM4

Set up the host environment

First of all, I forked the template repository and started with the dry-run command.

I needed npm to run balena-yocto-scripts/build/barys --dry-run. I didn't have it on my machine, so I installed it using nvm.
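
For reference, installing Node via nvm looks roughly like this (the installer version below is an example; check the nvm README for the current one):

curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.4/install.sh | bash
# reload the shell so the nvm function is available
source ~/.bashrc
# install the latest LTS Node, which bundles npm
nvm install --lts
npm --version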

After installing it, I ran the script immediately and got an error.

$ balena-yocto-scripts/build/barys --dry-run
[node:internal/modules/cjs/loader:1080
throw err;
^

Error: Cannot find module './*.coffee'

Based on the official document, I needed a machine-information CoffeeScript file. So I just created an empty one.

touch raspberrypicm4-ioboard.coffee

Then it raised another error, of course, because the file was empty. So I needed content, and this time I copied it from the official BalenaOS for the RPi CM4. I took a look at the coffee scripts of a few different boards, but there wasn't much difference other than image and fstype. Some of them use balena-image and others resin-image; resin is the previous name of balena, so I think they mean basically the same thing.

ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi$ balena-yocto-scripts/build/barys --dry-run
Building JSON manifest...
npm WARN config production Use `--omit=dev` instead.

up to date, audited 4 packages in 991ms

found 0 vulnerabilities
[/home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/node_modules/@resin.io/device-types/build.coffee:25
throw new Error("Ignored " + typeDefinition.slug + ": `" + field + "` is not set");
^

Error: Ignored raspberrypicm4-ioboard: `yocto.deployArtifact` is not set
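
For reference, here is a rough sketch of the kind of content the file ends up with. The field names follow the @resin.io/device-types format as far as I can tell, and the values are illustrative guesses pieced together from the errors and build output; copy the real file from the official repository rather than trusting this sketch.

cat > raspberrypicm4-ioboard.coffee <<'EOF'
# illustrative device-type definition (values are assumptions, not the official file)
module.exports =
  slug: 'raspberrypicm4-ioboard'
  version: 1
  name: 'Raspberry Pi CM4 IO Board'
  arch: 'aarch64'
  state: 'new'
  yocto:
    machine: 'raspberrypicm4-ioboard'
    image: 'balena-image'
    fstype: 'balenaos-img'
    deployArtifact: 'balena-image-raspberrypicm4-ioboard.balenaos-img'
    compressed: true
EOF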

Then it raised another error. It looked like I needed to install jq, so I just installed it.

ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi$ balena-yocto-scripts/build/barys --dry-run
Building JSON manifest...
npm WARN config production Use `--omit=dev` instead.

up to date, audited 4 packages in 487ms

found 0 vulnerabilities
(node:7066) [DEP0128] DeprecationWarning: Invalid 'main' field in '/home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/node_modules/@resin.io/device-types/package.json' of 'index.js'. Please either fix that or report it to the module author
(Use `node --trace-deprecation ...` to show where the warning was created)
...Done
[000000001][ERROR]Please install the 'jq' package before running this script.
apt-get install jq

After installing it and running the dry run again, a new JSON file was created.

ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi$ ls -l
total 164
drwxrwxr-x 6 ats ats 4096 Jul 24 15:32 balena-yocto-scripts
-rw-rw-r-- 1 ats ats 116844 Jul 20 17:39 CHANGELOG.md
drwxrwxr-x 6 ats ats 4096 Jul 20 17:39 contracts
drwxrwxr-x 4 ats ats 4096 Jul 24 16:56 layers
-rw-rw-r-- 1 ats ats 11360 Jul 20 17:39 LICENSE
-rw-rw-r-- 1 ats ats 1801 Jul 24 14:24 raspberrypicm4-ioboard.coffee
-rw-rw-r-- 1 ats ats 2584 Jul 24 17:09 raspberrypicm4-ioboard.json -> Created
-rw-rw-r-- 1 ats ats 1278 Jul 24 17:07 README.md
-rw-rw-r-- 1 ats ats 307 Jul 20 17:39 repo.yml
-rw-rw-r-- 1 ats ats 14 Jul 24 16:50 VERSION

At the same time, I got an error as well. Basically, it said I needed the custom board layer, so I took a look at the official document.

ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi$ balena-yocto-scripts/build/barys --dry-run
Building JSON manifest...
npm WARN config production Use `--omit=dev` instead.

up to date, audited 4 packages in 897ms

found 0 vulnerabilities
(node:7882) [DEP0128] DeprecationWarning: Invalid 'main' field in '/home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/node_modules/@resin.io/device-types/package.json' of 'index.js'. Please either fix that or report it to the module author
(Use `node --trace-deprecation ...` to show where the warning was created)
...Done
ls: cannot access '/home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/../../layers/meta-balena*/conf/samples/bblayers.conf.sample': No such file or directory
balena-yocto-scripts/build/barys: line 440: /home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/../../layers/poky/oe-init-build-env: No such file or directory
sed: can't read conf/local.conf: No such file or directory
Can't open conf/local.conf: No such file or directory.
balena-yocto-scripts/build/barys: line 465: conf/local.conf: No such file or directory
ls: cannot access '/home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/../../layers/meta-balena-*/conf/layer.conf': No such file or directory
^C

I wanted to go through the whole build process of a custom BalenaOS first, so I decided to copy the official RPi CM4 board layer as my custom board layer. I'll need to change the layer when I develop my own board.
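
Roughly, the copy looked like this (the upstream repo URL and the layer path are my assumptions about where the official CM4 board layer lives; adjust them to what you actually find):

# clone the official device repository somewhere temporary
git clone https://github.com/balena-os/balena-raspberrypi /tmp/balena-raspberrypi
# copy its board integration layer into my own layers directory
cp -r /tmp/balena-raspberrypi/layers/meta-balena-raspberrypi layers/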

One note for future me: I think I won't need recipes-support/resin-init because I don't plan to use internal storage (balena-image-flasher).

After running the dry run again, I got a message saying I needed to set up the dependency layers: poky, openembedded, and so on.

ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi$ balena-yocto-scripts/build/barys --dry-run
Building JSON manifest...
npm WARN config production Use `--omit=dev` instead.

up to date, audited 4 packages in 450ms

found 0 vulnerabilities
(node:10035) [DEP0128] DeprecationWarning: Invalid 'main' field in '/home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/node_modules/@resin.io/device-types/package.json' of 'index.js'. Please either fix that or report it to the module author
(Use `node --trace-deprecation ...` to show where the warning was created)
...Done
balena-yocto-scripts/build/barys: line 440: /home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/../../layers/poky/oe-init-build-env: No such file or directory
sed: can't read conf/local.conf: No such file or directory
Can't open conf/local.conf: No such file or directory.
balena-yocto-scripts/build/barys: line 465: conf/local.conf: No such file or directory
[000000001][LOG]Release kirkstone already supported by device integration layer, will not revert meta-balena-common syntax.
[000000001][LOG]BalenaOS build initialized in directory: build.
[000000001][LOG]Dry run requested so don't start builds.
[000000001][WARNING]Don't forget to setup build MACHINE as this script ignores it in dry run mode.
[000000001][LOG]Done.

Based on the official document, I needed to create the following layers directory structure.

├── layers
│ ├── meta-balena
│ ├── meta-balena-<board-family> -> meta-balena-raspberrypi
│ ├── meta-<vendor> -> meta-raspberrypi
│ ├── meta-openembedded
│ ├── meta-rust -> seems unnecessary
│ └── poky

So I just added the layers like below.

git submodule add -b kirkstone git://git.yoctoproject.org/poky.git
git submodule add -b kirkstone git://git.openembedded.org/meta-openembedded
git submodule add -b kirkstone git://git.yoctoproject.org/meta-raspberrypi

Then the dry run script succeeded and created the build directory automatically.

ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi$ balena-yocto-scripts/build/barys --dry-run
Building JSON manifest...
npm WARN config production Use `--omit=dev` instead.

up to date, audited 4 packages in 562ms

found 0 vulnerabilities
(node:10726) [DEP0128] DeprecationWarning: Invalid 'main' field in '/home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/node_modules/@resin.io/device-types/package.json' of 'index.js'. Please either fix that or report it to the module author
(Use `node --trace-deprecation ...` to show where the warning was created)
...Done
You had no conf/local.conf file. This configuration file has therefore been
created for you from /home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/../../layers/meta-balena-raspberrypi/conf/samples/local.conf.sample
You may wish to edit it to, for example, select a different MACHINE (target
hardware). See conf/local.conf for more information as common configuration
options are commented.

You had no conf/bblayers.conf file. This configuration file has therefore been
created for you from /home/ats/projects/yocto/balena-rpi/balena-yocto-scripts/build/../../layers/meta-balena-raspberrypi/conf/samples/bblayers.conf.sample
To add additional metadata layers into your configuration please add entries
to conf/bblayers.conf.

The Yocto Project has extensive documentation about OE including a reference
manual which can be found at:
https://docs.yoctoproject.org

For more information about OpenEmbedded see the website:
https://www.openembedded.org/


_ _ ___ ____
| |__ __ _| | ___ _ __ __ _ / _ \/ ___|
| '_ \ / _` | |/ _ \ '_ \ / _` | | | \___ \
| |_) | (_| | | __/ | | | (_| | |_| |___) |
|_.__/ \__,_|_|\___|_| |_|\__,_|\___/|____/

--------------------------------------------

Resin specific images available:
balena-image

Raspberry Pi CM4 IO Board (NEW) : $ MACHINE=raspberrypicm4-ioboard bitbake balena-image

[000000001][LOG]Release kirkstone already supported by device integration layer, will not revert meta-balena-common syntax.
[000000001][LOG]BalenaOS build initialized in directory: build.
[000000001][LOG]Dry run requested so don't start builds.
[000000001][WARNING]Don't forget to setup build MACHINE as this script ignores it in dry run mode.
[000000001][LOG]Done.
ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi$ ls -l
total 168
drwxrwxr-x 6 ats ats 4096 Jul 24 15:32 balena-yocto-scripts
-rw-rw-r-- 1 ats ats 110 Jul 24 17:21 barys.log
drwxrwxr-x 3 ats ats 4096 Jul 24 17:21 build -> Created
-rw-rw-r-- 1 ats ats 116844 Jul 20 17:39 CHANGELOG.md
drwxrwxr-x 6 ats ats 4096 Jul 20 17:39 contracts
drwxrwxr-x 7 ats ats 4096 Jul 24 17:18 layers
-rw-rw-r-- 1 ats ats 11360 Jul 20 17:39 LICENSE
-rw-rw-r-- 1 ats ats 1801 Jul 24 14:24 raspberrypicm4-ioboard.coffee
-rw-rw-r-- 1 ats ats 2584 Jul 24 17:21 raspberrypicm4-ioboard.json
-rw-rw-r-- 1 ats ats 1278 Jul 24 17:07 README.md
-rw-rw-r-- 1 ats ats 307 Jul 20 17:39 repo.yml
-rw-rw-r-- 1 ats ats 14 Jul 24 16:50 VERSION

Build my BalenaOS

I was finally ready to build my BalenaOS, so I executed the scripts to set up the environment variables and start the build.

source layers/poky/oe-init-build-env
MACHINE=raspberrypicm4-ioboard bitbake balena-image

Right away, I got an error saying I needed to install Docker, so I installed it following the official documentation.

ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi/build$ MACHINE=raspberrypicm4-ioboard bitbake balena-image
ERROR: The following required tools (as specified by HOSTTOOLS) appear to be unavailable in PATH, please install them in order to proceed:
docker

One note: don't forget to run sudo apt-get update after setting up the repository per the official Docker document. Otherwise, you'll get an error like this:

ats@ats-ThinkPad-T14s-Gen-1:~$ sudo apt-get install ./docker-desktop-4.21.1-amd64.deb 
[sudo] password for ats:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Note, selecting 'docker-desktop' instead of './docker-desktop-4.21.1-amd64.deb'
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies.
docker-desktop : Depends: docker-ce-cli but it is not installable
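
For reference, the repository setup from the official Docker document looked roughly like this at the time of writing (the exact commands change occasionally, so check the current docs):

sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# the step I forgot: refresh the package index after adding the repository
sudo apt-get update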

Then I tried again and got an error. A quick search showed there were some known problems with Docker Desktop. Interestingly, I could remove the error by deleting the credsStore value in ~/.docker/config.json. Every time I restart the Docker engine, the value shows up in the file again. In my case, I was using Docker only for this BalenaOS build, so I decided to go with this workaround as long as it didn't cause any problems. (Eventually, it didn't.)

Sstate summary: Wanted 1861 Local 0 Mirrors 0 Missed 1861 Current 0 (0% match, 0% complete)
NOTE: Executing Tasks
ERROR: mkfs-hostapp-native-1.0-r0 do_compile: ExecutionError('/home/ats/projects/yocto/balena-rpi/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/run.do_compile.1020833', 1, None, None)
ERROR: Logfile of failure stored in: /home/ats/projects/yocto/balena-rpi/build/tmp/work/x86_64-linux/mkfs-hostapp-native/1.0-r0/temp/log.do_compile.1020833
Log data follows:
| DEBUG: Executing shell function do_compile
| #0 building with "desktop-linux" instance using docker driver
|
| #1 [internal] load build definition from Dockerfile
| #1 transferring dockerfile:
| #1 transferring dockerfile: 220B 0.0s done
| #1 DONE 0.2s
|
| #2 [internal] load .dockerignore
| #2 transferring context: 2B 0.0s done
| #2 DONE 0.2s
|
| #3 [internal] load metadata for docker.io/library/debian:bullseye
| #3 ERROR: error getting credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``
| ------
| > [internal] load metadata for docker.io/library/debian:bullseye:
| ------
| Dockerfile:1
| --------------------
| 1 | >>> FROM debian:bullseye
| 2 |
| 3 | VOLUME /mnt/sysroot/inactive
| --------------------
| ERROR: failed to solve: debian:bullseye: error getting credentials - err: exec: "docker-credential-desktop": executable file not found in $PATH, out: ``
| WARNING: exit code 1 from a shell command.
ERROR: Task (/home/ats/projects/yocto/balena-rpi/build/../layers/meta-balena/meta-balena-common/recipes-containers/mkfs-hostapp-native/mkfs-hostapp-native.bb:do_compile) failed with exit code '1'
NOTE: Tasks Summary: Attempted 2645 tasks of which 0 didn't need to be rerun and 1 failed.

Summary: 1 task failed:
/home/ats/projects/yocto/balena-rpi/build/../layers/meta-balena/meta-balena-common/recipes-containers/mkfs-hostapp-native/mkfs-hostapp-native.bb:do_compile
Summary: There was 1 ERROR message, returning a non-zero exit code.
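
Deleting the key by hand works, but since jq was already installed, a one-liner like this does the same thing (it just rewrites the JSON without the credsStore entry):

# remove the credsStore key from the Docker client config
jq 'del(.credsStore)' ~/.docker/config.json > /tmp/docker-config.json \
  && mv /tmp/docker-config.json ~/.docker/config.json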

Then I got another error saying something was wrong with the kernel cgroup setup; after a quick investigation, I realized balena only works with cgroup v1.

ERROR: balena-image-1.0-r0 do_image_hostapp_ext4: ExecutionError('/home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/temp/run.do_image_hostapp_ext4.2301899', 1, None, None)
ERROR: Logfile of failure stored in: /home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/temp/log.do_image_hostapp_ext4.2301899
Log data follows:
| DEBUG: Executing python function extend_recipe_sysroot
| NOTE: Direct dependencies are ['/home/ats/projects/yocto/balena-rpi/build/../layers/meta-balena/meta-balena-common/recipes-containers/mkfs-hostapp-native/mkfs-hostapp-native.bb:do_populate_sysroot', '/home/ats/projects/yocto/balena-rpi/build/../layers/meta-balena/meta-balena-common/recipes-kernel/linux/kernel-headers-test.bb:do_populate_sysroot', '/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-core/glibc/cross-localedef-native_2.35.bb:do_populate_sysroot', '/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-core/glibc/ldconfig-native_2.12.1.bb:do_populate_sysroot', '/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-devtools/qemu/qemuwrapper-cross_1.0.bb:do_populate_sysroot', '/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-kernel/kmod/depmodwrapper-cross_1.0.bb:do_populate_sysroot', 'virtual:native:/home/ats/projects/yocto/balena-rpi/build/../layers/meta-openembedded/meta-oe/recipes-devtools/jq/jq_git.bb:do_populate_sysroot', 'virtual:native:/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-core/coreutils/coreutils_9.0.bb:do_populate_sysroot', 'virtual:native:/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-core/update-rc.d/update-rc.d_0.8.bb:do_populate_sysroot', 'virtual:native:/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-devtools/makedevs/makedevs_1.0.1.bb:do_populate_sysroot', 'virtual:native:/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-devtools/opkg-utils/opkg-utils_0.5.0.bb:do_populate_sysroot', 'virtual:native:/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-devtools/opkg/opkg_0.5.0.bb:do_populate_sysroot', 'virtual:native:/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot', 'virtual:native:/home/ats/projects/yocto/balena-rpi/build/../layers/poky/meta/recipes-extended/pigz/pigz_2.7.bb:do_populate_sysroot']
| NOTE: Installed into sysroot: ['mkfs-hostapp-native', 'e2fsprogs-native', 'hostapp-update-native', 'balena-native', 'go-native']
| NOTE: Skipping as already exists in sysroot: ['kernel-headers-test', 'cross-localedef-native', 'ldconfig-native', 'qemuwrapper-cross', 'depmodwrapper-cross', 'jq-native', 'coreutils-native', 'update-rc.d-native', 'makedevs-native', 'opkg-utils-native', 'opkg-native', 'pseudo-native', 'pigz-native', 'xz-native', 'shared-mime-info-native', 'systemd-systemctl-native', 'qemu-native', 'glibc', 'gcc-runtime', 'libtool-native', 'onig-native', 'libsolv-native', 'libarchive-native', 'shadow-native', 'kmod-native', 'gettext-minimal-native', 'texinfo-dummy-native', 'attr-native', 'nss-native', 'perl-native', 'zlib-native', 'debianutils-native', 'openssl-native', 'python3-native', 'libxml2-native', 'glib-2.0-native', 'itstool-native', 'linux-libc-headers', 'libgcc', 'cmake-native', 'expat-native', 'zstd-native', 'bzip2-native', 'lzo-native', 'util-linux-native', 'sqlite3-native', 'nspr-native', 'gdbm-native', 'make-native', 'util-linux-libuuid-native', 'ncurses-native', 'libtirpc-native', 'libnsl2-native', 'libffi-native', 'readline-native', 'libpcre-native', 'gettext-native', 'curl-native', 'libcap-ng-native', 'libpcre2-native']
| DEBUG: sed -e 's:^[^/]*/:/home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/recipe-sysroot-native/:g' /home/ats/projects/yocto/balena-rpi/build/tmp/sysroots-components/x86_64/e2fsprogs-native/fixmepath | xargs sed -i -e 's:FIXMESTAGINGDIRTARGET:/home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/recipe-sysroot:g; s:FIXMESTAGINGDIRHOST:/home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/recipe-sysroot-native:g' -e 's:FIXME_PSEUDO_SYSROOT:/home/ats/projects/yocto/balena-rpi/build/tmp/sysroots-components/x86_64/pseudo-native:g' -e 's:FIXME_HOSTTOOLS_DIR:/home/ats/projects/yocto/balena-rpi/build/tmp/hosttools:g' -e 's:FIXME_PKGDATA_DIR:/home/ats/projects/yocto/balena-rpi/build/tmp/pkgdata/raspberrypicm4-ioboard:g' -e 's:FIXME_PSEUDO_LOCALSTATEDIR:/home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/pseudo/:g' -e 's:FIXME_LOGFIFO:/home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/temp/fifo.2301899:g'
| DEBUG: Python function extend_recipe_sysroot finished
| DEBUG: Executing python function set_image_size
| DEBUG: requested rootfs size 327680, aligned 327680
| DEBUG: 231520.000000 = 231520 * 1.000000
| DEBUG: 327680.000000 = max(231520.000000, 327680)[327680.000000] + 0
| DEBUG: 327680.000000 = int(327680.000000)
| DEBUG: 327680 = aligned(327680)
| DEBUG: returning 327680
| DEBUG: Python function set_image_size finished
| DEBUG: Executing shell function do_image_hostapp_ext4
| Loaded image: mkfs-hostapp-native:1690259198
| + SYSROOT=/mnt/sysroot/inactive
| + pid=7
| + sleep 5
| + balenad -s=overlay2 --data-root=/mnt/sysroot/inactive/balena -H unix:///var/run/balena-host.sock
| time="2023-07-25T07:58:47.875975887Z" level=info msg="Starting up"
| time="2023-07-25T07:58:47.882676882Z" level=warning msg="could not change group /var/run/balena-host.sock to balena-engine: group balena-engine not found"
| time="2023-07-25T07:58:47.893197811Z" level=info msg="libcontainerd: started new balena-engine-containerd process" pid=17
| time="2023-07-25T07:58:47Z" level=warning msg="containerd config version `1` has been deprecated and will be removed in containerd v2.0, please switch to version `2`, see https://github.com/containerd/containerd/blob/main/docs/PLUGINS.md#version-header"
| time="2023-07-25T07:58:47.957680914Z" level=info msg="starting containerd" revision= version=1.6.6+unknown
| time="2023-07-25T07:58:47.969487326Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
| time="2023-07-25T07:58:47.969612383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
| time="2023-07-25T07:58:47.974189235Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.49-linuxkit-pr\\n\"): skip plugin" type=io.containerd.snapshotter.v1
| time="2023-07-25T07:58:47.974230525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
| time="2023-07-25T07:58:47.974332350Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
| time="2023-07-25T07:58:47.975399714Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
| time="2023-07-25T07:58:47.975464732Z" level=info msg="metadata content store policy set" policy=shared
| time="2023-07-25T07:58:47.988598722Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
| time="2023-07-25T07:58:47.988624690Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
| time="2023-07-25T07:58:47.988635381Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
| time="2023-07-25T07:58:47.988665839Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.988686942Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.988705658Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.988735090Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.988760284Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.988779037Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.988810620Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.988844924Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.988868908Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
| time="2023-07-25T07:58:47.989080895Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
| time="2023-07-25T07:58:47.989229229Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
| time="2023-07-25T07:58:47.989646779Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
| time="2023-07-25T07:58:47.989676984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.989691143Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
| time="2023-07-25T07:58:47.989911614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.989928568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.989937948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.989945645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.989956589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.989973794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.989984385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.989996732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.990006070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.990017195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.990026090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
| time="2023-07-25T07:58:47.990051828Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
| time="2023-07-25T07:58:47.990086993Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
| time="2023-07-25T07:58:47.990107559Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
| time="2023-07-25T07:58:47.990130905Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
| time="2023-07-25T07:58:47.990391994Z" level=info msg=serving... address=/var/run/balena-engine/containerd/balena-engine-containerd-debug.sock
| time="2023-07-25T07:58:47.990454271Z" level=info msg=serving... address=/var/run/balena-engine/containerd/balena-engine-containerd.sock.ttrpc
| time="2023-07-25T07:58:47.990499629Z" level=info msg=serving... address=/var/run/balena-engine/containerd/balena-engine-containerd.sock
| time="2023-07-25T07:58:47.990533898Z" level=info msg="containerd successfully booted in 0.034852s"
| time="2023-07-25T07:58:47.998208793Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
| failed to start daemon: couldn't create plugin manager: runtime "runc" is not supported while cgroups v2 (unified hierarchy) is being used
| + hostapp-update -f /input -n
| Cannot connect to the balenaEngine daemon at unix:///var/run/balena-host.sock. Is the balenaEngine daemon running?
| Cannot connect to the balenaEngine daemon at unix:///var/run/balena-host.sock. Is the balenaEngine daemon running?
| /mnt/sysroot/inactive/balena/tmp is not a mountpoint
| Cannot connect to the balenaEngine daemon at unix:///var/run/balena-host.sock. Is the balenaEngine daemon running?
| invalid reference format
| Untagged: mkfs-hostapp-native:1690259198
| Deleted: sha256:7a6273d59b3e8e0f756073547c42b6abd77b2d7b00330a21d2c21424c75327e6
| WARNING: exit code 1 from a shell command.
ERROR: Task (/home/ats/projects/yocto/balena-rpi/build/../layers/meta-balena/meta-balena-common/recipes-core/images/balena-image.bb:do_image_hostapp_ext4) failed with exit code '1'
NOTE: Tasks Summary: Attempted 4451 tasks of which 2644 didn't need to be rerun and 1 failed.

Summary: 1 task failed:
/home/ats/projects/yocto/balena-rpi/build/../layers/meta-balena/meta-balena-common/recipes-core/images/balena-image.bb:do_image_hostapp_ext4
Summary: There were 19 WARNING messages.
Summary: There was 1 ERROR message, returning a non-zero exit code.

The Docker documentation helped me a lot in switching from v2 to v1. Basically, you're on v2 if running ls -la /sys/fs/cgroup/ shows cgroup.controllers. So I had to downgrade it using GRUB: I added systemd.unified_cgroup_hierarchy=0 to the GRUB_CMDLINE_LINUX line in /etc/default/grub and ran sudo update-grub.
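
For future reference, the change boils down to this (any options already on the GRUB_CMDLINE_LINUX line stay as they are):

# quick check: cgroup2fs means v2, tmpfs means v1
stat -fc %T /sys/fs/cgroup/
# in /etc/default/grub, append the flag to the existing kernel command line:
#   GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=0"
sudo update-grub
sudo reboot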

These are the files under cgroup v2.

ats@ats-ThinkPad-T14s-Gen-1:~$ ls -la /sys/fs/cgroup/
total 0
dr-xr-xr-x 12 root root 0 Jul 25 13:11 .
drwxr-xr-x 8 root root 0 Jul 25 13:11 ..
-r--r--r-- 1 root root 0 Jul 25 13:11 cgroup.controllers
-rw-r--r-- 1 root root 0 Jul 25 13:11 cgroup.max.depth
-rw-r--r-- 1 root root 0 Jul 25 13:11 cgroup.max.descendants
-rw-r--r-- 1 root root 0 Jul 25 13:11 cgroup.procs
-r--r--r-- 1 root root 0 Jul 25 13:11 cgroup.stat
-rw-r--r-- 1 root root 0 Jul 25 13:11 cgroup.subtree_control
-rw-r--r-- 1 root root 0 Jul 25 13:11 cgroup.threads
-rw-r--r-- 1 root root 0 Jul 25 13:11 cpu.pressure
-r--r--r-- 1 root root 0 Jul 25 13:11 cpuset.cpus.effective
-r--r--r-- 1 root root 0 Jul 25 13:11 cpuset.mems.effective
-r--r--r-- 1 root root 0 Jul 25 13:11 cpu.stat
drwxr-xr-x 2 root root 0 Jul 25 13:11 dev-hugepages.mount
drwxr-xr-x 2 root root 0 Jul 25 13:11 dev-mqueue.mount
drwxr-xr-x 2 root root 0 Jul 25 13:11 init.scope
-rw-r--r-- 1 root root 0 Jul 25 13:11 io.cost.model
-rw-r--r-- 1 root root 0 Jul 25 13:11 io.cost.qos
-rw-r--r-- 1 root root 0 Jul 25 13:11 io.pressure
-rw-r--r-- 1 root root 0 Jul 25 13:11 io.prio.class
-r--r--r-- 1 root root 0 Jul 25 13:11 io.stat
-r--r--r-- 1 root root 0 Jul 25 13:11 memory.numa_stat
-rw-r--r-- 1 root root 0 Jul 25 13:11 memory.pressure
--w------- 1 root root 0 Jul 25 13:11 memory.reclaim
-r--r--r-- 1 root root 0 Jul 25 13:11 memory.stat
-r--r--r-- 1 root root 0 Jul 25 13:11 misc.capacity
drwxr-xr-x 2 root root 0 Jul 25 13:11 proc-sys-fs-binfmt_misc.mount
drwxr-xr-x 2 root root 0 Jul 25 13:11 sys-fs-fuse-connections.mount
drwxr-xr-x 2 root root 0 Jul 25 13:11 sys-kernel-config.mount
drwxr-xr-x 2 root root 0 Jul 25 13:11 sys-kernel-debug.mount
drwxr-xr-x 2 root root 0 Jul 25 13:11 sys-kernel-tracing.mount
drwxr-xr-x 64 root root 0 Jul 25 13:11 system.slice
drwxr-xr-x 3 root root 0 Jul 25 13:11 user.slice

These are the files under cgroup v1.

ats@ats-ThinkPad-T14s-Gen-1:~$ ls -la /sys/fs/cgroup/
total 0
drwxr-xr-x 16 root root 400 Jul 25 13:15 .
drwxr-xr-x 8 root root 0 Jul 25 13:15 ..
dr-xr-xr-x 2 root root 0 Jul 25 13:15 blkio
lrwxrwxrwx 1 root root 11 Jul 25 13:15 cpu -> cpu,cpuacct
lrwxrwxrwx 1 root root 11 Jul 25 13:15 cpuacct -> cpu,cpuacct
dr-xr-xr-x 2 root root 0 Jul 25 13:15 cpu,cpuacct
dr-xr-xr-x 2 root root 0 Jul 25 13:15 cpuset
dr-xr-xr-x 13 root root 0 Jul 25 13:15 devices
dr-xr-xr-x 4 root root 0 Jul 25 13:15 freezer
dr-xr-xr-x 2 root root 0 Jul 25 13:15 hugetlb
dr-xr-xr-x 11 root root 0 Jul 25 13:15 memory
dr-xr-xr-x 2 root root 0 Jul 25 13:15 misc
lrwxrwxrwx 1 root root 16 Jul 25 13:15 net_cls -> net_cls,net_prio
dr-xr-xr-x 2 root root 0 Jul 25 13:15 net_cls,net_prio
lrwxrwxrwx 1 root root 16 Jul 25 13:15 net_prio -> net_cls,net_prio
dr-xr-xr-x 2 root root 0 Jul 25 13:15 perf_event
dr-xr-xr-x 11 root root 0 Jul 25 13:15 pids
dr-xr-xr-x 2 root root 0 Jul 25 13:15 rdma
dr-xr-xr-x 12 root root 0 Jul 25 13:15 systemd
dr-xr-xr-x 12 root root 0 Jul 25 13:15 unified

But I was still getting the same error after switching the cgroup version. I looked into Docker a bit more and noticed it was still using v2.

ats@ats-ThinkPad-T14s-Gen-1:~$ docker info
Client: Docker Engine - Community
Version: 24.0.5
Context: desktop-linux
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.11.0
Path: /usr/lib/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.19.1
Path: /usr/lib/docker/cli-plugins/docker-compose
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.0
Path: /usr/lib/docker/cli-plugins/docker-dev
extension: Manages Docker extensions (Docker Inc.)
Version: v0.2.20
Path: /usr/lib/docker/cli-plugins/docker-extension
init: Creates Docker-related starter files for your project (Docker Inc.)
Version: v0.1.0-beta.6
Path: /usr/lib/docker/cli-plugins/docker-init
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
Version: 0.6.0
Path: /usr/lib/docker/cli-plugins/docker-sbom
scan: Docker Scan (Docker Inc.)
Version: v0.26.0
Path: /usr/lib/docker/cli-plugins/docker-scan
scout: Command line tool for Docker Scout (Docker Inc.)
Version: 0.16.1
Path: /usr/lib/docker/cli-plugins/docker-scout

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 2
Server Version: 24.0.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 2
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8
runc version: v1.1.7-0-g860f061
init version: de40ad0
Security Options:
seccomp
Profile: builtin
cgroupns
Kernel Version: 5.15.49-linuxkit-pr
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.55GiB
Name: docker-desktop
ID: 2bc88a5f-e980-472a-bb56-9feaaa3a14e5
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Experimental: false
Insecure Registries:
hubproxy.docker.internal:5555
127.0.0.0/8
Live Restore Enabled: false

So I needed to downgrade the cgroup version of my Docker as well. It was a little bit tricky (I think there should be a better way). I checked the Docker Desktop release notes, expecting there might be an option to toggle the version, and found the deprecatedCgroupV1 option. It looked like what I was looking for, but the document said it was for Mac. I edited ~/.docker/desktop/settings.json with some suspicion, gave it a go anyway, and it worked. Now my Docker cgroup version is v1 🎉 😢 (I need to check later)
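
The edit itself can be done with jq as well (key name as I found it in the settings file; restart Docker Desktop afterwards):

# set the deprecated cgroup v1 flag in Docker Desktop's settings
jq '.deprecatedCgroupv1 = true' ~/.docker/desktop/settings.json > /tmp/settings.json \
  && mv /tmp/settings.json ~/.docker/desktop/settings.json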

ats@ats-ThinkPad-T14s-Gen-1:~$ cat .docker/desktop/settings.json | grep Cgroup
"deprecatedCgroupv1": true,
ats@ats-ThinkPad-T14s-Gen-1:~$ docker info
Client: Docker Engine - Community
Version: 24.0.5
Context: desktop-linux
Debug Mode: false
Plugins:
buildx: Docker Buildx (Docker Inc.)
Version: v0.11.0
Path: /usr/lib/docker/cli-plugins/docker-buildx
compose: Docker Compose (Docker Inc.)
Version: v2.19.1
Path: /usr/lib/docker/cli-plugins/docker-compose
dev: Docker Dev Environments (Docker Inc.)
Version: v0.1.0
Path: /usr/lib/docker/cli-plugins/docker-dev
extension: Manages Docker extensions (Docker Inc.)
Version: v0.2.20
Path: /usr/lib/docker/cli-plugins/docker-extension
init: Creates Docker-related starter files for your project (Docker Inc.)
Version: v0.1.0-beta.6
Path: /usr/lib/docker/cli-plugins/docker-init
sbom: View the packaged-based Software Bill Of Materials (SBOM) for an image (Anchore Inc.)
Version: 0.6.0
Path: /usr/lib/docker/cli-plugins/docker-sbom
scan: Docker Scan (Docker Inc.)
Version: v0.26.0
Path: /usr/lib/docker/cli-plugins/docker-scan
scout: Command line tool for Docker Scout (Docker Inc.)
Version: 0.16.1
Path: /usr/lib/docker/cli-plugins/docker-scout

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 2
Server Version: 24.0.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Using metacopy: false
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 3dce8eb055cbb6872793272b4f20ed16117344f8
runc version: v1.1.7-0-g860f061
init version: de40ad0
Security Options:
seccomp
Profile: builtin
Kernel Version: 5.15.49-linuxkit-pr
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.55GiB
Name: docker-desktop
ID: 2bc88a5f-e980-472a-bb56-9feaaa3a14e5
Docker Root Dir: /var/lib/docker
Debug Mode: false
HTTP Proxy: http.docker.internal:3128
HTTPS Proxy: http.docker.internal:3128
No Proxy: hubproxy.docker.internal
Experimental: false
Insecure Registries:
hubproxy.docker.internal:5555
127.0.0.0/8
Live Restore Enabled: false

Lastly, after trying again, I got the error that took the most time to resolve. Basically, it said a license file was missing. I checked the build/tmp/deploy/licenses directory and noticed the expected manifest was sometimes created but mostly wasn't. A quick search turned up an issue that seemed similar to my case: the reporter was trying to build his OS without rebuilding from scratch. I was doing the same, since a full build takes a few hours, but eventually I rebuilt from scratch because I couldn't find a better solution. (And waited another few hours…)

NOTE: Executing Tasks
ERROR: balena-image-1.0-r0 do_image_complete: ExecutionError('/home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/temp/run.deploy_image_license_manifest.72666', 1, None, None)
ERROR: Logfile of failure stored in: /home/ats/projects/yocto/balena-rpi/build/tmp/work/raspberrypicm4_ioboard-poky-linux/balena-image/1.0-r0/temp/log.do_image_complete.72666
Log data follows:
| DEBUG: Executing python function sstate_task_prefunc
| DEBUG: Python function sstate_task_prefunc finished
| DEBUG: Executing python function do_image_complete
| NOTE: Executing deploy_image_license_manifest ...
| DEBUG: Executing shell function deploy_image_license_manifest
| cp: cannot stat '/home/ats/projects/yocto/balena-rpi/build/tmp/deploy/licenses/balena-image-raspberrypicm4-ioboard-20230725124013/image_license.manifest': No such file or directory
| WARNING: exit code 1 from a shell command.
| DEBUG: Python function do_image_complete finished
ERROR: Task (/home/ats/projects/yocto/balena-rpi/build/../layers/meta-balena/meta-balena-common/recipes-core/images/balena-image.bb:do_image_complete) failed with exit code '1'

To remove the cache, I removed the following two directories inside build/.

rm -rf sstate-cache/
rm -rf tmp/

Then my BalenaOS was ready!

ats@ats-ThinkPad-T14s-Gen-1:~/projects/yocto/balena-rpi/build$ MACHINE=raspberrypicm4-ioboard bitbake balena-image
Loading cache: 100% | | ETA: --:--:--
Loaded 0 entries from dependency cache.
Parsing recipes: 100% |######################################################################################################################################################################| Time: 0:01:10
Parsing of 2663 .bb files complete (0 cached, 2663 parsed). 4171 targets, 352 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION = "2.0.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "ubuntu-22.04"
TARGET_SYS = "aarch64-poky-linux"
MACHINE = "raspberrypicm4-ioboard"
DISTRO = "balena-os"
DISTRO_VERSION = "3.0.15"
TUNE_FEATURES = "aarch64 armv8a crc cortexa72"
TARGET_FPU = ""
meta-balena-rust
meta-balena-common
meta-balena-kirkstone = "HEAD:a20f839eb56d1636e9905c4809e27c4c318977e8"
meta-balena-raspberrypi = "main:3832b514d002436a04854e3f8d0bf3d3d8b01dca"
meta
meta-poky = "kirkstone:cc3287637c30080333d89a368e40473dfffb2fb7"
meta-oe
meta-filesystems
meta-networking
meta-python
meta-perl = "kirkstone:346753705e49a2486867dc150181a1c7f4d69377"
meta-raspberrypi = "kirkstone:43683cb14b6afc144619335b3a2353b70762ff3e"

Initialising tasks: 100% |###################################################################################################################################################################| Time: 0:00:03
Sstate summary: Wanted 1861 Local 0 Mirrors 0 Missed 1861 Current 0 (0% match, 0% complete)
NOTE: Executing Tasks
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_NFS_V4=y in the kernel configs failed for nfsfs.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_NFS_V2=y in the kernel configs failed for nfsfs.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_NFS_V3=y in the kernel configs failed for nfsfs.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_NF_TABLES_SET=m in the kernel configs failed for nf_tables.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_DM_CRYPT=y in the kernel configs failed for dmcrypt.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_KERNEL_ZSTD=y in the kernel configs failed for kernel_zstd.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_FB_FLEX=m in the kernel configs failed for fbtft.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_FB_TFT_FBTFT_DEVICE=m in the kernel configs failed for fbtft.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_PWRSEQ_SD8787=y in the kernel configs failed for sd8787_pwrseq_driver.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_NFS_V4=y in the kernel configs failed for nfsfs.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_NFS_V2=y in the kernel configs failed for nfsfs.
WARNING: linux-raspberrypi-1_5.15.92+gitAUTOINC+e1b976ee4f_14b35093ca-r0 do_kernel_resin_checkconfig: Checking for CONFIG_NFS_V3=y in the kernel configs failed for nfsfs.
WARNING: libqmi-1.30.2-r0 do_package_qa: QA Issue: Recipe LICENSE includes obsolete licenses GPLv2 LGPLv2.1 [obsolete-license]
WARNING: libmbim-1.26.2-r0 do_package_qa: QA Issue: Recipe LICENSE includes obsolete licenses GPLv2 LGPLv2.1 [obsolete-license]
WARNING: usb-modeswitch-data-20191128-r0 do_package_qa: QA Issue: Recipe LICENSE includes obsolete licenses GPLv2 [obsolete-license]
WARNING: modemmanager-1.18.4-r0 do_package_qa: QA Issue: Recipe LICENSE includes obsolete licenses GPL-2.0 LGPL-2.1 [obsolete-license]
WARNING: usb-modeswitch-2.5.2-r0 do_package_qa: QA Issue: Recipe LICENSE includes obsolete licenses GPLv2 [obsolete-license]
WARNING: dnsmasq-2.84-r0 do_package_qa: QA Issue: Recipe LICENSE includes obsolete licenses GPLv2 GPLv3 [obsolete-license]
WARNING: libnss-ato-git-r0 do_package_qa: QA Issue: Recipe LICENSE includes obsolete licenses LGPLv3 [obsolete-license]
NOTE: Tasks Summary: Attempted 4456 tasks of which 0 didn't need to be rerun and all succeeded.

Summary: There were 19 WARNING messages.

Register a device in Balena

I was ready to run my application code with Balena. I did this part referring to an existing article.
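
These steps need the balena CLI. One way to install it is via npm (from my reading of the balena CLI docs, so double-check there; standalone binaries are also available):

# install the balena CLI globally and authenticate
npm install balena-cli -g --production --unsafe-perm
balena login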

I found my image file at build/tmp/deploy/images/raspberrypicm4-ioboard/balena-image-raspberrypicm4-ioboard.balena-img. I copied the image to my Mac, the laptop I use for coding applications for the device, and ran the following command to preload the image.

$ balena preload balena-image-raspberrypicm4-ioboard.balenaos-img --fleet [MY_FLEET_NAME] --commit [BALENA_COMMIT_ID]
Building Docker preloader image. [========================] 100%
| Checking that the image is a writable file
| Finding a free tcp port
| Creating preloader container
- Starting preloader container
\ Fetching application [MY_FLEET_NAME]
\ Reading image information
\ Fetching application [APP_ID]
/ Estimating required additional space
/ Resizing partitions and waiting for dockerd to start
Pulling 8 images [========================] 100%
| Cleaning up temporary files

And I ran the following command to register a device in my fleet.

$ balena device register [MY_FLEET_NAME]
Registering to [MY_FLEET_NAME]: [DEVICE_UUID]

Next, I created a config file. One note: Balena takes the version from the VERSION file first, even though I set 3.0.15+atsss as the version in the config file below. The dashboard shows the version of my BalenaOS as 3.0.15+ats, which was set in the VERSION file.
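
If I understand that behavior correctly, the reported OS version comes from the VERSION file at the repository root (it appeared in the earlier ls output), so changing it would look something like this (an assumption on my part, not verified):

# assumption: this is what actually ends up as the reported OS version
echo "3.0.15+ats" > VERSION

With that noted, here is the config generation command: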

$ balena config generate --version 3.0.15+atsss --appUpdatePollInterval 10 --network wifi --wifiSsid [MY_WIFI_SSID] --wifiKey [MY_WIFI_PASSWORD] --device [DEVICE_UUID] -o config.json
applicationId: [APP_ID]
deviceType: raspberrypicm4-ioboard
userId: [USER_ID]
appUpdatePollInterval: 600000
listenPort: 48484
vpnPort: 443
apiEndpoint: https://api.balena-cloud.com
vpnEndpoint: cloudlink.balena-cloud.com
registryEndpoint: registry2.balena-cloud.com
deltaEndpoint: https://delta.balena-cloud.com
mixpanelToken: [TOKEN]
logsEndpoint: https://logs.balena-cloud.com
wifiSsid: [MY_WIFI_SSID]
wifiKey: [MY_WIFI_PASSWORD]
deviceApiKey: [DEVICE_API_KEY]
registered_at: 1690385706
deviceId: [DEVICE_ID]
uuid: [DEVICE_UUID]

Finally, I configured my image using the config file and copied it to a file with the .img extension, because the .balenaos-img extension can't be recognized as an image file.

$ balena os configure balena-image-raspberrypicm4-ioboard.balenaos-img --config config.json  --device [DEVICE_UUID] --version 3.0.15+atsss
$ cp balena-image-raspberrypicm4-ioboard.balenaos-img custom_balenaos.img

Then I flashed the image to an SD card with balenaEtcher.
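
As an alternative to balenaEtcher, plain dd also works from the command line (the device path is an example; verify it with lsblk first, since dd overwrites the target without asking):

# write the configured image to the SD card (replace /dev/sdX with your card)
sudo dd if=custom_balenaos.img of=/dev/sdX bs=4M status=progress conv=fsync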

Then I inserted it into the RPi CM4 and plugged it into a power socket. I saw my BalenaOS on the dashboard!

This is the final repository of my custom BalenaOS.

That’s it!
