Running k3s on Incus

I know the pain of managing a bunch of services on my own. Even when relying on Incus, Podman and systemd as much as possible, held together by lots of Ansible duct tape, it’s still arduous. I convinced myself change was in order: … something something Kubernetes.

My main criteria are basically:

  • Must be able to run on a single node (for now), i.e. no clustered services or databases (k3s looks like it fits the bill)
  • Services must be deployable from public service definitions (Helm FTW)
  • These service definitions must lend themselves to version control
  • All relevant data directories must live on separate ZFS datasets (see the sketch after this list)
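
For that last point, a minimal sketch of what I have in mind, assuming a ZFS-backed Incus storage pool named tank and an instance named k3s (both names are made up):

# create a custom volume (a ZFS dataset) and attach it as k3s' data directory
incus storage volume create tank k3s-data
incus config device add k3s k3s-data disk pool=tank source=k3s-data path=/var/lib/rancher/k3s

That way the data lives on its own dataset and can be snapshotted independently of the instance.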

Running k3s in an Incus container

You can run k3s in an Incus container, but it gets increasingly difficult. There are reports of people getting it to run, but even the public LXD/LXC definitions for microk8s or k3s are quite old (as of 2025-08, 3 and 6 years old respectively) and blast HUGE holes in the sandbox. ☹️ K3s “requires” access to /dev/kmsg and several places in /proc and /sys, as well as modprobing several kernel modules (it checks for access to them and spams the logs with warnings and errors). 😶
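
To give an idea of how big those holes are, here’s roughly what such a definition boils down to in Incus terms (a sketch only, not a recommendation; the instance name k3s is made up):

# run the container privileged, load the needed kernel modules on instance start,
# and hand the container the host's /dev/kmsg
incus config set k3s security.privileged=true
incus config set k3s linux.kernel_modules=br_netfilter,overlay
incus config device add k3s kmsg unix-char source=/dev/kmsg path=/dev/kmsg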

It looks doable in a technical sense, but it’s a huge pain having to go through Incus without getting any of its (sandboxing/security) benefits. So the general wisdom is to just use a VM. (No, I didn’t try k3s’ experimental rootless mode.)

Running k3s in an Incus VM

I started with a fresh VM and could reuse my now much simplified Ansible tasks for setting up k3s. But my happiness was cut short by the k3s service spamming the journal with useless

level=error msg="failed to ping connection: disk I/O error: no such device"

error messages. After removing all directories and files from /var/lib/rancher/k3s and starting the server by hand, I got:

Error: preparing server: failed to bootstrap cluster data: creating storage endpoint: failed to create driver for default endpoint: setup db: disk I/O error: no such device

Some more mucking around with the k3s server config revealed a puzzling, but more useful

failed to mount overlay: invalid argument.

Looking at what dmesg had to say I got:

overlayfs: upper fs does not support tmpfile.
overlayfs: failed to set xattr on upper
overlayfs: …falling back to redirect_dir=nofollow.
overlayfs: …falling back to uuid=null.
overlayfs: …falling back to xino=off.
overlayfs: try mounting with 'userxattr' option
overlayfs: upper fs missing required features.
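
The failing overlay mount can be reproduced without k3s directly on the virtiofs share (a sketch; the ovl directory is made up, run inside the VM as root):

# set up the directories overlayfs needs, then try to mount on top of the virtiofs share
mkdir -p /var/lib/rancher/k3s/ovl/{lower,upper,work,merged}
mount -t overlay overlay -o lowerdir=/var/lib/rancher/k3s/ovl/lower,upperdir=/var/lib/rancher/k3s/ovl/upper,workdir=/var/lib/rancher/k3s/ovl/work /var/lib/rancher/k3s/ovl/merged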

Long story short: it turns out that in my eagerness I had mounted a custom Incus volume as k3s’ data directory (/var/lib/rancher/k3s). This being a VM (instead of a container), the volume was mounted using the virtiofs protocol. And it turns out overlayfs doesn’t like being put on top of virtiofs (or NFS, it seems). 😵‍💫

But good news: it was fixable, although in a hacky way. By grepping for “virtiofsd” processes I found out that Incus vendors its own virtiofsd binary in /opt/incus/bin/virtiofsd, and that it already runs it with the --posix-acl option, which implies the required --xattr option. But Incus currently doesn’t offer any way to configure virtiofsd. 😓 So the only solution (suggested by the main Incus maintainer, no less) is to replace /opt/incus/bin/virtiofsd with a shim script that calls the real virtiofsd binary with the additional --modcaps=+sys_admin option. Basically something silly like:

#!/usr/bin/bash
exec /opt/incus/bin/virtiofsd.orig --modcaps=+sys_admin "$@"
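
Putting the shim in place might look like this (an untested sketch; only the /opt/incus/bin paths are taken from the actual Incus packaging):

# move the real binary aside, install the shim in its place, make it executable
mv /opt/incus/bin/virtiofsd /opt/incus/bin/virtiofsd.orig
printf '#!/usr/bin/bash\nexec /opt/incus/bin/virtiofsd.orig --modcaps=+sys_admin "$@"\n' > /opt/incus/bin/virtiofsd
chmod +x /opt/incus/bin/virtiofsd

Afterwards, pgrep -af virtiofsd should show the extra option for newly started instances.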

Yeah also, “try mounting with ‘userxattr’ option” was not helpful and sent me down the wrong path. 🤐

All in all … all these stumbling blocks ate my weekend. Which was kind of in line with my prejudices against Kubernetes. 😅

Those ones were the expensive headcount anyway

Ars Technica reports on a study that measured the productivity of software developers from different open source projects doing various (also non-coding) tasks.

In the comments there’s a snarky summary of the article’s main point:

“These factors lead the researchers to conclude that current AI coding tools may be particularly ill-suited to “settings with very high quality standards, or with many implicit requirements (e.g., relating to documentation, testing coverage, or linting/formatting) that take humans substantial time to learn.” While those factors may not apply in “many realistic, economically relevant settings” involving simpler code bases, they could limit the impact of AI tools in this study and similar real-world situations.”

So as long as I cull the experienced people and commit to lousy software the glorious Age of AI will deliver productivity gains? Awesome, those ones were the expensive headcount!

Jon Stewart on Trevor Noah’s What Now Podcast

No one has discernment for what they aren’t. […] You can’t. It’s the hardest thing in the world. It’s hard enough to have empathy to what they aren’t let alone discernment. […]
Jon Stewart at 50:30

If we were more understanding of prejudice and stereotype and less tolerant of racism we’d understand that prejudice and stereotype are functions mostly of ignorance and of experience. Racism is malevolent, right? But the other is way more natural, but we react as though it would metastasize immediately. And so I think we throw out barriers to each other […] before we have to.
Jon Stewart at 56:00

Century-Scale Storage

What would you use to keep (digital) data safe for at least a hundred years? Maxwell Neely-Cohen looks at all the factors, possible technologies, and social and economic challenges you have to contend with if you intentionally want to store data for a century. He explicitly chose that time scale because it is at the edge of what a human can experience, yet beyond a single human’s working life as well as the lifetime of most companies and institutions. So the premise sets you up for a host of problems to solve. He also analyses past and present strategies for recording and keeping data and evaluates their potential for keeping data safe at century scale.
It’s long, but worth it.

Force VLC to use VA-API for Hardware Accelerated Video Decoding

tl;dr: add the --avcodec-hw=vaapi option on the command line or to the Exec option in the .desktop file.

It’s stupid, I know, but it’s been bothering me for a while now. Especially when I want to watch conference talks that are available in the AV1 video format (e.g. FOSDEM), the video always seems to hang (showing an old frame indefinitely), have broken decoding (alternating weirdly colored blocks), de-sync from audio or just stay black. This has been happening on both Intel and AMD integrated graphics for years now, and I had somehow decided that VDPAU must be the culprit. I also definitely know that VA-API works on my machines, because I’ve tested it … so that can’t be the problem. 😇

VLC (generally) supports both VA-API (mainly for Intel and AMD hardware) and VDPAU (for Nvidia) for hardware accelerated video decoding, but on my Ubuntu desktop machines it prefers VDPAU on any hardware for some reason. The settings don’t even show support for anything else: “Simple Preferences” -> “Input/Codecs” tab -> “Hardware-accelerated decoding” only shows the “Automatic”, “VDPAU video decoder” and “Disable” options. 😵‍💫 The only “variant” that correctly uses VA-API automatically on my machines is the VLC Flatpak. I checked which backend was in use via the “Modules Tree” tab in the “Tools” -> “Messages” dialog: it will show “vdpau”-something in the “video output” subtree (or not).

The Solution

So I dug through weird forums and tried different suggested options, many of which weren’t even supported, until I found the right incantation: --avcodec-hw=vaapi.
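
You can test the incantation before changing anything permanently by starting VLC from a terminal (the file name here is just a placeholder):

# play a test file with VA-API decoding forced on
vlc --avcodec-hw=vaapi fosdem-talk.av1.webm

If it works, the “Modules Tree” mentioned above should no longer show the “vdpau” entries.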

Fixing the .desktop file

To make your desktop always call VLC with the right option, you have to edit VLC’s so-called .desktop file. Mine was located at /usr/share/applications/vlc.desktop.
The relevant line looked like this: Exec=/usr/bin/vlc --started-from-file %U.

Copy the vlc.desktop file to the $HOME/.local/share/applications/ directory if you want to change the behavior only for yourself. Alternatively, if you have root privileges, you can update vlc.desktop for all users of the machine by copying it to /usr/local/share/applications/. NOTE: you may need to create those directories first.

Then edit the Exec= line to look like this: Exec=/usr/bin/vlc --avcodec-hw=vaapi --started-from-file %U

Or if you want to just copy the relevant commands:

# create the directory for personal .desktop files
mkdir -p $HOME/.local/share/applications/

# copy the original vlc.desktop to this directory
cp /usr/share/applications/vlc.desktop $HOME/.local/share/applications/

# edit the copied vlc.desktop by changing its "Exec" option to include the relevant VLC option
desktop-file-edit --set-key=Exec --set-value="/usr/bin/vlc --avcodec-hw=vaapi --started-from-file %U" $HOME/.local/share/applications/vlc.desktop
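
Optionally, you can double-check the result; desktop-file-validate ships in the same desktop-file-utils package as desktop-file-edit:

# prints nothing if the edited .desktop file is still valid
desktop-file-validate $HOME/.local/share/applications/vlc.desktop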

Enjoy!

We’ll Ask The AI How to Make Money

We have no current plans to make revenue.

We have no idea how we may one day generate revenue.

We have made a soft promise to investors that once we’ve built a general intelligence system, basically we will ask it to figure out a way to generate an investment return for you.

Sam Altman to VCs in 2024

A video of this memorable moment … you can’t make this up.