So You Want To Delegate ZFS Datasets to Containers

ZFS has supported delegating datasets and their children to containers since version 2.2. Delegation moves control of the datasets from the host into a container’s namespace (ZFS also calls such datasets “zoned”). But it’s never as easy as it sounds. As with everything involving containers, the shifting of user IDs plays weird tricks on you.

I recently tried experimenting with the ZFS delegation feature of Incus custom volumes. It allows Incus/LXD/LXC-style system containers to manage a sub-tree of ZFS datasets from inside the container. Everything is fine when you create the top dataset you want to delegate, delegate it to the container, and create all the necessary sub-datasets from inside the container. But things get weird when you have datasets created on the host that you want to move under the delegated dataset (e.g. zfs rename tank/some-where/some-data tank/incus/custom/default_c1-custom-volume/some-data).

It basically boils down to:

Even root inside the container can’t change or write data in a dataset that was created on the host and then moved under the container’s delegated custom volume. A dataset created from inside the container doesn’t have this problem.
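
The symptom looks something like this (the mountpoint inside the container is made up; the exact error may vary):

# Inside the container, as root, on a dataset that was created
# on the host and then moved under the delegated volume:
touch /mnt/some-data/test-file
# touch: cannot touch '/mnt/some-data/test-file': Permission denied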

I felt this was a serious shortcoming that would impede migration scenarios like mine, so I reported it as a bug … it turns out, I was holding it wrong. 😅

The Solution

To fix my situation and move externally created datasets into a zone, I needed to find the Hostid fields in the container’s volatile.idmap.current option (one for UIDs and one for GIDs; both were 1000000 in my case).
Then running chown -R 1000000:1000000 /mountpoint/of/the/dataset/to/be/moved on the host is where the magic lies. 😁
After moving the dataset on the host with zfs unmount …, zfs rename …, and zfs set zoned=on …, I was not only able to zfs mount it in the container, the IDs were now in the right range for the container to manage the data in it.
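
Put together, the whole procedure looks roughly like this. This is a sketch using the dataset and container names from the example above (c1 being the hypothetical container); your Hostid offset and mountpoints will differ:

# On the host: find the container's UID/GID offset (the Hostid fields)
incus config get c1 volatile.idmap.current

# On the host: shift ownership into the container's ID range
# (assuming the dataset's default mountpoint)
chown -R 1000000:1000000 /tank/some-where/some-data

# On the host: move the dataset under the delegated custom volume
zfs unmount tank/some-where/some-data
zfs rename tank/some-where/some-data tank/incus/custom/default_c1-custom-volume/some-data
zfs set zoned=on tank/incus/custom/default_c1-custom-volume/some-data

# Inside the container: mount the moved dataset
zfs mount tank/incus/custom/default_c1-custom-volume/some-data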

The Worst Programming Language of All Time

You can argue that C++ shares this honor with the likes of JavaScript and TeX. Among them, only JavaScript managed to design itself out of the mess it was in during the early-to-mid 2000s. There are still ugly parts, but each new iteration actually improved the language as a whole, all while keeping backward compatibility. TeX is odd and idiosyncratic, but it’s a “niche” language. And then there’s C++ … which managed to become more and more of a mess the more they tried to “improve” it: making big blunders when designing features, failing to rectify them in a timely manner, and then cowardly leaving “broken” features in the language to preserve backward compatibility. *sigh*

Here’s a great collection of grievances:

While many of the features are useful and necessary for a modern language, all the pieces are so shoddily Frankensteined together it is hilarious.

Just the number of “separate” Turing-complete languages it contains is out of this world: C++ itself, its C subset, macros, templates, exceptions, constexpr/consteval, coroutines. All with separate syntax, semantics, inconsistencies, and foot guns, and no coherent design.

And even after all that, it’s still missing pieces essential for software development, like dependency and build management, which the specification doesn’t even acknowledge as relevant. 🤯 This leads to weird edge cases like ODR violations or “ill-formed, no diagnostic required” atrocities, which were summarized best in a CppCon talk:

This is a language which has false positives for the question “was this a program?”

“What is C++” – Chandler Carruth, Titus Winters – CppCon 2019, at 13:23

Gripes With Setting Up My Unifi UDM

I recently purchased a UDM SE when it was on sale. I use it as my main router, moving away from OpenWRT (for this role). The “recent” improvements, especially the zone-based firewall configuration and the beginnings of “usable” IPv6 support, are what allowed me to make the jump.

I first chose to import my old self-hosted Unifi Network setup, but then decided to redo it from scratch because it seemed “buggy.”

I’m happy overall. The new zone-based firewall is a HUGE improvement (there are specifics below)!

My gripes¹ with the setup (as of January 2026):

You cannot, under any circumstances, create a working allow rule from a network in the Guest zone … there seem to be “hidden rules” that prevent this. Finding this out ate a whole weekend.

Firewall rules can’t have “any” zone as a source or destination. E.g. you can’t create a pure WAN egress rule.

Using object policies you can get an error saying you can’t create any more “ACL rules” … why?!? I’m not using ACL rules (knowingly)! How many can I use (IIRC I had four or five)? How do I find out which ones they are? And they count even when they’re all paused?!?!?! ☠️

If you want to use the zone-based firewall to allow Internet access to specific domains only, make sure your UDM/etc. is the device’s DNS server. It doesn’t work with external DNS servers.

The Intrusion Prevention System blocks connections (e.g. to www.privacy-handbuch.de) even when it’s set to only “notify.” The logs don’t say what the reason for blocking was; I only found out by elimination. 🤮

It seems not all blocked connections are shown in the flows/logs. I had to create firewall rules for devices and services that were blocked but didn’t show up in the flows/logs view (even with all the extended logging settings enabled). I only found out because of my internal monitoring setup (yay, Prometheus Blackbox Exporter and Ping Exporter). 😱

You cannot use device groups in Firewall rules, only in object policies.

You can select devices as sources in Firewall policies, but not as destinations.

You can’t add comments to Port or IP lists. Neither on the whole list, nor on the individual entries.

“Add multiple” fields won’t filter duplicates automatically … they’ll nag you until you’ve removed them manually. 😞

There’s no way to bulk export or import DNS records … or firewall rules.

You can’t use IPv6 in WireGuard VPNs! 🤬

You can’t change the settings of the WireGuard Server or Clients. I know why they don’t allow it, but it still rubs me the wrong way.

MLO is a lie!

  1. according to the principle: “if you want to nag, at least have the courtesy to be specific!” ↩︎

Kubernetes Resource Requests Are a Massive Footgun

If you have Kubernetes workloads that configure Resource Requests on Pods or Containers, there’s a footgun “hidden” in a sentence in the documentation (kudos if you spot it immediately):

[…] The kubelet also reserves at least the request amount of that system resource specifically for that container to use. […]

This means Resource Requests actually reserve the requested amount of resources exclusively. To emphasize: this is not a fairness measure for cases of over-provisioning! So, if there are Resource Requests, you can’t “overprovision” your node/cluster … hell, a new pod won’t even be scheduled even though your node is sitting idle. 😵😓
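
A minimal sketch of how this bites (all names and numbers are made up): on a node with 4 allocatable CPUs, two replicas requesting 2 CPUs each fill the node completely, and the third stays Pending no matter how idle the node actually is.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: greedy-app            # hypothetical workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: greedy-app
  template:
    metadata:
      labels:
        app: greedy-app
    spec:
      containers:
      - name: app
        image: nginx          # stand-in image
        resources:
          requests:
            cpu: "2"          # the scheduler reserves 2 full cores per replica,
            memory: 1Gi       # regardless of how little the app actually uses

kubectl describe pod on the Pending replica will then show a scheduling event along the lines of “Insufficient cpu”.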

By the time you find out why and have patched the offending resources you’ll be swearing up and down. 🤬

Oh … and wait till you see what the Internet has to say about Resource Limits. 😰

Simulating Statically Compiled Binaries in Glorified Tarballs

Containers won for one reason: they simulate a statically compiled binary that’s ergonomic for engineers and transparent to the application. A Docker image is a glorified tarball with metadata in a JSON document.

From Joseph’s comment on “Containers and giving up on expecting good software installation practices”

I hadn’t thought of it that way, but from a developer’s perspective it makes sense. It may not be incidental that the new programming languages of the 2010s (e.g. Go, Rust, Zig) produce statically linked binaries by default.
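
The “glorified tarball” part is easy to verify yourself (alpine is just a stand-in image; the exact layout of the archive varies with the Docker version):

docker pull alpine
docker save alpine -o alpine.tar
tar -tf alpine.tar
# among the entries: manifest.json (the metadata) and the image
# layers as tarballs (e.g. under blobs/sha256/)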

I always thought of containers as a way to add standardized interfaces to an application/binary that can be configured in a common way (e.g. ports, data directories, configuration env vars, grouping and isolation). The only other ecosystem that does this, and maybe even goes a little further, is Nix.

Because the binary format itself is ossified and the ecosystem fragmented, we missed the train for advanced lifecycle hooks for applications (think multiple entry points for starting, pausing, resuming, stopping, reacting to events, etc., like on Android, iOS, or macOS) … on Linux this is again something that’s bolted on from the outside (e.g. with D-Bus, systemd, or CRIU).

Configuring Custom Ingress Ports With Cilium

This is just a note for anyone looking for a solution to this problem.

While it’s extremely easy with Kubernetes’ newer Gateway API via listeners on Gateway resources, it seems the Ingress resources were always meant to be used with (global?) default ports … mainly 80 and 443 for HTTP and HTTPS respectively. So every Ingress controller seems to have its own “side-channel solution” that leverages some resource metadata to convey this information. For Cilium this happens to be the sparsely documented ingress.cilium.io/host-listener-port annotation.

So your Ingress definition should look something like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ...
  namespace: ...
  annotations:
    ingress.cilium.io/host-listener-port: "1234"  # must be quoted: annotation values are strings
spec:
  ingressClassName: cilium
  rules:
  - http: ...