This is a pointless but fun investigation of why some of the letters “H” above the entrance of Frank Lloyd Wright’s Unity Temple church in Chicago are upside-down. The author tracks down historical documents and pictures to reconstruct when those letters were put up and maybe taken down … and ultimately to see how far back the mistake goes.
Current
Terry Godier has found a great metaphor for a feed reader: a current. It leaves the shadow of mail clients and models feeds as currents with different velocities: items automatically drift by and fade away if unread. While moving away from traditional mail-like UI concepts, feeds are still presented in order (in contrast to social-media-like “curated” feeds).
I like the idea and how far the metaphor carries and applies to all the technical and usability bits. It’ll take time to see if it really “holds water,” 😜 but I’m intrigued.
So You Want To Delegate ZFS Datasets to Containers
ZFS has supported delegating datasets and their children to containers since version 2.2. Delegation moves control of the datasets from the host into a container’s namespace (ZFS also calls such datasets “zoned”). But it’s never as easy as it sounds. As with everything container-related, the shifting of user IDs plays weird tricks on you.
I recently experimented with the ZFS delegation feature of Incus custom volumes. It allows Incus/LXD/LXC-style system containers to manage a sub-tree of ZFS datasets from inside the container. Everything is fine when you create the top dataset you want to delegate, delegate it to the container, and create all the necessary sub-datasets from inside the container. But things get weird when you have datasets created on the host that you want to move under the delegated dataset (e.g. zfs rename tank/some-where/some-data tank/incus/custom/default_c1-custom-volume/some-data).
It basically boils down to:
Even root can’t change or write data into a dataset that was created on the host and then moved under a container’s delegated custom volume. Creating a new dataset from inside the container doesn’t have the same problem.
I felt like this was a serious shortcoming and would impede migration scenarios like mine so I reported it as a bug … it turns out, I was holding it wrong. 😅
The Solution
To fix my situation and move externally created datasets into a zone, I needed to find the Hostid fields in the container’s volatile.idmap.current option (one for UIDs and one for GIDs; both were 1000000 in my case).
Then running chown -R 1000000:1000000 /mountpoint/of/the/dataset/to/be/moved on the host is where the magic lies. 😁
After moving the dataset on the host (zfs unmount ..., zfs rename ..., zfs set zoned=on ...), I was not only able to zfs mount it in the container, but the IDs were now also in the right range for the container to manage the data in it.
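Putting the whole workaround together, the sequence looks roughly like this sketch (pool, container, and volume names are the ones from my setup above; the host mountpoint is an assumption, so adjust everything to your environment):

```shell
# On the host: re-own all files into the container's ID range.
# 1000000 is the Hostid value from volatile.idmap.current.
chown -R 1000000:1000000 /tank/some-where/some-data

# Still on the host: move the dataset under the delegated
# custom volume and mark it as zoned.
zfs unmount tank/some-where/some-data
zfs rename tank/some-where/some-data tank/incus/custom/default_c1-custom-volume/some-data
zfs set zoned=on tank/incus/custom/default_c1-custom-volume/some-data

# Inside the container: mount and manage the dataset as usual.
incus exec c1 -- zfs mount tank/incus/custom/default_c1-custom-volume/some-data
```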
LLMs Are Architecturally Incapable of Abductive Reasoning
LLMs are architecturally incapable of abductive reasoning.
Software Engineering Past, Present, and Future with Grady Booch at 43:27
The Worst Programming Language of All Time
You can argue that C++ shares this honor with the likes of JavaScript and TeX. Among them, only JavaScript managed to design itself out of the mess it was in during the early-to-mid 2000s. There are still ugly parts, but each new iteration actually improved the language as a whole, all while keeping backward compatibility. Well, TeX is odd and idiosyncratic, but it’s a “niche” language. And then there’s C++ … which managed to become more and more of a mess the more they tried to “improve” it: making big blunders in designing features, failing to rectify them in a timely manner, and then, out of cowardice, leaving “broken” features in the language to preserve backward compatibility. *sigh*
Here’s a great collection of grievances:
While many of the features are useful and necessary for a modern language, all the pieces are so shoddily Frankensteined together that it’s hilarious.
Just the number of “separate” Turing-complete languages it contains is out of this world: C++ itself, its C subset, macros, templates, exceptions, constexpr/consteval, coroutines. All with their own syntax, semantics, inconsistencies, and footguns, and no coherent design.
And even after all that, it’s still missing pieces essential for software development, like dependency and build management, which the specification doesn’t even acknowledge as relevant. 🤯 All this leads to weird edge cases like ODR violations or “ill-formed, NDR”-like atrocities, which was summarized best in a CppCon talk:
This is a language which has false positives for the question “was this a program?”
What is C++ – Chandler Carruth, Titus Winters – CppCon 2019 at 13:23
Mülayim
A: Biz Mülheim’a gideceğiz. (“We’re going to Mülheim.”)
B: Mülayim kim? (“Who is Mülayim?”)
Residual Data In Backend Systems
the video was apparently “recovered from residual data located in backend systems.”
Google’s answer on how they “found” “expired” Nest doorbell footage.
Gripes With Setting Up My UniFi UDM
I recently purchased a UDM SE when it was on sale and use it as my main router, moving away from OpenWrt (for this role). Especially the “recent” improvements, namely the zone-based firewall configuration and the beginnings of “usable” IPv6 support, are what allowed me to make the jump.
I first chose to import my old self-hosted UniFi Network setup, but then redid it from scratch because the imported configuration seemed “buggy.”
I’m happy overall. The new zone-based firewall is a HUGE improvement (there’re specifics below)!
My gripes1 with the setup (as of January 2026):
You cannot–under any circumstances–create a working allow rule from a network in the Guest zone … there seem to be “hidden rules” preventing this. Finding this out ate a whole weekend.
Firewall rules can’t have “any” zone as a source or destination. E.g. you can’t create a pure WAN egress rule.
Using object policies you can get an error saying you can’t create any more “ACL rules.” … why?!? I’m not (knowingly) using ACL rules! How many can I use (IIRC I had four or five)? How do I find out which ones they are? And they count even when they’re all paused?!?!?! ☠️
If you want to use the zone-based firewall to allow Internet access to specific domains only, make sure your UDM/etc. is the device’s DNS server. It doesn’t work with external DNS servers.
The Intrusion Prevention System blocks connections (e.g. to www.privacy-handbuch.de) even when it’s set to only “notify.” The logs don’t say what the reason for blocking was; I only found out by elimination. 🤮
It seems not all blocked connections are shown in the flows/logs. I’ve had to create firewall rules for devices and services that were blocked, but didn’t show up in the flows/logs view (even with all the extended logging settings set). I only found out because of my internal monitoring setup (yay, Prometheus Blackbox Exporter and Ping Exporter). 😱
You cannot use device groups in Firewall rules, only in object policies.
You can select devices as sources in Firewall policies, but not as destinations.
You can’t add comments to Port or IP lists. Neither on the whole list, nor on the individual entries.
“Add multiple” fields won’t filter duplicates automatically … they will nag you until you’ve removed them manually. 😞
There’s no way to bulk export or import for DNS records … or firewall rules.
You can’t use IPv6 in WireGuard VPNs! 🤬
You can’t change the settings of the WireGuard Server or Clients. I know why they don’t allow it, but it’s rubbing me the wrong way.
- according to the principle: “if you want to nag at least have the courtesy to be specific!” ↩︎
Kubernetes Resource Requests Are a Massive Footgun
If you have Kubernetes workloads that configure Resource Requests on Pods or Containers, there’s a footgun “hidden” in a sentence in the documentation (kudos if you spot it immediately):
[…] The kubelet also reserves at least the request amount of that system resource specifically for that container to use. […]
This means Resource Requests actually reserve the requested amount of resources exclusively. To emphasize: this is not a fairness measure for cases of over-provisioning! So, if there are Resource Requests, you can’t “overprovision” your node/cluster … hell, a new pod won’t even be scheduled although your node is sitting idle. 😵😓
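As a minimal illustration (the names, image, and numbers here are made up, not from any real incident): a pod that requests 2 CPUs subtracts those 2 CPUs from the node’s allocatable capacity at scheduling time, whether or not the container ever uses them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: greedy-idler        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx          # placeholder image
      resources:
        requests:
          cpu: "2"          # reserved for scheduling, even if the pod idles
          memory: "1Gi"
```

On a node with 4 allocatable CPUs, two such pods fill it up; a third stays Pending with an “Insufficient cpu” scheduling event, no matter how idle the node actually is.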
By the time you find out why and have patched the offending resources you’ll be swearing up and down. 🤬
Oh … and wait till you see what the Internet has to say about Resource Limits. 😰
Simulating Statically Compiled Binaries in Glorified Tarballs
Containers won for one reason: they simulate a statically compiled binary that’s ergonomic for engineers and transparent to the application. A Docker image is a glorified tarball with metadata in a JSON document.
From Joseph’s comment on “Containers and giving up on expecting good software installation practices”
I hadn’t thought of it that way, but from a developer’s perspective it makes sense. It may not be incidental that the new programming languages of the 2010s (e.g. Go, Rust, Zig) produce statically linked binaries by default.
I always thought of containers as a way to add standardized interfaces to an application/binary that can be configured in a common way (e.g. ports, data directories, configuration env vars, grouping and isolation). The only other ecosystem that does this, and maybe even goes a little further, is Nix.
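A hypothetical Dockerfile makes those standardized interfaces visible; each instruction below is one of those common knobs (the base image, paths, and binary name are made up for illustration):

```dockerfile
FROM debian:stable-slim

# Configuration via environment variables:
ENV APP_PORT=8080
# Standardized port declaration:
EXPOSE 8080
# Standardized data directory:
VOLUME /var/lib/myapp
# A single, well-known entry point (hypothetical binary):
ENTRYPOINT ["/usr/local/bin/myapp"]
```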
Because the binary format itself is ossified and the ecosystem fragmented enough, we missed the train for advanced lifecycle hooks for applications (think multiple entry points for starting, pausing, resuming, stopping, reacting to events, etc., like on Android, iOS, or macOS) … in Linux this is something that’s again bolted on from the outside (with e.g. D-Bus, systemd, CRIU).