Running Circles Around Detecting Containers

Recently my monitoring service warned me that my Raspberry Pi was not syncing its time any more. I logged into the device and tried restarting systemd-timesyncd.service, but it failed.

The error it presented was:

ConditionVirtualization=!container was not met

I was confused. Although I was running containers on this device, this was on the host! 😯

I checked the service definition and it indeed had this condition. Then I tried to look up the docs for the ConditionVirtualization setting and found out that systemd has a helper command that can be used to find out whether it is being run inside a container/VM/etc.

To my surprise, running systemd-detect-virt claimed it was being run inside a Podman container, even though it ran on the host. I was totally confused. Does it detect any container on the system, or being run inside one? 😵‍💫
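
For reference, this is roughly what the host reported, something that should only ever happen inside a container:

$ systemd-detect-virt --container
podman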

I tried to dig deeper, but the docs only tell you which known container/VM solutions can be detected, not what it uses to detect them. So I searched the code of systemd-detect-virt for hints at how it tries to detect Podman containers … and I found it: it looks for the existence of a file at /run/.containerenv. 😯
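
Knowing that, the Podman part of the check can be reproduced by hand, assuming the same logic:

$ test -e /run/.containerenv && echo "podman?" || echo "no container marker"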

Checking whether this file existed on the host, I found out: it did!!! 😵 How could this be? I checked another device running Podman and the file wasn’t there!?! 😵‍💫 … Then it dawned on me. I was running cAdvisor on the Raspberry Pi, and it wants /var/run to be mounted inside the container. /var/run is just a symlink to /run, and regardless of me mounting it read-only, Podman creates the /run/.containerenv file in there, right on the host!!! 🤯
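
For illustration, a start command along these lines triggers it (loosely adapted from cAdvisor’s documented Docker invocation; the exact flags here are my reconstruction):

$ podman run --detach --name=cadvisor \
    --volume=/var/run:/var/run:ro \
    --volume=/sys:/sys:ro \
    gcr.io/cadvisor/cadvisor:latest

The /var/run mount resolves to the host’s /run, so Podman’s container marker ends up there.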

I looked into /run/.containerenv and found out it was empty, so I removed it and could finally restart systemd-timesyncd.service. The /run/.containerenv file is recreated on every restart of the container, but at least I know what to look for. 😩
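
The cleanup itself boiled down to something like:

$ cat /run/.containerenv    # empty, nothing to preserve
$ rm /run/.containerenv
$ systemctl restart systemd-timesyncd.service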

JavaScript History’s Future as Seen From 2022

Brian Sletten presents an overview of the WebAssembly landscape, its development direction and the applications it enables. I can’t help but notice that we’re really on the path to WebAssembly becoming the JavaScript-derived universal runtime Gary Bernhardt promised in 2014. 🤯

https://www.youtube.com/watch?v=J3hK7O5Oc2Y

Dropbear vs SSH woes between Ubuntu LTSes

Imagine you’re using dropbear-initrd to log in to a server during boot to unlock its hard disk encryption, and you’re greeted with the following error after a reboot:

root@server: Permission denied (publickey).

🤨😓😖 You start to sweat … this looks like extra work you didn’t need right now. You try to remember: were there any updates lately that could have messed up the initrd? … Deep breath, let’s take it slowly.

First try to get SSH to spit out more details:

$ ssh -vvv server-boot
[...]
debug1: Next authentication method: publickey
debug1: Offering public key: /home/user/.ssh/... RSA SHA256:... explicit
debug1: send_pubkey_test: no mutual signature algorithm
[...]

That doesn’t seem right … this worked before. The server is running Ubuntu 20.04 LTS and I’ve just upgraded my work machine to Ubuntu 22.04 LTS. I know that Dropbear doesn’t support ed25519 keys (at least not the version on the server), which is why I still use RSA keys for this. 🤔

Time to ask the Internet, but all the posts with a “no mutual signature algorithm” error message are years old … still, most of them circle around the SSH client having deprecated old key types (namely DSA keys). 😯

Can it be that RSA keys have also been deprecated? 😱 … I’ve recently upgraded my client machine 😶 … no way! … well, yes! That was exactly the problem. To be precise, the RSA keys themselves are fine: OpenSSH 8.8 and later (Ubuntu 22.04 ships 8.9) disable the SHA-1-based ssh-rsa signature algorithm by default, and the old Dropbear on the server apparently supports no newer RSA signature algorithm.

Re-enabling the ssh-rsa algorithm in the connection settings for that server allowed me to log in again 😎:

PubkeyAcceptedKeyTypes +ssh-rsa
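
In context, the relevant ~/.ssh/config entry looks something like this (host name and port are placeholders):

Host server-boot
    HostName server.example.org
    Port 22
    User root
    PubkeyAcceptedKeyTypes +ssh-rsa

Newer OpenSSH versions spell this option PubkeyAcceptedAlgorithms; the old name still works as an alias.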

But this whole detour wasted an hour of my life. 😓

Finding out what rules to add to /etc/gai.conf

I had a weird problem. I was using network prefix translation (NPT) for routing IPv6 packets to the Internet through a VPN. But while all devices could connect to the IPv6 Internet without problems, they never did so on their own. They always preferred IPv4 connections when they had the choice. 🤨

Problem Background

I knew that modern network stacks are generally configured to prefer IPv6 over IPv4, but was baffled why mine wouldn’t use IPv6 when connections to the Internet clearly worked. A little bit of tinkering revealed that IPv4 connections to the Internet are preferred whenever my device has no global IPv6 addresses. Because I was relying on NPT, my devices only had ULAs.
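
One quick way to see which address family getaddrinfo prefers is to look at the ordering it returns (host name is a placeholder; the first entry wins):

$ getent ahosts example.org | head -n 3

With only ULAs configured, the IPv4 addresses were sorted to the top.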

It turns out that the wise people making standards decided that, when a device has only private IPv4 addresses and ULAs, IPv4 connections are preferred for the Internet, under the assumption that private IPv4 addresses are almost certainly NATed to the Internet while a ULA probably (definitely?) isn’t routed there. 😯

Finding a Solution

A quick search for anything related to IPv4 vs. IPv6 priority leads almost exclusively to questions and posts where the authors want IPv4 to always be prioritized over IPv6. Although my case was the opposite, one thing became clear: it had to do with modifying /etc/gai.conf. It’s the file for configuring RFC 6724 (i.e. “Default Address Selection for Internet Protocol Version 6 (IPv6)”).

This allowed me to influence the selection algorithm, which seemed to be what my problem needed. If you open this file, it even has commented-out lines for solving the “always prefer IPv4 over IPv6” problem. The inverse case was not so simple: among the precedence rules there is no address range for ULAs, and adding one for my specific ULA didn’t solve the problem either:

[...]
# precedence  <mask>   <value>
#    Add another rule to the RFC 3484 precedence table.  See section 2.1
#    and 10.3 in RFC 3484.  The default is:
#
precedence  ::1/128        50
precedence  ::/0           40
precedence  2002::/16      30
precedence  ::/96          20
precedence  ::ffff:0:0/96  10
precedence  fd00:11:22::/48  45  # <-- added my ULA, but didn't help
#
#    For sites which prefer IPv4 connections change the last line to
#
#precedence ::ffff:0:0/96  100
[...]

Manual Algorithm

I tried to take a step back and find out whether a precedence setting was even the right change. I bit the bullet and tried to evaluate the “Source Address Selection” algorithm from RFC 6724 (Section 5) by hand.

Candidate Addresses

My candidate addresses for the destination (this server) were:

2a01:4f8:c2c:8101::1   # native IPv6
::ffff:116.203.176.52  # native IPv4 (mapped to IPv6 for this algorithm)

My candidate source addresses (from my WLAN connection) looked like:

fd00:11:22::aa  # global dynamic noprefixroute
fd00:11:22::bb  # global temporary dynamic
fd00:11:22::cc  # global mngtmpaddr noprefixroute
::ffff:10.0.0.50  # private IPv4 (mapped to IPv6 for this algorithm)
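
The flags in the comments are the ones ip(8) shows for each address, e.g. (interface name is a placeholder):

$ ip addr show dev wlan0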

The Rules

Rule 1: Prefer same address.

skip, source and destination are not the same.

Rule 2: Prefer appropriate scope.

skip, connection is unicast, so no multicast.

Rule 3: Avoid deprecated addresses.

skip, no deprecated source addresses used.

Rule 4: Prefer home addresses.

skip? I was not sure what a “home address” is supposed to be, but it seems to come from Mobile IPv6. I just assumed all source addresses were “home” addresses.

Rule 5: Prefer outgoing interface.

skip, I was already only considering the outgoing interface here.

Rule 5.5: Prefer addresses in a prefix advertised by the next-hop.

skip? all next-hops were fe80::<router's EUI-64>.

Rule 6: Prefer matching label.

We get the default labels from /etc/gai.conf (mine from Ubuntu 21.10):

[...]
#label ::1/128       0  # loopback address
#label ::/0          1  # IPv6, unless matched by other rules
#label 2002::/16     2  # 6to4 tunnels
#label ::/96         3  # IPv4-compatible addresses (deprecated)
#label ::ffff:0:0/96 4  # IPv4-mapped addresses
#label fec0::/10     5  # site-local addresses (deprecated)
#label fc00::/7      6  # ULAs
#label 2001:0::/32   7  # Teredo tunnels
[...]

Then the destination addresses would get labeled like this:

2a01:4f8:c2c:8101::1   # label 1
::ffff:116.203.176.52  # label 4

And the source addresses would get labeled like this:

fd00:11:22::aa  # label 6
fd00:11:22::bb  # label 6
fd00:11:22::cc  # label 6
::ffff:10.0.0.50  # label 4

Here we see why the IPv4 addresses get selected: the IPv4 destination and source addresses share the same label (4), while the IPv6 ones don’t (1 vs. 6). 😔

So I could add a new label rule for my ULA that assigns the same label as ::/0 (i.e. 1 here). I didn’t change the label on the fc00::/7 line so as not to change the behavior for all ULAs; I only wanted a special rule for my specific network. Since any custom label line replaces the whole built-in default table, I uncommented the default label lines and added the following line:

label fd00:11:22::/48 1  # my ULA prefix and the same label as ::/0
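
The label section of my /etc/gai.conf thus ended up like this:

label ::1/128       0  # loopback address
label ::/0          1  # IPv6, unless matched by other rules
label 2002::/16     2  # 6to4 tunnels
label ::/96         3  # IPv4-compatible addresses (deprecated)
label ::ffff:0:0/96 4  # IPv4-mapped addresses
label fec0::/10     5  # site-local addresses (deprecated)
label fc00::/7      6  # ULAs
label 2001:0::/32   7  # Teredo tunnels
label fd00:11:22::/48 1  # my ULA prefix and the same label as ::/0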

Reboot (may not be strictly necessary) … and lo and behold, it worked! 😎
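
The getent check from above should now list the IPv6 destination first, since with matching labels the ULA source no longer loses out in Rule 6:

$ getent ahosts example.org | head -n 3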

Conclusion

While this worked, I really felt uneasy messing with the address prioritization, especially if you take into account that I’d have to do this on every device. This is on top of the already esoteric setup for using NPT. 🙈

I later found out that when the VPN goes down (i.e. there’s no IPv6 Internet connectivity) it won’t (actually can’t) fall back to IPv4 for the Internet connection. 😓