Since version 2.2, ZFS supports delegating datasets and their children to containers. Delegation moves control of the datasets from the host into a container's namespace (ZFS also calls such datasets "zoned"). But it's never as easy as it sounds: as with everything involving containers, the shifting of user IDs plays weird tricks on you.
I recently experimented with the ZFS delegation feature of Incus custom volumes. It allows Incus/LXD/LXC-style system containers to manage a sub-tree of ZFS datasets from inside the container. Everything is fine when you create the top dataset you want to delegate, delegate it to the container, and create all the necessary sub-datasets from inside the container. But things get weird when you have datasets created on the host that you want to move under the delegated dataset (e.g. zfs rename tank/some-where/some-data tank/incus/custom/default_c1-custom-volume/some-data).
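For context, here is a rough sketch of the setup I'm talking about. The pool, storage pool, container and volume names (tank, default, c1, c1-custom-volume) are just the ones implied by the rename example above, not anything special:

    # Create a ZFS-backed custom volume with dataset delegation enabled
    # (zfs.delegate needs ZFS 2.2+ on the host)
    incus storage volume create default c1-custom-volume zfs.delegate=true

    # Attach it to the container; a restart may be needed before the
    # delegated dataset tree is manageable from inside c1
    incus storage volume attach default c1-custom-volume c1 /mnt/custom
    incus restart c1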
It basically boils down to:
Even root inside the container can't change or write data in a dataset that was created on the host and then moved under the container's delegated custom volume. A dataset created from inside the container doesn't have this problem.
I felt this was a serious shortcoming that would impede migration scenarios like mine, so I reported it as a bug … it turns out I was holding it wrong. 😅
The Solution
To fix my situation and move externally created datasets into a zone, I needed to find the Hostid fields in the container's volatile.idmap.current option (one entry for UIDs and one for GIDs; both were 1000000 in my case).
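On the host that looks roughly like this (c1 is again the assumed container name, and the output shown is only an example; the exact mapping depends on your idmap configuration):

    incus config get c1 volatile.idmap.current
    # Example output, abridged -- the Hostid values are what matters:
    # [{"Isuid":true,"Isgid":false,"Hostid":1000000,"Nsid":0,"Maprange":1000000000},
    #  {"Isuid":false,"Isgid":true,"Hostid":1000000,"Nsid":0,"Maprange":1000000000}]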
Then running chown -R 1000000:1000000 /mountpoint/of/the/dataset/to/be/moved on the host is where the magic lies. 😁
After moving the dataset on the host with zfs unmount ..., zfs rename ... and zfs set zoned=on ..., I was not only able to zfs mount it in the container, but the IDs were now in the right range for the container to manage the data in it.
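Putting the steps together, a sketch of the full sequence using the dataset names from the rename example above (your pool, dataset names and mountpoint will differ):

    # On the host: shift ownership into the container's id range first,
    # while the dataset is still mounted on the host
    chown -R 1000000:1000000 /mountpoint/of/the/dataset/to/be/moved

    # On the host: move the dataset under the delegated custom volume
    zfs unmount tank/some-where/some-data
    zfs rename tank/some-where/some-data tank/incus/custom/default_c1-custom-volume/some-data
    zfs set zoned=on tank/incus/custom/default_c1-custom-volume/some-data

    # Inside the container: mount it and manage it from there
    zfs mount tank/incus/custom/default_c1-custom-volume/some-data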