I don’t have any knowledge about the specific containers, but the way this works on my Linux distro is via the

    send host-name "(hostname)";

option in dhclient.conf. Maybe you could try explicitly setting that option?
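For reference, ISC dhclient also lets you send the system’s current hostname dynamically instead of hardcoding it; a minimal sketch (the file path varies by distro, often /etc/dhcp/dhclient.conf):

    # /etc/dhcp/dhclient.conf
    # Send the machine's hostname with DHCP requests so the server
    # can register it in DNS.
    send host-name = gethostname();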
If you want to get fancy: systemd credentials. It can store the secrets encrypted on disk and seal the encryption key with the TPM chip. The encrypted secret is decrypted (non-interactively) and made available only to a specific systemd service. The process itself doesn’t need any special systemd integration; it just sees a plain text file containing the secret, backed by a tmpfs that’s not visible to other processes.
Depending on which TPM PCRs you bind to, you can choose how secure you want it to be. A reasonable/usable configuration would be something like binding to PCRs 7 and 14. With that setup, the TPM will not unseal the key if the system is booted into any other OS (i.e. anything signed with a different UEFI Secure Boot key). But if you really want to lock things down, you can bind to additional PCRs and make it so changing any hardware, boot order, BIOS setting, etc. will prevent the TPM from unsealing the key.
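A rough sketch of what that looks like in practice (the secret name, service name, and paths here are made up for illustration; check systemd-creds(1) and systemd.exec(5) for the exact options on your systemd version):

    # Seal the secret to the TPM, binding to PCRs 7 and 14
    systemd-creds encrypt --name=db-password \
        --with-key=tpm2 --tpm2-pcrs=7+14 \
        secret.txt /etc/credstore.encrypted/db-password
    rm secret.txt

    # In the service unit: the decrypted secret shows up for the service
    # as a plain file at $CREDENTIALS_DIRECTORY/db-password, on a tmpfs
    # only this unit can read.
    [Service]
    ExecStart=/usr/bin/my-daemon
    LoadCredentialEncrypted=db-password:/etc/credstore.encrypted/db-password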
I went IPv6-only for everything internal. The only thing that’s dual stack is the WireGuard server running on the gateway. I haven’t run into any issues, mostly because my Linux distro’s package repository has many IPv6-compatible mirrors (enabled by default). For anything not in the distro’s repos, I build from source and package them up into RPMs myself, so as a side effect I don’t have to deal with e.g. GitHub not supporting IPv6.
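For context, the gateway config is roughly this shape (addresses and keys are placeholders, wg-quick style): the tunnel only carries IPv6 internally, while the UDP endpoint itself is reachable from both IPv4 and IPv6 clients.

    # /etc/wireguard/wg0.conf on the gateway (placeholder ULA prefix)
    [Interface]
    Address = fd42:0:0:1::1/64         # IPv6-only inside the tunnel
    ListenPort = 51820                 # endpoint reachable over v4 and v6
    PrivateKey = <gateway-private-key>

    [Peer]
    PublicKey = <laptop-public-key>
    AllowedIPs = fd42:0:0:1::10/128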
Even things with generally crappy firmware, like the APC UPS management card, Supermicro & ASRock IPMI management interfaces, etc. have worked fine in an IPv6-only setup for me.