
OpenVPN with IPv6 in multi-client server mode

The main showstopper for IPv6 in my private network environment was the lack of IPv6 payload support in OpenVPN's multi-client server mode. I use the OpenVPN multi-client server mode extensively with a number of clients, and adding IPv6 to my OpenVPN network would have meant rebuilding most of it without multi-client server mode. That would have meant a rather dirty construction with one process per client or even **gasp** bridging. I did not have the heart to actually do this and stayed with IPv4.

Thankfully, those times are over: Gert Döring, Thomas Glanzmann, Bernhard Schmidt and Jan Dirnberger spent the better part of the Christmas holidays implementing IPv6 payload support in OpenVPN's multi-client server mode. They have published a patch against OpenVPN 2.1 and a number of binary packages implementing the feature I have been waiting for.

Unfortunately, the IPv6-over-OpenVPN-multi-client-mode patch clashes with the well-known OpenVPN-over-IPv6 patch, so I had to disable the latter in my locally patched version of Debian's OpenVPN package. Bernhard's binary packages contain both patches.

Enabling IPv6 multi-client server mode is really a breeze. Add server-ipv6 and route-ipv6 statements to your server configuration, and you're done. Client-config-dir works for IPv6 as well, so I can assign static IPv6 addresses to the clients and make them point their IPv6 default route into the tunnel by virtue of an ifconfig-ipv6-push and a push route-ipv6 statement inside the client-config-dir file, as sketched below.
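For illustration, here is a minimal sketch of what this looks like with the patched OpenVPN; the prefixes and the client name are made up, so substitute your own:

    # server.conf: hand out IPv6 addresses to clients from a /64,
    # and route an additional prefix through the tunnel
    server-ipv6 2001:db8:100::/64
    route-ipv6 2001:db8:200::/64

    # client-config-dir file for one client (e.g. ccd/laptop):
    # a static tunnel address for this client...
    ifconfig-ipv6-push 2001:db8:100::1000/64
    # ...and tell it to send all its IPv6 traffic into the tunnel
    push "route-ipv6 2000::/3"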

That's it. Clients with unpatched client software can still connect (and will only get IPv4, just as before), and clients with patched client software will transparently get IPv6 in addition to the IPv4 tunnel. Now I only have to pay attention again to which services are running on my laptop - it's publicly visible on the intarwebs again.

Guys, your work rocks. I really, really appreciate it. Good job. I owe you more than a beer. Now we only need to convince OpenVPN upstream to accept your patch.

How much added complexity in packages to cater for apt's shortcomings?

It is well known that apt has an issue when it comes to resolving circular dependencies. Debian bug reporters have therefore set out to eradicate circular dependencies from the archive. This does, however, add significant bloat to the packages themselves, and I question whether it is really necessary.


Block devices in KVM guests

In the last few days, I found some time to spend with KVM and libvirt. Unfortunately, there is one subject for which I haven't yet found a satisfying solution: the naming of block devices in guest instances.

This is surely a common issue, but solutions are rare. Neither an article on Usenet (in German) nor the German version of this blog article has yielded solutions to the main question. I should have written this in English in the first place, and am thus translating from German to English, hoping that there will be some answers and suggestions.

KVM is quite inflexible when it comes to configuring block devices. On the host, it is possible to define which files or whole devices should be visible in the guest. The documentation suggests that devices should be brought into the guest with the virtio model, which needs support in the guest kernel; importing a device as an emulated ATA or SCSI device brings a performance penalty.
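For reference, this is roughly what a virtio disk looks like in a libvirt domain definition; the source path is just an example, not a fixed name:

    <!-- fragment of the domain XML: export a host block device
         (here an LVM volume) to the guest as a virtio disk -->
    <disk type='block' device='disk'>
      <source dev='/dev/vg0/guest-root'/>
      <target dev='vda' bus='virtio'/>
    </disk>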

Devices brought into the guest via virtio appear in the guest's /dev as /dev/vd<x> and also have their corresponding entries in /dev/disk/by-uuid and /dev/disk/by-path. The vd<x> nodes are simply numbered in consecutive order, just like hd<x> and sd<x>. The /dev/disk/by-uuid entry carries the correct UUID of the file system found on the device, at least if it is a block device partitioned inside the guest and formatted with ext3 (I didn't try anything else yet). The naming scheme of the /dev/disk/by-path nodes is not yet clear to me, and I am somewhat reluctant to assume that the PCI paths of emulated hardware are stable.
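Given that, the by-uuid entries look like the most robust handle for now. A guest fstab entry along these lines (the UUID is obviously made up) avoids depending on the vd<x> enumeration order:

    # /etc/fstab in the guest: mount the root file system by UUID
    # instead of relying on /dev/vda1 keeping its name
    UUID=3e6be9de-8139-4a18-9106-a43f08d82303  /  ext3  defaults,errors=remount-ro  0  1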
