Error inside an LXC container: bash: fork: retry: No child processes

LXC stands for Linux Containers and is an operating-system-level virtualization solution in which another isolated Linux instance is started within your actual running instance. The containerized Linux instance, however, uses the same resources and kernel as the host system. LXC competes directly with the older OpenVZ and Linux-VServer solutions but doesn’t require a patched or modified kernel.
I myself started to migrate all of my OpenVZ containers to LXC for several reasons: for example, LXC is already integrated into the kernel, and OpenVZ development is getting slower and slower (just to name a few). But LXC isn’t free from issues, of course. One of the most annoying ones is the bash: fork: retry: No child processes error. The following text shows you how to fix it.

The error in summary

First and foremost, this error is not a problem of LXC itself. SystemD is causing it, and for a good reason: SystemD wants to protect your system from one or multiple processes spawning an unlimited number of other processes. But if you use LXC, the LXC mother process has to start many other processes, such as the services running within the container. At some point SystemD steps in and blocks further processes from being started by your container. This most likely causes the LXC container process to crash and makes your whole container inaccessible.
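You can actually watch this limit being enforced, because SystemD tracks the task count and limit per unit. A quick check (the unit name lxc@mycontainer.service is only an example here; adjust it to however your containers are actually started):

# prints a "Tasks: <current> (limit: <max>)" line for the unit
root@system:~# systemctl status lxc@mycontainer.service | grep Tasks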

The Fix

The fix is rather easy and doesn’t even require a restart of your system or of your containers. As root, open the file /etc/systemd/system.conf and set (i.e. uncomment and adjust) the following value:

DefaultTasksMax=infinity

After you’ve done this, simply let SystemD reload its configuration:

root@system:~# systemctl daemon-reload
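To verify that the new default is active (on SystemD versions that know the DefaultTasksMax setting), you can ask the manager for its current value:

# should now print: DefaultTasksMax=infinity
root@system:~# systemctl show -p DefaultTasksMax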

As the word infinity already states, the maximum number of processes a mother process may have is now unlimited. While this can be an issue (e.g. a container that spawns a lot of processes due to an error), it’s the only reliable way to get rid of this message. You could also enter a number that is, say, ten times higher than before, but even then a container could reach a point where this isn’t enough anymore. However, once you’ve set the value as mentioned and reloaded SystemD, your containers should run as expected without the error.
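If you’d rather not raise the limit for the whole system, a per-unit override is a narrower alternative. A minimal sketch, assuming your containers are started through an lxc@.service template unit (the unit name and the drop-in file name are only examples; adjust them to your setup): create a drop-in file /etc/systemd/system/lxc@.service.d/tasksmax.conf with the content

[Service]
TasksMax=infinity

and reload SystemD afterwards with systemctl daemon-reload as above. This keeps the global DefaultTasksMax protection intact for everything else.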
Have fun with your containers 🙂

OpenVPN Error: Linux route add command failed

Everybody knows OpenVPN: a powerful and easy-to-configure VPN client which is available cross-platform for BSD, Linux, macOS and Windows.
A lot of my Linux boxes are OpenVPN clients, ranging from virtual machines to physical boxes. If I use my OpenVPN server as a default gateway, some machines have trouble creating the necessary route. In most cases the output looks something like this:

Sun Jun 19 14:03:20 2016 /bin/ip route add 1.2.3.4/32 via 0.0.0.0
RTNETLINK answers: No such device
Sun Jun 19 14:03:20 2016 ERROR: Linux route add command failed: external program exited with error status: 2

So OpenVPN tried to create a new route with the help of the ip command, which failed with exit code 2. But how do you fix this?

Add the route on your own

I’ve searched around the internet and nobody really had an answer to this. Well, the solution is rather simple: directly after the successful connection to your OpenVPN server, add the route yourself. The following example would do this for the error shown above:

sudo route add -host 1.2.3.4 dev enp4s0

As you can see, there is no gateway address used to reach the host; it’s simply the Ethernet device that is stated here. (enp4s0 is the name of the first wired Ethernet device under openSUSE when using NetworkManager; it was formerly known as eth0.)
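If you’re not sure what your outgoing device is called, listing the interfaces first helps. This is plain iproute2, nothing OpenVPN-specific:

# lists all network devices and their current state
ip link show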
This error also occurs if you want to use an OpenVZ container as an OpenVPN client. By default, the first virtual network device of an OpenVZ container is called venet0, so you would have to enter the following command to get this error fixed:

sudo route add -host 1.2.3.4 dev venet0

After you have added the host to your routing table with the correct outgoing network device, you’re ready to use the VPN as your default gateway.
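The same route can also be added with the newer ip tool instead of the legacy route command; an equivalent one-liner for the first example above:

sudo ip route add 1.2.3.4/32 dev enp4s0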

Permanent Fix

To be honest, so far I haven’t been able to find a permanent fix for this. So this also means that you have to redo the route add command every time you connect to your VPN.
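Until then, the manual step can at least be automated via OpenVPN’s up hook. This is just a convenience, not a real fix, and it assumes a client config at /etc/openvpn/client.conf while reusing the address and device from the example above; adjust all of these to your setup. First, two lines in the client config:

script-security 2
up /etc/openvpn/add-route.sh

And the script itself (/etc/openvpn/add-route.sh, made executable with chmod +x), which simply repeats the manual command:

#!/bin/sh
# re-add the route from the example above; adjust address and device to your setup
# "|| true" keeps OpenVPN from aborting if the route already exists
ip route add 1.2.3.4/32 dev enp4s0 || true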
If you know a permanent fix for this problem, just let me know in the comments below. Your help is appreciated 🙂

Convert IMG (raw) to QCOW2

Most of you will know the Kernel-based Virtual Machine (KVM). It’s already included in recent Linux kernels and gives you full virtualization under Linux, with the capability to run almost every x86 OS you want inside a virtual machine.
Some versions ago, if you created a new virtual machine in KVM, the virtual hard disk was a raw .img container. The newer container type is QCOW2, and one of its main features is that it enables the snapshot functionality of KVM.
So this means: if you have virtual machines with an IMG HDD attached, you will not be able to create snapshots of these virtual machines. Luckily the KVM developers provide tools which help you convert existing IMG HDDs to QCOW2 HDDs.

The convert process

First of all, this will take some time, depending of course on the size of the HDD. Also, you should shut down the virtual machine so that the conversion process has exclusive access to the HDD while converting. The following example would convert a .img HDD to a .qcow2 HDD:

qemu-img convert -f raw -O qcow2 /path/to/your/hdd/vm01.img /path/to/your/hdd/vm01.qcow2

To explain the command a little bit more:

  • qemu-img is the command which should be executed
  • convert tells qemu-img that we want to convert an existing HDD
  • the switch -f raw lets qemu-img know that the existing format of the HDD is RAW (in this case with the .img filename ending)
  • the -O qcow2 switch tells the qemu-img command that the destination HDD should be QCOW2
  • the first file is the existing raw HDD, the second one is the filename of the new QCOW2 HDD

So, let us say we want to convert a raw HDD which is located in /var/lib/libvirt/images (standard path for new KVM machines) to a QCOW2 HDD:

qemu-img convert -f raw -O qcow2 /var/lib/libvirt/images/machine01.img /var/lib/libvirt/images/machine01.qcow2

After you have done this, you just have to change the HDD path in your virtual machine definition from the raw .img to the new .qcow2 file. NOTE: The .img file is not deleted after a successful conversion. You have to do this on your own.
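If the machine is managed through libvirt, that path lives in the domain XML. A sketch for the machine01 example from above (open the definition with virsh edit machine01; the exact XML will look slightly different per VM):

<!-- was: <driver name='qemu' type='raw'/> pointing at machine01.img -->
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/machine01.qcow2'/>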
At the end, you should be able to create snapshots of your virtual machine. One of the best features of using virtual machines at all 😉
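To double-check the result and take a first snapshot, something like the following should do (machine01 and the snapshot name first-snapshot are just examples):

# confirms that the image format is now qcow2 and lists any internal snapshots
qemu-img info /var/lib/libvirt/images/machine01.qcow2

# creates an internal snapshot through libvirt
virsh snapshot-create-as machine01 first-snapshot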
