LXC container fails to create

Full error message from the creation page:

.. failed to start : lxc-start: confile.c: config_network_ipv4: 835 File exists - invalid ipv4 address: / lxc-start: parse.c: lxc_file_for_each_line: 57 Failed to parse config: lxc.network.ipv4 = / lxc-start: tools/lxc_start.c: main: 284 Failed to create lxc_container

And the network line in the generated config actually looks like: lxc.network.ipv4 = /

Which is of course wrong! Fixing it manually solves the issue and the container starts properly.
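For reference, "fixing it manually" just means giving that key a real value, e.g. (the address here is only an example):

lxc.network.ipv4 = 192.168.1.10/24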

The network was set to automatically assign an address on the creation page. This system runs other KVM guests and the network is fine; IP ranges are assigned.

Status: Active

Comments

Unfortunately this bug is still present; I reproduced it on a new CentOS 7 physical host I just set up :(

Any idea for a fix, please?

Submitted by Ilia on Tue, 08/25/2020 - 08:15

Assigned: Unassigned

Unfortunately this bug is still present; I reproduced it on a new CentOS 7 physical host I just set up :(

Do you have an IP range defined at LXC Host Systems/Edit LXC Host/IP address allocation ranges? I double-checked, and if those IPs are set, one of the allocated and free IPs is correctly passed to the config upon LXC container creation, at least in my case (Ubuntu 20.04 used as the host system). What if you manually define an IP address upon LXC system creation, on the Host and networking options tab, under the IP address for LXC container option?

Jamie, I ran a few tests and it also fails for me, with a slightly different error:

Starting up new LXC container ..
.. failed to start : lxc-start: debug-dedicated-centos7-lxc.cloudmin.local: confile.c: set_config_net: 281 lxc.net must not have a value lxc-start: debug-dedicated-centos7-lxc.cloudmin.local: parse.c: lxc_file_for_each_line_mmap: 120 Failed to parse config file "/var/lib/lxc/debug-dedicated-centos7-lxc.cloudmin.local/config" at line "lxc.network.type = veth" Failed to load config for debug-dedicated-centos7-lxc.cloudmin.local lxc-start: debug-dedicated-centos7-lxc.cloudmin.local: tools/lxc_start.c: main: 242 Failed to create lxc_container

If I run lxc-ls -l DEBUG -f, it returns:

lxc-ls: confile.c: set_config_net: 281 lxc.net must not have a value
lxc-ls: parse.c: lxc_file_for_each_line_mmap: 120 Failed to parse config file "/var/lib/lxc/debug-dedicated-centos7-lxc.cloudmin.local/config" at line "lxc.network.type = veth"
Failed to load config for debug-dedicated-centos7-lxc.cloudmin.local

.. it's worth mentioning that this exact output is appended to the .../console file, making it look broken, like this:

~# cat /var/lib/lxc/debug-dedicated-centos7-lxc.cloudmin.local/console
lxc-start: debug-dedicated-centos7-lxc.cloudmin.local: confile.c: set_config_net: 281 lxc.net must not have a value
lxc-start: debug-dedicated-centos7-lxc.cloudmin.local: parse.c: lxc_file_for_each_line_mmap: 120 Failed to parse config file "/var/lib/lxc/debug-dedicated-centos7-lxc.cloudmin.local/config" at line "lxc.network.type = veth"
Failed to load config for debug-dedicated-centos7-lxc.cloudmin.local
lxc-start: debug-dedicated-centos7-lxc.cloudmin.local: tools/lxc_start.c: main: 242 Failed to create lxc_container

Submitted by Ilia on Tue, 08/25/2020 - 08:43

Oh, it seems the console file is used precisely to store this kind of output..

Anyway, I think the source of the bug is a change in LXC: as of LXC 3.0, the legacy config keys are no longer supported (they were removed).

Looking at the default config at /etc/lxc/default.conf shows:

~# cat /etc/lxc/default.conf
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx

.. while we are still using the older lxc.network.* directives.
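For reference, the legacy-to-new key mapping is roughly as follows (worth double-checking against the LXC 3.0 release notes):

lxc.network.type  ->  lxc.net.0.type
lxc.network.link  ->  lxc.net.0.link
lxc.network.ipv4  ->  lxc.net.0.ipv4.address
lxc.utsname       ->  lxc.uts.name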

If I manually fix the container's config at /var/lib/lxc/debug-dedicated-centos7-lxc.cloudmin.local/config, changing it from:

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.name = eth0
lxc.network.ipv4 = 192.168.2.75/24
lxc.network.mtu = 1500
lxc.network.hwaddr = 4e:a9:5e:0F:8C:80
lxc.network.veth.pair = veth1
lxc.net.0.link = lxcbr0
lxc.net.0.hwaddr = 00:16:3e:cc:18:12
lxc.net.0.flags = up
lxc.utsname = debug-dedicated-centos7-lxc.cloudmin.local
#lxc.include = /usr/share/lxc/config/nesting.conf
lxc.net.0.type = veth
lxc.rootfs.path = dir:/var/lib/lxc/debug-dedicated-centos7-lxc.cloudmin.local/root

.. to:

lxc.net.0.type = veth
lxc.net.0.flags = up
lxc.net.0.link = br0
lxc.net.0.name = eth0
lxc.net.0.ipv4.address = 192.168.2.75/24
lxc.net.0.mtu = 1500
lxc.net.0.hwaddr = 4e:a9:5e:0F:8C:80
lxc.net.0.veth.pair = veth1
lxc.net.0.link = lxcbr0
lxc.net.0.hwaddr = 00:16:3e:cc:18:12
lxc.net.0.flags = up
#lxc.utsname = debug-dedicated-centos7-lxc.cloudmin.local
#lxc.include = /usr/share/lxc/config/nesting.conf
lxc.net.0.type = veth
lxc.rootfs.path = dir:/var/lib/lxc/debug-dedicated-centos7-lxc.cloudmin.local/root

.. then I can start it from Cloudmin with no problem, and the command lxc-ls -l DEBUG -f returns no error and prints the expected output:

NAME                                       STATE   AUTOSTART GROUPS IPV4         IPV6 UNPRIVILEGED 
debug-dedicated-centos7-lxc.cloudmin.local RUNNING 0         -      192.168.2.75 -    false

I assume the fix could be straightforward, unless older LXC versions still need to be supported?

@lorelin You can simply use the lxc-update-config -c mylxcconfig command to update an old config to the new format.
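For example, for a container created by Cloudmin it would be something like this (the container name is just an illustration; I keep a copy of the file first, since the tool rewrites it in place):

cp /var/lib/lxc/mycontainer/config /var/lib/lxc/mycontainer/config.bak
lxc-update-config -c /var/lib/lxc/mycontainer/config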

Lovely, Ilia, thanks a lot! Will the latest Cloudmin/Virtualmin version be able to handle the new LXC 3.0 config directives?

For now, if I read you correctly, the path to a functional LXC container on an LXC 3.0 host is to run a normal guest deployment and then run lxc-update-config -c mylxcconfig, is that right?

Submitted by Ilia on Thu, 08/27/2020 - 11:06

is that right?

Yes, it will update the old-style LXC configs generated by the current version of Cloudmin to something LXC 3.0 can deal with.

OK, now we have a workaround for LXC 3.0, which is great, but my initial issue is still here: on LXC 2 hosts, the network configuration is broken and ends up with a / instead of any IP address, which breaks LXC.

Starting up new LXC container ..
.. failed to start : lxc-start: confile.c: config_network_ipv4: 838 File exists - invalid ipv4 address: / lxc-start: parse.c: lxc_file_for_each_line: 57 Failed to parse config: lxc.network.ipv4 = / lxc-start: tools/lxc_start.c: main: 284 Failed to create lxc_container 
And yes, I triple-checked: I have a network range assigned to each host on which I tried this.

Another issue is that it keeps using br0 as the LXC bridge, whatever setting I use as the default or at LXC creation time, even if br0 does not exist on the host!

This is quite nasty, since most Cloudmin-supported host OSes run LXC 2 :(

Manually fixing the network config allows the container to run, but that is not the point of using an automated provisioning tool ;)
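For reference, the manual fix on an LXC 2 host amounts to filling in the legacy keys with real values, e.g. (the bridge name and address here are just examples):

lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br1
lxc.network.ipv4 = 192.168.1.10/24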
Any fix, please?
Submitted by Ilia on Tue, 09/01/2020 - 07:14

Which distro does it happen on?

My latest try was on CentOS 7 with the LXC 2 upgrade, but it does the same on Devuan ASCII (Debian 9).

Submitted by Ilia on Tue, 09/01/2020 - 09:07

I will deploy more physical installs for Cloudmin testing. It will take some time, though.

Hi there, just asking for a quick update on this issue. I still get the bug with the latest Cloudmin and a manually assigned address: ".. failed to start : lxc-start: confile.c: config_network_ipv4: 835 File exists - invalid ipv4 address: / lxc-start: parse.c: lxc_file_for_each_line: 57 Failed to parse config: lxc.network.ipv4 = / lxc-start: tools/lxc_start.c: main: 284 Failed to create lxc_container"

and the config file in the LXC directory contains the wrong line lxc.network.ipv4 = / again. The bridge is wrong too; it always uses the first interface instead of the default or assigned bridge.

When you created this container, did you enter an IP address manually, or have Cloudmin allocate it?

Hi Jamie, as I stated in my comment, I manually assigned an address in the creation form. The bug occurs with both automatic and manual IP assignment. It seems like a parsing issue.

Thanks for your time

Submitted by Ilia on Sat, 11/14/2020 - 12:30

@lorelin I have a question for you: why use LXC and not KVM? What is the benefit from your point of view? Would you be ready to switch to KVM in the future?

Hi @Ilia,

Though this thread is a bit older, I found this issue relevant when trying out Cloudmin Pro and googling for the same "lxc.net must not have a value ..." error.

So maybe I can answer your question about why LXC and not KVM: KVM is simply not available at my provider, or only at extra cost. So I'm evaluating Proxmox with LXC (works flawlessly, but does not integrate with Virtualmin) and Cloudmin Pro (since the GPL version does not provide LXC).

Secondly, LXC should be a bit faster than KVM, i.e. it has less overhead since it shares the host's kernel.

Another point, maybe an advantage or a disadvantage: I have direct disk access to the LXC rootfs from the host system.

My idea is now this workaround: in "pre- and post-modification scripts", set "command to run after creating a system" to something like "lxc-update-config -c /var/lib/lxc/$SYSTEM-NAME-VARIABLE-HERE/config".

As this link https://www.cloudmin.com/documentation/developer/prepost seems to document only Virtualmin variables ... is there a matching variable, i.e. one that points to the LXC container config file?
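One way to find out, since I don't know which variables Cloudmin actually exports, would be a post-creation script that just dumps its environment:

#!/bin/bash
# Probe: dump every environment variable the post-creation script receives,
# to see whether Cloudmin exports something like a config-file path.
env | sort > /tmp/cloudmin-post-create-env.txt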

Best regards, Falko

OK, something like this as a post-creation script may fix the config:

#!/bin/bash

# Convert the legacy lxc.network.* keys to the LXC 3.0 format
/usr/bin/lxc-update-config -c "$SERVERMANAGER_LXC_CONFIG"

# Replace the hard-coded br0 bridge with the bridge present on this host
sed -i 's@br0@br1@' "$SERVERMANAGER_LXC_CONFIG"

echo "please try to start the container $SERVERMANAGER_LXC again"

Since this runs after the container has been started, the first start fails, but the next start succeeds.

But: the network inside the container does not work, since the created container uses neither the correct bridge interface nor the setting for LVM creation, and I'd have to fix this, too.
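A more surgical variant of the sed line, assuming the config already uses the new lxc.net.0.* keys after lxc-update-config (and still relying on the $SERVERMANAGER_LXC_CONFIG variable guessed above), would rewrite only the bridge key:

# Rewrite only the bridge line instead of replacing every occurrence of br0
sed -i 's@^lxc\.net\.0\.link = .*@lxc.net.0.link = br1@' "$SERVERMANAGER_LXC_CONFIG"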

So, for now LXC in Cloudmin seems not usable for me - at least not if I have to fix this myself after paying for a license - and some other things don't work either.

Submitted by Ilia on Fri, 01/22/2021 - 08:10

Jamie, as the original author, perhaps you know the source of the issue and how it could be fixed in a timely manner?

@lulatsch66 I unfortunately don't know the answer off-hand and would have to take a much closer look to figure things out!

The problem is that the project's scale is very large: I personally need to take care of Authentic Theme bug fixes and improvements, Webmin and Virtualmin install-script testing and patches, provide support to users on the issue tracker and the forum, and now even create video tutorials.

I physically cannot spare time for Cloudmin right now, considering its market share. I don't dismiss your need for proper LXC support, and I highly appreciate that you are using Cloudmin, but until multiple other issues are sorted out, I personally will not look into Cloudmin issues. Sorry.

Thanks @Ilia, I appreciate your answer. No worries, I just tried it to get a nicer solution for LXC. For me it is no problem for now, since I am using Proxmox anyway and will do so here, too.

I have already created an issue for a refund of the Cloudmin license.

Submitted by Ilia on Fri, 01/22/2021 - 14:40

Thanks @Ilia, I appreciate your answer. No worries, I just tried it to get a nicer solution for LXC. For me it is no problem for now, since I am using Proxmox anyway and will do so here, too.

Sorry about that! We will most likely fix this in the foreseeable future.

Hi @Ilia, I was quite busy and had no time to check back on this, but it's still a big thorn in my side: I need to do a bunch of manual fixes after each LXC creation :(

And about why LXC? Well, because it's damn light, blindingly fast, with native performance, easy to handle from the command line, and portable. I can design a lightweight LXC container on my in-house machine using whatever distro, tarball it, push it to the datacenter, import it into Cloudmin or unpack it into a directory, and it's ready to run! I can also resize the RAM and CPU allocation on the fly, with no need to stop and start the system. No downtime, no kernel module needed (this can be an issue), no I/O emulation or CPU overhead; simple, light and fast. My template Alpine LXC container uses 10 MB of disk space and starts and stops instantly on any hardware. All in all it's a no-brainer for me: I use LXC unless some specific system or isolation issue requires KVM.
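For example, on a cgroup v1 host (like CentOS 7) the memory limit can be changed while the container keeps running; the container name here is just an example:

lxc-cgroup -n mycontainer memory.limit_in_bytes 256M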

Did you get my point? Could you consider giving some love to LXC, please? I would love to see this issue, and others, fixed. I can help if need be.

Submitted by Ilia on Mon, 03/22/2021 - 14:19

Did you get my point? Could you consider giving some love to LXC, please? I would love to see this issue, and others, fixed. I can help if need be.

Thanks, I will consider it, although I cannot promise that it will be done within days or in the upcoming weeks, as there is currently a lot for me to do.

I will not archive this ticket on my side; I will keep it in my stack and update it with progress.