KVM ping fails on OVH server

Hello

I am trying to configure Cloudmin on OVH servers but have not been able to successfully create a KVM instance. The KVM instance is able to start, but fails to answer ping requests. From what I've seen in past forum topics, OVH's network configuration is rather different.

Is there a solution for it yet?

Thank you.

Status: Active

Comments

Hi

Sorry, I'm not able to access the discussion, thanks

Ok, it looks like it is private to the original poster because the bug report contains passwords.

The important part is that the VM needed to have a MAC address that was supplied by or registered with OVH. Do they still have that restriction?

No, there's no such restriction; MAC addresses are freely available from the control panel. I've added the MAC addresses too, but it didn't work.

Perhaps you could post an extract from the original discussion? Sensitive information can be taken out. This is OVH's document on network bridging: http://docs.ovh.ca/en/guides-network-bridging.html. It differs slightly from your documentation.

OpenVZ containers have the same issue too, even without bridging!

Thanks

Another thing that I've seen is that OVH requires VMs to route their traffic via the host system, rather than being directly bridged to the internet. To set this up, you need to do the following (see the sketch after these steps):

  1. Create a br0:1 virtual interface on the host with an IP in the range assigned by OVH for the VMs, and with the correct netmask.

  2. On the VM, set the default gateway to the IP from step 1, not the gateway IP used on the host.

  3. Make sure routing is enabled on the host.
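
To make those steps concrete, here is a minimal sketch using placeholder addresses (203.0.113.24/29 standing in for the block OVH assigned to the VMs; substitute your real IPs and netmask):

    # Step 1, on the host: alias br0 with the first usable IP of the VM block
    ifconfig br0:1 203.0.113.25 netmask 255.255.255.248 up

    # Step 3, on the host: enable IPv4 forwarding so the host routes for the VMs
    sysctl -w net.ipv4.ip_forward=1

    # Step 2, inside the VM: the default gateway is the br0:1 address above,
    # not the host's own upstream gateway
    route add default gw 203.0.113.25 eth0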

Thanks for the guide.

So that I don't mess up the server, should it be done like this? First, configure the KVM host according to your documentation (https://www.virtualmin.com/documentation/cloudmin/virtualization/kvm), which will set up br0. Then:

  1. Create a br0:1 virtual interface on the host with an IP in the range assigned by OVH for the VMs, and with the correct netmask.

  2. On the VM, set the default gateway to the IP from step 1, not the gateway IP used on the host.

  3. Make sure routing is enabled on the host. Is that just on the host node, or on the VM as well (see the check sketched after this list)? According to the OVH documentation, routes must be added on the VM too:

     route add GATEWAY_VM dev eth0
     route add default gw GATEWAY_VM
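
A quick way to answer the host-side half of that question is to check, and if needed persist, IPv4 forwarding on the host (a generic sketch for a sysctl-based Linux host):

    # prints 1 if routing/forwarding is already enabled
    sysctl net.ipv4.ip_forward

    # enable it now, and persist it across reboots
    sysctl -w net.ipv4.ip_forward=1
    echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf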

And what about this document: https://www.virtualmin.com/documentation/cloudmin/bridge? Do I still need to set up br1?

Oh, before I forget: I've used OVH with HyperVM, and OpenVZ works there without bridging, via venet0. Is it the same for Cloudmin? Mine, set up the same way, didn't work.

Thanks

Hi

I now have OpenVZ running. It was just a glitch; I hadn't configured venet0 :)

Thanks

Oh, and I'd be more than happy to let you have a peek at the OVH server configuration if you need to :)

Hello

Added br0:1 as above with a /29 IP block: xxx.xxx.xxx.24 (the first IP) as the network address, xxx.xxx.xxx.30 as the gateway, and xxx.xxx.xxx.31 as the broadcast.

KVM creation resulted in a network crash that needed a hard reboot, and pinging the VM failed. How do I add routing to the VM if it's not even reachable? Routing is supposed to be added there. I can't even access the file manager, although Webmin is okay even while ping fails.
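
For reference, the /29 arithmetic behind those addresses (netmask 255.255.255.248, 8 addresses per block):

    network:   xxx.xxx.xxx.24
    usable:    xxx.xxx.xxx.25 - xxx.xxx.xxx.30  (.30 used as the gateway here)
    broadcast: xxx.xxx.xxx.31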

What do you mean by a network crash here - was the host system completely down or inaccessible?

The host was completely unable to access the network until a hard reboot.

I also noticed that tap0, tap1, etc. do not get IPv4 addresses, only IPv6. Is that normal?

ifconfig -a

br0       Link encap:Ethernet  HWaddr 60:A4:4C:40:CD:E5
          inet addr:xxx.99.6.48  Bcast:xxx.99.6.255  Mask:255.255.255.0
          inet6 addr: fe80::62a4:4cff:fe40:cde5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3700 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3142 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:678363 (662.4 KiB)  TX bytes:431950 (421.8 KiB)

br1       Link encap:Ethernet  HWaddr FA:4E:DF:B1:3C:F1
          inet addr:xxx.4.194.24  Bcast:xxx.4.194.24  Mask:255.255.255.255
          inet6 addr: fe80::f84e:dfff:feb1:3cf1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:1076 (1.0 KiB)

eth0      Link encap:Ethernet  HWaddr 60:A4:4C:40:CD:E5
          inet6 addr: fe80::62a4:4cff:fe40:cde5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3833 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3147 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:746503 (729.0 KiB)  TX bytes:432316 (422.1 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:940 errors:0 dropped:0 overruns:0 frame:0
          TX packets:940 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1590583 (1.5 MiB)  TX bytes:1590583 (1.5 MiB)

tap0      Link encap:Ethernet  HWaddr CE:7C:45:AD:EA:82
          inet6 addr: fe80::cc7c:45ff:fead:ea82/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:938 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:720 (720.0 b)  TX bytes:76650 (74.8 KiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:9 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:348 (348.0 b)  TX bytes:372 (372.0 b)

brctl show

bridge name     bridge id               STP enabled     interfaces
br0             8000.60a44c40cde5       no              eth0
                                                        tap0
br1             8000.000000000000       no

Yes, the tapN devices are not expected to have an IP - they are just used to bridge a VM to the real network interface.

One thing you might want to check with OVH is whether they cut off your network access due to use of an incorrect MAC address - they have some pretty strict rules about which MACs can be used by each system, which causes problems when a VM with a MAC different from the host's is started up, unless its connection is routed through the host (not just bridged).

There seems to be an issue with ip route too; the default route should be via eth0, right?

ip route show

167.114.9.104 dev venet0  scope link
192.99.6.0/24 dev br0  proto kernel  scope link  src 192.99.6.48
default via 192.99.6.254 dev br0

In the KVM instance's network interfaces there is no IPv4 address, and no IP route at all.

Anyway, I will also be checking with OVH per your comment.

Is this normal? It appears in the boot messages:

    QEMU waiting for connection on: tcp:127.0.0.1:40000,server

That's normal. Can you access the "Graphical Console" page for the VM? If so, that means it has started up successfully.

The Graphical Console page can be accessed, but the connection fails; file upload and running commands are not possible.

Ok, so the VM is at least running. Did OVH give you any information on how they expect interfaces and routes to be set up for VMs?

I'm working with OVH on that now. Can you recommend any software to edit or add files in KVM OS images? I will probably need to add a route file for /network-scripts/, since eth0 routing must be enabled in the VM.
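
For what it's worth, a sketch of what such a route file might contain on a RHEL/CentOS-style guest, assuming the hypothetical filename route-eth0 and OVH's GATEWAY_VM placeholder (check OVH's guide for the exact contents):

    # /etc/sysconfig/network-scripts/route-eth0  (hypothetical example)
    GATEWAY_VM dev eth0
    default via GATEWAY_VM dev eth0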

You can edit files on your VM by logging in using the Graphical Console page.

Alternately, on the master system you can run cloudmin mount-system --host your.vm.name, and then edit files on the VM filesystem directly.
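
For example, a minimal sketch assuming a hypothetical VM named vm1.example.com:

    cloudmin mount-system --host vm1.example.com
    # edit the guest's files under the mount point, then unmount the
    # filesystem (see the Cloudmin CLI docs for the matching unmount
    # command) before starting the VM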

I'm not able to log in via the Graphical Console; the connection fails.

I will check out the command and report back.

The KVM instance is running now; I just needed to remove the virtual MAC for the 1st IP of the block. Your recommended network configuration works for OVH :)

I'll go through the other stuff and report back if help is needed, thanks.

Well, the celebration was short-lived: the 2nd KVM instance gets a ping failure, while the 1st is okay. Any idea why?

Are they set up in the exact same way, such as having the same network bridge, an IP in the same range, and a MAC address that is on OVH's allowed list?

Yup, all done as with the 1st instance, with the virtual MAC set according to the control panel. There were also failed filesystem mounting errors during creation, supposedly due to LVM or RAID, resolved by rebooting the node.

Edit: from what I see, it's the virtualization system that's unable to assign an IPv4 address to KVM instances, hence the ping-fail situation.

Ok, if the VM disk could not be mounted at creation time then Cloudmin would not have been able to set the IP address or hostname correctly. Perhaps you should delete the 2nd VM and re-create it?

I'm at wits' end now. After fiddling a bit, it started working, so I decided to go for another instance and ran into the same issue again: ping failed (3rd KVM).
However, there are no issues with OpenVZ; why is that?

Hard to say - what did you do to get the 2nd instance working?

I just fiddled with the virtual MAC and bridge, and the VM got SSH status all of a sudden. I tried but couldn't replicate the scenario.
One crucial note: a soft reboot is no longer possible; the VM doesn't start up at all.
By the way, is any configuration needed to run a DHCP server? DHCPd fails to start.

You should check the log file /var/log/messages to see what gets logged when you try to start DHCPd - that should explain why it isn't starting up.
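
For example, a generic way to capture what DHCPd logs while retrying the start (assuming a syslog-based host with an init service named dhcpd):

    tail -f /var/log/messages &
    service dhcpd start

    # or just review the recent dhcpd lines after the failed start:
    grep dhcpd /var/log/messages | tail -20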