Cloudmin operations and status checks should be done using Cloudmin's own connection

I added a second KVM server to one Cloudmin Pro instance using an SSH connection. The second KVM server is on a different network and accessible via an SSH port forward.
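For reference, the forward is along these lines (the addresses and ports here are illustrative, not my real ones):

    # on the Cloudmin master: forward local port 2222 to the remote
    # KVM host's SSH port, via a gateway that both can reach
    ssh -f -N -L 2222:kvmhost.internal:22 user@gateway.example.com

Cloudmin is then pointed at localhost port 2222 to manage the second KVM server.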

I am able to create VMs on the second server, but their status is always "Ping failed" even though they are working fine. In the "change password" module I chose "Do not attempt SSH login" and "Do not attempt to ping either". After this change the status of the machines becomes "Alive".

This means that clean reboots via Cloudmin (and perhaps other operations) are not possible if machines on a remote KVM server are not reachable directly via IP.

I think Cloudmin should NOT expect a direct IP connection plus an open firewall between the main host and remote guest machines. This seems to be a LAN-centric design.

Since Cloudmin already has a connection (either SSH or Webmin) to the remote server, I think all these operations should be done through that connection. From inside the remote server it is reasonable to expect an IP connection to be possible.
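What I mean is something like this (names and addresses are illustrative): instead of checking the guest directly from the master, run the checks on the KVM host over the already-established SSH connection:

    # the KVM host can reach the guest even when the master cannot
    ssh root@kvmhost.example.com 'ping -c 1 192.168.1.8'
    # an SSH-level check works the same way, provided the host holds
    # a key (or password) for the guest
    ssh root@kvmhost.example.com "ssh root@192.168.1.8 uptime"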

Status: 
Closed (fixed)

Comments

Cloudmin won't work too well in this situation, as it expects to be able to connect directly to VMs - there is no support for tunneling the SSH connection via the host. However, you should still be able to reboot, stop and start VMs - under KVM, a reboot is done by sending a virtual ctrl-alt-delete signal to the OS, which should trigger a clean restart.
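(For anyone who wants to try this by hand: with a libvirt-managed guest, which is not necessarily how Cloudmin drives KVM, the equivalent would be something like

    # send a virtual ctrl-alt-delete to the guest's console
    virsh send-key guestname KEY_LEFTCTRL KEY_LEFTALT KEY_DELETE

where guestname is the libvirt domain name.)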

OK, a virtual ctrl-alt-del is better than an unclean reboot.

However, just as Cloudmin does KVM operations through the existing connection (either SSH or Webmin), it could perfectly well do the ping and SSH checks the same way. I'm suggesting this as an improvement.

In theory, yes ... in practice this is a little more complex, as Cloudmin would need to authenticate to both the host and the VM. This means either supplying multiple passwords (to SSH, and then the SSH-within-SSH), or passing the SSH key for the VM on to the host.
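Concretely, the SSH-within-SSH form would look roughly like this (names illustrative; each hop needs its own credentials):

    # hop 1 authenticates to the host; hop 2, run from the host,
    # authenticates to the VM, so either the VM's password is typed
    # interactively (hence -t) or the host holds its own key for the VM
    ssh -t root@kvmhost.example.com ssh root@192.168.1.8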

Either SSH within SSH, or have the main SSH connection forward a port on the main host for each guest on the remote host (ssh -L 10221:192.168.1.8:22 ...).
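Spelled out (ports and addresses are just examples):

    # on the Cloudmin master: forward local port 10221 to the SSH port
    # of guest 192.168.1.8, via the remote KVM host
    ssh -f -N -L 10221:192.168.1.8:22 root@kvmhost.example.com
    # the guest is now reachable as localhost on port 10221
    ssh -p 10221 root@localhost uptime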

Whatever the technical solution, I'd say this is a logical need for a product that aggregates KVM servers. We want guests on remote KVM servers to be treated like local ones, regardless of external IP routing / firewalling.

I'm thinking that guests on remote KVM servers also need to be managed from Cloudmin; the execution of commands on multiple hosts, for example.

Another issue that could make this difficult is how to tunnel a file copy (with scp) through a host system. I suppose any files Cloudmin uploads to or downloads from a VM could be sent to the host first, but that could be problematic for large files.

If the SSH connection to the remote host forwards a different local port to port 22 of each guest machine, then scp is transparent, as is ssh to any guest. You'd ssh and scp to localhost at the respective ports. This is what I suggested above.
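For example, with the tunnel from my earlier comment in place:

    # copy a file to the guest through the forwarded port
    # (note that scp takes the port as -P, capitalized)
    scp -P 10221 /path/to/file root@localhost:/tmp/

No staging of the file on the host is needed; the host just relays the stream.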

The key idea is in the -L option of the ssh client.

Oh, I see now .. so you have permanent SSH tunnels set up on different ports? I was thinking more of a setup in which Cloudmin SSHs to the host, and from there to each VM.

Either permanent SSH tunnels, or SSH tunnels created on demand each time a status check, file transfer or command execution is needed. If you go for permanent tunnels they must be monitored. Perhaps creating them on demand is easier to manage, even if it imposes a bit more latency.
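An on-demand tunnel could be as simple as this sketch (names illustrative; the control-socket approach gives a clean teardown):

    # open the tunnel in the background, with a control socket
    ssh -f -N -M -S /tmp/guest1.sock -L 10221:192.168.1.8:22 root@kvmhost.example.com
    # ... perform the status check / file transfer / command ...
    ssh -p 10221 root@localhost uptime
    # tear the tunnel down through the control socket
    ssh -S /tmp/guest1.sock -O exit root@kvmhost.example.com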

Status: Active » Fixed (pending)

I understand the system is focused on clusters of machines hosted "side by side" rather than systems in different locations (it is a pity, but I'm sure there are valid reasons related to demand).

Status: Fixed » Closed (fixed)

Automatically closed - issue fixed for 2 weeks with no activity.