Cloning system errors due to insufficient storage... I have no idea where it is getting its numbers from

Hey Jamie,

Using Cloudmin on srv1, I've tried to clone ns2 on srv2 to either srv2 or srv3, and both attempts fail with insufficient storage errors:

"Creating clone of KVM system ns2.cloud.virtualmin.com named ns4 ..

.. failed : The host system srv2.cloud.virtualmin.com is not suitable for this clone : Only 8.95 GB disk space would be free after creating this system, but the minimum is 10 GB"

In this case, I found that having a directory filled in on the Host Systems page for srv2 causes that check to test against the filled-in directory, even though the logical volume option is selected and the volume group has 800+ GB free. So I think this is just a bug in how that form is handled; it shouldn't be necessary to empty the directory field when the directory option isn't selected.
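
For what it's worth, the two numbers are easy to compare from a shell on srv2 (assuming the filled-in directory is /kvm; the volume group name will vary):

    # Free space on the filesystem holding the directory-backed storage
    df -h /kvm

    # Free space in the LVM volume group used for VM disks
    vgs -o vg_name,vg_size,vg_free

The ~9 GB in the error matches the first number rather than the second, which is what makes me think the check is reading the wrong one.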

And for srv3:

Creating clone of KVM system ns2.cloud.virtualmin.com named ns4 ..

.. failed : The host system srv3.cloud.virtualmin.com is not suitable for this clone : Only 38.55 GB disk space would be free after creating this system, but the minimum is 100 GB

In the case of creating the clone on srv3, I have no idea why it failed. That system has tons of space on its LV that's assigned for storage, and the Host Systems page reports that correctly.

Status: Active

Comments

There is something odd going on here ... digging into it now.

So, there are a few issues here:

  1. Cloning in Cloudmin currently only copies disks to the same type of storage location as the original. So because ns2 has its disks in /kvm, it wants to put the new disks in /kvm on srv3 as well.
  2. However, this fails because at Host Systems -> KVM Host Systems -> srv3, the "Disk space to leave free" field is set to 100 GB, which is bigger than the size of /
  3. Also, for some reason ns2 has a 16 GB virtual disk attached that is stored in the LV ns2_cloud_virtualmin_com_newroot_img but doesn't appear to be used. The cloning calculation takes the size of this disk into account as well (one way to verify it really is unused is sketched just below this list).
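
To double-check the third point from a shell (device names below are examples and may differ on your systems):

    # On srv2: confirm the extra LV exists and see its size
    lvs -o lv_name,vg_name,lv_size | grep newroot

    # Inside the ns2 guest: the extra disk shows up as an additional
    # block device; if it isn't partitioned or mounted, it isn't in use
    lsblk
    mount | grep -e vdb -e sdb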

What I'd recommend is removing the disk in ns2_cloud_virtualmin_com_newroot_img if you aren't using it for anything, adjusting the "Disk space to leave free" field to something like 10 GB, and then performing the clone to srv3. Once this is done, you can move the disks from /kvm to LVM on the clone at Resources -> Manage Disks -> some-disk.
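
If you'd rather remove the unused LV from the shell instead of from Cloudmin's disk page, it would be roughly this (the volume group name is an example, so check the lvs output first), though doing it from the UI is safer since it keeps Cloudmin's record of the system's disks in sync:

    # Find the VG the LV lives in, then remove it with ns2 shut down
    lvs | grep newroot
    lvremove /dev/VolGroup00/ns2_cloud_virtualmin_com_newroot_img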

I need to work on supporting cross-device cloning though.

Submitted by Joe on Sun, 03/15/2015 - 15:06 Pro Licensee

This all started because I want to move the VMs on srv2 into the new disk LV.

Since that isn't possible, I was going to create a new root partition and copy everything over manually... which is where the ns2_cloud_virtualmin_com_newroot_img came from. But that also doesn't seem to be feasible.
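
For the record, the manual copy I was planning was roughly this (source path and volume group name are from memory, so treat them as examples):

    # With ns2 shut down, copy the file-backed root image into the new LV
    dd if=/kvm/ns2_cloud_virtualmin_com_img of=/dev/VolGroup00/ns2_cloud_virtualmin_com_newroot_img bs=4M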

It seems to be a "you can't get there from here" kind of situation, where it's not really possible to move VMs from one storage location to another.

Is the only way to do this to make an image from this VM and then create a new VM installed from that image?

The best way to do this is just to move the disks to LVM storage - it should be possible to do this at Resources -> Manage Disks, by clicking on a disk and then using the move form at the bottom of the page.
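
Once the move finishes, you can confirm from the shell that the disk really landed on LVM:

    # The new LV should show up with the expected size
    lvs -o lv_name,lv_size

    # And the old file-backed image should no longer be under /kvm
    ls -lh /kvm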

Submitted by Joe on Sun, 03/15/2015 - 23:04 Pro Licensee

I want to believe. But, there is no such form at the bottom of the page. Nothing about moving at all.

Oh, wait. There is if I shut it down first. That's pretty confusing... other options on the same page tell you the VM is going to be shut down to make the change, but don't hide the form. I've been trying to figure that one out for ages. So, I feel kinda dumb, and kinda annoyed. That's bad. We should not hide that form without telling users there are additional options for shut-down machines. ;-)

Submitted by Joe on Sun, 03/15/2015 - 23:22 Pro Licensee

That action doesn't clean up after itself on failure. Making the move seems to have restarted the network or firewall on srv2 for some reason, which caused srv1 to lose its connection to srv2 for a few seconds and put it into the "Parent is down" state.

Restarting the network or firewall when moving a disk seems unnecessary, maybe?

Also, that system-down detection is pretty fragile. I find srv2.cloud.virtualmin.com in that state pretty much every time I work with it, as it has a very slow-to-start bridge and firewall (I'm not sure how all of the pieces fit together, but the firewall does take a long time to start due to having all of the CentOS mirrors listed... we probably need a better way to handle updates for the demo server).

Anyway, if the move fails, you can't try again without manually removing the volume. It seems to have left the old volume around, so I'm gonna remove the volume it created and try again with a simplified iptables script in place, hoping it didn't delete the old one before enabling the new one. ;-)
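
The simplified script is just something minimal like this instead of one rule per mirror (the ports are examples for this box):

    # Keep only the basics while testing, instead of per-mirror rules
    iptables -F
    iptables -P INPUT DROP
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT     # SSH
    iptables -A INPUT -p tcp --dport 10000 -j ACCEPT  # Webmin/Cloudmin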

That's odd, because the disk move doesn't touch the firewall at all.

Regarding the disk move, it sounds like I should make the move form always appear, but have it shut down the VM if it is running.