Error message when resizing LVM image

Here's the problem:

I built a KVM virtual machine on a KVM host:
- First, "Create empty system" (with a CentOS boot ISO).
- The filesystem is on an image file.
- The filesystem is ext2, with ONLY ONE PARTITION (no /boot, no swap, etc.).
- Install the system, yum update, etc.
- Shut down the VM.

From that VM, I create a System Image: System operations / Create Image. The image is uncompressed (4 GB).

From that system image, I plan to create several VMs, this time on a LVM Volume Group.

So: Create KVM Instance, then I fill in all the fields and specify the system image built earlier as the source.

Then I want to resize the VM from 4 GB to 8 GB: I shut down the VM, go to "Manage disks", change the size to "8 GB", and click "Save".

Here's the message I get:

update failed : losetup -o 16384 /dev/loop9 \/dev\/VolGroup00\/test_copytrans_net_img failed : /dev/loop9: No such file or directory
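For context, that losetup call is Cloudmin mapping the partition inside the logical volume onto a loop device so it can work on the filesystem directly. Roughly, the full sequence would look like the sketch below (only the losetup call appears in the error; the fsck and resize2fs steps are my assumption about what follows):

losetup -o 16384 /dev/loop9 /dev/VolGroup00/test_copytrans_net_img  # map the partition at the given byte offset
e2fsck -f /dev/loop9   # check the ext2 filesystem before resizing
resize2fs /dev/loop9   # grow it to fill the enlarged partition
losetup -d /dev/loop9  # detach the loop device

Here it fails at the very first step, because /dev/loop9 itself is missing.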

BUT:

When I boot the VM, the filesystem IS resized and I get no error message... Strange...

Do I need to worry?

Here is some output:

fdisk -l

Disk /dev/hda: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        1045     8388592   83  Linux

mount

/dev/hda1 on / type ext2 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

cat /etc/fstab

/dev/hda1 / ext2 defaults 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0

Status: 
Closed (fixed)

Comments

Do both the files /dev/loop9 and /dev/VolGroup00/test_copytrans_net_img exist on your host system?

If /dev/loop9 is missing (which is surprising, as most systems have it by default), you could create it with:

mknod /dev/loop9 b 7 9
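(In that mknod call, b creates a block device node, 7 is the major number of the Linux loop driver, and 9 is the minor number matching loop9.)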

ls /dev/loop*

/dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3 /dev/loop4 /dev/loop5 /dev/loop6 /dev/loop7

Yep, that's true. /dev/loop9 isn't present on the host, nor on the guest.

Host and guests are running CentOS 5.5, minimal install plus the necessary packages.

Created /dev/loop8 and /dev/loop9 on the host:

mknod /dev/loop8 b 7 8
mknod /dev/loop9 b 7 9
chmod 640 /dev/loop*
chgrp disk /dev/loop*

Didn't help.

Did the same on the guest prior to the resize: didn't help either, as expected.

Maybe I have to reboot the computer? I'll give it a try.

Another strange thing:

From the VM, here's what I get after the resize and a reboot:

fdisk -l

Disk /dev/hda: 26.8 GB, 26843545600 bytes
255 heads, 63 sectors/track, 3263 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1        3264    26214384   83  Linux

So my disk seems to be 26.8 GB, which is what I wanted.

BUT:

The information page in Virtualmin gives me:

Local disk space 3.94 GB total, 1.80 GB used

That was the size BEFORE the resize... So only the LVM volume was actually resized, not the filesystem itself.
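A possible workaround at this point (my own assumption, not something suggested in the thread) would have been to grow the filesystem by hand from a rescue boot of the guest, since a mounted ext2 filesystem cannot be resized online:

e2fsck -f /dev/hda1    # forced check, required before an offline resize
resize2fs /dev/hda1    # grow ext2 to fill the already-enlarged partition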

From http://www.brandonhutchinson.com/Creating_additional_loop_devices.html:

The loop module on RHEL 5 and RHEL 4 allows 8 loop devices by default. To increase the number of loop devices to 64, for example, add the following to /etc/modprobe.conf:

options loop max_loop=64

The maximum value of max_loop is 256.

ls -l /dev/loop* | wc -l

8

On RHEL 5 and RHEL 4 systems, reloading the loop module will cause the changes (i.e., increasing or decreasing loop devices) in max_loop to take effect.

/sbin/rmmod loop

/sbin/modprobe loop
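To confirm the reload took effect before retrying the resize, something like this should do (assuming max_loop=64 was set in /etc/modprobe.conf and udev recreates the device nodes):

ls /dev/loop* | wc -l   # should now report 64 instead of 8
losetup -f              # prints the first unused loop device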

That (reloading the loop module) is what I did, and this time the resize was OK. Not only the underlying LVM volume, but also the filesystem: Virtualmin now reports the right filesystem size.

Thanks for pointing me to the right solution!

Hope it helps others too.

Cheers,

Miguel.

Sounds like I should switch Cloudmin from using loop9 to loop7, which is more likely to exist. I will do this in the next release (4.9). Till then, your fix will work fine.

Automatically closed -- issue fixed for 2 weeks with no activity.