Create new image issue - semi urgent

Hi, I downloaded a 64-bit image from stacklet.com and I can run it from the command line with xm create XXXX.cfg -c, as suggested on Stacklet's website. However, every time I try "Create image for Xen" on the New System Images page, the resulting image can't be used when creating a new VPS: the image gets created and added to the available images, but after creating a Xen instance from it, the VPS is inaccessible with SSH or anything else. Is there something I should (or shouldn't) be doing before I migrate the new image into the Cloudmin image list?

I have tried compressed and uncompressed images, and on one system I disabled DHCP and set the network to static before the import, but I'm still not getting very far. I also tried a VMware image, but that ends up the same.

The reason it's semi-urgent is that we've been experiencing some really strange server errors since both the host and the Xen VPS machines running CentOS were upgraded to 5.6, so I was looking at migrating to Ubuntu.

Many Thanks

Steve

Status: 
Closed (fixed)

Comments

What format is this image in exactly? Stacklet offers several types ..

Also, after creating a system from the image and starting it up, try going to the Graphical Console page and see if there is some reason why it isn't booting..

Hi Jamie, thanks for the quick reply. The Xen image was this one: http://www.stacklet.com/downloads/images/ubuntu/10.10/x86-64 and the VMware image I tried was this one: http://www.stacklet.com/downloads/images/x86_64/vmdk/ubuntu/10.04/gnome

The graphical console was just a blank black screen. On the text console I get:

[ 1.355788] udev[62]: starting version 163
Begin: Loading essential drivers ... done.
Begin: Running /scripts/init-premount ... done.
Begin: Mounting root file system ... Begin: Running /scripts/local-top ... done. chvt: can't open console
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
chvt: can't open console
Couldn't get a file descriptor referring to the console
Couldn't get a file descriptor referring to the console
Couldn't get a file descriptor referring to the console
Couldn't get a file descriptor referring to the console
Couldn't get a file descriptor referring to the console
Couldn't get a file descriptor referring to the console
ALERT! /dev/xvda does not exist. Dropping to a shell!

BusyBox v1.15.3 (Ubuntu 1:1.15.3-1ubuntu5) built-in shell (ash)
Enter 'help' for a list of built-in commands.

(initramfs)

Update

Hi, I did a bit of digging and found something regarding the "ALERT! /dev/xvda does not exist. Dropping to a shell!" error.

The .cfg file that came with the image was:

  bootloader = "/usr/bin/pygrub"
  memory = 512
  name = "ubuntu.10-10.gnome.x86-64.20110417"
  vif = [ '' ]
  disk = ['file:/root/stacklet/ubuntu.10-10.gnome.x86-64.20110417.img,xvda,w']
  root = "/dev/xvda"
  extra = "fastboot"

After adding it to the system and creating a new VPS, the .cfg for that server ends up as:

  memory = 1024
  maxmem = 4096
  name = 'ns8'
  vif = [ 'ip=xxx.xxx.xxx.xxx,mac=00:16:3e:12:94:EC' ]
  address = 'xxx.xxx.xxx.xxx'
  netmask = '255.255.255.128'
  disk = ['file:/xen/ns8.img,sda1,w']
  bootloader = "/usr/bin/pygrub"
  vnc = 1
  vnclisten = "0.0.0.0"
  vncunused = 1
  vncpasswd = "NotForYourEyes"
  vfb = ['type=vnc,vncunused=1,vncpasswd=NotForYourEyes,vnclisten=0.0.0.0']

Before I started using Cloudmin I built my own image on CentOS, which by default did this:

  disk = [ "tap:aio:/xen/images/ns.domain.com.img,xvda,w" ]

and that server has no issues and is fully managed in Cloudmin, so I know it can't be something that isn't installed. Anyway, I've sort of fixed the issue, though I'm not sure whether this is a bug: in the new .cfg file I changed disk = ['file:/xen/ns8.img,sda1,w'] to disk = ['file:/xen/ns8.img,xvda,w'], added root = "/dev/xvda", and it now boots up.
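Putting those two edits together, the relevant lines of the working ns8 .cfg would read as follows (this is my summary, not Cloudmin output; everything else is unchanged from the generated file above):

```
disk = ['file:/xen/ns8.img,xvda,w']
root = "/dev/xvda"
```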

Is there a setting I need to change so it automatically sees the virtual disk type?

Ok, I see the problem .. the issue is that the /etc/fstab file in the image refers to /dev/xvda, but Cloudmin is creating the Xen config to use /dev/sda1. Normally during the creation process this is fixed by having Cloudmin update the /etc/fstab file to use the correct path .. but I am guessing that is failing, perhaps due to the filesystem format of the image.

On the VM, what does the command fdisk -l output ? And what does the /etc/fstab file contain?

That's funny, as I have just been looking at that, and yep, you're right. /etc/fstab has:

  /dev/xvda / ext3 defaults,errors=remount-ro 0 0

The output from creating the VM actually stated that it updated the fstab file.

I have found /etc/fstab.save, which contains:

  /dev/sda1 / ext3 defaults,errors=remount-ro 0 0

Something that might be of interest: the file modification dates are

  -rw-r--r-- 1 root root 49 May  2 14:45 /etc/fstab
  -rw-r--r-- 1 root root 70 Apr 15 17:42 /etc/fstab.save

fdisk -l output:

  Disk /dev/xvda: 10.7 GB, 10738466816 bytes
  255 heads, 63 sectors/track, 1305 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disk identifier: 0x00000000

  Disk /dev/xvda doesn't contain a valid partition table
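That "doesn't contain a valid partition table" message appears to be expected here: the Stacklet file looks like a bare filesystem image rather than a partitioned disk, which is also why the device name in disk= matters. A quick way to check an image file on the host is to look for the MBR signature in sector 0 (a sketch, not a Cloudmin feature):

```shell
# Sketch: report whether an image file is a partitioned disk or a bare
# filesystem. A DOS/MBR partition table ends sector 0 with bytes 55 AA.
has_mbr() {
  sig=$(dd if="$1" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
  if [ "$sig" = "55aa" ]; then
    echo "partitioned (MBR signature present)"
  else
    echo "bare filesystem (no MBR signature)"
  fi
}
```

For an image like this one, has_mbr /xen/ns8.img should report "bare filesystem".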

I'm wondering if the original/source image has the wrong entry in that .cfg, so I'm going to change that to sda1, boot it, run fdisk -l, and report back.

Ok, I see the bug now .. Cloudmin is not fixing the /etc/fstab file in some cases when it should be. I will correct that in the next release.

In your case, I would suggest changing xvda1 to sda1 in /etc/fstab and in the Xen .cfg file, then shutting down and re-starting the VM. If it comes up OK, you can create a new image from it to use for creating additional VMs..
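The fstab side of that edit can be scripted; here is a minimal sketch (this is not Cloudmin's own code, and in practice the file would be the guest's /etc/fstab with the image mounted on the host):

```shell
# Sketch: rewrite the root device name in a guest's fstab, keeping a
# backup the way Cloudmin keeps fstab.save. Uses GNU sed's -i option.
fix_fstab() {
  fstab=$1
  cp "$fstab" "$fstab.save"
  # Handle both /dev/xvda1 and a bare /dev/xvda entry (order matters)
  sed -i -e 's|/dev/xvda1|/dev/sda1|' -e 's|/dev/xvda |/dev/sda1 |' "$fstab"
}
```

After running it against the mounted image's etc/fstab, the matching change still has to be made to the disk= line in the VM's Xen .cfg by hand.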


Funnily enough, that's exactly what I've started doing. It's compressing the image at the moment; I'll let you know how things work out. Thanks Jamie

Hi Jamie, just to update: I've created an image from the working system with the edits in fstab etc. (sda1 instead of xvda), and new VPSes are being created from the image with no issues. Thanks, Steve

Automatically closed -- issue fixed for 2 weeks with no activity.