Bug #465
opennebula_2.0.1-1 defaults to an incorrect DISK/DRIVER attribute of the VM
| Status: | Closed | Start date: | 01/11/2011 |
|---|---|---|---|
| Priority: | Normal | Due date: | |
| Assignee: | - | % Done: | 0% |
| Category: | - | | |
| Target version: | - | | |
| Resolution: | worksforme | Pull request: | |
| Affected Versions: | | | |
Description
If no DISK = [ DRIVER = "qcow2" ] attribute is set on the VM template, OpenNebula (only version 2.0.1-1) defaults to raw, even though the images are qcow2 files.
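For reference, an explicit driver declaration in the VM template would look roughly like this (the source path and target device are illustrative, not taken from the report):

(SNIP vm.template:)
DISK = [
  SOURCE = "/srv/images/disk0.qcow2",  # illustrative path
  TARGET = "hda",
  DRIVER = "qcow2"                     # without this, 2.0.1-1 deploys the disk as raw
]
(EOF)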
This behavior was not seen under opennebula_2.0.1.
However, I don't really see the point of it: since specifying the disk format has become mandatory under the newer versions of libvirt, it can be worked around on the host:
(SNIP /etc/libvirt/qemu.conf:)
# If allow_disk_format_probing is enabled, libvirt will probe disk
# images to attempt to identify their format, when not otherwise
# specified in the XML. This is disabled by default.
#
# WARNING: Enabling probing is a security hole in almost all
# deployments. It is strongly recommended that users update their
# guest XML <disk> elements to include <driver type='XXXX'/>
# elements instead of enabling this option.
allow_disk_format_probing=1
(EOF)
Well, the point being that OpenNebula never tries to identify the file itself, but should default to the same behavior libvirt would naturally apply.
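The alternative that libvirt itself recommends in the warning quoted above is to declare the format in the guest XML rather than enable probing; a disk element with an explicit driver type looks roughly like this (file path and target names are illustrative):

(SNIP domain XML, <devices> section:)
<disk type='file' device='disk'>
  <!-- explicit format, so no probing is needed -->
  <driver name='qemu' type='qcow2'/>
  <source file='/srv/images/disk0.qcow2'/>
  <target dev='hda' bus='ide'/>
</disk>
(EOF)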
I worked around it for the VMs generated by 2.0.1 by editing vm_pool.template manually and adding DRIVER = qcow2 to the templates of the newly generated VMs.
History
#1 Updated by Ruben S. Montero over 10 years ago
- Status changed from New to Closed
- Resolution set to worksforme
The default driver for a disk can be set in two ways:
- Driver-wise, for all the images, at etc/vmm_ssh/vmm_ssh_kvm.conf with DISK=[DRIVER="qcow2"]
- Image-wise, in the Image repository, by setting the DRIVER attribute for the image
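For illustration, the driver-wise option is a single default in the KVM driver configuration file, while the image-wise option goes into the template of each image; both snippets below are sketches, with names invented for the example:

(SNIP etc/vmm_ssh/vmm_ssh_kvm.conf:)
# default attributes merged into every disk deployed by the KVM driver
DISK = [ DRIVER = "qcow2" ]
(EOF)

(SNIP image.template:)
NAME   = "base-image"               # illustrative name
PATH   = "/srv/images/disk0.qcow2"  # illustrative path
DRIVER = "qcow2"                    # per-image disk format
(EOF)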
#2 Updated by Marlon Nerling over 10 years ago
The driver-wise workaround (sic) is still buggy if you use two or more disks of different types.
The main problem is not working around it for new VMs, but for the VMs generated by earlier versions.
This would be the case when migrating from opennebula_2.0.1 to 2.0.1-1: the VMs would boot, but the bootloader would complain about 'not bootable device found'.