Request #2239
datastore 0 is confused about how to deal with transient file system images
| Status: | Closed | Start date: | 07/28/2013 |
|---|---|---|---|
| Priority: | Normal | Due date: | |
| Assignee: | - | % Done: | 0% |
| Category: | - | | |
| Target version: | - | | |
| Pull request: | | | |
Description
Swap images get stored in datastore 0. The OpenNebula frontend uses datastore 0 to store the deployment.0 XML descriptor (we use libvirt/KVM) and symlinks for block devices (we use MooseFS as our TM and datastore drivers; I've based those off of shared). The docs mention that if you want live migration, datastore 0 should be shared, so any writes into it (the checkpoint files) are directly visible to other hosts. However, being shared implies that some kind of remote access (network or otherwise) is responsible for the storage.
All of our hosts have fast local storage in addition to the shared cluster storage. However, because datastore 0 is currently shared, swap devices are serviced over the network. When a VM starts pounding its swap, the whole cluster suffers, instead of just the single host that the VM is on.
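For reference, a volatile swap disk is declared directly in the VM template rather than registered in an image datastore, which is why OpenNebula creates it inside the system datastore (datastore 0). A minimal sketch (the size here is just an example):

```
# Volatile swap disk declared in a VM template; OpenNebula creates
# this file in the system datastore (datastore 0) at deployment time.
DISK = [
  TYPE = "swap",
  SIZE = "4096" ]   # size in MB
```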
History
#1 Updated by Ruben S. Montero almost 8 years ago
- Status changed from Pending to Closed
Please also note that you need the swap devices on shared storage for live migration (for example, there are virtual memory pages on swap).
So if you need live migrations you cannot escape from it. However, if you don't, you can use images exported on a shared FS volume (like NFS) using the shared TM drivers, together with a system DS using the ssh drivers. There will be no live migration, though.
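A minimal sketch of that split, assuming default driver names (the datastore names are hypothetical); each template would be registered with `onedatastore create`:

```
# shared_images.ds -- image datastore exported over a shared FS (e.g. NFS)
NAME   = shared_images
TYPE   = IMAGE_DS
DS_MAD = fs
TM_MAD = shared

# local_system.ds -- system datastore copied to hosts over ssh, so
# volatile/swap files end up on each host's local disk
NAME   = local_system
TYPE   = SYSTEM_DS
TM_MAD = ssh
```

With this split, registered images stay reachable from every host via the shared FS, while swap and other volatile files live on each host's local disk, at the cost of live migration.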
#2 Updated by Ruben S. Montero almost 8 years ago
Hi Adam,
It seems that your feedback is not getting into the system.
> Ok, then it'd be nice to have volatile disks placed in a particular datastore, so I could have a fast, but still shared, datastore. Or something. So maybe this could be reopened? Or I could just file it as a separate issue, if what I said makes sense.
The problem here is that it is really difficult to change the VM directory conventions. In the past we have done it by slightly modifying the scripts that create the volatile disks, so the disks are created on local partitions... Let us know if you are interested in that solution.
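A rough sketch of the kind of modification described above, against the shared TM driver's mkimage script. The scratch path `/var/lib/one/scratch` is hypothetical, and the argument convention (size, format, host:path) is an assumption; check the installed driver before adapting this:

```bash
#!/bin/bash
# Hypothetical variant of the shared TM driver's mkimage script: create
# the volatile disk on a fast local partition and leave a symlink at the
# shared system-datastore path, so deployment.0 still resolves.
SIZE=$1                              # disk size in MB
FORMAT=$2                            # e.g. swap or raw
DST=$3                               # host:.../datastores/0/<vmid>/disk.N

DST_HOST=${DST%%:*}                  # hypervisor host
DST_PATH=${DST#*:}                   # path inside the system datastore
SCRATCH=/var/lib/one/scratch         # assumed fast local partition

VM_ID=$(basename "$(dirname "$DST_PATH")")
DISK=$(basename "$DST_PATH")

ssh "$DST_HOST" "set -e
    mkdir -p $SCRATCH
    dd if=/dev/zero of=$SCRATCH/$VM_ID-$DISK bs=1M count=$SIZE
    if [ '$FORMAT' = 'swap' ]; then mkswap $SCRATCH/$VM_ID-$DISK; fi
    ln -sf $SCRATCH/$VM_ID-$DISK $DST_PATH"
```

Note the trade-off: with the volatile disk on host-local storage, that VM can no longer be live-migrated, which matches the earlier comment.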