Request #4877
Combine ceph system and image datastores
| Status: | Pending | Start date: | 10/21/2016 |
|---|---|---|---|
| Priority: | Normal | Due date: | |
| Assignee: | - | % Done: | 0% |
| Category: | - | | |
| Target version: | - | | |
| Pull request: | | | |
Description
Currently, using Ceph as the backend requires two separate datastores, image and system. It would be nice if these could be combined so that root disks and volatile disks/checkpoints live in the same datastore/pool. This would also be another route to facilitating selection of the proper system datastore when multiple system datastores exist (see feature request http://dev.opennebula.org/issues/4876).
Ceph system and image datastores could either be linked in the scheduler, so that when an image is on a Ceph datastore the scheduler knows to select the system datastore backed by the same Ceph pool, or they could be merged so that only one datastore is required when using Ceph.
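For reference, the closest existing mechanism is pinning the system datastore per VM with the scheduler's `SCHED_DS_REQUIREMENTS` attribute; a minimal sketch, assuming the Ceph-backed system datastore has ID 101 (the ID is illustrative):

```
# VM template fragment: force placement on the Ceph-backed system
# datastore. ID 101 is an assumption for illustration.
SCHED_DS_REQUIREMENTS = "ID=101"
```

This has to be set per VM template, which is exactly the manual step this request would like the scheduler to infer automatically.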
Note also that this could be configurable, either as an oned/sched.conf option or as a config attribute on the Ceph image datastore, e.g. USE_SYSTEM_DS = <DS ID>, or similar; see the sketch below.
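A sketch of what the proposed attribute might look like on the image datastore template. `USE_SYSTEM_DS` is the hypothetical attribute suggested in this request, not an existing OpenNebula option; the other values are illustrative:

```
# Hypothetical image datastore template. USE_SYSTEM_DS does not exist
# in OpenNebula; it is the attribute proposed by this request.
NAME      = ceph_images
DS_MAD    = ceph
TM_MAD    = ceph
POOL_NAME = one
USE_SYSTEM_DS = 101   # ID of the Ceph system datastore to pair with
```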
History
#1 Updated by Ruben S. Montero over 4 years ago
- Tracker changed from Feature to Request
Hi
I believe this is already implemented with Ceph as a system datastore in 5.0. Volatile disks are created in the pool designated by the system datastore. We considered several designs to also store checkpoints in the pool; unfortunately, libvirt cannot write/read checkpoints from a Ceph pool, so we decided to avoid checkpoint thrashing.
Note that you can use the same pool for the system datastore (or a different one, e.g. a pool with less replication for volatile disks). This requires defining both datastores, but we would rather keep this approach; see the sketch below.
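A minimal sketch of the two-datastore setup described above, with both datastores pointing at the same Ceph pool. The attribute names are the standard Ceph driver attributes from the 5.0 docs; the datastore names, pool, monitor hosts, and bridge list are assumptions for illustration:

```
# images.ds -- Ceph image datastore
NAME        = ceph_images
TYPE        = IMAGE_DS
DS_MAD      = ceph
TM_MAD      = ceph
DISK_TYPE   = RBD
POOL_NAME   = one            # same pool as the system datastore below
CEPH_HOST   = "mon1 mon2"
CEPH_USER   = libvirt
CEPH_SECRET = "..."          # libvirt secret UUID
BRIDGE_LIST = "cephfrontend"

# system.ds -- Ceph system datastore sharing the pool
NAME        = ceph_system
TYPE        = SYSTEM_DS
TM_MAD      = ceph
POOL_NAME   = one            # same pool; point this at a different
                             # pool for e.g. lower replication on
                             # volatile disks
CEPH_HOST   = "mon1 mon2"
CEPH_USER   = libvirt
CEPH_SECRET = "..."
BRIDGE_LIST = "cephfrontend"
```

Both are then registered with `onedatastore create images.ds` and `onedatastore create system.ds`.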
Cheers