Feature #4464
allow migration between clusters that share datastores/vnets
| Status: | Closed | Start date: | 05/12/2016 |
|---|---|---|---|
| Priority: | High | Due date: | |
| Assignee: | Carlos Martín | % Done: | 0% |
| Category: | Core & System | | |
| Target version: | Release 5.2 | | |
| Resolution: | fixed | Pull request: | |
Description
We'd like to be able to migrate a vm between clusters that share datastores and vnets, but this is currently not possible because migration across clusters is not permitted.
We would like to define our hosts in separate clusters because they share only some of their resources (vnets and datastores), not all of them. Our setup is like an overlapping Venn diagram: we have shared storage across all of hosts A-F, but some vnets are available only on hosts A,B,C, others only on hosts D,E,F, and a third set of vnets is shared across all of A-F.

Because some vnets exist only on hosts A,B,C, we'd like to define those hosts as their own cluster, so that a template using one of those vnets schedules correctly without having to explicitly name the hosts (and similarly, hosts D,E,F as their own cluster).

Vms that use only the vnets and datastores shared across A-F could launch on any of those hosts. Currently, however, once launched they are tied to their initial cluster (A,B,C or D,E,F) and cannot migrate to the other, even though all of their resources are present there. We'd like to see this restriction removed for those vms: if a vm's current vnet and datastore are accessible in the destination cluster, then migration should be permitted.
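The rule requested above can be sketched as a simple predicate. This is a hypothetical model for illustration only, not OpenNebula's actual data structures or code: migration is allowed when the destination cluster provides the vm's datastore and every vnet it uses.

```python
# Hypothetical model of the requested check; the dict layout and
# names are illustrative, not oned's internal representation.

def migration_allowed(vm_datastore, vm_vnets, dest_cluster):
    """Allow migration when the destination cluster contains the
    vm's datastore and all of the vnets it is attached to."""
    return (vm_datastore in dest_cluster["datastores"]
            and all(v in dest_cluster["vnets"] for v in vm_vnets))

# The Venn-diagram setup from the description: shared storage
# everywhere, some vnets only in cluster ABC, some only in DEF,
# and one set ("vnet_all") shared by both.
cluster_abc = {"datastores": {"shared_ds"}, "vnets": {"vnet_abc", "vnet_all"}}
cluster_def = {"datastores": {"shared_ds"}, "vnets": {"vnet_def", "vnet_all"}}

print(migration_allowed("shared_ds", {"vnet_all"}, cluster_def))  # True
print(migration_allowed("shared_ds", {"vnet_abc"}, cluster_def))  # False
```

A vm on the shared datastore and a shared vnet may move between the clusters; a vm on an ABC-only vnet may not.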
Associated revisions
Feature #4464: Allow migration between clusters that share datastores/vnets
Feature #4464: Allow migration between clusters that share datastores/vnets
(cherry picked from commit f789d500f4005d00c63657ebed8d1b87b066dde6)
Feature #4464: Refresh VM cluster requirements
Cluster requirements are recalculated:
- on release from hold
- on resume from undeployed/stopped
- on resched
- on migrate
Feature #4464: Refresh VM cluster requirements
Cluster requirements are recalculated:
- on release from hold
- on resume from undeployed/stopped
- on resched
- on migrate
(cherry picked from commit b9588846efeb746acf0a1b3be84182b43b571c28)
feature #4464: Return list of viable clusters on automatic_requirements API call
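With this change, the automatic requirements can name several viable clusters instead of one. A hypothetical example of what such an expression might look like in a vm's template (cluster IDs 100 and 101 are made up for illustration):

```text
AUTOMATIC_REQUIREMENTS = "(CLUSTER_ID = 100) | (CLUSTER_ID = 101)"
```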
History
#1 Updated by Ruben S. Montero about 5 years ago
- Tracker changed from Feature to Backlog
- Category set to Core & System
- Priority changed from Normal to High
Would a --force option work in this case? We could bypass any check and allow the migration when requested.
#2 Updated by John Noss about 5 years ago
Having a --force option would work, if nothing else - but if possible, I think it would be nice to check the availability of the vm's current datastore/vnet on the destination host. (We can assume the clusters have been defined correctly by the administrator - so this can just check that the destination host is in a cluster that has that datastore and that vnet).
#3 Updated by John Noss almost 5 years ago
Any updates or thoughts on this?
#4 Updated by Ruben S. Montero almost 5 years ago
- Tracker changed from Backlog to Feature
- Assignee set to Carlos Martín
- Target version set to Release 5.2
We are scheduling this for 5.2. Currently the scheduler already behaves as described (placement decisions are based on host compatibility, not on cluster membership), so the same logic will be implemented in oned.
#5 Updated by John Noss almost 5 years ago
This looks great. We've installed from that feature branch and are able to live migrate across clusters, although only in certain situations - we're running into an issue where this does not respect changes to clusters after the vm has been launched. It looks like the scheduling requirements (automatic_requirements) are only computed once, at initial boot time, and ignore any future changes (such as the vlan being added to a different cluster, etc).
Can the automatic requirements be recalculated? Or is there a way for us, as the admins, to edit them? If oned could recalculate them, this could be an oned.conf configurable: in steady state OpenNebula wouldn't need to dynamically recompute these scheduling rules, but it would be possible to enable a live check of datastore and vlan availability on the destination cluster, updating the vm's automatic_requirements accordingly.
#6 Updated by Carlos Martín almost 5 years ago
Hi,
We have made a new commit to recalculate the clusters for the automatic requirements in these situations:
- VM moves to pending from hold (release action)
- VM moves to pending from undeployed/stopped (resume action)
- VM resched action
- VM migrate
This means that the automatic requirements are refreshed every time a "scheduling cycle" starts, but will not be periodically refreshed while the VM is in the pending state. We think this is a good compromise to avoid the complexity of a timer constantly checking for changes.
To force this refresh on VMs in "pending" the admin will need to perform a hold & release on the VM.
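The refresh policy described above can be modeled as a small sketch. This is a hypothetical illustration of the trigger list from the commit message, not oned's internal API:

```python
# Hypothetical sketch of the refresh policy; action names mirror the
# commit message (release, resume, resched, migrate), not oned code.

REFRESH_TRIGGERS = {"release", "resume", "resched", "migrate"}

def maybe_refresh(action, vm):
    """Recalculate cluster requirements only when an action starts a
    new scheduling cycle; a vm already sitting in 'pending' is not
    periodically re-checked."""
    if action in REFRESH_TRIGGERS:
        vm["automatic_requirements"] = "recalculated"
    return vm

vm = {"state": "pending", "automatic_requirements": "stale"}
maybe_refresh("resched", vm)
print(vm["automatic_requirements"])  # recalculated
```

Under this model, the hold & release workaround works because "release" is one of the triggering actions.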
#7 Updated by Ruben S. Montero almost 5 years ago
- Status changed from Pending to Closed
- Resolution set to fixed