Bug #3768

destination node not cleaned up when migrate --live fails

Added by Anton Todorov about 6 years ago. Updated almost 6 years ago.

Status: Closed
Start date: 04/21/2015
Priority: Sponsored
Due date:
Assignee: Javi Fontan
% Done: 0%
Category: Drivers - VM
Target version: Release 4.14
Resolution: fixed
Pull request:
Affected Versions: OpenNebula 4.10, OpenNebula 4.12

Description

I noticed this while developing the StorPool driver. I am filing it as a bug, but perhaps it should be treated as a feature request?

Here is the scenario to reproduce:
1. Prepare a migrate --live failure: add a second disk to the VM with CACHE=default, or set the CACHE attribute on the current disk (I will open a separate report for that issue); a sample disk fragment is shown after the log below.
2. Issue a live migrate command via Sunstone or the CLI.
3. The migration fails with the following log:

Tue Apr 21 13:13:35 2015 [Z0][DiM][D]: Live-migrating VM 5
Tue Apr 21 13:13:35 2015 [Z0][ReM][D]: Req:3616 UID:0 VirtualMachineMigrate result SUCCESS, 5
Tue Apr 21 13:13:35 2015 [Z0][ReM][D]: Req:9920 UID:0 VirtualMachineInfo invoked , 5
Tue Apr 21 13:13:35 2015 [Z0][ReM][D]: Req:9920 UID:0 VirtualMachineInfo result SUCCESS, "<VM><ID>5</ID><UID>0..." 
Tue Apr 21 13:13:37 2015 [Z0][ImM][I]: --Mark--
Tue Apr 21 13:13:38 2015 [Z0][VMM][D]: Message received: LOG I 5 Successfully execute transfer manager driver operation: tm_premigrate.
Tue Apr 21 13:13:38 2015 [Z0][VMM][D]: Message received: LOG I 5 ExitCode: 0
Tue Apr 21 13:13:38 2015 [Z0][VMM][D]: Message received: LOG I 5 Successfully execute network driver operation: pre.
Tue Apr 21 13:13:39 2015 [Z0][VMM][D]: Message received: LOG I 5 Command execution fail: /var/tmp/one/vmm/kvm/migrate 'one-5' 's05' 's06' 5 s06
Tue Apr 21 13:13:39 2015 [Z0][VMM][D]: Message received: LOG E 5 migrate: Command "virsh --connect qemu:///system migrate --live  one-5 qemu+ssh://s05/system" failed: error: Unsafe migration: Migration may lead to data corruption if disks use cache != none
Tue Apr 21 13:13:39 2015 [Z0][VMM][D]: Message received: LOG E 5 Could not migrate one-5 to s05
Tue Apr 21 13:13:39 2015 [Z0][VMM][D]: Message received: LOG I 5 ExitCode: 1
Tue Apr 21 13:13:39 2015 [Z0][VMM][D]: Message received: LOG I 5 Failed to execute virtualization driver operation: migrate.
Tue Apr 21 13:13:39 2015 [Z0][VMM][D]: Message received: MIGRATE FAILURE 5 Could not migrate one-5 to s05
Tue Apr 21 13:13:42 2015 [Z0][VMM][D]: Message received: POLL SUCCESS 5 STATE=a USEDCPU=0.0 USEDMEMORY=786432 NETRX=53287 NETTX=53865
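
For reference, a VM template disk fragment along the following lines is enough to trip the check, since libvirt refuses a plain live migration whenever a disk uses cache != none (the image name is hypothetical):

# Hypothetical fragment for the VM template; the image name is made up.
# Any cache mode other than "none" makes libvirt reject the plain live migration.
DISK = [
  IMAGE = "ubuntu-disk",
  CACHE = "default"
]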

The tm_premigrate task completes and the files are copied, but the migration itself fails, so the VM keeps running on the source node while the copied files are left behind on the destination node.

IMHO a hook should be triggered here to clean up the files left on the destination node.
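
As a very rough sketch only (the destination host name is taken from the log above, while the datastore path and the script shape are assumptions, not the real driver interface), such a hook could boil down to removing the directory that tm_premigrate populated on the destination host:

#!/bin/bash
# Hypothetical cleanup sketch, not the actual OpenNebula driver API:
# remove the VM directory that tm_premigrate created on the destination
# host once the live migration has failed.
DEST_HOST="s05"                           # destination host from the log above
VM_DIR="/var/lib/one/datastores/0/5"      # assumed system datastore dir of VM 5

ssh "$DEST_HOST" "rm -rf '$VM_DIR'"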

Associated revisions

Revision bdd231bf
Added by Javi Fontan about 6 years ago

bug #3768: add tm/failmigrate call on error

Revision adc3e0e4
Added by Javi Fontan about 6 years ago

bug #3768: add placeholder failmigrate script

History

#1 Updated by Ruben S. Montero about 6 years ago

  • Category changed from Drivers - Storage to Drivers - VM
  • Status changed from Pending to New
  • Assignee set to Javi Fontan
  • Priority changed from Normal to Sponsored
  • Target version set to Release 4.14

Thanks for the heads up!

#2 Updated by Javi Fontan about 6 years ago

Now, on migration failure, a new call to the TM failmigrate script is made with the same parameters as premigrate.
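
For what it is worth, a minimal placeholder along those lines (a sketch, assuming drivers override it with their own cleanup; argument handling is omitted since the script simply receives whatever premigrate receives) would be little more than:

#!/bin/bash
# Placeholder failmigrate sketch: called with the same arguments as premigrate
# when the live migration fails. Storage drivers that copy files in premigrate
# can override it to undo that work on the destination host; by default it is
# a no-op.
exit 0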

#3 Updated by Javi Fontan almost 6 years ago

  • Status changed from New to Closed
  • Resolution set to fixed
