Bug #4460

Migration TM scripts premigrate and postmigrate not called

Added by Michal Leinweber about 5 years ago. Updated about 5 years ago.

Status: Closed
Start date: 05/12/2016
Priority: Normal
Due date:
Assignee: -
% Done: 0%
Category: Drivers - Storage
Target version: Release 5.0
Resolution: fixed
Pull request:
Affected Versions: OpenNebula 4.14

Description

I have a shared TM driver, and the premigrate and postmigrate TM scripts are never called.

In the log I see:
Message received: LOG I 48 Successfully execute transfer manager driver operation: tm_premigrate.

But the scripts are not executed on any host.

History

#1 Updated by Ruben S. Montero about 5 years ago

Hi

Please note that the pre/post migration scripts come from the system datastore; more info here [1]. Could you check the TM_MAD of the system datastore, and confirm whether those scripts are the ones not being called?

[1] http://docs.opennebula.org/4.14/integration/infrastructure_integration/sd.html#tm-drivers-structure
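
For reference, OpenNebula resolves these hooks from the TM driver directory named after the system datastore's TM_MAD on the frontend. A rough sketch of the layout (standard install path; the my_tm directory is a hypothetical custom driver, not part of the distribution):

/var/lib/one/remotes/tm/
  shared/
    premigrate     # run on the hosts before a live migration
    postmigrate    # run on the hosts after a live migration
    ...            # ln, mv, clone, delete, ...
  ssh/
    ...
  my_tm/           # hypothetical custom TM driver: needs its own premigrate/postmigrate
    ...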

#2 Updated by Michal Leinweber about 5 years ago

Thanks Ruben. Yes, it seems to be that case. But I do not understand how this can be handled. If the VM has a disk attached from a datastore other than the system datastore, does that mean it cannot be live migrated? Or how could I call the datastore TM scripts from the system datastore TM scripts?

#3 Updated by Ruben S. Montero about 5 years ago

Hi Michal, the shared and ssh system datastore drivers, which can work with multiple image datastores, already have this:

#!/bin/bash

# Arguments (as used below): $1 = source host, $2 = destination host,
# $3 = remote system datastore directory, $4 = VM ID
DRIVER_PATH=$(dirname $0)

# Number of disks and the TM_MAD of each disk, taken from the VM XML
DISK_COUNT=$(onevm show $4 -x | grep DISK_ID | wc -l)
TMS=$(onevm show $4 -x | sed -rn 's/[[:space:]]*<TM_MAD><\!\[CDATA\[([^]]*).*/\1/p')

# Delegate to the premigrate script of the TM driver that owns each disk
for i in $(seq 1 $DISK_COUNT); do
  TM=$(echo $TMS | cut -d" " -f$i)
  DISK_ID=$(($i - 1))
  DEV=$(ssh $1 "readlink $3/disk.$DISK_ID")
  ${DRIVER_PATH}/../$TM/premigrate "$1" "$2" "$DEV"
done

exit 0
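
For context, a hedged sketch of how such a premigrate script ends up being invoked (host names, VM ID, and datastore path are made up; trailing arguments such as the datastore ID and VM template are omitted because their exact order is not shown in this ticket):

/var/lib/one/remotes/tm/shared/premigrate \
    host01 host02 /var/lib/one/datastores/0/42 42 ...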

Maybe you need to use the same logic in the ssh driver? Or is this not working for you?

#4 Updated by Michal Leinweber about 5 years ago

Yes Ruben, it seems that solves my problem. This code is not from 4.14, so I suppose it is from the upcoming 5.0. So I will just wait for 5.0 :-)

#5 Updated by Ruben S. Montero about 5 years ago

  • Status changed from Pending to Closed
  • Target version set to Release 5.0
  • Resolution set to fixed

You are right ;) I'm closing this as fixed for 5.0 for now...
