Request #3088

Use virsh save and virsh restore to replace virsh snapshot-create-as in the KVM driver to shorten snapshot time

Added by 海涛 肖 almost 7 years ago. Updated over 6 years ago.

Status: New
Start date: 07/23/2014
Priority: Normal
Due date: -
Assignee: -
% Done: 0%
Category: Drivers - VM
Target version: -
Pull request: -

Description

Currently, the KVM driver in OpenNebula uses the 'virsh snapshot-create-as' command to create snapshots. When there is little data on the disk the snapshot is very fast, but when the disk holds a lot of data the snapshot is very slow: the more data on the disk, the slower the snapshot.

Use 'virsh save' and 'qemu-img' to replace 'virsh snapshot-create-as'. To create a snapshot, 'virsh save' writes the VM memory to a file, then 'qemu-img' creates a new overlay file backed by the current disk image (the current file becomes the checkpoint), and finally 'virsh restore' resumes the VM from the overlay and the memory file.
Use 'virsh destroy', 'qemu-img' and 'virsh restore' to replace 'virsh snapshot-revert'. To revert a snapshot, 'virsh destroy' the VM, recreate the overlay from the checkpoint with 'qemu-img', then 'virsh restore' the VM from the memory file and the new overlay.

create snapshot:
'virsh save' backup memory ----------> 'qemu-img' backup disk ------------> 'virsh restore' the vm ---------> finish
revert snapshot:
'virsh destroy' --------------> 'qemu-img' create subfile from checkpoint --------------> 'virsh restore' the vm ----------------> finish
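The two flows above can be sketched as a shell session. The domain name, image paths, and memory-file naming here are assumptions for illustration only, not the actual driver code; running it requires a libvirt/QEMU host:

```shell
#!/bin/sh
# Sketch of the proposed create/revert procedure (hypothetical names/paths).
set -e

DOMAIN=one-19                  # assumed libvirt domain name
DISK=/var/lib/one/wxp.qcow2    # assumed path of the active disk image
TS=$(date +%s)                 # snapshot tag, e.g. 1406104777

# --- create snapshot ---
# 1. Save the VM memory state to a file; this also stops the domain.
virsh save "$DOMAIN" "/var/lib/one/mem-$TS.save"
# 2. Freeze the current disk as the checkpoint, then start a new
#    qcow2 overlay on top of it at the original disk path.
mv "$DISK" "/var/lib/one/checkpoint-$TS.qcow2"
qemu-img create -f qcow2 -b "/var/lib/one/checkpoint-$TS.qcow2" "$DISK"
# 3. Resume the VM from the saved memory; new writes go only to the overlay.
virsh restore "/var/lib/one/mem-$TS.save"

# --- revert snapshot ---
# 1. Stop the running VM.
virsh destroy "$DOMAIN"
# 2. Discard the overlay and recreate it from the checkpoint.
qemu-img create -f qcow2 -b "/var/lib/one/checkpoint-$TS.qcow2" "$DISK"
# 3. Restore the memory state that matches the checkpoint.
virsh restore "/var/lib/one/mem-$TS.save"
```

Because the checkpoint image is never written to again after the overlay is created, its cost does not grow with the amount of data accumulated in the disk, which is the speedup claimed above.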

This is the comparison result:
-----------------------------------------------------------------------------------------------
[root@localhost xiaohaitao]# time virsh snapshot-create-as 19
Domain snapshot 1406104777 created

real 0m2.461s
user 0m0.003s
sys 0m0.006s
[root@localhost xiaohaitao]# virsh snapshot-list 19
Name Creation Time State
------------------------------------------------------------
1406104777 2014-07-23 16:39:37 +0800 running

[root@localhost xiaohaitao]#
[root@localhost xiaohaitao]# qemu-img info wxp.qcow2
image: wxp.qcow2
file format: qcow2
virtual size: 195G (209715200000 bytes)
disk size: 364M
cluster_size: 65536
backing file: windows-xp.qcow2
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1406104777 299M 2014-07-23 16:39:37 00:15:45.157

--------------------------------------------------------------------------------------------------
[root@localhost xiaohaitao]# time virsh snapshot-create-as 19
Domain snapshot 1406105485 created

real 3m11.671s
user 0m0.009s
sys 0m0.005s
[root@localhost xiaohaitao]# virsh snapshot-list 19
Name Creation Time State
------------------------------------------------------------
1406104777 2014-07-23 16:39:37 +0800 running
1406105485 2014-07-23 16:51:25 +0800 running

[root@localhost xiaohaitao]# qemu-img info wxp.qcow2
image: wxp.qcow2
file format: qcow2
virtual size: 195G (209715200000 bytes)
disk size: 4.9G
cluster_size: 65536
backing file: windows-xp.qcow2
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1406104777 299M 2014-07-23 16:39:37 00:15:45.157
2 1406105485 1.0G 2014-07-23 16:51:25 00:27:30.687
[root@localhost xiaohaitao]#

History

#1 Updated by EOLE Team over 6 years ago

+1

We use snapshots heavily for our development and tests.

I merged information from 'qemu-img info' and the VM logs to see how long it took to make snapshots in a typical use case:

ID TAG VM SIZE START DATE END DATE VM CLOCK
1 1417426933 346M 2014-12-01 10:42:13 2014-12-01 10:42:19 00:17:34.404
2 1417427384 934M 2014-12-01 10:49:44 2014-12-01 10:52:34 00:22:45.022
3 1417429038 2.9G 2014-12-01 11:17:18 2014-12-01 11:18:12 00:47:28.889
4 1417429343 2.9G 2014-12-01 11:22:23 2014-12-01 11:50:12 00:51:40.247
5 1417431686 2.9G 2014-12-01 12:01:26 2014-12-01 12:28:00 01:02:54.208

30 minutes to take a snapshot is too much.

Any improvement in this area is more than welcome.

This setup uses a non-shared system datastore (a local 10k SATA VelociRaptor disk).

Regards.

#2 Updated by Ruben S. Montero over 6 years ago

  • Tracker changed from Request to Feature
  • Category set to Drivers - VM
  • Status changed from Pending to New
  • Priority changed from High to Normal
  • Target version set to Release 4.12

Moving this to 4.12.

#3 Updated by Carlos Martín over 6 years ago

Hi,

海涛 肖 wrote:

Use 'virsh save' and 'qemu-img' to replace 'virsh snapshot-create-as'. To create a snapshot, 'virsh save' writes the VM memory to a file, then 'qemu-img' creates a new overlay file backed by the current disk image (the current file becomes the checkpoint), and finally 'virsh restore' resumes the VM from the overlay and the memory file.
Use 'virsh destroy', 'qemu-img' and 'virsh restore' to replace 'virsh snapshot-revert'. To revert a snapshot, 'virsh destroy' the VM, recreate the overlay from the checkpoint with 'qemu-img', then 'virsh restore' the VM from the memory file and the new overlay.

Is this a tested procedure? If the memory and disk snapshots are not synchronized (since you propose two separate operations), is the consistency of the whole VM guaranteed?

Regards

#4 Updated by Ruben S. Montero over 6 years ago

  • Tracker changed from Feature to Request
  • Target version deleted (Release 4.12)
