Feature #2921

(Per VM) DISK IO information in Sunstone

Added by Stefan Kooman about 7 years ago. Updated over 4 years ago.

Status: Closed
Start date: 05/16/2014
Priority: High
Due date:
Assignee: -
% Done: 0%
Category: Core & System
Target version: Release 5.4
Resolution: fixed
Pull request:

Description

Just like "network", "CPU" and "MEMORY", DISK IO information (READ/WRITE IOPS/THROUGHPUT) would be useful for quick inspection of (busy) VMs. Per-disk IO graphs would give the best insight into which part of the VM is doing most of the IO. An item on the "Dashboard" showing the amount of DISK IO would also be nice.


Related issues

Related to Backlog #2626: Monitor datastores IO Pending 01/12/2014
Related to Feature #3718: Get disk space usage (real and virtual) Closed 03/24/2015

History

#1 Updated by Ruben S. Montero about 7 years ago

  • Tracker changed from Feature to Backlog
  • Category set to Drivers - Monitor
  • Priority changed from None to High

OK, moving it to the backlog with High priority. We need to set up VM probes for all the hypervisors to get the DISK I/O info.

#2 Updated by Ruben S. Montero almost 7 years ago

#3 Updated by Ruben S. Montero over 6 years ago

  • Target version set to Release 4.14

#4 Updated by Ruben S. Montero over 6 years ago

  • Tracker changed from Backlog to Feature

#5 Updated by Ruben S. Montero over 6 years ago

  • Status changed from Pending to New

#6 Updated by Javi Fontan over 6 years ago

  • Related to Feature #3718: Get disk space usage (real and virtual) added

#7 Updated by Ruben S. Montero about 6 years ago

  • Tracker changed from Feature to Backlog
  • Status changed from New to Pending
  • Target version changed from Release 4.14 to Release 5.0

#8 Updated by Stefan Kooman over 5 years ago

During the OpenNebula conference, several presentations noted the need for DISK (IO) statistics because of performance issues in their environments (user VMs hammering the storage layer). I have spoken with at least 4 different OpenNebula users who made custom tools / scripts to collect this data from libvirt ... there seems to be a clear demand for this feature :-).
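For reference, the kind of per-VM, per-disk statistics these custom scripts pull from libvirt can be sketched as follows. This is a hypothetical parser for `virsh domblkstat` output (the counter names `rd_req`, `rd_bytes`, `wr_req`, `wr_bytes` are the ones libvirt prints), not part of any OpenNebula probe:

```python
def parse_domblkstat(output):
    """Parse `virsh domblkstat <domain> <device>` output into a dict.

    Each line looks like "vda rd_req 10": device, counter name, value.
    """
    stats = {}
    for line in output.strip().splitlines():
        parts = line.split()
        if len(parts) == 3:
            _device, name, value = parts
            stats[name] = int(value)
    return stats


def io_rates(prev, curr, interval):
    """Derive IOPS and throughput (bytes/s) from two cumulative samples
    taken `interval` seconds apart."""
    return {
        "read_iops": (curr["rd_req"] - prev["rd_req"]) / interval,
        "write_iops": (curr["wr_req"] - prev["wr_req"]) / interval,
        "read_bps": (curr["rd_bytes"] - prev["rd_bytes"]) / interval,
        "write_bps": (curr["wr_bytes"] - prev["wr_bytes"]) / interval,
    }
```

Sampling twice, 10 seconds apart, and feeding both samples to `io_rates` gives the READ/WRITE IOPS and throughput figures requested in the description.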

#9 Updated by Ruben S. Montero over 5 years ago

  • Tracker changed from Backlog to Feature
  • Status changed from Pending to New

#10 Updated by Stefan Kooman over 5 years ago

Seems to be related to #2768 (Ruben S. Montero: "This may require, update accounting schema as well as monitoring probes") ... if this information is to be used in accounting / showback.

#11 Updated by Ruben S. Montero about 5 years ago

  • Tracker changed from Feature to Backlog

#12 Updated by Ruben S. Montero almost 5 years ago

  • Target version changed from Release 5.0 to Release 5.4

#13 Updated by John Noss almost 5 years ago

+1, this feature would be great to have in OpenNebula. We are currently using https://github.com/fasrc/nebula-ceph-diamond-collector for monitoring VM disks on Ceph using RBD performance counters.

#14 Updated by Ruben S. Montero almost 5 years ago

Cool, we'll take a look at it. We need to generalize the framework to gather input from other storage sources (LVM, FS...), but this is something we can use to extend the VM monitoring.

#15 Updated by Tino Vázquez over 4 years ago

  • Category changed from Drivers - Monitor to vCenter

#16 Updated by Miguel Ángel Álvarez Cabrerizo over 4 years ago

Added information for the vCenter driver. The following metrics are pulled from vCenter's PerfManager using real-time data. The vCenter statistics level must be set to 2 for the 5-minute interval (Settings -> Statistics) in order to get disk IO info:

- virtualDisk.read to get diskrdbytes. vCenter provides an average in kilobytes/s, so the data retrieved is an approximation.
- virtualDisk.write to get diskwrbytes. vCenter provides an average in kilobytes/s, so the data retrieved is an approximation.
- virtualDisk.numberReadAveraged to get diskrdiops
- virtualDisk.numberWriteAveraged to get diskwriops
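Since vCenter reports virtualDisk.read/write as averages in KB/s per sampling interval, turning them into the cumulative byte counters that the diskrdbytes/diskwrbytes attributes represent requires an approximation. A minimal sketch, assuming the 20-second real-time sampling interval that vCenter uses by default (the helper name is hypothetical, not part of the driver):

```python
REALTIME_INTERVAL_S = 20  # assumed vCenter real-time sampling interval


def kbps_avg_to_bytes(kbps_samples, interval=REALTIME_INTERVAL_S):
    """Approximate total bytes transferred from a series of KB/s
    averages, each covering one sampling interval of `interval` seconds.

    This mirrors why the retrieved data is only an approximation: we
    integrate interval averages, not exact per-operation counts.
    """
    return sum(kbps * 1024 * interval for kbps in kbps_samples)
```

For example, three consecutive samples averaging 100 KB/s over 20 s each approximate 100 * 1024 * 20 * 3 = 6,144,000 bytes transferred in that minute.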

#17 Updated by Ruben S. Montero over 4 years ago

  • Tracker changed from Backlog to Feature

#18 Updated by Ruben S. Montero over 4 years ago

  • Category changed from vCenter to Core & System
  • Status changed from New to Closed
  • Resolution set to fixed
