Feature #3848
Virtual networks should have an associated networking driver.
| Status: | Closed | Start date: | 06/23/2015 |
| --- | --- | --- | --- |
| Priority: | High | Due date: | |
| Assignee: | - | % Done: | 80% |
| Category: | Core & System | | |
| Target version: | Release 5.0 | | |
| Resolution: | fixed | Pull request: | |
Description
In current releases, the networking driver is set at the scope of the host, which prevents virtual machines from having different isolation methods (802.1Q, VXLAN, ...) for their attached NICs.
Please find below some suggestions to address this feature.
Core
-----
- The VN_MAD attribute should be removed from the host and moved directly to the virtual network.
- The core should determine the networking drivers used by the NICs in the VM template and invoke each of them when performing actions.
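As a rough illustration of the second point, the core (or a driver-side helper) could group a VM's NICs by their per-vnet VN_MAD value and then invoke each networking driver once per group. This is only a sketch; the XML layout mirrors the VM template, but the function and constant names are hypothetical:

```ruby
require 'rexml/document'

# Hypothetical VM template: three NICs spread over two networking drivers.
VM_XML = <<~XML
  <VM>
    <TEMPLATE>
      <NIC><NIC_ID>0</NIC_ID><VN_MAD>802.1Q</VN_MAD></NIC>
      <NIC><NIC_ID>1</NIC_ID><VN_MAD>vxlan</VN_MAD></NIC>
      <NIC><NIC_ID>2</NIC_ID><VN_MAD>802.1Q</VN_MAD></NIC>
    </TEMPLATE>
  </VM>
XML

# Group NIC ids by the VN_MAD attribute of each NIC.
def nics_by_driver(xml)
  doc = REXML::Document.new(xml)
  groups = Hash.new { |h, k| h[k] = [] }
  doc.elements.each('VM/TEMPLATE/NIC') do |nic|
    groups[nic.elements['VN_MAD'].text] << nic.elements['NIC_ID'].text.to_i
  end
  groups
end

# Each driver's pre/post/clean action would then be called once per
# group (e.g. over SSH on the remote host), instead of once per host.
nics_by_driver(VM_XML).each do |driver, nic_ids|
  puts "#{driver}: NICs #{nic_ids.join(',')}"
end
```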
Sunstone
-----------
- The networking driver attribute should be removed in Sunstone when adding a new Host.
- A new networking driver attribute should be added in Sunstone when creating a new Vnet.
Drivers - Network
--------------------
- Networking drivers should filter NICs of the template against the new networking driver attribute.
- The security group driver should be able to apply rules only on some filtered NICs, because it may or may not be used by the network driver (the ovswitch driver, for example, does not use it).
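A minimal sketch of that filtering idea, using an XPath predicate over the VM template to select only the NICs whose driver relies on the security group mechanism. The driver list and function name are illustrative assumptions, not the actual driver code:

```ruby
require 'rexml/document'

# Assumed (hypothetical) set of drivers that delegate filtering to the
# security group driver; ovswitch implements its own and is excluded.
SG_DRIVERS = %w[802.1Q vxlan fw].freeze

VM_XML = <<~XML
  <VM><TEMPLATE>
    <NIC><NIC_ID>0</NIC_ID><VN_MAD>ovswitch</VN_MAD></NIC>
    <NIC><NIC_ID>1</NIC_ID><VN_MAD>802.1Q</VN_MAD></NIC>
  </TEMPLATE></VM>
XML

# Return the NIC ids that security group rules should be applied to,
# selected with an XPath filter built from the driver list.
def sg_nics(xml)
  doc = REXML::Document.new(xml)
  predicate = SG_DRIVERS.map { |d| "VN_MAD='#{d}'" }.join(' or ')
  REXML::XPath.match(doc, "VM/TEMPLATE/NIC[#{predicate}]")
              .map { |nic| nic.elements['NIC_ID'].text.to_i }
end
```

Passing such a filter into the security group driver's constructor is the kind of `xpath_filter` parameter the PoC comment below describes.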
Associated revisions
Feature #3848: Virtual networks should have an associated networking driver.
Feature #3848: Virtual networks should have an associated networking driver.
(cherry picked from commit 7e90463693ef6639bd4f15f5b6f5079664f6a1e2)
This cherry still needs to merge files from original contribution by goberle <goberle@unistra.fr>:
- src/sunstone/public/app/tabs/vnets-tab/form-panels/wizard.hbs
feature #3848: Changes scope of some methods and constants. Use oned
escape_xml functions. Update fw driver to use VN_MAD at NIC level.
feature #3848: Fix render problems and CLI options for VN_MAD
Feature #3848: Fix vnet update when VN_MAD is defined
Feature #3848: Small change for medium size windows
Feature #3848: Remove VLAN=YES from sunstone
feature #3848: Remove VN_MAD from Host in oned. Remove VMWare driver in
core
feature #3848: Update VMM driver to get VN_MAD from NIC attributes
feature #3848: Remove VN_MAD from cli & sunstone
feature #3848: Fix host creation form
feature #3848: VN_MAD is set for VirtualNetworks in Sunstone. Removed
host vnet option
feature #3848: Update Host.allocate in OCA JAVA
feature #3848: Fix OCA Java
Feature #3848: Fix vnm_mad name to dummy instead of default
Feature #3848: Formatting
History
#1 Updated by Guillaume Oberlé about 6 years ago
Hi ! :)
We have a small working POC here : https://github.com/unistra/one/tree/feature-3848.
We added a mandatory VN_MAD attribute when creating a new virtual network and also updated Sunstone accordingly.
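For illustration, a virtual network template carrying the new mandatory attribute might look like this (all values here are hypothetical, in standard OpenNebula template syntax):

```
NAME   = "private"
VN_MAD = "802.1Q"
PHYDEV = "eth0"
AR     = [ TYPE = "IP4", IP = "192.168.0.1", SIZE = "128" ]
```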
Currently, the pre, post and clean actions are re-dispatched by a remote dispatch driver, but we are looking to move this part directly into the core. Basically, the core will trigger the pre, post and clean actions of every concerned driver directly via SSH, without the need of a dispatcher on the remote side.
We also updated the XPATH_FILTER of each network driver and added a new xpath_filter parameter to the constructor of the security group driver.
Any suggestion or review will be appreciated :)
#2 Updated by Ruben S. Montero about 6 years ago
- Tracker changed from Feature to Backlog
- Target version set to Release 5.0
#3 Updated by Stefan Kooman over 5 years ago
I was about to create a new ticket to make the current networking drivers smarter, i.e. to allow a host to specify which networking driver should be used based on the type of virtual network, when I came across this ticket. Our use case: be able to utilize VXLAN support in OpenNebula for existing deployments (hypervisors) while maintaining openvswitch as the primary virtual switch on Linux-based deployments (as VXLAN support for openvswitch in OpenNebula is not (yet) there). We made a PoC combining both the openvswitch and vxlan drivers in OpenNebula:
Two types of hypervisors added to OpenNebula:
- kvm with vxlan drivers
- kvm with openvswitch drivers
Two (physical) hypervisors in total. They are actually the same hypervisors, but managed separately by OpenNebula.
We created a "vxlan" network on the "vxlan" hypervisors and deployed two VMs with a vxlan vnet. The "physical" interface used to transport the vxlan traffic is actually an openvswitch "internal" VLAN. After fixing an issue with our multicast setup (routed IPv4 multicast versus the L2-connected multicast that OpenNebula assumes), issue #4043, it just worked(tm).
To allow for even more flexible OpenNebula deployments, the network-specific hypervisor driver should be decoupled (and moved to the vnet).
#4 Updated by Ruben S. Montero over 5 years ago
Totally, this is initially planned for 5.0. Now that 4.14 has been released, one of the first things I want to do is merge the patch from Guillaume.
#5 Updated by Ruben S. Montero over 5 years ago
- Tracker changed from Backlog to Feature
#6 Updated by Ruben S. Montero over 5 years ago
We are now behind schedule for release 5.0... moving this to the backlog.
#7 Updated by Ruben S. Montero over 5 years ago
- Tracker changed from Feature to Backlog
- Priority changed from Normal to High
- Target version deleted (Release 5.0)
#8 Updated by Guillaume Oberlé over 5 years ago
I updated the pull request; there are no more conflicts with the master branch.
#9 Updated by Anonymous over 5 years ago
So just to be clear - setting networking driver per Virtual Network instead of per Host is not going to be included in 5.0 release? How can I help to get it back to 5.0?
This feature would be very useful for me.
#10 Updated by Ruben S. Montero over 5 years ago
- Target version set to Release 5.0
Sorry I had a quick chat with Guillaume and decided to include it again for 5.0
THANKS!
#11 Updated by Anonymous over 5 years ago
That's great! Looking forward to seeing it in 5.0 :) Thank you.
#12 Updated by Ruben S. Montero about 5 years ago
- Tracker changed from Backlog to Feature
#13 Updated by Ruben S. Montero about 5 years ago
- Category changed from Core & System to Documentation
#14 Updated by Ruben S. Montero about 5 years ago
- Category changed from Documentation to Core & System
- % Done changed from 0 to 80
#15 Updated by EOLE Team about 5 years ago
I wonder if it's related to this feature, but testing ONE 5.0 β I have:
VN_MAD openvswitch
which results in FAILURE:
Wed May 18 16:36:32 2016 [Z0][VMM][I]: Command execution fail: /var/tmp/one/vnm/openvswitch/pre PFZNPjxJRD4wPC9JRD48REVQTE9ZX0lELz48VEVNUExBVEU+PFNFQ1VSSVRZX0dST1VQX1JVTEU+PFBST1RPQ09MPjwhW0NEQVRBW0FMTF1dPjwvUFJPVE9DT0w+PFJVTEVfVFlQRT48IVtDREFUQVtPVVRCT1VORF1dPjwvUlVMRV9UWVBFPjxTRUNVUklUWV9HUk9VUF9JRD48IVtDREFUQVswXV0+PC9TRUNVUklUWV9HUk9VUF9JRD48U0VDVVJJVFlfR1JPVVBfTkFNRT48IVtDREFUQVtkZWZhdWx0XV0+PC9TRUNVUklUWV9HUk9VUF9OQU1FPjwvU0VDVVJJVFlfR1JPVVBfUlVMRT48L1RFTVBMQVRFPjxURU1QTEFURT48U0VDVVJJVFlfR1JPVVBfUlVMRT48UFJPVE9DT0w+PCFbQ0RBVEFbQUxMXV0+PC9QUk9UT0NPTD48UlVMRV9UWVBFPjwhW0NEQVRBW0lOQk9VTkRdXT48L1JVTEVfVFlQRT48U0VDVVJJVFlfR1JPVVBfSUQ+PCFbQ0RBVEFbMF1dPjwvU0VDVVJJVFlfR1JPVVBfSUQ+PFNFQ1VSSVRZX0dST1VQX05BTUU+PCFbQ0RBVEFbZGVmYXVsdF1dPjwvU0VDVVJJVFlfR1JPVVBfTkFNRT48L1NFQ1VSSVRZX0dST1VQX1JVTEU+PC9URU1QTEFURT48SElTVE9SWV9SRUNPUkRTPjxISVNUT1JZPjxIT1NUTkFNRT5pZ29yPC9IT1NUTkFNRT48L0hJU1RPUlk+PC9ISVNUT1JZX1JFQ09SRFM+PEhJU1RPUllfUkVDT1JEUz48SElTVE9SWT48Vk1fTUFEPjwhW0NEQVRBW2t2bV1dPjwvVk1fTUFEPjwvSElTVE9SWT48L0hJU1RPUllfUkVDT1JEUz48VEVNUExBVEU+PE5JQz48QVJfSUQ+PCFbQ0RBVEFbMF1dPjwvQVJfSUQ+PEJSSURHRT48IVtDREFUQVtyZWN0b3JhdF1dPjwvQlJJREdFPjxDTFVTVEVSX0lEPjwhW0NEQVRBWzBdXT48L0NMVVNURVJfSUQ+PE1BQz48IVtDREFUQVswMjowMjowMDowMDowMDowMV1dPjwvTUFDPjxORVRXT1JLPjwhW0NEQVRBW0VPTEVdXT48L05FVFdPUks+PE5FVFdPUktfSUQ+PCFbQ0RBVEFbMF1dPjwvTkVUV09SS19JRD48TklDX0lEPjwhW0NEQVRBWzBdXT48L05JQ19JRD48U0VDVVJJVFlfR1JPVVBTPjwhW0NEQVRBWzBdXT48L1NFQ1VSSVRZX0dST1VQUz48VEFSR0VUPjwhW0NEQVRBW29uZS0wLTBdXT48L1RBUkdFVD48VkxBTl9JRD48IVtDREFUQVs0XV0+PC9WTEFOX0lEPjxWTl9NQUQ+PCFbQ0RBVEFbb3BlbnZzd2l0Y2hdXT48L1ZOX01BRD48L05JQz48L1RFTVBMQVRFPjwvVk0+
Wed May 18 16:36:32 2016 [Z0][VMM][I]: bash: line 2: /var/tmp/one/vnm/openvswitch/pre: No such file or directory
Wed May 18 16:36:32 2016 [Z0][VMM][I]: ExitCode: 127
Wed May 18 16:36:32 2016 [Z0][VMM][I]: Failed to execute network driver operation: pre.
Wed May 18 16:36:32 2016 [Z0][VMM][E]: Error deploying virtual machine: openvswitch: -
Wed May 18 16:36:32 2016 [Z0][VM][I]: New LCM state is BOOT_FAILURE
Regards.
#16 Updated by Ruben S. Montero about 5 years ago
- Status changed from Pending to Closed
- Resolution set to fixed