Virtual networks should have an associated networking driver.
Category: Core & System
Target version: Release 5.0
In current releases, the networking driver is set at the scope of the host, which prevents the virtual machines from having different isolation methods (802.1Q, VXLAN, ...) for their attached NICs.
Please find below some suggestions to address this feature.
- The VN_MAD attribute should be removed from the host and moved directly to the virtual network.
- The core should filter the networking drivers used by the NICs in the VM template and invoke each of them when performing network actions.
- The networking driver attribute should be removed in Sunstone when adding a new Host.
- A new networking driver attribute should be added in Sunstone when creating a new Vnet.
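As a sketch of what this could look like from the user side, a virtual network template would carry the isolation method itself via the VN_MAD attribute proposed above (the network name, PHYDEV and VLAN_ID values below are purely illustrative):

```
NAME    = "private-net"
VN_MAD  = "802.1Q"    # isolation method now set per-vnet, not per-host
PHYDEV  = "eth0"
VLAN_ID = 50
```

Another vnet on the same host could then use VN_MAD = "vxlan" or VN_MAD = "ovswitch" without any host-level change.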
Drivers - Network
- Networking drivers should filter NICs of the template against the new networking driver attribute.
- The security group driver should be able to apply rules only to a filtered subset of NICs, because it may or may not be used by the network driver (the ovswitch driver, for example, does not use it).
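The per-driver NIC filtering described above can be sketched with an XPath predicate over the VM template. The XML layout and function below are a simplified stand-in of ours, not the actual driver code; the driver names are examples:

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for a VM template whose NICs use different drivers.
VM_XML = """
<VM>
  <TEMPLATE>
    <NIC><NIC_ID>0</NIC_ID><VN_MAD>802.1Q</VN_MAD></NIC>
    <NIC><NIC_ID>1</NIC_ID><VN_MAD>vxlan</VN_MAD></NIC>
    <NIC><NIC_ID>2</NIC_ID><VN_MAD>802.1Q</VN_MAD></NIC>
  </TEMPLATE>
</VM>
"""

def nics_for_driver(vm_xml, vn_mad):
    """Return the NIC_IDs whose VN_MAD matches the given driver name."""
    root = ET.fromstring(vm_xml)
    # ElementTree supports the [child='text'] XPath predicate directly.
    return [nic.findtext("NIC_ID")
            for nic in root.findall(f".//NIC[VN_MAD='{vn_mad}']")]

print(nics_for_driver(VM_XML, "802.1Q"))  # ['0', '2']
print(nics_for_driver(VM_XML, "vxlan"))   # ['1']
```

Each networking driver would run the equivalent of such a filter so it only touches its own NICs, and the security group driver could reuse the same predicate when it is active.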
Feature #3848: Virtual networks should have an associated networking driver.
(cherry picked from commit 7e90463693ef6639bd4f15f5b6f5079664f6a1e2)
This cherry-pick still needs to merge files from the original contribution by goberle <email@example.com>:
feature #3848: Changes scope of some methods and constants. Use oned
escape_xml functions. Update fw driver to use VN_MAD at NIC level.
feature #3848: VN_MAD is set for VirtualNetworks in Sunstone. Removed
host vnet option
#1 Updated by Guillaume Oberlé over 4 years ago
Hi ! :)
We have a small working POC here : https://github.com/unistra/one/tree/feature-3848.
We added a mandatory VN_MAD attribute when creating a new virtual network and also updated Sunstone accordingly.
Currently, the pre, post and clean actions are re-dispatched by a remote dispatch driver, but we are looking to move this part directly into the core. Basically, the core will trigger the pre, post and clean actions of each driver concerned directly via SSH, without the need of a dispatcher on the remote side.
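A minimal sketch of that dispatch idea (the names and structure here are ours, not the actual oned code): the core collects the distinct VN_MAD values of the VM's NICs and calls each driver's action exactly once, passing it only the NICs it owns:

```python
# Hypothetical sketch of per-driver action dispatch; not the real oned code.

def dispatch(action, nics, drivers):
    """Call `action` once per networking driver, with only the NICs it owns.

    nics:    list of dicts with a 'vn_mad' key (simplified NIC records)
    drivers: maps a VN_MAD name to an object exposing pre/post/clean hooks
    """
    calls = []
    # Group NICs by their per-vnet driver instead of one host-wide driver.
    for vn_mad in sorted({nic["vn_mad"] for nic in nics}):
        owned = [n for n in nics if n["vn_mad"] == vn_mad]
        calls.append(getattr(drivers[vn_mad], action)(owned))
    return calls

class FakeDriver:
    """Stand-in for a networking driver; records what it was asked to do."""
    def __init__(self, name):
        self.name = name
    def pre(self, nics):
        # In oned this would execute the driver's `pre` script over SSH.
        return (self.name, "pre", len(nics))

drivers = {"802.1Q": FakeDriver("802.1Q"), "vxlan": FakeDriver("vxlan")}
nics = [{"vn_mad": "802.1Q"}, {"vn_mad": "vxlan"}, {"vn_mad": "802.1Q"}]
print(dispatch("pre", nics, drivers))
# [('802.1Q', 'pre', 2), ('vxlan', 'pre', 1)]
```

Moving this loop into the core removes the remote dispatcher: each host only needs the driver scripts themselves.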
We also updated the XPATH_FILTER of each network driver and added a new xpath_filter parameter to the constructor of the security group driver.
Any suggestion or review will be appreciated :)
#3 Updated by Stefan Kooman about 4 years ago
I was about to create a new ticket to make the current networking drivers more "smart", i.e. to allow the networking driver to be chosen based on the type of virtual network rather than the host, when I came across this ticket. Our use case: being able to use the VXLAN support in OpenNebula on existing deployments (hypervisors) while keeping Open vSwitch as the primary virtual switch on Linux-based deployments (since VXLAN support for Open vSwitch in OpenNebula is not there yet). We made a PoC combining both the openvswitch and vxlan drivers in OpenNebula:
Two types of hypervisor were added to OpenNebula:
- kvm with the vxlan drivers
- kvm with the openvswitch drivers
Two (physical) hypervisors in total. They are actually the same hypervisors, but managed separately by OpenNebula.
We created a "vxlan" network on the "vxlan" hypervisors and deployed two VMs with a vxlan vnet. The "physical" interface used to transport the VXLAN traffic is actually an Open vSwitch "internal" VLAN. After fixing an issue with our multicast setup (routed IPv4 multicast versus the L2-connected multicast that OpenNebula assumes), issue #4043, it just worked(tm).
To allow for even more flexible OpenNebula deployments, the network-specific hypervisor driver should be de-coupled from the host (and moved to the vnet).
#15 Updated by EOLE Team over 3 years ago
I wonder if it's related to this feature, but testing ONE 5.0 β I have:
Which results in