Feature #1818

Support for IP reservations

Added by Ruben S. Montero over 8 years ago. Updated almost 7 years ago.

Status: Closed
Start date: 03/19/2013
Priority: Normal
Due date:
Assignee: Carlos Martín
% Done: 0%
Category: Core & System
Target version: Release 4.8
Resolution: duplicate
Pull request:

Description

See this thread for more details

http://lists.opennebula.org/pipermail/users-opennebula.org/2013-February/022174.html

This issue will also address the need to reserve more than one IP per NIC. In this way a server can add several addresses to the same NIC without needing to add multiple NICs on the same network.

opennebula-lease-reservation-v1.patch (23.1 KB) Simon Boulet, 05/10/2013 02:34 AM

opennebula-lease-reservation-v1-fix-initialization.patch (895 Bytes) Simon Boulet, 05/10/2013 03:50 PM

opennebula-lease-reservation-v1-fix-lease-counter.patch (968 Bytes) Simon Boulet, 05/10/2013 06:34 PM


Related issues

Related to Feature #1773: Add network groups Closed 02/20/2013
Related to Feature #2858: Extend Network Model Closed 04/29/2014

History

#1 Updated by Ruben S. Montero about 8 years ago

  • Category set to Core & System

#2 Updated by Ruben S. Montero about 8 years ago

  • Priority changed from Normal to High

#3 Updated by Simon Boulet about 8 years ago

This patch adds support for reserving leases for a specific UID / GID.

It can be used to reserve IP addresses for a user, and allows an address to be moved between VMs (similar to an Amazon Elastic IP address).

It implements a new one.vn.reserve API call that takes the network ID and lease definition (like the other one.vn.* calls) and adds two new parameters: the user UID and GID for the reservation. The feature is similar to the current lease-on-hold feature, but allows a specific user to use the lease even while it is held (and also returns the lease to held/reserved when the VM is destroyed or the lease is detached from the VM). A sketch of such a call appears after the list below.

- Currently supports FIXED networks only (I suppose RANGED network support could easily be added).
- Requires MANAGE NET permission (like all other one.vn.* calls).
- It allows setting reservations for both used and unused leases. A used lease will become reserved/held when the VM is destroyed or the lease is detached.
- The UID and GID for which the lease is to be reserved are not validated. One could reserve a lease for a nonexistent user (or a user that doesn't have access to the NET), ultimately making the lease unusable until it is released back into the pool of available leases.
- In Sunstone, an unused reserved lease will appear as held. When a reserved lease becomes used, it appears as used by the given VM, and returns to held when the VM is destroyed or the lease is detached.
- The one.vn.release call (the same as for held leases) is used to unreserve a reserved lease.
- The implementation doesn't require any changes to the DB: it inserts new UID and GID attributes into the leases that are reserved. It is also confirmed to be onedb fsck safe.
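
For illustration, a minimal sketch (Python) of what invoking the proposed call could look like over OpenNebula's XML-RPC interface. The method name one.vn.reserve comes from the patch; the exact parameter order (session, network ID, lease template, UID, GID) is an assumption based on the description above:

    import xmlrpc.client

    # Endpoint and credentials are placeholders.
    server = xmlrpc.client.ServerProxy("http://localhost:2633/RPC2")
    session = "oneadmin:opennebula"

    # Reserve lease 10.0.0.5 in network 3 for UID 42 / GID 100.
    # Parameter order is assumed from the patch description, not verified.
    rc = server.one.vn.reserve(session, 3, 'LEASES = [ IP = "10.0.0.5" ]', 42, 100)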

Further development thoughts:

I noticed some Leases functions accept an IP address as a parameter, and others Leases vectors. Perhaps we should standardize the specialized Leases methods to always take a Lease reference as a parameter (instead of an IP). This would ensure the Leases vector is traversed only once to find the Lease with the given IP. Currently there are some double lookups, notably each time a held Lease is freed/released (Leases::free_leases()). Also, perhaps we could drop support for multiple LEASES attributes (which justify the need for the vector) altogether, since that's not supported at the moment, I'm not sure it's really necessary, and it adds a bit of complexity.
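
The double-lookup point, sketched in Python for brevity (illustrative only; the real code is the C++ Leases class): looking up a lease by IP and then freeing it by IP traverses the vector twice, while passing the found lease around does it once:

    # Illustrative sketch, not the actual C++ Leases implementation.
    leases = [{"ip": "10.0.0.5", "used": True}]

    def find_by_ip(ip):
        # One traversal to locate the lease.
        return next((l for l in leases if l["ip"] == ip), None)

    def free_lease(lease):
        # Operates on the lease reference directly: no second lookup.
        lease["used"] = False

    lease = find_by_ip("10.0.0.5")
    if lease:
        free_lease(lease)  # a free-by-IP variant would search the vector again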

Also, I had to add a parameter to the various set() and nic_attribute() functions to carry the uid of the user when creating a new VM. I couldn't load the VM from the VMPool because, when the NIC gets attached, the VM doesn't exist in the pool yet (the NICs are attached during the VM pool insertion process). It would be great to be able to retrieve the Request attributes (perhaps on a per-thread basis) without having to pass the req around as a parameter.

I also noticed that in some situations the PoolSQL lock is held for long periods of time, notably in PoolSQL.cc:126 where the ALLOCATE hook is executed. It would be great to change the lock to a read/write lock, where the write lock would be acquired for short periods of time when adding to or modifying the pool, and the read lock would be used otherwise.
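
A minimal sketch of the hook side of this (Python for brevity; not the actual C++ PoolSQL code): hold the lock only for the pool mutation and run hooks afterwards, so a slow hook cannot stall the whole pool:

    import threading

    pool_lock = threading.Lock()
    pool = {}

    def allocate(obj, allocate_hooks):
        # Acquire the lock only for the actual insertion...
        with pool_lock:
            pool[obj["oid"]] = obj
        # ...and execute potentially slow ALLOCATE hooks outside the
        # critical section, so a hook that sleeps cannot hang the pool.
        for hook in allocate_hooks:
            hook(obj)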

Finally, I'd be curious to know how many of our users use the RANGED lease feature, and perhaps see whether RANGED leases couldn't be reimplemented on top of FIXED leases instead of being a completely separate implementation.

Generally, I have found that a large part of the network management code would benefit from a thorough review and cleanup, and has room for optimization.

As usual, comments are more than welcome :)

Simon

#4 Updated by Simon Boulet about 8 years ago

Fix lease not being properly initialized when first created / added to the network.

Applies on top of v1 above.

#5 Updated by Simon Boulet about 8 years ago

One more fix to properly increment/decrement network used lease counter.

#6 Updated by Ruben S. Montero about 8 years ago

Hi Simon,

Thanks for the patches, I'll take a look at them ASAP (probably next week). I am interested in some of your comments about improving the network stack, so let me summarize them so I can file issues for them:

  1. Better signatures for class methods. I totally agree with this one; for example, with IPv6 one should be able to request a MAC address, and right now that is not possible because of this, so passing a Lease to these functions is probably a good idea. We have #1958 open, and I think this can be developed in the context of that issue.
  2. Locks in PoolSQL. For me this is a bug: if we add an ALLOCATE hook like (sleep 100), you'll hang the whole system. I think we can safely move the do_hooks out of the critical section. I'd say the general idea with the pool lock is to be quick; have you noticed any other situation where the pool would be locked for a long time?

These two are clear for me and I will file a couple of separate issues. Then we have:

  1. Accessing the request: I am not sure how to make this happen without a deep change to most of the functions to pass down the value. Any suggestions here?
  2. RANGED as FIXED: this is something we've discussed in the past and decided not to go that way because of the storage requirements (e.g. for someone defining a class B network...). I have several reports of people using RANGED...

Finally, I'd really like to hear any suggestions to improve and optimize the network module: where do you feel we could do it better...

Thanks again for your feedback and contributions! :)

#7 Updated by Ruben S. Montero almost 8 years ago

  • Target version changed from Release 4.2 to Release 4.4

#8 Updated by Carlos Martín almost 8 years ago

  • Assignee set to Carlos Martín

#9 Updated by Simon Boulet almost 8 years ago

Ruben S. Montero wrote:

  1. Accessing the request: I am not sure how to make this happen without a deep change to most of the functions to pass down the value. Any suggestions here?

We could look at having a per-thread structure containing thread-specific information, e.g.: http://www.boost.org/doc/libs/1_34_1/doc/html/boost/thread_specific_ptr.html

Then have a static method for retrieving / handling the per-thread information (retrieving the thread's Request object, etc.).

That's similar to what log4cpp uses for its thread NDC context. On a side note, we could use log4cpp NDC when creating new threads to tag the user ID or the VM ID that the thread is servicing (when syslog logging is used).
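
As an illustration of the pattern (Python; the C++ equivalent would use boost::thread_specific_ptr as linked above), with all names hypothetical:

    import threading

    # One slot per thread for the Request currently being serviced.
    _context = threading.local()

    def set_current_request(req):
        # Called once when an RPC worker thread picks up a request.
        _context.request = req

    def current_request():
        # Static-style accessor: any code on this thread can retrieve
        # the Request without it being passed down as a parameter.
        return getattr(_context, "request", None)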

Simon

#10 Updated by Ruben S. Montero almost 8 years ago

Hi

After an almost-complete implementation of this feature, it seems that the overall design does not play well with it (i.e. chmod/ACL/... access to reserved leases should rely on the current auth mechanism, and the same goes for ownership).

We've come up with an intermediate solution. Access to the network is controlled with the current mechanisms. A lease can be USED (by a VM), FREE (can be automatically assigned to VMs), HOLD (the lease is excluded from the IP range and cannot be assigned) or RESERVED (the lease will not be automatically assigned but can be manually requested).
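
A sketch of the proposed semantics (Python, names hypothetical): only FREE leases are handed out automatically, and RESERVED ones only on an explicit request:

    from enum import Enum

    class LeaseState(Enum):
        USED = 1      # currently assigned to a VM
        FREE = 2      # may be assigned to VMs automatically
        HOLD = 3      # excluded from the IP range, never assigned
        RESERVED = 4  # skipped by automatic assignment

    def assignable(state, explicitly_requested):
        if state is LeaseState.FREE:
            return True
        if state is LeaseState.RESERVED:
            return explicitly_requested  # must be asked for by address
        return False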

Comments?

#11 Updated by Simon Boulet almost 8 years ago

Initially the idea was to have IP addresses reserved for use by a given user. Much like EC2 Elastic IPs, the user can reserve IPs and assign them to his VMs at will.

From the quota point of view, I would see the IP address being accounted as used by the user when it's reserved for their exclusive use, independently of whether it's actually in use (assigned to a VM) or not.

In terms of ownership of the VNET and the right to use the lease, I saw the reserved lease as a second level of permission. You still need USE access to the VNET, but you are allowed to instantiate (and attach new NICs) with leases that are in the RESERVED state only if the lease is reserved for your UID or GID.

#12 Updated by Carlos Martín almost 8 years ago

Hi Simon,

That scenario can still be implemented with Ruben's proposal. Users will be able to assign RESERVED IPs at will.

If you need to separate which IPs can be used by each user, this can be done by creating a different VNet for each one. E.g. Fixed VNet 0 [...1 - ...10] for user A, VNet 1 [...11 - ...15] for user B, etc.

Our main concern is the authorization mechanism. We don't feel comfortable having a specific authorization method for leases that doesn't support ACL, chmod, etc.

If we have to add owner/group to each lease, we should do a major redesign and maybe make them first-class citizens (PoolElement). We could also consider merging somehow the ranged and fixed networks, as you said. But this would be a big development effort and we don't consider it a high priority right now.

Best regards

#13 Updated by Simon Boulet almost 8 years ago

Carlos Martín wrote:

If you need to separate which IPs can be used by each user, this can be done by creating a different VNet for each one. E.g. Fixed VNet 0 [...1 - ...10] for user A, VNet 1 [...11 - ...15] for user B, etc.

The problem with that is that I like to assign an entire subnet to OpenNebula, and then let OpenNebula manage the subnet. With the scenario of one VNet per user, I'd have to manage the available IP addresses externally from OpenNebula, or have some dummy VNet that contains all my subnets: when a user requests an elastic IP, mark the IP as held in the dummy VNet and add the IP to the user's private VNet; when the user releases an elastic IP, remove the IP from his private VNet and release the IP in the dummy VNet.
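
A rough sketch of that workaround (Python; the helper names are purely hypothetical stand-ins for the corresponding hold/release and lease add/remove operations, here just tracking state):

    # Hypothetical helpers, not actual OpenNebula API calls.
    def hold_lease(vnet, ip):
        vnet["held"].add(ip)

    def release_lease(vnet, ip):
        vnet["held"].discard(ip)

    def add_lease(vnet, ip):
        vnet["leases"].add(ip)

    def remove_lease(vnet, ip):
        vnet["leases"].discard(ip)

    def request_elastic_ip(dummy_vnet, user_vnet, ip):
        hold_lease(dummy_vnet, ip)  # take the IP out of the shared pool
        add_lease(user_vnet, ip)    # hand it to the user's private VNet

    def release_elastic_ip(dummy_vnet, user_vnet, ip):
        remove_lease(user_vnet, ip)    # drop it from the private VNet
        release_lease(dummy_vnet, ip)  # return it to the shared pool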

Our main concern is the authorization mechanism. We don't feel comfortable having a specific authorization method for leases that doesn't support ACL, chmod, etc.

If we have to add owner/group to each lease, we should do a major redesign and maybe make them first-class citizens (PoolElement). We could also consider merging somehow the ranged and fixed networks, as you said. But this would be a big development effort and we don't consider it a high priority right now.

Yes, I've thought of this too: make IP addresses PoolElements, and also merge the ranged and fixed networks. I'm interested and can help. I've dug quite a bit into the OpenNebula network code and feel quite comfortable doing this in my spare time. Looking forward to discussing this with you at the conference.

Simon

#14 Updated by Ruben S. Montero over 7 years ago

  • Target version changed from Release 4.4 to Release 4.6

#15 Updated by Ruben S. Montero over 7 years ago

  • Priority changed from High to Normal

#16 Updated by Ruben S. Montero over 7 years ago

  • Description updated (diff)

#17 Updated by Jaime Melis over 7 years ago

  • Target version changed from Release 4.6 to Release 4.8

#18 Updated by Tino Vázquez over 7 years ago

Implementation Proposal
---------------------

Allow a LEASE to belong both to a VM and to a sub-VNET (hierarchical model). The sub-VNET would inherit all variables in the parent VNET and would only be a container for IP addresses. This would allow users with the proper ACLs, or administrators, to use the VNET for VM IPs, or to create a sub-VNET container with one or more IP addresses from the parent VNET.

It'd be nice to be able to disable "parent" VNET creation while allowing users to create "child" VNETs from an existing "parent" VNET. This seems to work with the existing ACL methods, as creating a "child" VNET could fall under the "USE" ACL for a VNET.
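
A sketch of the hierarchical idea (Python, purely illustrative): a child VNET only carries addresses, and every other variable resolves through the parent:

    class VNet:
        def __init__(self, template, addresses, parent=None):
            self.template = template    # e.g. BRIDGE, VLAN_ID, DNS, ...
            self.addresses = addresses  # IPs carved out for this VNET
            self.parent = parent

        def attribute(self, name):
            # Child VNETs are only IP containers: anything not set
            # locally is inherited from the parent VNET.
            if name in self.template:
                return self.template[name]
            return self.parent.attribute(name) if self.parent else None

    parent = VNet({"BRIDGE": "br0", "VLAN_ID": "50"}, {"10.0.0.1", "10.0.0.2"})
    child = VNet({}, {"10.0.0.2"}, parent=parent)
    assert child.attribute("BRIDGE") == "br0"  # inherited from the parent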

#19 Updated by Ruben S. Montero about 7 years ago

#20 Updated by Ruben S. Montero almost 7 years ago

  • Target version changed from Release 4.8 to Release 4.8 - Beta 1

#21 Updated by Carlos Martín almost 7 years ago

  • Status changed from New to Closed
  • Resolution set to duplicate

Already implemented in #2858 and merged to master; it still needs documentation.
The implementation uses the onevnet reserve call to create a new vnet, owned by the user making the call. The parent network's leases are marked as used by the reservation vnet.
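
For reference, a hedged sketch of what the merged call might look like over XML-RPC (Python; the exact signature and template attributes belong to #2858 and should be checked against its documentation):

    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("http://localhost:2633/RPC2")
    session = "oneadmin:opennebula"  # placeholder credentials

    # Reserve 5 addresses from network 0 into a new vnet named
    # "my_reservation"; the attribute names are assumed, not verified.
    rc = server.one.vn.reserve(session, 0, 'SIZE = 5\nNAME = "my_reservation"')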

#22 Updated by Javi Fontan almost 7 years ago

  • Target version changed from Release 4.8 - Beta 1 to Release 4.8
