|
|
|
|
|
Lease selection is a complex process in which the allocation engine favors reserved addresses and address hints. If neither a reserved address nor an address provided as a hint can be offered, the server must find a free lease within the address pools belonging to the given subnet, or to multiple subnets if the subnets belong to a shared network. In a new installation, all addresses in a new pool are initially free. The server iteratively picks addresses from the pool and offers them to the clients. This is the least complicated way to select free leases, and it is also the fastest when pool utilization is relatively low.
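
The iterative strategy can be sketched in a few lines. This is a hypothetical illustration, not Kea's actual code: the pool is modeled as an inclusive range of IPv4 addresses encoded as 32-bit integers, and `IterativePool`/`findFree` are made-up names.

```c++
#include <cstdint>
#include <set>

// Hypothetical model of iterative candidate selection over a pool
// expressed as an inclusive range of IPv4 addresses (32-bit integers).
struct IterativePool {
    uint32_t first_;
    uint32_t last_;
    uint32_t next_;  // next candidate to hand out

    IterativePool(uint32_t first, uint32_t last)
        : first_(first), last_(last), next_(first) {}

    // Return the next candidate, wrapping around at the end of the pool.
    uint32_t pickCandidate() {
        uint32_t candidate = next_;
        next_ = (next_ == last_) ? first_ : next_ + 1;
        return candidate;
    }
};

// The allocation engine checks each candidate against existing leases;
// with a nearly exhausted pool this loop approaches the pool size.
uint32_t findFree(IterativePool& pool, const std::set<uint32_t>& leased) {
    const uint32_t start = pool.pickCandidate();
    uint32_t candidate = start;
    do {
        if (leased.count(candidate) == 0) {
            return candidate;
        }
        candidate = pool.pickCandidate();
    } while (candidate != start);
    return 0;  // sentinel in this sketch: pool exhausted
}
```

Note how `findFree` may visit every address in the pool before concluding that the pool is exhausted.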
|
|
|
|
|
|
We have received several requests from Kea users indicating that iterative lease assignment makes the server vulnerable to scanning attacks. These users requested that the server instead offer randomly selected addresses from the available pools, to avoid scanning. While this design mostly focuses on another issue with iterative allocation, degraded performance when working with nearly exhausted pools, it also creates a good opportunity to solve the problem of predictable address assignment.
|
|
|
|
|
|
Iterative allocation performance degrades significantly when the pools are nearly exhausted. The server walks over the entire pool and, for each candidate address, checks whether a lease already exists. In the extreme case, the server walks over the entire pool before finding out that there are no more addresses available, and then it tries another pool. This has been reported to be an issue for some installations having many relatively small pools with high utilization.
|
|
|
|
|
|
The solution described in this design aims to minimize the number of iterations needed to find a free lease within an address pool, or to learn that the pool is exhausted. Ideally, the number of iterations for a single pool should be equal to 1, even when the pool is nearly exhausted. The use of in-pool host reservations (and the conflict resolution scheme) may increase the number of iterations when conflicts are detected; however, iterative allocation is also prone to this issue.
|
|
|
|
|
|
## Configuration
|
|
|
|
|
|
### Configuration Levels
|
|
|
|
|
|
The new mechanism is mostly aimed at improving lease allocation performance for nearly exhausted pools. It may also improve performance for pools with high fragmentation, in which free leases are scattered. For large address pools, e.g. a /8, pool exhaustion is less probable. Populating and maintaining free lease queues has a certain computational cost upon the server's startup and reconfiguration, and increases memory usage. Therefore, it should be the administrator's decision for which pools the free lease queues are used and for which they aren't. The appropriate configuration knobs must be available at the global, shared network, subnet, and pool levels.
|
|
|
## High Level Concepts
|
|
|
|
|
|
### Shuffling Leases
|
|
|
The Kea allocation engine is a part of the `dhcpsrv` library, responsible for finding free leases and allocating them to the clients. It is also responsible for periodically reclaiming expired leases. An `allocator` is a replaceable part of the allocation engine providing a mechanism to find the next candidate lease. As of today, Kea has only one allocator, the iterative allocator, which is always used.
|
|
|
|
|
|
The free lease queues (FLQ) designed herein will, besides fast lease lookup, also provide a new capability to shuffle offered leases. It has been pointed out by several Kea users and acknowledged by Kea team members that offering consecutive IP addresses (iterative allocation) makes it trivial to guess the next address to be offered/allocated, and makes a network prone to scanning attacks. The introduction of FLQ is a good opportunity to address this issue. Leases in the queues can be shuffled and offered in no particular order. Shuffling leases requires maintaining a (shuffling) state and introduces additional complexity. Hence, this design assumes that it should be possible to enable/disable shuffling from the global level down to the pool level.
|
|
|
We have to implement new allocators to facilitate the new allocation strategies. The iterative allocator works reliably in many DHCP deployments and must remain the default option. The new allocators address specific issues at the cost of more complexity, additional computation time during the server startup, and increased memory consumption for maintaining the allocation state information. Thus, we have to allow Kea configurations in which the new allocators are used only for the subnets where the iterative allocation limitations are an issue.
|
|
|
|
|
|
### Configuration Knobs
|
|
|
The allocator's API has been designed to operate at the subnet level. It takes the subnet as an argument for allocation, picks a pool that meets the client classification criteria, and offers a lease from this pool. Instead of using a global allocator, it is possible to use a different allocator instance for each subnet. Some modifications to the allocator's API are required: the allocator will have to return its configuration in JSON format. Such a change requires some updates to the pool and subnet structures, too. Currently, the pool and subnet classes hold the allocation state dedicated to the iterative allocator. We need to replace it with a pointer to a state class. Each allocator may come with its own derivation of the state class, which holds all the information required by the particular allocator to efficiently select the next available address. For example, in the case of the random allocation strategy, the state class may contain the permutation object, returning a unique random address while taking into account the addresses already returned for the pool. Pools and subnets should have respective allocation states.
|
|
|
|
|
|
FLQ and shuffling are complementary and should be configured separately. This design proposes the introduction of two boolean parameters, `free-lease-queue` and `free-lease-shuffle`, to enable (if `true`) or disable (if `false`) the use of these features.
|
|
|
Generating an initial allocation state may take significant time for large subnets. We should avoid unnecessarily regenerating the state during the server's reconfiguration: the states should outlive the configuration switch in the configuration manager when no changes were applied to the given subnet or pool. This will require smarter logic in the configuration parsers and the manager.
|
|
|
|
|
|
Having these changes in place, we can proceed with the implementation of the new allocators.

The following configuration:
|
|
|
|
|
|
```json
|
|
|
{
|
|
|
"Dhcp4": {
|
|
|
"free-lease-queue": false,
|
|
|
"free-lease-shuffle": false
|
|
|
}
|
|
|
}
|
|
|
```
|
|
|
|
|
|
is the default configuration, which globally disables the use of FLQ and lease shuffling, effectively causing the server to use the iterative allocation strategy as in Kea 1.9.0 and earlier.
|
|
|
|
|
|
Both parameters can be specified at the global, shared network, subnet, and pool levels, and are subject to the regular inheritance rules.
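
For illustration, assuming the pool-level knobs use the same names as the global ones (the exact pool-level syntax below is a sketch, not confirmed by this design), a single pool could opt out of globally enabled free lease queues:

```json
{
    "Dhcp4": {
        "free-lease-queue": true,
        "free-lease-shuffle": true,
        "subnet4": [
            {
                "subnet": "192.0.2.0/24",
                "pools": [
                    {
                        "pool": "192.0.2.10 - 192.0.2.100",
                        "free-lease-queue": false
                    }
                ]
            }
        ]
    }
}
```

Here the pool inherits `free-lease-shuffle` from the global level but overrides `free-lease-queue` for itself.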
|
|
|
|
|
|
|
|
|
Using the iterative allocator does not require any additional configuration. The allocator can be selected at the global, shared network, and subnet levels, as shown in the following configuration snippet:
|
|
|
|
|
|
```json
|
|
|
{
|
|
|
"Dhcp4": {
|
|
|
"free-lease-queue": true,
|
|
|
"free-lease-shuffle": true,
|
|
|
"allocator": "random",
|
|
|
"shared-networks": [
|
|
|
{
|
|
|
"name": "foo",
|
|
|
"allocator": "iterative",
|
|
|
"subnet4": [...]
|
|
|
}
|
|
|
],
|
|
|
"subnet4": [
|
|
|
{
|
|
|
"subnet": "10.0.0.0/8",
|
|
|
"free-lease-queue": false
},
{
"subnet": "192.0.2.0/24",
"allocator": "free-lease-queue"
}
|
|
|
]
|
|
|
}
|
|
|
}
|
|
|
```
|
|
|
|
|
|
In this example, the subnet-specific setting of `free-lease-queue` overrides the default setting, effectively disabling the use of free lease queues in the 10.0.0.0/8 subnet. All subnets for which neither of these parameters is overridden will use the global settings.
|
|
|
|
|
|
The following table summarizes the pros and cons of the new settings. The administrator must consider pool sizes, pool utilization, available memory, machine performance, and possibly other deployment-specific factors to select appropriate settings at different configuration levels.
|
|
|
|
|
|
| free-lease-queue | free-lease-shuffle | Pros | Cons |
| ---------------- | ------------------ | ---- | ---- |
| false | false | Fast startup, low memory consumption. | Network scans, slow free lease lookup. |
| true | false | Fast free lease lookup, no memory used for shuffling. | Slower startup, network scans. |
| false | true | Network scanning harder, no overhead for maintaining the queues. | Slow free lease lookup. |
| true | true | Network scanning harder, fast free lease lookup. | Slower startup, the largest memory consumption. |
|
|
|
|
|
|
## Code Changes
|
|
|
|
|
|
### New Allocator
|
|
|
Should any allocator-specific settings be added, they will be specified at the same level as the allocator type, to avoid creating deeper nesting levels, which are problematic for the Config Backend.
|
|
|
|
|
|
The existing iterative allocator satisfies the first of the cases listed above, i.e. both `free-lease-queue` and `free-lease-shuffle` set to `false` at all levels.
|
|
|
### Random Allocator
|
|
|
|
|
|
In order to facilitate the other cases (lease queues and/or shuffling), a new allocator must be developed. Currently, only one allocator instance can be used during the server's run time. This implies that if any of the new configuration settings is set to `true` at any level, the new allocator must be used instead of the default iterative allocator. It also means that the new allocator must support the iterative allocation strategy, because this strategy may still be used for selected pools.
|
|
|
One of the problems to solve in this design is random lease selection. The other problem, optimizing free lease searches, is much more complex, and there is no reason why it couldn't also involve the assignment of randomly selected leases. In other words, random lease assignment can exist independently or together with the more complex optimizations. Therefore, it makes sense to start with the simpler case: an allocator that performs pure random lease selection. Similarly to the iterative allocator, it wouldn't maintain any state of lease assignment (lease existence in the database); the allocation engine would still be fully responsible for checking that the selected address does not collide with an existing lease.
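
Stateless random selection can be sketched as follows. This is an illustrative snippet, not the proposed implementation; `RandomPicker` is a hypothetical name, and the pool is again modeled as an inclusive range of 32-bit integers.

```c++
#include <cstdint>
#include <random>

// Hypothetical sketch of stateless random candidate selection: each
// call draws a uniformly distributed address from the pool range with
// no memory of previous picks. Collision checks against existing
// leases remain the allocation engine's job, as with the iterative
// allocator.
class RandomPicker {
public:
    RandomPicker(uint32_t first, uint32_t last)
        : dist_(first, last), gen_(std::random_device{}()) {}

    // Returns a uniformly random address from [first, last].
    uint32_t pickCandidate() {
        return dist_(gen_);
    }

private:
    std::uniform_int_distribution<uint32_t> dist_;
    std::mt19937 gen_;
};
```

Because the picker is stateless with respect to previous picks, repeated candidates become likely as the pool fills up, which is why the permutation-based approach described later is still needed for exhausted pools.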
|
|
|
|
|
|
This raises the question of whether we should remove the iterative allocator altogether, given that the new allocator will cover its functionality. Personally, as the author of this design, I feel that it shouldn't be removed initially, until we make sure that the new allocator performs as well as the iterative allocator in terms of performance and functionality. The new allocator is currently considered experimental, and therefore there must be a way to fall back to the "traditional", well-tested iterative allocation strategy.
|
|
|
The most important issue to remember is that the implemented randomization algorithm must select all leases from the pool with equal probability, not favor any leases, and not omit any leases.
|
|
|
|
|
|
Another interesting question is whether we should be able to configure the server to use different allocators for different pools, subnets, etc. This possibility must be further explored. At first glance, however, it looks like it may bring too much complexity to the allocation engine state for the respective pools.
|
|
|
A random arrangement of the leases in a pool in which none of the leases occurs twice is called a permutation. If the server can maintain a permutation of the addresses in each pool, it can offer these addresses in the order in which they are held in the permutation. This requires maintaining the permutation, and thus additional computation time and memory usage. On the other hand, the server does not need to compute the whole permutation upon startup; it can compute the next lease to offer lazily, upon each new lease allocation attempt.
|
|
|
|
|
|
I propose that we use the iterative allocator when `free-lease-queue` and `free-lease-shuffle` are set to `false` at all configuration levels, and use the new allocator when any of these values is set to `true` at any configuration level.
|
|
|
The [Fisher-Yates Shuffle](https://en.wikipedia.org/wiki/Fisher–Yates_shuffle) is a simple and useful technique for creating permutations. We will implement lease shuffling based on this algorithm.
|
|
|
|
|
|
The proposed name for this new allocator is `FLQAllocator`.
|
|
|
|
|
|
### Shuffling (Permutations)
|
|
|
|
|
|
The server is configured to offer addresses from contiguous address ranges (pools). When the allocation engine follows the iterative strategy, it finds the next address in the pool by incrementing a pool-specific counter by one until it reaches the end of the pool. Then, it starts from the beginning of the address range until it reaches the address from which it began. This way, the allocation engine ensures that it walks over all addresses in the address pool.
|
|
|
|
|
|
When the addresses are to be offered in a random order, it is significantly harder to ensure that the allocation engine walks over all addresses in the pool before proceeding to the next pool or starts from the beginning of that pool.
|
|
|
|
|
|
A random arrangement of all elements (IP addresses in this case) in a given range is called a "permutation". The [Fisher-Yates Shuffle](https://en.wikipedia.org/wiki/Fisher–Yates_shuffle) is a well-known algorithm for generating permutations. Using this algorithm, we're going to create a new class, `IPRangePermutation`, which will shuffle the addresses in a given address range and return them in a random order.
|
|
|
|
|
|
The new `IPRangePermutation` class will expose the following interface:
|
|
|
|
|
|
```c++
|
|
|
IPRangePermutation(const AddressRange& range);
|
|
|
IPRangePermutation(const PrefixRange& range);
|
|
|
bool exhausted() const;
|
|
|
asiolink::IOAddress next(bool& done);
|
|
|
void reset();
|
|
|
```
|
|
|
The first two are the constructors creating a new permutation for an address pool or a delegated prefix pool, respectively. The `exhausted()` function checks if we have reached the end of the permutation (the end of the pool), i.e., we have already stored all addresses from the pool in the FLQ or, if `free-lease-queue` is `false`, we have offered all addresses from the pool to the clients, and we should move back to the pool's beginning (reset the permutation). The `next` method returns the next (random) address from the permutation; it never returns the same address twice, as it keeps an internal state remembering which addresses have already been returned. The `reset` function clears the permutation state and allows offering the same pool addresses again, in some new order.
|
|
|
|
|
|
We should consider having a setting for this class that causes it to always return the default (identity) permutation, i.e. addresses in ascending order. This would facilitate the case of iterative allocation with the `ShuffledPreAllocator`, and would simplify the logic of the allocator, which would use the `IPRangePermutation` regardless of whether the server is configured to shuffle the leases or not.
|
|
|
The permutations will be maintained in the pools' allocation states. The random allocator will call their `next` functions to get the next available addresses for the respective pools, until the permutations run out of addresses. The allocator will randomly pick a pool from those whose permutations still contain unoffered addresses. When all permutations are exhausted, the allocator will reset them and start giving out the same addresses again.
|
|
|
|
|
|
### Free Lease Queue
|
|
|
|
|
|
|
|
|
The FLQ is a structure holding information about available leases in various IP address/prefix ranges. This structure has the following properties:
|
|
|
- holds the information about free leases for all configured pools for which `free-lease-queue` is `true`,
|