Kea issues: https://gitlab.isc.org/isc-projects/kea/-/issues (updated 2023-07-13)

https://gitlab.isc.org/isc-projects/kea/-/issues/237
ISC DHCP per class lease limit (Francis Dupont, updated 2023-07-13)

Quote from ISC DHCP `dhcpd.conf.5`:
>>>
PER-CLASS LIMITS ON DYNAMIC ADDRESS ALLOCATION
You may specify a limit to the number of clients in a class that can be
assigned leases. The effect of this will be to make it difficult for a
new client in a class to get an address. Once a class with such a
limit has reached its limit, the only way a new client in that class
can get a lease is for an existing client to relinquish its lease,
either by letting it expire, or by sending a DHCPRELEASE packet.
Classes with lease limits are specified as follows:
class "limited-1" {
lease limit 4;
}
This will produce a class in which a maximum of four members may hold a
lease at one time.
>>>
Often associated with cloned classes. Requested by a customer but a priori not easy to implement.
Note that in Kea lease assignment is done before calling setReservedClientClasses.
Support tickets: [support#18293](https://support.isc.org/Ticket/Display.html?id=18293), [support#17523](https://support.isc.org/Ticket/Display.html?id=17523), [support#19968](https://support.isc.org/Ticket/Display.html?id=19968)
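For comparison with the dhcpd.conf syntax above, here is a rough sketch of how an equivalent limit might be attached to a Kea client class. The `limits` user-context layout and the membership test are illustrative assumptions, not confirmed Kea syntax:
```
{
    "Dhcp4": {
        "client-classes": [
            {
                // Analogue of ISC DHCP's class "limited-1".
                "name": "limited-1",
                // Illustrative membership test.
                "test": "option[60].text == 'limited'",
                "user-context": {
                    // Assumed key: at most 4 clients of this class may hold a lease at one time.
                    "limits": { "address-limit": 4 }
                }
            }
        ]
    }
}
```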
Being implemented in Kea.

Milestone: ISC DHCP Migration

https://gitlab.isc.org/isc-projects/kea/-/issues/321
Make it possible to start the server without all the configured interfaces ready (GH#91) (Vicky Risk <vicky@isc.org>, updated 2024-02-08)

<Originally reported on Github as issue #91 by karaluh on July 6, 2018>
Dibbler has an "inactive-mode" option, which works like this, according to the official docs:
"During normal startup, client tries to bind all interfaces defined in a configuration file. If such attempt fails, client reports an error and gives up. Usually that is best action. However, in some cases it is possible that interface is not ready yet, e.g. WLAN interface did not complete association. Dibbler attempt to detect link-local addresses, bind any sockets or initiate any kind of communication will fail. To work around this disadvantage, a new mode has been introduced in the 0.6.0RC4 version. It is possible to modify client behavior, so it will accept downed and not running interfaces. To do so, inactive-mode keyword must be added to client.conf file. In this mode, client will accept inactive interfaces, will add them to inactive list and will periodically monitor its state. When the interface finally goes on-line, client will try to configure it."
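For comparison, a minimal sketch of what an opt-in equivalent could look like in Kea's `interfaces-config`; the socket-retry parameter names below are assumptions used only to illustrate the idea:
```
{
    "Dhcp6": {
        "interfaces-config": {
            "interfaces": [ "eth0", "ppp0" ],
            // Assumed knob: do not abort startup when some sockets cannot be opened.
            "service-sockets-require-all": false,
            // Assumed knobs: retry the missing interfaces up to 10 times,
            // waiting 5000 ms between attempts.
            "service-sockets-retry-wait-time": 5000,
            "service-sockets-max-retries": 10
        }
    }
}
```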
My use case is PD over PPP interfaces which come and go as they please during the server process lifetime and mostly aren't there when the server starts.

Milestone: backlog

https://gitlab.isc.org/isc-projects/kea/-/issues/878
performance: implement backend statistics (Tomek Mrugalski, updated 2023-07-31)

We want to be able to measure the following:
* looking for reservations took X us,
* looking for leases took Y us.
* Z queries per packet were conducted.
* W total queries performed by backend, average response time was A.
* possibly stats by query type (getLease4byHWAddr, getLease4ByAddr, etc.)
* possibly query by SQL type (A number of SELECTs, B number of INSERTs, C number of DELETEs)
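To make the list concrete, here is a sketch of how such counters might surface through the existing statistics channel; the statistic names below are invented for illustration and nothing like them exists today:
```
{
    "command": "statistic-get-all",
    "arguments": {
        "lease-backend.queries": [ [ 123456, "2019-07-30 10:04:28.386733" ] ],
        "lease-backend.avg-query-time-usec": [ [ 180, "2019-07-30 10:04:28.386733" ] ],
        "lease-backend.getLease4ByHWAddr.queries": [ [ 23456, "2019-07-30 10:04:28.386733" ] ],
        "host-backend.queries": [ [ 23456, "2019-07-30 10:04:28.386733" ] ],
        "host-backend.avg-query-time-usec": [ [ 150, "2019-07-30 10:04:28.386733" ] ]
    },
    "result": 0
}
```
Per-query-type and per-SQL-statement breakdowns would simply add more entries of the same shape; the per-packet query count probably fits better as a packet-level log entry than as a global statistic.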
This, on its own, wouldn't improve any performance, but it will be an essential tool for assessing other performance improvement proposals.

Milestone: next-stable-2.6

https://gitlab.isc.org/isc-projects/kea/-/issues/1125
rebuild statistic-get-all response (Wlodzimierz Wencel, updated 2021-10-19)

At this point this is what kea is sending back:
```
{
"command": "statistic-get-all",
"arguments": {
"declined-addresses": [ [ 0, "2019-07-30 10:04:28.386733" ] ],
"reclaimed-declined-addresses": [ [ 0, "2019-07-30 10:04:28.386735" ] ],
"reclaimed-leases": [ [ 0, "2019-07-30 10:04:28.386736" ] ],
"subnet[1].assigned-addresses": [ [ 0, "2019-07-30 10:04:28.386740" ] ],
"subnet[1].declined-addresses": [ [ 0, "2019-07-30 10:04:28.386743" ] ],
"subnet[1].reclaimed-declined-addresses": [ [ 0, "2019-07-30 10:04:28.386745" ] ],
"subnet[1].reclaimed-leases": [ [ 0, "2019-07-30 10:04:28.386747" ] ],
"subnet[1].total-addresses": [ [ 200, "2019-07-30 10:04:28.386719" ] ]
},
"result": 0
}
```
I want to focus on the part with subnets, which is really hard to parse. The biggest issue is that the subnet id is in the key name, not as a value. And this is completely flat.
I am proposing to return statistics like this:
```
{
"command": "statistic-get-all",
"arguments": {
"declined-addresses": [ [ 0, "2019-07-30 10:04:28.386733" ] ],
"reclaimed-declined-addresses": [ [ 0, "2019-07-30 10:04:28.386735" ] ],
"reclaimed-leases": [ [ 0, "2019-07-30 10:04:28.386736" ] ],
"subnets": { [
"subnet-id": 1,
"assigned-addresses": [ [ 0, "2019-07-30 10:04:28.386740" ] ],
"declined-addresses": [ [ 0, "2019-07-30 10:04:28.386743" ] ],
"reclaimed-declined-addresses": [ [ 0, "2019-07-30 10:04:28.386745" ] ],
"reclaimed-leases": [ [ 0, "2019-07-30 10:04:28.386747" ] ],
"total-addresses": [ [ 200, "2019-07-30 10:04:28.386719" ] ]
],
[
"subnet-id": 2,
"assigned-addresses": [ [ 0, "2019-07-30 10:04:28.386740" ] ],
"declined-addresses": [ [ 0, "2019-07-30 10:04:28.386743" ] ],
"reclaimed-declined-addresses": [ [ 0, "2019-07-30 10:04:28.386745" ] ],
"reclaimed-leases": [ [ 0, "2019-07-30 10:04:28.386747" ] ],
"total-addresses": [ [ 200, "2019-07-30 10:04:28.386719" ] ]
]
}
},
"result": 0
}
```
I came across this issue while recently doing performance testing of a setup with ~500 subnets.

Milestone: outstanding

https://gitlab.isc.org/isc-projects/kea/-/issues/1260
avoid more race conditions (Razvan Becheriu, updated 2021-10-20)

It seems that addLease, updateLease and deleteLease are called in several other places. We should lock the resource there as well:
```
Dhcpv4Srv::processRelease
Dhcpv4Srv::declineLease
Dhcpv6Srv::releaseIA_NA
Dhcpv6Srv::releaseIA_PD
Dhcpv6Srv::declineLease
Dhcpv6Srv::generateFqdn
LeaseCmdsImpl::lease6BulkApplyHandler - there is a leaseDelete which can cause other races.
LeaseCmdsImpl::lease4DelHandler - will cause race
LeaseCmdsImpl::lease6DelHandler - will cause race
AllocEngine::allocateReservedLeases6
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
AllocEngine::allocateGlobalReservedLeases6
from AllocEngine::allocateReservedLeases6
AllocEngine::removeNonmatchingReservedLeases6
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
AllocEngine::removeNonmatchingReservedNoHostLeases6
from AllocEngine::removeNonmatchingReservedLeases6
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
AllocEngine::removeNonreservedLeases6
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
AllocEngine::reuseExpiredLease
from AllocEngine::allocateUnreservedLeases6
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
AllocEngine::createLease6
from AllocEngine::allocateUnreservedLeases6
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
from AllocEngine::allocateReservedLeases6
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
from AllocEngine::allocateGlobalReservedLeases6
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
AllocEngine::extendLease6
from AllocEngine::renewLeases6
from Dhcpv6Srv::extendIA_NA
from Dhcpv6Srv::extendIA_PD
AllocEngine::updateLeaseData
from AllocEngine::allocateLeases6
from Dhcpv6Srv::assignIA_NA
from Dhcpv6Srv::assignIA_PD
AllocEngine::deleteExpiredReclaimedLeases6 - will cause race
AllocEngine::deleteExpiredReclaimedLeases4 - will cause race
AllocEngine::reclaimLeaseInDatabase
from AllocEngine::reclaimExpiredLease Lease4Ptr
from AllocEngine::reclaimExpiredLease Lease6Ptr
AllocEngine::reclaimExpiredLease Lease4Ptr
from AllocEngine::reclaimExpiredLeases4 - safe
from AllocEngine::renewLease4
from AllocEngine::reuseExpiredLease4
AllocEngine::reclaimExpiredLease Lease6Ptr
from AllocEngine::reuseExpiredLease
from AllocEngine::extendLease6
from AllocEngine::reclaimExpiredLeases6 - safe
AllocEngine::createLease4
from AllocEngine::allocateOrReuseLease4
from AllocEngine::discoverLease4
from AllocEngine::requestLease4
AllocEngine::requestLease4
from AllocEngine::allocateLease4
from Dhcpv4Srv::assignLease
AllocEngine::renewLease4
from AllocEngine::discoverLease4
from AllocEngine::allocateLease4
from Dhcpv4Srv::assignLease
from AllocEngine::requestLease4
from AllocEngine::allocateLease4
from Dhcpv4Srv::assignLease
AllocEngine::reuseExpiredLease4
from AllocEngine::allocateOrReuseLease4
from AllocEngine::discoverLease4
from AllocEngine::requestLease4
from AllocEngine::allocateUnreservedLease4
from AllocEngine::discoverLease4
from AllocEngine::requestLease4
```

Milestone: outstanding

https://gitlab.isc.org/isc-projects/kea/-/issues/1339
calling expired can cause races (Razvan Becheriu, updated 2023-07-31)

As @fdupont mentioned, calling expired() can cause races within the Kea code:
```
lease->expired() // false here
...
// some time passes
lease->expired() // true here
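// A possible mitigation (sketch only, not existing Kea code): evaluate the
// expiration state once, early in the code path, and reuse the cached value
// so the decision cannot flip between two checks:
//   const bool is_expired = lease->expired();
//   ... every later check in this path uses is_expired ...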
```

Milestone: next-stable-2.6
Assignee: Razvan Becheriu

https://gitlab.isc.org/isc-projects/kea/-/issues/1345
Ability to always-respond to all requests in HA active-active mode to support anycast DHCP (Ewald van Geffen, updated 2021-01-22)

My impression is that ISC KEA doesn't always respond to all requests. I think this is due to the 1/n split.
I run two KEA instances sharing a single BGP anycast /32 IP prefix. DHCP Requests get routed via a DHCP relay towards the closest ISC KEA instance according to BGP. Load balancing is externally handled. This means KEA should respond to all requests it receives and not impose any load-balancing logic.
I think this is where the magic happens [1].
From my understanding, `active_servers` needs to reflect the current server instance id (pri, sec).
[1] https://github.com/isc-projects/kea/blob/457111f9db051723ff9f8e7fb621872d0aa10363/src/hooks/dhcp/high_availability/query_filter.cc#L316

Milestone: outstanding

https://gitlab.isc.org/isc-projects/kea/-/issues/1463
Performance improvement: lookup leases by address first when address is available (Andrei Pavel <andrei@isc.org>, updated 2023-07-31)

Edit: TLDR: This issue is about looking up leases by address hint in renews and v6 requests. Currently, they are first searched by DUID/client ID. Because these are not primary keys, but indexes, this lookup is slower. Searching by address, which is a primary key, should be significantly faster. I imagine this as a few lines of code changed, although it affects the allocation engine. There are concerns that this might affect host reservations, RADIUS or other aspects. I expect that to come up in tests.
---
When designing database schemas, it is desirable to look at access patterns in order to define keys or indexes or maybe entire table definitions. When looking at the DHCP protocol, we can easily figure out that the key we most want to use in a lookup is the client's identifier, which is usually the DUID or client ID. This is not the case for current Kea, since the address is clearly chosen as the sole primary key across all backends for the lease databases. This is probably because it is chosen based on its unique properties. A v6 client can have multiple addresses per DUID, and maybe it's the same for v4 in some strange use cases. This could have been solved by storing addresses as a list in DUID/clientID-keyed tables at the cost of complexity. But that is not my proposal.
We can leverage the current table structure by looking up by address when possible. This is effectively the case when an address hint is provided. A well-known case when this happens is during the renew process, but I think I remember reading in an RFC that it can be provided in other DHCP messages as well. But a well-behaved client should (RFC SHOULD?) always send the address in their RENEW (I think it's still called REQUEST for DHCPv4?) and RENEWs are 99% of the DHCP messages sent in networks with high uptime. So it is an optimization for RENEW.
Concerns:
* Security? Honoring DHCP clients requesting an address directly might lead to address starvation?
  * No; after lookup by address finds nothing, the usual DISCOVER & SOLICIT messages go through with looking up by DUID/clientID, and then it will find the lease that belonged to the malicious client. Even if it did, add that to the list of security issues. Doesn't a client spoofing its DUID/clientID achieve the same?
* Are we sure we aren't providing the address to the wrong client?
  * Yes. We can check in-memory/in-server whether the DUIDs/clientIDs match. Even then, what harm can the right lease given to the wrong client do?
* Does this affect the ability to reserve addresses/prefixes, RADIUS, hooks or other use-cases?
  * To be investigated. I don't know, because firstly I don't understand why host reservations are looked up before leases. This optimization is less impactful if we still search by DUID/clientID in hosts first. I would move the address lookup in the lease database to the beginning of the packet processing. But then why not move the DUID/clientID lookup to the beginning as well? If the client has an active lease, doesn't it stop there?

Milestone: backlog

https://gitlab.isc.org/isc-projects/kea/-/issues/1623
Config Backend migration tool (Cathy Almond, updated 2024-03-21)

---
name: Control block/configuration migration tool
about: Similar to #1078, this is a request for tools to assist Kea users who decide to change their back-end provisioning - in this instance, their configuration. This should (ideally) cater for all scenarios, so not just changing which DB backend is being used, but also migration from not-CB to CB and vice versa
---
I don't think anyone else has opened this yet - but if we do tackle this, it would be worth bearing in mind that configuration/CB backend version control or logging might also turn out to be A Thing - so we need to consider that use case/evolution in the design too.
**Is your feature request related to a problem?**
See [Support ticket #17332](https://support.isc.org/Ticket/Display.html?id=17332) for details of the customer for whom this would be useful.
The likelihood is that they will wish to implement CB backend on the DB backend on which it's available now, but migrate to their preferred DB backend, once there is CB support there too (as well as complete coverage of all configuration options in the CB - at the moment there isn't).
**Update**: As discussed in [Porto](https://pad.isc.org/p/porto2022-stork-roadmap-and-backlog#L31), we'd like to make this more generic. While the ultimate solution will likely cover Stork overseeing the migration, some parts of the functionality should be implemented in Kea. In particular, the following was discussed:
- `config-get`/`config-set` like commands that would retrieve/set the full configuration from CB
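A sketch of what such a command might look like on the wire; the command name and argument layout are hypothetical and only meant to anchor the idea:
```
{
    "command": "remote-config-get",
    "service": [ "dhcp4" ],
    "arguments": {
        "remote": { "type": "mysql" },
        "server-tags": [ "all" ]
    }
}
```
A matching `remote-config-set` (equally hypothetical) would take a full `Dhcp4`/`Dhcp6` configuration object and store it in the CB in one operation, which is what a Stork-driven migration would build on.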
Once the above is implemented, it will open up plenty of opportunities for Stork. In particular, scenarios such as: "add subnets defined in a config file to CB, delete subnets from the config file, write the updated config file to disk". The same can be repeated for other configuration elements (shared networks, classes, HR, etc.).

Milestone: next-stable-3.0

https://gitlab.isc.org/isc-projects/kea/-/issues/1739
Implement FORCERENEW support (RFC3203) (Tomek Mrugalski, updated 2022-11-02)

This is roughly a v4 equivalent of the RECONFIGURE message in v6. This is not a popular feature (due to lack of support among clients), but it is being requested sporadically.
* [asking about forcerenew on kea-users](https://lists.isc.org/pipermail/kea-users/2020-October/002910.html)
* [another one from 2017](http://kea-users.7364.n8.nabble.com/Kea-users-FORCERENEW-feature-td435.html)

Milestone: backlog

https://gitlab.isc.org/isc-projects/kea/-/issues/1746
multiple contact points for MySQL and PostgreSQL (Andrei Pavel <andrei@isc.org>, updated 2023-04-06)

For the purpose of highly-available database connectivity, eliminating single points of failure in cluster nodes, and benefitting from Galera and Percona's active-active responsiveness, Kea could use the ability to specify multiple contact points in the same database access set.
Ideally, it would be less work if you could pass the responsibility of shuffling through the nodes onto the database library, like in Cassandra.
But if this is not an option, to avoid contention on selecting the connection to be used, a connection could be randomly chosen by each thread.
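A sketch of what this could look like, assuming a hypothetical extension of the existing `lease-database` syntax that accepts several hosts in one access set:
```
{
    "Dhcp4": {
        "lease-database": {
            "type": "mysql",
            "name": "kea",
            "user": "kea",
            "password": "secret",
            // Assumed syntax: several cluster nodes in one contact-point list;
            // each thread (or the client library) would pick one of them.
            "host": "node1.example.com,node2.example.com,node3.example.com"
        }
    }
}
```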
Design document: TODO

Milestone: backlog

https://gitlab.isc.org/isc-projects/kea/-/issues/1801
Durable DDNS update queue (Persistence Manager for D2) (Vicky Risk <vicky@isc.org>, updated 2023-06-08)

**Problem**
The DDNS update queue in the D2 process is not durable and queued requests may be lost if the process is stopped or crashes. The retry mechanism is non configurable, making two more attempts 100ms apart if the target DNS server cannot be reached.
**Desired Solution**
- [ ] A durable queue persists pending updates between restarts and crashes.
- [ ] A new configuration parameter is used to set the number of seconds that the D2 process waits for a response before retrying the DNS update.
- [ ] An additional configuration parameter is used to limit the number of times that D2 process tries to send the DNS update.
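A sketch of how these could appear in the D2 configuration; the parameter names and the queue-persistence entry are hypothetical:
```
{
    "DhcpDdns": {
        "ip-address": "127.0.0.1",
        "port": 53001,
        // Assumed knob: persist the NCR queue across restarts and crashes.
        "queue-persistence": { "type": "file", "path": "/var/lib/kea/d2-queue.json" },
        // Assumed knobs: wait 3 seconds for a DNS server response before
        // retrying, and give up after 5 attempts.
        "dns-update-retry-interval": 3,
        "dns-update-max-retries": 5
    }
}
```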
This may be covered in the design of a Persistence Manager in [this wiki document](https://gitlab.isc.org/isc-projects/kea/-/wikis/designs/ddns-design#addendum-persistencemgr-design-point8).

Milestone: backlog

https://gitlab.isc.org/isc-projects/kea/-/issues/2131
revisit and extend D2 update retry code (Francis Dupont, updated 2022-02-25)

The waiting delay between two attempts is not clear, and for GSS-TSIG the ability to set the number of retries has been requested.
This ticket should stay in the core code. Note the idea to save and restore the NCR queue is not considered here (it has its own ticket #1801).
Opening a design phase.

Milestone: outstanding
Assignee: Francis Dupont

https://gitlab.isc.org/isc-projects/kea/-/issues/2271
Extend the infinite lifetime feature to full DHCP (Francis Dupont, updated 2022-11-02)

Followup of #897, which uses infinite lifetime only for BOOTP.
Already discussed design points:
- skip Cassandra/CQL support
- add a new lease state to make static/sticky leases independent of the real lifetime
- by default static/sticky leases can't be released
- add new config flags to allow infinite lifetimes

Milestone: backlog

https://gitlab.isc.org/isc-projects/kea/-/issues/2499
don't extend dhcpdb_create scripts any more (Wlodzimierz Wencel, updated 2024-03-27)

We should stop maintaining two paths of database creation. It leads to mistakes, more work during releases, and additional jobs to check for differences. So rather than developing scripts like `dhcpdb_create.mysql` (`dhcpdb_create.pgsql`) and upgrade scripts (e.g. upgrade_009_to_010.sh.in) separately, we should develop just the upgrade scripts, which will be executed by the dhcpdb_create.sh script.
It's ugly to do it this late in the process, but it will make our life much easier in the future.
- [ ] as part of the refactor process, please make sure there's a VERY good reason why there's an .in version that needs to be expanded during configure.

Milestone: next-stable-3.0

https://gitlab.isc.org/isc-projects/kea/-/issues/2529
patching query options core hook (Francis Dupont, updated 2023-08-10)

The idea is to clone the flex option core hook into a similar hook patching the query instead of the response. It should be simpler (no client class) and will solve a lot of customer problems, including the RAI link selector one.
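Whatever name is picked, the configuration would presumably mirror the flex-option hook. A purely hypothetical sketch (library name, parameters and values invented for illustration):
```
{
    "hooks-libraries": [
        {
            "library": "/usr/local/lib/kea/hooks/libdhcp_flex_query.so",
            "parameters": {
                "options": [
                    {
                        // Add RAI (option 82) with a link-selection sub-option
                        // (code 5, value 10.0.0.1) to the incoming query when
                        // the relay did not provide one.
                        "code": 82,
                        "add": "0x05040a000001"
                    }
                ]
            }
        }
    ]
}
```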
The only not easy point (code and doc can be reused at a very high level) is to pick a name for it!

Milestone: next-stable-2.6

https://gitlab.isc.org/isc-projects/kea/-/issues/2708
HA pool rebalancing (Tomek Mrugalski, updated 2023-02-02)

This idea is not new. It was recently brought up by @cathya in Porto (see [notes](https://pad.isc.org/p/porto2022-kea-features-for-stork#L58)). The overall concept is to design and implement a mechanism similar to the one in ISC DHCP. When there are two servers in load-balancing, it is possible that one of them will run out of addresses while the other one still has many.
Couple random comments:
- The pool rebalancing would somehow make both partners negotiate the pools and rebalance them.
- Using a hysteresis approach with high/low thresholds would prevent the mechanism from going crazy when running out of addresses. We don't want it to go crazy when there are one or two addresses left (see the sketch after this list).
- The pool dynamism would add extra complexity as the modified pool range would need to be stored somewhere that would survive crashes/reboots etc.
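Purely to illustrate the hysteresis idea, here is a hypothetical set of knobs inside the HA hook configuration; none of these parameters exist today:
```
{
    "library": "/usr/local/lib/kea/hooks/libdhcp_ha.so",
    "parameters": {
        "high-availability": [ {
            "this-server-name": "server1",
            "mode": "load-balancing",
            // Peers and other mandatory HA settings omitted for brevity.
            // Hypothetical rebalancing knobs: start negotiating a transfer of
            // free addresses when this server's share drops below 10% of the
            // pool, and stop once it climbs back above 25%.
            "rebalancing": {
                "enabled": true,
                "low-water-percent": 10,
                "high-water-percent": 25
            }
        } ]
    }
}
```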
This requires a ~design. It's a complicated feature request with a high potential for endless tweaks, conflicting tuning requests etc.
We will do it one day, but this would require a lot of design, testing and tuning.

Milestone: outstanding

https://gitlab.isc.org/isc-projects/kea/-/issues/2714
RFE: HA plugin ability to detect partner inability to receive client requests and transition it to 'partner-down' (Kevin Fleming, updated 2023-07-31)

---
name: Feature request
about: HA plugin ability to detect partner inabilty to receive client requests and transition it to 'partner-down'
---
**Some initial questions**
- Are you sure your feature is not already implemented in the latest Kea version? Yes
- Are you sure what you would like to do is not possible using some other mechanisms? Yes
- Have you discussed your idea on kea-users or kea-dev mailing lists? Yes
**Is your feature request related to a problem? Please describe.**
(This issue was created as a result of an extensive thread on kea-users)
When the HA plugin is being used in either hot-standby or load-balancing mode, Kea peers are able to notice some forms of communications failures and force the other peers to the 'partner-down' state in order to provide service to clients supported by the other peer.
However, in a situation where client requests are not being delivered to a peer, but it is otherwise fully operational including the peer-to-peer communications link, clients supported by that peer will not be serviced, and the other peer(s) are unable to notice the issue and take action to correct it. This situation could arise when the Kea peers are using separate network links for client traffic and HA traffic, or when the Kea peers are receiving client traffic via a DHCP relay and the relay configuration is incorrect.
**Describe the solution you'd like**
One (or more) opt-in mechanisms that the Kea admin can choose to enhance the ability to detect peer failures to service clients, even when the peer's Kea daemon is otherwise fully operational.
**Describe alternatives you've considered**
Some discussions about external monitoring solutions have occurred, and that is certainly an option which some admins could choose.
**Funding its development**
Kea is run by ISC, which is a small non-profit organization without any government funding or any permanent sponsorship organizations. Are you able and willing to participate financially in the development costs? Yes
**Participating in development**
Are you willing to participate in the feature development? The ISC team always tries to make a feature as generic as possible, so it can be used in a wide variety of situations. That means the proposed solution may be a bit different than you initially thought. Are you willing to take part in the design discussions? Are you willing to test unreleased engineering code? Yes

Milestone: next-stable-2.6

https://gitlab.isc.org/isc-projects/kea/-/issues/2763
Lease start time of state (Francis Dupont, updated 2023-04-28)

Both v4 Bulk Leasequery (BLQ) and RADIUS accounting have a use for keeping the start time of state for assigned leases, i.e. the date of the transition to the default (0) state. This ticket proposes to keep it in the user context; the RADIUS table that keeps it can be reused for the design.

Milestone: next-stable-2.6

https://gitlab.isc.org/isc-projects/kea/-/issues/2802
Implement `bundle` command (Tomek Mrugalski, updated 2023-04-11)

The idea for this API call came from @marcin. Here's the discussion: https://pad.isc.org/p/stork-cb-migration#L64
The overall long term goal is to have a command that could include multiple other commands and run them one after another as one change-set. Couple scenarios where this might be useful:
- changing subnet and all host reservations in it
- deleting subnet and all leases associated
- reservation migration from config file to database
This mechanism, if implemented correctly, will be very powerful.
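A hypothetical shape for the command, only to anchor the discussion (the command name, `max-errors` and the nested command arguments are invented or elided):
```
{
    "command": "bundle",
    "service": [ "dhcp4" ],
    "arguments": {
        "max-errors": 0,
        "commands": [
            { "command": "remote-subnet4-set", "arguments": { "...": "..." } },
            { "command": "reservation-add", "arguments": { "...": "..." } },
            { "command": "reservation-del", "arguments": { "...": "..." } }
        ]
    }
}
```
In this sketch, `max-errors` set to 0 would mean "stop and roll back the DB changes at the first error", matching the configurable-rollback idea described in the next paragraph.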
One important feature of this new command is the ability to roll back changes if a configurable number of errors is reached. It's important to acknowledge that full rollback in the generic case is not possible, so this rollback should be limited to DB operations _for now_. We hope to expand the rollback in the future to maybe cover some other commands, but it will never be possible to roll back everything.
Since there are many ways this could be implemented, I think the first step would be to come up with a mini-design. A couple of paragraphs in the ticket or on the wiki should do the trick.

Milestone: outstanding