# ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues (feed updated 2024-03-27T12:51:29Z)

## [kea#3213] Feature request: statistics-get-all-global command to get all of the global stats without any of the subnet stats
2024-03-27 · Cathy Almond · https://gitlab.isc.org/isc-projects/kea/-/issues/3213

---
name: statistics-get-all-global command
about: `statistics-get-all-global` command (or similar) to get all of the global statistics without any of the subnet statistics
---
It would also be useful to have a "statistics-get-all-global" command to get all of the global stats but not all of the subnet (or pool, if those get added) stats. We have scenarios with many hundreds of subnets, and for those the "get-all" output can get unwieldy.
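To sketch the shape of the request: the command name below is the proposal itself and does not exist yet; the syntax follows the existing `statistic-get-all` command.

```
{
    "command": "statistics-get-all-global",
    "service": [ "dhcp4" ]
}
```

The response would then carry only global statistics such as `pkt4-received` or `declined-addresses`, omitting the per-subnet `subnet[id].*` entries that dominate `statistic-get-all` output when hundreds of subnets are configured.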
See [SF1429](https://isc.lightning.force.com/lightning/r/Case/5007V00002ZyA1sQAF/view)

Milestone: next-stable-3.0

## [kea#3206] subnet-get commands should fetch leases for selected subnets with pagination
2024-03-22 · Marcin Siodelski · https://gitlab.isc.org/isc-projects/kea/-/issues/3206

In HA, we use lease commands to synchronize the database. The lease commands fetch all leases with pagination. However, in the hub-and-spoke model it would be useful to fetch the leases only for selected subnets, because the relationships are partitioned by subnet. Today, all leases have to be fetched by each relationship, and those that do not belong to the relationship are discarded. This is inefficient. One thing to consider is that subnet identifiers are listed explicitly in the commands.

Milestone: next-stable-3.0

## [kea#3140] Feature request: Add Statistics Counters for dropped packets (for various different reasons)
2024-03-27 · Cathy Almond · https://gitlab.isc.org/isc-projects/kea/-/issues/3140

---
name: Feature request: Add Statistics Counters for dropped packets (for various different reasons)
about: Add different packet counters for dropped packets such as ones dropped due to HA ignoring them, or to Kea being disabled.
---
This is loosely related to issue #3125 for counting some dropped packets due to HA (#3125 is more about not logging them unnecessarily though, or being able to disable the per-event logging).
This is a request to add specific packet counters both for HA and for other dropped packets, such as ones dropped due to Kea being disabled.
The customer requesting this feature currently has an issue where they can't tell whether an operator has misconfigured their network such that Kea wasn't receiving any traffic, or whether Kea was in a disabled state due to HA synchronization.
Version 2.2
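As an illustration only, the breakdown could take the form of per-reason statistics alongside the existing `pkt4-receive-drop` counter; the per-reason names below are hypothetical:

```
{
    "pkt4-receive-drop": [ [ 42, "2024-03-27 12:00:00.000000" ] ],
    "pkt4-receive-drop-ha-ignored": [ [ 30, "2024-03-27 12:00:00.000000" ] ],
    "pkt4-receive-drop-service-disabled": [ [ 12, "2024-03-27 12:00:00.000000" ] ]
}
```

Each value follows Kea's usual statistic format of [value, timestamp] samples. With such a breakdown an operator could distinguish "no traffic arriving at all" from "traffic arriving but dropped because the service is disabled".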
See [SF1429](https://isc.lightning.force.com/lightning/r/Case/5007V00002ZyA1sQAF/view)

Milestone: next-stable-3.0

## [kea#3082] kea-ctrl-agent and dual stack listening
2024-03-21 · Darren Ankney · https://gitlab.isc.org/isc-projects/kea/-/issues/3082

The `kea-ctrl-agent` can presently only listen on one address, be that an IPv4 or IPv6 address. If you have a dual stack on the equipment where you want to listen, then you have to choose either the IPv4 or the IPv6 address to configure in `kea-ctrl-agent.json`.
I propose adding a new parameter, "http-hosts", to the kea-ctrl-agent configuration, as shown:
```
{
    "Control-agent": {
        "http-hosts": [
            "2001:db8::2",
            "10.1.2.2"
        ],
        "http-port": 8000,
        "control-sockets": {
            "dhcp4": {
                "socket-type": "unix",
                "socket-name": "/tmp/socket4"
            }
        }
    }
}
```
This would allow listening on multiple IP addresses, especially in a dual-stack environment. The new parameter would also preserve backward compatibility.
FYI: I did solve this problem by running two copies of the `kea-ctrl-agent`. However, I'm not convinced that is a good solution. Configurations and other details included below for illustration.
<details><summary>kea-dhcp4.json</summary>
```
{
    "Dhcp4": {
        "control-socket": {
            "socket-type": "unix",
            "socket-name": "/tmp/socket4"
        },
        "interfaces-config": {
            "interfaces": [
                "ens256"
            ]
        },
        "lease-database": {
            "type": "memfile",
            "persist": false
        },
        "subnet4": [
            {
                "subnet": "10.1.2.0/24",
                "id": 1,
                "option-data": [
                    {
                        "name": "routers",
                        "data": "10.1.2.1"
                    }
                ],
                "pools": [
                    {
                        "pool": "10.1.2.100 - 10.1.2.254"
                    }
                ]
            }
        ],
        "loggers": [
            {
                "name": "kea-dhcp4",
                "severity": "INFO",
                "output_options": [
                    {
                        "output": "stdout"
                    }
                ]
            }
        ]
    }
}
```
</details>
<details><summary>kea-ctrl-agent-v4.json</summary>
```
{
    "Control-agent": {
        "http-host": "10.1.2.2",
        "http-port": 8000,
        "control-sockets": {
            "dhcp4": {
                "socket-type": "unix",
                "socket-name": "/tmp/socket4"
            }
        }
    }
}
```
</details>
<details><summary>kea-ctrl-agent-v6.json</summary>
```
{
    "Control-agent": {
        "http-host": "2001:db8::2",
        "http-port": 8000,
        "control-sockets": {
            "dhcp4": {
                "socket-type": "unix",
                "socket-name": "/tmp/socket4"
            }
        }
    }
}
```
</details>
<details><summary>Configuration of ens256</summary>
```
3: ens256: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c0:5f:47 brd ff:ff:ff:ff:ff:ff
    altname enp26s0
    inet 10.1.2.2/24 brd 10.1.2.255 scope global ens256
       valid_lft forever preferred_lft forever
    inet6 2001:db8::2/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec0:5f47/64 scope link
       valid_lft forever preferred_lft forever
```
</details>
<details><summary>Daemon command lines</summary>
```
kea-dhcp4 -c kea-dhcp4.json
```
```
kea-ctrl-agent -c kea-ctrl-agent-v4.json
```
```
kea-ctrl-agent -c kea-ctrl-agent-v6.json
```
</details>
<details><summary>Send config-get with curl to both</summary>
```
curl -X POST -H "Content-Type: application/json" -d '{ "command": "config-get", "service": [ "dhcp4" ] }' http://10.1.2.2:8000/ | jq
```
```
curl -X POST -H "Content-Type: application/json" -d '{ "command": "config-get", "service": [ "dhcp4" ] }' http://[2001:db8::2]:8000/
```
</details>
Both returned a result successfully.
[SF1260](https://isc.lightning.force.com/lightning/r/Case/5007V00002X2x4cQAB/view)

Milestone: next-stable-3.0

## [kea#2863] require-client-classes does not prioritize classes options
2024-03-21 · Marcin Godzina · https://gitlab.isc.org/isc-projects/kea/-/issues/2863

Using `require-client-classes` does not prioritize options defined in classes.
I tried classifying clients by empty host reservations and by class test.
**Recreation:**
`00:03:00:01:f6:f5:f4:f3:f2:11` is sending Solicit and option 23 requirements.
Kea responds with Advertise, including option 23.
I expect "2001:db8::2" in option 23, which is defined in the class, but receive "2001:db8::1" from the subnet.
Tested configs:
<details><summary>1 - using empty reservation (both with and without "only-if-required": true) </summary>
```
{
    "Dhcp6": {
        "subnet6": [
            {
                "subnet": "2001:db8:1::/64",
                "pools": [
                    {
                        "pool": "2001:db8:1::1-2001:db8:1::1"
                    }
                ],
                "interface": "enp0s9",
                "option-data": [
                    {
                        "code": 23,
                        "csv-format": true,
                        "data": "2001:db8::1",
                        "name": "dns-servers",
                        "space": "dhcp6"
                    }
                ],
                "require-client-classes": [
                    "blocked"
                ]
            }
        ],
        "interfaces-config": {
            "interfaces": [
                "enp0s9"
            ]
        },
        "client-classes": [
            {
                "name": "blocked",
                "option-data": [
                    {
                        "name": "dns-servers",
                        "data": "2001:db8::2"
                    }
                ]
            }
        ],
        "reservations": [
            {
                "duid": "00:03:00:01:f6:f5:f4:f3:f2:11",
                "client-classes": [
                    "blocked"
                ]
            }
        ],
        "reservation-mode": "global",
        "renew-timer": 1000,
        "rebind-timer": 2000,
        "preferred-lifetime": 3000,
        "valid-lifetime": 4000,
        "lease-database": {
            "type": "memfile"
        }
    }
}
```
</details>
<details><summary>2 - using class test (both with and without "only-if-required": true)</summary>
```
{
    "Dhcp6": {
        "subnet6": [
            {
                "subnet": "2001:db8:1::/64",
                "pools": [
                    {
                        "pool": "2001:db8:1::1-2001:db8:1::1"
                    }
                ],
                "interface": "enp0s9",
                "option-data": [
                    {
                        "code": 23,
                        "csv-format": true,
                        "data": "2001:db8::1",
                        "name": "dns-servers",
                        "space": "dhcp6"
                    }
                ],
                "require-client-classes": [
                    "blocked"
                ]
            }
        ],
        "interfaces-config": {
            "interfaces": [
                "enp0s9"
            ]
        },
        "client-classes": [
            {
                "name": "blocked",
                "test": "option[1].hex == 0x00030001f6f5f4f3f211",
                "only-if-required": true,
                "option-data": [
                    {
                        "name": "dns-servers",
                        "data": "2001:db8::2"
                    }
                ]
            }
        ],
        "reservations": [
            {
                "duid": "00:03:00:01:f6:f5:f4:f3:f2:11"
            }
        ],
        "reservation-mode": "global",
        "renew-timer": 1000,
        "rebind-timer": 2000,
        "preferred-lifetime": 3000,
        "valid-lifetime": 4000,
        "lease-database": {
            "type": "memfile"
        }
    }
}
```
</details>

Milestone: next-stable-3.0

## [kea#2819] Add max TTL time to D2
2024-03-27 · Darren Ankney · https://gitlab.isc.org/isc-projects/kea/-/issues/2819

With the addition of being able to set the DDNS TTL with `ddns-ttl-percent` to some percentage of the lease time, it might be a good idea to add a maximum possible TTL to the `kea-ddns` configuration. This should be an absolute time limit, for example `3600` for a one-hour maximum TTL. In this example, if Kea sent something higher than this, D2 would truncate the TTL to `3600`. If the proposed setting is not present in D2, there should be no change from the current behavior, which, I believe, is no limit on the length of the TTL.
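For illustration, the limit could be a new scalar in the `DhcpDdns` configuration; `max-ttl` is a hypothetical name for the proposed parameter, while the surrounding keys are standard D2 configuration:

```
{
    "DhcpDdns": {
        "ip-address": "127.0.0.1",
        "port": 53001,
        "max-ttl": 3600
    }
}
```

With this in place, a TTL of, say, 7200 computed on the DHCP side from `ddns-ttl-percent` would be truncated to 3600 before D2 sends the update; omitting `max-ttl` would keep today's unlimited behavior.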
[RT21610](https://support.isc.org/Ticket/Display.html?id=21610)

Milestone: next-stable-3.0

## [kea#2499] don't extend dhcpdb_create scripts any more
2024-03-27 · Wlodzimierz Wencel · https://gitlab.isc.org/isc-projects/kea/-/issues/2499

We should stop maintaining two paths of database creation. It leads to mistakes, more work during releases, and additional jobs to check for differences. So rather than developing scripts like `dhcpdb_create.mysql` (`dhcpdb_create.pgsql`) and upgrade scripts (e.g. `upgrade_009_to_010.sh.in`) separately, we should develop just the upgrade scripts, which would be executed by the `dhcpdb_create.sh` script.
It's ugly to do this late in the process, but it will make our life much easier in the future.
- [ ] as part of the refactor process, please make sure there's a VERY good reason why there's a `.in` version that needs to be expanded during configure.

Milestone: next-stable-3.0

## [kea#2407] GSS-TSIG max-inactivity-interval
2024-02-01 · Peter Davies · https://gitlab.isc.org/isc-projects/kea/-/issues/2407
MS DNS servers seem to silently invalidate "inactive" GSS contexts. As a
result of that, if Kea doesn't perform a successful DDNS request-reply exchange
with GSS-TSIG for a certain period, the MS server invalidates the corresponding
GSS-TSIG key and starts refusing subsequent requests with GSS-TSIG using that
context.
Those subsequent requests will be responded to with "broken" responses as
described above, with the RCODE being REFUSED. We heard this "certain period"
is 165s, and it certainly seems to be a few minutes, but we've not explicitly
confirmed the exact period (or the existence of such "invalidation timer").
From Kea's point of view, those responses just look like broken GSS-TSIG token,
so it cannot tell whether the context is invalidated in the MS server. So the
only possible workaround right now is to rekey GSS-TSIG keys quite often,
whether or not it's been actively used. This is where my other ticket
([#20794](https://support.isc.org/Ticket/Display.html?id=20794)) matters.
see #2404
From our experiments, if we rekey GSS-TSIG keys about every 150s and set key
lifetime to 160s, this problem didn't happen. But such a frequent rekeying is
a waste if the key is being actively used. For example, if we keep triggering
DDNS updates every 30s or so, the problem didn't seem to happen even with the
default rekey-interval (2700s).
So I'd suggest introducing some kind of "max-inactivity-interval". Kea would
keep track of the use of generated GSS-TSIG keys, and if a key isn't used for a
successful DDNS update attempt for the specified interval, it would trigger
rekeying and expire the inactive one as soon as the new key becomes available.
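A configuration sketch of the suggestion, placed next to the existing rekey parameters of the GSS-TSIG hook library (`max-inactivity-interval` is the proposed, hypothetical knob; treat the other parameter values as examples only):

```
{
    "library": "/usr/local/lib/kea/hooks/libddns_gss_tsig.so",
    "parameters": {
        "server-principal": "DNS/server.example.org@EXAMPLE.ORG",
        "tkey-lifetime": 3600,
        "rekey-interval": 2700,
        "max-inactivity-interval": 150,
        "servers": [
            {
                "id": "server1",
                "ip-address": "192.0.2.1"
            }
        ]
    }
}
```

With the 150s value from the experiment above, a key that has not served a successful update for 150s would be rekeyed proactively instead of waiting out the 2700s default rekey interval.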
see [RT #20795](https://support.isc.org/Ticket/Display.html?id=20795)

Milestone: next-stable-3.0

## [kea#2343] CB migration assistant
2024-03-21 · Peter Davies · https://gitlab.isc.org/isc-projects/kea/-/issues/2343

---
name: CB migration assistant
about: A method to migrate to CB
---
When users need to migrate from a file-based json configuration to the Configuration Backend, or to migrate between the supported databases, it would be useful if **Kea** provided some tool to support this.
Possible methods could include:
- implementing two new **CB** commands, i.e. **remote-server4-config-get** and **remote-server4-config-set**; or
- alternatively, enhancing the **kea-admin** tool to provide this functionality.
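A sketch of how the first option might look, reusing the argument conventions of the existing `remote-*` commands (both command names are proposals, not commands that exist today):

```
{
    "command": "remote-server4-config-get",
    "arguments": {
        "remote": { "type": "mysql" },
        "server-tags": [ "all" ]
    }
}
```

The retrieved configuration could then be replayed through the proposed `remote-server4-config-set` against a different backend (e.g. `"type": "postgresql"`) to carry out the migration.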
[RT #17095](https://support.isc.org/Ticket/Display.html?id=17095)
[RT #20167](https://support.isc.org/Ticket/Display.html?id=20167)
[RT #21508](https://support.isc.org/Ticket/Display.html?id=21508)
Requested migrations: MySQL -> Postgres, also config to MySQL, config to PostgreSQL.

Milestone: next-stable-3.0

## [kea#2050] Provide basic configuration templates for sample use cases
2024-03-21 · Vicky Risk (vicky@isc.org) · https://gitlab.isc.org/isc-projects/kea/-/issues/2050

We have a number of example configurations in a separate file in the Kea distribution. These show how to use different features of Kea. The problem is, a new user is looking for a template to cut and paste to start with.
What we would like is to **add example configurations that a user will recognize as 'probably fits my needs', and document these in the ARM**. The example configuration descriptions should include what we are assuming the requirements are for that scenario; what lease lifetimes, traffic levels, and so on we expect; and what hardware/software we suggest. If we can import the configurations into the ARM, great; or else we can link to the json files. Then we can refer them to the existing examples for how to add the xyz feature, if their scenario requires that and we didn't include it.
These example configurations could be included as an appendix to the ARM or in the Getting Started section.
We might want a handful of these eventually, starting with the simplest and most common.
* [x] **power home user** - 256 addresses, dhcpv4 or ipv6 pd, wants failover with v. inexpensive hw, minimal traffic, no dbs. !1418
* [ ] **small organization** - several small dhcpv4 subnets, 20 host reservations, failover, memfile, guest wifi plus 'internal' users. Low traffic Maybe we could model this on the old ISC hq?
* [ ] **multi-site enterprise** - DHCPv4 and DHCPv6. multiple clusters each serving local geography. 100s of host reservations, 1000s of clients, failover (within a site), separate guest network/wifi, possibly captive portal to register?)
* [ ] **hub and spokes enterprise**, such as retail (dozens to hundreds of locations with local failover, centralized configuration mgmt, likely dhcpv4 only legacy devices with odd vendor options....).
* [ ] **small wired service provider** (1-3 data centers, <100K subs, v4 and v6 w/pd, failover, multi-threading, forensic logging, classification for premium, known and unknown users, home gateways with RFC 1918, 3-4 addresses per gw, e.g. small municipal fiber provider)
* [ ] **large wired service provider** (>3 data centers, >100K subs, v4 and v6 with pd, config backend, some ability to identify premium users, e.g. cable access provider, cable modems)
* [ ] **wireless service provider** (not sure how this is different, except perhaps much shorter lease times, higher traffic for the same number of users, more likely to be v6? highest performance configuration, captive portal)

Milestone: next-stable-3.0

## [kea#1989] Issues with qualifying suffix when clients use a combination of Hostname and Client FQDN option
2024-03-27 · Marcin Siodelski · https://gitlab.isc.org/isc-projects/kea/-/issues/1989

A client sends option 12 (Hostname) or option 81 (Client FQDN) to communicate the desired name to the server. The server assumes that the client sends one of these options, not both. If the Client FQDN is present in the client's message, it processes this option and ignores the Hostname option.
The server may append a qualifying suffix to the received name or replace the name entirely. The qualified name is terminated with a dot in Client FQDN and is never terminated with a dot in the Hostname option.
We deliberately made a change in the processing of the Hostname option several years ago, when it turned out that some DHCP clients have issues with consuming a dot-terminated hostname. It appears that this change has implications for some clients.
We received a report about a client who uses a combination of option 12 and option 81 in the 4-way exchange. The client sends a partial name in option 12. The server qualifies the name with the suffix and sends option 12 back. The client uses the qualified name (without a dot) and sends it in the Client FQDN option as a partial name. The server qualifies the received name again on the grounds that it is a partial name. Even though the client shouldn't use a combination of these options, we could probably prevent qualifying the name twice by checking if the received name already has a suffix equal to the configured one.
One solution that comes to mind is to always append the dot to the hostnames returned in option 12. However, as mentioned before, we deliberately removed the dots because some clients did not accept them.
We can also consider whether it should be possible to explicitly include a dot via host reservations. If a host reservation has a dot for the hostname, the server would always include a dot. Thus, it would be possible to make exceptions for selected clients.
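Under that idea, a reservation could opt in to the trailing dot explicitly; this is only a sketch of the proposal, not current behavior (the hardware address is a placeholder):

```
{
    "reservations": [
        {
            "hw-address": "1a:1b:1c:1d:1e:1f",
            "hostname": "special-client.example.org."
        }
    ]
}
```

The trailing dot in the reserved hostname would signal the server to keep the name dot-terminated for this client, while all other clients would keep the current undotted behavior.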
[RT #18375](https://support.isc.org/Ticket/Display.html?id=18735)
UPDATE: The original problem has been addressed in #1529. However, there's a request [see below](https://gitlab.isc.org/isc-projects/kea/-/issues/1989#note_227456) to add two parameters:
- [ ] allow the administrator to configure which field (option 12 or option 81) to prefer if both exist (an RFC violation; you can actually do this now with the DDNS tuning hook).
- [ ] including a trailing dot or not could be a configurable feature (or it could be left to the administrator if they include a trailing dot on the qualifying suffix itself).

Milestone: next-stable-3.0

## [kea#1623] Config Backend migration tool
2024-03-21 · Cathy Almond · https://gitlab.isc.org/isc-projects/kea/-/issues/1623

---
name: Control block/configuration migration tool
about: Similar to #1078, this is a request for tools to assist Kea users who decide to change their back-end provisioning - in this instance, their configuration. This should (ideally) cater for all scenarios, so not just changing which DB backend is being used, but also migration from not-CB to CB and vice versa
---
I don't think anyone else has opened this yet - but if we do tackle this, it would be worth bearing in mind that configuration/CB backend version control or logging might also turn out to be A Thing - so we need to consider that use case/evolution in the design too.
**Is your feature request related to a problem?**
See [Support ticket #17332](https://support.isc.org/Ticket/Display.html?id=17332) for details of the customer for whom this would be useful.
The likelihood is that they will wish to implement CB backend on the DB backend on which it's available now, but migrate to their preferred DB backend, once there is CB support there too (as well as complete coverage of all configuration options in the CB - at the moment there isn't).
**Update**: As discussed in [Porto](https://pad.isc.org/p/porto2022-stork-roadmap-and-backlog#L31), we'd like to make this more generic. While the ultimate solution will likely involve Stork overseeing the migration, some parts of the functionality should be implemented in Kea. In particular, the following was discussed:
- `config-get`/`config-set` like commands that would retrieve/set the full configuration from CB
Once the above is implemented, it will open up plenty of opportunities for Stork. In particular, scenarios such as: "add subnets defined in a config file to CB, delete subnets from config file, write updated config file to disk". The same can be repeated for other configuration elements (shared networks, classes, HR, etc.).

Milestone: next-stable-3.0

## [kea#1559] doc: picking the right redundancy solution (HA vs shared database)
2024-03-21 · Tomek Mrugalski · https://gitlab.isc.org/isc-projects/kea/-/issues/1559

Once the situation with MySQL group replication improves (see #1411, #593 and #980) and we get more experience with running Kea with a cluster, we should document this as a possible alternative to HA.
Support and sales are vitally interested in this. Hence the customer label.
At various times it was commented that the decision tree recently added to the host reservation docs is very useful. Regardless of whether this ends up as a KB article or part of the Kea ARM, there should be a decision tree helping the reader navigate the available options.

Milestone: next-stable-3.0

## [kea#1186] JSON translator tool for CB
2024-03-21 · Peter Davies · https://gitlab.isc.org/isc-projects/kea/-/issues/1186

---
name: JSON translator tool for CB
about: Importing elements from a json configuration into CB
---
**Some initial questions**
This request looks like an extension to GT [#333](https://gitlab.isc.org/isc-projects/kea/-/issues/333) "parser libraries for servers (for netconf)".
**Is your feature request related to a problem? Please describe.**
When migrating from a json-based configuration to the Configuration Backend, the user must identify each element in the configuration, locate the correct hook command, and apply the appropriate parameters.
**Describe the solution you'd like**
A tool which takes a json configuration file as input. The tool should identify any elements that are CB-configurable for the current Kea version and produce a set of commands which will create the appropriate elements in the CB.
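To illustrate the idea, such a tool might turn a `subnet4` entry from the config file into a replayable command list; the output below is purely illustrative (`remote-subnet4-set` is an existing CB command, but no tool generates this today, and the subnet values are placeholders):

```
[
    {
        "command": "remote-subnet4-set",
        "arguments": {
            "subnets": [
                {
                    "subnet": "192.0.2.0/24",
                    "id": 1,
                    "shared-network-name": null
                }
            ],
            "remote": { "type": "mysql" },
            "server-tags": [ "all" ]
        }
    }
]
```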
**Describe alternatives you've considered**
As an extra function of keama
**Additional context**
Customer ticket RT [#16203](https://support.isc.org/Ticket/Display.html?id=16203)

Milestone: next-stable-3.0

## [kea#1028] New classification design
2023-07-31 · Francis Dupont · https://gitlab.isc.org/isc-projects/kea/-/issues/1028

Some proposals for a new classification design:
- replace the list+set by a multi-index
- replace the required-xxx by a more direct add-client-classes.
- add this new add-client-classes to host reservations as an alias of the existing client-classes (same entry with the same behavior for all objects which add a class to the query)
- complete the list of class evaluation points:
  * new points after the deferred unpack, pkt*_receive hook, etc.
  * make clear in the doc what a classification point is for:
    + dependency on a packet processing phase (e.g. KNOWN/UNKNOWN)
    + usage for the next packet processing step (e.g. subnet selection, pool guard, output option)
  * add an enum (vs a few flags) for the point where a class must be evaluated
  * add a meta-data with the value of its enum and make it visible to users
- same rules on dependency (use of member in expression):
  * no forward reference (the user class in a member clause must be already defined)
  * get the last classification point
  * perhaps a new built-in class, for instance for the pkt*_receive hook
- document the way to switch from expired-* to this new stuff (but do not develop a tool to translate configurations)
- (next steps?) new uses of classes (e.g. lifetime), new expressions (e.g. in the response vs the query): in almost all cases this means new classification points

Milestone: next-stable-3.0

## [kea#968] Implement the hash allocator
2023-07-05 · Francis Dupont · https://gitlab.isc.org/isc-projects/kea/-/issues/968

Reference #895, requires #966

Milestone: next-stable-3.0

## [kea#3310] Documentation should include more examples with IPv6 addresses in URLs
2024-03-25 · Francis Dupont · https://gitlab.isc.org/isc-projects/kea/-/issues/3310

The reason is that the syntax is not so trivial... I suggest adding at least one in the ARM (hooks-ha.rst) and in the kea6 examples.

Milestone: backlog

## [kea#3298] Make test utility class MemHostDataSource thread-safe
2024-03-21 · Andrei Pavel (andrei@isc.org) · https://gitlab.isc.org/isc-projects/kea/-/issues/3298

`MemHostDataSource` is used in certain unit tests.
RADIUS MT unit tests required `MemHostDataSource` to be thread-safe, so the `TestHostCache` that derives it overrode all its methods and added a `lock_guard` to each.
To avoid this boilerplate code, ideally, `MemHostDataSource` should be made thread-safe itself.
This was not done at the time due to lack of time before the release.
When this is done, remember to remove the overridden methods from `TestHostCache`:
- `premium/src/hooks/dhcp/radius/tests/access_unittests.cc`
- `premium/src/hooks/dhcp/radius/tests/accounting_unittests.cc`
@fdupont says
> Note the mutex must be at most protected.

Milestone: backlog

## [stork#1331] Change the protobuf field names to follow the naming convention
2024-03-19 · Slawek Figiel · https://gitlab.isc.org/isc-projects/stork/-/issues/1331

The field names should be written using `snake_case`. We use `camelCase`.
This causes the generated gRPC API files for non-Go languages not to follow the expected naming conventions.
```
message GetStateRsp {
  string agentVersion = 1;
  repeated App apps = 2;
  string hostname = 3;
  int64 cpus = 4;
  string cpusLoad = 5;
  int64 memory = 6;
  int64 usedMemory = 7;
  int64 uptime = 8;
  string error = 9;
  string os = 10;
  string platform = 11;
  string platformFamily = 12;
  string platformVersion = 13;
  string kernelVersion = 14;
  string kernelArch = 15;
  string virtualizationSystem = 16;
  string virtualizationRole = 17;
  string hostID = 18;
  bool agentUsesHTTPCredentials = 19;
}
```
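Following the referenced style guide, the same message with conforming names would look as below. The rename is wire-compatible because the field numbers are unchanged, although generated accessor names in each language would change:

```
message GetStateRsp {
  string agent_version = 1;
  repeated App apps = 2;
  string hostname = 3;
  int64 cpus = 4;
  string cpus_load = 5;
  int64 memory = 6;
  int64 used_memory = 7;
  int64 uptime = 8;
  string error = 9;
  string os = 10;
  string platform = 11;
  string platform_family = 12;
  string platform_version = 13;
  string kernel_version = 14;
  string kernel_arch = 15;
  string virtualization_system = 16;
  string virtualization_role = 17;
  string host_id = 18;
  bool agent_uses_http_credentials = 19;
}
```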
References: https://protobuf.dev/programming-guides/style/#message-field-names

Milestone: backlog

## [stork#1329] Event-driven HA status monitoring
2024-03-26 · Marcin Siodelski · https://gitlab.isc.org/isc-projects/stork/-/issues/1329

Currently we poll every 10s for the current HA state. I'd like to suggest that we move to event-based monitoring where the changes will be signaled by the server to the subscribers over SSE. This should reduce the amount of processing in the browser and should cause an immediate reaction to the HA state changes.

Milestone: backlog