Kea issues (https://gitlab.isc.org/isc-projects/kea/-/issues), updated 2023-07-13.

**[#237: ISC DHCP per class lease limit](https://gitlab.isc.org/isc-projects/kea/-/issues/237)**
Francis Dupont, updated 2023-07-13.

Quote from ISC DHCP `dhcpd.conf(5)`:
>>>
PER-CLASS LIMITS ON DYNAMIC ADDRESS ALLOCATION
You may specify a limit to the number of clients in a class that can be
assigned leases. The effect of this will be to make it difficult for a
new client in a class to get an address. Once a class with such a
limit has reached its limit, the only way a new client in that class
can get a lease is for an existing client to relinquish its lease,
either by letting it expire, or by sending a DHCPRELEASE packet.
Classes with lease limits are specified as follows:
class "limited-1" {
lease limit 4;
}
This will produce a class in which a maximum of four members may hold a
lease at one time.
>>>
Often associated with cloned classes. Requested by a customer but a priori not easy to implement.
Note that in Kea, lease assignment is done before calling `setReservedClientClasses()`.
Support tickets: [support#18293](https://support.isc.org/Ticket/Display.html?id=18293), [support#17523](https://support.isc.org/Ticket/Display.html?id=17523), [support#19968](https://support.isc.org/Ticket/Display.html?id=19968)
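For comparison, the `dhcpd.conf` example above could be expressed in Kea roughly as follows, assuming the `limits` hook library introduced in later Kea versions and its `user-context` syntax; the library path and class name are illustrative, not taken from this ticket:

```json
{
  "Dhcp4": {
    "hooks-libraries": [
      { "library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_limits.so" }
    ],
    "client-classes": [
      {
        "name": "limited-1",
        "user-context": {
          "limits": { "address-limit": 4 }
        }
      }
    ]
  }
}
```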
Being implemented in Kea.

Label: ISC DHCP Migration.

**[#554: Speedup subnet selection](https://gitlab.isc.org/isc-projects/kea/-/issues/554)**
Francis Dupont, updated 2022-11-02.

First, we use a selector structure holding all possible keys (not the query itself). Second, the most interesting key is the source address of the query ("interesting" here mainly means the key that should not change between two queries from the same or similar clients).
So I propose to cache selector => subnet selection results in a hash table (unordered multi map) keyed by the source address.
Note that this can slow things down where there are only a few subnets; conditions of use, cache sizing, etc. should be analyzed (so it is an **idea**).
Changed to a broader investigation of subnet-selection speedup.

Label: backlog. Assignee: Francis Dupont.

**[#1345: Ability to always respond to all requests in HA active-active mode to support anycast DHCP](https://gitlab.isc.org/isc-projects/kea/-/issues/1345)**
Ewald van Geffen, updated 2021-01-22.
My impression is that Kea doesn't always respond to all requests; I think this is due to the 1/n split.
I run two Kea instances sharing a single BGP anycast /32 IP prefix. DHCP requests get routed via a DHCP relay towards the closest Kea instance according to BGP. Load balancing is handled externally, so Kea should respond to all requests it receives and not impose any load-balancing logic of its own.
I think this is where the magic happens [1].
From my understanding, `active_servers` needs to reflect the current server instance ID (primary, secondary).
[1] https://github.com/isc-projects/kea/blob/457111f9db051723ff9f8e7fb621872d0aa10363/src/hooks/dhcp/high_availability/query_filter.cc#L316

Label: outstanding.

**[#2131: Revisit and extend D2 update retry code](https://gitlab.isc.org/isc-projects/kea/-/issues/2131)**
Francis Dupont, updated 2022-02-25.
The waiting delay between two attempts is unclear, and for GSS-TSIG the ability to set the number of retries has been requested.
This ticket should stay in the core code. Note that the idea of saving and restoring the NCR queue is not considered here (it has its own ticket, #1801).
Opening a design phase.

Label: outstanding. Assignee: Francis Dupont.

**[#2037: Improve expired leases reclamation query for PostgreSQL](https://gitlab.isc.org/isc-projects/kea/-/issues/2037)**
Thomas Markwalder, updated 2022-11-02. Label: backlog. Assignee: Marcin Siodelski.

Extended performance testing revealed cyclic slowdowns that correspond to expired-lease reclamation. The MySQL query was doing full table scans; this was mitigated under #2030. We need to examine PostgreSQL performance and see whether it can be improved.

**[#1767: Check static analysers reports](https://gitlab.isc.org/isc-projects/kea/-/issues/1767)**
Wlodzimierz Wencel, updated 2022-11-02.
Recent increased interest in security reminded me that it has been some time since anyone looked into our static analysers. The reports are:
* https://scan.coverity.com/projects/kea/view_defects (if you don't have an account please sign in and request access to kea)
* https://jenkins.isc.org/view/All/job/kea-master-cppcheck-internal/
We need to:
* review the reports
* mark issues in Coverity with the correct status
* open tickets for real issues
* fix the issues :)

Label: backlog. Assignee: Razvan Becheriu.

**[#326: Handle Reconfigure Accept Option #87](https://gitlab.isc.org/isc-projects/kea/-/issues/326)**
Vicky Risk (vicky@isc.org), updated 2023-05-30.
Mayya Sunil opened this on GitHub as issue #87 on June 1, 2018.
Reconfigure Accept option: included in the server's Reply and Advertise messages and in the client's Solicit, Request, Renew, Rebind, and Information-Request messages to announce support of the Reconfigure feature.
This issue involves two tasks:
1) Include the option in the server's outgoing messages.
2) Parse the option in the client's messages, and generate and store the keys in the reservation if keys are not available.
----------
MayyaSunil pushed a commit to MayyaSunil/kea that referenced this issue on Aug 14:
[store_client_context] Stores client info in user contexts … (a09944d)

Milestone: next-stable-2.6 (due 2020-02-29).

**[#2339: Memory leak in HA scenario with backup server down](https://gitlab.isc.org/isc-projects/kea/-/issues/2339)**
Branimir Rajtar, updated 2023-09-07.

---
name: Memory leak in HA scenario with backup server down
about: Memory loss is created on running instances
---
**Describe the bug**
HA mode is configured with three servers (primary, secondary, backup) and is serving clients. When the backup server becomes unavailable, the primary and secondary experience a continuous memory leak, manifested as a continuous increase in RSS memory use by the isc-kea-dhcp4-server process. The size of the leak correlates directly with the number of active clients: the larger the number, the greater the leak. Once the backup server is deleted from the configuration or becomes active again, memory use stops increasing, but the old memory is not freed.
**To Reproduce**
Steps to reproduce the behavior:
1. Run KEA (DHCP4 only) in HA scenario with two load-balancing servers (primary and secondary) and a single backup server
2. Start serving clients (40k in our scenario) and monitor RSS usage for the Kea server process
3. Disable backup server
4. Verify that RSS usage is increasing continuously
5. Enable backup server
6. Verify that RSS usage is stable
**Expected behavior**
The servers should not have any memory leaks.
**Environment:**
- Kea version: 1.8.2, 2.0.2
- OS: Ubuntu 18.04
- Memfile
- libdhcp_lease_cmds, libdhcp_stat_cmds, libdhcp_ha
**Additional Information**
```
{
"Dhcp4": {
"dhcp-queue-control": {
"enable-queue": true,
"queue-type": "kea-ring4",
"capacity": 256
},
"interfaces-config": {
"interfaces": [
"eth1"
],
"dhcp-socket-type": "udp"
},
"control-socket": {
"socket-type": "unix",
"socket-name": "/tmp/kea-dhcp4-ctrl.sock"
},
"lease-database": {
"type": "memfile",
"persist": true,
"name": "/var/lib/kea/dhcp4.leases",
"lfc-interval": 3600,
"port": 0
},
"expired-leases-processing": {
"reclaim-timer-wait-time": 10,
"flush-reclaimed-timer-wait-time": 25,
"hold-reclaimed-time": 3600,
"max-reclaim-leases": 100,
"max-reclaim-time": 250,
"unwarned-reclaim-cycles": 5
},
"renew-timer": 60,
"rebind-timer": 100,
"valid-lifetime": 120,
"option-data": [],
"hooks-libraries": [
{
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_lease_cmds.so",
"parameters": {}
},
{
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_stat_cmds.so"
},
{
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_ha.so",
"parameters": {
"high-availability": [
{
"this-server-name": "server3",
"mode": "load-balancing",
"heartbeat-delay": 3000,
"max-response-delay": 7000,
"max-ack-delay": 7000,
"max-unacked-clients": 20,
"peers": [
{
"name": "server2",
"url": "http://<XXX>:8080/",
"role": "secondary",
"auto-failover": true
},
{
"name": "server1",
"url": "http://<YYY>:8080/",
"role": "primary",
"auto-failover": true
},
{
"name": "server3",
"url": "http://<ZZZ>:8080/",
"role": "backup",
"auto-failover": true
}
]
}
]
}
}
],
"option-def": [
{
"name": "classless-static-route",
"code": 121,
"space": "dhcp4",
"type": "record",
"array": true,
"record-types": "uint8, uint8"
}
],
"client-classes": [
// anonymized
],
"subnet4": [
// anonymized
],
"reservations": [],
"loggers": [
{
"name": "kea-dhcp4",
"output_options": [
{
"output": "syslog"
}
],
"severity": "error",
"debuglevel": 0
}
]
}
}
```
**Contacting you**
Email/GitHub; telephone available after contact.

Milestone: next-stable-2.6.

**[#2856: MySQL v4 backend slowing down while using random and flq allocator](https://gitlab.isc.org/isc-projects/kea/-/issues/2856)**
Wlodzimierz Wencel, updated 2023-06-19.
It looks like Kea performance is hugely impacted when the random or flq allocator is used. Screenshots of the charts showing the performance degradation are attached. Results were generated using code from isc-projects/kea#2843.
![Screenshot_2023-05-11_at_13.56.08](/uploads/8fd6f88e0cd8078c130c11b224ea749c/Screenshot_2023-05-11_at_13.56.08.png)
![Screenshot_2023-05-11_at_13.56.38](/uploads/7a82938d13b13dec1b7c80388f3b4b1c/Screenshot_2023-05-11_at_13.56.38.png)
The issue is not observed with the v6 random allocator:
![Screenshot_2023-05-11_at_14.01.09](/uploads/39181cd28e2e1fad03c70e2319a88fc1/Screenshot_2023-05-11_at_14.01.09.png)
[full internal report](https://jenkins.aws.isc.org/view/Kea-manual/job/kea-manual/job/performance/94/artifact/qa-dhcp/kea/performance-jenkins/report.html) (it's heavy, please wait patiently for it to load)
To check all allocator-related tests, go to the `Resource Consumption` tab; allocator tests start at Scenario 7.
MySQL-related scenarios:
* v4: 10, 11, 12, 19, 20, 21, 28, 29, 30
* v6: 39, 40, 45, 46, 51, 52
(number of tests will be reduced for regular monthly runs)
Additionally, in previous runs (master) I observed issues with the iterative allocator in v4 MySQL as well. It looked like Kea stopped assigning leases after about 6 million, but I couldn't reproduce it on isc-projects/kea#2843 (I will repeat those tests on master overnight).
![Screenshot_2023-05-11_at_14.11.01](/uploads/6da24ed37a758e3e86653a31a2b2fbb1/Screenshot_2023-05-11_at_14.11.01.png)

Milestone: next-stable-2.6. Assignee: Marcin Siodelski.

**[#3050: Post audit: tighten access permissions for configs](https://gitlab.isc.org/isc-projects/kea/-/issues/3050)**
Tomek Mrugalski, updated 2023-09-21.
Another point after @manu's [audit](https://gitlab.isc.org/isc-private/kea/-/wikis/Kea-Security-Review-02-2023#9-limiting-permission-of-the-kea-configuration-files):
I would propose considering the following:
* [ ] add a WARNING section to the config files (close to the sections where a password/key is configured) with a link to a guide on how to set it up correctly, so the administrator has at least a chance to notice it and follow the recommendation
* [ ] have the service check during startup/reload whether a password or key secret is present and display/log a warning (with a link to the guide?)
* [x] change access permissions to 0640 by default (instead of 0644); in other words, remove read rights for 'other'. Note: User/group ownership should be 'root' or the 'user' under which kea is running.
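The third proposal amounts to something like the sketch below. It uses a scratch file so it can run anywhere; on a real system the target would be the Kea config file (e.g. `/etc/kea/kea-dhcp4.conf`, an illustrative path) with ownership root plus the group Kea runs under.

```shell
# Illustrative sketch: tighten permissions on a Kea config file.
# A scratch copy stands in for /etc/kea/kea-dhcp4.conf here.
conf=$(mktemp)
chmod 0640 "$conf"   # 0644 -> 0640: remove read rights for 'other'
ls -l "$conf"        # verify the resulting mode
rm -f "$conf"
```

On the real file one would additionally `chown root:<kea-group> <config>` so that only root and the Kea service user can read it.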
The second would probably be tricky to implement, so we might skip it; proposals 1 and 3 are solid and we should do them.
This ticket is about updating the packages. Some might argue that a similar action should be taken for the Kea sources (e.g. make sure that `make install` installs the files with more restrictive permissions).

Milestone: next-stable-2.6.

**[#1336: Inaccurate counters in Kea core caused by reservations and declined leases](https://gitlab.isc.org/isc-projects/kea/-/issues/1336)**
Razvan Becheriu, updated 2024-03-22.
Related to https://gitlab.isc.org/isc-projects/kea/-/issues/944.
There are several problems discovered in https://gitlab.isc.org/isc-projects/kea/-/issues/1065:
1. Declined leases are considered "allocated" and must be included in the recount functions on startup, or they will cause negative counters on expire/reclaim.
2. Reservations must be treated as normal leases and should increment counters, since they decrement counters on expire or reclaim and can otherwise lead to negative counters.
functions that need updating are:
```
allocateReservedLeases6
allocateGlobalReservedLeases6
```
3. extendLease6 should not increment stats:
```
"assigned-nas"
"assigned-pds"
"cumulative-assigned-nas"
"cumulative-assigned-pds"
"cumulative-assigned-nas"
"cumulative-assigned-pds"
```
because they have already been updated by previous functions:
```
allocateUnreservedLeases6
createLease6
reuseExpiredLease
```
and also by the functions from point 2 above:
```
allocateReservedLeases6
allocateGlobalReservedLeases6
```
This also means that all leases removed in extendLease6 must undo the counter updates already made by the previous functions:
```
"cumulative-assigned-nas"
"cumulative-assigned-pds"
"cumulative-assigned-nas"
"cumulative-assigned-pds"
```kea2.5.8Razvan BecheriuRazvan Becheriuhttps://gitlab.isc.org/isc-projects/kea/-/issues/3047Detection of packet processing slowdown2024-03-22T13:16:46ZDarren AnkneyDetection of packet processing slowdownSometimes Kea will find itself unable to keep up with incoming packet load for various reasons (overwhelming amounts of packets in an avalanche scenario, slowdown in SQL queries in the case of database usage, and etc). Kea currently has...Sometimes Kea will find itself unable to keep up with incoming packet load for various reasons (overwhelming amounts of packets in an avalanche scenario, slowdown in SQL queries in the case of database usage, and etc). Kea currently has no way to detect or warn about this situation. Detailed analysis of the logs is necessary to understand what is happening. This issue is meant to provide some ideas for future development in this area where Kea could possibly detect this situation and provide warning log messages about it.
Possible ideas:
1. Add timestamps to received packets (possible? [perhaps](https://www.kernel.org/doc/Documentation/networking/timestamping.txt)) as they are put into the buffer. Check these timestamps against the current time as they are pulled out of the buffer. If there is some discrepancy that is larger than some threshold, emit some kind of log message about this.
2. In `netstat -l` output, there is a representation of the current buffer size for a process. Example:
```
$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
...
udp 0 0 10.1.2.2:bootps 0.0.0.0:*
```
Administrators sometimes use this to find out whether a process has a slowdown in processing packets: if the number is larger than normal, there might be one. Kea may not necessarily know what is normal, but there must be some maximum size of the Recv-Q. If so, perhaps this maximum size is being reached when these backups occur, and if that could be detected, a warning message could be logged.
[RT22378](https://support.isc.org/Ticket/Display.html?id=22378)

Milestone: kea2.5.8. Assignee: Thomas Markwalder.

**[#3276: Kea primary server in "passive backup" freeze/crash on receiving ha-sync](https://gitlab.isc.org/isc-projects/kea/-/issues/3276)**
Marcin Godzina, updated 2024-03-28.

A Kea HA server set as `primary` freezes after receiving the `ha-sync` command with proper arguments.
The backup server does NOT crash.
Freeze occurs only in `passive-backup` mode.
The problem exists in both v4 and v6, and with the memfile as well as the MySQL/PostgreSQL lease databases.
**Kea versions tested:**
- 2.5.7-git 8c1f22e3fb65225a0279606a8a65962850a5f881
- 2.4.0 release tarball
**Tested systems:**
- Fedora 38 in VM on my local setup.
- Ubuntu 22.04, Alpine 3.16, Fedora 36 on Jenkins build farm.
**To Reproduce**
Steps to reproduce the behavior:
1. Run Kea HA servers in **Passive backup** configuration (tested configuration provided)
2. Wait for servers to connect.
3. Optionally add leases (crashes either way)
4. Send the `ha-sync` command with proper arguments to the primary server (`"server-name": "server2"` for the provided configuration); invalid arguments get an error response.
The primary server freezes after receiving the response to the `dhcp-disable` command that is sent automatically to the backup server. It does not respond to the kea-ctrl-agent, keyboard interrupts, or SIGHUP.
<details><summary>Commands tested to freeze provided config:</summary>
```
{
command": "ha-sync",
"arguments": {
"server-name": "server2"
}
}
```
```
{
command": "ha-sync",
"arguments": {
"server-name": "server1"
}
}
```
```
{
command": "ha-sync",
"arguments": {
"server-name": "server2",
"max-period": 60
}
}
```
</details>
**Configuration**
<details><summary>Primary</summary>
```
{
"Dhcp4": {
"option-data": [],
"hooks-libraries": [
{
"library": "/home/mgodzina/installed/keadev/lib/kea/hooks/libdhcp_ha.so",
"parameters": {
"high-availability": [
{
"peers": [
{
"auto-failover": true,
"name": "server1",
"role": "primary",
"url": "http://192.168.56.102:8003/"
},
{
"auto-failover": true,
"name": "server2",
"role": "backup",
"url": "http://192.168.56.103:8003/"
}
],
"state-machine": {
"states": []
},
"mode": "passive-backup",
"this-server-name": "server1",
"multi-threading": {
"enable-multi-threading": true,
"http-dedicated-listener": true,
"http-listener-threads": 0,
"http-client-threads": 0
}
}
]
}
},
{
"library": "/home/mgodzina/installed/keadev/lib/kea/hooks/libdhcp_lease_cmds.so"
}
],
"shared-networks": [],
"subnet4": [
{
"subnet": "192.168.50.0/24",
"pools": [
{
"pool": "192.168.50.1-192.168.50.200"
}
],
"interface": "enp0s9"
}
],
"interfaces-config": {
"interfaces": [
"enp0s9"
]
},
"control-socket": {
"socket-type": "unix",
"socket-name": "/home/mgodzina/installed/keadev/var/run/kea/control_socket"
},
"renew-timer": 1000,
"rebind-timer": 2000,
"valid-lifetime": 4000,
"loggers": [
{
"name": "kea-dhcp4",
"output-options": [
{
"output": "/home/mgodzina/installed/keadev/var/log/kea.log"
}
],
"severity": "DEBUG",
"debuglevel": 99
}
],
"lease-database": {
"type": "memfile"
}
}
}
```
</details>
<details><summary>Backup</summary>
```
{
"Dhcp4": {
"option-data": [],
"hooks-libraries": [
{
"library": "/home/mgodzina/installed/keadev/lib/kea/hooks/libdhcp_ha.so",
"parameters": {
"high-availability": [
{
"peers": [
{
"auto-failover": true,
"name": "server1",
"role": "primary",
"url": "http://192.168.56.102:8003/"
},
{
"auto-failover": true,
"name": "server2",
"role": "backup",
"url": "http://192.168.56.103:8003/"
}
],
"state-machine": {
"states": []
},
"mode": "passive-backup",
"this-server-name": "server2",
"multi-threading": {
"enable-multi-threading": true,
"http-dedicated-listener": true,
"http-listener-threads": 0,
"http-client-threads": 0
}
}
]
}
},
{
"library": "/home/mgodzina/installed/keadev/lib/kea/hooks/libdhcp_lease_cmds.so"
}
],
"shared-networks": [],
"subnet4": [
{
"subnet": "192.168.50.0/24",
"pools": [
{
"pool": "192.168.50.1-192.168.50.200"
}
],
"interface": "enp0s9"
}
],
"interfaces-config": {
"interfaces": [
"enp0s9"
]
},
"control-socket": {
"socket-type": "unix",
"socket-name": "/home/mgodzina/installed/keadev/var/run/kea/control_socket"
},
"renew-timer": 1000,
"rebind-timer": 2000,
"valid-lifetime": 4000,
"loggers": [
{
"name": "kea-dhcp4",
"output-options": [
{
"output": "/home/mgodzina/installed/keadev/var/log/kea.log"
}
],
"severity": "DEBUG",
"debuglevel": 99
}
],
"lease-database": {
"type": "memfile"
}
}
}
```
</details>
**Logs**
<details><summary>Primary server log tail</summary>
```
2024-02-28 16:20:13.417 DEBUG [kea-dhcp4.commands/2096.139741364354944] COMMAND_SOCKET_CONNECTION_OPENED Opened socket 38 for incoming command connection
2024-02-28 16:20:13.417 DEBUG [kea-dhcp4.commands/2096.139741364354944] COMMAND_SOCKET_READ Received 127 bytes over command socket 38
2024-02-28 16:20:13.417 INFO [kea-dhcp4.commands/2096.139741364354944] COMMAND_RECEIVED Received command 'ha-sync'
2024-02-28 16:20:13.417 DEBUG [kea-dhcp4.callouts/2096.139741364354944] HOOKS_CALLOUTS_BEGIN begin all callouts for hook $ha_sync
2024-02-28 16:20:13.417 DEBUG [kea-dhcp4.http/2096.139741364354944] HTTP_CLIENT_REQUEST_SEND sending HTTP request POST / HTTP/1.1 to http://192.168.56.103:8003/
2024-02-28 16:20:13.417 DEBUG [kea-dhcp4.http/2096.139741364354944] HTTP_CLIENT_REQUEST_SEND_DETAILS detailed information about request sent to http://192.168.56.103:8003/:
POST / HTTP/1.1
Host: 192.168.56.103
Content-Length: 86
Content-Type: application/json
{ "arguments": { "origin": 2000 }, "command": "dhcp-disable", "service": [ "dhcp4" ] }
2024-02-28 16:20:13.417 INFO [kea-dhcp4.ha-hooks/2096.139741364354944] HA_SYNC_START server1: starting lease database synchronization with server2
2024-02-28 16:20:13.417 DEBUG [kea-dhcp4.http/2096.139741364354944] HTTP_SERVER_RESPONSE_RECEIVED received HTTP response from http://192.168.56.103:8003/
2024-02-28 16:20:13.417 DEBUG [kea-dhcp4.http/2096.139741364354944] HTTP_SERVER_RESPONSE_RECEIVED_DETAILS detailed information about well-formed response received from http://192.168.56.103:8003/:
HTTP/1.1 200 OK
Content-Length: 54
Content-Type: application/json
Date: Wed, 28 Feb 2024 15:20:13 GMT
[ { "result": 0, "text": "DHCPv4 service disabled" } ]
```
</details>
<details><summary>Backup server log snippet with timeout:</summary>
```
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.http/20519.140151306917568] HTTP_REQUEST_RECEIVE_START start receiving request from 192.168.56.102 with timeout 10
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.http/20519.140151306917568] HTTP_DATA_RECEIVED received 179 bytes from 192.168.56.102
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.http/20519.140151306917568] HTTP_CLIENT_REQUEST_RECEIVED received HTTP request from 192.168.56.102
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.http/20519.140151306917568] HTTP_CLIENT_REQUEST_RECEIVED_DETAILS detailed information about well-formed request received from 192.168.56.102:
POST / HTTP/1.1
Host: 192.168.56.103
Content-Length: 86
Content-Type: application/json
{ "arguments": { "origin": 2000 }, "command": "dhcp-disable", "service": [ "dhcp4" ] }
2024-02-28 16:20:13.413 INFO [kea-dhcp4.commands/20519.140151306917568] COMMAND_RECEIVED Received command 'dhcp-disable'
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.callouts/20519.140151306917568] HOOKS_CALLOUTS_BEGIN begin all callouts for hook command_processed
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.callouts/20519.140151306917568] HOOKS_CALLOUT_CALLED hooks library with index 1 has called a callout on hook command_processed that has address 0x7f778767ffe0 (callout duration: 0.000 ms)
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.callouts/20519.140151306917568] HOOKS_CALLOUTS_COMPLETE completed callouts for hook command_processed (total callouts duration: 0.000 ms)
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.http/20519.140151306917568] HTTP_SERVER_RESPONSE_SEND sending HTTP response HTTP/1.1 200 OK to 192.168.56.102
2024-02-28 16:20:13.413 DEBUG [kea-dhcp4.http/20519.140151306917568] HTTP_SERVER_RESPONSE_SEND_DETAILS detailed information about response sent to 192.168.56.102:
HTTP/1.1 200 OK
Content-Length: 54
Content-Type: application/json
Date: Wed, 28 Feb 2024 15:20:13 GMT
[ { "result": 0, "text": "DHCPv4 service disabled" } ]
2024-02-28 16:20:17.831 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: reclaim-expired-leases
2024-02-28 16:20:17.831 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_LEASES_RECLAMATION_START starting reclamation of expired leases (limit = 100 leases or 250 milliseconds)
2024-02-28 16:20:17.831 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_MEMFILE_GET_EXPIRED4 obtaining maximum 101 of expired IPv4 leases
2024-02-28 16:20:17.832 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_LEASES_RECLAMATION_COMPLETE reclaimed 0 leases in 0.033 ms
2024-02-28 16:20:17.832 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_NO_MORE_EXPIRED_LEASES all expired leases have been reclaimed
2024-02-28 16:20:17.832 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_TIMERMGR_START_TIMER starting timer: reclaim-expired-leases
2024-02-28 16:20:21.840 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: flush-reclaimed-leases
2024-02-28 16:20:21.840 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_RECLAIMED_LEASES_DELETE begin deletion of reclaimed leases expired more than 3600 seconds ago
2024-02-28 16:20:21.840 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_MEMFILE_DELETE_EXPIRED_RECLAIMED4 deleting reclaimed IPv4 leases that expired more than 3600 seconds ago
2024-02-28 16:20:21.840 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_RECLAIMED_LEASES_DELETE_COMPLETE successfully deleted 0 expired-reclaimed leases
2024-02-28 16:20:21.840 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_TIMERMGR_START_TIMER starting timer: flush-reclaimed-leases
2024-02-28 16:20:27.852 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: reclaim-expired-leases
2024-02-28 16:20:27.852 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_LEASES_RECLAMATION_START starting reclamation of expired leases (limit = 100 leases or 250 milliseconds)
2024-02-28 16:20:27.852 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_MEMFILE_GET_EXPIRED4 obtaining maximum 101 of expired IPv4 leases
2024-02-28 16:20:27.852 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_LEASES_RECLAMATION_COMPLETE reclaimed 0 leases in 0.032 ms
2024-02-28 16:20:27.852 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_NO_MORE_EXPIRED_LEASES all expired leases have been reclaimed
2024-02-28 16:20:27.852 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_TIMERMGR_START_TIMER starting timer: reclaim-expired-leases
2024-02-28 16:20:37.891 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_TIMERMGR_RUN_TIMER_OPERATION running operation for timer: reclaim-expired-leases
2024-02-28 16:20:37.892 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_LEASES_RECLAMATION_START starting reclamation of expired leases (limit = 100 leases or 250 milliseconds)
2024-02-28 16:20:37.892 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_MEMFILE_GET_EXPIRED4 obtaining maximum 101 of expired IPv4 leases
2024-02-28 16:20:37.892 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_LEASES_RECLAMATION_COMPLETE reclaimed 0 leases in 0.027 ms
2024-02-28 16:20:37.892 DEBUG [kea-dhcp4.alloc-engine/20519.140151383601024] ALLOC_ENGINE_V4_NO_MORE_EXPIRED_LEASES all expired leases have been reclaimed
2024-02-28 16:20:37.892 DEBUG [kea-dhcp4.dhcpsrv/20519.140151383601024] DHCPSRV_TIMERMGR_START_TIMER starting timer: reclaim-expired-leases
2024-02-28 16:20:43.433 DEBUG [kea-dhcp4.http/20519.140151315310272] HTTP_IDLE_CONNECTION_TIMEOUT_OCCURRED closing persistent connection with 192.168.56.102 as a result of a timeout
2024-02-28 16:20:43.433 DEBUG [kea-dhcp4.http/20519.140151315310272] HTTP_CONNECTION_STOP stopping HTTP connection from 192.168.56.102
```
</details>
[gdb.txt](/uploads/de79e56462885f7947eab90267f7a658/gdb.txt)

Milestone: kea2.5.8. Assignee: Marcin Siodelski.

**[#3297: Perfmon-Hook-Task-5 Add Event Stack Processing](https://gitlab.isc.org/isc-projects/kea/-/issues/3297)**
Thomas Markwalder, updated 2024-03-28. Milestone: kea2.5.8. Assignee: Thomas Markwalder.

Complete Hook Task 5: Add Event Stack Processing - process event stacks into MonitoredDuration updates, implement the report timer, and add alarm processing.

See https://gitlab.isc.org/isc-projects/kea/-/wikis/Designs/performance-monitor#perfmon-hook-tasks

**[#3247: Changes in hammer for Rocky Linux](https://gitlab.isc.org/isc-projects/kea/-/issues/3247)**
Marcin Godzina, updated 2024-03-28. Milestone: kea2.5.8. Assignee: Marcin Godzina.

We need to extend Hammer to build on Rocky Linux 9; we have a business need for this.

**[#3129: Extend hammer to prepare Kea to test it with TSAN](https://gitlab.isc.org/isc-projects/kea/-/issues/3129)**
Wlodzimierz Wencel, updated 2024-03-28.

While checking what is really needed to run fully automated system tests against Kea with TSAN enabled, I came to the conclusion that a different compilation procedure has to be incorporated into hammer.
In the TSAN job in Jenkins we are using:
```
CXX=clang++
CXXFLAGS="-g3 -ggdb -O0 -fsanitize=thread -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches"
TSAN_OPTIONS="detect_deadlocks=1 second_deadlock_stack=1"
```

Milestone: kea2.5.8. Assignee: Wlodzimierz Wencel.
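As a sketch, hammer could export that environment before invoking the build. The `./configure && make` line is an assumption about the autotools-based build and would need adapting to hammer's actual build steps:

```shell
# Illustrative: export the Jenkins TSAN job environment for a build.
export CXX=clang++
export CXXFLAGS="-g3 -ggdb -O0 -fsanitize=thread -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches"
export TSAN_OPTIONS="detect_deadlocks=1 second_deadlock_stack=1"
# ./configure && make   # hypothetical invocation; hammer would run its own steps
echo "$CXXFLAGS" | grep -q -- '-fsanitize=thread'   # sanity check: TSAN flag present
```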