ISC Open Source Projects issues (https://gitlab.isc.org/groups/isc-projects/-/issues), feed updated 2024-01-03T13:35:03Z

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4318: Check the size of the structure passed to dns_rdata_*struct methods (Mark Andrews, updated 2024-01-03)

#4314 made me think we should check the size of the structure being passed to dns_rdata_tostruct, dns_rdata_fromstruct, and dns_rdata_freestruct, as we don't have the compiler doing type checks for us. If there is a mismatch, badness could happen.

Milestone: Not planned.

---

Issue https://gitlab.isc.org/isc-projects/kea/-/issues/3050: Post audit: tighten access permissions for configs (Tomek Mrugalski, updated 2023-09-21)

Another point after @manu's [audit](https://gitlab.isc.org/isc-private/kea/-/wikis/Kea-Security-Review-02-2023#9-limiting-permission-of-the-kea-configuration-files):
I would propose considering the following:
* [ ] add a WARNING section to the config files (close to the sections where a password/key is configured) with a link to a guide on how to set it up correctly, so the administrator has at least a chance to notice it and follow the recommendation
* [ ] have the service check during startup/reload whether a password or key secret is present and display/log a warning (with a link to the guide?)
* [x] change access permissions to 0640 by default (instead of 0644); in other words, remove read rights for 'other'. Note: User/group ownership should be 'root' or the 'user' under which kea is running.
The second would probably be tricky to implement, so we might skip it; proposals 1 and 3 are solid and we should do them.
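A minimal sketch of proposal 3, the permission tightening (the file path is a stand-in, not Kea's actual install logic; ownership changes via `chown` require root and are omitted):

```python
import os
import stat
import tempfile

# Stand-in for a config file holding secrets, e.g. kea-dhcp4.conf.
fd, conf = tempfile.mkstemp(suffix=".conf")
os.close(fd)

# 0644 -> 0640: remove read rights for 'other', as proposed above.
os.chmod(conf, 0o640)

mode = stat.S_IMODE(os.stat(conf).st_mode)
print(oct(mode))  # 0o640

os.remove(conf)
```

In a real package this would be done at install time (e.g. via the packaging scripts), together with setting user/group ownership to root or the user Kea runs as.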
This ticket is about updating the packages. Some might argue that a similar action should be taken for the Kea sources (e.g. make sure that `make install` installs the files with more restrictive permissions).

Milestone: next-stable-2.6.

---

Issue https://gitlab.isc.org/isc-projects/kea/-/issues/3047: Detection of packet processing slowdown (Darren Ankney, updated 2024-03-22)

Sometimes Kea will find itself unable to keep up with the incoming packet load for various reasons (overwhelming amounts of packets in an avalanche scenario, slowdown of SQL queries when a database is used, etc.). Kea currently has no way to detect or warn about this situation; detailed analysis of the logs is necessary to understand what is happening. This issue is meant to collect ideas for future development in this area, where Kea could possibly detect this situation and provide warning log messages about it.
Possible ideas:
1. Add timestamps to received packets (possible? [perhaps](https://www.kernel.org/doc/Documentation/networking/timestamping.txt)) as they are put into the buffer. Check these timestamps against the current time as they are pulled out of the buffer. If there is some discrepancy that is larger than some threshold, emit some kind of log message about this.
2. In `netstat -l` output, there is a representation of the current buffer size for a process. Example:
```
$ netstat -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
...
udp 0 0 10.1.2.2:bootps 0.0.0.0:*
```
Administrators sometimes use this to find out if some process has a slowdown in processing packets. If the number is larger than normal, then there might be a slowdown. Kea may not necessarily know what is normal, but there must be some maximum size of the Recv-Q; perhaps that maximum size is being reached when these backups occur. If that could be detected, a warning message could be logged.
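As a rough illustration of idea 2 (not Kea code): on Linux, the Recv-Q shown by `netstat` comes from `/proc/net/udp`, where the fifth column is `tx_queue:rx_queue` in hex, so a monitor could parse it and warn above some threshold. The sample line and threshold below are hypothetical.

```python
def rx_queue_bytes(proc_net_udp_line: str) -> int:
    """Return the bytes pending in the receive queue for one socket
    line of /proc/net/udp (field 5 is 'tx_queue:rx_queue' in hex)."""
    fields = proc_net_udp_line.split()
    return int(fields[4].split(":")[1], 16)

# Hypothetical socket line, roughly matching the netstat output above.
sample = ("  1: 0202010A:0043 00000000:0000 07 "
          "00000000:00001000 00:00000000 00000000 0 0 12345 2 0 0")

RECV_Q_WARN_THRESHOLD = 1024  # bytes; an example value, not a Kea default

pending = rx_queue_bytes(sample)
print(pending)  # 4096
if pending > RECV_Q_WARN_THRESHOLD:
    print(f"warning: {pending} bytes backed up in the Recv-Q")
```

Whether the kernel's per-socket maximum (`SO_RCVBUF`) is being hit could be checked the same way, by comparing the parsed value against the socket's configured buffer size.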
[RT22378](https://support.isc.org/Ticket/Display.html?id=22378)

Milestone: kea2.5.8. Assignee: Thomas Markwalder.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4298: ns1 shutdown hang in "tcp:checking that BIND 9 doesn't crash on long TCP messages" (Michal Nowak, updated 2024-02-24)

Job [#3639436](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3639436) failed for 028154d416c2a29bea41c4d9658066845539b82a.
jemalloc arenas were merged to `main` and 9.18, but they did not entirely help with the "tcp:checking that BIND 9 doesn't crash on long TCP messages" check (isc-projects/bind9#4038), nor with the `isc_mem_benchmark` check of the `mem_test` unit test (see isc-projects/bind9#4286). That said, I had never before seen the `tcp` check fail four times in a day:
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/3639436
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/3639704
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/3639696
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/3639687
However, the nature of the failure is different: `ns1` was not OOM-killed, but it did not terminate in time; 5 minutes was not enough for it to shut down, and the process was aborted.
```
2023-09-06 00:20:22 INFO:tcp I:tcp_tmp_n9au2yez:checking that BIND 9 doesn't crash on long TCP messages (10)
2023-09-06 00:20:22 INFO:tcp I:tcp_tmp_n9au2yez:sending 300000 time(s): 00010000000100000000000003697363036f72670000fc0001
2023-09-06 00:20:22 INFO:tcp I:tcp_tmp_n9au2yez:............................................................................................................................................................................................................................................................................................................
2023-09-06 00:20:22 INFO:tcp I:tcp_tmp_n9au2yez:sent 4023683 bytes to 10.53.0.1:20597
2023-09-06 00:20:22 INFO:tcp I:tcp_tmp_n9au2yez:exit status: 0
---------------------------- Captured log teardown -----------------------------
2023-09-06 00:25:23 INFO:tcp I:tcp_tmp_n9au2yez:ns1 didn't die when sent a SIGTERM
2023-09-06 00:25:23 ERROR:tcp Failed to stop servers
2023-09-06 00:25:24 INFO:tcp I:tcp_tmp_n9au2yez:Core dump(s) found: /builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez/ns1/core.387947
2023-09-06 00:25:24 INFO:tcp D:tcp_tmp_n9au2yez:backtrace from /builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez/ns1/core.387947:
2023-09-06 00:25:24 INFO:tcp D:tcp_tmp_n9au2yez:--------------------------------------------------------------------------------
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:Core was generated by `/builds/isc-projects/bind9/bin/named/.libs/named -D tcp_tmp_n9au2yez-ns1 -X nam'.
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:Program terminated with signal SIGABRT, Aborted.
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#0 0x00007f8a0a4c3129 in pthread_barrier_wait@GLIBC_2.2.5 () from /lib64/libc.so.6
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:[Current thread is 1 (Thread 0x7f8a09eaa580 (LWP 387947))]
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#0 0x00007f8a0a4c3129 in pthread_barrier_wait@GLIBC_2.2.5 () from /lib64/libc.so.6
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#1 0x00007f8a0b2432a2 in stop_tcp_child_job (arg=0x7f8a09ab6800) at netmgr/tcp.c:589
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#2 0x00007f8a0b243372 in stop_tcp_child (sock=<optimized out>) at netmgr/tcp.c:597
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#3 0x00007f8a0b243b21 in isc__nm_tcp_stoplistening (sock=0x7f8a09a77800) at netmgr/tcp.c:622
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#4 0x00007f8a0b23b359 in isc_nm_stoplistening (sock=<optimized out>) at netmgr/netmgr.c:1699
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#5 0x00007f8a0b23dc62 in isc__nmsocket_stop (listener=0x7f8a09a76e00) at netmgr/netmgr.c:1730
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#6 0x00007f8a0b24183e in isc__nm_streamdns_stoplistening (sock=<optimized out>) at netmgr/streamdns.c:962
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#7 0x00007f8a0b23b360 in isc_nm_stoplistening (sock=<optimized out>) at netmgr/netmgr.c:1702
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#8 0x00007f8a0afcab27 in ns_interface_shutdown (ifp=ifp@entry=0x7f8a09ad7980) at interfacemgr.c:729
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#9 0x00007f8a0afcaf9a in purge_old_interfaces (mgr=mgr@entry=0x7f8a09a70500) at interfacemgr.c:815
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#10 0x00007f8a0afcb13e in ns_interfacemgr_shutdown (mgr=0x7f8a09a70500) at interfacemgr.c:435
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#11 0x0000000000445bf1 in shutdown_server (arg=0x7f8a09a9f700) at server.c:9983
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#12 0x00007f8a0b24b383 in isc__async_cb (handle=<optimized out>) at async.c:111
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#13 0x00007f8a0a977bd3 in ?? () from /lib64/libuv.so.1
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#14 0x00007f8a0a99457b in ?? () from /lib64/libuv.so.1
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#15 0x00007f8a0a97d097 in uv_run () from /lib64/libuv.so.1
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#16 0x00007f8a0b25d1ba in loop_thread (arg=arg@entry=0x7f8a09aac800) at loop.c:282
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#17 0x00007f8a0b26ca20 in thread_body (wrap=0xb788b0) at thread.c:85
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#18 0x00007f8a0b26ca99 in isc_thread_main (func=func@entry=0x7f8a0b25d12f <loop_thread>, arg=0x7f8a09aac800) at thread.c:116
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#19 0x00007f8a0b25e109 in isc_loopmgr_run (loopmgr=0x7f8a09a6f6c0) at loop.c:454
2023-09-06 00:27:07 INFO:tcp D:/builds/isc-projects/bind9/bin/tests/system/tcp_tmp_n9au2yez:#20 0x0000000000426faa in main (argc=16, argv=0x7fff1308dae8) at main.c:1592
2023-09-06 00:27:07 INFO:tcp D:tcp_tmp_n9au2yez:--------------------------------------------------------------------------------
```
```
06-Sep-2023 00:20:58.284 client @0x7f895a786800 10.53.0.1#53464 (isc.org): bad zone transfer request: 'isc.org/IN': non-authoritative zone (NOTAUTH)
06-Sep-2023 00:20:58.284 client @0x7f895a787400 10.53.0.1#53464 (isc.org): bad zone transfer request: 'isc.org/IN': non-authoritative zone (NOTAUTH)
06-Sep-2023 00:20:58.284 client @0x7f895a788000 10.53.0.1#53464 (isc.org): bad zone transfer request: 'isc.org/IN': non-authoritative zone (NOTAUTH)
06-Sep-2023 00:20:58.284 client @0x7f895a788c00 10.53.0.1#53464 (isc.org): bad zone transfer request: 'isc.org/IN': non-authoritative zone (NOTAUTH)
06-Sep-2023 00:20:58.284 client @0x7f895a789800 10.53.0.1#53464 (isc.org): bad zone transfer request: 'isc.org/IN': non-authoritative zone (NOTAUTH)
06-Sep-2023 00:20:58.284 client @0x7f895a78a400 10.53.0.1#53464 (isc.org): bad zone transfer request: 'isc.org/IN': non-authoritative zone (NOTAUTH)
06-Sep-2023 00:20:58.284 client @0x7f895a7a5000 10.53.0.1#53464 (isc.org): bad zone transfer request: 'isc.org/IN': non-authoritative zone (NOTAUTH)
06-Sep-2023 00:20:58.284 client @0x7f895a7a5c00 10.53.0.1#53464 (isc.org): bad zone transfer request: 'isc.org/IN': non-authoritative zone (NOTAUTH)
06-Sep-2023 00:20:58.284 netmgr 0x7f8a09a6f900: Shutting down network manager worker on loop 0x7f8a09aae180(3)
06-Sep-2023 00:20:58.284 netmgr 0x7f8a09a6f900: Shutting down network manager worker on loop 0x7f8a09aad900(2)
```
[core.387947-backtrace.txt](/uploads/dab4601f64759ee9475d504cac179df2/core.387947-backtrace.txt)
[named.run](/uploads/d5b292cffcbd202101133e28d131551b/named.run)
Locally, I can't reproduce it; `ns1` terminates at worst in 210 seconds.

Milestone: May 2024 (9.18.27, 9.18.27-S1, 9.19.24).

---

Issue https://gitlab.isc.org/isc-projects/kea-packaging/-/issues/16: Post audit: review packages for running as root (Tomek Mrugalski, updated 2023-09-21)

@manu's [audit reported](https://gitlab.isc.org/isc-private/kea/-/wikis/Kea-Security-Review-02-2023#8-run-kea-from-an-unprivileged-account) the following issue:
Kea should run as an unprivileged user when possible. At the time of the audit, Ubuntu did that. The goal of this ticket is to check all packages to see whether they run Kea as non-root. If any of them still run as root, they should be updated, or a good explanation of why that can't be done should be recorded here.

Assignee: Wlodzimierz Wencel.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4261: Detect unexpected files created during system test run with pytest runner (Tom Krizek, updated 2023-12-06)

> we should have a check whether named hasn't produced any unexpected files. But that's only tangential to cleaning up the cruft. Perhaps this can take the form of a .gitignore(?) with expected files for each test.
https://gitlab.isc.org/isc-projects/bind9/-/issues/4246#note_395492
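The quoted idea could be sketched as a check comparing a test's working directory against a list of expected-file patterns (the names below are illustrative, not the pytest runner's actual API):

```python
import fnmatch

def unexpected_files(found: list[str], expected_patterns: list[str]) -> list[str]:
    """Return the files that match none of the .gitignore-style patterns."""
    return [f for f in found
            if not any(fnmatch.fnmatch(f, pat) for pat in expected_patterns)]

# Hypothetical expected-file list for one system test.
expected = ["named.run", "named.memstats", "*.jnl", "dig.out.*"]
found = ["named.run", "zone.db.jnl", "core.1234"]

print(unexpected_files(found, expected))  # ['core.1234']
```

A per-test pattern file, read at teardown, would then flag leftovers such as core dumps or stray databases.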
Related #3810

Milestone: Not planned.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4254: zonechecks died mid test (Mark Andrews, updated 2023-08-10)

Job [#3577320](https://gitlab.isc.org/isc-private/bind9/-/jobs/3577320) failed for isc-private/bind9@51f47c4d045f50dd0e72573cd573a5031261fed4:
zonechecks died mid-test, possibly fallout from setting `-e`.
```
2023-08-10 07:15:03 INFO:zonechecks I:zonechecks_tmp_w6aef5m0:checking that we detect a NS which looks like a AAAA record (fail)
2023-08-10 07:15:03 INFO:zonechecks I:zonechecks_tmp_w6aef5m0:checking that we detect a NS which looks like a AAAA record (warn=default)
2023-08-10 07:15:03 INFO:zonechecks I:zonechecks_tmp_w6aef5m0:checking that we detect a NS which looks like a AAAA record (ignore)
2023-08-10 07:15:03 INFO:zonechecks I:zonechecks_tmp_w6aef5m0:checking 'rdnc zonestatus' output
2023-08-10 07:15:03 INFO:zonechecks I:zonechecks_tmp_w6aef5m0:ns1 zone reload queued
```

Milestone: Not planned.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4230: checkds test may fail due to a timing issue (Mark Andrews, updated 2024-01-02)

Job [#3552282](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3552282) failed for 423c9d6716c4b96f0cf939653da47abca267bd23:

```
___________________________ test_checkds_dspublished ___________________________
[gw0] linux -- Python 3.11.4 /usr/bin/python3.11
/builds/isc-projects/bind9/bin/tests/system/checkds/tests_checkds.py:640: in test_checkds_dspublished
checkds_dspublished(named_port, "explicit", "10.53.0.8")
/builds/isc-projects/bind9/bin/tests/system/checkds/tests_checkds.py:308: in checkds_dspublished
keystate_check(parent, zone, "DSPublish")
/builds/isc-projects/bind9/bin/tests/system/checkds/tests_checkds.py:228: in keystate_check
assert val != 0
E assert 0 != 0
```

Milestone: Not planned. Assignee: Tom Krizek.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4153: Run system tests in network namespaces (Tom Krizek, updated 2023-12-14)

Executing system tests under pytest should support isolation using network namespaces on platforms where it's possible. It would simplify running the tests (no root setup required), prevent any network interference, remove weird quirks with port assignment, and make it easier to capture relevant traffic into PCAPs.

Milestone: Not planned. Assignee: Tom Krizek.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4127: dangerfile can't traverse GitLab project boundary (Michal Nowak, updated 2023-06-06)
Job [#3445138](https://gitlab.isc.org/isc-private/bind9/-/jobs/3445138) failed for [c713737cdc6ca2997f75c18ad35715ffb48688e8](https://gitlab.isc.org/isc-private/bind9/-/commit/c713737cdc6ca2997f75c18ad35715ffb48688e8).
`danger-python` crashed when `Backport of MR isc-projects/bind9!7457` was present in the MR description field:
```
$ Backport of MR isc-projects/bind9!7457 ci -f
There was an error when executing dangerfile.py:
GitlabGetError at line 207: 404 Not found
Stacktrace:
File "dangerfile.py", line 207, in <module>
original_mr = proj.mergerequests.get(original_mr_id)
File "/usr/local/lib/python3.9/dist-packages/gitlab/v4/objects/merge_requests.py", line 486, in get
return cast(ProjectMergeRequest, super().get(id=id, lazy=lazy, **kwargs))
File "/usr/local/lib/python3.9/dist-packages/gitlab/exceptions.py", line 338, in wrapped_f
raise error(e.error_message, e.response_code, e.response_body) from e
Failing the build, there is 1 fail.
Feedback: https://gitlab.isc.org/isc-private/bind9/merge_requests/531#note_378750
```
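A fix would presumably need to parse the cross-project reference and fetch the MR from the referenced project instead of the current one. A minimal sketch of just the parsing step (the helper name is hypothetical and not part of the current `dangerfile.py`):

```python
def parse_mr_reference(reference: str) -> tuple[str, int]:
    """Split a cross-project MR reference such as 'isc-projects/bind9!7457'
    into (project path, MR iid), so that code using python-gitlab could do
    gl.projects.get(path).mergerequests.get(iid) instead of assuming the
    current project."""
    path, sep, iid = reference.partition("!")
    if not sep or not path or not iid.isdigit():
        raise ValueError(f"not an MR reference: {reference!r}")
    return path, int(iid)

print(parse_mr_reference("isc-projects/bind9!7457"))  # ('isc-projects/bind9', 7457)
```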
It seems that `danger-python` with the current `dangerfile.py` can't traverse the GitLab project boundary from isc-private to isc-projects (or vice versa), and so it couldn't look for missing isc-private/bind9!531 MR commits that are in the "upstream" isc-projects/bind9!7457 MR.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4119: delv occasionally hangs in tsan tests (Michal Nowak, updated 2023-08-03)

Job [#3442535](https://gitlab.isc.org/isc-private/bind9/-/jobs/3442535) failed for [97f8f0991e3879b047073a7e812e453f620d5c85](https://gitlab.isc.org/isc-private/bind9/-/commit/97f8f0991e3879b047073a7e812e453f620d5c85); also https://gitlab.isc.org/isc-private/bind9/-/jobs/3435673. Locally, I could not reproduce it, and in CI, pytest does not present verbose enough output to identify a stuck check.
The `digdelv` system test sometimes gets stuck in TSAN system tests (https://gitlab.isc.org/isc-private/bind9/-/jobs/3442535), and the CI job times out after an hour as a result.
So far, I have only seen this on BIND 9.18-S.

Milestone: Not planned.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4112: "serve-stale:check prefetch processing of a stale CNAME target" fails on FreeBSD 13 (Michal Nowak, updated 2023-07-07)

Job [#3435305](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3435305) failed for ff3d25a47f9f969669b2e4f5cde10c50f9cdd171 (~"v9.18").
On FreeBSD 13.2, the `check prefetch processing of a stale CNAME target` check [failed](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3435305) [twice](https://gitlab.isc.org/isc-private/bind9/-/jobs/3431983) in recent days:
```
2023-06-02 01:09:52 INFO:serve-stale I:serve-stale_tmp_q8yamlle:check prefetch processing of a stale CNAME target (214)
2023-06-02 01:09:55 INFO:serve-stale I:serve-stale_tmp_q8yamlle:failed
```
This was expected:
```
target.example. 2 IN A 10.53.0.2
```
But this was the answer:
```
target.example. 30 IN A 10.53.0.2
```
We got a stale answer after the client timeout (`; EDE: 3 (Stale Answer): (client timeout)`); the query time was 1840 msec. Locally, I get 2 msec and a non-stale answer.
I was unable to reproduce the problem locally.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4104: ZoneQuota stats counter is not counting everything (Ondřej Surý, updated 2024-02-24)

The `ZoneQuota` counter should log all the hits to `fcount_incr()` returning `ISC_R_QUOTA`, but it does so only in a single place. The counting should be moved to `fctx_incr()`.

Milestone: May 2024 (9.18.27, 9.18.27-S1, 9.19.24). Assignee: Ondřej Surý.

---

Issue https://gitlab.isc.org/isc-projects/bind9/-/issues/4092: timer.c:223:timerevent_destroy(): fatal error: RUNTIME_CHECK(isc_mutex_unlock((&timer->lock)) == ISC_R_SUCCESS) failed (Michal Nowak, updated 2023-05-25)

Job [#3411550](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3411550) failed for 66254cf56d7072833db6d8744e6bcef2109b72e2.
BIND 9.18 `task` unit test failed on `unit:gcc:oraclelinux8:amd64`.
```
[==========] Running 11 test(s).
[ RUN ] manytasks
[ OK ] manytasks
[ RUN ] all_events
[ OK ] all_events
[ RUN ] basic
timer.c:223:timerevent_destroy(): fatal error: RUNTIME_CHECK(isc_mutex_unlock((&timer->lock)) == ISC_R_SUCCESS) failed
../../tests/unit-test-driver.sh: line 36: 8595 Aborted (core dumped) "${TEST_PROGRAM}"
I:task_test:Core dump found: ./core.8595
D:task_test:backtrace from ./core.8595 start
[New LWP 8636]
[New LWP 8595]
[New LWP 8637]
[New LWP 8638]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/builds/isc-projects/bind9/tests/isc/.libs/lt-task_test'.
Program terminated with signal SIGABRT, Aborted.
#0 0x00007f8c5b302aff in raise () from /lib64/libc.so.6
[Current thread is 1 (Thread 0x7f8c3bfff700 (LWP 8636))]
Thread 4 (Thread 0x7f8c412fa700 (LWP 8638)):
#0 0x00007f8c5b68846c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f8c5c4af725 in run (uap=0x7f8c591e1000) at timer.c:632
manager = 0x7f8c591e1000
now = {seconds = 1684976709, nanoseconds = 609640403}
result = <optimized out>
__func__ = "run"
#2 0x00007f8c5c4b4b20 in isc__trampoline_run (arg=0x1973730) at trampoline.c:189
trampoline = 0x1973730
result = <optimized out>
#3 0x00007f8c5b6821da in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f8c5b2ede73 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 3 (Thread 0x7f8c40af9700 (LWP 8637)):
#0 0x00007f8c5b3e4017 in epoll_wait () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f8c5c2460f9 in uv.io_poll () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007f8c5c234a74 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#3 0x00007f8c5c47aa6c in nm_thread (worker0=0x7f8c591f75b8) at netmgr/netmgr.c:698
r = <optimized out>
worker = 0x7f8c591f75b8
mgr = 0x7f8c59036000
__func__ = "nm_thread"
#4 0x00007f8c5c4b4b20 in isc__trampoline_run (arg=0x1974330) at trampoline.c:189
trampoline = 0x1974330
result = <optimized out>
#5 0x00007f8c5b6821da in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#6 0x00007f8c5b2ede73 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 2 (Thread 0x7f8c5ce04140 (LWP 8595)):
#0 0x00007f8c5b3ae9a8 in nanosleep () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f8c5b3dbf48 in usleep () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007f8c5c4ac692 in isc__taskmgr_destroy (managerp=managerp@entry=0x607348 <taskmgr>) at task.c:1041
No locals.
#3 0x00007f8c5c49b4b0 in isc_managers_destroy (netmgrp=netmgrp@entry=0x607338 <netmgr>, taskmgrp=taskmgrp@entry=0x607348 <taskmgr>, timermgrp=timermgrp@entry=0x607340 <timermgr>) at managers.c:99
No locals.
#4 0x00000000004052ee in teardown_managers (state=<optimized out>) at isc.c:84
No locals.
#5 0x0000000000404f64 in _teardown (state=<optimized out>) at task_test.c:91
No locals.
#6 0x00007f8c5be1702e in cmocka_run_one_test_or_fixture () from /lib64/libcmocka.so.0
No symbol table info available.
#7 0x00007f8c5be179e0 in _cmocka_run_group_tests () from /lib64/libcmocka.so.0
No symbol table info available.
#8 0x000000000040516b in main () at task_test.c:1408
r = <optimized out>
Thread 1 (Thread 0x7f8c3bfff700 (LWP 8636)):
#0 0x00007f8c5b302aff in raise () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f8c5b2d5ea5 in abort () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007f8c5c48f5c2 in isc_error_fatal (file=file@entry=0x7f8c5c4c45a6 "timer.c", line=line@entry=223, func=func@entry=0x7f8c5c4d07a0 <__func__.7544> "timerevent_destroy", format=format@entry=0x7f8c5c4c0814 "RUNTIME_CHECK(%s) failed") at error.c:72
args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 0x7f8c3bff9d00, reg_save_area = 0x7f8c3bff9c40}}
#3 0x00007f8c5c4af15f in timerevent_destroy (event0=0x7f8c51800b00) at timer.c:225
timer = 0x7f8c591e10a0
event = 0x7f8c51800b00
__func__ = "timerevent_destroy"
#4 0x00007f8c5c48f7e9 in isc_event_free (eventp=eventp@entry=0x7f8c3bff9d48) at event.c:93
event = <optimized out>
#5 0x0000000000403449 in basic_tick (task=<optimized out>, event=<optimized out>) at task_test.c:444
No locals.
#6 0x00007f8c5c4abf17 in task_run (task=0x7f8c591e73c0) at task.c:815
dispatch_count = 0
finished = false
quantum = <optimized out>
event = 0x7f8c51800b00
result = ISC_R_SUCCESS
dispatch_count = <optimized out>
finished = <optimized out>
event = <optimized out>
result = <optimized out>
quantum = <optimized out>
__func__ = "task_run"
__atomic_load_ptr = <optimized out>
__atomic_load_tmp = <optimized out>
__atomic_load_ptr = <optimized out>
__atomic_load_tmp = <optimized out>
__atomic_load_ptr = <optimized out>
__atomic_load_tmp = <optimized out>
__atomic_load_ptr = <optimized out>
__atomic_load_tmp = <optimized out>
__v = <optimized out>
#7 isc_task_run (task=0x7f8c591e73c0) at task.c:896
No locals.
#8 0x00007f8c5c472579 in isc__nm_async_task (worker=worker@entry=0x7f8c591f7000, ev0=ev0@entry=0x7f8c51805f80) at netmgr/netmgr.c:848
ievent = 0x7f8c51805f80
result = <optimized out>
#9 0x00007f8c5c479d78 in process_netievent (worker=worker@entry=0x7f8c591f7000, ievent=ievent@entry=0x7f8c51805f80) at netmgr/netmgr.c:920
No locals.
#10 0x00007f8c5c47a78e in process_queue (worker=worker@entry=0x7f8c591f7000, type=type@entry=NETIEVENT_TASK) at netmgr/netmgr.c:1013
next = 0x0
ievent = 0x7f8c51805f80
list = {head = 0x0, tail = 0x0}
__func__ = "process_queue"
#11 0x00007f8c5c47b23b in process_all_queues (worker=0x7f8c591f7000) at netmgr/netmgr.c:767
result = <optimized out>
type = 2
reschedule = false
reschedule = <optimized out>
type = <optimized out>
result = <optimized out>
#12 async_cb (handle=0x7f8c591f7360) at netmgr/netmgr.c:796
worker = 0x7f8c591f7000
#13 0x00007f8c5c2342f1 in uv.async_io.part () from /lib64/libuv.so.1
No symbol table info available.
#14 0x00007f8c5c245d15 in uv.io_poll () from /lib64/libuv.so.1
No symbol table info available.
#15 0x00007f8c5c234a74 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#16 0x00007f8c5c47aa6c in nm_thread (worker0=0x7f8c591f7000) at netmgr/netmgr.c:698
r = <optimized out>
worker = 0x7f8c591f7000
mgr = 0x7f8c59036000
__func__ = "nm_thread"
#17 0x00007f8c5c4b4b20 in isc__trampoline_run (arg=0x1976840) at trampoline.c:189
trampoline = 0x1976840
result = <optimized out>
#18 0x00007f8c5b6821da in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#19 0x00007f8c5b2ede73 in clone () from /lib64/libc.so.6
No symbol table info available.
D:task_test:backtrace from ./core.8595 end
FAIL task_test (exit status: 134)
```https://gitlab.isc.org/isc-projects/kea/-/issues/2856mysql v4 backend slowing down while using random and flq allocator2023-06-19T10:40:00ZWlodzimierz Wencelmysql v4 backend slowing down while using random and flq allocatorIt looks like Kea performance is hugely impacted when random or flq allocator is used. Screen of the charts that show performance degradation are attached. Results are generated using code from isc-projects/kea#2843
![Screenshot_2023-05...It looks like Kea performance is hugely impacted when random or flq allocator is used. Screen of the charts that show performance degradation are attached. Results are generated using code from isc-projects/kea#2843
![Screenshot_2023-05-11_at_13.56.08](/uploads/8fd6f88e0cd8078c130c11b224ea749c/Screenshot_2023-05-11_at_13.56.08.png)
![Screenshot_2023-05-11_at_13.56.38](/uploads/7a82938d13b13dec1b7c80388f3b4b1c/Screenshot_2023-05-11_at_13.56.38.png)
The issue is not observed with the v6 random allocator:
![Screenshot_2023-05-11_at_14.01.09](/uploads/39181cd28e2e1fad03c70e2319a88fc1/Screenshot_2023-05-11_at_14.01.09.png)
[full internal report](https://jenkins.aws.isc.org/view/Kea-manual/job/kea-manual/job/performance/94/artifact/qa-dhcp/kea/performance-jenkins/report.html) (it's heavy, please wait patiently for it to load)
To check all allocator-related tests, go to the `Resource Consumption` tab; the allocator tests start at Scenario 7.
MySQL-related scenarios:
* v4: 10, 11, 12, 19, 20, 21, 28, 29, 30
* v6: 39, 40, 45, 46, 51, 52
(number of tests will be reduced for regular monthly runs)
Additionally, I should mention that in previous runs (on master) I observed issues with the iterative allocator in v4 MySQL as well. It looked like Kea stopped assigning leases after assigning ~6 million leases, but I couldn't reproduce it on isc-projects/kea#2843 (I will repeat those tests on master overnight).
![Screenshot_2023-05-11_at_14.11.01](/uploads/6da24ed37a758e3e86653a31a2b2fbb1/Screenshot_2023-05-11_at_14.11.01.png)
next-stable-2.6
Marcin Siodelski

https://gitlab.isc.org/isc-projects/bind9/-/issues/4014
Implement tests for maximum global and idle time for incoming XFR
2023-05-04T14:23:14Z
Ondřej Surý

Spin-off from !7810, so that we do not forget to write pytests for the maximum global and idle time for incoming XFR.
Not planned
Tom Krizek

https://gitlab.isc.org/isc-projects/bind9/-/issues/4013
Add more tests for #4001 and #4002
2023-07-03T15:47:55Z
Ondřej Surý

This is a follow-up from !7805 to add more tests for the source-port configuration.
To quote @pspacek:
> Well, this is a pretty large change and needs tests. If nothing else I would like to see what happens if:
>
> * attempt to open TCP connection ends up in packet black-hole
> * connection is established and the remote side does not respond (established connection hangs)
> * the remote side responds with something which does not parse as DNS
> * the remote side sends a mismatching NOTIFY answer (say, a different zone name)
Not planned
Tom Krizek

https://gitlab.isc.org/isc-projects/bind9/-/issues/4007
mkeys: exceeded time limit waiting for 'dump_done' in ns5/named.run
2023-04-06T12:42:02Z
Michal Nowak

Job [#3300214](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3300214) failed for 8f73f5d0886d9e0f57f593fbf3bae862d13f9853:
```
I:mkeys:check 'rndc managed-keys' and islands of trust now that root is reachable (39)
I:mkeys:exceeded time limit waiting for 'dump_done' in ns5/named.run
```
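The failed check above is a bounded wait for a log line. A minimal sketch of such a retry loop, with the polled condition supplied by the caller; the names and the 20-second default are illustrative, not the actual test harness API:

```
import time

def wait_for(predicate, timeout=20.0, interval=0.1):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True on success and False when the time limit is exceeded,
    which is the failure mode reported above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Example: the condition becomes true on the third poll.
polls = iter([False, False, True])
print(wait_for(lambda: next(polls), timeout=1.0, interval=0.01))  # True
```

Raising the `timeout` argument from 20 to 60 seconds would correspond to the bump proposed for this test.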
I've seen the 20-second limit exceeded several times.
Bumping it to 60 might be a good thing.
Not planned

https://gitlab.isc.org/isc-projects/bind9/-/issues/3987
Change DNSKEY TTL of inline-signed zone
2024-02-24T07:55:08Z
Gerald Vogt

### Description
I have a few zones using inline signing which I originally set up with a 2d TTL. Because of this, the existing DNSKEY RRs also have a 2d TTL. I have been trying to reduce the TTL to 1d, but it seems there is no supported way or tool to do so.
I have set dnskey-ttl to 1d and replaced the keys, but all DNSKEY RRs still have a 2d TTL. Setting it on the key with dnssec-settime doesn't help either; the man pages specifically mention:
```
This option sets the default TTL to use for this key when it is converted into a DNSKEY RR.
This is the TTL used when the key is imported into a zone, unless there was already a DNSKEY
RRset in place, in which case the existing TTL takes precedence.
```
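For context, in BIND 9.16 the `dnskey-ttl` value comes from a `dnssec-policy`. A minimal sketch (policy name, zone, and algorithm choice are illustrative); per the man page quoted above, this still does not override the TTL of a DNSKEY RRset already present in the zone, which is exactly the problem reported here:

```
dnssec-policy "lower-dnskey-ttl" {
    // Desired TTL for DNSKEY RRs introduced under this policy.
    dnskey-ttl 1d;
    keys {
        csk lifetime unlimited algorithm ecdsap256sha256;
    };
};

zone "example.com" {
    type primary;
    file "example.com.db";
    dnssec-policy "lower-dnskey-ttl";
    inline-signing yes;
};
```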
Running AlmaLinux 9 bind-9.16.23-5.el9_1.x86_64.
### Request
Add some way to change the TTL of DNSKEY RRs in inline-signed zones.
### Links / references
I have found a thread from 2016 about the same problem: https://www.mail-archive.com/bind-users@lists.isc.org/msg23186.html
May 2024 (9.18.27, 9.18.27-S1, 9.19.24)
Matthijs Mekking <matthijs@isc.org>

https://gitlab.isc.org/isc-projects/bind9/-/issues/3913
timing issue in statschannel system test
2023-03-03T08:07:02Z
Tom Krizek

In job [#3207137](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3207137) the `statschannel` test failed on the check `fetch zone 'dnssec' stats data after updating DNSKEY RRset (7)`.
The test expects 1 zsk sign operation and 1 ksk sign operation. These happen:
```
02-Mar-2023 09:10:06.228 add re-sign dnssec. 300 IN RRSIG DNSKEY 13 1 300 20230401091005 20230302081006 7913 dnssec. MT/JQRYMGsWYBwyRkdnj+tB7h2oSbpVfZ9n+DMr1oyDdXtRSoxGwydIO a2GRjUgP5kQ/N5JWUQMK2LkvThFvsA==
02-Mar-2023 09:10:06.228 add re-sign dnssec. 300 IN RRSIG SOA 13 1 300 20230401091006 20230302081006 35685 dnssec. unerVqERet0Xr8KV19/D174G3moa5NVkDoXIbvpc3cPQuzCd/z+eYC2h zuLlEZawAFe2NTI3jw+TDkIgKfpnCw==
```
Then the test waits for `next key event`:
```
02-Mar-2023 09:10:06.228 zone dnssec/IN: next key event: 02-Mar-2023 10:10:06.220
```
However, 100 milliseconds later, before the test retrieves the zone statistics from the statschannel, additional signing operations take place:
```
02-Mar-2023 09:10:06.328 zone_maintenance: zone dnssec/IN: enter
02-Mar-2023 09:10:06.328 zone_resigninc: zone dnssec/IN: enter
02-Mar-2023 09:10:06.328 zone_journal: zone dnssec/IN: enter
02-Mar-2023 09:10:06.328 writing to journal
02-Mar-2023 09:10:06.328 del dnssec. 300 IN SOA mname1. . 4 20 20 1814400 3600
02-Mar-2023 09:10:06.328 del re-sign dnssec. 300 IN RRSIG NS 13 1 300 20230309211006 20230302080959 35685 dnssec. kaS58YnGbS/p3V8088p+yREiSINte5ETOr/3QvFU9XsHyuxDqRLkux6i XqY5/PSkQFm094MO/wNLdbTp8LaB2g==
02-Mar-2023 09:10:06.328 del re-sign a.dnssec. 300 IN RRSIG NSEC 13 2 300 20230309211006 20230302080959 35685 dnssec. 12MDX1o0qgh0T3SoM0aKsu6AjJYcueqHpOT//xD0l/EjFJgBOVu3VJpA 0OO3R/VdQzFeBq1tgY88dpLnTwvKUw==
02-Mar-2023 09:10:06.328 del re-sign a.dnssec. 300 IN RRSIG MX 13 2 300 20230309211006 20230302080959 35685 dnssec. vGNl05h/fsvnnHvU1RX3yUuCRS5Egqd9Mr5HxZ8J3uZAleQNVxUa+gG9 f9ZJ+q6+Zp8Kz8AFHqCxN1vMq0+5zw==
02-Mar-2023 09:10:06.332 del re-sign mail.dnssec. 300 IN RRSIG A 13 2 300 20230309211006 20230302080959 35685 dnssec. RRafXWoDHpbErUBXOVzI3rbREe8ezmj8QEHpHampjKVNJrfzFWbP1cku meg3TPsTCQdy/1v6v4cvfB03SusQoQ==
02-Mar-2023 09:10:06.332 del re-sign a.dnssec. 300 IN RRSIG A 13 2 300 20230309211006 20230302080959 35685 dnssec. ofun5lwcYTsK0OawryLIViK/sdJHPSHT7RxoQR5ErmkAPpjZvTIoE4EO ua95xdHE1X5h/hnJCBYPpl5kHS+Lfg==
02-Mar-2023 09:10:06.332 del re-sign mail.dnssec. 300 IN RRSIG NSEC 13 2 300 20230309211006 20230302080959 35685 dnssec. GspggAHIxa6RQMbauI4On2esTEWifodSorcCjxqlwtZ71XOF7LdtWTbw HLf5o1xYP76o2RRN3CAZRPmAOs1Lkw==
02-Mar-2023 09:10:06.332 del re-sign ns2.dnssec. 300 IN RRSIG NSEC 13 2 300 20230309211006 20230302080959 35685 dnssec. FJXd9+ncwyBxhrMpmAO3xs1sEGlP3g0EhmOk9IHa4/Ljgv6qIJYIL7hW dz2JbfYjI3oI+QqbCEM/5mIM9AO5Jw==
02-Mar-2023 09:10:06.332 del re-sign ns2.dnssec. 300 IN RRSIG A 13 2 300 20230309211006 20230302080959 35685 dnssec. gzliUistEykm6hpMGY8ForYJewFzaUB4rPOJxHROOupf8jaX+GhbelWU tfcnLKmjypuWEIdyFkGugFpzE/slCA==
02-Mar-2023 09:10:06.332 del re-sign dnssec. 300 IN RRSIG SOA 13 1 300 20230401091006 20230302081006 35685 dnssec. unerVqERet0Xr8KV19/D174G3moa5NVkDoXIbvpc3cPQuzCd/z+eYC2h zuLlEZawAFe2NTI3jw+TDkIgKfpnCw==
02-Mar-2023 09:10:06.332 add dnssec. 300 IN SOA mname1. . 5 20 20 1814400 3600
02-Mar-2023 09:10:06.332 add re-sign dnssec. 300 IN RRSIG NS 13 1 300 20230401084359 20230302081006 35685 dnssec. C+vYMDHLb6wperZxXwTAAK3vM9XJ+7/WAGddPyQcdI+fp6hRXL6UM4hz C5qj8+hnKY0E2bHq1jYLoQaw62M/mw==
02-Mar-2023 09:10:06.332 add re-sign a.dnssec. 300 IN RRSIG NSEC 13 2 300 20230401084359 20230302081006 35685 dnssec. h1X7d4NBfaVyRJnkzcsiyjAZPXufVBgKPw08wxAm8Zx1W8N5Tg0WS2/m Xx6MyytvPoCSFFvFOQLkCXurucZUww==
02-Mar-2023 09:10:06.332 add re-sign a.dnssec. 300 IN RRSIG MX 13 2 300 20230401084359 20230302081006 35685 dnssec. J5uy5bjeXLdt73gV8nv4/dbj+cjOIcyFHuh6Qp/sdqFE/sswo8izCdRU 3/iYmjwLS9EeNs6dEb2xx0l9heRmDg==
02-Mar-2023 09:10:06.332 add re-sign mail.dnssec. 300 IN RRSIG A 13 2 300 20230401084359 20230302081006 35685 dnssec. wCLJPDVyC8ja84GHqaA/OnUrOocpAKNOZiTQJdHJkwkkd0BbVxLazYiP fE2rKG54VIFvGxC/EcXavXcqiFeQEg==
02-Mar-2023 09:10:06.332 add re-sign a.dnssec. 300 IN RRSIG A 13 2 300 20230401084359 20230302081006 35685 dnssec. /BXz8YtxNfEo3tJYFGRHVjMQQDAtzZ8ne2nJOm6CQm3d803qzs5JaHLy /4hmNB5oTEz1l9kXw3LQnB94iuH/yA==
02-Mar-2023 09:10:06.332 add re-sign mail.dnssec. 300 IN RRSIG NSEC 13 2 300 20230401084359 20230302081006 35685 dnssec. QWpfnMgaVTitdgvpdIBTimLxJY+YUPcCvcQcVlnVta8FkmhSxgRhLkRs NGM9H1J5o+4K9uFwso3bSmLTADq1YA==
02-Mar-2023 09:10:06.332 add re-sign ns2.dnssec. 300 IN RRSIG NSEC 13 2 300 20230401084359 20230302081006 35685 dnssec. o/RJ6s2Jcccttnk2PxugOcjnyQ9kX9BfxEu5nKLZglAcaFAl7pnPjhNm 9455gOUH62iNPlxHS3KXrac8HuruIQ==
02-Mar-2023 09:10:06.332 add re-sign ns2.dnssec. 300 IN RRSIG A 13 2 300 20230401084359 20230302081006 35685 dnssec. E59RXGdRykOWp+Oad6UJ6DjIJDJT0vtX326pRcoW54obolq/sc2ZjCha GZk634z/MvcNFHWaQnF2rmtOj0SuGg==
02-Mar-2023 09:10:06.332 add re-sign dnssec. 300 IN RRSIG SOA 13 1 300 20230401091006 20230302081006 35685 dnssec. aSQQxKw8rg8Pbn6bacb22o993cEwzPXchCB9wQM4nLjMWq5VgW5JIQeG Tkz2VIWj9dPCQVRv4xKInZmBjHSwfg==
```
This results in the test detecting 9 extra signature operations that it does not expect, and it therefore fails.
Not planned
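One way to harden a check like this against just-in-time re-signing would be to poll the statistics until the signing counters stop changing before comparing them. A hedged sketch; the `read_counters` callback standing in for a statschannel fetch is illustrative, not the actual test API:

```
import time

def read_until_stable(read_counters, settle=0.5, timeout=10.0):
    """Return the counters once two reads `settle` seconds apart agree.

    read_counters() stands in for fetching the per-zone sign-operation
    counts from the statschannel; waiting for two identical reads avoids
    racing against re-signs that land moments after the expected ones.
    """
    deadline = time.monotonic() + timeout
    last = read_counters()
    while time.monotonic() < deadline:
        time.sleep(settle)
        current = read_counters()
        if current == last:
            return current
        last = current
    raise TimeoutError("counters did not stabilize before the deadline")

# Example with a fake statschannel that settles after the second read.
reads = iter([{"zsk": 1, "ksk": 1}, {"zsk": 10, "ksk": 1}, {"zsk": 10, "ksk": 1}])
print(read_until_stable(lambda: next(reads), settle=0.01))  # {'zsk': 10, 'ksk': 1}
```

The trade-off is that a stabilization wait slows the test down by at least one `settle` interval even when no extra re-signs occur.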