# BIND issues
https://gitlab.isc.org/isc-projects/bind9/-/issues

## #3369: BIND ARM contains redundant and confusing information
Greg Choules · 2022-05-19T18:46:38Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/3369

In most (if not all) versions of the BIND 9 Administrator Reference Manual (9.11-S, 9.16, and 9.18), the **Feature Changes** section contains the following statement:
`The default value of max-stale-ttl has changed from 1 week to 12 hours.`
This change happened in 9.16.4 as a result of [GL#1877](https://gitlab.isc.org/isc-projects/bind9/-/issues/1877) and the default value has changed again since then, so the statement should have been removed from later editions of the ARM, or updated to reflect the new value.
Please can it be removed going forward. Thanks.

## #3135: dnstap-read: add sorting option
martinvonwittich · 2022-02-10T15:06:08Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/3135

### Description
`dnstap-read` currently prints the DNS packets in the order in which they were stored in the file.
Unfortunately, the packets aren't necessarily stored in chronological order (probably due to the fact that dnstap logging is understandably a low-priority task for BIND).
For example:
```
09-Feb-2022 02:33:36.294 CQ 127.0.0.1:41718 -> 127.0.0.1:0 UDP 33b example.com/IN/MX
09-Feb-2022 02:33:36.334 RR 192.168.1.2:55163 <- 192.168.1.1:53 UDP 71b example.com/IN/MX
09-Feb-2022 02:33:36.294 RQ 192.168.1.2:55163 -> 192.168.1.1:53 UDP 33b example.com/IN/MX
09-Feb-2022 02:33:36.334 CR 127.0.0.1:41718 <- 127.0.0.1:0 UDP 102b example.com/IN/MX
09-Feb-2022 02:33:38.453 CQ 127.0.0.1:57293 -> 127.0.0.1:0 UDP 33b example.com/IN/MX
09-Feb-2022 02:33:38.453 CR 127.0.0.1:57293 <- 127.0.0.1:0 UDP 102b example.com/IN/MX
```
Obviously the resolver query (RQ) comes before the resolver response (RR), and while the timestamps reflect this, the ordering does not. This can make reading the logs rather confusing, especially when piping `dnstap-read -p` or `dnstap-read -y` through a pager - when I'm looking at the RQ, then I expect to be able to search for the RR e.g. with `/55163`, but as `/` searches forward, I won't find it.
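Until such a switch exists, the output can be sorted chronologically in post-processing. This is a minimal sketch (the function name is mine); it assumes the default one-line output format shown above, with the timestamp in the first two fields, and that the whole capture fits in memory:

```python
from datetime import datetime

def sort_dnstap_lines(lines):
    """Sort dnstap-read output lines by their leading timestamp
    (e.g. "09-Feb-2022 02:33:36.294")."""
    def key(line):
        day, clock = line.split()[:2]
        return datetime.strptime(f"{day} {clock}", "%d-%b-%Y %H:%M:%S.%f")
    # sorted() is stable, so packets with equal timestamps keep file order
    return sorted(lines, key=key)
```

With a small `main` reading stdin, this could be used as a filter, e.g. `dnstap-read dnstap.log | python3 sort_dnstap.py` (the script name is hypothetical). It would not work on the multi-line `-y` (YAML) output.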
### Request
Add the ability to sort the packets chronologically, with a switch.

Status: Not planned

## #4366: XFR (dispatch) doesn't shutdown TCP connection on timeout
Ondřej Surý · 2023-12-05T15:50:25Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/4366

After switching the XFR code to use `dns_dispatch`, the TCP connection doesn't get cancelled properly when `dns_dispatch_done()` is called; it waits for the TCP connection to time out on the server side.

## #4341: Investigate the memory spike when the cache is cold
Ondřej Surý · 2023-12-05T15:58:56Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/4341

Assignee: Ondřej Surý

## #4075: Data race lib/dns/qp.c in dns_qpmulti_commit
Michal Nowak · 2023-11-01T11:29:59Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/4075

Job [#3393759](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3393759) failed for 35094195cfa4d0e24ae0c9f9834814441fec9f97:
```
WARNING: ThreadSanitizer: data race
Write of size 8 at 0x000000000001 by main thread (mutexes: write M1):
#0 dns_qpmulti_commit lib/dns/qp.c:1172:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#1 dns_zt_mount lib/dns/zt.c:146:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#2 dns_view_addzone lib/dns/view.c:769:11 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#3 configure_zone bin/named/server.c:6832:3 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#4 catz_addmodzone_cb bin/named/server.c:2821:11 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#5 isc__async_cb lib/isc/async.c:112:3 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#6 uv__async_io /usr/src/libuv-v1.44.1/src/unix/async.c:163:5 (BuildId: 120c450d14885aa5308bc95c4ea77de2c2b1cc36)
#7 thread_body lib/isc/thread.c:87:8 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#8 isc_thread_main lib/isc/thread.c:118:2
#9 isc_loopmgr_run lib/isc/loop.c:452:2 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#10 main bin/named/main.c:1532:2 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
Previous read of size 8 at 0x000000000001 by thread T1 (mutexes: write M2):
#0 reader_open lib/dns/qp.c:1251:22 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#1 dns_qpmulti_query lib/dns/qp.c:1269:26 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#2 dns_zt_find lib/dns/zt.c:179:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#3 dns_view_findzone lib/dns/view.c:779:10 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#4 dns__catz_zones_merge lib/dns/catz.c:562:17 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#5 dns__catz_update_cb lib/dns/catz.c:2474:11 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#6 isc__work_cb lib/isc/work.c:30:2 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#7 uv__queue_work /usr/src/libuv-v1.44.1/src/threadpool.c:326:3 (BuildId: 120c450d14885aa5308bc95c4ea77de2c2b1cc36)
Location is heap block of size 168 at 0x000000000020 allocated by main thread:
#0 malloc <null> (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#1 mallocx lib/isc/./jemalloc_shim.h:65:14 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#2 mem_get lib/isc/mem.c:305:8
#3 isc__mem_get lib/isc/mem.c:674:8 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#4 dns_qpmulti_create lib/dns/qp.c:1375:25 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#5 dns_zt_create lib/dns/zt.c:104:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#6 dns_view_create lib/dns/view.c:137:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#7 create_view bin/named/server.c:6440:11 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#8 load_configuration bin/named/server.c:9118:12 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#9 loadconfig bin/named/server.c:10305:11 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#10 named_server_reconfigcommand bin/named/server.c:10711:2
#11 named_control_docommand bin/named/control.c:244:12 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#12 control_command bin/named/controlconf.c:401:18 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#13 isc__async_cb lib/isc/async.c:112:3 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#14 uv__async_io /usr/src/libuv-v1.44.1/src/unix/async.c:163:5 (BuildId: 120c450d14885aa5308bc95c4ea77de2c2b1cc36)
#15 thread_body lib/isc/thread.c:87:8 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#16 isc_thread_main lib/isc/thread.c:118:2
#17 isc_loopmgr_run lib/isc/loop.c:452:2 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#18 main bin/named/main.c:1532:2 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
Mutex M2 (0x000000000032) created at:
#0 pthread_mutex_init <null> (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#1 dns_qpmulti_create lib/dns/qp.c:1380:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#2 dns_zt_create lib/dns/zt.c:104:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#3 dns_view_create lib/dns/view.c:137:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#4 create_view bin/named/server.c:6440:11 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#5 load_configuration bin/named/server.c:9118:12 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#6 loadconfig bin/named/server.c:10305:11 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#7 named_server_reconfigcommand bin/named/server.c:10711:2
#8 named_control_docommand bin/named/control.c:244:12 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#9 control_command bin/named/controlconf.c:401:18 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#10 isc__async_cb lib/isc/async.c:112:3 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#11 uv__async_io /usr/src/libuv-v1.44.1/src/unix/async.c:163:5 (BuildId: 120c450d14885aa5308bc95c4ea77de2c2b1cc36)
#12 thread_body lib/isc/thread.c:87:8 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#13 isc_thread_main lib/isc/thread.c:118:2
#14 isc_loopmgr_run lib/isc/loop.c:452:2 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#15 main bin/named/main.c:1532:2 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
Mutex M2 (0x000000000035) created at:
#0 pthread_mutex_init <null> (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#1 dns_catz_new_zone lib/dns/catz.c:818:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#2 dns_catz_add_zone lib/dns/catz.c:895:11 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#3 configure_catz_zone bin/named/server.c:3034:11 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#4 configure_catz bin/named/server.c:3178:3
#5 configure_view bin/named/server.c:4160:3 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#6 load_configuration bin/named/server.c:9172:12 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#7 run_server bin/named/server.c:9982:2 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#8 isc__async_cb lib/isc/async.c:112:3 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#9 uv__async_io /usr/src/libuv-v1.44.1/src/unix/async.c:163:5 (BuildId: 120c450d14885aa5308bc95c4ea77de2c2b1cc36)
#10 thread_body lib/isc/thread.c:87:8 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#11 isc_thread_main lib/isc/thread.c:118:2
#12 isc_loopmgr_run lib/isc/loop.c:452:2 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#13 main bin/named/main.c:1532:2 (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
Thread T1 (running) created by thread T2 at:
#0 pthread_create <null> (BuildId: 5fea3abc3538900c17a2b1b7ac40c22484083db8)
#1 uv_thread_create_ex /usr/src/libuv-v1.44.1/src/unix/thread.c:279:9 (BuildId: 120c450d14885aa5308bc95c4ea77de2c2b1cc36)
#2 uv_once /usr/src/libuv-v1.44.1/src/unix/thread.c:440:7 (BuildId: 120c450d14885aa5308bc95c4ea77de2c2b1cc36)
#3 dns__catz_timer_cb lib/dns/catz.c:2117:2 (BuildId: 082634ba54e2c48b299bb1acd04e5ce6303e9808)
#4 timer_cb lib/isc/timer.c:111:2 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#5 uv__run_timers /usr/src/libuv-v1.44.1/src/timer.c:178:5 (BuildId: 120c450d14885aa5308bc95c4ea77de2c2b1cc36)
#6 thread_body lib/isc/thread.c:87:8 (BuildId: 0061f9b047bdfaf5adff2ded360e7c7f64f4f361)
#7 thread_run lib/isc/thread.c:102:14
SUMMARY: ThreadSanitizer: data race lib/dns/qp.c:1172:2 in dns_qpmulti_commit
```
It might be another instance of isc-projects/bind9#4073, but since it happened during the `catz` system test, it is perhaps also interesting to @aram.

Status: Not planned · Assignee: Tony Finch

## #3774: TCP, DNS over TCP, and DNS over TLS unit tests fail or hang on Dragonfly BSD
Michal Nowak · 2023-05-29T13:13:36Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/3774

The `tcp_test`, `tcpdns_test`, `tls_test`, and `tlsdns_test` unit tests fail on Dragonfly BSD 6.2.1 in the `*_noop` and `*_noresponse` tests like this:
```
[==========] Running 6 test(s).
[ RUN ] tlsdns_noop
Could not run test: 0x30 != 0
[ LINE ] --- netmgr_common.c:621: error: Failure!0 != 0x1
[ LINE ] --- netmgr_common.c:650: error: Failure!Test teardown failed
[ ERROR ] tlsdns_noop
[ RUN ] tlsdns_noresponse
loop.c:343: REQUIRE(loopmgrp != ((void *)0) && *loopmgrp == ((void *)0)) failed, back trace
0x80068d0bc <isc_assertion_typetotext+0x6c> at /home/newman/bind9/lib/isc/.libs/libisc-9.19.9-dev.so
0x80068d02a <isc_assertion_failed+0xa> at /home/newman/bind9/lib/isc/.libs/libisc-9.19.9-dev.so
0x80069f1aa <isc_loopmgr_create+0x4ea> at /home/newman/bind9/lib/isc/.libs/libisc-9.19.9-dev.so
0x40627d <setup_loopmgr+0x4d> at /home/newman/bind9/tests/isc/.libs/tlsdns_test
0x404f75 <setup_netmgr_test+0x1e5> at /home/newman/bind9/tests/isc/.libs/tlsdns_test
0x405989 <stream_noresponse_setup+0x9> at /home/newman/bind9/tests/isc/.libs/tlsdns_test
0x80061cb95 <_test_realloc+0x485> at /usr/local/lib/libcmocka.so.0
0x80061d3b6 <_cmocka_run_group_tests+0x326> at /usr/local/lib/libcmocka.so.0
0x403a87 <main+0x47> at /home/newman/bind9/tests/isc/.libs/tlsdns_test
FAIL tlsdns_test (exit status: 134)
```
```
(gdb) bt
#0 0x00000008015cc79c in lwp_kill () from /lib/libc.so.8
#1 0x00000008013607f2 in _thr_send_sig () from /usr/lib/libpthread.so.0
#2 0x0000000801357ee5 in raise () from /usr/lib/libpthread.so.0
#3 0x0000000801666dff in abort () from /lib/libc.so.8
#4 0x000000080068d02f in isc_assertion_failed (file=file@entry=0x8006ce806 "loop.c", line=line@entry=343, type=type@entry=isc_assertiontype_require,
cond=cond@entry=0x8006c74b0 "loopmgrp != ((void *)0) && *loopmgrp == ((void *)0)") at assertions.c:50
#5 0x000000080069f1aa in isc_loopmgr_create (mctx=<optimized out>, nloops=<optimized out>, loopmgrp=loopmgrp@entry=0x408d60 <loopmgr>) at loop.c:219
#6 0x000000000040627d in setup_loopmgr (state=state@entry=0x802022500) at isc.c:73
#7 0x0000000000404f75 in setup_netmgr_test (state=0x802022500) at netmgr_common.c:161
#8 0x0000000000405989 in stream_noresponse_setup (state=<optimized out>) at netmgr_common.c:716
#9 0x000000080061cb95 in ?? () from /usr/local/lib/libcmocka.so.0
#10 0x000000080061d3b6 in _cmocka_run_group_tests () from /usr/local/lib/libcmocka.so.0
#11 0x0000000000403a87 in main () at tlsdns_test.c:162
```

Milestone: BIND 9.19.x · Assignee: Artem Boldariev

## #3771: Segmentation fault in the dupsigs system test
Ondřej Surý · 2023-01-05T09:03:25Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/3771

From https://gitlab.isc.org/isc-projects/bind9/-/jobs/3040116:
```
D:dupsigs:Core was generated by `/builds/isc-projects/bind9/bin/named/.libs/named -D dupsigs-ns1 -X named.lock -'.
D:dupsigs:Program terminated with signal SIGABRT, Aborted.
D:dupsigs:#0 futex_wait_cancelable (private=0, expected=0, futex_word=0x7f8ba4c0a4c0) at ../sysdeps/nptl/futex-internal.h:186
D:dupsigs:[Current thread is 1 (Thread 0x7f8bc717e140 (LWP 1506))]
D:dupsigs:#0 futex_wait_cancelable (private=0, expected=0, futex_word=0x7f8ba4c0a4c0) at ../sysdeps/nptl/futex-internal.h:186
D:dupsigs:#1 __pthread_cond_wait_common (abstime=0x0, clockid=0, mutex=0x7f8ba4c0a450, cond=0x7f8ba4c0a498) at pthread_cond_wait.c:508
D:dupsigs:#2 __pthread_cond_wait (cond=0x7f8ba4c0a498, mutex=0x7f8ba4c0a450) at pthread_cond_wait.c:638
D:dupsigs:#3 0x00007f8bca7d073b in rwlock_lock (rwl=0x7f8ba4c0a448, type=isc_rwlocktype_read) at rwlock.c:222
D:dupsigs:#4 0x00007f8bca7d116a in isc__rwlock_lock (rwl=0x7f8ba4c0a448, type=isc_rwlocktype_read) at rwlock.c:331
D:dupsigs:#5 0x00007f8bca40560a in findnodeintree (rbtdb=0x7f8ba4c0a300, tree=0x7f8ba4c1f190, name=0x7fffe325a5f0, create=false, nodep=0x7fffe325a8a8) at rbtdb.c:2861
D:dupsigs:#6 0x00007f8bca405a81 in findnode (db=0x7f8ba4c0a300, name=0x7fffe325a5f0, create=false, nodep=0x7fffe325a8a8) at rbtdb.c:2925
D:dupsigs:#7 0x00007f8bca348f65 in dns_db_findnode (db=0x7f8ba4c0a300, name=0x7fffe325a5f0, create=false, nodep=0x7fffe325a8a8) at db.c:434
D:dupsigs:#8 0x00007f8bca382b5a in dns_db_createsoatuple (db=0x7f8ba4c0a300, ver=0x7f8ba4c26180, mctx=0x7f8bc671c800, op=DNS_DIFFOP_EXISTS, tp=0x7fffe325b648) at journal.c:141
D:dupsigs:#9 0x00007f8bca268309 in ns_xfr_start (client=0x7f8bc619d400, reqtype=252) at xfrout.c:968
D:dupsigs:#10 0x00007f8bca255186 in ns_query_start (client=0x7f8bc619d400, handle=0x7f8bc6c9bfc0) at query.c:11979
D:dupsigs:#11 0x00007f8bca220e82 in ns__client_request (handle=0x7f8bc6c9bfc0, eresult=ISC_R_SUCCESS, region=0x7fffe325c320, arg=0x7f8bc6c94380) at client.c:2239
D:dupsigs:#12 0x00007f8bca76e703 in streamdns_on_complete_dnsmessage (dnsasm=0x7f8bc6c3e300, region=0x7fffe325c320, sock=0x7f8bc6deec00, transphandle=0x7f8bc6c9bc40) at netmgr/streamdns.c:144
D:dupsigs:#13 0x00007f8bca76e8c8 in streamdns_on_dnsmessage_data_cb (dnsasm=0x7f8bc6c3e300, result=ISC_R_SUCCESS, region=0x7fffe325c320, cbarg=0x7f8bc6deec00, userarg=0x7f8bc6c9bc40) at netmgr/streamdns.c:203
D:dupsigs:#14 0x00007f8bca76e1a3 in isc__dnsstream_assembler_handle_message (dnsasm=0x7f8bc6c3e300, userarg=0x7f8bc6c9bc40) at ./include/isc/dnsstream.h:338
D:dupsigs:#15 0x00007f8bca76e38b in isc_dnsstream_assembler_incoming (dnsasm=0x7f8bc6c3e300, userarg=0x7f8bc6c9bc40, buf=0x7f8bc6580800, buf_size=55) at ./include/isc/dnsstream.h:367
D:dupsigs:#16 0x00007f8bca76e9b3 in streamdns_handle_incoming_data (sock=0x7f8bc6deec00, transphandle=0x7f8bc6c9bc40, data=0x7f8bc6580800, len=55) at netmgr/streamdns.c:239
D:dupsigs:#17 0x00007f8bca76ff05 in streamdns_readcb (handle=0x7f8bc6c9bc40, result=ISC_R_SUCCESS, region=0x7fffe325c440, cbarg=0x7f8bc6deec00) at netmgr/streamdns.c:522
D:dupsigs:#18 0x00007f8bca769eb4 in isc__nm_async_readcb (worker=0x0, ev0=0x7fffe325c4c0) at netmgr/netmgr.c:2077
D:dupsigs:#19 0x00007f8bca769c95 in isc__nm_readcb (sock=0x7f8bc6dee200, uvreq=0x7f8bc67cf000, eresult=ISC_R_SUCCESS, async=false) at netmgr/netmgr.c:2050
D:dupsigs:#20 0x00007f8bca7763cc in isc__nm_tcp_read_cb (stream=0x7f8bc6dee7e0, nread=55, buf=0x7fffe325c730) at netmgr/tcp.c:823
D:dupsigs:#21 0x00007f8bc9e0b5e9 in uv__read (stream=0x7f8bc6dee7e0) at /usr/src/libuv-v1.44.1/src/unix/stream.c:1247
D:dupsigs:#22 0x00007f8bc9e0b8a8 in uv__stream_io (loop=0x7f8bc6ca2da0, w=0x7f8bc6dee868, events=1) at /usr/src/libuv-v1.44.1/src/unix/stream.c:1315
D:dupsigs:#23 0x00007f8bc9e15bc3 in uv__io_poll (loop=0x7f8bc6ca2da0, timeout=30000) at /usr/src/libuv-v1.44.1/src/unix/epoll.c:374
D:dupsigs:#24 0x00007f8bc9dfa8ae in uv_run (loop=0x7f8bc6ca2da0, mode=UV_RUN_DEFAULT) at /usr/src/libuv-v1.44.1/src/unix/core.c:391
D:dupsigs:#25 0x00007f8bca7a3308 in loop_run (loop=0x7f8bc6ca2d80) at loop.c:270
D:dupsigs:#26 0x00007f8bca7a3555 in loop_thread (arg=0x7f8bc6ca2d80) at loop.c:297
D:dupsigs:#27 0x00007f8bca7a4956 in isc_loopmgr_run (loopmgr=0x7f8bc6c233c0) at loop.c:481
D:dupsigs:#28 0x000055d925bbfe4f in main (argc=18, argv=0x7fffe325fb18) at main.c:1518
```

## #3734: [ISC-support #21570] Add the ability to specify TSIG algorithm and secrets to catalog zones
Mark Andrews · 2024-01-17T14:55:27Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/3734
With DoT and DoH available, we should be able to share these via a catalog zone. We would need to think about multiple namespaces for TSIG keys, perhaps storing the algorithm and secret along with the rest of the primary's data.
We may want to flag a catalog zone with these fields present as requiring DoT/DoH, etc.

Status: Not planned

## #3047: Change reject-000-label default to false in BIND 9.19
Mark Andrews · 2021-12-08T07:05:44Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/3047

As per the documented road map for reject-000-label:
- [ ] change default
- [ ] update documentation
- [ ] update synthfromdnssec system test
- [ ] update release notes
- [ ] create future issue to remove reject-000-label (9.21.x)
Milestone: BIND 9.19.x

## #2930: Remove support for the "map" zone file format
Michał Kępień · 2021-10-04T10:57:58Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/2930

The `masterfile-format map;` option has already been [deprecated][1] in
9.16/9.17. This issue is for dropping the "map" zone file format
altogether in 9.19+. See #2882 for the rationale.
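For context, the option being removed appears in `named.conf` zone statements along these lines (an illustrative fragment; the zone name and file name are made up):

```
zone "example.com" {
    type primary;
    file "example.com.map";
    masterfile-format map;   // deprecated in 9.16/9.17, to be dropped in 9.19+
};
```

After removal, such zones would need to be reloaded from `text` or `raw` format files instead.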
[1]: #2882
Milestone: BIND 9.19.x

## #1890: automation of DS Record submit to registrar/parent, integrated with 'new' kasp/dnssec-policy support in bind
pgnd · 2021-10-05T14:46:01Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/1890

### Description
i'm migrating/implementing the new `dnssec-policy` usage & KASP workflow in my bind 9.16.3.
the new policy does a nice job of streamlining the signing/key mgmt.
after key generation/rotation, the 'last step' is submitting new/changed DS Records to the relevant registrar
i'd like to automate the process of submitting generated DS Records to the registrar/parent using a capable registrar's DNSSEC API.
as i understand, there is neither any mechanism in Bind for automating the DS Record submit, nor is there
an external hook mechanism to external scripts that can handle the task.
offline, it's been suggested to me that with the current version of bind, a 'best' approach would be to write a simple script that checks for the existence of the CDS/CDNSKEY RRset in each signed zone.
then, when a new record is added, trigger a submission of the DS to the parent. and, similarly, when a record is removed, trigger a withdrawal of the DS.
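The add/remove decision in that suggested script reduces to a set difference between the CDS RRset seen on the previous run and the current one. A minimal sketch of just that decision logic (the `plan_ds_actions` name is mine; fetching the CDS RRset, e.g. with dnspython, and the registrar-API calls that would consume the result are omitted):

```python
def plan_ds_actions(previous_cds, current_cds):
    """Given the CDS RRset observed on the last run and the one observed
    now (as sets of rdata strings), return (to_submit, to_withdraw):
    the DS records to push to, or pull from, the parent via the registrar API."""
    to_submit = sorted(current_cds - previous_cds)    # newly published CDS
    to_withdraw = sorted(previous_cds - current_cds)  # CDS records that disappeared
    return to_submit, to_withdraw
```

The previous-run state would have to be persisted between invocations (a file per zone, say), and per RFC 8078 a CDS of `0 0 0 00` signals deletion of the whole DS RRset, which a real script would need to special-case.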
rather than re-inventing the wheel ... i'm guessing i'm not the only one who'd like to automate this.
### Request
an additional response on ML
> This is where we need to get the registrars to follow standards. They are written
> so everyone doesn’t have to cobble together ad-hoc solutions. Hourly scans of all
> the DNSSEC delegations by the registrars would do.
>
> Personally I prefer push solutions but I couldn’t get the IETF to agree.
> https://tools.ietf.org/html/draft-andrews-dnsop-update-parent-zones-04
sounds reasonable. at very least, better than nothing.
in the absence of a standards-based solution, integrated in bind's dnssec-policy/kasp feature set, an option for script/execution hooks in bind to external scripts, would be a good 1st step, even if ad-hoc
e.g., "if when change in DS Record in local bind, then fire this external script which will manage the DS submit/withdraw via API to registrar"
failing any/all of that^, a well documented example of a completely de-coupled solution, independent of bind itself, ideally registrar/API agnostic, but demonstrated to work, would be useful.
that's of course doable -- but again, ad-hoc, and seems a step backwards given the nice progress with dnssec-policy/kasp simplifications in recent versions.
### Links / references

Milestone: BIND 9.19.x · Assignee: Matthijs Mekking <matthijs@isc.org>

## #1776: BIND 9.16 and cache node locks for name cleaning vs. 'the thundering herd'
Cathy Almond · 2021-10-05T12:07:29Z
https://gitlab.isc.org/isc-projects/bind9/-/issues/1776

From [Support ticket #16212](https://support.isc.org/Ticket/Display.html?id=16212)
During investigations of intermittent 'brownouts' - periods in which named seemingly stops actioning client queries for a short while and then resumes processing a second or two later (yes, delays of seconds, not ms) - we 'caught' one interesting scenario on BIND 9.16 in which the vast majority of the active threads (both netmgr and taskmgr - so both client queries being answered from cache AND client queries for which recursion had just taken place) appeared to be competing for the same cache node lock.
The pstack output demonstrating the problem was automatically triggered by monitoring for anomalies in inbound versus outbound network traffic.
The symptoms when this issue occurs are that:
* Outbound client-facing traffic rates plummet (well below the proportion you would expect to see if only cache misses were not being serviced)
* Recursive query rates plummet too
* CPU use increases - but in user space not in system space
* Recursive clients backlog increases (and may hit the limit)
* Fetchlimits may be triggered (we suspect this, and its predecessor, are symptoms rather than causes; however, triggering fetchlimits will exacerbate the situation, both from the client perspective and through increased traffic rates as clients retry/re-send)
What we saw in the pstacks was that the majority of netmgr threads (these answer directly from cache) were attempting to get a write lock on the node - for example:
```
Thread 74 (Thread 0x7f3ff366e700 (LWP 11713)):
#0 isc_rwlock_lock (rwl=rwl@entry=0x7f3f59523980, type=type@entry=isc_rwlocktype_write) at rwlock.c:57
#1 0x000000000051d826 in decrement_reference (rbtdb=rbtdb@entry=0x7f3fc6457010, node=node@entry=0x7f3eace34510, least_serial=least_serial@entry=0, nlock=nlock@entry=isc_rwlocktype_read, tlock=tlock@entry=isc_rwlocktype_none, pruning=pruning@entry=false) at rbtdb.c:2040
#2 0x00000000005215bf in detachnode (db=0x7f3fc6457010, targetp=targetp@entry=0x7f3ff366da88) at rbtdb.c:5352
#3 0x00000000005217be in rdataset_disassociate (rdataset=<optimized out>) at rbtdb.c:8691
#4 0x00000000005657e8 in dns_rdataset_disassociate (rdataset=rdataset@entry=0x7f3fad30cf28) at rdataset.c:111
#5 0x00000000004ebb21 in msgresetnames (first_section=0, msg=0x7f3fad2e1a50, msg@entry=0x7f3fad30b5f0) at message.c:438
#6 msgreset (msg=msg@entry=0x7f3fad2e1a50, everything=everything@entry=false) at message.c:524
#7 0x00000000004ec95a in dns_message_reset (msg=0x7f3fad2e1a50, intent=intent@entry=1) at message.c:760
#8 0x00000000004797ba in ns_client_endrequest (client=0x7f3fae5b8550) at client.c:229
#9 ns__client_reset_cb (client0=0x7f3fae5b8550) at client.c:1586
#10 0x0000000000632989 in isc_nmhandle_unref (handle=handle@entry=0x7f3fae5b83e0) at netmgr.c:1158
#11 0x0000000000632c30 in isc__nm_uvreq_put (req0=req0@entry=0x7f3ff366dbb8, sock=<optimized out>) at netmgr.c:1291
#12 0x00000000006357c4 in udp_send_cb (req=<optimized out>, status=<optimized out>) at udp.c:465
#13 0x00007f3ff5375153 in uv__udp_run_completed () from /lib64/libuv.so.1
#14 0x00007f3ff53754d3 in uv__udp_io () from /lib64/libuv.so.1
#15 0x00007f3ff5367c43 in uv_run () from /lib64/libuv.so.1
#16 0x0000000000632fda in nm_thread (worker0=0x138e3e0) at netmgr.c:481
#17 0x00007f3ff4f39e65 in start_thread () from /lib64/libpthread.so.0
#18 0x00007f3ff484488d in clone () from /lib64/libc.so.6
```
A handful of threads are attempting to get a read lock on the same node - for example:
```
Thread 59 (Thread 0x7f3feab0e700 (LWP 11734)):
#0 0x00007f3ff4f3d144 in pthread_rwlock_rdlock () from /lib64/libpthread.so.0
#1 0x000000000063cc6e in isc_rwlock_lock (rwl=0x7f3f59523980, type=type@entry=isc_rwlocktype_read) at rwlock.c:48
#2 0x00000000005129c6 in rdataset_getownercase (rdataset=<optimized out>, name=0x7f3feaaffde0) at rbtdb.c:9770
#3 0x000000000056620a in towiresorted (rdataset=rdataset@entry=0x7f3ec42dee70, owner_name=owner_name@entry=0x7f3ec42dd0a0, cctx=<optimized out>, target=<optimized out>, order=<optimized out>, order_arg=order_arg@entry=0x7f3ec42b8718, partial=true, options=1, countp=0x7f3feab005dc, state=<optimized out>) at rdataset.c:444
#4 0x0000000000566e3f in dns_rdataset_towirepartial (rdataset=rdataset@entry=0x7f3ec42dee70, owner_name=owner_name@entry=0x7f3ec42dd0a0, cctx=<optimized out>, target=<optimized out>, order=<optimized out>, order_arg=order_arg@entry=0x7f3ec42b8718, options=<optimized out>, options@entry=1, countp=<optimized out>, countp@entry=0x7f3feab005dc, state=<optimized out>, state@entry=0x0) at rdataset.c:565
#5 0x00000000004ecc71 in dns_message_rendersection (msg=0x7f3ec42b8550, sectionid=sectionid@entry=1, options=options@entry=6) at message.c:2086
#6 0x00000000004780f3 in ns_client_send (client=client@entry=0x7f3ec5d4b510) at client.c:555
#7 0x0000000000485b7c in query_send (client=0x7f3ec5d4b510) at query.c:552
#8 0x000000000048de23 in ns_query_done (qctx=qctx@entry=0x7f3feab09a70) at query.c:10921
#9 0x000000000048f76d in query_respond (qctx=0x7f3feab09a70) at query.c:7414
#10 query_prepresponse (qctx=qctx@entry=0x7f3feab09a70) at query.c:9913
#11 0x000000000049181c in query_gotanswer (qctx=qctx@entry=0x7f3feab09a70, res=res@entry=0) at query.c:6836
#12 0x0000000000493a22 in query_lookup (qctx=qctx@entry=0x7f3feab09a70) at query.c:5617
#13 0x00000000004950f6 in query_zone_delegation (qctx=0x7f3feab09a70) at query.c:8003
#14 query_delegation (qctx=qctx@entry=0x7f3feab09a70) at query.c:8031
#15 0x0000000000491a1a in query_gotanswer (qctx=qctx@entry=0x7f3feab09a70, res=res@entry=65565) at query.c:6842
#16 0x0000000000493a22 in query_lookup (qctx=qctx@entry=0x7f3feab09a70) at query.c:5617
#17 0x0000000000494036 in ns__query_start (qctx=qctx@entry=0x7f3feab09a70) at query.c:5493
#18 0x000000000048de05 in ns_query_done (qctx=qctx@entry=0x7f3feab09a70) at query.c:10853
#19 0x0000000000492420 in query_dname (qctx=<optimized out>) at query.c:9806
#20 query_gotanswer (qctx=qctx@entry=0x7f3feab09a70, res=res@entry=65568) at query.c:6872
#21 0x0000000000493a22 in query_lookup (qctx=qctx@entry=0x7f3feab09a70) at query.c:5617
#22 0x00000000004950f6 in query_zone_delegation (qctx=0x7f3feab09a70) at query.c:8003
#23 query_delegation (qctx=qctx@entry=0x7f3feab09a70) at query.c:8031
#24 0x0000000000491a1a in query_gotanswer (qctx=qctx@entry=0x7f3feab09a70, res=res@entry=65565) at query.c:6842
#25 0x0000000000493a22 in query_lookup (qctx=qctx@entry=0x7f3feab09a70) at query.c:5617
#26 0x0000000000494036 in ns__query_start (qctx=qctx@entry=0x7f3feab09a70) at query.c:5493
#27 0x000000000048de05 in ns_query_done (qctx=qctx@entry=0x7f3feab09a70) at query.c:10853
#28 0x0000000000492420 in query_dname (qctx=<optimized out>) at query.c:9806
#29 query_gotanswer (qctx=qctx@entry=0x7f3feab09a70, res=res@entry=65568) at query.c:6872
#30 0x0000000000493a22 in query_lookup (qctx=qctx@entry=0x7f3feab09a70) at query.c:5617
#31 0x00000000004950f6 in query_zone_delegation (qctx=0x7f3feab09a70) at query.c:8003
#32 query_delegation (qctx=qctx@entry=0x7f3feab09a70) at query.c:8031
#33 0x0000000000491a1a in query_gotanswer (qctx=qctx@entry=0x7f3feab09a70, res=res@entry=65565) at query.c:6842
#34 0x0000000000493a22 in query_lookup (qctx=qctx@entry=0x7f3feab09a70) at query.c:5617
#35 0x0000000000494036 in ns__query_start (qctx=qctx@entry=0x7f3feab09a70) at query.c:5493
#36 0x0000000000494b26 in query_setup (client=client@entry=0x7f3ec5d4b510, qtype=<optimized out>) at query.c:5217
#37 0x0000000000497056 in ns_query_start (client=client@entry=0x7f3ec5d4b510) at query.c:11318
#38 0x000000000047b101 in ns__client_request (handle=<optimized out>, region=<optimized out>, arg=<optimized out>) at client.c:2209
#39 0x0000000000635462 in udp_recv_cb (handle=<optimized out>, nrecv=48, buf=0x7f3feab0ab00, addr=<optimized out>, flags=<optimized out>) at udp.c:329
#40 0x00007f3ff53755db in uv__udp_io () from /lib64/libuv.so.1
#41 0x00007f3ff53779c8 in uv__io_poll () from /lib64/libuv.so.1
#42 0x00007f3ff5367c70 in uv_run () from /lib64/libuv.so.1
#43 0x0000000000632fda in nm_thread (worker0=0x13926e8) at netmgr.c:481
#44 0x00007f3ff4f39e65 in start_thread () from /lib64/libpthread.so.0
#45 0x00007f3ff484488d in clone () from /lib64/libc.so.6
```
Meanwhile, the threads run by taskmgr (this bunch would have recursed) were attempting to acquire write locks (unsurprisingly, although depending on the node and the client query, it's also possible that some would want a read lock instead):
Here's a writer:
```
Thread 50 (Thread 0x7f3fe587b700 (LWP 11746)):
#0 isc_rwlock_lock (rwl=rwl@entry=0x7f3f59523980, type=type@entry=isc_rwlocktype_write) at rwlock.c:57
#1 0x000000000051d826 in decrement_reference (rbtdb=rbtdb@entry=0x7f3fc6457010, node=node@entry=0x7f3eace34510, least_serial=least_serial@entry=0, nlock=nlock@entry=isc_rwlocktype_read, tlock=tlock@entry=isc_rwlocktype_none, pruning=pruning@entry=false) at rbtdb.c:2040
#2 0x00000000005215bf in detachnode (db=0x7f3fc6457010, targetp=0x7f3fe587acc0) at rbtdb.c:5352
#3 0x00000000004bdd83 in dns_db_detachnode (db=<optimized out>, nodep=nodep@entry=0x7f3fe587acc0) at db.c:588
#4 0x00000000004804cb in qctx_clean (qctx=qctx@entry=0x7f3fe587a830) at query.c:5097
#5 0x000000000048db5a in ns_query_done (qctx=qctx@entry=0x7f3fe587a830) at query.c:10834
#6 0x000000000048f76d in query_respond (qctx=0x7f3fe587a830) at query.c:7414
#7 query_prepresponse (qctx=qctx@entry=0x7f3fe587a830) at query.c:9913
#8 0x000000000049181c in query_gotanswer (qctx=qctx@entry=0x7f3fe587a830, res=res@entry=0) at query.c:6836
#9 0x0000000000496870 in query_resume (qctx=0x7f3fe587a830) at query.c:6134
#10 fetch_callback (task=<optimized out>, event=0x7f3ead5c9c18) at query.c:5716
#11 0x000000000064007a in dispatch (threadid=<optimized out>, manager=<optimized out>) at task.c:1152
#12 run (queuep=<optimized out>) at task.c:1344
#13 0x00007f3ff4f39e65 in start_thread () from /lib64/libpthread.so.0
#14 0x00007f3ff484488d in clone () from /lib64/libc.so.6
```
In this particular instance, every single one of the legacy I/O-handler threads was twiddling its thumbs (sitting in `epoll_wait()`) - which is probably not too surprising if no taskmgr workers are sending out queries to authoritative servers.
Doing stats on this particular capture (74 threads: 24x netmgr, 24x taskmgr, 24x legacy I/O, plus one each for the main and timer threads), we have:
* 33 instances of `isc_rwlock_lock (rwl=rwl@entry=0x7f3f59523980`
* 31 instances of `rbtdb=rbtdb@entry=0x7f3fc6457010`
* 30 instances of `node=node@entry=0x7f3eace34510`
It might be possible to prove from the pstack output whether this is a series of different names all attached to the same node, or a single expiring name that all of the threads are attempting to clean up simultaneously.
Either way, the locking is not working well in this situation - there appears to be a lot of spinning in user space.
Hypotheses being tendered currently include:
* This scenario has always potentially existed, but using pthread-rwlocks amplifies it considerably
* Could this be a case where prefetching (enabled with default settings in this example) hits a surprise edge case?
* Is it possible we're seeing the after-effects of another delay which has resulted in late client query-response processing for something that has a very short TTL in cache?
* Is this a scenario where a client comes along and queries near-simultaneously (and probably quite innocently) for a lot of similar names under the same domain/apex very close to the time where they would all be naturally expiring from cache?
* Could it be that TTL=0 handling has broken in 9.16 with the introduction of netmgr? (TTL=0 responses from auth servers would be expected to be available solely to the clients that recursed and waited for the fetch to complete - not to anyone who came along after the fetch had populated the cache for the waiting client request to be fulfilled. This should all happen in taskmgr and none of it in netmgr.)
* Do we perhaps have too many threads running (detected CPUs = 24)?

BIND 9.19.x / Ondřej Surý

https://gitlab.isc.org/isc-projects/bind9/-/issues/1702
Make isc_quota and isc_quota_cb opaque (2023-10-31, Witold Krecicki)

https://gitlab.isc.org/isc-projects/bind9/-/issues/1663
Zone not signed nor loaded with NSEC3 (2021-10-22, Libor Peltan)

**Scenario I** (what works):
(1) I configure options `inline-signing`, `auto-dnssec: maintain`, `key-directory`. \
(2) I create a pair of keys with `dnssec-keygen`. \
(3) I put the resulting DNSKEY records into the unsigned zone file. \
(4) I start Bind.
The result: the zone is signed with an NSEC chain and published. (Further DDNS updates are processed, including NSEC and RRSIG reconstruction.)
**Scenario II** (what does not work):
The same, but in step (3) I also add an NSEC3PARAM record to the unsigned zone file.
**Expected:**
After startup, Bind should sign the zone with an NSEC3 chain and publish it.
**Observed:**
(a) Bind does not sign the zone. Not even with NSECs. The file `example.com.zone.signed` does not appear. \
(b) Bind does not publish the zone at all. All queries are responded with SERVFAIL. \
(c) A log message says "all zones loaded", which is untrue according to (b). \
(d) The only log message possibly indicating any problem says "signed dynamic zone has no resign event scheduled", which gives no clue of what happened.
I consider ALL of these four points (a) - (d) bugs.
Workaround:
Let Bind start with an NSEC chain as in Scenario I and, after Bind starts up, perform the NSEC -> NSEC3 transition with `rndc signing -nsec3param 1 0 10 <salt> example.com.`. However, I don't like this because I want to avoid publishing the zone with NSECs for even a second.
Note:
My observations differ from #953
**Bind9 version**:
```
starting BIND 9.11.3-1ubuntu1.11-Ubuntu (Extended Support Version) <id:a375815> \
running on Linux x86_64 5.3.0-40-generic #32~18.04.1-Ubuntu SMP Mon Feb 3 14:05:59 UTC 2020 \
built with '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=/usr/include' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=/usr/lib/x86_64-linux-gnu' '--libexecdir=/usr/lib/x86_64-linux-gnu' '--disable-maintainer-mode' '--disable-dependency-tracking' '--libdir=/usr/lib/x86_64-linux-gnu' '--sysconfdir=/etc/bind' '--with-python=python3' '--localstatedir=/' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-gost=no' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-libjson=/usr' '--without-lmdb' '--with-gnu-ld' '--with-geoip=/usr' '--with-atf=no' '--enable-ipv6' '--enable-rrl' '--enable-filter-aaaa' '--enable-native-pkcs11' '--with-pkcs11=/usr/lib/softhsm/libsofthsm2.so' '--with-randomdev=/dev/urandom' '--with-eddsa=no' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fdebug-prefix-map=/build/bind9-uW3Pyl/bind9-9.11.3+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -fno-strict-aliasing -fno-delete-null-pointer-checks -DNO_VERSION_DATE -DDIG_SIGCHASE' 'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
```

BIND 9.19.x

https://gitlab.isc.org/isc-projects/bind9/-/issues/1426
Intermittent failure in the qmin system test (2021-11-05, Ondřej Surý)

* https://gitlab.isc.org/isc-projects/bind9/-/jobs/440093

BIND 9.19.x

https://gitlab.isc.org/isc-projects/bind9/-/issues/1329
continue validator/keytable refactoring (2023-10-31, Evan Hunt)

- simplify dns_keytable structure to have a single object at each node instead of a list
- convert DNSKEY trust anchors into DS format internally, so the validator only needs a single method for zone key validation
- run code coverage analysis
- add unit tests and system test cases as needed

BIND 9.19.x

https://gitlab.isc.org/isc-projects/bind9/-/issues/1234
dns_client_destroyrestrans can be called on object in use (2023-10-31, Ondřej Surý)

The `dns_client_destroyrestrans()` function contains this snippet:
```
/*
* Wait for the lock in client_resfind to be released before
* destroying the lock.
*/
LOCK(&rctx->lock);
UNLOCK(&rctx->lock);
```
basically meaning that the object being destroyed might still be in use.
It seems to me that the `dns_clientrestrans_t` (aka `resctx_t`) is missing some basic reference counting.

BIND 9.19.x

https://gitlab.isc.org/isc-projects/bind9/-/issues/1169
named-checkconf geoip check is incomplete (2022-07-02, Evan Hunt)

`named-checkconf` can check geoip ACLs for syntactic correctness, but it can't check whether a geoip database is installed that can support the configuration. This means `named-checkconf` can greenlight a configuration that will be rejected by `named`.

BIND 9.19.x / Evan Hunt

https://gitlab.isc.org/isc-projects/bind9/-/issues/1083
Possible race in configure_catz_zone (2023-11-03, Witold Krecicki)

Appeared once, when running system tests with libfaketime:
```
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f3401379535 in __GI_abort () at abort.c:79
#2 0x000055f98610e292 in library_fatal_error (file=0x55f986446038 "./server.c", line=2959, format=0x55f9864bcd8f "RUNTIME_CHECK(%s) failed", args=0x7f33fe634820) at ./main.c:293
#3 0x000055f9863fe605 in isc_error_fatal (file=0x55f986446038 "./server.c", line=2959, format=0x55f9864bcd8f "RUNTIME_CHECK(%s) failed") at error.c:65
#4 0x000055f9863fe645 in isc_error_runtimecheck (file=0x55f986446038 "./server.c", line=2959, expression=0x55f986446d90 "tresult == 0") at error.c:72
#5 0x000055f9861178e5 in configure_catz_zone (view=0x7f33b40900d0, config=0x7f33ebd7f890, element=0x7f33f25e4318) at ./server.c:2959
#6 0x000055f986117cfd in configure_catz (view=0x7f33b40900d0, config=0x7f33ebd7f890, catz_obj=0x7f33b4035cb8) at ./server.c:3056
#7 0x000055f98611a179 in configure_view (view=0x7f33b40900d0, viewlist=0x7f33fe635ab0, config=0x7f33ebd7f890, vconfig=0x0, cachelist=0x7f33fe635ad0, bindkeys=0x0, mctx=0x55f98780d160, actx=0x7f33f25ebff0, need_hints=true) at ./server.c:3888
#8 0x000055f986129c4e in load_configuration (filename=0x55f986534d00 <absolute_conffile> "/home/wpk/dev/isc/bind9/master/bin/tests/system/catz/ns2/named.conf", server=0x7f33fee46020, first_time=false) at ./server.c:8810
#9 0x000055f98612e586 in loadconfig (server=0x7f33fee46020) at ./server.c:10029
#10 0x000055f98612f3fa in named_server_reconfigcommand (server=0x7f33fee46020) at ./server.c:10397
#11 0x000055f986106d5c in named_control_docommand (message=0x7f33b4025150, readonly=false, text=0x7f33fe635ce8) at control.c:241
--Type <RET> for more, q to quit, c to continue without paging--
#12 0x000055f98610878c in control_recvmessage (task=0x7f33fee53020, event=0x7f33f25ef610) at controlconf.c:462
#13 0x000055f986424805 in dispatch (manager=0x7f33fee3d020, threadid=1) at task.c:1128
#14 0x000055f986424e70 in run (queuep=0x55f98781a6b0) at task.c:1295
#15 0x00007f3401548182 in start_thread (arg=<optimized out>) at pthread_create.c:486
#16 0x00007f3401471b1f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
```
tresult is ISC_R_NOTFOUND

Not planned

https://gitlab.isc.org/isc-projects/bind9/-/issues/743
Improvements to dnssec-verify (2024-02-14, Cathy Almond)

### Summary
This is both a bug report and a feature request.
dnssec-verify is not finding everything that is wrong with a zone, possibly because it only looks for faults that would cause its RRs to fail validation, and/or because it ignores RRsets that would ordinarily be considered occluded, even though their DNSSEC state in the zone itself is ambiguous.
### BIND version used
9.11.5
### Potential new features/fixes
1. Detect RRsets that are DNSSEC-signed that shouldn't be because they're out-of-zone - for example necessary glue, unnecessary glue (usually occluded) and DNSKEY RRs that have inappropriately been added to the parent along with the DS RRs.
2. Check that the NSEC/NSEC3 RRs don't 'cover' any RTYPEs that they should not (I'm assuming that the check already includes matching the RRsets at the name being covered with the type list)
3. Better 'this is broken' error messages detailing what is wrong, particularly when it is that an NSEC/NSEC3 chain is broken
4. Match NSEC3 RRs to the names that they cover when reporting problems
5. [ ] #1863 Add an option for retrospective checking of a historic copy of a zone file by introducing an option to provide a date/time to be treated as 'now' for the purposes of DNSSEC validation
See also https://support.isc.org/Ticket/Display.html?id=13752

Not planned