# BIND issues
https://gitlab.isc.org/isc-projects/bind9/-/issues
Updated: 2023-01-10T14:15:41Z

## CID 375800: Concurrent data access violations in lib/isc/rwlock.c
https://gitlab.isc.org/isc-projects/bind9/-/issues/3660 (Michal Nowak, 2023-01-10T14:15:41Z)

Coverity Scan assumes that `rwl->readers_waiting` should be accessed with a lock held, as is done elsewhere 4 out of 5 times.

The following recent commits might be of interest:
- 98b7a93772077c021c3ff8a8d763decb8dffeba1
- 6bd201ccec1d5a11a42890e8b94a6920fcda97bb
- 0492bbf590fa5c7e52f94d5a9220724e68052444
```
*** CID 375800: Concurrent data access violations (MISSING_LOCK)
/lib/isc/rwlock.c: 104 in isc__rwlock_init()
98 rwl->magic = 0;
99
100 atomic_init(&rwl->spins, 0);
101 atomic_init(&rwl->write_requests, 0);
102 atomic_init(&rwl->write_completions, 0);
103 atomic_init(&rwl->cnt_and_flag, 0);
>>> CID 375800: Concurrent data access violations (MISSING_LOCK)
>>> Accessing "rwl->readers_waiting" without holding lock "isc_rwlock.lock". Elsewhere, "isc_rwlock.readers_waiting" is accessed with "isc_rwlock.lock" held 4 out of 5 times.
104 rwl->readers_waiting = 0;
105 atomic_init(&rwl->write_granted, 0);
106 if (read_quota != 0) {
107 UNEXPECTED_ERROR("read quota is not supported");
108 }
109 if (write_quota == 0) {
```

Status: Not planned

## Implement draft-ietf-dnsop-dns-error-reporting
https://gitlab.isc.org/isc-projects/bind9/-/issues/3659 (Mark Andrews, 2023-11-02T17:05:06Z)

The current draft is at https://datatracker.ietf.org/doc/draft-ietf-dnsop-dns-error-reporting/
Development inter-op testing is using 65023 as the EDNS option code point.

Status: Not planned. Assignee: Mark Andrews.

## journal performance improvements - use asynchronous I/O
https://gitlab.isc.org/isc-projects/bind9/-/issues/3658 (Petr Špaček <pspacek@isc.org>, 2023-04-06T12:27:32Z)

Currently journal I/O is done synchronously, including fsync(). `fsync` is very slow (see #3556) and we probably cannot get rid of it completely, so I think we should use asynchronous I/O for journal processing. That would allow the server to process normal queries while waiting for journal operations.
TODO: Test IXFR-in and querying the server in parallel.

## Time and related disasters in software (y2k38)
https://gitlab.isc.org/isc-projects/bind9/-/issues/3657 (Tony Finch, 2023-03-31T15:51:56Z)

There are some opportunities to simplify:
* BIND now targets POSIX, which gives us much more useful guarantees about the behaviour of `time_t`, so we can replace most uses of `isc_stdtime_t`.
* ISO C has adopted `struct timespec` so we can use that instead of `isc_time_t`. It is used for C's timed mutex and thread sleep.
- This will need some care, though, because `isc_time_t` has unsigned members, whereas `struct timespec` is signed, for instance, `long tv_nsec`.
- On the other hand, 64-bit nanoseconds since the epoch are much easier to use than `struct timespec` or `isc_time_t`
* Most uses of time in BIND should probably be changed to use CLOCK_MONOTONIC instead of wall time.

Status: Not planned. Assignee: Tony Finch.

## update-policy logic logging/debugging
https://gitlab.isc.org/isc-projects/bind9/-/issues/3653 (Joris, 2022-11-04T07:04:22Z)

### Description
Currently, debugging update-policy rules is opaque: only the final decision is available.
It would be beneficial if it were possible to trace the individual steps taken, e.g. which rule was expanded to what (wildcard expansion, key names vs. FQDNs) and with which result.
### Request
Logging at the right trace level could look something like the example below. Note that I am by no means an expert in update policies, so my example may be way off.
```
update-policy
grant wrong-key-name name example.com ANY;
=> identity key wrong-key-name not found, aborted
grant key-name name example.com NS;
=> update request does not match name example.com, aborted
grant key-name name example.com MX;
=> update request does not match type, aborted
grant updater-key.example.com name example.com ANY;
=> identity host updater-key.example.com not found, aborted [ a keyname that looks like a fqdn? ]
=> request denied
```
Similarly, key/kerberos failures could be logged too.
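For reference, the kind of configuration such tracing would describe is a standard update-policy block; the key and zone names below are placeholders:

```
zone "example.com" {
    type primary;
    file "example.com.db";
    update-policy {
        grant updater-key name example.com A AAAA;
    };
};
```

Each `grant` rule is evaluated in order and the first matching rule decides; the proposed trace output would show which rule matched (or why each one was skipped).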
### Links / references
In #235 a similar request is made; this issue elaborates on the scope and suggests a solution.

## Support not crossing the XFR streams
https://gitlab.isc.org/isc-projects/bind9/-/issues/3650 (Mark Andrews, 2023-11-09T21:01:32Z)

### Description
#### Goal
Make BIND in the role of **secondary** play nicely with multi-master infrastructures.
In large topologies people want to avoid SPOF anywhere in the DNS infrastructure. Other people provide tools to accept DNS UPDATE at multiple servers in parallel and then resynchronize databases using their own protocols, which is not consistent with monotonic and unique SOA SERIAL mapping to a single version of zone data.
Multi-master primaries can go up and down at any time - that's why people do want multi-master in the first place!
#### Problems in practice
- Replication between primaries (using proprietary protocols) takes non-zero time.
- SOA serials are generally not consistent/cannot be relied upon.
- IXFR is a total mess when switching between primaries.
- AXFR and NOTIFY are unreliable - the SOA serial might indicate there is no new data while there actually is.
- Primaries might do independent signing - RRSIGs are inconsistent (IXFR trouble again).
### Request
Extend the `primaries` syntax to support **sets** of primaries which serve the same zone contents, and switch between sets when the current set is unreachable.
Sets are needed to support topologies where the BIND secondary is not speaking directly to the primaries but sits somewhere deeper down in the replication tree.
Proposal:
```
primaries [set #] ... { ... };
```
Record the set number in the raw file. "255 sets must be good enough for everyone."
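Under this proposal, a secondary tracking two independent primary groups might be configured like this (hypothetical syntax; the `set` keyword does not exist in any released BIND, and the addresses are documentation ranges):

```
zone "example.com" {
    type secondary;
    masterfile-format raw;   // required when sets are configured
    file "example.com.raw";
    primaries set 1 { 192.0.2.1; 192.0.2.2; };  // preferred multi-master group
    primaries set 2 { 198.51.100.1; };          // fallback group
};
```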
#### Caveats
- Secondaries with sets configured now require `masterfile-format raw`.
- A config change might mess up the mapping between primary sets and the numbers recorded in the raw zone file.
### Links / references
Who is doing multi-master, for different purposes:
- Windows Active Directory DNS - very common, does not maintain SOA serial consistency across topology.
- Some TLDs do independent DNSSEC signers to avoid SPOF.
- FreeIPA - multi-master DNS - like Windows AD but for Unix, independent DNSSEC signers (different RRSIGs on each DNS server), does not maintain strict SOA serial consistency.

Status: Not planned

## Feature request: configurable TCP timeouts on zone refresh queries
https://gitlab.isc.org/isc-projects/bind9/-/issues/3649 (Cathy Almond, 2023-06-15T13:22:31Z)

There is nothing that can be configured to reduce the timeout when failing to reach an authoritative server with a refresh query over TCP (the BIND default is "try-tcp-refresh yes;").

Customer input:
```
Basically, if my slave is trying to reach out to a third-party master and doesn't get a response in 10-15 seconds, I want it to move on to the next listed master in hopes of better results versus waiting for 2 minutes
```
A 'sort of' workaround for this is to allow the UDP timeout to happen ("try-tcp-refresh no;"), but that takes away the ability to reach servers and pull a zone transfer in situations where UDP doesn't work but TCP does.
We have configurable TCP timeouts for other BIND functions, but not for this.
See [Support ticket #21044](https://support.isc.org/Ticket/Display.html?id=21044)

## dupsigs system test is flaky for TYPE65534 on Windows
https://gitlab.isc.org/isc-projects/bind9/-/issues/3644 (Michal Nowak, 2023-02-06T19:48:34Z)

Suspiciously, the `dupsigs` system test only prints a variant of the following `check_journal.pl` failure several times over and [exits with a `PASS`](https://gitlab.isc.org/isc-projects/bind9/-/jobs/2876865):
```
unable to parse key status record at check_journal.pl line 87, <> line 13.
1008 26311
1 38032
```
```

More debugging of what happens with the `TYPE65534` RRTYPE on Windows is needed.

Status: Not planned

## tcpdns_recv_send failure in mem.c:772
https://gitlab.isc.org/isc-projects/bind9/-/issues/3642 (Michal Nowak, 2023-01-10T14:16:34Z)

Job [#2874890](https://gitlab.isc.org/isc-projects/bind9/-/jobs/2874890) failed for 395a5576b40eb7f4d34ee0606200947b4698173e with:
```
[ RUN ] tcpdns_recv_send
mem.c:772: REQUIRE(((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))) failed, back trace
```
<details><summary>tcpdns_recv_send failure</summary>
```
[==========] Running 6 test(s).
[ RUN ] tcpdns_noop
[ OK ] tcpdns_noop
[ RUN ] tcpdns_noresponse
[ OK ] tcpdns_noresponse
[ RUN ] tcpdns_timeout_recovery
[ OK ] tcpdns_timeout_recovery
[ RUN ] tcpdns_recv_one
[ OK ] tcpdns_recv_one
[ RUN ] tcpdns_recv_two
[ OK ] tcpdns_recv_two
[ RUN ] tcpdns_recv_send
mem.c:772: REQUIRE(((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))) failed, back trace
mem.c:772: REQUIRE(((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))) failed, back trace
mem.c:772: REQUIRE(((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))) failed, back trace
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x3119d)[0x7efd91df119d]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x3119d)[0x7efd91df119d]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x3119d)[0x7efd91df119d]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc_assertion_failed+0xa)[0x7efd91df1118]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc_assertion_failed+0xa)[0x7efd91df1118]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc_assertion_failed+0xa)[0x7efd91df1118]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc__mem_get+0x9f)[0x7efd91e04800]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc__mem_get+0x9f)[0x7efd91e04800]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc__mem_get+0x9f)[0x7efd91e04800]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc_nm_tcpdnsconnect+0x8b)[0x7efd91de92af]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc_nm_tcpdnsconnect+0x8b)[0x7efd91de92af]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc_nm_tcpdnsconnect+0x8b)[0x7efd91de92af]
/builds/isc-projects/bind9/tests/isc/.libs/lt-tcpdns_test[0x403b0d]
/builds/isc-projects/bind9/tests/isc/.libs/lt-tcpdns_test(stream_recv_send_connect+0x5a)[0x4040ec]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x3b536)[0x7efd91dfb536]
/builds/isc-projects/bind9/tests/isc/.libs/lt-tcpdns_test[0x403b0d]
/builds/isc-projects/bind9/tests/isc/.libs/lt-tcpdns_test[0x403b0d]
/lib64/libuv.so.1(uv__run_idle+0x99)[0x7efd91ba7a49]
/builds/isc-projects/bind9/tests/isc/.libs/lt-tcpdns_test(stream_recv_send_connect+0x5a)[0x4040ec]
/builds/isc-projects/bind9/tests/isc/.libs/lt-tcpdns_test(stream_recv_send_connect+0x5a)[0x4040ec]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x3b536)[0x7efd91dfb536]
/lib64/libuv.so.1(uv_run+0x298)[0x7efd91ba0bf8]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x3b536)[0x7efd91dfb536]
/lib64/libuv.so.1(uv__run_idle+0x99)[0x7efd91ba7a49]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x41b44)[0x7efd91e01b44]
/lib64/libuv.so.1(uv__run_idle+0x99)[0x7efd91ba7a49]
/lib64/libuv.so.1(uv_run+0x298)[0x7efd91ba0bf8]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc__trampoline_run+0x16)[0x7efd91e1b87f]
/lib64/libuv.so.1(uv_run+0x298)[0x7efd91ba0bf8]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x41b44)[0x7efd91e01b44]
/lib64/libpthread.so.0(+0x81df)[0x7efd90de91df]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(+0x41b44)[0x7efd91e01b44]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc__trampoline_run+0x16)[0x7efd91e1b87f]
/lib64/libc.so.6(clone+0x43)[0x7efd90a55dd3]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.7-dev.so(isc__trampoline_run+0x16)[0x7efd91e1b87f]
/lib64/libpthread.so.0(+0x81df)[0x7efd90de91df]
../../tests/unit-test-driver.sh: line 36: 7887 Aborted (core dumped) "${TEST_PROGRAM}"
I:tcpdns_test:Core dump found: ./core.7887
D:tcpdns_test:backtrace from ./core.7887 start
[New LWP 7975]
[New LWP 7970]
[New LWP 7973]
[New LWP 7887]
[New LWP 7971]
[New LWP 7976]
[New LWP 7972]
[New LWP 7977]
[New LWP 7974]
[New LWP 7968]
[New LWP 7967]
[New LWP 7969]
[New LWP 7978]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/builds/isc-projects/bind9/tests/isc/.libs/lt-tcpdns_test'.
Program terminated with signal SIGABRT, Aborted.
#0 0x00007efd90a6aa9f in raise () from /lib64/libc.so.6
[Current thread is 1 (Thread 0x7efd89efb700 (LWP 7975))]
Thread 13 (Thread 0x7efd83fff700 (LWP 7978)):
#0 0x00007efd90df282d in __lll_lock_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd90debba4 in pthread_mutex_lock () from /lib64/libpthread.so.0
No symbol table info available.
#2 0x00007efd90b84ff3 in dl_iterate_phdr () from /lib64/libc.so.6
No symbol table info available.
#3 0x00007efd8f0a1bf5 in _Unwind_Find_FDE () from /lib64/libgcc_s.so.1
No symbol table info available.
#4 0x00007efd8f09e193 in uw_frame_state_for () from /lib64/libgcc_s.so.1
No symbol table info available.
#5 0x00007efd8f0a01d8 in _Unwind_Backtrace () from /lib64/libgcc_s.so.1
No symbol table info available.
#6 0x00007efd90b57c46 in backtrace () from /lib64/libc.so.6
No symbol table info available.
#7 0x00007efd91df18f2 in isc_backtrace (addrs=addrs@entry=0x7efd83ffce20, maxaddrs=maxaddrs@entry=128) at backtrace.c:44
n = <optimized out>
#8 0x00007efd91df119d in default_callback (file=0x7efd91e2ff0b "mem.c", line=772, type=isc_assertiontype_require, cond=0x7efd91e304a0 "((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))") at assertions.c:100
tracebuf = {0x7efd91df18f2 <isc_backtrace+24>, 0x7efd91df119d <default_callback+49>, 0x7efd91df1118 <isc_assertion_failed+10>, 0x7efd80e008c0, 0x14, 0x7efd80e09218, 0x0, 0x7efd83ffe058, 0x7efd83ffcee8, 0xe6c048c994a92300, 0x7efd57200000, 0x400, 0x200, 0x7efd83ffde68, 0x7efd57200000, 0x7efd83ffd110, 0x7efd83ffdc78, 0x7efd9037de0b <je_arena_ralloc+459>, 0x0, 0x7efd57201000, 0x7efd83ffd118, 0x0, 0x7efd80e008c0, 0x0, 0x0, 0x201, 0x0, 0x0, 0x0, 0x3c3000, 0x7efd903b134a <extents_remove_locked.isra+170>, 0xe6c048c994a92300, 0x23, 0x7efd8ec03268, 0x7efd8ec03268, 0x7efd8ec0f200, 0x7efd8e660cc8, 0x7efd907fdec0, 0xd50dcc13, 0x7efd83ffdc78, 0x7efd8ec0f100, 0x7efd903aca72 <extent_lock_from_addr+434>, 0x0, 0x100000000000000, 0x7efd83ffdc78, 0x0, 0x7efd83ffdc78, 0xd50dcc13, 0x7efd8ec03268, 0x7efd8ec0f200, 0x7efd8ec0f580, 0x7efd903b1877 <extent_try_coalesce_impl+839>, 0x7efd83ffdca8, 0x7efd8ec00980, 0x7efd83ffd250, 0x7efd00000001, 0x0, 0x7efd83ffd108, 0x0, 0x183ffd250, 0x7efd83ffdca8, 0x1, 0x8, 0x7efd83ffd108, 0x0, 0x54, 0x7efd83ffd0f0, 0x7efd91df6247 <isc_hash64+87>, 0x7efd83ffd05f, 0x7efd92549c70 <isc_modules+80>, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7efd83ffd198, 0x0, 0x183ffdc78, 0x0, 0x7efd83ffd1b8, 0x0, 0x183ffd198, 0x0, 0x1, 0x8, 0x7efd83ffd1b8, 0x0, 0x2e2, 0x7efd83ffd1a0, 0x7efd91df6247 <isc_hash64+87>, 0x7efd83ffd1c0, 0x7efd91e03bc9 <delete_trace_entry+303>, 0x7efd91e2b585, 0x7efd5720f400, 0x0, 0x7efd57200000, 0x400, 0x0, 0x0, 0x7efd83ffd258, 0x0, 0x7efd83ffd268, 0x0, 0x7efd83ffd278, 0x0, 0x18ea13010, 0x168, 0x1, 0x8, 0x7efd83ffd278, 0x0, 0x201, 0x7efd83ffd260, 0x7efd91df6247 <isc_hash64+87>, 0x7efd91e27a1e, 0x7efd57204000, 0x7efd83ffd1f0, 0x7efd91e05a3c <isc__mem_putanddetach+114>, 0x7efd5720f400, 0x0, 0x0, 0x0, 0x7efd83ffd2a0, 0x7efd91df15d7 <isc_astack_destroy+125>, 0x7efd83ffdc78, 0x7efd83ffd328, 0x0, 0x10027ffff}
nframes = <optimized out>
#9 0x00007efd91df1118 in isc_assertion_failed (file=file@entry=0x7efd91e2ff0b "mem.c", line=line@entry=772, type=type@entry=isc_assertiontype_require, cond=cond@entry=0x7efd91e304a0 "((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))") at assertions.c:49
No locals.
#10 0x00007efd91e04800 in isc__mem_get (ctx=<optimized out>, size=size@entry=2208, flags=flags@entry=0, file=file@entry=0x7efd91e2a6f2 "netmgr/tcpdns.c", line=line@entry=269) at mem.c:781
ptr = 0x0
#11 0x00007efd91de92af in isc_nm_tcpdnsconnect (mgr=<optimized out>, local=0x6084e0 <tcp_connect_addr>, peer=0x608580 <tcp_listen_addr>, cb=0x40529c <connect_connect_cb>, cbarg=cbarg@entry=0x403ae9 <tcpdns_connect>, timeout=timeout@entry=30000) at netmgr/tcpdns.c:269
result = ISC_R_SUCCESS
sock = 0x0
ievent = 0x0
req = 0x0
sa_family = 10
worker = 0x7efd8ead4680
__func__ = "isc_nm_tcpdnsconnect"
#12 0x0000000000403b0d in tcpdns_connect (nm=<optimized out>) at tcpdns_test.c:63
No locals.
#13 0x00000000004040ec in stream_recv_send_connect (arg=0x403ae9 <tcpdns_connect>) at netmgr_common.c:516
connect = 0x403ae9 <tcpdns_connect>
connect_addr = {type = {sa = {sa_family = 10, sa_data = '\000' <repeats 13 times>}, sin = {sin_family = 10, sin_port = 0, sin_addr = {s_addr = 0}, sin_zero = "\000\000\000\000\000\000\000"}, sin6 = {sin6_family = 10, sin6_port = 0, sin6_flowinfo = 0, sin6_addr = {__in6_u = {__u6_addr8 = '\000' <repeats 15 times>, "\001", __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 256}, __u6_addr32 = {0, 0, 0, 16777216}}}, sin6_scope_id = 0}, ss = {ss_family = 10, __ss_padding = '\000' <repeats 21 times>, "\001", '\000' <repeats 95 times>, __ss_align = 0}, sunix = {sun_family = 10, sun_path = '\000' <repeats 21 times>, "\001", '\000' <repeats 85 times>}}, length = 28, link = {prev = 0xffffffffffffffff, next = 0xffffffffffffffff}}
#14 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf0808) at job.c:75
job = 0x7efd8ebf0800
r = <optimized out>
__func__ = "isc__job_cb"
#15 0x00007efd91ba7a49 in uv__run_idle () from /lib64/libuv.so.1
No symbol table info available.
#16 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#17 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa5b80) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#18 loop_thread (arg=0x7efd8eaa5b80) at loop.c:294
loop = 0x7efd8eaa5b80
#19 0x00007efd91e1b87f in isc__trampoline_run (arg=0x18486b0) at trampoline.c:198
trampoline = 0x18486b0
result = <optimized out>
#20 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#21 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 12 (Thread 0x7efd82ffd700 (LWP 7969)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead40c0, ev0=ev0@entry=0x7efd8ea8bd00) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea8bd00
sock = 0x7efd8e3196e0
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea8bd00) at netmgr/netmgr.c:463
ievent = 0x7efd8ea8bd00
worker = 0x7efd8ead40c0
#4 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf0a48) at job.c:75
job = 0x7efd8ebf0a40
r = <optimized out>
__func__ = "isc__job_cb"
#5 0x00007efd91ba7a49 in uv__run_idle () from /lib64/libuv.so.1
No symbol table info available.
#6 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#7 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa26a0) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#8 loop_thread (arg=0x7efd8eaa26a0) at loop.c:294
loop = 0x7efd8eaa26a0
#9 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1845fc0) at trampoline.c:198
trampoline = 0x1845fc0
result = <optimized out>
#10 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#11 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 11 (Thread 0x7efd81ffb700 (LWP 7967)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead4040, ev0=ev0@entry=0x7efd8ea8bb80) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea8bb80
sock = 0x7efd8e3185a0
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea8bb80) at netmgr/netmgr.c:463
ievent = 0x7efd8ea8bb80
worker = 0x7efd8ead4040
#4 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ea24508) at job.c:75
job = 0x7efd8ea24500
r = <optimized out>
__func__ = "isc__job_cb"
#5 0x00007efd91ba7a49 in uv__run_idle () from /lib64/libuv.so.1
No symbol table info available.
#6 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#7 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa1ae0) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#8 loop_thread (arg=0x7efd8eaa1ae0) at loop.c:294
loop = 0x7efd8eaa1ae0
#9 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1848200) at trampoline.c:198
trampoline = 0x1848200
result = <optimized out>
#10 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#11 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 10 (Thread 0x7efd827fc700 (LWP 7968)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead4080, ev0=ev0@entry=0x7efd8ea8a800) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea8a800
sock = 0x7efd8e318e40
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea8a800) at netmgr/netmgr.c:463
ievent = 0x7efd8ea8a800
worker = 0x7efd8ead4080
#4 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ea23848) at job.c:75
job = 0x7efd8ea23840
r = <optimized out>
__func__ = "isc__job_cb"
#5 0x00007efd91ba7a49 in uv__run_idle () from /lib64/libuv.so.1
No symbol table info available.
#6 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#7 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa20c0) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#8 loop_thread (arg=0x7efd8eaa20c0) at loop.c:294
loop = 0x7efd8eaa20c0
#9 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1845e20) at trampoline.c:198
trampoline = 0x1845e20
result = <optimized out>
#10 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#11 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 9 (Thread 0x7efd8a6fc700 (LWP 7974)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead4200, ev0=ev0@entry=0x7efd8ea8af80) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea8af80
sock = 0x7efd8e31c200
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea8af80) at netmgr/netmgr.c:463
ievent = 0x7efd8ea8af80
worker = 0x7efd8ead4200
#4 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf0d48) at job.c:75
job = 0x7efd8ebf0d40
r = <optimized out>
__func__ = "isc__job_cb"
#5 0x00007efd91ba7a49 in uv__run_idle () from /lib64/libuv.so.1
No symbol table info available.
#6 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#7 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa4400) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#8 loop_thread (arg=0x7efd8eaa4400) at loop.c:294
loop = 0x7efd8eaa4400
#9 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1847ef0) at trampoline.c:198
trampoline = 0x1847ef0
result = <optimized out>
#10 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#11 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 8 (Thread 0x7efd88ef9700 (LWP 7977)):
#0 0x00007efd90b85317 in _dl_addr () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007efd90b58296 in backtrace_symbols_fd () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007efd91df193a in isc_backtrace_symbols_fd (buffer=buffer@entry=0x7efd88ef6e20, size=size@entry=13, fd=<optimized out>) at backtrace.c:61
No locals.
#3 0x00007efd91df11f6 in default_callback (file=0x7efd91e2ff0b "mem.c", line=772, type=<optimized out>, cond=0x7efd91e304a0 "((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))") at assertions.c:107
tracebuf = {0x7efd91df119d <default_callback+49>, 0x7efd91df1118 <isc_assertion_failed+10>, 0x7efd91e04800 <isc__mem_get+159>, 0x7efd91de92af <isc_nm_tcpdnsconnect+139>, 0x403b0d <tcpdns_connect+36>, 0x4040ec <stream_recv_send_connect+90>, 0x7efd91dfb536 <isc__job_cb+59>, 0x7efd91ba7a49 <uv.run_idle+153>, 0x7efd91ba0bf8 <uv_run+664>, 0x7efd91e01b44 <loop_thread+433>, 0x7efd91e1b87f <isc__trampoline_run+22>, 0x7efd90de91df <start_thread+239>, 0x7efd90a55dd3 <clone+67>, 0x7efd90a55dd3 <clone+67>, 0x0, 0x7efd9037ae61 <arena_bin_malloc_hard+1201>, 0x7efd806086e8, 0x3, 0x7efd88ef6f2f, 0xa, 0x7efd905f9d90, 0x7efd9037ad6e <arena_bin_malloc_hard+958>, 0x7efd806008c0, 0x7efd88ef7c78, 0x7efd8060a880, 0x7efd80608760, 0x7efd80608728, 0x7efd905f9d90, 0x7efd88ef6f2c, 0x7efd88ef7038, 0x0, 0x188ef6f2d, 0x0, 0x1, 0x8, 0x7efd88ef7038, 0x7efd8e6620d8, 0x7efd907ff200, 0xd50dcc13, 0x7efd88ef7c78, 0x7efd8ec0f300, 0x7efd903aca72 <extent_lock_from_addr+434>, 0x0, 0x1007efd9037bd7f, 0x4000000000, 0x0, 0x7efd88ef7c78, 0xd50dcc13, 0x7efd8ec03268, 0x7efd8ec0f400, 0x7efd8ec0f500, 0x7efd903b1877 <extent_try_coalesce_impl+839>, 0x7efd88ef7ca8, 0x7efd8ec00980, 0x7efd88ef7250, 0x48c900000001, 0x0, 0x7efd88ef7108, 0x0, 0x188ef7250, 0x7efd88ef7ca8, 0x1, 0x8, 0x7efd88ef7108, 0x0, 0x54, 0x7efd88ef70f0, 0x7efd91df6247 <isc_hash64+87>, 0x7efd88ef705f, 0x7efd92549c70 <isc_modules+80>, 0x0, 0x0, 0x0, 0x100000000000000, 0x0, 0xe6c048c994a92300, 0x0, 0xe6c048c994a92300, 0x7efd88ef7190, 0x7efd88ef71b8, 0x0, 0x1000000a8, 0x0, 0x1, 0x8, 0x7efd88ef71b8, 0x0, 0x2e2, 0x7efd88ef71a0, 0x7efd91df6247 <isc_hash64+87>, 0x7efd88ef71c0, 0x7efd91e03bc9 <delete_trace_entry+303>, 0x7efd91e2b585, 0x7efd5660e400, 0x7efd88ef7130, 0x8, 0x7efd8eaa5928, 0x34, 0x7efd91bb52a8, 0x7efd88ef7258, 0x0, 0x7efd88ef7268, 0x0, 0x7efd88ef7278, 0x0, 0x100000001, 0x8, 0x1, 0x8, 0x7efd88ef7278, 0x0, 0x201, 0x7efd88ef7260, 0x7efd91df6247 <isc_hash64+87>, 0x7efd91e27a1e, 0x7efd56603000, 0x7efd88ef71f0, 0x7efd91e05a3c <isc__mem_putanddetach+114>, 
0x7efd5660e400, 0x0, 0x0, 0x0, 0x7efd88ef72a0, 0x7efd91df15d7 <isc_astack_destroy+125>, 0x7efd88ef7c78, 0x7efd88ef7328, 0x0, 0x10027ffff}
nframes = 13
#4 0x00007efd91df1118 in isc_assertion_failed (file=file@entry=0x7efd91e2ff0b "mem.c", line=line@entry=772, type=type@entry=isc_assertiontype_require, cond=cond@entry=0x7efd91e304a0 "((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))") at assertions.c:49
No locals.
#5 0x00007efd91e04800 in isc__mem_get (ctx=<optimized out>, size=size@entry=2208, flags=flags@entry=0, file=file@entry=0x7efd91e2a6f2 "netmgr/tcpdns.c", line=line@entry=269) at mem.c:781
ptr = 0x0
#6 0x00007efd91de92af in isc_nm_tcpdnsconnect (mgr=<optimized out>, local=0x6084e0 <tcp_connect_addr>, peer=0x608580 <tcp_listen_addr>, cb=0x40529c <connect_connect_cb>, cbarg=cbarg@entry=0x403ae9 <tcpdns_connect>, timeout=timeout@entry=30000) at netmgr/tcpdns.c:269
result = ISC_R_SUCCESS
sock = 0x0
ievent = 0x0
req = 0x0
sa_family = 10
worker = 0x7efd8ead4640
__func__ = "isc_nm_tcpdnsconnect"
#7 0x0000000000403b0d in tcpdns_connect (nm=<optimized out>) at tcpdns_test.c:63
No locals.
#8 0x00000000004040ec in stream_recv_send_connect (arg=0x403ae9 <tcpdns_connect>) at netmgr_common.c:516
connect = 0x403ae9 <tcpdns_connect>
connect_addr = {type = {sa = {sa_family = 10, sa_data = '\000' <repeats 13 times>}, sin = {sin_family = 10, sin_port = 0, sin_addr = {s_addr = 0}, sin_zero = "\000\000\000\000\000\000\000"}, sin6 = {sin6_family = 10, sin6_port = 0, sin6_flowinfo = 0, sin6_addr = {__in6_u = {__u6_addr8 = '\000' <repeats 15 times>, "\001", __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 256}, __u6_addr32 = {0, 0, 0, 16777216}}}, sin6_scope_id = 0}, ss = {ss_family = 10, __ss_padding = '\000' <repeats 21 times>, "\001", '\000' <repeats 95 times>, __ss_align = 0}, sunix = {sun_family = 10, sun_path = '\000' <repeats 21 times>, "\001", '\000' <repeats 85 times>}}, length = 28, link = {prev = 0xffffffffffffffff, next = 0xffffffffffffffff}}
#9 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf0748) at job.c:75
job = 0x7efd8ebf0740
r = <optimized out>
__func__ = "isc__job_cb"
#10 0x00007efd91ba7a49 in uv.run_idle () from /lib64/libuv.so.1
No symbol table info available.
#11 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#12 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa55a0) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#13 loop_thread (arg=0x7efd8eaa55a0) at loop.c:294
loop = 0x7efd8eaa55a0
#14 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1848510) at trampoline.c:198
trampoline = 0x1848510
result = <optimized out>
#15 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#16 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 7 (Thread 0x7efd8b6fe700 (LWP 7972)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead4180, ev0=ev0@entry=0x7efd8ea8ac80) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea8ac80
sock = 0x7efd8e31b0c0
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea8ac80) at netmgr/netmgr.c:463
ievent = 0x7efd8ea8ac80
worker = 0x7efd8ead4180
#4 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf0bc8) at job.c:75
job = 0x7efd8ebf0bc0
r = <optimized out>
__func__ = "isc__job_cb"
#5 0x00007efd91ba7a49 in uv.run_idle () from /lib64/libuv.so.1
No symbol table info available.
#6 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#7 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa3840) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#8 loop_thread (arg=0x7efd8eaa3840) at loop.c:294
loop = 0x7efd8eaa3840
#9 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1846050) at trampoline.c:198
trampoline = 0x1846050
result = <optimized out>
#10 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#11 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 6 (Thread 0x7efd896fa700 (LWP 7976)):
#0 0x00007efd90df282d in __lll_lock_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd90debba4 in pthread_mutex_lock () from /lib64/libpthread.so.0
No symbol table info available.
#2 0x00007efd90b851f7 in _dl_addr () from /lib64/libc.so.6
No symbol table info available.
#3 0x00007efd90b58296 in backtrace_symbols_fd () from /lib64/libc.so.6
No symbol table info available.
#4 0x00007efd91df193a in isc_backtrace_symbols_fd (buffer=buffer@entry=0x7efd896f7e20, size=size@entry=13, fd=<optimized out>) at backtrace.c:61
No locals.
#5 0x00007efd91df11f6 in default_callback (file=0x7efd91e2ff0b "mem.c", line=772, type=<optimized out>, cond=0x7efd91e304a0 "((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))") at assertions.c:107
tracebuf = {0x7efd91df119d <default_callback+49>, 0x7efd91df1118 <isc_assertion_failed+10>, 0x7efd91e04800 <isc__mem_get+159>, 0x7efd91de92af <isc_nm_tcpdnsconnect+139>, 0x403b0d <tcpdns_connect+36>, 0x4040ec <stream_recv_send_connect+90>, 0x7efd91dfb536 <isc__job_cb+59>, 0x7efd91ba7a49 <uv.run_idle+153>, 0x7efd91ba0bf8 <uv_run+664>, 0x7efd91e01b44 <loop_thread+433>, 0x7efd91e1b87f <isc__trampoline_run+22>, 0x7efd90de91df <start_thread+239>, 0x7efd90a55dd3 <clone+67>, 0x7efd90a55dd3 <clone+67>, 0x0, 0x7efd896f8110, 0x7efd896f8c78, 0x7efd9037de0b <je_arena_ralloc+459>, 0x0, 0x7efd56c00400, 0x7efd896f8118, 0x0, 0x7efd80a008c0, 0x0, 0x0, 0x201, 0x0, 0x0, 0x0, 0x141000, 0x7efd903b134a <extents_remove_locked.isra+170>, 0xe6c048c994a92300, 0x1d, 0x7efd8ec03268, 0x7efd8ec03268, 0x7efd8ec0f400, 0x7efd8e6616d0, 0x7efd907ff5f0, 0xd50dcc13, 0x7efd896f8c78, 0x7efd8ec0f200, 0x7efd903aca72 <extent_lock_from_addr+434>, 0x0, 0x100000000000001, 0x7efd896f8c78, 0x0, 0x7efd896f8c78, 0xd50dcc13, 0x7efd8ec03268, 0x7efd8ec0f300, 0x7efd8ec0f500, 0x7efd903b1877 <extent_try_coalesce_impl+839>, 0x7efd896f8ca8, 0x7efd8ec00980, 0x7efd896f8250, 0x7efd00000001, 0x0, 0x7efd896f8108, 0x0, 0x1896f8250, 0x7efd896f8ca8, 0x1, 0x8, 0x7efd896f8108, 0x0, 0x54, 0x7efd896f80f0, 0x7efd91df6247 <isc_hash64+87>, 0x7efd896f805f, 0x7efd92549c70 <isc_modules+80>, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7efd896f8198, 0x0, 0x1896f8c78, 0x0, 0x7efd896f81b8, 0x0, 0x1896f8198, 0x0, 0x1, 0x8, 0x7efd896f81b8, 0x0, 0x2e2, 0x7efd896f81a0, 0x7efd91df6247 <isc_hash64+87>, 0x7efd896f81c0, 0x7efd91e03bc9 <delete_trace_entry+303>, 0x7efd91e2b585, 0x7efd56c13400, 0x0, 0x7efd80000000, 0x7efd8df8c120, 0x7efd896f8c78, 0x7efd8ea32e30, 0x7efd896f8258, 0x0, 0x7efd896f8268, 0x0, 0x7efd896f8278, 0x0, 0x18ea18010, 0x168, 0x1, 0x8, 0x7efd896f8278, 0x0, 0x201, 0x7efd896f8260, 0x7efd91df6247 <isc_hash64+87>, 0x7efd91e27a1e, 0x7efd56c04400, 0x7efd896f81f0, 0x7efd91e05a3c <isc__mem_putanddetach+114>, 0x7efd56c13400, 0x0, 0x0, 0x0, 0x7efd896f82a0, 
0x7efd91df15d7 <isc_astack_destroy+125>, 0x7efd896f8c78, 0x7efd896f8328, 0x0, 0x10027ffff}
nframes = 13
#6 0x00007efd91df1118 in isc_assertion_failed (file=file@entry=0x7efd91e2ff0b "mem.c", line=line@entry=772, type=type@entry=isc_assertiontype_require, cond=cond@entry=0x7efd91e304a0 "((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))") at assertions.c:49
No locals.
#7 0x00007efd91e04800 in isc__mem_get (ctx=<optimized out>, size=size@entry=2208, flags=flags@entry=0, file=file@entry=0x7efd91e2a6f2 "netmgr/tcpdns.c", line=line@entry=269) at mem.c:781
ptr = 0x0
#8 0x00007efd91de92af in isc_nm_tcpdnsconnect (mgr=<optimized out>, local=0x6084e0 <tcp_connect_addr>, peer=0x608580 <tcp_listen_addr>, cb=0x40529c <connect_connect_cb>, cbarg=cbarg@entry=0x403ae9 <tcpdns_connect>, timeout=timeout@entry=30000) at netmgr/tcpdns.c:269
result = ISC_R_SUCCESS
sock = 0x0
ievent = 0x0
req = 0x0
sa_family = 10
worker = 0x7efd8ead4600
__func__ = "isc_nm_tcpdnsconnect"
#9 0x0000000000403b0d in tcpdns_connect (nm=<optimized out>) at tcpdns_test.c:63
No locals.
#10 0x00000000004040ec in stream_recv_send_connect (arg=0x403ae9 <tcpdns_connect>) at netmgr_common.c:516
connect = 0x403ae9 <tcpdns_connect>
connect_addr = {type = {sa = {sa_family = 10, sa_data = '\000' <repeats 13 times>}, sin = {sin_family = 10, sin_port = 0, sin_addr = {s_addr = 0}, sin_zero = "\000\000\000\000\000\000\000"}, sin6 = {sin6_family = 10, sin6_port = 0, sin6_flowinfo = 0, sin6_addr = {__in6_u = {__u6_addr8 = '\000' <repeats 15 times>, "\001", __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 256}, __u6_addr32 = {0, 0, 0, 16777216}}}, sin6_scope_id = 0}, ss = {ss_family = 10, __ss_padding = '\000' <repeats 21 times>, "\001", '\000' <repeats 95 times>, __ss_align = 0}, sunix = {sun_family = 10, sun_path = '\000' <repeats 21 times>, "\001", '\000' <repeats 85 times>}}, length = 28, link = {prev = 0xffffffffffffffff, next = 0xffffffffffffffff}}
#11 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf0688) at job.c:75
job = 0x7efd8ebf0680
r = <optimized out>
__func__ = "isc__job_cb"
#12 0x00007efd91ba7a49 in uv.run_idle () from /lib64/libuv.so.1
No symbol table info available.
#13 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#14 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa4fc0) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#15 loop_thread (arg=0x7efd8eaa4fc0) at loop.c:294
loop = 0x7efd8eaa4fc0
#16 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1847a10) at trampoline.c:198
trampoline = 0x1847a10
result = <optimized out>
#17 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#18 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 5 (Thread 0x7efd8beff700 (LWP 7971)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead4140, ev0=ev0@entry=0x7efd8ea89900) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea89900
sock = 0x7efd8e31a820
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea89900) at netmgr/netmgr.c:463
ievent = 0x7efd8ea89900
worker = 0x7efd8ead4140
#4 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf0b08) at job.c:75
job = 0x7efd8ebf0b00
r = <optimized out>
__func__ = "isc__job_cb"
#5 0x00007efd91ba7a49 in uv.run_idle () from /lib64/libuv.so.1
No symbol table info available.
#6 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#7 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa3260) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#8 loop_thread (arg=0x7efd8eaa3260) at loop.c:294
loop = 0x7efd8eaa3260
#9 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1846020) at trampoline.c:198
trampoline = 0x1846020
result = <optimized out>
#10 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#11 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 4 (Thread 0x7efd92768140 (LWP 7887)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead4000, ev0=ev0@entry=0x7efd8ea8b700) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea8b700
sock = 0x7efd8e317d00
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea8b700) at netmgr/netmgr.c:463
ievent = 0x7efd8ea8b700
worker = 0x7efd8ead4000
#4 0x00007efd91de2af3 in isc__nm_process_ievent (worker=<optimized out>, event=<optimized out>) at netmgr/netmgr.c:567
No locals.
#5 0x00007efd91de72c5 in stop_tcpdns_child (sock=sock@entry=0x7efd8e5d8c00, tid=tid@entry=0) at netmgr/tcpdns.c:605
csock = 0x7efd8e317d00
ievent = <optimized out>
#6 0x00007efd91de7c14 in isc__nm_tcpdns_stoplistening (sock=0x7efd8e5d8c00) at netmgr/tcpdns.c:632
__func__ = "isc__nm_tcpdns_stoplistening"
#7 0x00007efd91ddf096 in isc_nm_stoplistening (sock=<optimized out>) at netmgr/netmgr.c:2091
No locals.
#8 0x0000000000403c86 in stop_listening (arg=<optimized out>) at tcpdns_test.c:45
No locals.
#9 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebefe48) at job.c:75
job = 0x7efd8ebefe40
r = <optimized out>
__func__ = "isc__job_cb"
#10 0x00007efd91ba7a49 in uv.run_idle () from /lib64/libuv.so.1
No symbol table info available.
#11 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#12 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa1500) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#13 loop_thread (arg=0x7efd8eaa1500) at loop.c:294
loop = 0x7efd8eaa1500
#14 0x00007efd91e02b91 in isc_loopmgr_run (loopmgr=0x7efd8ea23540) at loop.c:474
__func__ = "isc_loopmgr_run"
#15 0x0000000000403ae7 in run_test_tcpdns_recv_send (state=<optimized out>) at tcpdns_test.c:121
No locals.
#16 0x00007efd91782f37 in cmocka_run_one_test_or_fixture () from /lib64/libcmocka.so.0
No symbol table info available.
#17 0x00007efd917838a1 in _cmocka_run_group_tests () from /lib64/libcmocka.so.0
No symbol table info available.
#18 0x000000000040405e in main () at tcpdns_test.c:153
r = <optimized out>
Thread 3 (Thread 0x7efd8aefd700 (LWP 7973)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead41c0, ev0=ev0@entry=0x7efd8ea8ae00) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea8ae00
sock = 0x7efd8e31b960
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea8ae00) at netmgr/netmgr.c:463
ievent = 0x7efd8ea8ae00
worker = 0x7efd8ead41c0
#4 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf0c88) at job.c:75
job = 0x7efd8ebf0c80
r = <optimized out>
__func__ = "isc__job_cb"
#5 0x00007efd91ba7a49 in uv.run_idle () from /lib64/libuv.so.1
No symbol table info available.
#6 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#7 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa3e20) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#8 loop_thread (arg=0x7efd8eaa3e20) at loop.c:294
loop = 0x7efd8eaa3e20
#9 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1846080) at trampoline.c:198
trampoline = 0x1846080
result = <optimized out>
#10 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#11 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 2 (Thread 0x7efd837fe700 (LWP 7970)):
#0 0x00007efd90df05be in pthread_barrier_wait () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007efd91bada7d in uv_barrier_wait () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007efd91de8154 in isc__nm_async_tcpdnsstop (worker=worker@entry=0x7efd8ead4100, ev0=ev0@entry=0x7efd8ea8be80) at netmgr/tcpdns.c:670
ievent = 0x7efd8ea8be80
sock = 0x7efd8e319f80
__func__ = "isc__nm_async_tcpdnsstop"
#3 0x00007efd91de27d2 in process_netievent (arg=0x7efd8ea8be80) at netmgr/netmgr.c:463
ievent = 0x7efd8ea8be80
worker = 0x7efd8ead4100
#4 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf08c8) at job.c:75
job = 0x7efd8ebf08c0
r = <optimized out>
__func__ = "isc__job_cb"
#5 0x00007efd91ba7a49 in uv.run_idle () from /lib64/libuv.so.1
No symbol table info available.
#6 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#7 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa2c80) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#8 loop_thread (arg=0x7efd8eaa2c80) at loop.c:294
loop = 0x7efd8eaa2c80
#9 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1845ff0) at trampoline.c:198
trampoline = 0x1845ff0
result = <optimized out>
#10 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#11 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 1 (Thread 0x7efd89efb700 (LWP 7975)):
#0 0x00007efd90a6aa9f in raise () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007efd90a3de05 in abort () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007efd91df111d in isc_assertion_failed (file=file@entry=0x7efd91e2ff0b "mem.c", line=line@entry=772, type=type@entry=isc_assertiontype_require, cond=cond@entry=0x7efd91e304a0 "((ctx) != ((void *)0) && ((const isc__magic_t *)(ctx))->magic == ((('M') << 24 | ('e') << 16 | ('m') << 8 | ('C'))))") at assertions.c:50
No locals.
#3 0x00007efd91e04800 in isc__mem_get (ctx=<optimized out>, size=size@entry=2208, flags=flags@entry=0, file=file@entry=0x7efd91e2a6f2 "netmgr/tcpdns.c", line=line@entry=269) at mem.c:781
ptr = 0x0
#4 0x00007efd91de92af in isc_nm_tcpdnsconnect (mgr=<optimized out>, local=0x6084e0 <tcp_connect_addr>, peer=0x608580 <tcp_listen_addr>, cb=0x40529c <connect_connect_cb>, cbarg=cbarg@entry=0x403ae9 <tcpdns_connect>, timeout=timeout@entry=30000) at netmgr/tcpdns.c:269
result = ISC_R_SUCCESS
sock = 0x0
ievent = 0x0
req = 0x0
sa_family = 10
worker = 0x7efd8ead45c0
__func__ = "isc_nm_tcpdnsconnect"
#5 0x0000000000403b0d in tcpdns_connect (nm=<optimized out>) at tcpdns_test.c:63
No locals.
#6 0x00000000004040ec in stream_recv_send_connect (arg=0x403ae9 <tcpdns_connect>) at netmgr_common.c:516
connect = 0x403ae9 <tcpdns_connect>
connect_addr = {type = {sa = {sa_family = 10, sa_data = '\000' <repeats 13 times>}, sin = {sin_family = 10, sin_port = 0, sin_addr = {s_addr = 0}, sin_zero = "\000\000\000\000\000\000\000"}, sin6 = {sin6_family = 10, sin6_port = 0, sin6_flowinfo = 0, sin6_addr = {__in6_u = {__u6_addr8 = '\000' <repeats 15 times>, "\001", __u6_addr16 = {0, 0, 0, 0, 0, 0, 0, 256}, __u6_addr32 = {0, 0, 0, 16777216}}}, sin6_scope_id = 0}, ss = {ss_family = 10, __ss_padding = '\000' <repeats 21 times>, "\001", '\000' <repeats 95 times>, __ss_align = 0}, sunix = {sun_family = 10, sun_path = '\000' <repeats 21 times>, "\001", '\000' <repeats 85 times>}}, length = 28, link = {prev = 0xffffffffffffffff, next = 0xffffffffffffffff}}
#7 0x00007efd91dfb536 in isc__job_cb (idle=0x7efd8ebf05c8) at job.c:75
job = 0x7efd8ebf05c0
r = <optimized out>
__func__ = "isc__job_cb"
#8 0x00007efd91ba7a49 in uv.run_idle () from /lib64/libuv.so.1
No symbol table info available.
#9 0x00007efd91ba0bf8 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#10 0x00007efd91e01b44 in loop_run (loop=0x7efd8eaa49e0) at loop.c:267
r = <optimized out>
job = <optimized out>
r = <optimized out>
job = <optimized out>
__func__ = "loop_run"
next = <optimized out>
#11 loop_thread (arg=0x7efd8eaa49e0) at loop.c:294
loop = 0x7efd8eaa49e0
#12 0x00007efd91e1b87f in isc__trampoline_run (arg=0x1847870) at trampoline.c:198
trampoline = 0x1847870
result = <optimized out>
#13 0x00007efd90de91df in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#14 0x00007efd90a55dd3 in clone () from /lib64/libc.so.6
No symbol table info available.
D:tcpdns_test:backtrace from ./core.7887 end
FAIL tcpdns_test (exit status: 134)
```
</details>

Not planned

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3635
Implement support for DNS over QUIC
2023-02-15T19:00:14Z · Jeremy Saklad

### Description
DNS-over-QUIC, specified in [RFC 9250][RFC 9250], has considerable advantages over the already-implemented options.
* With the debatable exception of DoH and HTTP/3, it is the only standardized encrypted DNS protocol to operate over UDP.
* It avoids issues such as head-of-line blocking and potential for amplification attacks.
* It avoids the overhead of DNS-over-HTTPS.
### Request
DNS-over-QUIC should be offered wherever DNS-over-HTTPS or DNS-over-TLS is, at minimum. Its use should be encouraged over the others where applicable.
[RFC 9250][RFC 9250] emphasizes the following scopes of usage:
> * the "stub to recursive resolver" scenario (also called the "stub to recursive" scenario in this document)
> * the "recursive resolver to authoritative nameserver" scenario (also called the "recursive to authoritative" scenario in this document), and
> * the "nameserver to nameserver" scenario (mainly used for zone transfers (XFR) [RFC1995][RFC 1995] [RFC5936][RFC 5936]).
I believe that covers every function of BIND.
---
While not specific to DNS-over-QUIC, the implementation should be designed with future support for non-standard ports and SVCB records in mind. 53/udp is explicitly banned for use with this protocol, but it should eventually be possible to use any other non-standard port rather than only the default 853/udp.
[RFC 1995]: https://www.rfc-editor.org/info/rfc1995
[RFC 5936]: https://www.rfc-editor.org/info/rfc5936
[RFC 9250]: https://www.rfc-editor.org/info/rfc9250

Not planned

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3629
Slow retries after timeouts
2022-11-16T16:07:31Z · grzchr15

Measurements from Neustar/UltraDNS for some DNS servers including BIND9; they think there is weird behavior:
https://ripe85.ripe.net/wp-content/uploads/presentations/96-dknight-fewnsmanyips-ripe85-dnswg.pdf
Page 8
Video: https://ripe85.ripe.net/archive/video/dave-knight_fewer-name-servers-more-addresses_main-20221027-112204.mp4 time 7:17 onwards
Measurements from Neustar/UltraDNS for some DNS servers including BIND9
- BIND strongly prefers IPv6 - they think there is weird behavior.
- If one address is broken, penalize all higher numbered addresses until piling onto the last one?
- Slow to get an answer when retrying

Assignee: Štěpán Balážik

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3623
A combination of an RPZ with NSIP triggers & unusually (but not entirely) broken delegation gives SERVFAIL & "rpz NSIP rewrite X via Y NS address rewrite rrset failed: failure"
2022-10-27T11:36:42Z · Graham Clinch

<!--
If the bug you are reporting is potentially security-related - for example,
if it involves an assertion failure or other crash in `named` that can be
triggered repeatedly - then please do *NOT* report it here, but send an
email to [security-officer@isc.org](security-officer@isc.org).
-->
### Summary
When a configured response policy zone contains rpz-nsip triggers **and** NS record resolution does not complete successfully (but in an apparently unusual way), SERVFAIL is returned to queries that would otherwise succeed.
### BIND version used
```
BIND 9.18.8 (Stable Release) <id:35f5d35>
running on Darwin arm64 21.6.0 Darwin Kernel Version 21.6.0: Mon Aug 22 20:19:52 PDT 2022; root:xnu-8020.140.49~2/RELEASE_ARM64_T6000
built by make with '--prefix=/opt/homebrew/Cellar/bind/9.18.8' '--sysconfdir=/opt/homebrew/etc/bind' '--localstatedir=/opt/homebrew/var' '--with-json-c' '--with-libidn2=/opt/homebrew/opt/libidn2' '--with-openssl=/opt/homebrew/opt/openssl@3' '--without-lmdb' 'CC=clang' 'PKG_CONFIG_PATH=/opt/homebrew/opt/json-c/lib/pkgconfig:/opt/homebrew/opt/libidn2/lib/pkgconfig:/opt/homebrew/opt/libnghttp2/lib/pkgconfig:/opt/homebrew/opt/libuv/lib/pkgconfig:/opt/homebrew/opt/openssl@3/lib/pkgconfig' 'PKG_CONFIG_LIBDIR=/usr/lib/pkgconfig:/opt/homebrew/Library/Homebrew/os/mac/pkgconfig/12'
compiled by CLANG Apple LLVM 14.0.0 (clang-1400.0.29.202)
compiled with OpenSSL version: OpenSSL 3.0.5 5 Jul 2022
linked to OpenSSL version: OpenSSL 3.0.5 5 Jul 2022
compiled with libuv version: 1.44.2
linked to libuv version: 1.44.2
compiled with libnghttp2 version: 1.50.0
linked to libnghttp2 version: 1.50.0
compiled with libxml2 version: 2.9.4
linked to libxml2 version: 20904
compiled with json-c version: 0.16
linked to json-c version: 0.16
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
threads support is enabled
DNSSEC algorithms: RSASHA1 NSEC3RSASHA1 RSASHA256 RSASHA512 ECDSAP256SHA256 ECDSAP384SHA384 ED25519 ED448
DS algorithms: SHA-1 SHA-256 SHA-384
HMAC algorithms: HMAC-MD5 HMAC-SHA1 HMAC-SHA224 HMAC-SHA256 HMAC-SHA384 HMAC-SHA512
TKEY mode 2 support (Diffie-Hellman): yes
TKEY mode 3 support (GSS-API): yes
default paths:
named configuration: /opt/homebrew/etc/bind/named.conf
rndc configuration: /opt/homebrew/etc/bind/rndc.conf
DNSSEC root key: /opt/homebrew/etc/bind/bind.keys
nsupdate session key: /opt/homebrew/var/run/named/session.key
named PID file: /opt/homebrew/var/run/named/named.pid
named lock file: /opt/homebrew/var/run/named/named.lock
```
### Steps to reproduce
Configure a minimal BIND 9 recursive resolver with a response policy zone that includes an rpz-nsip match, and then attempt to resolve www.britishairways.com (which appears to have an unusually broken partially lame delegation).
### What is the current *bug* behavior?
DNS resolution of "www.britishairways.com" fails with SERVFAIL:
```
$ delv www.britishairways.com @::1
;; resolution failed: SERVFAIL
```
named (at -d 1) logs:
```
25-Oct-2022 22:53:35.007 client @0x1439ad560 ::1#53833 (www.britishairways.com): rpz NSIP rewrite www.britishairways.com via dnssec1-win.server.ntli.net NS address rewrite rrset failed: failure
25-Oct-2022 22:53:35.007 client @0x1439ad560 ::1#53833 (www.britishairways.com): query failed (SERVFAIL) for www.britishairways.com/IN/A at query.c:7232
```
### What is the expected *correct* behavior?
DNS resolution of "www.britishairways.com" should succeed:
```
$ delv www.britishairways.com @::1
; unsigned answer
www.britishairways.com. 60 IN CNAME www.ba.com.edgekey.net.
www.ba.com.edgekey.net. 21600 IN CNAME e8308.b.akamaiedge.net.
e8308.b.akamaiedge.net. 20 IN A 104.117.169.173
```
### Relevant configuration files
named.conf:
```
options {
response-policy {
zone "test.example.net" policy given;
};
};
zone "test.example.net" {
type primary;
file "test.example.net";
};
```
test.example.net zone file:
```
@ SOA ns1 hostmaster. (
2003080800 ; serial number
12h ; refresh
15m ; update retry
3w ; expiry
2h ; minimum
)
@ NS ns1
ns1 A 127.0.0.1
foo.com CNAME .
32.99.99.168.192.rpz-nsip CNAME .
```
Note that simply commenting out the final line in the zone file causes the problem to go away.
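For reference, the `32.99.99.168.192.rpz-nsip` trigger above encodes the NS address 192.168.99.99/32: the owner name is the prefix length followed by the address octets in reverse. A minimal decoding sketch (the helper name is hypothetical, not BIND code):

```c
#include <stdio.h>

/*
 * Sketch (not BIND code): an IPv4 RPZ NSIP trigger owner such as
 * "32.99.99.168.192.rpz-nsip" encodes <prefixlen>.<o4>.<o3>.<o2>.<o1>,
 * i.e. the prefix length followed by the address octets reversed.
 * Decode the leading numeric labels back into "o1.o2.o3.o4/prefixlen".
 * Returns 0 on success, -1 if the labels do not form a valid prefix.
 */
static int
decode_nsip_v4(const char *owner, char *out, size_t outlen) {
	unsigned int plen, o1, o2, o3, o4;

	/* sscanf stops at the first non-numeric label (".rpz-nsip"). */
	if (sscanf(owner, "%u.%u.%u.%u.%u", &plen, &o4, &o3, &o2, &o1) != 5 ||
	    plen > 32 || o1 > 255 || o2 > 255 || o3 > 255 || o4 > 255)
	{
		return -1;
	}
	snprintf(out, outlen, "%u.%u.%u.%u/%u", o1, o2, o3, o4, plen);
	return 0;
}
```

With the zone file above, decoding `32.99.99.168.192` yields `192.168.99.99/32`, i.e. the trigger matches any name server whose address is exactly 192.168.99.99.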
### Relevant logs and/or screenshots
named -d 2 -g output:
```
[...]
25-Oct-2022 22:57:14.370 fetch: www.britishairways.com/A
25-Oct-2022 22:57:14.371 fetch: _.com/A
25-Oct-2022 22:57:14.393 fetch: _.britishairways.com/A
25-Oct-2022 22:57:14.419 fetch: ns1.britishairways.com/AAAA
25-Oct-2022 22:57:14.419 fetch: ns2.britishairways.com/AAAA
25-Oct-2022 22:57:14.419 fetch: dnssec1-win.server.ntli.net/A
25-Oct-2022 22:57:14.419 fetch: dnssec1-win.server.ntli.net/AAAA
25-Oct-2022 22:57:14.419 fetch: dnssec2-win.server.ntli.net/A
25-Oct-2022 22:57:14.419 fetch: dnssec2-win.server.ntli.net/AAAA
25-Oct-2022 22:57:14.439 delete_node(): 0x600002c8edf0 www.britishairways.com (bucket 15)
25-Oct-2022 22:57:14.440 fetch: britishairways.com/DS
25-Oct-2022 22:57:14.456 fetch: com/DNSKEY
25-Oct-2022 22:57:14.477 fetch: dnssec1-win.server.ntli.net/A
25-Oct-2022 22:57:14.494 lame server resolving 'dnssec2-win.server.ntli.net' (in 'server.ntli.net'?): 194.168.4.237#53
25-Oct-2022 22:57:14.495 lame server resolving 'dnssec1-win.server.ntli.net' (in 'server.ntli.net'?): 194.168.4.237#53
25-Oct-2022 22:57:14.495 lame server resolving 'dnssec1-win.server.ntli.net' (in 'server.ntli.net'?): 194.168.4.237#53
25-Oct-2022 22:57:14.497 lame server resolving 'dnssec2-win.server.ntli.net' (in 'server.ntli.net'?): 194.168.4.237#53
25-Oct-2022 22:57:14.497 lame server resolving 'dnssec1-win.server.ntli.net' (in 'server.ntli.net'?): 194.168.4.237#53
25-Oct-2022 22:57:14.522 lame server resolving 'dnssec2-win.server.ntli.net' (in 'server.ntli.net'?): 62.253.162.237#53
25-Oct-2022 22:57:14.522 fetch: dns1.ntli.net/AAAA
25-Oct-2022 22:57:14.522 fetch: dns2.ntli.net/AAAA
25-Oct-2022 22:57:14.522 lame server resolving 'dnssec1-win.server.ntli.net' (in 'server.ntli.net'?): 62.253.162.237#53
25-Oct-2022 22:57:14.522 lame server resolving 'dnssec1-win.server.ntli.net' (in 'server.ntli.net'?): 62.253.162.237#53
25-Oct-2022 22:57:14.522 lame server resolving 'dnssec2-win.server.ntli.net' (in 'server.ntli.net'?): 62.253.162.237#53
25-Oct-2022 22:57:14.523 lame server resolving 'dnssec1-win.server.ntli.net' (in 'server.ntli.net'?): 62.253.162.237#53
25-Oct-2022 22:57:14.523 client @0x11a870560 ::1#52973 (www.britishairways.com): rpz NSIP rewrite www.britishairways.com via dnssec1-win.server.ntli.net NS address rewrite rrset failed: failure
25-Oct-2022 22:57:14.523 client @0x11a870560 ::1#52973 (www.britishairways.com): query failed (SERVFAIL) for www.britishairways.com/IN/A at query.c:7232
25-Oct-2022 22:57:14.523 fetch completed at resolver.c:4140 for dnssec1-win.server.ntli.net/A in 0.045906: failure/success [domain:server.ntli.net,referral:0,restart:2,qrysent:2,timeout:0,lame:2,quota:0,neterr:0,badresp:0,adberr:0,findfail:0,valfail:0]
```
### Possible fixes
Unclear, but appears to be a fault in the rpz-nsip processing when an "unusually unknown" NS IP is processed.

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3618
dynamic TTL shortening in auth after RR change
2022-10-26T09:07:48Z · Petr Špaček (pspacek@isc.org)

### Description
TL;DR version: Withdrawing DS is a nightmare because TLDs have too long TTLs. COM with 1 day is a total nightmare, and risk-averse businesses like google.com are not going to risk a 1-day disruption of service => no prospect of deploying DNSSEC.
Long version:
https://indico.dns-oarc.net/event/44/contributions/962/
### Request
I'm considering an _experiment_, not a production-ready feature. Auth DNS is not a good place for what I'm going to propose, but I still think it is a nice experiment:
Add magic which shortens TTLs sent out in answers after RR modification. Say, in the first hour after modification shorten TTL of modified DS RR to 60 seconds. After that use the original TTL. (Of course we can invent any other schema, this is just a simple example.)
Obviously this requires knowing when an RR was modified - and this is a nightmare by itself. For an experiment I think it would be good enough to look at the RRSIG inception time to detect the initial window. Obviously this will have false positives after resigning, but for an experiment I think we don't need to care.
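The proposed schedule could be sketched as below; the helper name and constants are hypothetical (not BIND internals), and the "modification time" is approximated by the RRSIG inception time as suggested above:

```c
#include <stdint.h>

/*
 * Hypothetical sketch of the proposed TTL-shortening schedule:
 * within the first hour after a modification (approximated by the
 * RRSIG inception time), answer with a short TTL; afterwards fall
 * back to the record's original TTL.
 */
#define SHORT_TTL    60U   /* TTL used inside the window, seconds */
#define SHORT_WINDOW 3600U /* "first hour after modification" */

static uint32_t
effective_ttl(uint32_t original_ttl, uint64_t now, uint64_t rrsig_inception) {
	if (now >= rrsig_inception &&
	    now - rrsig_inception < SHORT_WINDOW &&
	    original_ttl > SHORT_TTL)
	{
		/* Recently modified (or recently re-signed): shorten. */
		return SHORT_TTL;
	}
	return original_ttl;
}
```

Note how the false positive mentioned above shows up here: re-signing refreshes the inception time and re-opens the short window, which is tolerable for an experiment.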
An experiment would allow us to detect if something breaks when the TTLs on an RR and its RRSIG do not match when sent as an answer from auth. (It should work, but you know how it is ...)
### Links / references
- https://indico.dns-oarc.net/event/44/contributions/962/
- https://chat.dns-oarc.net/community/pl/u36txi1cw3ykzx7iaos4rqo95c
- https://www.ripe.net/ripe/mail/archives/dns-wg/2021-December/003935.html

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3616
empty zones slow down loading lots of zones
2023-11-02T17:05:05Z · Petr Špaček (pspacek@isc.org)

### Summary
Loading 1 M zones from textual named.conf & minimal zone file is much slower if `empty-zones-enable yes` (default) is in effect.
### BIND version used
BIND 9.19.7-dev (Development Release) 64287e4
### Steps to reproduce
1. Generate config
```
for I in $(seq 1 1000000); do echo "zone z$I.example. { type primary; file \"db\"; };" >> named.conf; done
echo 'options { notify no; };' >> named.conf
```
2. Use minimal zone file:
[db](/uploads/dc86b8dc10f481b4d9b2f9718a7efd2a/db)
3. Run BIND:
```
named -g -c named.conf &> log
```
4. Observe CPU utilization going through the roof. Startup time is over 60 seconds on my laptop.
### What is the expected *correct* behavior?
It could be faster :-)
### Relevant logs and/or screenshots
Notice the timestamps - it takes quite long to process a single zone:
```
20-Oct-2022 15:55:37.781 automatic empty zone: 10.IN-ADDR.ARPA
20-Oct-2022 15:55:37.971 automatic empty zone: 16.172.IN-ADDR.ARPA
20-Oct-2022 15:55:38.148 automatic empty zone: 17.172.IN-ADDR.ARPA
20-Oct-2022 15:55:38.325 automatic empty zone: 18.172.IN-ADDR.ARPA
20-Oct-2022 15:55:38.505 automatic empty zone: 19.172.IN-ADDR.ARPA
```
Flamegraph, all threads combined:
![nothreads.svg](/uploads/30254da36ec0ee0be71ca1d86d0e2cde/nothreads.svg)
With
```
options { notify no; empty-zones-enable no; };
```
the load time drops to ~42 s, an almost 1/3 reduction.
### Possible fixes
From a quick glance, it seems that `create_empty_zone()` is called in a loop and effectively walks the cfg representation of the zone list over and over, including conversion from text to wire format, and then compares zone names. This is done to detect forward zones with a particular configuration.
Unless I'm missing something, we should be able to move empty-zone creation to the very end of config processing and use the forward table for far more efficient lookups, which should reduce the processing time for empty zones to almost zero.

Not planned. Assignee: Tony Finch.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3614
Resolver prefetch issue with qtype ANY (Aram Sargsyan, 2022-11-07)

In `lib/ns/query.c:query_respond_any()`, when there are several datasets in the database node, only one of them has a chance to trigger a prefetch, because when one does, the `FETCH_RECTYPE_PREFETCH(client) != NULL` check (see below) does not pass for the rest of the datasets, even if they are eligible for a prefetch, as the client's fetch reserved for the prefetch operation is already in progress:
```c
static void
query_prefetch(ns_client_t *client, dns_name_t *qname,
dns_rdataset_t *rdataset) {
CTRACE(ISC_LOG_DEBUG(3), "query_prefetch");
if (FETCH_RECTYPE_PREFETCH(client) != NULL ||
client->view->prefetch_trigger == 0U ||
rdataset->ttl > client->view->prefetch_trigger ||
(rdataset->attributes & DNS_RDATASETATTR_PREFETCH) == 0)
{
return;
}
fetch_and_forget(client, qname, rdataset->type, RECTYPE_PREFETCH);
dns_rdataset_clearprefetch(rdataset);
ns_stats_increment(client->manager->sctx->nsstats,
ns_statscounter_prefetch);
}
```
I will use the `resolver` system test's `check prefetch qtype * (${n})` check to demonstrate it. Please note that if you want to reproduce it, you'll need to use the branch in !6937 which fixes another prefetch issue (unless it is already merged).
Run the test:
```
$ ./run.sh -n resolver
...
...
I:resolver:check prefetch qtype * (32)
...
PASS: resolver
```
Check the first answer, all records start with TTL value of 10:
```
$ cat resolver/dig.out.1.32
...
;; QUESTION SECTION:
;fetchall.tld. IN ANY
;; ANSWER SECTION:
fetchall.tld. 10 IN AAAA ::1
fetchall.tld. 10 IN A 1.2.3.4
fetchall.tld. 10 IN TXT "A" "short" "ttl"
...
```
Check the second answer (for a request after 7 seconds); this should have triggered a prefetch for all 3 records, because the TTL value 3 is smaller than the configured trigger value 4:
```
$ cat resolver/dig.out.2.32
...
;; QUESTION SECTION:
;fetchall.tld. IN ANY
;; ANSWER SECTION:
fetchall.tld. 3 IN AAAA ::1
fetchall.tld. 3 IN A 1.2.3.4
fetchall.tld. 3 IN TXT "A" "short" "ttl"
...
```
Check the third answer (for a request after 1 second):
```
$ cat resolver/dig.out.3.32
...
;; QUESTION SECTION:
;fetchall.tld. IN ANY
;; ANSWER SECTION:
fetchall.tld. 9 IN AAAA ::1
fetchall.tld. 2 IN A 1.2.3.4
fetchall.tld. 2 IN TXT "A" "short" "ttl"
...
```
As you can see, only the first record was prefetched.
Here are the logs which confirm that, where you can see that `query_prefetch()` was called three times, but a prefetch was initiated only for the first call: [fetchall.tld-any.log.gz](/uploads/30a6656ecf98a5ca6c9b342f200969b7/fetchall.tld-any.log.gz).
I think, as suggested by @fanf in MM, ANY should not trigger prefetching at all. Otherwise, all records which are eligible for prefetch should be prefetched.

Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3609
dual-stack-servers are not being used (Mark Andrews, 2023-11-02)

The current code for deciding when to add a dual-stack server doesn't always cause the server to be added, as it looks for an NXRRSET indication for the server's address (A for IPv4 and AAAA for IPv6) as well as no dispatch for that transport. When the server is within the zone we can't get the NXRRSET indication. Additionally, the "no dispatch" condition is contingent on -4 or -6 being used on the command line.
- We need a solution to the bootstrap problem or to relax the requirement.
- We need a better solution for determining whether we are effectively running single-stack, other than the -4 and -6 command-line options.
- We need to add tests for dual-stack-servers, as there are currently none.

Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3602
udp_test fails with atomic_load(&sreads) >= expected_creads (Ondřej Surý, 2022-10-20)

Reproduced under `rr record -h`:
```
[==========] Running 18 test(s).
[ RUN ] mock_listenudp_uv_udp_open
[ OK ] mock_listenudp_uv_udp_open
[ RUN ] mock_listenudp_uv_udp_bind
[ OK ] mock_listenudp_uv_udp_bind
[ RUN ] mock_listenudp_uv_udp_recv_start
[ OK ] mock_listenudp_uv_udp_recv_start
[ RUN ] mock_udpconnect_uv_udp_open
[ OK ] mock_udpconnect_uv_udp_open
[ RUN ] mock_udpconnect_uv_udp_bind
[ OK ] mock_udpconnect_uv_udp_bind
[ RUN ] mock_udpconnect_uv_udp_connect
[ OK ] mock_udpconnect_uv_udp_connect
[ RUN ] mock_udpconnect_uv_recv_buffer_size
[ OK ] mock_udpconnect_uv_recv_buffer_size
[ RUN ] mock_udpconnect_uv_send_buffer_size
[ OK ] mock_udpconnect_uv_send_buffer_size
[ RUN ] udp_noop
[ OK ] udp_noop
[ RUN ] udp_noresponse
[ OK ] udp_noresponse
[ RUN ] udp_shutdown_connect
[ OK ] udp_shutdown_connect
[ RUN ] udp_shutdown_read
[ OK ] udp_shutdown_read
[ RUN ] udp_cancel_read
[ OK ] udp_cancel_read
[ RUN ] udp_timeout_recovery
[ OK ] udp_timeout_recovery
[ RUN ] udp_double_read
[ OK ] udp_double_read
[ RUN ] udp_recv_one
[ OK ] udp_recv_one
[ RUN ] udp_recv_two
[ OK ] udp_recv_two
[ RUN ] udp_recv_send
atomic_load(&sreads) >= expected_creads
[ LINE ] --- udp_test.c:947: error: Failure!
Aborted
```

Not planned. Assignee: Ondřej Surý.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3579
Add more debugging messages for network-level events (Michał Kępień, 2022-10-05)

The network manager code is currently not particularly verbose when it
comes to logging debug messages:
$ git grep isc_log_write lib/isc/netmgr/ | wc -l
9
In particular, this applies to "positive" events (non-errors), like
establishing a connection, correctly receiving data from a socket, etc.
This applies to both non-encrypted transports (like TCP) and encrypted
ones.
The problem for me as an administrator/troubleshooter is that I have
very limited visibility into what BIND 9 "sees" on its side of things
when things go south. For example, I recently experimented with getting
`systemd-resolved` to talk to `named` over DNS-over-TLS; the former
reported, well, *errors*, and I could not get a grasp of the point at
which things are failing without resorting to Wireshark ("Is it the TCP
connection on port 853 itself? Or maybe the TLS session negotiation?
Or is that part okay and it is something about the data that
`systemd-resolved` sends inside a properly-established TLS session that
makes `named` complain?" etc.)
I am opening this issue so that it can serve as a public acknowledgment
of this being a known deficiency. It would be nice to do something
about it in the long run. Obviously there will have to be performance
trade-offs, but I think even hiding certain log messages behind a
build-time switch is fine as long as there is *some* way of getting
`named` to become more talkative logging-wise when it comes to
network-level events.

Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3561
Analyze the current lock contention (Ondřej Surý, 2022-09-26)

I've hacked `mutrace` and `named` to work together again, and here are the current results:
```
mutrace: Showing statistics for process named (PID: 1782915).
mutrace: 1802520 mutexes used.
Mutex #1326836 (0x0x55ec3f8dd640) first referenced by:
/usr/local/lib/libmutrace.so(pthread_mutex_init+0xe8) [0x7fc95d0d87a8]
/home/ondrej/Projects/bind9/lib/dns/.libs/libdns-9.19.6-dev.so(dns_sdb_register+0xb7) [0x7fc95ca55f32]
Mutex #1332002 (0x0x55ec3fa3b4e0) first referenced by:
/usr/local/lib/libmutrace.so(pthread_mutex_init+0xe8) [0x7fc95d0d87a8]
/home/ondrej/Projects/bind9/lib/isc/.libs/libisc-9.19.6-dev.so(isc__task_create+0xbd) [0x7fc95cb98aee]
Mutex #1329994 (0x0x55ec3f9cea68) first referenced by:
/usr/local/lib/libmutrace.so(pthread_rwlock_init+0xca) [0x7fc95d0d942a]
/home/ondrej/Projects/bind9/lib/dns/.libs/libdns-9.19.6-dev.so(dns_rbtdb_create+0xfe) [0x7fc95c9e4cfc]
Mutex #1332378 (0x0x55ec3fa5b7a8) first referenced by:
/usr/local/lib/libmutrace.so(pthread_rwlock_init+0xca) [0x7fc95d0d942a]
/home/ondrej/Projects/bind9/lib/dns/.libs/libdns-9.19.6-dev.so(dns_adb_create+0x120) [0x7fc95c951561]
Mutex #1332379 (0x0x55ec3fa5bd30) first referenced by:
/usr/local/lib/libmutrace.so(pthread_mutex_init+0xe8) [0x7fc95d0d87a8]
/home/ondrej/Projects/bind9/lib/isc/.libs/libisc-9.19.6-dev.so(isc__task_create+0xbd) [0x7fc95cb98aee]
Mutex #1327168 (0x0x55ec3f8f9558) first referenced by:
/usr/local/lib/libmutrace.so(pthread_rwlock_init+0xca) [0x7fc95d0d942a]
/home/ondrej/Projects/bind9/lib/dns/.libs/libdns-9.19.6-dev.so(dns_zonemgr_create+0x218) [0x7fc95caa36d6]
Mutex #1325497 (0x0x55ec3f841648) first referenced by:
/usr/local/lib/libmutrace.so(pthread_mutex_init+0xe8) [0x7fc95d0d87a8]
/home/ondrej/Projects/bind9/lib/isc/.libs/libisc-9.19.6-dev.so(+0x438b4) [0x7fc95cb8c8b4]
Mutex #78281 (0x0x7fc95d0d1fa0) first referenced by:
/usr/local/lib/libmutrace.so(pthread_mutex_lock+0x4c) [0x7fc95d0d89bc]
/usr/lib/x86_64-linux-gnu/libuv.so.1(uv_mutex_lock+0x9) [0x7fc95c3aa689]
Mutex #1330013 (0x0x55ec3f9cf2c0) first referenced by:
/usr/local/lib/libmutrace.so(pthread_rwlock_init+0xca) [0x7fc95d0d942a]
/home/ondrej/Projects/bind9/lib/dns/.libs/libdns-9.19.6-dev.so(dns_rbtdb_create+0x497) [0x7fc95c9e5095]
Mutex #1325525 (0x0x55ec3f843f48) first referenced by:
/usr/local/lib/libmutrace.so(pthread_mutex_init+0xe8) [0x7fc95d0d87a8]
/home/ondrej/Projects/bind9/lib/isc/.libs/libisc-9.19.6-dev.so(+0x411fe) [0x7fc95cb8a1fe]
mutrace: Showing 10 mutexes in order of (write) contention count:
Mutex # Changed Locked tot.Time[ms] avg.Time[ms] max.Time[ms] Cont. tot.Cont[ms] avg.Cont[ms] max.Cont[ms] Flags
1326836 959 3544 16.228 0.005 0.055 829 93.809 0.113 2.528 Mx.a-.
1332002 10563 942412 784.830 0.001 0.183 330 0.379 0.001 0.021 Mx.a-.
1329994 7845 141856 7.790 0.000 0.079 185 1.978 0.011 0.130 W!...r
(read) 989396 57.085 0.000 0.169 254 5.736 0.023 0.112 ||||||
1332378 4593 48592 1590.456 0.033 45.106 165 56.996 0.345 7.611 Wx...r
(read) 1 39.878 39.878 39.878 0 0.000 0.000 0.000 ||||||
1332379 23798 36940 37.882 0.001 0.101 108 0.210 0.002 0.081 Mx.a-.
1327168 133 314 24.442 0.078 0.276 96 8.554 0.089 0.534 Wx...r
(read) 2 1.013 0.507 0.760 0 0.000 0.000 0.000 ||||||
1325497 2107 1018717 828.111 0.001 0.207 65 0.115 0.002 0.015 Mx.a-.
78281 19 21 1.274 0.061 0.186 15 2.693 0.180 0.251 M-.--.
1330013 2650 41181 52.259 0.001 0.104 11 0.044 0.004 0.007 Wx...r
(read) 103766 96.121 0.001 0.130 8 0.099 0.012 0.049 ||||||
1325525 26482 30675 31.489 0.001 0.092 11 0.033 0.003 0.010 Mx.a-.
... ... ... ... ... ... ... ... ... ... ||||||
/|||||
Object: M = Mutex, W = RWLock, I = isc_rwlock /||||
State: x = dead, ! = inconsistent /|||
Use: R = used in realtime thread /||
Mutex Type: r = RECURSIVE, e = ERRORCHECK, a = ADAPTIVE /|
Mutex Protocol: i = INHERIT, p = PROTECT /
RWLock Kind: r = PREFER_READER, w = PREFER_WRITER, W = PREFER_WRITER_NONREC
mutrace: Note that rwlocks are shown as two lines: write locks then read locks.
mutrace: Note that the flags column R is only valid in --track-rt mode!
mutrace: 1 condition variables used.
mutrace: No condition variable contended according to filtering parameters.
mutrace: Total runtime is 185346.476 ms.
mutrace: Results for SMP with 8 processors.
mutrace: WARNING: 139919 inconsistent mutex uses detected. Results might not be reliable.
mutrace: Fix your program first!
mutrace: WARNING: 98 internal hash collisions detected. Results might not be as reliable as they could be.
mutrace: Try to increase --hash-size=, which is currently at 400000009.
mutrace: WARNING: 533 internal mutex contention detected. Results might not be reliable as they could be.
mutrace: Try to increase --hash-size=, which is currently at 400000009.
```

https://gitlab.isc.org/isc-projects/bind9/-/issues/3558
Batched UPDATE processing (Tony Finch, 2022-09-23)

There's a cool thing that can happen when you have a queue whose consumer can work in batches: as the load increases, the batch size increases, so the per-batch overhead is amortized over more queue entries and the system's overall efficiency goes up.
This occurs in the DNS for UPDATE processing: UPDATEs are serialized, so new UPDATEs must be queued while an UPDATE is in progress. When an UPDATE completes, it then makes sense to process all pending UPDATEs in a single transaction, so the server has a chance of catching up with its backlog.

Not planned. Assignee: Tony Finch.