BIND issues feed: https://gitlab.isc.org/isc-projects/bind9/-/issues (updated 2019-03-29T15:24:30Z)

## Issue #920: lib/dns/message.c:1626: INSIST(free_name == isc_boolean_false) failed, with SIG0 response and SIG0 in additional record
https://gitlab.isc.org/isc-projects/bind9/-/issues/920
Reported by Ondřej Surý; last updated 2019-03-29T15:24:30Z

[As reported by Jan Žižka <jan.zizka@nokia.com> to the security-officer@isc.org address:]
Hi,
we have been running one of the Codenomicon test suits sending various
DNS responses back to client requests triggered by nslookup and dig
and we have hit an `abort()` with a response containing a SIG0 answer type
and a SIG0 type in the additional records.
```
Domain Name System (response)
Transaction ID: 0x2f83
Flags: 0x8100 Standard query response, No error
Questions: 1
Answer RRs: 1
Authority RRs: 1
Additional RRs: 1
Queries
dns.suite.local: type A, class IN
Name: dns.suite.local
[Name Length: 15]
[Label Count: 3]
Type: A (Host Address) (1)
Class: IN (0x0001)
Answers
la\030el: type SIG, class IN
Name: la\030el
Type: SIG (security signature) (24)
Class: IN (0x0001)
Time to live: 3600
Data length: 35
Type Covered: Unused (0)
Algorithm: RSA/SHA1 (5)
Labels: 85
Original TTL: 0 (0 seconds)
Signature Expiration: Jun 8, 1970 13:43:44.000000000 CET
Signature Inception: Jan 16, 2019 13:45:51.000000000 CET
Key Tag: 0
Signer's name: dns.suite.local
Signature: 6c00
Authoritative nameservers
dns.suite.local: type A, class IN, addr 0.0.0.0
Name: dns.suite.local
Type: A (Host Address) (1)
Class: IN (0x0001)
Time to live: 3600
Data length: 4
Address: 0.0.0.0
Additional records
suite.local: type SIG, class IN
Name: suite.local
Type: SIG (security signature) (24)
Class: IN (0x0001)
Time to live: 3600
Data length: 35
Type Covered: Unused (0)
Algorithm: RSA/SHA1 (5)
Labels: 0
Original TTL: 0 (0 seconds)
Signature Expiration: Feb 7, 1970 23:13:20.000000000 CET
Signature Inception: Jan 16, 2019 13:45:51.000000000 CET
Key Tag: 72
Signer's name: dns.suite.local
Signature: 6c00
```
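For reference, the 12-byte header of such a response is trivial to forge; a minimal stdlib-Python sketch of the wire format (an illustration only, not the attached server.py):

```python
import struct

def dns_header(txid, flags, qdcount, ancount, nscount, arcount):
    """Pack the 12-byte DNS header: ID, flags word, and the four
    section counts, all big-endian 16-bit fields (RFC 1035 4.1.1)."""
    return struct.pack(">HHHHHH", txid, flags,
                       qdcount, ancount, nscount, arcount)

# Header matching the dump above: ID 0x2f83, flags 0x8100 (QR | RD),
# one record in each of the four sections.
hdr = dns_header(0x2F83, 0x8100, 1, 1, 1, 1)
```

The crafted SIG0 records would then be appended as encoded names plus RDATA, presumably what the attached reproducer automates.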
It seems that `getsection()` doesn't know how to handle this kind of packet:
a SIG0 record has already been seen, so `free_name` is left `true`.
Stack trace of thread 8334:
```
#0 0x00007fb9b0ad153f __GI_raise (libc.so.6)
#1 0x00007fb9b0abb895 __GI_abort (libc.so.6)
#2 0x00007fb9b147efea isc_assertion_failed (libisc.so.169)
#3 0x00007fb9b160de30 getsection (libdns.so.1102)
#4 0x00007fb9b160e3e2 dns_message_parse (libdns.so.1102)
#5 0x000000000041bff5 recv_done (dig)
#6 0x00007fb9b14b0b72 dispatch (libisc.so.169)
#7 0x00007fb9b14b0ecf run (libisc.so.169)
#8 0x00007fb9b0fde58e start_thread (libpthread.so.0)
#9 0x00007fb9b0b966a3 __clone (libc.so.6)
```
I have made a reproducer script (attached); it works out of the box on Fedora 29
with bind 9.11.4-P2-RedHat-9.11.4-13.P2.fc29.
I have also tried configuring named with a forwarding setup and used a simple
script to send the problematic response back, but I was not able to force the
same code path in `getsection()`. Still, I feel this problem could also hit
named and cause it to `abort()`, so I wanted to report it through this channel
first just to make sure.
I didn't manage to figure out a proper fix. I can work around it locally with a
check for this specific condition, but that doesn't feel right; it would be
better if one of the BIND developers had a look.
Just to make sure, I'm also attaching the logs from a local reproducer run and
a pcap capture of the packets, in case the reproducer script fails for you.
-Jan
[reproduce.sh](/uploads/fb3f6b586e7c285812b36aacdade9aa8/reproduce.sh)
[reproduce.pcap](/uploads/853541250bc1ba2a2d0b9fce3062112b/reproduce.pcap)
[reproduce.log](/uploads/edb733fad35ca0c0ec1ce35d2d2ab3a1/reproduce.log)
[server.py](/uploads/26b2695a797fd229f1241c152a222c78/server.py)

## Issue #605: Add SipHash24 and synchronize the Cookie algorithm with other vendors
https://gitlab.isc.org/isc-projects/bind9/-/issues/605
Reported by Ondřej Surý; milestone BIND 9.15.x; assignee Ondřej Surý; last updated 2019-07-22T12:15:37Z

## Issue #506: Requesting more explicit support for and demonstration of FIPS compliant crypto library [ISC-support #13613]
https://gitlab.isc.org/isc-projects/bind9/-/issues/506
Reported by Sara Dickinson; last updated 2019-10-23T12:41:10Z

### Description
Certain environments mandate the use of FIPS compliant TLS libraries for DNSSEC signing. At the moment, restricting BIND to build/run only against FIPS compliant crypto libraries, or demonstrating that a given instance is using one, is rather implicit. This capability is often required to demonstrate compliance to auditors.
### Request
I'd like to request more explicit handling of FIPS in BIND, for example:
1. It would be nice to have a BIND runtime command/tool that reported the capabilities of the crypto library currently in use. Since libraries might be opened dynamically, and system/environment variables can affect library behavior, this would provide certainty for the user.
2. It would be nice to have a compile and/or runtime method to restrict BIND to using only crypto libraries with certain characteristics e.g. FIPS compliance which are typically exposed via library APIs.
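On point 1, as a tiny illustration of the kind of runtime introspection being requested, many runtimes can already report which crypto library they are linked against; Python is used here purely as a neutral example (named itself prints comparable build information via `named -V`, as shown in issue #263 below):

```python
import ssl

# Report the OpenSSL build this interpreter is linked against; a
# FIPS-validated build would identify itself in these values.
print(ssl.OPENSSL_VERSION)
print(hex(ssl.OPENSSL_VERSION_NUMBER))
```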
### Links / references

## Issue #405: Potential assertion in isc-bind-9.12.2
https://gitlab.isc.org/isc-projects/bind9/-/issues/405
Reported by Ondřej Surý; last updated 2018-11-08T18:29:21Z

A new issue reported by Robert Święcki <robert@swiecki.net> to security-officer:
With some fuzzing of ISC-BIND-9.12.2 with a honggfuzz setup (from https://github.com/google/honggfuzz/tree/master/examples/bind) I'm able to hit an assertion which ends up with SIGABRT.
```
dispatch.c:2464: INSIST(disp->tcpbuffers == 0) failed.
#0 0x00007ffff6caf6a0 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0 0x00007ffff6caf6a0 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff6cb0cf7 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x000000000052d2ae in assertion_failed (file=0xde92a0 <.str> "resolver.c", line=7033, type=isc_assertiontype_require,
cond=0xded340 <.str.213> "(__builtin_expect(!!((query) != ((void*)0)), 1) && __builtin_expect(!!(((const isc__magic_t *)(query))->magic == ((('Q') << 24 | ('!') << 16 | ('!') << 8 | ('!')))), 1))") at ./main.c:252
#3 0x0000000000bb9c97 in isc_assertion_failed (file=0x2 <error: Cannot access memory at address 0x2>, line=-446762384, type=isc_assertiontype_require, cond=0x7ffff6caf6a0 <raise+272> "H\213\214$\b\001") at assertions.c:51
#4 0x00000000009aa838 in resquery_response (task=0x7fffe3d253b8, event=0x7fffe37dff08) at resolver.c:7033
#5 0x0000000000c46658 in dispatch (manager=<optimized out>) at task.c:1139
#6 0x0000000000c4135d in run (uap=0x2) at task.c:1311
#7 0x00007ffff77a351a in start_thread (arg=0x7fffe55f0700) at pthread_create.c:465
#8 0x00007ffff6d703ef in clone () from /lib/x86_64-linux-gnu/libc.so.6
```
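Frame #2's condition is BIND's structure magic-number check: the resolver query object carries the four-byte tag `'Q!!!'`. A rough Python model of that validity test (illustrative only; the real checks are ISC's C macros):

```python
def isc_magic(a, b, c, d):
    """Build a 32-bit structure tag the way ISC_MAGIC does: four ASCII
    characters packed big-endian into one integer."""
    return (ord(a) << 24) | (ord(b) << 16) | (ord(c) << 8) | ord(d)

QUERY_MAGIC = isc_magic("Q", "!", "!", "!")  # the tag seen in frame #2

def valid_query(obj):
    # Models REQUIRE(VALID_QUERY(query)): non-NULL and correctly tagged.
    return obj is not None and obj.get("magic") == QUERY_MAGIC
```

The backtrace shows this REQUIRE tripping inside `resquery_response()`, i.e. the query object was NULL or already torn down.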
Have you seen it before?
If not, I'll try to gather some more info on it and send you the details. I'm attaching my config in any case, so you can check it for obvious problems.
[named.conf](/uploads/032f4988bd3aba2d2f53bc62e6367a88/named.conf)
And the report from honggfuzz, which is not particularly more informative:
```
=====================================================================
TIME: 2018-07-11.14:26:50
=====================================================================
FUZZER ARGS:
mutationsPerRun : 6
externalCmd : NULL
fuzzStdin : FALSE
timeout : 10 (sec)
ignoreAddr : (nil)
ASLimit : 0 (MiB)
RSSLimit : 0 (MiB)
DATALimit : 0 (MiB)
targetPid : 0
targetCmd :
wordlistFile : NULL
dynFileMethod:
fuzzTarget : /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named -A client:1:1:1:1:1:1 -f -c /usr/local/google/home/swiecki/fuzz/bind/dist/etc/named.conf
ORIG_FNAME: IN.req-response//8b6e4a1f05567f57d1a8dd3cbb50fc9f.00000127.honggfuzz.cov
FUZZ_FNAME: ./SIGABRT.PC.7ffff6caf6a0.STACK.cfb0c006c.CODE.-6.ADDR.(nil).INSTR.mov____0x108(%rsp),%rcx.fuzz
PID: 47832
SIGNAL: SIGABRT (6)
FAULT ADDRESS: (nil)
INSTRUCTION: mov____0x108(%rsp),%rcx
STACK HASH: 0000000cfb0c006c
STACK:
<0x00007ffff6cb0cf7> [[UNKNOWN]():0 at /lib/x86_64-linux-gnu/libc-2.26.so]
<0x0000000000bb9ca1> [isc_assertion_failed():52 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x00000000006c9550> [dispatch_free():2465 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x00000000006c8658> [destroy_disp():549 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x0000000000c46658> [dispatch():1142 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x0000000000c4135d> [run():1320 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x00007ffff77a351a> [[UNKNOWN]():0 at /lib/x86_64-linux-gnu/libpthread-2.26.so]
<0x00007ffff6d703ef> [[UNKNOWN]():0 at /lib/x86_64-linux-gnu/libc-2.26.so]
=====================================================================
```

## Issue #387: BIND crashes while resolving DNAME with deny-answer-aliases
https://gitlab.isc.org/isc-projects/bind9/-/issues/387
Reported by Witold Krecicki; last updated 2018-11-08T18:30:30Z

Reported by Tony Finch to wpk and security-officer:
named.conf:
```
options {
recursion yes;
deny-answer-aliases {
"cam.ac.uk";
} except-from {
"cam.ac.uk";
};
};
```
```
dig @localhost 130.232.128.in-addr.arpa dname
```
results in an assertion (REQUIRE) failure:
```
05-Jul-2018 17:17:43.684 name.c:2150: REQUIRE(suffixlabels > 0) failed, back trace
```
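The failing REQUIRE guards `dns_name_split()`, which needs at least one label left in the suffix; a rough Python model of that contract (an illustration under assumptions, not BIND's C code):

```python
def split_name(name, suffixlabels):
    """Split a dotted name, keeping `suffixlabels` labels in the suffix;
    models the dns_name_split() contract, whose REQUIRE demands a
    strictly positive label count."""
    labels = name.split(".")
    if not 0 < suffixlabels <= len(labels):
        raise AssertionError("REQUIRE(suffixlabels > 0) failed")
    return ".".join(labels[:-suffixlabels]), ".".join(labels[-suffixlabels:])

# The query from the report, split at the in-addr.arpa boundary:
prefix, suffix = split_name("130.232.128.in-addr.arpa", 2)
```

The backtrace shows `is_answertarget_allowed()` calling it with `suffixlabels=0`: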
```
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ffff6279801 in __GI_abort () at abort.c:79
#2 0x00005555555ae9e7 in assertion_failed (file=<optimized out>, line=<optimized out>, type=<optimized out>, cond=<optimized out>) at ./main.c:250
#3 0x000055555578fa1a in isc_assertion_failed (file=file@entry=0x555555808500 "name.c", line=line@entry=2150,
type=type@entry=isc_assertiontype_require, cond=cond@entry=0x555555808718 "suffixlabels > 0") at assertions.c:51
#4 0x00005555556715ea in dns_name_split (name=0x7fffe8051aa0, suffixlabels=0, prefix=<optimized out>, suffix=<optimized out>) at name.c:2150
#5 0x000055555559ee1a in is_answertarget_allowed (fctx=fctx@entry=0x7fffe8051790, qname=qname@entry=0x7fffe8051aa0, rname=0x7fffef8867e0,
rdataset=0x7fffef8895c0, chainingp=chainingp@entry=0x0) at resolver.c:6637
#6 0x000055555559f7d4 in rctx_answer_match (rctx=0x7ffff2d0d470) at resolver.c:8173
#7 rctx_answer_positive (rctx=rctx@entry=0x7ffff2d0d470) at resolver.c:7927
#8 0x00005555556ea738 in rctx_answer (rctx=0x7ffff2d0d470) at resolver.c:7811
#9 resquery_response (task=<optimized out>, event=<optimized out>) at resolver.c:7342
#10 0x00005555557b2cb1 in dispatch (manager=0x7ffff7f73010) at task.c:1139
#11 run (uap=0x7ffff7f73010) at task.c:1311
#12 0x00007ffff69f26db in start_thread (arg=0x7ffff2d0e700) at pthread_create.c:463
#13 0x00007ffff635a88f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
```

## Issue #360: Replace FNV-1a with SipHash
https://gitlab.isc.org/isc-projects/bind9/-/issues/360
Reported by Ghost User; last updated 2019-05-21T11:50:27Z

NOTE: This is a confidential ticket because it may describe a vulnerability.
I suggest replacing FNV-1a in BIND with SipHash: https://131002.net/siphash/siphash.pdf
BIND uses FNV-1a for its hashtables with a random starting seed (the offset basis), but IMO it falls short of being a true universal hash function: I suspect that while the seed changes which bucket a given key maps to, keys that collide would still collide regardless of the initial offset basis chosen (only the bucket number may differ).
SipHash is a more robust hash function, and its paper suggests hash tables as an application. The change to SipHash would be fairly easy, but note that some parts of BIND may perform incremental hashing (i.e., they add input data to the hash from different places in the library). So the hash function would need to follow an API similar to BIND's HMAC contexts, OR the data would have to be concatenated into a single buffer before hashing.
I suspect collisions may be exploitable for hash flooding (there have been previous reports of such attacks). See section 7 of the SipHash paper and its references ([13], etc.) for details.
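For reference, FNV-1a itself is only a few lines; a Python sketch with the standard 64-bit parameters (an illustration of the construction under discussion, not BIND's actual code):

```python
FNV64_PRIME = 0x100000001B3
FNV64_BASIS = 0xCBF29CE484222325  # the standard 64-bit offset basis

def fnv1a64(data, seed=FNV64_BASIS):
    """64-bit FNV-1a: XOR in each byte, then multiply by the FNV prime.
    Seeded variants simply start from a random value instead of the
    fixed offset basis."""
    h = seed
    for byte in data:
        h = ((h ^ byte) * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h

# Bucket selection as a hash table would do it:
bucket = fnv1a64(b"example.com", seed=0x12345) % 1024
```

Because the per-byte mixing step is identical for every seed, randomizing the basis does not provide the keyed-hash guarantees a function like SipHash does.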
See this presentation: https://media.ccc.de/v/29c3-5152-en-hashflooding_dos_reloaded_h264

## Issue #339: BIND crash processing .jnl file for outbound IXFR
https://gitlab.isc.org/isc-projects/bind9/-/issues/339
Reported by Cathy Almond; last updated 2018-07-05T04:38:03Z

BIND 9.12.1-P2
### Summary
Three authoritative servers all crashed at around the same time with:
```
13-Jun-2018 06:16:37.710 general: critical: journal.c:1663: INSIST(j->offset <= j->it.epos.offset) failed
13-Jun-2018 06:16:37.710 general: critical: exiting (due to assertion failure)
```
When trying to restart BIND, it fails to launch and logs the same lines as above.
### Steps to reproduce
Restarting BIND reproduces the problem.
Removing the zone .jnl file and restarting resolves the problem.
It would appear that named is attempting to process an outbound IXFR request at the time of the crash, after having recently received a significantly-sized inbound update:
```
13-Jun-2018 05:40:30.552 xfer-in: info: transfer of 'XXXXXXXX/IN' from XX.XX.XX.XX#53: Transfer status: success
13-Jun-2018 05:40:30.552 xfer-in: info: transfer of 'XXXXXXXX/IN' from XX.XX.XX.XX#53: Transfer completed: 113359 messages, 27271720 records, 1800453680 bytes, 669.247 secs (2690267 bytes/sec)
13-Jun-2018 05:40:30.569 xfer-out: info: client @0xXXXXXXXXXX XX.XX.XX.XX#36227/key XXXX (XXXXXXXX): transfer of 'XXXXXXXX/IN': IXFR started: TSIG XXXX (serial 2018061320 -> 2018061321)
```
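The completed transfer above is ~1.8 GB, and the reporter's hunch about a ~4 GB .jnl (under "Possible fixes" below) would be consistent with a 32-bit journal offset wrapping. A quick illustration of the arithmetic (this is an assumption about the mechanism, not a confirmed diagnosis):

```python
import ctypes

# An offset stored in a 32-bit field wraps once the file passes 4 GiB,
# making a later record appear to sit *before* an earlier one, which
# would trip a check like INSIST(j->offset <= j->it.epos.offset).
file_offset = 4 * 1024**3 + 100          # byte position just past 4 GiB
stored = ctypes.c_uint32(file_offset).value
print(stored)                            # wraps to a tiny value
```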
### What is the current *bug* behavior?
named crashes
### What is the expected *correct* behavior?
named doesn't crash
### Relevant configuration files
Not yet available
### Relevant logs and/or screenshots
See above
### Possible fixes
I am wondering if the size of the .jnl file might be significant: it is ~4 GB.

## Issue #263: assertion failure: critical: query.c:5541: REQUIRE(qtype != 0) failed
https://gitlab.isc.org/isc-projects/bind9/-/issues/263
Reported by Ghost User; last updated 2019-04-25T15:49:51Z

## Description
Last night one of our bind caching resolvers crashed several times with error
```
15-May-2018 23:56:35.904 general: critical: query.c:5541: REQUIRE(qtype != 0) failed, back trace
15-May-2018 23:56:35.904 general: critical: #0 0x429210 in ??
15-May-2018 23:56:35.904 general: critical: #1 0x7f561c9ee17a in ??
15-May-2018 23:56:35.904 general: critical: #2 0x7f561e069cde in ??
15-May-2018 23:56:35.904 general: critical: #3 0x7f561e07814d in ??
15-May-2018 23:56:35.904 general: critical: #4 0x7f561e074f10 in ??
15-May-2018 23:56:35.904 general: critical: #5 0x7f561e077dea in ??
15-May-2018 23:56:35.904 general: critical: #6 0x7f561e070374 in ??
15-May-2018 23:56:35.904 general: critical: #7 0x7f561e07b8e7 in ??
15-May-2018 23:56:35.904 general: critical: #8 0x7f561e07bcef in ??
15-May-2018 23:56:35.904 general: critical: #9 0x7f561e05ba09 in ??
15-May-2018 23:56:35.904 general: critical: #10 0x7f561ca12d6b in ??
15-May-2018 23:56:35.904 general: critical: #11 0x7f561bd4be25 in ??
15-May-2018 23:56:35.904 general: critical: #12 0x7f561aff334d in ??
15-May-2018 23:56:35.904 general: critical: exiting (due to assertion failure)
```
## Version information
BIND 9.12.1-P1 (with patches CVE-2018-5736 and CVE-2018-5737 )
```
BIND 9.12.1-P1-RedHat-9.12.1_P1-1.el7.0 <id:f729b2b>
running on Linux x86_64 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Dec 28 14:23:39 EST 2017
built by make with '--build=x86_64-koji-linux-gnu' '--host=x86_64-koji-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/opt/named' '--bindir=/opt/named/bin' '--sbindir=/opt/named/sbin' '--sysconfdir=/etc' '--datadir=/opt/named/share' '--includedir=/opt/named/include' '--libdir=/opt/named/lib64' '--libexecdir=/opt/named/libexec' '--localstatedir=/var' '--sharedstatedir=/var/lib' '--mandir=/opt/named/share/man' '--infodir=/opt/named/share/info' '--exec-prefix=/opt/named' '--with-libtool' '--enable-threads' '--enable-ipv6' '--enable-largefile' '--with-randomdev=/dev/urandom' '--with-tuning=large' '--enable-dnstap' '--disable-crypto-rand' '--disable-static' '--disable-openssl-version-check' '--includedir=/opt/named/include/bind9' '--with-docbook-xsl=/opt/named/share/sgml/docbook/xsl-stylesheets' 'build_alias=x86_64-koji-linux-gnu' 'host_alias=x86_64-koji-linux-gnu' 'CFLAGS= -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'LDFLAGS=-Wl,-z,relro ' 'CPPFLAGS= -DDIG_SIGCHASE'
compiled by GCC 4.8.5 20150623 (Red Hat 4.8.5-28)
compiled with OpenSSL version: OpenSSL 1.0.2k 26 Jan 2017
linked to OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017
compiled with libxml2 version: 2.9.1
linked to libxml2 version: 20901
compiled with zlib version: 1.2.7
linked to zlib version: 1.2.7
threads support is enabled
```
OS: Red Hat Enterprise Linux Server release 7.4 (Maipo)
Assignee: Michał Kępień

## Issue #185: [CVE-2018-5737] serve-stale crash
https://gitlab.isc.org/isc-projects/bind9/-/issues/185
Reported by Tony Finch; last updated 2021-03-31T12:02:44Z

One of my recursive servers crashed messily this evening, logging more than a million lines of
```
27-Mar-2018 18:20:35.862 general: info: 105.91.84.115.in-addr.arpa resolver failure, stale answer used
[snip 100MB logs]
27-Mar-2018 18:24:03.414 general: info: 105.91.84.115.in-addr.arpa resolver failure, stale answer used
27-Mar-2018 18:24:03.414 general: critical: rbtdb.c:2115: INSIST(!((void *)((node)->deadlink.prev) != (void *)(-1))) failed
27-Mar-2018 18:24:03.414 general: critical: exiting (due to assertion failure)
```
Earlier today I turned on serve-stale in production, so it did not last long before crashing!
I'm afraid I don't have the start of the logspam because I only keep 100MB of logs, but the obvious query will reproduce the problem.
Tangentially related, I think the serve-stale logging needs work: it's very noisy, so it should be in its own category, and perhaps some of the messages should be logged at debugging rather than informational level...
Full configuration below. It's possibly of note that I have two views with a shared cache using attach-cache.
```
acl "blackhole" {
240.0.0.0/4;
};
acl "secure" {
"localhost";
131.111.56.56/32;
131.111.57.57/32;
2001:630:212:110::d:7a7/128;
2001:630:212:110:221:9bff:fe16:a526/128;
2001:630:212:110:646f:7461:742e:6174/128;
131.111.9.53/32;
131.111.9.73/32;
2001:630:212:8::d:aa/128;
2001:630:212:8::d:aaaa/128;
};
acl "loopback" {
127.0.0.0/8;
::1/128;
};
acl "cudn" {
127.0.0.0/8;
::1/128;
2001:630:210::/44;
2a00:1098:5::/48;
128.232.0.0/16;
129.169.0.0/16;
131.111.0.0/16;
192.18.195.0/24;
192.84.5.0/24;
192.153.213.0/24;
193.60.80.0/20;
193.63.252.0/23;
!172.31.0.0/16;
172.16.0.0/12;
10.128.0.0/9;
};
acl "isc" {
"ipreg";
key "university_of_cambridge-a1ec5f18.sns-pba.isc.org";
};
acl "secondaries" {
"cudn";
"isc";
key "tsig-cam-maths";
key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
194.81.227.226/32;
2001:630:0:44::e2/128;
193.63.105.17/32;
2001:630:0:45::11/128;
193.63.106.103/32;
2001:630:0:46::67/128;
193.62.157.66/32;
2001:630:0:47::42/128;
93.93.130.49/32;
69.56.173.190/32;
2600:3c00::f03c:91ff:fe96:beac/128;
93.93.128.67/32;
2a00:1098:0:80:1000::10/128;
185.24.221.32/32;
2a02:2770:11:0:21a:4aff:febe:759b/128;
};
acl "ipreg" {
key "tsig-ipreg";
"secure";
};
controls {
inet 0.0.0.0 port 953 allow {
"secure";
};
inet :: port 953 allow {
"secure";
};
};
logging {
channel "log" {
file "../log/named.log" versions 10 size 10485760;
severity dynamic;
print-time yes;
print-severity yes;
print-category yes;
};
category "default" {
"log";
};
category "cname" {
"default_debug";
};
category "dnssec" {
"default_debug";
};
category "lame-servers" {
"default_debug";
};
category "query-errors" {
"default_debug";
};
category "resolver" {
"default_debug";
};
category "security" {
"default_debug";
};
category "update-security" {
"default_debug";
};
};
masters "notify-isc" {
149.20.67.14 key "university_of_cambridge-a1ec5f18.sns-pba.isc.org";
199.6.0.100 key "university_of_cambridge-a1ec5f18.sns-pba.isc.org";
};
masters "notify-auth" {
2001:630:212:8::d:a0 key "tsig-ipreg";
2001:630:212:12::d:a1 key "tsig-ipreg";
2001:630:212:8::d:a2 key "tsig-ipreg";
2001:630:212:12::d:a3 key "tsig-ipreg";
};
masters "notify-rec" {
"notify-auth";
2001:630:212:8::d:92 key "tsig-ipreg";
2001:630:212:8::d:93 key "tsig-ipreg";
2001:630:212:8::d:94 key "tsig-ipreg";
2001:630:212:8::d:95 key "tsig-ipreg";
};
masters "master-ipreg" {
2001:630:212:8::d:aa key "tsig-ipreg";
};
masters "master-fanf" {
2001:630:212:110::d:7a7 key "tsig-fanf";
2001:630:212:110:646f:7461:742e:6174 key "tsig-fanf";
};
masters "master-cl" {
2001:630:212:200::d:a0;
128.232.0.19;
2001:630:212:200::d:a1;
128.232.0.18;
};
masters "master-eng" {
129.169.8.8;
129.169.8.9;
};
masters "master-maths" {
131.111.16.129;
131.111.16.30;
131.111.16.32;
};
masters "master-janet-rpz" {
2001:630:1:128::166;
194.82.174.166;
2001:630:1:12a::235;
194.83.56.235;
};
masters "master-imperial" {
2001:630:12:600:1::80 key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
2001:630:12:600:1::81 key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
2001:630:12:600:1::82 key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
195.97.216.196 key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
};
masters "master-salford" {
146.87.136.156;
146.87.136.157;
};
masters "master-york" {
144.32.129.200;
144.32.128.230;
};
masters "master-sanger" {
193.62.203.30;
};
masters "master-chiark" {
212.13.197.229;
};
masters "master-srcf" {
131.111.179.79;
};
masters "master-exim" {
2001:630:212:8::e:f0e key "tsig-cam-exim";
131.111.8.88 key "tsig-cam-exim";
2a02:898:31::53:0 key "tsig-cam-exim";
94.142.241.91 key "tsig-cam-exim";
2604:a880:800:a1::419:1001 key "tsig-cam-exim";
159.203.114.39 key "tsig-cam-exim";
};
options {
blackhole {
"blackhole";
};
directory "/home/named/run";
recursive-clients 12345;
server-id hostname;
tcp-clients 1234;
dnssec-validation auto;
max-cache-size 17179869184;
max-stale-ttl 3600;
no-case-compress {
"any";
};
rrset-order {
order random;
};
stale-answer-enable yes;
allow-query {
"cudn";
};
notify no;
zone-statistics full;
};
statistics-channels {
inet 0.0.0.0 port 8053 allow {
"cudn";
};
inet :: port 8053 allow {
"cudn";
};
};
view "main" {
match-destinations {
!131.111.9.99/32;
!2001:630:212:8::d:2/128;
!131.111.12.99/32;
!2001:630:212:12::d:3/128;
!131.111.9.118/32;
!2001:630:212:8::d:fff2/128;
!131.111.12.118/32;
!2001:630:212:12::d:fff3/128;
"any";
};
zone "1.2.0.0.3.6.0.1.0.0.2.ip6.arpa" {
type slave;
file "../zone/1.2.0.0.3.6.0.1.0.0.2.ip6.arpa";
masters {
"master-ipreg";
};
};
zone "10.in-addr.arpa" {
type slave;
file "../zone/10.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "111.131.in-addr.arpa" {
type slave;
file "../zone/111.131.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "145.111.131.in-addr.arpa" {
type slave;
file "../zone/145.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "16.111.131.in-addr.arpa" {
type slave;
file "../zone/16.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "16.172.in-addr.arpa" {
type slave;
file "../zone/16.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "169.129.in-addr.arpa" {
type slave;
file "../zone/169.129.in-addr.arpa";
masters {
"master-eng";
};
};
zone "17.111.131.in-addr.arpa" {
type slave;
file "../zone/17.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "17.172.in-addr.arpa" {
type slave;
file "../zone/17.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "18.111.131.in-addr.arpa" {
type slave;
file "../zone/18.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "18.172.in-addr.arpa" {
type slave;
file "../zone/18.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "19.172.in-addr.arpa" {
type slave;
file "../zone/19.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "195.18.192.in-addr.arpa" {
type slave;
file "../zone/195.18.192.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "2.0.2.1.2.0.0.3.6.0.1.0.0.2.ip6.arpa" {
type slave;
file "../zone/2.0.2.1.2.0.0.3.6.0.1.0.0.2.ip6.arpa";
masters {
"master-cl";
};
};
zone "20.111.131.in-addr.arpa" {
type slave;
file "../zone/20.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "20.172.in-addr.arpa" {
type slave;
file "../zone/20.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "21.172.in-addr.arpa" {
type slave;
file "../zone/21.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "213.153.192.in-addr.arpa" {
type slave;
file "../zone/213.153.192.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "22.172.in-addr.arpa" {
type slave;
file "../zone/22.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "23.172.in-addr.arpa" {
type slave;
file "../zone/23.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "232.128.in-addr.arpa" {
type slave;
file "../zone/232.128.in-addr.arpa";
masters {
"master-cl";
};
};
zone "24.111.131.in-addr.arpa" {
type slave;
file "../zone/24.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "24.172.in-addr.arpa" {
type slave;
file "../zone/24.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "25.172.in-addr.arpa" {
type slave;
file "../zone/25.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "252.63.193.in-addr.arpa" {
type slave;
file "../zone/252.63.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "253.63.193.in-addr.arpa" {
type slave;
file "../zone/253.63.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "26.172.in-addr.arpa" {
type slave;
file "../zone/26.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "27.172.in-addr.arpa" {
type slave;
file "../zone/27.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "28.172.in-addr.arpa" {
type slave;
file "../zone/28.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "29.172.in-addr.arpa" {
type slave;
file "../zone/29.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "30.172.in-addr.arpa" {
type slave;
file "../zone/30.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "5.0.0.0.8.9.0.1.0.0.a.2.ip6.arpa" {
type slave;
file "../zone/5.0.0.0.8.9.0.1.0.0.a.2.ip6.arpa";
masters {
"master-ipreg";
};
};
zone "5.84.192.in-addr.arpa" {
type slave;
file "../zone/5.84.192.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "80.60.193.in-addr.arpa" {
type slave;
file "../zone/80.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "81.60.193.in-addr.arpa" {
type slave;
file "../zone/81.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "82.60.193.in-addr.arpa" {
type slave;
file "../zone/82.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "83.60.193.in-addr.arpa" {
type slave;
file "../zone/83.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "84.60.193.in-addr.arpa" {
type slave;
file "../zone/84.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "85.60.193.in-addr.arpa" {
type slave;
file "../zone/85.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "86.60.193.in-addr.arpa" {
type slave;
file "../zone/86.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "87.60.193.in-addr.arpa" {
type slave;
file "../zone/87.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "88.60.193.in-addr.arpa" {
type slave;
file "../zone/88.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "89.60.193.in-addr.arpa" {
type slave;
file "../zone/89.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "90.60.193.in-addr.arpa" {
type slave;
file "../zone/90.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "91.60.193.in-addr.arpa" {
type slave;
file "../zone/91.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "92.60.193.in-addr.arpa" {
type slave;
file "../zone/92.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "93.60.193.in-addr.arpa" {
type slave;
file "../zone/93.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "94.60.193.in-addr.arpa" {
type slave;
file "../zone/94.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "95.60.193.in-addr.arpa" {
type slave;
file "../zone/95.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "block.arpa.cam.ac.uk" {
type slave;
file "../zone/block.arpa.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "botnetcc.rpz.spamhaus.org" {
type slave;
file "../zone/botnetcc.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "cam.ac.uk" {
type slave;
file "../zone/cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "cl.cam.ac.uk" {
type slave;
file "../zone/cl.cam.ac.uk";
masters {
"master-cl";
};
};
zone "cst.cam.ac.uk" {
type slave;
file "../zone/cst.cam.ac.uk";
masters {
"master-cl";
};
};
zone "damtp.cam.ac.uk" {
type slave;
file "../zone/damtp.cam.ac.uk";
masters {
"master-maths";
};
};
zone "dbl.rpz.spamhaus.org" {
type slave;
file "../zone/dbl.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "dpmms.cam.ac.uk" {
type slave;
file "../zone/dpmms.cam.ac.uk";
masters {
"master-maths";
};
};
zone "drop.rpz.spamhaus.org" {
type slave;
file "../zone/drop.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "eng.cam.ac.uk" {
type slave;
file "../zone/eng.cam.ac.uk";
masters {
"master-eng";
};
};
zone "in-addr.arpa.cam.ac.uk" {
type slave;
file "../zone/in-addr.arpa.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "in-addr.arpa.private.cam.ac.uk" {
type slave;
file "../zone/in-addr.arpa.private.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "malware-aggressive.rpz.spamhaus.org" {
type slave;
file "../zone/malware-aggressive.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "malware.rpz.spamhaus.org" {
type slave;
file "../zone/malware.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "maths.cam.ac.uk" {
type slave;
file "../zone/maths.cam.ac.uk";
masters {
"master-maths";
};
};
zone "newton.cam.ac.uk" {
type slave;
file "../zone/newton.cam.ac.uk";
masters {
"master-maths";
};
};
zone "passthru.arpa.cam.ac.uk" {
type slave;
file "../zone/passthru.arpa.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "private.cam.ac.uk" {
type slave;
file "../zone/private.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "srcf.net" {
type slave;
file "../zone/srcf.net";
masters {
"master-srcf";
};
};
zone "srcf.ucam.org" {
type slave;
file "../zone/srcf.ucam.org";
masters {
"master-srcf";
};
};
zone "statslab.cam.ac.uk" {
type slave;
file "../zone/statslab.cam.ac.uk";
masters {
"master-maths";
};
};
zone "ucam.org" {
type slave;
file "../zone/ucam.org";
masters {
"master-chiark";
};
};
response-policy {
zone "passthru.arpa.cam.ac.uk" policy passthru;
zone "block.arpa.cam.ac.uk" policy cname "block.dns.cam.ac.uk";
} break-dnssec yes max-policy-ttl 300 qname-wait-recurse no;
};
view "unfiltered" {
zone "1.2.0.0.3.6.0.1.0.0.2.ip6.arpa" {
in-view "main";
};
zone "10.in-addr.arpa" {
in-view "main";
};
zone "111.131.in-addr.arpa" {
in-view "main";
};
zone "145.111.131.in-addr.arpa" {
in-view "main";
};
zone "16.111.131.in-addr.arpa" {
in-view "main";
};
zone "16.172.in-addr.arpa" {
in-view "main";
};
zone "169.129.in-addr.arpa" {
in-view "main";
};
zone "17.111.131.in-addr.arpa" {
in-view "main";
};
zone "17.172.in-addr.arpa" {
in-view "main";
};
zone "18.111.131.in-addr.arpa" {
in-view "main";
};
zone "18.172.in-addr.arpa" {
in-view "main";
};
zone "19.172.in-addr.arpa" {
in-view "main";
};
zone "195.18.192.in-addr.arpa" {
in-view "main";
};
zone "2.0.2.1.2.0.0.3.6.0.1.0.0.2.ip6.arpa" {
in-view "main";
};
zone "20.111.131.in-addr.arpa" {
in-view "main";
};
zone "20.172.in-addr.arpa" {
in-view "main";
};
zone "21.172.in-addr.arpa" {
in-view "main";
};
zone "213.153.192.in-addr.arpa" {
in-view "main";
};
zone "22.172.in-addr.arpa" {
in-view "main";
};
zone "23.172.in-addr.arpa" {
in-view "main";
};
zone "232.128.in-addr.arpa" {
in-view "main";
};
zone "24.111.131.in-addr.arpa" {
in-view "main";
};
zone "24.172.in-addr.arpa" {
in-view "main";
};
zone "25.172.in-addr.arpa" {
in-view "main";
};
zone "252.63.193.in-addr.arpa" {
in-view "main";
};
zone "253.63.193.in-addr.arpa" {
in-view "main";
};
zone "26.172.in-addr.arpa" {
in-view "main";
};
zone "27.172.in-addr.arpa" {
in-view "main";
};
zone "28.172.in-addr.arpa" {
in-view "main";
};
zone "29.172.in-addr.arpa" {
in-view "main";
};
zone "30.172.in-addr.arpa" {
in-view "main";
};
zone "5.0.0.0.8.9.0.1.0.0.a.2.ip6.arpa" {
in-view "main";
};
zone "5.84.192.in-addr.arpa" {
in-view "main";
};
zone "80.60.193.in-addr.arpa" {
in-view "main";
};
zone "81.60.193.in-addr.arpa" {
in-view "main";
};
zone "82.60.193.in-addr.arpa" {
in-view "main";
};
zone "83.60.193.in-addr.arpa" {
in-view "main";
};
zone "84.60.193.in-addr.arpa" {
in-view "main";
};
zone "85.60.193.in-addr.arpa" {
in-view "main";
};
zone "86.60.193.in-addr.arpa" {
in-view "main";
};
zone "87.60.193.in-addr.arpa" {
in-view "main";
};
zone "88.60.193.in-addr.arpa" {
in-view "main";
};
zone "89.60.193.in-addr.arpa" {
in-view "main";
};
zone "90.60.193.in-addr.arpa" {
in-view "main";
};
zone "91.60.193.in-addr.arpa" {
in-view "main";
};
zone "92.60.193.in-addr.arpa" {
in-view "main";
};
zone "93.60.193.in-addr.arpa" {
in-view "main";
};
zone "94.60.193.in-addr.arpa" {
in-view "main";
};
zone "95.60.193.in-addr.arpa" {
in-view "main";
};
zone "block.arpa.cam.ac.uk" {
in-view "main";
};
zone "botnetcc.rpz.spamhaus.org" {
in-view "main";
};
zone "cam.ac.uk" {
in-view "main";
};
zone "cl.cam.ac.uk" {
in-view "main";
};
zone "cst.cam.ac.uk" {
in-view "main";
};
zone "damtp.cam.ac.uk" {
in-view "main";
};
zone "dbl.rpz.spamhaus.org" {
in-view "main";
};
zone "dpmms.cam.ac.uk" {
in-view "main";
};
zone "drop.rpz.spamhaus.org" {
in-view "main";
};
zone "eng.cam.ac.uk" {
in-view "main";
};
zone "in-addr.arpa.cam.ac.uk" {
in-view "main";
};
zone "in-addr.arpa.private.cam.ac.uk" {
in-view "main";
};
zone "malware-aggressive.rpz.spamhaus.org" {
in-view "main";
};
zone "malware.rpz.spamhaus.org" {
in-view "main";
};
zone "maths.cam.ac.uk" {
in-view "main";
};
zone "newton.cam.ac.uk" {
in-view "main";
};
zone "passthru.arpa.cam.ac.uk" {
in-view "main";
};
zone "private.cam.ac.uk" {
in-view "main";
};
zone "srcf.net" {
in-view "main";
};
zone "srcf.ucam.org" {
in-view "main";
};
zone "statslab.cam.ac.uk" {
in-view "main";
};
zone "ucam.org" {
in-view "main";
};
attach-cache "main";
};
key "cam.ac.uk.feb2016.tsig.ic.ac.uk" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
key "tsig-ipreg" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
key "university_of_cambridge-a1ec5f18.sns-pba.isc.org" {
algorithm "hmac-sha512";
secret "????????????????????????????????????????????????????????????????????????????????????????";
};
key "tsig-cam-maths" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
key "tsig-cam-exim" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
key "tsig-fanf" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
server 157.83.102.245/32 {
send-cookie no;
};
server 157.83.102.246/32 {
send-cookie no;
};
server 157.83.126.245/32 {
send-cookie no;
};
server 157.83.126.246/32 {
send-cookie no;
};
server 43.242.49.158/32 {
send-cookie no;
};
server 113.209.232.218/32 {
send-cookie no;
};
server 63.150.72.5/32 {
send-cookie no;
};
server 2001:428::7/128 {
send-cookie no;
};
server 208.44.130.121/32 {
send-cookie no;
};
server 2001:428::8/128 {
send-cookie no;
};
server 172.16.3.0/24 {
bogus no;
};
server 0.0.0.0/8 {
bogus yes;
};
server 10.0.0.0/8 {
bogus yes;
};
server 100.64.0.0/10 {
bogus yes;
};
server 127.0.0.0/8 {
bogus yes;
};
server 169.254.0.0/16 {
bogus yes;
};
server 172.16.0.0/12 {
bogus yes;
};
server 192.0.0.0/24 {
bogus yes;
};
server 192.0.2.0/24 {
bogus yes;
};
server 192.88.99.0/24 {
bogus yes;
};
server 192.168.0.0/16 {
bogus yes;
};
server 198.18.0.0/15 {
bogus yes;
};
server 198.51.100.0/24 {
bogus yes;
};
server 203.0.113.0/24 {
bogus yes;
};
server 224.0.0.0/3 {
bogus yes;
};
server ::/3 {
bogus yes;
};
server 2001::/32 {
bogus yes;
};
server 2001:2::/48 {
bogus yes;
};
server 2001:10::/28 {
bogus yes;
};
server 2001:db8::/32 {
bogus yes;
};
server 2002::/16 {
bogus yes;
};
server 3000::/4 {
bogus yes;
};
server 4000::/2 {
bogus yes;
};
server 8000::/1 {
bogus yes;
};
```
https://gitlab.isc.org/isc-projects/bind9/-/issues/134
Crash in BIND 9.12.0-RedHat-9.12.0-1.el7.0
2019-04-26T09:49:56Z, Jakob Dhondt (assignee: Michał Kępień)
I'd like to report a crash on a production host (ns2.switch.ch).
All the necessary info can be found here: [REDACTED]
Thanks,
Jakob
https://gitlab.isc.org/isc-projects/bind9/-/issues/23
DDoS mitigation
2023-12-22T10:28:30Z, Ondřej Surý
This is a placeholder bug for general DDoS mitigation techniques that need to be introduced into BIND to cope with the current DNS landscape.
Not planned
https://gitlab.isc.org/isc-projects/bind9/-/issues/1658
AXFR stops working after a while in BIND9 9.16
2020-03-05T20:32:02Z, Mathieu Arnold
### Summary
So, like with #1636, I noticed things not working any more; this time notifies were working, but AXFRs were not.
### BIND version used
(Paste the output of `named -V`.)
### Steps to reproduce
Well, right now, it is "wait a few days".
### What is the current *bug* behavior?
In BIND9 logs, I see:
```
Mar 5 07:46:33 ns3 named[73378]: client @0x28deb400 ipv4_address#57694: received notify for zone 'xxxx.yyy'
Mar 5 07:46:33 ns3 named[73378]: zone xxxx.yyy/IN: notify from ipv4_address#57694: serial 2020030336
Mar 5 07:46:33 ns3 named[73378]: zone xxxx.yyy/IN: Transfer started.
Mar 5 07:46:33 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: connected using 185.167.19.242#26276 TSIG yop
Mar 5 07:46:33 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: failed while receiving responses: connection reset
Mar 5 07:46:33 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: Transfer status: connection reset
Mar 5 07:46:33 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: Transfer completed: 0 messages, 0 records, 0 bytes, 0.007 secs (0 bytes/sec)
Mar 5 07:46:33 ns3 named[73378]: zone xxxx.yyy/IN: Transfer started.
Mar 5 07:46:33 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: connected using 185.167.19.242#23888 TSIG yop
Mar 5 07:46:33 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: failed while receiving responses: connection reset
Mar 5 07:46:33 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: Transfer status: connection reset
Mar 5 07:46:33 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: Transfer completed: 0 messages, 0 records, 0 bytes, 0.006 secs (0 bytes/sec)
```
I confirmed this by using dig directly:
```
# dig -y hmac-sha256:yop:xxx axfr yyyy.yyy @ipv6_address
;; communications error to ipv6_address#53: connection reset
```
### What is the expected *correct* behavior?
After restarting named on the master server:
```
Mar 5 08:03:13 ns3 named[73378]: client @0x28de9000 ipv4_address#50761: received notify for zone 'xxxx.yyy'
Mar 5 08:03:13 ns3 named[73378]: zone xxxx.yyy/IN: notify from ipv4_address#50761: serial 2020030337
Mar 5 08:03:13 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: connected using 185.167.19.242#60778 TSIG yop
Mar 5 08:03:13 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: Transfer status: success
Mar 5 08:03:13 ns3 named[73378]: transfer of 'xxxx.yyy/IN' from ipv4_address#53: Transfer completed: 1 messages, 116 records, 13597 bytes, 0.010 secs (1359700 bytes/sec)
```
```
# dig -y hmac-sha256:yop:zzzz axfr yyyy.yyy @ipv6_address
; <<>> DiG 9.16.0 <<>> -y hmac-sha256 axfr yyyy.yyy @ipv6_address
;; global options: +cmd
yyyy.yyy. 21600 IN SOA ns1.yyyy.yyy. root.yyyy.yyy. 2020030320 86400 3600 604800 1800
yyyy.yyy. 21600 IN RRSIG SOA 8 2 21600 20200324035558 20200303030727 43041 yyyy.yyy. KcunWHJt1kcskNuZKRBMfCmAInzjslmX4Sk3XVjPc2BVQkjkSvLljaNQ jfHU+LAN4Y+n2fcY3NWjRn05wG4Vp/ArGFuLH7LR8/sxMlSz3QlRLTce spBaZIZr8F3PGxXrfaQOKe9aBZImMypic0LnoMJD68nvu9cHzdFQVCtW FuM=
yyyy.yyy. 3600 IN DNSKEY 256 3 8 AwEAAacp1eCSgm0KMB5khT6Ju7/BUBNtmOWYt6bJ1cI3mE91a42AuXuN jOniblRf5neJUlyaBFcVq+73UCyqtu/QW7qrVwgkTMiAcZhHh5WTvK50 ifPZCP01AfS1OgPK1EoSunBnFcZyr1h+3HJz5Ql9+IJR0qRDMCbBzx3O 0w+dPEV3...
```
### Relevant configuration files
[snipped]
### Relevant logs and/or screenshots
Pasted above.
https://gitlab.isc.org/isc-projects/bind9/-/issues/1643
Problems reported in BIND 9.16.0 after hitting tcp-clients limit
2020-03-11T13:57:44Z, Michael McNally
Received by security-officer@isc.org from external party:
```
ISC BIND folk:
BIND 9.16.0 seems to get TCP queues stuck after the number of client TCP
connects hits the max configured with tcp-clients. The symptoms for the
affected addresses are:
o DNS over TCP (such as "dig +tcp @address") times out
o output "netstat -ln | egrep '^tcp.*:53'" shows non-0 (in fact always
11 so far) Recv-Q number
For example:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 XXX.XXX.3.138:53 0.0.0.0:* LISTEN
tcp 11 0 XXX.XXX.1.27:53 0.0.0.0:* LISTEN
tcp 11 0 XXX.XXX.64.26:53 0.0.0.0:* LISTEN
tcp 11 0 XXX.XXX.1.26:53 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN
tcp6 0 0 fe80::d267:26ff:fed2:53 :::* LISTEN
tcp6 0 0 XXXX:0:XXXX:XXXX::2:53 :::* LISTEN
tcp6 0 0 ::1:53 :::* LISTEN
tcp6 11 0 XXXX:0:XXXX:XXXX::10:53 :::* LISTEN
tcp6 11 0 XXXX:0:XXXX:XXXX::11:53 :::* LISTEN
tcp6 11 0 XXXX:0:XXXX:XXXX::12:53 :::* LISTEN
All addresses with Recv-Q of 11 fail to respond to DNS over TCP.
The only fix I've found is to stop & start named. I suspect that disabling
listening on the affected addresses, doing "rndc reconfig", then enabling
listening would work, but that's a bit of a pain.
This is BIND 9.16.0 on 2 Linux systems:
____________________Debian____________________
: uname -a
Linux xxx 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u1 (2019-07-19) x86_64 GNU/Linux
: named -V
BIND 9.16.0 (Stable Release) <id:6270e60>
running on Linux x86_64 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u1 (2019-07-19)
built by make with 'STD_CDEFINES=-DISC_FACILITY=LOG_LOCAL5' '--libdir=/usr/lib/x86_64-linux-gnu' '--with-openssl' '--enable-dnstap' '--enable-fixed-rrset' '--with-libtool' '--sysconfdir=/local/nsdata/etc' '--localstatedir=/local/nsdata/var'
compiled by GCC 8.3.0
compiled with OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
linked to OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
compiled with libxml2 version: 2.9.4
linked to libxml2 version: 20904
compiled with protobuf-c version: 1.3.1
linked to protobuf-c version: 1.3.1
threads support is enabled
default paths:
named configuration: /local/nsdata/etc/named.conf
rndc configuration: /local/nsdata/etc/rndc.conf
DNSSEC root key: /local/nsdata/etc/bind.keys
nsupdate session key: /local/nsdata/var/run/named/session.key
named PID file: /local/nsdata/var/run/named/named.pid
named lock file: /local/nsdata/var/run/named/named.lock
____________________CentOS____________________
: uname -a
Linux xxx 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
: named -V
BIND 9.16.0 (Stable Release) <id:6270e60>
running on Linux x86_64 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020
built by make with '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/local/nsdata/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-libtool' '--with-pic' '--disable-static' '--with-docbook-xsl=/usr/share/sgml/docbook/xsl-stylesheets' '--enable-fixed-rrset' '--with-gssapi=yes' '--disable-isc-spnego' '--localstatedir=/var' '--with-geoip=no' '--with-python' '--enable-dnstap' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS= -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'LDFLAGS=-Wl,-z,relro ' 'CPPFLAGS= -DDIG_SIGCHASE -DISC_FACILITY=LOG_LOCAL5' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
compiled by GCC 4.8.5 20150623 (Red Hat 4.8.5-39)
compiled with OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017
linked to OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017
compiled with libxml2 version: 2.9.1
linked to libxml2 version: 20901
compiled with zlib version: 1.2.7
linked to zlib version: 1.2.7
compiled with protobuf-c version: 1.0.2
linked to protobuf-c version: 1.0.2
threads support is enabled
default paths:
named configuration: /local/nsdata/etc/named.conf
rndc configuration: /local/nsdata/etc/rndc.conf
DNSSEC root key: /local/nsdata/etc/bind.keys
nsupdate session key: /var/run/named/session.key
named PID file: /var/run/named/named.pid
named lock file: /var/run/named/named.lock
________________________________________
In both cases "rndc status" shows that the number of TCP connections has hit
the max allowed:
version: BIND 9.16.0 (Stable Release) <id:6270e60>
running on xxx: Linux x86_64 3.10.0-1062.12.1.el7.x86_64 #1
SMP Tue Feb 4 23:02:59 UTC 2020
boot time: Wed, 26 Feb 2020 20:29:33 GMT
last configured: Wed, 26 Feb 2020 20:29:33 GMT
configuration file: /local/nsdata/etc/named.conf
(/local/nsdata/local/nsdata/etc/named.conf)
CPUs found: 16
worker threads: 16
UDP listeners per interface: 16
number of zones: 1016 (189 automatic)
debug level: 0
xfers running: 0
xfers deferred: 0
soa queries in progress: 0
query logging is OFF
recursive clients: 0/900/1000
tcp clients: 4/400 <======
TCP high-water: 400 <======
server is up and running
DNS over UDP is not affected & DNS over TCP to other addresses is not
affected.
Let me know what else you need to know about this.
```
https://gitlab.isc.org/isc-projects/bind9/-/issues/1313
master failing to build on MacOS High Sierra
2019-11-16T05:22:12Z, Mark Andrews
Failing when building libwrap.la:
```
gcc -dynamiclib -undefined dynamic_lookup -Wl,-z,interpose -o libwrap.la wrap.o -llmdb -L/opt/local/lib -luv -lpthread -ldl -L/opt/local/lib -L/opt/local/lib -lcmocka
ld: unknown option: -z
clang: error: linker command failed with exit code 1 (use -v to see invocation)
```
offending command introduced in 53f0b6c34d3fe01c885d8020d155061d55c19477
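macOS's ld64 rejects GNU ld's `-z keyword` syntax outright, so such a flag cannot simply be hard-coded into LDFLAGS. A hypothetical autoconf sketch of the usual guard (using the autoconf-archive `AX_CHECK_LINK_FLAG` macro; this is an illustration, not the fix that was actually committed):

```
# Sketch only: probe the linker before adding a GNU-ld-specific flag,
# so the build degrades gracefully on macOS/ld64.
AX_CHECK_LINK_FLAG([-Wl,-z,interpose],
                   [LDFLAGS="$LDFLAGS -Wl,-z,interpose"],
                   [AC_MSG_NOTICE([linker does not support -z interpose])])
```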
Additionally, `-Wl,-rpath=<path>` needs to be `-Wl,-rpath,<path>`.
https://gitlab.isc.org/isc-projects/bind9/-/issues/446
repeated `rndc reload` or `rndc reconfig` on bind 9.11.3 and 9.11.4 causes named memory usage to grow
2021-10-04T13:04:07Z, Alex Maestas
### Summary
Each run of `rndc reload` or `rndc reconfig`, with bind 9.11.3 and 9.11.4, in our configuration, causes named memory usage to grow.
### Steps to reproduce
Run `rndc reload` or `rndc reconfig` repeatedly, without changing the configuration.
### What is the current *bug* behavior?
Memory usage increases per run of `rndc reload`.
### What is the expected *correct* behavior?
Memory usage should remain relatively constant.
### Relevant configuration files
```
$ sudo named-checkconf -t /var/named/chroot -px
acl "real_localhost" {
127.0.0.1/32;
};
acl "lan_hosts" {
10.0.0.0/8;
172.16.0.0/12;
192.168.0.0/16;
};
acl "dns_resolvers" {
"lan_hosts";
};
controls {
inet 127.0.0.1 port 953 allow {
"localhost";
} keys {
"rndckey";
};
};
logging {
channel "general" {
file "/var/log/named.log";
severity notice;
print-time yes;
print-severity yes;
print-category yes;
};
channel "verbose" {
file "/var/log/verbose.log";
severity debug 1;
print-time yes;
print-severity yes;
print-category yes;
};
channel "query" {
file "/var/log/query.log";
severity info;
print-time yes;
print-severity no;
print-category no;
};
category "default" {
"general";
"verbose";
};
category "queries" {
"query";
};
};
options {
directory "/var/named";
dump-file "data/cache_dump.db";
listen-on {
"any";
};
listen-on-v6 {
"any";
};
memstatistics-file "data/named_mem_stats.txt";
pid-file "/var/run/named/named.pid";
querylog no;
statistics-file "data/named_stats.txt";
use-v4-udp-ports {
range 57345 61000;
};
auth-nxdomain no;
max-cache-size 15728640;
no-case-compress {
"localhost";
"lan_hosts";
};
recursion yes;
rrset-order {
order random;
};
allow-query {
"localhost";
};
allow-transfer {
"none";
};
forward only;
forwarders {
10.86.100.108;
10.86.110.129;
10.86.144.123;
10.86.95.123;
10.86.96.101;
10.86.97.126;
};
notify no;
};
key "rndckey" {
algorithm "hmac-md5";
secret "????????????????????????????????????????????????????????????";
};
zone "twitter.com.smf1.twitter.com" {
type master;
file "db.empty";
};
zone "twttr.net.smf1.twitter.com" {
type master;
file "db.empty";
};
zone "twitter.com.atla.twitter.com" {
type master;
file "db.empty";
};
zone "twttr.net.atla.twitter.com" {
type master;
file "db.empty";
};
zone "twitter.com.atla.twttr.net" {
type master;
file "db.empty";
};
zone "twttr.net.atla.twttr.net" {
type master;
file "db.empty";
};
zone "twitter.com.atlb.twitter.com" {
type master;
file "db.empty";
};
zone "twttr.net.atlb.twitter.com" {
type master;
file "db.empty";
};
zone "twitter.com.atlb.twttr.net" {
type master;
file "db.empty";
};
zone "twttr.net.atlb.twttr.net" {
type master;
file "db.empty";
};
zone "twitter.com.smfc.twitter.com" {
type master;
file "db.empty";
};
zone "twttr.net.smfc.twitter.com" {
type master;
file "db.empty";
};
zone "twitter.com.atlc.twitter.com" {
type master;
file "db.empty";
};
zone "twttr.net.atlc.twitter.com" {
type master;
file "db.empty";
};
zone "twitter.com.atlc.twttr.net" {
type master;
file "db.empty";
};
zone "twttr.net.atlc.twttr.net" {
type master;
file "db.empty";
};
zone "twitter.com.prod.twitter.com" {
type master;
file "db.empty";
};
zone "twitter.com.prod.twttr.net" {
type master;
file "db.empty";
};
zone "twttr.net.prod.twitter.com" {
type master;
file "db.empty";
};
zone "twttr.net.prod.twttr.net" {
type master;
file "db.empty";
};
zone "twitter.com.corpdc.twitter.com" {
type master;
file "db.empty";
};
zone "twitter.com.corpdc.twttr.net" {
type master;
file "db.empty";
};
zone "twttr.net.corpdc.twitter.com" {
type master;
file "db.empty";
};
zone "twttr.net.corpdc.twttr.net" {
type master;
file "db.empty";
};
zone "twtter.com" {
type master;
file "db.empty";
};
zone "twitter.com.twttr.net" {
type master;
file "db.empty";
};
zone "twttr.net.twttr.net" {
type master;
file "db.empty";
};
zone "." {
type hint;
file "root.hint";
};
zone "localhost" {
type master;
file "db.localhost";
};
zone "0.0.127.in-addr.arpa" {
type master;
file "db.127.0.0";
};
```
### Relevant logs and/or screenshots
We created core dumps by sending signal 11 to `named`, from several machines with varying memory usage.
First-pass naïve analysis shows that the strings 'KSATtstA' and 'udpdispatch' loosely correlate with the memory usage of the process. These hosts run bind 9.11.3.
```
$ for i in smf* ; do echo = $i = ; strings -a $i | sort | uniq -c | sort -nr | head -n10 ; done
= smf1-azg-31-sr1 =
65978 KSATtstA
64625 udpdispatch
6475 tSeD
2624 pMEMlpmA
2253 nSND
1710 !fuB
1492 twitter
1257 CmeMxcmA
1197 L'jh
1197 disp_sepool
= smf1-dha-15-sr1 =
8297 KSATtstA
7168 udpdispatch
2212 tSeD
1088 CmeMxcmA
584 twitter
564 nSND
406 kLWR
375 !fuB
326 pMEMlpmA`
279 ONBR
= smf1-duy-24-sr1 =
24776 KSATtstA
23596 udpdispatch
3224 tSeD
1251 nSND
1136 CmeMxcmA
982 pMEMlpmA
910 !fuB
838 twitter
437 psiD
437 disp_sepool
= smf1-duz-23-sr1 =
24777 KSATtstA 7
23579 udpdispatch
4300 tSeD
1278 nSND
1136 CmeMxcmA
982 pMEMlpmA
928 !fuB
856 twitter
437 psiD
437 disp_sepool
```
We also tested out 9.11.4:
```
$ ps auxww | grep \[n]amed
named 205598 5.9 0.0 1282692 229648 ? Ssl 21:30 0:02 /usr/sbin/named -u named -c /etc/named.conf -t /var/named/chroot -c /etc/named.conf
$ sudo rndc reload
server reload successful
$ ps auxww | grep \[n]amed
named 205598 9.8 0.1 1282172 291564 ? Ssl 21:30 0:04 /usr/sbin/named -u named -c /etc/named.conf -t /var/named/chroot -c /etc/named.conf
$ sudo rndc reload
server reload successful
$ ps auxww | grep \[n]amed
named 205598 14.3 0.1 1282172 346904 ? Ssl 21:30 0:07 /usr/sbin/named -u named -c /etc/named.conf -t /var/named/chroot -c /etc/named.conf
$ sudo rndc reload
server reload successful
$ ps auxww | grep \[n]amed
named 205598 18.9 0.1 1282172 291992 ? Ssl 21:30 0:09 /usr/sbin/named -u named -c /etc/named.conf -t /var/named/chroot -c /etc/named.conf
```
After six more `rndc reload` commands:
```
$ ps auxww | grep \[n]amed
named 205598 31.6 0.1 1348028 409972 ? Ssl 21:30 0:38 /usr/sbin/named -u named -c /etc/named.conf -t /var/named/chroot -c /etc/named.conf
```
We forced a core and found similar results:
```
$ strings -a core.205598 | sort | uniq -c | sort -nr | head -n10
65972 KSATtstA
64699 udpdispatch
5480 tSeD
2612 pMEMlpmA`
1667 nSND
1522 !fuB
1251 CmeMxcmA
1197 psiD
1197 disp_sepool
1008 disp_portpool
```
### Possible fixes
These strings correspond to various magic values, suggesting that some path `rndc reload` and `rndc reconfig` take is leaking `udpdispatch` structures tagged with ISCAPI_TASK_MAGIC and ONDESTROY_MAGIC. We have a valgrind report, which was inconclusive.
```
lib/isc/ondestroy.c:23:#define ONDESTROY_MAGIC ISC_MAGIC('D', 'e', 'S', 't')
lib/isc/task.c:89:#define TASK_MAGIC ISC_MAGIC('T', 'A', 'S', 'K')
lib/isc/include/isc/task.h:166:#define ISCAPI_TASK_MAGIC ISC_MAGIC('A','t','s','t')
```
https://gitlab.isc.org/isc-projects/bind9/-/issues/280
Master does not start on Centos7 or Debian7
2018-05-29T21:10:39Z, Stephen Morris
On both Debian7 and Centos7 Jenkins systems, BIND "master" (f6c213c87d684060b51754d5a384758da8db77a8) fails to start, outputting the message:
random.c:167: fatal error: RUNTIME_CHECK(RAND_bytes(buf, buflen) < 1) failed
* https://jenkins.isc.org/view/BIND/job/bind9-master-centos7-amd64-1/631/
* https://jenkins.isc.org/view/BIND-jobs_testing/job/bind9-master-debian7-amd64-1/1/
This may be an issue with the version of the library used for the random functions (the code runs OK on other systems, including Debian 9) but if so, a check should be made for the correct version in "configure".
https://gitlab.isc.org/isc-projects/bind9/-/issues/199
v9_10_sub: decrement_reference() not effective for node cleanup
2019-04-25T15:46:41Z, Ghost User
`decrement_reference()` doesn't appear effective for node cleanup in 9.10-sub. This is because `clean_cache_node()` fails to set `node->data` to NULL after freeing the `rbtdb_data_t` when all data under it has been cleaned up.
Something like this is required:
```
diff --git a/lib/dns/rbtdb.c b/lib/dns/rbtdb.c
index 5730d3b14e..eb23137681 100644
--- a/lib/dns/rbtdb.c
+++ b/lib/dns/rbtdb.c
@@ -1990,6 +1990,13 @@ clean_cache_node(dns_rbtdb_t *rbtdb, dns_rbtnode_t *node) {
rbtdb->common.mctx,
clean_iptree_nodedata,
rbtdb);
+
+ if ((data->nonecs_data == NULL) &&
+ (data->ecs_root == NULL))
+ {
+ isc_mem_put(rbtdb->common.mctx, data, sizeof(*data));
+ node->data = NULL;
+ }
}
node->dirty = 0;
```
https://gitlab.isc.org/isc-projects/bind9/-/issues/3058
SUMMARY: ThreadSanitizer: data race in read
2023-11-02T17:02:21Z, Ondřej Surý
This almost seems like we are passing some buffer to isc_task that libuv is still using.
```
==================
WARNING: ThreadSanitizer: data race (pid=22062)
Write of size 8 at 0x7fdcbeac0000 by thread T9:
#0 read <null> (libtsan.so.0+0x4ace2)
#1 uv__read /usr/src/libuv-v1.42.0/src/unix/stream.c:1164 (libuv.so.1+0x227e1)
#2 isc__trampoline_run /builds/isc-projects/bind9/lib/isc/trampoline.c:185 (libisc-9.17.20.so+0x7b0e1)
Previous read of size 8 at 0x7fdcbeac0000 by thread T8:
#0 memmove <null> (libtsan.so.0+0x5da6e)
#1 memmove /usr/include/bits/string_fortified.h:36 (libisc-9.17.20.so+0x453d9)
#2 isc_buffer_copyregion /builds/isc-projects/bind9/lib/isc/buffer.c:530 (libisc-9.17.20.so+0x453d9)
#3 dns_zone_forwardupdate /builds/isc-projects/bind9/lib/dns/zone.c:18408 (libdns-9.17.20.so+0x22081d)
#4 forward_action /builds/isc-projects/bind9/lib/ns/update.c:3748 (libns-9.17.20.so+0x516d6)
#5 task_run /builds/isc-projects/bind9/lib/isc/task.c:827 (libisc-9.17.20.so+0x7237a)
#6 isc_task_run /builds/isc-projects/bind9/lib/isc/task.c:907 (libisc-9.17.20.so+0x7237a)
#7 isc__nm_async_task netmgr/netmgr.c:835 (libisc-9.17.20.so+0x1e9ab)
#8 process_netievent netmgr/netmgr.c:914 (libisc-9.17.20.so+0x27efb)
#9 process_queue netmgr/netmgr.c:1008 (libisc-9.17.20.so+0x28a2a)
#10 process_all_queues netmgr/netmgr.c:754 (libisc-9.17.20.so+0x29353)
#11 async_cb netmgr/netmgr.c:783 (libisc-9.17.20.so+0x29353)
#12 uv__async_io /usr/src/libuv-v1.42.0/src/unix/async.c:163 (libuv.so.1+0x110ef)
#13 isc__trampoline_run /builds/isc-projects/bind9/lib/isc/trampoline.c:185 (libisc-9.17.20.so+0x7b0e1)
Location is heap block of size 1310720 at 0x7fdcbeac0000 allocated by main thread:
#0 malloc <null> (libtsan.so.0+0x32919)
#1 mallocx /builds/isc-projects/bind9/lib/isc/jemalloc_shim.h:33 (libisc-9.17.20.so+0x5b02a)
#2 mem_get /builds/isc-projects/bind9/lib/isc/mem.c:343 (libisc-9.17.20.so+0x5b02a)
#3 isc__mem_get /builds/isc-projects/bind9/lib/isc/mem.c:758 (libisc-9.17.20.so+0x5b02a)
#4 isc__netmgr_create netmgr/netmgr.c:319 (libisc-9.17.20.so+0x1f2a4)
#5 isc_managers_create /builds/isc-projects/bind9/lib/isc/managers.c:39 (libisc-9.17.20.so+0x59ef2)
#6 create_managers /builds/isc-projects/bind9/bin/named/main.c:920 (named+0x424a19)
#7 setup /builds/isc-projects/bind9/bin/named/main.c:1184 (named+0x424a19)
#8 main /builds/isc-projects/bind9/bin/named/main.c:1452 (named+0x424a19)
Thread T9 'isc-net-0008' (tid=22096, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x5bf45)
#1 isc_thread_create /builds/isc-projects/bind9/lib/isc/thread.c:79 (libisc-9.17.20.so+0x7466d)
#2 isc__netmgr_create netmgr/netmgr.c:328 (libisc-9.17.20.so+0x1f34b)
#3 isc_managers_create /builds/isc-projects/bind9/lib/isc/managers.c:39 (libisc-9.17.20.so+0x59ef2)
#4 create_managers /builds/isc-projects/bind9/bin/named/main.c:920 (named+0x424a19)
#5 setup /builds/isc-projects/bind9/bin/named/main.c:1184 (named+0x424a19)
#6 main /builds/isc-projects/bind9/bin/named/main.c:1452 (named+0x424a19)
Thread T8 'isc-net-0007' (tid=22094, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x5bf45)
#1 isc_thread_create /builds/isc-projects/bind9/lib/isc/thread.c:79 (libisc-9.17.20.so+0x7466d)
#2 isc__netmgr_create netmgr/netmgr.c:328 (libisc-9.17.20.so+0x1f34b)
#3 isc_managers_create /builds/isc-projects/bind9/lib/isc/managers.c:39 (libisc-9.17.20.so+0x59ef2)
#4 create_managers /builds/isc-projects/bind9/bin/named/main.c:920 (named+0x424a19)
#5 setup /builds/isc-projects/bind9/bin/named/main.c:1184 (named+0x424a19)
#6 main /builds/isc-projects/bind9/bin/named/main.c:1452 (named+0x424a19)
SUMMARY: ThreadSanitizer: data race (/lib64/libtsan.so.0+0x4ace2) in read
==================
ThreadSanitizer: reported 1 warnings
```
Not planned
https://gitlab.isc.org/isc-projects/bind9/-/issues/2953
Resolver issues with refactored dispatch code
2023-11-02T17:02:19Z, Michał Kępień
This issue attempts to describe various issues with resolver behavior
found after merging !4601 (#2401). Most of these issues are
intermittent, so it is important to keep track of them somewhere in
order to not forget that they exist. We should get to the bottom of all
of these issues before we release BIND 9.18.0.
1. [x] **Recursive Perflab tests cause the resolver to stop responding.**
This issue might be the simplest to start with because the behavior
observed seems to be consistent rather than intermittent. Namely,
all Perflab jobs which test a resolver seem to crank out a response
rate of some 70-120 kQPS at the beginning of the test and then...
the resolver stops responding indefinitely. While Perflab was not
designed with recursive tests in mind and therefore we can treat its
recursive results with a grain of salt, it certainly should not be
reporting zeros all over the place.
- https://perflab.isc.org/#/config/run/5bf195dd83ba91a870b2976f/
- https://perflab.isc.org/#/config/run/5cd6a166643076f6c1f6c26f/
- https://perflab.isc.org/#/config/run/5db74b6264458967f762143a/
- https://perflab.isc.org/#/config/run/5db74b7264458967f762143b/
- https://perflab.isc.org/#/config/run/5db74c2764458967f7621440/
- https://perflab.isc.org/#/config/run/5db74c3464458967f7621441/
(Resolved by !5500.)
2. [x] **`respdiff` tests are *sometimes* slow.**
Ever since we merged the dispatch branch, the `respdiff` tests
started failing *intermittently* for `main` (and only `main`)
because of timeouts.
- [job 2016337][1]: pass, ~2m30s per 10,000 queries
- [job 2016622][2]: pass, ~2m45s per 10,000 queries
- [job 2017990][3]: pass, ~2m30s per 10,000 queries
- [job 2020093][4]: fail, 7+ minutes per 10,000 queries
- [job 2023057][5]: fail, 16+ minutes per 10,000 queries
- [job 2023490][6]: pass, ~2m40s per 10,000 queries
I do not think varying CI runner stress can be blamed for this, not
for discrepancies this large. It also never happened before merging
!4601, AFAIK.
3. [x] **A lot of "stress" test graphs indicate growing memory use.** #3002
While testing October BIND 9 releases, one of the 1-hour "stress"
tests, run in recursive mode for BIND 9.17.19, yielded a graph which
indicates that memory use growth over time might be an issue.
https://wiki.isc.org/bin/viewfile/QA/BindQaResults_9_11_36?filename=bind-9.17.19-linux-amd64-recursive-1h.png;rev=1
However, that phenomenon was not observable for other OS/arch
combinations this specific code revision was tested with.
It was also not observable on the *same* OS/arch combination for a
very similar code revision (the code differences should not have any
effect on memory use patterns):
https://wiki.isc.org/bin/viewfile/QA/BindQaResults_9_11_36?filename=bind-9.17.19-linux-amd64-recursive-1h.png;rev=2
Pre-release tests run for BIND 9.17.20 confirmed that memory leaks
are a common thing when `named` is used as a recursive resolver.
More details are available in #3002.
The "stress" tests are run on isolated VMs and despite being pretty
synthetic (fixed traffic pattern, everything happens on one machine,
etc.), they have a history of being very stable, so typical issues
like test host load varying over time etc. are not a factor here.
4. [x] **Lame servers with IPv6 unreachable cause hang on shutdown.** #2927
5. [x] **resolver test fails intermittently** #3013
See https://gitlab.isc.org/isc-projects/bind9/-/jobs/2054296
```
I:resolver:query count error: 6 NS records: expected queries 10, actual 11
I:resolver:failed
```
6. [x] **Assertion failed in `dns_resolver_logfetch()`** #2962
7. [x] **Assertion failed in `dns_dispatch_gettcp()`** #2963
8. [x] **Assertion failed in `dns_resolver_destroyfetch()`** #2969
9. [x] **ThreadSanitizer issues with adb** #2978 #2979
10. [x] **fctx_cancelquery() attempts to process a query which has already been freed** #3018
11. [x] **premature TCP connection closure leaks fetch contexts (hang on shutdown)** #3026
12. [ ] **validator loops can cause shutdown hang** #3033
13. [ ] **ADB finds for a broken zone may cause fetch contexts to hang** #3037
14. [ ] **ASAN error in fctx_cancelquery()** #3102
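The memory-growth symptom from item 3 can also be spot-checked outside the "stress" harness by sampling `named`'s resident set size over time. The sketch below is illustrative only and not part of any ISC test harness: the function name is made up, and it is Linux-specific (it reads `/proc/<pid>/status`).

```shell
#!/bin/sh
# Sample the resident set size (VmRSS) of a process at fixed intervals,
# emitting "epoch-seconds kB" lines suitable for plotting memory growth.
# Linux-only: relies on /proc/<pid>/status.
rss_watch() {
    pid=$1
    interval=${2:-10}
    samples=${3:-6}
    i=0
    while [ "$i" -lt "$samples" ]; do
        # /proc/<pid>/status contains a line like: "VmRSS:  123456 kB"
        rss=$(awk '/^VmRSS:/ { print $2 }' "/proc/$pid/status") || return 1
        printf '%s %s\n' "$(date +%s)" "$rss"
        i=$((i + 1))
        if [ "$i" -lt "$samples" ]; then
            sleep "$interval"
        fi
    done
}

# Example: six samples of named's RSS, one minute apart:
#   rss_watch "$(pidof named)" 60 6
```

A steadily climbing second column over a run with a fixed traffic pattern would corroborate the graphs linked above.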
I decided to open a single issue for all of the above problems because I
sense they are somehow related and I hope that fixing the root cause of
one of them will eliminate the other ones as well.
[1]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2016337
[2]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2016622
[3]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2017990
[4]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2020093
[5]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2023057
[6]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2023490

Status: Not planned

https://gitlab.isc.org/isc-projects/bind9/-/issues/2730
[ISC-support #18552] Logging category for notify/xfer related messages
2024-03-27T13:17:04Z
Chuck Stearns

### Description
Logging category for notify/xfer related messages
### Request
The notify category does not include some messages, which end up in the general category instead. Other messages, such as "notify from" and "refused notify from non-master", might be better placed in xfer-in. The intent is to have all messages useful for troubleshooting one aspect of operation in a single log; for example, when troubleshooting zone transfer issues, all relevant messages would be in transfer.log. This segregation also reduces noise when using dynamic severity.
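A hedged sketch of the configuration this request would enable: routing the existing transfer-related categories to one channel. The category names `notify`, `xfer-in`, and `xfer-out` are real BIND logging categories; the channel name and file path are invented for this example, and the point of the request is precisely that some relevant messages today land in `general` rather than in these categories.

```
logging {
    channel transfer_log {
        file "/var/log/named/transfer.log" versions 5 size 20m;
        severity dynamic;
        print-time yes;
        print-category yes;
    };
    category notify   { transfer_log; };
    category xfer-in  { transfer_log; };
    category xfer-out { transfer_log; };
};
```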
### Links / references

Status: Not planned