BIND issues: https://gitlab.isc.org/isc-projects/bind9/-/issues (updated 2020-02-25T00:52:36Z)

https://gitlab.isc.org/isc-projects/bind9/-/issues/1629
contrib/dlz/modules/filesystem no longer builds 9.16.0 (Mark Andrews, 2020-02-25)

clang-format reordered the includes, causing isc_result_t to not be typedef'd early enough.

https://gitlab.isc.org/isc-projects/bind9/-/issues/1600
Core dump in resolver.c when shutting down. (Mark Andrews, 2023-11-02)

Job [#637123](https://gitlab.isc.org/isc-projects/bind9/-/jobs/637123) failed for 4e2ac5f6c79d91cc0c58d4c3c097e47b79d1f647:
```
05-Feb-2020 08:49:12.254 resolver.c:9813: INSIST(((res->dbuckets[i].list).head == ((void *)0))) failed, back trace
05-Feb-2020 08:49:12.254 #0 0x5620b7b6cc74 in ??
05-Feb-2020 08:49:12.254 #1 0x7f7c47de1857 in ??
05-Feb-2020 08:49:12.254 #2 0x7f7c48318728 in ??
05-Feb-2020 08:49:12.254 #3 0x7f7c48352808 in ??
05-Feb-2020 08:49:12.254 #4 0x7f7c4835345d in ??
05-Feb-2020 08:49:12.254 #5 0x7f7c4835355c in ??
05-Feb-2020 08:49:12.254 #6 0x7f7c47e06159 in ??
05-Feb-2020 08:49:12.254 #7 0x7f7c47699fa3 in ??
05-Feb-2020 08:49:12.254 #8 0x7f7c475ac4cf in ??
05-Feb-2020 08:49:12.254 exiting (due to assertion failure)
```
Artifacts saved. Core dump present (rpzrecurs/ns3/core.7909). Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/1572
Wait for mirror zone to be deleted (Mark Andrews, 2020-01-23)

Job [#589745](https://gitlab.isc.org/isc-projects/bind9/-/jobs/589745) failed for 1c371c1dfa13733521f39dcb7a4e530293362b66:
```
I:mirror:checking that a mirror zone can be deleted using rndc (28)
I:mirror:failed
I:mirror:exit status: 1
R:mirror:FAIL
E:mirror:Tue Jan 21 22:28:27 UTC 2020
```
```
n=`expr $n + 1`
echo_i "checking that a mirror zone can be deleted using rndc ($n)"
ret=0
# Remove the mirror zone added in the previous test.
$RNDCCMD 10.53.0.3 delzone verify-addzone > rndc.out.ns3.test$n 2>&1 || ret=1
# Check whether the mirror zone was removed.
$DIG $DIGOPTS @10.53.0.3 +norec verify-addzone SOA > dig.out.ns3.test$n 2>&1 || ret=1
grep "NXDOMAIN" dig.out.ns3.test$n > /dev/null || ret=1
grep "flags:.* aa" dig.out.ns3.test$n > /dev/null && ret=1
grep "flags:.* ad" dig.out.ns3.test$n > /dev/null || ret=1
if [ $ret != 0 ]; then echo_i "failed"; fi
status=`expr $status + $ret`
```
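The failure pattern above (rndc returns before the zone is fully gone, and a single `dig` then races the removal, as the shutdown logs below suggest) is the kind of race usually fixed by polling rather than a one-shot check. A minimal sketch of such a retry helper, with hypothetical names; this is not the actual fix applied to the test:

```sh
#!/bin/sh
# Hypothetical retry helper: run a command until it succeeds or the
# attempt budget is exhausted, sleeping one second between tries.
retry() {
    tries=$1; shift
    while [ "$tries" -gt 0 ]; do
        "$@" && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1
}

# Usage sketch: keep re-querying until the zone is really gone, e.g.
#   retry 10 sh -c '$DIG $DIGOPTS @10.53.0.3 +norec verify-addzone SOA | grep -q NXDOMAIN'
retry 3 true && echo "condition met"
```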
```
21-Jan-2020 22:28:23.184 received control channel command 'delzone verify-addzone'
21-Jan-2020 22:28:26.422 zone verify-addzone/IN: mirror zone is no longer in use; reverting to normal recursion
21-Jan-2020 22:28:26.422 shutting down
21-Jan-2020 22:28:26.425 client @0x7f59b8010370 10.53.0.1#34239: no matching view in class 'IN'
21-Jan-2020 22:28:26.425 client @0x7f59b8010370 10.53.0.1#34239: no matching view in class
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24320
;; flags: ad; QUESTION: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags: do; udp: 4096
; COOKIE: cde32bbbac68a62b
;; QUESTION SECTION:
;verify-addzone. IN SOA
21-Jan-2020 22:28:26.425 client @0x7f59b8010370 10.53.0.1#34239: error
21-Jan-2020 22:28:26.425 client @0x7f59b8010370 10.53.0.1#34239: send
21-Jan-2020 22:28:26.425 client @0x7f59b8010370 10.53.0.1#34239: sendto
21-Jan-2020 22:28:26.425 client @0x7f59b8010370 10.53.0.1#34239: senddone
21-Jan-2020 22:28:26.425 client @0x7f59b8010370 10.53.0.1#34239: next
21-Jan-2020 22:28:26.425 client @0x7f59b8010370 10.53.0.1#34239: endrequest
```

https://gitlab.isc.org/isc-projects/bind9/-/issues/1561
crash in validator (Evan Hunt, 2020-02-11)

The refactoring of validator.c made an intermittent crash possible if `validator_start()` is called with `val->event->message` set to `NULL`, which can occur when validating a negative cache entry.

https://gitlab.isc.org/isc-projects/bind9/-/issues/4504
named generates core and ends with signal SIGFPE, Arithmetic exception. (sagar sagar, 2024-01-02)
### Summary
With max-cache-size set to 1823420M, named crashes with an arithmetic exception and generates a core dump.
### BIND version affected
Version info
````
BIND 9.16.23-RH (Extended Support Version) <id:fde3b1f>
running on Linux x86_64 5.15.0-105.125.6.2.1.el9uek.x86_64 #2 SMP Thu Sep 14 21:51:15 PDT 2023
built by make with '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--with-python=/usr/bin/python3' '--with-libtool' '--localstatedir=/var' '--with-pic' '--disable-static' '--includedir=/usr/include/bind9' '--with-tuning=large' '--with-libidn2' '--with-maxminddb' '--with-dlopen=yes' '--with-gssapi=yes' '--with-lmdb=yes' '--without-libjson' '--with-json-c' '--enable-dnstap' '--enable-fixed-rrset' '--enable-full-report' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CC=gcc' 'CFLAGS= -O2 -flto=auto -ffat-lto-objects -fexceptions -g -grecord-gcc-switches -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -fstack-protector-strong -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -march=x86-64-v2 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' 'LDFLAGS=-Wl,-z,relro -Wl,--as-needed -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 ' 'LT_SYS_LIBRARY_PATH=/usr/lib64:' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
compiled by GCC 11.4.1 20230605 (Red Hat 11.4.1-2.1.0.1)
compiled with OpenSSL version: OpenSSL 3.0.7 1 Nov 2022
linked to OpenSSL version: OpenSSL 3.0.7 1 Nov 2022
compiled with libuv version: 1.42.0
linked to libuv version: 1.42.0
compiled with libxml2 version: 2.9.13
linked to libxml2 version: 20913
compiled with json-c version: 0.14
linked to json-c version: 0.14
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
linked to maxminddb version: 1.5.2
compiled with protobuf-c version: 1.3.3
linked to protobuf-c version: 1.3.3
threads support is enabled
default paths:
named configuration: /etc/named.conf
rndc configuration: /etc/rndc.conf
DNSSEC root key: /etc/bind.keys
nsupdate session key: /var/run/named/session.key
named PID file: /var/run/named/named.pid
named lock file: /var/run/named/named.lock
geoip-directory: /usr/share/GeoIP
````
### Relevant configuration files
Add `max-cache-size 1823420M;` to named.conf to replicate the issue; why this particular value matters is explained further in the report.
On our system, total memory according to `grep -i MemTotal /proc/meminfo` is:
MemTotal: 2124386752 kB
We had not configured the max-cache-size option in our configuration file, so named sets max-cache-size to 90% of total memory, as seen in the log:
````none:89: 'max-cache-size 90%' - setting to 1867136MB (out of 2074596MB)````
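For reference, the reported trigger can be reproduced with a minimal options block (a sketch; the rest of the reporter's named.conf is not shown in this issue):

```
options {
    max-cache-size 1823420M;
};
```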
### Relevant logs
As soon as it starts running, it generates:
````
Program terminated with signal SIGFPE, Arithmetic exception.
#0 0x00007f29bc9f5a8d in more_frags (new_size=0, ctx=0x7f29b032e6c0) at ../../../lib/isc/mem.c:457
457 frags = (int)(total_size / new_size);
[Current thread is 1 (Thread 0x7f29bb9e5640 (LWP 346082))]
(gdb) bt
#0 0x00007f29bc9f5a8d in more_frags (new_size=0, ctx=0x7f29b032e6c0) at ../../../lib/isc/mem.c:457
#1 mem_getunlocked (ctx=ctx@entry=0x7f29b032e6c0, size=size@entry=4294967296) at ../../../lib/isc/mem.c:522
#2 0x00007f29bca003ce in isc___mem_get (ctx0=0x7f29b032e6c0, size=4294967296, file=0x7f29bcc8e980 "../../../lib/dns/rbt.c", line=2387) at ../../../lib/isc/mem.c:1066
#3 0x00007f29bcb6c465 in rehash (newbits=<optimized out>, rbt=0x7f29b682d010) at ../../../lib/dns/rbt.c:2387
#4 maybe_rehash (rbt=0x7f29b682d010, newcount=<optimized out>) at ../../../lib/dns/rbt.c:2409
#5 0x00007f29bcb6d9d0 in dns_rbt_adjusthashsize (size=<optimized out>, rbt=<optimized out>) at ../../../lib/dns/rbt.c:1098
#6 dns_rbt_adjusthashsize (rbt=<optimized out>, size=<optimized out>) at ../../../lib/dns/rbt.c:1084
#7 0x00007f29bcb84dd9 in adjusthashsize (db=0x7f29b6829010, size=1911994449920) at ../../../lib/dns/rbtdb.c:8129
#8 0x000055742d5bd66b in configure_view (view=<optimized out>, viewlist=<optimized out>, config=0x7f29b6d60ee8, vconfig=0x0, cachelist=<optimized out>, kasplist=<optimized out>, bindkeys=0x0,
mctx=0x55742e088c40, actx=0x7f29bbb2f538, need_hints=true) at ../../../bin/named/server.c:4625
#9 0x000055742d5cba8b in load_configuration (filename=<optimized out>, server=server@entry=0x7f29b6d32010, first_time=first_time@entry=true) at ../../../bin/named/server.c:8997
#10 0x000055742d5cdc1e in run_server (task=<optimized out>, event=<optimized out>) at ../../../bin/named/server.c:9709
#11 0x00007f29bca221bd in task_run (task=0x7f29b6d3d010) at ../../../lib/isc/task.c:857
#12 isc_task_run (task=0x7f29b6d3d010) at ../../../lib/isc/task.c:950
#13 0x00007f29bca0d2a9 in isc__nm_async_task (worker=0x55742e09bfb0, ev0=0x7f29b6d478a8) at netmgr/../../../../lib/isc/netmgr/netmgr.c:873
#14 process_netievent (worker=worker@entry=0x55742e09bfb0, ievent=0x7f29b6d478a8) at netmgr/../../../../lib/isc/netmgr/netmgr.c:958
#15 0x00007f29bca0d425 in process_queue (worker=worker@entry=0x55742e09bfb0, type=type@entry=NETIEVENT_TASK) at netmgr/../../../../lib/isc/netmgr/netmgr.c:1027
#16 0x00007f29bca0dc17 in process_all_queues (worker=0x55742e09bfb0) at netmgr/../../../../lib/isc/netmgr/netmgr.c:798
#17 async_cb (handle=0x55742e09c310) at netmgr/../../../../lib/isc/netmgr/netmgr.c:827
#18 0x00007f29bc7a6b3d in uv__async_io (loop=0x55742e09bfc0, w=<optimized out>, events=<optimized out>) at src/unix/async.c:163
#19 0x00007f29bc7c285e in uv__io_poll (loop=0x55742e09bfc0, timeout=<optimized out>) at src/unix/epoll.c:374
#20 0x00007f29bc7ac5a8 in uv__io_poll (timeout=<optimized out>, loop=0x55742e09bfc0) at src/unix/udp.c:122
#21 uv_run (loop=loop@entry=0x55742e09bfc0, mode=mode@entry=UV_RUN_DEFAULT) at src/unix/core.c:389
#22 0x00007f29bca0d4b7 in nm_thread (worker0=0x55742e09bfb0) at netmgr/../../../../lib/isc/netmgr/netmgr.c:733
#23 0x00007f29bca1ff9a in isc__trampoline_run (arg=0x55742e09fe90) at ../../../lib/isc/trampoline.c:196
#24 0x00007f29bc200812 in start_thread (arg=<optimized out>) at pthread_create.c:443
#25 0x00007f29bc1a0450 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
(gdb)
````
I did some analysis: the quantize() function called from mem_getunlocked() returns zero for size = 4294967296.
I did some testing around this, and it seems quantize() can return zero for these cases, ours being one of them:
````
for(size_t i=1;i<4294967399;i++)
{
size_t result = quantize(i);
if(result==0)
printf("%lu=%lu\n",i,result);
}
4294967289=0
4294967290=0
4294967291=0
4294967292=0
4294967293=0
4294967294=0
4294967295=0
4294967296=0
````

https://gitlab.isc.org/isc-projects/bind9/-/issues/4235
Flamethrower instance #3 stops querying BIND in RPZ mode (Michal Nowak, 2024-01-22)

Job [#3554059](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3554059) failed for 2086be9bcafb6985e4bdd79421fe9fb44ff5c9b5.
The `stress:rpz:fedora:38:amd64` "stress" test often fails on BIND 9.16 (and -S) because the Flamethrower no. 3 stops sending queries over TCP. The other TCP Flamethrower (no. 4) and two UDP Flamethrowers keep working as they should. The issue is limited to BIND 9.16 and the amd64 RPZ job. The Fedora 38 image has Flamethrower 0.11.0 from Fedora repos.
This issue is close to isc-projects/bind9#2395, but not quite. I wonder if the https://gitlab.isc.org/isc-projects/bind9/-/issues/2395#note_187991 fix or bumping to Flamethrower `master` would work.
The first occurrence of this issue seems to be https://gitlab.isc.org/isc-projects/bind9/-/jobs/3435489 from 2 June 2023, between 9.16.41 and 9.16.42 releases. I could not reproduce the problem when manually triggering the job in the CI.
```
2023-07-30:13:27:29 INFO: Server 'ns4' (rpz) received 5,386,242 TCP queries
2023-07-30:13:27:29 INFO: About 10,800,000 TCP queries were expected
2023-07-30:13:27:29 INFO: Minimum number of TCP queries required to pass is 9,720,000
2023-07-30:13:27:29 ERROR: BIND did not process enough TCP queries
```
IPv4 TCP Flamethrower: [generator.log](/uploads/86f30e615716be3e60bbb0472eb82c60/generator.log)
```
45.4845s: send: 1760, avg send: 1406, recv: 1772, avg recv: 1386, min/avg/max resp: 120.112/462.365/2886.94ms, in flight: 587, timeouts: 11
46.4852s: send: 100, avg send: 1378, recv: 633, avg recv: 1369, min/avg/max resp: 52.8356/730.897/1950.99ms, in flight: 50, timeouts: 6
47.4848s: send: 0, avg send: 1378, recv: 7, avg recv: 1340, min/avg/max resp: 1175.22/2074.78/2833.18ms, in flight: 33, timeouts: 6
48.4851s: send: 0, avg send: 1378, recv: 2, avg recv: 1312, min/avg/max resp: 2511.72/2554.51/2597.29ms, in flight: 12, timeouts: 10
49.486s: send: 0, avg send: 1378, recv: 0, avg recv: 1312, min/avg/max resp: 0/-nan/0ms, in flight: 12, timeouts: 0
...
3599.61s: send: 0, avg send: 1378, recv: 0, avg recv: 1312, min/avg/max resp: 0/-nan/0ms, in flight: 12, timeouts: 0
3600.61s: send: 0, avg send: 1378, recv: 0, avg recv: 1312, min/avg/max resp: 0/-nan/0ms, in flight: 12, timeouts: 0
3600.87s: send: 0, avg send: 1378, recv: 0, avg recv: 1312, min/avg/max resp: 0/-nan/0ms, in flight: 12, timeouts: 0
...
runtime : 3600.87 s
total sent : 65289
total rcvd : 64877
```
About 5 million TCP queries should have been sent. Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3648
Memory leak in authoritative-only servers (Greg Choules, 2023-10-30)

A customer has reported that some very busy primary/secondary servers have increasing memory usage and decreasing performance over a period of several days, ultimately resulting in the servers crashing and needing to be restarted. The version in use is 9.16.30.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3104
Add latest DNS RFCs to doc/arm/general.rst (Suzanne Goldlust, 2022-01-19)

Some new RFCs have been published (and listed on https://www.isc.org/rfcs/), but they haven't been added to the ARM list yet.

https://gitlab.isc.org/isc-projects/bind9/-/issues/2792
memory trace (-m trace) overhaul (Brian Conry, 2022-12-12)

In BIND there are as many as three different layers of memory management, including libc. Proper memory accounting and tracing will need to be able to track any allocation through all of those layers, including how the size and returned address may change through those transitions.
Original issue statement
> When `isc__mem_allocate` calls `ADD_TRACE` it uses a combination of pointer address `si` and size `si[-1].size` that is inconsistent.
>
> Specifically, the size reported (`si[-1].size`) has been modified from what was requested by `ALIGNMENT_SIZE` at least once (and `ISC_UNLIKELY` a second time) while the pointer reported (`si`) is offset from the actual base of the allocation by the same amount.
>
> This will lead to confusing `-m trace` entries such as:
> ```
> add 0x7f8f6240f018 size 104 file task.c line 1400 mctx 0x55a572544320
> ...
> add 0x7f8f6240f078 size 104 file hash.c line 159 mctx 0x55a572544320
> ```
>
> A sharp-eyed reader may notice that there are only 96 bytes between the two reported addresses even though the size reported is 104 bytes.
>
> This is only a "cosmetic error" in the `-m trace` output, so it's automatically low priority.
>
> I discovered this in 9.11 and have verified it still exists in both 9.16 and 9.17.
Edit: the merge of jemalloc support in 9.17.17 has changed things, requiring an update to all of these diagrams; consider them obsolete unless the comment containing them explicitly mentions being updated for 9.17.17.

https://gitlab.isc.org/isc-projects/bind9/-/issues/1264
[CVE-2019-6477] No quota on the number of queries on a single TCP connection/DoS possibility (Witold Krecicki, 2019-11-28)

Related/discovered: https://support.isc.org/Ticket/Display.html?id=15332
With pipelining enabled, each incoming query on a TCP connection causes the creation of an ns_client_t object. The number of outstanding queries is not guarded by any quota, and I was able to hit 30000 clients created on my laptop, causing allocation of 2.5G of memory. Then, cleaning up those clients causes a huge load on the server as it's unable to handle any other queries. This leads to a *very* simple DoS.
This behaviour was introduced when we fixed the tcp-clients quota; v9_11@724ad961dfb740ec04af14ad448002d6ee9a3067 works properly.

https://gitlab.isc.org/isc-projects/bind9/-/issues/920
lib/dns/message.c:1626: INSIST(free_name == isc_boolean_false) failed, with SIG0 response and SIG0 in additional record (Ondřej Surý, 2019-03-29)

[As reported by Jan Žižka <jan.zizka@nokia.com> to the security-officer@isc.org address:]
Hi,
we have been running one of the Codenomicon test suites, sending various DNS responses back to client requests triggered by nslookup and dig, and we have hit an `abort()` with a response containing a SIG(0) answer type and a SIG(0) type in the additional records.
```
Domain Name System (response)
Transaction ID: 0x2f83
Flags: 0x8100 Standard query response, No error
Questions: 1
Answer RRs: 1
Authority RRs: 1
Additional RRs: 1
Queries
dns.suite.local: type A, class IN
Name: dns.suite.local
[Name Length: 15]
[Label Count: 3]
Type: A (Host Address) (1)
Class: IN (0x0001)
Answers
la\030el: type SIG, class IN
Name: la\030el
Type: SIG (security signature) (24)
Class: IN (0x0001)
Time to live: 3600
Data length: 35
Type Covered: Unused (0)
Algorithm: RSA/SHA1 (5)
Labels: 85
Original TTL: 0 (0 seconds)
Signature Expiration: Jun 8, 1970 13:43:44.000000000 CET
Signature Inception: Jan 16, 2019 13:45:51.000000000 CET
Key Tag: 0
Signer's name: dns.suite.local
Signature: 6c00
Authoritative nameservers
dns.suite.local: type A, class IN, addr 0.0.0.0
Name: dns.suite.local
Type: A (Host Address) (1)
Class: IN (0x0001)
Time to live: 3600
Data length: 4
Address: 0.0.0.0
Additional records
suite.local: type SIG, class IN
Name: suite.local
Type: SIG (security signature) (24)
Class: IN (0x0001)
Time to live: 3600
Data length: 35
Type Covered: Unused (0)
Algorithm: RSA/SHA1 (5)
Labels: 0
Original TTL: 0 (0 seconds)
Signature Expiration: Feb 7, 1970 23:13:20.000000000 CET
Signature Inception: Jan 16, 2019 13:45:51.000000000 CET
Key Tag: 72
Signer's name: dns.suite.local
Signature: 6c00
```
It seems that `getsection()` doesn't know how to handle this kind of packet: there is already a SIG(0) record, so `free_name` is left 'true'.
Stack trace of thread 8334:
```
#0 0x00007fb9b0ad153f __GI_raise (libc.so.6)
#1 0x00007fb9b0abb895 __GI_abort (libc.so.6)
#2 0x00007fb9b147efea isc_assertion_failed (libisc.so.169)
#3 0x00007fb9b160de30 getsection (libdns.so.1102)
#4 0x00007fb9b160e3e2 dns_message_parse (libdns.so.1102)
#5 0x000000000041bff5 recv_done (dig)
#6 0x00007fb9b14b0b72 dispatch (libisc.so.169)
#7 0x00007fb9b14b0ecf run (libisc.so.169)
#8 0x00007fb9b0fde58e start_thread (libpthread.so.0)
#9 0x00007fb9b0b966a3 __clone (libc.so.6)
```
I have made a reproducer script (attached); it will work on Fedora 29 out of the box with bind 9.11.4-P2-RedHat-9.11.4-13.P2.fc29.
I have also tried configuring named with a forwarding configuration and used a simple script to send the problematic response back, but I was not able to force the same code path in `getsection()`. Still, I feel this problem could also hit named and cause it to `abort()`, so I wanted to report it through this channel first, just to make sure.
I didn't manage to figure out a proper fix; I can work around it locally with a check for this specific condition, but I don't feel that is right. It would be better if one of the BIND developers had a look.
Just to make sure, I'm also attaching logs from a local reproducer run and a pcap capture of the packets, in case the reproducer script fails for you.
-Jan
[reproduce.sh](/uploads/fb3f6b586e7c285812b36aacdade9aa8/reproduce.sh)
[reproduce.pcap](/uploads/853541250bc1ba2a2d0b9fce3062112b/reproduce.pcap)
[reproduce.log](/uploads/edb733fad35ca0c0ec1ce35d2d2ab3a1/reproduce.log)
[server.py](/uploads/26b2695a797fd229f1241c152a222c78/server.py)

https://gitlab.isc.org/isc-projects/bind9/-/issues/605
Add SipHash24 and synchronize the Cookie algorithm with other vendors (Ondřej Surý, 2019-07-22)

BIND 9.15.x

https://gitlab.isc.org/isc-projects/bind9/-/issues/506
Requesting more explicit support for and demonstration of FIPS compliant crypto library [ISC-support #13613] (Sara Dickisnon, 2019-10-23)

### Description
Certain environments mandate the use of FIPS compliant TLS libraries for DNSSEC signing. At the moment restricting BIND to only build/run against FIPS compliant crypto libraries or demonstrating that a given instance is using a FIPS compliant library is rather implicit. This capability is often required to demonstrate compliance to auditors.
### Request
I'd like to request more explicit handling of FIPS in BIND, for example:
1. It would be nice to have a BIND runtime command tool that reported the capabilities of the currently used crypto library. Since libraries might be opened dynamically and system/environment variables can affect library behavior this would provide certainty for the user.
2. It would be nice to have a compile and/or runtime method to restrict BIND to using only crypto libraries with certain characteristics e.g. FIPS compliance which are typically exposed via library APIs.
### Links / references

https://gitlab.isc.org/isc-projects/bind9/-/issues/405
Potential assertion in isc-bind-9.12.2 (Ondřej Surý, 2018-11-08)

A new issue reported by Robert Święcki <robert@swiecki.net> to security-officer:
With some fuzzing of ISC-BIND-9.12.2 with a honggfuzz setup (from https://github.com/google/honggfuzz/tree/master/examples/bind), I'm able to hit an assertion which ends up with SIGABRT.
```
dispatch.c:2464: INSIST(disp->tcpbuffers == 0) failed.
#0 0x00007ffff6caf6a0 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0 0x00007ffff6caf6a0 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff6cb0cf7 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x000000000052d2ae in assertion_failed (file=0xde92a0 <.str> "resolver.c", line=7033, type=isc_assertiontype_require,
cond=0xded340 <.str.213> "(__builtin_expect(!!((query) != ((void*)0)), 1) && __builtin_expect(!!(((const isc__magic_t *)(query))->magic == ((('Q') << 24 | ('!') << 16 | ('!') << 8 | ('!')))), 1))") at ./main.c:252
#3 0x0000000000bb9c97 in isc_assertion_failed (file=0x2 <error: Cannot access memory at address 0x2>, line=-446762384, type=isc_assertiontype_require, cond=0x7ffff6caf6a0 <raise+272> "H\213\214$\b\001") at assertions.c:51
#4 0x00000000009aa838 in resquery_response (task=0x7fffe3d253b8, event=0x7fffe37dff08) at resolver.c:7033
#5 0x0000000000c46658 in dispatch (manager=<optimized out>) at task.c:1139
#6 0x0000000000c4135d in run (uap=0x2) at task.c:1311
#7 0x00007ffff77a351a in start_thread (arg=0x7fffe55f0700) at pthread_create.c:465
#8 0x00007ffff6d703ef in clone () from /lib/x86_64-linux-gnu/libc.so.6
```
Have you seen it before?
If not, I'll try to gather some more info on it, and send you the details, though I'm attaching my config, so you can check it for obvious problems.
[named.conf](/uploads/032f4988bd3aba2d2f53bc62e6367a88/named.conf)
And the report from honggfuzz, which is not particularly more informative:
```
=====================================================================
TIME: 2018-07-11.14:26:50
=====================================================================
FUZZER ARGS:
mutationsPerRun : 6
externalCmd : NULL
fuzzStdin : FALSE
timeout : 10 (sec)
ignoreAddr : (nil)
ASLimit : 0 (MiB)
RSSLimit : 0 (MiB)
DATALimit : 0 (MiB)
targetPid : 0
targetCmd :
wordlistFile : NULL
dynFileMethod:
fuzzTarget : /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named -A client:1:1:1:1:1:1 -f -c /usr/local/google/home/swiecki/fuzz/bind/dist/etc/named.conf
ORIG_FNAME: IN.req-response//8b6e4a1f05567f57d1a8dd3cbb50fc9f.00000127.honggfuzz.cov
FUZZ_FNAME: ./SIGABRT.PC.7ffff6caf6a0.STACK.cfb0c006c.CODE.-6.ADDR.(nil).INSTR.mov____0x108(%rsp),%rcx.fuzz
PID: 47832
SIGNAL: SIGABRT (6)
FAULT ADDRESS: (nil)
INSTRUCTION: mov____0x108(%rsp),%rcx
STACK HASH: 0000000cfb0c006c
STACK:
<0x00007ffff6cb0cf7> [[UNKNOWN]():0 at /lib/x86_64-linux-gnu/libc-2.26.so]
<0x0000000000bb9ca1> [isc_assertion_failed():52 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x00000000006c9550> [dispatch_free():2465 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x00000000006c8658> [destroy_disp():549 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x0000000000c46658> [dispatch():1142 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x0000000000c4135d> [run():1320 at /usr/local/google/home/swiecki/fuzz/bind/bind-9.12.2/bin/named/named]
<0x00007ffff77a351a> [[UNKNOWN]():0 at /lib/x86_64-linux-gnu/libpthread-2.26.so]
<0x00007ffff6d703ef> [[UNKNOWN]():0 at /lib/x86_64-linux-gnu/libc-2.26.so]
=====================================================================
```

https://gitlab.isc.org/isc-projects/bind9/-/issues/387
BIND crashes while resolving DNAME with deny-answer-aliases (Witold Krecicki, 2018-11-08)

Reported by Tony Finch to wpk and security-officer:
named.conf:
```
options {
recursion yes;
deny-answer-aliases {
"cam.ac.uk";
} except-from {
"cam.ac.uk";
};
};
```
```
dig @localhost 130.232.128.in-addr.arpa dname
```
results in an INSIST failure:
```
05-Jul-2018 17:17:43.684 name.c:2150: REQUIRE(suffixlabels > 0) failed, back trace
```
```
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
#1 0x00007ffff6279801 in __GI_abort () at abort.c:79
#2 0x00005555555ae9e7 in assertion_failed (file=<optimized out>, line=<optimized out>, type=<optimized out>, cond=<optimized out>) at ./main.c:250
#3 0x000055555578fa1a in isc_assertion_failed (file=file@entry=0x555555808500 "name.c", line=line@entry=2150,
type=type@entry=isc_assertiontype_require, cond=cond@entry=0x555555808718 "suffixlabels > 0") at assertions.c:51
#4 0x00005555556715ea in dns_name_split (name=0x7fffe8051aa0, suffixlabels=0, prefix=<optimized out>, suffix=<optimized out>) at name.c:2150
#5 0x000055555559ee1a in is_answertarget_allowed (fctx=fctx@entry=0x7fffe8051790, qname=qname@entry=0x7fffe8051aa0, rname=0x7fffef8867e0,
rdataset=0x7fffef8895c0, chainingp=chainingp@entry=0x0) at resolver.c:6637
#6 0x000055555559f7d4 in rctx_answer_match (rctx=0x7ffff2d0d470) at resolver.c:8173
#7 rctx_answer_positive (rctx=rctx@entry=0x7ffff2d0d470) at resolver.c:7927
#8 0x00005555556ea738 in rctx_answer (rctx=0x7ffff2d0d470) at resolver.c:7811
#9 resquery_response (task=<optimized out>, event=<optimized out>) at resolver.c:7342
#10 0x00005555557b2cb1 in dispatch (manager=0x7ffff7f73010) at task.c:1139
#11 run (uap=0x7ffff7f73010) at task.c:1311
#12 0x00007ffff69f26db in start_thread (arg=0x7ffff2d0e700) at pthread_create.c:463
#13 0x00007ffff635a88f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
```

https://gitlab.isc.org/isc-projects/bind9/-/issues/360
Replace FNV-1a with SipHash (Ghost User, 2019-05-21)

NOTE: This is a confidential ticket because it may describe a vulnerability.
I suggest replacing FNV-1a in BIND with SipHash: https://131002.net/siphash/siphash.pdf
BIND uses FNV-1a for the hashtables with a random starting seed (the offset basis), but IMO it falls short of being a true universal hash function: I suspect keys would map to different buckets depending on the initial offset basis chosen, but colliding keys would still map to the same bucket as each other regardless of the initial offset basis (only the bucket number may differ).
SipHash is a more robust hash function and it suggests the hash table function as an application. The change to SipHash would be fairly easy, but note that some parts of BIND may perform incremental hashing (i.e., they add input data to compute the hash from different places of the library). So the hash function would need to follow an API similar to the HMAC contexts in BIND, OR, the data would have to be concatenated into a buffer as a whole before hashing.
I suspect collisions may be exploitable by hash flooding (there have also been previous reports of these). See section 7 of the siphash paper and references [13], etc. in the paper for details.
See this presentation: https://media.ccc.de/v/29c3-5152-en-hashflooding_dos_reloaded_h264

https://gitlab.isc.org/isc-projects/bind9/-/issues/339
BIND crash processing .jnl file for outbound IXFR (2018-07-05, Cathy Almond)
BIND 9.12.1-P2
### Summary
Three authoritative servers all crashed at around the same time with:
```
13-Jun-2018 06:16:37.710 general: critical: journal.c:1663: INSIST(j->offset <= j->it.epos.offset) failed
13-Jun-2018 06:16:37.710 general: critical: exiting (due to assertion failure)
```
When trying to restart BIND, it fails to launch and logs the same lines as above.
### Steps to reproduce
Restarting BIND reproduces the problem.
Removing the zone .jnl file and restarting resolves the problem.
It would appear that named is attempting to process an outbound IXFR request at the time of the crash, after having recently received a significantly-sized inbound update:
```
13-Jun-2018 05:40:30.552 xfer-in: info: transfer of 'XXXXXXXX/IN' from XX.XX.XX.XX#53: Transfer status: success
13-Jun-2018 05:40:30.552 xfer-in: info: transfer of 'XXXXXXXX/IN' from XX.XX.XX.XX#53: Transfer completed: 113359 messages, 27271720 records, 1800453680 bytes, 669.247 secs (2690267 bytes/sec)
13-Jun-2018 05:40:30.569 xfer-out: info: client @0xXXXXXXXXXX XX.XX.XX.XX#36227/key XXXX (XXXXXXXX): transfer of 'XXXXXXXX/IN': IXFR started: TSIG XXXX (serial 2018061320 -> 2018061321)
```
### What is the current *bug* behavior?
named crashes
### What is the expected *correct* behavior?
named doesn't crash
### Relevant configuration files
Not yet available
### Relevant logs and/or screenshots
See above
### Possible fixes
I am wondering if the size of the .jnl file might be significant - ~4Gb

https://gitlab.isc.org/isc-projects/bind9/-/issues/263
assertion failure: critical: query.c:5541: REQUIRE(qtype != 0) failed (2019-04-25, Ghost User)
## Description
Last night one of our bind caching resolvers crashed several times with error
```
15-May-2018 23:56:35.904 general: critical: query.c:5541: REQUIRE(qtype != 0) failed, back trace
15-May-2018 23:56:35.904 general: critical: #0 0x429210 in ??
15-May-2018 23:56:35.904 general: critical: #1 0x7f561c9ee17a in ??
15-May-2018 23:56:35.904 general: critical: #2 0x7f561e069cde in ??
15-May-2018 23:56:35.904 general: critical: #3 0x7f561e07814d in ??
15-May-2018 23:56:35.904 general: critical: #4 0x7f561e074f10 in ??
15-May-2018 23:56:35.904 general: critical: #5 0x7f561e077dea in ??
15-May-2018 23:56:35.904 general: critical: #6 0x7f561e070374 in ??
15-May-2018 23:56:35.904 general: critical: #7 0x7f561e07b8e7 in ??
15-May-2018 23:56:35.904 general: critical: #8 0x7f561e07bcef in ??
15-May-2018 23:56:35.904 general: critical: #9 0x7f561e05ba09 in ??
15-May-2018 23:56:35.904 general: critical: #10 0x7f561ca12d6b in ??
15-May-2018 23:56:35.904 general: critical: #11 0x7f561bd4be25 in ??
15-May-2018 23:56:35.904 general: critical: #12 0x7f561aff334d in ??
15-May-2018 23:56:35.904 general: critical: exiting (due to assertion failure)
```
## Version information
BIND 9.12.1-P1 (with patches for CVE-2018-5736 and CVE-2018-5737)
```
BIND 9.12.1-P1-RedHat-9.12.1_P1-1.el7.0 <id:f729b2b>
running on Linux x86_64 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Dec 28 14:23:39 EST 2017
built by make with '--build=x86_64-koji-linux-gnu' '--host=x86_64-koji-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/opt/named' '--bindir=/opt/named/bin' '--sbindir=/opt/named/sbin' '--sysconfdir=/etc' '--datadir=/opt/named/share' '--includedir=/opt/named/include' '--libdir=/opt/named/lib64' '--libexecdir=/opt/named/libexec' '--localstatedir=/var' '--sharedstatedir=/var/lib' '--mandir=/opt/named/share/man' '--infodir=/opt/named/share/info' '--exec-prefix=/opt/named' '--with-libtool' '--enable-threads' '--enable-ipv6' '--enable-largefile' '--with-randomdev=/dev/urandom' '--with-tuning=large' '--enable-dnstap' '--disable-crypto-rand' '--disable-static' '--disable-openssl-version-check' '--includedir=/opt/named/include/bind9' '--with-docbook-xsl=/opt/named/share/sgml/docbook/xsl-stylesheets' 'build_alias=x86_64-koji-linux-gnu' 'host_alias=x86_64-koji-linux-gnu' 'CFLAGS= -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'LDFLAGS=-Wl,-z,relro ' 'CPPFLAGS= -DDIG_SIGCHASE'
compiled by GCC 4.8.5 20150623 (Red Hat 4.8.5-28)
compiled with OpenSSL version: OpenSSL 1.0.2k 26 Jan 2017
linked to OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017
compiled with libxml2 version: 2.9.1
linked to libxml2 version: 20901
compiled with zlib version: 1.2.7
linked to zlib version: 1.2.7
threads support is enabled
```
OS: Red Hat Enterprise Linux Server release 7.4 (Maipo)
Michał Kępień

https://gitlab.isc.org/isc-projects/bind9/-/issues/185
[CVE-2018-5737] serve-stale crash (2021-03-31, Tony Finch)

One of my recursive servers crashed messily this evening, logging more than a million lines of
```
27-Mar-2018 18:20:35.862 general: info: 105.91.84.115.in-addr.arpa resolver failure, stale answer used
[snip 100MB logs]
27-Mar-2018 18:24:03.414 general: info: 105.91.84.115.in-addr.arpa resolver failure, stale answer used
27-Mar-2018 18:24:03.414 general: critical: rbtdb.c:2115: INSIST(!((void *)((node)->deadlink.prev) != (void *)(-1))) failed
27-Mar-2018 18:24:03.414 general: critical: exiting (due to assertion failure)
```
Earlier today I turned on serve-stale in production, so it did not last long before crashing!
I'm afraid I don't have the start of the logspam because I only keep 100MB of logs, but the obvious query will reproduce the problem.
Tangentially related, I think the serve-stale logging needs work: it's very noisy, so it should be in its own category, and perhaps some of the messages should be logged at debug rather than informational level...
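As a sketch of what a dedicated category might look like, using the same logging syntax as the configuration below. Note the category name `serve-stale` is hypothetical here; no such category existed in BIND 9.12, so this fragment would not parse against that release.

```
logging {
	channel "stale-log" {
		file "../log/serve-stale.log" versions 5 size 10485760;
		severity debug 1;	// demote the per-query noise below info
		print-time yes;
		print-category yes;
	};
	// hypothetical category name -- not available in 9.12
	category "serve-stale" {
		"stale-log";
	};
};
```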
Full configuration below. It's possibly of note that I have two views with a shared cache using attach-cache.
```
acl "blackhole" {
240.0.0.0/4;
};
acl "secure" {
"localhost";
131.111.56.56/32;
131.111.57.57/32;
2001:630:212:110::d:7a7/128;
2001:630:212:110:221:9bff:fe16:a526/128;
2001:630:212:110:646f:7461:742e:6174/128;
131.111.9.53/32;
131.111.9.73/32;
2001:630:212:8::d:aa/128;
2001:630:212:8::d:aaaa/128;
};
acl "loopback" {
127.0.0.0/8;
::1/128;
};
acl "cudn" {
127.0.0.0/8;
::1/128;
2001:630:210::/44;
2a00:1098:5::/48;
128.232.0.0/16;
129.169.0.0/16;
131.111.0.0/16;
192.18.195.0/24;
192.84.5.0/24;
192.153.213.0/24;
193.60.80.0/20;
193.63.252.0/23;
!172.31.0.0/16;
172.16.0.0/12;
10.128.0.0/9;
};
acl "isc" {
"ipreg";
key "university_of_cambridge-a1ec5f18.sns-pba.isc.org";
};
acl "secondaries" {
"cudn";
"isc";
key "tsig-cam-maths";
key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
194.81.227.226/32;
2001:630:0:44::e2/128;
193.63.105.17/32;
2001:630:0:45::11/128;
193.63.106.103/32;
2001:630:0:46::67/128;
193.62.157.66/32;
2001:630:0:47::42/128;
93.93.130.49/32;
69.56.173.190/32;
2600:3c00::f03c:91ff:fe96:beac/128;
93.93.128.67/32;
2a00:1098:0:80:1000::10/128;
185.24.221.32/32;
2a02:2770:11:0:21a:4aff:febe:759b/128;
};
acl "ipreg" {
key "tsig-ipreg";
"secure";
};
controls {
inet 0.0.0.0 port 953 allow {
"secure";
};
inet :: port 953 allow {
"secure";
};
};
logging {
channel "log" {
file "../log/named.log" versions 10 size 10485760;
severity dynamic;
print-time yes;
print-severity yes;
print-category yes;
};
category "default" {
"log";
};
category "cname" {
"default_debug";
};
category "dnssec" {
"default_debug";
};
category "lame-servers" {
"default_debug";
};
category "query-errors" {
"default_debug";
};
category "resolver" {
"default_debug";
};
category "security" {
"default_debug";
};
category "update-security" {
"default_debug";
};
};
masters "notify-isc" {
149.20.67.14 key "university_of_cambridge-a1ec5f18.sns-pba.isc.org";
199.6.0.100 key "university_of_cambridge-a1ec5f18.sns-pba.isc.org";
};
masters "notify-auth" {
2001:630:212:8::d:a0 key "tsig-ipreg";
2001:630:212:12::d:a1 key "tsig-ipreg";
2001:630:212:8::d:a2 key "tsig-ipreg";
2001:630:212:12::d:a3 key "tsig-ipreg";
};
masters "notify-rec" {
"notify-auth";
2001:630:212:8::d:92 key "tsig-ipreg";
2001:630:212:8::d:93 key "tsig-ipreg";
2001:630:212:8::d:94 key "tsig-ipreg";
2001:630:212:8::d:95 key "tsig-ipreg";
};
masters "master-ipreg" {
2001:630:212:8::d:aa key "tsig-ipreg";
};
masters "master-fanf" {
2001:630:212:110::d:7a7 key "tsig-fanf";
2001:630:212:110:646f:7461:742e:6174 key "tsig-fanf";
};
masters "master-cl" {
2001:630:212:200::d:a0;
128.232.0.19;
2001:630:212:200::d:a1;
128.232.0.18;
};
masters "master-eng" {
129.169.8.8;
129.169.8.9;
};
masters "master-maths" {
131.111.16.129;
131.111.16.30;
131.111.16.32;
};
masters "master-janet-rpz" {
2001:630:1:128::166;
194.82.174.166;
2001:630:1:12a::235;
194.83.56.235;
};
masters "master-imperial" {
2001:630:12:600:1::80 key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
2001:630:12:600:1::81 key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
2001:630:12:600:1::82 key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
195.97.216.196 key "cam.ac.uk.feb2016.tsig.ic.ac.uk";
};
masters "master-salford" {
146.87.136.156;
146.87.136.157;
};
masters "master-york" {
144.32.129.200;
144.32.128.230;
};
masters "master-sanger" {
193.62.203.30;
};
masters "master-chiark" {
212.13.197.229;
};
masters "master-srcf" {
131.111.179.79;
};
masters "master-exim" {
2001:630:212:8::e:f0e key "tsig-cam-exim";
131.111.8.88 key "tsig-cam-exim";
2a02:898:31::53:0 key "tsig-cam-exim";
94.142.241.91 key "tsig-cam-exim";
2604:a880:800:a1::419:1001 key "tsig-cam-exim";
159.203.114.39 key "tsig-cam-exim";
};
options {
blackhole {
"blackhole";
};
directory "/home/named/run";
recursive-clients 12345;
server-id hostname;
tcp-clients 1234;
dnssec-validation auto;
max-cache-size 17179869184;
max-stale-ttl 3600;
no-case-compress {
"any";
};
rrset-order {
order random;
};
stale-answer-enable yes;
allow-query {
"cudn";
};
notify no;
zone-statistics full;
};
statistics-channels {
inet 0.0.0.0 port 8053 allow {
"cudn";
};
inet :: port 8053 allow {
"cudn";
};
};
view "main" {
match-destinations {
!131.111.9.99/32;
!2001:630:212:8::d:2/128;
!131.111.12.99/32;
!2001:630:212:12::d:3/128;
!131.111.9.118/32;
!2001:630:212:8::d:fff2/128;
!131.111.12.118/32;
!2001:630:212:12::d:fff3/128;
"any";
};
zone "1.2.0.0.3.6.0.1.0.0.2.ip6.arpa" {
type slave;
file "../zone/1.2.0.0.3.6.0.1.0.0.2.ip6.arpa";
masters {
"master-ipreg";
};
};
zone "10.in-addr.arpa" {
type slave;
file "../zone/10.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "111.131.in-addr.arpa" {
type slave;
file "../zone/111.131.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "145.111.131.in-addr.arpa" {
type slave;
file "../zone/145.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "16.111.131.in-addr.arpa" {
type slave;
file "../zone/16.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "16.172.in-addr.arpa" {
type slave;
file "../zone/16.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "169.129.in-addr.arpa" {
type slave;
file "../zone/169.129.in-addr.arpa";
masters {
"master-eng";
};
};
zone "17.111.131.in-addr.arpa" {
type slave;
file "../zone/17.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "17.172.in-addr.arpa" {
type slave;
file "../zone/17.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "18.111.131.in-addr.arpa" {
type slave;
file "../zone/18.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "18.172.in-addr.arpa" {
type slave;
file "../zone/18.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "19.172.in-addr.arpa" {
type slave;
file "../zone/19.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "195.18.192.in-addr.arpa" {
type slave;
file "../zone/195.18.192.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "2.0.2.1.2.0.0.3.6.0.1.0.0.2.ip6.arpa" {
type slave;
file "../zone/2.0.2.1.2.0.0.3.6.0.1.0.0.2.ip6.arpa";
masters {
"master-cl";
};
};
zone "20.111.131.in-addr.arpa" {
type slave;
file "../zone/20.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "20.172.in-addr.arpa" {
type slave;
file "../zone/20.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "21.172.in-addr.arpa" {
type slave;
file "../zone/21.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "213.153.192.in-addr.arpa" {
type slave;
file "../zone/213.153.192.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "22.172.in-addr.arpa" {
type slave;
file "../zone/22.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "23.172.in-addr.arpa" {
type slave;
file "../zone/23.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "232.128.in-addr.arpa" {
type slave;
file "../zone/232.128.in-addr.arpa";
masters {
"master-cl";
};
};
zone "24.111.131.in-addr.arpa" {
type slave;
file "../zone/24.111.131.in-addr.arpa";
masters {
"master-maths";
};
};
zone "24.172.in-addr.arpa" {
type slave;
file "../zone/24.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "25.172.in-addr.arpa" {
type slave;
file "../zone/25.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "252.63.193.in-addr.arpa" {
type slave;
file "../zone/252.63.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "253.63.193.in-addr.arpa" {
type slave;
file "../zone/253.63.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "26.172.in-addr.arpa" {
type slave;
file "../zone/26.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "27.172.in-addr.arpa" {
type slave;
file "../zone/27.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "28.172.in-addr.arpa" {
type slave;
file "../zone/28.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "29.172.in-addr.arpa" {
type slave;
file "../zone/29.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "30.172.in-addr.arpa" {
type slave;
file "../zone/30.172.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "5.0.0.0.8.9.0.1.0.0.a.2.ip6.arpa" {
type slave;
file "../zone/5.0.0.0.8.9.0.1.0.0.a.2.ip6.arpa";
masters {
"master-ipreg";
};
};
zone "5.84.192.in-addr.arpa" {
type slave;
file "../zone/5.84.192.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "80.60.193.in-addr.arpa" {
type slave;
file "../zone/80.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "81.60.193.in-addr.arpa" {
type slave;
file "../zone/81.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "82.60.193.in-addr.arpa" {
type slave;
file "../zone/82.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "83.60.193.in-addr.arpa" {
type slave;
file "../zone/83.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "84.60.193.in-addr.arpa" {
type slave;
file "../zone/84.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "85.60.193.in-addr.arpa" {
type slave;
file "../zone/85.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "86.60.193.in-addr.arpa" {
type slave;
file "../zone/86.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "87.60.193.in-addr.arpa" {
type slave;
file "../zone/87.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "88.60.193.in-addr.arpa" {
type slave;
file "../zone/88.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "89.60.193.in-addr.arpa" {
type slave;
file "../zone/89.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "90.60.193.in-addr.arpa" {
type slave;
file "../zone/90.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "91.60.193.in-addr.arpa" {
type slave;
file "../zone/91.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "92.60.193.in-addr.arpa" {
type slave;
file "../zone/92.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "93.60.193.in-addr.arpa" {
type slave;
file "../zone/93.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "94.60.193.in-addr.arpa" {
type slave;
file "../zone/94.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "95.60.193.in-addr.arpa" {
type slave;
file "../zone/95.60.193.in-addr.arpa";
masters {
"master-ipreg";
};
};
zone "block.arpa.cam.ac.uk" {
type slave;
file "../zone/block.arpa.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "botnetcc.rpz.spamhaus.org" {
type slave;
file "../zone/botnetcc.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "cam.ac.uk" {
type slave;
file "../zone/cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "cl.cam.ac.uk" {
type slave;
file "../zone/cl.cam.ac.uk";
masters {
"master-cl";
};
};
zone "cst.cam.ac.uk" {
type slave;
file "../zone/cst.cam.ac.uk";
masters {
"master-cl";
};
};
zone "damtp.cam.ac.uk" {
type slave;
file "../zone/damtp.cam.ac.uk";
masters {
"master-maths";
};
};
zone "dbl.rpz.spamhaus.org" {
type slave;
file "../zone/dbl.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "dpmms.cam.ac.uk" {
type slave;
file "../zone/dpmms.cam.ac.uk";
masters {
"master-maths";
};
};
zone "drop.rpz.spamhaus.org" {
type slave;
file "../zone/drop.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "eng.cam.ac.uk" {
type slave;
file "../zone/eng.cam.ac.uk";
masters {
"master-eng";
};
};
zone "in-addr.arpa.cam.ac.uk" {
type slave;
file "../zone/in-addr.arpa.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "in-addr.arpa.private.cam.ac.uk" {
type slave;
file "../zone/in-addr.arpa.private.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "malware-aggressive.rpz.spamhaus.org" {
type slave;
file "../zone/malware-aggressive.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "malware.rpz.spamhaus.org" {
type slave;
file "../zone/malware.rpz.spamhaus.org";
masters {
"master-janet-rpz";
};
};
zone "maths.cam.ac.uk" {
type slave;
file "../zone/maths.cam.ac.uk";
masters {
"master-maths";
};
};
zone "newton.cam.ac.uk" {
type slave;
file "../zone/newton.cam.ac.uk";
masters {
"master-maths";
};
};
zone "passthru.arpa.cam.ac.uk" {
type slave;
file "../zone/passthru.arpa.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "private.cam.ac.uk" {
type slave;
file "../zone/private.cam.ac.uk";
masters {
"master-ipreg";
};
};
zone "srcf.net" {
type slave;
file "../zone/srcf.net";
masters {
"master-srcf";
};
};
zone "srcf.ucam.org" {
type slave;
file "../zone/srcf.ucam.org";
masters {
"master-srcf";
};
};
zone "statslab.cam.ac.uk" {
type slave;
file "../zone/statslab.cam.ac.uk";
masters {
"master-maths";
};
};
zone "ucam.org" {
type slave;
file "../zone/ucam.org";
masters {
"master-chiark";
};
};
response-policy {
zone "passthru.arpa.cam.ac.uk" policy passthru;
zone "block.arpa.cam.ac.uk" policy cname "block.dns.cam.ac.uk";
} break-dnssec yes max-policy-ttl 300 qname-wait-recurse no;
};
view "unfiltered" {
zone "1.2.0.0.3.6.0.1.0.0.2.ip6.arpa" {
in-view "main";
};
zone "10.in-addr.arpa" {
in-view "main";
};
zone "111.131.in-addr.arpa" {
in-view "main";
};
zone "145.111.131.in-addr.arpa" {
in-view "main";
};
zone "16.111.131.in-addr.arpa" {
in-view "main";
};
zone "16.172.in-addr.arpa" {
in-view "main";
};
zone "169.129.in-addr.arpa" {
in-view "main";
};
zone "17.111.131.in-addr.arpa" {
in-view "main";
};
zone "17.172.in-addr.arpa" {
in-view "main";
};
zone "18.111.131.in-addr.arpa" {
in-view "main";
};
zone "18.172.in-addr.arpa" {
in-view "main";
};
zone "19.172.in-addr.arpa" {
in-view "main";
};
zone "195.18.192.in-addr.arpa" {
in-view "main";
};
zone "2.0.2.1.2.0.0.3.6.0.1.0.0.2.ip6.arpa" {
in-view "main";
};
zone "20.111.131.in-addr.arpa" {
in-view "main";
};
zone "20.172.in-addr.arpa" {
in-view "main";
};
zone "21.172.in-addr.arpa" {
in-view "main";
};
zone "213.153.192.in-addr.arpa" {
in-view "main";
};
zone "22.172.in-addr.arpa" {
in-view "main";
};
zone "23.172.in-addr.arpa" {
in-view "main";
};
zone "232.128.in-addr.arpa" {
in-view "main";
};
zone "24.111.131.in-addr.arpa" {
in-view "main";
};
zone "24.172.in-addr.arpa" {
in-view "main";
};
zone "25.172.in-addr.arpa" {
in-view "main";
};
zone "252.63.193.in-addr.arpa" {
in-view "main";
};
zone "253.63.193.in-addr.arpa" {
in-view "main";
};
zone "26.172.in-addr.arpa" {
in-view "main";
};
zone "27.172.in-addr.arpa" {
in-view "main";
};
zone "28.172.in-addr.arpa" {
in-view "main";
};
zone "29.172.in-addr.arpa" {
in-view "main";
};
zone "30.172.in-addr.arpa" {
in-view "main";
};
zone "5.0.0.0.8.9.0.1.0.0.a.2.ip6.arpa" {
in-view "main";
};
zone "5.84.192.in-addr.arpa" {
in-view "main";
};
zone "80.60.193.in-addr.arpa" {
in-view "main";
};
zone "81.60.193.in-addr.arpa" {
in-view "main";
};
zone "82.60.193.in-addr.arpa" {
in-view "main";
};
zone "83.60.193.in-addr.arpa" {
in-view "main";
};
zone "84.60.193.in-addr.arpa" {
in-view "main";
};
zone "85.60.193.in-addr.arpa" {
in-view "main";
};
zone "86.60.193.in-addr.arpa" {
in-view "main";
};
zone "87.60.193.in-addr.arpa" {
in-view "main";
};
zone "88.60.193.in-addr.arpa" {
in-view "main";
};
zone "89.60.193.in-addr.arpa" {
in-view "main";
};
zone "90.60.193.in-addr.arpa" {
in-view "main";
};
zone "91.60.193.in-addr.arpa" {
in-view "main";
};
zone "92.60.193.in-addr.arpa" {
in-view "main";
};
zone "93.60.193.in-addr.arpa" {
in-view "main";
};
zone "94.60.193.in-addr.arpa" {
in-view "main";
};
zone "95.60.193.in-addr.arpa" {
in-view "main";
};
zone "block.arpa.cam.ac.uk" {
in-view "main";
};
zone "botnetcc.rpz.spamhaus.org" {
in-view "main";
};
zone "cam.ac.uk" {
in-view "main";
};
zone "cl.cam.ac.uk" {
in-view "main";
};
zone "cst.cam.ac.uk" {
in-view "main";
};
zone "damtp.cam.ac.uk" {
in-view "main";
};
zone "dbl.rpz.spamhaus.org" {
in-view "main";
};
zone "dpmms.cam.ac.uk" {
in-view "main";
};
zone "drop.rpz.spamhaus.org" {
in-view "main";
};
zone "eng.cam.ac.uk" {
in-view "main";
};
zone "in-addr.arpa.cam.ac.uk" {
in-view "main";
};
zone "in-addr.arpa.private.cam.ac.uk" {
in-view "main";
};
zone "malware-aggressive.rpz.spamhaus.org" {
in-view "main";
};
zone "malware.rpz.spamhaus.org" {
in-view "main";
};
zone "maths.cam.ac.uk" {
in-view "main";
};
zone "newton.cam.ac.uk" {
in-view "main";
};
zone "passthru.arpa.cam.ac.uk" {
in-view "main";
};
zone "private.cam.ac.uk" {
in-view "main";
};
zone "srcf.net" {
in-view "main";
};
zone "srcf.ucam.org" {
in-view "main";
};
zone "statslab.cam.ac.uk" {
in-view "main";
};
zone "ucam.org" {
in-view "main";
};
attach-cache "main";
};
key "cam.ac.uk.feb2016.tsig.ic.ac.uk" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
key "tsig-ipreg" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
key "university_of_cambridge-a1ec5f18.sns-pba.isc.org" {
algorithm "hmac-sha512";
secret "????????????????????????????????????????????????????????????????????????????????????????";
};
key "tsig-cam-maths" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
key "tsig-cam-exim" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
key "tsig-fanf" {
algorithm "hmac-sha256";
secret "????????????????????????????????????????????";
};
server 157.83.102.245/32 {
send-cookie no;
};
server 157.83.102.246/32 {
send-cookie no;
};
server 157.83.126.245/32 {
send-cookie no;
};
server 157.83.126.246/32 {
send-cookie no;
};
server 43.242.49.158/32 {
send-cookie no;
};
server 113.209.232.218/32 {
send-cookie no;
};
server 63.150.72.5/32 {
send-cookie no;
};
server 2001:428::7/128 {
send-cookie no;
};
server 208.44.130.121/32 {
send-cookie no;
};
server 2001:428::8/128 {
send-cookie no;
};
server 172.16.3.0/24 {
bogus no;
};
server 0.0.0.0/8 {
bogus yes;
};
server 10.0.0.0/8 {
bogus yes;
};
server 100.64.0.0/10 {
bogus yes;
};
server 127.0.0.0/8 {
bogus yes;
};
server 169.254.0.0/16 {
bogus yes;
};
server 172.16.0.0/12 {
bogus yes;
};
server 192.0.0.0/24 {
bogus yes;
};
server 192.0.2.0/24 {
bogus yes;
};
server 192.88.99.0/24 {
bogus yes;
};
server 192.168.0.0/16 {
bogus yes;
};
server 198.18.0.0/15 {
bogus yes;
};
server 198.51.100.0/24 {
bogus yes;
};
server 203.0.113.0/24 {
bogus yes;
};
server 224.0.0.0/3 {
bogus yes;
};
server ::/3 {
bogus yes;
};
server 2001::/32 {
bogus yes;
};
server 2001:2::/48 {
bogus yes;
};
server 2001:10::/28 {
bogus yes;
};
server 2001:db8::/32 {
bogus yes;
};
server 2002::/16 {
bogus yes;
};
server 3000::/4 {
bogus yes;
};
server 4000::/2 {
bogus yes;
};
server 8000::/1 {
bogus yes;
};
```

https://gitlab.isc.org/isc-projects/bind9/-/issues/134
Crash in BIND 9.12.0-RedHat-9.12.0-1.el7.0 (2019-04-26, Jakob Dhondt)

I'd like to report a crash on a production host (ns2.switch.ch).
All the necessary info can be found here: [REDACTED]
Thanks,
Jakob

Michał Kępień