# BIND issues - https://gitlab.isc.org/isc-projects/bind9/-/issues

## #3858: Deprecate (or improve/replace) the fetches-per-zone option
*Ondřej Surý, 2023-12-19*

The `fetches-per-zone` option is a measure to prevent abuse of the nameservers.
### How do we pick a bucket?
When a fetch (`fctx`) is created, `fctx->domain` is initialized with a domain name that can come from one of the following sources:
#### An argument passed by the caller
`domain` passed by the caller - from `dns_adb`/`fetch_name` when `start_at_name` is set and from `ns_query`/`ns_query_recurse()`
No example here, we can (sort of) ignore this case.
#### In forward-only mode
The bucket is `.` when we are in **forward-only** mode - there is only a single counter!
The following trace is the same with QNAME minimization on and off:
```
increasing counter for '.' in the '0x7fed97e3e000/www.google.com/A' to 1 (allowed 1 spilled 0)
increasing counter for '.' in the '0x7fed97a26800/com/DS' to 2 (allowed 2 spilled 0)
increasing counter for '.' in the '0x7fed97a25400/google.com/DS' to 3 (allowed 3 spilled 0)
decreasing counter for '.' in the '0x7fed97a26800/com/DS' to 2 (allowed 3 spilled 0)
increasing counter for '.' in the '0x7fed97226800/com/DNSKEY' to 3 (allowed 4 spilled 0)
decreasing counter for '.' in the '0x7fed97226800/com/DNSKEY' to 2 (allowed 4 spilled 0)
decreasing counter for '.' in the '0x7fed97a25400/google.com/DS' to 1 (allowed 4 spilled 0)
dropping counter for '.' in the '0x7fed97e3e000/www.google.com/A' to 0 (allowed 4 spilled 0)
```
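For reference, a minimal configuration that puts the resolver into forward-only mode, so that every fetch lands in the single `.` bucket, might look like this (the forwarder address is illustrative):

```
options {
	forwarders { 192.0.2.1; };
	forward only;
};
```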
#### Everything else
Whatever `dns_view_findzonecut()` returns. This includes **forward-first** configurations.
Example with QNAME minimization:
```
increasing counter for '.' in the '0x7f4b9983e000/www.google.com/A' to 1 (allowed 1 spilled 0)
increasing counter for '.' in the '0x7f4b9b81a000/_.com/A' to 2 (allowed 2 spilled 0)
decreasing counter for '.' in the '0x7f4b9b81a000/_.com/A' to 1 (allowed 2 spilled 0)
increasing counter for 'com' in the '0x7f4b9b81a000/_.com/A' to 1 (allowed 1 spilled 0)
dropping counter for 'com' in the '0x7f4b9b81a000/_.com/A' to 0 (allowed 1 spilled 0)
dropping counter for '.' in the '0x7f4b9983e000/www.google.com/A' to 0 (allowed 2 spilled 0)
increasing counter for 'com' in the '0x7f4b9983e000/www.google.com/A' to 1 (allowed 1 spilled 0)
increasing counter for 'com' in the '0x7f4b9b81a000/_.google.com/A' to 2 (allowed 2 spilled 0)
decreasing counter for 'com' in the '0x7f4b9b81a000/_.google.com/A' to 1 (allowed 2 spilled 0)
increasing counter for 'google.com' in the '0x7f4b9b81a000/_.google.com/A' to 1 (allowed 1 spilled 0)
dropping counter for 'google.com' in the '0x7f4b9b81a000/_.google.com/A' to 0 (allowed 1 spilled 0)
dropping counter for 'com' in the '0x7f4b9983e000/www.google.com/A' to 0 (allowed 2 spilled 0)
increasing counter for 'google.com' in the '0x7f4b9983e000/www.google.com/A' to 1 (allowed 1 spilled 0)
increasing counter for 'com' in the '0x7f4b9b81c800/google.com/DS' to 1 (allowed 1 spilled 0)
increasing counter for 'com' in the '0x7f4b99027800/com/DNSKEY' to 2 (allowed 2 spilled 0)
decreasing counter for 'com' in the '0x7f4b99027800/com/DNSKEY' to 1 (allowed 2 spilled 0)
dropping counter for 'com' in the '0x7f4b9b81c800/google.com/DS' to 0 (allowed 2 spilled 0)
dropping counter for 'google.com' in the '0x7f4b9983e000/www.google.com/A' to 0 (allowed 1 spilled 0)
```
Example without QNAME minimization:
```
increasing counter for '.' in the '0x7fc30803e000/www.google.com/A' to 1 (allowed 1 spilled 0)
dropping counter for '.' in the '0x7fc30803e000/www.google.com/A' to 0 (allowed 1 spilled 0)
increasing counter for 'com' in the '0x7fc30803e000/www.google.com/A' to 1 (allowed 1 spilled 0)
dropping counter for 'com' in the '0x7fc30803e000/www.google.com/A' to 0 (allowed 1 spilled 0)
increasing counter for 'com' in the '0x7fc30803e000/www.google.com/A' to 1 (allowed 1 spilled 0)
dropping counter for 'com' in the '0x7fc30803e000/www.google.com/A' to 0 (allowed 1 spilled 0)
increasing counter for 'google.com' in the '0x7fc30803e000/www.google.com/A' to 1 (allowed 1 spilled 0)
dropping counter for 'google.com' in the '0x7fc30803e000/www.google.com/A' to 0 (allowed 1 spilled 0)
increasing counter for 'google.com' in the '0x7fc30803e000/www.google.com/A' to 1 (allowed 1 spilled 0)
increasing counter for 'com' in the '0x7fc307c28c00/google.com/DS' to 1 (allowed 1 spilled 0)
increasing counter for 'com' in the '0x7fc307c27800/com/DNSKEY' to 2 (allowed 2 spilled 0)
decreasing counter for 'com' in the '0x7fc307c27800/com/DNSKEY' to 1 (allowed 2 spilled 0)
dropping counter for 'com' in the '0x7fc307c28c00/google.com/DS' to 0 (allowed 2 spilled 0)
dropping counter for 'google.com' in the '0x7fc30803e000/www.google.com/A' to 0 (allowed 1 spilled 0)
```
NOTE: `fetches-per-server` has a similar effect here, but `fetches-per-server` is more fine-grained.

Milestone: BIND 9.21.x

## #3779: catalog zone grammar does not enforce default-primaries key / should we support primary zones in catalog?
*Petr Špaček (pspacek@isc.org), 2023-03-16*

### Summary
Catalog zones grammar in named.conf does not enforce/require `default-primaries` key. This can be either bug, or an opportunity to extend the feature in an meaningful way.
### BIND version used
* ~"Affects v9.16": d14a22b3d9fa8e8bb21dfe3bb0bca216a5b93910
* ~"Affects v9.18": f5e7192691568d4c089fbdd4ed4e93c7af785bae
* ~"Affects v9.19": 0e489b9ed4ba7821c50038dade014bf2b706bd12
### Steps to reproduce
1. Define a catalog zone **without** the `default-primaries` key, e.g.
```
catalog-zones {
zone "catalog.invalid"
//default-masters { 127.0.0.2; }
in-memory no
zone-directory "catzones"
min-update-interval 1;
};
```
2a. Start **with** matching files on disk
2b. Start **without** matching files on disk
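For comparison, a hypothetical variant of the same entry that does include the key (reusing the illustrative address from the commented-out `default-masters` line) would look like:

```
catalog-zones {
	zone "catalog.invalid"
		default-primaries { 127.0.0.2; }
		in-memory no
		zone-directory "catzones"
		min-update-interval 1;
};
```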
### What is the current *bug* behavior?
The config is accepted by the parser but causes surprising behavior later on.
Variant 2A:
The zone is on disk under the correct name, and it loads just fine when the file is available in the `catzones` directory. `rndc zonestatus` then reports:
```
name: .
type: secondary
files: catzones/__catz___default_catalog.invalid_..db
serial: 2023010600
nodes: 8438
last loaded: Fri, 06 Jan 2023 16:03:08 GMT
next refresh: Fri, 06 Jan 2023 16:12:19 GMT
expires: Fri, 13 Jan 2023 16:03:08 GMT
secure: yes
inline signing: no
key maintenance: none
dynamic: no
reconfigurable via modzone: yes
```
The next time the refresh timer hits, it errors out with
```
zone ./IN: cannot refresh: no primaries
```
but it continues serving the zone until it expires. This sort of works, but not really: the zone can never refresh and is bound to expire eventually.
Variant 2B:
File is not on disk. It fails to load as expected, and logs
```
zone ./IN: cannot refresh: no primaries
```
immediately.
### Possible fixes
I can see two options:
a) Require the `default-primaries` key and error out if it is not present. That would be the same as for regular secondary zones, I believe.
b) Make this behavior "supported", probably by switching zone type to "primary" in case there is no `default-primaries` defined for the respective catalog. (In that case `in-memory` must be configured as `no`.)
Personally I think it makes sense to do b), because it eliminates the need for two different per-zone config-management procedures for primaries.
With the "strict" variant, adding a new primary zone always requires `rndc addzone` plus a catalog zone modification on the primary side.
With the less strict variant, `rndc addzone` is not necessary and the whole state lives in the catalog zone, which has to be maintained for the secondaries anyway.

Milestone: Not planned

## #3776: update-policy wildcard match limitations
*Timothe Litt, 2023-01-09*

[The `wildcard` match documentation](https://bind9.readthedocs.io/en/v9_18_10/reference.html#dynamic-update-policies) reads:
> The name field is subject to **DNS wildcard expansion**, and this rule matches when the name being updated is a valid expansion of the wildcard.
It would be useful to have a more flexible wildcard match for the update-policy's grant/deny wildcard names.
An example that would help many of us (who use Let's Encrypt) is
`_acme-challenge.*.example.net`, as in
```
grant "CERTIFICATE_ISSUE_BOT" name _acme-challenge.*.example.net. TXT ;
```
Since DNS wildcards only work for the leftmost label, this can't be expressed with the current syntax.
As a result, when a server is added, not only must the A/AAAA records be added (which can be done with UPDATE), but a `grant` clause must be added to the configuration (which can not).
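To illustrate, with today's syntax each host needs its own `grant` line in `named.conf` (the hostnames and key name here are illustrative), and every new host means another config edit:

```
update-policy {
	grant "CERTIFICATE_ISSUE_BOT" name _acme-challenge.www.example.net. TXT;
	grant "CERTIFICATE_ISSUE_BOT" name _acme-challenge.mail.example.net. TXT;
	// every new server needs another line here
};
```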
The alternative is to allow the bot to handle all TXT records in the domain, but I'm pretty sure I don't want a bot to be able to mess up SPF, Google console, and other TXT records...
There are other cases where a generic glob match would be helpful, but most of them can be worked-around by suitable naming and/or introducing a subdomain. Unfortunately, that's not the case for ACME, which requires this structure for the records it uses for `dns-01` validation.
This is NOT asking for changes to how queries are resolved. That ship sailed (to where there be dragons) long ago. This is just about how `update-policy` clauses are matched; `update-policy` is internal to BIND, and the suggested change would be upward-compatible.

Milestone: Not planned

## #3758: Revisit BIND features to prevent the scenario where a second BIND instance is running accidentally
*Cathy Almond, 2023-01-02*

As discussed with Engineering, but not relating to a specific customer issue, although the Support team regularly encounters sites with unintentional multiple running instances of named, so I'm tagging this one as 'Customer'.
With newer BIND using SO_REUSEPORT (`reuseport yes;`) there is no longer anything to stop multiple instances of running named from listening on the same sockets - the kernel will distribute incoming queries to the listening threads/processes per however the kernel and NICs implement and support rx-flow-hash.
This could be seen as a 'feature' by some. But for others, it allows accidental launching of multiple instances of named, and then much confusion and pain troubleshooting ensuing problems, if, for example, the intent was to restart named with a different version or different configuration. Having different instances fielding different queries could produce different outcomes!
This new behaviour (mostly) negates this change, introduced in BIND 9.11.0:
```
4022.	[func]		Stop multiple spawns of named by limiting number of
			processes to 1. This is done by using a lockfile and
			checking whether we can listen on any configured
			TCP interfaces. [RT #37908]
```
Of significance is that, with the introduction of reuseport, the TCP listen check will no longer work, and the lock-file is not enabled by default.
```
lock-file
This is the pathname of a file on which named attempts to acquire a file lock when starting for the first time; if
unsuccessful, the server terminates, under the assumption that another server is already running. If not specified,
the default is none.
Specifying lock-file none disables the use of a lock file. lock-file is ignored if named was run using the
-X option, which overrides it. Changes to lock-file are ignored if named is being reloaded or reconfigured;
it is only effective when the server is first started.
```
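Until the default changes, operators who want the old protection back can opt in explicitly; a minimal sketch (the path shown is the documented default lock-file location, but any writable path works):

```
options {
	lock-file "/var/run/named/named.lock";
};
```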
This is distinct from pid-file, whose purpose is to identify the pid of the running named instance, so that signals can be sent to it:
```
pid-file
This is the pathname of the file the server writes its process ID in. If not specified, the default is /var/run/
named/named.pid. The PID file is used by programs that send signals to the running name server. Specifying
pid-file none disables the use of a PID file; no file is written and any existing one is removed. Note that
none is a keyword, not a filename, and therefore is not enclosed in double quotes.
```
---
What to do? I'm not sure - I think this is a 'gotcha' rather than a bug at this point, but it's so subtle that it has the potential to derail the unwary who aren't aware that it could happen. Should we perhaps change the default for the existence of the lock-file?

Milestone: Not planned

## #3757: named-checkconf -px inserts extra blanks in output
*Philip Prindeville, 2022-12-31*

<!--
If the bug you are reporting is potentially security-related - for example,
if it involves an assertion failure or other crash in `named` that can be
triggered repeatedly - then please do *NOT* report it here, but send an
email to [security-officer@isc.org](security-officer@isc.org).
-->
### Summary
Certain multiline commands like `listen-on-v6` generate two spaces instead of one before the leading left curly bracket.
### BIND version used
```
BIND 9.18.10 (Stable Release) <id:aa8ab10>
running on Linux x86_64 5.15.85 #0 SMP Mon Dec 26 23:46:44 2022
built by make with '--target=x86_64-openwrt-linux' '--host=x86_64-openwrt-linux' '--build=x86_64-pc-linux-gnu' '--program-prefix=' '--program-suffix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib' '--sysconfdir=/etc' '--datadir=/usr/share' '--localstatedir=/var' '--mandir=/usr/man' '--infodir=/usr/info' '--with-openssl=/home/philipp/lede/staging_dir/target-x86_64_musl/usr' '--without-lmdb' '--enable-epoll' '--without-gssapi' '--without-readline' '--sysconfdir=/etc/bind' '--with-json-c=no' '--with-libxml2=no' '--enable-doh' 'build_alias=x86_64-pc-linux-gnu' 'host_alias=x86_64-openwrt-linux' 'target_alias=x86_64-openwrt-linux' 'CC=x86_64-openwrt-linux-musl-gcc' 'CFLAGS=-Os -pipe -fno-caller-saves -fno-plt -fhonour-copts -fmacro-prefix-map=/home/philipp/lede/build_dir/target-x86_64_musl/bind-9.18.10=bind-9.18.10 -Wformat -Werror=format-security -fstack-protector -D_FORTIFY_SOURCE=1 -Wl,-z,now -Wl,-z,relro ' 'LDFLAGS=-L/home/philipp/lede/staging_dir/toolchain-x86_64_gcc-11.3.0_musl/usr/lib -L/home/philipp/lede/staging_dir/toolchain-x86_64_gcc-11.3.0_musl/lib -znow -zrelro -Wl,--gc-sections,--as-needed ' 'CPPFLAGS=-I/home/philipp/lede/staging_dir/toolchain-x86_64_gcc-11.3.0_musl/usr/include -I/home/philipp/lede/staging_dir/toolchain-x86_64_gcc-11.3.0_musl/include/fortify -I/home/philipp/lede/staging_dir/toolchain-x86_64_gcc-11.3.0_musl/include ' 'PKG_CONFIG=/home/philipp/lede/staging_dir/host/bin/pkg-config' 'PKG_CONFIG_PATH=/home/philipp/lede/staging_dir/target-x86_64_musl/usr/lib/pkgconfig:/home/philipp/lede/staging_dir/target-x86_64_musl/usr/share/pkgconfig' 'PKG_CONFIG_LIBDIR=/home/philipp/lede/staging_dir/target-x86_64_musl/usr/lib/pkgconfig:/home/philipp/lede/staging_dir/target-x86_64_musl/usr/share/pkgconfig'
compiled by GCC 11.3.0
compiled with OpenSSL version: OpenSSL 1.1.1s 1 Nov 2022
linked to OpenSSL version: OpenSSL 1.1.1s 1 Nov 2022
compiled with libuv version: 1.44.1
linked to libuv version: 1.44.1
compiled with libnghttp2 version: 1.44.0
linked to libnghttp2 version: 1.44.0
compiled with zlib version: 1.2.13
linked to zlib version: 1.2.13
threads support is enabled
DNSSEC algorithms: RSASHA1 NSEC3RSASHA1 RSASHA256 RSASHA512 ECDSAP256SHA256 ECDSAP384SHA384 ED25519 ED448
DS algorithms: SHA-1 SHA-256 SHA-384
HMAC algorithms: HMAC-MD5 HMAC-SHA1 HMAC-SHA224 HMAC-SHA256 HMAC-SHA384 HMAC-SHA512
TKEY mode 2 support (Diffie-Hellman): yes
TKEY mode 3 support (GSS-API): no
default paths:
named configuration: /etc/bind/named.conf
rndc configuration: /etc/bind/rndc.conf
DNSSEC root key: /etc/bind/bind.keys
nsupdate session key: /var/run/named/session.key
named PID file: /var/run/named/named.pid
named lock file: /var/run/named/named.lock
```
### Steps to reproduce
Create a `named.conf` with `listen-on-v6 { none; }` in the `options { ... };` section, and load it.
Then run:
```
named-checkconf -px \
| sed -r -ne '1N; N; /^\tlisten-on-v6 +\{\n\t\t"none";\n\t\};$/{ p; q; }; D'
```
and the output will be:
```
listen-on-v6 {
"none";
};
```
Note the extraneous space on the first line.
### What is the current *bug* behavior?
Two spaces before the `{`.
### What is the expected *correct* behavior?
A single space as for all other commands.
### Relevant configuration files
```
// This is the primary configuration file for the BIND DNS server named.
options {
directory "/tmp";
// If your ISP provided one or more IP addresses for stable
// nameservers, you probably want to use them as forwarders.
// Uncomment the following block, and insert the addresses replacing
// the all-0's placeholder.
// forwarders {
// 0.0.0.0;
// };
recursion yes;
// note that all subnets are visible to each other;
// if we wished to isolate them we could use "views".
allow-query {
localhost;
192.168.6.0/24;
192.168.7.0/24;
192.168.8.0/24;
};
auth-nxdomain no; # conform to RFC1035
// added by philipp
allow-transfer { none; };
// dnssec-validation no;
// dnssec-enabled yes;
dnssec-validation auto;
listen-on-v6 { none; };
};
include "/etc/bind/named-rndc.conf";
include "/tmp/bind/named.conf.local";
// prime the server with knowledge of the root servers
zone "." {
type hint;
file "/etc/bind/db.root";
};
// be authoritative for the localhost forward and reverse zones, and for
// broadcast zones as per RFC 1912
zone "localhost" {
type master;
file "/etc/bind/db.local";
};
zone "127.in-addr.arpa" {
type master;
file "/etc/bind/db.127";
};
zone "0.in-addr.arpa" {
type master;
file "/etc/bind/db.0";
};
zone "255.in-addr.arpa" {
type master;
file "/etc/bind/db.255";
};
# added by philipp
zone "tiktok.com" {
type master;
file "/etc/bind/db.tiktok.com";
};
```
### Relevant logs and/or screenshots
```
listen-on-v6 {
"none";
};
```
### Possible fixes
Unknown.

Milestone: Not planned

## #3698: rndc can get very slow if a large number of requests is made over a short period of time
*Petr Špaček (pspacek@isc.org), 2023-01-12*

### Summary
rndc can get very slow if a large number of requests is made over a short period of time. The primary cause seems to be that packet deduplication is O(n^2).
### BIND version used
- ~"Affects v9.19" : e2bbf38cdb42de70d504b2ff281fb360cd0f27c0
- I assume that all supported versions are affected
### Steps to reproduce
TL;DR call `rndc addzone` in a tight loop, and measure response time.
Helper scripts:
* [addconfprim.py](/uploads/129fa9ac79cd590c9172e6d61f28a59f/addconfprim.py) - run this
* [rndc.py](/uploads/23c2eba46244ac7decbe865d84a92052/rndc.py)
### What is the current *bug* behavior?
Adding zones slows down very quickly.
```
33000 zones present; adding last 1000 took 1.93 secs
...
65000 zones present; adding last 1000 took 4.42 secs
```
### What is the expected *correct* behavior?
No speed degradation.
### Relevant configuration files
```
key "key" {
algorithm hmac-sha256;
secret "ptCZS/77Xm2sIzCdO/oxEoer2BbDgCfvF0CrqrcdRWM=";
};
options {
max-cache-size 10M;
recursion no;
notify no;
allow-new-zones yes;
lmdb-mapsize 110M;
};
```
* [empty.db](/uploads/da7366d7d37edc16d43b7280f7cbaf6f/empty.db)
### Possible fixes
From a quick glance, the problem centers around `DUP_LIFETIME`, defined in `lib/isccc/cc.c`, and the inefficiency of `isccc_cc_cleansymtab()` and its use.

## #3695: Improvement: Including query time in dnstap CLIENT_RESPONSE messages
*Borja Marcos EA2EKH, 2023-01-11*

### Description
While the dnstap specification recommends including the query time for AUTH_RESPONSE, RESOLVER_RESPONSE and
CLIENT_RESPONSE dnstap messages, the latter is excluded.
Having the query time in CLIENT_RESPONSE dnstap messages would be very useful when using dnstap to keep track
of response times.
### Request
In `lib/dns/dnstap.c` (both for 9.16 and 9.18) the `dns_dt_send()` function accepts the `qtime` and `rtime` parameters.
However, when building the dnstap message, CLIENT_RESPONSE messages are prevented from using the `qtime` parameter:
```c
		dm.m.has_response_time_sec = 1;
		dm.m.response_time_nsec = isc_time_nanoseconds(t);
		dm.m.has_response_time_nsec = 1;
		/*
		 * Types CR, RR, and FR can fall through and get the query
		 * time set as well. Any other response type, break.
		 */
		if (msgtype != DNS_DTTYPE_RR && msgtype != DNS_DTTYPE_FR &&
		    msgtype != DNS_DTTYPE_CR) { // << I HAVE ADDED THIS!
			break;
		}
		FALLTHROUGH;
	case DNS_DTTYPE_AQ:
	case DNS_DTTYPE_CQ:
	case DNS_DTTYPE_FQ:
	case DNS_DTTYPE_RQ:
	case DNS_DTTYPE_SQ:
	case DNS_DTTYPE_TQ:
	case DNS_DTTYPE_UQ:
		if (qtime != NULL) {
			t = qtime;
		}
		dm.m.query_time_sec = isc_time_seconds(t);
		dm.m.has_query_time_sec = 1;
		dm.m.query_time_nsec = isc_time_nanoseconds(t);
		dm.m.has_query_time_nsec = 1;
		break;
```
I have tried making the simple change shown above (so that qtime is considered for
CLIENT_RESPONSE messages as well) and it works both for 9.16.35 and 9.18.9.
The change looks safe enough: it won't crash, because if qtime is NULL then t will contain a
timestamp obtained when dns_dt_send() is invoked; at worst the message would contain a false
qtime.
A more correct alternative would be to include it for CLIENT_RESPONSE messages only if qtime != NULL, but
I don't know whether qtime can be missing, or whether all calls to dns_dt_send() will contain it.
Also, is it possible for qtime to be missing for a CLIENT_RESPONSE but not for a RESOLVER_RESPONSE? For a RESOLVER_RESPONSE that would mean the query time in the dnstap
message would contain the timestamp obtained in dns_dt_send(), which, being probably
greater than the response time itself, would botch a time-difference calculation.
### Links / references

Milestone: Not planned

## #3691: stats channels and `rndc dumpstats` do not expose all counters from `rndc status`
*Petr Špaček (pspacek@isc.org), 2022-11-29*

### Summary
The JSON and XML stats channels, and the `rndc dumpstats` command, do not expose the counters from `rndc status`. This forces users to scrape both channels to get a complete picture.
### BIND version used
9.19.8-dev (Development Release) 9128e54, but it certainly dates a long way back.
### What is the current *bug* behavior?
Compare lines produced by `rndc status` with content of JSON stats channel:
| rndc status line | evaluation | JSON key |
|-------------------------------------------------------------|-------------------|-------------|
| version: BIND 9.19.8-dev (Development Release) <id:9128e54> | different format, just `9.19.8-dev` | version |
| running on p: Linux x86_64 6.0.8-arch1-1 #1 SMP … | missing | |
| boot time: Tue, 22 Nov 2022 08:43:49 GMT | different format | boot-time |
| last configured: Tue, 22 Nov 2022 09:34:20 GMT | different format | config-time |
| configuration file: /etc/named.conf | missing | |
| CPUs found: 8 | missing | |
| worker threads: 8 | missing | |
| UDP listeners per interface: 8 | missing | |
| number of zones: 103 (98 automatic) | missing | |
| debug level: 0 | missing | |
| xfers running: 0 | missing | |
| xfers deferred: 0 | missing | |
| soa queries in progress: 0 | missing | |
| query logging is OFF | missing | |
| recursive clients: 0/900/1000 | missing | |
| tcp clients: 0/150 | missing | |
| TCP high-water: 0 | missing | |
| server is up and running | missing | |
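For reference, the JSON side of this comparison comes from a statistics channel configured roughly along these lines (address and port illustrative); the JSON document is then served under the `/json/v1` path:

```
statistics-channels {
	inet 127.0.0.1 port 8080 allow { 127.0.0.1; };
};
```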
### What is the expected *correct* behavior?
All information from `rndc status` is also exposed in the other stats channels.

Milestone: Not planned

## #3688: rndc status "soa queries in progress" counter also includes AXFR in progress
*Petr Špaček (pspacek@isc.org), 2023-11-02*

### Summary
The rndc status "soa queries in progress" counter also includes AXFRs in progress, even transfers which made a SOA query and received a valid answer with SOA before initiating the transfer itself.
### BIND version used
- ~"Affects v9.19": 8272cc2
- ~"Affects v9.18": v9_18_9
- ~"Affects v9.16": v9_16_35
- ~"Affects v9.11 (EoL)": v9_11_37
### Steps to reproduce
1. Use the following config to transfer the (public) se. zone:
```
zone se {
type secondary;
primaries { 45.155.96.61; };
notify no;
};
```
2. Run `tcpdump` and watch SOA queries go by:
```
sudo tcpdump -i any 'udp and host 45.155.96.61'
```
3. Run BIND:
```
named -g -c secondary.conf
```
4. Observe output from `rndc status` before the transfer finishes.
### What is the current *bug* behavior?
tcpdump shows:
```
18:43:53.463606 enp0s13f0u1u2u3 Out IP p.50306 > zonedata.iis.se.domain: 21273 [1au] SOA? se. (35)
18:43:53.512928 enp0s13f0u1u2u3 In IP zonedata.iis.se.domain > p.50306: 21273*- 1/0/1 SOA (107)
```
`rndc status` at the same time shows:
```
xfers running: 1
xfers deferred: 0
soa queries in progress: 1
```
`named` log at the time:
```
21-Nov-2022 18:43:53.509 zone se/IN: Transfer started.
21-Nov-2022 18:43:53.556 transfer of 'se/IN' from 45.155.96.61#53: connected using 45.155.96.61#53
```
### What is the expected *correct* behavior?
I would expect the "soa queries in progress" counter to be 0 at this point in time.

Milestone: Not planned
Assignee: Mark Andrews

## #3669: update-policy external is synchronous and blocking without timeouts
*Petr Špaček (pspacek@isc.org), 2023-04-06*

### Summary
[update-policy](https://bind9.readthedocs.io/en/v9_19_6/reference.html#namedconf-statement-update-policy) `external` is using synchronous blocking I/O on a Unix socket.
### BIND version used
0744ebe2206fcb327ca0d33e0a722757525e30f0, but as far as I can tell all versions after 9.8.0b1 are affected. (We did not have the `external` policy before that version.)
### Steps to reproduce
I've just looked at the code - function `dns_ssu_external_match()`.
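For context, a policy of this kind is configured roughly as follows (zone name and socket path illustrative); for every matching UPDATE request, named sends a query over the Unix socket and waits for the external daemon's verdict:

```
zone "dynamic.example" {
	type primary;
	file "dynamic.example.db";
	update-policy {
		grant "local:/var/run/update-auth.sock" external * TXT;
	};
};
```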
### What is the current *bug* behavior?
Connect/write/read operations are done synchronously on a Unix socket. If the external system takes non-zero time to process the query (say, because it's doing database lookups... or because it just crashed), a named thread will be blocked while waiting for the answer.
### What is the expected *correct* behavior?
I would expect it to be asynchronous... Or that we don't have the policy :-0

Milestone: Not planned

## #3614: Resolver prefetch issue with qtype ANY
*Aram Sargsyan, 2022-11-07*

In `lib/ns/query.c:query_respond_any()`, when there are several datasets in the database node, only one of them has a chance to trigger a prefetch. When one does, the `FETCH_RECTYPE_PREFETCH(client) != NULL` check (see below) does not pass for the rest of the datasets, even if they are eligible for a prefetch, because the client's fetch reserved for the prefetch operation is already in progress:
```c
static void
query_prefetch(ns_client_t *client, dns_name_t *qname,
dns_rdataset_t *rdataset) {
CTRACE(ISC_LOG_DEBUG(3), "query_prefetch");
if (FETCH_RECTYPE_PREFETCH(client) != NULL ||
client->view->prefetch_trigger == 0U ||
rdataset->ttl > client->view->prefetch_trigger ||
(rdataset->attributes & DNS_RDATASETATTR_PREFETCH) == 0)
{
return;
}
fetch_and_forget(client, qname, rdataset->type, RECTYPE_PREFETCH);
dns_rdataset_clearprefetch(rdataset);
ns_stats_increment(client->manager->sctx->nsstats,
ns_statscounter_prefetch);
}
```
I will use the `resolver` system test's `check prefetch qtype * (${n})` check to demonstrate it. Please note that if you want to reproduce it, you'll need to use the branch in !6937 which fixes another prefetch issue (unless it is already merged).
Run the test:
```
$ ./run.sh -n resolver
...
...
I:resolver:check prefetch qtype * (32)
...
PASS: resolver
```
Check the first answer; all records start with a TTL value of 10:
```
$ cat resolver/dig.out.1.32
...
;; QUESTION SECTION:
;fetchall.tld. IN ANY
;; ANSWER SECTION:
fetchall.tld. 10 IN AAAA ::1
fetchall.tld. 10 IN A 1.2.3.4
fetchall.tld. 10 IN TXT "A" "short" "ttl"
...
```
Check the second answer (for a request after 7 seconds). This should have triggered a prefetch for all 3 records, because the TTL value 3 is smaller than the configured trigger value 4:
```
$ cat resolver/dig.out.2.32
...
;; QUESTION SECTION:
;fetchall.tld. IN ANY
;; ANSWER SECTION:
fetchall.tld. 3 IN AAAA ::1
fetchall.tld. 3 IN A 1.2.3.4
fetchall.tld. 3 IN TXT "A" "short" "ttl"
...
```
Check the third answer (for a request after 1 second):
```
$ cat resolver/dig.out.3.32
...
;; QUESTION SECTION:
;fetchall.tld. IN ANY
;; ANSWER SECTION:
fetchall.tld. 9 IN AAAA ::1
fetchall.tld. 2 IN A 1.2.3.4
fetchall.tld. 2 IN TXT "A" "short" "ttl"
...
```
As you can see, only the first record was prefetched.
Here are the logs which confirm that, where you can see that `query_prefetch()` was called three times, but a prefetch was initiated only for the first call: [fetchall.tld-any.log.gz](/uploads/30a6656ecf98a5ca6c9b342f200969b7/fetchall.tld-any.log.gz).
I think, as suggested by @fanf in MM, ANY should not trigger prefetching at all. Otherwise, all records which are eligible for prefetch should be prefetched.

Milestone: Not planned

## #3609: dual-stack-servers are not being used
*Mark Andrews, 2023-11-02*

The current code for deciding when to add a dual-stack server doesn't always cause the server to be added, as it is looking for an NXRRSET indication for the server's address (A for IPv4 and AAAA for IPv6) as well as no dispatch for that transport. When the server is within the zone we can't get the NXRRSET indication. Additionally, "no dispatch" is contingent on -4 or -6 being used on the command line.
- We need a solution to the bootstrap problem or to relax the requirement.
- We need a better solution for determining that we are effectively running single-stack, other than the -4 and -6 command-line options.
- We need to add tests for dual-stack-servers as there are currently none.

Not planned

[#3464: Histograms for timing and memory statistics](https://gitlab.isc.org/isc-projects/bind9/-/issues/3464) (Tony Finch, 2023-11-02T17:05:05Z)

BIND needs to be able to record statistics covering a wide range of possible values (several decimal orders of magnitude):
* latency times, from submillisecond queries on the same LAN to multi-minute zone transfers
* memory usage, for zones from a handful of records to tens of millions
* message sizes, from 64 bytes to 64 kilobytes
In this issue I'm outlining a possible design for a general-purpose histogram data structure
that could be added to `libisc` for collecting statistics efficiently in several places in BIND.
## existing histograms in BIND
The statistics channel has histograms for request and response sizes, which use buckets that
are defined manually with some tediously repetitive code. These could be replaced by the
proposed self-tuning histograms, although the bucketing will be somewhat different.
## examples of general-purpose histograms
It's possible to record histograms of values covering a wide range, with bucket sizes chosen automatically to provide a particular level of accuracy (e.g. 1% or 10%), and without using more than a few KiB for each histogram. Existing examples are:
* [circllhist, Circonus log-linear histogram](https://github.com/openhistogram/libcircllhist),
aka [OpenHistogram](https://openhistogram.io/)
Uses decimal floating point with two digits of mantissa and a 1 byte exponent,
to record values with 1% accuracy.
* [DDSketch from DataDog](https://www.datadoghq.com/blog/engineering/computing-accurate-percentiles-with-ddsketch/)
Uses the floating-point logarithm to a base derived from the required accuracy, rounded to an integer to make a bucket index.
Has an alternative "fast" mode more like HdrHistogram.
* [HdrHistogram, high dynamic range histogram](http://www.hdrhistogram.org/)
Uses low-precision floating point numbers as bucket indexes.
* [hg64, 64-bit histograms](https://github.com/fanf2/hg64)
My prototype implementation intended for use in BIND.
The DataDog blog article has a nice overview, and compares a quantile sketch implementation (designed for a particular rank error) with a histogram (designed for a particular value error). From my reading on this topic I concluded that histograms are easier to understand, simpler to implement, and have similar or better CPU and memory usage than rank-error-based quantile sketches.
## key idea
The histogram counts how many measurements (of time or space) have a particular `uint64_t` value
or range of values, according to the histogram's configured precision (e.g. 1% or 10%).
Each range of values corresponds to a bucket or counter.
My prototype `hg64` uses a log-linear bucket spacing, which has two parts:
* a logarithm of the value to cover a large dynamic range with a few bits;
specifically, the log base 2 of a `uint64_t` varies from 0 to 63, which fits in 6 bits.
* linear, evenly spaced buckets between logarithms, to provide more precision
than you can get from just a power of 2 or 10. 4 buckets per log are enough
for 10% precision; 32 buckets per log gives 1% precision.
This log-linear bucketing is the same thing as decimal scientific notation,
like 1e9 (1 significant digit, 10% precision) or 2.2e8 (2 significant digits, 1% precision).
It's also the same as a (low-precision) binary floating point number:
the FP exponent is the logarithmic part, and the FP mantissa is the linear part.
## measurements and values
When counting time measurements, it makes sense for the `uint64_t` value to be the time measured in nanoseconds. This allows the histogram to count any time measurements we are likely to need, from submicrosecond up to a few centuries. There is no point using lower-precision time measurements because the histogram bucketing algorithm will reduce the precision as required.
Unlike nanosecond measurements, whose values are towards the logarithmic mid-range of `uint64_t`, memory measurements tend to cluster around zero. The `hg64` bucketing algorithm provides one counter for each distinct small integer; for instance, with 1% precision `hg64` has a counter for each value from 0 to 63, above which multiple values share each counter. To make the best use of these small-value counters, it makes sense to divide a memory measurement to get the desired resolution. For example, if the allocator quantum is 16 bytes, divide an allocation size by 16 before using it as a histogram value.
## incrementing counters quickly
It is very cheap to turn a `uint64_t` value into a bucket number, using CLZ to get the logarithm
with some bit shuffling to move things into place. The basic principle is
roughly the same as used by HdrHistogram and fast-mode DDSketch.
[Paul Khuong encouraged me to use his algorithm](https://twitter.com/pkhuong/status/1571831293335277573)
which is smaller and faster than the version I developed for my proof-of-concept.
As in BIND's existing statistics code, we use a relaxed atomic increment to update a counter.
When the histogram is in cache and uncontended, the whole operation (calculating the bucket
number and incrementing the counter) takes less than 2.5ns in my prototype code.
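As a rough illustration of this bucketing (a simplified sketch with invented names, not hg64's actual code or Paul Khuong's optimized algorithm), the key can be derived from CLZ, here with 32 linear buckets per octave (`MANBITS = 5`, roughly 1% precision):

```c
#include <stdint.h>

#define MANBITS 5 /* 32 linear buckets per octave, ~1% precision */

/* Hypothetical sketch of log-linear bucketing: the exponent is the
 * position of the top set bit (found via CLZ) and the MANBITS bits
 * just below it form the linear "mantissa" part of the key. */
static unsigned
bucket_key(uint64_t value) {
	if (value < (1ULL << MANBITS)) {
		/* small values each get their own bucket */
		return ((unsigned)value);
	}
	unsigned exponent = 63 - __builtin_clzll(value); /* floor(log2) */
	unsigned mantissa = (unsigned)(value >> (exponent - MANBITS)) &
			    ((1U << MANBITS) - 1);
	return (((exponent - MANBITS + 1) << MANBITS) | mantissa);
}
```

Note how every value from 0 to 63 maps to its own key, matching the small-value behaviour described earlier, while values above 63 begin to share buckets; the key space tops out at around 11 bits.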
## efficient storage
The `hg64` bucket keys are small, e.g. 8 bits for 10% precision, or 11 bits for 1% precision.
We could store the buckets as a simple array of counters, which would use 2 KiB for 10%
precision, or 16 KiB for 1% precision. However a large fraction of that space will be
unused, because the values we are recording do not cover anywhere near 20 orders of
magnitude.
My prototype code has a 64-entry top-level array (one for each possible exponent)
and allocates each sub-array on demand (with a counter for each possible mantissa).
Most of the sub-arrays will remain unused. This layout supports lock-free multithreading.
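A minimal sketch of that two-level layout (invented names and a simplified scheme, not the actual hg64 data structures) might use an acquire load plus a compare-and-swap to publish sub-arrays lock-free:

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

#define EXPONENTS 64
#define MANTISSAS 32

typedef _Atomic uint64_t counter_t;

/* Hypothetical two-level counter store: a fixed top-level array indexed
 * by exponent, with per-exponent mantissa sub-arrays allocated on first
 * use. A compare-and-swap publishes a new sub-array so that concurrent
 * writers never lose one; the loser frees its copy and uses the winner's. */
typedef struct {
	_Atomic(counter_t *) bucket[EXPONENTS];
} hist;

static void
hist_inc(hist *h, unsigned exponent, unsigned mantissa) {
	counter_t *sub = atomic_load_explicit(&h->bucket[exponent],
					      memory_order_acquire);
	if (sub == NULL) {
		/* calloc yields zeroed counters; this assumes
		 * zero-initialized atomics are valid, which holds on
		 * common platforms */
		counter_t *fresh = calloc(MANTISSAS, sizeof(*fresh));
		if (atomic_compare_exchange_strong(&h->bucket[exponent],
						   &sub, fresh)) {
			sub = fresh;
		} else {
			free(fresh); /* another thread installed one first */
		}
	}
	/* relaxed increment, as in BIND's existing statistics code */
	atomic_fetch_add_explicit(&sub[mantissa], 1, memory_order_relaxed);
}
```

The CAS makes allocation idempotent under contention: at most one sub-array per exponent survives, and increments always land in the published one.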
## operations on histograms
* given a value, find its rank (or percentile)
* find the value at a given rank (or percentile)
* get the mean and standard deviation of the data recorded in the histogram
* merge two histograms (which may differ in precision)
* dump and load a histogram in text (e.g. csv, xml, json) and/or binary (for efficiency)
* export a histogram to a user-selected collection of buckets (e.g. for prometheus)
I have implementations of the first four.
The rank and percentile queries work on a snapshot of the working histogram, to avoid multithreading races and to make the calculations more efficient.
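The value-at-rank query can be sketched as a cumulative walk over a snapshot of the counters (a hypothetical simplification, not the hg64 API); converting a percentile to a rank is then just `rank = percentile * total / 100`:

```c
#include <stdint.h>

/* Hypothetical sketch: given a snapshot of per-bucket counts, return the
 * index of the bucket containing the measurement of the given 0-based
 * rank (in sorted order). The reported value is then whatever value
 * range that bucket key maps back to. */
static unsigned
bucket_at_rank(const uint64_t *counts, unsigned nbuckets, uint64_t rank) {
	uint64_t cumulative = 0;
	for (unsigned i = 0; i < nbuckets; i++) {
		cumulative += counts[i];
		if (rank < cumulative) {
			return (i);
		}
	}
	return (nbuckets - 1); /* rank past the total: clamp to last bucket */
}
```

Working on a snapshot means the cumulative sums are computed once, without racing against concurrent relaxed increments.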
## exporting data
An important consumer for data recorded in histograms is Prometheus.
The docs <https://prometheus.io/docs/practices/histograms/> say it supports
* a "histogram" type (actually a cumulative frequency digest) where quantiles are calculated on the server
* a "summary" type, where quantiles are calculated on the client and the server aggregates them over a sliding window
Prometheus has its own textual format for exposing / ingesting data,
<https://prometheus.io/docs/instrumenting/exposition_formats/>.
It looks like it would be fairly easy for `hg64` and BIND to support it,
though it isn't clear whether the server is able to re-bucket data that
is exposed with a different bucketing than configured on the server.
## elsewhere on gitlab
Related issues #598 #2101 #3455

Not planned (assignee: Tony Finch)

[#3444: Issues with using C++-based PKCS#11 providers with BIND 9 when jemalloc support is enabled](https://gitlab.isc.org/isc-projects/bind9/-/issues/3444) (Michał Kępień, 2023-12-21T11:10:14Z)

While [moving][1] SoftHSM-based jobs around between operating systems,
we noticed that `dnssec-keyfromlabel` segfaults on Debian 11 "bullseye"
when BIND 9 is built with jemalloc support. A full backtrace with debug
symbols installed follows:
<details>
<summary>Click to expand/collapse backtrace</summary>
```
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007f779ea8e537 in __GI_abort () at abort.c:79
#2 0x00007f779e4335a0 in rtree_child_leaf_tryread (elm=0x7f779e530268 <je_extents_rtree+704104>, dependent=true) at src/rtree.c:205
#3 0x00007f779e433812 in je_rtree_leaf_elm_lookup_hard (tsdn=0x7f779bf82700, rtree=0x7f779e484400 <je_extents_rtree>, rtree_ctx=0x7f779bf82730, key=94481296101344, dependent=true, init_missing=false) at src/rtree.c:292
#4 0x00007f779e3b6235 in rtree_leaf_elm_lookup (tsdn=0x7f779bf82700, rtree=0x7f779e484400 <je_extents_rtree>, rtree_ctx=0x7f779bf82730, key=94481296101344, dependent=true, init_missing=false) at include/jemalloc/internal/rtree.h:381
#5 0x00007f779e3b627a in rtree_read (tsdn=0x7f779bf82700, rtree=0x7f779e484400 <je_extents_rtree>, rtree_ctx=0x7f779bf82730, key=94481296101344, dependent=true) at include/jemalloc/internal/rtree.h:406
#6 0x00007f779e3b6394 in rtree_szind_read (tsdn=0x7f779bf82700, rtree=0x7f779e484400 <je_extents_rtree>, rtree_ctx=0x7f779bf82730, key=94481296101344, dependent=true) at include/jemalloc/internal/rtree.h:429
#7 0x00007f779e3b8dbb in arena_salloc (tsdn=0x7f779bf82700, ptr=0x55ee24178fe0) at include/jemalloc/internal/arena_inlines_b.h:191
#8 0x00007f779e3b9f35 in isalloc (tsdn=0x7f779bf82700, ptr=0x55ee24178fe0) at include/jemalloc/internal/jemalloc_internal_inlines_c.h:38
#9 0x00007f779e3c696f in je_sdallocx_default (ptr=0x55ee24178fe0, size=21, flags=0) at src/jemalloc.c:3555
#10 0x00007f779e3c6e3c in je_je_sdallocx_noflags (ptr=0x55ee24178fe0, size=21) at src/jemalloc.c:3611
#11 0x00007f779e44a855 in operator delete (ptr=0x55ee24178fe0, size=21) at src/jemalloc_cpp.cpp:131
#12 0x00007f779befa14f in __gnu_cxx::new_allocator<char>::deallocate (__t=<optimized out>, __p=<optimized out>, this=0x7ffd603a8b30) at /usr/include/c++/10/ext/new_allocator.h:133
#13 std::allocator_traits<std::allocator<char> >::deallocate (__n=<optimized out>, __p=<optimized out>, __a=...) at /usr/include/c++/10/bits/alloc_traits.h:492
#14 std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_destroy (__size=<optimized out>, this=0x7ffd603a8b30) at /usr/include/c++/10/bits/basic_string.h:237
#15 std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_dispose (this=0x7ffd603a8b30) at /usr/include/c++/10/bits/basic_string.h:232
#16 std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string (this=0x7ffd603a8b30, __in_chrg=<optimized out>) at /usr/include/c++/10/bits/basic_string.h:658
#17 SimpleConfigLoader::loadConfiguration (this=0x55ee2417aaf0) at SimpleConfigLoader.cpp:150
#18 0x00007f779bef73cb in Configuration::reload (this=0x55ee2417aa40) at Configuration.cpp:169
#19 0x00007f779bee38bf in SoftHSM::C_Initialize (this=0x55ee2417c910, pInitArgs=<optimized out>) at SoftHSM.cpp:564
#20 0x00007f779beb3e34 in C_Initialize (pInitArgs=0x7ffd603a90b0) at main.cpp:133
#21 0x00007f779bf73249 in pkcs11_CTX_load (ctx=ctx@entry=0x55ee24179230, name=<optimized out>) at p11_load.c:86
#22 0x00007f779bf76ac8 in PKCS11_CTX_load (ctx=ctx@entry=0x55ee24179230, ident=<optimized out>) at p11_front.c:46
#23 0x00007f779bf6e07a in ctx_enumerate_slots_unlocked (ctx=ctx@entry=0x55ee2415aff0, pkcs11_ctx=pkcs11_ctx@entry=0x55ee24179230) at eng_back.c:258
#24 0x00007f779bf6f0bd in ctx_init_libp11_unlocked (ctx=0x55ee2415aff0) at eng_back.c:307
#25 ctx_load_object (ctx=ctx@entry=0x55ee2415aff0, object_typestr=object_typestr@entry=0x7f779bf784be "public key", match_func=match_func@entry=0x7f779bf6f2c0 <match_public_key>, object_uri=0x7f779ba30000 "pkcs11:token=softhsm2-keyfromlabel;object=keyfromlabel-zsk-rsasha256.example;pin-source=/bind9/bin/tests/system/keyfromlabel/pin", ui_method=0x0, callback_data=0x0) at eng_back.c:578
#26 0x00007f779bf6f590 in ctx_load_pubkey (ctx=0x55ee2415aff0, s_key_id=<optimized out>, ui_method=<optimized out>, callback_data=<optimized out>) at eng_back.c:745
#27 0x00007f779e821140 in ENGINE_load_public_key () from /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
#28 0x00007f779ecdba7c in opensslrsa_fromlabel (key=0x7f779ba35000, engine=0x7ffd603aada9 "pkcs11", label=0x7f779ba30000 "pkcs11:token=softhsm2-keyfromlabel;object=keyfromlabel-zsk-rsasha256.example;pin-source=/bind9/bin/tests/system/keyfromlabel/pin", pin=<optimized out>) at opensslrsa_link.c:1457
#29 0x00007f779ec902af in dst_key_fromlabel (name=name@entry=0x7ffd603a9540, alg=8, flags=flags@entry=256, protocol=protocol@entry=3, rdclass=<optimized out>, engine=engine@entry=0x7ffd603aada9 "pkcs11", label=0x7f779ba30000 "pkcs11:token=softhsm2-keyfromlabel;object=keyfromlabel-zsk-rsasha256.example;pin-source=/bind9/bin/tests/system/keyfromlabel/pin", pin=0x0, mctx=0x7f779ba09000, keyp=0x7ffd603a93b8) at dst_api.c:960
#30 0x000055ee2376afae in main (argc=<optimized out>, argv=<optimized out>) at dnssec-keyfromlabel.c:609
```
</details>
As can be seen in the backtrace, the segmentation fault happens
during SoftHSM initialization.
When jemalloc is built with debugging enabled, the following assertion
is logged:
```
<jemalloc>: src/rtree.c:205: Failed assertion: "!dependent || leaf != NULL"
```
Nothing like this happens on Fedora. I have not checked other operating
systems.
Further investigation revealed that this assertion failure [means][2]
that jemalloc was asked to free a pointer that it did not allocate.
Things only get more fuzzy from here...
I believe the root cause of this issue lies somewhere in how various
distros link and load executables (because that influences when jemalloc
is initialized). Specifically, it looks like jemalloc gets initialized
earlier on Fedora than on Debian, which allows it to properly handle
allocations requested by C++ shared objects dynamically loaded from C
executables. Why this happens is over my head. However, one fact that
supports this theory is that `LD_PRELOAD`ing jemalloc on Debian seems to
work around the problem.
Since there are most likely other PKCS#11 providers out there that are
C++-based, I decided to document the hacks that worked around the
problem in my test environment (i.e. allowed `dnssec-keyfromlabel` to
work):
- `LD_PRELOAD` jemalloc.
- Build BIND 9 using `--without-jemalloc`.
- Link BIND 9 against a jemalloc build with C++ integration disabled
(`--disable-cxx`). This prevents jemalloc from handling C++'s `new`
and `delete` keywords.
Another possible (and untested) workaround would be to link BIND 9
against a jemalloc build that uses a custom function name prefix, but
BIND 9 [does not currently support such a scenario][3].
I do not see a clear way to fix this on the BIND 9 side of things, so
this issue is meant to serve merely as a permanently-open source of
information.
[1]: !6322
[2]: https://gitter.im/jemalloc/jemalloc?at=5e275495364db33faa0bf972
[3]: #3116

Not planned

[#3431: auth/DNSSEC: RRSIGs not removed when node becomes delegation point](https://gitlab.isc.org/isc-projects/bind9/-/issues/3431) (Libor Peltan, 2022-06-30T23:27:56Z)

### Summary
Bind operates as a primary authoritative server with DNSSEC signing enabled. A node (say `xyz.xyz.hahnekedar.`) already exists and has signed A+AAAA records. An incremental update adds an NS record to the node, making it a delegation point and thus making the previously authoritative A+AAAA records non-authoritative in this zone. As a result, their signatures should be removed.
### BIND version used
9.18.4 (also 9.18.1 and potentially others)
```
BIND 9.18.4-1+ubuntu20.04.1+isc+1-Ubuntu (Stable Release) <id:>
running on Linux x86_64 5.4.0-113-generic #127-Ubuntu SMP Wed May 18 14:30:56 UTC 2022
built by make with '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=${prefix}/lib/x86_64-linux-gnu' '--libexecdir=${prefix}/lib/x86_64-linux-gnu' '--disable-maintainer-mode' '--disable-dependency-tracking' '--libdir=/usr/lib/x86_64-linux-gnu' '--sysconfdir=/etc/bind' '--with-python=python3' '--localstatedir=/' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--disable-static' '--with-gost=no' '--with-openssl=/usr' '--with-gssapi=yes' '--with-libidn2' '--with-json-c' '--with-lmdb=/usr' '--with-gnu-ld' '--with-maxminddb' '--with-atf=no' '--enable-ipv6' '--enable-rrl' '--enable-filter-aaaa' '--disable-native-pkcs11' '--enable-dnstap' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fdebug-prefix-map=/build/bind9-chX9Xr/bind9-9.18.4=. -fstack-protector-strong -Wformat -Werror=format-security -fno-strict-aliasing -fno-delete-null-pointer-checks -DNO_VERSION_DATE -DDIG_SIGCHASE' 'LDFLAGS=-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
compiled by GCC 9.4.0
compiled with OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020
linked to OpenSSL version: OpenSSL 1.1.1f 31 Mar 2020
compiled with libuv version: 1.44.1
linked to libuv version: 1.44.1
compiled with libnghttp2 version: 1.40.0
linked to libnghttp2 version: 1.40.0
compiled with libxml2 version: 2.9.10
linked to libxml2 version: 20910
compiled with json-c version: 0.13.1
linked to json-c version: 0.13.1
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
linked to maxminddb version: 1.4.2
compiled with protobuf-c version: 1.3.3
linked to protobuf-c version: 1.3.3
threads support is enabled
default paths:
named configuration: /etc/bind/named.conf
rndc configuration: /etc/bind/rndc.conf
DNSSEC root key: /etc/bind/bind.keys
nsupdate session key: //run/named/session.key
named PID file: //run/named/named.pid
named lock file: //run/named/named.lock
geoip-directory: /usr/share/GeoIP
```
### Steps to reproduce
See Summary.
### What is the current *bug* behavior?
The RRSIGs of the previously authoritative records are not removed from the zone.
### What is the expected *correct* behavior?
The RRSIGs should be no longer present in the zone.
### Relevant configuration files
```
options {
directory "/tmp/knottest-1656601404-kbfbxc3h/dnssec/validate_bind/bind1";
key-directory "/tmp/knottest-1656601404-kbfbxc3h/dnssec/validate_bind/bind1";
managed-keys-directory "/tmp/knottest-1656601404-kbfbxc3h/dnssec/validate_bind/bind1";
session-keyfile "/tmp/knottest-1656601404-kbfbxc3h/dnssec/validate_bind/bind1/session.key";
pid-file "/tmp/knottest-1656601404-kbfbxc3h/dnssec/validate_bind/bind1/bind.pid";
listen-on port 21461 { 127.0.0.1; };
listen-on-v6 { };
auth-nxdomain no;
recursion no;
masterfile-format text;
max-refresh-time 2;
max-retry-time 2;
transfers-in 30;
transfers-out 30;
minimal-responses true;
notify-delay 0;
notify-rate 1000;
max-journal-size unlimited;
max-ixfr-ratio unlimited;
startup-notify-rate 1000;
serial-query-rate 1000;
};
key "vsIj.JK2zpN69I9lrZ3M2wBOek8EjENtyvimIH0O8HpstVdWCI8yh96fLVGK88" {
algorithm hmac-md5;
secret "KkTQCVgZ54xSOtXoR0TwfA==";
};
controls {
inet 127.0.0.1 port 21462 allow { 127.0.0.1; } keys { vsIj.JK2zpN69I9lrZ3M2wBOek8EjENtyvimIH0O8HpstVdWCI8yh96fLVGK88; };
};
key "K8VMS.r8vKnF4I" {
# test
algorithm hmac-sha384;
secret "oIXitWx9sj66cCS4LzEjBBzjEQ7PxsR2WQVYbYSrFZIfdZ+CwpJTpo/qBjVTFCgh";
};
key "C8s0rGMAIWmKXmlE9g0Qc2YpKZMKeBqOxM6bzpqvTM50sAN7tPoNSKG6zkyPy.RRnMzzgsXaXv9pMsfpmdky.A61mVO33Stxh6PtW8Eenpdknt2.qhC6akpkb8aGw59RKAyIai60U4wvRWzNM57XMpcYdnPF.jnA3Jbq7QyTX64H924qywefBjXzhriUI7s8fLIYUrF4OtCA4I5UagQ" {
# local
algorithm hmac-sha256;
secret "ihC0Exd7Qq0q+s/mYsfVLq8KlBg1odYtuqsol5RB0d0=";
};
key "S8oUeWsERvzj4f6SlQu3aAucUhp.X5yMyeE287KUpVbXhl3ikdCIkQ59SFKTBS6yxd7qPEkaVIOA8PS4.sTIX7xW1qtyqxBpKqdSKP4hy5V2.Dub0zm5nVloE1KQrMSagYqJAKcW7gSu24U.gQuF3LQERY6cdRS4aEvEW101jPsBU1oqdMTsgEssVwIyE2445UEgV0vj8nw77J" {
# knot1
algorithm hmac-sha256;
secret "PCVor52MeoxlZ5nn3rp4MXqLXsPGft5SW0j4h2j55AE=";
};
# more configured zones omitted
zone "hahnekedar." {
file "/tmp/knottest-1656601404-kbfbxc3h/dnssec/validate_bind/bind1/master/hahnekedar.rndzone";
check-names warn;
type master;
notify explicit;
check-integrity no;
ixfr-from-differences yes;
also-notify { 127.0.0.1 port 21463 key C8s0rGMAIWmKXmlE9g0Qc2YpKZMKeBqOxM6bzpqvTM50sAN7tPoNSKG6zkyPy.RRnMzzgsXaXv9pMsfpmdky.A61mVO33Stxh6PtW8Eenpdknt2.qhC6akpkb8aGw59RKAyIai60U4wvRWzNM57XMpcYdnPF.jnA3Jbq7QyTX64H924qywefBjXzhriUI7s8fLIYUrF4OtCA4I5UagQ; };
allow-update { key K8VMS.r8vKnF4I; };
allow-transfer { key K8VMS.r8vKnF4I; key S8oUeWsERvzj4f6SlQu3aAucUhp.X5yMyeE287KUpVbXhl3ikdCIkQ59SFKTBS6yxd7qPEkaVIOA8PS4.sTIX7xW1qtyqxBpKqdSKP4hy5V2.Dub0zm5nVloE1KQrMSagYqJAKcW7gSu24U.gQuF3LQERY6cdRS4aEvEW101jPsBU1oqdMTsgEssVwIyE2445UEgV0vj8nw77J; };
inline-signing yes;
auto-dnssec maintain;
key-directory "/tmp/knottest-1656601404-kbfbxc3h/dnssec/validate_bind/bind1/keys";
};
```
### Relevant logs and/or screenshots
The issue is easily visible with `named-journalprint /tmp/knottest-1656601404-kbfbxc3h/dnssec/validate_bind/bind1/master/hahnekedar.rndzone.signed.jnl`.
Please look for any changes to the node `xyz.xyz.hahnekedar.`. With serial `345896694`, it is first introduced with an A record, and it gets an RRSIG and NSEC+RRSIG appropriately. With serial `345896695` it gains an AAAA record as well, and the RRSIGs and NSECs are adjusted appropriately. With serial `345896696` it gains an NS record as well, making it a delegation point. The RRSIGs for A and AAAA should disappear, but they don't.
```
add hahnekedar. 2648 IN SOA ns.hahnekedar. username.hahnekedar. 345896694 3600 1200 2419200 2648
add xyz.ns2.hahnekedar. 0 IN A 180.47.165.192
add xyz.xyz.hahnekedar. 0 IN A 56.101.241.32
add xyz.a7a.hahnekedar. 0 IN AAAA fd9c:20c0:91fc:cb36:2e9f:23c9:369e:1bfc
add xyz.bl7b907b61.hahnekedar. 0 IN AAAA fd9c:20c0:91fc:cb36:e87c:1f6b:3b46:cf28
add xyz.pub2a0.hahnekedar. 0 IN AAAA fd9c:20c0:91fc:cb36:2e76:b085:aeda:bdcb
add xyz._sip._udp.671A1.hahnekedar. 0 IN SRV 23 5 49681 intervention.hahnekedar.
add hahnekedar. 2648 IN RRSIG SOA 8 1 2648 20220730150351 20220630140351 54646 hahnekedar. IahqYdkP/6OFhyXEKWB13M2eCgMPWrSLonWVYIWE5GeWJuV2wKFp2xYd WbduWLj9jlchQbpCb1WvZZUiNdZLJqvpCx4YkWt0mFqW5PZyDq7Rz1VM j53d6JkpKlKw8gSgt0y9RoMJKXGaz4IpvzMBDulAr88uhU5vy4XLn/uD BfQ=
add xyz._sip._udp.671A1.hahnekedar. 0 IN RRSIG SRV 8 5 0 20220718231722 20220630140351 54646 hahnekedar. tXIeLqk3MXTbBAVk13y9PtO6u/oQunXUY40zrZLyhiqEbs3UtpejFRJq tLELYr2+7eCEYeFBEqdjkgzyahDuNKJwtICP6TFRlF9zXKacHJ+vQYm4 AUv466DMg3fficEj2VKr95aQA6v1ZoGPtth5AyY/NYvQ1g3wfKvG536o ZXM=
add xyz.a7a.hahnekedar. 0 IN RRSIG AAAA 8 3 0 20220718231722 20220630140351 54646 hahnekedar. hxkD0n2C+aHZnl5Ds5kRVO10pN5rY3VuKu6UqiRKvbbLG4x8uCp5vMpq xraRSYZpiXlRUtz6tMtLHfw2c/NAbhuVSZwtxsf1zk0kShqZVbJgUgdr 4SA4zPyow99SVkiEj9udter0HJmpnmUgWdwbExCaoLgMBqhrSP/Sb7fA 8oY=
add xyz.bl7b907b61.hahnekedar. 0 IN RRSIG AAAA 8 3 0 20220718231722 20220630140351 54646 hahnekedar. eAXsvnXqYd+AaS7KyiubuKuetx6fAOrqQtkQmIl41OH8Cop1Ui24u6OD pdT3Ii75CFJz3IFtHdScdlEAvhFkO9Jlpk5BAat4aooUZDXMRfvUNCoB K748aL2tSpEOkRCD84g/ETsnTA4iW+dFAuDBlgYOwgQAaMFvduoU4prV jqk=
add xyz.ns2.hahnekedar. 0 IN RRSIG A 8 3 0 20220718231722 20220630140351 54646 hahnekedar. dgrlg6VprM78tL3KQXHpZHWyMeRCEB2Onp9t00atg16LEt5b9XdI4ona WODfHCRLP9JUEZFd/Sa1EzI9yRrHQz/sLNsFwHkJ4aydm9yNW5gSmNsC Jn0zkiCBS1hifGnwcb7URAgN0M84h68TCD0Q808RrzpRbxDneceOCR85 N1A=
add xyz.pub2a0.hahnekedar. 0 IN RRSIG AAAA 8 3 0 20220718231722 20220630140351 54646 hahnekedar. N/gC4r/QlMfPBguDeMTc2VPmuC8vQom6XLAnkiUNZHCfAVwNSfSZwXdT 5SiB5p2Ys8m/u3KUGufEX7mACz0naQ0zbvwDr5vD+bUc7TjksmBrARoC RUzl3830KnulI/BwoLUh70g3pYuKzIgFTjOwx1i79wUCfZfimqJ2V8Av jK0=
add xyz.xyz.hahnekedar. 0 IN RRSIG A 8 3 0 20220718231722 20220630140351 54646 hahnekedar. W5ifYcm1UHBwx5j6FX0o5ms/1lmQX81IOA31KbIS4kKSFJ6X7nm3Fb7z Q4B7qZAmH6P631HouaUNyJHIZEt2F+ZOhCN3Edp3gU6xDVDtLrS1/PUs /bhAUD+oIMWNd/fEqMiU2SKKYrDpkj1ph336wWiI4yci6/emZ4tsmXyE ztQ=
add 0acC77eFAbbdc268.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. kQXr8PiNDG9IwnJG+d17dlHpJoZSMFO6AqJyS5R+9FuUgk8NX+tFqkOE 39lfDF8+xfavsiuR7luOYY9Y5fOM9/2qkCWP6dvdXw3HNrqD/aRrMW1K PERGgrqqdzAGaiGlNF9Kpc8erjAp1to+lbwVlok7n/4pxBxNUWlJmrXj F/E=
add xyz.2Ce98A477.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220718231722 20220630140351 54646 hahnekedar. C/akuQjWbn2GxlV0jbXLSJiDoTZoZMnuy+i7VGtuFjnjXogl097tjAME XUwhMOiqTDOcwJpcF4vdhQe+bWE+nXulZcrPXXtxP/DpuBdTQLrN8Z9k bCsqSW35Zyxvz1SLhah5xMw6iKR2l+cYYwyl4dqm7KuxEYgJF7riNp2t AlA=
add _sip._udp.671A1.hahnekedar. 2648 IN RRSIG NSEC 8 4 2648 20220718231722 20220630140351 54646 hahnekedar. ZUpBFwJdsbBf9QESGqLvSGpS0RpUnztyHsWpWlu2C7iZmoAjbwqqXERu mTJsxSPMfb17EaSBLhDF9FoyWH6ko0j5Jr2+Vd1a9M1EYGzKZh+h7Ook nPJp9Y8FfuBbURyKuQUUK20zoVSEaXeUcxISXS2k7b0InKdYlqz5uhsx 4M8=
add xyz._sip._udp.671A1.hahnekedar. 2648 IN RRSIG NSEC 8 5 2648 20220718231722 20220630140351 54646 hahnekedar. LYnDH3YWcuLN3l6Ew39HvjX4AEr4zSeP3kwxbwShcEWP0w/ofNH85kMW lAf02Qh45yxIOaOS9GpEeaVx6F8F+qpGnXbkzFCmAlIAZeZ6YG6EXxmt B4sFTYdKR2VwcqgMhAA8uaShZO0HwQwC7wjkaRG+GVdmKW/ZnRCi9zmI EeU=
add a7a.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. o4hzq3sirzZBjygVzAghzfOsgB5Cl0ty3d0Ecm8saylbUOKF39ZwDv94 +hSRjrA5Iz2ZgSNrUsHKNitRLCfrw9ChYVxEtlVaKVEw8DxmjfcirqWP nGnWTrIOh7g+F1R1RQmPdCsWNmT4HlYTMCk5iifxizUYE8RkEin9tR0q 37A=
add xyz.a7a.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220718231722 20220630140351 54646 hahnekedar. lTF4MPwG/vrRk+JT152fckLXUgW3LwcxnvThyMnGvY6KL//7nkHVlIFK JE9leplUDgoLI9Ps1l67Emmm0PVuur5mbfLMjP145S5isPM4LGNlNdnA Y4rOQztcvaPYSx0+IScvNJ1BRQM5VJ2S7/ngZG7aHEYmuU5Ql66RBwvu KMo=
add bl7b907b61.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. nAs4KftHJJ/vY7xpC+248ZXWn8l9f/hHzf53c7iq5RWvMkKkdW/JDFHh UurQpkcRCstpuoWqmSe5Oxrrt5xpVxk+Z+mEk1NX0aBgBQ1KJSeXuSL1 MHk5Uzsx1or4+Gk/XroIQkAzqjq7bPxuSsChObFHxWO5+Oeklkkgv+Le YcQ=
add xyz.bl7b907b61.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220718231722 20220630140351 54646 hahnekedar. cFmRZ1JTEwilLAhUzxUbHWkHxKbuaGIuHp8YEZzo1/h62JNAtfdlZAUJ o1i4asv9vwt6YoLkJvPzREaXkeSIjGu8bmtZTy60u+jsc9g4pU7K7P3Y c4ftzt4HeK2/SrS8PoHLdMPQsPNxGwvhbb80k+aoeITvMDJ2R6HHdbj5 hfA=
add ecFEC5.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. L6bNP7szNfC6Md0IbQvVS2vAmElKhRrdIJYwcGrTjAvpF3jjTOPtjVd6 TSsq9Z8KjNyoH/6fPsKWiExCHPMPXRF7PHN62+li/5axU9ObklqLit+z tq4ekP4hiNnsKDkBJRDw/4LtetGy3DreuI7PzFBFxHVSkxN63Zsws6EE KPw=
add fau69a83d99.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. Eyqb/OtO6F7xhmG6iPT268td64oglqfCsxA1qO5LCCup1G0uyObjB3Sp AUx2SY8vx1nNamEawLI/H+oEaUPjN6bhGZZbrhdCFjGvu3E4iUFC0xMp zaLeFXMwFH4KpdeFOuIq+h4wd+uPU8hS+dI6QKFJpgjUy4JlY82PU7e9 bfI=
add ns2.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. vX9Os5VqbgS8mMCzIqYoTnrKnhANAYxcu0/vJIYf2hlzqhKc6+WVUpjs Xf8Nmy1EwV5HAjZWkPkvfrAXfyAUEXyeXeVCPAHIbFSWE9GCg9DRaKxR Rwl7PjIADrxS8A3tRaEUc/VWNiSv2DBRPmJU0EeZq5fthhDP5i1telSC OPg=
add xyz.ns2.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220718231722 20220630140351 54646 hahnekedar. UkRBxDJNZ/oNt9f5uKqynzuGVF8S1GFHD54Gtb1AKFIrTXjcagOFPGRL nmGwzZQZ7Cn3TI+4iUUzRkA8gcGxrrgsp6uZyf0M1EJi6sqAVdZPoa8f qLW+RUDZNj34qTtJJF3AEcCJr9uLt2BXFof4dKkp9ggZxEtQz9IG4iqJ rhM=
add pub2a0.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. i4UA98/A9+ffiYVYBAmqZctmskhteBA8wj9gPuVooaAWMdtA8Fn7qLxb V4I/aYL96JbN/ht11f3NpkpWGzia+KKewHyQZ0GMwuFdrSiLxqPoCdxa JVd0hiNHvQKVLuuwNiqh2UBxXwBg3Z/bX6gPQDxkj5k+Of/WHuOuJB4C KWQ=
add xyz.pub2a0.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220718231722 20220630140351 54646 hahnekedar. eOg5aJ8O2Q+9XIL1b17JdnAmJJp3fm4y/XdfqnDHxtBek1vIt2PCERnR sLGc9+dTmn+XeJ8lPjbWzofG97Mtl2OS5cDxurbyPW5rST7svpiNDzEe N/rU3JsoWwbzJO3nEAbYYjIvMoKcMo/pF+0128yqpLLGEM9l1cEvZnW5 Xw4=
add xyz.xyz.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220718231722 20220630140351 54646 hahnekedar. ObP4lVTa8gghLfH4nQcUaaHiGRZlJbo490aySgcUuXsVrz1U2yitAXZ+ D7H2MYQlo0Ya/bNwz7xC105d9qZpexsvoFwNMzhhA9iUXLKK3SrRhxnP J0X2zLDyRhkltn3P4mF5yXwg3E7vQe7NeFe+Wpw2IVJqZsv5KB40AwGq dZ0=
add 0acC77eFAbbdc268.hahnekedar. 2648 IN NSEC xyz.2Ce98A477.hahnekedar. AAAA RRSIG NSEC
add xyz.2Ce98A477.hahnekedar. 2648 IN NSEC _sip._udp.671A1.hahnekedar. MX RRSIG NSEC
add _sip._udp.671A1.hahnekedar. 2648 IN NSEC xyz._sip._udp.671A1.hahnekedar. SRV RRSIG NSEC
add xyz._sip._udp.671A1.hahnekedar. 2648 IN NSEC 68Dbb6914Fc4e.hahnekedar. SRV RRSIG NSEC
add a7a.hahnekedar. 2648 IN NSEC xyz.a7a.hahnekedar. AAAA RRSIG NSEC
add xyz.a7a.hahnekedar. 2648 IN NSEC aEc7940DfdDEb.hahnekedar. AAAA RRSIG NSEC
add bl7b907b61.hahnekedar. 2648 IN NSEC xyz.bl7b907b61.hahnekedar. AAAA RRSIG NSEC
add xyz.bl7b907b61.hahnekedar. 2648 IN NSEC _sip._udp.protocol.CCb3.hahnekedar. AAAA RRSIG NSEC
add ecFEC5.hahnekedar. 2648 IN NSEC f30.hahnekedar. A RRSIG NSEC
add fau69a83d99.hahnekedar. 2648 IN NSEC _sip._udp.grid.hahnekedar. A RRSIG NSEC
add ns2.hahnekedar. 2648 IN NSEC xyz.ns2.hahnekedar. A RRSIG NSEC
add xyz.ns2.hahnekedar. 2648 IN NSEC pub2a0.hahnekedar. A RRSIG NSEC
add pub2a0.hahnekedar. 2648 IN NSEC xyz.pub2a0.hahnekedar. AAAA RRSIG NSEC
add xyz.pub2a0.hahnekedar. 2648 IN NSEC xyz.xyz.hahnekedar. AAAA RRSIG NSEC
add xyz.xyz.hahnekedar. 2648 IN NSEC hahnekedar. A RRSIG NSEC
del hahnekedar. 2648 IN SOA ns.hahnekedar. username.hahnekedar. 345896694 3600 1200 2419200 2648
del fau69a83d99.hahnekedar. 2648 IN A 84.195.165.254
del 623bC33D3B.normandy.hahnekedar. 2648 IN NS 0acC77eFAbbdc268.hahnekedar.
del hahnekedar. 2648 IN RRSIG SOA 8 1 2648 20220730150351 20220630140351 54646 hahnekedar. IahqYdkP/6OFhyXEKWB13M2eCgMPWrSLonWVYIWE5GeWJuV2wKFp2xYd WbduWLj9jlchQbpCb1WvZZUiNdZLJqvpCx4YkWt0mFqW5PZyDq7Rz1VM j53d6JkpKlKw8gSgt0y9RoMJKXGaz4IpvzMBDulAr88uhU5vy4XLn/uD BfQ=
del fau69a83d99.hahnekedar. 2648 IN RRSIG A 8 2 2648 20220720235742 20220630140325 54646 hahnekedar. BOTESc7VHASmQOWJ6R+x7Re1qC61AyuAljG8IsRdG9c+CivlZVFI3XFi AOzW63YZVfvf3mbtgED96Y8tnhQVz4a0Ny/mITS/5A0XO/GAfrP9DWbA aVkvILUWHmqHZUZZQZIE6WIMw7h/0a13IowwLkWJU19UimLG4A0L102u g7w=
del fau69a83d99.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. Eyqb/OtO6F7xhmG6iPT268td64oglqfCsxA1qO5LCCup1G0uyObjB3Sp AUx2SY8vx1nNamEawLI/H+oEaUPjN6bhGZZbrhdCFjGvu3E4iUFC0xMp zaLeFXMwFH4KpdeFOuIq+h4wd+uPU8hS+dI6QKFJpgjUy4JlY82PU7e9 bfI=
del 623bC33D3B.normandy.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220721114036 20220630140325 54646 hahnekedar. Ok+p2mXO/3fR1sveRxDiMgGDUXlW49edOW7w6DVAFipCNvX+mEsf/j6B /z3a52AU8b2kXmZHVkINapm11ZZOGWdZkqxfTU4mwCvGbqZFMm06nJmC B96IxlFmDcqfn6BOnUgpQjVy9CCz/sFFJTLVT/aRp5SQnxyeypbWQHz2 YU4=
del assuming-control.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220729153646 20220630140325 54646 hahnekedar. 0dGbtAuuCwKomF7plmLdL2nv6VD/idu8nbi8iteAsNVaJxsz0ZCYHPHv ApOdKcR7HPejJtBRa6MDdryPSmxU47qGkxKv5CaUDL2Gb7xJfx6aD4kl XKcCmnUZZEbnl9FFOA9AW9BQNrfiwKrENM3tlMTNnCl1LNeRhqUgyrOX 608=
del f75cd935ddEAee38.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220720235742 20220630140325 54646 hahnekedar. Yc0K6qKm731PfU8w+BbyDuDGUhAT/v39jfOeWl6nymwVZNbGNHNX+GoS djA4pXX4+IQJzxuBk5lONumJo2lmpHEd9zNijVKh7dwGDA7+C5RD6KHT 4kRBlNd7MFjxCXaPLlgpOWBNx08eL+Q6dbLdE5xXY948+d7Ma93FeC1i wwM=
del nexus5b2002b.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220716100219 20220630140325 54646 hahnekedar. JfVSZWSDL7iFs0SIUbkiovCkIUMP5HI3JNzUFDEk2J3UmVK+N3xoGfh1 XLbWGt39uWIOASnIk8TEC9c6G8E9khLpNnAL1QmTkQlkRJ+5UGvoa0cb MiUDClMPIXO6nUZnzIuM7EjbzFK0Nsq4d/mGhC6OuUnYitC7Zjda/H4V 6Dc=
del xyz.xyz.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220718231722 20220630140351 54646 hahnekedar. ObP4lVTa8gghLfH4nQcUaaHiGRZlJbo490aySgcUuXsVrz1U2yitAXZ+ D7H2MYQlo0Ya/bNwz7xC105d9qZpexsvoFwNMzhhA9iUXLKK3SrRhxnP J0X2zLDyRhkltn3P4mF5yXwg3E7vQe7NeFe+Wpw2IVJqZsv5KB40AwGq dZ0=
del fau69a83d99.hahnekedar. 2648 IN NSEC _sip._udp.grid.hahnekedar. A RRSIG NSEC
del 623bC33D3B.normandy.hahnekedar. 2648 IN NSEC ns.hahnekedar. NS RRSIG NSEC
del assuming-control.hahnekedar. 2648 IN NSEC bl7b907b61.hahnekedar. CNAME RRSIG NSEC
del f75cd935ddEAee38.hahnekedar. 2648 IN NSEC fau69a83d99.hahnekedar. LOC RRSIG NSEC
del nexus5b2002b.hahnekedar. 2648 IN NSEC 623bC33D3B.normandy.hahnekedar. NS RRSIG NSEC
del xyz.xyz.hahnekedar. 2648 IN NSEC hahnekedar. A RRSIG NSEC
add hahnekedar. 2648 IN SOA ns.hahnekedar. username.hahnekedar. 345896695 3600 1200 2419200 2648
add xyz.hahnekedar.hahnekedar. 0 IN NS elkoss.hahnekedar.
add xyz.assuming-control.hahnekedar. 0 IN CNAME fB10a30E6eF5dE40.hahnekedar.
add xyz.xyz.hahnekedar. 0 IN AAAA fd9c:20c0:91fc:cb36:e87c:1f6b:3b46:cf28
add hahnekedar. 2648 IN RRSIG SOA 8 1 2648 20220730150358 20220630140358 54646 hahnekedar. GCrdijPFrcpZG5PT/ot6iBSx45K+mdq79+wOSznTDeNsw6/v9wvrQgA6 x2Rn4TfeKQ7aq2ZImFpveiR4pZ7jxx6/8ZtwWH9BVaabNr3KapomkqaB iCFJtqJndtoHW/lwPeNl5wE+/PwysuWCVqkyudjfEozBToOdo7JNxO7r GlM=
add xyz.assuming-control.hahnekedar. 0 IN RRSIG CNAME 8 3 0 20220725200453 20220630140358 54646 hahnekedar. K38W9+66tRFYcgf9kyUmTbgHbYTuF3oF4IFVXtBpV4JjW34lq41a3ftP knAyjCfVFp4n5VLyqoseHwWiEcnddnnfwGcPfPniOlj9GZCiQ0ihcP8h PYe1+Olc7tZA7c+U4QsiosuxPSGiPC52wtR/wR4ARokt1FH8yCcoh+4z jog=
add xyz.xyz.hahnekedar. 0 IN RRSIG AAAA 8 3 0 20220725200453 20220630140358 54646 hahnekedar. seH/ntazJr3QWuDLcE+h28cQF62XrkPohioZ/wn6v9jfbIPJGKVMVzHf qfmmZC2HlNYV1+cTqQQ3g50/Z7HW5EWg6oi7TMHhgEq4xVJ/h3vrsHmc g8u7w//6Q8sI1AeMcLWBokGeCsmpiCqeHQoGS/FFah/yX+Ji9EbJv4if jIw=
add assuming-control.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220725200453 20220630140358 54646 hahnekedar. pcN/Sja1FzxShSsfKgni3GNi5axAnAXy4uR9MT4Ll6DpGD8zGcptzWyF CX/jHmWqUmGxjNRgWueOlix3Ki1u2Mjq5OdDWmhkDIt+hi9ObYH72yGz 4tOnrfvdC2mp6ochv2MkxqNfGzzzkA0LKzNV4rudlELlLaBrMKKVBjel 0Bs=
add xyz.assuming-control.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220725200453 20220630140358 54646 hahnekedar. DDibxXdK1ss2XWZsV+XX91DK3Kx/Hre2p2sDtKZSef5q6kwezxPy57sU RstEBEA+CZog6O5/t+EIEmDAA9Vj43bgxfM/y8SeJ1Ll4/PqKpZlyJYd LMWt/5cJk+GavNaSRZ5Qxw72T52kEvcaFR2T2iNPG6/MygGx/NaFFnvB kZw=
add f75cd935ddEAee38.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220725200453 20220630140358 54646 hahnekedar. pgfrrkMrp9Z+zB17gI0VtUuEY5HPSSbA//yEzkIzau4aFzQwT8iHA8Px 14YHac/bi7UQXjevM19FWRwpZHd8m+sPsTHIhEVhLTodNSm6HcgsxYmd kvfDjVybsg6Xjo2bJX9/2RmNeY3T31RB5ts4BgTpk8982zzqwIVFx0mo G8U=
add nexus5b2002b.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220725200453 20220630140358 54646 hahnekedar. gJxMggNCMdNZsZvQ7tI32COR9NopCRiHG0VUViBDzh9RbnUxbiqBLcJW X51wfLzLnRaOKp8/eEuZ5O8MxCAJycL9hJRDaGOztGykLcI3nMR6quFc qqvViMz3hPgJpNRIqClIr2L5NhLLoYDrR1L/yZo/RhuJ4+KFiSKfFhXd 2Hw=
add xyz.xyz.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220725200453 20220630140358 54646 hahnekedar. ykB150lyAJro6CvENSzlCVJ34VD2jxwpCoosqtB9N5sXqgnDmcy96NhO Up4Ng447q3tG2Oq7+LH0siwrZCFVJ3LjCUvF5HWwTrZTMiHAIgA+oH2i E4jlRqbB/8FyyJhRF2ZtKeXI+jo2Pe5DlAc65sCr/dw0yoqtLgl/R4kd +Nk=
add assuming-control.hahnekedar. 2648 IN NSEC xyz.assuming-control.hahnekedar. CNAME RRSIG NSEC
add xyz.assuming-control.hahnekedar. 2648 IN NSEC bl7b907b61.hahnekedar. CNAME RRSIG NSEC
add f75cd935ddEAee38.hahnekedar. 2648 IN NSEC _sip._udp.grid.hahnekedar. LOC RRSIG NSEC
add nexus5b2002b.hahnekedar. 2648 IN NSEC ns.hahnekedar. NS RRSIG NSEC
add xyz.xyz.hahnekedar. 2648 IN NSEC hahnekedar. A AAAA RRSIG NSEC
del hahnekedar. 2648 IN SOA ns.hahnekedar. username.hahnekedar. 345896695 3600 1200 2419200 2648
del nexus.hahnekedar. 2648 IN A 182.188.108.174
del hahnekedar. 2648 IN RRSIG SOA 8 1 2648 20220730150358 20220630140358 54646 hahnekedar. GCrdijPFrcpZG5PT/ot6iBSx45K+mdq79+wOSznTDeNsw6/v9wvrQgA6 x2Rn4TfeKQ7aq2ZImFpveiR4pZ7jxx6/8ZtwWH9BVaabNr3KapomkqaB iCFJtqJndtoHW/lwPeNl5wE+/PwysuWCVqkyudjfEozBToOdo7JNxO7r GlM=
del nexus.hahnekedar. 2648 IN RRSIG A 8 2 2648 20220716100219 20220630140325 54646 hahnekedar. z/cC3zcVlxVA1TPjbT7wKqfqX2TAhaMvpbjnZH0LxTkS9rRuGKMTQKXC R+wqsOtH1ga75TqVC7ZP2kF+donBESBcDA4fRAq6k98lYh40TJyKNLWA DeBHwW8PBgb9nPDzylC0XBwYS9/ghHjMg3sKD2A4M9xOR/YEy2EOWvzJ GP0=
del nexus.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220716100219 20220630140325 54646 hahnekedar. b9JCqITdbcKbjOaigxxu9brTnpkcwI8WmIJoMxLzkg+s9y/OVveVc1wg jYPCc4XjqJiXdPujX4gaokRsDU8UDyDmZ8BUNCdY/3LAjKVywwKwzTKP 9Q2WrGFq60fi/6ew+2bMFwdJlzgWXvDsSslGdMNn2/r2WM1dSf3QO5qv Y6g=
del 0acC77eFAbbdc268.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220718231722 20220630140351 54646 hahnekedar. kQXr8PiNDG9IwnJG+d17dlHpJoZSMFO6AqJyS5R+9FuUgk8NX+tFqkOE 39lfDF8+xfavsiuR7luOYY9Y5fOM9/2qkCWP6dvdXw3HNrqD/aRrMW1K PERGgrqqdzAGaiGlNF9Kpc8erjAp1to+lbwVlok7n/4pxBxNUWlJmrXj F/E=
del customer.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220719173022 20220630140325 54646 hahnekedar. R7jC+oTa3Dh48LZfB0ogmH3oJ8RsdVTzmvkjr76+Dfhd0+QOO6uW4AO5 j14U3n88C4cUK9VbMzDNKLr98hcB/CaCCITALR0Dvyyo3kC4D00cXjHZ FCsVofMP9gA4M8KEALgQdDrZ7RZ8/KyDooPyVPK2iqaayiNVnnVsMn6R SDs=
del ne05cc.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220716100219 20220630140325 54646 hahnekedar. sBHhFLoDr4ED3PVdSAlPoCX88y7BGgnfY+C0Ji9ynAl6ofqyfT5msfiW uSzB2lOAC1HA5zs8mHWJ91/DJupwvqLxEclkCDrJZjura8Oq1chB44I0 VPLT20gG8KujGijyMkjeR3o492S+63nCk0iQZJPJ8kVv94jQNSR9s0Ex 9fE=
del xyz.xyz.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220725200453 20220630140358 54646 hahnekedar. ykB150lyAJro6CvENSzlCVJ34VD2jxwpCoosqtB9N5sXqgnDmcy96NhO Up4Ng447q3tG2Oq7+LH0siwrZCFVJ3LjCUvF5HWwTrZTMiHAIgA+oH2i E4jlRqbB/8FyyJhRF2ZtKeXI+jo2Pe5DlAc65sCr/dw0yoqtLgl/R4kd +Nk=
del nexus.hahnekedar. 2648 IN NSEC nexus5b2002b.hahnekedar. A RRSIG NSEC
del 0acC77eFAbbdc268.hahnekedar. 2648 IN NSEC xyz.2Ce98A477.hahnekedar. AAAA RRSIG NSEC
del customer.hahnekedar. 2648 IN NSEC DCB285c85e6.hahnekedar. CNAME RRSIG NSEC
del ne05cc.hahnekedar. 2648 IN NSEC nexus.hahnekedar. A RRSIG NSEC
del xyz.xyz.hahnekedar. 2648 IN NSEC hahnekedar. A AAAA RRSIG NSEC
add hahnekedar. 2648 IN SOA ns.hahnekedar. username.hahnekedar. 345896696 3600 1200 2419200 2648
add xyz.xyz.hahnekedar. 0 IN NS elkoss.hahnekedar.
add xyz.customer.hahnekedar. 0 IN CNAME rosenkov.hahnekedar.
add xyz.0acC77eFAbbdc268.hahnekedar. 0 IN AAAA fd9c:20c0:91fc:cb36:aaed:e102:c5de:51a2
add hahnekedar. 2648 IN RRSIG SOA 8 1 2648 20220730150405 20220630140405 54646 hahnekedar. GLhH8KEDcajSr2gGrZGnKQyi2sbfLH0CcpcDIO1Li6k5ZqY/bi25OiXb ORJ1IKjJdQ4/bU22rCAxjsSOXYGF1CnJ2JkDkXpvZgl6yxWh6PdH7iXu M+M7z9+eaf4SpXvskTxotOhjBFaBpmh4S3AooFE7/hPV4R6FcOQGh1RQ Vlg=
add xyz.0acC77eFAbbdc268.hahnekedar. 0 IN RRSIG AAAA 8 3 0 20220714050033 20220630140405 54646 hahnekedar. gCYEiWyOyV6z4WY50yCFoYEBm33EbfHK+IPzl7tsosOrPq0SNZjdS/j8 u3BFS0L3zESl6QtnVUQu8pkMfSEcZz0sE94F8b086OXSAkTAIXMsvXxZ zf2wmCfQI1u+pRr1tWRieyAiebQ9NuDQ/3ZqT03MRQIJL8l9RthIvykO O3Q=
add xyz.customer.hahnekedar. 0 IN RRSIG CNAME 8 3 0 20220714050033 20220630140405 54646 hahnekedar. hS3RPH9ukbqqFpJJEr668yk8wUiX9OOCBaScavREQl468T6B5+gDLYsa fMdrj7E5HmAONznlih5c2utXmUSDH1SgHL0FEbP/eB4ir7LrmFEyhOjV ZWCuw1u2EiNp4fxGIw0bHE4273RisxLIpibxRuVQrvqrZtxpwUdEprHr 5k0=
add 0acC77eFAbbdc268.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220714050033 20220630140405 54646 hahnekedar. W4YDPjri0Vu9tJr11oDl13Gz1W+HjX0MbUiUZnDI5FALYxV9H3R/w+Rs w3p/nPCMgMsfhOj+2Zh3EubpvXp09QiPIFVXfXjvwcnW7bBA39Bx2F1t JAmlt65KPdLbix+veez32Et+CbZ3lkSLpoE2oEDBBXZZemkC1a/JPTdM GYI=
add xyz.0acC77eFAbbdc268.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220714050033 20220630140405 54646 hahnekedar. 0vaxSNYGS/Qzpt8rY4WDagnak3lVT9/ZO9Hgq+wP2pz6i0IAwUzU+3y8 v5jEWtcImbA9MA/htH1YvRAEOKAPtkBgfaMynQsoqy2+F8NiA4Nslee1 l4DqATmy1NqIcK0qr3TAMlECXHv59fB/aUlqJpFO7ZpZadh5wiV0bVKb 0qA=
add customer.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220714050033 20220630140405 54646 hahnekedar. bRq3GcqhB888x11nxdhywIaWcVwGpfpI0zDv5WqHgmHl5xM4iI3Np/Io vfsFqYqg5T1HaTy9h8sKDAV7tVOff30tjDOE3fqiADoHi/arSUG38oO4 KHdEjCzFMWSnxt6+nOxlaHhuiPFWV23HqRnLn3t5hWeHflmAVNpThcDD 9WQ=
add xyz.customer.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220714050033 20220630140405 54646 hahnekedar. kTV2b5iS5VOVt+tx27QxEzlOOEG+pLqkve1nmgcBqNA9tMlsSj/HUz8n iV+xIubbY/aIUbRskfCZPdbw0EA9oT+4P/DXZpv2fndXg90hviJvAFwL gCbQwLAknpMxQJ9YHQnuETXY8hhRfFj8MPBVMqUNK2ET+7jq3cQ15+9g 8+U=
add ne05cc.hahnekedar. 2648 IN RRSIG NSEC 8 2 2648 20220714050033 20220630140405 54646 hahnekedar. oeJyXlKBRNiB55mDfGnpb+2aoxjwTDxHX6+s89FTXx4+OthF7Q0W6TJD xiQZvVDGzAreoSccAP8FDELWbSCO2Gz9FjZWWJwkMCjZrueC+A1eBEZj w2rTUigKrVL7U6dY8dy90KunQMf50VzUQEZVM0ylPTydowkbmEk/mVVo W04=
add xyz.xyz.hahnekedar. 2648 IN RRSIG NSEC 8 3 2648 20220714050033 20220630140405 54646 hahnekedar. EQEXU9OBa17qQhpAbRvvsOeGAnIwg8hXdykY0UNCJ4XFpxk7Iv/UchSI nh2c2ZisCX8Ulq6eCXnj37U4XMQBSCXbXpHj1SFae/lJ59c0GMXpVn1b /DRM6g1VTJjM9JX6y7Wa+dpxyJziaqP8bVWwqWHGAhW1Hzeh63feaPsZ Yhg=
add 0acC77eFAbbdc268.hahnekedar. 2648 IN NSEC xyz.0acC77eFAbbdc268.hahnekedar. AAAA RRSIG NSEC
add xyz.0acC77eFAbbdc268.hahnekedar. 2648 IN NSEC xyz.2Ce98A477.hahnekedar. AAAA RRSIG NSEC
add customer.hahnekedar. 2648 IN NSEC xyz.customer.hahnekedar. CNAME RRSIG NSEC
add xyz.customer.hahnekedar. 2648 IN NSEC DCB285c85e6.hahnekedar. CNAME RRSIG NSEC
add ne05cc.hahnekedar. 2648 IN NSEC nexus5b2002b.hahnekedar. A RRSIG NSEC
add xyz.xyz.hahnekedar. 2648 IN NSEC hahnekedar. NS RRSIG NSEC
```
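The `fetches-per-zone` accounting described above (per-domain counters with "allowed" and "spilled" tallies) can be modeled with a short illustrative sketch. The class and field names below are invented for illustration; this is not BIND's actual `fctx` code.

```python
# Illustrative model of per-domain fetch quotas (fetches-per-zone).
# Names are invented; this is not BIND's implementation.

class FetchQuota:
    def __init__(self, limit):
        self.limit = limit   # fetches-per-zone quota
        self.counts = {}     # bucket domain -> in-flight fetches
        self.allowed = 0     # fetches admitted so far
        self.spilled = 0     # fetches dropped over quota

    def start(self, domain):
        """Called when a fetch context is created; returns False on spill."""
        n = self.counts.get(domain, 0)
        if n >= self.limit:
            self.spilled += 1
            return False
        self.counts[domain] = n + 1
        self.allowed += 1
        return True

    def done(self, domain):
        """Called when the fetch completes."""
        self.counts[domain] -= 1

# Forward-only mode: every fetch is counted against the single '.'
# bucket, so unrelated names contend for one quota.
q = FetchQuota(limit=3)
for qname in ("www.google.com/A", "com/DS", "google.com/DS", "com/DNSKEY"):
    q.start(".")
print(q.counts["."], q.allowed, q.spilled)   # -> 3 3 1
```

In forward-only mode this single-bucket behavior means one busy client can spill fetches for completely unrelated names, which is the weakness the bucket-selection discussion above points at.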
### Possible fixes
Good luck :)

Status: Not planned

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3430
upgrade from Ubuntu 20.04 to ISC packages on Launchpad break `host`
2022-07-04T07:31:50Z by Petr Špaček (pspacek@isc.org)

### Summary
Upgrading from Ubuntu-supplied packages to ISC-supplied packages from Launchpad breaks the `host` utility.
### BIND version used
Platform: Ubuntu 20.04
Package version: 1:9.18.4-1+ubuntu20.04.1+isc+1
### Steps to reproduce
```
sudo add-apt-repository ppa:isc/bind
sudo apt update
sudo apt install bind9
```
### What is the current *bug* behavior?
Problem:
```console
$ host -V
host: error while loading shared libraries: libdns.so.1601: cannot open shared object file: No such file or directory
```
```console
$ dpkg -l | grep bind9
ii bind9 1:9.18.4-1+ubuntu20.04.1+isc+1 amd64 Internet Domain Name Server
ii bind9-dnsutils 1:9.18.4-1+ubuntu20.04.1+isc+1 amd64 Clients provided with BIND 9
ii bind9-host 1:9.16.1-0ubuntu2.10 amd64 DNS Lookup Utility
ii bind9-libs:amd64 1:9.18.4-1+ubuntu20.04.1+isc+1 amd64 Shared Libraries used by BIND 9
ii bind9-utils 1:9.18.4-1+ubuntu20.04.1+isc+1 amd64 Utilities for BIND 9
```
### What is the expected *correct* behavior?
The reporter expected the `bind9-host` package to be upgraded correctly as well.
FTR, I did not discover this myself; it was reported to me privately.

Assignee: Ondřej Surý

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3428
Dig 9.18 times out on mDNS queries
2022-06-30T12:54:49Z by Larry Stone

<!--
If the bug you are reporting is potentially security-related - for example,
if it involves an assertion failure or other crash in `named` that can be
triggered repeatedly - then please do *NOT* report it here, but send an
email to [security-officer@isc.org](security-officer@isc.org).
-->
### Summary
Dig times out when attempting to resolve an mDNS query. This is new behavior in 9.18; dig worked fine with 9.16 and before. I get the same behavior with both a built-from-source copy of dig and one installed from MacPorts.
### BIND version used
```
BIND 9.18.3 (Stable Release) <id:16aefa3>
running on Darwin arm64 21.5.0 Darwin Kernel Version 21.5.0: Tue Apr 26 21:08:37 PDT 2022; root:xnu-8020.121.3~4/RELEASE_ARM64_T6000
built by make with '--prefix=/opt/local' '--disable-silent-rules' '--mandir=/opt/local/share/man' '--with-openssl=/opt/local' '--with-libidn2=/opt/local' '--enable-doh' 'CC=/usr/bin/clang' 'CFLAGS=-pipe -Os -isysroot/Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -arch arm64' 'LDFLAGS=-L/opt/local/lib -Wl,-headerpad_max_install_names -Wl,-syslibroot,/Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk -arch arm64' 'CPPFLAGS=-I/opt/local/include -isysroot/Library/Developer/CommandLineTools/SDKs/MacOSX12.sdk'
compiled by CLANG Apple LLVM 13.1.6 (clang-1316.0.21.2.5)
compiled with OpenSSL version: OpenSSL 3.0.3 3 May 2022
linked to OpenSSL version: OpenSSL 3.0.3 3 May 2022
compiled with libuv version: 1.44.1
linked to libuv version: 1.44.1
compiled with libnghttp2 version: 1.47.0
linked to libnghttp2 version: 1.47.0
compiled with libxml2 version: 2.9.14
linked to libxml2 version: 20914
compiled with json-c version: 0.15
linked to json-c version: 0.15
compiled with zlib version: 1.2.12
linked to zlib version: 1.2.12
threads support is enabled
default paths:
named configuration: /opt/local/etc/named.conf
rndc configuration: /opt/local/etc/rndc.conf
DNSSEC root key: /opt/local/etc/bind.keys
nsupdate session key: /opt/local/var/run/named/session.key
named PID file: /opt/local/var/run/named/named.pid
named lock file: /opt/local/var/run/named/named.lock
```
### Steps to reproduce
Attempt an mDNS query with dig 9.18 as shown below.
### What is the current *bug* behavior?
```
$ dig +short @224.0.0.251 -p 5353 Maggie.local
;; connection timed out; no servers could be reached
```
### What is the expected *correct* behavior?
```
$ dig +short @224.0.0.251 -p 5353 Maggie.local
192.168.0.82
```
### Relevant configuration files
N/A
### Relevant logs and/or screenshots
PCAP file attached (if I've read tcpdump correctly, the query goes out and is answered but dig doesn't see it): [dig9_18mdns_202206272218.pcap](/uploads/1fa41ca129e9114aad64ac1ce49adf1c/dig9_18mdns_202206272218.pcap)
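One plausible mechanism (an assumption, not confirmed in this report): dig 9.18's UDP sockets are connect()ed to the queried address, and a connected UDP socket only receives datagrams whose source matches the connected peer exactly, while mDNS responders typically answer from their own unicast address rather than from 224.0.0.251:5353. The kernel-side source filtering is easy to demonstrate on loopback, with two ports standing in for the two source addresses:

```python
import socket

# The address the client queries (standing in for 224.0.0.251:5353).
target = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
target.bind(("127.0.0.1", 0))

# The responder's "real" source address (a different port here).
responder = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
responder.bind(("127.0.0.1", 0))

# A connected client socket: the kernel now discards any datagram
# whose source address/port is not exactly the connected peer.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.connect(target.getsockname())
client.settimeout(0.5)
client.send(b"fake mDNS query")

# The answer arrives from the responder's address, not the target's...
responder.sendto(b"fake mDNS answer", client.getsockname())

try:
    client.recv(512)
    result = "received"
except socket.timeout:
    result = "timed out"   # ...so the client never sees it.
print(result)              # -> timed out

for s in (target, responder, client):
    s.close()
```

An unconnected socket (plain `recvfrom()`) would have received the datagram, which matches the pcap observation that the answer is on the wire but dig never sees it.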
### Possible fixes
Unknown

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3311
Consider parent-centric delegations
2024-03-01T10:04:57Z by Ondřej Surý

This is an umbrella issue to discuss the parent vs child-centric delegations.
## Child-centric NS
The child-centric NS way lets the child NS records override the delegation NS, but the parent NS has to be used at least once. This works fine as long as the parent and child NS records are in sync. When they are not in sync (both inter and intra), the delegation NS actually used can vary between runs depending on what's in the cache.
## Parent-centric NS
The parent-centric NS way always uses the parent NS records for delegations, but requires a separate "delegation" database that's distinct from the resource-record cache. The parent-centric NS doesn't suffer from the problems that could happen when the child-NS and parent-NS are out of sync - there's only one "authority" for the delegation NS (parent).
This approach is not without problems: because of the way DNS is (under-)specified, child-centric NS handling has been used for a long time, and changing BIND 9 to use the parent NS will break some users' expectations. Fortunately for us, this path has already been paved by (at least) Nominum Vantio and Google Public DNS (and apparently the world didn't collapse).
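The difference between the two policies can be sketched as a selection function. This is illustrative Python with invented names, not BIND's data model:

```python
# Which NS RRset do we follow for a delegation? Illustrative only.

def delegation_ns(parent_ns, cached_child_ns, parent_centric):
    if parent_centric:
        # Always the delegation as published by the parent,
        # kept in a separate delegation database.
        return parent_ns
    # Child-centric: a cached child apex NS RRset overrides the
    # delegation, so the result depends on cache contents.
    return cached_child_ns if cached_child_ns is not None else parent_ns

parent = ["ns1.parent.example.", "ns2.parent.example."]
child = ["ns-new.child.example."]   # out of sync with the parent

# Child-centric answers differ depending on whether the child apex NS
# RRset happens to be cached; parent-centric answers do not.
print(delegation_ns(parent, None, parent_centric=False))   # parent set
print(delegation_ns(parent, child, parent_centric=False))  # child set
print(delegation_ns(parent, child, parent_centric=True))   # parent set
```

The cache-dependent branch in the child-centric case is exactly why the NS set used "can vary between runs" when parent and child are out of sync.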
## To be considered
- [ ] DS vs apex-CNAME
- [ ] parent vs child NSEC RRsets
- [ ] glue records from the parent pointing into the child zone
- [ ] Debug/query options
(add more as stuff comes up in the discussion)

Status: Not planned

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3305
Consider removing the built-in "_bind" view from the default configuration
2022-05-02T08:07:17Z by Michał Kępień

Time to stir up a hornet's nest!
The built-in `_bind` view has been part of BIND 9 since version 9.3.0.
Its purpose is to service CHAOS class queries for the following zones:
- `version.bind`
- `hostname.bind`
- `id.server`
- `authors.bind`
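These names live in class CHAOS, so they are reached with queries like `dig @server version.bind txt ch`. For illustration, here is a minimal hand-rolled wire-format CHAOS TXT query (plain Python, no BIND code involved):

```python
import struct

def chaos_txt_query(qname, qid=0x1234):
    """Build a wire-format DNS query for <qname> TXT in class CHAOS (3)."""
    # Header: ID, flags (standard query, no RD), QDCOUNT=1, AN/NS/AR=0.
    header = struct.pack(">HHHHHH", qid, 0x0000, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname_wire = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.rstrip(".").split(".")
    ) + b"\x00"
    question = qname_wire + struct.pack(">HH", 16, 3)  # QTYPE=TXT, QCLASS=CH
    return header + question

pkt = chaos_txt_query("version.bind")
print(len(pkt))   # -> 30 (12-byte header + 14-byte QNAME + QTYPE + QCLASS)
```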
I have some thoughts on these. YMMV.
- `version.bind`: commonly set to `none` or some nonsense string in
production environments because it is believed to be a security hole
:shrug: [citation needed]
- `hostname.bind`: superseded by [NSID][1], I think?
- `id.server`: same.
That leaves us with `authors.bind`, which is a bit of a delicate topic.
I would not want to hurt anyone's feelings, so please just hear me out;
this issue is meant to be a place for discussion.
The primary problem I have with the `_bind` view is that it is a
liability on memory-constrained platforms because its presence in the
default configuration causes a useless `dns_resolver_t` object to be
[unconditionally created][2] upon `named` startup. That is no small
object: it comes with tasks, dispatches, etc. - the ironic part being
that this view does not need recursion at all (`recursion no;` does not
help). To the best of my knowledge, there is no way to disable creating
that view in the configuration file; it can only be *replaced* with a
different view, which does not prevent the memory use problem.
Other hiccups which this view has caused in the past (that I can
recall...) include making the default configuration vulnerable to a
security issue related to RRL, which is enabled for the `_bind` view by
default (see [CVE-2021-25218][3]), or having to extend its configuration
to prevent it from uselessly allocating even more memory on startup (see
86698ded32515710b5b8734b4ed8ac4d2be62b60).
I have been running a home resolver with the `_bind` view removed from
the source code for about a year and a half now and I have not noticed
any adverse effects caused by that modification.
I think we should consider removing the `_bind` view from the default
configuration. It can always be re-enabled via explicit configuration,
if somebody wants that. In other words, I think it should be "opt-in"
rather than "opt-out" (noting that there is no way to *actually* opt-out
right now). I am *not* proposing to remove the code responsible for
preparing the contents of the `authors.bind` zone or any other built-in
zone served by the `_bind` view. It's just that IMHO the long-term
costs of maintaining this view in the default configuration are not
worth the benefits.
Let the tomatoes fly :tomato: :tomato: :tomato:
[1]: https://datatracker.ietf.org/doc/html/rfc5001
[2]: https://gitlab.isc.org/isc-projects/bind9/-/blob/fcab10a26ece6419c2f53a2ad82499b4b5ba75c5/bin/named/server.c#L4740-4743
[3]: https://gitlab.isc.org/isc-projects/bind9/-/issues/2856#note_229301

Status: Not planned

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3304
Consider dropping the JSON_C_TO_STRING_PRETTY flag used for generating JSON statistics
2022-04-27T09:43:52Z by Michał Kępień

JSON is a machine-readable format, so there is no need to [generate
statschannel output in a "pretty" form][1]. Quick experiments show that
redundant whitespace adds up to about 40% of the JSON payload produced
by statschannel code.
Yes, `lib/isc/httpd.c` supports DEFLATE compression via zlib and that
enables massive savings in terms of payload size, but it requires
clients to send the `Accept-Encoding: deflate` HTTP header in order to
kick in, so IMHO HTTP-level compression and producing "minified" JSON
data are tangential mechanisms rather than exclusive alternatives.
Piping "minified" JSON through `jq` allows one to get the "pretty" form
without the extra bandwidth cost.
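The effect is easy to reproduce in miniature. The document below is a toy stand-in with invented field names, not real statschannel output:

```python
import json
import zlib

# Toy stand-in for a statschannel JSON document: repetitive structure,
# like per-zone statistics.
doc = {"views": [{"name": "_default",
                  "zones": [{"name": "zone%d.example" % i, "serial": i}
                            for i in range(200)]}]}

pretty = json.dumps(doc, indent=4).encode()                # pretty-printed form
compact = json.dumps(doc, separators=(",", ":")).encode()  # "minified" form
deflated = zlib.compress(pretty)                           # deflate of the pretty form

print(len(pretty), len(compact), len(deflated))
# Minifying strips the redundant whitespace; deflate wins by far,
# but only for clients that send Accept-Encoding: deflate.
```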
Some semi-random measurements:
- ~"v9.19", 4 logical CPU cores
$ curl -s http://localhost:8080/json | wc -c
71386
$ curl -s http://localhost:8080/json | jq -c | wc -c
41881
$ curl -s -H "Accept-Encoding: deflate" http://localhost:8080/json | wc -c
4150
- ~"v9.19", 32 logical CPU cores
$ curl -s http://localhost:8080/json | wc -c
391217
$ curl -s http://localhost:8080/json | jq -c | wc -c
227837
$ curl -s -H "Accept-Encoding: deflate" http://localhost:8080/json | wc -c
16721
- ~"v9.16", 4 logical CPU cores
$ curl -s http://localhost:8080/json | wc -c
896972
$ curl -s http://localhost:8080/json | jq -c | wc -c
529643
$ curl -s -H "Accept-Encoding: deflate" http://localhost:8080/json | wc -c
25880
- ~"v9.16", 32 logical CPU cores
$ curl -s http://localhost:8080/json | wc -c
6954362
$ curl -s http://localhost:8080/json | jq -c | wc -c
4106064
$ curl -s -H "Accept-Encoding: deflate" http://localhost:8080/json | wc -c
174557
[1]: https://gitlab.isc.org/isc-projects/bind9/-/blob/fcab10a26ece6419c2f53a2ad82499b4b5ba75c5/bin/named/statschannel.c#L3264-3265

Status: Not planned