BIND issues
https://gitlab.isc.org/isc-projects/bind9/-/issues (feed updated 2024-03-01T10:04:57Z)

https://gitlab.isc.org/isc-projects/bind9/-/issues/3396
Cached RRSIGs need to be more tightly bound to the covered RRset (Mark Andrews, updated 2024-03-01T10:04:57Z)

It is currently possible for the RRSIGs to be deleted from the cache while the covered RRset remains (e.g. removed for space reasons). This causes problems for downstream validators, as they won't get the RRSIGs returned along with the RRset requested.
Provide a mechanism to tightly bind the RRSIGs to the covered RRset so that they can't be deleted separately. Perhaps adding methods
```
isc_result_t
dns_rdataset_addrrsigs(dns_rdataset_t *rdataset, dns_rdataset_t *sigrdataset);
isc_result_t
dns_rdataset_getrrsigs(dns_rdataset_t *rdataset, dns_rdataset_t *sigrdataset);
```
could be used to attach and retrieve an RRSIG RRset to/from a non-RRSIG RRset. The cache would no longer maintain separate RRSIG RRsets but would return those bound to the covered RRset.

Status: Not planned. Assignee: Mark Andrews.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3391
add current View name to dnstap messages (Peter Much, updated 2022-08-25T04:13:28Z)

### Description
A combined nameserver for authoritative and recursive tasks, for intranet and public service, plus root-slave etc., may need six or more partially interconnected views. Dnstap is the perfect tool to debug and verify that all of it actually does what is intended. But for this to succeed, each dnstap message would need to carry information about which view is currently acting.
### Request
The 'dnstap-identity' option should work within a view statement. (The documentation does not state that it does *not* work, but it gets rejected by the software.)
The administrator could then choose an appropriate name for each view individually.
### Links / references
https://lists.isc.org/pipermail/bind-users/2022-June/106295.html
https://lists.isc.org/pipermail/bind-users/2022-June/106296.html

Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3382
named shutdown blocks on dnstap Unix domain socket (Borja Marcos EA2EKH, updated 2022-06-14T16:42:04Z)
### Summary
When using a Unix domain socket for dnstap output, named blocks when shutting down (either using `rndc stop` or `kill -TERM`) if the socket is connected and the dnstap server process is not consuming messages.
### BIND version used
Affects both 9.16 and 9.18 (tested on FreeBSD)
### Steps to reproduce
1. Configure dnstap to use a Unix domain socket.
1. Start a dnstap server process (for example dnstap/golang-dnstap).
1. Let named process some queries
1. Stop the dnstap process either with `kill -STOP` or using ctrl-Z if running in foreground
1. Send some more queries to named to make sure there are unprocessed dnstap messages
1. Try to stop named using `rndc stop` or `kill -TERM`
### What is the current *bug* behavior?
The named process stays blocked forever.
It won't exit unless the dnstap server process is killed or resumed, consuming the messages buffered in the Unix socket.
The process has closed all of its descriptors, so it won't answer queries nor respond to `rndc`.
### What is the expected *correct* behavior?
The process should terminate. Maybe a short timeout would be in order, but it should not block on dnstap output.
### Relevant configuration files
```
# named-checkconf -px
logging {
channel "graylog" {
syslog "local1";
severity info;
print-time iso8601-utc;
print-category yes;
};
category "default" {
"graylog";
};
};
options {
directory "/usr/local/etc/namedb/working";
dnstap-output unix "/tmp/dnstap.sock";
dump-file "/var/dump/named_dump.db";
listen-on {
127.0.0.1/32;
};
listen-on {
192.168.1.155/32;
};
listen-on-v6 {
::1/128;
X:Y:Z:5353::1/128;
X:Y:Z:5353::2/128;
};
pid-file "/var/run/named/pid";
querylog yes;
recursive-clients 256;
statistics-file "/var/stats/named.stats";
allow-recursion {
127.0.0.1/32;
192.168.1.0/24;
192.168.2.0/24;
192.168.3.0/24;
192.168.0.0/16;
::1/128;
X:Y:Z::/48;
};
disable-empty-zone "255.255.255.255.IN-ADDR.ARPA";
disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
dnssec-validation auto;
dnstap {
all;
};
max-cache-size 16777216;
query-source address X.Y.Z.T port 0;
request-nsid no;
resolver-query-timeout 20000;
send-cookie no;
stale-answer-enable no;
allow-query {
127.0.0.1/32;
192.168.1.0/24;
192.168.2.0/24;
192.168.3.0/24;
X.Y.Z.T/32;
192.168.0.0/16;
::1/128;
X:Y:Z::/48;
};
transfer-source X.Y.Z.T;
};
statistics-channels {
inet 127.0.0.1 port 8053 allow {
127.0.0.1/32;
};
};
server 82.159.210.51/32 {
send-cookie no;
};
server 2001:500:94::/48 {
bogus yes;
};
server 204.13.251.136/32 {
bogus yes;
};
server 208.78.71.136/32 {
bogus yes;
};
zone "." {
type mirror;
};
zone "localhost" {
type master;
file "/usr/local/etc/namedb/primary/localhost-forward.db";
};
zone "127.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/localhost-reverse.db";
};
zone "255.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "rpzzone" {
type master;
file "/usr/local/etc/namedb/primary/rpz.db";
};
zone "0.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/localhost-reverse.db";
};
zone "0.0.192.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "test" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "example" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "invalid" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "example.com" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "example.net" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "example.org" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "18.198.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "19.198.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "240.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "241.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "242.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "243.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "244.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "245.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "246.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "247.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "248.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "249.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "250.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "251.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "252.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "253.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "254.in-addr.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "1.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "3.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "4.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "5.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "6.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "7.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "8.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "9.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "a.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "b.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "c.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "d.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "e.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "0.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "1.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "2.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "3.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "4.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "5.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "6.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "7.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "8.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "9.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "a.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "b.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "0.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "1.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "2.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "3.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "4.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "5.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "6.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "7.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "c.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "d.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "c.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "d.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "e.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "f.e.f.ip6.arpa" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
zone "ip6.int" {
type master;
file "/usr/local/etc/namedb/primary/empty.db";
};
```
### Relevant logs and/or screenshots
```
9943 100117 named SCTL "kern.hostname"
9943 100117 named RET __sysctl 0
9943 100117 named CALL getpid
9943 100117 named RET getpid 9943/0x26d7
9943 100117 named CALL sendto(0x3,0x7fffffffc3b0,0x6c,0,0,0)
9943 100117 named GIO fd 3 wrote 108 bytes
"<142>1 2022-05-30T10:32:37.874081+00:00 elnuc named 9943 - - 2022-05-3\
0T10:32:37.874Z dnstap: closing dnstap"
9943 100117 named RET sendto 108/0x6c
9943 100117 named CALL _umtx_op(0x802102000,0x2<UMTX_OP_WAIT>,0x32d6e,0,0)
```
After resuming the dnstap process,
```
9943 208238 named GIO fd 78 wrote 4096 bytes
(some data written)
9943 208238 named RET sendmsg 8153/0x1fd9
9943 208238 named CALL sendmsg(0x4e,0x7fffdedf4ec0,0x20000<MSG_NOSIGNAL>)
9943 208238 named GIO fd 78 wrote 4096 bytes
(more data)
9943 208238 named RET sendmsg 7236/0x1c44
9943 208238 named CALL sendmsg(0x4e,0x7fffdedf4e30,0x20000<MSG_NOSIGNAL>)
9943 208238 named GIO fd 78 wrote 12 bytes
0x0000 0000 0000 0000 0004 0000 0003 |............|
9943 208238 named RET sendmsg 12/0xc
9943 208238 named CALL read(0x4e,0x7fffdedf4edc,0x4)
9943 208238 named GIO fd 78 read 4 bytes
"\0\0\0\0"
9943 208238 named RET read 4
9943 208238 named CALL read(0x4e,0x7fffdedf4edc,0x4)
9943 208238 named GIO fd 78 read 4 bytes
0x0000 0000 0004 |....|
9943 208238 named RET read 4
9943 208238 named CALL read(0x4e,0x7fffdedf4eb0,0x4)
9943 208238 named GIO fd 78 read 4 bytes
0x0000 0000 0005 |....|
9943 208238 named RET read 4
9943 208238 named CALL close(0x4e)
9943 208238 named RET close 0
9943 208238 named CALL madvise(0x802f79000,0x16000,MADV_FREE)
9943 208238 named RET madvise 0
9943 208238 named CALL madvise(0x801570000,0x1d000,MADV_FREE)
9943 208238 named RET madvise 0
9943 208238 named CALL madvise(0x8014c9000,0x5000,MADV_FREE)
9943 208238 named RET madvise 0
9943 208238 named CALL madvise(0x801526000,0x27000,MADV_FREE)
9943 208238 named RET madvise 0
9943 208238 named CALL madvise(0x80159c000,0x5e000,MADV_FREE)
9943 208238 named RET madvise 0
9943 208238 named CALL thr_exit(0x802102000)
9943 100117 named RET _umtx_op 0 **LOOKS LIKE IT WAS BLOCKED HERE**
9943 100117 named CALL __sysctl(0x7fffffffa2f0,0x2,0x7fffffffe400,0x7fffffffa2e8,0,0)
9943 100117 named SCTL "kern.hostname"
9943 100117 named RET __sysctl 0
9943 100117 named CALL getpid
9943 100117 named RET getpid 9943/0x26d7
9943 100117 named CALL sendto(0x3,0x7fffffffc400,0x66,0,0,0)
9943 100117 named GIO fd 3 wrote 102 bytes
"<141>1 2022-05-30T10:47:41.246758+00:00 elnuc named 9943 - - 2022-05-3\
0T10:47:41.246Z general: exiting"
9943 100117 named RET sendto 102/0x66
9943 100117 named CALL close(0x4)
9943 100117 named RET close 0
9943 100117 named CALL close(0x3)
9943 100117 named RET close 0
9943 100117 named CALL unlink(0x8020eba80)
9943 100117 named NAMI "/var/run/named/pid"
```
### Possible fixes
Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3379
Expose general health indication in stats (Tony Finch, updated 2023-12-12T05:50:59Z)

A suggestion from Chris Siebenmann's blog:
https://utcc.utoronto.ca/~cks/space/blog/sysadmin/HaveGeneralHealthMetric

> If your system is reasonably decent sized, it probably has some sort of logging framework that categorizes log messages by both subsystem and broad level of alarmingness. Add a hook into your logging system so that you track the last time a message was emitted for a given subsystem at a given priority level, and expose these times (with level and subsystem) as metrics. Then people like me can put together monitoring for things like 'the Prometheus TSDB has logged warnings or above within the last five minutes'.

Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3273
Refactor `dns_rdataset_t->privateN` (Tony Finch, updated 2022-04-28T12:12:19Z)

The generic `dns_rdataset_t` structure has a number of `private` fields that are (according to the comment before their declarations) "for use by the rdataset implementation, and MUST NOT be changed by clients." That suggests they should only be used by `dns_rdatalist_t` and the `rdataslab` implementation (which is largely in `rbtdb.c`). However, the `privateN` fields are also used by:
* `dnsrps.c`
* `keytable.c`
* `ncache.c`
* `sdb.c`
* `sdlz.c`
It is not clear what each `privateN` field is for, which code owns which fields, whether or not there are clashes (or how developers can be sure to avoid them).
This is one of the work items tracked in #3268.

Assignee: Tony Finch.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3268
Refactor RBTDB (Tony Finch, updated 2023-11-01T08:05:30Z)

Whereas #722 is about the rbt lookup data structure (used in lots of places in BIND), this is about the rbtdb that is built on rbt and stores cache contents and authoritative zone data.
The problem is that rbtdb is a tangle of functionality that would ideally be separated, and the tangle is not isolated but ties in to other parts of the code.
I plan to use this issue to collect notes about rbtdb-related things that need cleaning up, so expect it to have an unclear scope, and plenty of edits as our understanding changes. When we find a problem that has a clear outline and a feasible solution, that will become a separate issue.
## tangles - vertical
In principle there are a few parts in a DNS database that ideally would be separated:
* general-purpose lookup index structures, e.g. comparison tree, radix tree, hash table
* DNS-specific helper structures for handling the "tricky details" listed in the [DB notes](https://gitlab.isc.org/isc-projects/bind9/-/wikis/BIND-9.19-Planning:-DB-notes).
* representation of domain names and DNS rdata in memory
BIND's red-black tree contains extra fields that are just for the rbtdb, even though the rbt is used for a number of other purposes.
Similarly, `rdataslab` storage is notionally separated from rbtdb, but it is only used by rbtdb, and much of the `rdataslab` implementation is inside rbtdb.
It is not clear from the rdataset API whether it is just for accessing DNS rdata, because it also helps with the DNS namespace. There isn't a distinct layer for that: it is handled partly by the basic RBT, partly by RBTDB, and partly by the rdataset/rdataslab interface.
## tangles - horizontal
The "tricky details" of a DNS cache are rather different from authoritative data, and it isn't obvious that it makes sense to use the same data structures for them both. In BIND the rbtdb mixes them together.
The separation between `rdataslab` and rbtdb might make more sense if the `header` part were different for the cache and for an authoritative zone, but this separation isn't used, and its implementation is so dirty that it probably would not make it easier to separate cache and auth.
BIND has multiple implementations of its `dns_db` interface for authoritative zones. They all use `dns_rdataset_t` to pass DNS records in and out. The `rdataset` structure has some support for this polymorphism, but it is woeful.
## specific things to fix
- [x] Refactor `dns_rdataset_t->privateN` #3273
- [x] make the rdataslab implementation more self-contained
- can we split off the header structure that has the cache/auth details? or would that harm performance too much?
- can we make the rdataslab implementation more typeful, less like hand-written serialization and deserialization code?
- [ ] the layout of an rdataslab is not the same for all rrtypes: RRSIG records have an extra byte to remember whether the corresponding key is offline or not
- can the offline status of a key be stored with the key instead?
- [ ] when a `struct` is directly followed in memory by some specially structured data, use a flexible array member to indicate this
- the domain name following an rbtnode
- the raw rdata following an rdataslab header
- [ ] examine how the noqname and covers methods on rdatasets work: does it make sense to decouple them?
- might need to do this before the flexible array member change, because of an interaction between rbtdb's rdataset raw pointer and its noqname and covers implementations

Assignee: Tony Finch.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3251
Follow-up from "Cleanup the tasks and memory contexts in dns_zone" (Ondřej Surý, updated 2022-04-01T18:34:39Z)

The following discussion from !6004 should be addressed:
- [ ] @each started a [discussion](https://gitlab.isc.org/isc-projects/bind9/-/merge_requests/6004#note_277859): (+1 comment)
> Perhaps `UINT_MAX` could mean "pick a thread ID at random"?

https://gitlab.isc.org/isc-projects/bind9/-/issues/3246
Add log rolling by time. (George Mitchell, updated 2022-04-01T07:25:32Z)

### Description
Enable log rolling by time; not currently supported by the configuration file "logging" command.
### Request
Add a command channel command "closelog" to call the existing routine isc_log_closefilelogs. Invoking this command from a cron job will enable the sysadmin to roll logs by time. Due to the lack of this feature, I have used (versions of) the attached patch for years with no problems. (The patch applies cleanly to bind-9.16.27.) [bind9.patch](/uploads/3f7f5b3ba95062497c4e8228d5448c73/bind9.patch)
### Links / references

Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3240
Add code paths to fully support PRIVATEDNS and PRIVATEOID keys. (Mark Andrews, updated 2022-03-30T00:13:12Z)

PRIVATEDNS and PRIVATEOID require that the algorithm's name/OID also be checked along with its number (253/254). We left the implementation until there was an actual PRIVATEDNS or PRIVATEOID algorithm requested/specified. There are moves in DNSOP to allocate a set of test algorithms, which would effectively burn each test algorithm number used for testing, as you can't do proper cleanups at scale.
Add code and tests using example.com and ISC's OID (1.3.6.1.4.1.2495) to prevent collisions, demonstrating that PRIVATEDNS and PRIVATEOID can be used.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3233
Build broken when using FreeBSD's make instead of gnu make (Mathieu Arnold, updated 2022-03-28T16:32:50Z)
### Summary
After b42681c4e973fc819a42c36b18cfb59415d0fcba, the build is broken in both main and v9_18 branches.
### BIND version used
Anything after b42681c4e973fc819a42c36b18cfb59415d0fcba
### Steps to reproduce
Build with FreeBSD's make.
### What is the current *bug* behavior?
```
reading sources... [ 73%] named-nzd2nzf
reading sources... [ 76%] named-rrchecker
reading sources... [ 79%] named.conf
reading sources... [ 82%] nsec3hash
reading sources... [ 85%] nslookup
reading sources... [ 88%] nsupdate
reading sources... [ 91%] rndc
reading sources... [ 94%] rndc-confgen
reading sources... [ 97%] rndc.conf
reading sources... [100%] tsig-keygen
Warning, treated as error:
../../bin/delv/delv.rst:105:Undefined substitution referenced: "bind_keys".
*** Error code 2
Stop.
make[5]: stopped in /wrkdirs/usr/ports/dns/bind-tools/work/bind9-23cb022247e414bb99d901ed5de0f8f0bc9b9b90/doc/man
*** Error code 1
Stop.
make[4]: stopped in /wrkdirs/usr/ports/dns/bind-tools/work/bind9-23cb022247e414bb99d901ed5de0f8f0bc9b9b90/doc
*** Error code 1
```
### What is the expected *correct* behavior?
Well, it should build fine.
### Possible fixes
Using GNU Make to build the docs works, but it is suboptimal.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3131
test case timeout when the number of CPUs is 128 (jin gg, updated 2022-02-21T11:59:29Z)

Test case timeout when the number of CPUs is 128:
![image](/uploads/c53b15deba7467dd5e2d82cf56c92288/image.png)
I see that some test cases create manager workers matching the number of CPUs on the machine. This is the root cause of the test case timeout: the more worker threads, the worse the performance seems.
![image](/uploads/00c9a46063873de9cfd1f78d3ca782a3/image.png)
At the same time, I see that named's main program also uses the number of CPUs on the machine to create manager worker threads, so I wonder whether this affects named's performance when the number of CPUs is high.
![image](/uploads/1d7157ba9cafb358f80ea2fd3005fa25/image.png)

Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3121
rndc reload ignores changes to "querylog" (Peter Davies, updated 2022-03-01T09:47:04Z)

rndc reload ignores changes to "querylog":
The "querylog" statement in "options" appears not to get updated after BIND received an "rndc reload" command.
The "rndc querylog" does however change the logging of queries.
Configurations that define the "queries" logging category have query logging enabled and are not affected.
There may be cause to update the ARM to describe this behaviour.
1) Start BIND with no "querylog" statement in "options":

       rndc status
       query logging is OFF (queries are not logged)

2) Change to "querylog yes;" in "options":

       rndc reload
       rndc status
       query logging is OFF (queries are not logged) *

3) Start BIND with "querylog no;" in "options":

       rndc reload
       rndc status
       query logging is OFF (queries are not logged)

4) Change to "querylog yes;" in "options":

       rndc reload
       rndc status
       query logging is NO (queries are not logged) *

5) Start BIND with "querylog yes;" in "options":

       rndc reload
       rndc status
       query logging is ON (queries are logged to syslog)

6) Change to "querylog no;" in "options":

       rndc reload
       rndc status
       query logging is ON (queries are logged to syslog) *
[RT #20067](https://support.isc.org/Ticket/Display.html?id=20067)

Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3098
Intermittent mkeys test failure (Ondřej Surý, updated 2022-01-14T13:53:30Z)

https://gitlab.isc.org/isc-projects/bind9/-/jobs/2231254
```
I:mkeys:reset the root server with no keys, check for minimal update (23)
I:mkeys:ns2 refreshing managed keys for '_default'
I:mkeys:ns2 refreshing managed keys for '_default'
I:mkeys:failed
I:mkeys:reset the root server with no signatures, check for minimal update (24)
```
I am guessing this is timing related again.

Status: Not planned.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3094
doth: RNDC reconfiguration too slow on OpenIndiana (Michal Nowak, updated 2022-01-14T11:18:06Z)

Even with 2e5f9a0df5e0a8a15c1cdf69f2aae55fd7a0aca3, RNDC reconfiguration in the `doth` system test takes 30-60 seconds and `dig` invocations fail with "connection refused" on OpenIndiana (but not on Solaris 11.4). This affects the `doth` tests "checking DoT query after a reconfiguration" and "checking DoH query (POST) after a reconfiguration".
Keep the `named` instances from the `doth` system test running and issue the `../../../bin/rndc/rndc -c common/rndc.conf -p 5312 -s 10.53.0.4 reconfig` command:
```
...
11-Jan-2022 16:43:49.938 calling free_rbtdb(.)
11-Jan-2022 16:43:49.938 done free_rbtdb(.)
```
Only after 30 seconds (sometimes close to 60 seconds) `named` is ready:
```
11-Jan-2022 16:44:19.082 listening on IPv4 interface lo0, 10.53.0.4#5301
11-Jan-2022 16:44:19.083 listening on IPv4 interface lo0, 10.53.0.4#5303
```
Otherwise, `../../../bin/dig/dig +tls +noadd +nosea +nostat +noquest +nocmd -p 5301 @10.53.0.4 example SOA` fails with:
```
;; Connection to 10.53.0.4#5301(10.53.0.4) for example failed: connection refused.
;; Connection to 10.53.0.4#5301(10.53.0.4) for example failed: connection refused.
;; Connection to 10.53.0.4#5301(10.53.0.4) for example failed: connection refused.
```
CPU utilization of `named` looks sub-1% during the reconfiguration.

Status: Not planned. Assignee: Artem Boldariev.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3089
dispatch_test unit test fails on Dragonfly BSD (Michal Nowak, updated 2023-11-02T17:02:22Z)

`dispatch_test` unit test fails on Dragonfly BSD 6.2.1.
```
Core was generated by `dispatch_test'.
Program terminated with signal 6, Aborted.
#0 0x00000008019a879c in lwp_kill () from /lib/libc.so.8
#0 0x00000008019a879c in lwp_kill () from /lib/libc.so.8
#1 0x000000080173c7f2 in _thr_send_sig () from /usr/lib/libpthread.so.0
#2 0x0000000801733ee5 in raise () from /usr/lib/libpthread.so.0
#3 0x0000000801a42dff in abort () from /lib/libc.so.8
#4 0x000000080069030f in isc_assertion_failed (file=file@entry=0x8006c3822 "netmgr/netmgr.c", line=line@entry=2565, type=type@entry=isc_assertiontype_require, cond=cond@entry=0x8006c8780 "(((handle) != ((void *)0) && ((const isc__magic_t *)(handle))->magic == ((('N') << 24 | ('M') << 16 | ('H') << 8 | ('D')))) && __extension__ ({ __auto_type __atomic_load_ptr = (&(handle)->references);"...) at assertions.c:48
#5 0x000000080067d1df in isc_nm_send (handle=handle@entry=0x0, region=region@entry=0x7fffffdfc720, cb=cb@entry=0x800c3ac4b <send_done>, cbarg=cbarg@entry=0x8023e3a20) at netmgr/netmgr.c:2567
#6 0x0000000800c3d506 in dns_dispatch_send (resp=0x8023e3a20, r=r@entry=0x7fffffdfc720, dscp=dscp@entry=-1) at dispatch.c:1913
#7 0x0000000000403082 in connected (eresult=<optimized out>, region=<optimized out>, cbarg=0x7fffffdfc720) at dispatch_test.c:406
#8 0x0000000800c3b0fd in tcp_connected (handle=<optimized out>, eresult=ISC_R_UNEXPECTED, arg=<optimized out>) at dispatch.c:1747
#9 0x000000080067f739 in isc__nm_async_connectcb (worker=worker@entry=0x802558020, ev0=ev0@entry=0x802346ce0) at netmgr/netmgr.c:2763
#10 0x00000008006803ee in process_netievent (worker=worker@entry=0x802558020, ievent=0x802346ce0) at netmgr/netmgr.c:968
#11 0x00000008006806c7 in process_queue (worker=worker@entry=0x802558020, type=type@entry=NETIEVENT_NORMAL) at netmgr/netmgr.c:1008
#12 0x0000000800680da8 in process_all_queues (worker=0x802558020) at netmgr/netmgr.c:754
#13 async_cb (handle=0x8025582f8) at netmgr/netmgr.c:783
#14 0x000000080048c871 in ?? () from /usr/local/lib/libuv.so.1
#15 0x000000080049cd6a in ?? () from /usr/local/lib/libuv.so.1
#16 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
#17 0x00000008006807bc in nm_thread (worker0=0x802558020) at netmgr/netmgr.c:689
#18 0x00000008006b8b03 in isc__trampoline_run (arg=0x802498440) at trampoline.c:185
#19 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
#20 0x0000000000000000 in ?? ()
```
<details><summary>backtrace</summary>

```
[newman@ ~/bind9]$ ./libtool --mode=execute /usr/bin/gdb.base -batch -command=bin/tests/system/run.gdb -core=lib/dns/tests/dispatch_test.core -- lib/dns/tests/.libs/dispatch_test
[New process 7]
[New process 1]
[New process 2]
[New process 3]
[New process 4]
[New process 5]
[New process 6]
[New process 8]
[New process 9]
[New process 10]
Core was generated by `dispatch_test'.
Program terminated with signal 6, Aborted.
#0 0x00000008019a879c in lwp_kill () from /lib/libc.so.8
Thread 10 (process 10):
#0 0x00000008019a8c7c in kevent () from /lib/libc.so.8
No symbol table info available.
#1 0x000000080049cba3 in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#2 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
No symbol table info available.
#3 0x00000008006807bc in nm_thread (worker0=0x802558b90) at netmgr/netmgr.c:689
r = <optimized out>
worker = 0x802558b90
mgr = 0x8023b0ea0
#4 0x00000008006b8b03 in isc__trampoline_run (arg=0x8024968c0) at trampoline.c:185
trampoline = 0x8024968c0
result = <optimized out>
#5 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#6 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 9 (process 9):
#0 0x00000008019a8c7c in kevent () from /lib/libc.so.8
No symbol table info available.
#1 0x000000080049cba3 in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#2 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
No symbol table info available.
#3 0x00000008006807bc in nm_thread (worker0=0x8025587c0) at netmgr/netmgr.c:689
r = <optimized out>
worker = 0x8025587c0
mgr = 0x8023b0ea0
#4 0x00000008006b8b03 in isc__trampoline_run (arg=0x802496a00) at trampoline.c:185
trampoline = 0x802496a00
result = <optimized out>
#5 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#6 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 8 (process 8):
#0 0x00000008019a8c7c in kevent () from /lib/libc.so.8
No symbol table info available.
#1 0x000000080049cba3 in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#2 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
No symbol table info available.
#3 0x00000008006807bc in nm_thread (worker0=0x8025583f0) at netmgr/netmgr.c:689
r = <optimized out>
worker = 0x8025583f0
mgr = 0x8023b0ea0
#4 0x00000008006b8b03 in isc__trampoline_run (arg=0x802498b00) at trampoline.c:185
trampoline = 0x802498b00
result = <optimized out>
#5 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#6 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 7 (process 6):
#0 0x00000008017388bc in _umtx_sleep_err () from /usr/lib/libpthread.so.0
No symbol table info available.
#1 0x00000008017387fa in _thr_umtx_wait () from /usr/lib/libpthread.so.0
No symbol table info available.
#2 0x0000000801736531 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#3 0x0000000801736990 in pthread_cond_wait () from /usr/lib/libpthread.so.0
No symbol table info available.
#4 0x00000008006b4c88 in run (uap=0x8024a10a0) at timer.c:621
manager = 0x8024a10a0
now = {seconds = 1641839066, nanoseconds = 218490898}
result = <optimized out>
#5 0x00000008006b8b03 in isc__trampoline_run (arg=0x802497fa0) at trampoline.c:185
trampoline = 0x802497fa0
result = <optimized out>
#6 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#7 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 6 (process 5):
#0 0x00000008019a8c7c in kevent () from /lib/libc.so.8
No symbol table info available.
#1 0x000000080049cba3 in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#2 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
No symbol table info available.
#3 0x00000008006807bc in nm_thread (worker0=0x80255fb90) at netmgr/netmgr.c:689
r = <optimized out>
worker = 0x80255fb90
mgr = 0x8023b12c0
#4 0x00000008006b8b03 in isc__trampoline_run (arg=0x8024979a0) at trampoline.c:185
trampoline = 0x8024979a0
result = <optimized out>
#5 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#6 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 5 (process 4):
#0 0x00000008019a8c7c in kevent () from /lib/libc.so.8
No symbol table info available.
#1 0x000000080049cba3 in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#2 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
No symbol table info available.
#3 0x00000008006807bc in nm_thread (worker0=0x80255f7c0) at netmgr/netmgr.c:689
r = <optimized out>
worker = 0x80255f7c0
mgr = 0x8023b12c0
#4 0x00000008006b8b03 in isc__trampoline_run (arg=0x802496500) at trampoline.c:185
trampoline = 0x802496500
result = <optimized out>
#5 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#6 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 4 (process 3):
#0 0x00000008019a8c7c in kevent () from /lib/libc.so.8
No symbol table info available.
#1 0x000000080049cba3 in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#2 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
No symbol table info available.
#3 0x00000008006807bc in nm_thread (worker0=0x80255f3f0) at netmgr/netmgr.c:689
r = <optimized out>
worker = 0x80255f3f0
mgr = 0x8023b12c0
#4 0x00000008006b8b03 in isc__trampoline_run (arg=0x802497de0) at trampoline.c:185
trampoline = 0x802497de0
result = <optimized out>
#5 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#6 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 3 (process 2):
#0 0x00000008019a8c7c in kevent () from /lib/libc.so.8
No symbol table info available.
#1 0x000000080049cba3 in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#2 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
No symbol table info available.
#3 0x00000008006807bc in nm_thread (worker0=0x80255f020) at netmgr/netmgr.c:689
r = <optimized out>
worker = 0x80255f020
mgr = 0x8023b12c0
#4 0x00000008006b8b03 in isc__trampoline_run (arg=0x8024985c0) at trampoline.c:185
trampoline = 0x8024985c0
result = <optimized out>
#5 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#6 0x0000000000000000 in ?? ()
No symbol table info available.
Thread 2 (process 1):
#0 0x00000008017388bc in _umtx_sleep_err () from /usr/lib/libpthread.so.0
No symbol table info available.
#1 0x000000080173886e in _thr_umtx_wait_intr () from /usr/lib/libpthread.so.0
No symbol table info available.
#2 0x0000000801739f5a in sem_wait () from /usr/lib/libpthread.so.0
No symbol table info available.
#3 0x000000080049943d in uv_sem_wait () from /usr/local/lib/libuv.so.1
No symbol table info available.
#4 0x0000000000403950 in dispatch_timeout_tcp_response (state=<optimized out>) at dispatch_test.c:532
result = <optimized out>
region = {base = 0x7fffffdfc708 "V\001", length = 12}
rbuf = '\000' <repeats 11 times>
message = "V\001\000\000\000\000\000\000\000\000\000"
id = 22017
sock = 0x80253e220
#5 0x0000000800606ae7 in ?? () from /usr/local/lib/libcmocka.so.0
No symbol table info available.
#6 0x00000008006073e8 in _cmocka_run_group_tests () from /usr/local/lib/libcmocka.so.0
No symbol table info available.
#7 0x00000000004041a2 in main () at dispatch_test.c:736
tests = {{name = 0x405256 "dispatch_timeout_tcp_connect", test_func = 0x403f87 <dispatch_timeout_tcp_connect>, setup_func = 0x403dde <_setup>, teardown_func = 0x403cd3 <_teardown>, initial_state = 0x0}, {name = 0x405273 "dispatch_timeout_tcp_response", test_func = 0x4037dc <dispatch_timeout_tcp_response>, setup_func = 0x403dde <_setup>, teardown_func = 0x403cd3 <_teardown>, initial_state = 0x0}, {name = 0x405291 "dispatch_tcp_response", test_func = 0x403a47 <dispatch_tcp_response>, setup_func = 0x403dde <_setup>, teardown_func = 0x403cd3 <_teardown>, initial_state = 0x0}, {name = 0x4052a7 "dispatch_timeout_udp_response", test_func = 0x4034a7 <dispatch_timeout_udp_response>, setup_func = 0x403dde <_setup>, teardown_func = 0x403cd3 <_teardown>, initial_state = 0x0}, {name = 0x4052c5 "dispatchset_create", test_func = 0x403445 <dispatchset_create>, setup_func = 0x403dde <_setup>, teardown_func = 0x403cd3 <_teardown>, initial_state = 0x0}, {name = 0x4052d8 "dispatchset_get", test_func = 0x403202 <dispatchset_get>, setup_func = 0x403dde <_setup>, teardown_func = 0x403cd3 <_teardown>, initial_state = 0x0}, {name = 0x4052e8 "dispatch_getnext", test_func = 0x402d57 <dispatch_getnext>, setup_func = 0x403dde <_setup>, teardown_func = 0x403cd3 <_teardown>, initial_state = 0x0}}
Thread 1 (process 7):
#0 0x00000008019a879c in lwp_kill () from /lib/libc.so.8
No symbol table info available.
#1 0x000000080173c7f2 in _thr_send_sig () from /usr/lib/libpthread.so.0
No symbol table info available.
#2 0x0000000801733ee5 in raise () from /usr/lib/libpthread.so.0
No symbol table info available.
#3 0x0000000801a42dff in abort () from /lib/libc.so.8
No symbol table info available.
#4 0x000000080069030f in isc_assertion_failed (file=file@entry=0x8006c3822 "netmgr/netmgr.c", line=line@entry=2565, type=type@entry=isc_assertiontype_require, cond=cond@entry=0x8006c8780 "(((handle) != ((void *)0) && ((const isc__magic_t *)(handle))->magic == ((('N') << 24 | ('M') << 16 | ('H') << 8 | ('D')))) && __extension__ ({ __auto_type __atomic_load_ptr = (&(handle)->references);"...) at assertions.c:48
No locals.
#5 0x000000080067d1df in isc_nm_send (handle=handle@entry=0x0, region=region@entry=0x7fffffdfc720, cb=cb@entry=0x800c3ac4b <send_done>, cbarg=cbarg@entry=0x8023e3a20) at netmgr/netmgr.c:2567
No locals.
#6 0x0000000800c3d506 in dns_dispatch_send (resp=0x8023e3a20, r=r@entry=0x7fffffdfc720, dscp=dscp@entry=-1) at dispatch.c:1913
handle = 0x0
#7 0x0000000000403082 in connected (eresult=<optimized out>, region=<optimized out>, cbarg=0x7fffffdfc720) at dispatch_test.c:406
r = 0x7fffffdfc720
__func__ = "connected"
#8 0x0000000800c3b0fd in tcp_connected (handle=<optimized out>, eresult=ISC_R_UNEXPECTED, arg=<optimized out>) at dispatch.c:1747
disp = 0x802543b00
resp = 0x8023e3a20
next = 0x0
resps = {head = 0x0, tail = 0x0}
#9 0x000000080067f739 in isc__nm_async_connectcb (worker=worker@entry=0x802558020, ev0=ev0@entry=0x802346ce0) at netmgr/netmgr.c:2763
ievent = 0x802346ce0
sock = 0x80253d920
uvreq = 0x80247f020
eresult = ISC_R_UNEXPECTED
#10 0x00000008006803ee in process_netievent (worker=worker@entry=0x802558020, ievent=0x802346ce0) at netmgr/netmgr.c:968
No locals.
#11 0x00000008006806c7 in process_queue (worker=worker@entry=0x802558020, type=type@entry=NETIEVENT_NORMAL) at netmgr/netmgr.c:1008
stop = <optimized out>
waiting = 0
ievent = <optimized out>
#12 0x0000000800680da8 in process_all_queues (worker=0x802558020) at netmgr/netmgr.c:754
result = <optimized out>
type = 3
reschedule = false
#13 async_cb (handle=0x8025582f8) at netmgr/netmgr.c:783
worker = 0x802558020
#14 0x000000080048c871 in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#15 0x000000080049cd6a in ?? () from /usr/local/lib/libuv.so.1
No symbol table info available.
#16 0x000000080048cfb6 in uv_run () from /usr/local/lib/libuv.so.1
No symbol table info available.
#17 0x00000008006807bc in nm_thread (worker0=0x802558020) at netmgr/netmgr.c:689
r = <optimized out>
worker = 0x802558020
mgr = 0x8023b0ea0
#18 0x00000008006b8b03 in isc__trampoline_run (arg=0x802498440) at trampoline.c:185
trampoline = 0x802498440
result = <optimized out>
#19 0x0000000801735a11 in ?? () from /usr/lib/libpthread.so.0
No symbol table info available.
#20 0x0000000000000000 in ?? ()
No symbol table info available.
```
</details>
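For context, the backtrace shows `connected()` at dispatch_test.c:406 calling `dns_dispatch_send()` after `tcp_connected()` fired with `eresult=ISC_R_UNEXPECTED`, so `isc_nm_send()` receives a NULL handle and trips the REQUIRE at netmgr.c:2565. A minimal stand-alone sketch of the failure mode (all types and functions below are simplified stand-ins, not the actual BIND code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the types seen in the backtrace. */
typedef enum { ISC_R_SUCCESS = 0, ISC_R_UNEXPECTED = 1 } result_t;

typedef struct {
	void *handle; /* NULL until the connection is established */
} dispentry_t;

static bool send_attempted = false;

/* Stand-in for dns_dispatch_send()/isc_nm_send(): models the
 * REQUIRE(VALID_NMHANDLE(handle)) that aborts the test. */
static void
dispatch_send(dispentry_t *resp) {
	assert(resp->handle != NULL);
	send_attempted = true;
}

/* A connect callback that checks eresult before sending, so a failed
 * connect (handle never attached) cannot reach the assertion. */
static void
connected(result_t eresult, dispentry_t *resp) {
	if (eresult != ISC_R_SUCCESS) {
		/* e.g. ISC_R_UNEXPECTED from libuv error -45 (ENOTSUP) */
		return;
	}
	dispatch_send(resp);
}
```

Whether the guard belongs in the test's callback or lower down in the dispatch code is beyond the scope of this sketch.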
<details><summary>dispatch_test.log</summary>

```
[==========] Running 7 test(s).
[ RUN ] dispatch_timeout_tcp_connect
netmgr/tcpdns.c:151: unable to convert libuv error code in tcpdns_connect_direct to isc_result: -45: operation not supported on socket
timeout_connected(..., unexpected error, ...)
[ ERROR ] --- 0x22 != 0x2
[ LINE ] --- dispatch_test.c:487: error: Failure!
[ FAILED ] dispatch_timeout_tcp_connect
[ RUN ] dispatch_timeout_tcp_response
netmgr/tcpdns.c:151: unable to convert libuv error code in tcpdns_connect_direct to isc_result: -45: operation not supported on socket
connected(..., unexpected error, ...)
netmgr/netmgr.c:2565: REQUIRE((((handle) != ((void *)0) && ((const isc__magic_t *)(handle))->magic == ((('N') << 24 | ('M') << 16 | ('H') << 8 | ('D')))) && __extension__ ({ __auto_type __atomic_load_ptr = (&(handle)->references); __typeof__ (*__atomic_load_ptr) __atomic_load_tmp; __atomic_load (__atomic_load_ptr, &__atomic_load_tmp, (5)); __atomic_load_tmp; }) > 0)) failed, back trace
0x80069038f <isc_assertion_typetotext+0x6a> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x80069030a <isc_assertion_failed+0xa> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x80067d1df <isc_nm_send+0x52> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x800c3d506 <dns_dispatch_send+0x5a> at /home/newman/bind9/lib/dns/.libs/libdns-9.17.21.so
0x403082 <connected+0x43> at /home/newman/bind9/lib/dns/tests/.libs/dispatch_test
0x800c3b0fd <dns_dispatch_detach+0xa4c> at /home/newman/bind9/lib/dns/.libs/libdns-9.17.21.so
0x80067f739 <isc__nm_async_connectcb+0xa5> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x8006803ee <isc__nm_async_sendcb+0x79f> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x8006806c7 <isc__nm_async_sendcb+0xa78> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x800680da8 <isc_nm_resume+0x26d> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x80048c871 <uv_version_string+0x1b1> at /usr/local/lib/libuv.so.1
0x80049cd6a <uv_cpu_info+0xb4a> at /usr/local/lib/libuv.so.1
0x80048cfb6 <uv_run+0xf6> at /usr/local/lib/libuv.so.1
0x8006807bc <isc__nm_async_sendcb+0xb6d> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x8006b8b03 <isc__trampoline_run+0x16> at /home/newman/bind9/lib/isc/.libs/libisc-9.17.21.so
0x801735a11 <pthread_detach+0x287> at /usr/lib/libpthread.so.0
FAIL dispatch_test (exit status: 134)
```
</details>Not plannedhttps://gitlab.isc.org/isc-projects/bind9/-/issues/3081When is BIND ready?2024-03-27T13:27:23ZGreg ChoulesWhen is BIND ready?Related to Support ticket [19717](https://support.isc.org/Ticket/Display.html?id=19717)
Related to Support ticket [19717](https://support.isc.org/Ticket/Display.html?id=19717)
The purpose of this issue is to make BIND more verbose and precise about reporting various stages of readiness when starting up, leading to a definitive "I'm ready now" log message.
The question an operator wants answered is: when can I send queries to this server again?
Different features will all have their own completeness check. For example: RPZ, local zones, remote zones, mirror zones, CATZ. The request is for new log messages to allow operators to track progress of each of these features and a new (or redefined) final log message when all tasks are complete.
What is a task? When is it complete and when is BIND ready to do that thing?
We and the customer have, in parallel, come up with similar thinking on what needs to be done. The principle is: at startup, create a one-time todo list from the zone configuration statements. As each list item is completed, generate a signal and remove the item from the list. When all items are complete, generate a final completion signal and set the state of an indicator that can be queried via rndc, so that users can poll the complete/not-complete state periodically.
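The todo-list principle above can be sketched as follows. This is a hypothetical illustration only; none of these names exist in BIND, and bounds checks and locking are omitted.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_TASKS 64

/* One startup task per configured zone (or other feature). */
typedef struct {
	const char *name;
	bool done;
} ready_task_t;

typedef struct {
	ready_task_t tasks[MAX_TASKS];
	int ntasks;
	int npending;
} ready_list_t;

/* Build the one-time todo list at startup; returns a task id. */
static int
ready_list_add(ready_list_t *list, const char *name) {
	int id = list->ntasks++;
	list->tasks[id] = (ready_task_t){ .name = name, .done = false };
	list->npending++;
	return id;
}

/* Called when a task finishes (zone loaded, transfer done, or given
 * up after the retry limit).  Returns true once everything is done,
 * i.e. when the final "I'm ready now" message would be logged; the
 * same flag is what rndc would report when polled. */
static bool
ready_list_complete(ready_list_t *list, int id) {
	if (!list->tasks[id].done) {
		list->tasks[id].done = true;
		list->npending--;
		printf("task '%s' complete (%d remaining)\n",
		       list->tasks[id].name, list->npending);
	}
	return (list->ntasks > 0 && list->npending == 0);
}
```

A catalog zone would simply call `ready_list_add()` again for each member zone it discovers while being processed, so the list can grow before it drains.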
Taking some different types of zones as examples, we would expect behaviour like this:
Primary zones:
- Read zone data from local storage. Once this has been read into memory the zone is 'ready', a signal is generated and no further readiness checks need to be made: this task is complete.
Secondary zones:
- If a zone has been configured with a file, read zone data from local storage. Once this has been read into memory the zone is 'ready', a signal is generated and no further readiness checks need to be made: this task is complete. NOTE: checking whether the zone is up to date (SOA queries and possible subsequent zone transfer) is specifically excluded from this task.
- If a zone has **not** been configured with a file, make SOA queries and attempt zone transfers as necessary in order to load the zone. If zone transfer succeeds and zone data is loaded into memory the zone is 'ready', a signal is generated and no further readiness checks need to be made: this task is complete. If zone transfer fails there needs to be a limit - number of tries without success - to how long this task remains on the todo list. In this case generate a 'not ready' signal and remove the task from the list.
Catalog zones:
- These can be treated similarly to Primary or Secondary zones for the catalog itself. Once the catalog is loaded generate a ready signal and remove it from the todo list.
- However, during processing of each catalog a further list of (member) zones will be generated, each of which need to be added to the todo list and treated as a Secondary zone with no previous local data storage - i.e. needing to be transferred from a primary server.
Response Policy Zones:
- These can be treated similarly to Primary or Secondary zones for the zone data itself, but with the (possible?) additional step of needing to build the policy once it has been loaded. An RPZ should be considered ready only when the policy is active and responses would be re-written.
Mirror zones:
- These are similar to secondary zones.
Anything else?https://gitlab.isc.org/isc-projects/bind9/-/issues/3050Post load checking of missing delegations2023-11-02T16:26:08ZMark AndrewsPost load checking of missing delegationsIs it worthwhile to perform a post-load DS lookup for each primary / slave zone against the other loaded zones, looking for an NXDOMAIN response that would indicate a missing delegation? This would catch cases like bhutan.gov.bt, where both it and the parent zone are served by the same servers but there is no delegation for bhutan.gov.bt in the gov.bt zone.Not plannedhttps://gitlab.isc.org/isc-projects/bind9/-/issues/3046TCP client not being identified in log message2023-11-02T17:02:21ZMark AndrewsTCP client not being identified in log message
I was trying to match expected log messages in a system test (!5616), and switching from UDP to TCP to force a SERVFAIL rather than a timeout response resulted in `<unknown>` being logged for the client instead of its IP address and port.
`06-Dec-2021 16:26:06.430 DNS format error from 10.53.0.8#5300 resolving tcpalso.no-questions/A for <unknown>: empty question section, accepting it anyway as TC=1`
The string I was expecting to match against was
`resolving tcpalso.no-questions/A for 10.53.0.5#[0-9]*: empty question section, accepting it anyway as TC=1`
I haven't checked 9.16 yet.Not plannedhttps://gitlab.isc.org/isc-projects/bind9/-/issues/3039update forward logging2024-03-27T13:21:26ZPeter Daviesupdate forward loggingupdate forward logging:
In installations where dynamic updates are forwarded from a secondary server to a stealth master server, it could be helpful to be able to configure the forwarding server to log the originating source of the update and the RRs being updated, or at least enough information to identify the client requesting the update.
[RT #19907](https://support.isc.org/Ticket/Display.html?id=19907 )Not plannedhttps://gitlab.isc.org/isc-projects/bind9/-/issues/3036clang-format-13 leads to weird layout formatting of function parameters2021-12-01T08:52:54ZMatthijs Mekkingmatthijs@isc.orgclang-format-13 leads to weird layout formatting of function parametersThe following discussion from !5602 should be addressed:
- [ ] @matthijs started a [discussion](https://gitlab.isc.org/isc-projects/bind9/-/merge_requests/5602#note_251399): (+1 comment)
> Off topic, but is this really the preferred format? Maybe we want to adjust the `clang-format`? @ondrej