ISC Open Source Projects issues — https://gitlab.isc.org/groups/isc-projects/-/issues

## dig can get stuck when using a server with an IPv4-mapped IPv6 address
https://gitlab.isc.org/isc-projects/bind9/-/issues/3248 — Aram Sargsyan — closed 2022-04-26

```
$ bin/dig/dig @::1 @::ffff:0.0.0.0 -p 12345 test
;; Skipping mapped address '::ffff:0.0.0.0'
;; No acceptable nameservers
```
(stuck until Ctrl+C)
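For context, the "mapped address" test dig applies corresponds to checking whether an IPv6 address carries an embedded IPv4 address. A sketch of the concept using Python's `ipaddress` module (an illustration only — dig's actual check is C code in dighost.c):

```python
import ipaddress

def is_ipv4_mapped(addr: str) -> bool:
    """Return True for IPv4-mapped IPv6 addresses such as ::ffff:0.0.0.0."""
    ip = ipaddress.ip_address(addr)
    return ip.version == 6 and ip.ipv4_mapped is not None

# ::ffff:0.0.0.0 is mapped (and gets skipped); ::1 is a plain IPv6 address.
```

The hang is not the skip itself but that, after skipping, dig is left with no remaining server and never completes the lookup.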
With `-d`:
```
$ bin/dig/dig @::1 @::ffff:0.0.0.0 -p 12345 test -d
setup_libs()
setup_system()
create_search_list()
ndots is 1.
timeout is 0.
retries is 3.
get_server_list()
make_server(127.0.0.1)
dig_query_setup
parse_args()
making new lookup
make_empty_lookup()
make_empty_lookup() = 0x7fa726958000->references = 1
digrc (open)
main parsing @::1
make_server(::1)
main parsing @::ffff:0.0.0.0
make_server(::ffff:0.0.0.0)
main parsing -p
main parsing test
clone_lookup()
make_empty_lookup()
make_empty_lookup() = 0x7fa726959800->references = 1
clone_server_list()
make_server(::1)
make_server(::ffff:0.0.0.0)
looking up test
main parsing -d
dig_startup()
lock_lookup dighost.c:4464
success
start_lookup()
setup_lookup(0x7fa726959800)
resetting lookup counter.
using root origin
recursive query
AD query
add_question()
starting to render the message
add_opt()
done rendering
create query 0x7fa725420540 linked to lookup 0x7fa726959800
dighost.c:2134:lookup_attach(0x7fa726959800) = 2
dighost.c:2637:new_query(0x7fa725420540) = 1
create query 0x7fa725420700 linked to lookup 0x7fa726959800
dighost.c:2134:lookup_attach(0x7fa726959800) = 3
dighost.c:2637:new_query(0x7fa725420700) = 1
do_lookup()
start_udp(0x7fa725420540)
dighost.c:3134:query_attach(0x7fa725420540) = 2
working on lookup 0x7fa726959800, query 0x7fa725420540
dighost.c:3179:query_attach(0x7fa725420540) = 3
unlock_lookup dighost.c:4466
dighost.c:3096:query_attach(0x7fa725420540) = 4
recving with lookup=0x7fa726959800, query=0x7fa725420540, handle=0x7fa72543b180
recvcount=1
have local timeout of 5000
dighost.c:3043:query_attach(0x7fa725420540) = 5
sending a request
sendcount=1
dighost.c:1727:query_detach(0x7fa725420540) = 4
dighost.c:3116:query_detach(0x7fa725420540) = 3
send_done(0x7fa72543b180, success, 0x7fa725420540)
sendcount=0
lock_lookup dighost.c:2665
success
dighost.c:2669:lookup_attach(0x7fa726959800) = 4
dighost.c:2707:query_detach(0x7fa725420540) = 2
dighost.c:2708:lookup_detach(0x7fa726959800) = 3
check_if_done()
list empty
unlock_lookup dighost.c:2711
recv_done(0x7fa72543b180, connection refused, 0x7fa726179d90, 0x7fa725420540)
lock_lookup dighost.c:3790
success
recvcount=0
dighost.c:3795:lookup_attach(0x7fa726959800) = 4
starting next query 0x7fa725420700
start_udp(0x7fa725420700)
dighost.c:3134:query_attach(0x7fa725420700) = 2
working on lookup 0x7fa726959800, query 0x7fa725420700
;; Skipping mapped address '::ffff:0.0.0.0'
dighost.c:1727:query_detach(0x7fa725420700) = 1
dighost.c:3160:query_detach(0x7fa725420700) = 0
dighost.c:3160:destroy_query(0x7fa725420700) = 0
dighost.c:1685:lookup_detach(0x7fa726959800) = 3
;; No acceptable nameservers
clear_current_lookup()
still have a worker
dighost.c:4359:query_detach(0x7fa725420540) = 1
dighost.c:4367:lookup_detach(0x7fa726959800) = 2
unlock_lookup dighost.c:4371
```
Milestone: April 2022 (9.16.28, 9.16.28-S1, 9.18.2, 9.19.0) — Assignee: Aram Sargsyan

## "dig { +trace | +nssearch } +tcp" always crashes in BIND 9.18+
https://gitlab.isc.org/isc-projects/bind9/-/issues/3144 — Michał Kępień — closed 2022-04-26

On my laptop, I am consistently observing the following behavior:
```
$ dig +tcp +trace www.isc.org.
; <<>> DiG 9.17.22 <<>> +tcp +trace www.isc.org.
;; global options: +cmd
. 77196 IN NS f.root-servers.net.
. 77196 IN NS g.root-servers.net.
. 77196 IN NS k.root-servers.net.
. 77196 IN NS m.root-servers.net.
. 77196 IN NS j.root-servers.net.
. 77196 IN NS c.root-servers.net.
. 77196 IN NS b.root-servers.net.
. 77196 IN NS d.root-servers.net.
. 77196 IN NS h.root-servers.net.
. 77196 IN NS a.root-servers.net.
. 77196 IN NS i.root-servers.net.
. 77196 IN NS e.root-servers.net.
. 77196 IN NS l.root-servers.net.
. 77287 IN RRSIG NS 8 0 518400 20220219050000 20220206040000 9799 . HHcODlZKloXHTGpG2jEDuCtBrOBIhXKf+C08Rlaps84YbLolC8BDw3XO KGXtnojjOAAkVJkjhBxKQTX31l5+Vd4pdG1egoP5W88EuZhe9bYomSCT yVsUBJS68+NvBfYnblOE5QAgIX2v9IgHWg7HzJqMuKLzzVuQhaCGW/XC gnZVyGT5hriM2j7R1n9gfzPjvunv3HduvYg4DKf5Ngio6ZU+ncAiiH0w b+uu4QU1MFZk8UbJ7Cl9oDza+siaQzRLy3eZJoPSY8snpeu8kSRyFfo4 /6GTZxrpmTXNJnHBfaBSL6Emsxah/T/DL56e5oB93JlDwUVMc2LR7d5U zDAgew==
dighost.c:1683: INSIST(query->readhandle == ((void *)0)) failed, back trace
```
AFAICT, this issue is not specific to my local environment, to the
address family used, etc.

Milestone: May 2022 (9.16.29, 9.16.29-S1, 9.18.3, 9.19.1) — Assignee: Aram Sargsyan

## "dig @localhost" does not fall back to 127.0.0.1 if ::1 is not answering
https://gitlab.isc.org/isc-projects/bind9/-/issues/2307 — Michał Kępień — closed 2022-04-26

This started happening after !4115. It seems that each UDP query does
time out after 5 seconds, but `dig` never tries the "next server"
(127.0.0.1) after its "first pick" (::1) fails to respond.
The problem is easily reproducible with the following `named.conf`:
```
options {
listen-on port 5300 { 127.0.0.1; };
listen-on-v6 { none; };
};
```
Test with: `dig @localhost isc.org.` - it will time out even though it
should not. This is a change in behavior that was caught by [automated
RPM tests][1].
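The expected behavior — falling back to the next server when the first one does not respond — can be sketched in Python (a conceptual model of what dig should do, not dig's implementation; the server list and timeout are arbitrary):

```python
import socket

def query_with_fallback(servers, payload, timeout=5.0):
    """Send a UDP payload to each (host, port) in turn and return the first
    reply; on timeout or network error, fall back to the next server."""
    for host, port in servers:
        family = socket.AF_INET6 if ":" in host else socket.AF_INET
        try:
            with socket.socket(family, socket.SOCK_DGRAM) as sock:
                sock.settimeout(timeout)
                sock.sendto(payload, (host, port))
                reply, _ = sock.recvfrom(4096)
                return host, reply
        except OSError:
            continue  # this server did not answer; try the next one
    raise TimeoutError("no server responded")
```

With this logic, ::1 failing to respond would make the client move on to 127.0.0.1 instead of giving up.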
[1]: https://gitlab.isc.org/isc-private/rpms/bind/-/pipelines/57751

Milestone: April 2022 (9.16.28, 9.16.28-S1, 9.18.2, 9.19.0) — Assignee: Mark Andrews

## dig crashes when interrupted while waiting for TCP response
https://gitlab.isc.org/isc-projects/bind9/-/issues/3028 — Petr Špaček — closed 2022-04-26
### Summary
dig crashes when interrupted while waiting for TCP response
### BIND version used
* ~"v9.17": c52a38352378d94129ccd590281c63fbe58dcb00
* ~"v9.11" and ~"v9.16" not affected
### Steps to reproduce
SIGINT dig while it's waiting for TCP response. :boom:
1. Run python3 [tcphang.py](/uploads/a03cf98515d2ea2794cbe0b9b7a298ae/tcphang.py)
2. Run `dig -d 99 +tcp @127.0.0.1 .`
3. SIGINT dig before it times out
(`timeout --signal=SIGINT 1 dig -d 99 +tcp @127.0.0.1 .` to do that in one go)
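A minimal stand-in for `tcphang.py` (an assumption about what the upload does: accept TCP connections and never send a response, so the client blocks waiting):

```python
import socket
import threading

def start_hang_server(host="127.0.0.1", port=0):
    """Listen for TCP connections and keep them open without ever replying,
    so a client such as dig blocks waiting for a response.
    Returns the bound port (port=0 picks an ephemeral one)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(5)

    def serve():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:
                return
            # Read and discard whatever the client sends; never answer.
            threading.Thread(target=lambda c=conn: c.recv(65535),
                             daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1]
```

To reproduce the report, point dig at the returned port with `-p` and interrupt it while it waits.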
### What is the current *bug* behavior?
```
dighost.c:1683: INSIST(query->readhandle == ((void *)0)) failed, back trace
```
```
(gdb) bt
#0 0x00007ffff6b15d22 in raise () from /usr/lib/libc.so.6
#1 0x00007ffff6aff862 in abort () from /usr/lib/libc.so.6
#2 0x00007ffff7a514b5 in isc_assertion_failed (file=0x55555557605c "dighost.c", line=1683,
type=isc_assertiontype_insist, cond=0x5555555770b8 "query->readhandle == ((void *)0)") at assertions.c:48
#3 0x0000555555567cf0 in _query_detach (queryp=0x7fffffffe5d8, file=0x55555557605c "dighost.c", line=4216)
at dighost.c:1683
#4 0x0000555555571c3b in cancel_all () at dighost.c:4216
#5 0x0000555555562bc9 in dig_shutdown () at dig.c:2946
#6 0x0000555555562c2f in main (argc=4, argv=0x7fffffffe748) at dig.c:2957
```
### What is the expected *correct* behavior?
Obviously, it should not crash.
### Relevant configuration files
None, I believe.
### Relevant logs and/or screenshots
<details>
```
$ timeout --signal=SIGINT 1 /tmp/main/bin/dig -d 99 +tcp @127.0.0.1 .
setup_libs()
setup_system()
create_search_list()
ndots is 1.
timeout is 0.
retries is 3.
get_server_list()
make_server(127.0.0.111)
dig_query_setup
parse_args()
making new lookup
make_empty_lookup()
make_empty_lookup() = 0x7fd7117c0000->references = 1
digrc (open)
main parsing -d
main parsing 99
clone_lookup()
make_empty_lookup()
make_empty_lookup() = 0x7fd7117c1800->references = 1
clone_server_list()
looking up 99
main parsing +tcp
main parsing @127.0.0.1
make_server(127.0.0.1)
main parsing .
clone_lookup()
make_empty_lookup()
make_empty_lookup() = 0x7fd7117c3000->references = 1
clone_server_list()
looking up .
dig_startup()
lock_lookup dighost.c:4186
success
start_lookup()
setup_lookup(0x7fd7117c1800)
resetting lookup counter.
using root origin
recursive query
AD query
add_question()
starting to render the message
add_opt()
done rendering
create query 0x7fd710835000 linked to lookup 0x7fd7117c1800
dighost.c:2083:lookup_attach(0x7fd7117c1800) = 2
dighost.c:2587:new_query(0x7fd710835000) = 1
do_lookup()
start_tcp(0x7fd710835000)
dighost.c:2693:query_attach(0x7fd710835000) = 2
query->servname = 127.0.0.1
unlock_lookup dighost.c:4188
tcp_connected()
tcp_connected(0x7fd710813300, success, 0x7fd710835000)
lock_lookup dighost.c:3231
success
dighost.c:3232:lookup_attach(0x7fd7117c1800) = 3
launch_next_query()
dighost.c:3127:lookup_attach(0x7fd7117c1800) = 4
recvcount=1
have local timeout of 10000
dighost.c:3148:query_attach(0x7fd710835000) = 3
sending a request in launch_next_query
dighost.c:3179:query_attach(0x7fd710835000) = 4
sendcount=1
dighost.c:3202:lookup_detach(0x7fd7117c1800) = 3
dighost.c:1676:query_detach(0x7fd710835000) = 3
dighost.c:3303:query_detach(0x7fd710835000) = 2
dighost.c:3304:lookup_detach(0x7fd7117c1800) = 2
unlock_lookup dighost.c:3305
send_done(0x7fd710813300, success, 0x7fd710835000)
sendcount=0
lock_lookup dighost.c:2615
success
dighost.c:2629:lookup_attach(0x7fd7117c1800) = 3
dighost.c:2648:query_detach(0x7fd710835000) = 1
dighost.c:2649:lookup_detach(0x7fd7117c1800) = 2
check_if_done()
list full
pending lookup 0x7fd7117c3000
unlock_lookup dighost.c:2652
destroy_lookup
cancel_all()
lock_lookup dighost.c:4202
success
canceling pending query 0x7fd710835000, belonging to 0x7fd7117c1800
dighost.c:4216:query_detach(0x7fd710835000) = 0
dighost.c:1683: INSIST(query->readhandle == ((void *)0)) failed, back trace
/tmp/main/lib/libisc-9.17.20.so(+0x4059d)[0x7fd7152cd59d]
recv_done(0x7fd710813300, end of file, 0x7fd711579d90, 0x7fd710835000)
lock_lookup dighost.c:3579
/tmp/main/lib/libisc-9.17.20.so(isc_assertion_failed+0x31)[0x7fd7152cd4b0]
/tmp/main/bin/dig(+0x13cf0)[0x55d41bb0fcf0]
/tmp/main/bin/dig(+0x1dc3b)[0x55d41bb19c3b]
/tmp/main/bin/dig(+0xebc9)[0x55d41bb0abc9]
/tmp/main/bin/dig(+0xec2f)[0x55d41bb0ac2f]
/usr/lib/libc.so.6(__libc_start_main+0xd5)[0x7fd71437cb25]
/tmp/main/bin/dig(+0x5e9e)[0x55d41bb01e9e]
timeout: the monitored command dumped core
```
</details>
### Possible fixes
(If you can, link to the line of code that might be responsible for the problem.)

Milestone: April 2022 (9.16.28, 9.16.28-S1, 9.18.2, 9.19.0) — Assignee: Ondřej Surý

## Dig: Add option to change default record type
https://gitlab.isc.org/isc-projects/bind9/-/issues/2849 — Sören Klein — closed 2022-04-26

### Description
It would be great to have the option to change the default record type from e.g. `A` to `AAAA`.
It would also be very helpful if multiple default record types are supported, e.g. `A, AAAA`.
### Request
I would like to set the default record types either with an option, e.g. `dig --set-default-records "A, AAAA"`, or as part of a system environment variable.
If multiple record types are defined, then the command `dig example.com` with the types `A, AAAA` should be extended to `dig example.com A example.com AAAA`.
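The requested expansion could be modeled like this (a sketch of the proposal only — `--set-default-records` and the environment variable do not exist in dig today):

```python
def expand_default_types(names, default_types):
    """Expand bare query names into alternating name/type arguments,
    mirroring the proposed 'dig example.com A example.com AAAA' behavior."""
    args = []
    for name in names:
        for rrtype in default_types:
            args.extend([name, rrtype])
    return args

# expand_default_types(["example.com"], ["A", "AAAA"])
# == ["example.com", "A", "example.com", "AAAA"]
```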
### Links / references

Milestone: Not planned

## Can query flags in +yaml be made a list?
https://gitlab.isc.org/isc-projects/bind9/-/issues/1633 — JP Mens — closed 2022-04-26

### Description
In `dig`'s new YAML output (`+yaml`), the query flags are reported as a string (e.g. `flags: qr rd ra ad`)
```json
"flags": "qr rd ra ad"
```
### Request
I think it'd be more natural to have them returned as a list (e.g. `flags: [ qr, rd, ra, ad ]`) which would result in
```json
"flags": [ "qr", "rd", "ra", "ad" ]
```
The benefit I see is that this change would make it easier to find out programmatically whether a specific flag is set.
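The programmatic difference is easy to demonstrate (a sketch using Python's `json` module; the field name follows the excerpts above):

```python
import json

string_form = json.loads('{"flags": "qr rd ra ad"}')
list_form = json.loads('{"flags": ["qr", "rd", "ra", "ad"]}')

# With the string form, the consumer must tokenize before testing membership:
has_ad_string = "ad" in string_form["flags"].split()

# With the list form, the parsed value supports membership tests directly:
has_ad_list = "ad" in list_form["flags"]
```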
### Links / references

## dig +short for MX when the record is broken gives confusing answer
https://gitlab.isc.org/isc-projects/bind9/-/issues/2394 — Paul Hoffman — closed 2022-04-26

A confused user said that dig +short for an MX record did not report the preference level. The example he gave was:
```
# dig +short cyclonit.com MX
HDRedirect-LB7-5a03e1c2772e1c9c.elb.us-east-1.amazonaws.com.
```
When given without +short, the reason becomes clear:
```
# dig cyclonit.com MX
; <<>> DiG 9.16.10 <<>> cyclonit.com MX
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 65526
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: b0e8599a68ce3729dc85d51e6003a03d038d9864e7c2e63c (good)
;; QUESTION SECTION:
;cyclonit.com. IN MX
;; ANSWER SECTION:
cyclonit.com. 10544 IN CNAME HDRedirect-LB7-5a03e1c2772e1c9c.elb.us-east-1.amazonaws.com.
;; AUTHORITY SECTION:
elb.us-east-1.amazonaws.com. 53 IN SOA ns-1826.awsdns-36.co.uk. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 60
```
Yep, it's the dreaded "CNAME and MX at the same level" issue. However, +short hides that in a confusing way.
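A consumer of +short output can detect the confusing case by checking whether each line has the expected MX RDATA shape (preference, then exchange). A hedged sketch, not anything dig itself does:

```python
import re

# An MX answer in +short form looks like "<preference> <exchange>.",
# e.g. "10 mail.example.com."; a bare CNAME target has no preference.
MX_SHORT = re.compile(r"^\d+\s+\S+\.$")

def looks_like_mx_short(line: str) -> bool:
    """True if a 'dig +short ... MX' output line has the expected
    '<preference> <exchange>' shape rather than a bare CNAME target."""
    return bool(MX_SHORT.match(line.strip()))
```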
Proposal: dig +short for broken names such as this should instead reply "CNAME target", or possibly "Bad CNAME target".

Milestone: Not planned

## resolv.conf search parsing has artificial limit
https://gitlab.isc.org/isc-projects/bind9/-/issues/1259 — Petr Menšík — closed 2022-04-26

### Summary
libirs supports only 8 search domains.
### BIND version used
master branch
### Steps to reproduce
1. sed -i 's/^search .*/search non-existent1.very-long-domain.fedoraproject.org non-existent2.very-long-domain.fedoraproject.org non-existent3.very-long-domain.fedoraproject.org non-existent4.very-long-domain.fedoraproject.org non-existent5.very-long-domain.fedoraproject.org non-existent6.very-long-domain.fedoraproject.org openstacklocal redhat.com/' /etc/resolv.conf
2. [ "$(host access)" == "$(getent host access.redhat.com.)" ] && echo matches
3. sed -i 's/^search .*/search non-existent1.very-long-domain.fedoraproject.org non-existent2.very-long-domain.fedoraproject.org non-existent3.very-long-domain.fedoraproject.org non-existent4.very-long-domain.fedoraproject.org non-existent5.very-long-domain.fedoraproject.org non-existent6.very-long-domain.fedoraproject.org non-existent7.very-long-domain.fedoraproject.org non-existent8.very-long-domain.fedoraproject.org openstacklocal redhat.com/' /etc/resolv.conf
4. [ "$(host access)" == "$(getent host access.redhat.com.)" ] && echo still matches
### What is the current *bug* behavior?
Step 4 does not print "still matches".
### What is the expected *correct* behavior?
Step 4 prints "still matches".
### Relevant configuration files
```
# resolv.conf
search non-existent1.very-long-domain.fedoraproject.org non-existent2.very-long-domain.fedoraproject.org non-existent3.very-long-domain.fedoraproject.org non-existent4.very-long-domain.fedoraproject.org non-existent5.very-long-domain.fedoraproject.org non-existent6.very-long-domain.fedoraproject.org non-existent7.very-long-domain.fedoraproject.org non-existent8.very-long-domain.fedoraproject.org openstacklocal redhat.com
nameserver 9.9.9.9
```
### Relevant logs and/or screenshots
(none.)
### Possible fixes
I *know* that using such a high number of search domains would cause too many queries during resolution. I *know it is a bad idea*. But I think there should not be a hard limit preventing it. glibc was updated to accept an unlimited number of domains. It is not a good idea, but it is better than simply ignoring what the user asked for. We have customers hitting this limit.
I think it should be discouraged, but possible.
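An unbounded parser is straightforward — the cap is purely historical. A sketch (not the libirs code; `limit=8` mimics the hard cap described above, `limit=None` honors every domain the user listed):

```python
def parse_search_domains(resolv_conf_text, limit=None):
    """Collect domains from the last 'search' line of resolv.conf text,
    optionally truncated to a fixed cap."""
    domains = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "search":
            domains = parts[1:]  # later 'search' lines override earlier ones
    return domains if limit is None else domains[:limit]
```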
[RH Bug for glibc](https://bugzilla.redhat.com/show_bug.cgi?id=677316), [RH Bug of nslookup](https://bugzilla.redhat.com/show_bug.cgi?id=1758317)

## nsupdate with GSS-TSIG ignores server keyword
https://gitlab.isc.org/isc-projects/bind9/-/issues/2919 — Petr Špaček — closed 2022-04-26

### Summary
`nsupdate -g` ignores the `server` keyword and sends updates to the SOA MNAME (instead of sending them to the server specified by the user).
### BIND version used
(Paste the output of `named -V`.)
```
named -V
BIND 9.16.8-Ubuntu (Stable Release) <id:539f9f0>
running on Linux x86_64 5.4.0-84-generic #94-Ubuntu SMP Thu Aug 26 20:27:37 UTC 2021
built by make with '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=/usr/include' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-option-checking' '--disable-silent-rules' '--libdir=/usr/lib/x86_64-linux-gnu' '--runstatedir=/run' '--disable-maintainer-mode' '--disable-dependency-tracking' '--libdir=/usr/lib/x86_64-linux-gnu' '--sysconfdir=/etc/bind' '--with-python=python3' '--localstatedir=/' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-gost=no' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-libidn2' '--with-json-c' '--with-lmdb=/usr' '--with-gnu-ld' '--with-maxminddb' '--with-atf=no' '--enable-ipv6' '--enable-rrl' '--enable-filter-aaaa' '--disable-native-pkcs11' '--disable-isc-spnego' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -ffile-prefix-map=/build/bind9-ctcsDC/bind9-9.16.8=. -flto=auto -ffat-lto-objects -fstack-protector-strong -Wformat -Werror=format-security -fno-strict-aliasing -fno-delete-null-pointer-checks -DNO_VERSION_DATE -DDIG_SIGCHASE' 'LDFLAGS=-Wl,-Bsymbolic-functions -flto=auto -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
compiled by GCC 10.3.0
compiled with OpenSSL version: OpenSSL 1.1.1j 16 Feb 2021
linked to OpenSSL version: OpenSSL 1.1.1j 16 Feb 2021
compiled with libuv version: 1.40.0
linked to libuv version: 1.40.0
compiled with libxml2 version: 2.9.10
linked to libxml2 version: 20910
compiled with json-c version: 0.15
linked to json-c version: 0.15
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
linked to maxminddb version: 1.5.2
threads support is enabled
default paths:
named configuration: /etc/bind/named.conf
rndc configuration: /etc/bind/rndc.conf
DNSSEC root key: /etc/bind/bind.keys
nsupdate session key: //run/named/session.key
named PID file: //run/named/named.pid
named lock file: //run/named/named.lock
geoip-directory: /usr/share/GeoIP
```
### Steps to reproduce
0. Configure GSS-TSIG (good luck...)
1. Configure a test DNS zone ZZZ with SOA MNAME = MNAMEINSOA
2. Run `nsupdate -g`
3. Use input which modifies a record in zone ZZZ **and** includes keyword `server DIFFERENTSERVER` (DIFFERENTSERVER != MNAMEINSOA)
### What is the current *bug* behavior?
`nsupdate` attempts to obtain a Kerberos service ticket for the DNS server name MNAMEINSOA (from the SOA RR) and ignores the value provided with the `server` keyword.
### What is the expected *correct* behavior?
`nsupdate` should respect the value provided with the `server` keyword.
### Relevant configuration files
named.conf:
```
zone "example.org" {
type master;
file "/var/lib/bind/db.example.org";
update-policy {
grant "DHCP/admin.example.org@EXAMPLE.ORG" zonesub any;
};
};
```
Input:
```
nsupdate -g <<EOF
server server.example.org
update add abc.example.org. 120 TXT "Hello from Kerberos"
send
EOF
```
### Relevant logs and/or screenshots
```
setup_system()
reset_system()
user_interaction()
do_next_command()
do_next_command()
evaluate_update()
update_addordelete()
do_next_command()
start_update()
recvsoa()
About to create rcvmsg
show_message()
Reply from SOA query:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 37613
;; flags: qr aa ra; QUESTION: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;abc.example.org. IN SOA
;; AUTHORITY SECTION:
example.org. 0 IN SOA example.org. root.example.org. 8 604800 86400 2419200 604800
Found zone name: example.org
The master is: example.org <<<--- THIS SHOULD NOT HAPPEN
start_gssrequest
Found realm from ticket: EXAMPLE.ORG
[404] 1632329550.171413: ccselect module realm chose cache FILE:/tmp/krb5cc_0 with client principal DHCP/admin.example.org@EXAMPLE.ORG for server principal DNS/example.org@EXAMPLE.ORG
```
### Additional notes
We need to inspect other parameters as well.
Chat with investigation starts here:
https://mattermost.isc.org/isc/pl/jrk7fqwp4pbr9n787qx7wi18gh

## ddns-tuning hook lib fails to unload during reconfigure command
https://gitlab.isc.org/isc-projects/kea/-/issues/2390 — Thomas Markwalder — closed 2022-04-26

Loading the ddns-tuning library and submitting at least one client message (e.g. DISCOVER, REQUEST) causes the library to fail to unload entirely during the reconfigure command:
```
2022-04-26 08:15:08.816 INFO [kea-dhcp4.ddns-tuning-hooks/5581.140041746450560] DDNS_TUNING_UNLOAD DDNS Tuning hooks library has been unloaded
2022-04-26 08:15:08.816 DEBUG [kea-dhcp4.hooks/5581.140041746450560] HOOKS_UNLOAD_SUCCESS 'unload' function in hook library lib/kea/hooks/libdhcp_ddns_tuning.so returned success
2022-04-26 08:15:08.816 DEBUG [kea-dhcp4.callouts/5581.140041746450560] HOOKS_ALL_CALLOUTS_DEREGISTERED hook library at index 1 removed all callouts on hook ddns4_update
2022-04-26 08:15:08.816 DEBUG [kea-dhcp4.hooks/5581.140041746450560] HOOKS_CALLOUTS_REMOVED callouts removed from hook ddns4_update for library lib/kea/hooks/libdhcp_ddns_tuning.so
2022-04-26 08:15:08.816 ERROR [kea-dhcp4.dhcp4/5581.140041746450560] DHCP4_PARSER_COMMIT_FAIL parser failed to commit changes: some libraries are still opened
2022-04-26 08:15:08.816 FATAL [kea-dhcp4.dhcp4/5581.140041746450560] DHCP4_CONFIG_UNRECOVERABLE_ERROR DHCPv4 server new configuration failed with an error which cannot be recovered
2022-04-26 08:15:08.816 ERROR [kea-dhcp4.dhcp4/5581.140041746450560] DHCP4_CONFIG_LOAD_FAIL configuration error using file: /home/tmark/labspace_var/kea/etc/v4/1548.conf, reason: some libraries are still opened
2022-04-26 08:15:08.816 FATAL [kea-dhcp4.dhcp4/5581.140041746450560] DHCP4_DYNAMIC_RECONFIGURATION_FAIL dynamic server reconfiguration failed with file: /home/tmark/labspace_var/kea/etc/v4/1548.conf
2022-04-26 08:15:08.816 DEBUG [kea-dhcp4.commands/5581.140041746450560] COMMAND_SOCKET_WRITE Sent response of 160 bytes (0 bytes left to send) over command socket 32
2022-04-26 08:15:08.816 DEBUG [kea-dhcp4.commands/5581.140041746450560] COMMAND_SOCKET_CONNECTION_CLOSED Closed socket 32 for existing command connection
2022-04-26 08:15:14.573 DEBUG [kea-dhcp4.commands/5581.140041746450560] COMMAND_SOCKET_CONNECTION_OPENED Opened socket 21 for incoming command connection
2022-04-26 08:15:14.573 DEBUG [kea-dhcp4.commands/5581.140041746450560] COMMAND_SOCKET_READ Received 32 bytes over command socket 21
```Thomas MarkwalderThomas Markwalderhttps://gitlab.isc.org/isc-projects/kea/-/issues/2392kea-dhcp6 does not update outbound FQDN if name is modified by ddns6_update c...2022-04-26T19:49:40ZThomas Markwalderkea-dhcp6 does not update outbound FQDN if name is modified by ddns6_update calloutin dhcp6_srv.cc the code is mistakenly grabbing the fqdn from the query not the response packet and thus, updates the wrong option when ddns6_update calculates a new name.in dhcp6_srv.cc the code is mistakenly grabbing the fqdn from the query not the response packet and thus, updates the wrong option when ddns6_update calculates a new name.Thomas MarkwalderThomas Markwalderhttps://gitlab.isc.org/isc-projects/bind9/-/issues/2856[CVE-2021-25218] named crashes when trying to send a UDP packet exceeding MTU...2022-04-26T20:51:36ZMichał Kępień[CVE-2021-25218] named crashes when trying to send a UDP packet exceeding MTU with RRL enabled### CVE-specific actions
- [x] [Assign a CVE identifier](https://gitlab.isc.org/isc-projects/bind9/-/issues/2839#note_227833)
- [x] [Determine CVSS score](#note_229297)
- [x] [Determine the range of BIND versions affected (includi...### CVE-specific actions
- [x] [Assign a CVE identifier](https://gitlab.isc.org/isc-projects/bind9/-/issues/2839#note_227833)
- [x] [Determine CVSS score](#note_229297)
- [x] [Determine the range of BIND versions affected (including the Subscription Edition)](#note_229299)
- [x] [Determine whether workarounds for the problem exists](#note_229301)
- [x] [Create a draft of the security advisory and put the information above in there](https://gitlab.isc.org/isc-private/bind9/-/wikis/CVE-2021-25218-Advisory-Draft)
- [x] Prepare a detailed description of the problem which should include the following by default:
- [instructions for reproducing the problem (a system test is good enough)](isc-private/bind9!315)
- [explanation of code flow which triggers the problem (a system test is *not* good enough)](#note_229306)
- [x] [Prepare a private merge request containing the following items in separate commits:](isc-private/bind9!313)
- a test for the issue (may be moved to a separate merge request for deferred merging)
- a fix for the issue
- documentation updates (`CHANGES`, release notes, anything else applicable)
- [x] Ensure the merge request from the previous step is reviewed by SWENG staff and has no outstanding discussions
- [x] Ensure the documentation changes introduced by the merge request addressing the problem are reviewed by Support and Marketing staff
- [x] Prepare backports of the merge request addressing the problem for all affected (and still maintained) BIND branches (backporting might affect the issue's scope and/or description)
- [x] Prepare a standalone patch for the last stable release of each affected (and still maintained) BIND branch
### Release-specific actions
- [x] [Create/update the private issue containing links to fixes & reproducers for all CVEs fixed in a given release cycle](isc-private/bind9#46)
- [x] [Reserve a block of `CHANGES` placeholders once the complete set of vulnerabilities fixed in a given release cycle is determined](!5318)
- [x] Ensure the merge requests containing CVE fixes are merged into `security-*` branches in CVE identifier order
---
Original report: #2839

Milestone: August 2021 (9.11.35, 9.11.35-S1, 9.16.20, 9.16.20-S1, 9.17.17) — Assignee: Brian Conry

## Consider dropping the JSON_C_TO_STRING_PRETTY flag used for generating JSON statistics
https://gitlab.isc.org/isc-projects/bind9/-/issues/3304 — Michał Kępień — closed 2022-04-27

JSON is a machine-readable format, so there is no need to [generate
statschannel output in a "pretty" form][1]. Quick experiments show that
redundant whitespace adds up to about 40% of the JSON payload produced
by statschannel code.
Yes, `lib/isc/httpd.c` supports DEFLATE compression via zlib and that
enables massive savings in terms of payload size, but it requires
clients to send the `Accept-Encoding: deflate` HTTP header in order to
kick in, so IMHO HTTP-level compression and producing "minified" JSON
data are tangential mechanisms rather than exclusive alternatives.
Piping "minified" JSON through `jq` allows one to get the "pretty" form
without the extra bandwidth cost.
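The whitespace overhead is easy to demonstrate with any serializer. A rough sketch with Python's `json` module on an illustrative, statistics-shaped object (json-c's JSON_C_TO_STRING_PRETTY behaves analogously to `indent=4` here):

```python
import json

# A nested object loosely shaped like named's statistics output (illustrative).
qtypes = {t: 0 for t in ("A", "AAAA", "NS", "SOA", "MX", "TXT", "DNSKEY", "RRSIG")}
stats = {"views": {"_default": {"resolver": {"qtypes": qtypes}}}}

pretty = json.dumps(stats, indent=4)                # "pretty" form
compact = json.dumps(stats, separators=(",", ":"))  # minified form

whitespace_overhead = 1 - len(compact) / len(pretty)
```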
Some semi-random measurements:
- ~"v9.19", 4 logical CPU cores
$ curl -s http://localhost:8080/json | wc -c
71386
$ curl -s http://localhost:8080/json | jq -c | wc -c
41881
$ curl -s -H "Accept-Encoding: deflate" http://localhost:8080/json | wc -c
4150
- ~"v9.19", 32 logical CPU cores
$ curl -s http://localhost:8080/json | wc -c
391217
$ curl -s http://localhost:8080/json | jq -c | wc -c
227837
$ curl -s -H "Accept-Encoding: deflate" http://localhost:8080/json | wc -c
16721
- ~"v9.16", 4 logical CPU cores
$ curl -s http://localhost:8080/json | wc -c
896972
$ curl -s http://localhost:8080/json | jq -c | wc -c
529643
$ curl -s -H "Accept-Encoding: deflate" http://localhost:8080/json | wc -c
25880
- ~"v9.16", 32 logical CPU cores
$ curl -s http://localhost:8080/json | wc -c
6954362
$ curl -s http://localhost:8080/json | jq -c | wc -c
4106064
$ curl -s -H "Accept-Encoding: deflate" http://localhost:8080/json | wc -c
174557
[1]: https://gitlab.isc.org/isc-projects/bind9/-/blob/fcab10a26ece6419c2f53a2ad82499b4b5ba75c5/bin/named/statschannel.c#L3264-3265

Milestone: Not planned

## "unable to convert libuv error code" on Dragonfly BSD
https://gitlab.isc.org/isc-projects/bind9/-/issues/3093 — Michal Nowak — closed 2022-04-27

About a dozen system tests fail on Dragonfly BSD 6.2.1 because `dig` fails in `netmgr/tcp.c` with `unable to convert libuv error code in tcp_connect_direct to isc_result: -45: operation not supported on socket` on `main`, e.g.:
```
$ ../../dig/dig -p 5300 +tcp @10.53.0.2 -6 +mapped A a.example
netmgr/tcpdns.c:151: unable to convert libuv error code in tcpdns_connect_direct to isc_result: -45: operation not supported on socket
;; Connection to ::ffff:10.53.0.2#5300(::ffff:10.53.0.2) for a.example failed: unexpected error.
netmgr/tcpdns.c:151: unable to convert libuv error code in tcpdns_connect_direct to isc_result: -45: operation not supported on socket
;; Connection to ::ffff:10.53.0.2#5300(::ffff:10.53.0.2) for a.example failed: unexpected error.
netmgr/tcpdns.c:151: unable to convert libuv error code in tcpdns_connect_direct to isc_result: -45: operation not supported on socket
;; Connection to ::ffff:10.53.0.2#5300(::ffff:10.53.0.2) for a.example failed: unexpected error.
[newman@dragonflybsd ~/bind9/bin/tests/system]$ scp digdelv.log newman@192.168.122.1:Downloads/
```
[digdelv.log](/uploads/74a1603a1b04ba511e26be85c342c80c/digdelv.log)

Milestone: January 2022 (9.18.0)

https://gitlab.isc.org/isc-projects/bind9/-/issues/3285
dig hangs when there is a TLS context creation failure
2022-04-27T14:02:09Z, Aram Sargsyan

For testing, you can simulate an error by providing a TLS certificate file without a key. This will hang in `main` until interrupted:
```
bin/dig/dig +tcp +tls +tls-certfile=/dev/null localhost
;; both TLS client certificate and key file must be specified a the same time
^C
```
There seems to be a missing detachment of the `connectquery` in `dighost.c:start_tcp()` when the `get_create_tls_context()` call fails.
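The suspected leak follows a common reference-counting rule: every early-return path taken after an attach must have a matching detach, or the event loop never drains and the program hangs. A minimal sketch of that pattern, using an invented `Query` class rather than BIND's actual `connectquery`/`dighost.c` code:

```python
class Query:
    """Invented stand-in for a reference-counted query object."""

    def __init__(self):
        self.references = 1

    def attach(self):
        self.references += 1

    def detach(self):
        self.references -= 1


def start_tcp(query, create_tls_context):
    query.attach()            # reference held for the connect attempt
    ctx = create_tls_context()
    if ctx is None:           # TLS context creation failed
        query.detach()        # without this detach, the reference leaks
        return False          # and shutdown never completes -> hang
    return True


q = Query()
start_tcp(q, lambda: None)    # simulate a TLS context failure
print(q.references)           # 1: the connect-attempt reference was released
```

With the detach missing on the failure path, `q.references` would stay at 2 and the object (and whatever waits on it) would never be released, which matches the observed hang.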
Assigning to @artem at his request.

Milestone: May 2022 (9.16.29, 9.16.29-S1, 9.18.3, 9.19.1)
Assignee: Artem Boldariev

https://gitlab.isc.org/isc-projects/bind9/-/issues/3300
"xferquota" system test fails intermittently
2022-04-27T18:08:42Z, Michał Kępień

An interesting crash was triggered for `main`[^1] under TSAN:
https://gitlab.isc.org/isc-projects/bind9/-/jobs/2459947
The test failed because a crash happened:
```
D:xferquota:#4 0x00007f77159655ce in isc_assertion_failed (file=0x2 <error: Cannot access memory at address 0x2>, line=198882080, line@entry=506, type=isc_assertiontype_require, type@entry=isc_assertiontype_insist, cond=0x7f7714c5ace1 <__GI_raise+321> "H\213\204$\b\001") at assertions.c:49
D:xferquota:#5 0x00007f77156bf440 in udp_recv (handle=handle@entry=0x7b4800019b00, eresult=eresult@entry=ISC_R_SUCCESS, region=region@entry=0x7f770bdabc18, arg=<optimized out>) at dispatch.c:506
D:xferquota:#6 0x00007f771594cade in isc__nm_async_readcb (worker=<optimized out>, ev0=ev0@entry=0x7f770bdabc78) at netmgr/netmgr.c:2702
D:xferquota:#7 0x00007f771594ac73 in isc__nm_readcb (sock=sock@entry=0x7b74000bfe00, uvreq=uvreq@entry=0x7b6c00070000, eresult=eresult@entry=ISC_R_SUCCESS) at netmgr/netmgr.c:2675
D:xferquota:#8 0x00007f77159612eb in udp_recv_cb (handle=handle@entry=0x7b74000c03c8, nrecv=nrecv@entry=36, buf=buf@entry=0x7f770bdabe40, addr=addr@entry=0x7f770bdabe90, flags=flags@entry=0) at netmgr/udp.c:647
D:xferquota:#9 0x00007f7715962c6c in isc__nm_udp_read_cb (handle=0x7b74000c03c8, nrecv=36, buf=0x7f770bdabe40, addr=0x7f770bdabe90, flags=0) at netmgr/udp.c:1053
D:xferquota:#10 0x00007f77151aa3f1 in uv__udp_recvmsg (handle=0x7b74000c03c8) at /usr/src/libuv-v1.43.0/src/unix/udp.c:302
D:xferquota:#11 0x00007f77151a9d30 in uv__udp_io (loop=0x7ba00000a540, w=0x7b74000c0448, revents=1) at /usr/src/libuv-v1.43.0/src/unix/udp.c:178
D:xferquota:#12 0x00007f77151b0b87 in uv__io_poll (loop=0x7ba00000a540, timeout=14836) at /usr/src/libuv-v1.43.0/src/unix/epoll.c:374
D:xferquota:#13 0x00007f771519586c in uv_run (loop=0x7ba00000a540, mode=UV_RUN_DEFAULT) at /usr/src/libuv-v1.43.0/src/unix/core.c:389
D:xferquota:#14 0x00007f77159410de in nm_thread (worker0=0x7ba00000a530) at netmgr/netmgr.c:654
```
The crash happens here:
```c
496 if (eresult != ISC_R_SUCCESS) {
497 /*
498 * This is most likely a network error on a connected
499 * socket, a timeout, or the query has been canceled.
500 * It makes no sense to check the address or parse the
501 * packet, but we can return the error to the caller.
502 */
503 goto done;
504 }
505
506 >>> INSIST(ISC_LINK_LINKED(resp, alink));
507
508 peer = isc_nmhandle_peeraddr(handle);
509 isc_netaddr_fromsockaddr(&netaddr, &peer);
```
However, note that zone transfers seem to have become stuck shortly
after startup:
```
I:xferquota:starting servers
I:xferquota:Have 50 zones up in 1 seconds
I:xferquota:Changing test zone...
I:xferquota:Have 54 zones up in 2 seconds
I:xferquota:Have 54 zones up in 3 seconds
I:xferquota:Have 54 zones up in 4 seconds
I:xferquota:Have 54 zones up in 5 seconds
I:xferquota:Have 54 zones up in 6 seconds
I:xferquota:Have 54 zones up in 7 seconds
I:xferquota:Have 54 zones up in 8 seconds
I:xferquota:Have 54 zones up in 9 seconds
I:xferquota:Have 54 zones up in 10 seconds
...
I:xferquota:Have 54 zones up in 357 seconds
I:xferquota:Have 54 zones up in 358 seconds
I:xferquota:Have 54 zones up in 359 seconds
I:xferquota:Took too long to load zones
```
The crash itself seems to have happened during a shutdown attempt:
```
I:xferquota:Took too long to load zones
I:xferquota:stopping servers
I:xferquota:ns1 died before a SIGTERM was sent
I:xferquota:ns1 didn't die when sent a SIGTERM
I:xferquota:stopping servers failed
I:xferquota:Core dump(s) found: xferquota/ns1/core.30627
```
<details>
<summary>Click to expand/collapse full test log</summary>
```
S:xferquota:2022-04-22T09:44:23+0000
T:xferquota:1:A
A:xferquota:System test xferquota
I:xferquota:PORTS:32124,32125,32126,32127,32128,32129,32130,32131,32132,32133,32134,32135,32136
I:xferquota:starting servers
I:xferquota:Have 50 zones up in 1 seconds
I:xferquota:Changing test zone...
I:xferquota:Have 54 zones up in 2 seconds
I:xferquota:Have 54 zones up in 3 seconds
I:xferquota:Have 54 zones up in 4 seconds
I:xferquota:Have 54 zones up in 5 seconds
I:xferquota:Have 54 zones up in 6 seconds
I:xferquota:Have 54 zones up in 7 seconds
I:xferquota:Have 54 zones up in 8 seconds
I:xferquota:Have 54 zones up in 9 seconds
I:xferquota:Have 54 zones up in 10 seconds
I:xferquota:Have 54 zones up in 11 seconds
I:xferquota:Have 54 zones up in 12 seconds
I:xferquota:Have 54 zones up in 13 seconds
I:xferquota:Have 54 zones up in 14 seconds
I:xferquota:Have 54 zones up in 15 seconds
I:xferquota:Have 54 zones up in 16 seconds
I:xferquota:Have 54 zones up in 17 seconds
I:xferquota:Have 54 zones up in 18 seconds
I:xferquota:Have 54 zones up in 19 seconds
I:xferquota:Have 54 zones up in 20 seconds
I:xferquota:Have 54 zones up in 21 seconds
I:xferquota:Have 54 zones up in 22 seconds
I:xferquota:Have 54 zones up in 23 seconds
I:xferquota:Have 54 zones up in 24 seconds
I:xferquota:Have 54 zones up in 25 seconds
I:xferquota:Have 54 zones up in 26 seconds
I:xferquota:Have 54 zones up in 27 seconds
I:xferquota:Have 54 zones up in 28 seconds
I:xferquota:Have 54 zones up in 29 seconds
I:xferquota:Have 54 zones up in 30 seconds
I:xferquota:Have 54 zones up in 31 seconds
I:xferquota:Have 54 zones up in 32 seconds
I:xferquota:Have 54 zones up in 33 seconds
I:xferquota:Have 54 zones up in 34 seconds
I:xferquota:Have 54 zones up in 35 seconds
I:xferquota:Have 54 zones up in 36 seconds
I:xferquota:Have 54 zones up in 37 seconds
I:xferquota:Have 54 zones up in 38 seconds
I:xferquota:Have 54 zones up in 39 seconds
I:xferquota:Have 54 zones up in 40 seconds
I:xferquota:Have 54 zones up in 41 seconds
I:xferquota:Have 54 zones up in 42 seconds
I:xferquota:Have 54 zones up in 43 seconds
I:xferquota:Have 54 zones up in 44 seconds
I:xferquota:Have 54 zones up in 45 seconds
I:xferquota:Have 54 zones up in 46 seconds
I:xferquota:Have 54 zones up in 47 seconds
I:xferquota:Have 54 zones up in 48 seconds
I:xferquota:Have 54 zones up in 49 seconds
I:xferquota:Have 54 zones up in 50 seconds
I:xferquota:Have 54 zones up in 51 seconds
I:xferquota:Have 54 zones up in 52 seconds
I:xferquota:Have 54 zones up in 53 seconds
I:xferquota:Have 54 zones up in 54 seconds
I:xferquota:Have 54 zones up in 55 seconds
I:xferquota:Have 54 zones up in 56 seconds
I:xferquota:Have 54 zones up in 57 seconds
I:xferquota:Have 54 zones up in 58 seconds
I:xferquota:Have 54 zones up in 59 seconds
I:xferquota:Have 54 zones up in 60 seconds
I:xferquota:Have 54 zones up in 61 seconds
I:xferquota:Have 54 zones up in 62 seconds
I:xferquota:Have 54 zones up in 63 seconds
I:xferquota:Have 54 zones up in 64 seconds
I:xferquota:Have 54 zones up in 65 seconds
I:xferquota:Have 54 zones up in 66 seconds
I:xferquota:Have 54 zones up in 67 seconds
I:xferquota:Have 54 zones up in 68 seconds
I:xferquota:Have 54 zones up in 69 seconds
I:xferquota:Have 54 zones up in 70 seconds
I:xferquota:Have 54 zones up in 71 seconds
I:xferquota:Have 54 zones up in 72 seconds
I:xferquota:Have 54 zones up in 73 seconds
I:xferquota:Have 54 zones up in 74 seconds
I:xferquota:Have 54 zones up in 75 seconds
I:xferquota:Have 54 zones up in 76 seconds
I:xferquota:Have 54 zones up in 77 seconds
I:xferquota:Have 54 zones up in 78 seconds
I:xferquota:Have 54 zones up in 79 seconds
I:xferquota:Have 54 zones up in 80 seconds
I:xferquota:Have 54 zones up in 81 seconds
I:xferquota:Have 54 zones up in 82 seconds
I:xferquota:Have 54 zones up in 83 seconds
I:xferquota:Have 54 zones up in 84 seconds
I:xferquota:Have 54 zones up in 85 seconds
I:xferquota:Have 54 zones up in 86 seconds
I:xferquota:Have 54 zones up in 87 seconds
I:xferquota:Have 54 zones up in 88 seconds
I:xferquota:Have 54 zones up in 89 seconds
I:xferquota:Have 54 zones up in 90 seconds
I:xferquota:Have 54 zones up in 91 seconds
I:xferquota:Have 54 zones up in 92 seconds
I:xferquota:Have 54 zones up in 93 seconds
I:xferquota:Have 54 zones up in 94 seconds
I:xferquota:Have 54 zones up in 95 seconds
I:xferquota:Have 54 zones up in 96 seconds
I:xferquota:Have 54 zones up in 97 seconds
I:xferquota:Have 54 zones up in 98 seconds
I:xferquota:Have 54 zones up in 99 seconds
I:xferquota:Have 54 zones up in 100 seconds
I:xferquota:Have 54 zones up in 101 seconds
I:xferquota:Have 54 zones up in 102 seconds
I:xferquota:Have 54 zones up in 103 seconds
I:xferquota:Have 54 zones up in 104 seconds
I:xferquota:Have 54 zones up in 105 seconds
I:xferquota:Have 54 zones up in 106 seconds
I:xferquota:Have 54 zones up in 107 seconds
I:xferquota:Have 54 zones up in 108 seconds
I:xferquota:Have 54 zones up in 109 seconds
I:xferquota:Have 54 zones up in 110 seconds
I:xferquota:Have 54 zones up in 111 seconds
I:xferquota:Have 54 zones up in 112 seconds
I:xferquota:Have 54 zones up in 113 seconds
I:xferquota:Have 54 zones up in 114 seconds
I:xferquota:Have 54 zones up in 115 seconds
I:xferquota:Have 54 zones up in 116 seconds
I:xferquota:Have 54 zones up in 117 seconds
I:xferquota:Have 54 zones up in 118 seconds
I:xferquota:Have 54 zones up in 119 seconds
I:xferquota:Have 54 zones up in 120 seconds
I:xferquota:Have 54 zones up in 121 seconds
I:xferquota:Have 54 zones up in 122 seconds
I:xferquota:Have 54 zones up in 123 seconds
I:xferquota:Have 54 zones up in 124 seconds
I:xferquota:Have 54 zones up in 125 seconds
I:xferquota:Have 54 zones up in 126 seconds
I:xferquota:Have 54 zones up in 127 seconds
I:xferquota:Have 54 zones up in 128 seconds
I:xferquota:Have 54 zones up in 129 seconds
I:xferquota:Have 54 zones up in 130 seconds
I:xferquota:Have 54 zones up in 131 seconds
I:xferquota:Have 54 zones up in 132 seconds
I:xferquota:Have 54 zones up in 133 seconds
I:xferquota:Have 54 zones up in 134 seconds
I:xferquota:Have 54 zones up in 135 seconds
I:xferquota:Have 54 zones up in 136 seconds
I:xferquota:Have 54 zones up in 137 seconds
I:xferquota:Have 54 zones up in 138 seconds
I:xferquota:Have 54 zones up in 139 seconds
I:xferquota:Have 54 zones up in 140 seconds
I:xferquota:Have 54 zones up in 141 seconds
I:xferquota:Have 54 zones up in 142 seconds
I:xferquota:Have 54 zones up in 143 seconds
I:xferquota:Have 54 zones up in 144 seconds
I:xferquota:Have 54 zones up in 145 seconds
I:xferquota:Have 54 zones up in 146 seconds
I:xferquota:Have 54 zones up in 147 seconds
I:xferquota:Have 54 zones up in 148 seconds
I:xferquota:Have 54 zones up in 149 seconds
I:xferquota:Have 54 zones up in 150 seconds
I:xferquota:Have 54 zones up in 151 seconds
I:xferquota:Have 54 zones up in 152 seconds
I:xferquota:Have 54 zones up in 153 seconds
I:xferquota:Have 54 zones up in 154 seconds
I:xferquota:Have 54 zones up in 155 seconds
I:xferquota:Have 54 zones up in 156 seconds
I:xferquota:Have 54 zones up in 157 seconds
I:xferquota:Have 54 zones up in 158 seconds
I:xferquota:Have 54 zones up in 159 seconds
I:xferquota:Have 54 zones up in 160 seconds
I:xferquota:Have 54 zones up in 161 seconds
I:xferquota:Have 54 zones up in 162 seconds
I:xferquota:Have 54 zones up in 163 seconds
I:xferquota:Have 54 zones up in 164 seconds
I:xferquota:Have 54 zones up in 165 seconds
I:xferquota:Have 54 zones up in 166 seconds
I:xferquota:Have 54 zones up in 167 seconds
I:xferquota:Have 54 zones up in 168 seconds
I:xferquota:Have 54 zones up in 169 seconds
I:xferquota:Have 54 zones up in 170 seconds
I:xferquota:Have 54 zones up in 171 seconds
I:xferquota:Have 54 zones up in 172 seconds
I:xferquota:Have 54 zones up in 173 seconds
I:xferquota:Have 54 zones up in 174 seconds
I:xferquota:Have 54 zones up in 175 seconds
I:xferquota:Have 54 zones up in 176 seconds
I:xferquota:Have 54 zones up in 177 seconds
I:xferquota:Have 54 zones up in 178 seconds
I:xferquota:Have 54 zones up in 179 seconds
I:xferquota:Have 54 zones up in 180 seconds
I:xferquota:Have 54 zones up in 181 seconds
I:xferquota:Have 54 zones up in 182 seconds
I:xferquota:Have 54 zones up in 183 seconds
I:xferquota:Have 54 zones up in 184 seconds
I:xferquota:Have 54 zones up in 185 seconds
I:xferquota:Have 54 zones up in 186 seconds
I:xferquota:Have 54 zones up in 187 seconds
I:xferquota:Have 54 zones up in 188 seconds
I:xferquota:Have 54 zones up in 189 seconds
I:xferquota:Have 54 zones up in 190 seconds
I:xferquota:Have 54 zones up in 191 seconds
I:xferquota:Have 54 zones up in 192 seconds
I:xferquota:Have 54 zones up in 193 seconds
I:xferquota:Have 54 zones up in 194 seconds
I:xferquota:Have 54 zones up in 195 seconds
I:xferquota:Have 54 zones up in 196 seconds
I:xferquota:Have 54 zones up in 197 seconds
I:xferquota:Have 54 zones up in 198 seconds
I:xferquota:Have 54 zones up in 199 seconds
I:xferquota:Have 54 zones up in 200 seconds
I:xferquota:Have 54 zones up in 201 seconds
I:xferquota:Have 54 zones up in 202 seconds
I:xferquota:Have 54 zones up in 203 seconds
I:xferquota:Have 54 zones up in 204 seconds
I:xferquota:Have 54 zones up in 205 seconds
I:xferquota:Have 54 zones up in 206 seconds
I:xferquota:Have 54 zones up in 207 seconds
I:xferquota:Have 54 zones up in 208 seconds
I:xferquota:Have 54 zones up in 209 seconds
I:xferquota:Have 54 zones up in 210 seconds
I:xferquota:Have 54 zones up in 211 seconds
I:xferquota:Have 54 zones up in 212 seconds
I:xferquota:Have 54 zones up in 213 seconds
I:xferquota:Have 54 zones up in 214 seconds
I:xferquota:Have 54 zones up in 215 seconds
I:xferquota:Have 54 zones up in 216 seconds
I:xferquota:Have 54 zones up in 217 seconds
I:xferquota:Have 54 zones up in 218 seconds
I:xferquota:Have 54 zones up in 219 seconds
I:xferquota:Have 54 zones up in 220 seconds
I:xferquota:Have 54 zones up in 221 seconds
I:xferquota:Have 54 zones up in 222 seconds
I:xferquota:Have 54 zones up in 223 seconds
I:xferquota:Have 54 zones up in 224 seconds
I:xferquota:Have 54 zones up in 225 seconds
I:xferquota:Have 54 zones up in 226 seconds
I:xferquota:Have 54 zones up in 227 seconds
I:xferquota:Have 54 zones up in 228 seconds
I:xferquota:Have 54 zones up in 229 seconds
I:xferquota:Have 54 zones up in 230 seconds
I:xferquota:Have 54 zones up in 231 seconds
I:xferquota:Have 54 zones up in 232 seconds
I:xferquota:Have 54 zones up in 233 seconds
I:xferquota:Have 54 zones up in 234 seconds
I:xferquota:Have 54 zones up in 235 seconds
I:xferquota:Have 54 zones up in 236 seconds
I:xferquota:Have 54 zones up in 237 seconds
I:xferquota:Have 54 zones up in 238 seconds
I:xferquota:Have 54 zones up in 239 seconds
I:xferquota:Have 54 zones up in 240 seconds
I:xferquota:Have 54 zones up in 241 seconds
I:xferquota:Have 54 zones up in 242 seconds
I:xferquota:Have 54 zones up in 243 seconds
I:xferquota:Have 54 zones up in 244 seconds
I:xferquota:Have 54 zones up in 245 seconds
I:xferquota:Have 54 zones up in 246 seconds
I:xferquota:Have 54 zones up in 247 seconds
I:xferquota:Have 54 zones up in 248 seconds
I:xferquota:Have 54 zones up in 249 seconds
I:xferquota:Have 54 zones up in 250 seconds
I:xferquota:Have 54 zones up in 251 seconds
I:xferquota:Have 54 zones up in 252 seconds
I:xferquota:Have 54 zones up in 253 seconds
I:xferquota:Have 54 zones up in 254 seconds
I:xferquota:Have 54 zones up in 255 seconds
I:xferquota:Have 54 zones up in 256 seconds
I:xferquota:Have 54 zones up in 257 seconds
I:xferquota:Have 54 zones up in 258 seconds
I:xferquota:Have 54 zones up in 259 seconds
I:xferquota:Have 54 zones up in 260 seconds
I:xferquota:Have 54 zones up in 261 seconds
I:xferquota:Have 54 zones up in 262 seconds
I:xferquota:Have 54 zones up in 263 seconds
I:xferquota:Have 54 zones up in 264 seconds
I:xferquota:Have 54 zones up in 265 seconds
I:xferquota:Have 54 zones up in 266 seconds
I:xferquota:Have 54 zones up in 267 seconds
I:xferquota:Have 54 zones up in 268 seconds
I:xferquota:Have 54 zones up in 269 seconds
I:xferquota:Have 54 zones up in 270 seconds
I:xferquota:Have 54 zones up in 271 seconds
I:xferquota:Have 54 zones up in 272 seconds
I:xferquota:Have 54 zones up in 273 seconds
I:xferquota:Have 54 zones up in 274 seconds
I:xferquota:Have 54 zones up in 275 seconds
I:xferquota:Have 54 zones up in 276 seconds
I:xferquota:Have 54 zones up in 277 seconds
I:xferquota:Have 54 zones up in 278 seconds
I:xferquota:Have 54 zones up in 279 seconds
I:xferquota:Have 54 zones up in 280 seconds
I:xferquota:Have 54 zones up in 281 seconds
I:xferquota:Have 54 zones up in 282 seconds
I:xferquota:Have 54 zones up in 283 seconds
I:xferquota:Have 54 zones up in 284 seconds
I:xferquota:Have 54 zones up in 285 seconds
I:xferquota:Have 54 zones up in 286 seconds
I:xferquota:Have 54 zones up in 287 seconds
I:xferquota:Have 54 zones up in 288 seconds
I:xferquota:Have 54 zones up in 289 seconds
I:xferquota:Have 54 zones up in 290 seconds
I:xferquota:Have 54 zones up in 291 seconds
I:xferquota:Have 54 zones up in 292 seconds
I:xferquota:Have 54 zones up in 293 seconds
I:xferquota:Have 54 zones up in 294 seconds
I:xferquota:Have 54 zones up in 295 seconds
I:xferquota:Have 54 zones up in 296 seconds
I:xferquota:Have 54 zones up in 297 seconds
I:xferquota:Have 54 zones up in 298 seconds
I:xferquota:Have 54 zones up in 299 seconds
I:xferquota:Have 54 zones up in 300 seconds
I:xferquota:Have 54 zones up in 301 seconds
I:xferquota:Have 54 zones up in 302 seconds
I:xferquota:Have 54 zones up in 303 seconds
I:xferquota:Have 54 zones up in 304 seconds
I:xferquota:Have 54 zones up in 305 seconds
I:xferquota:Have 54 zones up in 306 seconds
I:xferquota:Have 54 zones up in 307 seconds
I:xferquota:Have 54 zones up in 308 seconds
I:xferquota:Have 54 zones up in 309 seconds
I:xferquota:Have 54 zones up in 310 seconds
I:xferquota:Have 54 zones up in 311 seconds
I:xferquota:Have 54 zones up in 312 seconds
I:xferquota:Have 54 zones up in 313 seconds
I:xferquota:Have 54 zones up in 314 seconds
I:xferquota:Have 54 zones up in 315 seconds
I:xferquota:Have 54 zones up in 316 seconds
I:xferquota:Have 54 zones up in 317 seconds
I:xferquota:Have 54 zones up in 318 seconds
I:xferquota:Have 54 zones up in 319 seconds
I:xferquota:Have 54 zones up in 320 seconds
I:xferquota:Have 54 zones up in 321 seconds
I:xferquota:Have 54 zones up in 322 seconds
I:xferquota:Have 54 zones up in 323 seconds
I:xferquota:Have 54 zones up in 324 seconds
I:xferquota:Have 54 zones up in 325 seconds
I:xferquota:Have 54 zones up in 326 seconds
I:xferquota:Have 54 zones up in 327 seconds
I:xferquota:Have 54 zones up in 328 seconds
I:xferquota:Have 54 zones up in 329 seconds
I:xferquota:Have 54 zones up in 330 seconds
I:xferquota:Have 54 zones up in 331 seconds
I:xferquota:Have 54 zones up in 332 seconds
I:xferquota:Have 54 zones up in 333 seconds
I:xferquota:Have 54 zones up in 334 seconds
I:xferquota:Have 54 zones up in 335 seconds
I:xferquota:Have 54 zones up in 336 seconds
I:xferquota:Have 54 zones up in 337 seconds
I:xferquota:Have 54 zones up in 338 seconds
I:xferquota:Have 54 zones up in 339 seconds
I:xferquota:Have 54 zones up in 340 seconds
I:xferquota:Have 54 zones up in 341 seconds
I:xferquota:Have 54 zones up in 342 seconds
I:xferquota:Have 54 zones up in 343 seconds
I:xferquota:Have 54 zones up in 344 seconds
I:xferquota:Have 54 zones up in 345 seconds
I:xferquota:Have 54 zones up in 346 seconds
I:xferquota:Have 54 zones up in 347 seconds
I:xferquota:Have 54 zones up in 348 seconds
I:xferquota:Have 54 zones up in 349 seconds
I:xferquota:Have 54 zones up in 350 seconds
I:xferquota:Have 54 zones up in 351 seconds
I:xferquota:Have 54 zones up in 352 seconds
I:xferquota:Have 54 zones up in 353 seconds
I:xferquota:Have 54 zones up in 354 seconds
I:xferquota:Have 54 zones up in 355 seconds
I:xferquota:Have 54 zones up in 356 seconds
I:xferquota:Have 54 zones up in 357 seconds
I:xferquota:Have 54 zones up in 358 seconds
I:xferquota:Have 54 zones up in 359 seconds
I:xferquota:Took too long to load zones
I:xferquota:stopping servers
I:xferquota:ns1 died before a SIGTERM was sent
I:xferquota:ns1 didn't die when sent a SIGTERM
I:xferquota:stopping servers failed
I:xferquota:Core dump(s) found: xferquota/ns1/core.30627
D:xferquota:backtrace from xferquota/ns1/core.30627:
D:xferquota:--------------------------------------------------------------------------------
D:xferquota:Core was generated by `/builds/isc-projects/bind9/bin/named/.libs/named -D xferquota-ns1 -X named.lock'.
D:xferquota:Program terminated with signal SIGABRT, Aborted.
D:xferquota:#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
D:xferquota:[Current thread is 1 (Thread 0x7f770bdef700 (LWP 30664))]
D:xferquota:#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
D:xferquota:#1 0x00007f7714c44537 in __GI_abort () at abort.c:79
D:xferquota:#2 0x0000000000466663 in abort ()
D:xferquota:#3 0x00000000004e23d2 in assertion_failed (file=<optimized out>, line=<optimized out>, type=<optimized out>, cond=<optimized out>) at main.c:237
D:xferquota:#4 0x00007f77159655ce in isc_assertion_failed (file=0x2 <error: Cannot access memory at address 0x2>, line=198882080, line@entry=506, type=isc_assertiontype_require, type@entry=isc_assertiontype_insist, cond=0x7f7714c5ace1 <__GI_raise+321> "H\213\204$\b\001") at assertions.c:49
D:xferquota:#5 0x00007f77156bf440 in udp_recv (handle=handle@entry=0x7b4800019b00, eresult=eresult@entry=ISC_R_SUCCESS, region=region@entry=0x7f770bdabc18, arg=<optimized out>) at dispatch.c:506
D:xferquota:#6 0x00007f771594cade in isc__nm_async_readcb (worker=<optimized out>, ev0=ev0@entry=0x7f770bdabc78) at netmgr/netmgr.c:2702
D:xferquota:#7 0x00007f771594ac73 in isc__nm_readcb (sock=sock@entry=0x7b74000bfe00, uvreq=uvreq@entry=0x7b6c00070000, eresult=eresult@entry=ISC_R_SUCCESS) at netmgr/netmgr.c:2675
D:xferquota:#8 0x00007f77159612eb in udp_recv_cb (handle=handle@entry=0x7b74000c03c8, nrecv=nrecv@entry=36, buf=buf@entry=0x7f770bdabe40, addr=addr@entry=0x7f770bdabe90, flags=flags@entry=0) at netmgr/udp.c:647
D:xferquota:#9 0x00007f7715962c6c in isc__nm_udp_read_cb (handle=0x7b74000c03c8, nrecv=36, buf=0x7f770bdabe40, addr=0x7f770bdabe90, flags=0) at netmgr/udp.c:1053
D:xferquota:#10 0x00007f77151aa3f1 in uv__udp_recvmsg (handle=0x7b74000c03c8) at /usr/src/libuv-v1.43.0/src/unix/udp.c:302
D:xferquota:#11 0x00007f77151a9d30 in uv__udp_io (loop=0x7ba00000a540, w=0x7b74000c0448, revents=1) at /usr/src/libuv-v1.43.0/src/unix/udp.c:178
D:xferquota:#12 0x00007f77151b0b87 in uv__io_poll (loop=0x7ba00000a540, timeout=14836) at /usr/src/libuv-v1.43.0/src/unix/epoll.c:374
D:xferquota:#13 0x00007f771519586c in uv_run (loop=0x7ba00000a540, mode=UV_RUN_DEFAULT) at /usr/src/libuv-v1.43.0/src/unix/core.c:389
D:xferquota:#14 0x00007f77159410de in nm_thread (worker0=0x7ba00000a530) at netmgr/netmgr.c:654
D:xferquota:#15 0x00007f771599d254 in isc__trampoline_run (arg=0x7b0c00000870) at trampoline.c:198
D:xferquota:#16 0x0000000000460cfd in __tsan_thread_start_func ()
D:xferquota:#17 0x00007f7714f4cea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
D:xferquota:#18 0x00007f7714d1cdef in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
D:xferquota:--------------------------------------------------------------------------------
D:xferquota:full backtrace from xferquota/ns1/core.30627 saved in xferquota/ns1/core.30627-backtrace.txt
D:xferquota:core dump xferquota/ns1/core.30627 archived as xferquota/ns1/core.30627.gz
R:xferquota:FAIL
E:xferquota:2022-04-22T09:51:34+0000
```
</details>
[^1]: Technically speaking, it was triggered for a merge request on top of `main`, but that merge request only touches Python-related code.

Milestone: May 2022 (9.16.29, 9.16.29-S1, 9.18.3, 9.19.1)
Assignee: Aram Sargsyan

https://gitlab.isc.org/isc-projects/bind9/-/issues/3229
Remove task exclusive mode from ns_interfacemgr
2022-04-28T05:11:43Z, Ondřej Surý

The ns_interfacemgr uses the exclusive mode as a global lock for modifying the dns_aclenv.

Milestone: April 2022 (9.16.28, 9.16.28-S1, 9.18.2, 9.19.0)
Assignee: Ondřej Surý

https://gitlab.isc.org/isc-projects/bind9/-/issues/3273
Refactor `dns_rdataset_t->privateN`
2022-04-28T12:12:19Z, Tony Finch

The generic `dns_rdataset_t` structure has a number of `private` fields that are (according to the comment before their declarations) "for use by the rdataset implementation, and MUST NOT be changed by clients." That suggests they should only be used by `dns_rdatalist_t` and the `rdataslab` implementation (which is largely in `rbtdb.c`). However, the `privateN` fields are also used by:
* `dnsrps.c`
* `keytable.c`
* `ncache.c`
* `sdb.c`
* `sdlz.c`
It is not clear what each `privateN` field is for, which code owns which fields, or whether there are clashes (or how developers can be sure to avoid them).
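The ownership hazard can be sketched abstractly: when two implementations both treat the same opaque slot as their own, one silently clobbers the other's state. The class and field names below are invented for illustration; they are not the real `dns_rdataset_t` layout:

```python
class RdatasetLike:
    """Invented stand-in with one opaque slot of unspecified ownership."""

    def __init__(self):
        self.private4 = None  # "for use by the rdataset implementation"


def impl_a_setup(rds):
    # Implementation A stashes its iteration state in the opaque slot.
    rds.private4 = {"owner": "a", "cursor": 0}


def impl_b_setup(rds):
    # Implementation B assumes the slot is free and overwrites it;
    # nothing detects the clash because the field is untyped.
    rds.private4 = ("b", [])


rds = RdatasetLike()
impl_a_setup(rds)
impl_b_setup(rds)
print(rds.private4[0])  # "b": implementation A's cursor is silently gone
```

Making ownership explicit (per-implementation structs, or documented field assignments) is exactly the kind of clarification the refactor asks for.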
This is one of the work items tracked in #3268.

Assignee: Tony Finch

https://gitlab.isc.org/isc-projects/bind9/-/issues/1631
kasp system test failed (bad number of keys)
2022-04-28T19:13:15Z, Michał Kępień

https://gitlab.isc.org/isc-projects/bind9/-/jobs/700050
```
I:kasp:check number of keys with algorithm 5 for zone legacy-keys.kasp in dir ns3 (56)
I:kasp:error: bad number (2) of key files for zone legacy-keys.kasp (expected 3)
I:kasp:failed
I:kasp:check key 24906
I:kasp:check key 38941
I:kasp:error: No KEY2 found for zone legacy-keys.kasp
I:kasp:failed
I:kasp:check DNSKEY rrset is signed correctly for zone legacy-keys.kasp (57)
I:kasp:check SOA rrset is signed correctly for zone legacy-keys.kasp (58)
I:kasp:error: SOA RRset not signed with key no
I:kasp:failed
I:kasp:check CDS and CDNSKEY rrset are signed correctly for zone legacy-keys.kasp (59)
I:kasp:check A a.legacy-keys.kasp rrset is signed correctly for zone legacy-keys.kasp (60)
I:kasp:error: A RRset not signed with key no
I:kasp:failed
```

Milestone: BIND 9.17 Backburner

https://gitlab.isc.org/isc-projects/dhcp/-/issues/239
dhcpd initialization race can leave a socket unread forever
2022-04-28T21:59:09Z, Nick Owens
**Describe the bug**
We are experiencing a race condition in dhcpd initialization that prevents leases from being handed out if a packet arrives during server init. Note that in our case we use dhcpd 4.4.1 and libisc from BIND 9.11.35, but I believe the issue still exists. Our BIND libraries are built with threads and epoll on Linux.
In https://gitlab.isc.org/isc-projects/dhcp/-/blob/master/omapip/dispatch.c#L259 we register a socket into libisc with a callback. A little later, we insert it into a linked list: https://gitlab.isc.org/isc-projects/dhcp/-/blob/master/omapip/dispatch.c#L279.
In the callback, we check whether the callback argument (the registered socket) is in the linked list: https://gitlab.isc.org/isc-projects/dhcp/-/blob/master/omapip/dispatch.c#L137. If it is not, we return 0, which disables re-arming of the socket in the libisc epoll code at https://gitlab.isc.org/isc-projects/bind9/-/blob/v9_11_35/lib/isc/unix/socket.c#L4017.
If the socket fails to be re-armed (because we returned 0), it is never armed again, and the server runs without reading any DHCP packets. When this happens we can observe the socket receive queue growing while DHCP clients time out:
```
# ss -0ap | grep -E 'dhcpd|Netid'
Cannot open netlink socket: Protocol not supported
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
p_raw UNCONN 26880 0 *:br-lan * users:(("dhcpd",pid=13706,fd=11))
```
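The sequence described above can be modelled in a few lines. The names (`Dispatcher`, `omapi_callback`, `registered`) are invented stand-ins for the omapip/libisc machinery, not the real APIs:

```python
class Dispatcher:
    """Re-arms a watched fd only when its callback returns nonzero,
    mirroring the libisc fdwatch behaviour referenced above."""

    def __init__(self):
        self.armed = {}  # fd -> callback

    def register(self, fd, cb):
        self.armed[fd] = cb

    def deliver(self, fd):
        cb = self.armed.get(fd)
        if cb is None:
            return           # fd is not armed: the event is never seen
        if cb(fd) == 0:      # callback asked not to be re-armed
            del self.armed[fd]


registered = []  # the omapip-style linked list of known sockets


def omapi_callback(fd):
    # Like omapip/dispatch.c: ignore (and disarm) fds we don't know about.
    return 1 if fd in registered else 0


disp = Dispatcher()
disp.register(4, omapi_callback)  # step 1: fdwatch created...
disp.deliver(4)                   # ...a packet arrives in the race window...
registered.append(4)              # ...step 2: only now is fd 4 in the list
disp.deliver(4)                   # later packets are silently dropped
print(4 in disp.armed)            # False: the socket is never read again
```

Presumably, inserting the socket into the linked list before (or atomically with) creating the fdwatch, or returning nonzero for not-yet-listed fds, would close the window; the report leaves the fix open.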
**To Reproduce**
Steps to reproduce the behavior:
1. Run dhcpd.
2. A client sends a DHCP request.
3. If the race is triggered, the server does not reply and packets pile up in the receive queue.
To widen the race window, you can place `sleep(10);` at https://gitlab.isc.org/isc-projects/dhcp/-/blob/master/omapip/dispatch.c#L278, after the fdwatch is created but before the linked-list insertion, and send a DHCP request during that sleep window.
**Expected behavior**
The server replies to DHCP requests.
**Environment:**
- ISC DHCP version: 4.4.1
- OS: yocto 3.1