# ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues

## Postgres backend with SSL
https://gitlab.isc.org/isc-projects/kea/-/issues/1628 (varsraja, updated 2022-01-06)

We would like to know if we can specify the client certificate, key, and CA chain to be used when connecting to the PostgreSQL backend for leases.
If Kea supports SSL, can you please provide the keywords to be used in the lease-database section?
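For illustration, a sketch of what such a lease-database entry might look like. The TLS keyword names below (trust-anchor, cert-file, key-file) are an assumption modelled on the names Kea uses for database TLS elsewhere, not confirmed PostgreSQL-backend syntax:

```
"Dhcp4": {
    "lease-database": {
        "type": "postgresql",
        "name": "kea",
        "host": "db.example.org",
        "user": "kea",
        "password": "secret",

        // Hypothetical TLS keywords - not confirmed for the PostgreSQL backend:
        "trust-anchor": "/etc/kea/ca-chain.pem",
        "cert-file": "/etc/kea/kea-client-cert.pem",
        "key-file": "/etc/kea/kea-client-key.pem"
    }
}
```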
Milestone: kea2.1-backlog. Assignee: Francis Dupont.

## Feature Request: Add version control and version history (and maybe some limited roll back?) capability to Kea configuration/CB
https://gitlab.isc.org/isc-projects/kea/-/issues/1625 (Cathy Almond, updated 2022-05-25)

[Per Support Ticket #17332](https://support.isc.org/Ticket/Display.html?id=17332)
I think it's highly probable this is something that would need to be integrated with Stork and/or something completely independent (git-based?) for configuration change management.
Recording it here as a placeholder feature request anyway.

Milestone: kea2.1-backlog.

## Add optional validation of overlapping pools in Kea configuration (v4 and v6)
https://gitlab.isc.org/isc-projects/kea/-/issues/1624 (Cathy Almond, updated 2022-05-25)

Bad things happen if you accidentally configure overlapping lease pools, but currently Kea does not do any checks of the configuration to prevent this from happening.
The reason it doesn't is that for large and complex configurations, the performance cost will outweigh the potential benefit for many administrators. Those environments already have change control processes with sanity checks that prevent this type of accident.
But some smaller sites - especially those that are seldom updated, where the DHCP admins are generalists rather than specialists - might appreciate something like this.
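To make the shape of such a check concrete, here is a rough standalone Python sketch that flags overlapping v4 pools in a Dhcp4 config file; the config traversal and the pool-syntax handling are illustrative assumptions, not Kea code:

```python
#!/usr/bin/env python3
"""Flag overlapping IPv4 pools in a Kea Dhcp4 config file (illustrative sketch)."""
import ipaddress
import json
import sys

def pool_to_range(pool):
    """Turn a pool entry ("192.0.2.10 - 192.0.2.20" or "192.0.2.0/28") into (first, last) ints."""
    spec = pool["pool"]
    if "-" in spec:
        lo, hi = (ipaddress.ip_address(part.strip()) for part in spec.split("-"))
        return int(lo), int(hi)
    net = ipaddress.ip_network(spec.strip(), strict=False)
    return int(net[0]), int(net[-1])

def main(path):
    config = json.load(open(path))
    ranges = []
    for subnet in config["Dhcp4"].get("subnet4", []):
        for pool in subnet.get("pools", []):
            lo, hi = pool_to_range(pool)
            ranges.append((lo, hi, subnet["subnet"], pool["pool"]))
    for i, a in enumerate(ranges):
        for b in ranges[i + 1:]:
            if a[0] <= b[1] and b[0] <= a[1]:  # the two ranges intersect
                print(f"overlap: pool {a[3]} (subnet {a[2]}) and pool {b[3]} (subnet {b[2]})")

if __name__ == "__main__":
    main(sys.argv[1])
```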
This request also applies to PD pools, where it was discovered (in [Ticket #17393](https://support.isc.org/Ticket/Display.html?id=17393), albeit accidentally) that configuring overlapping PD pools might actually work without problems - although we don't recommend this because not all potential scenarios have been tested (plus the stats will be quite peculiar).

Milestone: kea2.1-backlog.

## Overlapping Subnet Warning
https://gitlab.isc.org/isc-projects/kea/-/issues/1517 (Peter Davies, updated 2022-05-25)

Overlapping Subnet Warning:
It is at present not considered an error to configure a subnet that has an address space that either partially or completely overlaps the address space of an existing subnet.
It may however be of interest to administrators that this sort of configuration exists.
Would it be possible to allow Kea to log a warning message when it discovers this type of situation?
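For the subnet case, a warning pass could be as simple as comparing configured prefixes pairwise; a small Python illustration (the location of subnet4 entries in the config is an assumption, mirroring the pool sketch in the previous issue):

```python
import ipaddress
import itertools
import json
import sys

config = json.load(open(sys.argv[1]))
subnets = [(s["subnet"], ipaddress.ip_network(s["subnet"], strict=False))
           for s in config["Dhcp4"].get("subnet4", [])]
for (name_a, net_a), (name_b, net_b) in itertools.combinations(subnets, 2):
    if net_a.overlaps(net_b):
        print(f"warning: subnet {name_a} overlaps subnet {name_b}")
```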
Refers to RT [#17206](https://support.isc.org/Ticket/Display.html?id=17206).

Milestone: kea2.1-backlog.

## Implement GSS-TSIG to send DDNS updates to Active Directory
https://gitlab.isc.org/isc-projects/kea/-/issues/1410 (Vicky Risk, updated 2021-10-12)
---
name: Implement GSS-TSIG to send DDNS updates to Active Directory
about: DDNS updates
---
**Some initial questions**
- Are you sure your feature is not already implemented in the latest Kea version? Yes.
- Are you sure what you would like to do is not possible using some other mechanisms? No.
- Have you discussed your idea on kea-users or kea-dev mailing lists? No.
**Is your feature request related to a problem? Please describe.**
Some users of an OEM product implementing Kea would like to send DDNS updates to Active Directory, securing those updates with GSS-TSIG.
**Describe the solution you'd like**
The requestor would like to see Kea add support for GSS-TSIG authentication on the DDNS connections, as well as probably testing and validation of updating to AD.
**Describe alternatives you've considered**
I don't know enough about AD to know if other authentication mechanisms are available, but that would seem to be the most obvious alternative.
**Additional context**
The Kea core team discussed this feature request in a development meeting at the end of August, 2020 and concluded this is a big effort, both for initial development and for maintenance. One issue is the quality of available GSS-TSIG libraries to use. So, we are at the moment NOT PLANNING to implement this. I am opening this ticket so that others may chime in about their requirements, or workarounds, or possibly, someone may volunteer to contribute this.
(related ISC support issue [#17008](https://support.isc.org/Ticket/Display.html?id=17008))

Milestone: kea2.1-backlog. Assignee: Vicky Risk.

## Clone subnet command
https://gitlab.isc.org/isc-projects/kea/-/issues/1312 (Peter Davies, updated 2022-05-25)

Clone subnet command:
Creating many subnets where the basic structure is similar to other subnets (for example the subnet mask, position of the default router, pool structure, etc.) can be time-consuming and subject to error.
It would be helpful if one were able to create a subnet based on an existing subnet or a subnet template, giving the new subnet address as a parameter. This request also encompasses the cloning of local host reservations.
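As an interim illustration of what cloning can look like today, the sketch below fetches an existing subnet and re-adds a modified copy through the control agent. It assumes the subnet_cmds hook library (subnet4-get / subnet4-add) and an agent on localhost:8000; the renumbering of the clone is deliberately naive:

```python
#!/usr/bin/env python3
"""Clone an existing IPv4 subnet by fetching it and re-adding a modified copy.

Illustrative sketch only: assumes the subnet_cmds hook (subnet4-get / subnet4-add)
and a Kea control agent listening on http://localhost:8000/."""
import json
import urllib.request

AGENT = "http://localhost:8000/"

def command(cmd, arguments):
    """Send one command to the dhcp4 server through the control agent."""
    body = json.dumps({"command": cmd, "service": ["dhcp4"], "arguments": arguments})
    req = urllib.request.Request(AGENT, body.encode(),
                                 {"Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))[0]

# Fetch the subnet to use as a template (id 1 is just an example).
template = command("subnet4-get", {"id": 1})["arguments"]["subnet4"][0]

# Re-key the copy: new id, new prefix, pools rewritten by the caller.
template["id"] = 100
template["subnet"] = "192.0.2.0/24"
template["pools"] = [{"pool": "192.0.2.10 - 192.0.2.200"}]

print(command("subnet4-add", {"subnet4": [template]}))
```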
Refers to RT [#16773](https://support.isc.org/Ticket/Display.html?id=16773).

Milestone: kea2.1-backlog.

## Client lease limit
https://gitlab.isc.org/isc-projects/kea/-/issues/1290 (Peter Davies, updated 2022-07-08)
Client lease limit:
Customers are experiencing unwanted clients requesting DHCP leases.
It would be appreciated if there were an option which limited a client to a maximum of one lease per subnet.
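A hypothetical sketch of how such a cap could be expressed, in the style of the class-based "limits" approach Kea later pursued; the `address-limit` key, its placement in user-context, and the library path are assumptions rather than confirmed syntax:

```
"Dhcp4": {
    "client-classes": [
        {
            "name": "limited-clients",
            "test": "'a' == 'a'",   // matches everything, purely for illustration
            // Hypothetical: cap members of this class at one address each
            "user-context": { "limits": { "address-limit": 1 } }
        }
    ],
    "hooks-libraries": [
        { "library": "/usr/lib/kea/hooks/libdhcp_limits.so" }   // assumed path
    ]
}
```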
Requests for this feature:
- RT [#16736](https://support.isc.org/Ticket/Display.html?id=16736)
- https://lists.isc.org/pipermail/kea-users/2020-January/002613.html

Milestone: kea2.2.0 - a new stable branch.

## Client throttling hook (limits)
https://gitlab.isc.org/isc-projects/kea/-/issues/1193 (Tomek Mrugalski, updated 2022-05-30)

We should develop a solution that would do client throttling, somewhat similar to RRL in DNS.

Milestone: kea2.1-backlog.

## kea lease migration tool
https://gitlab.isc.org/isc-projects/kea/-/issues/1078 (Peter Davies, updated 2022-02-18)
---
name: kea lease migration tool
RT: [support#15853](https://support.isc.org/Ticket/Display.html?id=15853)
There may be some interest among customers who wish to migrate the lease database between different backends in a tool to assist them with this.
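As a flavour of what such a tool would do (see also the script linked in the notes below), here is a very rough Python sketch that turns memfile lease4 CSV rows into SQL INSERT statements; the CSV column names and the target table/columns are assumptions, and a real migration would have to match the schema created by kea-admin:

```python
#!/usr/bin/env python3
"""Very rough sketch: emit SQL INSERTs for lease4 rows read from a Kea memfile CSV.

Assumptions, not authoritative: the memfile CSV has a header row with address,
hwaddr, expire, valid_lifetime and subnet_id columns; the target table and column
names are placeholders, and a real migration must match the schema created by
kea-admin (where, for example, the hardware address is stored in binary form)."""
import csv
import ipaddress
import sys

TARGET_TABLE = "lease4"  # placeholder table name

with open(sys.argv[1], newline="") as f:
    for row in csv.DictReader(f):
        addr = int(ipaddress.ip_address(row["address"]))  # addresses are stored numerically
        hwaddr_hex = row["hwaddr"].replace(":", "")
        print(
            f"INSERT INTO {TARGET_TABLE} (address, hwaddr, expire, valid_lifetime, subnet_id) "
            f"VALUES ({addr}, decode('{hwaddr_hex}', 'hex'), "
            f"to_timestamp({row['expire']}), {row['valid_lifetime']}, {row['subnet_id']});"
        )
```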
Some extra notes:
* Alberto Garrido wrote a script for this: [here](https://gist.github.com/agrrd/287446084bab09a6628a7624f394a642)
* request (and some discussion) on [kea-users](https://lists.isc.org/pipermail/kea-users/2021-January/002977.html)

Milestone: kea2.1-backlog.

## update hammer to build documentation using sphinx on all systems
https://gitlab.isc.org/isc-projects/kea/-/issues/789 (Francis Dupont, updated 2022-07-28)

RHEL, for instance, still uses docbook and xslt.

Milestone: kea2.1-backlog.

## Add gitlab CI checks for doxygen
https://gitlab.isc.org/isc-projects/kea/-/issues/635 (Tomek Mrugalski, updated 2022-06-09)

Now that 1.6 beta is being prepared, we managed to get the number of doxygen warnings down to 0. We should use that opportunity to enable doxygen checks in CI and prevent any code that adds new warnings from being merged.

Milestone: kea2.1-backlog.

## Get rid of sockcreator
https://gitlab.isc.org/isc-projects/kea/-/issues/582 (Tomek Mrugalski, updated 2021-06-18)

This is a leftover from BIND10 days. It's 2019 and modern systems have a way to fine-tune privileges. Linux has capabilities and CAP_NET_RAW. See man 7 capabilities.
We don't need sockcreator any more.
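For illustration, the kind of modern alternative the issue alludes to: grant only the needed capabilities to the server process instead of keeping a separate socket-creator helper. The binary path and the exact capability set shown here are assumptions:

```
# File capabilities on the binary (example path):
setcap 'cap_net_raw,cap_net_bind_service+ep' /usr/sbin/kea-dhcp4

# Or, for a systemd-managed service, in the unit's [Service] section:
AmbientCapabilities=CAP_NET_RAW CAP_NET_BIND_SERVICE
CapabilityBoundingSet=CAP_NET_RAW CAP_NET_BIND_SERVICE
```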
Milestone: kea2.1-backlog.

## Create a script to check that headers are installed.
https://gitlab.isc.org/isc-projects/kea/-/issues/483 (Francis Dupont, updated 2022-06-09)

Need to compare headers in sources and installed. Requires a list of headers which should not be installed. Depends on #441 to make things easier (it removes a whole class of such headers).
I don't know if it is possible to hook distcheck to do it automatically, but as it is not system-dependent it should not be too hard for Jenkins...
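A small sketch of what such a check could look like, comparing .h files under the source tree with what ended up under the install prefix; paths and the exclusion list are illustrative:

```python
#!/usr/bin/env python3
"""Report source headers missing from (or unexpected in) the install tree.

Paths and the exclusion list are illustrative; adjust to the real layout."""
import pathlib
import sys

src_root = pathlib.Path(sys.argv[1])      # e.g. ./src/lib
install_root = pathlib.Path(sys.argv[2])  # e.g. /usr/local/include/kea

# Headers that are internal on purpose and must NOT be installed (illustrative).
not_installed = {"config.h"}

src_headers = {p.relative_to(src_root) for p in src_root.rglob("*.h")}
installed = {p.relative_to(install_root) for p in install_root.rglob("*.h")}

for h in sorted(src_headers - installed):
    if h.name not in not_installed:
        print(f"missing from install tree: {h}")
for h in sorted(installed - src_headers):
    print(f"installed but not present in sources: {h}")
```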
Milestone: kea2.1-backlog.

## dig +nssearch (9.18.5) crashes or shows empty results for domains with IPv6 authoritatives from an IPv4-only-connected system
https://gitlab.isc.org/isc-projects/bind9/-/issues/3474 (Thomas Amgarten, updated 2022-10-17)
### Summary
```dig +nssearch``` in 9.18.5 either shows nothing or crashes for domains with IPv6 authoritatives when run from an **IPv4-only-connected system**. From what I've tested, it seems that ```dig +nssearch``` shows the correct answer for domains that have only IPv4 authoritatives (e.g. sbb.ch, migros.ch, link.ch). As soon as a domain has IPv6-enabled authoritatives (e.g. isc.org, arcade.ch), ```dig``` crashes or shows empty output on the IPv4-only-connected system. When running ```dig +nssearch``` from a dual-stack-connected system, everything works fine.
### BIND version used
```
$ named -V
BIND 9.18.5 (Stable Release) <id:6593103>
running on Linux x86_64 4.18.0-305.10.2.el8_4.x86_64 #1 SMP Tue Jul 20 20:34:55 UTC 2021
built by make with '--prefix=/usr/local/bind-9.18.5' '--sysconfdir=/opt/chroot/bind/etc/named/' '--mandir=/usr/local/share/man' '--localstatedir=/opt/chroot/bind/var' '--enable-largefile' '--enable-full-report' '--without-gssapi' '--with-json-c' '--enable-singletrace' 'PKG_CONFIG_PATH=:/usr/local/libuv/lib/pkgconfig/'
compiled by GCC 8.4.1 20200928 (Red Hat 8.4.1-1)
compiled with OpenSSL version: OpenSSL 1.1.1g FIPS 21 Apr 2020
linked to OpenSSL version: OpenSSL 1.1.1g FIPS 21 Apr 2020
compiled with libuv version: 1.41.1
linked to libuv version: 1.41.1
compiled with libnghttp2 version: 1.33.0
linked to libnghttp2 version: 1.33.0
compiled with json-c version: 0.13.1
linked to json-c version: 0.13.1
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
threads support is enabled
default paths:
named configuration: /opt/chroot/bind/etc/named/named.conf
rndc configuration: /opt/chroot/bind/etc/named/rndc.conf
DNSSEC root key: /opt/chroot/bind/etc/named/bind.keys
nsupdate session key: /opt/chroot/bind/var/run/named/session.key
named PID file: /opt/chroot/bind/var/run/named/named.pid
named lock file: /opt/chroot/bind/var/run/named/named.lock
```
### Steps to reproduce
```dig```ing every second for the NS for "arcade.ch":
```
$ while true; do date ; /usr/local/bind-9.18.5/bin/dig @127.0.0.1 +nssearch arcade.ch ; echo -e "----------------------------------------------------"; sleep 1; done
Tue Aug 2 16:08:16 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:17 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:18 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:19 CEST 2022
dighost.c:1551: INSIST((uint_fast32_t) __extension__ ({ __auto_type __atomic_load_ptr = ((&recvcount)); __typeof__ (*__atomic_load_ptr) __atomic_load_tmp; __atomic_load (__atomic_load_ptr, &__atomic_load_tmp, (memory_order_acquire)); __atomic_load_tmp; }) == 0) failed, back trace
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x3690c)[0x7fbb4797390c]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc_assertion_failed+0xa)[0x7fbb4797387a]
/usr/local/bind-9.18.5/bin/dig[0x40dcae]
/usr/local/bind-9.18.5/bin/dig[0x4158eb]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc__nm_async_readcb+0x9c)[0x7fbb479612bc]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x25873)[0x7fbb47962873]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x2607b)[0x7fbb4796307b]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x269f2)[0x7fbb479639f2]
/lib64/libuv.so.1(+0x132f1)[0x7fbb466f32f1]
/lib64/libuv.so.1(uv__io_poll+0x4c5)[0x7fbb46704d15]
/lib64/libuv.so.1(uv_run+0x114)[0x7fbb466f3a74]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x2632a)[0x7fbb4796332a]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc__trampoline_run+0x15)[0x7fbb47998155]
/lib64/libpthread.so.0(+0x815a)[0x7fbb46ce015a]
/lib64/libc.so.6(clone+0x43)[0x7fbb46a0fdd3]
Aborted (core dumped)
----------------------------------------------------
Tue Aug 2 16:08:20 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:21 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:22 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:23 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:24 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:25 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:26 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:27 CEST 2022
dighost.c:1551: INSIST((uint_fast32_t) __extension__ ({ __auto_type __atomic_load_ptr = ((&recvcount)); __typeof__ (*__atomic_load_ptr) __atomic_load_tmp; __atomic_load (__atomic_load_ptr, &__atomic_load_tmp, (memory_order_acquire)); __atomic_load_tmp; }) == 0) failed, back trace
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x3690c)[0x7f211c89c90c]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc_assertion_failed+0xa)[0x7f211c89c87a]
/usr/local/bind-9.18.5/bin/dig[0x40dcae]
/usr/local/bind-9.18.5/bin/dig[0x4145f7]
/usr/local/bind-9.18.5/bin/dig[0x4158f0]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc__nm_async_readcb+0x9c)[0x7f211c88a2bc]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x25873)[0x7f211c88b873]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x2607b)[0x7f211c88c07b]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x269f2)[0x7f211c88c9f2]
/lib64/libuv.so.1(+0x132f1)[0x7f211b61c2f1]
/lib64/libuv.so.1(uv__io_poll+0x4c5)[0x7f211b62dd15]
/lib64/libuv.so.1(uv_run+0x114)[0x7f211b61ca74]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x2632a)[0x7f211c88c32a]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc__trampoline_run+0x15)[0x7f211c8c1155]
/lib64/libpthread.so.0(+0x815a)[0x7f211bc0915a]
/lib64/libc.so.6(clone+0x43)[0x7f211b938dd3]
Aborted (core dumped)
----------------------------------------------------
Tue Aug 2 16:08:29 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:30 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:31 CEST 2022
dighost.c:1551: INSIST((uint_fast32_t) __extension__ ({ __auto_type __atomic_load_ptr = ((&recvcount)); __typeof__ (*__atomic_load_ptr) __atomic_load_tmp; __atomic_load (__atomic_load_ptr, &__atomic_load_tmp, (memory_order_acquire)); __atomic_load_tmp; }) == 0) failed, back trace
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x3690c)[0x7f16dfadd90c]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc_assertion_failed+0xa)[0x7f16dfadd87a]
/usr/local/bind-9.18.5/bin/dig[0x40dcae]
/usr/local/bind-9.18.5/bin/dig[0x4145f7]
/usr/local/bind-9.18.5/bin/dig[0x4158f0]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc__nm_async_readcb+0x9c)[0x7f16dfacb2bc]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x25873)[0x7f16dfacc873]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x2607b)[0x7f16dfacd07b]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x269f2)[0x7f16dfacd9f2]
/lib64/libuv.so.1(+0x132f1)[0x7f16de85d2f1]
/lib64/libuv.so.1(uv__io_poll+0x4c5)[0x7f16de86ed15]
/lib64/libuv.so.1(uv_run+0x114)[0x7f16de85da74]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x2632a)[0x7f16dfacd32a]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc__trampoline_run+0x15)[0x7f16dfb02155]
/lib64/libpthread.so.0(+0x815a)[0x7f16dee4a15a]
/lib64/libc.so.6(clone+0x43)[0x7f16deb79dd3]
Aborted (core dumped)
----------------------------------------------------
Tue Aug 2 16:08:32 CEST 2022
----------------------------------------------------
Tue Aug 2 16:08:33 CEST 2022
----------------------------------------------------
```
### What is the current *bug* behavior?
```dig``` shows empty output or is crashing (tested with several domains (isc.org, google.ch, arcade.ch)).
### What is the expected *correct* behavior?
Correct ```+nssearch```-Results.
### Relevant configuration files
### Relevant logs and/or screenshots
Here are some more dig debug logs.
#### dig-debug, when dig output is empty
```
# dig debug from an "empty" output
$ dig @127.0.0.1 +nssearch arcade.ch -d
setup_libs()
setup_system()
create_search_list()
ndots is 1.
timeout is 0.
retries is 3.
get_server_list()
make_server(127.0.0.1)
make_server(46.22.21.21)
make_server(46.22.22.22)
dig_query_setup
parse_args()
making new lookup
make_empty_lookup()
make_empty_lookup() = 0x7fbfadd9e000->references = 1
digrc (open)
main parsing @127.0.0.1
make_server(127.0.0.1)
main parsing +nssearch
main parsing arcade.ch
clone_lookup()
make_empty_lookup()
make_empty_lookup() = 0x7fbfadd9f800->references = 1
clone_server_list()
make_server(127.0.0.1)
looking up arcade.ch
main parsing -d
dig_startup()
lock_lookup dighost.c:4598
success
start_lookup()
setup_lookup(0x7fbfadd9f800)
resetting lookup counter.
using root origin
recursive query
AD query
add_question()
starting to render the message
add_opt()
done rendering
create query 0x7fbfaca2a000 linked to lookup 0x7fbfadd9f800
dighost.c:2160:lookup_attach(0x7fbfadd9f800) = 2
dighost.c:2663:new_query(0x7fbfaca2a000) = 1
do_lookup()
start_udp(0x7fbfaca2a000)
dighost.c:3197:query_attach(0x7fbfaca2a000) = 2
working on lookup 0x7fbfadd9f800, query 0x7fbfaca2a000
dighost.c:3242:query_attach(0x7fbfaca2a000) = 3
unlock_lookup dighost.c:4600
udp_ready()
udp_ready(0x7fbfaca26180, success, 0x7fbfaca2a000)
dighost.c:3159:query_attach(0x7fbfaca2a000) = 4
recving with lookup=0x7fbfadd9f800, query=0x7fbfaca2a000, handle=0x7fbfaca26180
recvcount=1
have local timeout of 5000
dighost.c:3085:query_attach(0x7fbfaca2a000) = 5
sending a request
sendcount=1
dighost.c:1754:query_detach(0x7fbfaca2a000) = 4
dighost.c:3179:query_detach(0x7fbfaca2a000) = 3
send_done(0x7fbfaca26180, success, 0x7fbfaca2a000)
sendcount=0
lock_lookup dighost.c:2691
success
dighost.c:2695:lookup_attach(0x7fbfadd9f800) = 3
dighost.c:2739:query_detach(0x7fbfaca2a000) = 2
dighost.c:2740:lookup_detach(0x7fbfadd9f800) = 2
check_if_done()
list empty
unlock_lookup dighost.c:2743
recv_done(0x7fbfaca26180, success, 0x7fbfad7fa1a0, 0x7fbfaca2a000)
lock_lookup dighost.c:3857
success
recvcount=0
dighost.c:3862:lookup_attach(0x7fbfadd9f800) = 3
before parse starts
after parse
in NSSEARCH code
following up arcade.ch
found NS set
found NS ns1.arcade.ch
requeue_lookup()
clone_lookup()
make_empty_lookup()
make_empty_lookup() = 0x7fbfaca73000->references = 1
before insertion, init@0x7fbfadd9f800 -> 0xffffffffffffffff, new@0x7fbfaca73000 -> 0xffffffffffffffff
after insertion, init -> 0x7fbfadd9f800, new = 0x7fbfaca73000, new -> (nil)
dighost.c:1941:_cancel_lookup()
canceling pending query 0x7fbfaca2a000, belonging to 0x7fbfadd9f800
dighost.c:2767:query_detach(0x7fbfaca2a000) = 1
check_if_done()
list full
pending lookup 0x7fbfaca73000
adding server ns1.arcade.ch
make_server(46.22.21.30)
make_server(2a02:1368:1:47::53)
found NS set
found NS ns3.arcade.ch
adding server ns3.arcade.ch
make_server(20.199.186.126)
make_server(2603:1020:a01:2::34)
found NS set
found NS ns2.arcade.ch
adding server ns2.arcade.ch
make_server(46.22.22.30)
make_server(2a02:1368:1:48::53)
dighost.c:4493:query_detach(0x7fbfaca2a000) = 0
dighost.c:4493:destroy_query(0x7fbfaca2a000) = 0
dighost.c:1712:lookup_detach(0x7fbfadd9f800) = 2
dighost.c:4501:lookup_detach(0x7fbfadd9f800) = 1
clear_current_lookup()
dighost.c:1837:lookup_detach(0x7fbfadd9f800) = 0
destroy_lookup
freeing server 0x7fbfadd91800 belonging to 0x7fbfadd9f800
start_lookup()
setup_lookup(0x7fbfaca73000)
using root origin
AD query
add_question()
starting to render the message
add_opt()
done rendering
create query 0x7fbfaca2a000 linked to lookup 0x7fbfaca73000
dighost.c:2160:lookup_attach(0x7fbfaca73000) = 2
dighost.c:2663:new_query(0x7fbfaca2a000) = 1
create query 0x7fbfaca2a1c0 linked to lookup 0x7fbfaca73000
dighost.c:2160:lookup_attach(0x7fbfaca73000) = 3
dighost.c:2663:new_query(0x7fbfaca2a1c0) = 1
create query 0x7fbfaca2a380 linked to lookup 0x7fbfaca73000
dighost.c:2160:lookup_attach(0x7fbfaca73000) = 4
dighost.c:2663:new_query(0x7fbfaca2a380) = 1
create query 0x7fbfaca2a540 linked to lookup 0x7fbfaca73000
dighost.c:2160:lookup_attach(0x7fbfaca73000) = 5
dighost.c:2663:new_query(0x7fbfaca2a540) = 1
create query 0x7fbfaca2a700 linked to lookup 0x7fbfaca73000
dighost.c:2160:lookup_attach(0x7fbfaca73000) = 6
dighost.c:2663:new_query(0x7fbfaca2a700) = 1
create query 0x7fbfaca2a8c0 linked to lookup 0x7fbfaca73000
dighost.c:2160:lookup_attach(0x7fbfaca73000) = 7
dighost.c:2663:new_query(0x7fbfaca2a8c0) = 1
do_lookup()
start_udp(0x7fbfaca2a000)
dighost.c:3197:query_attach(0x7fbfaca2a000) = 2
working on lookup 0x7fbfaca73000, query 0x7fbfaca2a000
dighost.c:3242:query_attach(0x7fbfaca2a000) = 3
unlock_lookup dighost.c:4505
udp_ready()
udp_ready(0x7fbfadc2b300, success, 0x7fbfaca2a000)
dighost.c:3159:query_attach(0x7fbfaca2a000) = 4
recving with lookup=0x7fbfaca73000, query=0x7fbfaca2a000, handle=0x7fbfadc2b300
recvcount=1
have local timeout of 5000
dighost.c:3085:query_attach(0x7fbfaca2a000) = 5
sending a request
sendcount=1
dighost.c:1754:query_detach(0x7fbfaca2a000) = 4
dighost.c:3179:query_detach(0x7fbfaca2a000) = 3
send_done(0x7fbfadc2b300, success, 0x7fbfaca2a000)
sendcount=0
lock_lookup dighost.c:2691
success
dighost.c:2695:lookup_attach(0x7fbfaca73000) = 8
sending next, since searching
dighost.c:2721:query_detach(0x7fbfaca2a000) = 2
dighost.c:2722:lookup_detach(0x7fbfaca73000) = 7
start_udp(0x7fbfaca2a1c0)
dighost.c:3197:query_attach(0x7fbfaca2a1c0) = 2
working on lookup 0x7fbfaca73000, query 0x7fbfaca2a1c0
dighost.c:3242:query_attach(0x7fbfaca2a1c0) = 3
check_if_done()
list empty
unlock_lookup dighost.c:2735
udp_ready()
udp_ready(0x7fbfaca26600, network unreachable, 0x7fbfaca2a1c0)
udp setup failed: network unreachable
dighost.c:1754:query_detach(0x7fbfaca2a1c0) = 2
dighost.c:3153:query_detach(0x7fbfaca2a1c0) = 1
dighost.c:3154:_cancel_lookup()
canceling pending query 0x7fbfaca2a000, belonging to 0x7fbfaca73000
dighost.c:2767:query_detach(0x7fbfaca2a000) = 1
canceling pending query 0x7fbfaca2a1c0, belonging to 0x7fbfaca73000
dighost.c:2767:query_detach(0x7fbfaca2a1c0) = 0
dighost.c:2767:destroy_query(0x7fbfaca2a1c0) = 0
dighost.c:1712:lookup_detach(0x7fbfaca73000) = 6
canceling pending query 0x7fbfaca2a380, belonging to 0x7fbfaca73000
dighost.c:2767:query_detach(0x7fbfaca2a380) = 0
dighost.c:2767:destroy_query(0x7fbfaca2a380) = 0
dighost.c:1712:lookup_detach(0x7fbfaca73000) = 5
canceling pending query 0x7fbfaca2a540, belonging to 0x7fbfaca73000
dighost.c:2767:query_detach(0x7fbfaca2a540) = 0
dighost.c:2767:destroy_query(0x7fbfaca2a540) = 0
dighost.c:1712:lookup_detach(0x7fbfaca73000) = 4
canceling pending query 0x7fbfaca2a700, belonging to 0x7fbfaca73000
dighost.c:2767:query_detach(0x7fbfaca2a700) = 0
dighost.c:2767:destroy_query(0x7fbfaca2a700) = 0
dighost.c:1712:lookup_detach(0x7fbfaca73000) = 3
canceling pending query 0x7fbfaca2a8c0, belonging to 0x7fbfaca73000
dighost.c:2767:query_detach(0x7fbfaca2a8c0) = 0
dighost.c:2767:destroy_query(0x7fbfaca2a8c0) = 0
dighost.c:1712:lookup_detach(0x7fbfaca73000) = 2
check_if_done()
list empty
dighost.c:3155:lookup_detach(0x7fbfaca73000) = 1
recv_done(0x7fbfadc2b300, end of file, 0x7fbfad7f9f40, 0x7fbfaca2a000)
lock_lookup dighost.c:3857
success
recvcount=0
dighost.c:3862:lookup_attach(0x7fbfaca73000) = 2
recv_done: cancel
dighost.c:3870:query_detach(0x7fbfaca2a000) = 0
dighost.c:3870:destroy_query(0x7fbfaca2a000) = 0
dighost.c:1712:lookup_detach(0x7fbfaca73000) = 1
dighost.c:3871:lookup_detach(0x7fbfaca73000) = 0
destroy_lookup
freeing server 0x7fbfaca53a00 belonging to 0x7fbfaca73000
freeing server 0x7fbfaca55800 belonging to 0x7fbfaca73000
freeing server 0x7fbfaca54e00 belonging to 0x7fbfaca73000
freeing server 0x7fbfaca54400 belonging to 0x7fbfaca73000
freeing server 0x7fbfaca56200 belonging to 0x7fbfaca73000
freeing server 0x7fbfaca56c00 belonging to 0x7fbfaca73000
start_lookup()
check_if_done()
list empty
shutting down
clear_current_lookup()
current_lookup is already detached
unlock_lookup dighost.c:3873
destroy_lookup
freeing server 0x7fbfadd8f000 belonging to 0x7fbfadd9e000
cancel_all()
lock_lookup dighost.c:4614
success
unlock_lookup dighost.c:4647
destroy_libs()
freeing task
lock_lookup dighost.c:4667
success
flush_server_list()
destroy DST lib
unlock_lookup dighost.c:4695
Removing log context
Destroy memory
```
#### dig-debug, when dig is crashing
```
# dig debug from a "crashed" output
$ dig @127.0.0.1 +nssearch arcade.ch -d
setup_libs()
setup_system()
create_search_list()
ndots is 1.
timeout is 0.
retries is 3.
get_server_list()
make_server(127.0.0.1)
make_server(46.22.21.21)
make_server(46.22.22.22)
dig_query_setup
parse_args()
making new lookup
make_empty_lookup()
make_empty_lookup() = 0x7fa3c0d9e000->references = 1
digrc (open)
main parsing @127.0.0.1
make_server(127.0.0.1)
main parsing +nssearch
main parsing arcade.ch
clone_lookup()
make_empty_lookup()
make_empty_lookup() = 0x7fa3c0d9f800->references = 1
clone_server_list()
make_server(127.0.0.1)
looking up arcade.ch
main parsing -d
dig_startup()
lock_lookup dighost.c:4598
success
start_lookup()
setup_lookup(0x7fa3c0d9f800)
resetting lookup counter.
using root origin
recursive query
AD query
add_question()
starting to render the message
add_opt()
done rendering
create query 0x7fa3bfa2a000 linked to lookup 0x7fa3c0d9f800
dighost.c:2160:lookup_attach(0x7fa3c0d9f800) = 2
dighost.c:2663:new_query(0x7fa3bfa2a000) = 1
do_lookup()
start_udp(0x7fa3bfa2a000)
dighost.c:3197:query_attach(0x7fa3bfa2a000) = 2
working on lookup 0x7fa3c0d9f800, query 0x7fa3bfa2a000
dighost.c:3242:query_attach(0x7fa3bfa2a000) = 3
unlock_lookup dighost.c:4600
udp_ready()
udp_ready(0x7fa3bfa26180, success, 0x7fa3bfa2a000)
dighost.c:3159:query_attach(0x7fa3bfa2a000) = 4
recving with lookup=0x7fa3c0d9f800, query=0x7fa3bfa2a000, handle=0x7fa3bfa26180
recvcount=1
have local timeout of 5000
dighost.c:3085:query_attach(0x7fa3bfa2a000) = 5
sending a request
sendcount=1
dighost.c:1754:query_detach(0x7fa3bfa2a000) = 4
dighost.c:3179:query_detach(0x7fa3bfa2a000) = 3
send_done(0x7fa3bfa26180, success, 0x7fa3bfa2a000)
sendcount=0
lock_lookup dighost.c:2691
success
dighost.c:2695:lookup_attach(0x7fa3c0d9f800) = 3
dighost.c:2739:query_detach(0x7fa3bfa2a000) = 2
dighost.c:2740:lookup_detach(0x7fa3c0d9f800) = 2
check_if_done()
list empty
unlock_lookup dighost.c:2743
recv_done(0x7fa3bfa26180, success, 0x7fa3c07fa1a0, 0x7fa3bfa2a000)
lock_lookup dighost.c:3857
success
recvcount=0
dighost.c:3862:lookup_attach(0x7fa3c0d9f800) = 3
before parse starts
after parse
in NSSEARCH code
following up arcade.ch
found NS set
found NS ns2.arcade.ch
requeue_lookup()
clone_lookup()
make_empty_lookup()
make_empty_lookup() = 0x7fa3bfa73000->references = 1
before insertion, init@0x7fa3c0d9f800 -> 0xffffffffffffffff, new@0x7fa3bfa73000 -> 0xffffffffffffffff
after insertion, init -> 0x7fa3c0d9f800, new = 0x7fa3bfa73000, new -> (nil)
dighost.c:1941:_cancel_lookup()
canceling pending query 0x7fa3bfa2a000, belonging to 0x7fa3c0d9f800
dighost.c:2767:query_detach(0x7fa3bfa2a000) = 1
check_if_done()
list full
pending lookup 0x7fa3bfa73000
adding server ns2.arcade.ch
make_server(46.22.22.30)
make_server(2a02:1368:1:48::53)
found NS set
found NS ns3.arcade.ch
adding server ns3.arcade.ch
make_server(20.199.186.126)
make_server(2603:1020:a01:2::34)
found NS set
found NS ns1.arcade.ch
adding server ns1.arcade.ch
make_server(46.22.21.30)
make_server(2a02:1368:1:47::53)
dighost.c:4493:query_detach(0x7fa3bfa2a000) = 0
dighost.c:4493:destroy_query(0x7fa3bfa2a000) = 0
dighost.c:1712:lookup_detach(0x7fa3c0d9f800) = 2
dighost.c:4501:lookup_detach(0x7fa3c0d9f800) = 1
clear_current_lookup()
dighost.c:1837:lookup_detach(0x7fa3c0d9f800) = 0
destroy_lookup
freeing server 0x7fa3c0d91800 belonging to 0x7fa3c0d9f800
start_lookup()
setup_lookup(0x7fa3bfa73000)
using root origin
AD query
add_question()
starting to render the message
add_opt()
done rendering
create query 0x7fa3bfa2a000 linked to lookup 0x7fa3bfa73000
dighost.c:2160:lookup_attach(0x7fa3bfa73000) = 2
dighost.c:2663:new_query(0x7fa3bfa2a000) = 1
create query 0x7fa3bfa2a1c0 linked to lookup 0x7fa3bfa73000
dighost.c:2160:lookup_attach(0x7fa3bfa73000) = 3
dighost.c:2663:new_query(0x7fa3bfa2a1c0) = 1
create query 0x7fa3bfa2a380 linked to lookup 0x7fa3bfa73000
dighost.c:2160:lookup_attach(0x7fa3bfa73000) = 4
dighost.c:2663:new_query(0x7fa3bfa2a380) = 1
create query 0x7fa3bfa2a540 linked to lookup 0x7fa3bfa73000
dighost.c:2160:lookup_attach(0x7fa3bfa73000) = 5
dighost.c:2663:new_query(0x7fa3bfa2a540) = 1
create query 0x7fa3bfa2a700 linked to lookup 0x7fa3bfa73000
dighost.c:2160:lookup_attach(0x7fa3bfa73000) = 6
dighost.c:2663:new_query(0x7fa3bfa2a700) = 1
create query 0x7fa3bfa2a8c0 linked to lookup 0x7fa3bfa73000
dighost.c:2160:lookup_attach(0x7fa3bfa73000) = 7
dighost.c:2663:new_query(0x7fa3bfa2a8c0) = 1
do_lookup()
start_udp(0x7fa3bfa2a000)
dighost.c:3197:query_attach(0x7fa3bfa2a000) = 2
working on lookup 0x7fa3bfa73000, query 0x7fa3bfa2a000
dighost.c:3242:query_attach(0x7fa3bfa2a000) = 3
unlock_lookup dighost.c:4505
udp_ready()
udp_ready(0x7fa3c0c2b300, success, 0x7fa3bfa2a000)
dighost.c:3159:query_attach(0x7fa3bfa2a000) = 4
recving with lookup=0x7fa3bfa73000, query=0x7fa3bfa2a000, handle=0x7fa3c0c2b300
recvcount=1
have local timeout of 5000
dighost.c:3085:query_attach(0x7fa3bfa2a000) = 5
sending a request
sendcount=1
dighost.c:1754:query_detach(0x7fa3bfa2a000) = 4
dighost.c:3179:query_detach(0x7fa3bfa2a000) = 3
send_done(0x7fa3c0c2b300, success, 0x7fa3bfa2a000)
sendcount=0
lock_lookup dighost.c:2691
success
dighost.c:2695:lookup_attach(0x7fa3bfa73000) = 8
sending next, since searching
dighost.c:2721:query_detach(0x7fa3bfa2a000) = 2
dighost.c:2722:lookup_detach(0x7fa3bfa73000) = 7
start_udp(0x7fa3bfa2a1c0)
dighost.c:3197:query_attach(0x7fa3bfa2a1c0) = 2
working on lookup 0x7fa3bfa73000, query 0x7fa3bfa2a1c0
dighost.c:3242:query_attach(0x7fa3bfa2a1c0) = 3
check_if_done()
list empty
unlock_lookup dighost.c:2735
udp_ready()
udp_ready(0x7fa3bfa26600, success, 0x7fa3bfa2a1c0)
dighost.c:3159:query_attach(0x7fa3bfa2a1c0) = 4
recving with lookup=0x7fa3bfa73000, query=0x7fa3bfa2a1c0, handle=0x7fa3bfa26600
recvcount=2
have local timeout of 5000
dighost.c:3085:query_attach(0x7fa3bfa2a1c0) = 5
sending a request
sendcount=1
dighost.c:1754:query_detach(0x7fa3bfa2a1c0) = 4
dighost.c:3179:query_detach(0x7fa3bfa2a1c0) = 3
send_done(0x7fa3bfa26600, success, 0x7fa3bfa2a1c0)
sendcount=0
lock_lookup dighost.c:2691
success
dighost.c:2695:lookup_attach(0x7fa3bfa73000) = 8
sending next, since searching
dighost.c:2721:query_detach(0x7fa3bfa2a1c0) = 2
dighost.c:2722:lookup_detach(0x7fa3bfa73000) = 7
start_udp(0x7fa3bfa2a380)
dighost.c:3197:query_attach(0x7fa3bfa2a380) = 2
working on lookup 0x7fa3bfa73000, query 0x7fa3bfa2a380
dighost.c:3242:query_attach(0x7fa3bfa2a380) = 3
check_if_done()
list empty
unlock_lookup dighost.c:2735
udp_ready()
udp_ready(0x7fa3bfa26180, network unreachable, 0x7fa3bfa2a380)
udp setup failed: network unreachable
dighost.c:1754:query_detach(0x7fa3bfa2a380) = 2
dighost.c:3153:query_detach(0x7fa3bfa2a380) = 1
dighost.c:3154:_cancel_lookup()
canceling pending query 0x7fa3bfa2a000, belonging to 0x7fa3bfa73000
dighost.c:2767:query_detach(0x7fa3bfa2a000) = 1
canceling pending query 0x7fa3bfa2a1c0, belonging to 0x7fa3bfa73000
dighost.c:2767:query_detach(0x7fa3bfa2a1c0) = 1
canceling pending query 0x7fa3bfa2a380, belonging to 0x7fa3bfa73000
dighost.c:2767:query_detach(0x7fa3bfa2a380) = 0
dighost.c:2767:destroy_query(0x7fa3bfa2a380) = 0
dighost.c:1712:lookup_detach(0x7fa3bfa73000) = 6
canceling pending query 0x7fa3bfa2a540, belonging to 0x7fa3bfa73000
dighost.c:2767:query_detach(0x7fa3bfa2a540) = 0
dighost.c:2767:destroy_query(0x7fa3bfa2a540) = 0
dighost.c:1712:lookup_detach(0x7fa3bfa73000) = 5
canceling pending query 0x7fa3bfa2a700, belonging to 0x7fa3bfa73000
dighost.c:2767:query_detach(0x7fa3bfa2a700) = 0
dighost.c:2767:destroy_query(0x7fa3bfa2a700) = 0
dighost.c:1712:lookup_detach(0x7fa3bfa73000) = 4
canceling pending query 0x7fa3bfa2a8c0, belonging to 0x7fa3bfa73000
dighost.c:2767:query_detach(0x7fa3bfa2a8c0) = 0
dighost.c:2767:destroy_query(0x7fa3bfa2a8c0) = 0
dighost.c:1712:lookup_detach(0x7fa3bfa73000) = 3
check_if_done()
list empty
dighost.c:3155:lookup_detach(0x7fa3bfa73000) = 2
recv_done(0x7fa3c0c2b300, end of file, 0x7fa3c07f9f40, 0x7fa3bfa2a000)
lock_lookup dighost.c:3857
success
recvcount=1
dighost.c:3862:lookup_attach(0x7fa3bfa73000) = 3
recv_done: cancel
dighost.c:3870:query_detach(0x7fa3bfa2a000) = 0
dighost.c:3870:destroy_query(0x7fa3bfa2a000) = 0
dighost.c:1712:lookup_detach(0x7fa3bfa73000) = 2
dighost.c:3871:lookup_detach(0x7fa3bfa73000) = 1
clear_current_lookup()
dighost.c:1837:lookup_detach(0x7fa3bfa73000) = 0
destroy_lookup
freeing server 0x7fa3bfa56200 belonging to 0x7fa3bfa73000
freeing server 0x7fa3bfa53a00 belonging to 0x7fa3bfa73000
freeing server 0x7fa3bfa54400 belonging to 0x7fa3bfa73000
freeing server 0x7fa3bfa56c00 belonging to 0x7fa3bfa73000
freeing server 0x7fa3bfa54e00 belonging to 0x7fa3bfa73000
freeing server 0x7fa3bfa55800 belonging to 0x7fa3bfa73000
start_lookup()
check_if_done()
list empty
dighost.c:1551: INSIST((uint_fast32_t) __extension__ ({ __auto_type __atomic_load_ptr = ((&recvcount)); __typeof__ (*__atomic_load_ptr) __atomic_load_tmp; __atomic_load (__atomic_load_ptr, &__atomic_load_tmp, (memory_order_acquire)); __atomic_load_tmp; }) == 0) failed, back trace
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x3690c)[0x7fa3c432690c]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc_assertion_failed+0xa)[0x7fa3c432687a]
/usr/local/bind/bin/dig[0x40dcae]
/usr/local/bind/bin/dig[0x4145f7]
/usr/local/bind/bin/dig[0x4158f0]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc__nm_async_readcb+0x9c)[0x7fa3c43142bc]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x25873)[0x7fa3c4315873]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x2607b)[0x7fa3c431607b]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x269f2)[0x7fa3c43169f2]
/lib64/libuv.so.1(+0x132f1)[0x7fa3c30a62f1]
/lib64/libuv.so.1(uv__io_poll+0x4c5)[0x7fa3c30b7d15]
/lib64/libuv.so.1(uv_run+0x114)[0x7fa3c30a6a74]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(+0x2632a)[0x7fa3c431632a]
/usr/local/bind-9.18.5/lib/libisc-9.18.5.so(isc__trampoline_run+0x15)[0x7fa3c434b155]
/lib64/libpthread.so.0(+0x815a)[0x7fa3c369315a]
/lib64/libc.so.6(clone+0x43)[0x7fa3c33c2dd3]
Aborted (core dumped)
```

Milestone: August 2022 (9.16.32, 9.16.32-S1, 9.18.6, 9.19.4). Assignee: Arаm Sаrgsyаn.

## dig -h doesn't list the option for "+qid="
https://gitlab.isc.org/isc-projects/bind9/-/issues/3471 (Thomas Amgarten, updated 2022-08-25)
### Summary
According to the release notes, ```dig``` has had a ```+qid=``` option since 9.18.0.
Currently, ```dig -h``` doesn't list the option/explanation for specifying the query id:
```
$ dig -h | grep -i qid
```
BUT: ```dig``` accepts the ```+qid=``` parameter:
```
$ dig +qid=9999 isc.org | grep 9999
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9999
```
### BIND version used
Tested with ```dig``` in versions 9.18.0, 9.18.2 and 9.18.5
### Steps to reproduce
```$ dig -h | grep -i qid``` doesn't show any explanation about this parameter.
### What is the current *bug* behavior?
```dig -h``` does not list the option/explanation for specifying the query id.
### What is the expected *correct* behavior?
```dig -h``` should list the option/explanation for specifying the query id.
### Relevant configuration files
### Relevant logs and/or screenshots
### Possible fixes

Milestone: August 2022 (9.16.32, 9.16.32-S1, 9.18.6, 9.19.4). Assignee: Arаm Sаrgsyаn.

## Auto disable RSASHA1 and NSEC3RSASHA1 when not supported by the OS
https://gitlab.isc.org/isc-projects/bind9/-/issues/3469 (Mark Andrews, updated 2022-08-25)

This is a minimal change so that named can run on `Oracle Linux 9` and `RHEL9` with the default crypto policy.
Primary zones using RSASHA1 and NSEC3RSASHA1 for signing need to be migrated off of those algorithms before upgrading/using `Oracle Linux 9` and `RHEL9`.
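For context, a named.conf sketch of what operators can already do by hand, which is roughly the effect this change applies automatically when the OS crypto policy rejects SHA-1 (an illustration, not the implementation):

```
options {
    // Treat these algorithms as unsupported for validation at and below the root,
    // so zones signed only with them validate as insecure:
    disable-algorithms "." { RSASHA1; NSEC3RSASHA1; };
};
```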
Zones solely using `RSASHA1` and `NSEC3RSASHA1` will validate as insecure.

Milestone: August 2022 (9.16.32, 9.16.32-S1, 9.18.6, 9.19.4).

## rndc dumpdb -expired doesn't always work
https://gitlab.isc.org/isc-projects/bind9/-/issues/3462 (Cathy Almond, updated 2022-08-25)

Per Support ticket [#20909](https://support.isc.org/Ticket/Display.html?id=20909), I asked for a cache dump that included expired content (lots, as indicated by the stats) but got a dump that included only active content.
This new feature was introduced in https://gitlab.isc.org/isc-projects/bind9/-/issues/1870
I strongly suspect that it doesn't output expired content unless we have stale cache enabled. In this instance stale cache was disabled, but it looked as if the cache was full of expired content and very little active content (per the stats output). I was very very disappointed when I received the cache dump, and then further checked (from the logs) that the 'expired' option had been included when it was requested.
Please fix!
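For reference, the combination under suspicion, as a small sketch of what one would enable to test the hypothesis (not a confirmed diagnosis):

```
options {
    // Hypothesis: -expired output may require expired (stale) data to be retained in cache
    stale-cache-enable yes;
};
```

followed by `rndc dumpdb -expired` once the cache has had time to accumulate expired entries.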
Also I have a secondary question, inspired by the original issue #1870, and following discussion with BIND engineers.
As well as expired content, cache may also contain replaced content - that is, older versions of RRsets that have since been updated. I had this highlighted to me when discussing with Engineering the dumpdb output (and the problem I am trying to troubleshoot that I hoped this dumpdb would help with). This is the question that was posed in the original issue during review:
```
There may also be cases where a single node has multiple rdatasets/rdataslabs associated with it for the same RRtype - only one of which is "current". These expired and not-yet reclaimed data should be dumped as well.
I don't know that this is a problem, but I don't know that it isn't either.
```
Do we also dump replaced content as well as expired content with the -expired dumpdb option? (This might require a separate and new feature request if it doesn't.)

Milestone: August 2022 (9.16.32, 9.16.32-S1, 9.18.6, 9.19.4). Assignee: Matthijs Mekking.

## Do a better job of logging when fetches-per-zone is triggered
https://gitlab.isc.org/isc-projects/bind9/-/issues/3461 (Cathy Almond, updated 2022-08-26)

Based on a lot of trawling and deduction about what could be happening, in Support ticket [#20945](https://support.isc.org/Ticket/Display.html?id=20945)...
BLAT (bottom line at the top) - I originally thought this had to be a bug in the logging or the maintenance of the counters, but I was wrong - we just, and particularly in this situation, don't really log anything useful for the administrator. We could do better!
Detail:
We are presented with a sequence of log messages about fetches-per-zone having been triggered for the com domain - should we worry about them?
```
/var/log/named/default.2:15-Jul-2022 02:12:28.654 spill: info: too many simultaneous fetches for com (allowed 1162 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:12:33.165 spill: info: too many simultaneous fetches for com (allowed 2382 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:12:41.151 spill: info: too many simultaneous fetches for com (allowed 1522 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:12:49.593 spill: info: too many simultaneous fetches for com (allowed 1357 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:12:57.290 spill: info: too many simultaneous fetches for com (allowed 904 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:13:05.679 spill: info: too many simultaneous fetches for com (allowed 3854 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:13:10.944 spill: info: too many simultaneous fetches for com (allowed 912 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:13:36.487 spill: info: too many simultaneous fetches for com (allowed 5206 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:13:37.712 spill: info: too many simultaneous fetches for com (allowed 762 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:13:47.819 spill: info: too many simultaneous fetches for com (allowed 760 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:13:49.927 spill: info: too many simultaneous fetches for com (allowed 906 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:13:54.215 spill: info: too many simultaneous fetches for com (allowed 1757 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:14:05.718 spill: info: too many simultaneous fetches for com (allowed 1444 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:14:21.880 spill: info: too many simultaneous fetches for com (allowed 2159 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:14:25.377 spill: info: too many simultaneous fetches for com (allowed 1708 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:14:34.921 spill: info: too many simultaneous fetches for com (allowed 887 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:14:56.110 spill: info: too many simultaneous fetches for com (allowed 1950 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:14:58.813 spill: info: too many simultaneous fetches for com (allowed 2773 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:15:08.737 spill: info: too many simultaneous fetches for com (allowed 1997 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:15:11.528 spill: info: too many simultaneous fetches for com (allowed 1210 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:15:17.300 spill: info: too many simultaneous fetches for com (allowed 1281 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:15:19.483 spill: info: too many simultaneous fetches for com (allowed 941 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:15:29.511 spill: info: too many simultaneous fetches for com (allowed 2122 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:15:36.315 spill: info: too many simultaneous fetches for com (allowed 960 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:15:38.065 spill: info: too many simultaneous fetches for com (allowed 1153 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:16:11.206 spill: info: too many simultaneous fetches for com (allowed 810 spilled 1)
/var/log/named/default.2:15-Jul-2022 02:16:12.717 spill: info: too many simultaneous fetches for com (allowed 897 spilled 1)
```
It is mighty peculiar that we only ever log a single spill (=dropped fetch). Usually when we see a zone being rate limited, we see logging every 60s or so, and the counters going up and up.
But read on ...
The way fetches-per-zone and its logging works is this.
1. There is a structure that is created (and which persists for as long as there are any outstanding fetches) for each zone that we might want to limit - in this case com.
2. The structure has in it:
- count - a gauge that is incremented or decremented and reflects
how many outstanding fetches there are in progress
- allowed - a counter that is only incremented - it goes up each time
we allow a fetch to proceed
- dropped - a counter that is only incremented - it goes up each time
we decide to drop a fetch
- logged - the timestamp of when we last logged that we dropped
anything
3. The structure goes away once 'count' has gone back down to zero
4. We ONLY think about logging if 'dropped' is incremented. We also
only log (having incremented 'dropped') if it's more than 60s since
the last time we logged for this zone.
COROLLARY: We ALWAYS log the first drop for a zone that is being
limited by fetches-per-zone, but thereafter, will only log at most
every 60s for the duration of the existence of this specific counting
structure.
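Paraphrasing that bookkeeping as a small Python sketch (field names follow the description above, not the actual BIND source), with the current "only log on a drop, at most every 60s" rule:

```python
import time

class FetchCounter:
    """Per-zone fetch-limit bookkeeping as described above (a paraphrase, not BIND source)."""

    def __init__(self):
        self.count = 0     # gauge: outstanding fetches; structure discarded when this returns to 0
        self.allowed = 0   # counter: fetches allowed to proceed
        self.dropped = 0   # counter: fetches spilled (dropped)
        self.logged = 0.0  # timestamp of the last log message for this zone

    def drop(self, zone):
        """Current behaviour: only a drop can trigger logging, and at most once per 60s."""
        self.dropped += 1
        now = time.monotonic()
        if now - self.logged >= 60:
            print(f"too many simultaneous fetches for {zone} "
                  f"(allowed {self.allowed} spilled {self.dropped})")
            self.logged = now
```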
====
Now looking at the logs for com - I see that:
a) We're only ever logging a spill (drop) of 1. This is the first drop we did against com for this fetch-limits counting structure.
b) We never see anything bigger than 1 being logged. This can't be because we never dropped anything ever again because:
- We only think about logging WHEN we drop something - therefore if
the counting structure persisted, the next time we log would be
because we dropped another fetch - but it's still logging only 1!
- Even if we think about logging, we do it at most every 60s, yet
we're logging more frequently than that, which means that the
counting structure has to have gone away and then been recreated
for com, for us to be able to log more frequently.
c) See b) - we're logging more frequently than 60s...
====
My hypothesis on what's happening therefore is that there are periodic spikes in getting query responses from com, but between times, all is normal (and/or all is in cache) so the outstanding fetches count goes back to zero and the counting structure is deleted.
The logging is happening more frequently than every 60s (and we only ever see "spilled 1"), which means that we see only the first time we drop a fetch to the com servers.
We may well drop a bunch more (and have no way of knowing that we did from the logs), but whatever is causing us to drop them subsides before the 60s is up (after which we'd log again the next time we drop one) - and moreover, subsides so much that the count of outstanding fetches went back to zero and we deleted the fetches counter structure entirely!
====
What I don't know is how many drops this server actually did before things went back to normal, because we deleted the structure that did the counting before we reached 60s after the first log message.
Also, now that I understand it, it's really annoying that I'm seeing the first instance of a drop being logged as if it contains useful facts about how many fetches we're spilling. Nope - that's not it. What we're logging is "fetches-per-zone has been triggered for the first time for com, and we dropped our first fetch, we might drop a few more, who knows? ..."
My suggestion for an improved logging strategy is something like this:
a) First time we log, we say something like:
too many simultaneous fetches for com (allowed 1997 spilled 1; fetches-per-zone initial trigger event)
b) Subsequent logging (based on 'we dropped something else AND it's more than 60s since we last logged') should look more or less the same:
too many simultaneous fetches for com (allowed 6423 spilled 24; cumulative since initial trigger event)
c) New: when we delete the counting structure, and IFF the spill counter > 0, we emit a new log:
fetch counters for com now being discarded (allowed 2345 spilled 6; cumulative since initial trigger event)
----
Or something like that.
Note that we only want to emit the new log IF we ever emitted the first one. Also that we can't log unless something triggers it (in the case of the other logs, the trigger is that we spill something, this last one is that we destroy the counter structure).
Obviously we don't want to log every time we hit 60s if we're no longer spilling. Also this means that we might not log that final event until after the server has been running for hours, or if we're shutting it down, if we're no longer spilling. But it's better than nothing. And it at least gives us a final count of 'we spilled this many..', which is the piece of information that was woefully lacking in the log sequence I investigated above.
And yes, I agree that logging meaningfully and usefully for fetches-per-zone is hard, plus there's a balance to be struck between too much and too little.
I'm also delighted that we can now dump all the information about the current fetch-limits status 'now', should we want to:
https://gitlab.isc.org/isc-projects/bind9/-/issues/665

Milestone: August 2022 (9.16.32, 9.16.32-S1, 9.18.6, 9.19.4). Assignee: Arаm Sаrgsyаn.

## Possible race in dns_dispatch_connect()
https://gitlab.isc.org/isc-projects/bind9/-/issues/3456 (Evan Hunt, updated 2022-08-25)

A [test failure](https://gitlab.isc.org/isc-projects/bind9/-/jobs/2640755) turned up in !6562 that indicates a probable race in `dns_dispatch_connect()`: thread 1 sets `disp->tcpstate` from NONE to CONNECTING; thread 2 finds `disp->tcpstate` is CONNECTING, locks the dispatch, appends something to `disp->pending`, and unlocks; thread 1 locks the dispatch, finds that `disp->pending` is not empty, and asserts.
I think the correct fix is just to remove the `INSIST` that the list has to be empty.

Milestone: August 2022 (9.16.32, 9.16.32-S1, 9.18.6, 9.19.4). Assignee: Evan Hunt.

## resolver.c: warning: '%s' directive output may be truncated writing up to 1023 bytes into a region of size 1021
https://gitlab.isc.org/isc-projects/bind9/-/issues/3453 (Michal Nowak, updated 2022-08-04)

549cf0f3e65e31e8a5d99ff819f38b8a80ea93be on Solaris 11.4 with GCC 11.2.0 produces the following warning:
```
resolver.c: In function 'dns_resolver_dumpquota':
resolver.c:11537:39: warning: '%s' directive output may be truncated writing up to 1023 bytes into a region of size 1021 [-Wformat-truncation=]
11537 | "\n- %s: %u active (allowed %u spilled %u)",
| ^~
11538 | nb, fc->count, fc->allowed, fc->dropped);
| ~~
resolver.c:11536:25: note: 'snprintf' output between 36 and 1086 bytes into a destination of size 1024
11536 | snprintf(text, sizeof(text),
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
11537 | "\n- %s: %u active (allowed %u spilled %u)",
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
11538 | nb, fc->count, fc->allowed, fc->dropped);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
```

Milestone: August 2022 (9.16.32, 9.16.32-S1, 9.18.6, 9.19.4).