ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues

---

https://gitlab.isc.org/isc-projects/kea/-/issues/2256
Prepare subnet selection speedup: Patricia trees
2023-06-19T12:08:06Z · Francis Dupont

Patricia trees are the optimal general data structure for IP prefix lookup: the number of comparisons is bounded by the number of bits, so 32 for IPv4 or 128 for IPv6; the real number depends on the number of nodes where the children differ, and in practice is far lower.
BTW, the current code just iterates over all subnets, so it is linear in the number of subnets (N/2 in the average case, N in the worst case, which includes the not-found case). A Patricia tree also detects overlaps (equality or inclusion).

Milestone: backlog · Assignee: Razvan Becheriu

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3071
Signed version of an inline-signed zone may be dumped without unsigned serial number information
2022-01-06T11:27:25Z · Michał Kępień

When the signed version of an inline-signed zone is dumped to disk, the
serial number of the unsigned version of the zone is written in the
raw-format header so that the contents of the signed zone can be
resynchronized after `named` restart if the unsigned zone file is
modified while `named` is not running (see [RT #26676][1]).
In order for the serial number of the unsigned zone to be determined
during the dump, `zone->raw` must be set to a non-`NULL` value. This
should always be the case as long as the signed version of the zone is
used for anything by `named`.
However, a scenario exists in which the signed version of the zone has
`zone->raw` set to `NULL` while it is being dumped:
1. Zone dump is requested; `zone_dump()` is invoked.
2. Another zone dump is already in progress, so the dump gets deferred
   until I/O is available (see `zonemgr_getio()`).
3. The last external reference to the zone is released.
   `zone_shutdown()` gets queued to the zone's task.
4. I/O becomes available for zone dumping. `zone_gotwritehandle()`
   gets queued to the zone's task.
5. The zone's task runs `zone_shutdown()`. `zone->raw` gets set to
   `NULL`.
6. The zone's task runs `zone_gotwritehandle()`. `zone->raw` is
   determined to be `NULL`, causing the serial number of the unsigned
   version of the zone to be omitted from the raw-format dump of the
   signed zone file.
I believe this issue became easier to trigger in BIND 9.12.0. That was
the first BIND 9 release containing change 4613 (see [RT #38324][2]),
specifically this hunk:
```diff
@@ -9773,7 +9822,7 @@ dns_zone_flush(dns_zone_t *zone) {
dumping = ISC_TRUE;
UNLOCK_ZONE(zone);
if (!dumping)
- result = zone_dump(zone, ISC_FALSE); /* Unknown task. */
+ result = zone_dump(zone, ISC_TRUE); /* Unknown task. */
return (result);
}
```
`zone_dump()` can either perform the `zone->raw` check itself or defer
it until zone dump I/O becomes available. Before the above change,
deferring the check was only possible if `zone_dump()` was called from
`zone_maintenance()` (which itself is timer-based). The above change
enables the `zone->raw` check to also be deferred when `zone_dump()` is
called from `dns_zone_flush()`, i.e. essentially from anywhere,
particularly from zone table cleanup callbacks which are run when the
zone's reference count is likely to drop to zero, triggering
`zone_shutdown()` and ultimately causing the bug above.
The above change was originally introduced in commit
[980611a3fe3ececeb0049b9e7c2e380b577f5e68][3] without any detailed
explanation. I am not entirely sure why, but the change seems to be
necessary in order for some tests related to `max-journal-size` to pass.
I ran out of time to determine why that is. Note, however, that
`zone_dump()` [warns][4] against setting `compact` to `true` for
non-task-locked call sites (see also the code comments next to
`zone_dump()` invocations).
At any rate, I believe that the bug could be triggered even without the
above change - when the zone's reference count drops to zero while
`zone_maintenance()` is running. I have not confirmed that it is
practically possible and I can certainly be missing some implicit
protection against such a triggering scenario happening. It does not
matter much anyway with BIND 9.11 reaching EoL soon and this not being a
critical problem.
The only quick way to fix this issue that I see is to defer detaching
from `zone->raw` in `zone_shutdown()` if the zone is in the process of
being dumped to disk.
The problem is easily reproducible, though I need to find a clean way of
turning it into a system test.
This problem was [discovered][5] in the process of attempting to fix an
unrelated issue.
[1]: https://bugs.isc.org/Ticket/Display.html?id=26676
[2]: https://bugs.isc.org/Ticket/Display.html?id=38324
[3]: https://gitlab.isc.org/isc-archive/bind9/-/commit/980611a3fe3ececeb0049b9e7c2e380b577f5e68
[4]: https://gitlab.isc.org/isc-projects/bind9/-/blob/ae7ba926d4dca3f3d3eedc63d946b4f99f438029/lib/dns/zone.c#L11967-11969
[5]: https://gitlab.isc.org/isc-projects/bind9/-/merge_requests/5676#note_257235

Milestone: January 2022 (9.16.25, 9.16.25-S1, 9.17.22)

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3072
Investigate/fix how CATZ update post-processing can block servicing of inbound queries in BIND 9.16
2023-01-19T09:31:38Z · Cathy Almond

Related to Support Ticket [RT #19629](https://support.isc.org/Ticket/Display.html?id=19629)
After upgrading a busy authoritative server from BIND 9.11 to BIND 9.16, the behaviour of the RTTs on the primary zone monitoring test queries (sampled at a rate of 2 QPS) changed significantly to become much more 'spiky'. Overall, the RTTs are lower (9.16 is faster at servicing queries than 9.11), but there are some significant 'spikes' where the RTT of the test queries are much larger than average.
Investigation of the causes (carried out by eliminating potential candidates while running and monitoring a 'test' server) highlighted that the 'spikes' corresponded to the period immediately after an update had been received for a catalog zone.
(It was also noted that the spikes did not occur after every catalog zone update, but this tallies with the way inbound client queries are hashed to a netmgr thread, which may or may not be the one temporarily blocked by CATZ post-processing.)
---
My understanding is that catalog zone post-processing is a task that runs to completion, i.e. it does not iterate or pause while sorting out adds/deletes/changes to catalog zones on the receiving secondary server - so this is a plausible cause for these test RTT spikes.
Also a potential candidate might be inbound AXFR/IXFR processing as a follow-on outcome of CATZ updates.
---
Please can this be investigated/tested/confirmed and solutions considered. It may be that CATZ post-processing should be considered as another candidate to migrate to threadpools, although it might also need to iterate, if it ends up locking resources that could block inbound client queries too (i.e. it is not just about it sitting on the netmgr thread doing its thing; what it is doing may also block other threads - ref !5151).

---

https://gitlab.isc.org/isc-projects/kea/-/issues/2257
lease not loaded from memfile if user context has multiple key-value pairs
2022-01-12T12:26:19Z · Andrei Pavel (andrei@isc.org)

memfile CSV:
```
address,hwaddr,client_id,valid_lifetime,expire,subnet_id,fqdn_fwd,fqdn_rev,hostname,state,user_context
10.0.0.1,ff:01:02:03:04:08,01:ff:01:02:03:04:08,7200,1641205200,1,0,0,,0,{"comment": "hello", "comment2": "do not release"}
```
error:
```
ERROR DHCPSRV_MEMFILE_LEASE_LOAD_ROW_ERROR discarding row x, error: EOF read, one of ",}" expected in <string>:y:z
```
See the `MemfileLeaseMgrTest.v4UserContext` and `MemfileLeaseMgrTest.v6UserContext` unit tests. They will require adjustments.

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3073
NSUPDATE crypto failure
2022-01-03T11:47:35Z · Jan Sorensen

<!--
If the bug you are reporting is potentially security-related - for example,
if it involves an assertion failure or other crash in `named` that can be
triggered repeatedly - then please do *NOT* report it here, but send an
email to [security-officer@isc.org](security-officer@isc.org).
-->
### Summary
NSUPDATE returns dns_request_createvia: crypto failure
### BIND version used
BIND 9.17.21 (Development Release) <id:ffdb856>
running on Linux x86_64 4.18.0-348.7.1.el8_5.x86_64 #1 SMP Tue Dec 21 19:02:23 UTC 2021
built by make with '--disable-linux-caps' '--with-gssapi=no' '--with-tuning=small' '--with-libnghttp2=no' '--disable-doh' 'LDFLAGS=-L/usr/local/lib64/' 'CPPFLAGS=-I/usr/local/include/openssl'
compiled by GCC 8.5.0 20210514 (Red Hat 8.5.0-4)
compiled with OpenSSL version: OpenSSL 3.0.1 14 Dec 2021
linked to OpenSSL version: OpenSSL 3.0.1 14 Dec 2021
compiled with libuv version: 1.41.1
linked to libuv version: 1.41.1
compiled with libxml2 version: 2.9.7
linked to libxml2 version: 20907
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
threads support is enabled
default paths:
named configuration: /usr/local/etc/named.conf
rndc configuration: /usr/local/etc/rndc.conf
DNSSEC root key: /usr/local/etc/bind.keys
nsupdate session key: /usr/local/var/run/named/session.key
named PID file: /usr/local/var/run/named/named.pid
named lock file: /usr/local/var/run/named/named.lock
### Steps to reproduce
/usr/local/bin/nsupdate -DD -k bistruphave.key file
### What is the current *bug* behavior?
setup_system()
Creating key...
Creating key...
namefromtext
keycreate
reset_system()
user_interaction()
do_next_command()
do_next_command()
evaluate_update()
update_addordelete()
do_next_command()
evaluate_update()
update_addordelete()
do_next_command()
evaluate_update()
update_addordelete()
do_next_command()
start_update()
dns_request_createvia: crypto failure
### What is the expected *correct* behavior?
No crypto failure
### Additional information
When NSUPDATE is compiled with OpenSSL version 1.1.1 it works correctly.
With version 3.0.1 it fails, and no traffic is observed with tcpdump on the
primary DNS server, which should receive the update.

---

https://gitlab.isc.org/isc-projects/stork/-/issues/681
Optimize host reservation puller
2022-02-01T12:53:59Z · Marcin Siodelski

Kea provides no way to signal whether or not host reservations have changed in the database. Thus, Stork always fetches all host reservations for a daemon and updates them in its database. In a typical environment, host reservations do not change very often, so there is no need to update them in the Stork database all the time. In addition, ticket #680 introduces config reviews for host reservations. Right now, the config reviews are scheduled every time the puller fetches all reservations. We should optimize the hosts puller to check whether there are any changes to the pulled host reservations and update the reservations in the Stork database only when there are changes. The config reviews should be scheduled only when at least one host reservation has changed.

Milestone: 1.1 · Assignee: Marcin Siodelski

---

https://gitlab.isc.org/isc-projects/kea/-/issues/2258
Cloudsmith repos not working with RockyLinux 8.5 (RHEL8)
2022-02-10T14:53:41Z · Olavi Kuldvee

---
name: Cloudsmith repos not working with RockyLinux 8.5 (RHEL8)
about: RHEL8 and derivatives come by default with ISC-KEA 1.8.0 from the EPEL repo; if you want to update to a newer binary version, you have to add repos from ISC Cloudsmith and get newer versions of ISC-KEA from there.
---
All ISC Cloudsmith .repo setup links come with faulty URLs for downloading ISC-KEA RPMs.
**To Reproduce**
1. Install RockyLinux 8.4/8.5
2. Add ISC-KEA repo from https://cloudsmith.io/~isc/repos/ (Set Me Up -> RedHat)
3. After adding ISC-KEA repo, try "dnf search kea"
4. Result: only KEA 1.8.0 (from the EPEL repo) is listed; no ISC-KEA is listed.
**Expected behavior**
You should get both versions of KEA (KEA from EPEL and ISC-KEA from Cloudsmith) listed, if you have added both the EPEL and Cloudsmith repos.
**Environment:**
- Kea version: Tried all "Set Me Up -> RedHat" repos from 1.9-2.0-2.1
- OS: RockyLinux 8.5 4.18.0-348.7.1.el8_5.x86_64
**Solution**
1. cd /etc/yum.repos.d
2. Fix urls in isc-kea-2-1.repo (or isc-kea-1-9.repo, isc-kea-2-0.repo)
3. Find three lines with each section
baseurl=https://dl.cloudsmith.io/public/isc/kea-2-1/rpm/rocky/8.5/$basearch
And replace "rocky" with "el" and "8.5" with "8"
baseurl=https://dl.cloudsmith.io/public/isc/kea-2-1/rpm/el/8/$basearch
4. After these replacements, ISC-KEA is listed in DNF packages and can be installed without problems.

Assignee: Vicky Risk (vicky@isc.org)

---

https://gitlab.isc.org/isc-projects/kea/-/issues/2259
gss-tsig-rekey and gss-tsig-rekey-all are missing from the ARM
2022-04-07T20:12:36Z · Andrei Pavel (andrei@isc.org)

* [X] add `src/share/api/gss-tsig-rekey.json`
* [X] add `src/share/api/gss-tsig-rekey-all.json`

Milestone: kea2.1.5 · Assignee: Razvan Becheriu

---

https://gitlab.isc.org/isc-projects/bind9/-/issues/3074
Assertion failure in the "catz" system test (isc_taskmgr_excltask() error)
2022-01-10T14:26:28Z · Michał Kępień

https://gitlab.isc.org/isc-projects/bind9/-/jobs/2203661
This happened for the `v9_16` branch, but `main` has a similar code
block.
Looking at line numbers, it seems that the failing assertion is:
```c
2953 task = NULL;
2954 result = isc_taskmgr_excltask(taskmgr, &task);
2955 >>> REQUIRE(result == ISC_R_SUCCESS);
2956 isc_task_send(task, ISC_EVENT_PTR(&event));
2957 isc_task_detach(&task);
```
Could the recent updates to `dns_catz_zones_merge()` be somehow related?
<details>
<summary>Click to expand/hide full test output</summary>
<pre>S:catz:2022-01-03T03:10:10+0000
T:catz:1:A
A:catz:System test catz
I:catz:PORTRANGE:6200 - 6299
I:catz:starting servers
I:catz:Testing adding/removing of domain in catalog zone
I:catz:checking that dom1.example. is not served by primary (1)
I:catz:Adding a domain dom1.example. to primary via RNDC (2)
I:catz:checking that dom1.example. is now served by primary (3)
I:catz:Adding domain dom1.example. to catalog1 zone (4)
I:catz:waiting for secondary to sync up (5)
I:catz:checking that dom1.example. is served by secondary (6)
I:catz:checking that zone-directory is populated (7)
I:catz:update dom1.example. (8)
I:catz:wait for secondary to be updated (9)
I:catz:check that journal was created for cleanup test (10)
I:catz:update catalog zone serial (11)
I:catz:wait for catalog zone to transfer (12)
I:catz:update dom1.example. again (13)
I:catz:wait for secondary to be updated again (14)
I:catz:removing domain dom1.example. from catalog1 zone (15)
I:catz:waiting for secondary to sync up (16)
I:catz:checking that dom1.example. is not served by secondary (17)
I:catz:checking that zone-directory is emptied (18)
I:catz:Testing various simple operations on domains, including using multiple catalog zones and garbage in zone
I:catz:adding domain dom2.example. to primary via RNDC (19)
I:catz:adding domain dom4.example. to primary via RNDC (20)
I:catz:adding domains dom2.example, dom3.example. and some garbage to catalog1 zone (21)
I:catz:adding domain dom4.example. to catalog2 zone (22)
I:catz:waiting for secondary to sync up (23)
I:catz:checking that dom4.example. is served by secondary (24)
I:catz:checking that dom3.example. is not served by primary (25)
I:catz:adding a domain dom3.example. to primary via RNDC (26)
I:catz:checking that dom3.example. is served by primary (27)
I:catz:waiting for secondary to sync up (28)
I:catz:checking that dom3.example. is served by secondary (29)
I:catz:removing all records from catalog1 zone (30)
I:catz:removing all records from catalog2 zone (31)
I:catz:Testing masters suboption and random labels
I:catz:adding dom5.example. with a valid masters suboption (IP without TSIG) and a random label (32)
I:catz:waiting for secondary to sync up (33)
I:catz:checking that dom5.example. is served by secondary (34)
I:catz:removing dom5.example. (35)
I:catz:waiting for secondary to sync up (36)
I:catz:checking that dom5.example. is no longer served by secondary (37)
I:catz:Testing masters global option
I:catz:adding dom6.example. and a valid global masters option (IP without TSIG) (38)
I:catz:waiting for secondary to sync up (39)
I:catz:checking that dom6.example. is served by secondary (40)
I:catz:removing dom6.example. (41)
I:catz:waiting for secondary to sync up (42)
I:catz:checking that dom6.example. is no longer served by secondary (43)
I:catz:adding dom6.example. and an invalid global masters option (TSIG without IP) (44)
I:catz:waiting for secondary to sync up (45)
I:catz:removing dom6.example. (46)
I:catz:waiting for secondary to sync up (47)
I:catz:Checking that a missing zone directory forces in-memory (48)
I:catz:Testing allow-query and allow-transfer ACLs
I:catz:adding domains dom7.example. and dom8.example. to primary via RNDC (49)
I:catz:checking that dom7.example. is now served by primary (50)
I:catz:adding domain dom7.example. to catalog1 zone with an allow-query statement (51)
I:catz:waiting for secondary to sync up (52)
I:catz:checking that dom7.example. is accessible from 10.53.0.1 (53)
I:catz:checking that dom7.example. is not accessible from 10.53.0.2 (54)
I:catz:checking that dom7.example. is accessible from 10.53.0.5 (55)
I:catz:adding dom8.example. domain and global allow-query and allow-transfer ACLs (56)
I:catz:waiting for secondary to sync up (57)
I:catz:checking that dom8.example. is accessible from 10.53.0.1 (58)
I:catz:checking that dom8.example. is not accessible from 10.53.0.2 (59)
I:catz:checking that dom8.example. is not AXFR accessible from 10.53.0.1 (60)
I:catz:checking that dom8.example. is AXFR accessible from 10.53.0.2 (61)
I:catz:deleting global allow-query and allow-domain ACLs (62)
I:catz:checking that dom8.example. is accessible from 10.53.0.1 (63)
I:catz:checking that dom8.example. is accessible from 10.53.0.2 (64)
I:catz:checking that dom8.example. is AXFR accessible from 10.53.0.1 (65)
I:catz:checking that dom8.example. is AXFR accessible from 10.53.0.2 (66)
I:catz:Testing TSIG keys for masters set per-domain
I:catz:adding a domain dom9.example. to primary via RNDC, with transfers allowed only with TSIG key (67)
I:catz:checking that dom9.example. is now served by primary (68)
I:catz:adding domain dom9.example. to catalog1 zone with a valid masters suboption (IP with TSIG) (69)
I:catz:waiting for secondary to sync up (70)
I:catz:checking that dom9.example. is accessible on secondary (71)
I:catz:deleting domain dom9.example. from catalog1 zone (72)
I:catz:waiting for secondary to sync up (73)
I:catz:checking that dom9.example. is no longer accessible on secondary (74)
I:catz:adding domain dom9.example. to catalog1 zone with an invalid masters suboption (TSIG without IP) (75)
I:catz:waiting for secondary to sync up (76)
I:catz:deleting domain dom9.example. from catalog1 zone (77)
I:catz:waiting for secondary to sync up (78)
I:catz:Testing catalog entries that can't be represented as filenames
I:catz:checking that this.is.a.very.very.long.long.long.domain.that.will.cause.catalog.zones.to.generate.hash.instead.of.using.regular.filename.dom10.example. is not served by primary (79)
I:catz:Adding a domain this.is.a.very.very.long.long.long.domain.that.will.cause.catalog.zones.to.generate.hash.instead.of.using.regular.filename.dom10.example. to primary via RNDC (80)
I:catz:checking that this.is.a.very.very.long.long.long.domain.that.will.cause.catalog.zones.to.generate.hash.instead.of.using.regular.filename.dom10.example. is now served by primary (81)
I:catz:Adding domain this.is.a.very.very.long.long.long.domain.that.will.cause.catalog.zones.to.generate.hash.instead.of.using.regular.filename.dom10.example. to catalog1 zone (82)
I:catz:waiting for secondary to sync up (83)
I:catz:checking that this.is.a.very.very.long.long.long.domain.that.will.cause.catalog.zones.to.generate.hash.instead.of.using.regular.filename.dom10.example. is served by secondary (84)
I:catz:checking that zone-directory is populated with a hashed filename (85)
I:catz:removing domain this.is.a.very.very.long.long.long.domain.that.will.cause.catalog.zones.to.generate.hash.instead.of.using.regular.filename.dom10.example. from catalog1 zone (86)
I:catz:waiting for secondary to sync up (87)
I:catz:checking that this.is.a.very.very.long.long.long.domain.that.will.cause.catalog.zones.to.generate.hash.instead.of.using.regular.filename.dom10.example. is not served by secondary (88)
I:catz:checking that zone-directory is emptied (89)
I:catz:checking that this.zone/domain.has.a.slash.dom10.example. is not served by primary (90)
I:catz:Adding a domain this.zone/domain.has.a.slash.dom10.example. to primary via RNDC (91)
I:catz:checking that this.zone/domain.has.a.slash.dom10.example. is now served by primary (92)
I:catz:Adding domain this.zone/domain.has.a.slash.dom10.example. to catalog1 zone (93)
I:catz:waiting for secondary to sync up (94)
I:catz:checking that this.zone/domain.has.a.slash.dom10.example. is served by secondary (95)
I:catz:checking that zone-directory is populated with a hashed filename (96)
I:catz:removing domain this.zone/domain.has.a.slash.dom10.example. from catalog1 zone (97)
I:catz:waiting for secondary to sync up (98)
I:catz:checking that this.zone/domain.has.a.slash.dom10.example. is not served by secondary (99)
I:catz:checking that zone-directory is emptied (100)
I:catz:checking that this.zone\\domain.has.backslash.dom10.example. is not served by primary (101)
I:catz:Adding a domain this.zone\\domain.has.backslash.dom10.example. to primary via RNDC (102)
I:catz:checking that this.zone\\domain.has.backslash.dom10.example. is now served by primary (103)
I:catz:Adding domain this.zone\\domain.has.backslash.dom10.example. to catalog1 zone (104)
I:catz:waiting for secondary to sync up (105)
I:catz:checking that this.zone\\domain.has.backslash.dom10.example. is served by secondary (106)
I:catz:checking that zone-directory is populated with a hashed filename (107)
I:catz:removing domain this.zone\\domain.has.backslash.dom10.example. from catalog1 zone (108)
I:catz:waiting for secondary to sync up (109)
I:catz:checking that this.zone\\domain.has.backslash.dom10.example. is not served by secondary (110)
I:catz:checking that zone-directory is emptied (111)
I:catz:checking that this.zone:domain.has.a.colon.dom.10.example. is not served by primary (112)
I:catz:Adding a domain this.zone:domain.has.a.colon.dom.10.example. to primary via RNDC (113)
I:catz:checking that this.zone:domain.has.a.colon.dom.10.example. is now served by primary (114)
I:catz:Adding domain this.zone:domain.has.a.colon.dom.10.example. to catalog1 zone (115)
I:catz:waiting for secondary to sync up (116)
I:catz:checking that this.zone:domain.has.a.colon.dom.10.example. is served by secondary (117)
I:catz:checking that zone-directory is populated with a hashed filename (118)
I:catz:removing domain this.zone:domain.has.a.colon.dom.10.example. from catalog1 zone (119)
I:catz:waiting for secondary to sync up (120)
I:catz:checking that this.zone:domain.has.a.colon.dom.10.example. is not served by secondary (121)
I:catz:checking that zone-directory is emptied (122)
I:catz:Testing adding a domain and a subdomain of it
I:catz:checking that dom11.example. is not served by primary (123)
I:catz:Adding a domain dom11.example. to primary via RNDC (124)
I:catz:checking that dom11.example. is now served by primary (125)
I:catz:Adding domain dom11.example. to catalog1 zone (126)
I:catz:waiting for secondary to sync up (127)
I:catz:checking that dom11.example. is served by secondary (128)
I:catz:checking that subdomain.of.dom11.example. is not served by primary (129)
I:catz:Adding a domain subdomain.of.dom11.example. to primary via RNDC (130)
I:catz:checking that subdomain.of.dom11.example. is now served by primary (131)
I:catz:Adding domain subdomain.of.dom11.example. to catalog1 zone (132)
I:catz:waiting for secondary to sync up (133)
I:catz:checking that subdomain.of.dom11.example. is served by secondary (134)
I:catz:removing domain dom11.example. from catalog1 zone (135)
I:catz:waiting for secondary to sync up (136)
I:catz:checking that dom11.example. is not served by secondary (137)
I:catz:checking that subdomain.of.dom11.example. is still served by secondary (138)
I:catz:removing domain subdomain.of.dom11.example. from catalog1 zone (139)
I:catz:waiting for secondary to sync up (140)
I:catz:checking that subdomain.of.dom11.example. is not served by secondary (141)
I:catz:Testing adding a catalog zone at runtime with rndc reconfig
I:catz:checking that dom12.example. is not served by primary (142)
I:catz:Adding a domain dom12.example. to primary via RNDC (143)
I:catz:checking that dom12.example. is now served by primary (144)
I:catz:Adding domain dom12.example. to catalog4 zone (145)
I:catz:checking that dom12.example. is not served by secondary (146)
I:catz:reconfiguring secondary - adding catalog4 catalog zone (147)
I:catz:waiting for secondary to sync up (148)
I:catz:checking that dom7.example. is still served by secondary after reconfiguration (149)
I:catz:checking that dom12.example. is served by secondary (150)
I:catz:reconfiguring secondary - removing catalog4 catalog zone, adding non-existent catalog5 catalog zone (151)
I:catz:reconfiguring secondary - removing non-existent catalog5 catalog zone (152)
I:catz:checking that dom12.example. is not served by secondary (153)
I:catz:removing domain dom12.example. from catalog4 zone (154)
I:catz:Testing having a zone in two different catalogs
I:catz:checking that dom13.example. is not served by primary (155)
I:catz:Adding a domain dom13.example. to primary ns1 via RNDC (156)
I:catz:checking that dom13.example. is now served by primary ns1 (157)
I:catz:Adding a domain dom13.example. to primary ns3 via RNDC (158)
I:catz:checking that dom13.example. is now served by primary ns3 (159)
I:catz:Adding domain dom13.example. to catalog1 zone with ns1 as primary (160)
I:catz:waiting for secondary to sync up (161)
I:catz:checking that dom13.example. is served by secondary and that it's the one from ns1 (162)
I:catz:Adding domain dom13.example. to catalog2 zone with ns3 as primary (163)
I:catz:waiting for secondary to sync up (164)
I:catz:checking that dom13.example. is served by secondary and that it's still the one from ns1 (165)
I:catz:Deleting domain dom13.example. from catalog2 (166)
I:catz:waiting for secondary to sync up (167)
I:catz:checking that dom13.example. is served by secondary and that it's still the one from ns1 (168)
I:catz:Deleting domain dom13.example. from catalog1 (169)
I:catz:waiting for secondary to sync up (170)
I:catz:checking that dom13.example. is no longer served by secondary (171)
I:catz:Testing having a regular zone and a zone in catalog zone of the same name
I:catz:checking that dom14.example. is not served by primary (172)
I:catz:Adding a domain dom14.example. to primary ns1 via RNDC (173)
I:catz:checking that dom14.example. is now served by primary ns1 (174)
I:catz:Adding a domain dom14.example. to primary ns3 via RNDC (175)
I:catz:checking that dom14.example. is now served by primary ns3 (176)
I:catz:Adding domain dom14.example. with rndc with ns1 as primary (177)
I:catz:waiting for secondary to sync up (178)
I:catz:checking that dom14.example. is served by secondary and that it's the one from ns1 (179)
I:catz:Adding domain dom14.example. to catalog2 zone with ns3 as primary (180)
I:catz:waiting for secondary to sync up (181)
I:catz:checking that dom14.example. is served by secondary and that it's still the one from ns1 (182)
I:catz:Deleting domain dom14.example. from catalog2 (183)
I:catz:waiting for secondary to sync up (184)
I:catz:checking that dom14.example. is served by secondary and that it's still the one from ns1 (185)
I:catz:Testing changing label for a member zone
I:catz:checking that dom15.example. is not served by primary (186)
I:catz:Adding a domain dom15.example. to primary ns1 via RNDC (187)
I:catz:checking that dom15.example. is now served by primary ns1 (188)
I:catz:Adding domain dom15.example. to catalog1 zone with 'dom15label1' label (188)
I:catz:waiting for secondary to sync up (189)
I:catz:checking that dom15.example. is served by secondary (190)
I:catz:Changing label of domain dom15.example. from 'dom15label1' to 'dom15label2' (191)
I:catz:waiting for secondary to sync up (192)
I:catz:checking that dom15.example. is served by secondary (193)
I:catz:Testing recreation of a manually deleted zone after a reload
I:catz:checking that dom16.example. is not served by primary (194)
I:catz:Adding a domain dom16.example. to primary ns1 via RNDC (195)
I:catz:checking that dom16.example. is now served by primary ns1 (196)
I:catz:Adding domain dom16.example. to catalog1 zone with ns1 as primary (197)
I:catz:waiting for secondary to sync up (198)
I:catz:checking that dom16.example. is served by secondary and that it's the one from ns1 (199)
I:catz:Deleting dom16.example. from secondary ns2 via RNDC (199)
I:catz:checking that dom16.example. is no longer served by secondary (200)
I:catz:Reloading secondary ns2 via RNDC (200)
I:catz:waiting for secondary to sync up (201)
I:catz:checking that dom16.example. is served by secondary and that it's the one from ns1 (202)
I:catz:Deleting domain dom16.example. from catalog1 (203)
I:catz:waiting for secondary to sync up (204)
I:catz:checking that dom16.example. is no longer served by secondary (205)
I:catz:checking that reconfig can delete and restore catalog zone configuration (206)
I:catz:exit status: 0
I:catz:stopping servers
I:catz:Core dump(s) found: catz/ns2/core.18648
R:catz:FAIL
D:catz:backtrace from catz/ns2/core.18648:
D:catz:--------------------------------------------------------------------------------
D:catz:Core was generated by `/builds/isc-projects/bind9/bin/named/.libs/named -D catz-ns2 -X named.lock -m r'.
D:catz:Program terminated with signal SIGABRT, Aborted.
D:catz:#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
D:catz:[Current thread is 1 (Thread 0x7fcb19bbc700 (LWP 18663))]
D:catz:#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
D:catz:#1 0x00007fcb1e9f1537 in __GI_abort () at abort.c:79
D:catz:#2 0x00000000004695d3 in abort ()
D:catz:#3 0x00000000004e434a in assertion_failed (file=<optimized out>, line=<optimized out>, type=<optimized out>, cond=0x54b771 "result == 0") at ./main.c:270
D:catz:#4 0x00007fcb1f33342e in isc_assertion_failed (file=0x2 <error: Cannot access memory at address 0x2>, line=431451424, line@entry=2955, type=type@entry=isc_assertiontype_require, cond=0x7fcb1ea07ce1 <__GI_raise+321> "H\213\204$\b\001") at assertions.c:46
D:catz:#5 0x000000000050dd1f in catz_create_chg_task (entry=entry@entry=0x7b3000026410, origin=origin@entry=0x7b5000010410, view=view@entry=0x7b6c00404910, taskmgr=taskmgr@entry=0x7b3000000190, udata=udata@entry=0x383ecc8 <ns_catz_cbdata>, type=type@entry=262198) at ./server.c:2955
D:catz:#6 0x000000000050daa0 in catz_addzone (entry=0x2, origin=0x7fcb19b76d20, view=0x0, taskmgr=0x7fcb1ea07ce1 <__GI_raise+321>, udata=0x0) at ./server.c:2965
D:catz:#7 0x00007fcb1f475f30 in dns_catz_zones_merge (target=target@entry=0x7b5000010410, newzone=<optimized out>) at catz.c:555
D:catz:#8 0x00007fcb1f47ae06 in dns_catz_update_from_db (db=db@entry=0x7b5000000610, catzs=<optimized out>) at catz.c:1995
D:catz:#9 0x00007fcb1f476b21 in dns_catz_update_taskaction (task=<optimized out>, event=<optimized out>) at catz.c:1746
D:catz:#10 0x00007fcb1f379dda in task_run (task=<optimized out>) at task.c:857
D:catz:#11 isc_task_run (task=<optimized out>) at task.c:950
D:catz:#12 0x00007fcb1f3570f6 in process_netievent (worker=worker@entry=0x7ba000008010, ievent=ievent@entry=0x7b38003283b0) at netmgr.c:933
D:catz:#13 0x00007fcb1f3526c9 in process_queue (worker=0x7ba000008010, type=NETIEVENT_TASK) at netmgr.c:1007
D:catz:#14 process_all_queues (worker=0x7ba000008010) at netmgr.c:778
D:catz:#15 async_cb (handle=0x7ba000008370) at netmgr.c:807
D:catz:#16 0x00007fcb1f0bbdee in uv__async_io (loop=0x7ba000008020, w=0x7ba0000081e8, events=1) at /usr/src/libuv-v1.42.0/src/unix/async.c:163
D:catz:#17 0x00007fcb1f0d792d in uv__io_poll (loop=0x7ba000008020, timeout=-1) at /usr/src/libuv-v1.42.0/src/unix/epoll.c:374
D:catz:#18 0x00007fcb1f0bc7d7 in uv_run (loop=0x7ba000008020, mode=UV_RUN_DEFAULT) at /usr/src/libuv-v1.42.0/src/unix/core.c:389
D:catz:#19 0x00007fcb1f3528b8 in nm_thread (worker0=0x7ba000008010) at netmgr.c:713
D:catz:#20 0x00007fcb1f37d29a in isc__trampoline_run (arg=0x7b0800017580) at trampoline.c:196
D:catz:#21 0x0000000000463c6d in __tsan_thread_start_func ()
D:catz:#22 0x00007fcb1f091ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
D:catz:#23 0x00007fcb1eac9def in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
D:catz:--------------------------------------------------------------------------------
D:catz:full backtrace from catz/ns2/core.18648 saved in catz/ns2/core.18648-backtrace.txt
D:catz:core dump catz/ns2/core.18648 archived as catz/ns2/core.18648.gz
E:catz:2022-01-03T03:13:50+0000
January 2022 (9.16.25, 9.16.25-S1, 9.17.22)
https://gitlab.isc.org/isc-projects/bind9/-/issues/3075
Assertion failure in isc_tlsctx_cache_attach()
2022-01-11T14:57:04Z
Michał Kępień

https://gitlab.isc.org/isc-projects/bind9/-/jobs/2205912
This happened on OpenBSD, but I do not immediately see anything
OpenBSD-specific here.
Looking at line numbers, it seems that the failing assertion is:
```c
939 void
940 isc_tlsctx_cache_attach(isc_tlsctx_cache_t *source,
941 isc_tlsctx_cache_t **targetp) {
942 REQUIRE(VALID_TLSCTX_CACHE(source));
943 >>> REQUIRE(targetp != NULL && *targetp == NULL);
944
945 isc_refcount_increment(&source->references);
946
947 *targetp = source;
948 }
```
The failing assertion was introduced by !5672 (see #3067).
Looking at the relevant code, this crash looks pretty wild because I
cannot immediately see how it could have happened. (I am only judging
from the backtrace and the source code, though, I have not looked at the
actual core dump.) Maybe a race during `rndc reconfig`?
<details>
<summary>Click to expand/hide full test output</summary>
<pre>S:mirror:2022-01-04T04:14:39+0000
T:mirror:1:A
A:mirror:System test mirror
I:mirror:PORTS:29227,29228,29229,29230,29231,29232,29233,29234,29235,29236,29237,29238,29239
I:mirror:starting servers
I:mirror:checking that an unsigned mirror zone is rejected (1)
I:mirror:checking that a mirror zone signed using an untrusted key is rejected (2)
I:mirror:checking that a mirror zone signed using a CSK without the SEP bit set is accepted (3)
I:mirror:checking that an AXFR of an incorrectly signed mirror zone is rejected (4)
I:mirror:checking that an AXFR of an updated, correctly signed mirror zone is accepted (5)
I:mirror:ns2 server reload successful
I:mirror:checking that an IXFR of an incorrectly signed mirror zone is rejected (6)
I:mirror:ns2 server reload successful
I:mirror:checking that an IXFR of an updated, correctly signed mirror zone is accepted after AXFR failover (7)
I:mirror:ns2 server reload successful
I:mirror:checking that loading an incorrectly signed mirror zone from disk fails (8)
I:mirror:ensuring trust anchor telemetry queries are sent upstream for a mirror zone (9)
I:mirror:checking that loading a correctly signed mirror zone from disk succeeds (10)
I:mirror:checking that loading a journal for an incorrectly signed mirror zone fails (11)
I:mirror:checking that loading a journal for a correctly signed mirror zone succeeds (12)
I:mirror:checking delegations sourced from a mirror zone (13)
I:mirror:checking that resolution involving a mirror zone works as expected (14)
I:mirror:checking that non-recursive queries for names below mirror zone get responded from cache (15)
I:mirror:checking that delegations from cache which improve mirror zone delegations are properly handled (16)
I:mirror:checking flags set in a DNSKEY response sourced from a mirror zone (17)
I:mirror:checking flags set in a SOA response sourced from a mirror zone (18)
I:mirror:checking that resolution succeeds with unavailable mirror zone data (19)
I:mirror:checking that resolution succeeds with expired mirror zone data (20)
I:mirror:checking that clients without cache access cannot retrieve mirror zone data (21)
I:mirror:checking that outgoing transfers of mirror zones are disabled by default (22)
I:mirror:checking that notifies are disabled by default for mirror zones (23)
I:mirror:checking output of "rndc zonestatus" for a mirror zone (24)
I:mirror:checking that "rndc reconfig" properly handles a mirror -> secondary zone type change (25)
I:mirror:ns3 rndc: connection to remote host closed.
I:mirror:ns3 * This may indicate that the
I:mirror:ns3 * remote server is using an older
I:mirror:ns3 * version of the command protocol,
I:mirror:ns3 * this host is not authorized to connect,
I:mirror:ns3 * the clocks are not synchronized,
I:mirror:ns3 * the key signing algorithm is incorrect,
I:mirror:ns3 * or the key is invalid.
rndc: connect failed: 10.53.0.3#29239: connection refused
I:mirror:exceeded time limit waiting for proof of 'verify-reconfig' being loaded to appear in ns3/named.run
I:mirror:failed
I:mirror:checking that "rndc reconfig" properly handles a secondary -> mirror zone type change (26)
I:mirror:ns3 rndc: connect failed: 10.53.0.3#29239: connection refused
rndc: connect failed: 10.53.0.3#29239: connection refused
I:mirror:exceeded time limit waiting for proof of 'verify-reconfig' being loaded to appear in ns3/named.run
I:mirror:failed
I:mirror:checking that a mirror zone can be added using rndc (27)
I:mirror:exceeded time limit waiting for proof of 'verify-addzone' being transferred to appear in ns3/named.run
I:mirror:failed
I:mirror:checking that a mirror zone can be deleted using rndc (28)
I:mirror:exceeded time limit waiting for 'zone verify-addzone/IN: mirror zone is no longer in use; reverting to normal recursion' in ns3/named.run
I:mirror:failed
I:mirror:exit status: 4
I:mirror:stopping servers
I:mirror:ns3 died before a SIGTERM was sent
I:mirror:stopping servers failed
I:mirror:Core dump(s) found: mirror/ns3/named.core
D:mirror:backtrace from mirror/ns3/named.core:
D:mirror:--------------------------------------------------------------------------------
D:mirror:Core was generated by `named'.
D:mirror:Program terminated with signal SIGABRT, Aborted.
D:mirror:#0 thrkill () at /tmp/-:3
D:mirror:[Current thread is 1 (process 569235)]
D:mirror:#0 thrkill () at /tmp/-:3
D:mirror:#1 0x00000848cb47fdee in _libc_abort () at /usr/src/lib/libc/stdlib/abort.c:51
D:mirror:#2 0x00000845e2c269a3 in assertion_failed (file=<optimized out>, line=<optimized out>, type=<optimized out>, cond=<optimized out>) at main.c:236
D:mirror:#3 0x00000848a30ac260 in isc_assertion_failed (file=0x0, line=6, type=isc_assertiontype_require, cond=0x848cb41f66a <thrkill+10> "r\001\303d\211\004% ") at assertions.c:47
D:mirror:#4 0x00000848a30d71f4 in isc_tlsctx_cache_attach (source=<error reading variable: Unhandled dwarf expression opcode 0xa3>, targetp=<error reading variable: Unhandled dwarf expression opcode 0xa3>) at tls.c:943
D:mirror:#5 0x00000848abba625e in dns_zonemgr_set_tlsctx_cache (zmgr=0x848bfeee020, tlsctx_cache=0x8485934e520) at zone.c:23652
D:mirror:#6 0x00000845e2c3608e in load_configuration (filename=<optimized out>, server=0x8483918c020, first_time=<optimized out>) at server.c:8440
D:mirror:#7 0x00000845e2c2a3fe in loadconfig (server=0x8483918c020) at server.c:10365
D:mirror:#8 0x00000845e2c2a2ed in named_server_reconfigcommand (server=0x8483918c020) at server.c:10762
D:mirror:#9 0x00000845e2c1efe5 in named_control_docommand (message=<error reading variable: Unhandled dwarf expression opcode 0xa3>, readonly=<optimized out>, text=0x848a26ba708) at control.c:246
D:mirror:#10 0x00000845e2c22467 in control_command (task=<error reading variable: Unhandled dwarf expression opcode 0xa3>, event=<error reading variable: Unhandled dwarf expression opcode 0xa3>) at controlconf.c:389
D:mirror:#11 0x00000848a30d0e2c in task_run (task=0x8487d3d1520) at task.c:827
D:mirror:#12 isc_task_run (task=0x8487d3d1520) at task.c:907
D:mirror:#13 0x00000848a30979ed in isc__nm_async_task (worker=<optimized out>, ev0=0x848a26ba820) at netmgr/netmgr.c:835
D:mirror:#14 process_netievent (worker=0x848941f40c0, ievent=0x848a26ba820) at netmgr/netmgr.c:914
D:mirror:#15 0x00000848a3092ab4 in process_queue (worker=0x848941f40c0, type=<error reading variable: Cannot access memory at address 0x2>) at netmgr/netmgr.c:1008
D:mirror:#16 process_all_queues (worker=0x848941f40c0) at netmgr/netmgr.c:754
D:mirror:#17 async_cb (handle=0x848941f4398) at netmgr/netmgr.c:783
D:mirror:#18 0x00000847ec25a92d in uv.async_io () from /usr/local/lib/libuv.so.3.0
D:mirror:#19 0x00000847ec26c829 in uv.io_poll () from /usr/local/lib/libuv.so.3.0
D:mirror:#20 0x00000847ec25b018 in uv_run () from /usr/local/lib/libuv.so.3.0
D:mirror:#21 0x00000848a3092bfb in nm_thread (worker0=0x848941f40c0) at netmgr/netmgr.c:689
D:mirror:#22 0x00000848a30d9146 in isc__trampoline_run (arg=0x848941e57a0) at trampoline.c:185
D:mirror:#23 0x000008482f9591c1 in _rthread_start (v=<error reading variable: Unhandled dwarf expression opcode 0xa3>) at /usr/src/lib/librthread/rthread.c:96
D:mirror:#24 0x00000848cb43e33a in __tfork_thread () at /usr/src/lib/libc/arch/amd64/sys/tfork_thread.S:84
D:mirror:--------------------------------------------------------------------------------
D:mirror:full backtrace from mirror/ns3/named.core saved in mirror/ns3/named.core-backtrace.txt
D:mirror:core dump mirror/ns3/named.core archived as mirror/ns3/named.core.gz
R:mirror:FAIL
E:mirror:2022-01-04T04:15:53+0000
January 2022 (9.16.25, 9.16.25-S1, 9.17.22)
Artem Boldariev
https://gitlab.isc.org/isc-projects/kea/-/issues/2260
minor bugs in lease4 commands
2022-11-02T15:10:41Z
Marcin Godzina

1. `lease4-get` error states "duid" and not "client-id"
```shell
{"command": "lease4-get",
"arguments": {"identifier": "ff:01:02:03:ff:05",
"identifier-type": "something",
"subnet-id": 1}}
```
Should return: `"Incorrect identifier type: something, the only supported values are: address, hw-address, client-id"`
Returns: `"Incorrect identifier type: something, the only supported values are: address, hw-address, duid"`
2. `lease4-get-by-client-id` error states "duid" and not "client-id"
```shell
{"command": "lease4-get-by-client-id",
"arguments": {"client-id": ""}}
```
Should return: `"Empty client-id is not allowed"`
Returns: `"Empty DUIDs are not allowed"`
3. `lease4-get-all` accepts negative numbers as subnet number
Not necessarily a problem.
```shell
{"command": "lease4-get-all",
"arguments": {"subnets": [-1]}}
```
4. `lease6-get` accepts `"subnet-id"`, `"identifier"`, and `"identifier-type": "duid"` without `"iaid"`,
but returns 0 leases found without complaining about the missing `iaid`.
Example:
```shell
{"command": "lease4-get",
"arguments": {"subnet-id": 1, "identifier": "1a:1b:1c:1d:1e:1f:20:21:22:23:24", "identifier-type": "duid"}}
```
will always return 0 leases found.
backlog
https://gitlab.isc.org/isc-projects/bind9/-/issues/3077
`check that SERVFAIL is returned for an empty question section via TCP` fails
2022-03-01T12:31:38Z
Mark Andrews

Job [#2208322](https://gitlab.isc.org/isc-projects/bind9/-/jobs/2208322) failed for 96d63ac075a66d231784eb2933b393c23521f619:
I've seen this multiple times.
```
I:resolver:check that SERVFAIL is returned for an empty question section via TCP (70)
I:resolver:failed
I:resolver:checking SERVFAIL is returned when all authoritative servers return FORMERR (71)
```
February 2022 (9.16.26, 9.16.26-S1)
https://gitlab.isc.org/isc-projects/bind9/-/issues/3078
Release Checklist for BIND 9.16.25, 9.16.25-S1, 9.17.22
2022-01-20T12:24:02Z
Petr Špaček (pspacek@isc.org)

## Release Schedule
**Preliminary!**
**Code Freeze:**
Friday, 07 January 2022
**Tagging Deadline:**
Monday, 10 January 2022
**Public Release:**
Wednesday, 19 January 2022
## Documentation Review Links
**Closed issues assigned to the milestone without a release note:**
- [9.17.22](https://gitlab.isc.org/isc-projects/bind9/-/issues?scope=all&state=closed&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29¬%5Blabel_name%5D%5B%5D=Release+Notes¬%5Blabel_name%5D%5B%5D=Duplicate&label_name%5B%5D=v9.17)
- [9.16.25-S1](https://gitlab.isc.org/isc-private/bind9/-/issues?scope=all&state=closed&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29¬%5Blabel_name%5D%5B%5D=Release+Notes¬%5Blabel_name%5D%5B%5D=Duplicate&label_name%5B%5D=v9.16-S)
- [9.16.25](https://gitlab.isc.org/isc-projects/bind9/-/issues?scope=all&state=closed&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29¬%5Blabel_name%5D%5B%5D=Release+Notes¬%5Blabel_name%5D%5B%5D=Duplicate&label_name%5B%5D=v9.16)
**Merge requests merged into the milestone without a release note:**
- [9.17.22](https://gitlab.isc.org/isc-projects/bind9/-/merge_requests?scope=all&state=merged&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29¬%5Blabel_name%5D%5B%5D=Release+Notes&target_branch=main)
- [9.16.25-S1](https://gitlab.isc.org/isc-private/bind9/-/merge_requests?scope=all&state=merged&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29¬%5Blabel_name%5D%5B%5D=Release+Notes&target_branch=v9_16_sub)
- [9.16.25](https://gitlab.isc.org/isc-projects/bind9/-/merge_requests?scope=all&state=merged&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29¬%5Blabel_name%5D%5B%5D=Release+Notes&target_branch=v9_16)
**Merge requests merged into the milestone without a CHANGES entry:**
- [9.17.22](https://gitlab.isc.org/isc-projects/bind9/-/merge_requests?scope=all&state=merged&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29&label_name%5B%5D=No+CHANGES&target_branch=main)
- [9.16.25-S1](https://gitlab.isc.org/isc-private/bind9/-/merge_requests?scope=all&state=merged&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29&label_name%5B%5D=No+CHANGES&target_branch=v9_16_sub)
- [9.16.25](https://gitlab.isc.org/isc-projects/bind9/-/merge_requests?scope=all&state=merged&milestone_title=January+2022+%289.16.25%2C+9.16.25-S1%2C+9.17.22%29&label_name%5B%5D=No+CHANGES&target_branch=v9_16)
## Release Checklist
### Before the Code Freeze
- [x] ***(QA)*** [Inform](https://mattermost.isc.org/isc/pl/9kp8757b3ig6xexm1eg7sc6gzo) Support and Marketing of impending release (and give estimated release dates).
- [x] ***(QA)*** Ensure there are [no permanent test failures](https://gitlab.isc.org/isc-projects/bind9/-/issues/3078#note_258248) on any platform.
- [x] ***(QA)*** [Check Perflab](https://gitlab.isc.org/isc-projects/bind9/-/issues/3078#note_258243) to ensure there has been no unexplained drop in performance for the versions being released.
- [x] ***(QA)*** Check whether all issues assigned to the release milestone are resolved[^1].
- [x] ***(QA)*** Ensure that there are no outstanding merge requests in the private repository[^1] (Subscription Edition only).
- [x] ***(QA)*** Ensure all merge requests marked for backporting have been indeed backported.
- [x] ***(QA)*** Announce ([on Mattermost](https://mattermost.isc.org/isc/pl/gtmfaxb9ij8ijgzyar47mejkzr)) that the code freeze is in effect.
### Before the Tagging Deadline
- [x] ***(QA)*** Look for outstanding documentation issues (e.g. `CHANGES` mistakes) and address them if any are found.
- [x] ***(QA)*** Ensure release notes are correct, ask Support and Marketing to check them as well.
- [x] ***(QA)*** Update API files for libraries with new version information.
- [x] ***(QA)*** Change software version and library versions in `configure.ac` (new major release only).
- [x] ***(QA)*** Rebuild `configure` using Autoconf on `docs.isc.org`.
- [x] ***(QA)*** Update `CHANGES`.
- [x] ***(QA)*** Update `CHANGES.SE` (Subscription Edition only).
- [x] ***(QA)*** Update `README.md`.
- [x] ***(QA)*** Update `version`.
- [x] ***(QA)*** Build documentation on `docs.isc.org`.
- [x] ***(QA)*** Check that the formatting is correct for text, PDF, and HTML versions of release notes.
- [x] ***(QA)*** Check that the formatting of the generated man pages is correct.
- [x] ***(QA)*** Tag the releases in the private repository (`git tag -s -m "BIND 9.x.y" v9_x_y`).
### Before the ASN Deadline (for ASN Releases) or the Public Release Date (for Regular Releases)
- [x] ***(QA)*** Verify GitLab CI results for the tags created and prepare a QA report for the releases to be published.
- [x] ***(QA)*** Announce (on Mattermost) that the code freeze is over.
- [x] ***(QA)*** Request signatures for the tarballs, providing their location and checksums.
- [x] ***(Signers)*** Validate tarball checksums, sign tarballs, and upload signatures.
- [x] ***(QA)*** Verify tarball signatures and check tarball checksums again.
- [x] ***(Support)*** Pre-publish ASN and/or Subscription Edition tarballs so that packages can be built.
- [x] ***(QA)*** Build and test ASN and/or Subscription Edition packages.
- [x] ***(QA)*** Notify Support that the releases have been prepared.
- [x] ***(Support)*** Send out ASNs (if applicable).
### On the Day of Public Release
- [x] ***(Support)*** Wait for clearance from Security Officer to proceed with the public release (if applicable).
- [x] ***(Support)*** Place tarballs in public location on FTP site.
- [x] ***(Support)*** Publish links to downloads on ISC website.
- [x] ***(Support)*** Write release email to *bind-announce*.
- [x] ***(Support)*** Write email to *bind-users* (if a major release).
- [x] ***(Support)*** Send eligible customers updated links to the Subscription Edition (update the -S edition delivery tickets, even if those links were provided earlier via an ASN ticket).
- [x] ***(Support)*** Update tickets in case of waiting support customers.
- [x] ***(QA)*** Build and test any outstanding private packages.
- [x] ***(QA)*** Build public RPMs.
- [x] ***(SwEng) *** Build Debian/Ubuntu packages.
- [x] ***(SwEng) *** Update Docker images.
- [x] ***(QA)*** Inform Marketing of the release.
- [x] ***(QA)*** Update the internal [BIND release dates wiki page](https://wiki.isc.org/bin/view/Main/BindReleaseDates) when public announcement has been made.
- [x] ***(Marketing)*** Post short note to Twitter.
- [x] ***(Marketing)*** Update [Wikipedia entry for BIND](https://en.wikipedia.org/wiki/BIND).
- [x] ***(Marketing)*** Write blog article (if a major release).
- [x] ***(QA)*** Ensure all new tags are annotated and signed.
- [x] ***(QA)*** Push tags for the published releases to the public repository.
- [x] ***(QA)*** Merge the automatically prepared `prep 9.x.y` commit which updates `version` and documentation on the release branch into the relevant maintenance branch (`v9_x`).
- [x] ***(QA)*** For each maintained branch, update the `BIND_BASELINE_VERSION` variable for the `abi-check` job in `.gitlab-ci.yml` to the latest published BIND version tag for a given branch.
- [x] ***(QA)*** Prepare empty release notes for the next set of releases.
- [x] ***(QA)*** Sanitize confidential issues which are assigned to the current release milestone and do not describe a security vulnerability, then make them public.
- [x] ***(QA)*** Sanitize confidential issues which are assigned to older release milestones and describe security vulnerabilities, then make them public if appropriate[^2].
- [x] ***(QA)*** Update QA tools used in GitLab CI (e.g. Flake8, PyLint) by modifying the relevant `Dockerfile`.
[^1]: If not, use the time remaining until the tagging deadline to ensure all outstanding issues are either resolved or moved to a different milestone.
[^2]: As a rule of thumb, security vulnerabilities which have reproducers merged to the public repository are considered okay for full disclosure.
January 2022 (9.16.25, 9.16.25-S1, 9.17.22)
Michał Kępień
2022-01-19
https://gitlab.isc.org/isc-projects/kea/-/issues/2261
Malformed DHCP potentially related to Option 81
2022-07-21T13:24:22Z
Munroe Sollog

**Describe the bug**
We recently migrated from isc-dhcpd to kea (Kea DHCPv4 server, version 2.0.1 (stable)). We noticed that some older embedded devices were unable to complete the DORA process. These older devices had no problems DHCP'ing using isc-dhcpd-4.2.7. Using Wireshark we confirmed the server does receive the DISCOVERs, but Wireshark labels the packet as "malformed". I have attached the single packet to this report. As an aside, older versions of Wireshark do not label the packet as malformed.
There are two issues I'd like to report.
1) We added debug output for kea-dhcp4.bad-packets all the way up to 99, and kea never logged an event related to this potentially malformed packet. So we also added debug output for kea-dhcp4.packets all the way up to 50. There we get a single line confirming the buffer was received, but nothing after that.
2) Perhaps Kea should be able to actually accept whatever deformity this packet has, as it seems like isc-dhcpd had no issues accepting it.
**To Reproduce**
Steps to reproduce the behavior:
1. Run Kea (dhcpv4) with the following config
```
"loggers": [{
"name": "kea-dhcp4",
"output_options": [
{"output": "syslog:local6"}],
"severity": "INFO"},
{"name": "kea-dhcp4.bad-packets",
"output_options": [
{"output": "/var/log/kea/badpackets.log"}],
"severity": "DEBUG",
"debuglevel": 99},
{ "name": "kea-dhcp4.packets",
"output_options": [
{"output": "/var/log/kea/allpackets.log"}],
"severity": "DEBUG",
"debuglevel": 50}],
```
2. Client sends a packet formatted like the attached packet [malformed_dhcp.pcap](/uploads/ab501a4c8cc53f1ce3fca01e6d70605a/malformed_dhcp.pcap)
3. The server then logs receipt of the buffer, but then nothing else
```
2022-01-05 10:09:25.416 DEBUG [kea-dhcp4.packets/15803.140131142945216] DHCP4_BUFFER_RECEIVED received buffer from 128.180.184.251:67 to 128.180.2.9:67 over interface bond0
```
**Expected behavior**
Ideally, I expect to be able to massage Kea into serving DHCP to these devices, but if that is deemed impossible, Kea should log the receipt of the bad packet in the badpacket logger with some explanation of why it's bad.
**Environment:**
- Kea version: version 2.0.1 (stable)
- OS: Debian 10.11, 4.19.0-6-amd64 \#1 SMP Debian 4.19.67-2+deb10u2
- We used the ISC-provided packages with some custom hooks
- Hooks being used:
```
"library": "/home/kea/libfingerprint.so",
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_lease_cmds.so",
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_stat_cmds.so",
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_ha.so",
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_legal_log.so"
```
**Additional Information**
This is a repeatable issue and we're happy to collect/provide additional detail if necessary. In reading some previous bug reports I noticed similar behavior related to Option 81 flags, which is why I think this is an Option 81 issue. Similar devices that do not send Option 81 do not exhibit this behavior. Looking at the specific packet I attached: of the 4 flags (NEOS) in Option 81, our client is setting the E and S flags to 1. For those just wanting to see the Option 81 hex data:
```
0120 03 06 74 01 01 51 0a 05 00 00 43 41 42 44 35 30
0130 44 3d 07 01 00 20 4a ab d5 0d ff 00 00 00 00 00
0140 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0150 00 00 00 00 00 00
```
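Before the field-by-field breakdown, here is a hedged sketch of decoding that Option 81 payload per the RFC 4702 layout (one flags byte, two RCODE bytes, then the domain-name field); `struct fqdn_opt` and `parse_fqdn_option()` are illustrative names, not Kea code. Note that with the E flag set, the name field is supposed to carry canonical DNS wire format, while the bytes in the dump look like a plain ASCII hostname — that mismatch may be part of what stricter parsers object to, though that is speculation.

```c
#include <stdint.h>
#include <string.h>

/* DHCPv4 Client FQDN option (81), RFC 4702: flags, RCODE1, RCODE2, name.
 * Flag bits, LSB first: S=0x01, O=0x02, E=0x04, N=0x08. */
struct fqdn_opt {
	uint8_t flags;
	uint8_t rcode1;
	uint8_t rcode2;
	char name[64]; /* raw name bytes, NUL-terminated for inspection */
};

/* Parse an option 81 payload (the bytes after the code and length
 * octets). Returns 0 on success, -1 if the payload is too short or
 * the name would not fit. */
static int
parse_fqdn_option(const uint8_t *opt, size_t optlen, struct fqdn_opt *out) {
	if (optlen < 3 || optlen - 3 >= sizeof(out->name)) {
		return -1;
	}
	out->flags = opt[0];
	out->rcode1 = opt[1];
	out->rcode2 = opt[2];
	memcpy(out->name, opt + 3, optlen - 3);
	out->name[optlen - 3] = '\0';
	return 0;
}
```

Feeding it the ten payload bytes from the dump (`05 00 00 43 41 42 44 35 30 44`) yields flags 0x05 (S and E set, O and N clear) and the ASCII bytes "CABD50D", matching the reporter's reading.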
- 51 = Option 81
- 0a = 10 bytes
- 05 = NEOS
- 00 = A-RR result
- 00 = PTR-RR result
- 43 41 42 44 35 30 44 = should be the FQDN?
kea2.2.0 - a new stable branch
Razvan Becheriu
https://gitlab.isc.org/isc-projects/kea/-/issues/2262
keactrl exits with an error without definitions for netconf
2022-02-14T16:22:43Z
Jinmei Tatuya

**Describe the bug**
I'm trying Kea 2.0.1 (which is built without kea-netconf). If I try to start Kea servers using `keactrl` with the following configuration:
```
prefix=/home/keadist
kea_dhcp4_config_file=${prefix}/etc/kea/kea-dhcp4.conf
kea_dhcp6_config_file=${prefix}/etc/kea/kea-dhcp6.conf
kea_dhcp_ddns_config_file=${prefix}/etc/kea/kea-dhcp-ddns.conf
kea_ctrl_agent_config_file=${prefix}/etc/kea/kea-ctrl-agent.conf
exec_prefix=${prefix}
dhcp4_srv=${exec_prefix}/sbin/kea-dhcp4
dhcp6_srv=${exec_prefix}/sbin/kea-dhcp6
dhcp_ddns_srv=${exec_prefix}/sbin/kea-dhcp-ddns
ctrl_agent_srv=${exec_prefix}/sbin/kea-ctrl-agent
dhcp4=yes
dhcp6=no
dhcp_ddns=no
ctrl_agent=no
kea_verbose=no
```
then the servers start up but keactrl exits with a non-0 status:
```
# /home/keadist/sbin/keactrl start -c $PWD/keactrl.conf && echo OK
/home/keadist/sbin/keactrl: 467: /home/keadist/sbin/keactrl: netconf: parameter not set
INFO/keactrl: Starting /home/keadist/sbin/kea-dhcp4 -c /home/keadist/etc/kea/kea-dhcp4.conf
INFO/keactrl: Starting /home/keadist/sbin/kea-dhcp-ddns -c /home/keadist/etc/kea/kea-dhcp-ddns.conf
/home/keadist/sbin/keactrl: 488: /home/keadist/sbin/keactrl: netconf_srv: parameter not set
```
(Note that "OK" isn't emitted)
Same for `keactrl stop`.
The cause of this looks like the following check:
```
# Exit with error if commands exit with non-zero and if undefined variables are
# used.
set -eu
```
and the fact that the `keactrl.conf` file doesn't define `netconf_srv`. So the execution of `keactrl` triggers an error at line 488:
```
run_conditional "netconf" "start_server ${netconf_srv} -c ${kea_netconf_config_file} \
${args}" 1
```
(I initially thought the same thing could happen on line 467, but probably because of the use of a pipe it somehow avoids this failure mode).
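The interaction of `set -u` with the undefined variable is easy to reproduce in isolation. A hedged sketch in plain POSIX sh (not the actual keactrl code) showing both the failure and the common `${var-}` default-expansion workaround:

```shell
#!/bin/sh
# Under `set -u`, expanding an unset variable aborts the script with a
# "parameter not set" error -- the failure the undefined netconf_srv triggers.
if sh -c 'set -u; unset netconf_srv; : "${netconf_srv}"' 2>/dev/null; then
    echo "unexpected: expansion of unset variable succeeded"
else
    echo "set -u aborted on unset netconf_srv"
fi

# Workaround: the ${var-default} form is defined even when var is unset,
# so a guard like this survives `set -u`.
sh -c '
set -u
unset netconf_srv
if [ -n "${netconf_srv-}" ]; then
    echo "would start ${netconf_srv}"
else
    echo "netconf support not configured; skipping"
fi
'
```

Guarding the netconf invocations with `${netconf_srv-}`-style expansions, or pre-initializing the netconf variables with empty defaults inside keactrl itself, would let the script keep `set -u` while tolerating configurations that omit netconf.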
**To Reproduce**
See above.
**Expected behavior**
I would expect `keactrl` to exit normally (with an exit status of 0) when Kea is built without netconf support. One might argue that the configuration should have netconf-related definitions, but that doesn't make sense to me if Kea isn't built with that support (so `netconf_srv` would have to be set to some placeholder value, which would look awkward). Besides, `keactrl` already seems to try to avoid this failure mode in some places, e.g.:
```
if ${have_netconf}; then
printf "Kea Netconf configuration file: %s\n" "${kea_netconf_config_file}"
fi
```
Addressing essentially the same glitch in some places but not others is inconsistent.
**Environment:**
- Kea version: 2.0.1
- OS: Ubuntu 18.04.4 x64
- Build option: `--disable-static --enable-generate-messages --with-gtest-source`. I don't think it matters for this issue, but I'd note that libyang etc wasn't detected and `kea-netconf` wasn't built.
- No hook is used (again, though, I don't think it matters)
kea2.1.3
Razvan Becheriu
https://gitlab.isc.org/isc-projects/kea/-/issues/2263
Kea database cluster issue
2023-04-14T14:17:49Z
kpramodk46

When we use a PostgreSQL database cluster (sync and async) for the Kea DHCP server, it shows an error message and also removes the IP address when the lease has expired; when we use a single-node database (without a cluster) it works perfectly fine. My database service and DHCP service are up.
What is the issue? Please help.
![Kea_error_issue](/uploads/f7db45c5b9c1b8c3b481d83d711206dc/Kea_error_issue.JPG)
```
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: INFO DHCP4_LEASE_ADVERT [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5:46:32:48>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: ERROR ALLOC_ENGINE_V4_ALLOC_ERROR [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: INFO DHCP4_LEASE_ADVERT [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5:46:32:48>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: ERROR ALLOC_ENGINE_V4_ALLOC_ERROR [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: INFO DHCP4_LEASE_ADVERT [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5:46:32:48>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: ERROR ALLOC_ENGINE_V4_ALLOC_ERROR [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: INFO DHCP4_LEASE_ADVERT [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5:46:32:48>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: ERROR ALLOC_ENGINE_V4_ALLOC_ERROR [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: INFO DHCP4_LEASE_ADVERT [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5:46:32:48>
Jan 06 17:22:04 stag-kea-dhcp.nic.in kea-dhcp4[3999]: ERROR ALLOC_ENGINE_V4_ALLOC_ERROR [hwtype=1 c0:3f:d5:46:32:48], cid=[01:c0:3f:d5>
```
backloghttps://gitlab.isc.org/isc-projects/bind9/-/issues/3079Assertion failure on TCP read (raw TCP - rndc and stats)2022-01-21T09:57:43ZOndřej SurýAssertion failure on TCP read (raw TCP - rndc and stats)
The `isc__nm_tcp_resumeread()` could directly process the network manager `netievent`, causing the read callback to call the `isc__nm_alloc_cb()` function before the previous read callback had called `isc__nm_free_cb()`. This triggered an assertion failure, because the worker receive buffer would still be marked as "in use".
```
(gdb) bt
#0 0x00007f5481d0f93f in raise () from /lib64/libc.so.6
#1 0x00007f5481cf9c95 in abort () from /lib64/libc.so.6
#2 0x000000000041427a in assertion_failed (file=<optimized out>, line=<optimized out>, type=isc_assertiontype_insist, cond=0x7f548572b628 "!worker->recvbuf_inuse || sock->type == isc_nm_udpsocket") at main.c:236
#3 0x00007f54856fa16a in isc_assertion_failed (file=file@entry=0x7f548572b314 "netmgr/netmgr.c", line=line@entry=2225, type=type@entry=isc_assertiontype_insist,
cond=cond@entry=0x7f548572b628 "!worker->recvbuf_inuse || sock->type == isc_nm_udpsocket") at assertions.c:47
#4 0x00007f54856e4019 in isc__nm_alloc_cb (handle=<optimized out>, size=65536, buf=0x7f547866f170) at netmgr/netmgr.c:2225
#5 0x00007f5482e8beab in uv.read () from /lib64/libuv.so.1
#6 0x00007f5482e8cbb8 in uv.stream_io () from /lib64/libuv.so.1
#7 0x00007f5482e92888 in uv.io_poll () from /lib64/libuv.so.1
#8 0x00007f5482e82035 in uv_run () from /lib64/libuv.so.1
#9 0x00007f54856ea6af in nm_thread (worker0=0x7f54802b8d90) at netmgr/netmgr.c:689
#10 0x00007f5485720995 in isc__trampoline_run (arg=0x7f5480274760) at trampoline.c:185
#11 0x00007f54820a42de in start_thread () from /lib64/libpthread.so.0
#12 0x00007f5481dd4a63 in clone () from /lib64/libc.so.6
```
Version: d27d20e6d4bafbb229046db5b4ae09ac293ff08a and laterJanuary 2022 (9.18.0)Ondřej SurýOndřej Surýhttps://gitlab.isc.org/isc-projects/bind9/-/issues/3080`rndc` sometimes crashes when it's interrupted by a signal2022-11-02T17:11:04ZPetr Špačekpspacek@isc.org`rndc` sometimes crashes when it's interrupted by a signal
### Summary
`rndc` sometimes crashes when it's interrupted by a signal
### BIND version used
* ~"Affects v9.17" : ca1a664005f849eb5720192832e7b283b480198b
* ~"Affects v9.16" : ebec9c701a9f7676becbf6304ff06edd63a9af78
### Steps to reproduce
* configure control channel: `rndc-confgen -a`
* run `named` with an arbitrary configuration, `-c /dev/null`
* run `rndc` in a loop and signal it to terminate after a short time, so that it is likely to be interrupted in the middle of an operation:
```
while true; do timeout 0.01 rndc flush; done
```
* play with the 0.01 timeout to find a value that fits your machine
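To home in on a working value, the reproduction loop can be wrapped in a small sweep over candidate timeouts (illustrative only; `RNDC_CMD` is an assumed substitution point so the sweep can be dry-run, not an rndc feature):

```shell
# Sweep candidate timeouts; each one runs the reproduction loop a fixed
# number of times instead of forever. RNDC_CMD defaults to the command
# from the reproduction above but can be overridden for a dry run.
RNDC_CMD="${RNDC_CMD:-rndc flush}"

sweep_timeouts() {
    for t in 0.005 0.01 0.02 0.05; do
        printf "testing timeout %ss\n" "$t"
        n=0
        while [ "$n" -lt 20 ]; do
            timeout "$t" ${RNDC_CMD}
            n=$((n + 1))
        done
    done
}
```

Run `sweep_timeouts` while `named` is up and watch for the assertion failure under each setting.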
### What is the current *bug* behavior?
`rndc` crashes:
~"Affects v9.17" :
```
rndc: connect failed: 127.0.0.1#953: shutting down
task.c:721: REQUIRE(((task) != ((void *)0) && ((const isc__magic_t *)(task))->magic == ((('T') << 24 | ('A') << 16 | ('S') << 8 | ('K'))))) failed, back trace
```
~"Affects v9.16" :
```
task.c:293: INSIST(__v > 0 && __v < (4294967295U)) failed, back trace
```
### What is the expected *correct* behavior?
Well, it should not crash :-)
### Relevant configuration files
None.
### Relevant logs and/or screenshots
gdb back trace ~"Affects v9.17" :
<details>
```
(gdb) bt
#0 0x00007fac6c407d22 in raise () from /usr/lib/libc.so.6
#1 0x00007fac6c3f1862 in abort () from /usr/lib/libc.so.6
#2 0x00007fac6d367740 in isc_assertion_failed (file=0x7fac6d3c36ed "task.c", line=721,
type=isc_assertiontype_require,
cond=0x7fac6d3c3ca0 "((task) != ((void *)0) && ((const isc__magic_t *)(task))->magic == ((('T') << 24 | ('A') << 16 | ('S') << 8 | ('K'))))") at assertions.c:48
#3 0x00007fac6d3956ec in isc_task_shutdown (task=0x0) at task.c:721
#4 0x00005557fb200e81 in rndc_recvdone (handle=0x7fac6881f000, result=ISC_R_NOTFOUND,
arg=0x5557fb208ba0 <rndc_ccmsg>) at rndc.c:394
#5 0x00007fac6d04732c in recv_data (handle=0x7fac6881f000, eresult=ISC_R_SUCCESS, region=0x7fac69579f80,
arg=0x5557fb208ba0 <rndc_ccmsg>) at ccmsg.c:110
#6 0x00007fac6d34f5db in isc__nm_async_readcb (worker=0x0, ev0=0x7fac69579ff0) at netmgr/netmgr.c:2807
#7 0x00007fac6d34f3db in isc__nm_readcb (sock=0x7fac68800000, uvreq=0x7fac6881a000, eresult=ISC_R_SUCCESS)
at netmgr/netmgr.c:2780
#8 0x00007fac6d35438c in isc__nm_tcp_read_cb (stream=0x7fac688005b0, nread=261, buf=0x7fac6957a100)
at netmgr/tcp.c:884
#9 0x00007fac6cdb23a1 in ?? () from /usr/lib/libuv.so.1
#10 0x00007fac6cdb2cf8 in ?? () from /usr/lib/libuv.so.1
#11 0x00007fac6cdbb266 in ?? () from /usr/lib/libuv.so.1
#12 0x00007fac6cda7897 in uv_run () from /usr/lib/libuv.so.1
#13 0x00007fac6d34656b in nm_thread (worker0=0x7fac69cc3000) at netmgr/netmgr.c:689
#14 0x00007fac6d39f8b0 in isc__trampoline_run (arg=0x7fac69c77c80) at trampoline.c:185
#15 0x00007fac6c5a0259 in start_thread () from /usr/lib/libpthread.so.0
#16 0x00007fac6c4c95e3 in clone () from /usr/lib/libc.so.6
```
</details>
gdb back trace ~"Affects v9.16" :
<details>
```
(gdb) bt
#0 0x00007f4cf8861d22 in raise () from /usr/lib/libc.so.6
#1 0x00007f4cf884b862 in abort () from /usr/lib/libc.so.6
#2 0x0000563710a317dc in isc_assertion_failed (file=0x563710a99c8d "task.c", line=293,
type=isc_assertiontype_insist, cond=0x563710a9a058 "__v > 0 && __v < (4294967295U)") at assertions.c:47
#3 0x0000563710a69f32 in isc_task_attach (source=0x563711deca30, targetp=0x7f4cf5cc4470) at task.c:293
#4 0x0000563710a7e542 in isc_socket_connect (sock=0x7f4cf0000b60, addr=0x563710ab71a0 <serveraddrs>,
task=0x563711deca30, action=0x563710a16e33 <rndc_connected>, arg=0x0) at socket.c:4780
#5 0x0000563710a176b1 in rndc_startconnect (addr=0x563710ab71a0 <serveraddrs>, task=0x563711deca30)
at ./rndc.c:573
#6 0x0000563710a17770 in rndc_start (task=0x563711deca30, event=0x0) at ./rndc.c:583
#7 0x0000563710a6c10d in task_run (task=0x563711deca30) at task.c:857
#8 0x0000563710a6c386 in isc_task_run (task=0x563711deca30) at task.c:950
#9 0x0000563710a48555 in isc__nm_async_task (worker=0x563711dee210, ev0=0x563711e4c7a0) at netmgr.c:859
#10 0x0000563710a4880d in process_netievent (worker=0x563711dee210, ievent=0x563711e4c7a0) at netmgr.c:938
#11 0x0000563710a48f79 in process_queue (worker=0x563711dee210, type=NETIEVENT_TASK) at netmgr.c:1007
#12 0x0000563710a483a1 in process_all_queues (worker=0x563711dee210) at netmgr.c:778
#13 0x0000563710a48425 in async_cb (handle=0x563711dee570) at netmgr.c:807
#14 0x00007f4cf8a28fcd in ?? () from /usr/lib/libuv.so.1
#15 0x00007f4cf8a3d266 in ?? () from /usr/lib/libuv.so.1
#16 0x00007f4cf8a29897 in uv_run () from /usr/lib/libuv.so.1
#17 0x0000563710a47f61 in nm_thread (worker0=0x563711dee210) at netmgr.c:713
#18 0x0000563710a6ef3b in isc__trampoline_run (arg=0x563711defa20) at trampoline.c:196
#19 0x00007f4cf89fa259 in start_thread () from /usr/lib/libpthread.so.0
#20 0x00007f4cf89235e3 in clone () from /usr/lib/libc.so.6
```
</details>January 2022 (9.18.0)https://gitlab.isc.org/isc-projects/bind9/-/issues/3081When is BIND ready?2024-03-27T13:27:23ZGreg ChoulesWhen is BIND ready?
Related to Support ticket [19717](https://support.isc.org/Ticket/Display.html?id=19717)
The purpose of this issue is to make BIND more verbose and precise about reporting various stages of readiness when starting up, leading to a definitive "I'm ready now" log message.
The question an operator will want an answer to is, when can I send queries to this server again?
Different features will all have their own completeness check. For example: RPZ, local zones, remote zones, mirror zones, CATZ. The request is for new log messages to allow operators to track progress of each of these features and a new (or redefined) final log message when all tasks are complete.
What is a task? When is it complete and when is BIND ready to do that thing?
We and the customer have, in parallel, come up with similar thinking on what needs to be done. The principle is: at startup time, create a one-time todo list from the zone configuration statements. As each list item is completed, generate a signal and remove it from the list. When all items are complete, generate a final completion signal and set the state of an indicator that can be queried via RNDC, so that users can test the current complete/not-complete state periodically.
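The todo-list principle above could be sketched in shell pseudo-logic (purely illustrative; the real implementation would live inside `named`, and the task names and log messages here are invented):

```shell
# Illustrative todo-list model: tasks are created at startup from the
# zone configuration, each emits a signal (here, a log line) when it
# completes, and a final "ready" message fires once the list is empty.
todo="primary/example.com secondary/example.net catalog/cat.example"

complete_task() {
    task="$1"
    remaining=""
    for t in ${todo}; do
        [ "$t" = "$task" ] || remaining="${remaining} ${t}"
    done
    todo="${remaining# }"
    printf "startup task complete: %s\n" "$task"
    if [ -z "$todo" ]; then
        # This is the state an "rndc ready"-style query would report.
        printf "all startup tasks complete: server is ready\n"
    fi
}

complete_task "primary/example.com"
complete_task "secondary/example.net"
complete_task "catalog/cat.example"
```

A task that gives up (e.g. a secondary zone that exhausts its transfer retries) would be removed from the list the same way, but with a "not ready" signal instead of a completion one.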
Taking some different types of zones as examples, we would expect behaviour like this:
Primary zones:
- Read zone data from local storage. Once this has been read into memory the zone is 'ready', a signal is generated and no further readiness checks need to be made: this task is complete.
Secondary zones:
- If a zone has been configured with a file, read zone data from local storage. Once this has been read into memory the zone is 'ready', a signal is generated and no further readiness checks need to be made: this task is complete. NOTE: checking whether the zone is up to date (SOA queries and possible subsequent zone transfer) is specifically excluded from this task.
- If a zone has **not** been configured with a file, make SOA queries and attempt zone transfers as necessary in order to load the zone. If zone transfer succeeds and zone data is loaded into memory the zone is 'ready', a signal is generated and no further readiness checks need to be made: this task is complete. If zone transfer fails there needs to be a limit - number of tries without success - to how long this task remains on the todo list. In this case generate a 'not ready' signal and remove the task from the list.
Catalog zones:
- These can be treated similarly to Primary or Secondary zones for the catalog itself. Once the catalog is loaded generate a ready signal and remove it from the todo list.
- However, during processing of each catalog a further list of (member) zones will be generated, each of which need to be added to the todo list and treated as a Secondary zone with no previous local data storage - i.e. needing to be transferred from a primary server.
Response Policy Zones:
- These can be treated similarly to Primary or Secondary zones for the zone data itself, but with the (possible?) additional step of needing to build the policy once it has been loaded. An RPZ should be considered ready only when the policy is active and responses would be re-written.
Mirror zones:
- These are similar to secondary zones.
Anything else?
https://gitlab.isc.org/isc-projects/bind9/-/issues/3082pass ECS client info to DLZ modules2022-02-25T13:41:30ZEvan Huntpass ECS client info to DLZ modules
This works in 9.16-S and there's been a [customer request](https://support.isc.org/Ticket/Display.html?id=19496) to support it in the main branch as well.February 2022 (9.16.26, 9.16.26-S1)