ISC Open Source Projects issues: https://gitlab.isc.org/groups/isc-projects/-/issues (feed updated 2021-04-12T15:12:44Z)

[kea#1733](https://gitlab.isc.org/isc-projects/kea/-/issues/1733): HA+MT Packet Parking Improvements (Tomek Mrugalski, 2021-04-12)

This is the third ticket in the HA+MT effort. It covers the implementation of the packet parking improvements described in the [HA+MT design](https://gitlab.isc.org/isc-projects/kea/-/wikis/designs/HA-MT-Design-for-Multi-threaded-Http-HA-traffic):
Initially this ticket called for allowing packets to self-park. Subsequent analysis has provided a cleaner solution, "Core Proactive Parking".
The scope of this ticket includes changes to the src/lib/hooks parking mechanics and to the kea-dhcpX core code. The resultant code should work seamlessly with existing configurations and hooks (after recompilation). (Milestone: Kea 1.9.7; assignee: Thomas Markwalder)

[bind9#2202](https://gitlab.isc.org/isc-projects/bind9/-/issues/2202): Intermittent kasp system test failure: Job Failed #1208181 (Matthijs Mekking, 2021-04-12)

Job [#1208181](https://gitlab.isc.org/isc-projects/bind9/-/jobs/1208181) failed for 9ca8a789e8d538ff685370c1ed8b0dd7b5008935.

[bind9#2602](https://gitlab.isc.org/isc-projects/bind9/-/issues/2602): dnssec-policy publishes dynamic zone without NSEC3 despite policy (Marc Dequènes (Duck), 2021-04-12)

### Summary
History: I had a problem with dynamic zones (migrated from dnssec-keymgr) on a server whose RRs stopped validating properly, but the logs did not go back far enough to find the origin of the problem. One zone could not be fixed with dnssec-signzone, so I decided to recreate it from scratch.
All seemed to go well and the checkds step succeeded; RRSIGs are published, but NSEC3 records are not, and the zone is not secure.
### BIND version used
```
BIND 9.16.11-Debian (Stable Release) <id:9ff601b>
running on Linux x86_64 4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24)
built by make with '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=/usr/include' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=/usr/lib/x86_64-linux-gnu' '--runstatedir=/run' '--disable-maintainer-mode' '--disable-dependency-tracking' '--libdir=/usr/lib/x86_64-linux-gnu' '--sysconfdir=/etc/bind' '--with-python=python3' '--localstatedir=/' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-gost=no' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-libidn2' '--with-json-c' '--with-lmdb=/usr' '--with-gnu-ld' '--with-maxminddb' '--with-atf=no' '--enable-ipv6' '--enable-rrl' '--enable-filter-aaaa' '--disable-native-pkcs11' '--enable-dnstap' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fdebug-prefix-map=/build/bind9-DpdRXh/bind9-9.16.11=. -fstack-protector-strong -Wformat -Werror=format-security -fno-strict-aliasing -fno-delete-null-pointer-checks -DNO_VERSION_DATE -DDIG_SIGCHASE' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
compiled by GCC 8.3.0
compiled with OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
linked to OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
compiled with libuv version: 1.24.1
linked to libuv version: 1.24.1
compiled with libxml2 version: 2.9.4
linked to libxml2 version: 20904
compiled with json-c version: 0.12.1
linked to json-c version: 0.12.1
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
linked to maxminddb version: 1.3.2
compiled with protobuf-c version: 1.3.1
linked to protobuf-c version: 1.3.1
threads support is enabled
default paths:
named configuration: /etc/bind/named.conf
rndc configuration: /etc/bind/rndc.conf
DNSSEC root key: /etc/bind/bind.keys
nsupdate session key: //run/named/session.key
named PID file: //run/named/named.pid
named lock file: //run/named/named.lock
geoip-directory: /usr/share/GeoIP
```
### Steps to reproduce
* After stopping BIND I removed everything related to the old zone: journals, keys, and so on
* I recreated a basic zone with just a handful of RRs I need (SOA, 3 NS, 3 TLSA entries), nothing related to DNSSEC
* I defined the zone with the dnssec-policy I use for all my zones, which has worked fine for non-dynamic zones so far
* started bind
* waited for CDS to be published
* rndc dnssec -checkds -key 32826 published _kage.duckcorp.org
* waited for the status to all switch to omnipresent
* used dig and also delv to check the zone
### What is the current *bug* behavior?
```
delv +rtrace +dnssec @ns1.duckcorp.org SOA _kage.duckcorp.org
;; fetch: ns1.duckcorp.org/A
;; fetch: ns1.duckcorp.org/AAAA
;; fetch: ns1.duckcorp.org.hq.duckcorp.org/A
;; fetch: ns1.duckcorp.org.hq.duckcorp.org/AAAA
;; fetch: ns1.duckcorp.org.duckcorp.org/A
;; fetch: ns1.duckcorp.org.duckcorp.org/AAAA
;; fetch: _kage.duckcorp.org/SOA
;; fetch: org/DS
;; fetch: ./DNSKEY
;; fetch: duckcorp.org/DS
;; fetch: org/DNSKEY
;; fetch: _kage.duckcorp.org/DS
;; fetch: duckcorp.org/DNSKEY
;; insecurity proof failed resolving '_kage.duckcorp.org/SOA/IN': 2001:67c:1740:9016::c111:c0d3#53
;; validating _kage.duckcorp.org/SOA: got insecure response; parent indicates it should be secure
;; insecurity proof failed resolving '_kage.duckcorp.org/SOA/IN': 193.200.43.105#53
;; resolution failed: insecurity proof failed
```
Confirmed by dnsviz: https://dnsviz.net/d/_kage.duckcorp.org/dnssec/
Even after `rndc sync -clean _kage.duckcorp.org` there are no NSEC3 (or even NSEC) RRs, and no NSEC3PARAM in the zone file.
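One quick way to see whether NSEC3 signing ever took effect is to grep the synced zone file for NSEC3PARAM. The sketch below uses a hypothetical file under `/tmp` with made-up zone data; on the affected server the real file would be the one written by `rndc sync -clean`.

```shell
# Hypothetical zone file standing in for the real synced master file.
cat > /tmp/example.zone <<'EOF'
example.org. 3600 IN SOA ns1.example.org. root.example.org. 1 7200 3600 1209600 3600
example.org. 3600 IN NSEC3PARAM 1 0 5 1A2B3C4D5E6F7A8B
EOF

# A zone signed with an NSEC3 policy should carry an NSEC3PARAM record;
# an NSEC-only or unsigned zone will not.
if grep -q 'NSEC3PARAM' /tmp/example.zone; then
    echo "zone carries NSEC3PARAM (NSEC3 signing in effect)"
else
    echo "no NSEC3PARAM: zone is NSEC-only or not signed"
fi
```

On the zone reported here, the grep against the real file comes up empty, matching the dnsviz result.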
### What is the expected *correct* behavior?
I expected the NSEC3PARAM RR to be added in the zone according to policy and then NSEC3 RRs to be generated.
### Relevant configuration files
The zone:
```
zone "_kage.duckcorp.org" IN {
type master;
allow-transfer { key duckcorp-internal; };
update-policy { <many-grants> };
file "/var/cache/bind/masters/_kage.duckcorp.org.zone";
dnssec-policy "generated";
};
```
And the policy:
```
dnssec-policy "generated" {
keys {
ksk key-directory lifetime P1Y algorithm rsasha512 4096;
zsk key-directory lifetime 30d algorithm rsasha512 2048;
};
max-zone-ttl PT1H;
nsec3param iterations 5 optout no salt-length 8;
};
```
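For reference, a zone signed under this policy would be expected to publish an NSEC3PARAM record of roughly the following shape (hash algorithm 1, flags 0 because opt-out is disabled, 5 iterations; the 8-byte salt value is illustrative, not taken from the server):

```
_kage.duckcorp.org. 0 IN NSEC3PARAM 1 0 5 1A2B3C4D5E6F7A8B
```

No such record appears in the zone or zone file, which is the core of the bug report.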
### Relevant logs and/or screenshots
```
# rndc dnssec -status _kage.duckcorp.org
dnssec-policy: generated
current time: Sun Mar 28 17:24:17 2021
key: 32826 (RSASHA512), KSK
published: yes - since Sat Mar 27 08:56:19 2021
key signing: yes - since Sat Mar 27 08:56:19 2021
Next rollover scheduled on Sun Mar 27 07:51:19 2022
- goal: omnipresent
- dnskey: omnipresent
- ds: omnipresent
- key rrsig: omnipresent
key: 63491 (RSASHA512), ZSK
published: yes - since Sat Mar 27 08:56:19 2021
zone signing: yes - since Sat Mar 27 08:56:19 2021
Next rollover scheduled on Mon Apr 26 07:51:19 2021
- goal: omnipresent
- dnskey: omnipresent
- zone rrsig: omnipresent
```
But the zone is not considered secure for some reason:
```
# rndc zonestatus _kage.duckcorp.org
name: _kage.duckcorp.org
type: master
files: /var/cache/bind/masters/_kage.duckcorp.org.zone
serial: 10056
nodes: 5
last loaded: Sat, 27 Mar 2021 10:15:37 GMT
secure: no
key maintenance: automatic
next key event: Mon, 26 Apr 2021 05:51:19 GMT
dynamic: yes
frozen: no
reconfigurable via modzone: no
```
```
27-Mar-2021 08:56:19.477 dnssec: info: zone _kage.duckcorp.org/IN: reconfiguring zone keys
27-Mar-2021 08:56:21.241 dnssec: info: keymgr: DNSKEY _kage.duckcorp.org/RSASHA512/32826 (KSK) created for policy generated
27-Mar-2021 08:56:21.473 dnssec: info: keymgr: DNSKEY _kage.duckcorp.org/RSASHA512/63491 (ZSK) created for policy generated
27-Mar-2021 08:56:21.473 dnssec: info: Fetching _kage.duckcorp.org/RSASHA512/32826 (KSK) from key repository.
27-Mar-2021 08:56:21.473 dnssec: info: DNSKEY _kage.duckcorp.org/RSASHA512/32826 (KSK) is now published
27-Mar-2021 08:56:21.473 dnssec: info: DNSKEY _kage.duckcorp.org/RSASHA512/32826 (KSK) is now active
27-Mar-2021 08:56:21.473 dnssec: info: Fetching _kage.duckcorp.org/RSASHA512/63491 (ZSK) from key repository.
27-Mar-2021 08:56:21.473 dnssec: info: DNSKEY _kage.duckcorp.org/RSASHA512/63491 (ZSK) is now published
27-Mar-2021 08:56:21.473 dnssec: info: DNSKEY _kage.duckcorp.org/RSASHA512/63491 (ZSK) is now active
27-Mar-2021 08:56:21.553 dnssec: info: zone _kage.duckcorp.org/IN: next key event: 27-Mar-2021 11:01:19.477
27-Mar-2021 11:01:19.480 dnssec: info: zone _kage.duckcorp.org/IN: reconfiguring zone keys
27-Mar-2021 11:01:19.580 dnssec: info: zone _kage.duckcorp.org/IN: next key event: 27-Mar-2021 12:01:19.480
27-Mar-2021 12:01:19.483 dnssec: info: zone _kage.duckcorp.org/IN: reconfiguring zone keys
27-Mar-2021 12:01:19.483 dnssec: info: zone _kage.duckcorp.org/IN: next key event: 28-Mar-2021 14:28:24.483
28-Mar-2021 06:12:18.696 dnssec: info: zone _kage.duckcorp.org/IN: reconfiguring zone keys
28-Mar-2021 06:12:18.756 dnssec: info: zone _kage.duckcorp.org/IN: next key event: 28-Mar-2021 14:28:24.696
28-Mar-2021 14:28:24.697 dnssec: info: zone _kage.duckcorp.org/IN: reconfiguring zone keys
28-Mar-2021 14:28:24.737 dnssec: info: zone _kage.duckcorp.org/IN: next key event: 26-Apr-2021 07:51:19.697
```
### Possible fixes
I have no idea how to fix this problem. I suppose the NSEC3PARAM RR is not created in dynamic zones for some reason, and so NSEC3 RRs are never created. Maybe inserting it manually would solve the problem, but I have not tried this yet.

[stork#508](https://gitlab.isc.org/isc-projects/stork/-/issues/508): Benchmark loading many leases into a DB (Marcin Siodelski, 2021-04-09)

The purpose of this ticket is to see how long it would take to load many leases into the Stork database, in case we decide that Stork should cache the lease information gathered from multiple Kea servers. This work is related to the https://gitlab.isc.org/isc-projects/stork/-/wikis/Leases-Tracking document. (Milestone: 0.17; assignee: Marcin Siodelski)

[stork#355](https://gitlab.isc.org/isc-projects/stork/-/issues/355): Add server option to skip DB migration on startup (Tomek Mrugalski, 2021-04-09)

By default, the server always runs migrations on startup. This is convenient, as users don't need to remember to run them manually. However, on systems where migration causes problems, there should be a way to skip it.
When migration is disabled, the server should simply check whether the schema version is as expected and, if it is not, refuse to start. Alternatively, it could print a critical warning and try to run anyway, but with an out-of-date DB there could be problems that are impossible to predict.
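The proposed check could be as small as the sketch below. All names are hypothetical; in Stork the expected version would come from the migrations bundled with the binary and the current one from the database.

```shell
# Sketch: with migrations disabled, compare schema versions and refuse
# to start on mismatch instead of migrating automatically.
check_schema() {
    # $1 = current version (from the DB), $2 = expected version (from the binary)
    if [ "$1" -ne "$2" ]; then
        echo "schema version $1, expected $2; refusing to start" >&2
        return 1
    fi
    echo "schema up to date, starting server"
}

check_schema 20 20
```

The stricter refuse-to-start behavior avoids the unpredictable failure modes of running against a stale schema.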
Background for this request: [support#16817](https://support.isc.org/Ticket/Display.html?id=16817). (Label: outstanding)

[stork#367](https://gitlab.isc.org/isc-projects/stork/-/issues/367): migration tool up to version X doesn't work, doesn't report its own version (-h) (Tomek Mrugalski, 2021-04-09)

Two problems with the migration tool:
- migration to a specific version doesn't work: `stork-db-migrate up 20` always migrates to the latest version;
- every piece of software should be able to report its own version using `-v` or `--version`. (Label: outstanding)

[kea#1570](https://gitlab.isc.org/isc-projects/kea/-/issues/1570): CB failures when using Oracle MySQL 8.0.22 (Razvan Becheriu, 2021-04-09)

Several unit tests fail when upgrading to Oracle MySQL 8.0.22:

```
[ PASSED ] 123 tests.
[ FAILED ] 6 tests, listed below:
[ FAILED ] MySqlConfigBackendDHCPv4Test.deleteSubnet4
[ FAILED ] MySqlConfigBackendDHCPv4Test.deleteSharedNetwork4
[ FAILED ] MySqlConfigBackendDHCPv4Test.getAllOptionDefs4
[ FAILED ] MySqlConfigBackendDHCPv6Test.deleteSubnet6
[ FAILED ] MySqlConfigBackendDHCPv6Test.deleteSharedNetwork6
[ FAILED ] MySqlConfigBackendDHCPv6Test.getAllOptionDefs6
```
Found during sanity checks on 1.9.1 and 1.9.2 (28 Oct 2020).
Package release date for Ubuntu 20.04:

| package version | pocket | release date |
| --- | --- | --- |
| 8.0.22-0ubuntu0.20.04.2 | security, updates (main) | 2020-10-27 |
```
[ RUN ] MySqlConfigBackendDHCPv4Test.deleteSubnet4
mysql_cb_dhcp4_unittest.cc:1738: Failure
Expected equality of these values:
1
deleted_count
Which is: 2
Google Test trace:
mysql_cb_dhcp4_unittest.cc:1733: one server
mysql_cb_dhcp4_unittest.cc:1759: Failure
Expected equality of these values:
1
deleted_count
Which is: 2
Google Test trace:
mysql_cb_dhcp4_unittest.cc:1754: one server
[ FAILED ] MySqlConfigBackendDHCPv4Test.deleteSubnet4 (272 ms)
[ RUN ] MySqlConfigBackendDHCPv4Test.deleteSharedNetwork4
mysql_cb_dhcp4_unittest.cc:2762: Failure
Expected equality of these values:
1
deleted_count
Which is: 2
Google Test trace:
mysql_cb_dhcp4_unittest.cc:2756: one server
[ FAILED ] MySqlConfigBackendDHCPv4Test.deleteSharedNetwork4 (200 ms)
[ RUN ] MySqlConfigBackendDHCPv4Test.getAllOptionDefs4
mysql_cb_dhcp4_unittest.cc:505: Failure
Expected equality of these values:
audit_entries_size_save + new_entries_num
Which is: 3
audit_entries_[tag].size()
Which is: 2
dhcp4_option_def, 29, 0, 2020-Nov-25 11:33:39, 1174, option definition set
dhcp4_option_def, 29, 1, 2020-Nov-25 11:33:39, 1175, option definition set
Google Test trace:
mysql_cb_dhcp4_unittest.cc:3221: CREATE audit entry for the option defnition fish
mysql_cb_dhcp4_unittest.cc:505: Failure
Expected equality of these values:
audit_entries_size_save + new_entries_num
Which is: 3
audit_entries_[tag].size()
Which is: 2
dhcp4_option_def, 29, 0, 2020-Nov-25 11:33:39, 1174, option definition set
dhcp4_option_def, 29, 1, 2020-Nov-25 11:33:39, 1175, option definition set
Google Test trace:
mysql_cb_dhcp4_unittest.cc:3221: CREATE audit entry for the option defnition whale
mysql_cb_dhcp4_unittest.cc:3230: Failure
Expected equality of these values:
test_option_defs_.size() - updates_num
Which is: 3
option_defs.size()
Which is: 1
[ FAILED ] MySqlConfigBackendDHCPv4Test.getAllOptionDefs4 (138 ms)
[ RUN ] MySqlConfigBackendDHCPv6Test.deleteSubnet6
mysql_cb_dhcp6_unittest.cc:1748: Failure
Expected equality of these values:
1
deleted_count
Which is: 2
Google Test trace:
mysql_cb_dhcp6_unittest.cc:1743: one server
mysql_cb_dhcp6_unittest.cc:1769: Failure
Expected equality of these values:
1
deleted_count
Which is: 2
Google Test trace:
mysql_cb_dhcp6_unittest.cc:1764: one server
[ FAILED ] MySqlConfigBackendDHCPv6Test.deleteSubnet6 (283 ms)
[ RUN ] MySqlConfigBackendDHCPv6Test.deleteSharedNetwork6
mysql_cb_dhcp6_unittest.cc:2797: Failure
Expected equality of these values:
1
deleted_count
Which is: 2
Google Test trace:
mysql_cb_dhcp6_unittest.cc:2791: one server
[ FAILED ] MySqlConfigBackendDHCPv6Test.deleteSharedNetwork6 (159 ms)
[ RUN ] MySqlConfigBackendDHCPv6Test.getAllOptionDefs6
mysql_cb_dhcp6_unittest.cc:552: Failure
Expected equality of these values:
audit_entries_size_save + new_entries_num
Which is: 3
audit_entries_[tag].size()
Which is: 2
dhcp6_option_def, 29, 0, 2020-Nov-25 11:33:49, 1236, option definition set
dhcp6_option_def, 29, 1, 2020-Nov-25 11:33:49, 1237, option definition set
Google Test trace:
mysql_cb_dhcp6_unittest.cc:3258: CREATE audit entry for the option definition fish
mysql_cb_dhcp6_unittest.cc:552: Failure
Expected equality of these values:
audit_entries_size_save + new_entries_num
Which is: 3
audit_entries_[tag].size()
Which is: 2
dhcp6_option_def, 29, 0, 2020-Nov-25 11:33:49, 1236, option definition set
dhcp6_option_def, 29, 1, 2020-Nov-25 11:33:49, 1237, option definition set
Google Test trace:
mysql_cb_dhcp6_unittest.cc:3258: CREATE audit entry for the option definition whale
mysql_cb_dhcp6_unittest.cc:3268: Failure
Expected equality of these values:
test_option_defs_.size() - updates_num
Which is: 3
option_defs.size()
Which is: 1
[ FAILED ] MySqlConfigBackendDHCPv6Test.getAllOptionDefs6 (129 ms)
```

(Milestone: Kea 1.9.6)

[kea#1795](https://gitlab.isc.org/isc-projects/kea/-/issues/1795): when a shared network is being deleted, Kea responds that it deleted 2 shared networks (Michal Nowikowski, 2021-04-09)

The failing tests go roughly like this, over the config backend API:
- create 2 shared networks
- delete the first one -> ok
- delete the second one -> kea says that it deleted 2 networks
All failures come from Ubuntu 20.04; on Fedora 32 the tests pass.
Example failures from Jenkins:
- https://jenkins.aws.isc.org/job/kea-dev/job/tarball-system-tests/70/testReport/tests.dhcpv6.kea_only.config_backend/test_cb_v6_cmds_api/run_tests___ubuntu_20_04_v6___test_remote_network6_del_basic_http_/
- https://jenkins.aws.isc.org/job/kea-dev/job/tarball-system-tests/70/testReport/tests.dhcpv6.kea_only.config_backend/test_cb_v6_cmds_api/run_tests___ubuntu_20_04_v6___test_remote_network6_del_subnet_keep/
Forge tests:
- run tests-ubuntu-20.04-v6-tests.dhcpv6.kea_only.config_backend.test_cb_v6_cmds_api.test_remote_network6_del_basic[http]
- run tests-ubuntu-20.04-v6-tests.dhcpv6.kea_only.config_backend.test_cb_v6_cmds_api.test_remote_network6_del_basic[socket]
- run tests-ubuntu-20.04-v6-tests.dhcpv6.kea_only.config_backend.test_cb_v6_cmds_api.test_remote_network6_del_subnet_delete
- run tests-ubuntu-20.04-v6-tests.dhcpv6.kea_only.config_backend.test_cb_v6_cmds_api.test_remote_network6_del_subnet_keep
- run tests-ubuntu-20.04-v6-tests.dhcpv6.kea_only.config_backend.test_cb_v6_cmds_api_server_tag.test_remote_network4_del_server_tags
- run tests-ubuntu-20.04-v6-tests.dhcpv6.kea_only.config_backend.test_cb_v6_cmds_api_server_tag.test_remote_option4_del_server_tags
- run tests-ubuntu-20.04-v6-tests.dhcpv6.kea_only.config_backend.test_cb_v6_cmds_api_server_tag.test_remote_subnet4_del_server_tags
- run tests-ubuntu-20.04-v4-tests.dhcpv4.kea_only.config_backend.test_cb_v4_cmds_api.test_remote_network4_del_basic[http]
- run tests-ubuntu-20.04-v4-tests.dhcpv4.kea_only.config_backend.test_cb_v4_cmds_api.test_remote_network4_del_basic[socket]
- run tests-ubuntu-20.04-v4-tests.dhcpv4.kea_only.config_backend.test_cb_v4_cmds_api.test_remote_network4_del_subnet_delete
- run tests-ubuntu-20.04-v4-tests.dhcpv4.kea_only.config_backend.test_cb_v4_cmds_api.test_remote_network4_del_subnet_keep
- run tests-ubuntu-20.04-v4-tests.dhcpv4.kea_only.config_backend.test_cb_v4_cmds_api_server_tag.test_remote_network4_del_server_tags
- run tests-ubuntu-20.04-v4-tests.dhcpv4.kea_only.config_backend.test_cb_v4_cmds_api_server_tag.test_remote_option4_del_server_tags
- run tests-ubuntu-20.04-v4-tests.dhcpv4.kea_only.config_backend.test_cb_v4_cmds_api_server_tag.test_remote_subnet4_del_server_tags
Example repro command line:
`./forge --lxc --sid v4 -s ubuntu-20.04 test tests/dhcpv4/kea_only/config_backend/test_cb_v4_cmds_api_server_tag.py::test_remote_subnet4_del_server_tags`

[bind9#2502](https://gitlab.isc.org/isc-projects/bind9/-/issues/2502): Several TSAN issues (Mark Andrews, 2021-04-09)

Job [#1507620](https://gitlab.isc.org/isc-projects/bind9/-/jobs/1507620) failed for 3d340ecfd2f4a703608a001c6821949b534c9312:
See the tsan directory for details. (Milestone: April 2021 (9.11.30/9.11.31, 9.11.30-S1/9.11.31-S1, 9.16.14/9.16.15, 9.16.14-S1/9.16.15-S1, 9.17.12))

[bind9#2582](https://gitlab.isc.org/isc-projects/bind9/-/issues/2582): ThreadSanitizer: data race lib/dns/zone.c:10272:7 in zone_maintenance (Michal Nowak, 2021-04-09)

TSAN [error](https://gitlab.isc.org/isc-projects/bind9/-/jobs/1577490) on `v9_11` (ff463f375f882e9bf4ab228bc5d1bbcc3e7b4571):
```
S:notify:Wed Mar 17 04:43:32 UTC 2021
T:notify:1:A
A:notify:System test notify
I:notify:PORTRANGE:10000 - 10099
I:notify:checking initial status (1)
I:notify:checking startup notify rate limit (2)
I:notify:reloading with example2 using HUP and waiting up to 45 seconds
I:notify:checking notify message was logged (3)
I:notify:checking example2 loaded (4)
I:notify:checking example2 contents have been transferred after HUP reload (5)
I:notify:stopping master and restarting with example4 then waiting up to 45 seconds
I:notify:checking notify message was logged (6)
I:notify:checking example4 loaded (7)
I:notify:checking example4 contents have been transferred after restart (8)
I:notify:checking notify to alternate port with master inheritance (9)
I:notify:checking notify to multiple views using tsig (10)
I:notify:exit status: 0
I:notify:1 sanitizer report(s) found
R:notify:FAIL
E:notify:Wed Mar 17 04:44:05 UTC 2021
```
```
WARNING: ThreadSanitizer: data race
Read of size 4 at 0x000000000001 by thread T1:
#0 zone_maintenance lib/dns/zone.c:10272:7
#1 zone_timer lib/dns/zone.c:13569:2
#2 dispatch lib/isc/task.c:1157:7
#3 run lib/isc/task.c:1331:2
Previous write of size 4 at 0x000000000001 by thread T2 (mutexes: write M1):
#0 dns_zone_notifyreceive2 lib/dns/zone.c:14053:3
#1 ns_notify_start bin/named/notify.c:150:13
#2 client_request bin/named/client.c:3150:3
#3 dispatch lib/isc/task.c:1157:7
#4 run lib/isc/task.c:1331:2
Location is heap block of size 2841 at 0x000000000009 allocated by thread T3:
#0 malloc <null>
#1 internal_memalloc lib/isc/mem.c:887:8
#2 mem_get lib/isc/mem.c:792:8
#3 mem_allocateunlocked lib/isc/mem.c:1545:8
#4 isc___mem_allocate lib/isc/mem.c:1566:7
#5 isc__mem_allocate lib/isc/mem.c:3048:11
#6 isc___mem_get lib/isc/mem.c:1304:11
#7 isc__mem_get lib/isc/mem.c:3012:11
#8 dns_zone_create lib/dns/zone.c:930:9
#9 dns_zonemgr_createzone lib/dns/zone.c:16916:11
#10 configure_zone bin/named/./server.c:5637:3
#11 configure_view bin/named/./server.c:3435:3
#12 load_configuration bin/named/./server.c:8179:3
#13 run_server bin/named/./server.c
#14 dispatch lib/isc/task.c:1157:7
#15 run lib/isc/task.c:1331:2
Mutex M1 is already destroyed.
Thread T3 (running) created by main thread at:
#0 pthread_create <null>
#1 isc_thread_create lib/isc/pthreads/thread.c:60:8
#2 isc__taskmgr_create lib/isc/task.c:1468:7
#3 isc_taskmgr_create lib/isc/task.c:2109:11
#4 create_managers bin/named/./main.c:886:11
#5 setup bin/named/./main.c:1305:11
#6 main bin/named/./main.c:1556:2
Thread T2 (running) created by main thread at:
#0 pthread_create <null>
#1 isc_thread_create lib/isc/pthreads/thread.c:60:8
#2 isc__taskmgr_create lib/isc/task.c:1468:7
#3 isc_taskmgr_create lib/isc/task.c:2109:11
#4 create_managers bin/named/./main.c:886:11
#5 setup bin/named/./main.c:1305:11
#6 main bin/named/./main.c:1556:2
SUMMARY: ThreadSanitizer: data race lib/dns/zone.c:10272:7 in zone_maintenance
```
There is a similar TSAN report on `v9_11`: https://gitlab.isc.org/isc-projects/bind9/-/issues/2261. (Milestone: April 2021; assignee: Diego dos Santos Fronza)

[bind9#2612](https://gitlab.isc.org/isc-projects/bind9/-/issues/2612): CID 330954: Possible Control flow issues in lib/isc/netmgr/tlsstream.c (Michal Nowak, 2021-04-09)

Coverity Scan identified the following issue on `main`:
```
*** CID 330954: Possible Control flow issues (DEADCODE)
/lib/isc/netmgr/tlsstream.c: 423 in tls_do_bio()
417 return;
418 }
419
420 switch (tls_status) {
421 case SSL_ERROR_NONE:
422 case SSL_ERROR_ZERO_RETURN:
>>> CID 330954: Possible Control flow issues (DEADCODE)
>>> Execution cannot reach the expression "received_shutdown" inside this statement: "if (sent_shutdown && receiv...".
423 if (sent_shutdown && received_shutdown) {
424 /* clean shutdown */
425 isc_nm_cancelread(sock->outerhandle);
426 isc__nm_tls_close(sock);
427 };
428 return;
```
This likely appeared with 11ed7aac5d9d2d804fa18d98d8c68f0f1bacbb32. (Milestone: May 2021 (9.11.32, 9.11.32-S1, 9.16.16, 9.16.16-S1, 9.17.13); assignee: Artem Boldariev)

[bind9#2611](https://gitlab.isc.org/isc-projects/bind9/-/issues/2611): doth system test fails due to SSL error in BIO: 5 unexpected error (Matthijs Mekking, 2021-04-09)

For example here:
https://gitlab.isc.org/isc-projects/bind9/-/jobs/1614684
```
I:doth:checking DoH query (POST) (6)
02-Apr-2021 12:17:13.870 SSL error in BIO: 5 unexpected error
02-Apr-2021 12:17:13.870 SSL error in BIO: 5 unexpected error
02-Apr-2021 12:17:13.870 SSL error in BIO: 5 unexpected error
I:doth:failed
```

(Milestone: April 2021 (9.11.30/9.11.31, 9.11.30-S1/9.11.31-S1, 9.16.14/9.16.15, 9.16.14-S1/9.16.15-S1, 9.17.12); assignee: Artem Boldariev)

[bind9#2613](https://gitlab.isc.org/isc-projects/bind9/-/issues/2613): lib/dns/gen is not deleted on make clean (Artem Boldariev, 2021-04-09)

When doing `make clean`, the `lib/dns/gen` executable is not deleted. The issue is present when building at least on Linux and FreeBSD. Not a show-stopper, but annoying when building for multiple platforms from the same directory. (Milestone: April 2021)

[stork#500](https://gitlab.isc.org/isc-projects/stork/-/issues/500): Bring the demo wiki up to date (Andrei Pavel, 2021-04-09)

https://gitlab.isc.org/isc-projects/stork/-/wikis/Demo
For example, there seems to be no `Add New Machine` button, which is mentioned in several places; it seems that machines are now detected automatically. (Milestone: 0.17)

[kea#1777](https://gitlab.isc.org/isc-projects/kea/-/issues/1777): A suggestion for the improvement of lease database backend performance (Peter Davies, 2021-04-09)

A suggestion for the improvement of lease database backend performance:
Performance suffers in Kea configurations that employ a lease database backend: it is lower than in configurations that use memfile. This is because of the penalty incurred by the time taken to update the database; all things being equal, it is quicker to update a local file.
As I understand it, the protocol demands that lease information be written to disk before a request is ACKed.
It is my contention that under "normal" running conditions, on a well-configured server that has been active for a period of time, the majority of incoming DHCP packets will be either renewals, or reboot discoveries for which an active lease may still exist.
To improve lease database backend performance, would it be possible for Kea to write DHCP renewals to a local lease cache file, thereby not breaking the protocol?
This lease cache file could be used to update the database asynchronously, perhaps by a separate task.
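A minimal sketch of the proposed write path, with all file names and record formats hypothetical: the ACK path does only a cheap local append, and a separate drain step pushes the cached records to the database later.

```shell
CACHE=/tmp/lease-cache.txt
: > "$CACHE"   # start the demo with an empty cache file

ack_renewal() {
    # synchronous, cheap local write; safe to ACK once this returns
    echo "$1" >> "$CACHE"
}

flush_to_db() {
    # asynchronous drain; a real implementation would INSERT/UPDATE the
    # lease database here instead of echoing, then truncate the cache
    while IFS= read -r lease; do
        echo "flushed: $lease"
    done < "$CACHE"
    : > "$CACHE"
}

ack_renewal "192.0.2.10 aa:bb:cc:dd:ee:ff 3600"
flush_to_db
```

The key property is that the synchronous step touches only a local file, so the ACK latency no longer depends on database round-trip time.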
This lease cache file would also be available across reboots.

[stork#507](https://gitlab.isc.org/isc-projects/stork/-/issues/507): agent-server TLS part 6: system tests (Michal Nowikowski, 2021-04-08; milestone: 0.17)

[kea#1732](https://gitlab.isc.org/isc-projects/kea/-/issues/1732): Multithreaded mode for HttpClient (Tomek Mrugalski, 2021-04-08)

This is the second ticket in a series aiming to provide multithreaded HA support. For an overall design, see [here](https://gitlab.isc.org/isc-projects/kea/-/wikis/designs/HA-MT-Design-for-Multi-threaded-Http-HA-traffic#httpclient-multi-threaded-mode).
The goal of this ticket is to extend the HttpClient code with MT capability, including the ability to open multiple connections per URL.
At completion, the HttpClient code should be MT-capable, which will be proven by unit tests. The code will not be usable yet. (Milestone: Kea 1.9.7; assignee: Thomas Markwalder)

[bind9#2578](https://gitlab.isc.org/isc-projects/bind9/-/issues/2578): System test setup sometimes fails because of missing port assignments (Michał Kępień, 2021-04-08)

Example test job failures:
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/1505564
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/1505566
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/1505642
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/1505647
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/1506024
The problem here is that:
- the [sleep period][1] in case of failure to acquire the
`bin/tests/system/get_ports.lock` lock file is fixed, which leads to
a "thundering herd" type of problem, where (depending on how
processes are scheduled by the operating system) multiple system
tests try to acquire the lock file at the same time and subsequently
sleep for 1 second, only for the same situation to likely happen the
next time around,
- the lock file is being locked and then unlocked for every single
_port_ assignment made, not just once for the entire _range_ of
ports a system test should use; in other words, the lock file is
currently locked and unlocked 13 times per system test.
Given the above, in certain cases, with the [retry count][2] set to 10
attempts and up to 6 system tests being run [in parallel][3] in every
GitLab CI job, a given system test may simply not manage to acquire the
lock file before it reaches the retry limit. This is what happened in
all the jobs linked to above (search for `PORTS:,` in the job logs).
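One possible mitigation for the fixed-sleep retry loop is randomized, growing backoff, so that contending test runners do not all wake up and retry in lockstep. A minimal sketch of the idea follows; note that `get_ports.sh` itself is a shell script, and the lock-file name, retry count, and backoff parameters here are illustrative assumptions, not the actual implementation:

```python
# Sketch: jittered backoff for acquiring a shared lock file, instead of
# a fixed 1-second sleep between attempts (the "thundering herd" fix).
import fcntl
import random
import time

LOCK_FILE = "get_ports.lock"  # stands in for bin/tests/system/get_ports.lock
MAX_ATTEMPTS = 10             # mirrors the script's retry count

def acquire_lock(path=LOCK_FILE, attempts=MAX_ATTEMPTS):
    """Try to flock() the lock file; between failed attempts, sleep a
    random interval whose upper bound grows, so contenders desynchronize."""
    fh = open(path, "w")
    for attempt in range(attempts):
        try:
            fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return fh  # caller must LOCK_UN and close when done
        except BlockingIOError:
            # Jittered backoff: the base doubles each round, and we sleep
            # a uniform fraction of it rather than a fixed 1 second.
            base = 0.1 * (2 ** attempt)
            time.sleep(random.uniform(0, min(base, 2.0)))
    fh.close()
    raise TimeoutError(f"could not lock {path} after {attempts} attempts")
```

With jitter, colliding processes spread their retries over the backoff window instead of reattempting at the same instant, which is what makes the fixed-sleep version degenerate under `make -jX`.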
Another (arguably less severe) problem with this design is that it
results in delayed test startup when multiple system tests are started
in parallel using `make -jX check` (also due to the lock file contention
issue described above). Here are some sample timings produced with a
version of `bin/tests/system/run.sh` which only runs
`bin/tests/system/get_ports.sh` and then immediately exits (without
running any `setup.sh` or `tests.sh` script):
```sh
$ time make -C bin/tests/system/ -j6 check TESTS="acl"
...
real 0m0,248s
user 0m0,217s
sys 0m0,054s
$ time make -C bin/tests/system/ -j6 check TESTS="acl additional"
...
real 0m1,294s
user 0m0,345s
sys 0m0,061s
$ time make -C bin/tests/system/ -j6 check TESTS="acl additional addzone"
...
real 0m2,327s
user 0m0,458s
sys 0m0,126s
$ time make -C bin/tests/system/ -j6 check TESTS="acl additional addzone allow-query"
...
real 0m3,312s
user 0m0,605s
sys 0m0,164s
$ time make -C bin/tests/system/ -j6 check TESTS="acl additional addzone allow-query auth"
...
real 0m4,327s
user 0m0,754s
sys 0m0,207s
$ time make -C bin/tests/system/ -j6 check TESTS="acl additional addzone allow-query auth autosign"
...
real 0m5,352s
user 0m0,859s
sys 0m0,274s
$ time make -C bin/tests/system/ -j6 check TESTS="acl additional addzone allow-query auth autosign builtin"
...
real 0m5,343s
user 0m0,941s
sys 0m0,259s
$ time make -C bin/tests/system/ -j6 check TESTS="acl additional addzone allow-query auth autosign builtin cacheclean"
...
real 0m5,384s
user 0m1,053s
sys 0m0,276s
```
What this shows is that it takes almost 6 seconds to actually start all
of the requested 6 tests in parallel. This initial "problem" resolves
itself shortly afterwards, though, as the started tests take various
amounts of time to finish and thus the lock file contention issue
mostly disappears for tests started later on.
It would be nice to come up with a version of the `get_ports.sh` script
which does not suffer from the above issues.
[1]: https://gitlab.isc.org/isc-projects/bind9/-/blob/8795b12c49d3f2f5c9c5254cbf2532a4f230f0cc/bin/tests/system/get_ports.sh#L58
[2]: https://gitlab.isc.org/isc-projects/bind9/-/blob/8795b12c49d3f2f5c9c5254cbf2532a4f230f0cc/bin/tests/system/get_ports.sh#L27
[3]: https://gitlab.isc.org/isc-projects/bind9/-/blob/8795b12c49d3f2f5c9c5254cbf2532a4f230f0cc/.gitlab-ci.yml#L13April 2021 (9.11.30/9.11.31, 9.11.30-S1/9.11.31-S1, 9.16.14/9.16.15, 9.16.14-S1/9.16.15-S1, 9.17.12)Michał KępieńMichał Kępieńhttps://gitlab.isc.org/isc-projects/bind9/-/issues/2620Ensure proper resource cleanup for failed gss_accept_sec_context() calls2021-04-08T09:06:50ZMichał KępieńEnsure proper resource cleanup for failed gss_accept_sec_context() callsEven if a call to `gss_accept_sec_context()` fails, it might still cause
a GSS-API response token to be allocated and left for the caller to
release. We should make sure that all resources are properly cleaned up
when a call to `gss_accept_sec_context()` fails.April 2021 (9.11.30/9.11.31, 9.11.30-S1/9.11.31-S1, 9.16.14/9.16.15, 9.16.14-S1/9.16.15-S1, 9.17.12)Michał KępieńMichał Kępieńhttps://gitlab.isc.org/isc-projects/stork/-/issues/518Support for 32 bit architecture?2021-04-08T08:35:37ZValéry AugaisSupport for 32 bit architecture?Hello.
As I couldn't find the package for my Debian system (a 32-bit architecture!) in the Cloudsmith repo, I downloaded the sources from the ISC server. This approach worked fine for Kea, as I could compile all sources successfully. As for Stork, looking at the Rakefile I gather that only 64-bit is supported. Is there a chance 32-bit support will come into play at some point?
Thanks.