ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues
2021-11-12T16:44:22Z

https://gitlab.isc.org/isc-projects/kea/-/issues/2185
kea-admin can't initialize remote postgresdb (juju, 2021-11-12T16:44:22Z)

I have a fresh postgres install with no tables. Trying to initialize the new db with kea-admin, I get the message below. All the tutorials I have seen seem to imply installing postgres on the same server as the kea-server. Can I initialize a remote postgres db?
```
juju@kea-01:~$ kea-admin db-init pgsql -h 10.0.0.20 -P 5432 -u kea -p xxxxxx -n kea_lease_db
Checking if there is a database initialized already...
/usr/sbin/kea-admin: 132: psql: not found
ERROR/kea-admin: pgsql_init: table query failed, status code: 127?
```
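Exit status 127 with `psql: not found` means the shell running kea-admin could not find the PostgreSQL client binary: kea-admin is a shell script that drives `psql`, so the client tools must be installed on the host running kea-admin even when the database itself is remote. A sketch of the likely fix, assuming a Debian/Ubuntu host (the package name varies by distribution):

```bash
# Install the PostgreSQL client tools locally; kea-admin shells out to psql.
sudo apt-get install -y postgresql-client
# Confirm psql is on PATH, then retry the remote initialization.
command -v psql
kea-admin db-init pgsql -h 10.0.0.20 -P 5432 -u kea -p xxxxxx -n kea_lease_db
```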
https://gitlab.isc.org/isc-projects/stork/-/issues/616
Stork-Server didn't show correct HA State in Dashboard (Thorsten Krohn, 2024-02-26T14:50:32Z)

Hi,
I have just upgraded from Kea 1.8.2 to Kea 2.0.0 and now the HA state shown in the Dashboard is incorrect:
![image](/uploads/2233e90dee484c94b6d7c8114d343546/image.png)
But in the view of the Kea app from both servers, all is fine:
![image](/uploads/4bcc0f10c70eb9dc22eafbdd84fcc33b/image.png)
I switched the Kea server with 2.0.0 to multithreaded HA.
Server and client versions are 0.22.0.211105072749 on Debian Buster.

https://gitlab.isc.org/isc-projects/stork/-/issues/613
follow up to 0.22 sanity checks (Andrei Pavel, 2021-12-09T09:01:37Z)

* [x] A minor issue. The Stork Agent help prints "A new cli application":
```
$ ./stork-agent --help
NAME:
Stork Agent - A new cli application
```
It is no longer a new application. Shouldn't it rather be "Stork Agent - A Stork application monitoring Kea and BIND9 servers"?
* [x] We seem to be a bit inconsistent in how we define boolean values in the env files. In agent.env we have:
```
# skip TLS certificate verification when the Stork Agent connects
# to Kea over TLS and Kea uses self-signed certificates
# STORK_AGENT_SKIP_TLS_CERT_VERIFICATION=true
```
In the server.env we have:
```
# Enable Prometheus /metrics HTTP endpoint for exporting metrics from
# the server to Prometheus. It is recommended to secure this endpoint
# (e.g. using HTTP proxy).
# STORK_ENABLE_METRICS=1
```
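A small consistency sketch (illustrative only, not taken from the issue): settling on one convention, e.g. lowercase `true`/`false`, in both files:

```bash
# agent.env and server.env using a single boolean convention (illustrative).
STORK_AGENT_SKIP_TLS_CERT_VERIFICATION=true
STORK_ENABLE_METRICS=true
```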
https://gitlab.isc.org/isc-projects/bind9/-/issues/3010
ThreadSanitizer: data race in closedir (Michal Nowak, 2021-12-23T14:43:22Z)

Same issue as isc-projects/bind9#2457 [this time on Debian 11 with Clang](https://gitlab.isc.org/isc-projects/bind9/-/jobs/2094359):
```
WARNING: ThreadSanitizer: data race
Write of size 8 at 0x000000000001 by thread T1:
#0 closedir <null>
#1 isc_dir_close lib/isc/dir.c:134:8
#2 dns_dnssec_findmatchingkeys lib/dns/dnssec.c:1514:3
#3 zone_rekey lib/dns/zone.c:21567:11
#4 zone_maintenance lib/dns/zone.c:11386:4
#5 zone_timer lib/dns/zone.c:15039:2
#6 task_run lib/isc/task.c:827:5
#7 isc_task_run lib/isc/task.c:907:10
#8 isc__nm_async_task lib/isc/netmgr/netmgr.c:834:11
#9 process_netievent lib/isc/netmgr/netmgr.c
#10 process_queue lib/isc/netmgr/netmgr.c:1007:16
#11 process_all_queues lib/isc/netmgr/netmgr.c:753:25
#12 async_cb lib/isc/netmgr/netmgr.c:782:6
#13 <null> <null>
#14 isc__trampoline_run lib/isc/trampoline.c:185:11
Previous read of size 8 at 0x000000000001 by thread T2:
#0 epoll_ctl <null>
#1 <null> <null>
#2 uv_run <null>
#3 isc__trampoline_run lib/isc/trampoline.c:185:11
Location is file descriptor 105 created by thread T2 at:
#0 accept4 <null>
#1 <null> <null>
#2 isc__trampoline_run lib/isc/trampoline.c:185:11
Thread T2 (running) created by main thread at:
#0 pthread_create <null>
#1 isc_thread_create lib/isc/thread.c:79:8
#2 isc__netmgr_create lib/isc/netmgr/netmgr.c:328:3
#3 isc_managers_create lib/isc/managers.c:36:2
#4 create_managers bin/named/main.c:920:11
#5 setup bin/named/main.c:1184:11
#6 main bin/named/main.c:1452:2
SUMMARY: ThreadSanitizer: data race in closedir
```
A blocker for base image upgrade from Debian 10 to Debian 11 (isc-projects/bind9!5367).
The solution is to have a custom libuv for Debian 11, as we already have for Fedora (https://gitlab.isc.org/isc-projects/images/-/merge_requests/112/diffs?commit_id=dd3c9f1d49f98a8d3584ef73478da8e9fdc2fd84).

https://gitlab.isc.org/isc-projects/bind9/-/issues/3008
Bind9 going down with error rbtdb->next_serial (Aleksandr Nikitin, 2021-11-08T22:09:24Z)

### Summary
I have some BIND 9 instances, and some of them go down with this error:
```
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: ../../../lib/dns/rbtdb.c:1497: fatal error:
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: RUNTIME_CHECK(rbtdb->next_serial != 0) failed
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: exiting (due to fatal error in library)
Nov 04 02:30:46 vm-name systemd[1]: bind9.service: Main process exited, code=killed, status=6/ABRT
Nov 04 02:30:46 vm-name systemd[1]: bind9.service: Failed with result 'signal'.
```
### BIND version used
```
BIND 9.11.5-P4-5.1+deb10u6-Debian (Extended Support Version) <id:998753c>
running on Linux x86_64 4.19.0-18-cloud-amd64 #1 SMP Debian 4.19.208-1 (2021-09-29)
built by make with '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=/usr/include' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=/usr/lib/x86_64-linux-gnu' '--libexecdir=/usr/lib/x86_64-linux-gnu' '--disable-maintainer-mode' '--disable-dependency-tracking' '--libdir=/usr/lib/x86_64-linux-gnu' '--sysconfdir=/etc/bind' '--with-python=python3' '--localstatedir=/' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-gost=no' '--with-openssl=/usr' '--with-gssapi=/usr' '--disable-isc-spnego' '--with-libidn2' '--with-libjson=/usr' '--with-lmdb=/usr' '--with-gnu-ld' '--with-geoip=/usr' '--with-atf=no' '--enable-ipv6' '--enable-rrl' '--enable-filter-aaaa' '--enable-native-pkcs11' '--with-pkcs11=/usr/lib/softhsm/libsofthsm2.so' '--with-randomdev=/dev/urandom' '--enable-dnstap' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fdebug-prefix-map=/build/bind9-gHNcz0/bind9-9.11.5.P4+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -fno-strict-aliasing -fno-delete-null-pointer-checks -DNO_VERSION_DATE -DDIG_SIGCHASE' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
compiled by GCC 8.3.0
compiled with OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
linked to OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
compiled with libxml2 version: 2.9.4
linked to libxml2 version: 20904
compiled with libjson-c version: 0.12.1
linked to libjson-c version: 0.12.1
threads support is enabled
```
### Steps to reproduce
Run BIND 9 with the configuration files given below.
### What is the current *bug* behavior?
BIND 9 fails and does not automatically restart.
```
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: ../../../lib/dns/rbtdb.c:1497: fatal error:
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: RUNTIME_CHECK(rbtdb->next_serial != 0) failed
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: exiting (due to fatal error in library)
Nov 04 02:30:46 vm-name systemd[1]: bind9.service: Main process exited, code=killed, status=6/ABRT
Nov 04 02:30:46 vm-name systemd[1]: bind9.service: Failed with result 'signal'.
```
### What is the expected *correct* behavior?
BIND 9 does not go down.
### Relevant configuration files
My configuration is:
named.conf
```
key key.for.internal.domain {
algorithm HMAC-MD5;
secret "[masked]";
};
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
```
named.conf.options
```
acl "private" {
127.0.0.1/32; # localhost
172.16.1.0/24; # wg
};
options {
directory "/var/cache/bind";
allow-query { private; };
recursion yes;
forwarders {
127.0.0.1 port 5053;
1.1.1.1;
1.0.0.1;
8.8.8.8;
8.8.4.4;
};
dnssec-enable no;
dnssec-validation no;
listen-on { any; };
listen-on-v6 { none; };
};
```
named.conf.local
```
zone "dc1.internal.domain" {
type master;
file "/etc/bind/dc1.internal.domain.db";
allow-update { key "key.for.internal.domain"; };
};
zone "156.10.in-addr.arpa" {
type master;
file "/etc/bind/156.10.in-addr.arpa.db";
allow-update { key "key.for.internal.domain"; };
};
zone "dc2.internal.domain" {
type slave;
file "/etc/bind/slave_dc2.internal.domain.db";
masters { 10.60.0.1; };
};
zone "60.10.in-addr.arpa" {
type slave;
file "/etc/bind/slave_60.10.in-addr.arpa.db";
masters { 10.60.0.1; };
};
zone "dc3.internal.domain" {
type slave;
file "/etc/bind/slave_dc3.internal.domain.db";
masters { 10.200.0.1; };
};
zone "200.10.in-addr.arpa" {
type slave;
file "/etc/bind/slave_200.10.in-addr.arpa.db";
masters { 10.200.0.1; };
};
zone "dc4.internal.domain" {
type slave;
file "/etc/bind/slave_dc4.internal.domain.db";
masters { 10.90.0.1; };
};
zone "90.10.in-addr.arpa" {
type slave;
file "/etc/bind/slave_90.10.in-addr.arpa.db";
masters { 10.90.0.1; };
};
zone "dc5.internal.domain" {
type slave;
file "/etc/bind/slave_dc5.internal.domain.db";
masters { 10.9.96.1; };
};
zone "9.10.in-addr.arpa" {
type slave;
file "/etc/bind/slave_9.10.in-addr.arpa.db";
masters { 10.9.96.1; };
};
```
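For context, the master zones above accept TSIG-signed dynamic updates; a client-side sketch of such an update (the record and the key-file path are hypothetical):

```bash
# Send a dynamic update signed with the key declared in named.conf; the key
# file path and the added record are illustrative.
nsupdate -k /etc/bind/key.for.internal.domain.key <<'EOF'
server 127.0.0.1
zone dc1.internal.domain
update add host1.dc1.internal.domain. 300 A 10.156.1.10
send
EOF
```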
named.conf.default-zones
```
zone "." {
type hint;
file "/usr/share/dns/root.hints";
};
zone "internal.domain" {
type master;
file "/etc/bind/internal.domain.db";
};
zone "another.domain" {
type master;
file "/etc/bind/another.domain.db";
};
zone "third.domain" {
type master;
file "/etc/bind/third.domain.db";
};
zone "fourth.domain" {
type master;
file "/etc/bind/fourth.domain.db";
};
```

https://gitlab.isc.org/isc-projects/kea/-/issues/2178
ARM example configs incorrectly show heartbeat-delay and max-response-delay both set equal (Thomas Markwalder, 2021-11-11T20:13:36Z)

We have HA example configs like this:
```
:
"heartbeat-delay": 10000,
"max-response-delay": 10000,
:
```
We specifically tell people that max-response-delay should be substantially larger than heartbeat-delay. It defaults to 60000. We need to fix these configs because users are using them verbatim.
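For comparison, a corrected fragment reflecting the 60000 default just mentioned (a sketch; the surrounding HA configuration is omitted):

```bash
# Print the corrected pair: max-response-delay (60000 ms, the default) is
# substantially larger than heartbeat-delay instead of equal to it.
cat <<'EOF'
    "heartbeat-delay": 10000,
    "max-response-delay": 60000,
EOF
```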
https://gitlab.isc.org/isc-projects/kea/-/issues/2176
handle credentials with short lifetime (Francis Dupont, 2021-12-10T10:42:42Z)

Design to be done in order to decide the best behavior. Note that, when correctly configured, this should never happen.
- [x] implement the code changes (https://gitlab.isc.org/isc-private/kea-premium/-/merge_requests/236)
- [x] update the examples (!1477)
- [ ] document the changes in ARM (!1485)

https://gitlab.isc.org/isc-projects/stork/-/issues/608
readthedocs build fails (Andrei Pavel, 2021-11-09T17:59:49Z)

https://readthedocs.org/projects/kea/builds/15209538/
```
TypeError: 'generator' object is not subscriptable
```
[readthedocs official statement](https://blog.readthedocs.com/build-errors-docutils-0-18/): upgrade to 3.0 or later
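Per the linked post, the usual remedies are pinning docutils or moving to a newer Sphinx; a sketch assuming a pip-managed docs environment (version pins are illustrative):

```bash
# Stopgap: pin docutils below 0.18 so the existing Sphinx keeps working.
pip3 install 'docutils<0.18'
# Longer term: upgrade Sphinx to a release compatible with newer docutils.
pip3 install 'sphinx>=3.0'
```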
https://gitlab.isc.org/isc-projects/bind9/-/issues/3007
DNS resolution fails temporarily (K V, 2021-11-08T12:02:25Z)

We are using two named servers in our production system.
BIND 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.3 (Extended Support Version) on Linux x86_64 4.9.215-36.el7.x86_64
Recently, we started to see a trend where DNS resolution fails during a specific time period for random domain names (out of over 100 records). Each record may fail for at most 10 minutes. At all other times, it works absolutely fine.
AT THE TIME OF ISSUE:
```
dig @MY_DNS_SERVER docker.mycompany.net
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.3 <<>> @MY_DNS_SERVER docker.mycompany.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 33467
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;docker.mycompany.net. IN A
;; AUTHORITY SECTION:
mycompany.net. 58 IN SOA ns-604.awsdns-11.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
;; Query time: 0 msec
;; SERVER: MY_DNS_SERVER#53(MY_DNS_SERVER)
;; WHEN: Thu Nov 04 10:22:01 UTC 2021
;; MSG SIZE rcvd: 128
```
NORMAL TIMES:
```
dig @MY_DNS_SERVER docker.mycompany.net
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.3 <<>> @MY_DNS_SERVER docker.mycompany.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28581
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 4, ADDITIONAL: 7
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;docker.mycompany.net. IN A
;; ANSWER SECTION:
docker.mycompany.net. 54 IN A PROPER IP
docker.mycompany.net. 54 IN A PROPER IP
;; AUTHORITY SECTION:
mycompany.net. 81323 IN NS ns-16.awsdns-02.com.
mycompany.net. 81323 IN NS ns-1158.awsdns-16.org.
mycompany.net. 81323 IN NS ns-604.awsdns-11.net.
mycompany.net. 81323 IN NS ns-1731.awsdns-24.co.uk.
;; ADDITIONAL SECTION:
ns-1158.awsdns-16.org. 47096 IN A 205.251.196.134
ns-16.awsdns-02.com. 25941 IN A 205.251.192.16
ns-604.awsdns-11.net. 81323 IN A 205.251.194.92
ns-1158.awsdns-16.org. 68043 IN AAAA 2600:9000:5304:8600::1
ns-16.awsdns-02.com. 60863 IN AAAA 2600:9000:5300:1000::1
ns-1731.awsdns-24.co.uk. 38132 IN AAAA 2600:9000:5306:c300::1
;; Query time: 0 msec
;; SERVER: MY_DNS_SERVER#53(MY_DNS_SERVER)
;; WHEN: Thu Nov 04 10:29:03 UTC 2021
;; MSG SIZE rcvd: 347
```
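A diagnostic sketch (not part of the original report): during the failure window, query one of the zone's authoritative servers directly, to see whether the NXDOMAIN originates upstream or from the local cache:

```bash
# Ask the authoritative server from the output above directly; if it answers
# NOERROR while the resolver says NXDOMAIN, the negative answer is cached.
dig @ns-604.awsdns-11.net docker.mycompany.net A +norecurse
```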
The BIND cache metrics show a trend where the cache starts to increase exactly around the start of the issue (2:30 PM IST). Though DNS resolution recovers within an hour, the graph shows a continuous upward trend which decreases only beyond midnight that day.
![BIND_CACHE_METRICS](/uploads/e26bda06452d8554a835d37e73128beb/BIND_CACHE_METRICS.PNG)
This is badly affecting production users. Please share your suggestions as soon as possible.

https://gitlab.isc.org/isc-projects/stork/-/issues/607
No statistics available in GUI (Bryan Seitz, 2021-12-02T08:16:43Z)

---
name: Bug report
about: Create a report to help us improve
---
If you believe your bug report is a security issue (e.g. a packet that can kill the server), DO NOT
REPORT IT HERE. Please use https://www.isc.org/community/report-bug/ instead or send mail to
security-office(at)isc(dot)org.
**Describe the bug**
- I see no statistics for subnets in the Stork GUI.
- One subnet is giving errors trying to allocate IPs which is not full
**To Reproduce**
Steps to reproduce the behavior:
1. Installed KEA + Stork from recommended repos on Ubuntu 20.04
2. Stats hooks are loaded in the ctrl agent + dhcp4 server now; still no stats.
KEA Configs:
```
{
"Control-agent": {
"http-host": "127.0.0.1",
"http-port": 8000,
"control-sockets": {
"dhcp4": {
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
},
"dhcp6": {
"socket-type": "unix",
"socket-name": "/tmp/kea6-ctrl-socket"
}
},
"hooks-libraries": [
{
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_stat_cmds.so",
"parameters": { }
},
{
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_lease_cmds.so",
"parameters": { }
}
],
"loggers": [
{
"name": "kea-ctrl-agent",
"output_options": [
{
"output": "/var/log/kea/kea-ctrl-agent.log"
}
],
"severity": "INFO",
"debuglevel": 0
}
]
}
}
```
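One way to verify that the stat_cmds hook actually answers on the DHCPv4 server (a diagnostic sketch, not from the report; host and port follow the Control Agent config above):

```bash
# stat-lease4-get should appear in the list only if libdhcp_stat_cmds.so is
# loaded by kea-dhcp4 itself; loading it into the Control Agent does not help.
curl -s -X POST http://127.0.0.1:8000/ \
     -H 'Content-Type: application/json' \
     -d '{ "command": "list-commands", "service": [ "dhcp4" ] }'
```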
Stork Configs
```
root@dhcp1:/etc/stork# cat agent.env
# address to bind ie. for listening
STORK_AGENT_ADDRESS=192.168.0.7
STORK_AGENT_PORT=8079
# enable Stork functionality only, i.e. disable Prometheus exporters
STORK_AGENT_LISTEN_STORK_ONLY=true
# enable Prometheus exporters only, i.e. disable Stork functionality
# STORK_AGENT_LISTEN_PROMETHEUS_ONLY=true
# settings for exporting stats to Prometheus
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_ADDRESS=
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_PORT=
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_INTERVAL=
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_ADDRESS=
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_PORT=
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_INTERVAL=
# this is used when agent is automatically registered in Stork server
STORK_AGENT_SERVER_URL=http://192.168.0.7:8080
# STORK_AGENT_ADDRESS=
# skip TLS certificate verification when the Stork Agent connects
# to Kea over TLS and Kea uses self-signed certificates
STORK_AGENT_SKIP_TLS_CERT_VERIFICATION=true
```
```
root@dhcp1:/etc/stork# cat server.env
# database settings
STORK_DATABASE_HOST=192.168.0.5
STORK_DATABASE_PORT=5432
STORK_DATABASE_NAME=stork
STORK_DATABASE_USER_NAME=stork
# empty password is set to avoid prompting user for password to database
STORK_DATABASE_PASSWORD=pw
# ReST API settings
# STORK_REST_HOST=
# STORK_REST_PORT=
# STORK_REST_TLS_CERTIFICATE=
# STORK_REST_TLS_PRIVATE_KEY=
# STORK_REST_TLS_CA_CERTIFICATE=
STORK_REST_STATIC_FILES_DIR=/usr/share/stork/www
```
Dashboard Screenshots:
- https://salty.link/screenshots/user1/0802iv00f.png
- https://salty.link/screenshots/user1/314boia99.png
- https://salty.link/screenshots/user1/9509P7234.png
**Expected behavior**
I expect the leases and the subnet utilization to show up in the Stork dashboard.
**Environment:**
```
root@dhcp1:/etc/stork# kea-dhcp4 -V
2.0.0
tarball
linked with:
log4cplus 2.0.5
OpenSSL 1.1.1l 24 Aug 2021
database:
MySQL backend 12.0, library 8.0.27
PostgreSQL backend 6.2, library 130004
Memfile backend 2.1
```
```
root@dhcp1:/etc/stork# stork-server -v
0.22.0
root@dhcp1:/etc/stork# stork-agent -v
0.22.0
```
- OS: Ubuntu 20.04 x64
```
root@dhcp1:/etc/stork# dpkg --list |grep stork
ii isc-stork-agent 0.22.0.211105072749 amd64 ISC Stork Agent
ii isc-stork-server 0.22.0.211105072749 amd64 ISC Stork Server
root@dhcp1:/etc/stork# dpkg --list |grep kea
ii isc-kea-admin 2.0.0-isc20210927143053 amd64 Administration utilities for ISC Kea DHCP server
ii isc-kea-common 2.0.0-isc20210927143053 amd64 Common libraries for the ISC Kea DHCP server
ii isc-kea-ctrl-agent 2.0.0-isc20210927143053 amd64 ISC Kea DHCP server REST API service
ii isc-kea-dev 2.0.0-isc20210927143053 amd64 Development headers for ISC Kea DHCP server
ii isc-kea-dhcp4-server 2.0.0-isc20210927143053 amd64 ISC Kea IPv4 DHCP server
ii isc-kea-dhcp6-server 2.0.0-isc20210927143053 amd64 ISC Kea IPv6 DHCP server
ii isc-kea-doc 2.0.0-isc20210927143053 all Documentation for ISC Kea DHCP server
ii python3-isc-kea-connector 2.0.0-isc20210927143053 all Python3 management connector for ISC Kea DHCP server
```
Logs:
```
2021-11-05 20:35:31.215 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: failed to allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:31.215 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: failed to allocate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:31.215 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: Failed to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:32.081 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: failed to allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:32.081 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: failed to allocate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:32.081 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: Failed to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:32.817 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:1e:3c], cid=[01:94:57:a5:50:1e:3c], tid=0x8: failed to allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:32.817 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:1e:3c], cid=[01:94:57:a5:50:1e:3c], tid=0x8: failed to allocate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:32.817 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:1e:3c], cid=[01:94:57:a5:50:1e:3c], tid=0x8: Failed to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:33.259 INFO [kea-dhcp4.leases/5416.140484751342016] DHCP4_LEASE_ALLOC [hwtype=1 2c:60:0c:d7:fc:dc], cid=[01:2c:60:0c:d7:fc:dc], tid=0x87e34c3e: lease 10.43.25.9 has been allocated for 3600 seconds
2021-11-05 20:35:33.696 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 14:02:ec:38:0a:a2], cid=[01:14:02:ec:38:0a:a2], tid=0x8: failed to allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:33.696 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 14:02:ec:38:0a:a2], cid=[01:14:02:ec:38:0a:a2], tid=0x8: failed to allocate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:33.696 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 14:02:ec:38:0a:a2], cid=[01:14:02:ec:38:0a:a2], tid=0x8: Failed to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:33.925 INFO [kea-dhcp4.leases/5416.140484751342016] DHCP4_LEASE_ALLOC [hwtype=1 2c:60:0c:d7:fd:18], cid=[01:2c:60:0c:d7:fd:18], tid=0xdff67c12: lease 10.43.25.8 has been allocated for 3600 seconds
2021-11-05 20:35:35.304 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: failed to allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:35.304 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: failed to allocate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:35.304 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: Failed to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:36.171 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: failed to allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:36.171 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: failed to allocate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:36.171 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: Failed to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
```
**Contacting you**
How can ISC reach you to discuss this matter further? If you do not specify any means such as
e-mail, jabber id or a telephone, we may send you a message on github with questions when we have
them.
Email / Gitlab / Github are fine.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3005
[ISC-support #19488] IXFR changes committed before failing with "extra data" (Everett Fulton, 2021-11-30T10:42:51Z)

Ref: https://support.isc.org/Ticket/Display.html?id=19488
A Support customer has reported an awkward response by BIND to a malformed inbound IXFR from a non-BIND server, in which there are records sent after the closing SOA:
- BIND emits a log indicating an xfr failure due to "extra data"
- Changes excluding the additional RRs are committed to the zone
- No immediate fallback from IXFR to AXFR happened (it triggers a refresh as usual)
Mark A. has mentioned that the journal code was designed around committing a single delta at a time. The journal commit could be made to cover multiple deltas and be performed only after the check for extra data.
This undesired behavior only occurs with malformed IXFR streams.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3004
dig and named crash when receiving XFR over TLS (Cesar Kuroiwa, 2021-12-01T15:01:19Z)

### Summary
On some occasions, `dig` and `named` crash when making a zone transfer over TLS (client-side). This seems to happen more often on larger zones, where more DNS messages are needed to complete the transfer.
### BIND version used
9.17.19
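The report omits reproduction details; a minimal sketch of the kind of client-side transfer involved (server address and zone are hypothetical; `+tls` needs a 9.17-era dig):

```bash
# Request a full zone transfer over TLS from a hypothetical primary.
dig +tls @192.0.2.53 example.com AXFR
```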
### See also
* #2986
*Note: privacy sensitive data was removed from the issue upon making it non-confidential.*

https://gitlab.isc.org/isc-projects/bind9/-/issues/3000
HTTP pipelining in statistics channels does not work - browser hangs (Petr Špaček, 2021-12-03T11:07:08Z)

### Summary
The web browser hangs for 30 seconds while loading XML statistics, and after a timeout it loads the page successfully. This is most likely caused by broken HTTP pipelining.
### BIND version used
main - d48fa3b
```
$ curl --version
curl 7.79.1 (x86_64-pc-linux-gnu) libcurl/7.79.1 OpenSSL/1.1.1l zlib/1.2.11 brotli/1.0.9 zstd/1.5.0 libidn2/2.3.2 libpsl/0.21.1 (+libidn2/2.3.0) libssh2/1.9.0 nghttp2/1.46.0
Release-Date: 2021-09-22
Protocols: dict file ftp ftps gopher gophers http https imap imaps mqtt pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS brotli GSS-API HSTS HTTP2 HTTPS-proxy IDN IPv6 Kerberos Largefile libz NTLM NTLM_WB PSL SPNEGO SSL TLS-SRP UnixSockets zstd
$ pacman -Q curl
curl 7.79.1-1
```
BIND branches 9.11 and 9.16 are not affected. A possible regression in !5455?
### Steps to reproduce
Request two URLs and use pipelining. cURL does this by default on my system:
```bash
$ curl -v -o /dev/null http://127.0.0.1:8888/bind9.xsl -o /dev/null http://127.0.0.1:8888/bind9.xsl
```
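A counter-test consistent with this diagnosis (my assumption, not from the issue): forcing HTTP/1.0, which closes the connection after each response, should avoid the hang:

```bash
# HTTP/1.0 disables persistent connections by default, so each request gets a
# fresh TCP connection and never waits on a stalled keep-alive socket.
curl -v --http1.0 -o /dev/null http://127.0.0.1:8888/bind9.xsl \
     -o /dev/null http://127.0.0.1:8888/bind9.xsl
```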
### What is the current *bug* behavior?
First file is downloaded immediately and the second file just hangs until cURL times out.
<details>
$ curl -v -o /dev/null http://127.0.0.1:8888/bind9.xsl -o /dev/null http://127.0.0.1:8888/bind9.xsl
* Trying 127.0.0.1:8888...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Connected to 127.0.0.1 (127.0.0.1) port 8888 (#0)
> GET /bind9.xsl HTTP/1.1
> Host: 127.0.0.1:8888
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/xslt+xml
< Date: Wed, 03 Nov 2021 14:15:19 GMT
< Expires: Wed, 03 Nov 2021 14:15:19 GMT
< Last-Modified: Wed, 03 Nov 2021 14:12:44 GMT
< Cache-Control: public
< Server: libisc
< Content-Length: 38976
<
{ [7007 bytes data]
100 38976 100 38976 0 0 31.6M 0 --:--:-- --:--:-- --:--:-- 37.1M
* Connection #0 to host 127.0.0.1 left intact
* Found bundle for host 127.0.0.1: 0x5649e7d7a9a0 [serially]
* Can not multiplex, even if we wanted to!
* Re-using existing connection! (#0) with host 127.0.0.1
* Connected to 127.0.0.1 (127.0.0.1) port 8888 (#0)
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0> GET /bind9.xsl HTTP/1.1
> Host: 127.0.0.1:8888
> User-Agent: curl/7.79.1
> Accept: */*
>
0 0 0 0 0 0 0 0 --:--:-- 0:00:30 --:--:-- 0* Connection died, retrying a fresh connect (retry count: 1)
0 0 0 0 0 0 0 0 --:--:-- 0:00:30 --:--:-- 0
* Closing connection 0
* Issue another request to this URL: 'http://127.0.0.1:8888/bind9.xsl'
* Hostname 127.0.0.1 was found in DNS cache
* Trying 127.0.0.1:8888...
* Connected to 127.0.0.1 (127.0.0.1) port 8888 (#1)
> GET /bind9.xsl HTTP/1.1
> Host: 127.0.0.1:8888
> User-Agent: curl/7.79.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: text/xslt+xml
< Date: Wed, 03 Nov 2021 14:15:49 GMT
< Expires: Wed, 03 Nov 2021 14:15:49 GMT
< Last-Modified: Wed, 03 Nov 2021 14:12:44 GMT
< Cache-Control: public
< Server: libisc
< Content-Length: 38976
<
{ [7007 bytes data]
100 38976 100 38976 0 0 1298 0 0:00:30 0:00:30 --:--:-- 1298
* Connection #1 to host 127.0.0.1 left intact
</details>
### What is the expected *correct* behavior?
HTTP Persistent Connections either correctly serve multiple requests, or are correctly terminated with `Connection: close` HTTP header.
### Relevant configuration files
```
statistics-channels {
inet 127.0.0.1 port 8888;
};
```
### Relevant logs and/or screenshots
None.
### Possible fixes
Either:
- Fix persistent HTTP connections in statistics channel
- Signal connection closure after each HTTP response (`Connection: close` header)
- Downgrade to HTTP/1.0, which does not use persistent connections by default - we most likely don't need HTTP/1.1 features anyway.

https://gitlab.isc.org/isc-projects/bind9/-/issues/2998
CID 340918: Uninitialized variables (UNINIT) (Michal Nowak, 2021-11-03T14:35:18Z)

It seems to point to 78066157145b6a75f58ff843ac32ffabe62b2143:
```
790 static isc_result_t
791. opensslrsa_tofile(const dst_key_t *key, const char *directory) {
792 isc_result_t ret;
1. var_decl: Declaring variable priv without initializer.
793 dst_private_t priv;
794 unsigned char *bufs[8] = { NULL };
795 unsigned short i = 0;
796 EVP_PKEY *pkey;
797. #if OPENSSL_VERSION_NUMBER < 0x30000000L
798 RSA *rsa = NULL;
799 const BIGNUM *n = NULL, *e = NULL, *d = NULL;
800 const BIGNUM *p = NULL, *q = NULL;
801 const BIGNUM *dmp1 = NULL, *dmq1 = NULL, *iqmp = NULL;
802 #else
803 BIGNUM *n = NULL, *e = NULL, *d = NULL;
804 BIGNUM *p = NULL, *q = NULL;
805 BIGNUM *dmp1 = NULL, *dmq1 = NULL, *iqmp = NULL;
806. #endif /* OPENSSL_VERSION_NUMBER < 0x30000000L */
807
2. Condition key->keydata.pkey == NULL, taking true branch.
808 if (key->keydata.pkey == NULL) {
3. Jumping to label err.
809 DST_RET(DST_R_NULLKEY);
810 }
```
```
*** CID 340918: Uninitialized variables (UNINIT)
/lib/dns/opensslrsa_link.c: 937 in opensslrsa_tofile()
931 priv.nelements = i;
932 ret = dst__privstruct_writefile(key, &priv, directory);
933
934 err:
935 for (i = 0; i < ARRAY_SIZE(bufs); i++) {
936 if (bufs[i] != NULL) {
>>> CID 340918: Uninitialized variables (UNINIT)
>>> Using uninitialized value "priv.elements[i].length" when calling "isc__mem_put".
937 isc_mem_put(key->mctx, bufs[i],
938 priv.elements[i].length);
939 }
940 }
941 #if OPENSSL_VERSION_NUMBER < 0x30000000L
942 RSA_free(rsa);
```

https://gitlab.isc.org/isc-projects/bind9/-/issues/2997
nsupdate command returns information: dns_request_createvia3: address in use (395096713, 2021-11-03T06:10:24Z)

Hi admin:
I met a problem: when using nsupdate to add a record, the returned information is "dns_request_createvia3: address in use".
My OS is CentOS 7.
My BIND version is bind-9.11.0.3.
Please help me, what's the reason for this problem?
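The message usually indicates that the local address/port nsupdate tried to bind was already taken; a diagnostic sketch under that assumption (addresses are hypothetical):

```bash
# See which sockets are already bound locally.
ss -ulpn
# Retry while pinning the source address with nsupdate's 'local' command.
nsupdate <<'EOF'
local 192.0.2.10
server 192.0.2.1
update add test.example.com. 300 A 192.0.2.99
send
EOF
```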
https://gitlab.isc.org/isc-projects/bind9/-/issues/2994
query_test failure on Ubuntu 21.10 (Impish) (Jean-Christophe Manciot, 2023-10-12T11:14:49Z)
BIND does not pass the tests when built on Ubuntu Impish.
### BIND version used
v9_17_19
### Some packages of interest
- autoconf: 2.69-14
- make: 4.3-4ubuntu1
- gcc-11: 11.2.0-7ubuntu2
- python3.9: 3.9.7-4
- pytest: 6.2.5
### Steps to reproduce
```
git checkout v9_17_19
export CFLAGS+=-Wno-error
export NOCONFIGURE=yes
autoreconf -f -i
./configure --build=x86_64-pc-linux-gnu \
--prefix=/usr --sysconfdir=/etc/bind --localstatedir=/ \
--datarootdir=/usr/share --docdir=/usr/share/doc --mandir=/usr/share/man \
--disable-native-pkcs11 \
--disable-querytrace \
--enable-auto-validation \
--enable-developer \
--enable-dnstap \
--enable-fixed-rrset \
--enable-full-report \
--enable-largefile \
--enable-linux-caps \
--enable-shared=yes \
--with-cmocka=yes \
--with-gnu-ld=yes \
--with-gssapi=/usr/bin/krb5-config \
--with-jemalloc=detect \
--with-json-c=yes \
--with-libidn2 \
--with-libxml2=yes \
--with-lmdb=auto \
--with-maxminddb=yes \
--with-openssl=/usr/lib/x86_64-linux-gnu \
--with-tuning=large \
--with-zlib=yes
make all
make doc html pdf
sudo pip3 install -I pytest
sudo bin/tests/system/ifconfig.sh up
make check
```
leads to:
```
...
make[5]: Entering directory 'git-bind9/lib/ns/tests'
make[6]: Entering directory 'git-bind9/lib/ns/tests'
PASS: listenlist_test
PASS: notify_test
PASS: plugin_test
FAIL: query_test
============================================================================
Testsuite summary for BIND 9.17.19
============================================================================
# TOTAL: 4
# PASS: 3
# SKIP: 0
# XFAIL: 0
# FAIL: 1
# XPASS: 0
# ERROR: 0
============================================================================
See lib/ns/tests/test-suite.log
Please report to info@isc.org
============================================================================
```
**test-suite.log**
```
===============================================
BIND 9.17.19: lib/ns/tests/test-suite.log
===============================================
# TOTAL: 4
# PASS: 3
# SKIP: 0
# XFAIL: 0
# FAIL: 1
# XPASS: 0
# ERROR: 0
.. contents:: :depth: 2
FAIL: query_test
================
[==========] Running 4 test(s).
[ RUN ] ns__query_sfcache_test
[ OK ] ns__query_sfcache_test
[ RUN ] ns__query_start_test
[ OK ] ns__query_start_test
[ RUN ] ns__query_hookasync_test
netmgr/tcpdns.c:334: fatal error: RUNTIME_CHECK(result == ISC_R_SUCCESS) failed
FAIL query_test (exit status: 134)
```

https://gitlab.isc.org/isc-projects/kea/-/issues/2165
Can LFC cause dropped packets with HA+MT at heavy load (Peter Davies, 2021-11-04T14:28:27Z)

Can LFC cause dropped packets with HA+MT at heavy load:
Would it be possible to test for packet loss due to the influence of the LFC process when handling a large number of requests? And if that is found to be the case, could there be a method to mitigate it?
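One way such a test might be driven (a sketch; perfdhcp ships with Kea, but the rate and duration here are arbitrary):

```bash
# Sustain load against the server while an LFC cycle runs; perfdhcp prints
# drop statistics at exit (-r rate/s, -p test period in seconds).
perfdhcp -4 -r 10000 -p 120 -t 1 192.0.2.1
```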
[RT #19480](https://support.isc.org/Ticket/Display.html?id=19480)

https://gitlab.isc.org/isc-projects/bind9/-/issues/2991
Address reports by Coverity in updated OpenSSL code !5385 (Mark Andrews, 2021-11-02T14:49:08Z)

```
** CID 340808: Control flow issues (DEADCODE)
/lib/dns/openssldh_link.c: 1209 in openssldh_parse()
________________________________________________________________________________________________________
*** CID 340808: Control flow issues (DEADCODE)
/lib/dns/openssldh_link.c: 1209 in openssldh_parse()
1203 key->key_size = (unsigned int)key_size;
1204 ret = ISC_R_SUCCESS;
1205
1206 err:
1207 #if OPENSSL_VERSION_NUMBER < 0x30000000L
1208 if (dh != NULL) {
CID 340808: Control flow issues (DEADCODE)
Execution cannot reach this statement: "DH_free(dh);".
1209 DH_free(dh);
1210 }
1211 #else
1212 if (pkey != NULL) {
1213 EVP_PKEY_free(pkey);
1214 }
```
```
** CID 340807: (OVERRUN)
/lib/dns/opensslrsa_link.c: 931 in opensslrsa_tofile()
/lib/dns/opensslrsa_link.c: 930 in opensslrsa_tofile()
/lib/dns/opensslrsa_link.c: 931 in opensslrsa_tofile()
________________________________________________________________________________________________________
*** CID 340807: (OVERRUN)
/lib/dns/opensslrsa_link.c: 931 in opensslrsa_tofile()
925 priv.nelements = i;
926 ret = dst__privstruct_writefile(key, &priv, directory);
927
928 err:
929 while (i--) {
930 if (bufs[i] != NULL) {
CID 340807: (OVERRUN)
Overrunning array "bufs" of 8 8-byte elements at element index 9 (byte offset 79) using index "i" (which evaluates to 9).
931 isc_mem_put(key->mctx, bufs[i],
932 priv.elements[i].length);
933 }
934 }
935 #if OPENSSL_VERSION_NUMBER < 0x30000000L
936 RSA_free(rsa);
/lib/dns/opensslrsa_link.c: 930 in opensslrsa_tofile()
924
925 priv.nelements = i;
926 ret = dst__privstruct_writefile(key, &priv, directory);
927
928 err:
929 while (i--) {
CID 340807: (OVERRUN)
Overrunning array "bufs" of 8 8-byte elements at element index 9 (byte offset 79) using index "i" (which evaluates to 9).
930 if (bufs[i] != NULL) {
931 isc_mem_put(key->mctx, bufs[i],
932 priv.elements[i].length);
933 }
934 }
935 #if OPENSSL_VERSION_NUMBER < 0x30000000L
/lib/dns/opensslrsa_link.c: 931 in opensslrsa_tofile()
925 priv.nelements = i;
926 ret = dst__privstruct_writefile(key, &priv, directory);
927
928 err:
929 while (i--) {
930 if (bufs[i] != NULL) {
CID 340807: (OVERRUN)
Overrunning array "bufs" of 8 8-byte elements at element index 9 (byte offset 79) using index "i" (which evaluates to 9).
931 isc_mem_put(key->mctx, bufs[i],
932 priv.elements[i].length);
933 }
934 }
935 #if OPENSSL_VERSION_NUMBER < 0x30000000L
936 RSA_free(rsa);
```

https://gitlab.isc.org/isc-projects/bind9/-/issues/2990
Only one request is sent with dig tool, but bind9 processes two same requests and replies with two same responses. (liu fei, 2021-10-31T06:39:47Z)

<!--
If the bug you are reporting is potentially security-related - for example,
if it involves an assertion failure or other crash in `named` that can be
triggered repeatedly - then please do *NOT* report it here, but send an
email to [security-officer@isc.org](security-officer@isc.org).
-->
### Summary
```
Using the dig tool, only one request is sent, but BIND receives and processes two identical requests and replies with two identical responses.
Packets captured on the network port show only one request message received and two response messages returned.
```
(Summarize the bug encountered concisely.)
### BIND version used
version: 9.10.0-P1 <id:e94d8db1>
(Paste the output of `named -V`.)
### Steps to reproduce
```
No special operation; after running for a while, it behaves like this. Restarting the process does not help; the problem persists.
```
(How one can reproduce the issue - this is very important.)
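A wire-level check mirroring the capture mentioned in the summary (a sketch; interface and query name are assumptions based on the logs below):

```bash
# Capture loopback DNS traffic while issuing a single query; one inbound
# request followed by two identical responses would point at the server side.
tcpdump -ni lo port 53 -c 10 &
dig @127.0.0.1 3gnet.test.mncxxx.mccxxx.gprs A
wait
```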
### What is the current *bug* behavior?
(What actually happens.)
```
Using the dig tool, only one request is sent, but BIND processes two identical requests and replies with two identical responses.
```
### What is the expected *correct* behavior?
```
BIND should process only one request.
```
(What you should see instead.)
### Relevant configuration files
(Paste any relevant configuration files - please use code blocks (```)
to format console output. If submitting the contents of your
configuration file in a non-confidential Issue, it is advisable to
obscure key secrets: this can be done automatically by using
`named-checkconf -px`.)
### Relevant logs and/or screenshots
```
The following logs are recorded after debugging is enabled.
31-Oct-2021 01:51:54.962 general: socket 0xe4596908: dispatch_recv: event 0xe3f29108 -> task 0xe4597408
31-Oct-2021 01:51:54.963 general: socket 0xe4596788: dispatch_recv: event 0xe3f6a108 -> task 0xe4597388
31-Oct-2021 01:51:54.963 general: socket 0xe4596908: internal_recv: task 0xe4597408 got event 0xe459697c
31-Oct-2021 01:51:54.963 general: socket 0xe4596608: dispatch_recv: event 0xe3fab108 -> task 0xe4597308
31-Oct-2021 01:51:54.963 general: socket 0xe4596788: internal_recv: task 0xe4597388 got event 0xe45967fc
31-Oct-2021 01:51:54.964 general: socket 0xe4596908 127.0.0.1#43060: packet received correctly
31-Oct-2021 01:51:54.964 general: socket 0xe4596488: dispatch_recv: event 0xe3fec108 -> task 0xe4597288
31-Oct-2021 01:51:54.964 general: socket 0xe4596308: dispatch_recv: event 0xe402d108 -> task 0xe4597208
31-Oct-2021 01:51:54.965 general: socket 0xe4596788 127.0.0.1#43060: packet received correctly
31-Oct-2021 01:51:54.965 general: socket 0xe4596908: processing cmsg 0xf6f8d7a0
31-Oct-2021 01:51:54.965 general: socket 0xe4596488: internal_recv: task 0xe4597288 got event 0xe45964fc
31-Oct-2021 01:51:54.966 general: socket 0xe4596608: internal_recv: task 0xe4597308 got event 0xe459667c
31-Oct-2021 01:51:54.966 general: socket 0xe4596188: dispatch_recv: event 0xe406e108 -> task 0xe4597188
31-Oct-2021 01:51:54.966 general: socket 0xe4596788: processing cmsg 0xf6f8d758
31-Oct-2021 01:51:54.967 general: socket 0xe4596788: processing cmsg 0xf6f8d76c
31-Oct-2021 01:51:54.967 general: socket 0xe4596908: processing cmsg 0xf6f8d7b4
31-Oct-2021 01:51:54.967 general: sockmgr 0xf6f77008: watcher got message -3 for socket 532
31-Oct-2021 01:51:54.967 general: socket 0xe4596308: internal_recv: task 0xe4597208 got event 0xe459637c
31-Oct-2021 01:51:54.968 client: client 127.0.0.1#43060: received DSCP 0
31-Oct-2021 01:51:54.968 client: client 127.0.0.1#43060: UDP request
31-Oct-2021 01:51:54.968 client: client 127.0.0.1#43060: received DSCP 0
31-Oct-2021 01:51:54.968 general: sockmgr 0xf6f77008: watcher got message -3 for socket 533
31-Oct-2021 01:51:54.969 general: socket 0xe4596188: internal_recv: task 0xe4597188 got event 0xe45961fc
31-Oct-2021 01:51:54.969 client: client 127.0.0.1#43060: using view '_default'
31-Oct-2021 01:51:54.969 security: client 127.0.0.1#43060: request is not signed
31-Oct-2021 01:51:54.969 security: client 127.0.0.1#43060: recursion available
31-Oct-2021 01:51:54.970 client: client 127.0.0.1#43060: query
31-Oct-2021 01:51:54.970 client: client 0x84f16b0: ns_query_start
31-Oct-2021 01:51:54.970 queries: client 127.0.0.1#43060 (3gnet.test.mncxxx.mccxxx.gprs): query: 3gnet.test.mncxxx.mccxxx.gprs IN A + (127.0.0.1)
31-Oct-2021 01:51:54.970 general: sockmgr 0xf6f77008: watcher got message -3 for socket 531
31-Oct-2021 01:51:54.971 client: client 127.0.0.1#43060: UDP request
31-Oct-2021 01:51:54.971 client: client 127.0.0.1#43060 (3gnet.test.mncxxx.mccxxx.gprs): ns_client_attach: ref = 1
31-Oct-2021 01:51:54.971 client: client 0x84f16b0: query_find
31-Oct-2021 01:51:54.971 client: client 0x84f16b0: query_find: restart
31-Oct-2021 01:51:54.971 general: sockmgr 0xf6f77008: watcher got message -3 for socket 530
31-Oct-2021 01:51:54.972 client: client 127.0.0.1#43060: using view '_default'
31-Oct-2021 01:51:54.972 security: client 127.0.0.1#43060: request is not signed
31-Oct-2021 01:51:54.972 general: sockmgr 0xf6f77008: watcher got message -2 for socket -1
31-Oct-2021 01:51:54.972 security: client 127.0.0.1#43060 (3gnet.test.mncxxx.mccxxx.gprs): query '3gnet.test.mncxxx.mccxxx.gprs/A/IN' approved
31-Oct-2021 01:51:54.972 client: client 0x84f16b0: query_find: db_find
31-Oct-2021 01:51:54.973 client: client 0x84f16b0: query_getnamebuf
31-Oct-2021 01:51:54.973 client: client 0x84f16b0: query_getnamebuf: done
31-Oct-2021 01:51:54.973 client: client 0x84f16b0: query_newname
31-Oct-2021 01:51:54.973 client: client 0x84f16b0: query_newname: done
31-Oct-2021 01:51:54.973 client: client 0x84f16b0: query_newrdataset
31-Oct-2021 01:51:54.973 client: client 0x84f16b0: query_newrdataset: done
31-Oct-2021 01:51:54.974 client: client 0x84f16b0: query_find: resume
31-Oct-2021 01:51:54.974 client: client 0x84f16b0: query_addrrset
31-Oct-2021 01:51:54.974 client: client 0x84f16b0: query_keepname
31-Oct-2021 01:51:54.974 client: client 0x84f16b0: query_addrdataset
31-Oct-2021 01:51:54.974 client: client 0x84f16b0: query_addrdataset: done
31-Oct-2021 01:51:54.974 client: client 0x84f16b0: query_addrrset: done
31-Oct-2021 01:51:54.975 client: client 0x84f16b0: query_find: addauth
31-Oct-2021 01:51:54.975 client: client 0x84f16b0: query_addns
31-Oct-2021 01:51:54.975 client: client 0x84f16b0: query_newrdataset
31-Oct-2021 01:51:54.975 client: client 0x84f16b0: query_newrdataset: done
31-Oct-2021 01:51:54.975 client: client 0x84f16b0: query_addrrset
31-Oct-2021 01:51:54.975 security: client 127.0.0.1#43060: recursion available
31-Oct-2021 01:51:54.976 client: client 127.0.0.1#43060: query
31-Oct-2021 01:51:54.976 client: client 0x84f9200: ns_query_start
31-Oct-2021 01:51:54.976 queries: client 127.0.0.1#43060 (3gnet.test.mncxxx.mccxxx.gprs): query: 3gnet.test.mncxxx.mccxxx.gprs IN A + (127.0.0.1)
31-Oct-2021 01:51:54.976 client: client 0x84f16b0: query_addrdataset
31-Oct-2021 01:51:54.977 client: client 127.0.0.1#43060 (3gnet.test.mncxxx.mccxxx.gprs): ns_client_attach: ref = 1
31-Oct-2021 01:51:54.977 client: client 0x84f9200: query_find
31-Oct-2021 01:51:54.977 client: client 0x84f9200: query_find: restart
31-Oct-2021 01:51:54.977 client: client 0x84f16b0: query_addadditional
31-Oct-2021 01:51:54.977 client: client 0x84f16b0: query_getnamebuf
31-Oct-2021 01:51:54.977 client: client 0x84f16b0: query_getnamebuf: done
31-Oct-2021 01:51:54.978 client: client 0x84f16b0: query_newname
31-Oct-2021 01:51:54.978 client: client 0x84f16b0: query_newname: done
31-Oct-2021 01:51:54.978 client: client 0x84f16b0: query_newrdataset
31-Oct-2021 01:51:54.978 client: client 0x84f16b0: query_newrdataset: done
31-Oct-2021 01:51:54.978 client: client 0x84f16b0: query_addadditional: db_find
31-Oct-2021 01:51:54.979 client: client 0x84f16b0: query_keepname
31-Oct-2021 01:51:54.979 security: client 127.0.0.1#43060 (3gnet.test.mncxxx.mccxxx.gprs): query '3gnet.test.mncxxx.mccxxx.gprs/A/IN' approved
```
(Paste any relevant logs - please use code blocks (```) to format console
output, logs, and code, as it's very hard to read otherwise.)
### Possible fixes
(If you can, link to the line of code that might be responsible for the
problem.)

https://gitlab.isc.org/isc-projects/bind9/-/issues/2989
CPU test for taskset fails if machine has four threads or less (Anton Castelli, 2021-11-01T15:16:40Z)

### Summary
The CPU test for processor affinity fails on machines with four threads or less because the value used for the `taskset` command selects threads 4-15 and excludes 0-3.
### BIND version used
9.16.11
### Steps to reproduce
Run on a VM with four vCPUs (or less).
```
./configure --prefix=/usr --sysconfdir=/etc --enable-static --localstatedir=/var
make -j $(nproc)
sudo bin/tests/system/ifconfig.sh up
cd bin/tests/system/
sh run.sh cpu
```
### What is the current *bug* behavior?
Output of test:
```
T:cpu:1:A
A:cpu:System test cpu
I:cpu:PORTRANGE:5300 - 5399
I:cpu:starting servers
/home/user/bind/bin/named/named -D cpu-ns1 -X named.lock -m record,size,mctx -c named.conf -d 99 -g -U 4 -T maxcachesize=2097152 >named.run 2>&1 & echo $!
I:cpu:stop server (1)
I:cpu:start server with taskset (2)
I:cpu:Couldn't start server taskset fff0 /home/user/bind/bin/named/named -D cpu-ns1 -X named.lock -m record,size,mctx -c named.conf -d 99 -g -U 4 -T maxcachesize=2097152 >>named.run 2>&1 & echo $! (pid=13759)
I:cpu:failed
I:cpu:failed
I:cpu:check ps output (3)
cat: ns1/named.pid: No such file or directory
I:cpu:pid=
I:cpu:psr=
tests.sh: line 41: test: : integer expression expected
I:cpu:failed
I:cpu:exit status: 2
I:cpu:stopping servers
I:cpu:pytest not installed, skipping python tests
R:cpu:FAIL
```
### What is the expected *correct* behavior?
Test succeeds.
### Relevant configuration files
N/A
### Relevant logs and/or screenshots
From `bin/tests/system/cpu/ns1/named.run`
```
taskset: failed to set pid 0's affinity: Invalid argument
```
### Possible fixes
Line that causes the problem: [bin/tests/system/cpu/tests.sh#L28](bin/tests/system/cpu/tests.sh#L28)
The mask argument `fff0` for the `taskset` command selects CPUs/threads 4-15 and excludes 0-3. On a machine that only has 4 CPUs/threads, the command fails because there are no valid CPUs/threads to select.
Several fixes are possible. Changing the mask value from `fff0` to `fffe` would select CPUs/threads 1-15 and exclude only CPU/thread 0. This would work for everything except single-core/thread machines. For that case, a prereq could be added to check that the output of `nproc --all` is greater than one.
It appears that in later versions, a check was added to skip this test entirely if the `taskset` command fails. [bin/tests/system/cpu/prereq.sh#L32](bin/tests/system/cpu/prereq.sh#L32) This prereq could be replaced by the above suggested one if the mask value is changed to `fffe`.
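A minimal sketch of that combined fix (mask `fffe` plus a guard for single-CPU machines; the skip convention shown is illustrative):

```bash
# prereq sketch: skip on single-CPU machines, where mask fffe selects nothing.
if [ "$(nproc --all)" -le 1 ]; then
    echo "I:cpu:skipping: more than one CPU required"
    exit 255
fi
# Mask fffe selects CPUs 1-15 and excludes only CPU 0, so a four-thread VM
# still has CPUs 1-3 available.
taskset fffe true || exit 255
```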