# ISC Open Source Projects issues
Feed: https://gitlab.isc.org/groups/isc-projects/-/issues (2021-07-26T07:15:10Z)

## make netconf unit test failure more verbose
https://gitlab.isc.org/isc-projects/kea/-/issues/1556 (Francis Dupont, 2021-07-26T07:15:10Z)

Some netconf unit tests should be more verbose, for instance in:
```
netconf_unittests.cc:321
Expected: agent_->initSysrepo() doesn't throw an exception.
Actual: it throws.
```
it would be very useful to know what the issue is, i.e. to display the exception message.

(milestone: kea 1.9.10; assigned: Tomek Mrugalski)

## zone transfer tracking
https://gitlab.isc.org/isc-projects/bind9/-/issues/2293 (Peter Davies, 2020-11-19T08:13:25Z)

### Description
zone transfer tracking mechanism
### Request
This is a feature request for a function that would allow one to list the zone transfers that are in progress at any one time and how long they have been running.
Benefits:
- be able to determine if zone transfers are running longer than expected.
- be able to track transfers in-progress over time to monitor primary and secondary zone transfer health.
### Links / references
RT #[17310](https://support.isc.org/Ticket/Display.html?id=17310).

## bump up libs and hooks versions for 1.9.2 release
https://gitlab.isc.org/isc-projects/kea/-/issues/1555 (Andrei Pavel, 2020-11-23T11:06:45Z; milestone: kea 1.9.2; assigned: Razvan Becheriu)

## 1.9.2 release changes
https://gitlab.isc.org/isc-projects/kea/-/issues/1554 (Andrei Pavel, 2020-12-11T17:08:31Z; milestone: kea 1.9.2; assigned: Andrei Pavel)

## Client trying to renew lease being incorrectly sent DHCPNAK and then moved to different IP with ALLOC_ENGINE_V4_DISCOVER_ADDRESS_CONFLICT
https://gitlab.isc.org/isc-projects/kea/-/issues/1552 (Jay Tuckey, 2021-05-24T15:42:38Z)

**Describe the bug**
**A clear and concise description of what the bug is.**
I've been running into an issue where a client sends a DHCPDISCOVER and the following is true:
* Client already has a lease
* Client has a reservation matching lease
This is my first bug report, so let me know if you need more info.
**To Reproduce**
Steps to reproduce the behavior:
1. Run Kea dhcpv4 with a standard subnet config
2. Create a reservation for the client
3. Client boots up correctly and gets the reservation
4. Client later sends a DHCPDISCOVER (not clear why, but it should still be fine) - note that it comes via a relay, `138.80.150.1`, in this log
5. Client is assigned the wrong IP because the server sees it as a lease conflict, even though the MAC address matches and everything looks the same to me.
**Expected behavior**
I would expect the server to just treat the DHCPDISCOVER as a renewal for the existing lease, because of the matching MAC/CID and reservation
**Environment:**
```
[root@dhcppal1]# /opt/kea/sbin/kea-dhcp4 -V
1.6.2
tarball
linked with:
log4cplus 1.2.0
OpenSSL 1.1.1g FIPS 21 Apr 2020
database:
Memfile backend 2.1
```
- OS: RHEL 8.2
- Nothing extra, just used `./configure --prefix=/opt/kea162`
- Which hooks were loaded: HA, in a hot-spare config.
**Additional Information**
Here's the subnet config:
```
{
    # Palmerston
    "subnet": "138.80.150.0/24",
    "id": 1500,
    "pools": [
        { "pool": "138.80.150.10 - 138.80.150.254" }
    ],
    # "valid-lifetime": 691200, # 8 days lease times for servers - reduced because there were leases never being cleaned up
    # "renew-timer": 86400, # renew every 1 day
    # "rebind-timer": 518400, # rebind after 6 days if needed
    ### Shortened for testing 2020-11-19 - jtuckey
    "valid-lifetime": 6912, # 8 days lease times for servers - reduced because there were leases never being cleaned up
    "renew-timer": 864, # renew every 1 day
    "rebind-timer": 5184, # rebind after 6 days if needed
    "option-data": [
        { "name": "routers", "data": "138.80.150.1" }
    ],
    "reservations": [
        # Make sure to also copy the reservations file
        <?include "reservations/138.80.150.0_24.json"?>
    ]
}
```
Reservation:
```
{
    # jtuckey - 2020-11-05 - dyndnsprd1 dynamic dns server
    "hw-address": "00:50:56:8a:db:0e",
    "ip-address": "138.80.150.34"
}
```
Here's a log dump at DEBUG/level 99 of the issue occurring:
```
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.packets DHCP4_BUFFER_RECEIVED received buffer from 138.80.150.1:67 to 138.80.255.102:67 over interface dhcp
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.callouts HOOKS_CALLOUTS_BEGIN begin all callouts for hook buffer4_receive
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.callouts HOOKS_CALLOUT_CALLED hooks library with index 2 has called a callout on hook buffer4_receive that has address 0x7feb57f2bb80 (callout duration: 0.056 ms)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.callouts HOOKS_CALLOUTS_COMPLETE completed callouts for hook buffer4_receive (total callouts duration: 0.056 ms)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hooks DHCP4_HOOK_BUFFER_RCVD_SKIP received buffer from 138.80.150.1 to 138.80.255.102 over interface dhcp is not parsed because a callout set the next step to SKIP.
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.packets DHCP4_PACKET_RECEIVED [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: DHCPDISCOVER (type 1) received from 138.80.150.1 to 138.80.255.102 on interface dhcp
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.packets DHCP4_QUERY_DATA [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c, packet details: local_address=138.80.255.102:67, remote_address=138.80.150.1:67, msg_type=DHCPDISCOVER (1), transid=0x333a0e3c,
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: options:
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=012, len=010: "dyndnsprd1" (string)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=050, len=004: 138.80.150.34 (ipv4-address)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=053, len=001: 1 (uint8)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=055, len=017: 1(uint8) 2(uint8) 6(uint8) 12(uint8) 15(uint8) 26(uint8) 28(uint8) 121(uint8) 3(uint8) 33(uint8) 40(uint8) 41(uint8) 42(uint8) 119(uint8) 249(uint8) 252(uint8) 17(uint8)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=057, len=002: 576 (uint16)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=061, len=007: 01:00:50:56:8a:db:0e
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.dhcpsrv DHCPSRV_CFGMGR_SUBNET4_ADDR selected subnet 138.80.150.0/24 for packet received by matching address 138.80.150.1
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.packets DHCP4_SUBNET_SELECTED [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: the subnet with ID 1500 was selected for client assignments
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.packets DHCP4_SUBNET_DATA [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: the selected subnet details: 138.80.150.0/24
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ONE_SUBNET_ID_IDENTIFIER get one host with IPv4 reservation for subnet id 1500, identified by hwaddr=0050568ADB0E
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ALL_IDENTIFIER get all hosts with reservations using identifier: hwaddr=0050568ADB0E
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ALL_IDENTIFIER_HOST using identifier: hwaddr=0050568ADB0E, found host: hwaddr=0050568ADB0E ipv4_subnet_id=1500 hostname=(empty) ipv4_reservation=138.80.150.34 siaddr=(no) sname=(empty) file=(empty) key=(empty) ipv6_reservations=(none)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ALL_IDENTIFIER_COUNT using identifier hwaddr=0050568ADB0E, found 1 host(s)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ONE_SUBNET_ID_IDENTIFIER_HOST using subnet id 1500 and identifier hwaddr=0050568ADB0E, found host: hwaddr=0050568ADB0E ipv4_subnet_id=1500 hostname=(empty) ipv4_reservation=138.80.150.34 siaddr=(no) sname=(empty) file=(empty) key=(empty) ipv6_reservations=(none)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.dhcp4 DHCP4_CLASS_ASSIGNED [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: client packet has been assigned to the following class(es): KNOWN
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.dhcp4 DHCP4_CLASS_ASSIGNED [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: client packet has been assigned to the following class(es): HA_dhcp1, ALL, KNOWN
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.ddns DHCP4_CLIENT_HOSTNAME_PROCESS [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: processing client's Hostname option
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.ddns DHCP4_CLIENT_HOSTNAME_DATA [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: client sent Hostname option: dyndnsprd1
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.ddns DHCP4_RESPONSE_HOSTNAME_DATA [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: including Hostname option in the server's response: dyndnsprd1
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.dhcpsrv DHCPSRV_MEMFILE_GET_CLIENTID obtaining IPv4 leases for client ID 01:00:50:56:8a:db:0e
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.alloc-engine ALLOC_ENGINE_V4_DISCOVER_HR client [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c sending DHCPDISCOVER has reservation for the address 138.80.150.34
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.dhcpsrv DHCPSRV_MEMFILE_GET_ADDR4 obtaining IPv4 lease for address 138.80.150.34
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: WARN kea-dhcp4.alloc-engine ALLOC_ENGINE_V4_DISCOVER_ADDRESS_CONFLICT [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: conflicting reservation for address 138.80.150.34 with existing lease Address: 138.80.150.34
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: Valid life: 6912
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: Cltt: 1605758850
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: Hardware addr: 00:50:56:8a:db:0e
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: Client id: 01:00:50:56:8a:db:0e
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: Subnet ID: 1500
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: State: default
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ONE_SUBNET_ID_ADDRESS4 get one host with reservation for subnet id 1500 and IPv4 address 138.80.150.32
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ALL_ADDRESS4 get all hosts with reservations for IPv4 address 138.80.150.32
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ALL_ADDRESS4_COUNT using address 138.80.150.32, found 0 host(s)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.hosts HOSTS_CFG_GET_ONE_SUBNET_ID_ADDRESS4_NULL host not found using subnet id 1500 and address 138.80.150.32
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.alloc-engine ALLOC_ENGINE_V4_OFFER_EXISTING_LEASE allocation engine will try to offer existing lease to the client [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: INFO kea-dhcp4.leases DHCP4_LEASE_ADVERT [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: lease 138.80.150.32 will be advertised
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.options DHCP4_PACKET_PACK [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: preparing on-wire format of the packet to be sent
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.packets DHCP4_PACKET_SEND [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: trying to send packet DHCPOFFER (type 2) from 138.80.255.102:67 to 138.80.150.1:67 on interface dhcp
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.packets DHCP4_RESPONSE_DATA [hwtype=1 00:50:56:8a:db:0e], cid=[01:00:50:56:8a:db:0e], tid=0x333a0e3c: responding with packet DHCPOFFER (type 2), packet details: local_address=138.80.255.102:67, remote_address=138.80.150.1:67, msg_type=DHCPOFFER (2), transid=0x333a0e3c,
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: options:
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=001, len=004: 4294967040 (uint32)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=003, len=004: 138.80.150.1
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=006, len=008: 138.80.255.100 138.80.255.200
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=012, len=010: "dyndnsprd1" (string)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=015, len=010: "cdu.edu.au" (string)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=042, len=012: 138.80.12.1 138.80.12.5 138.80.12.7
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=051, len=004: 6912 (uint32)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=053, len=001: 2 (uint8)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=054, len=004: 138.80.255.102
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=058, len=004: 864 (uint32)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=059, len=004: 5184 (uint32)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=061, len=007: 01:00:50:56:8a:db:0e
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: type=119, len=050: "cdu.edu.au." (fqdn) "cducloud.cdu.edu.au." (fqdn) "cdu-staff.local." (fqdn)
Nov 19 13:53:25 dhcppal1 kea-dhcp4[50758]: DEBUG kea-dhcp4.packets DHCP4_BUFFER_RECEIVED received buffer from 138.80.150.1:67 to 138.80.255.102:67 over interface dhcp
```
**Contacting you**
I'm available at jay.tuckey `<at>` cdu.edu.au or via GitLab: https://gitlab.com/jaytuck

(milestone: kea 1.9.9; assigned: Thomas Markwalder)

## 9.16.8 can't create PID file at Centos7
https://gitlab.isc.org/isc-projects/bind9/-/issues/2292 (Den Ivanov, 2020-11-19T09:03:05Z)

<!--
If the bug you are reporting is potentially security-related - for example,
if it involves an assertion failure or other crash in `named` that can be
triggered repeatedly - then please do *NOT* report it here, but send an
email to [security-officer@isc.org](security-officer@isc.org).
-->
### Summary
BIND could not open '/var/opt/isc/isc-bind/run/named/named.pid', and the systemd service terminates after start.
### BIND version used
```
BIND 9.16.8 (Stable Release) <id:539f9f0>
running on Linux x86_64 3.10.0-1160.6.1.el7.x86_64 #1 SMP Tue Nov 17 13:59:11 UTC 2020
built by make with '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/opt/isc/isc-bind/root/usr' '--exec-prefix=/opt/isc/isc-bind/root/usr' '--bindir=/opt/isc/isc-bind/root/usr/bin' '--sbindir=/opt/isc/isc-bind/root/usr/sbin' '--sysconfdir=/etc/opt/isc/isc-bind' '--datadir=/opt/isc/isc-bind/root/usr/share' '--includedir=/opt/isc/isc-bind/root/usr/include' '--libdir=/opt/isc/isc-bind/root/usr/lib64' '--libexecdir=/opt/isc/isc-bind/root/usr/libexec' '--localstatedir=/var/opt/isc/isc-bind' '--sharedstatedir=/var/opt/isc/isc-bind/lib' '--mandir=/opt/isc/isc-bind/root/usr/share/man' '--infodir=/opt/isc/isc-bind/root/usr/share/info' '--disable-static' '--enable-dnstap' '--with-pic' '--with-gssapi' '--with-json-c' '--with-libtool' '--with-libxml2' '--without-lmdb' '--with-python' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' 'LDFLAGS=-Wl,-z,relro -L/opt/isc/isc-bind/root/usr/lib64' 'LT_SYS_LIBRARY_PATH=/usr/lib64' 'PKG_CONFIG_PATH=:/opt/isc/isc-bind/root/usr/lib64/pkgconfig:/opt/isc/isc-bind/root/usr/share/pkgconfig' 'SPHINX_BUILD=/builddir/build/BUILD/bind-9.16.8/sphinx/bin/sphinx-build'
compiled by GCC 4.8.5 20150623 (Red Hat 4.8.5-39)
compiled with OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017
linked to OpenSSL version: OpenSSL 1.0.2k-fips 26 Jan 2017
compiled with libuv version: 1.38.0
linked to libuv version: 1.38.0
compiled with libxml2 version: 2.9.1
linked to libxml2 version: 20901
compiled with json-c version: 0.11
linked to json-c version: 0.11
compiled with zlib version: 1.2.7
linked to zlib version: 1.2.7
compiled with protobuf-c version: 1.3.3
linked to protobuf-c version: 1.3.1
threads support is enabled
default paths:
named configuration: /etc/opt/isc/isc-bind/named.conf
rndc configuration: /etc/opt/isc/isc-bind/rndc.conf
DNSSEC root key: /etc/opt/isc/isc-bind/bind.keys
nsupdate session key: /var/opt/isc/isc-bind/run/named/session.key
named PID file: /var/opt/isc/isc-bind/run/named/named.pid
named lock file: /var/opt/isc/isc-bind/run/named/named.lock
```
### Steps to reproduce
systemctl start isc-bind-named.service
### What is the current *bug* behavior?
Service terminates
### What is the expected *correct* behavior?
Service must start and run
### Relevant configuration files
Doesn't matter for this case
### Relevant logs and/or screenshots
```
Nov 19 11:59:27 server named[893]: listening on IPv4 interface lo, 127.0.0.1#53
Nov 19 11:59:27 server named[893]: Could not open '/var/opt/isc/isc-bind/run/named/named.pid'.
Nov 19 11:59:27 server named[893]: Please check file and directory permissions or reconfigure the filename.
Nov 19 11:59:27 server named[893]: could not open file '/var/opt/isc/isc-bind/run/named/named.pid': Permission denied
<...skip...>
Nov 19 12:00:57 server systemd[1]: isc-bind-named.service start operation timed out. Terminating.
Nov 19 12:00:57 server named[893]: 19-Nov-2020 12:00:57.102 network: no longer listening on 127.0.0.1#53
Nov 19 12:00:57 server named[893]: 19-Nov-2020 12:00:57.139 general: shutting down
Nov 19 12:00:57 server named[893]: 19-Nov-2020 12:00:57.139 general: stopping command channel on 127.0.0.1#953
Nov 19 12:00:57 server named[893]: 19-Nov-2020 12:00:57.156 general: exiting
Nov 19 12:00:57 server systemd[1]: Failed to start isc-bind-named.service.
Nov 19 12:00:57 server systemd[1]: Unit isc-bind-named.service entered failed state.
Nov 19 12:00:57 server systemd[1]: isc-bind-named.service failed.
```
### Possible fixes
```
semanage fcontext -a -t var_run_t "/var/opt/isc/isc-bind/run"
semanage fcontext -a -t named_var_run_t "/var/opt/isc/isc-bind/run(/.*)"
restorecon -vr /var/opt/isc/isc-bind/run
```

(assigned: Michał Kępień)

## Granular control over logging authentication information
https://gitlab.isc.org/isc-projects/kea/-/issues/1551 (Vicky Risk, 2021-05-21T14:48:33Z)
In some organizations/jurisdictions (but IANAL), authentication information is seen as sensitive, and it should be possible to treat authentication logging information differently from other (non-authentication) log output.
It should be possible to turn off all authentication logging without restricting the other log output, or to send the authentication log information into a separate file (with different access permissions).
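For illustration, the requested capability might look something like the following in Kea's logging configuration. The `kea-dhcp4.auth` logger name is hypothetical (it is the very feature being requested here); `loggers`, `output_options`, and `severity` are existing Kea logging keys:

```json
{
    "loggers": [
        {
            "name": "kea-dhcp4",
            "output_options": [ { "output": "/var/log/kea-dhcp4.log" } ],
            "severity": "INFO"
        },
        {
            "name": "kea-dhcp4.auth",
            "output_options": [ { "output": "/var/log/kea-auth.log" } ],
            "severity": "DEBUG"
        }
    ]
}
```

The separate output file could then be given more restrictive access permissions than the general log.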
Without such a function, users that must implement regulation-compliant logging will need to turn off all logging that could contain authentication information.

(milestone: kea 1.9.9; reporter: Vicky Risk)

## rename HTML IDs to conform to coding guidelines
https://gitlab.isc.org/isc-projects/stork/-/issues/455 (Michal Nowikowski, 2021-03-17T11:53:39Z; milestone: 0.16)

https://gitlab.isc.org/isc-projects/stork/-/wikis/Processes/coding-guidelines#html-css-style

## DNAME-DNAME loop generates ~17 length CNAME chain but a DNAME-CNAME loop terminates early
https://gitlab.isc.org/isc-projects/bind9/-/issues/2291 (Siva Kesava R Kakarla, 2020-11-17T23:25:16Z)

### Summary
When there is a `DNAME-DNAME` loop in the zone file, the BIND server generates 17 CNAMEs, but for a `DNAME-CNAME` chain, the BIND server stops after one iteration. The `DNAME-DNAME` loop behavior is also different from Knot and NSD.
### BIND version used
BIND 9.11.3-1ubuntu1.13-Ubuntu (Extended Support Version) <id:a375815>
### Steps to reproduce
Consider the following zone file:
| Name | TTL / Type | RDATA |
|------|------------|-------|
| campus.edu. | 500 SOA | ns1.campus.edu. root.campus.edu. 3 86400 7200 604800 300 |
| campus.edu. | 500 NS | ns1.outside.edu. |
| d.campus.edu. | 500 DNAME | f.d.campus.edu. |
For the query `<a.f.d.campus.edu, A>`, the response returned by BIND was:
```
"opcode QUERY",
"rcode NOERROR",
"flags QR AA RA",
";QUESTION",
"a.f.d.campus.edu. IN A",
";ANSWER",
"d.campus.edu. 500 IN DNAME f.d.campus.edu.",
"a.f.d.campus.edu. 500 IN CNAME a.f.f.d.campus.edu.",
"a.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.d.campus.edu.",
"a.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
"a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu. 500 IN CNAME a.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.f.d.campus.edu.",
```
whereas the response from Knot and NSD was:
```
"opcode QUERY",
"rcode NOERROR",
"flags QR AA",
";QUESTION",
"a.f.d.campus.edu. IN A",
";ANSWER",
"d.campus.edu. 500 IN DNAME f.d.campus.edu.",
"a.f.d.campus.edu. 500 IN CNAME a.f.f.d.campus.edu.",
";AUTHORITY",
";ADDITIONAL"
```
**NSD logs mention -- `DNAME processing stopped due to loop, qname a.f.d.campus.edu.`**
Consider another zone file:
| Name | TTL / Type | RDATA |
|------|------------|-------|
| campus.edu. | 500 SOA | ns1.campus.edu. root.campus.edu. 3 86400 7200 604800 300 |
| campus.edu. | 500 NS | ns1.outside.edu. |
| d.campus.edu. | 500 DNAME | f.campus.edu. |
| e.f.campus.edu. | 500 CNAME | e.d.campus.edu. |
The response from BIND, NSD, and Knot was:
```
"opcode QUERY",
"rcode NOERROR",
"flags QR AA RA",
";QUESTION",
"e.d.campus.edu. IN A",
";ANSWER",
"d.campus.edu. 500 IN DNAME f.campus.edu.",
"e.d.campus.edu. 500 IN CNAME e.f.campus.edu.",
"e.f.campus.edu. 500 IN CNAME e.d.campus.edu.",
";AUTHORITY",
";ADDITIONAL"
```
### What is the current *bug* behavior?
The BIND authoritative server goes on synthesizing CNAMEs as if the loop were unbounded, stopping only after 17 records.
### What is the expected *correct* behavior?
In the `DNAME-CNAME` case, it is evident that after the second `CNAME` the new query name is the same as the original one, so the implementations stop. For the `DNAME-DNAME` case, it is harder to say which behavior (BIND's or the others') is correct, as the zone file is not proper. I expected BIND to stop after the first iteration in both cases.
(I looked in the repo for this issue and did not find it, so I am filing a new issue; please excuse me if it's a duplicate.)

## cache dump sometimes reports nonsense values with "stale (will be retained for %u more seconds)"
https://gitlab.isc.org/isc-projects/bind9/-/issues/2289 (Brian Conry, 2021-04-28T09:11:11Z)

Several customers have reported this issue, but the cause had been completely baffling.
Examples include:
```
; stale (will be retained for 4294362497 more seconds)
```
While reviewing the relevant areas of the code for other issues, I realized that the decision about whether or not to report the record as stale, and report how long it will be retained for (as modified by the recorded stale intervals), is based solely on the `STALE` flag in the header.
This is an accurate decision, but I wonder if this might lead to odd results if the record expiration is actually in the future because it has been forcibly expired due to the cache being overmem.
I've also wondered if there might be some incorrect reporting if/when the max-stale-ttl is changed after data has been added to the cache.
I will investigate both of these issues further using customer data I already have on hand, but I won't complain if it is fixed before I get to it.

(milestone: May 2021 (9.11.32, 9.11.32-S1, 9.16.16, 9.16.16-S1, 9.17.13); assigned: Matthijs Mekking)

## UI tests needed for menu
https://gitlab.isc.org/isc-projects/stork/-/issues/454 (Tomek Mrugalski, 2022-03-01T14:19:02Z; milestone: backlog)

As a follow-up to #419, we decided to implement UI unit tests for the menu. Yes, it's a compromise. After this ticket is done and we have the UTs ready and working, we may revisit the question of whether the function is lacking in performance and whether this is a problem or not. But that's outside the scope of this ticket.

## "dig" crashes when interrupted while waiting for a TCP connection
https://gitlab.isc.org/isc-projects/bind9/-/issues/2288 (Michał Kępień, 2022-04-26T13:15:52Z)

To reproduce, fire up a `dig` query that will not connect over TCP
before it times out, e.g.:
dig @192.0.2.1 isc.org. A +time=10 +vc
and then hit CTRL+C:
dighost.c:3232: REQUIRE((__builtin_expect(!!(((query)) != ((void *)0)), 1) && __builtin_expect(!!(((const isc__magic_t *)((query)))->magic == ((('D') << 24 | ('i') << 16 | ('g') << 8 | ('q')))), 1))) failed, back trace
Looks like `arg` (which gets cast to `dig_query_t *query`) passed to
`tcp_connected()` is broken in this case.
See #2287 for the UDP counterpart of this problem.

(milestone: December 2020 (9.11.26, 9.11.26-S1, 9.16.10, 9.16.10-S1, 9.17.8))

## "dig" crashes when interrupted while listening for UDP responses
https://gitlab.isc.org/isc-projects/bind9/-/issues/2287 (Michał Kępień, 2020-12-09T10:08:46Z)

To reproduce, fire up a `dig` query that will not get a response before
it times out, e.g.:
dig @192.0.2.1 isc.org. A +time=10
and then hit CTRL+C:
dighost.c:4262: REQUIRE(isc_refcount_current(&recvcount) == 0) failed, back trace
It looks like `current_lookup->q` is empty in `cancel_all()` and thus
the [code block which decrements `recvcount`][1] is not evaluated.
[1]: https://gitlab.isc.org/isc-projects/bind9/-/blob/ff2bc7891e99442df51acea1110ad599ddc6756a/bin/dig/dighost.c#L4203-4211

(milestone: December 2020 (9.11.26, 9.11.26-S1, 9.16.10, 9.16.10-S1, 9.17.8))

## Using "rndc delzone" during zone transfer may crash named
https://gitlab.isc.org/isc-projects/bind9/-/issues/2286 (Michał Kępień, 2021-09-02T10:34:46Z)

The following crash occurred during the `inline` system test:
```
17-Nov-2020 04:26:19.900 queue_xfrin: zone test-l/IN (unsigned): enter
17-Nov-2020 04:26:19.900 zone test-l/IN (unsigned): Transfer started.
17-Nov-2020 04:26:19.900 zone test-l/IN (unsigned): no database exists yet, requesting AXFR of initial version from 10.53.0.2#12100
17-Nov-2020 04:26:19.904 received control channel command 'delzone test-l'
17-Nov-2020 04:26:19.904 zone test-l scheduled for removal via delzone
17-Nov-2020 04:26:19.904 transfer of 'test-l/IN (unsigned)' from 10.53.0.2#12100: connected using 10.53.0.2#12100
17-Nov-2020 04:26:19.904 deleting zone test-l in view _default via delzone
17-Nov-2020 04:26:19.904 transfer of 'test-l/IN (unsigned)' from 10.53.0.2#12100: sent request data
17-Nov-2020 04:26:19.904 transfer of 'test-l/IN (unsigned)' from 10.53.0.2#12100: received 148 bytes
17-Nov-2020 04:26:19.904 received message from 10.53.0.2#12100
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 19176
;; flags: qr aa; QUESTION: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;test-l. IN AXFR
;; ANSWER SECTION:
test-l. 300 IN SOA ns2.test-l. . 2000042407 20 20 1814400 3600
test-l. 300 IN NS ns3.test-l.
ns2.test-l. 300 IN A 10.53.0.2
ns3.test-l. 300 IN A 10.53.0.3
test-l. 300 IN SOA ns2.test-l. . 2000042407 20 20 1814400 3600
17-Nov-2020 04:26:19.904 transfer of 'test-l/IN (unsigned)' from 10.53.0.2#12100: got nonincremental response
17-Nov-2020 04:26:19.904 zone_shutdown: zone test-l/IN (signed): shutting down
17-Nov-2020 04:26:19.904 zone_shutdown: zone test-l/IN (unsigned): shutting down
17-Nov-2020 04:26:19.904 transfer of 'test-l/IN (unsigned)' from 10.53.0.2#12100: shut down: operation canceled
17-Nov-2020 04:26:19.904 dns_zone_verifydb: zone test-l/IN (unsigned): enter
17-Nov-2020 04:26:19.904 zone test-l/IN (unsigned): zone transfer finished: operation canceled
17-Nov-2020 04:26:19.904 removing journal file
17-Nov-2020 04:26:19.904 zone test-l/IN (unsigned): replacing zone database
17-Nov-2020 04:26:19.904 zone test-l/IN (unsigned): zone transfer finished: success
17-Nov-2020 04:26:19.904 zone test-l/IN (unsigned): transferred serial 2000042407
17-Nov-2020 04:26:19.904 zone_needdump: zone test-l/IN (unsigned): enter
17-Nov-2020 04:26:19.904 zone_settimer: zone test-l/IN (unsigned): enter
17-Nov-2020 04:26:19.904 zone_settimer: zone test-l/IN (unsigned): enter
17-Nov-2020 04:26:19.904 transfer of 'test-l/IN (unsigned)' from 10.53.0.2#12100: Transfer status: success
17-Nov-2020 04:26:19.904 transfer of 'test-l/IN (unsigned)' from 10.53.0.2#12100: Transfer completed: 1 messages, 5 records, 148 bytes, 0.001 secs (148000 bytes/sec) (serial 2000042407)
17-Nov-2020 04:26:19.904 transfer of 'test-l/IN (unsigned)' from 10.53.0.2#12100: freeing transfer context
17-Nov-2020 04:26:19.904 zone.c:16915: INSIST(((__extension__ ({ __auto_type __atomic_load_ptr = ((&(zone)->flags)); __typeof__ (*__atomic_load_ptr) __atomic_load_tmp; __atomic_load (__atomic_load_ptr, &__atomic_load_tmp, (memory_order_relaxed)); __atomic_load_tmp; }) & (DNS_ZONEFLG_REFRESH)) != 0)) failed, back trace
17-Nov-2020 04:26:19.904 /builds/isc-projects/bind9/bin/named/.libs/lt-named() [0x428fcc]
17-Nov-2020 04:26:19.904 /builds/isc-projects/bind9/lib/isc/.libs/libisc.so.1705(isc_assertion_failed+0xa) [0x7ff0d8adfc7a]
17-Nov-2020 04:26:19.904 /builds/isc-projects/bind9/lib/dns/.libs/libdns.so.1706(+0x185385) [0x7ff0d8812385]
17-Nov-2020 04:26:19.904 /builds/isc-projects/bind9/lib/dns/.libs/libdns.so.1706(+0x16f8e1) [0x7ff0d87fc8e1]
17-Nov-2020 04:26:19.904 /builds/isc-projects/bind9/lib/dns/.libs/libdns.so.1706(dns_xfrin_shutdown+0x31) [0x7ff0d87fca61]
17-Nov-2020 04:26:19.904 /builds/isc-projects/bind9/lib/dns/.libs/libdns.so.1706(+0x19111e) [0x7ff0d881e11e]
17-Nov-2020 04:26:19.904 /builds/isc-projects/bind9/lib/isc/.libs/libisc.so.1705(+0x5a879) [0x7ff0d8b00879]
17-Nov-2020 04:26:19.904 /lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7ff0d71576ba]
17-Nov-2020 04:26:19.904 /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7ff0d66ad4dd]
17-Nov-2020 04:26:19.904 exiting (due to assertion failure)
```
Looks like the `test-l` zone was deleted using `rndc` while its transfer
was in progress.
While I do not have any proof that this is related to migrating zone
transfer code to netmgr, this particular `INSIST` has been in place for
the past 20 years, so it would be quite a coincidence to only start
hitting it now. If that turned out to be the case, branches other than
~"v9.17" might be affected, too, but I am sticking with the netmgr
theory for now.

September 2021 (9.16.21, 9.16.21-S1, 9.17.18)

https://gitlab.isc.org/isc-projects/bind9/-/issues/2285
Windows-specific system test framework glitches
2021-11-02T09:30:25Z
Michał Kępień

On Windows, system tests may (rarely) fail because of system test
framework imperfections.
Two types of intermittent issues have been observed in the past few
months:
1. Issues with starting servers ([example][1]).
```
S:addzone:2020-07-03T09:45:21+0100
T:addzone:1:A
A:addzone:System test addzone
I:addzone:PORTS:,5150,5151,5152,5153,5154,5155,5156,5157,5158
Value "" invalid for option p (number expected)
I:ns3:ns3/sign.sh
I:addzone:starting servers
Value "" invalid for option port (number expected)
usage: start.pl [--noclean] [--restart] [--port <port>] test-directory [server-directory [server-options]]
I:addzone:starting servers failed
R:addzone:FAIL
E:addzone:2020-07-03T09:45:25+0100
```
This failure mode has not been investigated closely, but it looks
like an issue with the `bin/tests/system/get_ports.sh` script on
`main` (this script is only present on `main`) - it seems that it
can fail to set the `PORT` environment variable in certain
circumstances, which prevents test `named` instances from being
started.
2. Issues with PID reuse ([example][2]).
```
S:rndc:2020-11-16T06:56:52-0800
T:rndc:1:A
A:rndc:System test rndc
I:rndc:PORTRANGE:11000 - 11099
I:rndc:starting servers
I:rndc:preparing (1)
I:rndc:rndc freeze
I:rndc:checking zone was dumped (2)
...
S:rrsetorder:2020-11-16T06:57:52-0800
T:rrsetorder:1:A
A:rrsetorder:System test rrsetorder
I:rrsetorder:PORTRANGE:11500 - 11599
I:rndc:exit status: 0
I:rndc:stopping servers
I:rrsetorder:starting servers
I:rrsetorder:Order 'fixed' disabled at compile time
I:rrsetorder:Checking order fixed behaves as cyclic when disabled (master)
I:rrsetorder:Checking order cyclic (master + additional)
I:rrsetorder:Checking order cyclic (master)
I:rrsetorder:Checking order random (master)
I:rrsetorder:Random selection return 12 of 24 possible orders in 36 samples
I:rrsetorder:Checking order none (primary)
I:rrsetorder:Checking order cyclic (slave + additional)
I:rrsetorder:Checking order cyclic (slave)
I:rrsetorder:Checking order random (slave)
I:rndc:ns4 didn't die when sent a SIGTERM
I:rndc:stopping servers failed
R:rndc:FAIL
I:rrsetorder:Random selection return 12 of 24 possible orders in 36 samples
E:rndc:2020-11-16T06:58:55-0800
I:rrsetorder:Checking order none (secondary)
I:rrsetorder:Shutting down slave
I:rrsetorder:Checking for slave's on disk copy of zone
I:rrsetorder:Re-starting slave
I:rrsetorder:Checking order cyclic (slave + additional, loaded from disk)
I:rrsetorder:Checking order cyclic (slave loaded from disk)
I:rrsetorder:Checking order random (slave loaded from disk)
I:rrsetorder:Random selection return 12 of 24 possible orders in 36 samples
I:rrsetorder:Checking order none (secondary loaded from disk)
I:rrsetorder:Checking order cyclic (cache + additional)
I:rrsetorder:failed
I:rrsetorder:Checking order cyclic (cache)
I:rrsetorder:failed
I:rrsetorder:Checking order random (cache)
I:rrsetorder:Random selection return 0 of 24 possible orders in 36 samples
I:rrsetorder:failed
I:rrsetorder:Checking order none (cache)
I:rrsetorder:failed
I:rrsetorder:Checking default order (cache)
I:rrsetorder:Default selection return 0 of 24 possible orders in 36 samples
I:rrsetorder:failed
I:rrsetorder:Checking default order no match in rrset-order (cache)
I:rrsetorder:failed
I:rrsetorder:exit status: 5
I:rrsetorder:stopping servers
I:rrsetorder:ns1 died before a SIGTERM was sent
I:rrsetorder:stopping servers failed
R:rrsetorder:FAIL
E:rrsetorder:2020-11-16T07:10:23-0800
```
A similar failure mode was triggered in the course of [BIND 9.17.1
release testing][3]. The root cause of this problem is that signal
handlers do not work on Windows and thus when SIGTERM is sent to a
`named` process, it dies immediately without cleaning up its PID
file. To work around this, the system test framework relies on
`kill` returning an error for non-existing PIDs to detect when a
given `named` instance is no longer alive. However, Windows [tends
to recycle PIDs][4]. If `named` instances belonging to one system
test are shut down while `named` instances belonging to another
system test are just starting up, the system test framework may
"confuse" `named` instances from these two tests with each other:
1. `stop.pl` attempts to stop `named` instance `ns1` for system
test `testA`. It sends it a SIGTERM. `ns1` for `testA` exits
without cleaning up its PID file.
2. `start.pl` starts up `named` instance `ns1` for system test
`testB`. It gets assigned the same PID as `ns1` for `testA` which
has just exited.
3. `stop.pl` tests whether `ns1` for `testA` is still alive. It
reads its PID file and attempts to `kill` the PID it read. Since
`ns1` for `testB` has the same PID, `stop.pl` assumes `ns1` for
`testA` is still alive.
4. After 1 minute, `stop.pl` decides to send a SIGABRT to `ns1` for
`testA`, but that one is already long gone - instead, the signal
hits `ns1` for `testB`, killing it (possibly in the middle of
`testB`). `stop.pl` reports that `ns1` for `testA` did not die
when it was sent a SIGTERM (even though it did).
5. `stop.pl` attempts to `kill` `ns1` for `testB`, but it was
already `kill`ed beforehand. `stop.pl` reports that `ns1` for
`testB` died before it was sent a SIGTERM.
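One way around this class of confusion is to record something besides the bare PID. The sketch below (illustrative Python, not the actual `stop.pl` logic; all names are invented) stores the process start time next to the PID, so a recycled PID is not mistaken for a still-running instance. It is Linux-specific: field 22 of `/proc/<pid>/stat` is the process start time in clock ticks since boot.

```python
# Hypothetical sketch of a PID-file check robust against PID reuse:
# record the process start time next to the PID, and treat the PID as
# the same instance only when both match.
import os
from typing import Optional

def start_ticks(pid: int) -> Optional[int]:
    try:
        with open(f"/proc/{pid}/stat") as f:
            stat = f.read()
    except OSError:
        return None  # no such process
    # comm (field 2) may contain spaces, so split after its closing paren;
    # the remainder then starts at field 3, putting field 22 at index 19.
    return int(stat.rsplit(")", 1)[1].split()[19])

def write_pidfile(path: str, pid: int) -> None:
    with open(path, "w") as f:
        f.write(f"{pid} {start_ticks(pid)}\n")

def same_process(path: str) -> bool:
    with open(path) as f:
        pid_s, ticks_s = f.read().split()
    return start_ticks(int(pid_s)) == int(ticks_s)
```

With a check like this, step 3 above would notice that the PID it read now belongs to a different process, because the recorded start time would no longer match.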
[1]: https://gitlab.isc.org/isc-private/bind9/-/jobs/1007156#L153
[2]: https://gitlab.isc.org/isc-private/bind9/-/jobs/1299877#L3517
[3]: https://wiki.isc.org/bin/view/QA/BindQaResults_9_11_18
[4]: https://stackoverflow.com/questions/26301382/does-windows-7-recycle-process-id-pid-numbers

BIND 9.17 Backburner

https://gitlab.isc.org/isc-projects/kea/-/issues/1550
dhcpv6 server assigns reservations from the pools even if out of pool flag is set
2020-11-19T18:18:43Z
Razvan Becheriu

This was discovered while implementing #1405 and seeing the different behavior from v4 in previously written UTs:
TEST_F(DORATest, reservationModeOutOfPool)
TEST_F(DORATest, reservationIgnoredInOutOfPoolMode)
In v4, before dynamic allocation (from a pool), all reservations from the pools are removed.
In v6, because the retrieval of the reservations is done once for all IA_NAs, there is no filtering, and this must be done later, just before the dynamic allocation from the pools. At this stage the lease type is also available, so proper filtering from the pool is optimal.

kea 1.9.2
Razvan Becheriu

https://gitlab.isc.org/isc-projects/stork/-/issues/453
stabilise selenium tests
2020-11-18T13:33:01Z
Wlodzimierz Wencel

Right now we have random results; figure out how to stabilise the UI system tests.

0.14
Wlodzimierz Wencel

https://gitlab.isc.org/isc-projects/bind9/-/issues/2284
DNAME: synthetized CNAME might be perfect answer to CNAME query
2020-12-01T08:08:22Z
Siva Kesava R Kakarla

### Summary
When there is a `DNAME` record in the zone file (partial rewrite to the same zone), and that record handles the query, then the `RCODE` of the server should probably be different depending on whether the query is for `CNAME` or not. After the `DNAME` rewrite, the new query name belongs to the same zone but doesn't exist. The server currently returns `NXDOMAIN` for all the types, but it might be reasonable to return `NOERROR` for the `CNAME` type.
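The rewrite at the heart of this report can be sketched as follows (an illustrative Python fragment of RFC 6672 DNAME substitution, not BIND's implementation): the labels below the DNAME owner are prepended to the DNAME target, and a CNAME from the original query name to the rewritten name is synthesized.

```python
# Illustrative sketch of DNAME substitution (RFC 6672), not BIND code.
def dname_substitute(qname: str, owner: str, target: str) -> str:
    assert qname.endswith("." + owner), "qname must be below the DNAME owner"
    prefix = qname[: -(len(owner) + 1)]  # labels below the DNAME owner
    return f"{prefix}.{target}"

# The example from this report: a.c.d.campus.edu. is rewritten to
# a.f.d.campus.edu., and the rewritten name does not exist in the zone.
rewritten = dname_substitute("a.c.d.campus.edu.", "c.d.campus.edu.", "f.d.campus.edu.")
```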
### BIND version used
BIND 9.11.3-1ubuntu1.13-Ubuntu (Extended Support Version) <id:a375815>
### Steps to reproduce
Consider the following sample zone file:
| Name               | TTL Type  | RDATA                                                    |
|--------------------|-----------|----------------------------------------------------------|
| campus.edu. | 500 SOA | ns1.campus.edu. root.campus.edu. 3 86400 7200 604800 300 |
| campus.edu. | 500 NS | ns1.outside.edu. |
| c.d.campus.edu. | 500 DNAME | f.d.campus.edu. |
For the query `<a.c.d.campus.edu., CNAME>` the answer from the BIND server is:
```
"opcode QUERY",
"rcode NXDOMAIN",
"flags QR AA",
";QUESTION",
"a.c.d.campus.edu. IN CNAME",
";ANSWER",
"a.c.d.campus.edu. 500 IN CNAME a.f.d.campus.edu.",
"c.d.campus.edu. 500 IN DNAME f.d.campus.edu.",
";AUTHORITY",
";ADDITIONAL"
```
### What is the current *bug* behavior?
The `RCODE` is `NXDOMAIN`.
### What is the expected *correct* behavior?
The answer section would be the same for the above query, but the `RCODE` should likely be `NOERROR`. PowerDNS was doing it that way, but BIND, NSD, and Knot were returning `NXDOMAIN` for all types, so I thought it might be an issue with PowerDNS. I have filed a [GitHub issue](https://github.com/PowerDNS/pdns/issues/9725) with PowerDNS, and they said it is intentional and they are doing it correctly. A patch was also sent to the [Knot server](https://gitlab.nic.cz/knot/knot-dns/-/merge_requests/1217) to return `NOERROR` for the `CNAME` type.

December 2020 (9.11.26, 9.11.26-S1, 9.16.10, 9.16.10-S1, 9.17.8)

https://gitlab.isc.org/isc-projects/kea/-/issues/1549
Logger output_options inheritance not working
2022-11-22T16:18:41Z
Chris

**Describe the bug**
When defining the "kea-dhcp4" logger with output options and further sub-loggers like "kea-dhcp4.alloc-engine" without output options, the sub-loggers' messages won't appear in the file defined in the parent logger.
**To Reproduce**
Steps to reproduce the behavior:
1. Run Kea dhcpv4 with the following logger config
```
"loggers": [
{
"name": "kea-dhcp4",
"severity": "INFO",
#"severity": "DEBUG",
"debuglevel": 55,
"output_options": [
{
"output": "/var/log/kea/kea.log",
#"output": "/tmp/keadebug.log",
"flush": true,
"maxver": 8
}
]
},
{
"name": "kea-dhcp4.alloc-engine",
"severity": "DEBUG",
"debuglevel": 70
}
]
```
2. A client requests an IP
3. There are no log messages from alloc-engine in /var/log/kea/kea.log
**Expected behavior**
Debug-level alloc-engine messages.
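A possible workaround sketch (assuming inheritance is indeed not applied here; it simply repeats the parent's options instead of relying on them being inherited) is to give the sub-logger its own `output_options` pointing at the same file:

```
"loggers": [
    {
        "name": "kea-dhcp4",
        "severity": "INFO",
        "output_options": [
            { "output": "/var/log/kea/kea.log", "flush": true, "maxver": 8 }
        ]
    },
    {
        "name": "kea-dhcp4.alloc-engine",
        "severity": "DEBUG",
        "debuglevel": 70,
        "output_options": [
            { "output": "/var/log/kea/kea.log", "flush": true, "maxver": 8 }
        ]
    }
]
```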
**Environment:**
- Kea version: 1.8.1
- OS: Ubuntu 18.04 x64
- Which features were compiled in: cloudsmith
- If/which hooks were loaded: kea-ha, lease-commands

backlog

https://gitlab.isc.org/isc-projects/kea/-/issues/1548
DNS update improvements: generate hostname (based on hostname in HR or expression)
2022-04-22T19:06:05Z
Tomek Mrugalski
There's a [request on kea-users](https://lists.isc.org/pipermail/kea-users/2019-December/002583.html) to be able to force a DNS update based on the hostname field in the host reservation when the client doesn't send any hostname or FQDN options.
Another, related request is to generate the hostname procedurally, using an expression similar to what we have in flex-option.
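The second request could look something like the sketch below (illustrative only; the function name and naming scheme are invented for this example, not a Kea API): derive a hostname from the assigned address, similar in spirit to an expression-based scheme.

```python
# Illustrative sketch: generate a hostname from the assigned lease address.
# All names here are invented; this is not Kea code.
import ipaddress

def generate_hostname(address: str, suffix: str = "example.com") -> str:
    ip = ipaddress.ip_address(address)  # validates v4/v6 syntax
    label = "host-" + str(ip).replace(".", "-").replace(":", "-")
    return f"{label}.{suffix}"

generate_hostname("192.0.2.17")  # -> "host-192-0-2-17.example.com"
```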
[Support #17550](https://support.isc.org/Ticket/Display.html?id=17550)

kea 2.1.5
Thomas Markwalder