# ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues

## Greedy regular expression causes intermittent "nsupdate" system test failures
https://gitlab.isc.org/isc-projects/bind9/-/issues/3003 (Michał Kępień, 2021-11-10)

One of the checks in the `nsupdate` system test [prepares][1] an
`nsupdate` script by processing a response to a DNSKEY query.
Specifically, it attempts to change the TTL of the DNSKEY RRset (from 10
to 600). However, a greedy regular expression involved in that process
may cause DNSKEY RDATA to be mangled instead of the TTL:
https://gitlab.isc.org/isc-private/bind9/-/jobs/2088895
```
05-Nov-2021 11:50:17.573 received client packet from 10.53.0.3#60245
;; ->>HEADER<<- opcode: UPDATE, status: NOERROR, id: 38838
;; flags:; ZONE: 1, PREREQ: 0, UPDATE: 2, ADDITIONAL: 0
;; ZONE SECTION:
;dnskey.test. IN SOA
;; UPDATE SECTION:
;dnskey.test. 600 IN DNSKEY 256 3 5 (
; AwEAAdS72SeIDeDR/y7ZxEToyLSQ
; Q/rm7f3dQBo/GK8RjRZTjTxMchRW
; itmi/kCJxSOW0rFV/ueWJTwcJbSq
; upYYo1bgNUGNmLDoYfPEDIsClZrK
; jaLjlSWb2v7nYGVuMpLGJX5D2NCm
; QJz5uOQR+b7r/8uSW1eQzodpsLTm
; XQCnuKvj
; ) ; ZSK; alg = RSASHA1 ; key id = 40375
;dnskey.test. 10 IN DNSKEY 257 3 5 (
; AwEAAa600INEzZ8hHtv3d2j5grzq
; 7gAvaWk2TxHTuFhRUuIVJxUNTpTa
; vHvSbZglx/AXSGIIgfXDKd0VVXTa
; sW0eewfCpjNol5Cgfnb+VlO5kmjW
; 6nr1UnLgd+H/sRdG1Ip8amR+D0Xi
; pYmXnOFuO2VvFRBizPlWCFu1sQFr
; sCRYXhB/
; ) ; KSK; alg = RSASHA1 ; key id = 19267
```
Note that the second DNSKEY RR still has a TTL of 10 seconds and
contains the string `600` in its RDATA. Looking at the contents of
`ns3/dnskey.test.db` confirms that the relevant RDATA originally
contained a string matching the regular expression `10.IN`, breaking the
replacement:
```
$TTL 10
dnskey.test. IN SOA dnskey.test. hostmaster.dnskey.test. 1 3600 900 2419200 3600
dnskey.test. IN NS dnskey.test.
dnskey.test. IN A 10.53.0.3
; This is a key-signing key, keyid 18947, for dnskey.test.
; Created: 20211105114907 (Fri Nov 5 11:49:07 2021)
; Publish: 20211105114907 (Fri Nov 5 11:49:07 2021)
; Activate: 20211105114907 (Fri Nov 5 11:49:07 2021)
dnskey.test. IN DNSKEY 257 3 5 AwEAAa100INEzZ8hHtv3d2j5grzq7gAvaWk2TxHTuFhRUuIVJxUNTpTa vHvSbZglx/AXSGIIgfXDKd0VVXTasW0eewfCpjNol5Cgfnb+VlO5kmjW 6nr1UnLgd+H/sRdG1Ip8amR+D0XipYmXnOFuO2VvFRBizPlWCFu1sQFr sCRYXhB/
```
This cannot end well:
```
05-Nov-2021 11:50:17.573 dns_dnssec_findzonekeys2: error reading Kdnskey.test.+005+19267.private: file not found
```
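The failure mode is easy to reproduce in isolation. The snippet below is a sketch of the shape of the substitution (the exact expression used in `tests.sh` is not quoted here, so the pattern is an assumption); the point is that a greedy `.*` combined with an unescaped `.` lets `10.IN` match inside base64 RDATA instead of at the TTL field:

```python
import re

# A DNSKEY RR on one line, as dig might emit it (RDATA shortened).
line = "dnskey.test. 10 IN DNSKEY 257 3 5 AwEAAa100INEzZ8"

# Greedy ".*" backtracks to the LAST position where "10.IN" can still
# match, and the unescaped "." matches any character, so the pattern
# matches "100IN" inside the RDATA rather than "10 IN" in the header.
mangled = re.sub(r"(.*)10.IN", r"\g<1>600 IN", line)
print(mangled)  # dnskey.test. 10 IN DNSKEY 257 3 5 AwEAAa600 INEzZ8

# Anchoring on the field structure avoids the problem: replace only a
# whole whitespace-delimited "10" that is followed by the IN class.
fixed = re.sub(r"^(\S+\s+)10(\s+IN\s)", r"\g<1>600\g<2>", line)
print(fixed)  # dnskey.test. 600 IN DNSKEY 257 3 5 AwEAAa100INEzZ8
```

Any fix along these lines (anchoring to the start of the record, or escaping the `.` and requiring surrounding whitespace) prevents the replacement from reaching into the RDATA.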
[1]: https://gitlab.isc.org/isc-projects/bind9/-/blob/b69dfd6a7503ebb02496e115c3c05cbbf5f5f4bc/bin/tests/system/nsupdate/tests.sh#L751-755

Milestone: December 2021 (9.16.24, 9.16.24-S1, 9.17.21)

## dig and named crash when receiving XFR over TLS
https://gitlab.isc.org/isc-projects/bind9/-/issues/3004 (Cesar Kuroiwa, 2021-12-01)

### Summary
On some occasions, `dig` and `named` crash when making a zone transfer over TLS (client-side). This seems to happen more often with larger zones, where more DNS messages are needed to complete the transfer.
### BIND version used
9.17.19
### See also
* #2986
*Note: privacy-sensitive data was removed from the issue upon making it non-confidential.*

Milestone: December 2021 (9.16.24, 9.16.24-S1, 9.17.21). Assignee: Artem Boldariev

## [ISC-support #19488] IXFR changes committed before failing with "extra data"
https://gitlab.isc.org/isc-projects/bind9/-/issues/3005 (Everett Fulton, 2021-11-30)

Ref: https://support.isc.org/Ticket/Display.html?id=19488

A Support customer has reported an awkward response by BIND to a malformed inbound IXFR from a non-BIND server, in which there are records sent after the closing SOA:
- BIND emits a log indicating an xfr failure due to "extra data"
- Changes excluding the additional RRs are committed to the zone
- No immediate fallback from IXFR to AXFR happened (it triggers a refresh as usual)
Mark A. has mentioned that the journal code was designed around committing a single delta at a time. The journal commit could be made to cover multiple deltas, and be performed only after the check for extra data has passed.
This undesired behavior only occurs with malformed IXFR streams.

Milestone: Not planned

## No statistics available in GUI
https://gitlab.isc.org/isc-projects/stork/-/issues/607 (Bryan Seitz, 2021-12-02)
**Describe the bug**
- I see no statistics for subnets in the Stork GUI.
- One subnet, which is not full, is giving errors when trying to allocate IPs
**To Reproduce**
Steps to reproduce the behavior:
1. Installed KEA + Stork from recommended repos on Ubuntu 20.04
2. Stats hooks are loaded in ctrl agent + dhcp4 server now, still no stats.
KEA Configs:
```
{
"Control-agent": {
"http-host": "127.0.0.1",
"http-port": 8000,
"control-sockets": {
"dhcp4": {
"socket-type": "unix",
"socket-name": "/tmp/kea4-ctrl-socket"
},
"dhcp6": {
"socket-type": "unix",
"socket-name": "/tmp/kea6-ctrl-socket"
}
},
"hooks-libraries": [
{
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_stat_cmds.so",
"parameters": { }
},
{
"library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_lease_cmds.so",
"parameters": { }
}
],
"loggers": [
{
"name": "kea-ctrl-agent",
"output_options": [
{
"output": "/var/log/kea/kea-ctrl-agent.log"
}
],
"severity": "INFO",
"debuglevel": 0
}
]
}
}
```
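One thing worth double-checking in the configuration above: `libdhcp_stat_cmds.so` and `libdhcp_lease_cmds.so` only take effect when loaded by the DHCP servers themselves, yet here the `hooks-libraries` list sits inside the `Control-agent` map. A sketch of the corresponding `kea-dhcp4.conf` fragment (the `Dhcp4` map shown is assumed, not taken from the report):

```
{
    "Dhcp4": {
        "hooks-libraries": [
            { "library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_stat_cmds.so" },
            { "library": "/usr/lib/x86_64-linux-gnu/kea/hooks/libdhcp_lease_cmds.so" }
        ]
    }
}
```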
Stork Configs
```
root@dhcp1:/etc/stork# cat agent.env
# address to bind ie. for listening
STORK_AGENT_ADDRESS=192.168.0.7
STORK_AGENT_PORT=8079
# enable Stork functionality only, i.e. disable Prometheus exporters
STORK_AGENT_LISTEN_STORK_ONLY=true
# enable Prometheus exporters only, i.e. disable Stork functionality
# STORK_AGENT_LISTEN_PROMETHEUS_ONLY=true
# settings for exporting stats to Prometheus
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_ADDRESS=
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_PORT=
# STORK_AGENT_PROMETHEUS_KEA_EXPORTER_INTERVAL=
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_ADDRESS=
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_PORT=
# STORK_AGENT_PROMETHEUS_BIND9_EXPORTER_INTERVAL=
# this is used when agent is automatically registered in Stork server
STORK_AGENT_SERVER_URL=http://192.168.0.7:8080
# STORK_AGENT_ADDRESS=
# skip TLS certificate verification when the Stork Agent connects
# to Kea over TLS and Kea uses self-signed certificates
STORK_AGENT_SKIP_TLS_CERT_VERIFICATION=true
```
```
root@dhcp1:/etc/stork# cat server.env
# database settings
STORK_DATABASE_HOST=192.168.0.5
STORK_DATABASE_PORT=5432
STORK_DATABASE_NAME=stork
STORK_DATABASE_USER_NAME=stork
# empty password is set to avoid prompting user for password to database
STORK_DATABASE_PASSWORD=pw
# ReST API settings
# STORK_REST_HOST=
# STORK_REST_PORT=
# STORK_REST_TLS_CERTIFICATE=
# STORK_REST_TLS_PRIVATE_KEY=
# STORK_REST_TLS_CA_CERTIFICATE=
STORK_REST_STATIC_FILES_DIR=/usr/share/stork/www
```
Dashboard Screenshots:
- https://salty.link/screenshots/user1/0802iv00f.png
- https://salty.link/screenshots/user1/314boia99.png
- https://salty.link/screenshots/user1/9509P7234.png
**Expected behavior**
I expect the leases and the subnet utilization to show up in the Stork dashboard.
**Environment:**
```
root@dhcp1:/etc/stork# kea-dhcp4 -V
2.0.0
tarball
linked with:
log4cplus 2.0.5
OpenSSL 1.1.1l 24 Aug 2021
database:
MySQL backend 12.0, library 8.0.27
PostgreSQL backend 6.2, library 130004
Memfile backend 2.1
```
```
root@dhcp1:/etc/stork# stork-server -v
0.22.0
root@dhcp1:/etc/stork# stork-agent -v
0.22.0
```
- OS: Ubuntu 20.04 x64
```
root@dhcp1:/etc/stork# dpkg --list |grep stork
ii isc-stork-agent 0.22.0.211105072749 amd64 ISC Stork Agent
ii isc-stork-server 0.22.0.211105072749 amd64 ISC Stork Server
root@dhcp1:/etc/stork# dpkg --list |grep kea
ii isc-kea-admin 2.0.0-isc20210927143053 amd64 Administration utilities for ISC Kea DHCP server
ii isc-kea-common 2.0.0-isc20210927143053 amd64 Common libraries for the ISC Kea DHCP server
ii isc-kea-ctrl-agent 2.0.0-isc20210927143053 amd64 ISC Kea DHCP server REST API service
ii isc-kea-dev 2.0.0-isc20210927143053 amd64 Development headers for ISC Kea DHCP server
ii isc-kea-dhcp4-server 2.0.0-isc20210927143053 amd64 ISC Kea IPv4 DHCP server
ii isc-kea-dhcp6-server 2.0.0-isc20210927143053 amd64 ISC Kea IPv6 DHCP server
ii isc-kea-doc 2.0.0-isc20210927143053 all Documentation for ISC Kea DHCP server
ii python3-isc-kea-connector 2.0.0-isc20210927143053 all Python3 management connector for ISC Kea DHCP server
```
Logs:
```
2021-11-05 20:35:31.215 WARN  [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: failed t
o allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:31.215 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: failed to alloc
ate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:31.215 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: Failed
to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:32.081 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: failed t
o allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:32.081 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: failed to alloc
ate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:32.081 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: Failed
to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:32.817 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:1e:3c], cid=[01:94:57:a5:50:1e:3c], tid=0x8: failed t
o allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:32.817 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:1e:3c], cid=[01:94:57:a5:50:1e:3c], tid=0x8: failed to alloc
ate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:32.817 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:1e:3c], cid=[01:94:57:a5:50:1e:3c], tid=0x8: Failed
to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:33.259 INFO [kea-dhcp4.leases/5416.140484751342016] DHCP4_LEASE_ALLOC [hwtype=1 2c:60:0c:d7:fc:dc], cid=[01:2c:60:0c:d7:fc:dc], tid=0x87e34c3e: lease 10.43.25.9 has be
en allocated for 3600 seconds
2021-11-05 20:35:33.696 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 14:02:ec:38:0a:a2], cid=[01:14:02:ec:38:0a:a2], tid=0x8: failed t
o allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:33.696 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 14:02:ec:38:0a:a2], cid=[01:14:02:ec:38:0a:a2], tid=0x8: failed to alloc
ate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:33.696 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 14:02:ec:38:0a:a2], cid=[01:14:02:ec:38:0a:a2], tid=0x8: Failed
to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:33.925 INFO [kea-dhcp4.leases/5416.140484751342016] DHCP4_LEASE_ALLOC [hwtype=1 2c:60:0c:d7:fd:18], cid=[01:2c:60:0c:d7:fd:18], tid=0xdff67c12: lease 10.43.25.8 has be
en allocated for 3600 seconds
2021-11-05 20:35:35.304 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: failed t
o allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:35.304 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: failed to alloc
ate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:35.304 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:0e:56], cid=[01:94:57:a5:50:0e:56], tid=0x8: Failed
to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
2021-11-05 20:35:36.171 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: failed t
o allocate an IPv4 address in the subnet 10.43.60.0/26, subnet-id 4, shared network
2021-11-05 20:35:36.171 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: failed to alloc
ate an IPv4 address after 61 attempt(s)
2021-11-05 20:35:36.171 WARN [kea-dhcp4.alloc-engine/5416.140484751342016] ALLOC_ENGINE_V4_ALLOC_FAIL_CLASSES [hwtype=1 94:57:a5:50:0e:64], cid=[01:94:57:a5:50:0e:64], tid=0x8: Failed
to allocate an IPv4 address for client with classes: ALL, VENDOR_CLASS_CPQRIB3, UNKNOWN
```
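An aside on the numbers in these logs (illustrative, not part of the original report): the failing pool is a /26, and "after 61 attempt(s)" is very close to the number of usable host addresses in such a subnet, which may be relevant when judging whether the subnet is effectively exhausted. The arithmetic, using Python's `ipaddress` module:

```python
import ipaddress

# The subnet named in the ALLOC_ENGINE_V4_ALLOC_FAIL_SUBNET messages.
subnet = ipaddress.ip_network("10.43.60.0/26")

# Total addresses in a /26, and usable hosts (excluding the network
# and broadcast addresses).
print(subnet.num_addresses)       # 64
print(len(list(subnet.hosts())))  # 62
```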
**Contacting you**
How can ISC reach you to discuss this matter further? If you do not specify any means such as
e-mail, jabber id or a telephone, we may send you a message on github with questions when we have
them.
Email / Gitlab / Github are fine.

Milestone: 1.0. Assignee: Marcin Siodelski

## DNS resolution fails temporarily
https://gitlab.isc.org/isc-projects/bind9/-/issues/3007 (K V, 2021-11-08)

We are using two named servers in our Production system.
BIND 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.3 (Extended Support Version) on Linux x86_64 4.9.215-36.el7.x86_64
Recently, we started to see a trend where DNS resolution fails during a specific time period for random domain names (out of over 100 records). Each record may fail for at most 10 minutes. At all other times, it works absolutely fine.
AT THE TIME OF ISSUE:
```
dig @MY_DNS_SERVER docker.mycompany.net
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.3 <<>> @MY_DNS_SERVER docker.mycompany.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 33467
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;docker.mycompany.net. IN A
;; AUTHORITY SECTION:
mycompany.net. 58 IN SOA ns-604.awsdns-11.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
;; Query time: 0 msec
;; SERVER: MY_DNS_SERVER#53(MY_DNS_SERVER)
;; WHEN: Thu Nov 04 10:22:01 UTC 2021
;; MSG SIZE rcvd: 128
```
NORMAL TIMES:
```
dig @MY_DNS_SERVER docker.mycompany.net
; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.3 <<>> @MY_DNS_SERVER docker.mycompany.net
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28581
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 4, ADDITIONAL: 7
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;docker.mycompany.net. IN A
;; ANSWER SECTION:
docker.mycompany.net. 54 IN A PROPER IP
docker.mycompany.net. 54 IN A PROPER IP
;; AUTHORITY SECTION:
mycompany.net. 81323 IN NS ns-16.awsdns-02.com.
mycompany.net. 81323 IN NS ns-1158.awsdns-16.org.
mycompany.net. 81323 IN NS ns-604.awsdns-11.net.
mycompany.net. 81323 IN NS ns-1731.awsdns-24.co.uk.
;; ADDITIONAL SECTION:
ns-1158.awsdns-16.org. 47096 IN A 205.251.196.134
ns-16.awsdns-02.com. 25941 IN A 205.251.192.16
ns-604.awsdns-11.net. 81323 IN A 205.251.194.92
ns-1158.awsdns-16.org. 68043 IN AAAA 2600:9000:5304:8600::1
ns-16.awsdns-02.com. 60863 IN AAAA 2600:9000:5300:1000::1
ns-1731.awsdns-24.co.uk. 38132 IN AAAA 2600:9000:5306:c300::1
;; Query time: 0 msec
;; SERVER: MY_DNS_SERVER#53(MY_DNS_SERVER)
;; WHEN: Thu Nov 04 10:29:03 UTC 2021
;; MSG SIZE rcvd: 347
```
The BIND cache metrics show a trend where usage starts to increase exactly around the start of the issue (2:30 pm IST). Though DNS resolution recovers within an hour, the graph shows a continuous upward trend which decreases only after midnight that day.
![BIND_CACHE_METRICS](/uploads/e26bda06452d8554a835d37e73128beb/BIND_CACHE_METRICS.PNG)
This is badly affecting Production users. Please share your suggestions as soon as possible.

## readthedocs build fails
https://gitlab.isc.org/isc-projects/stork/-/issues/608 (Andrei Pavel, 2021-11-09)

https://readthedocs.org/projects/kea/builds/15209538/
```
TypeError: 'generator' object is not subscriptable
```
[readthedocs official statement](https://blog.readthedocs.com/build-errors-docutils-0-18/): upgrade to 3.0 or later

Milestone: 1.0. Assignee: Andrei Pavel

## configurable TKEY-exchange timeout
https://gitlab.isc.org/isc-projects/kea/-/issues/2174 (Razvan Becheriu, 2021-12-09)

Configure the IOFetch timeout.

Milestone: kea2.1.1. Assignee: Razvan Becheriu

## Fix rekey timer computing
https://gitlab.isc.org/isc-projects/kea/-/issues/2175 (Francis Dupont, 2021-12-09)

The rekey timer should use a date based on the newest key (vs just the configured interval).

Milestone: kea2.1.1. Assignee: Francis Dupont

## remove very old GSS-TSIG keys
https://gitlab.isc.org/isc-projects/kea/-/issues/2177 (Francis Dupont, 2021-12-09)

The current proposal is to remove keys which expired more than 2 maximum lifetimes ago.

Milestone: kea2.1.1. Assignee: Francis Dupont

## handle credentials with short lifetime
https://gitlab.isc.org/isc-projects/kea/-/issues/2176 (Francis Dupont, 2021-12-10)

Design to do in order to decide the best behavior. Note that when correctly configured, this should never happen.
- [x] implement the code changes (https://gitlab.isc.org/isc-private/kea-premium/-/merge_requests/236)
- [x] update the examples (!1477)
- [ ] document the changes in ARM (!1485)

Milestone: kea2.1.2. Assignee: Francis Dupont

## ARM example configs incorrectly show heartbeat-delay and max-response-delay both set equal
https://gitlab.isc.org/isc-projects/kea/-/issues/2178 (Thomas Markwalder, 2021-11-11)

We have HA example configs like this:
```
:
"heartbeat-delay": 10000,
"max-response-delay": 10000,
:
```
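A contrasting fragment with illustrative values (60000 is the documented default for `max-response-delay`; this is a sketch of what the examples should show, not a quote from the ARM):

```
:
"heartbeat-delay": 10000,
"max-response-delay": 60000,
:
```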
We specifically tell people that max-response-delay should be substantially larger than heartbeat-delay. It defaults to 60000. We need to fix these configs because users are using them verbatim.

## Bind9 going down with error rbtdb->next_serial
https://gitlab.isc.org/isc-projects/bind9/-/issues/3008 (Aleksandr Nikitin, 2021-11-08)

### Summary
I have several BIND 9 instances, and some of them go down with this error:
```
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: ../../../lib/dns/rbtdb.c:1497: fatal error:
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: RUNTIME_CHECK(rbtdb->next_serial != 0) failed
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: exiting (due to fatal error in library)
Nov 04 02:30:46 vm-name systemd[1]: bind9.service: Main process exited, code=killed, status=6/ABRT
Nov 04 02:30:46 vm-name systemd[1]: bind9.service: Failed with result 'signal'.
```
### BIND version used
```
BIND 9.11.5-P4-5.1+deb10u6-Debian (Extended Support Version) <id:998753c>
running on Linux x86_64 4.19.0-18-cloud-amd64 #1 SMP Debian 4.19.208-1 (2021-09-29)
built by make with '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=/usr/include' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--disable-silent-rules' '--libdir=/usr/lib/x86_64-linux-gnu' '--libexecdir=/usr/lib/x86_64-linux-gnu' '--disable-maintainer-mode' '--disable-dependency-tracking' '--libdir=/usr/lib/x86_64-linux-gnu' '--sysconfdir=/etc/bind' '--with-python=python3' '--localstatedir=/' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-gost=no' '--with-openssl=/usr' '--with-gssapi=/usr' '--disable-isc-spnego' '--with-libidn2' '--with-libjson=/usr' '--with-lmdb=/usr' '--with-gnu-ld' '--with-geoip=/usr' '--with-atf=no' '--enable-ipv6' '--enable-rrl' '--enable-filter-aaaa' '--enable-native-pkcs11' '--with-pkcs11=/usr/lib/softhsm/libsofthsm2.so' '--with-randomdev=/dev/urandom' '--enable-dnstap' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fdebug-prefix-map=/build/bind9-gHNcz0/bind9-9.11.5.P4+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -fno-strict-aliasing -fno-delete-null-pointer-checks -DNO_VERSION_DATE -DDIG_SIGCHASE' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2'
compiled by GCC 8.3.0
compiled with OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
linked to OpenSSL version: OpenSSL 1.1.1d 10 Sep 2019
compiled with libxml2 version: 2.9.4
linked to libxml2 version: 20904
compiled with libjson-c version: 0.12.1
linked to libjson-c version: 0.12.1
threads support is enabled
```
### Steps to reproduce
Run BIND 9 with the configuration files given below.
### What is the current *bug* behavior?
BIND 9 fails and does not automatically restart:
```
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: ../../../lib/dns/rbtdb.c:1497: fatal error:
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: RUNTIME_CHECK(rbtdb->next_serial != 0) failed
Nov 04 02:30:46 vm-name named[10257]: 04-Nov-2021 02:30:46.319 general: critical: exiting (due to fatal error in library)
Nov 04 02:30:46 vm-name systemd[1]: bind9.service: Main process exited, code=killed, status=6/ABRT
Nov 04 02:30:46 vm-name systemd[1]: bind9.service: Failed with result 'signal'.
```
### What is the expected *correct* behavior?
BIND 9 should not go down.
### Relevant configuration files
My configuration is:
named.conf
```
key key.for.internal.domain {
algorithm HMAC-MD5;
secret "[masked]";
};
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.local";
include "/etc/bind/named.conf.default-zones";
```
named.conf.options
```
acl "private" {
127.0.0.1/32; # localhost
172.16.1.0/24; # wg
};
options {
directory "/var/cache/bind";
allow-query { private; };
recursion yes;
forwarders {
127.0.0.1 port 5053;
1.1.1.1;
1.0.0.1;
8.8.8.8;
8.8.4.4;
};
dnssec-enable no;
dnssec-validation no;
listen-on { any; };
listen-on-v6 { none; };
};
```
named.conf.local
```
zone "dc1.internal.domain" {
type master;
file "/etc/bind/dc1.internal.domain.db";
allow-update { key "key.for.internal.domain"; };
};
zone "156.10.in-addr.arpa" {
type master;
file "/etc/bind/156.10.in-addr.arpa.db";
allow-update { key "key.for.internal.domain"; };
};
zone "dc2.internal.domain" {
type slave;
file "/etc/bind/slave_dc2.internal.domain.db";
masters { 10.60.0.1; };
};
zone "60.10.in-addr.arpa" {
type slave;
file "/etc/bind/slave_60.10.in-addr.arpa.db";
masters { 10.60.0.1; };
};
zone "dc3.internal.domain" {
type slave;
file "/etc/bind/slave_dc3.internal.domain.db";
masters { 10.200.0.1; };
};
zone "200.10.in-addr.arpa" {
type slave;
file "/etc/bind/slave_200.10.in-addr.arpa.db";
masters { 10.200.0.1; };
};
zone "dc4.internal.domain" {
type slave;
file "/etc/bind/slave_dc4.internal.domain.db";
masters { 10.90.0.1; };
};
zone "90.10.in-addr.arpa" {
type slave;
file "/etc/bind/slave_90.10.in-addr.arpa.db";
masters { 10.90.0.1; };
};
zone "dc5.internal.domain" {
type slave;
file "/etc/bind/slave_dc5.internal.domain.db";
masters { 10.9.96.1; };
};
zone "9.10.in-addr.arpa" {
type slave;
file "/etc/bind/slave_9.10.in-addr.arpa.db";
masters { 10.9.96.1; };
};
```
named.conf.default-zones
```
zone "." {
type hint;
file "/usr/share/dns/root.hints";
};
zone "internal.domain" {
type master;
file "/etc/bind/internal.domain.db";
};
zone "another.domain" {
type master;
file "/etc/bind/another.domain.db";
};
zone "third.domain" {
type master;
file "/etc/bind/third.domain.db";
};
zone "fourth.domain" {
type master;
file "/etc/bind/fourth.domain.db";
};
```

## Set -DOPENSSL_SUPPRESS_DEPRECATED for 9.16 and 9.11
https://gitlab.isc.org/isc-projects/bind9/-/issues/3009 (Mark Andrews, 2022-03-01)

Given we are not planning to back-port the OpenSSL 3.0 changes to 9.16 and 9.11, perhaps we should just silence the deprecation warnings on these branches, as they impact --enable-developer / --enable-warn-error.

Milestone: December 2021 (9.16.24, 9.16.24-S1, 9.17.21)

## ThreadSanitizer: data race in closedir
https://gitlab.isc.org/isc-projects/bind9/-/issues/3010 (Michal Nowak, 2021-12-23)

Same issue as isc-projects/bind9#2457, [this time on Debian 11 with Clang](https://gitlab.isc.org/isc-projects/bind9/-/jobs/2094359):
```
WARNING: ThreadSanitizer: data race
Write of size 8 at 0x000000000001 by thread T1:
#0 closedir <null>
#1 isc_dir_close lib/isc/dir.c:134:8
#2 dns_dnssec_findmatchingkeys lib/dns/dnssec.c:1514:3
#3 zone_rekey lib/dns/zone.c:21567:11
#4 zone_maintenance lib/dns/zone.c:11386:4
#5 zone_timer lib/dns/zone.c:15039:2
#6 task_run lib/isc/task.c:827:5
#7 isc_task_run lib/isc/task.c:907:10
#8 isc__nm_async_task lib/isc/netmgr/netmgr.c:834:11
#9 process_netievent lib/isc/netmgr/netmgr.c
#10 process_queue lib/isc/netmgr/netmgr.c:1007:16
#11 process_all_queues lib/isc/netmgr/netmgr.c:753:25
#12 async_cb lib/isc/netmgr/netmgr.c:782:6
#13 <null> <null>
#14 isc__trampoline_run lib/isc/trampoline.c:185:11
Previous read of size 8 at 0x000000000001 by thread T2:
#0 epoll_ctl <null>
#1 <null> <null>
#2 uv_run <null>
#3 isc__trampoline_run lib/isc/trampoline.c:185:11
Location is file descriptor 105 created by thread T2 at:
#0 accept4 <null>
#1 <null> <null>
#2 isc__trampoline_run lib/isc/trampoline.c:185:11
Thread T2 (running) created by main thread at:
#0 pthread_create <null>
#1 isc_thread_create lib/isc/thread.c:79:8
#2 isc__netmgr_create lib/isc/netmgr/netmgr.c:328:3
#3 isc_managers_create lib/isc/managers.c:36:2
#4 create_managers bin/named/main.c:920:11
#5 setup bin/named/main.c:1184:11
#6 main bin/named/main.c:1452:2
Thread T2 (running) created by main thread at:
#0 pthread_create <null>
#1 isc_thread_create lib/isc/thread.c:79:8
#2 isc__netmgr_create lib/isc/netmgr/netmgr.c:328:3
#3 isc_managers_create lib/isc/managers.c:36:2
#4 create_managers bin/named/main.c:920:11
#5 setup bin/named/main.c:1184:11
#6 main bin/named/main.c:1452:2
SUMMARY: ThreadSanitizer: data race in closedir
```
A blocker for base image upgrade from Debian 10 to Debian 11 (isc-projects/bind9!5367).
The solution is to have a custom libuv for Debian 11, as we already have for Fedora (https://gitlab.isc.org/isc-projects/images/-/merge_requests/112/diffs?commit_id=dd3c9f1d49f98a8d3584ef73478da8e9fdc2fd84).

Milestone: January 2022 (9.16.25, 9.16.25-S1, 9.17.22). Assignee: Michal Nowak

## ddns and active directory setup configuration and error
https://gitlab.isc.org/isc-projects/kea/-/issues/2179 (Wlodzimierz Wencel, 2021-11-09)

Add documentation about setting Windows up, and include the cryptic AD error.

Milestone: kea2.1.1. Assignee: Wlodzimierz Wencel

## rake system_tests freezes in CI
https://gitlab.isc.org/isc-projects/stork/-/issues/612 (Andrei Pavel, 2021-11-09)

Times out at 5 minutes.

```
tests.py::test_users_management[ubuntu/18.04-centos/8]
************ START tests.py::test_users_management[ubuntu/18.04-centos/8] **************************************************************
FAILED
************ RESULT FAILED tests.py::test_users_management[ubuntu/18.04-centos/8] took 0:05:00 ****************************************
```

Milestone: 1.0

## follow up to 0.22 sanity checks
https://gitlab.isc.org/isc-projects/stork/-/issues/613 (Andrei Pavel, 2021-12-09)

* [x] A minor issue. The Stork Agent help prints "A new cli application":
```
$ ./stork-agent --help
NAME:
Stork Agent - A new cli application
```
It is no longer a new application. Shouldn't it rather be "Stork Agent - A Stork application monitoring Kea and BIND9 servers"?
* [x] We seem to be a bit inconsistent how we define boolean values in the env files. In the agent.env we have:
```
# skip TLS certificate verification when the Stork Agent connects
# to Kea over TLS and Kea uses self-signed certificates
# STORK_AGENT_SKIP_TLS_CERT_VERIFICATION=true
```
In the server.env we have:
```
# Enable Prometheus /metrics HTTP endpoint for exporting metrics from
# the server to Prometheus. It is recommended to secure this endpoint
# (e.g. using HTTP proxy).
# STORK_ENABLE_METRICS=1
```

Milestone: 1.1. Assignee: Marcin Siodelski

## Kea stats for prometheus
https://gitlab.isc.org/isc-projects/stork/-/issues/614 (Peter Davies, 2022-02-28)
In Kea installations with a very large number of subnets, the user may only be interested in employing the Stork agent to collect global statistics to export to Prometheus.
To limit the load on the Kea server, it could be useful if one could configure Stork to collect only the global statistics and not the subnet lists.
Perhaps via a configurable setting such as: “STORK_AGENT_PROMETHEUS_KEA_EXPORTER_STATS_ONLY”
[RT #18748](https://support.isc.org/Ticket/Display.html?id=18748)

Milestone: 1.2. Assignee: Slawek Figiel

## access denied in deploy_demo
https://gitlab.isc.org/isc-projects/stork/-/issues/615 (Andrei Pavel, 2021-12-02)

https://stork.lab.isc.org/ has been offline for a while because of the following error in `deploy_demo`:
https://gitlab.isc.org/isc-projects/stork/-/jobs/2095564
```
Pushing agent-kea-premium (registry.gitlab.isc.org/isc-private/stork/agent-kea-premium:latest)...
The push refers to repository [registry.gitlab.isc.org/isc-private/stork/agent-kea-premium]
denied: requested access to the resource is denied
rake aborted!
Command failed with status (1): [docker-compose -f docker-compose.yaml -f d...]
/builds/isc-projects/stork/Rakefile:672:in `block in <top (required)>'
Tasks: TOP => build_and_push_demo_images
```

Milestone: 1.0

## Stork-Server didn't show correct HA State in Dashboard
https://gitlab.isc.org/isc-projects/stork/-/issues/616 (Thorsten Krohn, 2024-02-26)

Hi,
I have just upgraded from Kea 1.8.2 to Kea 2.0.0, and now the HA state shown in the Dashboard is incorrect:
![image](/uploads/2233e90dee484c94b6d7c8114d343546/image.png)
But in the Kea app view for both servers, everything is fine:
![image](/uploads/4bcc0f10c70eb9dc22eafbdd84fcc33b/image.png)
I switched the Kea server running 2.0.0 to multithreaded HA.
Server and client versions are 0.22.0.211105072749 on Debian Buster.

Label: analysis-in-progress