ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues

Debug/verbose mode to db migration
https://gitlab.isc.org/isc-projects/stork/-/issues/366 (Tomek Mrugalski, updated 2021-03-05)
Milestone: 0.12 | Assignee: Tomek Mrugalski

Our DB schema is not documented anywhere and it's stored in .go files. There was one incident when a migration failed and it was difficult to debug what exactly was going on. We need a `--debug` or `--verbose` flag that would print each DB migration schema before it's actually applied.
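
A sketch of what this might look like in practice, assuming the `stork-db-migrate` tool from the Stork documentation; the `--verbose` flag and the output below are hypothetical, since the issue only proposes them:

```
$ stork-db-migrate --verbose up
migrating schema from version 13 to 14, applying:
  -- illustrative DDL, not the real Stork schema
  ALTER TABLE machine ADD COLUMN last_visited timestamp;
migration applied: 13 -> 14
```

With each statement echoed before execution, a failed migration immediately shows which statement was being applied.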
not monitored services should not be shown in the dashboard
https://gitlab.isc.org/isc-projects/stork/-/issues/365 (Tomek Mrugalski, updated 2020-09-02)
Milestone: 0.11 | Assignee: Tomek Mrugalski

The services that are not monitored (e.g. dhcpv6 on agent-kea) are still shown on a dashboard. They shouldn't be.
![Screenshot_2020-08-11_at_11.40.23](/uploads/762c86ee31b69c60a31ce25e3fff9c87/Screenshot_2020-08-11_at_11.40.23.png)

App refresh mechanism keeps tossing information about log files
https://gitlab.isc.org/isc-projects/stork/-/issues/364 (Marcin Siodelski, updated 2020-08-12)
Milestone: 0.10 | Assignee: Marcin Siodelski

In order to reproduce the bug:
- Start with rake docker_up
- Add agent-kea
- Navigate to the Kea app
- Click on the log file which will take you to the log viewer
- Click refresh button
You're going to see an error message saying that the log file with the given id doesn't exist. It seems like we have some background task updating the app information which keeps tossing the log files and adding them back with different ids. If you now:
- click back in the browser
- click again on the same log file
you will see that the log file ID has changed. That's what seems to be causing the error message to be displayed.

Add tool tip for RPS columns on DHCP dashboard
https://gitlab.isc.org/isc-projects/stork/-/issues/363 (Thomas Markwalder, updated 2020-08-10)
Milestone: 0.10 | Assignee: Thomas Markwalder

The DHCP dashboard needs tool tips for the RPS columns.

missing deb dependency: isc-kea-ctrl-agent should require python3-isc-kea-connector
https://gitlab.isc.org/isc-projects/kea/-/issues/1378 (Tomek Mrugalski, updated 2020-08-25)
Milestone: kea1.8.0

Here's what happened on Ubuntu 20.04:
```
#apt install isc-kea-ctrl-agent
# which kea-shell
/usr/sbin/kea-shell
root@tycho:/etc/kea# kea-shell
Traceback (most recent call last):
File "/usr/sbin/kea-shell", line 27, in <module>
from kea_conn import CARequest # CAResponse
ModuleNotFoundError: No module named 'kea_conn'
root@tycho:/etc/kea# dpkg -S kea-shell
isc-kea-ctrl-agent: /usr/sbin/kea-shell
isc-kea-ctrl-agent: /usr/share/man/man8/kea-shell.8.gz
```
To get the kea_conn module, you need to install python3-isc-kea-connector.
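
A sketch of the likely fix in the package's `debian/control` stanza; the surrounding fields are illustrative, and the point is the added `python3-isc-kea-connector` entry in Depends:

```
Package: isc-kea-ctrl-agent
Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends}, python3-isc-kea-connector
Description: Kea Control Agent, including the kea-shell client
```

Until the packaging is fixed, `apt install python3-isc-kea-connector` is the manual workaround.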
when client has global reservation without an address kea4 is not checking for reservation on subnet level
https://gitlab.isc.org/isc-projects/kea/-/issues/1377 (Wlodzimierz Wencel, updated 2020-09-01)
Milestone: kea1.9.0

configuration:
```
{
"Dhcp4": {
"client-classes": [
{
"name": "special"
},
{
"name": "NOTspecial",
"test": "not member('special')"
}
],
"hooks-libraries": [],
"interfaces-config": {
"interfaces": [
"enp0s9"
]
},
"lease-database": {
"type": "memfile"
},
"loggers": [
{
"debuglevel": 99,
"name": "kea-dhcp4",
"output_options": [
{
"output": "/home/wlodek/installed/git-thread/var/log/kea.log"
}
],
"severity": "DEBUG"
}
],
"multi-threading": {
"enable-multi-threading": true,
"packet-queue-size": 16,
"thread-pool-size": 2
},
"option-data": [],
"rebind-timer": 2000,
"renew-timer": 1000,
"reservation-mode": "global",
"reservations": [
{
"client-classes": [
"special"
],
"hw-address": "ff:01:02:03:ff:04"
}
],
"shared-networks": [
{
"interface": "enp0s9",
"name": "name-abc",
"subnet4": [
{
"client-class": "NOTspecial",
"interface": "enp0s9",
"pools": [
{
"pool": "192.168.50.1-192.168.50.50"
}
],
"subnet": "192.168.50.0/24"
},
{
"client-class": "special",
"interface": "enp0s9",
"pools": [
{
"pool": "192.168.51.1-192.168.51.50"
}
],
"reservation-mode": "all",
"reservations": [
{
"hw-address": "ff:01:02:03:ff:04",
"ip-address": "192.168.51.200"
}
],
"subnet": "192.168.51.0/24"
}
]
}
],
"subnet4": [],
"valid-lifetime": 4000
}
}
```
Scenario: the client has two reservations: one global, with a class that manipulates subnet selection within the shared network, and a second one at the subnet level with a specific address.
Problem: the client is getting an address from the correct subnet (the global reservation works), but Kea is not checking for the second reservation and assigns an address from the regular pool.
Full logs are attached: [kea.log](/uploads/9f9617fa86188f19ff4a3e5dd8735a69/kea.log)
This behaviour was introduced in https://gitlab.isc.org/isc-projects/kea/-/issues/1139; a similar scenario at the pool level works perfectly.

manipulating subnet choice from shared network with class from global reservation does not work in kea v6
https://gitlab.isc.org/isc-projects/kea/-/issues/1376 (Wlodzimierz Wencel, updated 2020-09-01)
Milestone: kea1.9.0

configuration:
```
{
"Dhcp6": {
"client-classes": [
{
"name": "special"
},
{
"name": "NOTspecial",
"test": "not member('special')"
}
],
"hooks-libraries": [],
"interfaces-config": {
"interfaces": [
"enp0s9"
]
},
"lease-database": {
"type": "memfile"
},
"loggers": [
{
"debuglevel": 99,
"name": "kea-dhcp6",
"output_options": [
{
"output": "/home/wlodek/installed/git-thread/var/log/kea.log"
}
],
"severity": "DEBUG"
}
],
"multi-threading": {
"enable-multi-threading": true,
"packet-queue-size": 16,
"thread-pool-size": 2
},
"option-data": [],
"preferred-lifetime": 3000,
"rebind-timer": 2000,
"renew-timer": 1000,
"reservation-mode": "global",
"reservations": [
{
"client-classes": [
"special"
],
"hw-address": "01:02:03:04:05:07"
}
],
"shared-networks": [
{
"interface": "enp0s9",
"name": "name-abc",
"subnet6": [
{
"client-class": "NOTspecial",
"interface": "enp0s9",
"pools": [
{
"pool": "2001:db8:a::1-2001:db8:a::1"
}
],
"subnet": "2001:db8:a::/64"
},
{
"client-class": "special",
"interface": "enp0s9",
"pools": [
{
"pool": "2001:db8:b::1-2001:db8:b::1"
}
],
"reservation-mode": "all",
"reservations": [
{
"hw-address": "01:02:03:04:05:07",
"ip-addresses": [
"2001:db8:a::1111"
]
}
],
"subnet": "2001:db8:b::/64"
}
]
}
],
"subnet6": [],
"valid-lifetime": 4000
}
}
```
Scenario:
The client has a global reservation with a class; with that class it should get a specific subnet from the shared network, and then, with the reservation at the subnet level, it should get a specific address from that subnet.
logs of address selection:
```
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.dhcpsrv/18466.140536017057536] DHCPSRV_CFGMGR_SUBNET6_IFACE selected subnet 2001:db8:a::/64 for packet received over interface enp0s9
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.packets/18466.140536017057536] DHCP6_SUBNET_SELECTED duid=[00:03:00:01:01:02:03:04:05:07], tid=0xdfecd6: the subnet with ID 1 was selected for client assignments
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.packets/18466.140536017057536] DHCP6_SUBNET_DATA duid=[00:03:00:01:01:02:03:04:05:07], tid=0xdfecd6: the selected subnet details: 2001:db8:a::/64
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.hosts/18466.140536017057536] HOSTS_CFG_GET_ONE_SUBNET_ID_IDENTIFIER get one host with IPv6 reservation for subnet id 0, identified by hwaddr=010203040507
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.hosts/18466.140536017057536] HOSTS_CFG_GET_ALL_IDENTIFIER get all hosts with reservations using identifier: hwaddr=010203040507
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.hosts/18466.140536017057536] HOSTS_CFG_GET_ALL_IDENTIFIER_HOST using identifier: hwaddr=010203040507, found host: hwaddr=010203040507 ipv6_subnet_id=2 hostname=(empty) ipv4_reservation=(no) siaddr=(no) sname=(empty) file=(empty) key=(empty) ipv6_reservation0=2001:db8:a::1111
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.hosts/18466.140536017057536] HOSTS_CFG_GET_ALL_IDENTIFIER_HOST using identifier: hwaddr=010203040507, found host: hwaddr=010203040507 ipv6_subnet_id=0 hostname=(empty) ipv4_reservation=(no) siaddr=(no) sname=(empty) file=(empty) key=(empty) ipv6_reservations=(none) dhcp6_class0=special
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.hosts/18466.140536017057536] HOSTS_CFG_GET_ALL_IDENTIFIER_COUNT using identifier hwaddr=010203040507, found 2 host(s)
2020-08-10 02:20:11.281 DEBUG [kea-dhcp6.hosts/18466.140536017057536] HOSTS_CFG_GET_ONE_SUBNET_ID_IDENTIFIER_HOST using subnet id 0 and identifier hwaddr=010203040507, found host: hwaddr=010203040507 ipv6_subnet_id=0 hostname=(empty) ipv4_reservation=(no) siaddr=(no) sname=(empty) file=(empty) key=(empty) ipv6_reservations=(none) dhcp6_class0=special
2020-08-10 02:20:11.282 DEBUG [kea-dhcp6.dhcp6/18466.140536017057536] DHCP6_CLASS_ASSIGNED duid=[00:03:00:01:01:02:03:04:05:07], tid=0xdfecd6: client packet has been assigned to the following class(es): ALL, special
2020-08-10 02:20:11.282 DEBUG [kea-dhcp6.eval/18466.140536017057536] EVAL_DEBUG_MEMBER Checking membership of 'special', pushing result 'true'
2020-08-10 02:20:11.282 DEBUG [kea-dhcp6.eval/18466.140536017057536] EVAL_DEBUG_NOT Popping 'true' pushing 'false'
2020-08-10 02:20:11.282 DEBUG [kea-dhcp6.dhcp6/18466.140536017057536] EVAL_RESULT Expression NOTspecial evaluated to 0
2020-08-10 02:20:11.282 DEBUG [kea-dhcp6.dhcp6/18466.140536017057536] DHCP6_CLASS_ASSIGNED duid=[00:03:00:01:01:02:03:04:05:07], tid=0xdfecd6: client packet has been assigned to the following class(es): KNOWN
2020-08-10 02:20:11.282 DEBUG [kea-dhcp6.leases/18466.140536017057536] DHCP6_PROCESS_IA_NA_REQUEST duid=[00:03:00:01:01:02:03:04:05:07], tid=0xdfecd6: server is processing IA_NA option with iaid=41204 and hint=2001:db8:b::1
2020-08-10 02:20:11.282 DEBUG [kea-dhcp6.dhcpsrv/18466.140536017057536] DHCPSRV_MEMFILE_GET_IAID_DUID obtaining IPv6 leases for IAID 41204 and DUID 00:03:00:01:01:02:03:04:05:07 and lease type IA_NA
2020-08-10 02:20:11.282 DEBUG [kea-dhcp6.alloc-engine/18466.140536017057536] ALLOC_ENGINE_V6_ALLOC_NO_LEASES_HR no leases found but reservations exist for client duid=[00:03:00:01:01:02:03:04:05:07], tid=0xdfecd6
2020-08-10 02:20:11.283 DEBUG [kea-dhcp6.alloc-engine/18466.140536017057536] ALLOC_ENGINE_V6_ALLOC_UNRESERVED no static reservations available - trying to dynamically allocate leases for client duid=[00:03:00:01:01:02:03:04:05:07], tid=0xdfecd6
2020-08-10 02:20:11.283 DEBUG [kea-dhcp6.dhcpsrv/18466.140536017057536] DHCPSRV_MEMFILE_GET_ADDR6 obtaining IPv6 lease for address 2001:db8:b::1 and lease type IA_NA
```
Full logs are attached: [kea.log](/uploads/5b9b4f9cf4fcae04ea438824c405ae4f/kea.log)
The feature was introduced in https://gitlab.isc.org/isc-projects/kea/-/issues/1139 and works perfectly at the pool level :)

Make possible logging of NXDOMAIN authoritative responses
https://gitlab.isc.org/isc-projects/bind9/-/issues/2069 (Petr Menšík, updated 2020-08-07)

### Description
Make possible logging of NXDOMAIN responses
### Request
It was requested on a [Red Hat bug](https://bugzilla.redhat.com/show_bug.cgi?id=1845672). I have found it is only possible to log nxdomain responses on a recursive server. But I haven't found a way to configure an authoritative server to log NXDOMAIN results of queries.
I think DNSTAP could also be used for this, but it seems unnecessarily complicated. Is there a way to log queries which got an unsuccessful response?
Could a new authoritative-errors channel be created, with a default configuration pointing to the null sink? It would then be possible to redirect it to a log file. I am aware an NXDOMAIN response is not exactly an error, just a statement that such a name does not exist. But what if I am interested in the most frequent non-existing names in my domains?
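
A sketch of how such a channel could be wired up in `named.conf`; the channel syntax is standard, while the `authoritative-errors` category is the hypothetical one proposed here and does not exist in BIND:

```
logging {
	channel nxdomain-log {
		file "/var/log/named/nxdomain.log" versions 3 size 20m;
		severity info;
		print-time yes;
	};
	// Hypothetical category from this proposal; defaulting it to the
	// null sink would keep current behaviour unless redirected here.
	category authoritative-errors { nxdomain-log; };
};
```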
### Links / references

Improve Database reconnect logic
https://gitlab.isc.org/isc-projects/kea/-/issues/1375 (Thomas Markwalder, updated 2021-01-22)
Milestone: kea1.9.3 | Assignee: Razvan Becheriu

Currently, when kea-dhcp4/6 servers lose connectivity to any of their backends (lease, host, or CB), the reconnect logic attempts to reconnect to all of them, whether they were lost or not. This is not the most efficient or flexible thing to do.
The reconnect logic is here:
`ControlledDhcpv*Srv::dbReconnect(ReconnectCtlPtr db_reconnect_ctl)`
This function is the lost db callback function and is invoked by the DatabaseConnection that suffers the failure. To improve behavior we will likely need more information passed in via db_reconnect_ctl, such that the above function can identify which backend has been lost and reconnect only that one.
We have a support customer that suggests we might want the ability to treat the loss of CB as non-fatal:
https://support.isc.org/Ticket/Display.html?id=16862

Stork web interface default user/password doesn't work on Ubuntu 18.04/20.04
https://gitlab.isc.org/isc-projects/stork/-/issues/361 (Jon Scheler, updated 2020-08-26)
Milestone: 0.11 | Assignee: Marcin Siodelski

**Describe the bug**
Stork v0.9.0 on both Ubuntu 18.04 LTS and 20.04 LTS. The default admin/admin login does not appear to work, but I can see the user in the postgres DB; I have verified access for the 'stork' DB user and the required privileges.
Followed the installation instructions here: https://stork.readthedocs.io/en/v0.9.0/install.html#installing-on-debian-ubuntu
postgres (PostgreSQL) 12.2 (Ubuntu 12.2-4)
**To Reproduce**
Steps to reproduce the behavior:
1. Install Stork on Ubuntu 18.04 or 20.04 LTS using this guide: https://stork.readthedocs.io/en/v0.9.0/install.html#installing-on-debian-ubuntu
2. Attempt to login to the web interface using the default username and password (admin/admin).
**Expected behavior**
Login should be successful and I should be able to create/edit/manage additional users.
**Environment:**
- Kea version: N/A
- BIND9 version: N/A
- Stork: 0.9.0
- OS: Ubuntu 18.04 / Ubuntu 20.04 x64 LTS
- postgres (PostgreSQL) 12.2 (Ubuntu 12.2-4)
**Additional Information**
I can connect to the postgres DB using the "stork" username and password, the tables and data exist, I can see the username "admin" and the hashed password, etc.
stork=# SELECT * FROM system_user;
id | email | lastname | name | password_hash | login
----+-------+----------+-------+------------------------------------+-------
1 | | admin | admin | $1$SlZLf1wT$sdC2ZyssasdfasdfasdfYo1 | admin
**Some initial questions**
- Are you sure your feature is not already implemented in the latest Kea version? This is strictly for Stork web GUI access.
- Are you sure what you would like to do is not possible using some other mechanisms? No, I can't login to Stork at all.
- Have you discussed your idea on kea-users or kea-dev mailing lists? I haven't - I'm not finding much in the way of people discussing Stork (or this version 0.9.0 specifically).
**Additional context**
Brand new installations of Ubuntu 18.04 and 20.04. I deployed 18.04, which is what the installation guide states works correctly, to rule out an issue with 20.04, but both versions behave the same way.
**Contacting you**
I can be reached via jscheler@7sigma.com

spnego.c:1430:2: error: ‘len’ may be used uninitialized in this function on ARM
https://gitlab.isc.org/isc-projects/bind9/-/issues/2068 (Michal Nowak, updated 2020-08-31)
Milestone: September 2020 (9.11.23, 9.11.23-S1, 9.16.7, 9.17.5)

`v9_16` and `v9_11` (but not on `main`) produce a build warning on Debian 9-based Armbian with `gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516` on Odroid HC1, a single board system (32-bit `armv7l` CPU). As we compile on this OS with the same compiler in the CI, where the warning does not manifest, it might be ARM-specific.
`v9_16`:
```
gcc -include /export/data/bind9/config.h -I/export/data/bind9 -I../.. -I. -I../../lib/dns -Iinclude -I/export/data/bind9/lib/dns/include -I../../lib/dns/include -I/export/data/bind9/lib/isc/include -I../../lib/isc -I../../lib/isc/include -I../../lib/isc/unix/include -I../../lib/isc/pthreads/include -I/usr/include -I/usr/include/json-c -I/usr/include/libxml2 -I/usr/include/arm-linux-gnueabihf -DGSSAPI -DUSE_ISC_SPNEGO -DISC_MEM_DEFAULTFILL=1 -DISC_LIST_CHECKINIT=1 -fno-omit-frame-pointer -fno-optimize-sibling-calls -O1 -g -Wall -Wextra -pthread -fPIC -W -Wall -Wmissing-prototypes -Wcast-qual -Wwrite-strings -Wformat -Wpointer-arith -Wno-missing-field-initializers -fno-strict-aliasing -Wshadow -Werror -c spnego.c
spnego.c: In function ‘gss_init_sec_context_spnego’:
spnego.c:1430:2: error: ‘len’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
memmove(p, buf, buf_size);
^~~~~~~~~~~~~~~~~~~~~~~~~
spnego.c:1510:9: note: ‘len’ was declared here
size_t len;
^~~
cc1: all warnings being treated as errors
```
`v9_11`:
```
gcc -I/export/data/bind9 -I../.. -I. -I../../lib/dns -Iinclude -I/export/data/bind9/lib/dns/include -I../../lib/dns/include -I/export/data/bind9/lib/isc/include -I../../lib/isc -I../../lib/isc/include -I../../lib/isc/unix/include -I../../lib/isc/pthreads/include -I../../lib/isc/noatomic/include -I/usr/include -D_REENTRANT -DUSE_MD5 -DOPENSSL -DGSSAPI -DUSE_ISC_SPNEGO -DISC_LIST_CHECKINIT=1 -D_GNU_SOURCE -fno-omit-frame-pointer -fno-optimize-sibling-calls -O1 -g -Wall -Wextra -I/usr/include/libxml2 -fPIC -W -Wall -Wmissing-prototypes -Wcast-qual -Wwrite-strings -Wformat -Wpointer-arith -fno-strict-aliasing -fno-delete-null-pointer-checks -Wshadow -Werror -c spnego.c
spnego.c: In function ‘gss_init_sec_context_spnego’:
spnego.c:1438:2: error: ‘len’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
memmove(p, buf, buf_size);
^~~~~~~~~~~~~~~~~~~~~~~~~
spnego.c:1521:9: note: ‘len’ was declared here
size_t len;
^~~
cc1: all warnings being treated as errors
```
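
For reference, a minimal illustration of this class of warning and its usual remedy; this is a sketch of the pattern, not the actual `spnego.c` change:

```c
#include <stddef.h>

/* gcc's -Wmaybe-uninitialized fires when some path can reach a read of
 * `len` without a prior write; initializing it at declaration makes
 * every path well-defined. */
static size_t
encoded_length(size_t buf_size, int have_token)
{
	size_t len = 0;	/* without "= 0", the warning can appear */

	if (have_token) {
		len = buf_size + 2;	/* some computed length */
	}
	return (len);
}
```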

flexible option hook is creating string options differently than kea core
https://gitlab.isc.org/isc-projects/kea/-/issues/1374 (Wlodzimierz Wencel, updated 2020-09-28)
Milestone: kea1.9.0 | Assignee: Wlodzimierz Wencel

Test case is very simple, configuration as follows:
```
{
"Dhcp6": {
"hooks-libraries": [
{
"library": "/home/wlodek/installed/git-thread/lib/kea/hooks/libdhcp_flex_option.so",
"parameters": {
"options": [
{
"code": 41,
"supersede": "ifelse(relay6[0].peeraddr == 3000::1005, 'EST5EDT4\\,M3.2.0/02:00\\,M11.1.0/02:00','')"
}
]
}
}
],
"interfaces-config": {
"interfaces": [
"enp0s9"
]
},
"lease-database": {
"type": "memfile"
},
"loggers": [
{
"debuglevel": 99,
"name": "kea-dhcp6",
"output_options": [
{
"output": "/home/wlodek/installed/git-thread/var/log/kea.log"
}
],
"severity": "DEBUG"
}
],
"multi-threading": {
"enable-multi-threading": true,
"packet-queue-size": 16,
"thread-pool-size": 2
},
"option-data": [
{
"code": 41,
"csv-format": true,
"data": "EST5EDT4\\,M3.2.0/02:00\\,M11.1.0/02:00",
"name": "new-posix-timezone",
"space": "dhcp6"
}
],
"preferred-lifetime": 3000,
"rebind-timer": 2000,
"renew-timer": 1000,
"shared-networks": [],
"subnet6": [
{
"interface": "enp0s9",
"pools": [
{
"pool": "2001:db8:1::1-2001:db8:1::1"
}
],
"subnet": "2001:db8:1::/64"
}
],
"valid-lifetime": 4000
}
}
```
We have configured the "new-posix-timezone" option at the global level and in the flexible option hook, with the same value!
And I'm sending two advertises (inside Relay Forward); one message has peeraddr == 3000::1005, the other doesn't. Kea is configured in a way that those two reply messages should be identical, but the string in the option created by the hook does not have escaped commas, which is inconsistent with Kea core's behaviour. I'm adding a capture from this test.
[capture.pcap](/uploads/8b4af215a05152141dac123b42433064/capture.pcap)

NTA-related crash in checkbogus() after an "rndc reload"
https://gitlab.isc.org/isc-projects/bind9/-/issues/2067 (Michał Kępień, updated 2020-08-11)
Milestone: September 2020 (9.11.23, 9.11.23-S1, 9.16.7, 9.17.5)

The following crash occurred in the `dnssec` system test:
https://gitlab.isc.org/isc-private/bind9/-/jobs/1070127
```
I:dnssec:checking positive and negative validation with negative trust anchors (134)
I:dnssec:ns4 Negative trust anchor added: bogus.example/_default, expires 06-Aug-2020 09:49:06.000
I:dnssec:ns4 Negative trust anchor added: badds.example/_default, expires 06-Aug-2020 09:48:58.000
I:dnssec:ns4 Negative trust anchor added: secure.example/_default, expires 06-Aug-2020 09:48:59.000
I:dnssec:ns4 Negative trust anchor added: fakenode.secure.example/_default, expires 06-Aug-2020 09:49:03.000
I:dnssec:ns4 rndc: connection to remote host closed
I:dnssec:ns4 This may indicate that
I:dnssec:ns4 * the remote server is using an older version of the command protocol,
I:dnssec:ns4 * this host is not authorized to connect,
I:dnssec:ns4 * the clocks are not synchronized, or
I:dnssec:ns4 * the key is invalid.
rndc: connect failed: 10.53.0.4#5009: connection refused
rndc: connect failed: 10.53.0.4#5009: connection refused
I:dnssec:stopping servers
I:dnssec:Core dump(s) found: dnssec/ns4/core.14410
R:dnssec:FAIL
D:dnssec:backtrace from dnssec/ns4/core.14410:
D:dnssec:--------------------------------------------------------------------------------
D:dnssec:Core was generated by `/builds/isc-private/bind9/bind-9.16.6/bin/named/.libs/named -D dnssec-ns4 -X na'.
D:dnssec:Program terminated with signal SIGSEGV, Segmentation fault.
D:dnssec:#0 0x00007fef72ffb8c5 in isc__mem_get (mctx=0xdededededededede, size=24, file=0x7fef733ea7b9 "resolver.c", line=10758) at mem.c:2430
D:dnssec:2430 REQUIRE(ISCAPI_MCTX_VALID(mctx));
D:dnssec:[Current thread is 1 (Thread 0x7fef692e0700 (LWP 14436))]
D:dnssec:#0 0x00007fef72ffb8c5 in isc__mem_get (mctx=0xdededededededede, size=24, file=0x7fef733ea7b9 "resolver.c", line=10758) at mem.c:2430
D:dnssec:#1 0x00007fef73328737 in dns_resolver_createfetch (res=0x7fef6c14c4d0, name=0x7feeef4c7350, type=type@entry=47, domain=domain@entry=0x0, nameservers=nameservers@entry=0x0, forwarders=forwarders@entry=0x0, client=0x0, id=0, options=1024, depth=0, qc=0x0, task=0x7feeedd31c80, action=0x7fef732bb080 <fetch_done>, arg=0x7feeef4c7230, rdataset=0x7feeef4c7260, sigrdataset=0x7feeef4c72d8, fetchp=0x7feeef4c7258) at resolver.c:10758
D:dnssec:#2 0x00007fef732bafff in checkbogus (task=0x7feeedd31c80, event=<optimized out>) at nta.c:265
D:dnssec:#3 0x00007fef7300e72d in dispatch (threadid=<optimized out>, manager=<optimized out>) at task.c:1152
D:dnssec:#4 run (queuep=<optimized out>) at task.c:1344
D:dnssec:#5 0x00007fef72969fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
D:dnssec:#6 0x00007fef725914cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
D:dnssec:--------------------------------------------------------------------------------
D:dnssec:full backtrace from dnssec/ns4/core.14410 saved in core.14410-backtrace.txt
D:dnssec:core dump dnssec/ns4/core.14410 archived as dnssec/ns4/core.14410.gz
E:dnssec:2020-08-06T09:49:17+0000
```
What happens here is that `checkbogus()` (the timer callback used for
starting an NTA recheck query) calls `dns_resolver_createfetch()` while
the resolver is being destroyed (after an `rndc reload`). I believe the
root cause is that NTA code does not grab any references to the resolver
when the recheck timer is first set up.
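
If that is indeed the root cause, the fix presumably needs the NTA machinery to hold a resolver reference for as long as the recheck timer can fire. A sketch of that shape, under that assumption: `dns_resolver_attach()`/`dns_resolver_detach()` are the existing reference-counting calls, but the `resolver` field on the NTA object is illustrative, not the actual patch:

```c
/* Sketch only: keep the resolver alive for the lifetime of the NTA
 * recheck timer, so checkbogus() cannot race resolver teardown. */

/* where the recheck timer is set up: */
dns_resolver_attach(view->resolver, &nta->resolver);

/* where the NTA is destroyed, after its timer is cancelled: */
dns_resolver_detach(&nta->resolver);
```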
I managed to reproduce this with the following patch applied:
```diff
diff --git a/lib/dns/resolver.c b/lib/dns/resolver.c
index 8956baee81..efd18e1f3c 100644
--- a/lib/dns/resolver.c
+++ b/lib/dns/resolver.c
@@ -14,6 +14,7 @@
#include <ctype.h>
#include <inttypes.h>
#include <stdbool.h>
+#include <unistd.h>
#include <isc/atomic.h>
#include <isc/counter.h>
@@ -10066,6 +10067,7 @@ destroy(dns_resolver_t *res) {
isc_timer_detach(&res->spillattimer);
res->magic = 0;
isc_mem_put(res->mctx, res, sizeof(*res));
+ sleep(5);
}
static void
```
I set `nta-recheck 2s;` in `named.conf`, started `named`, then added an
NTA (`rndc nta dnssec-failed.org.`), and finally stopped the server
(CTRL+C):
```
06-Aug-2020 13:06:37.741 running
06-Aug-2020 13:06:37.765 resolver priming query complete
06-Aug-2020 13:06:37.765 managed-keys-zone: Key 20326 for zone . is now trusted (acceptance timer complete)
06-Aug-2020 13:06:38.548 received control channel command 'nta dnssec-failed.org.'
06-Aug-2020 13:06:38.555 flush tree 'dnssec-failed.org.' in cache view '_default': success
06-Aug-2020 13:06:38.555 added NTA 'dnssec-failed.org.' (3600 sec) in view '_default'
^C06-Aug-2020 13:06:40.028 no longer listening on 127.0.0.1#5300
06-Aug-2020 13:06:40.028 no longer listening on ::1#5300
06-Aug-2020 13:06:40.031 shutting down
06-Aug-2020 13:06:40.031 stopping statistics channel on 127.0.0.1#8080
06-Aug-2020 13:06:40.031 stopping statistics channel on ::1#8080
06-Aug-2020 13:06:40.031 stopping command channel on 127.0.0.1#9953
06-Aug-2020 13:06:40.031 stopping command channel on ::1#9953
06-Aug-2020 13:06:46.568 resolver.c:10741: REQUIRE((__builtin_expect(!!((res) != ((void *)0)), 1) && __builtin_expect(!!(((const isc__magic_t *)(res))->magic == ((('R') << 24 | ('e') << 16 | ('s') << 8 | ('!')))), 1))) failed, back trace
06-Aug-2020 13:06:46.568 #0 0x5648605f3403 in ??
06-Aug-2020 13:06:46.568 #1 0x7f123e0f987a in ??
06-Aug-2020 13:06:46.568 #2 0x7f123e647e77 in ??
06-Aug-2020 13:06:46.568 #3 0x7f123e5434c3 in ??
06-Aug-2020 13:06:46.568 #4 0x7f123e142c2e in ??
06-Aug-2020 13:06:46.568 #5 0x7f123e143543 in ??
06-Aug-2020 13:06:46.568 #6 0x7f123dbc6422 in ??
06-Aug-2020 13:06:46.568 #7 0x7f123daf5bf3 in ??
06-Aug-2020 13:06:46.568 exiting (due to assertion failure)
Aborted (core dumped)
```
The crash is not exactly the same (more on this below), but the
backtrace points to the same code location:
```
(gdb) bt
#0 0x00007f123da32355 in raise () from /usr/lib/libc.so.6
#1 0x00007f123da1b853 in abort () from /usr/lib/libc.so.6
#2 0x00005648605f3701 in assertion_failed (file=0x7f123e77d1d0 "resolver.c", line=10741,
type=isc_assertiontype_require,
cond=0x7f123e780b80 "(__builtin_expect(!!((res) != ((void *)0)), 1) && __builtin_expect(!!(((const isc__magic_t *)(res))->magic == ((('R') << 24 | ('e') << 16 | ('s') << 8 | ('!')))), 1))") at ./main.c:261
#3 0x00007f123e0f987a in isc_assertion_failed (file=0x7f123e77d1d0 "resolver.c", line=10741,
type=isc_assertiontype_require,
cond=0x7f123e780b80 "(__builtin_expect(!!((res) != ((void *)0)), 1) && __builtin_expect(!!(((const isc__magic_t *)(res))->magic == ((('R') << 24 | ('e') << 16 | ('s') << 8 | ('!')))), 1))") at assertions.c:46
#4 0x00007f123e647e77 in dns_resolver_createfetch (res=0x0, name=0x7f1214012c80, type=47,
domain=0x0, nameservers=0x0, forwarders=0x0, client=0x0, id=0, options=1024, depth=0,
qc=0x0, task=0x7f12309fa970, action=0x7f123e543056 <fetch_done>, arg=0x7f1214012b60,
rdataset=0x7f1214012b90, sigrdataset=0x7f1214012c08, fetchp=0x7f1214012b88)
at resolver.c:10741
#5 0x00007f123e5434c3 in checkbogus (task=0x7f12309fa970, event=0x0) at nta.c:265
#6 0x00007f123e142c2e in dispatch (manager=0x7f123b93b010, threadid=1) at task.c:1152
#7 0x00007f123e143543 in run (queuep=0x7f123b93c0a0) at task.c:1344
#8 0x00007f123dbc6422 in start_thread () from /usr/lib/libpthread.so.0
#9 0x00007f123daf5bf3 in clone () from /usr/lib/libc.so.6
```
I believe that the reason for the discrepancy between the two call
stacks is that in my reproducer, the resolver is fully destroyed before
`checkbogus()` calls `dns_resolver_createfetch()`, which is why the
latter fails immediately on the `REQUIRE(VALID_RESOLVER(res));`
assertion; in the original crash (in GitLab CI), the resolver must have
still been valid when `dns_resolver_createfetch()` was called because it
crashed later on (after `res->mctx` became `0xdededededededede`) and it
crashed due to a segfault rather than an assertion failure.
Furthermore, inspecting the core dump shows that the resolver *has* in
fact been destroyed by the time the process crashed:
```
(gdb) frame 2
#2 0x00007fef732bafff in checkbogus (task=0x7feeedd31c80, event=<optimized out>) at nta.c:265
265 result = dns_resolver_createfetch(
(gdb) print view->resolver
$17 = (dns_resolver_t *) 0xdededededededede
```
I am marking this as confidential out of abundance of caution, until
someone can confirm my findings.

flexible option hook is calculating v6 domain names length incorrectly
https://gitlab.isc.org/isc-projects/kea/-/issues/1373 (Wlodzimierz Wencel, updated 2020-09-28)
Milestone: kea1.9.0 | Assignee: Francis Dupont

Test case is very simple, configuration as follows:
```
{
"Dhcp6": {
"hooks-libraries": [
{
"library": "/home/wlodek/installed/git-thread/lib/kea/hooks/libdhcp_flex_option.so",
"parameters": {
"options": [
{
"code": 30,
"supersede": "ifelse(relay6[0].peeraddr == 3000::1005, 'ntp.example.com','')"
}
]
}
}
],
"interfaces-config": {
"interfaces": [
"enp0s9"
]
},
"lease-database": {
"type": "memfile"
},
"loggers": [
{
"debuglevel": 99,
"name": "kea-dhcp6",
"output_options": [
{
"output": "/home/wlodek/installed/git-thread/var/log/kea.log"
}
],
"severity": "DEBUG"
}
],
"multi-threading": {
"enable-multi-threading": true,
"packet-queue-size": 16,
"thread-pool-size": 2
},
"option-data": [
{
"code": 30,
"csv-format": true,
"data": "ntp.example.com",
"name": "nisp-domain-name",
"space": "dhcp6"
}
],
"preferred-lifetime": 3000,
"rebind-timer": 2000,
"renew-timer": 1000,
"shared-networks": [],
"subnet6": [
{
"interface": "enp0s9",
"pools": [
{
"pool": "2001:db8:1::1-2001:db8:1::1"
}
],
"subnet": "2001:db8:1::/64"
}
],
"valid-lifetime": 4000
}
}
```
We have configured the "nisp-domain-name" option at the global level and in the flexible option hook, with the same value!
And I'm sending two advertises (inside Relay Forward); one message has peeraddr == 3000::1005, the other doesn't. Kea is configured in a way that those two reply messages should be identical, but one of them will have the "nisp-domain-name" option added by Kea core and the other will have it added by the hook.
[capture.pcap](/uploads/b031d07c9ebff5ce7152ab03727088e8/capture.pcap)
The result is that the hook is adding the option with an incorrectly calculated length. I used forge to find it; a capture from the test is attached above.
The logs look correct:
```
2020-08-06 01:03:46.128 DEBUG [kea-dhcp6.callouts/29307.140183868143360] HOOKS_CALLOUTS_BEGIN begin all callouts for hook pkt6_send
2020-08-06 01:03:46.129 DEBUG [kea-dhcp6.eval/29307.140183868143360] EVAL_DEBUG_RELAY6 Pushing PKT6 relay field peeraddr nest 0 with value 0x30000000000000000000000000001005
2020-08-06 01:03:46.129 DEBUG [kea-dhcp6.eval/29307.140183868143360] EVAL_DEBUG_IPADDRESS Pushing IPAddress 0x30000000000000000000000000001005
2020-08-06 01:03:46.129 DEBUG [kea-dhcp6.eval/29307.140183868143360] EVAL_DEBUG_EQUAL Popping 0x30000000000000000000000000001005 and 0x30000000000000000000000000001005 pushing result 'true'
2020-08-06 01:03:46.129 DEBUG [kea-dhcp6.eval/29307.140183868143360] EVAL_DEBUG_STRING Pushing text string 'ntp.example.com'
2020-08-06 01:03:46.129 DEBUG [kea-dhcp6.eval/29307.140183868143360] EVAL_DEBUG_STRING Pushing text string ''
2020-08-06 01:03:46.129 DEBUG [kea-dhcp6.eval/29307.140183868143360] EVAL_DEBUG_IFELSE_TRUE Popping 'true' (true) and 0x, leaving 0x6E74702E6578616D706C652E636F6D
2020-08-06 01:03:46.130 DEBUG [kea-dhcp6.flex-option-hooks/29307.140183868143360] FLEX_OPTION_PROCESS_SUPERSEDE Supersedes the value of option code 30 by 'ntp.example.com'
2020-08-06 01:03:46.130 DEBUG [kea-dhcp6.callouts/29307.140183868143360] HOOKS_CALLOUT_CALLED hooks library with index 1 has called a callout on hook pkt6_send that has address 0x7f7f1b41ff30 (callout duration: 1.504 ms)
```

Fix serve-stale so that it is usable when needed
https://gitlab.isc.org/isc-projects/bind9/-/issues/2066 (Cathy Almond, updated 2021-01-26)
Milestone: November 2020 (9.11.25, 9.11.25-S1, 9.16.9, 9.16.9-S1, 9.17.7) | Assignee: Diego dos Santos Fronza

From #1712 and as described in Support ticket [#16171](https://support.isc.org/Ticket/Display.html?id=16171):
```
The problem with serve-stale was (and still is after some testing on 9.16.1),
that every client that asks for e.g. "isc.org A" will all have to wait for 10
seconds before they get the stale answer. There seem to be no table of stale
resolvers so each time a request comes in, BIND seems to try the resolver
again to find out if it answers or not.
```
This really is not helpful - most clients will have given up and gone away and will never get a usable answer.
IF the name is one that is popular, then because of 'clients-per-query' and the fact that we attach any future waiting clients for the same query to the already-existing fetch process, then the late arrivals stand a fighting chance of getting a response from stale cache before they give up - but the majority won't.
See also #1688 - we haven't documented very thoroughly how this works anyway, and we certainly have not documented how it interacts with fetch-limits and other resolver-protecting features.
Here's a sample config that was being used for testing:
```
stale-answer-enable yes;
stale-answer-ttl 600;
max-stale-ttl 1w;
```
There is nothing there that provides for a configurable period of 'staleness' so that after the first time the failure to refresh has taken place, a server can immediately serve this stale content to any clients who come along later instead of repeating the refresh attempt (and likely failing again).
I think the issue is that although we do have some control over how stale an answer can be before we stop serving it, we haven't thought sufficiently about how long clients will be prepared to wait for a query response if we have to attempt to refresh and then fail for each client (or set of clients) when queried.
Note: I **do not** think we should immediately serve stale answers whenever there's cache content available that has recently expired - this is not what we're trying to achieve. The idea of serve-stale as the converse of pre-fetch ('post-fetch'?) is somehow terribly tempting because it feels like it would be faster and a better experience for the clients, plus there's this nice symmetry with pre-fetch logic. But I think it's wrong - and would absolutely break how we handle TTL=0 answers today. Authoritative server operators **expect** resolvers to come back to them as soon as their cached content expired. We should not skip this step.
But what would be more helpful (to both clients and to servers) when there are non-responding authoritative servers, would be a way to flag a stale answer with the timestamp of when the last failing refresh attempt occurred, and if a client queries the same name again within a suitable time period (configurable? Something like 10s feels like a good default here), then the stale answer gets used right away.
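
A sketch of what that could look like in `named.conf`, extending the sample above; the option name and default below are hypothetical as of this report, standing in for the configurable "suitable time period":

```
options {
	stale-answer-enable yes;
	stale-answer-ttl 600;
	max-stale-ttl 1w;
	// Hypothetical: after a refresh attempt fails, keep serving the
	// stale answer immediately for this many seconds before trying
	// the authoritative servers again.
	stale-refresh-time 10;
};
```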
We're preserving resolver resources by doing this (and anyway, if we couldn't resolve this name 1s ago, why are we trying again immediately if we've got something usable-but-stale in cache we could use instead?)

lease query hook is missing from Available Hooks Libraries list
https://gitlab.isc.org/isc-projects/kea/-/issues/1372 (Wlodzimierz Wencel, updated 2020-08-19)
Milestone: kea1.8.0 | Assignee: Thomas Markwalder

As in subject:
https://jenkins.isc.org/job/Kea_doc/KeaAdministratorReferenceManual/index.html#available-hooks-libraries

"geoip2" system test fails intermittently
https://gitlab.isc.org/isc-projects/bind9/-/issues/2065 (Michał Kępień, updated 2020-08-05)
Milestone: August 2020 (9.11.22, 9.11.22-S1, 9.16.6, 9.17.4) | Assignee: Michał Kępień

The problem affects ~"v9.17" and ~"v9.16" on various runners:
- https://gitlab.isc.org/isc-projects/bind9/-/jobs/1064670
- https://gitlab.isc.org/isc-private/bind9/-/jobs/1064060
- https://gitlab.isc.org/isc-private/bind9/-/jobs/1064109
- https://gitlab.isc.org/isc-private/bind9/-/jobs/1064111
This was not happening for July releases.

fuzz/isc_lex_getmastertoken.c doesn't call isc_lex_getmastertoken
https://gitlab.isc.org/isc-projects/bind9/-/issues/2064 (Mark Andrews, updated 2020-08-31)
Milestone: September 2020 (9.11.23, 9.11.23-S1, 9.16.7, 9.17.5)

The MR that committed fuzz/isc_lex_getmastertoken.c was not reviewed.

Text edits to uml diagrams appendix
https://gitlab.isc.org/isc-projects/kea/-/issues/1371 (Vicky Risk <vicky@isc.org>, updated 2021-08-12)
Milestone: kea1.8.0 | Assignee: Vicky Risk <vicky@isc.org>

I would like to put more explanatory text into the appendix with the UML diagrams.

UI can't retrieve data (server side events broken?)
https://gitlab.isc.org/isc-projects/stork/-/issues/360 (Tomek Mrugalski, updated 2020-08-10)
Milestone: 0.10 | Assignee: Michal Nowikowski

On the latest master (8fa945083e9fb4a1ce6c87f72ab89fcbd21baf81), I have a clean Stork running, using rake docker_up.
I've observed a plethora of problems:
1. added agent-kea, which initially worked ok, but later (after a couple of minutes) reported a communication problem with the dhcpv4 and ca daemons ("There is observed issue in communication with the daemon.", see screenshot 1)
1. added agent-kea-ha1 and agent-kea-ha2; they report losing communication. ha1 becomes unreachable, ha2 reports its partner as down (see screenshot 2)
1. the RPS and pool utilization are not updated anymore (clicking the refresh buttons in the UI or pressing ctrl-r doesn't change a thing, see screenshot 3)
1. the daemon status display is broken (all daemons show as a grey no-entry sign, but when you hover your cursor over them, some say the communication is ok, see screenshot 4)
I'm reporting all of those in a single issue, because I believe most of them (maybe except the log viewer) can be explained with a problem with server side events. More on this in the first comment.
SCREENSHOT 1 (lost comm with dhcpv4 and ca running on agent-kea)
![Screenshot_2020-08-04_at_19.10.28](/uploads/928c65b9fe461d7f5cc90addab65bb89/Screenshot_2020-08-04_at_19.10.28.png)
SCREENSHOT 2 (ha1 lost comm with its partner)
![Screenshot_2020-08-04_at_19.13.27](/uploads/74f21272b7ad6adafc00523efe25ffee/Screenshot_2020-08-04_at_19.13.27.png)
SCREENSHOT 3 (dashboard not updated)
![Screenshot_2020-08-04_at_19.17.15](/uploads/d7ac2e1fccf7c08a1992fa6c30a94432/Screenshot_2020-08-04_at_19.17.15.png)
SCREENSHOT 4 (broken app status update)
![Screenshot_2020-08-04_at_19.18.54](/uploads/6b8e39c75637771cb5e72aeb24d99a99/Screenshot_2020-08-04_at_19.18.54.png)