# ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues (feed updated 2023-11-02T16:30:30Z)

## Follow-up from "fix handling of TCP timeouts"
https://gitlab.isc.org/isc-projects/bind9/-/issues/4087 · Evan Hunt · 2023-11-02

The following discussion from !7937 should be addressed:

- [ ] @aram started a [discussion](https://gitlab.isc.org/isc-projects/bind9/-/merge_requests/7937#note_375087): (+2 comments)
> While you are addressing Ondřej's comments, would you please also look at something not strictly related to this MR, which caught my eye (cc @ondrej):
>
> ```c
> void
> dns_dispatch_resume(dns_dispentry_t *resp, uint16_t timeout);
> /*%<
> * Reset the read timeout in the socket associated with 'resp' and
> * continue reading.
> *
> * Requires:
> *\li 'resp' is valid.
> */
> ```
>
> The function is supposed to reset the read timeout, but if I am reading the code correctly, both `udp_dispatch_getnext()` and `tcp_dispatch_getnext()` (called by `dns_dispatch_resume()`) can potentially ignore the timeout value if the read operation is already ongoing. Is that by design?
>
> I think it should at least update the `resp->timeout` value with the new one, and probably call `isc_nmhandle_settimeout()` even when already reading, in case the new timeout is smaller than the remaining time of the current one.

Milestone: Not planned · Assignee: Evan Hunt
## Host Commands require subnet-id to add or manage host reservation
https://gitlab.isc.org/isc-projects/kea/-/issues/2878 · Marcin Godzina · 2023-11-07

Recently, empty host reservations (containing only a hardware address) were made valid, but Host Commands still require providing a `subnet-id` to add or manage leases.
Empty host reservations were added here: https://gitlab.isc.org/isc-projects/kea/-/issues/2723

Milestone: next-stable-2.6

## timer.c:223:timerevent_destroy(): fatal error: RUNTIME_CHECK(isc_mutex_unlock((&timer->lock)) == ISC_R_SUCCESS) failed
https://gitlab.isc.org/isc-projects/bind9/-/issues/4092 · Michal Nowak · 2023-05-25

Job [#3411550](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3411550) failed for 66254cf56d7072833db6d8744e6bcef2109b72e2.
BIND 9.18 `task` unit test failed on `unit:gcc:oraclelinux8:amd64`.
```
[==========] Running 11 test(s).
[ RUN ] manytasks
[ OK ] manytasks
[ RUN ] all_events
[ OK ] all_events
[ RUN ] basic
timer.c:223:timerevent_destroy(): fatal error: RUNTIME_CHECK(isc_mutex_unlock((&timer->lock)) == ISC_R_SUCCESS) failed
../../tests/unit-test-driver.sh: line 36: 8595 Aborted (core dumped) "${TEST_PROGRAM}"
I:task_test:Core dump found: ./core.8595
D:task_test:backtrace from ./core.8595 start
[New LWP 8636]
[New LWP 8595]
[New LWP 8637]
[New LWP 8638]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `/builds/isc-projects/bind9/tests/isc/.libs/lt-task_test'.
Program terminated with signal SIGABRT, Aborted.
#0 0x00007f8c5b302aff in raise () from /lib64/libc.so.6
[Current thread is 1 (Thread 0x7f8c3bfff700 (LWP 8636))]
Thread 4 (Thread 0x7f8c412fa700 (LWP 8638)):
#0 0x00007f8c5b68846c in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
No symbol table info available.
#1 0x00007f8c5c4af725 in run (uap=0x7f8c591e1000) at timer.c:632
manager = 0x7f8c591e1000
now = {seconds = 1684976709, nanoseconds = 609640403}
result = <optimized out>
__func__ = "run"
#2 0x00007f8c5c4b4b20 in isc__trampoline_run (arg=0x1973730) at trampoline.c:189
trampoline = 0x1973730
result = <optimized out>
#3 0x00007f8c5b6821da in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#4 0x00007f8c5b2ede73 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 3 (Thread 0x7f8c40af9700 (LWP 8637)):
#0 0x00007f8c5b3e4017 in epoll_wait () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f8c5c2460f9 in uv.io_poll () from /lib64/libuv.so.1
No symbol table info available.
#2 0x00007f8c5c234a74 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#3 0x00007f8c5c47aa6c in nm_thread (worker0=0x7f8c591f75b8) at netmgr/netmgr.c:698
r = <optimized out>
worker = 0x7f8c591f75b8
mgr = 0x7f8c59036000
__func__ = "nm_thread"
#4 0x00007f8c5c4b4b20 in isc__trampoline_run (arg=0x1974330) at trampoline.c:189
trampoline = 0x1974330
result = <optimized out>
#5 0x00007f8c5b6821da in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#6 0x00007f8c5b2ede73 in clone () from /lib64/libc.so.6
No symbol table info available.
Thread 2 (Thread 0x7f8c5ce04140 (LWP 8595)):
#0 0x00007f8c5b3ae9a8 in nanosleep () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f8c5b3dbf48 in usleep () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007f8c5c4ac692 in isc__taskmgr_destroy (managerp=managerp@entry=0x607348 <taskmgr>) at task.c:1041
No locals.
#3 0x00007f8c5c49b4b0 in isc_managers_destroy (netmgrp=netmgrp@entry=0x607338 <netmgr>, taskmgrp=taskmgrp@entry=0x607348 <taskmgr>, timermgrp=timermgrp@entry=0x607340 <timermgr>) at managers.c:99
No locals.
#4 0x00000000004052ee in teardown_managers (state=<optimized out>) at isc.c:84
No locals.
#5 0x0000000000404f64 in _teardown (state=<optimized out>) at task_test.c:91
No locals.
#6 0x00007f8c5be1702e in cmocka_run_one_test_or_fixture () from /lib64/libcmocka.so.0
No symbol table info available.
#7 0x00007f8c5be179e0 in _cmocka_run_group_tests () from /lib64/libcmocka.so.0
No symbol table info available.
#8 0x000000000040516b in main () at task_test.c:1408
r = <optimized out>
Thread 1 (Thread 0x7f8c3bfff700 (LWP 8636)):
#0 0x00007f8c5b302aff in raise () from /lib64/libc.so.6
No symbol table info available.
#1 0x00007f8c5b2d5ea5 in abort () from /lib64/libc.so.6
No symbol table info available.
#2 0x00007f8c5c48f5c2 in isc_error_fatal (file=file@entry=0x7f8c5c4c45a6 "timer.c", line=line@entry=223, func=func@entry=0x7f8c5c4d07a0 <__func__.7544> "timerevent_destroy", format=format@entry=0x7f8c5c4c0814 "RUNTIME_CHECK(%s) failed") at error.c:72
args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 0x7f8c3bff9d00, reg_save_area = 0x7f8c3bff9c40}}
#3 0x00007f8c5c4af15f in timerevent_destroy (event0=0x7f8c51800b00) at timer.c:225
timer = 0x7f8c591e10a0
event = 0x7f8c51800b00
__func__ = "timerevent_destroy"
#4 0x00007f8c5c48f7e9 in isc_event_free (eventp=eventp@entry=0x7f8c3bff9d48) at event.c:93
event = <optimized out>
#5 0x0000000000403449 in basic_tick (task=<optimized out>, event=<optimized out>) at task_test.c:444
No locals.
#6 0x00007f8c5c4abf17 in task_run (task=0x7f8c591e73c0) at task.c:815
dispatch_count = 0
finished = false
quantum = <optimized out>
event = 0x7f8c51800b00
result = ISC_R_SUCCESS
dispatch_count = <optimized out>
finished = <optimized out>
event = <optimized out>
result = <optimized out>
quantum = <optimized out>
__func__ = "task_run"
__atomic_load_ptr = <optimized out>
__atomic_load_tmp = <optimized out>
__atomic_load_ptr = <optimized out>
__atomic_load_tmp = <optimized out>
__atomic_load_ptr = <optimized out>
__atomic_load_tmp = <optimized out>
__atomic_load_ptr = <optimized out>
__atomic_load_tmp = <optimized out>
__v = <optimized out>
#7 isc_task_run (task=0x7f8c591e73c0) at task.c:896
No locals.
#8 0x00007f8c5c472579 in isc__nm_async_task (worker=worker@entry=0x7f8c591f7000, ev0=ev0@entry=0x7f8c51805f80) at netmgr/netmgr.c:848
ievent = 0x7f8c51805f80
result = <optimized out>
#9 0x00007f8c5c479d78 in process_netievent (worker=worker@entry=0x7f8c591f7000, ievent=ievent@entry=0x7f8c51805f80) at netmgr/netmgr.c:920
No locals.
#10 0x00007f8c5c47a78e in process_queue (worker=worker@entry=0x7f8c591f7000, type=type@entry=NETIEVENT_TASK) at netmgr/netmgr.c:1013
next = 0x0
ievent = 0x7f8c51805f80
list = {head = 0x0, tail = 0x0}
__func__ = "process_queue"
#11 0x00007f8c5c47b23b in process_all_queues (worker=0x7f8c591f7000) at netmgr/netmgr.c:767
result = <optimized out>
type = 2
reschedule = false
reschedule = <optimized out>
type = <optimized out>
result = <optimized out>
#12 async_cb (handle=0x7f8c591f7360) at netmgr/netmgr.c:796
worker = 0x7f8c591f7000
#13 0x00007f8c5c2342f1 in uv.async_io.part () from /lib64/libuv.so.1
No symbol table info available.
#14 0x00007f8c5c245d15 in uv.io_poll () from /lib64/libuv.so.1
No symbol table info available.
#15 0x00007f8c5c234a74 in uv_run () from /lib64/libuv.so.1
No symbol table info available.
#16 0x00007f8c5c47aa6c in nm_thread (worker0=0x7f8c591f7000) at netmgr/netmgr.c:698
r = <optimized out>
worker = 0x7f8c591f7000
mgr = 0x7f8c59036000
__func__ = "nm_thread"
#17 0x00007f8c5c4b4b20 in isc__trampoline_run (arg=0x1976840) at trampoline.c:189
trampoline = 0x1976840
result = <optimized out>
#18 0x00007f8c5b6821da in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#19 0x00007f8c5b2ede73 in clone () from /lib64/libc.so.6
No symbol table info available.
D:task_test:backtrace from ./core.8595 end
FAIL task_test (exit status: 134)
```

## keama-web should produce logs
https://gitlab.isc.org/isc-projects/keama/-/issues/15 · Tomek Mrugalski · 2023-12-11

The web interface is introduced in #13. @vicky [asked](https://gitlab.isc.org/isc-private/devops/-/issues/183#note_375512) if we have any logs after her attempt to upload bad files. There aren't any logs. There should be.

Milestone: 4.5.1

## starting TCP listener should be before DHCP4_STARTED
https://gitlab.isc.org/isc-projects/kea/-/issues/2879 · Wlodzimierz Wencel · 2023-06-15

When Kea starts with BLQ, it logs:
```
2023-05-25 07:31:50.755 INFO [kea-dhcp4.dhcp4/4119.139754391320480] DHCP4_STARTED Kea DHCPv4 server version 2.3.8 started
2023-05-25 07:31:51.756 DEBUG [kea-dhcp4.tcp/4119.139754391320480] MT_TCP_LISTENER_MGR_STARTED MtTcpListenerMgr started with 8 threads, listening on 192.168.50.252:67, use TLS: false
```
Looks like Kea starts the TCP listener after it says it has started; the listener should be started before the `DHCP4_STARTED` message.
Background: in each forge test, after Kea is started, the test looks for the `DHCP4_STARTED Kea DHCPv4 server version 2.3.8 started` (or `DHCP6_STARTED Kea DHCPv6 server version 2.3.8 started`) message before continuing. When a test starts by sending TCP messages, it fails with a connection-refused error. So Kea logs the `DHCP4_STARTED` message before it is actually ready to work.

Milestone: next-stable-2.6

## tools/check-for-missing-api-commands.sh is not portable
https://gitlab.isc.org/isc-projects/kea/-/issues/2880 · Francis Dupont · 2023-06-15

This script uses the `-P` option of grep, which is not portable (unknown to the POSIX standard and, according to man grep on a Linux machine, an experimental feature of GNU grep).

Milestone: backlog

## add commands in stat hook library for statistics by Pool ID (by subnet ID/subnet range/pool ID/pool range)
https://gitlab.isc.org/isc-projects/kea/-/issues/2884 · Razvan Becheriu · 2023-06-15

Milestone: backlog

## [PATCH] +shortans
https://gitlab.isc.org/isc-projects/bind9/-/issues/4099 · Fredrick Brennan · 2023-05-30

# Patch
```diff
From 6041dcb60313b5fd81076bd53713b8a53fb95f87 Mon Sep 17 00:00:00 2001
From: Fredrick Brennan <copypaste@kittens.ph>
Date: Sat, 27 May 2023 08:23:45 -0400
Subject: [PATCH] [dig] +shortans
---
bin/dig/dig.c | 48 ++++++++++++++++++++++++++++++++++++------------
bin/dig/dig.rst | 4 ++++
doc/man/dig.1in | 5 +++++
3 files changed, 45 insertions(+), 12 deletions(-)
diff --git a/bin/dig/dig.c b/bin/dig/dig.c
index 694924c0f2..dd9bfcd4a7 100644
--- a/bin/dig/dig.c
+++ b/bin/dig/dig.c
@@ -286,6 +286,8 @@ help(void) {
"short\n"
" form of answers - global "
"option)\n"
+ " +[no]shortans (equivalent to `+noall"
+ "+authority +answer`)\n"
" +[no]showbadcookie (Show BADCOOKIE message)\n"
" +[no]showsearch (Search with intermediate "
"results)\n"
@@ -1901,18 +1903,40 @@ plus_option(char *option, bool is_batchfile, bool *need_clone,
goto invalid_option;
}
switch (cmd[3]) {
- case 'r': /* short */
- FULLCHECK("short");
- short_form = state;
- if (state) {
- printcmd = false;
- lookup->section_additional = false;
- lookup->section_answer = true;
- lookup->section_authority = false;
- lookup->section_question = false;
- lookup->comments = false;
- lookup->stats = false;
- lookup->rrcomments = -1;
+ case 'r': /* shor… */
+ switch(cmd[4]) {
+ case 't': /* short… */
+ switch(cmd[5]) { /* short */
+ case '\0':
+ FULLCHECK("short");
+ short_form = state;
+ if (state) {
+ printcmd = false;
+ lookup->section_additional = false;
+ lookup->section_answer = true;
+ lookup->section_authority = false;
+ lookup->section_question = false;
+ lookup->comments = false;
+ lookup->stats = false;
+ lookup->rrcomments = -1;
+ }
+ break;
+ case 'a': /* shortans */
+ FULLCHECK("shortans");
+ lookup->section_question = !state;
+ lookup->section_authority = state;
+ lookup->section_answer = state;
+ lookup->section_additional = !state;
+ lookup->comments = !state;
+ lookup->stats = !state;
+ printcmd = !state;
+ break;
+ default:
+ goto invalid_option;
+ }
+ break;
+ default:
+ goto invalid_option;
}
break;
case 'w': /* showsearch */
diff --git a/bin/dig/dig.rst b/bin/dig/dig.rst
index a5bfb86556..75237f0ae0 100644
--- a/bin/dig/dig.rst
+++ b/bin/dig/dig.rst
@@ -571,6 +571,10 @@ abbreviation is unambiguous; for example, :option:`+cd` is equivalent to
form. This option always has a global effect; it cannot be set globally and
then overridden on a per-lookup basis.
+.. option:: +shortans, +noshortans
+
+ This option expands to :option:`+noall` :option:`+authority` :option:`+answer`.
+
.. option:: +showbadcookie, +noshowbadcookie
This option toggles whether to show the message containing the
diff --git a/doc/man/dig.1in b/doc/man/dig.1in
index d5f42ed852..1607d7f2ca 100644
--- a/doc/man/dig.1in
+++ b/doc/man/dig.1in
@@ -663,6 +663,11 @@ then overridden on a per\-lookup basis.
.UNINDENT
.INDENT 0.0
.TP
+.B +shortans, +noshortans
+This option expands to \fI\%+noall\fP \fI\%+authority\fP \fI\%+answer\fP\&.
+.UNINDENT
+.INDENT 0.0
+.TP
.B +showbadcookie, +noshowbadcookie
This option toggles whether to show the message containing the
BADCOOKIE rcode before retrying the request or not. The default
--
2.40.1
```
# Detached signature
```gpg
-----BEGIN PGP SIGNATURE-----
iHUEABYIAB0WIQS1rLeeEfG/f0nzK7hYUwVpYvFOWAUCZHH3EAAKCRBYUwVpYvFO
WOiHAP9uTERa4rrztKKeqk1TSLkqP5RgDnBbgxcbTkHAt5q7/wEAvffIjE5SUX8P
RpxZ9yS2geRmVXwyLDiS4FjxN3u7vgE=
=i92K
-----END PGP SIGNATURE-----
```

## REQUIRE(((multi) != ((void *)0) && ((const isc__magic_t *)(multi))->magic == ((('q') << 24 | ('p') << 16 | ('m') << 8 | ('v'))))) in qp.c
https://gitlab.isc.org/isc-projects/bind9/-/issues/4100 · Michal Nowak · 2023-05-30
Job [#3424074](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3424074) failed for 2e8ceeea14e336980c9da80449b84ecd16afc7e5.
The `qpmulti_test` unit test failed.
```
[==========] Running 1 test(s).
[ RUN ] qpmulti
qp.c:634: REQUIRE(((multi) != ((void *)0) && ((const isc__magic_t *)(multi))->magic == ((('q') << 24 | ('p') << 16 | ('m') << 8 | ('v'))))) failed, back trace
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.14-dev.so(+0x2ddb2)[0x7f231ea2ddb2]
/builds/isc-projects/bind9/lib/isc/.libs/libisc-9.19.14-dev.so(isc_assertion_failed+0xa)[0x7f231ea2dd2d]
/builds/isc-projects/bind9/lib/dns/.libs/libdns-9.19.14-dev.so(+0xb9fe3)[0x7f231e0b9fe3]
/lib64/liburcu.so.6(+0x37a9)[0x7f231d64e7a9]
/lib64/libpthread.so.0(+0x81da)[0x7f231dbdd1da]
/lib64/libc.so.6(clone+0x43)[0x7f231ceafe73]
../../tests/unit-test-driver.sh: line 36: 13597 Aborted (core dumped) "${TEST_PROGRAM}"
FAIL qpmulti_test (exit status: 134)
```
There's no core file or full backtrace in the logs.

## Cross-check - server should check its HA partner config
https://gitlab.isc.org/isc-projects/kea/-/issues/2897 · Tomek Mrugalski · 2023-06-15

Here's an idea for a new HA capability. On startup (or when an explicit command is called), the server retrieves its partner's configuration with `config-get` and checks it for consistency: whether the subnets and pools are defined the same way, whether the subnet-ids match, etc.
Right now the doc says those should be the same, with the only difference being server-name, but we don't check it.
What to do with spotted differences is to be determined. We could print a warning, refuse the HA connection, shut down, or maybe even have the primary attempt to fix its partner's config.
This is merely an idea. If we like it, the first step would be to turn this into a more coherent design. Hence the ~design label.

Milestone: backlog

## Use liburcu QSBR flavor
https://gitlab.isc.org/isc-projects/bind9/-/issues/4102 · Ondřej Surý · 2023-07-26

The QSBR flavor is faster, but also requires rcu_quiescent_state() to be called periodically from every RCU thread.

Milestone: Not planned · Assignee: Ondřej Surý

## System test with Postgres using the ident authentication method
https://gitlab.isc.org/isc-projects/stork/-/issues/1043 · Slawek Figiel · 2023-06-06
I added some unit and system tests to check if Stork supports the main Postgres authentication methods.
I've written unit tests for `trust`, `peer`, `ident`, `md5`, and `scram-sha-256`.
I tried to write system tests for the above methods, and I did so for all except `ident`.
I failed to configure the ident service. The ident service runs on port 113 and implements [RFC 1413](https://datatracker.ietf.org/doc/html/rfc1413).
We use Debian 10.13-slim in our system tests, and no ident service is built-in.
Three ident packages are available in the `apt` repository:
- `ident2`
- `oidentd`
- `nullidentd`
I checked all, and none of them is helpful in our case.
`ident2` runs properly but does not support IPv6, and the Postgres container tries to connect over that protocol. Because Postgres runs in a Docker container, the configuration capabilities are limited; I couldn't force it to use IPv4 without heavily reconfiguring our system tests' networks.
`oidentd` supports IPv6 well, but it didn't run, due to a failure while dropping root privileges. The problem occurs even if I run the service as a non-root user. I suppose it is a bug that is fixed in newer versions. Unfortunately, the author provides the binary packages on their own webpage; I don't think it is good practice to link to non-trusted webpages from the system tests' environment, so I abandoned using them. I couldn't build the application from sources because some packages are missing in our current setup, and I didn't want to extend it.
`nullidentd` is a fake ident server intended to use with `inetd`. It increases the complexity of the solution, so I didn't spend time on it.
I think the best solution is to upgrade the system tests' operating system and use `oidentd`.
An alternative is implementing a fake ident service of our own, as RFC 1413 is a very simple protocol.

Milestone: backlog

## ZoneQuota stats counter is not counting everything
https://gitlab.isc.org/isc-projects/bind9/-/issues/4104 · Ondřej Surý · 2024-02-24

The `ZoneQuota` counter should log all the hits to `fcount_incr()` returning `ISC_R_QUOTA`, but it does so only in a single place. The counting should be moved to `fctx_incr()`.

Milestone: May 2024 (9.18.27, 9.18.27-S1, 9.19.24) · Assignee: Ondřej Surý

## "serve-stale:check prefetch processing of a stale CNAME target" fails on FreeBSD 13
https://gitlab.isc.org/isc-projects/bind9/-/issues/4112 · Michal Nowak · 2023-07-07
Job [#3435305](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3435305) failed for ff3d25a47f9f969669b2e4f5cde10c50f9cdd171 (~"v9.18").
On FreeBSD 13.2, the `check prefetch processing of a stale CNAME target` check [failed](https://gitlab.isc.org/isc-projects/bind9/-/jobs/3435305) [twice](https://gitlab.isc.org/isc-private/bind9/-/jobs/3431983) in recent days:
```
2023-06-02 01:09:52 INFO:serve-stale I:serve-stale_tmp_q8yamlle:check prefetch processing of a stale CNAME target (214)
2023-06-02 01:09:55 INFO:serve-stale I:serve-stale_tmp_q8yamlle:failed
```
This was expected:
```
target.example. 2 IN A 10.53.0.2
```
But this was the answer:
```
target.example. 30 IN A 10.53.0.2
```
We got a stale answer after client timeout (`; EDE: 3 (Stale Answer): (client timeout)`), query time was 1840 msec. Locally, I get 2 msec and a non-stale answer.
I was unable to reproduce the problem locally.

## next-server option causes migration failure if host is unknown
https://gitlab.isc.org/isc-projects/keama/-/issues/18 · Darren Ankney · 2023-09-21

A simple configuration like this:
```
group {
filename "Xncd19c";
next-server ncd-booter;
host ncd4 { hardware ethernet 0:c0:c3:88:2d:81; }
host ncd5 { hardware ethernet 0:c0:c3:00:14:11; }
}
```
causes the error:
```
ncd-booter: host unknown.
next-server ncd-booter;
^
```
which is printed to stderr and stops the input configuration from being migrated.
It is possible that, wherever this is being run, the host may not be valid from that location (e.g., internal private network hosts). This is especially true now that it is available from the web: https://dhcp.isc.org
The above example configuration is taken from the ISC DHCP man pages.

Milestone: 4.5.1

## group statements converted incorrectly (or ignored?)
https://gitlab.isc.org/isc-projects/keama/-/issues/19 · Darren Ankney · 2023-09-21

The best way to illustrate this is with an example.
Here is an example dhcpd.conf (partial)
```
group {
filename "Xncd19r";
next-server www.microsoft.com;
host ncd1 { hardware ethernet 0:c0:c3:49:2b:57; fixed-address 10.0.3.252; }
host ncd2 { hardware ethernet 0:c0:c3:80:fc:32; fixed-address 10.0.3.253; }
host ncd3 { hardware ethernet 0:c0:c3:22:46:81; fixed-address 10.0.3.254; }
}
host ncd10 {
hardware ethernet 00:00:00:11:11:11;
fixed-address 10.0.3.251;
filename "Xncd19r";
next-server www.microsoft.com;
}
```
which are migrated by keama to:
```
"reservations": [
{
"hostname": "ncd1",
"hw-address": "00:c0:c3:49:2b:57",
"ip-address": "10.0.3.252"
},
{
"hostname": "ncd2",
"hw-address": "00:c0:c3:80:fc:32",
"ip-address": "10.0.3.253"
},
{
"hostname": "ncd3",
"hw-address": "00:c0:c3:22:46:81",
"ip-address": "10.0.3.254"
},
{
"hostname": "ncd10",
"hw-address": "00:00:00:11:11:11",
"ip-address": "10.0.3.251",
"boot-file-name": "Xncd19r",
"next-server": "184.84.169.167"
}
```
```

Note how ncd1, ncd2, and ncd3 lack the "next-server" and "boot-file-name" attributes.
The example dhcpd.conf was taken from the ISC DHCP man pages.

Milestone: 4.5.1

## better document logic statements are unsupported
https://gitlab.isc.org/isc-projects/keama/-/issues/20 · Darren Ankney · 2023-09-21

This is best shown with an example.
Partial ISC DHCP configuration
```
if exists agent.circuit-id
{
log ( error, concat( "Lease for ", binary-to-ascii (10, 8, ".", leased-address), " is connected to ", option agent.circuit-id));
}
option space CALIXGC;
option CALIXGC.acs-url code 1 = text;
if (substring(option vendor-class-identifier, 0, 21) = "844G.ONT.dslforum.org") {
vendor-option-space CALIXGC;
option CALIXGC.acs-url "http://example.com:8080/user/pass";
}
```
get migrated to:
```
{
"Dhcp4": {
// "statement": {
// "if": {
// "condition": {
// "exists": {
// "universe": "agent",
// "name": "circuit-id",
// "code": 1
// }
// },
// "then": [
// {
// /// Kea does not support yet log statements
// /// Reference Kea #234
// "log": {
// "priority": "error",
// "message": {
// "concat": {
// "left": "Lease for ",
// "right": {
// "concat": {
// "left": {
// "binary-to-ascii": {
// "base": 10,
// "width": 8,
// "separator": ".",
// "buffer": {
// "leased-address": null
// }
// }
// },
// "right": {
// "concat": {
// "left": " is connected to ",
// "right": {
// "option": {
// "universe": "agent",
// "name": "circuit-id",
// "code": 1
// }
// }
// }
// }
// }
// }
// }
// }
// }
// }
// ]
// }
// },
"option-def": [
{
"space": "CALIXGC",
"name": "acs-url",
"code": 1,
"type": "string"
}
]
// "statement": {
// "if": {
// "condition": {
// "equal": {
// "left": {
// "substring": {
// "expression": {
// "option": {
// "universe": "dhcp",
// "name": "vendor-class-identifier",
// "code": 60
// }
// },
// "offset": 0,
// "length": 21
// }
// },
// "right": "844G.ONT.dslforum.org"
// }
// },
// "then": [
// {
// "config": {
// "name": "vendor-option-space",
// "code": 19,
// "value": "CALIXGC"
// }
// },
// {
// "option": {
// "space": "CALIXGC",
// "name": "acs-url",
// "code": 1,
// "data": "http://example.com:8080/user/pass"
// }
// }
// ]
// }
// }
}
}
```
Perhaps just print the original unsupported logic statement from the ISC DHCP configuration, with a comment that these are not supported; there is probably another method to accomplish the intent.

Milestone: 4.5.1

## Data race lib/dns/adb.c:1537 in clean_finds_at_name
https://gitlab.isc.org/isc-projects/bind9/-/issues/4118 · Michal Nowak · 2023-09-04

Job [respdiff-long:tsan](https://gitlab.isc.org/isc-private/bind9/-/jobs/3440993) failed for [d2fbe443b833d093f68bf4f5a1736242fc8d18a1](https://gitlab.isc.org/isc-private/bind9/-/commit/d2fbe443b833d093f68bf4f5a1736242fc8d18a1) (~"v9.18-S").
```
WARNING: ThreadSanitizer: data race
Write of size 4 at 0x000000000001 by thread T1 (mutexes: write M1, write M2):
#0 clean_finds_at_name lib/dns/adb.c:1537
#1 fetch_callback lib/dns/adb.c:4009
#2 task_run lib/isc/task.c:815
#3 isc_task_run lib/isc/task.c:896
#4 isc__nm_async_task netmgr/netmgr.c:848
#5 process_netievent netmgr/netmgr.c:920
#6 process_queue netmgr/netmgr.c:1013
#7 process_all_queues netmgr/netmgr.c:767
#8 async_cb netmgr/netmgr.c:796
#9 uv__async_io /usr/src/libuv-v1.44.1/src/unix/async.c:163
#10 isc__trampoline_run lib/isc/trampoline.c:189
Previous read of size 4 at 0x000000000001 by thread T2:
#0 findname lib/dns/resolver.c:3749
#1 fctx_getaddresses lib/dns/resolver.c:3993
#2 fctx_try lib/dns/resolver.c:4390
#3 rctx_nextserver lib/dns/resolver.c:10356
#4 rctx_done lib/dns/resolver.c:10503
#5 resquery_response lib/dns/resolver.c:8511
#6 udp_recv lib/dns/dispatch.c:638
#7 isc__nm_async_readcb netmgr/netmgr.c:2885
#8 isc__nm_readcb netmgr/netmgr.c:2858
#9 udp_recv_cb netmgr/udp.c:650
#10 isc__nm_udp_read_cb netmgr/udp.c:1057
#11 uv__udp_recvmsg /usr/src/libuv-v1.44.1/src/unix/udp.c:303
#12 isc__trampoline_run lib/isc/trampoline.c:189
Location is heap block of size 256 at 0x000000000025 allocated by thread T2:
#0 malloc ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:651
#1 mallocx lib/isc/jemalloc_shim.h:35
#2 mem_get lib/isc/mem.c:343
#3 isc__mem_get lib/isc/mem.c:761
#4 new_adbfind lib/dns/adb.c:1901
#5 dns_adb_createfind lib/dns/adb.c:2934
#6 findname lib/dns/resolver.c:3656
#7 fctx_getaddresses lib/dns/resolver.c:3993
#8 fctx_try lib/dns/resolver.c:4390
#9 rctx_nextserver lib/dns/resolver.c:10356
#10 rctx_done lib/dns/resolver.c:10503
#11 resquery_response lib/dns/resolver.c:8511
#12 udp_recv lib/dns/dispatch.c:638
#13 isc__nm_async_readcb netmgr/netmgr.c:2885
#14 isc__nm_readcb netmgr/netmgr.c:2858
#15 udp_recv_cb netmgr/udp.c:650
#16 isc__nm_udp_read_cb netmgr/udp.c:1057
#17 uv__udp_recvmsg /usr/src/libuv-v1.44.1/src/unix/udp.c:303
#18 isc__trampoline_run lib/isc/trampoline.c:189
Mutex M1 is already destroyed.
Mutex M2 is already destroyed.
Thread T1 (running) created by main thread at:
#0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:962
#1 isc_thread_create lib/isc/thread.c:73
#2 isc__netmgr_create netmgr/netmgr.c:311
#3 isc_managers_create lib/isc/managers.c:31
#4 create_managers bin/named/main.c:1042
#5 setup bin/named/main.c:1313
#6 main bin/named/main.c:1594
Thread T2 (running) created by main thread at:
#0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:962
#1 isc_thread_create lib/isc/thread.c:73
#2 isc__netmgr_create netmgr/netmgr.c:311
#3 isc_managers_create lib/isc/managers.c:31
#4 create_managers bin/named/main.c:1042
#5 setup bin/named/main.c:1313
#6 main bin/named/main.c:1594
SUMMARY: ThreadSanitizer: data race lib/dns/adb.c:1537 in clean_finds_at_name
```

Not planned

https://gitlab.isc.org/isc-projects/kea/-/issues/2905
Possible race with v6 BLQ extended info tables
2023-07-10T18:25:52Z
Francis Dupont

v6 BLQ extended info tables are subject to races because they are not managed in the same transaction as the lease6 table.
For instance, it is possible to retrieve by relay or remote ID some leases that in fact no longer have the corresponding extended info. There are three ways to handle (or not) this issue:
- do nothing, i.e. move this ticket to Outstanding
- add a warning in the doc explaining the issue and recommending to use small page sizes (configurable in the hook) to limit the race window
- check leases in the hook before returning them (i.e. add some code after the type and expire checks)
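For illustration, the third option's re-check could be modeled as follows. This is a hypothetical sketch of the logic only: the real hook is Kea C++ code, and `leases_by_remote_id`, the dict-based "tables", and the field names are all invented for this example.

```python
# Hypothetical model of option 3: re-validate each lease against the
# lease6 table before returning BLQ results, so entries whose extended
# info vanished in a racing transaction are silently dropped.
# All names and the dict-based "tables" are invented for illustration.
import time

def leases_by_remote_id(remote_id, ext_info_table, lease6_table, now=None):
    """Return only leases that still exist, are not expired, and still
    carry extended info (the re-check after the type/expire checks)."""
    if now is None:
        now = time.time()
    found = []
    for lease_addr in ext_info_table.get(remote_id, []):
        lease = lease6_table.get(lease_addr)
        if lease is None:
            continue  # lease deleted after the index row was read
        if lease["expire"] <= now:
            continue  # mirrors the existing expire check
        if not lease.get("extended_info"):
            continue  # extended info cleared by a racing update
        found.append(lease)
    return found

# Race window in miniature: the index still lists 2001:db8::2, but its
# extended info has already been removed from the lease table.
ext_index = {"remote-1": ["2001:db8::1", "2001:db8::2"]}
leases = {
    "2001:db8::1": {"expire": 10**12, "extended_info": {"remote-id": "remote-1"}},
    "2001:db8::2": {"expire": 10**12, "extended_info": None},
}
survivors = leases_by_remote_id("remote-1", ext_index, leases)
```

Here only `2001:db8::1` survives the re-check, which is the point of the extra code: the stale index entry is filtered out at return time instead of being handed to the BLQ client.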
My current opinion favors the first solution: the race is very unlikely, so it is enough to know about it.

outstanding

https://gitlab.isc.org/isc-projects/stork/-/issues/1046
Unstable system test: test_get_ha_pair_mt_config_review_reports
2023-08-01T13:49:49Z
Slawek Figiel

There are extensive logs. It seems some system test waiting function caused the failure: the expected condition was never met, and the system tests stopped after performing 120 unsuccessful retries.
[Pipeline](https://gitlab.isc.org/isc-projects/stork/-/jobs/3442249)
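The shape of that waiting function can be sketched as below. This is a minimal stand-in: the real wrapper lives in `core/utils.py`, and the name `wait_for`, its signature, and the zero delay are assumptions; only the 120-attempt budget comes from the log.

```python
# Minimal sketch of a retry/wait helper of the kind the system tests use.
# The real implementation is in core/utils.py; this stand-in only shows
# the "retry until the condition holds or the budget runs out" shape.
import time

def wait_for(check, retries=120, delay=0.0):
    """Call `check` until it returns a truthy value; give up with
    TimeoutError after `retries` attempts, remembering the last error."""
    last_error = None
    for _ in range(retries):
        try:
            result = check()
            if result:
                return result
        except Exception as err:  # e.g. a transient HTTP 500 from the server
            last_error = err
        time.sleep(delay)
    raise TimeoutError(
        f"condition not met after {retries} retries (last error: {last_error!r})")

# A check that fails twice (as a flaky server would) and then succeeds.
attempts = {"n": 0}
def flaky_check():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("500 Internal Server Error")
    return attempts["n"]

result = wait_for(flaky_check)
```

In the failing run below, every one of the 120 attempts ends in the HTTP 500 path, so the loop exhausts its budget and the test never sees the expected machine states.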
## Code trace
```
=================================== FAILURES ===================================
_________ test_get_ha_pair_mt_config_review_reports[ha_pair_service0] __________
server_service = <core.wrappers.server.Server object at 0x7f3e37abd7f0>
ha_pair_service = (<core.wrappers.kea.Kea object at 0x7f3e37a7e970>, <core.wrappers.kea.Kea object at 0x7f3e37895490>)
    @ha_pair_parametrize('agent-kea-ha1-mt', 'agent-kea-ha2-mt')
    def test_get_ha_pair_mt_config_review_reports(server_service: Server, ha_pair_service):
        """Test that the Stork server suggests to use the dedicated listeners
        if the Kea HA is running in the multi-threading mode but the peers
        communicate over the Kea Control Agent."""
        server_service.log_in_as_admin()
        server_service.authorize_all_machines()
>       states = server_service.wait_for_next_machine_states()

tests/test_config_review.py:90:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
core/utils.py:69: in inner_wrapper
    result = f(*args, **kwargs)
core/wrappers/server.py:589: in wait_for_next_machine_states
    state = self.read_machine_state(machine["id"])
core/wrappers/server.py:296: in read_machine_state
    return api_instance.get_machine_state(id=machine_id)
openapi_client/api/services_api.py:2785: in get_machine_state
    return self.get_machine_state_endpoint.call_with_http_info(**kwargs)
openapi_client/api_client.py:881: in call_with_http_info
    return self.api_client.call_api(
openapi_client/api_client.py:423: in call_api
    return self.__call_api(resource_path, method,
openapi_client/api_client.py:205: in __call_api
    raise e
openapi_client/api_client.py:198: in __call_api
    response_data = self.request(
openapi_client/api_client.py:449: in request
    return self.rest_client.GET(url,
openapi_client/rest.py:235: in GET
    return self.request("GET", url,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <openapi_client.rest.RESTClientObject object at 0x7f3e38882e80>
method = 'GET', url = 'http://docker:42080/api/machines/1/state'
query_params = []
headers = {'Accept': 'application/json', 'Cookie': 'session=QAGPZfnh89GWqFHFZmnRhjm-Iv8Q0kzu5K94XIJYdN8; Path=/; Expires=Tue, 06 Jun 2023 16:15:45 GMT; Max-Age=86400; HttpOnly; SameSite=Lax', 'User-Agent': 'OpenAPI-Generator/1.0.0/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, float)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                # Only set a default Content-Type for POST, PUT, PATCH and OPTIONS requests
                if (method != 'DELETE') and ('Content-Type' not in headers):
                    headers['Content-Type'] = 'application/json'
                if query_params:
                    url += '?' + urlencode(query_params)
                if ('Content-Type' not in headers) or (re.search('json',
                        headers['Content-Type'], re.IGNORECASE)):
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'application/x-www-form-urlencoded':  # noqa: E501
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=False,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'multipart/form-data':
                    # must del headers['Content-Type'], or the correct
                    # Content-Type which generated by urllib3 will be
                    # overwritten.
                    del headers['Content-Type']
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=True,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                # Pass a `string` parameter directly in the body to support
                # other content types than Json when `body` argument is
                # provided in serialized form
                elif isinstance(body, str) or isinstance(body, bytes):
                    request_body = body
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                else:
                    # Cannot generate the request from given parameters
                    msg = """Cannot prepare a request message for provided
                             arguments. Please check that your arguments match
                             declared content type."""
                    raise ApiException(status=0, reason=msg)
            # For `GET`, `HEAD`
            else:
                r = self.pool_manager.request(method, url,
                                              fields=query_params,
                                              preload_content=_preload_content,
                                              timeout=timeout,
                                              headers=headers)
        except urllib3.exceptions.SSLError as e:
            msg = "{0}\n{1}".format(type(e).__name__, str(e))
            raise ApiException(status=0, reason=msg)

        if _preload_content:
            r = RESTResponse(r)

            # log response body
            logger.debug("response body: %s", r.data)

        if not 200 <= r.status <= 299:
            if r.status == 401:
                raise UnauthorizedException(http_resp=r)

            if r.status == 403:
                raise ForbiddenException(http_resp=r)

            if r.status == 404:
                raise NotFoundException(http_resp=r)

            if 500 <= r.status <= 599:
>               raise ServiceException(http_resp=r)
E               openapi_client.exceptions.ServiceException: Status Code: 500
E               Reason: Internal Server Error
E               HTTP response headers: HTTPHeaderDict({'Content-Type': 'application/json', 'Vary': 'Cookie', 'Date': 'Mon, 05 Jun 2023 16:15:49 GMT', 'Content-Length': '64'})
E               HTTP response body: {"message":"Problem storing application state in the database"}

openapi_client/rest.py:227: ServiceException
```

backlog
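The final `ServiceException` comes from the status-code mapping at the end of `request()` in the trace above. A self-contained sketch of that mapping is below; the exception classes are simplified stand-ins for the generated `openapi_client` ones, which wrap an `http_resp` object rather than a bare status and reason.

```python
# Simplified stand-ins for the generated openapi_client exceptions; the
# real classes take an http_resp object instead of status/reason values.
class ApiException(Exception):
    def __init__(self, status, reason=None):
        super().__init__(f"Status Code: {status}; Reason: {reason}")
        self.status = status
        self.reason = reason

class UnauthorizedException(ApiException):
    pass

class ForbiddenException(ApiException):
    pass

class NotFoundException(ApiException):
    pass

class ServiceException(ApiException):
    pass

def raise_for_status(status, reason=None):
    """Mirror the checks at the end of request(): 2xx passes through,
    401/403/404 get dedicated exceptions, any 5xx becomes
    ServiceException, and anything else falls back to ApiException."""
    if 200 <= status <= 299:
        return status
    if status == 401:
        raise UnauthorizedException(status, reason)
    if status == 403:
        raise ForbiddenException(status, reason)
    if status == 404:
        raise NotFoundException(status, reason)
    if 500 <= status <= 599:
        raise ServiceException(status, reason)
    raise ApiException(status, reason)
```

With this mapping, the server's `500 Internal Server Error` ("Problem storing application state in the database") surfaces as `ServiceException`, which is exactly what the test's waiting wrapper kept retrying against.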