ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues

https://gitlab.isc.org/isc-projects/kea/-/issues/222
**Pool log threshold** (Francis Dupont, updated 2023-07-17, milestone: backlog)

Idea from ISC DHCP, cf. dhcpd.8:
>>>
The log-threshold-high and log-threshold-low statements
**log-threshold-high** percentage;
**log-threshold-low** percentage;
The log-threshold-low and log-threshold-high statements are used to
control when a message is output about pool usage. The value for
both of them is the percentage of the pool in use. If the high
threshold is 0 or has not been specified, no messages will be produced.
If a high threshold is given, a message is output once the
pool usage passes that level. After that, no more messages will be
output until the pool usage falls below the low threshold. If the
low threshold is not given, it defaults to a value of zero.
A special case occurs when the low threshold is set to be higher than
the high threshold. In this case, a message will be generated each
time a lease is acknowledged when the pool usage is above the high
threshold.
Note that threshold logging will be automatically disabled for shared
subnets whose total number of addresses is larger than (2^64)-1. The
server will emit a log statement at startup when threshold logging is
disabled as shown below:
"Threshold logging disabled for shared subnet of ranges:
<addresses>"
This is likely to have no practical runtime effect as CPUs are
unlikely to support a server actually reaching such a large number of
leases.
>>>
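The hysteresis described in the quoted text can be sketched as a small state machine. This is an illustration with invented names, not ISC DHCP's actual implementation:

```c
#include <stdbool.h>

/* Invented illustration of the dhcpd.8 semantics quoted above. */
struct pool_thresholds {
    unsigned high;  /* percent; 0 (or unset) disables messages */
    unsigned low;   /* percent; defaults to 0 */
    bool above;     /* true once usage has crossed 'high' */
};

/* Call on each lease acknowledgement with current pool usage in percent;
 * returns true when a threshold message should be emitted. */
static bool check_thresholds(struct pool_thresholds *t, unsigned usage) {
    if (t->high == 0)
        return false;               /* high threshold unset: no messages */
    if (t->above && usage < t->low)
        t->above = false;           /* usage fell below low: re-arm */
    if (!t->above && usage >= t->high) {
        t->above = true;            /* suppress until usage drops below low */
        return true;
    }
    /* Note the quoted special case: low > high re-arms while usage is still
     * above high, so a message fires on every acknowledgement up there. */
    return false;
}
```

A hook library as proposed below would call something like this from a lease-commit callout and perform its action whenever the function returns true.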
From this I like the idea of having a hook library which performs a simple action (logging, for example) when a threshold is crossed in a reasonably sized pool.

https://gitlab.isc.org/isc-projects/kea/-/issues/196
**Improve netconf performance: keep the control socket connection open** (Tomek Mrugalski, updated 2022-11-02, milestone: backlog)

In 1.5 the kea-netconf agent opens up a new connection every time there is a new config to be set. This means that if you're changing the configuration frequently, there are many connections set up and torn down. It would be better to have a persistent connection (or the option to enable one).
This is out of scope for 1.5, though. Looks like a potential optimization in 1.6.

https://gitlab.isc.org/isc-projects/kea/-/issues/152
**Add a rebuild-test target for CA, D2 and NETCONF** (Francis Dupont, updated 2022-11-02, milestone: backlog)

And of course Netconf. Currently a rebuild-test target is available only for DHCPv4 and DHCPv6; it should be adapted to anything using a flex/bison JSON syntax.

https://gitlab.isc.org/isc-projects/kea/-/issues/110
**pool order** (Francis Dupont, updated 2022-11-02, milestone: backlog)

Configuration order of subnets and client classes is critical. Pools are ordered too, but IMHO the cases where it matters are uncommon; in fact it will be an issue only for config backend unit tests. I suggest NOT addressing this issue (1.x low, for instance?).

https://gitlab.isc.org/isc-projects/kea/-/issues/76
**Update leases on 'dashboard server' without running HA** (Ghost User, updated 2022-11-02, milestone: backlog)

One of our GSOC students is working on a Kea dashboard, based on the GLASS project, a dashboard for ISC DHCP. The dashboard requires access to a local lease file so it can continuously or frequently update stats about pool utilization, etc. It seems like the ideal way to do this is to push lease file updates to the dashboard server.
It seems we can use the 'backup server' feature of HA, but without the HA support. So, we would want a mode that doesn't check for a valid HA configuration and an HA partner. Also, we would want this feature to not require the premium HA package.

https://gitlab.isc.org/isc-projects/kea/-/issues/54
**Reconfigure with an unusable lease back end leaves the server in a non-working state (no rollback)** (Ghost User, updated 2022-11-02, milestone: backlog)

A running kea-dhcpX server can be rendered non-functional by issuing a reconfigure (either by command or signal) with a configuration containing a flawed lease back-end specification, or one pointing to a back end which cannot be reached.
After successfully parsing the configuration, the server attempts to connect to the new lease back end. This causes the LeaseMgrFactory to close the existing instance; it then fails to open a new one. The server emits a log message stating that reconfiguration has failed, and at this point it no longer processes client packets.
A simple scenario:
1. start server with memfile lease back end
2. verify server hands out leases
3. change configuration to MySQL back end with an invalid database or user name
4. issue reconfig command
5. verify server does not see or acknowledge packets
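A rollback-safe pattern, sketched here with hypothetical names (this is not Kea's actual LeaseMgrFactory API), would open the new back end into a staged instance and only swap it in on success:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for a lease back end; not Kea's LeaseMgrFactory. */
struct lease_mgr { const char *dsn; };

static struct lease_mgr *current_mgr = NULL;

/* Pretend "open": fails when the DSN is unusable (here: NULL). */
static bool lease_mgr_open(struct lease_mgr *mgr, const char *dsn) {
    if (dsn == NULL)
        return false;
    mgr->dsn = dsn;
    return true;
}

/* Open the new back end first, commit only on success. The bug described
 * in this issue is the opposite order: close the current instance, then
 * fail to open the new one, leaving no working instance at all. */
static bool reconfigure(struct lease_mgr *staged, const char *dsn) {
    if (!lease_mgr_open(staged, dsn))
        return false;           /* keep the old, still-working instance */
    current_mgr = staged;       /* commit */
    return true;
}
```

With this ordering, step 4 of the scenario would fail the reconfigure command but leave the memfile back end serving leases.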
The basic issue is that the LeaseMgrFactory only permits one instance to exist. There is no "staged" instance, and we do not restore the one we closed. We probably don't handle host back ends any differently.

https://gitlab.isc.org/isc-projects/kea/-/issues/52
**kea-dhcp4 can't offer reserved IP** (Ghost User, updated 2022-11-02, milestone: backlog)

subnet: 192.168.0.0/24
reservation1 : mac(aa:aa:aa:aa:aa:aa) ip(192.168.0.11)
reservation2 : mac(bb:bb:bb:bb:bb:bb) ip(192.168.0.12)
reservation1 has router option(3) 192.168.0.3
reservation2 has no options.
I used mysql for hosts reservation.
kea-dhcp4 responds to reservation1 but sometimes fails to respond to reservation2.
The failure log is: 'preparing on-wire-format of the packet to be sent failed DHCPv4 Option4AddrLst 3 is too big. At most 255 bytes are supported.'
In the packet debug log, kea-dhcp4 tries to respond to reservation2 with a router option whose value is 0.0.0.0 repeated many times (maybe 2048-4096 bytes).

https://gitlab.isc.org/isc-projects/kea/-/issues/51
**Impossible to use a Chromecast with Kea DHCP** (Ghost User, updated 2022-11-02, milestone: backlog)

Hello,
for a few months now I have been using the Kea DHCP server; it works properly with all my devices, but I have a big problem with my Chromecast: it doesn't work at all with your DHCP server. I have already contacted the Chromecast support team. I don't know if I am the only one with this problem.
Before I decided to use Kea I was using my ISP's DHCP server, but it was too limited and very buggy.
I hope you will be able to find a way to fix this. I didn't give you any logs or config files because I don't know what you really need, but I really need it working and I'll give you any file you need. Your DHCP server is VERY nice!
Cordially

https://gitlab.isc.org/isc-projects/kea/-/issues/47
**Update network/subnet hooks to handle new classification fields** (Ghost User, updated 2022-11-02, milestone: backlog)

[#5374](https://oldkea.isc.org/ticket/5374) was merged but introduced new features which require an update of the hooks managing shared networks and subnets.

https://gitlab.isc.org/isc-projects/kea/-/issues/44
**make database config parsing more flexible** (Ghost User, updated 2022-11-02, milestone: backlog)

Cf. #5528 comments (look for "line 125").

https://gitlab.isc.org/isc-projects/kea/-/issues/22
**stringop-truncation warnings** (Francis Dupont, updated 2022-11-02, milestone: backlog)
G++ 8 has a new warning, -Wstringop-truncation, which is emitted when strncat or strncpy (only the latter is used in Kea) fails to terminate its result (i.e. to append a null character).
On Fedora 28 there are spurious warnings on local/unix socket addresses or ifnames because they are filled using strncpy.
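The pattern the warning targets, and the usual way to address it, can be sketched as follows (a generic illustration, not Kea's code):

```c
#include <string.h>

/* Fixed-size buffer, as in struct sockaddr_un's sun_path or an ifname
 * field; DST_LEN is an invented size for illustration. */
#define DST_LEN 8

static void fill_path(char dst[DST_LEN], const char *src) {
    /* gcc 8's -Wstringop-truncation warns about strncpy() calls that may
     * drop the terminating NUL. Copying one byte less and terminating
     * explicitly avoids both the truncation bug and the warning. */
    strncpy(dst, src, DST_LEN - 1);
    dst[DST_LEN - 1] = '\0';
}
```

For genuinely non-string character arrays, the alternative mentioned below is to annotate the field with the `nonstring` attribute instead.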
I have mixed feelings about this: IMHO the issue is not in Kea but in the system header files, which should add a `nonstring` attribute but did not, so taking no action is a possible answer to this...

https://gitlab.isc.org/isc-projects/bind9/-/issues/1916
**Check ECS response in DiG for RFC compliance** (Mark Andrews, updated 2024-03-13, milestone: Not planned)

We have seen servers that return ECS responses that don't meet this requirement.
```
RFC 7871, 7.2.1. Authoritative Nameserver
FAMILY, SOURCE PREFIX-LENGTH, and ADDRESS in the response MUST match
those in the query. Echoing back these values helps to mitigate
certain attack vectors, as described in Section 11.
```
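The required check is a plain field comparison, sketched here with a hypothetical parsed-option structure (not dig's actual code):

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical parsed EDNS Client Subnet option (RFC 7871, section 6). */
struct ecs_opt {
    uint16_t family;        /* 1 = IPv4, 2 = IPv6 */
    uint8_t  source_prefix; /* SOURCE PREFIX-LENGTH */
    uint8_t  addr[16];      /* address bytes, zero-padded */
};

/* RFC 7871, 7.2.1: FAMILY, SOURCE PREFIX-LENGTH, and ADDRESS in the
 * response MUST match those in the query. */
static bool ecs_response_matches(const struct ecs_opt *q,
                                 const struct ecs_opt *r) {
    return q->family == r->family &&
           q->source_prefix == r->source_prefix &&
           memcmp(q->addr, r->addr, sizeof(q->addr)) == 0;
}
```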
Add a warning when the ECS response fails to meet this requirement.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3877
**Support _FORTIFY_SOURCE=3** (Tony Finch, updated 2023-02-16, milestone: Not planned)

Recent versions of clang, gcc, and glibc support `_FORTIFY_SOURCE=3`, which adds support for tracking the sizes of allocations at run time in a way that can be checked by `memmove()` and friends. To make use of the new fortification level, allocation functions need attributes indicating which argument is the size (`__alloc_size__`), and other functions need to tell the compiler which arguments are pointer/size pairs (`__access__`). For more details see https://developers.redhat.com/articles/2023/02/06/how-improve-application-security-using-fortifysource3#

https://gitlab.isc.org/isc-projects/bind9/-/issues/23
**DDoS mitigation** (Ondřej Surý, updated 2023-12-22, milestone: Not planned)

This is a placeholder bug for general DDoS mitigation techniques that need to be introduced into BIND to cope with the current DNS landscape.

https://gitlab.isc.org/isc-projects/bind9/-/issues/3058
**SUMMARY: ThreadSanitizer: data race in read** (Ondřej Surý, updated 2023-11-02, milestone: Not planned)

This almost seems like we are passing some buffer to isc_task that libuv is still using.
```
==================
WARNING: ThreadSanitizer: data race (pid=22062)
Write of size 8 at 0x7fdcbeac0000 by thread T9:
#0 read <null> (libtsan.so.0+0x4ace2)
#1 uv__read /usr/src/libuv-v1.42.0/src/unix/stream.c:1164 (libuv.so.1+0x227e1)
#2 isc__trampoline_run /builds/isc-projects/bind9/lib/isc/trampoline.c:185 (libisc-9.17.20.so+0x7b0e1)
Previous read of size 8 at 0x7fdcbeac0000 by thread T8:
#0 memmove <null> (libtsan.so.0+0x5da6e)
#1 memmove /usr/include/bits/string_fortified.h:36 (libisc-9.17.20.so+0x453d9)
#2 isc_buffer_copyregion /builds/isc-projects/bind9/lib/isc/buffer.c:530 (libisc-9.17.20.so+0x453d9)
#3 dns_zone_forwardupdate /builds/isc-projects/bind9/lib/dns/zone.c:18408 (libdns-9.17.20.so+0x22081d)
#4 forward_action /builds/isc-projects/bind9/lib/ns/update.c:3748 (libns-9.17.20.so+0x516d6)
#5 task_run /builds/isc-projects/bind9/lib/isc/task.c:827 (libisc-9.17.20.so+0x7237a)
#6 isc_task_run /builds/isc-projects/bind9/lib/isc/task.c:907 (libisc-9.17.20.so+0x7237a)
#7 isc__nm_async_task netmgr/netmgr.c:835 (libisc-9.17.20.so+0x1e9ab)
#8 process_netievent netmgr/netmgr.c:914 (libisc-9.17.20.so+0x27efb)
#9 process_queue netmgr/netmgr.c:1008 (libisc-9.17.20.so+0x28a2a)
#10 process_all_queues netmgr/netmgr.c:754 (libisc-9.17.20.so+0x29353)
#11 async_cb netmgr/netmgr.c:783 (libisc-9.17.20.so+0x29353)
#12 uv__async_io /usr/src/libuv-v1.42.0/src/unix/async.c:163 (libuv.so.1+0x110ef)
#13 isc__trampoline_run /builds/isc-projects/bind9/lib/isc/trampoline.c:185 (libisc-9.17.20.so+0x7b0e1)
Location is heap block of size 1310720 at 0x7fdcbeac0000 allocated by main thread:
#0 malloc <null> (libtsan.so.0+0x32919)
#1 mallocx /builds/isc-projects/bind9/lib/isc/jemalloc_shim.h:33 (libisc-9.17.20.so+0x5b02a)
#2 mem_get /builds/isc-projects/bind9/lib/isc/mem.c:343 (libisc-9.17.20.so+0x5b02a)
#3 isc__mem_get /builds/isc-projects/bind9/lib/isc/mem.c:758 (libisc-9.17.20.so+0x5b02a)
#4 isc__netmgr_create netmgr/netmgr.c:319 (libisc-9.17.20.so+0x1f2a4)
#5 isc_managers_create /builds/isc-projects/bind9/lib/isc/managers.c:39 (libisc-9.17.20.so+0x59ef2)
#6 create_managers /builds/isc-projects/bind9/bin/named/main.c:920 (named+0x424a19)
#7 setup /builds/isc-projects/bind9/bin/named/main.c:1184 (named+0x424a19)
#8 main /builds/isc-projects/bind9/bin/named/main.c:1452 (named+0x424a19)
Thread T9 'isc-net-0008' (tid=22096, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x5bf45)
#1 isc_thread_create /builds/isc-projects/bind9/lib/isc/thread.c:79 (libisc-9.17.20.so+0x7466d)
#2 isc__netmgr_create netmgr/netmgr.c:328 (libisc-9.17.20.so+0x1f34b)
#3 isc_managers_create /builds/isc-projects/bind9/lib/isc/managers.c:39 (libisc-9.17.20.so+0x59ef2)
#4 create_managers /builds/isc-projects/bind9/bin/named/main.c:920 (named+0x424a19)
#5 setup /builds/isc-projects/bind9/bin/named/main.c:1184 (named+0x424a19)
#6 main /builds/isc-projects/bind9/bin/named/main.c:1452 (named+0x424a19)
Thread T8 'isc-net-0007' (tid=22094, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x5bf45)
#1 isc_thread_create /builds/isc-projects/bind9/lib/isc/thread.c:79 (libisc-9.17.20.so+0x7466d)
#2 isc__netmgr_create netmgr/netmgr.c:328 (libisc-9.17.20.so+0x1f34b)
#3 isc_managers_create /builds/isc-projects/bind9/lib/isc/managers.c:39 (libisc-9.17.20.so+0x59ef2)
#4 create_managers /builds/isc-projects/bind9/bin/named/main.c:920 (named+0x424a19)
#5 setup /builds/isc-projects/bind9/bin/named/main.c:1184 (named+0x424a19)
#6 main /builds/isc-projects/bind9/bin/named/main.c:1452 (named+0x424a19)
SUMMARY: ThreadSanitizer: data race (/lib64/libtsan.so.0+0x4ace2) in read
==================
ThreadSanitizer: reported 1 warnings
```

https://gitlab.isc.org/isc-projects/bind9/-/issues/2953
**Resolver issues with refactored dispatch code** (Michał Kępień, updated 2023-11-02, milestone: Not planned)

This issue attempts to describe various issues with resolver behavior
found after merging !4601 (#2401). Most of these issues are
intermittent, so it is important to keep track of them somewhere in
order to not forget that they exist. We should get to the bottom of all
of these issues before we release BIND 9.18.0.
1. [x] **Recursive Perflab tests cause the resolver to stop responding.**
This issue might be the simplest to start with because the behavior
observed seems to be consistent rather than intermittent. Namely,
all Perflab jobs which test a resolver seem to crank out a response
rate of some 70-120 kQPS at the beginning of the test and then...
the resolver stops responding indefinitely. While Perflab was not
designed with recursive tests in mind and therefore we can treat its
recursive results with a grain of salt, it certainly should not be
reporting zeros all over the place.
- https://perflab.isc.org/#/config/run/5bf195dd83ba91a870b2976f/
- https://perflab.isc.org/#/config/run/5cd6a166643076f6c1f6c26f/
- https://perflab.isc.org/#/config/run/5db74b6264458967f762143a/
- https://perflab.isc.org/#/config/run/5db74b7264458967f762143b/
- https://perflab.isc.org/#/config/run/5db74c2764458967f7621440/
- https://perflab.isc.org/#/config/run/5db74c3464458967f7621441/
(Resolved by !5500.)
2. [x] **`respdiff` tests are *sometimes* slow.**
Ever since we merged the dispatch branch, the `respdiff` tests
started failing *intermittently* for `main` (and only `main`)
because of timeouts.
- [job 2016337][1]: pass, ~2m30s per each 10,000 queries
- [job 2016622][2]: pass, ~2m45s per each 10,000 queries
- [job 2017990][3]: pass, ~2m30s per each 10,000 queries
- [job 2020093][4]: fail, 7+ minutes per each 10,000 queries
- [job 2023057][5]: fail, 16+ minutes per each 10,000 queries
- [job 2023490][6]: pass, ~2m40s per each 10,000 queries
I do not think varying CI runner stress can be blamed for this, not
for discrepancies this large. It also never happened before merging
!4601, AFAIK.
3. [x] **A lot of "stress" test graph indicate growing memory use.** #3002
While testing October BIND 9 releases, one of the 1-hour "stress"
tests ran in recursive mode for BIND 9.17.19 yielded a graph which
indicates that memory use growth over time might be an issue.
https://wiki.isc.org/bin/viewfile/QA/BindQaResults_9_11_36?filename=bind-9.17.19-linux-amd64-recursive-1h.png;rev=1
However, that phenomenon was not observable for other OS/arch
combinations this specific code revision was tested with.
It was also not observable on the *same* OS/arch combination for a
very similar code revision (the code differences should not have any
effect on memory use patterns):
https://wiki.isc.org/bin/viewfile/QA/BindQaResults_9_11_36?filename=bind-9.17.19-linux-amd64-recursive-1h.png;rev=2
Pre-release tests run for BIND 9.17.20 confirmed that memory leaks
are a common thing when `named` is used as a recursive resolver.
More details are available in #3002.
The "stress" tests are run on isolated VMs and despite being pretty
synthetic (fixed traffic pattern, everything happens on one machine,
etc.), they have a history of being very stable, so typical issues
like test host load varying over time etc. are not a factor here.
4. [x] **Lame servers with IPv6 unreachable cause hang on shutdown.** #2927
5. [x] **resolver test fails intermittently** #3013
See https://gitlab.isc.org/isc-projects/bind9/-/jobs/2054296
```
I:resolver:query count error: 6 NS records: expected queries 10, actual 11
I:resolver:failed
```
6. [x] **Assertion failed in `dns_resolver_logfetch()`** #2962
7. [x] **Assertion failed in `dns_dispatch_gettcp()`** #2963
8. [x] **Assertion failed in `dns_resolver_destroyfetch()`** #2969
9. [x] **ThreadSanitizer issues with adb** #2978 #2979
10. [x] **fctx_cancelquery() attempts to process a query which has already been freed** #3018
11. [x] **premature TCP connection closure leaks fetch contexts (hang on shutdown)** #3026
12. [ ] **validator loops can cause shutdown hang** #3033
13. [ ] **ADB finds for a broken zone may cause fetch contexts to hang** #3037
14. [ ] **ASAN error in fctx_cancelquery()** #3102
I decided to open a single issue for all of the above problems because I
sense they are somehow related and I hope that fixing the root cause of
one of them will eliminate the other ones as well.
[1]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2016337
[2]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2016622
[3]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2017990
[4]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2020093
[5]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2023057
[6]: https://gitlab.isc.org/isc-projects/bind9/-/jobs/2023490

https://gitlab.isc.org/isc-projects/bind9/-/issues/2730
**[ISC-support #18552] Logging category for notify/xfer related messages** (Chuck Stearns, updated 2024-03-27, milestone: Not planned)

### Description
Logging category for notify/xfer related messages
### Request
The notify category does not include some messages that end up in the general category. There are also some messages that might be better placed in xfer-in. For instance, "notify from" and "refused notify from non-master". The intent is to have all messages useful for troubleshooting an aspect of operation in one log. For example, if troubleshooting zone transfer issues, the relevant messages would be in the transfer.log. This segregation also facilitates some noise reduction when using dynamic severity.
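A logging configuration in the spirit of this request might look like the following sketch; the point of the issue is precisely that some notify/transfer messages do not yet reach these categories:

```
logging {
    channel transfer_log {
        file "/var/log/named/transfer.log" versions 5 size 20m;
        print-time yes;
        severity dynamic;
    };
    // Route notify and inbound/outbound transfer messages to one file,
    // away from the general category.
    category notify   { transfer_log; };
    category xfer-in  { transfer_log; };
    category xfer-out { transfer_log; };
};
```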
### Links / references

https://gitlab.isc.org/isc-projects/bind9/-/issues/2340
**Enable logging of rpz re-writes to dnstap** (Peter Davies, updated 2024-03-27, assignee: Evan Hunt, milestone: Not planned)

### Description
Enable logging of rpz re-writes to dnstap.
The ability to send rpz rewrite information that is generated by category rpz to the dnstap output stream.
[RT #17273](https://support.isc.org/Ticket/Display.html?id=17273)

https://gitlab.isc.org/isc-projects/bind9/-/issues/459
**[RT 39964] logging of NXDOMAIN query-responses (response source and type)** (Vicky Risk, updated 2024-03-13, milestone: Not planned)

Edited/compressed from the original in classic bugs-RT
for analyzing queries resulting in NXDOMAIN responses...
Please add the following to normal query log:
a) what kind of answer was given (nxrrset, rrset (how many) or servfail)
b) where did the answer come from (authoritative, from cache, or was it the result of a recursive search)
The actual content of the answer is not needed outside some very special debug-cases and for these cases you have to do a complete network trace anyway.
spawned from #39253: dnstap wish-list: Query log limited by zone/domain & Answer logging
Reference to https://support.isc.org/Ticket/Display.html?id=8385 added
----
* This is response, not query information
<from Ray>
NB: recording these either means two separate log entries (one for query, one for response) or if they're merged that the query log will now be in response order rather than request order.
------
This request is for 'normal' query logging, but many have asked for this via **dnstap** as well. We would love to get this in dnstap if that is possible, realizing there is a standards/schema issue with dnstap.
------
Tagging with 9.13.3 because we would really like to try for this in 9.14.0. This is a fairly long-standing and frequently heard request.

https://gitlab.isc.org/isc-projects/bind9/-/issues/2679
**"task" unit test fails with "atomic_load(&d) <= 3"** (Michal Nowak, updated 2022-03-01, milestone: Not planned)

The `task` unit test failed on `main` in the [`unit:gcc:softhsm2.4` job](https://gitlab.isc.org/isc-projects/bind9/-/jobs/1694507):
```
[==========] Running 14 test(s).
[ RUN ] manytasks
[ OK ] manytasks
[ RUN ] all_events
[ OK ] all_events
[ RUN ] basic
[ OK ] basic
[ RUN ] create_task
[ OK ] create_task
[ RUN ] pause_unpause
[ OK ] pause_unpause
[ RUN ] post_shutdown
[ OK ] post_shutdown
[ RUN ] privilege_drop
atomic_load(&d) <= 3
[ LINE ] --- task_test.c:402: error: Failure!I:task_test:Core dump found: ./core.7161
D:task_test:backtrace from ./core.7161 start
[New LWP 7161]
[New LWP 7441]
[New LWP 7443]
[New LWP 7445]
[New LWP 7447]
[New LWP 7449]
[New LWP 7448]
[New LWP 7451]
[New LWP 7450]
[New LWP 7454]
[New LWP 7453]
[New LWP 7446]
[New LWP 7444]
[New LWP 7452]
[New LWP 7442]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `/builds/isc-projects/bind9/lib/isc/tests/.libs/task_test'.
Program terminated with signal SIGABRT, Aborted.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
50 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
[Current thread is 1 (Thread 0x7f6c48c30ec0 (LWP 7161))]
Thread 15 (Thread 0x7f6c36ffd700 (LWP 7442)):
#0 futex_wait_cancelable (private=0, expected=0, futex_word=0x55767e52e120) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
__ret = 0
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x55767e52e0a0, cond=0x55767e52e0f8) at pthread_cond_wait.c:502
spin = 0
buffer = {__routine = 0x7f6c4b7bad80 <__condvar_cleanup_waiting>, __arg = 0x7f6c36ffcc40, __canceltype = 2122456320, __prev = 0x0}
cbuffer = {wseq = 18, cond = 0x55767e52e0f8, mutex = 0x55767e52e0a0, private = 0}
rt = <optimized out>
err = <optimized out>
g = 0
flags = <optimized out>
g1_start = <optimized out>
signals = <optimized out>
result = 0
wseq = 18
seq = 9
private = 0
maxspin = <optimized out>
err = <optimized out>
result = <optimized out>
wseq = <optimized out>
g = <optimized out>
seq = <optimized out>
flags = <optimized out>
private = <optimized out>
signals = <optimized out>
g1_start = <optimized out>
spin = <optimized out>
buffer = <optimized out>
cbuffer = <optimized out>
rt = <optimized out>
s = <optimized out>
#2 __pthread_cond_wait (cond=cond@entry=0x55767e52e0f8, mutex=mutex@entry=0x55767e52e0a0) at pthread_cond_wait.c:655
No locals.
#3 0x00007f6c4b7f71fd in nm_thread (worker0=0x55767e65a7e0) at netmgr/netmgr.c:652
r = 1
worker = 0x55767e65a7e0
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767f8567f0) at trampoline.c:184
trampoline = 0x55767f8567f0
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140102755931904, 3335179980514494415, 140726091752270, 140726091752271, 140102755931904, 93967433497696, -3418058969693845553, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 14 (Thread 0x7f6c45426700 (LWP 7452)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=55, events=0x7f6c45422bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv.io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65d580) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65d580
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767f5c7f30) at trampoline.c:184
trampoline = 0x55767f5c7f30
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140102995175168, 3335179980514494415, 140726091752270, 140726091752271, 140102995175168, 93967433951312, -3418097971754989617, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 13 (Thread 0x7f6c37fff700 (LWP 7444)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=21, events=0x7f6c37ffbbf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv.io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65b100) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65b100
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767eb44a40) at trampoline.c:184
trampoline = 0x55767eb44a40
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140102772717312, 3335179980514494415, 140726091752270, 140726091752271, 140102772717312, 93967420316976, -3418056769596848177, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 12 (Thread 0x7f6c4842c700 (LWP 7446)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=29, events=0x7f6c48428bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65ba20) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65ba20
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767f7696c0) at trampoline.c:184
trampoline = 0x55767f7696c0
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140103045531392, 3335179980514494415, 140726091752270, 140726091752271, 140103045531392, 93967433951312, -3418126560131053617, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 11 (Thread 0x7f6c44c25700 (LWP 7453)):
#0 futex_wait_cancelable (private=0, expected=0, futex_word=0x55767febaa28) at ../sysdeps/unix/sysv/linux/futex-internal.h:88
__ret = -512
oldtype = 0
err = <optimized out>
oldtype = <optimized out>
err = <optimized out>
__ret = <optimized out>
resultvar = <optimized out>
__arg4 = <optimized out>
__arg3 = <optimized out>
__arg2 = <optimized out>
__arg1 = <optimized out>
_a4 = <optimized out>
_a3 = <optimized out>
_a2 = <optimized out>
_a1 = <optimized out>
#1 __pthread_cond_wait_common (abstime=0x0, mutex=0x55767feba9b0, cond=0x55767febaa00) at pthread_cond_wait.c:502
spin = 0
buffer = {__routine = 0x7f6c4b7bad80 <__condvar_cleanup_waiting>, __arg = 0x7f6c44c24c10, __canceltype = 0, __prev = 0x0}
cbuffer = {wseq = 0, cond = 0x55767febaa00, mutex = 0x55767feba9b0, private = 0}
rt = <optimized out>
err = <optimized out>
g = 0
flags = <optimized out>
g1_start = <optimized out>
signals = <optimized out>
result = 0
wseq = 0
seq = 0
private = 0
maxspin = <optimized out>
err = <optimized out>
result = <optimized out>
wseq = <optimized out>
g = <optimized out>
seq = <optimized out>
flags = <optimized out>
private = <optimized out>
signals = <optimized out>
g1_start = <optimized out>
spin = <optimized out>
buffer = <optimized out>
cbuffer = <optimized out>
rt = <optimized out>
s = <optimized out>
#2 __pthread_cond_wait (cond=cond@entry=0x55767febaa00, mutex=mutex@entry=0x55767feba9b0) at pthread_cond_wait.c:655
No locals.
#3 0x00007f6c4b83a6ad in run (uap=0x55767feba9a0) at timer.c:627
manager = 0x55767feba9a0
now = {seconds = 1620187993, nanoseconds = 389499008}
result = <optimized out>
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767f5c8380) at trampoline.c:184
trampoline = 0x55767f5c8380
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140102986782464, 3335179980514494415, 140726091752350, 140726091752351, 140102986782464, 140726091753456, -3418099071803488305, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 10 (Thread 0x7f6c35ffb700 (LWP 7454)):
#0 0x00007f6c4b6bd7ef in epoll_wait (epfd=65, events=0x55767ff1ce00, maxevents=2048, timeout=timeout@entry=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b80faaf in netthread (uap=0x55767e530d90) at unix/socket.c:3394
thread = 0x55767e530d90
manager = <optimized out>
done = false
cc = <optimized out>
fnname = <optimized out>
strbuf = '\000' <repeats 127 times>
#2 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767f5d26d0) at trampoline.c:184
trampoline = 0x55767f5d26d0
result = <optimized out>
#3 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140102739146496, 3335179980514494415, 140726091751390, 140726091751391, 140102739146496, 0, -3418061165495875633, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#4 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 9 (Thread 0x7f6c46428700 (LWP 7450)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=45, events=0x7f6c46424bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65cc60) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65cc60
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767f197870) at trampoline.c:184
trampoline = 0x55767f197870
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140103011960576, 3335179980514494415, 140726091752270, 140726091752271, 140103011960576, 93967433951312, -3418095771657992241, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 8 (Thread 0x7f6c45c27700 (LWP 7451)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=49, events=0x7f6c45c23bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65d0f0) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65d0f0
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767f3b79d0) at trampoline.c:184
trampoline = 0x55767f3b79d0
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140103003567872, 3335179980514494415, 140726091752270, 140726091752271, 140103003567872, 93967433951312, -3418096871706490929, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 7 (Thread 0x7f6c4742a700 (LWP 7448)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=37, events=0x7f6c47426bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65c340) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65c340
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767ed575b0) at trampoline.c:184
trampoline = 0x55767ed575b0
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140103028745984, 3335179980514494415, 140726091752270, 140726091752271, 140103028745984, 93967433951312, -3418093575855962161, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 6 (Thread 0x7f6c46c29700 (LWP 7449)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=41, events=0x7f6c46c25bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65c7d0) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65c7d0
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767ef77710) at trampoline.c:184
trampoline = 0x55767ef77710
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140103020353280, 3335179980514494415, 140726091752270, 140726091752271, 140103020353280, 93967433951312, -3418094675904460849, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 5 (Thread 0x7f6c47c2b700 (LWP 7447)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=33, events=0x7f6c47c27bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65beb0) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65beb0
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767fe9bf80) at trampoline.c:184
trampoline = 0x55767fe9bf80
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140103037138688, 3335179980514494415, 140726091752270, 140726091752271, 140103037138688, 93967433951312, -3418092475807463473, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 4 (Thread 0x7f6c48c2d700 (LWP 7445)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=25, events=0x7f6c48c29bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65b590) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65b590
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767e8185c0) at trampoline.c:184
trampoline = 0x55767e8185c0
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140103053924096, 3335179980514494415, 140726091752270, 140726091752271, 140103053924096, 93967433951312, -3418125464377522225, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 3 (Thread 0x7f6c367fc700 (LWP 7443)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=17, events=0x7f6c367f8bf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65ac70) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65ac70
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767fa1bc80) at trampoline.c:184
trampoline = 0x55767fa1bc80
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140102747539200, 3335179980514494415, 140726091752270, 140726091752271, 140102747539200, 93967433498800, -3418060065447376945, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 2 (Thread 0x7f6c377fe700 (LWP 7441)):
#0 0x00007f6c4b6bd62e in __GI_epoll_pwait (epfd=3, events=0x7f6c377fabf0, maxevents=1024, timeout=-1, set=0x0) at ../sysdeps/unix/sysv/linux/epoll_pwait.c:42
resultvar = 18446744073709551612
sc_cancel_oldtype = 0
sc_ret = <optimized out>
#1 0x00007f6c4b5aa399 in uv__io_poll () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#2 0x00007f6c4b59bf85 in uv_run () from /usr/lib/x86_64-linux-gnu/libuv.so.1
No symbol table info available.
#3 0x00007f6c4b7f6eef in nm_thread (worker0=0x55767e65a350) at netmgr/netmgr.c:612
r = <optimized out>
worker = 0x55767e65a350
mgr = 0x55767e52e080
#4 0x00007f6c4b83ca2a in isc__trampoline_run (arg=0x55767f859dd0) at trampoline.c:184
trampoline = 0x55767f859dd0
result = <optimized out>
#5 0x00007f6c4b7b4fa3 in start_thread (arg=<optimized out>) at pthread_create.c:486
ret = <optimized out>
pd = <optimized out>
now = <optimized out>
unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140102764324608, 3335179980514494415, 140726091752270, 140726091752271, 140102764324608, 93967431334672, -3418057869645346865, -3418119472830117937}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
not_first_call = <optimized out>
#6 0x00007f6c4b6bd4cf in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
No locals.
Thread 1 (Thread 0x7f6c48c30ec0 (LWP 7161)):
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
set = {__val = {0 <repeats 12 times>, 47, 96, 48, 0}}
pid = <optimized out>
tid = <optimized out>
ret = <optimized out>
#1 0x00007f6c4b5e6535 in __GI_abort () at abort.c:79
save_stage = 1
act = {__sigaction_handler = {sa_handler = 0x7f6c48c31820, sa_sigaction = 0x7f6c48c31820}, sa_mask = {__val = {8, 140103099350080, 10240, 0, 93967431334640, 28, 0, 0, 93967413862528, 140726091753008, 140103097767218, 93967413862528, 140103097767218, 0, 93967385772036, 402}}, sa_flags = 1267105120, sa_restorer = 0x7f6c4b871168}
sigs = {__val = {32, 0 <repeats 15 times>}}
#2 0x00007f6c4b867d47 in ?? () from /usr/lib/x86_64-linux-gnu/libcmocka.so.0
No symbol table info available.
#3 0x00007f6c4b867daa in _fail () from /usr/lib/x86_64-linux-gnu/libcmocka.so.0
No symbol table info available.
#4 0x000055767ca62477 in privilege_drop (state=<optimized out>) at task_test.c:414
result = <optimized out>
task1 = 0x55767f5d4c20
task2 = 0x55767f5d4d30
event = 0x0
a = 1
b = 2
c = 3
d = 5
e = 4
i = <optimized out>
#5 0x00007f6c4b86a0d9 in ?? () from /usr/lib/x86_64-linux-gnu/libcmocka.so.0
No symbol table info available.
#6 0x00007f6c4b86aa49 in _cmocka_run_group_tests () from /usr/lib/x86_64-linux-gnu/libcmocka.so.0
No symbol table info available.
#7 0x000055767ca6367b in main (argc=1, argv=0x7ffd58b5b5e8) at task_test.c:1588
tests = {{name = 0x55767ca6420b "manytasks", test_func = 0x55767ca63184 <manytasks>, setup_func = 0x0, teardown_func = 0x0, initial_state = 0x0}, {name = 0x55767ca64215 "all_events", test_func = 0x55767ca62f49 <all_events>, setup_func = 0x55767ca5fd92 <_setup>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca64220 "basic", test_func = 0x55767ca62aa7 <basic>, setup_func = 0x55767ca5fcda <_setup2>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca64226 "create_task", test_func = 0x55767ca62526 <create_task>, setup_func = 0x55767ca5fd92 <_setup>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca64232 "pause_unpause", test_func = 0x55767ca62729 <pause_unpause>, setup_func = 0x55767ca5fd92 <_setup>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca64240 "post_shutdown", test_func = 0x55767ca6031f <post_shutdown>, setup_func = 0x55767ca5fcda <_setup2>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca6424e "privilege_drop", test_func = 0x55767ca61ed0 <privilege_drop>, setup_func = 0x55767ca5fd92 <_setup>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca6425d "privileged_events", test_func = 0x55767ca6183a <privileged_events>, setup_func = 0x55767ca5fd92 <_setup>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca64288 "purge", test_func = 0x55767ca61246 <purge>, setup_func = 0x55767ca5fcda <_setup2>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca6426f "purgeevent", test_func = 0x55767ca6182a <purgeevent>, setup_func = 0x55767ca5fcda <_setup2>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca6427a "purgeevent_notpurge", test_func = 0x55767ca6181a <purgeevent_notpurge>, setup_func = 0x55767ca5fd92 <_setup>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 
0x0}, {name = 0x55767ca6428e "purgerange", test_func = 0x55767ca6103c <purgerange>, setup_func = 0x55767ca5fd92 <_setup>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca64245 "shutdown", test_func = 0x55767ca5fe4a <shutdown>, setup_func = 0x55767ca5fc22 <_setup4>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}, {name = 0x55767ca64299 "task_exclusive", test_func = 0x55767ca5f5c5 <task_exclusive>, setup_func = 0x55767ca5fc22 <_setup4>, teardown_func = 0x55767ca62f2d <_teardown>, initial_state = 0x0}}
selected = {{name = 0x0, test_func = 0x0, setup_func = 0x0, teardown_func = 0x0, initial_state = 0x0} <repeats 14 times>}
i = <optimized out>
c = <optimized out>
D:task_test:backtrace from ./core.7161 end
FAIL task_test (exit status: 134)
```
core file: [core.7161.gz](/uploads/92ace901038632ab8bb8890e73c36a5d/core.7161.gz)