BIND issues: https://gitlab.isc.org/isc-projects/bind9/-/issues

Issue #1258: CentOS 8 COPR builds (Ghost User, updated 2019-11-05)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1258

Now that CentOS 8 is available and supported by COPR, please enable builds on CentOS 8.

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)
Assignee: Michał Kępień

Issue #1244: CI not running IPv6 test portions (Michal Nowak, updated 2019-11-05)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1244

### Summary
Our CI tests report "IPv6 unavailable; skipping" 16 times, e.g.:
```
I:digdelv:IPv6 unavailable; skipping
I:digdelv:checking dig @IPv4addr -6 +mapped A a.example (26)
I:digdelv:IPv6 or IPv4-to-IPv6 mapping unavailable; skipping
I:digdelv:checking dig +tcp @IPv4addr -6 +nomapped A a.example (27)
I:digdelv:IPv6 unavailable; skipping
I:digdelv:checking dig +notcp @IPv4addr -6 +nomapped A a.example (28)
I:digdelv:IPv6 unavailable; skipping
I:digdelv:checking dig +subnet (29)
```
See, e.g. https://gitlab.isc.org/isc-projects/bind9/-/jobs/333501.
I have seen this in CI jobs since the beginning of September, so this is not a new thing.
The likely reason is that our CI hosts do not have IPv6 networking set up properly (e.g. they use the documentation prefix, and probably other things are wrong or missing there):
```
root@gitlab-ci-07:~# cat /etc/docker/daemon.json
{ "ipv6": true, "fixed-cidr-v6": "2001:db8:1::/64" }
```
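For comparison, a host where the IPv6 test portions could run would need a usable prefix here rather than the `2001:db8::/64` documentation prefix; a minimal sketch (the ULA prefix below is made up for illustration, and host-side routing may also need fixing):

```json
{
    "ipv6": true,
    "fixed-cidr-v6": "fd00:d0c:1::/64"
}
```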
### BIND version used
`HEAD`
### Steps to reproduce
Just open any System job in CI and look for "IPv6 unavailable; skipping" in the log.

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)

Issue #1206: Customer Feature Request: Add "high-water" measurement for tcp-clients (Michael McNally, updated 2019-11-06)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1206

Since we changed tcp-clients limit enforcement in response to CVE-2018-5743, several customers have contacted us asking how to set limits appropriately.
Yoshitaka Aharen of M-root has offered what we think would be a useful suggestion which would help him and other operators who want to scale limits appropriately. He asks if named can keep track of a "high-water" limit for the number of simultaneous TCP clients, correctly pointing out that this is likely to be of more help to customers than random periodic polling via 'rndc status' which, if not performed frequently, could easily miss spikes in usage.
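A high-water counter of this kind is cheap to maintain; a minimal sketch (illustrative only — `tcp_client_attach()`/`tcp_client_detach()` are hypothetical names, not named's API), recording the maximum with a compare-and-swap loop:

```c
/* Hypothetical sketch of a "high-water" counter for simultaneous TCP
 * clients; the function names are illustrative, not named's API. */
#include <assert.h>
#include <stdatomic.h>

static atomic_uint tcp_active;
static atomic_uint tcp_highwater;

static void
tcp_client_attach(void) {
	unsigned int cur = atomic_fetch_add(&tcp_active, 1) + 1;
	unsigned int hw = atomic_load(&tcp_highwater);
	/* Record the new maximum; a failed CAS reloads hw, so retry. */
	while (cur > hw &&
	       !atomic_compare_exchange_weak(&tcp_highwater, &hw, cur)) {
		/* retry until either recorded or no longer the maximum */
	}
}

static void
tcp_client_detach(void) {
	atomic_fetch_sub(&tcp_active, 1);
}
```

Reporting `tcp_highwater` from 'rndc status' would then capture spikes regardless of polling frequency.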
Related to Support [#15134](https://support.isc.org/Ticket/Display.html?id=15134) and [#15065](https://support.isc.org/Ticket/Display.html?id=15065)

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)

Issue #1143: A minor documentation issue & consideration of parsing inconsistencies in IPv4s in address match lists and in a controls/inet statement (Ghost User, updated 2019-11-05)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1143

### Description
This was discovered during a course. It was confusing for students, but nothing is really wrong except perhaps a minor issue in the ARM.
1. Quick overview:
* The first of these statements is valid, the last two are not:
> controls { inet 127.0.0.1 allow { 127.1; } keys { "XXkey"; }; };
>
> controls { inet 127.1 allow { 127.0.0.1; } keys { "XXkey"; }; };
>
> controls { inet 127.1 allow { 127.1; } keys { "XXkey"; }; };
* Without other information, comparing the statements above, it is difficult to understand why 127.1 is permitted in one location, but not the other.
* Someone used to using 127.1 on the command-line is likely to misinterpret the meaning of 127.1 in an Address Match List.
1. Address Match Lists accept IPv4 addresses with fewer than four octets and without a netmask.
* Example: 177.44.8 is parsed as 177.44.8.0/32
* Example: 127.1 is parsed as 127.1.0.0/32
* This is reasonable. However, it is worth noting that it is very different from the parsing on the command line using commands such as dig and ping:
* ping 177.44.8 is parsed to ping 177.44.0.8 (not to 177.44.8.0)
* dig @127.1 is parsed to dig @127.0.0.1 (not to @127.1.0.0)
* Particularly because dig and named.conf are both part of BIND, the inconsistency is understandably confusing at first glance.
* For 127.1, I presume most people using it in an Address Match List would incorrectly think it parses to 127.0.0.1.
* Note: As I read it, there is a documentation issue on page 51 of the 9.14.3 ARM in the definition of ipv4_addr:
* ipv4_addr: An IPv4 address with exactly four elements in dotted_decimal notation.
* However, Address Match Lists accept IPv4s with fewer than four elements.
* Possible correction:
* ipv4_addr: An IPv4 address in dotted_decimal notation. If there are fewer than four elements, the remainder are taken to be zero.
* AFAICT, the definition of ip_prefix does not come into play because there is no slash in the address.
1. The inet statement in a controls statement does not accept an IPv4 with fewer than four octets.
* If it did, the parsing would need to match what one gets on the command-line, and differ from an Address Match List.
* 127.1 would need to parse to 127.0.0.1.
* "Quick overview" above has examples.
### Request
1. Change the definition of ipv4_addr in the ARM. Possibly update the definitions of ip_prefix and dotted_decimal at the same time.
1. Maybe allow IPv4s in inet statements with fewer than four octets. For example:
* This: controls { inet 127.1 ...
* Would become: controls { inet 127.0.0.1 ...
* I see arguments for and against.
1. In an Address Match List, for the address 127.1, named-checkconf could produce a warning message about the parsing, perhaps:
* named.conf:<line#>: warning: In the Address Match List, 127.1 is interpreted as 127.1.0.0/32
* Perhaps any address beginning with 127 and having fewer than four octets should get this treatment.
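The contrast described above can be sketched side by side; this is an illustrative demo only (`aml_parse()` is a hypothetical helper mimicking the documented Address Match List rule, not BIND's actual parser):

```c
/* Illustrative demo: inet_aton() (what ping/dig use) fills the *last*
 * part into the remaining low-order bytes, while an Address Match List
 * pads missing octets on the right with zeros. aml_parse() is a
 * hypothetical helper, not BIND's parser. */
#include <arpa/inet.h>
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Command-line style: "127.1" -> 127.0.0.1, "177.44.8" -> 177.44.0.8 */
static void
classic_parse(const char *s, char *out, size_t outlen) {
	struct in_addr a;
	if (inet_aton(s, &a) != 0)
		snprintf(out, outlen, "%s", inet_ntoa(a));
}

/* AML style: "127.1" -> 127.1.0.0, "177.44.8" -> 177.44.8.0 */
static void
aml_parse(const char *s, char *out, size_t outlen) {
	unsigned int o[4] = { 0, 0, 0, 0 };
	sscanf(s, "%u.%u.%u.%u", &o[0], &o[1], &o[2], &o[3]);
	snprintf(out, outlen, "%u.%u.%u.%u", o[0], o[1], o[2], o[3]);
}
```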
1. Forgive me for leading you down this rabbit hole of the unimportant.

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)

Issue #1059: Fix TCP failure handling (Michał Kępień, updated 2019-11-05)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1059

There are two issues with TCP failure handling in resolver code which are somewhat intertwined yet still distinct:
- for servers which respond to EDNS queries but never send responses larger than 512 bytes and are unavailable over TCP, `named` may go into a pointless query loop which is only interrupted after the fetch context restart limit is hit; this cannot really be exploited, but is harmful to broken servers,
- for servers which respond to EDNS queries but never send responses larger than 512 bytes and are unavailable over TCP, `named` may go into a pointless query loop which is only interrupted after the fetch context restart limit is hit; this cannot really be exploited, but is harmful to broken servers,
- TCP connection failures affect EDNS timeout statistics while EDNS mechanisms only apply to DNS over UDP.
Both of these issues are exposed by the `legacy` system test, but they went under the radar so far because they do not cause test failures; I only noticed something was up because I was running that test with Wireshark in the background.

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)
Assignee: Michał Kępień

Issue #876: Documentation feedback. (Mark Andrews, updated 2019-11-05)
https://gitlab.isc.org/isc-projects/bind9/-/issues/876

```automatic-interface-scan```
How does this interact with ```interface-interval``` and why does it require routing sockets?
```lock-file```
I was running 9.11.5 with no ```lock-file``` directive and no lock file on disk. When I added a ```lock-file``` directive specifying /var/run/named/named.lock that file was created. I don't think the ARM is entirely correct about the default. Also, your implementation of the lock file could be better. The file BIND created was just an empty file. If BIND dies and leaves that lock file in place it will prevent BIND from being restarted. If you write BIND's PID into the file, then if the file exists you can send a signal 0 to that PID to verify that a process with that number is running. If the kill fails you can allow BIND to start even if the lock file exists. This works unless another process with the same PID just happens to have been started in the meantime; that's unlikely, but this is still better than not testing at all.
```require-server-cookie```
The ARM doesn't indicate the default setting (or why one might pick yes versus no, for that matter). It says "Require a valid server cookie before sending a full response to a UDP request from a cookie aware client. BADCOOKIE is sent if there is a bad or no existent server cookie." I think this is trying to tell me that if I turn this on I get a BADCOOKIE error response and if I leave it off I get a normal response which is possibly truncated by the nocookie-udp-size parameter. It's a shame that's not what it says.
bin/named/config.c seems to tell me that the default is no. Why would I want to set it to yes? "yes" seems like it would be a really bad setting for servers behind a load balancer unless they were all configured with the same secret....
```resolver-nonbackoff-tries```
This option appears in the grammar but there is no description. The code tells me the default is 3.
```resolver-retry-interval```
Ditto. The code tells me the default is 800.
```send-cookie```
The ARM doesn't indicate the default setting. Again, the code seems to tell me the default is true.
```stale-answer-enable```
This option appears in the grammar but there is no description. The code tells me the default is false.
edit: more from the same source
Under ```dnskey-sig-validity``` the ARM says "If set to a non-zero value, this overrides the value set by ```sig-validity-interval```. The default is zero, meaning ```sig-validity-interval``` is used." However, if I specify "```dnskey-sig-validity 0;```" it says "'0' is out of range (1..3660)".

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)

Issue #664: fetches-per-server quota is lower-bounded to 1 instead of to 2% of quota (Cathy Almond, updated 2019-11-18)
https://gitlab.isc.org/isc-projects/bind9/-/issues/664

On a server with "fetches-per-server 4000;" I was surprised to see a cache dump with the ADB values for a server showing me a quota set to 1.
> ; problem-server.example.com [v4 TTL 2658] [v4 not_found] [v6 unexpected]
> ; 192.0.2.25 [srtt 948570] [flags 00004000] [ttl -342230] [atr 0.62] [quota 1]
Although we didn't document the lower bound in the ARM (this also needs to be addressed), the KB article ([https://kb.isc.org/docs/aa-01304](https://kb.isc.org/docs/aa-01304)) explaining how fetchlimits work, based on information from Engineering, describes the adjustment algorithm thus:
> The fetches-per-server option sets a hard upper limit to the number of outstanding fetches allowed for a single server. The lower limit is 2% of fetches-per-server, but never below 1. It also allows you to select what to do with the queries that are being limited - either drop them, or send back a SERVFAIL response.
Clearly however, this is not what is in the code, as seen in adb.c maybe_adjust_quota(), where the last thing we do is:
```
/* Ensure we don't drop to zero */
if (addr->entry->quota == 0)
addr->entry->quota = 1;
}
```
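For contrast, the lower bound the KB article describes would look something like this (illustrative only, not actual BIND code):

```c
/* Illustrative only: the lower bound the KB article describes
 * (2% of fetches-per-server, but never below 1). Not BIND code. */
#include <assert.h>

static unsigned int
quota_floor(unsigned int fetches_per_server) {
	unsigned int min = fetches_per_server / 50; /* 2% */
	return (min > 0 ? min : 1);
}
```

With "fetches-per-server 4000;" this gives a floor of 80, the value the documentation led me to expect.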
The background to this, although very much a corner case, is a mis-configured server that responds to A queries but sends back nothing (so the fetches time out) for AAAA queries for the same name. This is interacting particularly badly with fetches-per-server because the 'good' queries all get answers and are cached, whereas the 'bad' ones all time out, SERVFAIL to the client and are not cached.
Turning on servfail cache would mitigate that to some extent.
But nevertheless, the quota going all the way down to 1 (instead of to 80) is making matters much worse.
Please fix, because although this corner case is not our problem as such (the mis-behaving server is being fixed), it is bad that the quota is going down so low that it's very hard to get enough queries to be processed in order to recalculate the atr often enough to be reasonably representative of the query rate to this server.
There was also no evidence of the low quota in the logging - presumably because it had been at rock bottom for longer than the logfile sample I looked at, which covered several hours. It therefore needed a cache dump to identify the problem.
(P.S. I'm assuming it's inefficient to calculate "2% of quota" every time we pass this way, so the bottom limit probably wants calculating initially to use here, and might well be something we want to add to adb on a per-server basis too, in anticipation of future work on fetches-per-server to allow for server-specific quota overrides)
Reference: https://support.isc.org/Ticket/Display.html?id=13720

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)

Issue #148: Add BSDs to GitLab CI (Ondřej Surý, updated 2019-11-05)
https://gitlab.isc.org/isc-projects/bind9/-/issues/148

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)
Assignee: Michał Kępień

Issue #10: Use and require atomic primitives support (Ondřej Surý, updated 2024-01-03)
https://gitlab.isc.org/isc-projects/bind9/-/issues/10

Use the atomic primitives to replace the custom atomics code.

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)
Assignee: Witold Krecicki

Issue #5: Revise coding style and documentation requirements (Ondřej Surý, updated 2024-01-03)
https://gitlab.isc.org/isc-projects/bind9/-/issues/5

This is an unsorted list:
* [ ] opening curly braces on new line when the outside construct is multiline, e.g.
```
for (foo;
bar;
baz) {
```
vs current
```
for (foo;
bar;
baz)
{
```
* [ ] Using parentheses to make operator precedence explicit in conditions, e.g.
```
((foo == TRUE) || (bar == FALSE))
```
vs current
```
(foo == TRUE || bar == FALSE)
```
* [ ] Explicit `NULL` or `FALSE` comparison
```
(foo == FALSE && bar == NULL)
```
vs current
```
(!foo && !bar)
```

Milestone: November 2019 (9.11.13, 9.14.8, 9.15.6)
Assignee: Ondřej Surý

Issue #1471: ThreadSanitizer: lock-order-inversion (potential deadlock) - fcount_incr vs. dns_resolver_createfetch (Ondřej Surý, updated 2019-12-12)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1471

```
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=21211)
Cycle in lock order graph: M1110 (0x7b7400000008) => M1728 (0x7b4c000001d0) => M1110
Mutex M1728 acquired here while holding mutex M1110 in main thread:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 fcount_incr /home/ondrej/Projects/bind9/lib/dns/resolver.c:1525 (libdns.so.1505+0x185598)
#2 fctx_create /home/ondrej/Projects/bind9/lib/dns/resolver.c:4925 (libdns.so.1505+0x190783)
#3 dns_resolver_createfetch /home/ondrej/Projects/bind9/lib/dns/resolver.c:10581 (libdns.so.1505+0x1976b1)
#4 start_fetch /home/ondrej/Projects/bind9/lib/dns/client.c:777 (libdns.so.1505+0x27d26a)
#5 client_resfind /home/ondrej/Projects/bind9/lib/dns/client.c:862 (libdns.so.1505+0x27d26a)
#6 dns_client_startresolve /home/ondrej/Projects/bind9/lib/dns/client.c:1388 (libdns.so.1505+0x281c0c)
#7 dns_client_resolve /home/ondrej/Projects/bind9/lib/dns/client.c:1249 (libdns.so.1505+0x283501)
#8 main /home/ondrej/Projects/bind9/bin/delv/delv.c:1788 (delv+0x5d92)
Mutex M1110 previously acquired by the same thread here:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 dns_resolver_createfetch /home/ondrej/Projects/bind9/lib/dns/resolver.c:10538 (libdns.so.1505+0x196da5)
#2 start_fetch /home/ondrej/Projects/bind9/lib/dns/client.c:777 (libdns.so.1505+0x27d26a)
#3 client_resfind /home/ondrej/Projects/bind9/lib/dns/client.c:862 (libdns.so.1505+0x27d26a)
#4 dns_client_startresolve /home/ondrej/Projects/bind9/lib/dns/client.c:1388 (libdns.so.1505+0x281c0c)
#5 dns_client_resolve /home/ondrej/Projects/bind9/lib/dns/client.c:1249 (libdns.so.1505+0x283501)
#6 main /home/ondrej/Projects/bind9/bin/delv/delv.c:1788 (delv+0x5d92)
Mutex M1110 acquired here while holding mutex M1728 in main thread:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 dns_resolver_shutdown /home/ondrej/Projects/bind9/lib/dns/resolver.c:10305 (libdns.so.1505+0x196844)
#2 view_flushanddetach /home/ondrej/Projects/bind9/lib/dns/view.c:582 (libdns.so.1505+0x1fc44d)
#3 dns_view_detach /home/ondrej/Projects/bind9/lib/dns/view.c:635 (libdns.so.1505+0x1fc53e)
#4 destroyclient /home/ondrej/Projects/bind9/lib/dns/client.c:611 (libdns.so.1505+0x2810ec)
#5 dns_client_destroy /home/ondrej/Projects/bind9/lib/dns/client.c:652 (libdns.so.1505+0x2810ec)
#6 main /home/ondrej/Projects/bind9/bin/delv/delv.c:1827 (delv+0x3bf2)
Mutex M1728 previously acquired by the same thread here:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 dns_resolver_shutdown /home/ondrej/Projects/bind9/lib/dns/resolver.c:10300 (libdns.so.1505+0x196777)
#2 view_flushanddetach /home/ondrej/Projects/bind9/lib/dns/view.c:582 (libdns.so.1505+0x1fc44d)
#3 dns_view_detach /home/ondrej/Projects/bind9/lib/dns/view.c:635 (libdns.so.1505+0x1fc53e)
#4 destroyclient /home/ondrej/Projects/bind9/lib/dns/client.c:611 (libdns.so.1505+0x2810ec)
#5 dns_client_destroy /home/ondrej/Projects/bind9/lib/dns/client.c:652 (libdns.so.1505+0x2810ec)
#6 main /home/ondrej/Projects/bind9/bin/delv/delv.c:1827 (delv+0x3bf2)
SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) (/usr/lib/x86_64-linux-gnu/libtsan.so.0+0x3d62b) in pthread_mutex_lock
```

Milestone: December 2019 (9.11.14, 9.14.9, 9.15.7)

Issue #1469: ThreadSanitizer: lock-order-inversion (potential deadlock) - isc__nm_enqueue_ievent vs isc_nm_listentcp (Ondřej Surý, updated 2019-12-12)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1469

```
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=18511)
Cycle in lock order graph: M596862637233476224 (0x000000000000) => M1105 (0x7fbaf719dcd0) => M596862637233476224
Mutex M1105 acquired here while holding mutex M596862637233476224 in thread T9:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 isc__nm_enqueue_ievent /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:601 (libisc.so.1504+0x3f20c)
#2 isc_nm_listentcp /home/ondrej/Projects/bind9/lib/isc/netmgr/tcp.c:190 (libisc.so.1504+0x452ab)
#3 isc_nm_listentcpdns /home/ondrej/Projects/bind9/lib/isc/netmgr/tcpdns.c:299 (libisc.so.1504+0x4899f)
#4 ns_interface_listentcp /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:463 (libns.so.1502+0x1b0d3)
#5 ns_interface_setup /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:516 (libns.so.1502+0x1b0d3)
#6 do_scan /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:1070 (libns.so.1502+0x1c023)
#7 ns_interfacemgr_scan0 /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:1130 (libns.so.1502+0x1c8fd)
#8 ns_interfacemgr_scan /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:1177 (libns.so.1502+0x1ca72)
#9 load_configuration server.c:8712 (named+0x53e83)
#10 run_server server.c:9654 (named+0x59a47)
#11 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x56f36)
#12 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x56f36)
#13 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M596862637233476224 previously acquired by the same thread here:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 isc_nm_listentcp /home/ondrej/Projects/bind9/lib/isc/netmgr/tcp.c:189 (libisc.so.1504+0x4526b)
#2 isc_nm_listentcpdns /home/ondrej/Projects/bind9/lib/isc/netmgr/tcpdns.c:299 (libisc.so.1504+0x4899f)
#3 ns_interface_listentcp /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:463 (libns.so.1502+0x1b0d3)
#4 ns_interface_setup /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:516 (libns.so.1502+0x1b0d3)
#5 do_scan /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:1070 (libns.so.1502+0x1c023)
#6 ns_interfacemgr_scan0 /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:1130 (libns.so.1502+0x1c8fd)
#7 ns_interfacemgr_scan /home/ondrej/Projects/bind9/lib/ns/interfacemgr.c:1177 (libns.so.1502+0x1ca72)
#8 load_configuration server.c:8712 (named+0x53e83)
#9 run_server server.c:9654 (named+0x59a47)
#10 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x56f36)
#11 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x56f36)
#12 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M596862637233476224 acquired here while holding mutex M1105 in thread T3:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 isc__nm_async_tcplisten /home/ondrej/Projects/bind9/lib/isc/netmgr/tcp.c:338 (libisc.so.1504+0x44e9b)
#2 process_queue /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:538 (libisc.so.1504+0x41fd5)
#3 nm_thread /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:438 (libisc.so.1504+0x422b1)
#4 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M1105 previously acquired by the same thread here:
#0 pthread_cond_wait <null> (libtsan.so.0+0x48e9e)
#1 nm_thread /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:435 (libisc.so.1504+0x4228e)
#2 <null> <null> (libtsan.so.0+0x29b3d)
Thread T9 'isc-worker0000' (tid=18534, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7bc54)
#2 isc_taskmgr_create /home/ondrej/Projects/bind9/lib/isc/task.c:1410 (libisc.so.1504+0x59cf3)
#3 create_managers main.c:902 (named+0x1aeec)
#4 setup main.c:1235 (named+0x1aeec)
#5 main main.c:1515 (named+0x1aeec)
Thread T3 'isc-net-0002' (tid=18528, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7bc54)
#2 isc_nm_start /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:149 (libisc.so.1504+0x3ec4a)
#3 create_managers main.c:895 (named+0x1ae90)
#4 setup main.c:1235 (named+0x1ae90)
#5 main main.c:1515 (named+0x1ae90)
SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) (/usr/lib/x86_64-linux-gnu/libtsan.so.0+0x3d62b) in pthread_mutex_lock
```

Milestone: December 2019 (9.11.14, 9.14.9, 9.15.7)
Assignee: Witold Krecicki

Issue #1453: The zero system test timeouts intermittently (Ondřej Surý, updated 2019-12-09)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1453

See the following jobs for evidence:
* https://gitlab.isc.org/isc-projects/bind9/-/jobs/452936
* https://gitlab.isc.org/isc-projects/bind9/-/jobs/452931
* https://gitlab.isc.org/isc-projects/bind9/-/jobs/452945

Milestone: December 2019 (9.11.14, 9.14.9, 9.15.7)

Issue #1442: ThreadSanitizer: lock-order-inversion (potential deadlock) (/usr/lib/x86_64-linux-gnu/libtsan.so.0+0x2d229) in pthread_rwlock_wrlock (Ondřej Surý, updated 2019-12-10)
https://gitlab.isc.org/isc-projects/bind9/-/issues/1442

* Binary: `named`
* Commit: 289f143d8a2a248333ace4d1d43ab388c7405a73
* Tests: addzone, autosign, dnssec, ...
```
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=3749)
Cycle in lock order graph: M33356 (0x7b54000218a0) => M33330 (0x7b5000020490) => M33331 (0x7b50000204d0) => M33356
Mutex M33330 acquired here while holding mutex M33356 in thread T13:
#0 pthread_rwlock_wrlock <null> (libtsan.so.0+0x2d229)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:54 (libisc.so.1504+0x5078f)
#2 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:47 (libisc.so.1504+0x5078f)
#3 add_changed /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:1359 (libdns.so.1505+0x1020ab)
#4 subtractrdataset /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:6680 (libdns.so.1505+0x10d7d0)
#5 subtractrdataset /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:6622 (libdns.so.1505+0x10d7d0)
#6 dns_db_subtractrdataset /home/ondrej/Projects/bind9/lib/dns/db.c:768 (libdns.so.1505+0x67909)
#7 diff_apply /home/ondrej/Projects/bind9/lib/dns/diff.c:371 (libdns.so.1505+0x6ac8e)
#8 dns_diff_apply /home/ondrej/Projects/bind9/lib/dns/diff.c:452 (libdns.so.1505+0x6c0a1)
#9 do_one_tuple /home/ondrej/Projects/bind9/lib/dns/zone.c:4089 (libdns.so.1505+0x20379e)
#10 update_one_rr /home/ondrej/Projects/bind9/lib/dns/zone.c:4118 (libdns.so.1505+0x203acf)
#11 del_sigs /home/ondrej/Projects/bind9/lib/dns/zone.c:6371 (libdns.so.1505+0x22a21e)
#12 zone_resigninc /home/ondrej/Projects/bind9/lib/dns/zone.c:6864 (libdns.so.1505+0x2412dd)
#13 zone_maintenance /home/ondrej/Projects/bind9/lib/dns/zone.c:10815 (libdns.so.1505+0x24a063)
#14 zone_timer /home/ondrej/Projects/bind9/lib/dns/zone.c:13650 (libdns.so.1505+0x24a063)
#15 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#16 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#17 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M33356 previously acquired by the same thread here:
#0 pthread_rwlock_wrlock <null> (libtsan.so.0+0x2d229)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:54 (libisc.so.1504+0x5078f)
#2 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:47 (libisc.so.1504+0x5078f)
#3 subtractrdataset /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:6677 (libdns.so.1505+0x10d7b5)
#4 subtractrdataset /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:6622 (libdns.so.1505+0x10d7b5)
#5 dns_db_subtractrdataset /home/ondrej/Projects/bind9/lib/dns/db.c:768 (libdns.so.1505+0x67909)
#6 diff_apply /home/ondrej/Projects/bind9/lib/dns/diff.c:371 (libdns.so.1505+0x6ac8e)
#7 dns_diff_apply /home/ondrej/Projects/bind9/lib/dns/diff.c:452 (libdns.so.1505+0x6c0a1)
#8 do_one_tuple /home/ondrej/Projects/bind9/lib/dns/zone.c:4089 (libdns.so.1505+0x20379e)
#9 update_one_rr /home/ondrej/Projects/bind9/lib/dns/zone.c:4118 (libdns.so.1505+0x203acf)
#10 del_sigs /home/ondrej/Projects/bind9/lib/dns/zone.c:6371 (libdns.so.1505+0x22a21e)
#11 zone_resigninc /home/ondrej/Projects/bind9/lib/dns/zone.c:6864 (libdns.so.1505+0x2412dd)
#12 zone_maintenance /home/ondrej/Projects/bind9/lib/dns/zone.c:10815 (libdns.so.1505+0x24a063)
#13 zone_timer /home/ondrej/Projects/bind9/lib/dns/zone.c:13650 (libdns.so.1505+0x24a063)
#14 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#15 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#16 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M33331 acquired here while holding mutex M33330 in thread T14:
#0 pthread_rwlock_rdlock <null> (libtsan.so.0+0x2cf99)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:50 (libisc.so.1504+0x507f7)
#2 setnsec3parameters /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:2273 (libdns.so.1505+0x103102)
#3 iszonesecure /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:2246 (libdns.so.1505+0x103102)
#4 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7317 (libdns.so.1505+0x108947)
#5 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7294 (libdns.so.1505+0x108947)
#6 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:301 (libdns.so.1505+0x65c10)
#7 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:285 (libdns.so.1505+0x65c10)
#8 zone_startload /home/ondrej/Projects/bind9/lib/dns/zone.c:2554 (libdns.so.1505+0x2523d3)
#9 zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2138 (libdns.so.1505+0x2523d3)
#10 zone_asyncload /home/ondrej/Projects/bind9/lib/dns/zone.c:2193 (libdns.so.1505+0x252655)
#11 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#12 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#13 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M33330 previously acquired by the same thread here:
#0 pthread_rwlock_wrlock <null> (libtsan.so.0+0x2d229)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:54 (libisc.so.1504+0x5078f)
#2 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:47 (libisc.so.1504+0x5078f)
#3 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7304 (libdns.so.1505+0x1088ae)
#4 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7294 (libdns.so.1505+0x1088ae)
#5 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:301 (libdns.so.1505+0x65c10)
#6 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:285 (libdns.so.1505+0x65c10)
#7 zone_startload /home/ondrej/Projects/bind9/lib/dns/zone.c:2554 (libdns.so.1505+0x2523d3)
#8 zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2138 (libdns.so.1505+0x2523d3)
#9 zone_asyncload /home/ondrej/Projects/bind9/lib/dns/zone.c:2193 (libdns.so.1505+0x252655)
#10 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#11 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#12 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M33356 acquired here while holding mutex M33331 in thread T14:
#0 pthread_rwlock_rdlock <null> (libtsan.so.0+0x2cf99)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:50 (libisc.so.1504+0x507f7)
#2 setnsec3parameters /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:2276 (libdns.so.1505+0x10318d)
#3 iszonesecure /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:2246 (libdns.so.1505+0x10318d)
#4 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7317 (libdns.so.1505+0x108947)
#5 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7294 (libdns.so.1505+0x108947)
#6 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:301 (libdns.so.1505+0x65c10)
#7 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:285 (libdns.so.1505+0x65c10)
#8 zone_startload /home/ondrej/Projects/bind9/lib/dns/zone.c:2554 (libdns.so.1505+0x2523d3)
#9 zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2138 (libdns.so.1505+0x2523d3)
#10 zone_asyncload /home/ondrej/Projects/bind9/lib/dns/zone.c:2193 (libdns.so.1505+0x252655)
#11 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#12 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#13 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M33331 previously acquired by the same thread here:
#0 pthread_rwlock_rdlock <null> (libtsan.so.0+0x2cf99)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:50 (libisc.so.1504+0x507f7)
#2 setnsec3parameters /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:2273 (libdns.so.1505+0x103102)
#3 iszonesecure /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:2246 (libdns.so.1505+0x103102)
#4 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7317 (libdns.so.1505+0x108947)
#5 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7294 (libdns.so.1505+0x108947)
#6 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:301 (libdns.so.1505+0x65c10)
#7 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:285 (libdns.so.1505+0x65c10)
#8 zone_startload /home/ondrej/Projects/bind9/lib/dns/zone.c:2554 (libdns.so.1505+0x2523d3)
#9 zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2138 (libdns.so.1505+0x2523d3)
#10 zone_asyncload /home/ondrej/Projects/bind9/lib/dns/zone.c:2193 (libdns.so.1505+0x252655)
#11 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#12 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#13 <null> <null> (libtsan.so.0+0x29b3d)
Thread T13 'isc-worker0004' (tid=3779, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7a324)
#2 isc_taskmgr_create /home/ondrej/Projects/bind9/lib/isc/task.c:1410 (libisc.so.1504+0x583c3)
#3 create_managers main.c:902 (named+0x1af1c)
#4 setup main.c:1235 (named+0x1af1c)
#5 main main.c:1513 (named+0x1af1c)
Thread T14 'isc-worker0005' (tid=3780, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7a324)
#2 isc_taskmgr_create /home/ondrej/Projects/bind9/lib/isc/task.c:1410 (libisc.so.1504+0x583c3)
#3 create_managers main.c:902 (named+0x1af1c)
#4 setup main.c:1235 (named+0x1af1c)
#5 main main.c:1513 (named+0x1af1c)
SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) (/usr/lib/x86_64-linux-gnu/libtsan.so.0+0x2d229) in pthread_rwlock_wrlock
```
December 2019 (9.11.14, 9.14.9, 9.15.7)

https://gitlab.isc.org/isc-projects/bind9/-/issues/1441
ThreadSanitizer: lock-order-inversion (potential deadlock) (/usr/lib/x86_64-linux-gnu/libtsan.so.0+0x2cf99) in pthread_rwlock_rdlock
2019-12-10T20:23:43Z, Ondřej Surý

* Binary: `named`
* Commit: 289f143d8a2a248333ace4d1d43ab388c7405a73
* Tests: acl, additional, addzone, auth, autosign, builtin, cacheclean, case, catz, cds, chain, dnssec, rpzrecurse, serve-stale, ...
```
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=21666)
Cycle in lock order graph: M751673771345120912 (0x000000000000) => M755051488245546000 (0x000000000000) => M751673771345120912
Mutex M755051488245546000 acquired here while holding mutex M751673771345120912 in thread T9:
#0 pthread_rwlock_rdlock <null> (libtsan.so.0+0x2cf99)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:50 (libisc.so.1504+0x507f7)
#2 zone_findrdataset /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:5445 (libdns.so.1505+0x122ed6)
#3 dns_db_findrdataset /home/ondrej/Projects/bind9/lib/dns/db.c:700 (libdns.so.1505+0x67390)
#4 iszonesecure /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:2213 (libdns.so.1505+0x102ef8)
#5 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7317 (libdns.so.1505+0x108947)
#6 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7294 (libdns.so.1505+0x108947)
#7 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:301 (libdns.so.1505+0x65c10)
#8 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:285 (libdns.so.1505+0x65c10)
#9 zone_startload /home/ondrej/Projects/bind9/lib/dns/zone.c:2554 (libdns.so.1505+0x2523d3)
#10 zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2138 (libdns.so.1505+0x2523d3)
#11 dns_zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2177 (libdns.so.1505+0x252561)
#12 load_zones server.c:9533 (named+0x2e62a)
#13 run_server server.c:9642 (named+0x59a4a)
#14 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#15 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#16 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M751673771345120912 previously acquired by the same thread here:
#0 pthread_rwlock_wrlock <null> (libtsan.so.0+0x2d229)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:54 (libisc.so.1504+0x5078f)
#2 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:47 (libisc.so.1504+0x5078f)
#3 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7304 (libdns.so.1505+0x1088ae)
#4 endload /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:7294 (libdns.so.1505+0x1088ae)
#5 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:301 (libdns.so.1505+0x65c10)
#6 dns_db_endload /home/ondrej/Projects/bind9/lib/dns/db.c:285 (libdns.so.1505+0x65c10)
#7 zone_startload /home/ondrej/Projects/bind9/lib/dns/zone.c:2554 (libdns.so.1505+0x2523d3)
#8 zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2138 (libdns.so.1505+0x2523d3)
#9 dns_zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2177 (libdns.so.1505+0x252561)
#10 load_zones server.c:9533 (named+0x2e62a)
#11 run_server server.c:9642 (named+0x59a4a)
#12 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#13 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#14 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M751673771345120912 acquired here while holding mutex M755051488245546000 in thread T9:
#0 pthread_rwlock_wrlock <null> (libtsan.so.0+0x2d229)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:54 (libisc.so.1504+0x5078f)
#2 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:47 (libisc.so.1504+0x5078f)
#3 add_changed /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:1359 (libdns.so.1505+0x1020ab)
#4 add32 /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:5813 (libdns.so.1505+0x11691c)
#5 addrdataset /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:6600 (libdns.so.1505+0x119ed8)
#6 dns_db_addrdataset /home/ondrej/Projects/bind9/lib/dns/db.c:744 (libdns.so.1505+0x676af)
#7 diff_apply /home/ondrej/Projects/bind9/lib/dns/diff.c:364 (libdns.so.1505+0x6ae0e)
#8 dns_diff_apply /home/ondrej/Projects/bind9/lib/dns/diff.c:452 (libdns.so.1505+0x6c0a1)
#9 do_one_tuple /home/ondrej/Projects/bind9/lib/dns/zone.c:4089 (libdns.so.1505+0x20379e)
#10 update_one_rr /home/ondrej/Projects/bind9/lib/dns/zone.c:4118 (libdns.so.1505+0x203acf)
#11 add_soa /home/ondrej/Projects/bind9/lib/dns/zone.c:4222 (libdns.so.1505+0x24dc95)
#12 zone_postload /home/ondrej/Projects/bind9/lib/dns/zone.c:4585 (libdns.so.1505+0x24dc95)
#13 zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2163 (libdns.so.1505+0x251ec9)
#14 dns_zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2177 (libdns.so.1505+0x252561)
#15 load_zones server.c:9533 (named+0x2e62a)
#16 run_server server.c:9642 (named+0x59a4a)
#17 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#18 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#19 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M755051488245546000 previously acquired by the same thread here:
#0 pthread_rwlock_wrlock <null> (libtsan.so.0+0x2d229)
#1 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:54 (libisc.so.1504+0x5078f)
#2 isc_rwlock_lock /home/ondrej/Projects/bind9/lib/isc/rwlock.c:47 (libisc.so.1504+0x5078f)
#3 addrdataset /home/ondrej/Projects/bind9/lib/dns/rbtdb.c:6555 (libdns.so.1505+0x119a72)
#4 dns_db_addrdataset /home/ondrej/Projects/bind9/lib/dns/db.c:744 (libdns.so.1505+0x676af)
#5 diff_apply /home/ondrej/Projects/bind9/lib/dns/diff.c:364 (libdns.so.1505+0x6ae0e)
#6 dns_diff_apply /home/ondrej/Projects/bind9/lib/dns/diff.c:452 (libdns.so.1505+0x6c0a1)
#7 do_one_tuple /home/ondrej/Projects/bind9/lib/dns/zone.c:4089 (libdns.so.1505+0x20379e)
#8 update_one_rr /home/ondrej/Projects/bind9/lib/dns/zone.c:4118 (libdns.so.1505+0x203acf)
#9 add_soa /home/ondrej/Projects/bind9/lib/dns/zone.c:4222 (libdns.so.1505+0x24dc95)
#10 zone_postload /home/ondrej/Projects/bind9/lib/dns/zone.c:4585 (libdns.so.1505+0x24dc95)
#11 zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2163 (libdns.so.1505+0x251ec9)
#12 dns_zone_load /home/ondrej/Projects/bind9/lib/dns/zone.c:2177 (libdns.so.1505+0x252561)
#13 load_zones server.c:9533 (named+0x2e62a)
#14 run_server server.c:9642 (named+0x59a4a)
#15 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x55606)
#16 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x55606)
#17 <null> <null> (libtsan.so.0+0x29b3d)
Thread T9 'isc-worker0000' (tid=21692, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7a324)
#2 isc_taskmgr_create /home/ondrej/Projects/bind9/lib/isc/task.c:1410 (libisc.so.1504+0x583c3)
#3 create_managers main.c:902 (named+0x1af1c)
#4 setup main.c:1235 (named+0x1af1c)
#5 main main.c:1513 (named+0x1af1c)
SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) (/usr/lib/x86_64-linux-gnu/libtsan.so.0+0x2cf99) in pthread_rwlock_rdlock
```
December 2019 (9.11.14, 9.14.9, 9.15.7)

https://gitlab.isc.org/isc-projects/bind9/-/issues/1435
SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) (/usr/lib/x86_64-linux-gnu/libtsan.so.0+0x3d62b) in pthread_mutex_lock
2019-12-10T14:11:10Z, Ondřej Surý

* Binary: `named`, `delv`, `dig`, `dnssec-signzone`, ...
* Commit: 289f143d8a2a248333ace4d1d43ab388c7405a73
* Tests: all of them
```
WARNING: ThreadSanitizer: lock-order-inversion (potential deadlock) (pid=22334)
Cycle in lock order graph: M1113 (0x7b7400000058) => M1728 (0x7b4c000001d0) => M1113
Mutex M1728 acquired here while holding mutex M1113 in main thread:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 fcount_incr /home/ondrej/Projects/bind9/lib/dns/resolver.c:1524 (libdns.so.1505+0x185f18)
#2 fctx_create /home/ondrej/Projects/bind9/lib/dns/resolver.c:4928 (libdns.so.1505+0x191103)
#3 dns_resolver_createfetch /home/ondrej/Projects/bind9/lib/dns/resolver.c:10584 (libdns.so.1505+0x198031)
#4 start_fetch /home/ondrej/Projects/bind9/lib/dns/client.c:777 (libdns.so.1505+0x27dbea)
#5 client_resfind /home/ondrej/Projects/bind9/lib/dns/client.c:862 (libdns.so.1505+0x27dbea)
#6 dns_client_startresolve /home/ondrej/Projects/bind9/lib/dns/client.c:1388 (libdns.so.1505+0x28258c)
#7 dns_client_resolve /home/ondrej/Projects/bind9/lib/dns/client.c:1249 (libdns.so.1505+0x283e81)
#8 main /home/ondrej/Projects/bind9/bin/delv/delv.c:1788 (delv+0x5d92)
Mutex M1113 previously acquired by the same thread here:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 dns_resolver_createfetch /home/ondrej/Projects/bind9/lib/dns/resolver.c:10541 (libdns.so.1505+0x197725)
#2 start_fetch /home/ondrej/Projects/bind9/lib/dns/client.c:777 (libdns.so.1505+0x27dbea)
#3 client_resfind /home/ondrej/Projects/bind9/lib/dns/client.c:862 (libdns.so.1505+0x27dbea)
#4 dns_client_startresolve /home/ondrej/Projects/bind9/lib/dns/client.c:1388 (libdns.so.1505+0x28258c)
#5 dns_client_resolve /home/ondrej/Projects/bind9/lib/dns/client.c:1249 (libdns.so.1505+0x283e81)
#6 main /home/ondrej/Projects/bind9/bin/delv/delv.c:1788 (delv+0x5d92)
Mutex M1113 acquired here while holding mutex M1728 in main thread:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 dns_resolver_shutdown /home/ondrej/Projects/bind9/lib/dns/resolver.c:10308 (libdns.so.1505+0x1971c4)
#2 view_flushanddetach /home/ondrej/Projects/bind9/lib/dns/view.c:582 (libdns.so.1505+0x1fcdcd)
#3 dns_view_detach /home/ondrej/Projects/bind9/lib/dns/view.c:635 (libdns.so.1505+0x1fcebe)
#4 destroyclient /home/ondrej/Projects/bind9/lib/dns/client.c:611 (libdns.so.1505+0x281a6c)
#5 dns_client_destroy /home/ondrej/Projects/bind9/lib/dns/client.c:652 (libdns.so.1505+0x281a6c)
#6 main /home/ondrej/Projects/bind9/bin/delv/delv.c:1827 (delv+0x3bf2)
Mutex M1728 previously acquired by the same thread here:
#0 pthread_mutex_lock <null> (libtsan.so.0+0x3d62b)
#1 dns_resolver_shutdown /home/ondrej/Projects/bind9/lib/dns/resolver.c:10303 (libdns.so.1505+0x1970f7)
#2 view_flushanddetach /home/ondrej/Projects/bind9/lib/dns/view.c:582 (libdns.so.1505+0x1fcdcd)
#3 dns_view_detach /home/ondrej/Projects/bind9/lib/dns/view.c:635 (libdns.so.1505+0x1fcebe)
#4 destroyclient /home/ondrej/Projects/bind9/lib/dns/client.c:611 (libdns.so.1505+0x281a6c)
#5 dns_client_destroy /home/ondrej/Projects/bind9/lib/dns/client.c:652 (libdns.so.1505+0x281a6c)
#6 main /home/ondrej/Projects/bind9/bin/delv/delv.c:1827 (delv+0x3bf2)
SUMMARY: ThreadSanitizer: lock-order-inversion (potential deadlock) (/usr/lib/x86_64-linux-gnu/libtsan.so.0+0x3d62b) in pthread_mutex_lock
```
December 2019 (9.11.14, 9.14.9, 9.15.7)

https://gitlab.isc.org/isc-projects/bind9/-/issues/1422
notify unit test segfaults when compiled with ThreadSanitizer
2019-12-10T14:11:50Z, Ondřej Surý

```
[==========] Running 1 test(s).
[ RUN ] notify_start
netmgr.c:981: REQUIRE((__builtin_expect(!!((handle) != ((void*)0)), 1) && __builtin_expect(!!(((const isc__magic_t *)(handle))->magic == ((('N') << 24 | ('M') << 16 | ('H') << 8 | ('D')))), 1))) failed, back trace
#0 0x1010674ff in ??
#1 0x10106745e in ??
#2 0x101088ecc in ??
#3 0x100d504a5 in ??
#4 0x100d2959f in ??
#5 0x1014b8a6f in ??
#6 0x1014b6f58 in ??
#7 0x100d2931c in ??
#8 0x7fff66a7e2e5 in ??
Abort trap: 6
```
December 2019 (9.11.14, 9.14.9, 9.15.7)

https://gitlab.isc.org/isc-projects/bind9/-/issues/1390
Crash in the autosign test
2019-12-11T09:18:34Z, Ondřej Surý

See https://gitlab.isc.org/isc-projects/bind9/-/jobs/422545; I don't have the backtrace yet.

December 2019 (9.11.14, 9.14.9, 9.15.7)

https://gitlab.isc.org/isc-projects/bind9/-/issues/1473
ThreadSanitizer: data race /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1027 in nmhandle_free
2019-12-10T13:00:20Z, Ondřej Surý

```
WARNING: ThreadSanitizer: data race (pid=29181)
Write of size 8 at 0x7b90000a0010 by thread T16 (mutexes: write M562522896233146464):
#0 nmhandle_free /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1027 (libisc.so.1504+0x3e3b7)
#1 nmhandle_deactivate /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1057 (libisc.so.1504+0x3e59f)
#2 isc_nmhandle_unref /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1108 (libisc.so.1504+0x409f0)
#3 fetch_callback /home/ondrej/Projects/bind9/lib/ns/query.c:5680 (libns.so.1502+0x46b71)
#4 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x56f36)
#5 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x56f36)
#6 <null> <null> (libtsan.so.0+0x29b3d)
Previous read of size 4 at 0x7b90000a0010 by thread T1:
#0 isc_nmhandle_unref /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1067 (libisc.so.1504+0x4091d)
#1 isc__nm_uvreq_put /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1214 (libisc.so.1504+0x41bef)
#2 udp_send_cb /home/ondrej/Projects/bind9/lib/isc/netmgr/udp.c:439 (libisc.so.1504+0x46c1d)
#3 <null> <null> (libuv.so.1+0x1d283)
#4 <null> <null> (libtsan.so.0+0x29b3d)
Location is heap block of size 7489 at 0x7b90000a0000 allocated by thread T8:
#0 malloc <null> (libtsan.so.0+0x2b1a3)
#1 default_memalloc /home/ondrej/Projects/bind9/lib/isc/mem.c:685 (libisc.so.1504+0x33fee)
#2 mem_get /home/ondrej/Projects/bind9/lib/isc/mem.c:598 (libisc.so.1504+0x34c7e)
#3 mem_allocateunlocked /home/ondrej/Projects/bind9/lib/isc/mem.c:1222 (libisc.so.1504+0x34c7e)
#4 isc___mem_allocate /home/ondrej/Projects/bind9/lib/isc/mem.c:1242 (libisc.so.1504+0x34c7e)
#5 isc__mem_allocate /home/ondrej/Projects/bind9/lib/isc/mem.c:2387 (libisc.so.1504+0x3be64)
#6 isc___mem_get /home/ondrej/Projects/bind9/lib/isc/mem.c:1007 (libisc.so.1504+0x3c6ca)
#7 isc__mem_get /home/ondrej/Projects/bind9/lib/isc/mem.c:2365 (libisc.so.1504+0x3aef1)
#8 alloc_handle /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:916 (libisc.so.1504+0x40547)
#9 isc__nmhandle_get /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:940 (libisc.so.1504+0x40547)
#10 udp_recv_cb /home/ondrej/Projects/bind9/lib/isc/netmgr/udp.c:312 (libisc.so.1504+0x46841)
#11 <null> <null> (libuv.so.1+0x1d6d4)
#12 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M562522896233146464 is already destroyed.
Thread T16 'isc-worker0007' (tid=29211, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7bc54)
#2 isc_taskmgr_create /home/ondrej/Projects/bind9/lib/isc/task.c:1410 (libisc.so.1504+0x59cf3)
#3 create_managers main.c:902 (named+0x1aeec)
#4 setup main.c:1235 (named+0x1aeec)
#5 main main.c:1515 (named+0x1aeec)
Thread T1 'isc-net-0000' (tid=29196, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7bc54)
#2 isc_nm_start /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:149 (libisc.so.1504+0x3ec4a)
#3 create_managers main.c:895 (named+0x1ae90)
#4 setup main.c:1235 (named+0x1ae90)
#5 main main.c:1515 (named+0x1ae90)
Thread T8 'isc-net-0007' (tid=29203, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7bc54)
#2 isc_nm_start /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:149 (libisc.so.1504+0x3ec4a)
#3 create_managers main.c:895 (named+0x1ae90)
#4 setup main.c:1235 (named+0x1ae90)
#5 main main.c:1515 (named+0x1ae90)
SUMMARY: ThreadSanitizer: data race /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1027 in nmhandle_free
```
and
```
WARNING: ThreadSanitizer: data race (pid=29181)
Write of size 8 at 0x7b90000a0018 by thread T16 (mutexes: write M562522896233146464):
#0 nmhandle_free /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1027 (libisc.so.1504+0x3e3b7)
#1 nmhandle_deactivate /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1057 (libisc.so.1504+0x3e59f)
#2 isc_nmhandle_unref /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1108 (libisc.so.1504+0x409f0)
#3 fetch_callback /home/ondrej/Projects/bind9/lib/ns/query.c:5680 (libns.so.1502+0x46b71)
#4 dispatch /home/ondrej/Projects/bind9/lib/isc/task.c:1134 (libisc.so.1504+0x56f36)
#5 run /home/ondrej/Projects/bind9/lib/isc/task.c:1319 (libisc.so.1504+0x56f36)
#6 <null> <null> (libtsan.so.0+0x29b3d)
Previous atomic write of size 8 at 0x7b90000a0018 by thread T1:
#0 __tsan_atomic64_fetch_sub <null> (libtsan.so.0+0x648dd)
#1 isc_nmhandle_unref /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1069 (libisc.so.1504+0x4093c)
#2 isc__nm_uvreq_put /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1214 (libisc.so.1504+0x41bef)
#3 udp_send_cb /home/ondrej/Projects/bind9/lib/isc/netmgr/udp.c:439 (libisc.so.1504+0x46c1d)
#4 <null> <null> (libuv.so.1+0x1d283)
#5 <null> <null> (libtsan.so.0+0x29b3d)
Location is heap block of size 7489 at 0x7b90000a0000 allocated by thread T8:
#0 malloc <null> (libtsan.so.0+0x2b1a3)
#1 default_memalloc /home/ondrej/Projects/bind9/lib/isc/mem.c:685 (libisc.so.1504+0x33fee)
#2 mem_get /home/ondrej/Projects/bind9/lib/isc/mem.c:598 (libisc.so.1504+0x34c7e)
#3 mem_allocateunlocked /home/ondrej/Projects/bind9/lib/isc/mem.c:1222 (libisc.so.1504+0x34c7e)
#4 isc___mem_allocate /home/ondrej/Projects/bind9/lib/isc/mem.c:1242 (libisc.so.1504+0x34c7e)
#5 isc__mem_allocate /home/ondrej/Projects/bind9/lib/isc/mem.c:2387 (libisc.so.1504+0x3be64)
#6 isc___mem_get /home/ondrej/Projects/bind9/lib/isc/mem.c:1007 (libisc.so.1504+0x3c6ca)
#7 isc__mem_get /home/ondrej/Projects/bind9/lib/isc/mem.c:2365 (libisc.so.1504+0x3aef1)
#8 alloc_handle /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:916 (libisc.so.1504+0x40547)
#9 isc__nmhandle_get /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:940 (libisc.so.1504+0x40547)
#10 udp_recv_cb /home/ondrej/Projects/bind9/lib/isc/netmgr/udp.c:312 (libisc.so.1504+0x46841)
#11 <null> <null> (libuv.so.1+0x1d6d4)
#12 <null> <null> (libtsan.so.0+0x29b3d)
Mutex M562522896233146464 is already destroyed.
Thread T16 'isc-worker0007' (tid=29211, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7bc54)
#2 isc_taskmgr_create /home/ondrej/Projects/bind9/lib/isc/task.c:1410 (libisc.so.1504+0x59cf3)
#3 create_managers main.c:902 (named+0x1aeec)
#4 setup main.c:1235 (named+0x1aeec)
#5 main main.c:1515 (named+0x1aeec)
Thread T1 'isc-net-0000' (tid=29196, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7bc54)
#2 isc_nm_start /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:149 (libisc.so.1504+0x3ec4a)
#3 create_managers main.c:895 (named+0x1ae90)
#4 setup main.c:1235 (named+0x1ae90)
#5 main main.c:1515 (named+0x1ae90)
Thread T8 'isc-net-0007' (tid=29203, running) created by main thread at:
#0 pthread_create <null> (libtsan.so.0+0x2be1b)
#1 isc_thread_create /home/ondrej/Projects/bind9/lib/isc/pthreads/thread.c:75 (libisc.so.1504+0x7bc54)
#2 isc_nm_start /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:149 (libisc.so.1504+0x3ec4a)
#3 create_managers main.c:895 (named+0x1ae90)
#4 setup main.c:1235 (named+0x1ae90)
#5 main main.c:1515 (named+0x1ae90)
SUMMARY: ThreadSanitizer: data race /home/ondrej/Projects/bind9/lib/isc/netmgr/netmgr.c:1027 in nmhandle_free
```
December 2019 (9.11.14, 9.14.9, 9.15.7)

https://gitlab.isc.org/isc-projects/bind9/-/issues/1465
idna system test failing on centos[67] (v9_11)
2019-12-11T09:24:04Z, Mark Andrews

Job [#459966](https://gitlab.isc.org/isc-projects/bind9/-/jobs/459966) failed for 614bc25b901006051d0cfc6e14087e87ab3bf30d:
```
I:idna:Checking invalid input U-label (41)
I:idna:Checking invalid input U-label: +noidnin +noidnout (42)
I:idna:Checking invalid input U-label: +noidnin +idnout (43)
I:idna:Checking invalid input U-label: +idnin +noidnout (44)
I:idna:failed: dig command unexpectedly succeeded
I:idna:Checking invalid input U-label: +idnin +idnout (45)
```
`idn_locale_to_ace()` is not failing on invalid input.
```
% more *44
; <<>> DiG 9.11.13 <<>> -i -p 8700 @10.53.0.1 +idnin +noidnout 🧦.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 18972
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 0290622b9447012620106a855ded889488bee751228b272a (good)
;; QUESTION SECTION:
;xn--wv9h.com. IN A
;; AUTHORITY SECTION:
. 300 IN SOA gson.nominum.com. a.root.servers.nil. 2000042100 600 600 1200 600
;; Query time: 0 msec
;; SERVER: 10.53.0.1#8700(10.53.0.1)
;; WHEN: Sun Dec 08 23:34:44 UTC 2019
;; MSG SIZE rcvd: 135
% more *45
/builds/isc-projects/bind9/bin/dig/.libs/lt-dig: 'xn--wv9h.com.' is not a legal IDNA2008 name (string contains a disallowed character), use +noidnout
%
```
December 2019 (9.11.14, 9.14.9, 9.15.7)
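As a side note (an illustration, not part of the issue or the test suite): the ACE name `xn--wv9h.com` in the dig captures above is just the `xn--` prefix plus the RFC 3492 Punycode of the socks emoji label. A minimal Python sketch of what `+idnin` does to the query name:

```python
# Hypothetical illustration (not BIND code): how the U-label "🧦.com" from the
# failing test case maps to the ACE form seen in the dig output above.
# +idnin prepends "xn--" to the Punycode of each non-ASCII label; a strict
# IDNA2008 converter should instead reject the emoji as a disallowed character,
# which is why the test expects the dig command to fail.
u_label = "\U0001F9E6"  # 🧦 (U+1F9E6)
a_label = "xn--" + u_label.encode("punycode").decode("ascii")
print(a_label + ".com")  # -> xn--wv9h.com, matching the QUESTION section above
```

The Punycode step alone never fails for valid Unicode input; the rejection has to come from the IDNA2008 validity check, which is the check the bug report says `idn_locale_to_ace()` is skipping.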