ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues
2022-11-02T15:10:20Z
https://gitlab.isc.org/isc-projects/kea/-/issues/1600
2022-11-02T15:10:20Z
Michal Nowikowski
option with the "all" server tag is not revealed after deleting a similar option with another tag that was covering it
scenario:
1. add option with tag all
2. add option with tag abc
3. set server to pull config with tag abc
4. get config - it shows option with tag abc
5. delete option with tag abc
6. get config - it should show option with tag all but it shows no options
Forge tests:
- tests/dhcpv4/kea_only/config_backend/test_cb_v4_server_tag.py::test_server_tag_global_option4
- tests/dhcpv6/kea_only/config_backend/test_cb_v6_server_tag.py::test_server_tag_global_option
backlog
Marcin Siodelski
https://gitlab.isc.org/isc-projects/kea/-/issues/1599
2023-11-23T09:10:45Z
Razvan Becheriu
implement thread pool wait and pause and resume functions
The thread pool is missing wait, pause, and resume functions, which can be used in certain situations to optimize, fix, or write better code and unit tests.
- the pause function should block all threads on a condition variable and should block the current thread until that happens.
It is a lightweight version of the stop function, which destroys the threads (it should not be called from processing threads).
- the resume function should signal waiting threads to resume processing; it is a lightweight version of the start function, which creates new threads.
- the wait function blocks the calling thread until all items in the queue are processed. This function is useful for unit tests as it forces all packets to be processed (used in #991). It can also be used when switching from MT to ST: currently, queued packets are dropped, whereas the main thread should wait for all packets to be handled and only then switch to ST.
kea2.5.4
Razvan Becheriu
https://gitlab.isc.org/isc-projects/kea/-/issues/1598
2021-01-20T15:16:18Z
Michal Nowikowski
when reservation-mode is changed via API then it is returned in config-get
After the introduction of the `reservations-global`, `reservations-in-subnet`, and `reservations-out-of-pool` fields, only these fields should be returned in `config-get`; `reservation-mode` should never be returned.
Forge tests:
```
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-None]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-all]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-out-of-pool]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-global]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-disabled]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-None]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-all]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-out-of-pool]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-global]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-disabled]
```
kea1.9.4
Razvan Becheriu
https://gitlab.isc.org/isc-projects/kea/-/issues/1597
2022-11-02T15:10:18Z
Michal Nowikowski
reservations-* fields do not reflect changes made to reservation-mode
If `reservation-mode` is changed using `remote-global-parameter{46}-set`, then `config-get` shows unchanged values of `reservations-global`, `reservations-in-subnet`, and `reservations-out-of-pool`.
Forge tests:
```
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-None]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-all]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-out-of-pool]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-global]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v4-disabled]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-None]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-all]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-out-of-pool]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-global]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_override_init[v6-disabled]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_in_globals[v4]
tests/dhcpv4/kea_only/config_backend/test_reservations.py::test_reservation_mode_in_globals[v6]
```
backlog
https://gitlab.isc.org/isc-projects/bind9/-/issues/2344
2021-09-08T11:26:51Z
Michal Nowak
Unused variable ‘lockid’ on Solaris 11.4 in unix/socket.c
Building `main` on Solaris 11.4 I got the following warning:
```
unix/socket.c: In function ‘unwatch_fd’:
unix/socket.c:808:6: warning: unused variable ‘lockid’ [-Wunused-variable]
int lockid = FDLOCK_ID(fd);
^~~~~~
```
October 2021 (9.11.36, 9.11.36-S1, 9.16.22, 9.16.22-S1, 9.17.19)
Aram Sargsyan
https://gitlab.isc.org/isc-projects/kea/-/issues/1596
2023-04-06T12:02:31Z
Peter Davies
Include subnet and pool user context in lease database
I would like the option to copy the information from the subnet-level and pool-level user-context into the user-context in the lease4/lease6 table after a lease is accepted. What I would like to see:
Example config:
```
{
"name": "CMTS-4",
"relay": {
"ip-addresses": [ "0123:4567:891b:cd::1" ]
},
"subnet6": [
{
"subnet": "0123:4567:891b:cd::/64",
"id": 40001,
"pools": [
{ "pool": "0123:4567:891b:cd:4000::a - 0123:4567:891b:cd:7fff:ffff:ffff:ffff" ,"client-class": "pool_one", "user-context": { "pool": "pool_one", "name" : "av", "size" : "10" }} ,
{ "pool": "0123:4567:891b:cd::a - 0123:4567:891b:cd:3fff:ffff:ffff:ffff" ,"client-class": "gamers", "user-context": { "pool": "gamers", "name" : "computers", "size" : "1000" } } ,
{ "pool": "0123:4567:891b:cd:8000::a - 0123:4567:891b:cd:bfff:ffff:ffff:ffff" ,"client-class": "internet"}
],
"pd-pools": [
{
"prefix": "abcd:ef01:9044::",
"client-class": "pool_one",
"prefix-len": 46,
"delegated-len": 56,
"user-context": { "pdpool": "pool_one", "name" : "av" }
},
{
"prefix": "abcd:ef01:9444::",
"client-class": "gamers",
"prefix-len": 46,
"delegated-len": 56,
"user-context": { "pdpool": "gamers", "name" : "lan" }
},
{
"prefix": "abcd:ef01:8120::",
"client-class": "internet",
"prefix-len": 44,
"delegated-len": 56
}
],
"user-context": {
"device": "CMTS-4",
"location": "Partner"
}
}
]
}
```
When a user gets a lease with "client-class": "gamers", then the lease record in the lease table will have the following JSON:
```
"user-context": {
"shared-network": {}, ## <- came from shared-network level
"subnet" : { "device": "CMTS-4", "location": "Partner"}, ## <- came from subnet level
"pd-pool" : { "pdpool": "gamers", "name" : "lan" }, ## <- came from pd-pool level
"pool" : { "pool": "gamers", "name" : "computers", "size" : "1000" } ## <- came from pool level
}
```
This will help me get info on my subscribers; the lease table doesn't have specific info.
Let's say that I have a few pools under one subnet (like the example);
with that info I can get more accurate statistics on the leased addresses,
e.g. how many subscribers I have in gamers.
Another idea is to add info from other hooks.
We are using the RADIUS hook,
so if I were able to select fields from the RADIUS hook, like "username" or some other attribute (that came from RADIUS), I could put them inside user-context:
```
"user-context": {
"radius" : { "username" : "xxxxx", }
}
```
[RT #17374 ](https://support.isc.org/Ticket/Display.html?id=17374)
backlog
https://gitlab.isc.org/isc-projects/bind9/-/issues/2343
2020-12-09T10:08:46Z
Peter Davies
dig exits ungracefully on receipt of sigkill
### Summary
Sending SIGKILL to dig causes the dig process to exit with a core dump.
This behaviour is not seen in BIND 9.16.9 or BIND 9.17.6.
### BIND version used
```
[root@localhost bind9]# named -V
BIND 9.17.7 (Development Release) <id:ed85d06>
running on Linux x86_64 4.18.0-193.28.1.el8_2.x86_64 #1 SMP Thu Oct 22 00:20:22 UTC 2020
built by make with default
compiled by GCC 8.3.1 20191121 (Red Hat 8.3.1-5)
compiled with OpenSSL version: OpenSSL 1.1.1c FIPS 28 May 2019
linked to OpenSSL version: OpenSSL 1.1.1c FIPS 28 May 2019
compiled with libuv version: 1.40.1
linked to libuv version: 1.40.1-dev
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
threads support is enabled
default paths:
named configuration: /usr/local/etc/named.conf
rndc configuration: /usr/local/etc/rndc.conf
DNSSEC root key: /usr/local/etc/bind.keys
nsupdate session key: /usr/local/var/run/named/session.key
named PID file: /usr/local/var/run/named/named.pid
named lock file: /usr/local/var/run/named/named.lock
```
### Steps to reproduce
Use dig with a FQDN known to time out or take a long time to resolve. Kill the dig process before it returns with a result.
Seen on CentOS Linux 8, Fedora 25 & Ubuntu 19.10.
This behaviour is not seen in BIND 9.16.9 or BIND 9.17.6.
### What is the current *bug* behavior?
```
[root@localhost bind9]# /usr/local/bin/dig -v
DiG 9.17.7
[root@localhost bind9]# /usr/local/bin/dig @127.0.0.1 -p54 android.bugly.qq.com
^Cdighost.c:4262: REQUIRE(isc_refcount_current(&recvcount) == 0) failed, back trace
/usr/local/lib/libisc.so.1706(+0x36f23) [0x7f795476df23]
/usr/local/lib/libisc.so.1706(isc_assertion_failed+0xa) [0x7f795476de8a]
/usr/local/bin/dig() [0x4146fa]
/usr/local/bin/dig() [0x40a4c9]
/usr/local/bin/dig() [0x4050c8]
/lib64/libc.so.6(__libc_start_main+0xf3) [0x7f7951db56a3]
/usr/local/bin/dig() [0x40510e]
Aborted (core dumped)
```
### What is the expected *correct* behavior?
A quiet close.
https://gitlab.isc.org/isc-projects/kea/-/issues/1595
2023-08-24T11:52:27Z
varsraja
Does Kea support the option 90 (Authentication)
I would like to know if Kea DHCP servers support option 90 (Authentication):
https://tools.ietf.org/html/rfc3118.
In the list of DHCP options supported as per the Kea documentation,
https://kea.readthedocs.io/en/kea-1.8.1/arm/dhcp4-srv.html#dhcp4-std-options-list , I don't find option 90 specified.
If it is supported, what should I configure in dhclient and the Kea DHCP server to get it working?
Thank you
Varsraja
outstanding
https://gitlab.isc.org/isc-projects/bind9/-/issues/2342
2021-01-27T22:45:04Z
JP Mens
rndc retransfer issues misleading diagnostic on primary zone
### Summary
The `rndc` command has a subcommand `retransfer`, which retransfers a single zone without checking the serial number. When used on a primary zone on a primary server, the command issues the following diagnostic:
```console
% rndc retransfer inline.zone12.dane.onl
rndc: 'retransfer' failed: not found
```
However, if the zone doesn't exist at all, `rndc` emits this clearer message:
```console
% rndc retransfer yyy
rndc: 'retransfer' failed: not found
no matching zone 'yyy' in any view
```
### BIND version used
```
BIND 9.16.9 (Stable Release) <id:b3f41b7>
running on Linux x86_64 4.18.0-193.6.3.el8_2.x86_64 #1 SMP Wed Jun 10 11:09:32 UTC 2020
built by make with '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu' '--program-prefix=' '--disable-dependency-tracking' '--prefix=/opt/isc/isc-bind/root/usr' '--exec-prefix=/opt/isc/isc-bind/root/usr' '--bindir=/opt/isc/isc-bind/root/usr/bin' '--sbindir=/opt/isc/isc-bind/root/usr/sbin' '--sysconfdir=/etc/opt/isc/scls/isc-bind' '--datadir=/opt/isc/isc-bind/root/usr/share' '--includedir=/opt/isc/isc-bind/root/usr/include' '--libdir=/opt/isc/isc-bind/root/usr/lib64' '--libexecdir=/opt/isc/isc-bind/root/usr/libexec' '--localstatedir=/var/opt/isc/scls/isc-bind' '--sharedstatedir=/var/opt/isc/scls/isc-bind/lib' '--mandir=/opt/isc/isc-bind/root/usr/share/man' '--infodir=/opt/isc/isc-bind/root/usr/share/info' '--disable-static' '--enable-dnstap' '--with-pic' '--with-gssapi' '--with-json-c' '--with-libtool' '--with-libxml2' '--without-lmdb' '--with-python' 'build_alias=x86_64-redhat-linux-gnu' 'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -L/opt/isc/isc-bind/root/usr/lib64' 'LT_SYS_LIBRARY_PATH=/usr/lib64' 'PKG_CONFIG_PATH=:/opt/isc/isc-bind/root/usr/lib64/pkgconfig:/opt/isc/isc-bind/root/usr/share/pkgconfig' 'SPHINX_BUILD=/builddir/build/BUILD/bind-9.16.9/sphinx/bin/sphinx-build'
compiled by GCC 8.3.1 20191121 (Red Hat 8.3.1-5)
compiled with OpenSSL version: OpenSSL 1.1.1c FIPS 28 May 2019
linked to OpenSSL version: OpenSSL 1.1.1c FIPS 28 May 2019
compiled with libuv version: 1.40.0
linked to libuv version: 1.40.0
compiled with libxml2 version: 2.9.7
linked to libxml2 version: 20907
compiled with json-c version: 0.13.1
linked to json-c version: 0.13.1
compiled with zlib version: 1.2.11
linked to zlib version: 1.2.11
compiled with protobuf-c version: 1.3.3
linked to protobuf-c version: 1.3.3
threads support is enabled
default paths:
named configuration: /etc/opt/isc/scls/isc-bind/named.conf
rndc configuration: /etc/opt/isc/scls/isc-bind/rndc.conf
DNSSEC root key: /etc/opt/isc/scls/isc-bind/bind.keys
nsupdate session key: /var/opt/isc/scls/isc-bind/run/named/session.key
named PID file: /var/opt/isc/scls/isc-bind/run/named/named.pid
named lock file: /var/opt/isc/scls/isc-bind/run/named/named.lock
```
### Steps to reproduce
1. configure a primary zone, say, `example`
2. issue `rndc retransfer example`
### What is the current *bug* behavior?
Diagnostic as shown above
### What is the expected *correct* behavior?
What I would like to see is `rndc` telling me that the zone is a primary zone and cannot be retransferred.
February 2021 (9.11.28, 9.11.28-S1, 9.16.12, 9.16.12-S1, 9.17.10)
https://gitlab.isc.org/isc-projects/bind9/-/issues/2341
2021-01-07T21:37:35Z
Matthijs Mekking
matthijs@isc.org
Removing 'auto-dnssec' does not turn off DNSSEC maintenance
If you reconfigure a zone and remove the `auto-dnssec` option, the zone is actually still DNSSEC-maintained. This is because in `zoneconf.c` there is no call to `dns_zone_setkeyopt()` to turn off the flags. If the configuration option is not used, `cfg_map_get(zoptions, "auto-dnssec", &obj)` returns an error.
(this is fixed in d72ad7c530bb3f0860bc0d47c075368cbd3fcc44 as part of #1750)
January 2021 (9.11.27, 9.11.27-S1, 9.16.11, 9.16.11-S1, 9.17.9)
Matthijs Mekking
matthijs@isc.org
https://gitlab.isc.org/isc-projects/stork/-/issues/466
2022-02-02T09:51:28Z
Wlodzimierz Wencel
sanity checks 0.14.0
Please do your sanity checks according to the steps below:
1. Download the tarball, verify it is sane, build it and run tests.
Tarball: https://gitlab.isc.org/isc-projects/stork/-/jobs/1350348/artifacts/browse
2. Start the demo with `rake docker_up` and follow the steps from: https://gitlab.isc.org/isc-projects/stork/-/wikis/Demo
3. Install the server and agent locally, e.g. on VMs, from the binary packages:
debs: https://gitlab.isc.org/isc-projects/stork/-/jobs/1350349/artifacts/browse
rpms: https://gitlab.isc.org/isc-projects/stork/-/jobs/1350350/artifacts/browse
If you want you can execute GUI system test based on selenium with:
```
rake system_tests_ui BROWSER=Firefox
rake system_tests_ui BROWSER=Chrome
```
but I haven't run those for a while, so I'm not sure they will pass!
0.14
https://gitlab.isc.org/isc-projects/stork/-/issues/465
2020-12-07T16:20:27Z
Marcin Siodelski
Logrus must be upgraded after go upgrade to 1.15
After the upgrade of golang to 1.15 there is a regression in the logrus logging library. We need to upgrade logrus to circumvent this. See for reference: https://github.com/sirupsen/logrus/issues/1096
0.14
Marcin Siodelski
https://gitlab.isc.org/isc-projects/stork/-/issues/464
2020-12-07T15:46:32Z
Wlodzimierz Wencel
0.14 release
Wlodzimierz Wencel
https://gitlab.isc.org/isc-projects/stork/-/issues/463
2021-01-28T13:32:14Z
Marcin Siodelski
Events panel is not refreshed when switching between machine tabs
While doing #429 I noticed that, unlike in the case of app panels, when you open several machine panels and switch between them, the events are not updated to the currently selected machine. In order to view events from the current machine, one has to switch to the first (all machines) tab and then go back to the desired one. Another way is to refresh the page.
In order to reproduce:
- Start Stork demo
- Add two new machines, e.g. agent-kea-ha1 and agent-kea-ha2
- Click between agent-kea-ha1 and agent-kea-ha2 tabs. The events panel is not refreshed and is showing events specific to the other machine.
- Click on the Machines tab and go back. Now, events are properly displayed.
0.15
Marcin Siodelski
https://gitlab.isc.org/isc-projects/kea/-/issues/1594
2021-01-26T17:26:09Z
jenkins
Sanity checks for Kea 1.8.2 rc1
```We are now at step SANITY CHECKS of Kea 1.8.2 rc1.
Please verify the packages and files according to
https://wiki.isc.org/bin/view/QA/KeaReleaseProcess, "4. Sanity Checks" chapter
and your imagination.
Before starting any checks, please state in the Sanity Checks issue in GitLab
what check you are doing in a thread/discussion (not as a comment).
When you finish a given check, state the result in the same thread/discussion.
This way we know what is covered upfront and we can avoid repeating ourselves.
Release content is located on:
1) [tarballs] repo.isc.org in the following folders:
/data/shared/sweng/kea/releases/1.8.2-rc1
/data/shared/sweng/kea/releases/premium-1.8.2-rc1
/data/shared/sweng/kea/releases/subscription-1.8.2-rc1
SHA256 (1.8.2-rc1/kea-1.8.2.tar.gz) = 486ca7abedb9d6fdf8e4344ad8688d1171f2ef0f5506d118988aadeae80a1d39
SHA256 (subscription-1.8.2-rc1/kea-subscription-1.8.2.tar.gz) = da1a0c62a094c5088d7a71c664f932fef2b26dbc4a5d83fff40421f81430259c
SHA256 (premium-1.8.2-rc1/kea-premium-1.8.2.tar.gz) = 4b37fb898928f1fe31390846c12a68906ec9183631e9536fd2e5ded9c5f4c0d0
2) [rpm/deb packages] on packages.isc.org, exact packages versions are stored here:
https://jenkins.isc.org/job/kea-1.8/job/pkg/19/
Release version is 1.8.2-isc0001520201206093433 (please verify if it is this version while installing).
Install instruction is here: https://wiki.isc.org/bin/view/QA/KeaReleaseProcess, chapter 4. Sanity Checks, point 9.
```
kea1.8.2
https://gitlab.isc.org/isc-projects/kea/-/issues/1593
2021-01-12T08:20:00Z
Andrei Pavel
andrei@isc.org
possible deadlock with InterprocessSyncFile
UTs stopped running at `PgSqlLeaseMgrDbLostCallbackTest.testDbLostCallback` while running unit tests for all modules with `GTEST_SHUFFLE=1`. Looks like a deadlock involving `InterprocessSyncFile`.
```
(gdb) thread apply all bt
Thread 5 (Thread 0x7ffad3d01640 (LWP 3207223) "lt-libdhcpsrv_u"):
#0 0x00007ffad681d6a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007ffad6041c11 in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /build/gcc/src/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2 std::condition_variable::wait (this=<optimized out>, __lock=...) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/condition_variable.cc:53
#3 0x00007ffad67cf8e2 in ?? () from /usr/lib/liblog4cplus-2.0.so.3
#4 0x00007ffad6047c24 in std::execute_native_thread_routine (__p=0x562068e7cd00) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffad68173e9 in start_thread () from /usr/lib/libpthread.so.0
#6 0x00007ffad5d4f293 in clone () from /usr/lib/libc.so.6
Thread 4 (Thread 0x7ffad4502640 (LWP 3207222) "lt-libdhcpsrv_u"):
#0 0x00007ffad681d6a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007ffad6041c11 in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /build/gcc/src/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2 std::condition_variable::wait (this=<optimized out>, __lock=...) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/condition_variable.cc:53
#3 0x00007ffad67cf8e2 in ?? () from /usr/lib/liblog4cplus-2.0.so.3
#4 0x00007ffad6047c24 in std::execute_native_thread_routine (__p=0x562068e7cde0) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffad68173e9 in start_thread () from /usr/lib/libpthread.so.0
#6 0x00007ffad5d4f293 in clone () from /usr/lib/libc.so.6
Thread 3 (Thread 0x7ffad4d03640 (LWP 3207221) "lt-libdhcpsrv_u"):
#0 0x00007ffad681d6a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007ffad6041c11 in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /build/gcc/src/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2 std::condition_variable::wait (this=<optimized out>, __lock=...) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/condition_variable.cc:53
#3 0x00007ffad67cf8e2 in ?? () from /usr/lib/liblog4cplus-2.0.so.3
#4 0x00007ffad6047c24 in std::execute_native_thread_routine (__p=0x562068e7c680) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffad68173e9 in start_thread () from /usr/lib/libpthread.so.0
#6 0x00007ffad5d4f293 in clone () from /usr/lib/libc.so.6
Thread 2 (Thread 0x7ffad5504640 (LWP 3207220) "lt-libdhcpsrv_u"):
#0 0x00007ffad681d6a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007ffad6041c11 in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /build/gcc/src/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2 std::condition_variable::wait (this=<optimized out>, __lock=...) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/condition_variable.cc:53
#3 0x00007ffad67cf8e2 in ?? () from /usr/lib/liblog4cplus-2.0.so.3
#4 0x00007ffad6047c24 in std::execute_native_thread_routine (__p=0x562068e7cce0) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffad68173e9 in start_thread () from /usr/lib/libpthread.so.0
#6 0x00007ffad5d4f293 in clone () from /usr/lib/libc.so.6
Thread 1 (Thread 0x7ffad5510040 (LWP 3207218) "lt-libdhcpsrv_u"):
#0 0x00007ffad5d40427 in fcntl64 () from /usr/lib/libc.so.6
#1 0x00007ffad6820eff in fcntl_compat () from /usr/lib/libpthread.so.0
#2 0x00007ffad72321c3 in isc::log::interprocess::InterprocessSyncFile::do_lock (this=0x562068fee310, cmd=7, l_type=1) at interprocess_sync_file.cc:85
#3 0x00007ffad7232405 in isc::log::interprocess::InterprocessSyncFile::lock (this=0x562068fee310) at interprocess_sync_file.cc:94
#4 0x00007ffad720f9de in isc::log::interprocess::InterprocessSyncLocker::lock (this=0x7ffcafeaf338) at ../../../src/lib/log/interprocess/interprocess_sync.h:109
#5 0x00007ffad720ea5a in isc::log::LoggerImpl::outputRaw (this=0x562068fa5130, severity=@0x7ffcafeaf4c8: isc::log::ERROR, message="DATABASE_PGSQL_FATAL_ERROR Unrecoverable PostgreSQL error occurred: Statement: <get_lease4_addr>, reason: could not receive data from server: Bad file descriptor\n (error code: <sqlstate null>).") at logger_impl.cc:162
#6 0x00007ffad720c19c in isc::log::Logger::output (this=0x7ffad7528e60 <isc::db::database_logger>, severity=@0x7ffcafeaf4c8: isc::log::ERROR, message="DATABASE_PGSQL_FATAL_ERROR Unrecoverable PostgreSQL error occurred: Statement: <get_lease4_addr>, reason: could not receive data from server: Bad file descriptor\n (error code: <sqlstate null>).") at logger.cc:147
#7 0x000056206621ee95 in isc::log::Formatter<isc::log::Logger>::~Formatter (this=0x7ffcafeaf4c0, __in_chrg=<optimized out>) at ../../../../src/lib/log/log_formatter.h:162
#8 0x00007ffad8057414 in isc::db::PgSqlConnection::checkStatementError (this=0x562068fa4e30, r=..., statement=...) at pgsql_connection.cc:333
#9 0x00007ffad9388f4c in isc::dhcp::PgSqlLeaseMgr::getLeaseCollection<boost::scoped_ptr<isc::dhcp::PgSqlLease4Exchange>, std::vector<boost::shared_ptr<isc::dhcp::Lease4>, std::allocator<boost::shared_ptr<isc::dhcp::Lease4> > > > (this=0x562068fb2430, ctx=..., stindex=isc::dhcp::PgSqlLeaseMgr::GET_LEASE4_ADDR, bind_array=..., exchange=..., result=std::vector of length 0, capacity 0, single=true) at pgsql_lease_mgr.cc:1354
#10 0x00007ffad9378971 in isc::dhcp::PgSqlLeaseMgr::getLease (this=0x562068fb2430, ctx=..., stindex=isc::dhcp::PgSqlLeaseMgr::GET_LEASE4_ADDR, bind_array=..., result=...) at pgsql_lease_mgr.cc:1378
#11 0x00007ffad9378de9 in isc::dhcp::PgSqlLeaseMgr::getLease4 (this=0x562068fb2430, addr=...) at pgsql_lease_mgr.cc:1429
#12 0x00005620667d255b in isc::dhcp::test::LeaseMgrDbLostCallbackTest::testDbLostCallback (this=0x562068fb3a40) at generic_lease_mgr_unittest.cc:3318
#13 0x000056206692e5df in (anonymous namespace)::PgSqlLeaseMgrDbLostCallbackTest_testDbLostCallback_Test::TestBody (this=0x562068fb3a40) at pgsql_lease_mgr_unittest.cc:942
#14 0x00007ffad61ab807 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) () from /usr/lib/libgtest.so.1.10.0
#15 0x00007ffad619f091 in testing::Test::Run() () from /usr/lib/libgtest.so.1.10.0
#16 0x00007ffad619f1ef in testing::TestInfo::Run() () from /usr/lib/libgtest.so.1.10.0
#17 0x00007ffad619f2d7 in testing::TestSuite::Run() () from /usr/lib/libgtest.so.1.10.0
#18 0x00007ffad619f854 in testing::internal::UnitTestImpl::RunAllTests() () from /usr/lib/libgtest.so.1.10.0
#19 0x00007ffad61abe37 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) () from /usr/lib/libgtest.so.1.10.0
#20 0x00007ffad619fa7a in testing::UnitTest::Run() () from /usr/lib/libgtest.so.1.10.0
#21 0x0000562065f96f78 in RUN_ALL_TESTS () at /usr/include/gtest/gtest.h:2473
#22 0x0000562065f96e8a in main (argc=1, argv=0x7ffcafeaff88) at run_unittests.cc:17
```
```
$ lsof -p 3207218 2>&1 | grep --color=auto lockfile
lt-libdhc 3207218 andrei 6u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
lt-libdhc 3207218 andrei 7u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
lt-libdhc 3207218 andrei 8u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
lt-libdhc 3207218 andrei 9u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
lt-libdhc 3207218 andrei 13u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
```
```
$ wc -l /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
0 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
```
outstanding
https://gitlab.isc.org/isc-projects/bind9/-/issues/2339
2020-12-05T19:23:20Z
Štefan Bosák
netmgr memory - assertion failed
### Summary
Bind9 netmgr memory - assertion failed
### BIND version used
BIND 9.17.7 (Development Release) <id:ed85d06>
running on Windows 10.0 build 19041.662 for x64
built by MSVC 1916 with 'with-tools-version=15.0 with-platform-toolset=v141 with-platform-version=10.0.17763.0 with-vcredist=C:/Program\ Files\ (x86)/Microsoft\ Visual\ Studio/2017/BuildTools/VC/Redist/MSVC/14.16.27012/vcredist_x64.exe with-openssl=C:/OpenSSL with-libxml2=C:/libxml2 with-libuv=C:/libuv without-python with-system-tests x64'
compiled by MSVC 1916
compiled with OpenSSL version: OpenSSL 1.1.1h 22 Sep 2020
linked to OpenSSL version: OpenSSL 1.1.1h 22 Sep 2020
compiled with libuv version: 1.40.0
linked to libuv version: 1.40.0
compiled with libxml2 version: 2.9.10
linked to libxml2 version: 20910
threads support is enabled
default paths:
named configuration: C:\var\bind\etc\named.conf
rndc configuration: C:\var\bind\etc\rndc.conf
DNSSEC root key: C:\var\bind\etc\bind.keys
nsupdate session key: C:\var\bind\etc\session.key
named PID file: C:\var\bind\etc\named.pid
named lock file: C:\var\bind\etc\named.lock
### Steps to reproduce
Running BIND 9 on Windows 10 Pro Version 20H2 (OS Build 19042.662)
on localhost as a local resolver in forwarder mode, to optimize traffic, latencies, etc.
### What is the current *bug* behavior?
The BIND 9 service crashed (it is not running).
### What is the expected *correct* behavior?
BIND 9.17.7 should run without any problems.
BIND 9.17.6 worked without problems using a similar configuration,
except for DoT (DNS over TLS), which has been supported since version 9.17.7.
### Relevant configuration files
Running BIND 9 with the following configuration (keys and similar privacy-sensitive material have been removed):
Note: I do not know why the code markdown is not rendered correctly.
``
include "c:\var\bind\etc\named.conf.acl";
include "c:\var\bind\etc\named.conf.controls";
include "c:\var\bind\etc\named.conf.options";
include "c:\var\bind\etc\named.conf.logging";
include "c:\var\bind\etc\named.conf.localhost";
include "c:\var\bind\etc\named.conf.chaos";
include "c:\var\bind\etc\named.conf.root";
tls "localhost-tls" {
cert-file "C:\var\bind\etc\server.crt";
key-file "C:\var\bind\etc\server.key";
};
options {
hostname "null";
version "not disclosed";
directory "C:\\var\\bind\\etc\\";
listen-on {
localhost_ipv4;
};
listen-on tls "localhost-tls" {
localhost_ipv4;
};
listen-on-v6 {
none;
};
listen-on-v6 tls "localhost-tls" {
none;
};
recursion no;
recursive-clients 64;
forwarders {
// Quad9 (with EDNS, support DOH)
9.9.9.11; //dns11.quad9.net
149.112.112.11; //dns11.quad9.net
//2620:fe::11; //dns11.quad9.net
//2620:fe::fe:11; //dns11.quad9.net
// OpenDNS (with EDNS, no support for DOH - need to use doh.opendns.com)
//208.67.222.222; //resolver1.opendns.com
//208.67.220.220; //resolver2.opendns.com
//2620:119:35::35; //resolver1.opendns.com
//2620:119:53::53; //resolver2.opendns.com
// Cloudflare (with EDNS, support for DOH)
//1.1.1.1; //one.one.one.one
//1.0.0.1; //one.one.one.one
//2606:4700:4700::1111; //one.one.one.one
//2606:4700:4700::1001; //one.one.one.one
// Google DNS (with EDNS, support for DOH)
//8.8.8.8; //dns.google
//8.8.4.4; //dns.google
//2001:4860:4860::8888; //dns.google
//2001:4860:4860::8844; //dns.google
};
forward only;
allow-notify { none; };
allow-recursion { none; };
allow-recursion-on { none; };
allow-query { none; };
allow-query-on { none; };
allow-query-cache { none; };
allow-query-cache-on { none; };
allow-transfer { none; };
allow-update { none; };
allow-update-forwarding { none; };
deny-answer-addresses {
0.0.0.0/8;
10.0.0.0/8;
127.0.0.0/8;
172.16.0.0/12;
192.168.0.0/16;
169.254.0.0/16;
192.0.0.0/24;
192.0.2.0/24;
192.0.0.0/29;
192.0.0.8/32;
192.0.0.170/32;
192.0.0.171/32;
192.52.193.0/24;
198.18.0.0/15;
198.51.100.0/24;
203.0.113.0/24;
224.0.0.0/4;
240.0.0.0/4;
255.255.255.255/32;
::/128;
::1/128;
::ffff:0:0/96;
100::/64;
64:ff9b::/96;
2001:2::/48;
2001:3::/32;
2001:db8::/32;
2001:10::/28;
2001:20::/28;
fc00::/7;
fe80::/10;
ff00::/8;
} except-from {"<obfuscated>";};
blackhole {
!127.0.0.1/32;
0.0.0.0/8;
10.0.0.0/8;
127.0.0.0/8;
172.16.0.0/12;
169.254.0.0/16;
192.168.0.0/16;
192.0.0.0/24;
192.0.2.0/24;
192.0.0.0/29;
192.0.0.8/32;
192.0.0.170/32;
192.0.0.171/32;
192.168.0.0/16;
192.52.193.0/24;
198.18.0.0/15;
198.51.100.0/24;
203.0.113.0/24;
224.0.0.0/4;
240.0.0.0/4;
255.255.255.255/32;
::/128;
::1/128;
::ffff:0:0/96;
100::/64;
64:ff9b::/96;
2001:2::/48;
2001:3::/32;
2001:db8::/32;
2001:10::/28;
2001:20::/28;
fc00::/7;
fe80::/10;
ff00::/8;
};
rate-limit {
responses-per-second 16;
log-only yes;
};
zone-statistics true;
minimal-any yes;
minimal-responses yes;
transfer-format many-answers;
provide-ixfr yes;
ixfr-from-differences yes;
qname-minimization relaxed;
dnssec-validation auto;
empty-zones-enable no;
max-cache-size 512m;
max-cache-ttl 60;
max-ncache-ttl 60;
tcp-listen-queue 0;
interface-interval 0;
heartbeat-interval 0;
};
controls {
inet 127.0.0.1 port 953 allow { localhost_ipv4; } keys { "rndc-key"; };
};
acl "recursion-chaos" {
localhost_ipv4;
};
acl "recursion-on-chaos" {
localhost_ipv4;
};
acl "transfer-chaos" {
none;
};
acl "update-chaos" {
none;
};
acl "query-chaos" {
localhost_ipv4;
};
acl "query-on-chaos" {
localhost_ipv4;
};
view "chaos" chaos {
match-clients { query-chaos; };
match-destinations {
localhost_ipv4;
};
recursion no;
match-recursive-only no;
allow-notify { none; };
allow-query { none; };
allow-query-on { none; };
allow-transfer { none; };
allow-update { none; };
allow-update-forwarding { none; };
allow-query-cache { query-chaos; };
allow-query-cache-on { query-on-chaos; };
zone "." {
type hint;
file "nul";
};
zone "bind" {
type master;
file "C:\\var\\bind\\etc\\empty\\bind.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
};
acl "recursion-root" {
none;
};
acl "recursion-on-root" {
none;
};
acl "transfer-root" {
none;
};
acl "update-root" {
none;
};
acl "query-root" {
none;
};
acl "query-on-root" {
none;
};
// Running Root on Loopback (RFC 7706)
view "root" {
match-clients { query-root; };
match-destinations {
localhost_ipv4;
};
recursion no;
match-recursive-only no;
allow-notify { none; };
allow-query { none; };
allow-query-on { none; };
allow-transfer { none; };
allow-update { none; };
allow-update-forwarding { none; };
allow-query-cache { query-root; };
allow-query-cache-on { query-on-root; };
// root zone
zone "." {
type slave;
file "C:\\var\\bind\\etc\\sec\\root.zone";
masters {
192.5.5.241; //f.root-servers.net.
192.33.4.12; //c.root-servers.net.
193.0.14.129; //k.root-servers.net.
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// Reserved exclusively to support operationally-critical infrastructural identifier spaces as advised by the Internet Architecture Board (RFC 3172)
zone "arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\arpa.zone";
masters {
192.5.5.241; //f.root-servers.net.
192.33.4.12; //c.root-servers.net.
193.0.14.129; //k.root-servers.net.
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// RFC 8375
// zone "home.arpa" {
// type slave;
// file "C:\\var\\bind\\etc\\sec\\home.arpa.zone";
// masters {
// 192.175.48.6; // blackhole-1.iana.org.
// 192.175.48.42; // blackhole-2.iana.org.
// };
// allow-query { query-root; };
// allow-query-on { query-on-root; };
// allow-transfer { transfer-root; };
// notify no;
// };
// For mapping E.164 numbers to Internet URIs (RFC 6116)
zone "e164.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\e164.arpa.zone";
masters {
193.0.9.5; //PRI.AUTHDNS.RIPE.NET
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// For hosting authoritative name servers for the in-addr.arpa domain (RFC 5855)
zone "in-addr-servers.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\in-addr-servers.arpa.zone";
masters {
193.0.9.1; //F.IN-ADDR-SERVERS.ARPA
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// For mapping IPv4 addresses to Internet domain names (RFC 1035)
zone "in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\in-addr.arpa.zone";
masters {
193.0.9.1; //F.IN-ADDR-SERVERS.ARPA
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// For hosting authoritative name servers for the ip6.arpa domain (RFC 5855)
zone "ip6-servers.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\ip6-servers.arpa.zone";
masters {
193.0.9.2; //F.IP6-SERVERS.ARPA
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// For mapping IPv6 addresses to Internet domain names (RFC 3152)
zone "ip6.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\ip6.arpa.zone";
masters {
193.0.9.2; //F.IP6-SERVERS.ARPA
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "ipv4only.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\ipv4only.arpa.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "root-servers.net." {
type slave;
file "C:\\var\\bind\\etc\\sec\\root-servers.net.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// Multicast (RFC 3171)
zone "mcast.net" {
type slave;
file "C:\\var\\bind\\etc\\sec\\mcast.net.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "224.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\224.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "225.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\225.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "226.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\226.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "227.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\227.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "228.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\228.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "229.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\229.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "230.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\230.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "231.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\231.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "232.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\232.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "233.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\233.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "234.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\234.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "235.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\235.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "236.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\236.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "237.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\237.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "238.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\238.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "239.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\239.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
};
acl localhost_ipv4 { 127.0.0.1; };
acl "recursion-localhost" {
localhost_ipv4;
};
acl "recursion-on-localhost" {
localhost_ipv4;
};
acl "transfer-localhost" {
none;
};
acl "update-localhost" {
none;
};
acl "query-localhost" {
localhost_ipv4;
};
acl "query-on-localhost" {
localhost_ipv4;
};
view "localhost" {
match-clients { query-localhost; };
match-destinations {
localhost_ipv4;
};
recursion yes;
match-recursive-only yes;
allow-notify { none; };
allow-query { none; };
allow-query-on { none; };
allow-transfer { none; };
allow-update { none; };
allow-update-forwarding { none; };
allow-query-cache { query-localhost; };
allow-query-cache-on { query-on-localhost; };
allow-recursion { recursion-localhost; };
allow-recursion-on { recursion-on-localhost; };
empty-zones-enable no;
// This host on this network (RFC 1122)
disable-empty-zone "0.in-addr.arpa";
// IPv4 Loopback Network (RFC 1122)
// SPECIAL-IPV4-LOOPBACK-IANA-RESERVED
disable-empty-zone "127.in-addr.arpa";
// Private Use Networks (RFC 1918)
// PRIVATE-ADDRESS-ABLK-RFC1918-IANA-RESERVE
disable-empty-zone "10.in-addr.arpa";
// PRIVATE-ADDRESS-BBLK-RFC1918-IANA-RESERVED
disable-empty-zone "16.172.in-addr.arpa";
disable-empty-zone "17.172.in-addr.arpa";
disable-empty-zone "18.172.in-addr.arpa";
disable-empty-zone "19.172.in-addr.arpa";
disable-empty-zone "20.172.in-addr.arpa";
disable-empty-zone "21.172.in-addr.arpa";
disable-empty-zone "22.172.in-addr.arpa";
disable-empty-zone "23.172.in-addr.arpa";
disable-empty-zone "24.172.in-addr.arpa";
disable-empty-zone "25.172.in-addr.arpa";
disable-empty-zone "26.172.in-addr.arpa";
disable-empty-zone "27.172.in-addr.arpa";
disable-empty-zone "28.172.in-addr.arpa";
disable-empty-zone "29.172.in-addr.arpa";
disable-empty-zone "30.172.in-addr.arpa";
disable-empty-zone "31.172.in-addr.arpa";
// PRIVATE-ADDRESS-CBLK-RFC1918-IANA-RESERVED
disable-empty-zone "168.192.in-addr.arpa";
// Link local (RFC 3927)
// LINKLOCAL-RFC3927-IANA-RESERVED
disable-empty-zone "254.169.in-addr.arpa";
// IETF Protocol Assignments (RFC 5736)
// SPECIAL-IPV4-REGISTRY-IANA-RESERVED
disable-empty-zone "0.0.192.in-addr.arpa";
// TEST-NET-[1-3] for Documentation (RFC 5737)
// TEST-NET-1
disable-empty-zone "2.0.192.in-addr.arpa";
// TEST-NET-2
disable-empty-zone "100.51.198.in-addr.arpa";
// TEST-NET-3
disable-empty-zone "113.0.203.in-addr.arpa";
// RESERVED-19252192C
disable-empty-zone "193.52.192.in-addr.arpa";
// 6to4 Relay Anycast (RFC 3068)
// 6TO4-RELAY-ANYCAST-IANA-RESERVED
disable-empty-zone "192.88.99.in-addr.arpa";
// Network Interconnect Device Benchmark Testing (RFC 2544)
// SPECIAL-IPV4-BENCHMARK-TESTING-IANA-RESERVED
disable-empty-zone "18.198.in-addr.arpa";
disable-empty-zone "19.198.in-addr.arpa";
// Multicast (RFC 3171)
disable-empty-zone "224.in-addr.arpa";
disable-empty-zone "225.in-addr.arpa";
disable-empty-zone "226.in-addr.arpa";
disable-empty-zone "227.in-addr.arpa";
disable-empty-zone "228.in-addr.arpa";
disable-empty-zone "229.in-addr.arpa";
disable-empty-zone "230.in-addr.arpa";
disable-empty-zone "231.in-addr.arpa";
disable-empty-zone "232.in-addr.arpa";
disable-empty-zone "233.in-addr.arpa";
disable-empty-zone "234.in-addr.arpa";
disable-empty-zone "235.in-addr.arpa";
disable-empty-zone "236.in-addr.arpa";
disable-empty-zone "237.in-addr.arpa";
disable-empty-zone "238.in-addr.arpa";
disable-empty-zone "239.in-addr.arpa";
// Reserved for Future Use (RFC 1112)
disable-empty-zone "240.in-addr.arpa";
disable-empty-zone "241.in-addr.arpa";
disable-empty-zone "242.in-addr.arpa";
disable-empty-zone "243.in-addr.arpa";
disable-empty-zone "244.in-addr.arpa";
disable-empty-zone "245.in-addr.arpa";
disable-empty-zone "246.in-addr.arpa";
disable-empty-zone "247.in-addr.arpa";
disable-empty-zone "248.in-addr.arpa";
disable-empty-zone "249.in-addr.arpa";
disable-empty-zone "250.in-addr.arpa";
disable-empty-zone "251.in-addr.arpa";
disable-empty-zone "252.in-addr.arpa";
disable-empty-zone "253.in-addr.arpa";
disable-empty-zone "254.in-addr.arpa";
// Limited Broadcast (RFC0919 and RFC0922)
disable-empty-zone "255.in-addr.arpa";
// (RFC 4291)
// Unspecified address
disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa";
// Unspecified address
disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa";
// IPv4-mapped addresses
disable-empty-zone "f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa";
disable-empty-zone "0.0.ip6.arpa";
// (RFC 4048)
disable-empty-zone "2.0.ip6.arpa";
// (RFC 4291)
disable-empty-zone "1.ip6.arpa";
disable-empty-zone "4.ip6.arpa";
disable-empty-zone "6.ip6.arpa";
disable-empty-zone "8.ip6.arpa";
disable-empty-zone "a.ip6.arpa";
disable-empty-zone "c.ip6.arpa";
disable-empty-zone "e.ip6.arpa";
disable-empty-zone "f.ip6.arpa";
disable-empty-zone "1.0.ip6.arpa";
disable-empty-zone "4.0.ip6.arpa";
disable-empty-zone "8.0.ip6.arpa";
disable-empty-zone "8.f.ip6.arpa";
disable-empty-zone "e.f.ip6.arpa";
// Multicast
disable-empty-zone "f.f.ip6.arpa";
disable-empty-zone "8.e.f.ip6.arpa";
disable-empty-zone "9.e.f.ip6.arpa";
disable-empty-zone "a.e.f.ip6.arpa";
disable-empty-zone "b.e.f.ip6.arpa";
disable-empty-zone "d.e.f.ip6.arpa";
disable-empty-zone "e.e.f.ip6.arpa";
disable-empty-zone "f.e.f.ip6.arpa";
// Unique-Local (RFC 4193)
disable-empty-zone "c.f.ip6.arpa";
// (RFC 3879)
disable-empty-zone "c.e.f.ip6.arpa";
disable-empty-zone "0.0.c.f.ip6.arpa";
disable-empty-zone "0.0.d.f.ip6.arpa";
// Overlay Routable Cryptographic Hash IDentifiers (RFC 4843)
disable-empty-zone "1.0.0.1.0.0.2.ip6.arpa";
// Teredo (RFC 4380)
disable-empty-zone "0.0.0.0.1.0.0.2.ip6.arpa";
// Documentation Prefix (RFC 3849)
disable-empty-zone "8.b.d.0.1.0.0.2.ip6.arpa";
// (RFC 5180)
disable-empty-zone "0.0.0.0.2.0.0.0.1.0.0.2.ip6.arpa";
// (RFC 6052)
disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.9.f.f.4.6.0.0.ip6.arpa";
// 6to4 (RFC 3056)
disable-empty-zone "2.0.0.2.ip6.arpa";
// 6bone (RFC 3701)
// (RFC 1897)
disable-empty-zone "f.5.ip6.arpa";
// (RFC2471)
disable-empty-zone "e.f.f.3.ip6.arpa";
response-policy {
zone "rpz.local";
} qname-wait-recurse no;
// just note - regarding zone size ~108k "records"
zone "rpz.local" {
type master;
file "C:\\var\\bind\\etc\\empty\\rpz.local.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "localhost" {
type master;
file "C:\\var\\bind\\etc\\empty\\localhost.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "127.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\127.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "10.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\10.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "224.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "225.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "226.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "227.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "228.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "229.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "230.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "231.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "232.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "233.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "234.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "235.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "236.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "237.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "238.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "239.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "240.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\240.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "241.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\241.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "242.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\242.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "243.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\243.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "244.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\244.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "245.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\245.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "246.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\246.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "247.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\247.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "248.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\248.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "249.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\249.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "250.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\250.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "251.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\251.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "252.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\252.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "253.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\253.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "254.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\254.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "255.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\255.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "16.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\16.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "17.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\17.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "18.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\18.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "19.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\19.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "20.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\20.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "21.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\21.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "22.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\22.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "23.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\23.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "24.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\24.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "25.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\25.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "26.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\26.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "27.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\27.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "28.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\28.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "29.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\29.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "30.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\30.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "31.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\31.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "168.192.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\168.192.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "254.169.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\254.169.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "18.198.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\18.198.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "19.198.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\19.198.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.192.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.192.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "2.0.192.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\2.0.192.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "193.52.192.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\193.52.192.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "100.51.198.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\100.51.198.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "113.0.203.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\113.0.203.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "1.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\1.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "4.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\4.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "6.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\6.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "a.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\a.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "c.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\c.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "e.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\e.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "1.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\1.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "2.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\2.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "4.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\4.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "c.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\c.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "9.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\9.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "a.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\a.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "b.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\b.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "c.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\c.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "d.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\d.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "e.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\e.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.b.d.0.1.0.0.2.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.b.d.0.1.0.0.2.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "1.0.0.1.0.0.2.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\1.0.0.1.0.0.2.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.0.0.2.0.0.0.1.0.0.2.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.0.0.2.0.0.0.1.0.0.2.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.9.f.f.4.6.0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.9.f.f.4.6.0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "2.0.0.2.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\2.0.0.2.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.5.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.5.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "e.f.f.3.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\e.f.f.3.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
// root zone
zone "." {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// Reserved exclusively to support operationally-critical infrastructural identifier spaces as advised by the Internet Architecture Board (RFC 3172)
zone "arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// RFC 8375
// zone "home.arpa" {
// type static-stub;
// server-addresses { 127.0.0.1; };
// allow-query { query-localhost; };
// allow-query-on { query-on-localhost; };
// };
// For mapping E.164 numbers to Internet URIs (RFC 6116)
zone "e164.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// For hosting authoritative name servers for the in-addr.arpa domain (RFC 5855)
zone "in-addr-servers.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// For mapping IPv4 addresses to Internet domain names (RFC 1035)
zone "in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// For hosting authoritative name servers for the ip6.arpa domain (RFC 5855)
zone "ip6-servers.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// For mapping IPv6 addresses to Internet domain names (RFC 3152)
zone "ip6.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "ipv4only.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "root-servers.net." {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// Multicast (RFC 3171)
zone "mcast.net" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
};
logging {
channel rpz_file { file "c:\var\bind\log\rpz.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel edns-disabled_file { file "c:\var\bind\log\edns-disabled.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel default_file { file "c:\var\bind\log\default.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel general_file { file "c:\var\bind\log\general.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel database_file { file "c:\var\bind\log\database.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel spill_file { file "c:\var\bind\log\spill.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel rate-limit_file { file "c:\var\bind\log\rate-limit.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel security_file { file "c:\var\bind\log\security.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel config_file { file "c:\var\bind\log\config.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel resolver_file { file "c:\var\bind\log\resolver.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel xfer-in_file { file "c:\var\bind\log\xfer-in.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel xfer-out_file { file "c:\var\bind\log\xfer-out.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel notify_file { file "c:\var\bind\log\notify.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel client_file { file "c:\var\bind\log\client.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel unmatched_file { file "c:\var\bind\log\unmatched.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel queries_file { file "c:\var\bind\log\queries.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel query-errors_file { file "c:\var\bind\log\query-errors.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel network_file { file "c:\var\bind\log\network.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel update_file { file "c:\var\bind\log\update.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel update-security_file { file "c:\var\bind\log\update-security.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel dispatch_file { file "c:\var\bind\log\dispatch.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel dnssec_file { file "c:\var\bind\log\dnssec.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel lame-servers_file { file "c:\var\bind\log\lame-servers.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel delegation-only_file { file "c:\var\bind\log\delegation-only.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
category rpz { rpz_file; };
category edns-disabled { edns-disabled_file; };
category default { default_file; };
category general { general_file; };
category database { database_file; };
category spill { spill_file; };
category rate-limit { rate-limit_file; };
category security { security_file; };
category config { config_file; };
category resolver { resolver_file; };
category xfer-in { xfer-in_file; };
category xfer-out { xfer-out_file; };
category notify { notify_file; };
category client { client_file; };
category unmatched { unmatched_file; };
category queries { queries_file; };
category query-errors { query-errors_file; };
category network { network_file; };
category update { update_file; };
category update-security { update-security_file; };
category dispatch { dispatch_file; };
category dnssec { dnssec_file; };
category lame-servers { lame-servers_file; };
category delegation-only { delegation-only_file; };
};
options {
default-key "rndc-key";
default-server 127.0.0.1;
default-port 953;
};
```
There are reasons to use the above configuration, such as optimizing latencies to selected entities for zone transfers and so on (instead of the built-in mirror-zone capability for DNS core infrastructure). If any of you finds possible improvements/hints/comments/etc., then, as the person involved in this bug, I would appreciate any feedback (a potential additional side-value of this bug report).
### Relevant logs and/or screenshots
Example of two cases:
```
03-dec-2020 3:03:26.817 general: critical: c:\builds\isc-private\bind9\lib\isc\netmgr\netmgr.c:1332: REQUIRE(((((*handlep) != ((void *)0)) && (((const isc__magic_t *)(*handlep))->magic == ((('N') << 24 | ('M') << 16 | ('H') << 8 | ('D'))))) && ((sizeof(*(&(*handlep)->references)) == 8 ? (memory_order_seq_cst == memory_order_relaxed ? _InterlockedOr64((atomic_int_fast64_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_acquire ? _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_release ? _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0) : _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0)))) : (sizeof(*(&(*handlep)->references) == 4) ? (memory_order_seq_cst == memory_order_relaxed ? (int32_t)_InterlockedOr((atomic_int_fast32_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_acquire ? (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_release ? (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0) : (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0)))) : (sizeof(*(&(*handlep)->references) == 2) ? (short)_InterlockedOr16((atomic_short *)&(*handlep)->references, 0) : (sizeof(*(&(*handlep)->references) == 1) ? (int8_t) _InterlockedOr8((atomic_int_fast8_t *)&(*handlep)->references, 0) : atomic_load_abort())))) & (sizeof(*(&(*handlep)->references)) == 8 ? 0xffffffffffffffffULL : (sizeof(*(&(*handlep)->references)) == 4 ? 0xffffffffULL : (sizeof(*(&(*handlep)->references)) == 2 ? 0xffffULL : (sizeof(*(&(*handlep)->references)) == 1 ? 0xffULL : atomic_load_abort()))))) > 0)) failed
03-dec-2020 3:03:26.817 general: critical: exiting (due to assertion failure)
05-dec-2020 0:08:04.470 general: critical: c:\builds\isc-private\bind9\lib\isc\netmgr\netmgr.c:1332: REQUIRE(((((*handlep) != ((void *)0)) && (((const isc__magic_t *)(*handlep))->magic == ((('N') << 24 | ('M') << 16 | ('H') << 8 | ('D'))))) && ((sizeof(*(&(*handlep)->references)) == 8 ? (memory_order_seq_cst == memory_order_relaxed ? _InterlockedOr64((atomic_int_fast64_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_acquire ? _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_release ? _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0) : _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0)))) : (sizeof(*(&(*handlep)->references) == 4) ? (memory_order_seq_cst == memory_order_relaxed ? (int32_t)_InterlockedOr((atomic_int_fast32_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_acquire ? (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_release ? (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0) : (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0)))) : (sizeof(*(&(*handlep)->references) == 2) ? (short)_InterlockedOr16((atomic_short *)&(*handlep)->references, 0) : (sizeof(*(&(*handlep)->references) == 1) ? (int8_t) _InterlockedOr8((atomic_int_fast8_t *)&(*handlep)->references, 0) : atomic_load_abort())))) & (sizeof(*(&(*handlep)->references)) == 8 ? 0xffffffffffffffffULL : (sizeof(*(&(*handlep)->references)) == 4 ? 0xffffffffULL : (sizeof(*(&(*handlep)->references)) == 2 ? 0xffffULL : (sizeof(*(&(*handlep)->references)) == 1 ? 0xffULL : atomic_load_abort()))))) > 0)) failed
05-dec-2020 0:08:04.470 general: critical: exiting (due to assertion failure)
```
### Possible fixes
Investigate assertion failure.
Thank you in advance for your time and cooperation.
https://gitlab.isc.org/isc-projects/kea/-/issues/1592
Changes for Kea 1.8.2 release
2021-01-28T13:32:50Z
Michal Nowikowski
Changes for Kea 1.8.2 release
kea1.8.2
Michal Nowikowski
Michal Nowikowski
https://gitlab.isc.org/isc-projects/bind9/-/issues/2338
Code coverage statistics graph not updated anymore
2021-09-01T11:34:54Z
Michal Nowak
Code coverage statistics graph not updated anymore
From the "Code coverage statistics" [graph](https://gitlab.isc.org/isc-projects/bind9/-/graphs/main/charts) it is apparent that code coverage stopped being reported around October 24.
Around that time 2dabf328c406036e012a9b0b30ed952785565d51 was merged. Also, suspiciously, the graph's full label mentions the `master` (sic) branch ("Code coverage statistics for *master* Sep 05 - Dec 04"), which was removed in June 2020.
Weirdly enough, the [gcov](https://gitlab.isc.org/isc-projects/bind9/-/jobs/1345869) CI job on `main` passes and correctly reports: "Coverage: 77%", though the graph is not updated.
October 2021 (9.11.36, 9.11.36-S1, 9.16.22, 9.16.22-S1, 9.17.19)
Michal Nowak
Michal Nowak
https://gitlab.isc.org/isc-projects/bind9/-/issues/2337
Unusual behaviour of first query in a pipeline of queries.
2021-06-22T13:35:59Z
Peter Davies
Unusual behaviour of first query in a pipeline of queries.
### Summary
Bind does not treat the first query in a pipelined list of queries in the same way as the rest of the queries in the list.
### BIND version used
Bind 9.16.9, 9.17.7
### Steps to reproduce
Create a file with a limited number of well-formed resolvable queries, preferably sorted in alphanumeric order. Use it as input to mdig targeting a BIND server with pipelining enabled.
```mdig @10.0.0.237 +vc -f ttt.1```
Working with a cold cache, replies normally do not get returned in the same order as the list - this is the expected behaviour.
Add a query that is known to cause the server to time out in the middle of the list of queries.
```mdig @10.0.0.237 +vc -f ttt.2```
The behaviour is as above. The ServFail reply generated by the "time out" on the server is the last reply.
Move the known query to the head of the list of queries.
```mdig @10.0.0.237 +vc -f ttt.3```
A pause in the output indicates that the server is waiting for a resolution of the first query.
Inspecting the client messages generated by BIND also bears this out.
[RT #17356](https://support.isc.org/Ticket/Display.html?id=17356)
Ondřej Surý
Ondřej Surý