ISC Open Source Projects issues
https://gitlab.isc.org/groups/isc-projects/-/issues
2022-02-02T09:51:28Z
https://gitlab.isc.org/isc-projects/stork/-/issues/466
sanity checks 0.14.0
2022-02-02T09:51:28Z
Wlodzimierz Wencel
Please do your sanity checks according to the steps below:
1. Download the tarball, verify it is sane, build it and run tests.
Tarball: https://gitlab.isc.org/isc-projects/stork/-/jobs/1350348/artifacts/browse
1. Start the demo with `rake docker_up` and follow the steps from: https://gitlab.isc.org/isc-projects/stork/-/wikis/Demo
1. Install the server and agent locally, e.g. on VMs, from the binary packages:
debs: https://gitlab.isc.org/isc-projects/stork/-/jobs/1350349/artifacts/browse
rpms: https://gitlab.isc.org/isc-projects/stork/-/jobs/1350350/artifacts/browse
If you want, you can execute the Selenium-based GUI system tests with:
```
rake system_tests_ui BROWSER=Firefox
rake system_tests_ui BROWSER=Chrome
```
but I haven't run those for a while, so I'm not sure they will pass!
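For the tarball step, a minimal integrity check before building might look like this. This is a sketch demonstrated on a throwaway archive; the real artifact name from the CI job will differ:

```shell
# Create a throwaway tarball standing in for the downloaded release artifact.
mkdir -p demo/stork-src
echo 'placeholder' > demo/stork-src/README
tar -czf stork-demo.tar.gz -C demo stork-src

# Sanity check: list the archive contents without extracting; a corrupt
# download fails here before any build time is wasted.
tar -tzf stork-demo.tar.gz > /dev/null && echo "archive OK"
```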
0.14
https://gitlab.isc.org/isc-projects/stork/-/issues/465
Logrus must be upgraded after go upgrade to 1.15
2020-12-07T16:20:27Z
Marcin Siodelski
After upgrading Go to 1.15, there is a regression in the logrus logging library. We need to upgrade logrus to work around it. See for reference: https://github.com/sirupsen/logrus/issues/1096
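A sketch of how the dependency bump can be checked: inspect the logrus version pinned in `go.mod`, then bump it with `go get`. The module path and the `v1.6.0` pin below are illustrative, not Stork's actual `go.mod`; the linked logrus issue thread indicates which release contains the fix.

```shell
# Illustrative go.mod (not Stork's actual file) pinning an older logrus.
cat > go.mod <<'EOF'
module example.com/stork-demo

go 1.15

require github.com/sirupsen/logrus v1.6.0
EOF

# Check which logrus version is currently pinned.
grep 'sirupsen/logrus' go.mod

# The upgrade itself would be (run inside the real module; needs network):
#   go get github.com/sirupsen/logrus@latest && go mod tidy
```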
0.14
Marcin Siodelski
https://gitlab.isc.org/isc-projects/stork/-/issues/464
0.14 release
2020-12-07T15:46:32Z
Wlodzimierz Wencel
Wlodzimierz Wencel
https://gitlab.isc.org/isc-projects/bind9/-/issues/2340
Enable logging of rpz re-writes to dnstap.
2024-03-27T13:54:38Z
Peter Davies
### Description
Enable logging of rpz re-writes to dnstap.
Add the ability to send the RPZ rewrite information generated by the `rpz` logging category to the dnstap output stream.
[RT #17273](https://support.isc.org/Ticket/Display.html?id=17273)
Not planned
Evan Hunt
https://gitlab.isc.org/isc-projects/stork/-/issues/463
Events panel is not refreshed when switching between machine tabs
2021-01-28T13:32:14Z
Marcin Siodelski
While doing #429 I noticed that, unlike in the case of the app panels, when you open several machine panels and switch between them, the events panel is not updated for the currently selected machine. To view events from the current machine, one has to switch to the first (all machines) tab and then go back to the desired one. Alternatively, refreshing the page also works.
To reproduce:
- Start the Stork demo
- Add two new machines, e.g. agent-kea-ha1 and agent-kea-ha2
- Click between the agent-kea-ha1 and agent-kea-ha2 tabs. The events panel is not refreshed and still shows events for the other machine.
- Click on the Machines tab and go back. Now the events are displayed properly.
0.15
Marcin Siodelski
https://gitlab.isc.org/isc-projects/kea/-/issues/1594
Sanity checks for Kea 1.8.2 rc1
2021-01-26T17:26:09Z
jenkins
```
We are now at step SANITY CHECKS of Kea 1.8.2 rc1.
Please verify the packages and files according to
https://wiki.isc.org/bin/view/QA/KeaReleaseProcess, "4. Sanity Checks" chapter,
and your imagination.
Before starting any checks, please state in the Sanity Checks issue in GitLab
what check you are doing, in a thread/discussion (not as a comment).
When you finish a given check, state the result in the same thread/discussion.
This way we know what is covered upfront and we can avoid repeating ourselves.
Release content is located on:
1) [tarballs] repo.isc.org in the following folders:
/data/shared/sweng/kea/releases/1.8.2-rc1
/data/shared/sweng/kea/releases/premium-1.8.2-rc1
/data/shared/sweng/kea/releases/subscription-1.8.2-rc1
SHA256 (1.8.2-rc1/kea-1.8.2.tar.gz) = 486ca7abedb9d6fdf8e4344ad8688d1171f2ef0f5506d118988aadeae80a1d39
SHA256 (subscription-1.8.2-rc1/kea-subscription-1.8.2.tar.gz) = da1a0c62a094c5088d7a71c664f932fef2b26dbc4a5d83fff40421f81430259c
SHA256 (premium-1.8.2-rc1/kea-premium-1.8.2.tar.gz) = 4b37fb898928f1fe31390846c12a68906ec9183631e9536fd2e5ded9c5f4c0d0
2) [rpm/deb packages] on packages.isc.org, exact package versions are stored here:
https://jenkins.isc.org/job/kea-1.8/job/pkg/19/
Release version is 1.8.2-isc0001520201206093433 (please verify that this is the installed version).
Install instructions are here: https://wiki.isc.org/bin/view/QA/KeaReleaseProcess, chapter 4. Sanity Checks, point 9.
```
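The tarball hashes above can be verified mechanically with `sha256sum -c`. Sketched here on a locally created file, since the real tarballs live on repo.isc.org; note that the published `SHA256 (file) = hash` lines are BSD-style and need converting to sha256sum's `hash  file` format first:

```shell
# Stand-in for a downloaded release tarball.
echo 'release payload' > kea-demo.tar.gz

# In practice, convert the published "SHA256 (...) = <hash>" lines into a
# checksum file in sha256sum's "<hash>  <file>" format, then check it:
sha256sum kea-demo.tar.gz > SHA256SUMS
sha256sum -c SHA256SUMS
# → kea-demo.tar.gz: OK
```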
kea1.8.2
https://gitlab.isc.org/isc-projects/kea/-/issues/1593
possible deadlock with InterprocessSyncFile
2021-01-12T08:20:00Z
Andrei Pavel
andrei@isc.org
Unit tests for all modules, run with `GTEST_SHUFFLE=1`, stopped at `PgSqlLeaseMgrDbLostCallbackTest.testDbLostCallback`. This looks like a deadlock involving `InterprocessSyncFile`.
```
(gdb) thread apply all bt
Thread 5 (Thread 0x7ffad3d01640 (LWP 3207223) "lt-libdhcpsrv_u"):
#0 0x00007ffad681d6a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007ffad6041c11 in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /build/gcc/src/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2 std::condition_variable::wait (this=<optimized out>, __lock=...) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/condition_variable.cc:53
#3 0x00007ffad67cf8e2 in ?? () from /usr/lib/liblog4cplus-2.0.so.3
#4 0x00007ffad6047c24 in std::execute_native_thread_routine (__p=0x562068e7cd00) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffad68173e9 in start_thread () from /usr/lib/libpthread.so.0
#6 0x00007ffad5d4f293 in clone () from /usr/lib/libc.so.6
Thread 4 (Thread 0x7ffad4502640 (LWP 3207222) "lt-libdhcpsrv_u"):
#0 0x00007ffad681d6a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007ffad6041c11 in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /build/gcc/src/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2 std::condition_variable::wait (this=<optimized out>, __lock=...) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/condition_variable.cc:53
#3 0x00007ffad67cf8e2 in ?? () from /usr/lib/liblog4cplus-2.0.so.3
#4 0x00007ffad6047c24 in std::execute_native_thread_routine (__p=0x562068e7cde0) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffad68173e9 in start_thread () from /usr/lib/libpthread.so.0
#6 0x00007ffad5d4f293 in clone () from /usr/lib/libc.so.6
Thread 3 (Thread 0x7ffad4d03640 (LWP 3207221) "lt-libdhcpsrv_u"):
#0 0x00007ffad681d6a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007ffad6041c11 in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /build/gcc/src/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2 std::condition_variable::wait (this=<optimized out>, __lock=...) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/condition_variable.cc:53
#3 0x00007ffad67cf8e2 in ?? () from /usr/lib/liblog4cplus-2.0.so.3
#4 0x00007ffad6047c24 in std::execute_native_thread_routine (__p=0x562068e7c680) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffad68173e9 in start_thread () from /usr/lib/libpthread.so.0
#6 0x00007ffad5d4f293 in clone () from /usr/lib/libc.so.6
Thread 2 (Thread 0x7ffad5504640 (LWP 3207220) "lt-libdhcpsrv_u"):
#0 0x00007ffad681d6a2 in pthread_cond_wait@@GLIBC_2.3.2 () from /usr/lib/libpthread.so.0
#1 0x00007ffad6041c11 in __gthread_cond_wait (__mutex=<optimized out>, __cond=<optimized out>) at /build/gcc/src/gcc-build/x86_64-pc-linux-gnu/libstdc++-v3/include/x86_64-pc-linux-gnu/bits/gthr-default.h:865
#2 std::condition_variable::wait (this=<optimized out>, __lock=...) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/condition_variable.cc:53
#3 0x00007ffad67cf8e2 in ?? () from /usr/lib/liblog4cplus-2.0.so.3
#4 0x00007ffad6047c24 in std::execute_native_thread_routine (__p=0x562068e7cce0) at /build/gcc/src/gcc/libstdc++-v3/src/c++11/thread.cc:80
#5 0x00007ffad68173e9 in start_thread () from /usr/lib/libpthread.so.0
#6 0x00007ffad5d4f293 in clone () from /usr/lib/libc.so.6
Thread 1 (Thread 0x7ffad5510040 (LWP 3207218) "lt-libdhcpsrv_u"):
#0 0x00007ffad5d40427 in fcntl64 () from /usr/lib/libc.so.6
#1 0x00007ffad6820eff in fcntl_compat () from /usr/lib/libpthread.so.0
#2 0x00007ffad72321c3 in isc::log::interprocess::InterprocessSyncFile::do_lock (this=0x562068fee310, cmd=7, l_type=1) at interprocess_sync_file.cc:85
#3 0x00007ffad7232405 in isc::log::interprocess::InterprocessSyncFile::lock (this=0x562068fee310) at interprocess_sync_file.cc:94
#4 0x00007ffad720f9de in isc::log::interprocess::InterprocessSyncLocker::lock (this=0x7ffcafeaf338) at ../../../src/lib/log/interprocess/interprocess_sync.h:109
#5 0x00007ffad720ea5a in isc::log::LoggerImpl::outputRaw (this=0x562068fa5130, severity=@0x7ffcafeaf4c8: isc::log::ERROR, message="DATABASE_PGSQL_FATAL_ERROR Unrecoverable PostgreSQL error occurred: Statement: <get_lease4_addr>, reason: could not receive data from server: Bad file descriptor\n (error code: <sqlstate null>).") at logger_impl.cc:162
#6 0x00007ffad720c19c in isc::log::Logger::output (this=0x7ffad7528e60 <isc::db::database_logger>, severity=@0x7ffcafeaf4c8: isc::log::ERROR, message="DATABASE_PGSQL_FATAL_ERROR Unrecoverable PostgreSQL error occurred: Statement: <get_lease4_addr>, reason: could not receive data from server: Bad file descriptor\n (error code: <sqlstate null>).") at logger.cc:147
#7 0x000056206621ee95 in isc::log::Formatter<isc::log::Logger>::~Formatter (this=0x7ffcafeaf4c0, __in_chrg=<optimized out>) at ../../../../src/lib/log/log_formatter.h:162
#8 0x00007ffad8057414 in isc::db::PgSqlConnection::checkStatementError (this=0x562068fa4e30, r=..., statement=...) at pgsql_connection.cc:333
#9 0x00007ffad9388f4c in isc::dhcp::PgSqlLeaseMgr::getLeaseCollection<boost::scoped_ptr<isc::dhcp::PgSqlLease4Exchange>, std::vector<boost::shared_ptr<isc::dhcp::Lease4>, std::allocator<boost::shared_ptr<isc::dhcp::Lease4> > > > (this=0x562068fb2430, ctx=..., stindex=isc::dhcp::PgSqlLeaseMgr::GET_LEASE4_ADDR, bind_array=..., exchange=..., result=std::vector of length 0, capacity 0, single=true) at pgsql_lease_mgr.cc:1354
#10 0x00007ffad9378971 in isc::dhcp::PgSqlLeaseMgr::getLease (this=0x562068fb2430, ctx=..., stindex=isc::dhcp::PgSqlLeaseMgr::GET_LEASE4_ADDR, bind_array=..., result=...) at pgsql_lease_mgr.cc:1378
#11 0x00007ffad9378de9 in isc::dhcp::PgSqlLeaseMgr::getLease4 (this=0x562068fb2430, addr=...) at pgsql_lease_mgr.cc:1429
#12 0x00005620667d255b in isc::dhcp::test::LeaseMgrDbLostCallbackTest::testDbLostCallback (this=0x562068fb3a40) at generic_lease_mgr_unittest.cc:3318
#13 0x000056206692e5df in (anonymous namespace)::PgSqlLeaseMgrDbLostCallbackTest_testDbLostCallback_Test::TestBody (this=0x562068fb3a40) at pgsql_lease_mgr_unittest.cc:942
#14 0x00007ffad61ab807 in void testing::internal::HandleExceptionsInMethodIfSupported<testing::Test, void>(testing::Test*, void (testing::Test::*)(), char const*) () from /usr/lib/libgtest.so.1.10.0
#15 0x00007ffad619f091 in testing::Test::Run() () from /usr/lib/libgtest.so.1.10.0
#16 0x00007ffad619f1ef in testing::TestInfo::Run() () from /usr/lib/libgtest.so.1.10.0
#17 0x00007ffad619f2d7 in testing::TestSuite::Run() () from /usr/lib/libgtest.so.1.10.0
#18 0x00007ffad619f854 in testing::internal::UnitTestImpl::RunAllTests() () from /usr/lib/libgtest.so.1.10.0
#19 0x00007ffad61abe37 in bool testing::internal::HandleExceptionsInMethodIfSupported<testing::internal::UnitTestImpl, bool>(testing::internal::UnitTestImpl*, bool (testing::internal::UnitTestImpl::*)(), char const*) () from /usr/lib/libgtest.so.1.10.0
#20 0x00007ffad619fa7a in testing::UnitTest::Run() () from /usr/lib/libgtest.so.1.10.0
#21 0x0000562065f96f78 in RUN_ALL_TESTS () at /usr/include/gtest/gtest.h:2473
#22 0x0000562065f96e8a in main (argc=1, argv=0x7ffcafeaff88) at run_unittests.cc:17
```
```
$ lsof -p 3207218 2>&1 | grep --color=auto lockfile
lt-libdhc 3207218 andrei 6u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
lt-libdhc 3207218 andrei 7u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
lt-libdhc 3207218 andrei 8u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
lt-libdhc 3207218 andrei 9u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
lt-libdhc 3207218 andrei 13u REG 259,8 0 29758074 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
```
```
$ wc -l /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
0 /home/andrei/work/isc/kea-1574-make-shell-tests-and-shell-scripts-more-robust/logger_lockfile
```
outstanding
https://gitlab.isc.org/isc-projects/bind9/-/issues/2339
netmgr memory - assertion failed
2020-12-05T19:23:20Z
Štefan Bosák
### Summary
Bind9 netmgr memory - assertion failed
### BIND version used
BIND 9.17.7 (Development Release) <id:ed85d06>
running on Windows 10 0 build 19041 662 for x64
built by MSVC 1916 with 'with-tools-version=15.0 with-platform-toolset=v141 with-platform-version=10.0.17763.0 with-vcredist=C:/Program\ Files\ (x86)/Microsoft\ Visual\ Studio/2017/BuildTools/VC/Redist/MSVC/14.16.27012/vcredist_x64.exe with-openssl=C:/OpenSSL with-libxml2=C:/libxml2 with-libuv=C:/libuv without-python with-system-tests x64'
compiled by MSVC 1916
compiled with OpenSSL version: OpenSSL 1.1.1h 22 Sep 2020
linked to OpenSSL version: OpenSSL 1.1.1h 22 Sep 2020
compiled with libuv version: 1.40.0
linked to libuv version: 1.40.0
compiled with libxml2 version: 2.9.10
linked to libxml2 version: 20910
threads support is enabled
default paths:
named configuration: C:\var\bind\etc\named.conf
rndc configuration: C:\var\bind\etc\rndc.conf
DNSSEC root key: C:\var\bind\etc\bind.keys
nsupdate session key: C:\var\bind\etc\session.key
named PID file: C:\var\bind\etc\named.pid
named lock file: C:\var\bind\etc\named.lock
### Steps to reproduce
Running Bind9 on Windows 10 Pro Version 20H2 (OS Build 19042.662)
on localhost as a local resolver in forwarder mode, to optimize traffic, latencies, etc.
### What is the current *bug* behavior?
Bind9 service crashed (is not running).
### What is the expected *correct* behavior?
BIND 9.17.7 should run without any problems.
BIND 9.17.6 worked without problems with a similar configuration,
except for DoT (DNS over TLS), which is supported starting with version 9.17.7.
### Relevant configuration files
Running bind9 with the following configuration (keys and similar private data have been removed):
```
include "c:\var\bind\etc\named.conf.acl";
include "c:\var\bind\etc\named.conf.controls";
include "c:\var\bind\etc\named.conf.options";
include "c:\var\bind\etc\named.conf.logging";
include "c:\var\bind\etc\named.conf.localhost";
include "c:\var\bind\etc\named.conf.chaos";
include "c:\var\bind\etc\named.conf.root";
tls "localhost-tls" {
cert-file "C:\var\bind\etc\server.crt";
key-file "C:\var\bind\etc\server.key";
};
options {
hostname "null";
version "not disclosed";
directory "C:\\var\\bind\\etc\\";
listen-on {
localhost_ipv4;
};
listen-on tls "localhost-tls" {
localhost_ipv4;
};
listen-on-v6 {
none;
};
listen-on-v6 tls "localhost-tls" {
none;
};
recursion no;
recursive-clients 64;
forwarders {
// Quad9 (with EDNS, support DOH)
9.9.9.11; //dns11.quad9.net
149.112.112.11; //dns11.quad9.net
//2620:fe::11; //dns11.quad9.net
//2620:fe::fe:11; //dns11.quad9.net
// OpenDNS (with EDNS, no support for DOH - need to use doh.opendns.com)
//208.67.222.222; //resolver1.opendns.com
//208.67.220.220; //resolver2.opendns.com
//2620:119:35::35; //resolver1.opendns.com
//2620:119:53::53; //resolver2.opendns.com
// Cloudflare (with EDNS, support for DOH)
//1.1.1.1; //one.one.one.one
//1.0.0.1; //one.one.one.one
//2606:4700:4700::1111; //one.one.one.one
//2606:4700:4700::1001; //one.one.one.one
// Google DNS (with EDNS, support for DOH)
//8.8.8.8; //dns.google
//8.8.4.4; //dns.google
//2001:4860:4860::8888; //dns.google
//2001:4860:4860::8844; //dns.google
};
forward only;
allow-notify { none; };
allow-recursion { none; };
allow-recursion-on { none; };
allow-query { none; };
allow-query-on { none; };
allow-query-cache { none; };
allow-query-cache-on { none; };
allow-transfer { none; };
allow-update { none; };
allow-update-forwarding { none; };
deny-answer-addresses {
0.0.0.0/8;
10.0.0.0/8;
127.0.0.0/8;
172.16.0.0/12;
192.168.0.0/16;
169.254.0.0/16;
192.0.0.0/24;
192.0.2.0/24;
192.0.0.0/29;
192.0.0.8/32;
192.0.0.170/32;
192.0.0.171/32;
192.52.193.0/24;
198.18.0.0/15;
198.51.100.0/24;
203.0.113.0/24;
224.0.0.0/4;
240.0.0.0/4;
255.255.255.255/32;
::/128;
::1/128;
::ffff:0:0/96;
100::/64;
64:ff9b::/96;
2001:2::/48;
2001:3::/32;
2001:db8::/32;
2001:10::/28;
2001:20::/28;
fc00::/7;
fe80::/10;
ff00::/8;
} except-from {"<obfuscated>";};
blackhole {
!127.0.0.1/32;
0.0.0.0/8;
10.0.0.0/8;
127.0.0.0/8;
172.16.0.0/12;
169.254.0.0/16;
192.168.0.0/16;
192.0.0.0/24;
192.0.2.0/24;
192.0.0.0/29;
192.0.0.8/32;
192.0.0.170/32;
192.0.0.171/32;
192.168.0.0/16;
192.52.193.0/24;
198.18.0.0/15;
198.51.100.0/24;
203.0.113.0/24;
224.0.0.0/4;
240.0.0.0/4;
255.255.255.255/32;
::/128;
::1/128;
::ffff:0:0/96;
100::/64;
64:ff9b::/96;
2001:2::/48;
2001:3::/32;
2001:db8::/32;
2001:10::/28;
2001:20::/28;
fc00::/7;
fe80::/10;
ff00::/8;
};
rate-limit {
responses-per-second 16;
log-only yes;
};
zone-statistics true;
minimal-any yes;
minimal-responses yes;
transfer-format many-answers;
provide-ixfr yes;
ixfr-from-differences yes;
qname-minimization relaxed;
dnssec-validation auto;
empty-zones-enable no;
max-cache-size 512m;
max-cache-ttl 60;
max-ncache-ttl 60;
tcp-listen-queue 0;
interface-interval 0;
heartbeat-interval 0;
};
controls {
inet 127.0.0.1 port 953 allow { localhost_ipv4; } keys { "rndc-key"; };
};
acl "recursion-chaos" {
localhost_ipv4;
};
acl "recursion-on-chaos" {
localhost_ipv4;
};
acl "transfer-chaos" {
none;
};
acl "update-chaos" {
none;
};
acl "query-chaos" {
localhost_ipv4;
};
acl "query-on-chaos" {
localhost_ipv4;
};
view "chaos" chaos {
match-clients { query-chaos; };
match-destinations {
localhost_ipv4;
};
recursion no;
match-recursive-only no;
allow-notify { none; };
allow-query { none; };
allow-query-on { none; };
allow-transfer { none; };
allow-update { none; };
allow-update-forwarding { none; };
allow-query-cache { query-chaos; };
allow-query-cache-on { query-on-chaos; };
zone "." {
type hint;
file "nul";
};
zone "bind" {
type master;
file "C:\\var\\bind\\etc\\empty\\bind.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
};
acl "recursion-root" {
none;
};
acl "recursion-on-root" {
none;
};
acl "transfer-root" {
none;
};
acl "update-root" {
none;
};
acl "query-root" {
none;
};
acl "query-on-root" {
none;
};
// Running Root on Loopback (RFC 7706)
view "root" {
match-clients { query-root; };
match-destinations {
localhost_ipv4;
};
recursion no;
match-recursive-only no;
allow-notify { none; };
allow-query { none; };
allow-query-on { none; };
allow-transfer { none; };
allow-update { none; };
allow-update-forwarding { none; };
allow-query-cache { query-root; };
allow-query-cache-on { query-on-root; };
// root zone
zone "." {
type slave;
file "C:\\var\\bind\\etc\\sec\\root.zone";
masters {
192.5.5.241; //f.root-servers.net.
192.33.4.12; //c.root-servers.net.
193.0.14.129; //k.root-servers.net.
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// Reserved exclusively to support operationally-critical infrastructural identifier spaces as advised by the Internet Architecture Board (RFC 3172)
zone "arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\arpa.zone";
masters {
192.5.5.241; //f.root-servers.net.
192.33.4.12; //c.root-servers.net.
193.0.14.129; //k.root-servers.net.
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// RFC 8375
// zone "home.arpa" {
// type slave;
// file "C:\\var\\bind\\etc\\sec\\home.arpa.zone";
// masters {
// 192.175.48.6; // blackhole-1.iana.org.
// 192.175.48.42; // blackhole-2.iana.org.
// };
// allow-query { query-root; };
// allow-query-on { query-on-root; };
// allow-transfer { transfer-root; };
// notify no;
// };
// For mapping E.164 numbers to Internet URIs (RFC 6116)
zone "e164.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\e164.arpa.zone";
masters {
193.0.9.5; //PRI.AUTHDNS.RIPE.NET
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// For hosting authoritative name servers for the in-addr.arpa domain (RFC 5855)
zone "in-addr-servers.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\in-addr-servers.arpa.zone";
masters {
193.0.9.1; //F.IN-ADDR-SERVERS.ARPA
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// For mapping IPv4 addresses to Internet domain names (RFC 1035)
zone "in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\in-addr.arpa.zone";
masters {
193.0.9.1; //F.IN-ADDR-SERVERS.ARPA
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// For hosting authoritative name servers for the ip6.arpa domain (RFC 5855)
zone "ip6-servers.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\ip6-servers.arpa.zone";
masters {
193.0.9.2; //F.IP6-SERVERS.ARPA
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// For mapping IPv6 addresses to Internet domain names (RFC 3152)
zone "ip6.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\ip6.arpa.zone";
masters {
193.0.9.2; //F.IP6-SERVERS.ARPA
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "ipv4only.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\ipv4only.arpa.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "root-servers.net." {
type slave;
file "C:\\var\\bind\\etc\\sec\\root-servers.net.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
// Multicast (RFC 3171)
zone "mcast.net" {
type slave;
file "C:\\var\\bind\\etc\\sec\\mcast.net.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "224.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\224.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "225.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\225.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "226.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\226.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "227.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\227.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "228.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\228.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "229.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\229.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "230.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\230.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "231.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\231.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "232.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\232.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "233.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\233.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "234.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\234.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "235.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\235.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "236.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\236.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "237.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\237.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "238.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\238.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
zone "239.in-addr.arpa" {
type slave;
file "C:\\var\\bind\\etc\\sec\\239.in-addr.zone";
masters {
192.0.47.132; //iad.xfr.dns.icann.org
};
allow-query { query-root; };
allow-query-on { query-on-root; };
allow-transfer { transfer-root; };
notify no;
};
};
acl localhost_ipv4 { 127.0.0.1; };
acl "recursion-localhost" {
localhost_ipv4;
};
acl "recursion-on-localhost" {
localhost_ipv4;
};
acl "transfer-localhost" {
none;
};
acl "update-localhost" {
none;
};
acl "query-localhost" {
localhost_ipv4;
};
acl "query-on-localhost" {
localhost_ipv4;
};
view "localhost" {
match-clients { query-localhost; };
match-destinations {
localhost_ipv4;
};
recursion yes;
match-recursive-only yes;
allow-notify { none; };
allow-query { none; };
allow-query-on { none; };
allow-transfer { none; };
allow-update { none; };
allow-update-forwarding { none; };
allow-query-cache { query-localhost; };
allow-query-cache-on { query-on-localhost; };
allow-recursion { recursion-localhost; };
allow-recursion-on { recursion-on-localhost; };
empty-zones-enable no;
// This host on this network (RFC 1122)
disable-empty-zone "0.in-addr.arpa";
// IPv4 Loopback Network (RFC 1122)
// SPECIAL-IPV4-LOOPBACK-IANA-RESERVED
disable-empty-zone "127.in-addr.arpa";
// Private Use Networks (RFC 1918)
// PRIVATE-ADDRESS-ABLK-RFC1918-IANA-RESERVE
disable-empty-zone "10.in-addr.arpa";
// PRIVATE-ADDRESS-BBLK-RFC1918-IANA-RESERVED
disable-empty-zone "16.172.in-addr.arpa";
disable-empty-zone "17.172.in-addr.arpa";
disable-empty-zone "18.172.in-addr.arpa";
disable-empty-zone "19.172.in-addr.arpa";
disable-empty-zone "20.172.in-addr.arpa";
disable-empty-zone "21.172.in-addr.arpa";
disable-empty-zone "22.172.in-addr.arpa";
disable-empty-zone "23.172.in-addr.arpa";
disable-empty-zone "24.172.in-addr.arpa";
disable-empty-zone "25.172.in-addr.arpa";
disable-empty-zone "26.172.in-addr.arpa";
disable-empty-zone "27.172.in-addr.arpa";
disable-empty-zone "28.172.in-addr.arpa";
disable-empty-zone "29.172.in-addr.arpa";
disable-empty-zone "30.172.in-addr.arpa";
disable-empty-zone "31.172.in-addr.arpa";
// PRIVATE-ADDRESS-CBLK-RFC1918-IANA-RESERVED
disable-empty-zone "168.192.in-addr.arpa";
// Link local (RFC 3927)
// LINKLOCAL-RFC3927-IANA-RESERVED
disable-empty-zone "254.169.in-addr.arpa";
// IETF Protocol Assignments (RFC 5736)
// SPECIAL-IPV4-REGISTRY-IANA-RESERVED
disable-empty-zone "0.0.192.in-addr.arpa";
// TEST-NET-[1-3] for Documentation (RFC 5737)
// TEST-NET-1
disable-empty-zone "2.0.192.in-addr.arpa";
// TEST-NET-2
disable-empty-zone "100.51.198.in-addr.arpa";
// TEST-NET-3
disable-empty-zone "113.0.203.in-addr.arpa";
// RESERVED-19252192C
disable-empty-zone "193.52.192.in-addr.arpa";
// 6to4 Relay Anycast (RFC 3068)
// 6TO4-RELAY-ANYCAST-IANA-RESERVED
disable-empty-zone "192.88.99.in-addr.arpa";
// Network Interconnect Device Benchmark Testing (RFC 2544)
// SPECIAL-IPV4-BENCHMARK-TESTING-IANA-RESERVED
disable-empty-zone "18.198.in-addr.arpa";
disable-empty-zone "19.198.in-addr.arpa";
// Multicast (RFC 3171)
disable-empty-zone "224.in-addr.arpa";
disable-empty-zone "225.in-addr.arpa";
disable-empty-zone "226.in-addr.arpa";
disable-empty-zone "227.in-addr.arpa";
disable-empty-zone "228.in-addr.arpa";
disable-empty-zone "229.in-addr.arpa";
disable-empty-zone "230.in-addr.arpa";
disable-empty-zone "231.in-addr.arpa";
disable-empty-zone "232.in-addr.arpa";
disable-empty-zone "233.in-addr.arpa";
disable-empty-zone "234.in-addr.arpa";
disable-empty-zone "235.in-addr.arpa";
disable-empty-zone "236.in-addr.arpa";
disable-empty-zone "237.in-addr.arpa";
disable-empty-zone "238.in-addr.arpa";
disable-empty-zone "239.in-addr.arpa";
// Reserved for Future Use (RFC 1112)
disable-empty-zone "240.in-addr.arpa";
disable-empty-zone "241.in-addr.arpa";
disable-empty-zone "242.in-addr.arpa";
disable-empty-zone "243.in-addr.arpa";
disable-empty-zone "244.in-addr.arpa";
disable-empty-zone "245.in-addr.arpa";
disable-empty-zone "246.in-addr.arpa";
disable-empty-zone "247.in-addr.arpa";
disable-empty-zone "248.in-addr.arpa";
disable-empty-zone "249.in-addr.arpa";
disable-empty-zone "250.in-addr.arpa";
disable-empty-zone "251.in-addr.arpa";
disable-empty-zone "252.in-addr.arpa";
disable-empty-zone "253.in-addr.arpa";
disable-empty-zone "254.in-addr.arpa";
// Limited Broadcast (RFC0919 and RFC0922)
disable-empty-zone "255.in-addr.arpa";
// (RFC 4291)
// Unspecified address
disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa";
// Loopback address
disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa";
// IPv4-mapped addresses
disable-empty-zone "f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa";
disable-empty-zone "0.0.ip6.arpa";
// (RFC 4048)
disable-empty-zone "2.0.ip6.arpa";
// (RFC 4291)
disable-empty-zone "1.ip6.arpa";
disable-empty-zone "4.ip6.arpa";
disable-empty-zone "6.ip6.arpa";
disable-empty-zone "8.ip6.arpa";
disable-empty-zone "a.ip6.arpa";
disable-empty-zone "c.ip6.arpa";
disable-empty-zone "e.ip6.arpa";
disable-empty-zone "f.ip6.arpa";
disable-empty-zone "1.0.ip6.arpa";
disable-empty-zone "4.0.ip6.arpa";
disable-empty-zone "8.0.ip6.arpa";
disable-empty-zone "8.f.ip6.arpa";
disable-empty-zone "e.f.ip6.arpa";
// Multicast
disable-empty-zone "f.f.ip6.arpa";
disable-empty-zone "8.e.f.ip6.arpa";
disable-empty-zone "9.e.f.ip6.arpa";
disable-empty-zone "a.e.f.ip6.arpa";
disable-empty-zone "b.e.f.ip6.arpa";
disable-empty-zone "d.e.f.ip6.arpa";
disable-empty-zone "e.e.f.ip6.arpa";
disable-empty-zone "f.e.f.ip6.arpa";
// Unique-Local (RFC 4193)
disable-empty-zone "c.f.ip6.arpa";
// (RFC 3879)
disable-empty-zone "c.e.f.ip6.arpa";
disable-empty-zone "0.0.c.f.ip6.arpa";
disable-empty-zone "0.0.d.f.ip6.arpa";
// Overlay Routable Cryptographic Hash IDentifiers (RFC 4843)
disable-empty-zone "1.0.0.1.0.0.2.ip6.arpa";
// Teredo (RFC 4380)
disable-empty-zone "0.0.0.0.1.0.0.2.ip6.arpa";
// Documentation Prefix (RFC 3849)
disable-empty-zone "8.b.d.0.1.0.0.2.ip6.arpa";
// (RFC 5180)
disable-empty-zone "0.0.0.0.2.0.0.0.1.0.0.2.ip6.arpa";
// (RFC 6052)
disable-empty-zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.9.f.f.4.6.0.0.ip6.arpa";
// 6to4 (RFC 3056)
disable-empty-zone "2.0.0.2.ip6.arpa";
// 6bone (RFC 3701)
// (RFC 1897)
disable-empty-zone "f.5.ip6.arpa";
// (RFC 2471)
disable-empty-zone "e.f.f.3.ip6.arpa";
response-policy {
zone "rpz.local";
} qname-wait-recurse no;
// note: the rpz.local zone contains roughly 108k records
zone "rpz.local" {
type master;
file "C:\\var\\bind\\etc\\empty\\rpz.local.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "localhost" {
type master;
file "C:\\var\\bind\\etc\\empty\\localhost.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "127.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\127.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "10.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\10.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "224.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "225.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "226.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "227.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "228.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "229.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "230.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "231.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "232.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "233.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "234.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "235.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "236.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "237.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "238.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "239.in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "240.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\240.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "241.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\241.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "242.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\242.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "243.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\243.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "244.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\244.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "245.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\245.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "246.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\246.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "247.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\247.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "248.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\248.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "249.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\249.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "250.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\250.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "251.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\251.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "252.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\252.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "253.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\253.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "254.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\254.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "255.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\255.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "16.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\16.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "17.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\17.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "18.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\18.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "19.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\19.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "20.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\20.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "21.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\21.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "22.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\22.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "23.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\23.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "24.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\24.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "25.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\25.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "26.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\26.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "27.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\27.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "28.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\28.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "29.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\29.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "30.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\30.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "31.172.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\31.172.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "168.192.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\168.192.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "254.169.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\254.169.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "18.198.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\18.198.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "19.198.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\19.198.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.192.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.192.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "2.0.192.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\2.0.192.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "193.52.192.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\193.52.192.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "100.51.198.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\100.51.198.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "113.0.203.in-addr.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\113.0.203.in-addr.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "1.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\1.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "4.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\4.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "6.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\6.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "a.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\a.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "c.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\c.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "e.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\e.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "1.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\1.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "2.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\2.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "4.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\4.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "c.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\c.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "9.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\9.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "a.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\a.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "b.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\b.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "c.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\c.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "d.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\d.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "e.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\e.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.e.f.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.e.f.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "8.b.d.0.1.0.0.2.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\8.b.d.0.1.0.0.2.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "1.0.0.1.0.0.2.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\1.0.0.1.0.0.2.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.0.0.2.0.0.0.1.0.0.2.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.0.0.2.0.0.0.1.0.0.2.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.9.f.f.4.6.0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.b.9.f.f.4.6.0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "2.0.0.2.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\2.0.0.2.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "f.5.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\f.5.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
zone "e.f.f.3.ip6.arpa" {
type master;
file "C:\\var\\bind\\etc\\empty\\e.f.f.3.ip6.arpa.zone";
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
allow-transfer { transfer-localhost; };
allow-update { update-localhost; };
notify no;
};
// root zone
zone "." {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// Reserved exclusively to support operationally-critical infrastructural identifier spaces as advised by the Internet Architecture Board (RFC 3172)
zone "arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// RFC 8375
// zone "home.arpa" {
// type static-stub;
// server-addresses { 127.0.0.1; };
// allow-query { query-localhost; };
// allow-query-on { query-on-localhost; };
// };
// For mapping E.164 numbers to Internet URIs (RFC 6116)
zone "e164.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// For hosting authoritative name servers for the in-addr.arpa domain (RFC 5855)
zone "in-addr-servers.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// For mapping IPv4 addresses to Internet domain names (RFC 1035)
zone "in-addr.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// For hosting authoritative name servers for the ip6.arpa domain (RFC 5855)
zone "ip6-servers.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// For mapping IPv6 addresses to Internet domain names (RFC 3152)
zone "ip6.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "ipv4only.arpa" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
zone "root-servers.net." {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
// Multicast (RFC 3171)
zone "mcast.net" {
type static-stub;
server-addresses { 127.0.0.1; };
allow-query { query-localhost; };
allow-query-on { query-on-localhost; };
};
};
logging {
channel rpz_file { file "c:\var\bind\log\rpz.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel edns-disabled_file { file "c:\var\bind\log\edns-disabled.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel default_file { file "c:\var\bind\log\default.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel general_file { file "c:\var\bind\log\general.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel database_file { file "c:\var\bind\log\database.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel spill_file { file "c:\var\bind\log\spill.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel rate-limit_file { file "c:\var\bind\log\rate-limit.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel security_file { file "c:\var\bind\log\security.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel config_file { file "c:\var\bind\log\config.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel resolver_file { file "c:\var\bind\log\resolver.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel xfer-in_file { file "c:\var\bind\log\xfer-in.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel xfer-out_file { file "c:\var\bind\log\xfer-out.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel notify_file { file "c:\var\bind\log\notify.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel client_file { file "c:\var\bind\log\client.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel unmatched_file { file "c:\var\bind\log\unmatched.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel queries_file { file "c:\var\bind\log\queries.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel query-errors_file { file "c:\var\bind\log\query-errors.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel network_file { file "c:\var\bind\log\network.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel update_file { file "c:\var\bind\log\update.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel update-security_file { file "c:\var\bind\log\update-security.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel dispatch_file { file "c:\var\bind\log\dispatch.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel dnssec_file { file "c:\var\bind\log\dnssec.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel lame-servers_file { file "c:\var\bind\log\lame-servers.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
channel delegation-only_file { file "c:\var\bind\log\delegation-only.log" versions 3 size 5m; severity dynamic; print-time yes; print-category yes; print-severity yes; };
category rpz { rpz_file; };
category edns-disabled { edns-disabled_file; };
category default { default_file; };
category general { general_file; };
category database { database_file; };
category spill { spill_file; };
category rate-limit { rate-limit_file; };
category security { security_file; };
category config { config_file; };
category resolver { resolver_file; };
category xfer-in { xfer-in_file; };
category xfer-out { xfer-out_file; };
category notify { notify_file; };
category client { client_file; };
category unmatched { unmatched_file; };
category queries { queries_file; };
category query-errors { query-errors_file; };
category network { network_file; };
category update { update_file; };
category update-security { update-security_file; };
category dispatch { dispatch_file; };
category dnssec { dnssec_file; };
category lame-servers { lame-servers_file; };
category delegation-only { delegation-only_file; };
};
options {
default-key "rndc-key";
default-server 127.0.0.1;
default-port 953;
};
```
There are reasons to use the above configuration - optimizing latency to selected entities for zone transfers and so on (instead of the built-in mirror-zone capabilities for the DNS core infrastructure). If any of you finds possible improvements/hints/comments/etc., then, as the person involved in this bug, I would appreciate any feedback (a potential additional side benefit of this bug report).
### Relevant logs and/or screenshots
example of two cases:
```
03-dec-2020 3:03:26.817 general: critical: c:\builds\isc-private\bind9\lib\isc\netmgr\netmgr.c:1332: REQUIRE(((((*handlep) != ((void *)0)) && (((const isc__magic_t *)(*handlep))->magic == ((('N') << 24 | ('M') << 16 | ('H') << 8 | ('D'))))) && ((sizeof(*(&(*handlep)->references)) == 8 ? (memory_order_seq_cst == memory_order_relaxed ? _InterlockedOr64((atomic_int_fast64_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_acquire ? _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_release ? _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0) : _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0)))) : (sizeof(*(&(*handlep)->references) == 4) ? (memory_order_seq_cst == memory_order_relaxed ? (int32_t)_InterlockedOr((atomic_int_fast32_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_acquire ? (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_release ? (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0) : (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0)))) : (sizeof(*(&(*handlep)->references) == 2) ? (short)_InterlockedOr16((atomic_short *)&(*handlep)->references, 0) : (sizeof(*(&(*handlep)->references) == 1) ? (int8_t) _InterlockedOr8((atomic_int_fast8_t *)&(*handlep)->references, 0) : atomic_load_abort())))) & (sizeof(*(&(*handlep)->references)) == 8 ? 0xffffffffffffffffULL : (sizeof(*(&(*handlep)->references)) == 4 ? 0xffffffffULL : (sizeof(*(&(*handlep)->references)) == 2 ? 0xffffULL : (sizeof(*(&(*handlep)->references)) == 1 ? 0xffULL : atomic_load_abort()))))) > 0)) failed
03-dec-2020 3:03:26.817 general: critical: exiting (due to assertion failure)
05-dec-2020 0:08:04.470 general: critical: c:\builds\isc-private\bind9\lib\isc\netmgr\netmgr.c:1332: REQUIRE(((((*handlep) != ((void *)0)) && (((const isc__magic_t *)(*handlep))->magic == ((('N') << 24 | ('M') << 16 | ('H') << 8 | ('D'))))) && ((sizeof(*(&(*handlep)->references)) == 8 ? (memory_order_seq_cst == memory_order_relaxed ? _InterlockedOr64((atomic_int_fast64_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_acquire ? _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_release ? _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0) : _InterlockedOr64( (atomic_int_fast64_t *)&(*handlep)->references, 0)))) : (sizeof(*(&(*handlep)->references) == 4) ? (memory_order_seq_cst == memory_order_relaxed ? (int32_t)_InterlockedOr((atomic_int_fast32_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_acquire ? (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0) : (memory_order_seq_cst == memory_order_release ? (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0) : (int32_t)_InterlockedOr( (atomic_int_fast32_t *)&(*handlep)->references, 0)))) : (sizeof(*(&(*handlep)->references) == 2) ? (short)_InterlockedOr16((atomic_short *)&(*handlep)->references, 0) : (sizeof(*(&(*handlep)->references) == 1) ? (int8_t) _InterlockedOr8((atomic_int_fast8_t *)&(*handlep)->references, 0) : atomic_load_abort())))) & (sizeof(*(&(*handlep)->references)) == 8 ? 0xffffffffffffffffULL : (sizeof(*(&(*handlep)->references)) == 4 ? 0xffffffffULL : (sizeof(*(&(*handlep)->references)) == 2 ? 0xffffULL : (sizeof(*(&(*handlep)->references)) == 1 ? 0xffULL : atomic_load_abort()))))) > 0)) failed
05-dec-2020 0:08:04.470 general: critical: exiting (due to assertion failure)
```
### Possible fixes
Investigate assertion failure.
Thank you in advance for your time and cooperation.
https://gitlab.isc.org/isc-projects/kea/-/issues/1592
Changes for Kea 1.8.2 release
2021-01-28T13:32:50Z
Michal Nowikowski
Changes for Kea 1.8.2 release
kea1.8.2
Michal Nowikowski
Michal Nowikowski
https://gitlab.isc.org/isc-projects/bind9/-/issues/2338
Code coverage statistics graph not updated anymore
2021-09-01T11:34:54Z
Michal Nowak
Code coverage statistics graph not updated anymore
From the "Code coverage statistics" [graph](https://gitlab.isc.org/isc-projects/bind9/-/graphs/main/charts) it is apparent that code coverage stopped being reported to this graph around October 24.
Around that time, 2dabf328c406036e012a9b0b30ed952785565d51 was merged. Also, suspiciously, the graph's full label mentions the `master` (sic) branch: "Code coverage statistics for *master* Sep 05 - Dec 04" - a branch that was removed in June 2020.
Weirdly enough, the [gcov](https://gitlab.isc.org/isc-projects/bind9/-/jobs/1345869) CI job on `main` passes and correctly reports "Coverage: 77%", yet the graph is not updated.
October 2021 (9.11.36, 9.11.36-S1, 9.16.22, 9.16.22-S1, 9.17.19)
Michal Nowak
Michal Nowak
https://gitlab.isc.org/isc-projects/bind9/-/issues/2337
Unusual behaviour of first query in a pipeline of queries.
2021-06-22T13:35:59Z
Peter Davies
Unusual behaviour of first query in a pipeline of queries.
### Summary
Bind does not treat the first query in a pipelined list of queries in the same way as the rest of the queries in the list.
### BIND version used
Bind 9.16.9, 9.17.7
### Steps to reproduce
Create a file with a limited number of well-formed resolvable queries, preferably sorted in alphanumeric order. Use it as input to mdig, targeting a BIND server with pipelining enabled.
```mdig @10.0.0.237 +vc -f ttt.1```
Working with a cold cache, replies normally do not get returned in the same order as the list - this is the expected behaviour.
Add a query that is known to cause the server to time out in the middle of the list of queries.
```mdig @10.0.0.237 +vc -f ttt.2```
The behaviour is as above. The ServFail reply generated by the "time out" on the server is the last reply.
Move the known query to the head of the list of queries.
```mdig @10.0.0.237 +vc -f ttt.3```
A pause in the output indicates that the server is waiting for the first query to resolve.
Inspecting the client messages generated by BIND also bears this out.
[RT #17356](https://support.isc.org/Ticket/Display.html?id=17356)
Ondřej Surý
Ondřej Surý
https://gitlab.isc.org/isc-projects/stork/-/issues/462
Show user who authenticated reconfigure in Kea
2022-11-16T11:54:50Z
Tomek Mrugalski
Show user who authenticated reconfigure in Kea
The goal of this ticket is to show the username of whoever authorized the reconfigure (or to say that auth was disabled, or that the user could not be determined because the server was restarted or a signal was sent).
This is a follow-up to https://gitlab.isc.org/isc-projects/stork/-/issues/353#note_164884.
Once https://gitlab.isc.org/isc-projects/kea/-/issues/1590 is implemented, we should retrieve this information and extend the event that was introduced in #460.
For a reconfig triggered by a received signal, we could say `root` did it, although that would be somewhat imprecise: the owner of the process can send the signal as well, and while Kea is usually run as root, it doesn't have to be. So perhaps it would be better to say `signal` in such cases?
backlog
https://gitlab.isc.org/isc-projects/kea/-/issues/1590
Auth: logged user and command should be printed on dedicated logger
2021-05-05T16:35:44Z
Tomek Mrugalski
Auth: logged user and command should be printed on dedicated logger
As requested by @vicky in https://gitlab.isc.org/isc-projects/stork/-/issues/353#note_164884, we need a dedicated logger. This logger should provide at least two pieces of information: which command was authorized and the username of the user who authorized it.
kea1.9.8
Tomek Mrugalski
Tomek Mrugalski
https://gitlab.isc.org/isc-projects/kea/-/issues/1589
Remove unbalanced parentheses from example in ARM.
2020-12-29T16:33:56Z
Peter Davies
Remove unbalanced parentheses from example in ARM.
---
name: Bug report
---
The example shown in chapter 8.2.12 DHCPv4 Private Options has unbalanced parentheses:
doc/sphinx/arm/dhcp4-srv.rst: "test": "(option[vendor-class-identifier].text == 'APC'",
doc/sphinx/arm/dhcp4-srv.rst: "test": "(option[vendor-class-identifier].text == 'PXE'",
see [RT 17358](https://support.isc.org/Ticket/Display.html?id=17358).
kea1.9.4
Francis Dupont
Francis Dupont
https://gitlab.isc.org/isc-projects/bind9/-/issues/2336
Implement TSIG-GSS on Windows
2020-12-04T08:37:04Z
Mark Andrews
Implement TSIG-GSS on Windows
Windows should have the necessary components to do this using Windows APIs.
Not planned
https://gitlab.isc.org/isc-projects/kea/-/issues/1588
EVAL_RESULT displays boolean status as an integer
2023-07-05T10:39:18Z
Francis Dupont
EVAL_RESULT displays boolean status as an integer
For instance, ```EVAL_RESULT Expression 53148-RU evaluated to 1``` should be ```EVAL_RESULT Expression 53148-RU evaluated to true```, so all uses of EVAL_RESULT should set `std::boolalpha` or convert the boolean into "false" and "true" directly.
next-stable-2.6
https://gitlab.isc.org/isc-projects/bind9/-/issues/2335
TLSDNS refactoring
2021-02-26T15:14:59Z
Ondřej Surý
TLSDNS refactoring
The TLSDNS needs to be refactored to use libuv/OpenSSL directly, and not via netmgr layers.
February 2021 (9.11.28, 9.11.28-S1, 9.16.12, 9.16.12-S1, 9.17.10)
Ondřej Surý
Ondřej Surý
https://gitlab.isc.org/isc-projects/kea/-/issues/1587
Security and Kea new ARM section
2021-04-23T18:51:19Z
Francis Dupont
Security and Kea new ARM section
I propose a new section in the ARM about security and Kea with this plan:
- privileged sockets for DHCP service
- control channel using UNIX sockets
- local use of the control agent
- remote use of the control agent
There is no remote access to UNIX sockets, as they are local objects. And of course, if (when) we change the control agent function, for instance by putting the HTTP endpoint in the servers, this will have to be rewritten.
The document is being created here: https://gitlab.isc.org/isc-projects/kea/-/wikis/security.
kea1.9.7
Thomas Markwalder
Thomas Markwalder
https://gitlab.isc.org/isc-projects/bind9/-/issues/2334
Add uv timer for TCP connection timeouts
2022-06-30T06:07:05Z
Ondřej Surý
Add uv timer for TCP connection timeouts
Currently, the TCP connection timeout is configured using a socket option that varies across platforms:
* On Linux, it's `TCP_USER_TIMEOUT` (in milliseconds)
* On Windows, it's called `TCP_MAXRT` (in seconds)
* On macOS, there are two! socket options: `TCP_CONNECTIONTIMEOUT` (in seconds) and `TCP_RXT_CONNDROPTIME` (documented only in the header)
* On FreeBSD 11 and FreeBSD 12, the option is called `TCP_KEEPINIT` (in seconds)
Alas, OpenBSD and NetBSD (at least) don't have any socket option like this, so a system-wide default is used (and it can be large), leading to the connection callback not being called for quite a long time -> thus the unit test might crash on shutdown because there's a dangling `nmsocket` waiting for the connection.
We need to restore the connection timeout handler that will:
* call `uv_close()` on the connection `uv_tcp_t` in the timeout handle, forcing the connection callback to be called with the timeout error code
* properly handle the case where the `uv_tcp_t` is closed out of order
April 2021 (9.11.30/9.11.31, 9.11.30-S1/9.11.31-S1, 9.16.14/9.16.15, 9.16.14-S1/9.16.15-S1, 9.17.12)
Ondřej Surý
Ondřej Surý
https://gitlab.isc.org/isc-projects/bind9/-/issues/2333
TCP "connection refused" handling in dig on Windows is broken
2021-09-02T09:23:43Z
Michał Kępień
TCP "connection refused" handling in dig on Windows is broken
Below, I present the behavior of `dig` on Windows for three different source
code revisions:
- before !4115
- after !4115, before !4444
- after !4444
This is what happens for d48e04003560d4a87a659af9319219a123c69c75
(before !4115) when one tries to query a TCP port on which nothing is
listening:
```
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=1 +time=2
; <<>> DiG 9.17.6 <<>> @127.0.0.1 -p 12345 isc.org. +tcp +tries=1 +time=2
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: timed out.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=2 +time=2
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: timed out.
; <<>> DiG 9.17.6 <<>> @127.0.0.1 -p 12345 isc.org. +tcp +tries=2 +time=2
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: timed out.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=1 +time=3
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=2 +time=3
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=1 +time=2
; <<>> DiG 9.17.6 <<>> @10.53.0.5 -p 12345 isc.org. +tcp +tries=1 +time=2
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=2 +time=2
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
; <<>> DiG 9.17.6 <<>> @10.53.0.5 -p 12345 isc.org. +tcp +tries=2 +time=2
; (1 server found)
;; global options: +cmd
;; connection timed out; no servers could be reached
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=1 +time=3
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: connection refused.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=2 +time=3
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: connection refused.
```
Observations:
- the message logged depends on what `+time` is set to,
- traffic sniffer shows multiple SYN/RST+ACK exchanges whose count
depends on whether the connection "times out" or is "refused".
This is what happens for 3a366622074d8e55b361356e89c19d6e139bd0b6
(after !4115, before !4444):
```
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=1 +time=2
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=2 +time=2
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=1 +time=3
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=2 +time=3
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=1 +time=2
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=2 +time=2
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=1 +time=3
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: connection refused.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=2 +time=3
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: connection refused.
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: connection refused.
```
Observations:
- Queries sent towards 127.0.0.1 seem to work fine:
- `dig` invocations exit quickly,
- traffic sniffer shows just one SYN/RST+ACK pair being exchanged
per each `+tries`,
- "connection refused" is consistently logged, no matter what
`+time` is set to.
- Queries sent towards 10.53.0.5 (configured on a loopback interface)
behave oddly:
- `dig` invocations take a while longer to return,
- traffic sniffer shows multiple SYN/RST+ACK pairs being exchanged
per each `+tries`,
- the message logged depends on what `+time` is set to.
!4444 seems to break things in yet another way:
```
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=1 +time=2
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=2 +time=2
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=1 +time=3
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @127.0.0.1 -p 12345 isc.org. +tcp +tries=2 +time=3
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
;; Connection to 127.0.0.1#12345(127.0.0.1) for isc.org. failed: connection refused.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=1 +time=2
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=2 +time=2
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=1 +time=3
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
>>>>> dig @10.53.0.5 -p 12345 isc.org. +tcp +tries=2 +time=3
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
;; Connection to 10.53.0.5#12345(10.53.0.5) for isc.org. failed: timed out.
```
Observations:
- Queries sent towards 127.0.0.1 still seem to work fine.
- Queries sent towards 10.53.0.5 never return "connection refused" any
more despite the packet sniffer showing multiple SYN/RST+ACK
exchanges per each `+tries`.
Windows Firewall is disabled on the host on which the above tests were
run.
All in all, with the code in its current shape (after !4444), the
`legacy` system test is consistently failing due to no "connection
refused" messages being logged for queries sent towards `named`
instances that intentionally do not listen for TCP connections.
From a user's perspective, I would expect the following:
- 1 SYN/RST+ACK exchange per each `+tries`,
- timeouts should never be logged if "connection refused" is
detected,
- `dig` should behave consistently for all target addresses.
September 2021 (9.16.21, 9.16.21-S1, 9.17.18)