How does this interact with interface-interval, and why does it require routing sockets?
I was running 9.11.5 with no lock-file directive and no lock file on disk. When I added a lock-file directive specifying /var/run/named/named.lock, that file was created, so I don't think the ARM is entirely correct about the default.

Also, the lock-file implementation could be better. The file BIND created was just an empty file. If BIND dies and leaves that lock file in place, it will prevent BIND from being restarted. If you write BIND's PID into the file, then when the file exists you can send signal 0 to that PID to verify that a process with that number is actually running; if the kill fails, you can allow BIND to start even though the lock file exists. This works unless another process with the same PID happens to have been started in the meantime; that's unlikely, and it's still better than not testing at all.
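The stale-lock check described above might look something like the sketch below. This is a hypothetical illustration, not BIND's actual code; the function names and the ESRCH/EPERM policy are my own choices.

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Return 1 if the lock file at `path` looks stale (missing, empty,
 * or naming a PID with no live process), 0 if its holder appears
 * to still be running. */
int lock_is_stale(const char *path) {
    FILE *f = fopen(path, "r");
    long pid;

    if (f == NULL)
        return 1;                /* no lock file: nothing holds the lock */
    if (fscanf(f, "%ld", &pid) != 1 || pid <= 0) {
        fclose(f);
        return 1;                /* empty or garbled lock: treat as stale */
    }
    fclose(f);
    /* Signal 0 delivers nothing; it only checks that the PID exists.
     * ESRCH means no such process, so the lock must be stale.  EPERM
     * means some process with that PID exists (just not ours to
     * signal), so conservatively treat the lock as live. */
    if (kill((pid_t)pid, 0) == -1 && errno == ESRCH)
        return 1;
    return 0;
}

/* Record our own PID in the lock file so a later start can run the
 * check above.  Returns 0 on success, -1 on failure. */
int lock_acquire(const char *path) {
    FILE *f = fopen(path, "w");

    if (f == NULL)
        return -1;
    fprintf(f, "%ld\n", (long)getpid());
    return fclose(f) == 0 ? 0 : -1;
}
```

As noted above, a recycled PID can still defeat the check, but that beats refusing to start on every stale lock.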
The ARM doesn't indicate the default setting (or why one might pick yes versus no, for that matter). It says: "Require a valid server cookie before sending a full response to a UDP request from a cookie aware client. BADCOOKIE is sent if there is a bad or no existent server cookie." I think this is trying to tell me that if I turn this on, such clients get a BADCOOKIE error response, and if I leave it off they get a normal response, possibly truncated according to the nocookie-udp-size parameter. It's a shame that's not what it says.
bin/named/config.c seems to tell me that the default is no. Why would I want to set it to yes? "yes" seems like a really bad setting for servers behind a load balancer unless they were all configured with the same cookie secret.
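For the load-balancer case, my understanding is that the backends would all need a shared cookie secret so that a cookie issued by one server validates on another. Something like this named.conf sketch, where the secret value is a placeholder, not a real key:

```
options {
    require-server-cookie yes;
    /* Every server behind the load balancer must carry the same
       cookie-secret, or cookies issued by one backend will be
       rejected (BADCOOKIE) by the others. */
    cookie-secret "000102030405060708090a0b0c0d0e0f";  /* placeholder 128-bit hex */
};
```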
This option appears in the grammar but there is no description. The code tells me the default is 3.
Ditto. The code tells me the default is 800.
The ARM doesn't indicate the default setting. Again, the code seems to tell me the default is true.
stale-answer-enable: This option appears in the grammar but there is no description. The code tells me the default is false.
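If I've read the code correctly, turning serve-stale on would look something like this; stale-answer-ttl is my guess at the relevant companion option, and the value shown is illustrative:

```
options {
    stale-answer-enable yes;  /* serve stale data when fresh answers are unavailable */
    stale-answer-ttl 1;       /* assumed companion option; value is illustrative */
};
```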
edit: more from the same source
dnskey-sig-validity: the ARM says "If set to a non-zero value, this overrides the value set by sig-validity-interval. The default is zero, meaning sig-validity-interval is used." However, if I specify "dnskey-sig-validity 0;" the parser rejects it with "'0' is out of range (1..3660)".
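For reference, a fragment like the one below is what triggers the error when 30 is replaced with 0; the zone name is illustrative, and 30 (days) is simply a value inside the accepted 1..3660 range:

```
zone "example.net" {
    type master;
    file "example.net.db";
    dnskey-sig-validity 30;  /* days; 0 is rejected despite the ARM's claimed default */
};
```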