1. 14 Mar, 2019 10 commits
  2. 13 Mar, 2019 2 commits
  3. 12 Mar, 2019 7 commits
    • Merge branch '834-fix-race-in-fctx-cancel-v9_11' into 'v9_11' · d87f1932
      Evan Hunt authored
      fix race in socket code
      See merge request !1674
    • CHANGES · 3993503d
      Witold Kręcicki authored and Evan Hunt committed
      (cherry picked from commit 50f60542)
    • Fix a race in fctx_cancelquery. · ff401e67
      Witold Kręcicki authored and Evan Hunt committed
      When sending a UDP query (resquery_send) we first issue an asynchronous
      isc_socket_connect and increment query->connects, then call
      isc_socket_sendto2 and increment query->sends.
      If we happen to cancel this query (fctx_cancelquery) we need to cancel
      all operations we might have issued on this socket. Under very high
      load, the callback from isc_socket_connect (resquery_udpconnected)
      might not have fired yet. In that case we only cancel the CONNECT
      event on the socket and ignore the SEND still pending there (because
      the two cases are joined by an `else if`).
      We then call dns_dispatch_removeresponse, which kills the dispatcher
      socket and calls isc_socket_close - but if the system is under very
      high load, the send we issued earlier might still be incomplete, so
      we trigger an assertion failure by trying to close a socket that is
      still in use.
      The fix is to always check whether there are incomplete sends on the
      socket and cancel them if there are.
      (cherry picked from commit 56183a39)
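      The `else if` problem described above can be sketched with a tiny
      model (plain shell standing in for the C code; the counters and the
      function name are illustrative only, not BIND's actual API):

```shell
#!/bin/sh
# Illustrative model only: counters stand in for query->connects and
# query->sends; "cancelling" just zeroes them, as the completion events
# issued by isc_socket_cancel eventually would.
connects=1   # asynchronous isc_socket_connect was issued
sends=1      # isc_socket_sendto2 was issued as well

cancel_query() {
    if [ "$connects" -gt 0 ]; then
        connects=0               # cancel the CONNECT event
    fi
    # The fix: this is no longer an "else if" - pending sends are always
    # cancelled too, so the dispatcher can safely close the socket later.
    if [ "$sends" -gt 0 ]; then
        sends=0                  # cancel the SEND event
    fi
}

cancel_query
if [ "$sends" -eq 0 ]; then
    echo "socket idle: safe to close"
else
    echo "ASSERTION: closing socket with pending send"
fi
```

      With the original `else if`, the SEND would survive cancellation
      whenever the CONNECT callback had not yet fired, reproducing the
      assertion on close.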
    • Merge branch 'michal/silence-a-perl-warning-output-by-stop.pl-v9_11' into 'v9_11' · 369f3c39
      Michał Kępień authored
      [v9_11] Silence a Perl warning output by stop.pl
      See merge request !1670
    • Silence a Perl warning output by stop.pl · 42a210b7
      Michał Kępień authored
      On Unix systems, the CYGWIN environment variable is not set at all when
      BIND system tests are run.  If a named instance crashes on shutdown or
      otherwise fails to clean up its pidfile and the CYGWIN environment
      variable is not set, stop.pl will print an uninitialized value warning
      on standard error.  Prevent this by using defined().
      (cherry picked from commit 91e5a99b)
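      The same class of warning exists in shell scripts that read optional
      environment variables. The Perl fix wraps the access in defined();
      the rough shell equivalent uses a default in the parameter expansion
      (an analogy only, not the stop.pl code itself):

```shell
#!/bin/sh
set -u            # like Perl's warnings, "set -u" trips on unset variables
unset CYGWIN      # make the demo deterministic

# Accessing $CYGWIN directly would abort under "set -u" when it is unset;
# ${CYGWIN:-} substitutes an empty string instead, mirroring the
# defined() guard added to stop.pl.
if [ -n "${CYGWIN:-}" ]; then
    msg="running under Cygwin"
else
    msg="CYGWIN not set"
fi
echo "$msg"
```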
    • Merge branch 'ifconfig.sh-anywhere-v9_11' into 'v9_11' · e57796dd
      Mark Andrews authored
      Allow ifconfig to be called from any directory
      See merge request !1668
    • Allow ifconfig to be called from any directory · 1f32ad60
      Petr Menšík authored and Mark Andrews committed
      ifconfig.sh depends on config.guess to identify the platform; it uses
      the result to choose between the ifconfig and ip tools when
      configuring interfaces. If no local copy of config.guess is found but
      a system-wide automake copy is installed, use that for the platform
      guess. This should work on almost any sane platform: a local copy is
      still preferred, but the script no longer fails when it cannot find
      one.
      (cherry picked from commit 38301052)
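      The lookup order described above can be sketched as follows (the
      fallback path is an assumption - many distributions install
      config.guess under a versioned /usr/share/automake-* directory; the
      real ifconfig.sh may differ):

```shell
#!/bin/sh
# Prefer an in-tree config.guess; fall back to a system-wide copy
# shipped with automake. Echoes the path found, or fails.
find_config_guess() {
    dir=$1
    if [ -x "$dir/config.guess" ]; then
        echo "$dir/config.guess"
        return 0
    fi
    for f in /usr/share/automake-*/config.guess; do
        if [ -x "$f" ]; then
            echo "$f"
            return 0
        fi
    done
    return 1
}

# Demo against a temporary directory containing a local copy.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho demo-triplet\n' > "$tmp/config.guess"
chmod +x "$tmp/config.guess"
found=$(find_config_guess "$tmp")
rm -rf "$tmp"
echo "$found"
```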
  4. 11 Mar, 2019 12 commits
    • Merge branch '892-fix-redirect-name-v9_11' into 'v9_11' · a4fef634
      Evan Hunt authored
      use qname in redirect2
      See merge request !1664
    • add CHANGES · 6115670b
      Mark Andrews authored and Evan Hunt committed
      (cherry picked from commit ad785e4f)
    • use client->query.qname · 93ee793d
      Mark Andrews authored and Evan Hunt committed
      (cherry picked from commit 8758d36a)
    • Merge branch 'michal/stabilize-the-gost-system-test' into 'v9_11' · 64d16586
      Michał Kępień authored
      Stabilize the "gost" system test
      See merge request !1642
    • Stabilize the "gost" system test · 170cb442
      Michał Kępień authored
      In the "gost" system test, the ./NS RRset returned in the response to
      ns2's priming query might not yet be validated when ns2 assembles the
      response to the ./SOA query.  If that happens, the ./NS RRset will not
      be placed in the AUTHORITY section of the response to the ./SOA query,
      triggering a false positive for the "gost" system test as the ./NS RRset
      is always present in the response sent by ns1 (since it is authoritative
      for the root zone).  As the purpose of the "gost" system test is to
      check whether a zone signed using GOST is properly validated and only
      positive responses are inspected, use the +noauth dig option for all
      queries in that test, so that the contents of the AUTHORITY section do
      not influence its outcome.
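      The commit achieves this with dig's +noauth option, which suppresses
      the AUTHORITY section in dig's output. An alternative way to make
      such comparisons insensitive to that section is to filter it out of
      dig-style output before comparing (a sketch, not the test's actual
      code):

```shell
#!/bin/sh
# Keep only the ANSWER section of dig-style output.
answer_section() {
    awk '/^;; ANSWER SECTION:/ { keep = 1; next }
         /^;; (AUTHORITY|ADDITIONAL) SECTION:/ { keep = 0 }
         keep && NF' "$@"
}

# Demo on canned output resembling a ./SOA response.
filtered=$(answer_section <<'EOF'
;; ANSWER SECTION:
.   300 IN SOA . . 2019031200 1800 900 604800 86400
;; AUTHORITY SECTION:
.   300 IN NS ns1.
EOF
)
echo "$filtered"
```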
    • Merge branch '928-stabilize-delzsk.example-zone-checks-v9_11' into 'v9_11' · 23435c42
      Michał Kępień authored
      [v9_11] Stabilize "delzsk.example" zone checks
      See merge request !1659
    • Stabilize "delzsk.example" zone checks · 780e1134
      Michał Kępień authored
      When a zone is converted from NSEC to NSEC3, the private record at zone
      apex indicating that NSEC3 chain creation is in progress may be removed
      during a different (later) zone_nsec3chain() call than the one which
      adds the NSEC3PARAM record.  The "delzsk.example" zone check only waits
      for the NSEC3PARAM record to start appearing in dig output while private
      records at zone apex directly affect "rndc signing -list" output.  This
      may trigger false positives for the "autosign" system test as the output
      of the "rndc signing -list" command used for checking ZSK deletion
      progress may contain extra lines which are not accounted for.  Ensure
      the private record is removed from zone apex before triggering ZSK
      deletion in the aforementioned check.
      Also future-proof the ZSK deletion progress check by making it only look
      at lines it should care about.
      (cherry picked from commit e02de04e)
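      A wait of the kind the fix relies on can be sketched like this (the
      rndc_signing_list helper and its canned output are placeholders; the
      real test polls the actual "rndc signing -list" command with its own
      retry helper):

```shell
#!/bin/sh
# Poll until the signing status no longer reports an in-progress NSEC3
# chain; only then is it safe to trigger ZSK deletion.
rndc_signing_list() {
    # Canned output for the demo: chain creation already finished.
    echo "Done signing with key 12345/RSASHA256"
}

tries=0
while rndc_signing_list | grep -q "Creating NSEC3 chain"; do
    tries=$((tries + 1))
    [ "$tries" -ge 10 ] && { echo "timed out"; exit 1; }
    sleep 1
done
echo "private record gone: proceeding with ZSK deletion"
```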
    • Merge branch '129-dnssec-system-test-tweaks-v9_11' into 'v9_11' · 08713b33
      Michał Kępień authored
      [v9_11] "dnssec" system test tweaks
      See merge request !1657
    • ${ttl} must exist and be non-null · e6718cf4
      Mark Andrews authored and Michał Kępień committed
      (cherry picked from commit dee1f1a4)
    • Make ANSWER TTL capping checks stricter · 7656e743
      Michał Kępień authored
      For checks querying a named instance with "dnssec-accept-expired yes;"
      set, authoritative responses have a TTL of 300 seconds.  Assuming empty
      resolver cache, TTLs of RRsets in the ANSWER section of the first
      response to a given query will always match their authoritative
      counterparts.  Also note that for a DNSSEC-validating named resolver,
      validated RRsets replace any existing non-validated RRsets with the same
      owner name and type, e.g. cached from responses received while resolving
      CD=1 queries.  Since TTL capping happens before a validated RRset is
      inserted into the cache, since RRSIG expiry time does not impose an
      upper TTL bound when "dnssec-accept-expired yes;" is set, and since,
      as pointed out above, the original TTLs of the relevant RRsets equal
      300 seconds, the TTLs of the RRsets in the ANSWER section of the
      responses to expiring.example/SOA and expired.example/SOA queries
      sent with CD=0 should always be exactly 120 seconds, never a lower
      value.  Make the relevant TTL checks stricter to reflect that.
      (cherry picked from commit a85cc414)
    • Relax ADDITIONAL TTL capping checks · bacbe3a5
      Michał Kępień authored
      Always expecting a TTL of exactly 300 seconds for RRsets found in the
      ADDITIONAL section of responses received for CD=1 queries sent during
      TTL capping checks is too strict, since these responses contain
      records cached from multiple DNS messages received during resolution.
      In responses to queries sent with CD=1, ns.expiring.example/A in the
      ADDITIONAL section will come from a delegation returned by ns2 while
      the ANSWER section will come from an authoritative answer returned by
      ns3. If the queries to ns2 and ns3 happen at different Unix
      timestamps, RRsets cached from the older response will have a
      different TTL by the time they are returned to dig, triggering a
      false positive.
      Allow a safety margin of 60 seconds for checks inspecting the
      ADDITIONAL section of responses to queries sent with CD=1 to fix the
      issue. A safety margin this large is likely overkill, but it is used
      nevertheless for consistency with similar safety margins used in
      other TTL capping checks.
      (cherry picked from commit 8baf8590)
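      The relaxed check amounts to accepting any TTL within the safety
      margin instead of one exact value (a sketch; the function and
      variable names are illustrative of the 300-second TTL and 60-second
      margin described above):

```shell
#!/bin/sh
# Accept a TTL within a 60-second safety margin below the original
# 300-second TTL, rather than requiring exactly 300.
original_ttl=300
margin=60

check_ttl() {
    ttl=$1
    if [ "$ttl" -le "$original_ttl" ] && \
       [ "$ttl" -ge $((original_ttl - margin)) ]; then
        echo "ttl $ttl: ok"
    else
        echo "ttl $ttl: out of range"
    fi
}

check_ttl 300   # exact match still passes
check_ttl 285   # cached slightly earlier: passes thanks to the margin
check_ttl 120   # far below the margin: fails this particular check
```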
    • Fix NTA-related races · 38da4bdf
      Michał Kępień authored
      Changes introduced by commit 6b8e4d6e
      were incomplete, as not all time-sensitive checks were updated to
      match the revised "nta-lifetime" and "nta-recheck" values.  Prevent
      rare false positives by updating all NTA-related checks so that they
      work reliably with "nta-lifetime 12s;" and "nta-recheck 9s;".  Update
      the comments as well to prevent confusion.
      (cherry picked from commit 9a36a1bb)
  5. 08 Mar, 2019 9 commits