1. 06 Apr, 2019 17 commits
    • a3721741
      Tinderbox User authored
    • doc rebuild · b4411520
      Tinderbox User authored
    • prep 9.14.1 · c7004347
      Tinderbox User authored
    • Merge branch 'security-tcp-client-crash-security-v9_14' into 'security-v9_14' · 36001c44
      Evan Hunt authored
      Address a potential overrun in tcp-clients that could exhaust resources
      
      See merge request isc-private/bind9!72
    • CHANGES, release note · 79fad84b
      Evan Hunt authored
      (cherry picked from commit 244e44af432121a05e0a308b7ccce96a8ecd28ab)
    • restore allowance for tcp-clients < interfaces · cae79e1b
      Evan Hunt authored
      in the "refactor tcpquota and pipeline refs" commit, the counting
      of active interfaces was tightened in such a way that named could
      fail to listen on an interface if there were more interfaces than
      tcp-clients. when checking the quota before starting to accept on
      an interface, a nonzero count of active clients was taken to mean
      that some other client could handle accepting new connections.
      this, however, ignored the fact that the current client could
      itself be included in that count, so if the quota was already
      exceeded before all the interfaces were listening, some
      interfaces would never listen.
      
      we now check whether the current client has been marked active; if so,
      then the number of active clients on the interface must be greater
      than 1, not 0.
      
      (cherry picked from commit 02365b87ea0b1ea5ea8b17376f6734c811c95e61)
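The corrected check can be sketched as a small model. The function and parameter names below are illustrative, not the actual named source: the point is that if the current client is itself counted among the active clients, "another client exists" requires a count greater than 1, not 0.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Illustrative model of the fix (hypothetical names, not the real
 * BIND 9 symbols): before skipping accept on an interface because
 * "someone else is already listening", account for whether the current
 * client is itself included in the active count.
 */
static bool
another_client_is_active(int nactive, bool self_counted) {
	/*
	 * If this client contributes to nactive, another client exists
	 * only when the count exceeds 1, not 0.
	 */
	return (nactive > (self_counted ? 1 : 0));
}
```

With the old threshold of 0, a lone client that was already counted would wrongly conclude another listener existed, and its interface would never start listening.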
    • refactor tcpquota and pipeline refs; allow special-case overrun in isc_quota · 4a8fc979
      Evan Hunt authored
      - if the TCP quota has been exceeded but there are no clients listening
        for new connections on the interface, we can now force attachment to the
        quota using isc_quota_force(), instead of carrying on with the quota not
        attached.
      - the TCP client quota is now referenced via a reference-counted
        'ns_tcpconn' object, one of which is created whenever a client begins
        listening for new connections, and attached to by members of that
        client's pipeline group. when the last reference to the tcpconn
        object is detached, it is freed and the TCP quota slot is released.
      - reduce code duplication by adding mark_tcp_active() function
      - convert counters to stdatomic
      
      (cherry picked from commit a8dd133d270873b736c1be9bf50ebaa074f5b38f)
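The reference-counting scheme described above can be sketched as follows. This is a hypothetical stand-in, not the real ns_tcpconn API: a plain int counter models the isc_quota slot, and the helper names are invented for illustration.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdlib.h>

/*
 * Illustrative sketch of a reference-counted connection object like
 * the one the commit describes; 'tcpconn_t' and these helpers are
 * hypothetical, and an int counter stands in for the isc_quota slot.
 */
typedef struct tcpconn {
	atomic_uint refs;	/* held by pipeline-group members */
	int *quota;		/* stand-in for the attached TCP quota */
} tcpconn_t;

static tcpconn_t *
tcpconn_new(int *quota) {
	tcpconn_t *c = malloc(sizeof(*c));
	atomic_init(&c->refs, 1);
	c->quota = quota;
	(*quota)++;		/* take one TCP quota slot */
	return (c);
}

static void
tcpconn_attach(tcpconn_t *c) {
	atomic_fetch_add(&c->refs, 1);
}

static void
tcpconn_detach(tcpconn_t **cp) {
	tcpconn_t *c = *cp;
	*cp = NULL;
	if (atomic_fetch_sub(&c->refs, 1) == 1) {
		(*c->quota)--;	/* last reference: release the slot */
		free(c);
	}
}
```

The design point is that the quota slot is tied to the object's lifetime, not to any individual client, so members of a pipeline group can come and go without double-releasing the slot.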
    • better tcpquota accounting and client mortality checks · 08968412
      Evan Hunt authored
      - ensure that tcpactive is cleaned up correctly when accept() fails.
      - set 'client->tcpattached' when the client is attached to the tcpquota.
        carry this value on to new clients sharing the same pipeline group.
        don't call isc_quota_detach() on the tcpquota unless tcpattached is
        set.  this way clients that were allowed to accept TCP connections
        despite being over quota (and therefore, were never attached to the
        quota) will not inadvertently detach from it and mess up the
        accounting.
      - simplify the code for tcpquota disconnection by using a new function
        tcpquota_disconnect().
      - before deciding whether to reject a new connection due to quota
        exhaustion, check to see whether there are at least two active
        clients. previously, this was "at least one", but that could be
        insufficient if there was one other client in READING state (waiting
        for messages on an open connection) but none in READY (listening
        for new connections).
      - before deciding whether a TCP client object can go inactive, we
        must ensure there are enough other clients to maintain service
        afterward -- both accepting new connections and reading/processing new
        queries.  A TCP client can't shut down unless at least one
        client is accepting new connections and (in the case of pipelined
        clients) at least one additional client is waiting to read.
      
      (cherry picked from commit 427a2fb4d17bc04ca3262f58a9dcf5c93fc6d33e)
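The mortality check in the last bullet can be sketched as a small predicate. The names below are illustrative, not the real code: a client may shut down only if at least one other client keeps accepting, and, for pipelined clients, at least one other keeps reading.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the mortality check described above; the
 * function and parameter names are illustrative, not BIND 9's.
 * naccepting/nreading count the *other* clients on the interface.
 */
static bool
client_can_go_inactive(int naccepting, int nreading, bool pipelined) {
	if (naccepting < 1) {
		return (false);	/* someone must keep accepting */
	}
	if (pipelined && nreading < 1) {
		return (false);	/* pipelined: someone must keep reading */
	}
	return (true);
}
```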
    • use reference counter for pipeline groups (v3) · 22111202
      Michał Kępień authored
      Track pipeline groups using a shared reference counter
      instead of a linked list.
      
      (cherry picked from commit 31f392db20207a1b05d6286c3c56f76c8d69e574)
    • tcp-clients could still be exceeded (v2) · d7e84cee
      Witold Krecicki authored
      the TCP client quota could still be ineffective under some
      circumstances.  this change:
      
      - improves quota accounting to ensure that TCP clients are
        properly limited, while still guaranteeing that at least one client
        is always available to serve TCP connections on each interface.
      - uses more descriptive names and removes one (ntcptarget) that
        was no longer needed
      - adds comments
      
      (cherry picked from commit 9e74969f85329fe26df2fad390468715215e2edd)
    • Merge branch '880-secure-asdfasdfasdf-abacadabra-crash-security-v9_14' into 'security-v9_14' · 94ba15ea
      Evan Hunt authored
      Fix nxdomain-redirect crash when recursive query results in ncache nxdomain
      
      See merge request isc-private/bind9!83
    • fix enforcement of tcp-clients (v1) · 9e7617cc
      Witold Krecicki authored
      tcp-clients settings could be exceeded in some cases by creating
      more and more active TCP clients over the configured quota limit,
      which could ultimately enable a DoS attack, e.g. through
      exhaustion of file descriptors.
      
      If the TCP client we're closing went over the quota (and is
      therefore not attached to a quota), mark it as mortal, so that it
      will be destroyed rather than set up to listen for new
      connections, unless it's the last client for a specific
      interface.
      
      (cherry picked from commit eafcff07c25bdbe038ae1e4b6660602a080b9395)
    • CHANGES, release note · 404be595
      Evan Hunt authored
      (cherry picked from commit ab5473007e91f011d003ff0ba5ab32fa0d56360c)
    • Fix nxdomain-redirect assertion failure · 486a2011
      Matthijs Mekking authored
      - Always set is_zonep in query_getdb; previously it was only set if
        result was ISC_R_SUCCESS or ISC_R_NOTFOUND.
      - Don't reset is_zone for redirect.
      - Style cleanup.
      
      (cherry picked from commit a85cc641d7a4c66cbde03cc4e31edc038a24df46)
    • Add test for nxdomain-redirect ncachenxdomain · 05d29443
      Matthijs Mekking authored
      (cherry picked from commit 2d65626630c19bb8159a025accb18e5179da5dc3)
    • Merge branch '973-pause-dbiterator-in-rpz-v9_14' into 'v9_14' · e5de594d
      Evan Hunt authored
      Fix deadlock in RPZ update code.
      
      See merge request isc-projects/bind9!1772
    • Fix deadlock in RPZ update code. · 6e63d704
      Witold Krecicki authored
      In dns_rpz_update_from_db we call setup_update, which creates
      the db iterator and calls dns_dbiterator_first. This unpauses
      the iterator and might cause db->tree_lock to be acquired. We
      then call isc_task_send(...) on an event to run update_quantum,
      which (correctly) calls dns_dbiterator_pause after each
      iteration and re-sends itself with isc_task_send.
      
      That's an obvious bug: we're holding a lock across an
      asynchronous task send. If a task requesting the write lock
      (e.g. prune_tree) is scheduled on the same worker queue as
      update_quantum but before it, it will wait for the write lock
      indefinitely, resulting in a deadlock.
      
      To fix it we have to pause the dbiterator in setup_update.
      
      (cherry picked from commit 06021b35)
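The fix can be illustrated with a minimal single-threaded model, assuming (as the commit describes) that an unpaused iterator holds the tree lock and pausing releases it. The struct and function names are stand-ins, not the real dns_dbiterator API.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal model of the fix (illustrative names only): an unpaused
 * iterator holds the tree lock, pausing releases it, and setup_update
 * must not hold the lock across the asynchronous task send that
 * schedules update_quantum.
 */
struct dbiter {
	bool holds_tree_lock;
};

static void
iter_first(struct dbiter *it) {
	it->holds_tree_lock = true;	/* seeking unpauses the iterator */
}

static void
iter_pause(struct dbiter *it) {
	it->holds_tree_lock = false;	/* pausing drops the tree lock */
}

static void
setup_update(struct dbiter *it) {
	iter_first(it);
	iter_pause(it);	/* the fix: release the lock before isc_task_send */
}
```

Without the final pause, setup_update would return with the lock held, and any writer queued on the same worker before the update event would block forever.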
  2. 03 Apr, 2019 5 commits
  3. 26 Mar, 2019 6 commits
  4. 22 Mar, 2019 12 commits