1. 04 Aug, 2020 3 commits
  2. 03 Aug, 2020 5 commits
  3. 31 Jul, 2020 9 commits
  4. 30 Jul, 2020 14 commits
    • Merge branch 'mnowak/various-system-test-fixes-v9_16' into 'v9_16' · 2fa1c953
      Michal Nowak authored
      [v9_16] Various system test fixes
      See merge request !3898
    • Remove cross-test dependency on ckdnsrps.sh · 0f319908
      Michal Nowak authored
    • Merge branch '1775-resizing-growing-of-cache-hash-tables-causes-delays-in-processing-of-client-queries-v9_16' into 'v9_16' · d8f7f0e7
      Ondřej Surý authored
      Resolve "Resizing (growing) of cache hash tables causes delays in processing of client queries"
      See merge request !3871
    • Add CHANGES and release note for #1775 · 34333041
      Ondřej Surý authored
      (cherry picked from commit 2b4f0f03)
    • Change the dns_name hashing to use 32-bit values · 0fff3008
      Ondřej Surý authored
      Change the dns_name_hash() and dns_name_fullhash() functions to use
      isc_hash32(), as the maximum hashtable size in rbt is 0..UINT32_MAX.
      (cherry picked from commit a9182c89)
    • Add isc_hash32() and rename isc_hash_function() to isc_hash64() · ebb2b055
      Ondřej Surý authored
      As the names suggest, the original isc_hash64() function returns 64-bit
      hash values and isc_hash32() returns 32-bit values.
      (cherry picked from commit f59fd49f)
    • Add HalfSipHash 2-4 reference implementation · 1e5df7f3
      Ondřej Surý authored
      The HalfSipHash implementation operates on 32-bit words and returns
      32-bit hash values.
      (cherry picked from commit 344d66aa)
    • Remove OpenSSL based SipHash 2-4 implementation · d89eb403
      Ondřej Surý authored
      Creation of EVP_MD_CTX and EVP_PKEY is quite expensive, so until we fix
      the code to reuse the OpenSSL contexts and keys, we'll use our own
      implementation of SipHash instead of trying to integrate with OpenSSL.
      (cherry picked from commit 21d751df)
    • Fix the rbt hashtable and grow it when setting max-cache-size · aa72c314
      Ondřej Surý authored
      There were several problems with rbt hashtable implementation:
      1. Our internal hashing function returns a uint64_t value, but it was
         silently truncated to unsigned int in the dns_name_hash() and
         dns_name_fullhash() functions.  As the higher bits of the SipHash 2-4
         output are more random, we need to use the upper half of the return
         value.
      2. The hashtable implementation in rbt.c used modulo to pick the slot
         number in the hash table.  This has several problems, because modulo
         is: a) slow, and b) oblivious to patterns in the input data.  This
         could lead to a very uneven distribution of the hashed data in the
         hashtable.  Combined with the singly-linked lists we use, it could
         really bog down lookup and removal of nodes from the rbt tree[a].
         Fibonacci Hashing is a much better fit for the hashtable function
         here.  For a longer description, read "Fibonacci Hashing: The
         Optimization that the World Forgot"[b] or just look at the Linux
         kernel.  Also, this will make Diego very happy :).
      3. The hashtable would rehash every time the number of nodes in the rbt
         tree exceeded 3 * (hashtable size).  The overcommit makes the uneven
         distribution in the hashtable even worse, but the main problem lies
         in the rehashing itself: every time the database grows beyond the
         limit, each subsequent rehash becomes much slower.  The mitigation
         here is to let the rbt know how big the cache can grow and
         pre-allocate a hashtable big enough that it never actually needs to
         rehash.  This consumes more memory at the start, but since the size
         of the hashtable is capped at `1 << 32` slots (roughly 4 billion
         entries), it will consume at most 32 GB of memory for the hashtable
         in the worst case (and max-cache-size would need to be set to more
         than 4 TB).  Calling dns_db_adjusthashsize() will also cap the
         maximum size of the hashtable to the pre-computed number of bits, so
         it won't try to consume more gigabytes of memory than are available
         for the cache.
         FIXME: What is the average size of the rbt node that gets hashed?  I
         chose the pagesize (4k) as the initial value to precompute the size
         of the hashtable, but that value is based on feeling rather than any
         real data.
      For future work, there are more places where we use the result of a
      hash value modulo some small number; those would also benefit from
      Fibonacci Hashing for better distribution.
      a. A doubly linked list should be used here to speed up the removal of
         entries from the hashtable.
      b. https://probablydance.com/2018/06/16/fibonacci-hashing-the-optimization-that-the-world-forgot-or-a-better-alternative-to-integer-modulo/
      (cherry picked from commit e24bc324)
    • Merge branch '2024-fix-idle-timeout-for-connected-tcp-sockets-v9_16' into 'v9_16' · 57b29d89
      Michał Kępień authored
      [v9_16] Fix idle timeout for connected TCP sockets
      See merge request !3896
    • Add CHANGES for GL #2024 · 8b301450
      Michał Kępień authored
      (cherry picked from commit 18efb245)
    • Fix idle timeout for connected TCP sockets · b6c33087
      Michał Kępień authored
      When named acting as a resolver connects to an authoritative server over
      TCP, it sets the idle timeout for that connection to 20 seconds.  This
      fixed timeout was picked back when the default processing timeout for
      each client query was hardcoded to 30 seconds.  Commit
      000a8970 made this processing timeout
      configurable through "resolver-query-timeout" and decreased its default
      value to 10 seconds, but the idle TCP timeout was not adjusted to
      reflect that change.  As a result, with the current defaults in effect,
      a single hung TCP connection will consistently cause the resolution
      process for a given query to time out.
      Set the idle timeout for connected TCP sockets to half of the client
      query processing timeout configured for a resolver.  This allows named
      to handle hung TCP connections more robustly and prevents the timeout
      mismatch issue from resurfacing in the future if the default is ever
      changed again.
      (cherry picked from commit 953d704b)
  5. 28 Jul, 2020 3 commits
  6. 27 Jul, 2020 3 commits
    • Add CHANGES entry · 31af3af5
      Diego dos Santos Fronza authored
    • Fix rpz wildcard name matching · a8ce7b46
      Diego dos Santos Fronza authored
      Whenever an exact match is found by dns_rbt_findnode(), the highest
      level node in the chain is not put into the chain->levels[] array;
      instead, the chain->end pointer is adjusted to point to that node.
      Suppose we have the following entries in a rpz zone:
      example.com     CNAME rpz-passthru.
      *.example.com   CNAME rpz-passthru.
      A query for www.example.com would result in the
      following chain object returned by dns_rbt_findnode():
      chain->level_count = 2
      chain->level_matches = 2
      chain->levels[0] = .
      chain->levels[1] = example.com
      chain->levels[2] = NULL
      chain->end = www
      Since exact matches only care about testing rpz set bits, we need to
      test for rpz wild bits by iterating the nodechain, and that includes
      testing the rpz wild bits in the highest level node found.  In the case
      of an exact match, chain->levels[chain->level_matches] will be NULL; to
      address that, we must use chain->end as the starting point and then
      iterate over the remaining levels in the chain.
  7. 24 Jul, 2020 3 commits