Fix the rbt hashtable and grow it when setting max-cache-size
There were several problems with rbt hashtable implementation:

Our internal hashing function returns a uint64_t value, but it was silently truncated to an unsigned int in the dns_name_hash() and dns_name_fullhash() functions. As the SipHash 2-4 higher bits are more random, we need to use the upper half of the return value instead.

The hashtable implementation in rbt.c was using modulo to pick the slot number for the hash table. This has several problems because modulo is: a) slow, b) oblivious to patterns in the input data. This could lead to a very uneven distribution of the hashed data in the hashtable. Combined with the singly-linked lists we use, it could really bog down the lookup and removal of nodes from the rbt tree[a]. Fibonacci Hashing is a much better fit for the hashtable function here. For a longer description, read "Fibonacci Hashing: The Optimization that the World Forgot"[b] or just look at the Linux kernel. Also this will make Diego very happy :).

The hashtable would rehash every time the number of nodes in the rbt tree exceeded 3 * (hashtable size). The overcommit makes the uneven distribution in the hashtable even worse, but the main problem lies in the rehashing itself: every time the database grows beyond the limit, each subsequent rehashing will be much slower. The mitigation here is letting the rbt know how big the cache can grow and preallocating the hashtable to be big enough to never need to rehash. This will consume more memory at the start, but since the size of the hashtable is capped at 1 << 31 (i.e. roughly 2 billion entries), it will consume at most 16GB of memory for the hashtable in the worst case. Calling dns_db_adjusthashsize() will also cap the maximum size of the hashtable to the precomputed number of bits, so it won't try to consume more gigabytes of memory than are available for the database.
For future work, there are more places where we take the result of the hash value modulo some small number; those would also benefit from Fibonacci Hashing for better distribution.
Notes:
a. A doubly-linked list should be used here to speed up the removal of entries from the hashtable.
b. https://probablydance.com/2018/06/16/fibonacci-hashing-the-optimization-that-the-world-forgot-or-a-better-alternative-to-integer-modulo/
Closes #1775