
On 12/24/19 4:26 PM, D. Hugh Redelmeier via talk wrote:
I should have pointed out how the problem differs for recursive and non-recursive name servers.
Recursive name servers take any query from their client and get an answer if possible. That's what your ISP provides you. The "recursive" term is used because the name server may have to issue a bunch of queries of its own to get that answer.
Non-recursive name servers are typically authoritative for some domain(s) and only answer queries for those domains. The root name servers (authoritative for .) and the TLD name servers are non-recursive.
Queries to non-recursive servers require no persistent state. The query comes in; the information for the answer comes from an internal table; the answer is returned to the querier. Not having state reduces the cost.
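To make the statelessness concrete, here is a minimal sketch (in C, with the lookup and DNS wire-format handling stubbed out; the port and buffer size are only illustrative) of that receive/answer/forget loop:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in me = { 0 };

        me.sin_family = AF_INET;
        me.sin_port = htons(5353);                 /* illustrative test port */
        me.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        if (s < 0 || bind(s, (struct sockaddr *)&me, sizeof me) < 0) {
            perror("socket/bind");
            return 1;
        }
        for (;;) {
            unsigned char buf[512];                /* classic DNS-over-UDP limit */
            struct sockaddr_in client;
            socklen_t clen = sizeof client;
            ssize_t n = recvfrom(s, buf, sizeof buf, 0,
                                 (struct sockaddr *)&client, &clen);

            if (n <= 0)
                continue;
            /* ... parse the question, look it up in an in-memory table,
               and overwrite buf with the answer (all omitted here) ... */
            sendto(s, buf, (size_t)n, 0, (struct sockaddr *)&client, clen);
            /* Nothing about this client or query is remembered. */
        }
    }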
As soon as you use TCP instead of UDP, some state is required. These days many answers are too long for a UDP packet so TCP must be used.
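The switch is visible in the reply itself: when the answer doesn't fit in the UDP response, the server sets the TC (truncated) bit and the client is expected to retry the same query over TCP. A minimal check of that bit, assuming `reply` points at a raw DNS message (12-byte header per RFC 1035):

    #include <stddef.h>

    /* Return nonzero if a raw DNS reply has the TC (truncated) bit set,
       meaning the full answer did not fit in the UDP response and the
       client should repeat the query over TCP. */
    static int dns_reply_truncated(const unsigned char *reply, size_t len)
    {
        if (len < 12)                    /* shorter than the fixed header */
            return 0;
        return (reply[2] & 0x02) != 0;   /* TC is bit 0x02 of flag byte 2 */
    }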
If I remember correctly, DNSSEC is designed to avoid state for queries. But it generally forces TCP since signatures and keys are so bulky.
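For a sense of why: a client asks for the DNSSEC material by attaching an EDNS0 OPT pseudo-record with the DO ("DNSSEC OK") bit to its query, and the signatures that come back are what push answers past UDP sizes. A sketch of that 11-byte OPT record (RFC 6891 layout; the 4096-byte payload size here is just an example value):

    /* EDNS0 OPT pseudo-record appended to the additional section of a
       query, with the DO ("DNSSEC OK") bit set -- per RFC 6891. */
    static const unsigned char edns0_opt_do[] = {
        0x00,             /* NAME: root */
        0x00, 0x29,       /* TYPE: OPT (41) */
        0x10, 0x00,       /* CLASS: advertised UDP payload size (4096 here) */
        0x00,             /* extended RCODE */
        0x00,             /* EDNS version 0 */
        0x80, 0x00,       /* flags: DO bit set */
        0x00, 0x00        /* RDLENGTH: no options */
    };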
Using SSL also adds considerable state.
But the proposals for DNS over HTTPS are surely for recursive name servers.
Form follows funding.
Non-recursive servers are in theory funded by the domain owners. They are cheapskates.
Recursive name servers are generally provided by an entity funded by the clients. Like: an ISP's recursive DNS is for paying customers. If the load gets high, that means you have more customers (scaling funding) or you have misbehaving customers (fire them).
The weird case is no-charge recursive servers. Like Google's 8.8.8.8. And whoever is providing the DNS over HTTPS server for Mozilla. Those gift horses need their mouths examined.
| From: Nicholas Krause via talk <talk@gtalug.org>
| On 12/24/19 2:45 AM, D. Hugh Redelmeier via talk wrote:
| > | From: Nicholas Krause via talk <talk@gtalug.org>
| > We've been able to afford both of these for quite some time.
| Not 256-bit or higher, it seems, which is what matters. It's not the
| algorithm but the number of bits, and this goes for SHA as well.
(Hashes and symmetric ciphers are NOT a problem for DNS servers. The number of bits for the public-key and DH systems does matter. Bits of strength in the different systems are not commensurate.)
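For rough calibration, the usual (approximate, NIST SP 800-57 style) equivalences look something like this:

    symmetric    hash (collision)    RSA / DH modulus    EC key
    112 bits     SHA-224             2048 bits           224 bits
    128 bits     SHA-256             3072 bits           256 bits
    192 bits     SHA-384             7680 bits           384 bits
    256 bits     SHA-512             15360 bits          512 bits

So "256 bits" of RSA or DH is trivially weak, while 256-bit AES or a 256-bit EC key is very strong; the raw bit count only means something within one family.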
It all depends on your threat model. And what you think you can afford.
Giles pointed out that just protecting DNS does not solve the surveillance problem.
I raised the active Man-in-the-Middle attack. Partly because I think that it is more outrageously bad and partly because it can be solved more affordably.
But there is another dimension. Do you want to protect your queries indefinitely? Then the ciphers you pick had better be very, very, very strong. I don't care so much (the cost/benefit ratio is too high for me). Protecting them only for now means picking a cipher that is strong enough now. (A MITM attack mounted ten years from now on today's exchange isn't possible.)
So: 20 years ago, weaker ciphers were affordable and sufficient.
| > I don't know exactly what you mean by "memory sanitizing".
| >
| > If you mean detecting the use of the value of an uninitialized
| > variable, effective solutions have existed for over 50 years. Watfor
| > did this in the mid-1960s (originally by abusing a hardware feature
| > of the IBM 7040/44).
| No, this is what I've been talking about, but in hardware; the GCC side is
| implementing it as well:
| https://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
| It's more complex than only uninitialized variables; to my knowledge it's
| similar to Valgrind or other memory-checking tools, but in hardware.
| Estimated overhead is around 5 to 10 percent, compared to the 2 to 3x
| slowdown of Valgrind, depending on the program.
Ahh. "Address Sanitizer", not "Memory Sanitizer". But in either case, not a good name.
I don't have much sympathy for hardware additions to make up for programming language deficiencies.
When testing C code, I often use libefence (Electric Fence). It uses the existing x86 MMU to catch the errors that would be caught by the proposed hardware. For my programs, the cost is quite affordable, but this isn't true for all programs.

---

That's the idea, but it's done in the compiler rather than by a separate tool, so it has possible benefits, including the cost advantage you mentioned.
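For concreteness, the kind of bug all of these tools (Electric Fence, AddressSanitizer, and the hardware-assisted variant) are after is an out-of-bounds heap access. A minimal sketch, assuming a build along the lines of `clang -g -fsanitize=address overflow.c` or linking against -lefence:

    #include <stdlib.h>

    int main(void)
    {
        char *p = malloc(16);

        if (p == NULL)
            return 1;
        p[16] = 'x';    /* one byte past the end of the allocation:
                           AddressSanitizer reports a heap-buffer-overflow;
                           Electric Fence typically takes a SIGSEGV on its
                           guard page */
        free(p);
        return 0;
    }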
Nick
Post to this mailing list: talk@gtalug.org
Unsubscribe from this mailing list: https://gtalug.org/mailman/listinfo/talk