
Firefox now makes DNS-over-HTTPS available. I'm a big fan of security and privacy, but I'm struggling to see the gains here: we stop some hypothetical observer from finding out what domain name we're querying ... and then immediately turn around and ask that domain for a web page. You hid the destination in your first query ... only to immediately expose it with your next query.

I admit I'm thinking of our hypothetical adversary being at the ISP: they'll see both types of queries anyway. I suppose the argument can be made that an observer on the path to the DNS but not at the ISP has been stymied, but this seems ... lower value. Still, is that primarily what this will stop?

-- Giles https://www.gilesorr.com/ gilesorr@gmail.com

On 2019-12-23 10:04 AM, Giles Orr via talk wrote:
Firefox now makes DNS-over-HTTPS available. I'm a big fan of security and privacy, but I'm struggling to see the gains here: we stop some hypothetical observer from finding out what domain name we're querying ... and then immediately turn around and ask that domain for a web page. You hid the destination in your first query ... only to immediately expose it with your next query.
I admit I'm thinking of our hypothetical adversary being at the ISP: they'll see both types of queries anyway. I suppose the argument can be made that an observer on the path to the DNS but not at the ISP has been stymied, but this seems ... lower value. Still, is that primarily what this will stop?
I also wonder about that. I can understand DNSSEC, to prevent DNS hijacking, etc. Also, this means that TCP will be required, complete with the full SYN/ACK handshake, whereas DNS normally uses UDP.
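As a rough Python sketch of the difference (Quad9's 9.9.9.9 and Cloudflare's JSON DoH endpoint below are just example resolvers):

    # Classic DNS: one stateless UDP datagram each way.
    import json
    import socket
    import struct
    import urllib.request

    def udp_query(name, server="9.9.9.9"):
        # 12-byte header: ID, flags (RD set), QDCOUNT=1; then the question.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(p)]) + p.encode() for p in name.split(".")) + b"\x00"
        question = qname + struct.pack(">HH", 1, 1)   # QTYPE=A, QCLASS=IN
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(3)
        s.sendto(header + question, (server, 53))
        return s.recvfrom(512)[0]

    def doh_query(name):
        # The same question over HTTPS: TCP handshake plus TLS setup first.
        req = urllib.request.Request(
            "https://cloudflare-dns.com/dns-query?name=%s&type=A" % name,
            headers={"accept": "application/dns-json"})
        return json.load(urllib.request.urlopen(req))

    print(len(udp_query("example.com")), "bytes back over plain UDP")
    print(doh_query("example.com").get("Answer"))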

On 12/23/19 10:04 AM, Giles Orr via talk wrote:
Firefox now makes DNS-over-HTTPS available. I'm a big fan of security and privacy, but I'm struggling to see the gains here: we stop some hypothetical observer from finding out what domain name we're querying ... and then immediately turn around and ask that domain for a web page. You hid the destination in your first query ... only to immediately expose it with your next query.

That assumes a 1:1 relationship between the IP address and the domain name searched. Web servers now support the ability to have multiple domains on a single IP, even with HTTPS. So if you're using a proxy service like Cloudflare, it may be very difficult to know exactly what domain the request is going to.

I admit I'm thinking of our hypothetical adversary being at the ISP: they'll see both types of queries anyway. I suppose the argument can be made that an observer on the path to the DNS but not at the ISP has been stymied, but this seems ... lower value. Still, is that primarily what this will stop?
This will also make it harder for people who are on your wifi link to snoop on what you're trying to connect to. Still, any security enhancement is a security enhancement and makes it harder for others to steal your information, and generally that is a good thing.

-- Alvin Starr || land: (647)478-6285 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

On 2019-12-23 10:19 AM, Alvin Starr via talk wrote:
This will also make it harder for people who are on your wifi link to snoop on what you're trying to connect to. Still, any security enhancement is a security enhancement and makes it harder for others to steal your information, and generally that is a good thing.
Some people have other ideas: https://www.zdnet.com/article/dns-over-https-causes-more-problems-than-it-so...

On 12/23/19 10:24 AM, James Knott via talk wrote:
On 2019-12-23 10:19 AM, Alvin Starr via talk wrote:
This will also make it harder for people who are on your wifi link to snoop on what you're trying to connect to. Still, any security enhancement is a security enhancement and makes it harder for others to steal your information, and generally that is a good thing.
Some people have other ideas: https://www.zdnet.com/article/dns-over-https-causes-more-problems-than-it-so...
It's an interesting set of issues.
From a quick browse through the URL, the complaints seem to break into 2 categories:
- it makes tracking harder
- if not properly implemented, it provides no extra security.

Both things tend to be true of encryption technologies.

I am not sure I would be running out to implement DoH any time soon because it does not seem like a great value.

-- Alvin Starr || land: (647)478-6285 Netvel Inc. || Cell: (416)806-0133 alvin@netvel.net ||

On Mon, 23 Dec 2019 at 10:58, Alvin Starr via talk <talk@gtalug.org> wrote:
On 12/23/19 10:24 AM, James Knott via talk wrote:
On 2019-12-23 10:19 AM, Alvin Starr via talk wrote:
This will also make it harder for people who are on your wifi link to snoop on what you're trying to connect to. Still, any security enhancement is a security enhancement and makes it harder for others to steal your information, and generally that is a good thing.
Some people have other ideas: https://www.zdnet.com/article/dns-over-https-causes-more-problems-than-it-so...
It's an interesting set of issues.
From a quick browse through the URL, the complaints seem to break into 2 categories:
- it makes tracking harder
- if not properly implemented, it provides no extra security.
Both things tend to be true of encryption technologies.
I am not sure I would be running out to implement DoH any time soon because it does not seem like a great value.
I'm also not enthusiastic about taking DNS out of the hands of the operating system: not only does this break "do one thing and do it well" (although browsers did that long ago), it also means that if you have name resolution problems the solution becomes split on "is this in the browser or somewhere else?" It seems to me that this solution - if implemented at all, and it's sounding like a bad idea - should be done at the OS level, not the browser. I'm going to pass on this little development and see how it plays out ... Thanks everyone. -- Giles https://www.gilesorr.com/ gilesorr@gmail.com

On 12/23/19 1:37 PM, Giles Orr via talk wrote:
Both things tend to be true of encryption technologies.
I am not sure I would be running out to implement DoH any time soon because it does not seem like a great value.
I'm also not enthusiastic about taking DNS out of the hands of the operating system: not only does this break "do one thing and do it well" (although browsers did that long ago), it also means that if you have name resolution problems the solution becomes split on "is this in the browser or somewhere else?" It seems to me that this solution - if implemented at all, and it's sounding like a bad idea - should be done at the OS level, not the browser.
I've been using DoH since it showed up in Firefox Nightly. DoH can be set to fall back to an OS resolver in the event that the browser's resolvers are unavailable.

The value of DoH is in not letting ISPs or employers or parties x, y, and z track, monetize, and deanonymize DNS requests. For example: ISPs as resolvers can take DNS requests and sell that data on to a data broker to target ads, and no one is the wiser. Likewise sharing with law enforcement or government. Our ISPs are total black boxes when it comes to how they run, share, and monetize our DNS data.

Another example: employers can track browsing habits on networks using a VPN, DHCP, or a preconfigured resolver. The recent case of Kathryn Spiers at Google is roughly analogous. She made a browser extension to notify users about their rights, but I have no doubt that every Google employee's DNS queries to union-busting sites are logged and can be correlated if someone higher up decides to embark on further union-busting programs.

Then there are the countries with questionable human rights records that surveil their citizens, activists, journalists, etc.

I think that DNS is one of those things that we all take for granted and trust without realizing how easy it is to monitor, subvert/tamper with, monetize, and identify individuals with. I'm personally all for making surveillance capitalism incrementally more costly to the data brokers and ad networks out there. Moreover, tools like DoH that make privacy a default setting go at least some way to encouraging the idea that privacy online should be a fundamental right (which is admittedly a matter of personal belief, but I haven't come across a compelling argument to the contrary).

Cheers, Jamon
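P.S. For anyone who wants to try it: the behaviour is set from about:config. As far as I recall, the relevant prefs are these (mode 2 is the fallback behaviour I described; mode 0 leaves DoH off entirely):

    network.trr.mode  2    # resolve via DoH first, fall back to the OS resolver
    network.trr.mode  3    # DoH only, never fall back
    network.trr.uri   https://mozilla.cloudflare-dns.com/dns-query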

From a technical standpoint, I fully agree with Giles: let the browser render pages, and the OS do the resolution. This pattern of letting the browser do more and more OS tasks is awkward.

It has a couple of issues: as a new protocol, not everyone agrees with how things were done, Firefox forces it instead of letting the OS decide, things like that. As it is now, it's a mess.

On the other hand, I see governments fuming over DNS-over-HTTPS (DoH), and that alone makes me wonder why. The old "terrorists and pedophiles" label attached to it implies the government is losing access to something they want to have. As DoH uses the same port as HTTPS (443), it's more difficult to identify a DNS request among all the HTTPS traffic, and that does not happen with DNS-over-TLS (DoT).

For people in the "Free World," there's not much to fear in letting the ISP know the domains you browse, except more spam and directed ads. For people in the Chinese/Russian/Muslim bloc, a "restricted domain" can lead to trouble. With DoH in place, and a Cloudflare proxy for the "restricted domain", anyone can access anything, and the ISP/government only knows you are accessing one of the myriad of domains protected by Cloudflare. Or Akamai.

I would install a dns-proxy that receives plain old DNS queries and forwards them to a trusted DoH/DoT server somewhere else (a rough sketch of the idea follows after the quoted message below). So the OS would do the resolution, not my programs.

Mauro
http://mauro.limeiratem.com - registered Linux User: 294521
Scripture is both history, and a love letter from God.

On Mon, 23 Dec 2019 at 15:37, Giles Orr via talk <talk@gtalug.org> wrote:
On Mon, 23 Dec 2019 at 10:58, Alvin Starr via talk <talk@gtalug.org> wrote:
On 12/23/19 10:24 AM, James Knott via talk wrote:
On 2019-12-23 10:19 AM, Alvin Starr via talk wrote:
This will also make it harder for people who are on your wifi link to snoop on what you're trying to connect to. Still, any security enhancement is a security enhancement and makes it harder for others to steal your information, and generally that is a good thing.
Some people have other ideas:
https://www.zdnet.com/article/dns-over-https-causes-more-problems-than-it-so...
It's an interesting set of issues.
From a quick browse through the URL, the complaints seem to break into 2 categories:
- it makes tracking harder
- if not properly implemented, it provides no extra security.
Both things tend to be true of encryption technologies.
I am not sure I would be running out to implement DoH any time soon because it does not seem like a great value.
I'm also not enthusiastic about taking DNS out of the hands of the operating system: not only does this break "do one thing and do it well" (although browsers did that long ago), it also means that if you have name resolution problems the solution becomes split on "is this in the browser or somewhere else?" It seems to me that this solution - if implemented at all, and it's sounding like a bad idea - should be done at the OS level, not the browser.
I'm going to pass on this little development and see how it plays out ...
Thanks everyone.
-- Giles https://www.gilesorr.com/ gilesorr@gmail.com
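The sketch promised above: a minimal Python version of that dns-proxy, for the DoH case. Cloudflare's RFC 8484 endpoint is just an example upstream, and a real proxy would want caching, error handling, and TCP support:

    # Listen for plain old DNS on localhost and relay each raw
    # wire-format query to a DoH resolver (RFC 8484 POST).
    import socket
    import urllib.request

    UPSTREAM = "https://cloudflare-dns.com/dns-query"   # example DoH server

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", 5353))   # point resolv.conf or a forwarder here

    while True:
        query, client = sock.recvfrom(512)    # one classic UDP DNS query
        req = urllib.request.Request(
            UPSTREAM, data=query,             # POST body is the raw query
            headers={"content-type": "application/dns-message"})
        answer = urllib.request.urlopen(req).read()   # raw DNS answer back
        sock.sendto(answer, client)           # relay it; the IDs match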

| From: Giles Orr via talk <talk@gtalug.org>
| Firefox now makes DNS-over-HTTPS available. I'm a big fan of security
| and privacy, but I'm struggling to see the gains here: we stop some
| hypothetical observer from finding out what domain name we're querying
| ... and then immediately turn around and ask that domain for a web
| page. You hid the destination in your first query ... only to
| immediately expose it with your next query.

I've not thought much about this so I probably have missed some issues.

It is horrible that we haven't transitioned to DNSsec. DNS is the single worst technical weakness with no excuse. It's been 20 years of almost no adoption.

Without DNSsec, active man-in-the-middle (MITM) attacks are easy and undetected. By "active", I mean: modify the query or query results, not just observe them.

ISPs do active MITM attacks. Routers do them. Corporations do them for their employees. Governments do them. Invisibly.

I'd rather trust Mozilla than everyone on my query's route.

My network does not use outside recursive nameservers. I use my own in-house (literally) recursive nameserver. I could switch it to use HTTPS to talk to Mozilla's recursive nameserver if I chose to (it would involve a small matter of programming because the off-the-shelf programs don't currently support this).

The fact that Mozilla put the resolver in the browser is a little ugly but understandable:
- they control the browser
- they have to modify one piece of software, not try to push a new feature on unwilling maintainers of perhaps a dozen pieces of software, the most important of which are closed source.

This does not preclude those dozen pieces of software each adopting the new protocol. (I've been annoyed that so much of a resolver is embedded in glibc.)

================

Conceptually, I don't like Mozilla being a single point of attack. This solution really must be replaced by something better. But will it?

There is a chance that a make-do solution like this will de-motivate possible adoption of DNSsec. That would be bad.

================

HTTPS has more setup time than UDP or TCP. But a single HTTPS pipe between your browser and Mozilla can carry many queries and responses (I assume that DNS-over-HTTPS has been designed to support this).

================

My tentative conclusion is that this is a hack but it is a quick and simple way to somewhat better security. The slow but better ways are not working.
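For what it's worth, here's a rough stdlib-only Python sketch of that reuse (HTTP/1.1 keep-alive rather than the HTTP/2 a real DoH client would prefer, and Cloudflare's public resolver standing in for Mozilla's):

    # One TLS connection, many DNS queries: the TCP+TLS setup cost is
    # paid once; each lookup is then just another request on the pipe.
    import http.client
    import json

    conn = http.client.HTTPSConnection("cloudflare-dns.com")  # example resolver
    for name in ("example.com", "gtalug.org", "mozilla.org"):
        conn.request("GET", "/dns-query?name=%s&type=A" % name,
                     headers={"accept": "application/dns-json"})
        reply = json.loads(conn.getresponse().read())
        print(name, [a["data"] for a in reply.get("Answer", [])])
    conn.close()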

On 12/23/19 5:43 PM, D. Hugh Redelmeier via talk wrote:
| From: Giles Orr via talk <talk@gtalug.org>
| Firefox now makes DNS-over-HTTPS available. I'm a big fan of security
| and privacy, but I'm struggling to see the gains here: we stop some
| hypothetical observer from finding out what domain name we're querying
| ... and then immediately turn around and ask that domain for a web
| page. You hid the destination in your first query ... only to
| immediately expose it with your next query.
I've not thought much about this so I probably have missed some issues.
It is horrible that we haven't transitioned to DNSsec. DNS is the single worst technical weakness with no excuse. It's been 20 years of almost no adoption.

That's true. However, the question is perhaps whether the cost of SSL encryption on 20-year-old computers was too expensive. Actually, most modern CPUs have encryption instructions because of this. I'm not sure if SSL is as expensive as AES or SHA to compute, but maybe it was.
Also, encryption, or at least implementations of it in protocols, was not taken that seriously at the time. Perhaps society has changed too, which is good, but a lot of people forget that the technical is dependent on other factors. Just stating it's a weakness does not really matter; asking why I should care does. Look at IPv6, or hardware-supported memory sanitizing, as an example, which is only now being added to hardware even though the issue has been known for the last 5 years.
Without DNSsec, active man-in-the-middle (MITM) attacks are easy and undetected. By "active", I mean: modify the query or query results, not just observe them.
ISPs do active MITM attacks. Routers do them. Corporations do them for their employees. Governments do them. Invisibly.
I'd rather trust Mozilla than everyone on my query's route.
My network does not use outside recursive nameservers. I use my own in-house (literally) recursive nameserver. I could switch it to use HTTPS to talk to Mozilla's recursive nameserver if I chose to (it would involve a small matter of programming because the off-the-shelf programs don't currently support this).
The fact that Mozilla put the resolver in the browser is a little ugly but understandable.
- they control the browser
- they have to modify one piece of software, not try to push a new feature on unwilling maintainers of perhaps a dozen pieces of software, the most important of which are closed source.
This does not preclude those dozen pieces of software each adopting the new protocol.
(I've been annoyed that so much of a resolver is embedded in glibc.)

That makes sense to any systems programmer who requires nameservers or DNS. At this point not having it is like not having a sockets library, in my view. So the question is: why do you think it doesn't belong there, aside from glibc already being bloated in size compared to other C libraries like musl?
================
Conceptually, I don't like Mozilla being a single point of attack. This solution really must be replaced by something better. But will it?

Doubtful, probably due to security not being a "profit" to most companies, sadly. You could set it up on the root DNS servers and have a legacy version for non-DNSsec connections, though. Granted, this would require major agreement from all involved parties and basically require rewriting core parts of the Internet.
There is a chance that a make-do solution like this will de-motivate possible adoption of DNSsec. That would be bad.
================
HTTPS has more setup time than UDP or TCP. But a single HTTPS pipe between your browser and Mozilla can carry many queries and responses (I assume that DNS-over-HTTPS has been designed to support this).
================
My tentative conclusion is that this is a hack but it is a quick and simple way to somewhat better security. The slow but better ways are not working.

Cheers, Nick

| From: Nicholas Krause via talk <talk@gtalug.org>
| On 12/23/19 5:43 PM, D. Hugh Redelmeier via talk wrote:
| > It is horrible that we haven't transitioned to DNSsec. DNS is the
| > single worst technical weakness with no excuse. It's been 20 years of
| > almost no adoption.
| That's true. However, the question is perhaps whether the cost of
| SSL encryption on 20-year-old computers was too expensive.
| Actually, most modern CPUs have encryption instructions
| because of this. I'm not sure if SSL is as expensive as AES or SHA
| to compute, but maybe it was.

SSL is a protocol, not a cipher. It builds on cipher suites. The protocol enables negotiation of cipher suites.

The expensive part is setting up the trust:
- checking identity (e.g. with x.509 certificates)
- proving liveness (to prevent replay attacks)
- creating an ephemeral shared secret (using a Diffie-Hellman exchange)
- mitigating Denial of Service attacks.

Computing AES and other symmetric ciphers has been quite affordable all along, especially when you consider that DNS isn't transmitting very much data.

We've been able to afford both of these for quite some time.

The main cost would be borne by the core. That's where fan-in causes a larger load (each server handles a large number of clients). The core can surely afford to pay for the extra hardware. I imagine that it's a lot less than all the neural nets in the core that recognize speech for us.

| Also, encryption, or at least implementations of it in protocols,
| was not taken that seriously at the time. Perhaps society has
| changed too, which is good, but a lot of people forget that
| the technical is dependent on other factors. Just stating
| it's a weakness does not really matter; asking why I
| should care does.

I think that we are agreeing here.

This has always been important for the integrity of the internet. The cost is quite affordable. Not doing this is like leaving seat belts out of cars.

| Look at IPv6, or hardware-supported
| memory sanitizing, as an example, which is only now being
| added to hardware even though the issue has been
| known for the last 5 years.

I don't know exactly what you mean by "memory sanitizing".

If you mean detecting the use of the value of an uninitialized variable, effective solutions have existed for over 50 years. WATFOR did this in the mid-1960s (originally by abusing a hardware feature of the IBM 7040/44).

| > (I've been annoyed that so much of a resolver is embedded in glibc.)
| That makes sense to any systems programmer who
| requires nameservers or DNS. At this point not having it
| is like not having a sockets library, in my view. So the
| question is: why do you think it doesn't belong there, aside
| from glibc already being bloated in size compared to other
| C libraries like musl?

The resolver is best implemented in a separate process.

The way the resolver is implemented in glibc is blocking. This is fine for trivial programs. It forces more complicated programs to use threads for no good reason. A few little changes would allow an asynchronous interface.

| > Conceptually, I don't like Mozilla being a single point of attack.
| > This solution really must be replaced by something better. But will
| > it?
| Doubtful, probably due to security not being
| a "profit" to most companies, sadly.

The other side of security is fear. Much of our society is driven by fear. It isn't a small second-order effect.

| You could set it up on the
| root DNS servers and have a legacy version for non-DNSsec
| connections, though. Granted, this would require
| major agreement from all involved parties and basically
| require rewriting core parts of the Internet.

It's all agreed to already: DNSsec.
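Going back to the "expensive part" above, here is a toy sketch of the ephemeral-secret step in Python (deliberately tiny, insecure parameters; real TLS uses vetted groups or elliptic curves):

    # Toy Diffie-Hellman: the modular exponentiations below are the kind
    # of public-key work that dominates handshake cost, dwarfing the
    # per-packet symmetric-cipher work that follows.
    import random

    p = 2**127 - 1      # a Mersenne prime; toy-sized, NOT a real DH group
    g = 3

    a = random.randrange(2, p - 1)       # my ephemeral secret
    b = random.randrange(2, p - 1)       # peer's ephemeral secret
    A = pow(g, a, p)                     # public values, exchanged in the clear
    B = pow(g, b, p)
    assert pow(B, a, p) == pow(A, b, p)  # both ends derive the same secret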

On 12/24/19 2:45 AM, D. Hugh Redelmeier via talk wrote:
| From: Nicholas Krause via talk <talk@gtalug.org>
| On 12/23/19 5:43 PM, D. Hugh Redelmeier via talk wrote:
| > It is horrible that we haven't transitioned to DNSsec. DNS is the | > single worst technical weakness with no excuse. It's been 20 years of | > almost no adoption.
| That's true. However, the question is perhaps whether the cost of
| SSL encryption on 20-year-old computers was too expensive.
| Actually, most modern CPUs have encryption instructions
| because of this. I'm not sure if SSL is as expensive as AES or SHA
| to compute, but maybe it was.
SSL is a protocol, not a cipher. It builds on cipher suites. The protocol enables negotiation of cipher suites.
The expensive part is setting up the trust:
- checking identity (e.g. with x.509 certificates)
- proving liveness (to prevent replay attacks)
- creating an ephemeral shared secret (using a Diffie-Hellman exchange)
- mitigating Denial of Service attacks.
Computing AES and other symmetric ciphers has been quite affordable all along, especially when you consider that DNS isn't transmitting very much data.
We've been able to afford both of these for quite some time.

Not 256-bit or higher, it seems, which matters. It's not the algorithm but the number of bits, and this goes for SHA as well.
The main cost would be borne by the core. That's where fan-in causes a larger load (each server handles a large number of clients). The core can surely afford to pay for the extra hardware. I imagine that it's a lot less than all the neural nets in the core that recognize speech for us.
| Also, encryption, or at least implementations of it in protocols,
| was not taken that seriously at the time. Perhaps society has
| changed too, which is good, but a lot of people forget that
| the technical is dependent on other factors. Just stating
| it's a weakness does not really matter; asking why I
| should care does.
I think that we are agreeing here.
This has always been important for the integrity of the internet. The cost is quite affordable. Not doing this is like leaving seat belts out of cars.
| Look at IPv6, or hardware-supported
| memory sanitizing, as an example, which is only now being
| added to hardware even though the issue has been
| known for the last 5 years.
I don't know exactly what you mean by "memory sanitizing".
If you mean detecting the use of the value of an uninitialized variable, effective solutions have existed for over 50 years. WATFOR did this in the mid-1960s (originally by abusing a hardware feature of the IBM 7040/44).

No, this is what I'm talking about, but in hardware; the GCC side is implementing it as well: https://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html It's more complex than only uninitialized variables; it's similar, to my knowledge, to Valgrind or other memory profilers, but in hardware. Estimated overhead is around 5 to 10 percent, it seems, as compared to 2 to 3x slower in Valgrind, depending on the program.
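For reference, the compiler side can be tried today. As far as I know the flag is -fsanitize=hwaddress, and it's AArch64-only for now, since that's where the pointer-tagging support lives:

    clang --target=aarch64-linux-gnu -fsanitize=hwaddress -O1 -g test.c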
| > (I've been annoyed that so much of a resolver is embedded in glibc.)
| That makes sense to any systems programmer who
| requires nameservers or DNS. At this point not having it
| is like not having a sockets library, in my view. So the
| question is: why do you think it doesn't belong there, aside
| from glibc already being bloated in size compared to other
| C libraries like musl?
The resolver is best implemented in a separate process.
The way the resolver is implemented in glibc is blocking. This is fine for trivial programs. It forces more complicated programs to use threads for no good reason. A few little changes would allow an asynchronous interface.

That makes more sense, then. So it's similar to the issues with malloc and locking before arenas were added, in that it does not scale in complicated programs, or, in the case of malloc, multi-threaded programs.
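A small Python illustration of that blocking problem (a sketch; asyncio, to my knowledge, can only wrap getaddrinfo() in a thread pool):

    # getaddrinfo() blocks the calling thread, so an async runtime has
    # to hide the call in a worker thread to avoid stalling its loop.
    import asyncio
    import socket

    async def main():
        loop = asyncio.get_running_loop()
        # Awaits socket.getaddrinfo() running in the default executor
        # (a thread pool) -- threads used purely to fake asynchrony.
        infos = await loop.getaddrinfo("example.com", 443,
                                       type=socket.SOCK_STREAM)
        print(infos[0][4])   # first resolved sockaddr

    asyncio.run(main())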
| > Conceptually, I don't like Mozilla being a single point of attack.
| > This solution really must be replaced by something better. But will
| > it?
| Doubtful, probably due to security not being
| a "profit" to most companies, sadly.
The other side of security is fear. Much of our society is driven by fear. It isn't a small second-order effect.
| You could set it up on the
| root DNS servers and have a legacy version for non-DNSsec
| connections, though. Granted, this would require
| major agreement from all involved parties and basically
| require rewriting core parts of the Internet.
It's all agreed to already: DNSsec.

Cheers, Nick

I should have pointed out the difference between the problem for recursive and non-recursive name servers.

Recursive name servers take any query from their client and get an answer if possible. That's what your ISP provides you. The "recursive" term is used because the name server may have to issue a bunch of queries of its own to get that answer.

Non-recursive name servers are typically authoritative for some domain(s) and only answer queries for those domains. The root name servers (authoritative for .) and the TLD name servers are non-recursive.

Queries to non-recursive servers require no persistent state. The query comes in; the information for the answer comes from an internal table; the answer is returned to the querier. Not having state reduces the cost.

As soon as you use TCP instead of UDP, some state is required. These days many answers are too long for a UDP packet so TCP must be used.

If I remember correctly, DNSsec is designed to avoid state for queries. But it generally forces TCP since signatures and keys are so bulky.

Using SSL also adds considerable state.

But the proposals for DNS over HTTPS are surely for recursive name servers.

Form follows funding.

Non-recursive servers are in theory funded by the domain owners. They are cheapskates.

Recursive name servers are generally provided by an entity funded by the clients. Like: an ISP's recursive DNS is for paying customers. If the load gets high, that means you have more customers (scaling funding) or you have misbehaving customers (fire them).

The weird case is no-charge recursive servers. Like Google's 8.8.8.8. And whoever is providing the DNS over HTTPS server for Mozilla. Those gift horses need their mouths examined.

| From: Nicholas Krause via talk <talk@gtalug.org>
| On 12/24/19 2:45 AM, D. Hugh Redelmeier via talk wrote:
| > We've been able to afford both of these for quite some time.
| Not 256-bit or higher, it seems, which matters. It's not the
| algorithm but the number of bits, and this goes for SHA
| as well.

(Hashes and symmetric ciphers are NOT a problem for DNS servers. The number of bits for the public-key and DH systems do matter. Bits of strength in each system are not commensurate.)

It all depends on your threat model. And what you think you can afford.

Giles pointed out that just protecting DNS does not solve the surveillance problem.

I raised the active Man-in-the-Middle attack. Partly because I think that it is more outrageously bad and partly because it can be solved more affordably.

But there is another dimension. Do you want to protect your queries indefinitely? Then the ciphers you pick had better be very very very strong. I don't care so much (the cost/benefit is too high). Protecting them now means picking a cipher that is strong enough now. (A MITM attack ten years from now on today's exchange isn't possible.)

So: 20 years ago, weaker ciphers were affordable and sufficient.

| > I don't know exactly what you mean by "memory sanitizing".
| >
| > If you mean detecting the use of the value of an uninitialized
| > variable, effective solutions have existed for over 50 years. WATFOR
| > did this in the mid-1960s (originally by abusing a hardware feature
| > of the IBM 7040/44).
| No, this is what I'm talking about, but in hardware; the GCC side is
| implementing it as well:
| https://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
| It's more complex than only uninitialized variables; it's similar,
| to my knowledge, to Valgrind or other memory profilers, but in hardware.
| Estimated overhead is around 5 to 10 percent, it seems, as compared
| to 2 to 3x slower in Valgrind, depending on the program.

Ahh. "Address Sanitizer", not "Memory Sanitizer". But in either case, not a good name.

I don't have much sympathy for hardware additions to make up for programming language deficiencies.

When testing C code, I often use libefence (Electric Fence). It uses the existing x86 MMU to catch the errors that would be caught by the proposed hardware. For my programs, the cost is quite affordable but this isn't true for all programs.
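For anyone who hasn't tried it, Electric Fence is just a link-time drop-in; something like:

    gcc -g -o prog prog.c -lefence

Every malloc() then gets an inaccessible guard page next to it, so a stray access segfaults right at the faulting instruction, where a debugger can catch it.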

On 12/24/19 4:26 PM, D. Hugh Redelmeier via talk wrote:
I should have pointed out the difference between the problem for recursive and non-recursive name servers.
Recursive name servers take any query from their client and get an answer if possible. That's what your ISP provides you. The "recursive" term is used because the name server may have to issue a bunch of queries of its own to get that answer.
Non-recursive name servers are typically authoritative for some domain(s) and only answer queries for those domains. The root name servers (authoritative for .) and the TLD name servers are non-recursive.
Queries to non-recursive servers require no persistent state. The query comes in; the information for the answer comes from an internal table; the answer is returned to the querier. Not having state reduces the cost.
As soon as you use TCP instead of UDP, some state is required. These days many answers are too long for a UDP packet so TCP must be used.
If I remember correctly, DNSsec is designed to avoid state for queries. But it generally forces TCP since signatures and keys are so bulky.
Using SSL also adds considerable state.
But the proposals for DNS over HTTPS are surely for recursive name servers.
Form follows funding.
Non-recursive servers are in theory funded by the domain owners. They are cheapskates.
Recursive name servers are generally provided by an entity funded by the clients. Like: an ISP's recursive DNS is for paying customers. If the load gets high, that means you have more customers (scaling funding) or you have misbehaving customers (fire them).
The weird case is no-charge recursive servers. Like Google's 8.8.8.8. And whoever is providing the DNS over HTTPS server for Mozilla. Those gift horses need their mouths examined.
| From: Nicholas Krause via talk <talk@gtalug.org>
| On 12/24/19 2:45 AM, D. Hugh Redelmeier via talk wrote: | > | From: Nicholas Krause via talk <talk@gtalug.org>
| > We've been able to afford both of these for quite some time.
| Not 256-bit or higher, it seems, which matters. It's not the
| algorithm but the number of bits, and this goes for SHA
| as well.
(Hashes and symmetric ciphers are NOT a problem for DNS servers. The number of bits for the public-key and DH systems do matter. Bits of strength in each system are not commensurate.)
It all depends on your threat model. And what you think you can afford.
Giles pointed out that just protecting DNS does not solve the surveillance problem.
I raised the active Man-in-the-Middle attack. Partly because I think that it is more outrageously bad and partly because it can be solved more affordably.
But there is another dimension. Do you want to protect your queries indefinitely? Then the ciphers you pick had better be very very very strong. I don't care so much (the cost/benefit is too high). Protecting them now means picking a cipher that is strong enough now. (A MITM attack ten years from now on today's exchange isn't possible.)
So: 20 years ago, weaker ciphers were affordable and sufficient.
| > I don't know exactly what you mean by "memory sanitizing".
| >
| > If you mean detecting the use of the value of an uninitialized
| > variable, effective solutions have existed for over 50 years. WATFOR
| > did this in the mid-1960s (originally by abusing a hardware feature
| > of the IBM 7040/44).
| No, this is what I'm talking about, but in hardware; the GCC side is
| implementing it as well:
| https://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html
| It's more complex than only uninitialized variables; it's similar,
| to my knowledge, to Valgrind or other memory profilers, but in hardware.
| Estimated overhead is around 5 to 10 percent, it seems, as compared
| to 2 to 3x slower in Valgrind, depending on the program.
Ahh. "Address Sanitizer", not "Memory Sanitizer". But in either case, not a good name.
I don't have much sympathy for hardware additions to make up for programming language deficiencies.
When testing C code, I often use libefence (Electric Fence). It uses the existing x86 MMU to catch the errors that would be caught by the proposed hardware. For my programs, the cost is quite affordable but this isn't true for all programs.

That's the idea, but it's done in the compiler, not by a separate tool, so it has possible benefits, including cost, as you mentioned.
Nick
participants (7)
- Alvin Starr
- D. Hugh Redelmeier
- Giles Orr
- James Knott
- Jamon Camisso
- Mauro Souza
- Nicholas Krause