BIND Still Susceptible To DNS Cache Poisoning 146
An anonymous reader writes "John Markoff of the NYTimes writes about a Russian hacker, Evgeniy Polyakov, who has successfully poisoned the latest, patched BIND with randomized ports. Originally, the randomized ports were never supposed to completely solve the problem, but just make it harder to do. It was thought that with port randomization, it would take roughly a week to get a hit. Using his own exploit code, two desktop computers and a GigE link, Polyakov reduced the time to 10 hours."
Oh no terminology (Score:1, Funny)
Russian hacker, Evgeniy Polyakov
a Russian physicist
Which one which one aaaaaarhhhhh I'm so confused.
Re: (Score:2)
Aren't all physicists hackers?
Re: (Score:2)
Old, obvious, but just so damn necessary
IPv6 could solve this! (Score:4, Insightful)
With IPv6, you would have enough source addresses to add that to the 'random pool' too. Another 64K addresses would make it harder to hack.
Does anyone else think that maybe we are approaching this problem the wrong way?
Re: (Score:2)
The source addresses would be the same though - there are only a limited number of DNS servers and it's not hard to sniff a link and work out what the common ones are... so you're not adding anything, just creating a situation where the average home user can't actually use your DNS server.
Re: (Score:3, Insightful)
I meant the source address of your request. E.g. when your internal DNS caching server sends a DNS request, your router NATs the source address from a random pool of 64K (or more) addresses. In order for someone to spoof the reply, they would need to know the DNS request ID, the source port you used to send, and the source IP address chosen by your NAT router. It's a client-side solution only.
As a solution, it would rely on the following:
- a useful number of DNS servers being reachable via IPv6 (not the case
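The arithmetic behind the NAT-pool idea above sketches out like this. The figures are idealized assumptions (independent, uniformly random fields), not measurements:

```python
import math

# Hypothetical entropy budget for a blind spoofer, assuming the resolver
# randomizes each field independently (an idealization; real port ranges
# and address pools are smaller). Figures follow the comment above.
txid_bits = 16                   # DNS transaction ID field
port_bits = 16                   # ~64K usable UDP source ports
nat_bits = math.log2(65536)      # 64K-address IPv6 source pool -> 16 bits

total_bits = txid_bits + port_bits + nat_bits
print(total_bits)                # 48.0 bits: ~2.8e14 combinations to guess blindly
```

Still "harder, not impossible": 48 bits just moves the attack from hours to (for now) impractical timescales on a single link.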
Re:IPv6 could solve this! (Score:4, Insightful)
Another 64K addresses would make it harder to hack.
You said it: it'd make it harder, but not impossible, especially with hardware getting faster every year.
Re:IPv6 could solve this! (Score:4, Informative)
Yes, the wrong way being tacking on extra transaction ID space by means of fragile kludges such as random source port numbers and, possibly, random IPv6 addresses.
It will require a lot more effort, but the right way to solve this problem is by improving the protocol itself. That may mean putting a much larger transaction ID field in the packets, where it cannot be mangled by NAT devices. Or it may mean delegating nameservers by IP address rather than domain name so that resolvers will no longer need to accept potentially-malicious glue records. But preferably, it means moving to a cryptographically-strong domain name system such as DNSSEC.
Re: (Score:2)
I haven't studied this issue in detail, but wouldn't it help a lot to use TCP instead of UDP? Then you don't even need transaction IDs; the transaction is simply the TCP connection.
Re: (Score:2)
Yep.
Re: (Score:3, Informative)
DNS does already work over TCP, and is used where the response will be over a certain size, e.g. a zone transfer from primary to secondary DNS server.
The problem is one of efficiency. TCP has much higher overheads, you need three packets just to get a connection started and then you have to keep track of the connection and shut it down properly. Three packets doesn't sound like much but over a high latency link (eg 500ms) it makes for a huge increase in the time it takes to resolve a name.
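The latency cost described above can be put in rough numbers. This is a back-of-envelope sketch using the 500 ms round trip from the comment, ignoring retransmits and teardown; the final handshake ACK is assumed to piggyback on the query:

```python
# Rough comparison of one lookup over UDP vs. TCP on a high-latency link.
rtt = 0.5  # seconds per round trip, figure taken from the comment above

udp_time = rtt              # single request/response exchange
tcp_time = rtt + rtt        # SYN/SYN-ACK first, then query/response
                            # (the client's ACK rides along with the query)
print(udp_time, tcp_time)   # 0.5 1.0 -- TCP roughly doubles lookup latency
```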
Re: (Score:3, Informative)
Or it may mean delegating nameservers by IP address rather than domain name so that resolvers will no longer need to accept potentially-malicious glue records.
Good post. Forgive me for focusing in on this one point and nitpicking it.. ;)
0. Glue used to have a specific meaning: records configured in a parent to help delegate a zone. You (and many people reporting on the current flaws) seem additionally to use it to refer to "additional answers" in DNS replies. While such answers often are glue records, they'
Re: (Score:2)
but the right way to solve this problem is by improving the protocol itself.
Which, if one reads various proposals to do just that, appears to be hampered by the group that thinks we should let the old DNS protocol be crap until people adopt, tada, DNSSEC.
But preferably, it means moving to a cryptographically-strong domain name system such as DNSSEC.
I'm fine with DNSSEC. As long as I get to have the root keys, m'kay?
In the end, I think the trust issue is the killer and final showstopper for DNSSEC. Until DNS
Re: (Score:3, Informative)
The third party hierarchical trust you disdain is one of the primary benefits of DNSSEC, because DNSSEC can eventually replace certificates for distribution of public keys. Currently, the only PKI we have is from a third-party non-hierarchical trust—the CAs—who are really not that trustworthy. DNS, however, is already hierarchical, and it makes a lot more sense to use a hierarchical system of trust—the same system in fact—to validate it. Do you really think having hundreds of trust a
Re: (Score:3, Informative)
who are really not that trustworthy.
I generally don't trust the CAs further than I can throw them. Who do you figure is trustworthy enough to handle it for DNS? Who could be regarded as trustworthy, no matter who in the world you ask? There seem to be some administrative problems with handing the keys to Mother Teresa.
hundreds of trust anchors
Having trust anchors at all is the problem. You need to verify against several independent sources, preferably sources you have some reason to trust, to avoid singl
Re: (Score:3, Informative)
Yet, they are the current standard for providing end-to-end security. So how mainstream do you think your level of doubt is?
Mind you, I don't trust the CAs either, which is why I want DNSSEC, since it can provide a superior mechanism with far fewer vectors for subversion, which I can control for my own domains, and which also is not vulnerable to cache poisoning.
Re: (Score:2)
Does anyone else think that maybe we are approaching this problem the wrong way?
Of course 'we' are.
Making something harder to exploit != fixing the exploit.
Re: (Score:2)
Does anyone else think that maybe we are approaching this problem the wrong way?
No, although I think that quite a few people may have the wrong end of the stick. I got the distinct impression that while it's still a good idea, using random source ports wasn't intended to be THE fix for the problem. Rather, it was just a generic, vendor-neutral workaround to enable people to have a chance to secure themselves against the immediate threat without revealing enough information to Black Hats to exploit the issue. A more permanent solution, that might otherwise have entailed revealing
Re: (Score:2)
I haven't been too far into the technical aspects of this issue, but from what I gather, it is related to brutally "predicting" the source ports used for recursion, and injecting fraudulent responses?
It would generate more traffic, sure, but wouldn't an immediately obvious solution be to demand multiple confirmatory replies to recursion, each request using a different randomisation algorithm for the source port used?
Re: (Score:2)
Why not simply wait for two responses (i.e. reopen the port after you got an answer and wait a few seconds)? If you get two, you know something's fishy 'cause you should only get one.
Less traffic and not really slower.
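The "wait for a second answer" idea above can be sketched roughly as follows. All names here are hypothetical; a real resolver would track this state per outstanding query, keyed on transaction ID, port, and question:

```python
import time

def resolve_with_duplicate_check(send_query, recv_answers, linger=2.0):
    """Send one query, then keep the port open for `linger` seconds.
    If more than one answer arrives, treat the lookup as suspect."""
    send_query()
    answers = []
    deadline = time.monotonic() + linger
    while time.monotonic() < deadline:
        answer = recv_answers(timeout=deadline - time.monotonic())
        if answer is None:        # socket timed out: no further answers
            break
        answers.append(answer)
    if len(answers) > 1:
        return None               # conflicting answers: discard, ask again
    return answers[0] if answers else None
```

The cost is only latency on the (presumably rare) lookups where a flood is actually in progress.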
Re: (Score:2)
Wouldn't that break if concurrent attacks were happening? Sure, you could bind the hold-down timer to a specific IP address, but then people would just start randomising their addresses.
Re: (Score:2)
It's not that I wait for the second answer.
What is the normal flow of operation? You ask ONE question, you get ONE answer. The attacker can't keep the genuine server from answering, so you will get this one, no matter what. If you get TWO (or more) answers, something's bogus.
One answer is what you expect. Because one answer is what you get, when everything runs normal. Just open the port and wait for a few seconds. If you get another answer, discard what you got and ask again. How big is the chance that he
Re: (Score:2)
Yeah just make the transaction id 64bits. Fixed.
Or go with the whole dnssec system.
Re: (Score:2)
Re: (Score:2)
64K was just a rough number I pulled out of the air, as 16 bits of address wouldn't be a huge slice of an end site's address space.
You could easily make it 640k though, then it truly should be enough for anyone!
Why do people still use BIND? (Score:1, Insightful)
Why do people still use BIND? It has a track record of security vulnerabilities almost as long as Sendmail's.
Re: (Score:2, Funny)
This isn't a BIND problem. (Score:5, Informative)
This has nothing to do with BIND vulnerabilities. djbdns, or whatever you feel is more secure, has exactly the same problem. It is a protocol weakness. The article mentions BIND only because it is the reference implementation for DNS.
The most interesting idea I've seen is to use IPv6 for DNS [slashdot.org]. The oldest idea is to start using DNSSEC.
Re:This isn't a BIND problem. (Score:4, Insightful)
Since the basis of the attack is spoofing server IPs, how does DJBDNS detect spoofed packets? "only come from defined servers" is useless when the packets are spoofed. It helps, of course, to not accept new glue records whenever they appear, but keep existing ones until they expire. But this just makes the attack take a little longer.
Re:This isn't a BIND problem. (Score:5, Informative)
The basis of the attack is to include "extra" information in a forged response to a query for a non-existent host. Bind trusted that extra information, while other DNS servers only pay attention to it if it falls under certain strict rules.
I ask for aaaae3fcg.bankofamerica.com and also send 100,000 responses to that query to that same recursive DNS server, that all say something to the effect of "a record aaaae3fcg.bankofamerica.com = bah, also look to 666.666.666.666 for anything else related to bankofamerica.com. Oh, and cache this until the sun goes dark"
Nobody asks Bind to believe the part about THE REST OF THE WHOLE BLOODY DOMAIN in the response for a single record in the domain. No other servers cache that information.
That bind also used non-random ports made it a 5 minute attack over a fast link, instead of a 10 hour attack. That in the past bind used bad random numbers for the transaction ID made it a 30 packet attack...
Who's the fanboy now?
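The "strict rules" mentioned above amount to bailiwick-style checks: only accept additional records inside the zone you asked about, and never let them overwrite what's already cached. A rough sketch, with all names and addresses made up for illustration:

```python
def in_bailiwick(record_name, queried_zone):
    """Accept extra records only if they belong to the delegated zone."""
    return record_name == queried_zone or record_name.endswith("." + queried_zone)

cache = {"bankofamerica.com": "192.0.2.1"}   # placeholder cached address

def accept_additional(record_name, value, queried_zone):
    if not in_bailiwick(record_name, queried_zone):
        return False      # out-of-zone data: drop it on the floor
    if record_name in cache:
        return False      # never let fresh glue replace cached records
    cache[record_name] = value
    return True
```

Under these rules the forged "look to 666.666.666.666 for bankofamerica.com" record is in bailiwick, but it still can't displace the address already in the cache, which is why the sticky-glue rule matters as much as the bailiwick check.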
Re: (Score:2, Interesting)
It does not make the attack take "a little longer". It makes cache poisoning take as long as it took before the new attack method. If you only get a chance to poison the cache once whenever the cache purges the target record, then you have to guess the transaction ID correctly on the first try. The new thing about the current attack is that you get as many tries as you want at guessing transaction IDs and port numbers. That only works because servers allow glue to replace already cached records.
Port randomi
Re: (Score:3, Informative)
Not all dns servers cache the glue records beyond that transaction. Those that don't *cache* the glue are not vulnerable to this attack.
You Will Never Solve This Problem! (Score:5, Insightful)
If your DNS server says that slashdot.org resolves to something other than 216.34.181.45 then that's where you're going to end up. There are also legitimate reasons why someone might want to do something like that, and it is part of the inherent flexibility that has made the internet and its technologies as ubiquitous and as well used as they are. No one said that there weren't downsides. If you locked everything down in the manner that some idiots will inevitably now talk about, shouting and squealing about financial institutions, then I'm willing to bet that you will lose a good portion of the flexibility that makes the 'internet' actually work on a wide scale.
Re: (Score:2)
Re: (Score:3, Interesting)
Isn't the real issue here our continued reliance on passwords that can be used more than once? When are we going to move wholeheartedly into a single-use password environment?
Incidentally, when is somebody going to throw the fact that US banks have completely ignored the two-factor authentication requirement (part of the Patriot Act, I believe; maybe we should start sending *bankers* to Gitmo and see if *that* gets their attention) back at the finance industry when they start to squeal?
Re: (Score:2)
Isn't the real issue here our continued reliance on passwords that can be used more than once? When are we going to move wholeheartedly into a single-use password environment?
No, that's not the real issue. Two factor authentication does not solve the problem of DNS poisoning: the user will enter the one-time password into the fake site, which in turn will log in the real site and transfer one million $ to Nigeria.
SSL does not solve the problem of DNS poisoning in a practical sense: it only works if the user opens an https:// shortcut; the large majority of users that type "paypal.com" in the address bar will not observe that the fake PayPal site they are seeing failed to redire
Re: (Score:2)
Re: (Score:3, Insightful)
I wonder why the parent is modded Insightful. You don't seem to have gotten the problem.
The problem is not the servers being able to redirect you to a different address, but the fact that any person (not only the people that control the servers you query) can make your server direct people to anywhere.
The problem is not about trust, but not being able to make sure who you are really getting a message from. You can't even have a trust problem if you are not sure who is talking to you.
Re: You Will Never Solve This Problem! (Score:2)
Unfortunately that requires a protocol change, which is a hard social problem. Adding 16 bits of source port randomization didn't require a protocol change, and they thought it was good enough. But maybe it wasn't (this particular demonstration is a little too laboratory-science for me; the flood of wrong responses would probably turn into a visible DoS attack in the real
Re: (Score:2)
It would not "solve" the problem. It would just make exploiting it harder. The underlying problem is that a DNS-Server cannot verify whether the answer it got is actually from the server it asked. Sure, a 128bit key would make it all a lot more difficult, with its 2^128 possible TXIDs, until someone found a problem in the random key generator or some other hidden flaw in the whole system that allows him to either make a guess at far better odds than 1:2^128 or hammer out a billion attempts per second withou
Re: (Score:2)
Re: (Score:2)
Well, sort of. If you have a DNSSEC-aware resolver, and you are looking up a record in a signed zone, then the man-in-the-middle attack you're proposing doesn't work, because the signatures don't check out. So it is possible to prevent the problem you're describing.
The reason we have this problem is, very simply, that in many of the larger TLDs, the top-level zone is not signed. So there's no chain of trust, so even if you sign your zone, I have no way to get your key, because I have no chain of trust
GigE (Score:2, Interesting)
Re: (Score:2)
Given a setup like that you could poison just about any protocol unless it was using SSL... anything that has a two-way conversation expects replies and you can inject packets into it by getting there 'first'.
TBH though given that setup I'd just respond to ARP requests for the router and intercept the entire traffic flow. DNS poisoning not required.
I'm safe, in my ADSL utopia (Score:2)
So, if you have a GigE lan, any trojaned machine can poison your DNS during one night...
People at home are safe though - that's the main thing. People on the local net at home are generally known people, with access to your house (WiFi excepted), and could probably find easier ways to steal your identity, capture keystrokes, etc. And you're safe from Internet people too - at the end of my 8Mb connection, I think I'd notice a Gb of traffic heading my way, to say nothing of it taking 125 times longer anyway.
Re: (Score:1)
So, if you have a GigE lan, any trojaned machine can poison your DNS during one night...
People at home are safe though - that's the main thing. People on the local net at home are generally known people, with access to your house (WiFi excepted), and could probably find easier ways to steal your identity, capture keystrokes, etc. And you're safe from Internet people too - at the end of my 8Mb connection, I think I'd notice a Gb of traffic heading my way, to say nothing of it taking 125 times longer anyway.
Unfortunately most people on ADSL don't run their own name server, and instead use their ISP's nameserver. Hopefully not too many people will have GigE access to the ISP's nameserver, so this attack probably won't work anyway.
Re: (Score:2)
A server at a hosting-provider might be a nice place for this exploit. But everyone in the know, already knew this was a possible target.
Re: (Score:1, Troll)
Compared to ARP spoofing which is much simpler and gains you the entire traffic flow to an IP address? I wouldn't bother with a DNS attack to be honest. Any attack that requires you be on the local network is uninteresting just because there are so many damned ways to do it already.
Re: (Score:3)
It depends. ARP spoofing is confined to the broadcast domain (possibly a VLAN), while a DNS server is probably used by a much broader audience.
Re: (Score:1)
Re: (Score:2)
You local machine's cache is probably safe, yes (or reasonably so). What about your ISP's, which in all likelihood you're using when you don't have a local cache of the required information? Not only are you vulnerable to that, but so is everyone else using your ISP.
Isn't it a birthday attack? (Score:3, Interesting)
Re: (Score:2)
They do, mostly. There's a certain amount of caching built in at all levels these days (which is why for example on Windows you have to do ipconfig /flushdns sometimes if DHCP changes the address of a machine).
Limit the bandwidth, compare notes (Score:4, Insightful)
The exploit depends on a GigE connection to the DNS server. So a caching server behind a T1 is going to take much longer to exploit. So running your own caching server on a T1, DSL, or cable is going to be more resistant than using the ISP DNS with a fat pipe.
If there is actually 1 GigE of DNS traffic at an ISP, they could distribute the requests to 100 bandwidth limited servers. Then the attack would only manage to poison one of the servers in 10 hours. Even more interesting would be if the 100 servers could compare notes to detect the poisoning.
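The arithmetic behind splitting the load is simple. Baseline from the article: one server accepting a full 1 Gb/s of hostile traffic falls in 10 hours. If each of 100 servers is rate-limited to 1/100 of that bandwidth, guesses arrive 100x slower at any one cache. Illustrative numbers only:

```python
# Scaling the article's 10-hour GigE figure across rate-limited servers.
baseline_hours = 10    # one server, full 1 Gb/s of attack traffic
n_servers = 100        # each limited to 1/100 of the baseline bandwidth

hours_per_server = baseline_hours * n_servers
print(hours_per_server)    # 1000 hours (~6 weeks) to poison any one cache
```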
Re:Limit the bandwidth, compare notes (Score:4, Insightful)
A decent firewall could be trained to recognize an attack like this and take preventative action easily enough - to even get it to work you'd have to saturate the link with packets hoping to get a 'hit'. So you can do it on GigE in 10 hours. You can attack just about any connection-based system using similar methods, but you'd have to saturate the link and it'd get noticed... especially if you did it at GigE bandwidth for 10 hours!!
Re: (Score:2)
Re: (Score:3, Insightful)
The packets won't look like that though will they - at that bandwidth they'd have to be on the local network so they'd be coming from a different source mac (and that's pretty much the only way to do this attack anyway - any ISP worth the money will drop any packets with fake source addresses on the floor before they get routed externally, so it'd have to be an internal attack).
Worst case you shut down the DNS server and everyone drops to the backups until the attacker is traced and shut down.
Re: (Score:2)
at that bandwidth they'd have to be on the local network
Or be a medium-large botnet.
(and that's pretty much the only way to do this attack anyway - any ISP worth the money will drop any packets with fake source addresses on the floor before they get routed externally, so it'd have to be an internal attack)
So why was the original problem considered to be such a big deal? Any DNS poisoning attack requires that you pretend to be the real DNS server, so if it's only possible from the local network why was that big coordinated patch worth the effort?
Re: (Score:2)
There's a surprising number of providers that don't do egress source filtering. I definitely wouldn't rely on other peoples' security.
Re: (Score:2)
I'm no expert, but would asking twice make it ^2 harder to get a hit?
Re: (Score:2)
The proper place to throttle this would be within the server code.
Anytime you receive a response that doesn't jibe with the requestor's session ID, you should be suspicious. If you're bombarded with millions of them, you should throttle appropriately. Maybe switch to TCP queries exclusively.
Double check (Score:2)
or when updating your cache, compare with your cached copy, and if different ask again to double check.
That is the best idea I've heard yet.
Gigabit link? (Score:2)
Re: (Score:2)
The internet at large is safe until either:
1. Everyone is connected by a gigabit cable to a common nameserver, and the admin of the nameserver is too stupid to realize that their dns being saturated with bogus packets at gigE speeds for 10 hours is not normal.
2. Both ISPs and routers for some reason decide stop filtering source addresses so that such an attack is possible without being directly connected.
Re: (Score:2)
3. Attackers find a way to remotely deploy and control malware on hundreds of thousands of computers in
That's a lot of bandwidth (Score:2)
Let's see: 10 hours * 1 Gbps / 512 kbps ≈ 2.23 years.
If you have a 10 Mbps link, that makes it about 42 days.
I think I would have made a dns request and got the valid dns reply into my cache before the 2 years are up. Or my connection would have gone down and I'd get a different IP by then. Thanks to TM Net for protecting me from such attacks
Either that or I'll be safe because the site would have DoSed me off the net wi
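The scaling in the comment above checks out: attack time grows inversely with link speed relative to the 1 Gb/s, 10-hour baseline:

```python
# Attack time vs. link speed, scaled from the article's GigE baseline.
baseline_hours = 10
baseline_bps = 1_000_000_000          # 1 Gb/s

def attack_hours(link_bps):
    return baseline_hours * baseline_bps / link_bps

print(attack_hours(512_000) / 24 / 365)   # ~2.23 years over 512 kb/s
print(attack_hours(10_000_000) / 24)      # ~41.7 days over 10 Mb/s
```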
I guess it's time... (Score:2)
for DNSv2.
(whatever that means)
Re:I guess it's time... for Secure DNS (Score:4, Insightful)
It's long past time for Secure DNS, which is a combination of TSIG+TKEY, SIG(0), and DNSSEC. End to end crypto authentication. Protects not just against off-path spoofed-source attacks like Kaminsky's, but also on-disk attacks against zone files, and provider-in-the-middle attackers who remap your NXDOMAIN responses into pointers to their advertising servers.
Sadly, it's a year away even if everybody started now, and most people want to be last not first, so very few people have started, and some of those people are saying "why bother, if it's not an instant solution there's no point to it, let's scrap the design and start over." (Had it not taken 12 years to get Secure DNS defined, then the prospect of doubling that time would not daunt me as much as it does.)
So, everybody please start already. NSD and Unbound from NLnet Labs support DNSSEC. So does BIND, obviously. Sign your zones, and if your registrar won't accept keys from you, send them to a DLV registry [isc.org] while you wait for that. Turn on DNSSEC validation in your recursive nameservers. Write a letter to your congresscritter saying "please instruct US-DoC to give ICANN permission to sign the root DNS zone." In the time it would take for this Russian physicist's attack to work over your 512K DSL line (2.2 years, I heard?) we could completely secure the DNS or at least the parts of DNS whose operators gave a rat's ass about security (which is not the majority but it certainly includes your server, right?)
Re: (Score:2)
People who are interested in signing their zones may want to read up on how things work at www.dnssec.net [dnssec.net] and take a look at the Sparta tools [dnssec-tools.org]. It's really not difficult, and there is a lot of information out there.
Not surprised (Score:2)
I'm not surprised. Port randomization doesn't make the attack impossible, just harder. It doesn't eliminate the birthday attack, it just increases the space you have to blanket to generate a collision. The only real fix for the attack is DNSSEC, allowing the software to reject forged responses completely. Short of that, I can only think of two more things that'd help:
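The "just increases the space" point can be quantified. This uses a simple fixed-target guessing model (probability that k spoofed replies hit one uniformly random value out of n) rather than a full multi-query birthday calculation, so it understates the attacker's odds somewhat:

```python
def hit_probability(n, k):
    """Chance at least one of k blind guesses matches a value from a space of n."""
    return 1 - (1 - 1 / n) ** k

old_space = 2 ** 16           # transaction ID alone
new_space = 2 ** 32           # transaction ID plus a randomized source port

print(hit_probability(old_space, 650))          # ~1% after 650 guesses
print(hit_probability(new_space, 43_000_000))   # ~1% after 43 million guesses
```

Same shape of curve either way; the patch only moves the knee out by a factor of 2^16.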
DJB's take . . . (Score:5, Informative)
For those that haven't seen it, djb threw up some information regarding this problem and various options a few years ago.
http://cr.yp.to/djbdns/forgery.html [cr.yp.to]
Re: (Score:3, Informative)
For those that haven't seen it, djb threw up some information regarding this problem and various options a few years ago.
http://cr.yp.to/djbdns/forgery.html [cr.yp.to]
I went and had a look at the thread (dated from Jul 30 2001) referenced in the excerpt at djb's site (follow the posting link in the URL above). As far as I can tell, Jim Reid was pooh-poohing the usefulness of port randomization, the approach used as an emergency backstop against Kaminsky's attack just over seven years later. To be fair, Reid was doing so in the context of advocating for Secure DNS.
djb drives people crazy (particularly the BIND folks), but he's someone to listen to - is it the case, as I
Re: (Score:2)
djb drives people crazy (particularly the BIND folks), but he's someone to listen to - is it the case, as I understand from reading through these docs, that in 2001, djb's dnscache performed the port randomization that everyone's been scrambling to deploy over the past several weeks for other implementations, including BIND?
Or am I mis-interpreting here?
You are correct. djbdns was "not vulnerable" (in the same sense that BIND is "not vulnerable" now) to this attack.
As you mentioned, he can be abrasive, but he's definitely contributed some valuable things. See SYN cookies [cr.yp.to] as another djb-contributed and widely-deployed solution to a problem.
Re: (Score:2)
That's because there's a significant performance hit associated with:
Re: (Score:2)
You're disingenuous in the extreme, sir. There was reasoning behind his "inefficient" design - it's not like he got up one morning and said "Oh hey, I know what I'll do today - I'll go and implement DNS in the most ass-backwards way I can think of!". Luck has nothing to do with it.
Re: (Score:2)
I'm not being disingenuous at all. Needless complexity is the enemy of security, and until you describe an attack, the complexity of source port randomization is needless. What's disingenuous is to pretend that djb wasn't adding complexity speculatively. Anyone can toss additional complexity into an implementation; this is usually considered "security through obscurity". The more complex implementation is trickier to audit, and may have obscure degenerate failure modes. It doesn't become useful security unt
Re: (Score:2)
Oh, but you are. Here's a comment [cr.yp.to] from 2001 clearly stating the class of attacks against which source port randomization works as a mitigating factor. The attack vector was known, the solution known... but not implemented, except by DJB.
Are you seriously suggesting that no hacker ever found out about that particular trick until Kaminsky made such a fuss about it?
Re: (Score:2)
The entropy of query IDs was thought to be adequate to defend against all known methods of blind collision. Of course, adding entropy in the source port makes it harder, but that's not a reason to do so by itself. You could also, for example, note when a nameserver has multiple IP addresses and have it randomize source address as well, but djb didn't do that, nor did he demons
Re: (Score:2)
The entropy of query IDs was thought to be adequate to defend against all known methods of blind collision.
There seems to have been no valid technical reason for that particular belief, no?
You could also, for example, note when a nameserver has multiple IP addresses and have it randomize source address as well,
What good would that do? How would treating a particular deployment scenario do anything for the security of all the different servers?
If there's any disingenuity here, it's in how you keep ignoring the effect additional complexity has on analyzing the security of the implementation, and that the approach djb chose may in fact have created vulnerabilities in other areas.
You meant "might" instead of "may", probably. It's improbable (verging on damn near impossible) that the multi-vendor patch would have gone out (after months of hand-wringing) if anyone had suspected the fix might introduce other problems.
It's obvious no one knew about this attack until recently. Attacks don't stay secret for 7 years.
How is it obvious?
Re: (Score:2)
At the time, there was no valid technical reason against it, so adding source port randomization was speculative. You seem to be having a hard time understanding that speculative measures require luck in order to pan out. That's why they're speculative.
Re: (Score:2)
It may not help every scenario, but it adds log2(number of addresses) bits of entropy.
That's not much, and as you stated it doesn't help in every scenario, unlike the source port thing.
Point taken on the DoS scenarios you propose, but they're definitely less serious, as far as the user is concerned. An unavailable DNS is better than a spoofed one, imo.
Your statistical oracle is off here too.
I was going to take that statistical oracle for a check-up one of these days. Seems the date just got moved up a bit.
The solution to DNS cache poisoning has a name: DNSSEC
Afaik, there are known key distribution and rollover problems with the standard as proposed. Also, I'm not so fond of the idea
Re: (Score:2)
We agree on that.
DNSSEC as proposed ends up with a single root trust anchor, which is the easiest possible configuration as far as key distribution is concerned. There aren't any rollover problems that I know of; what's currently missing is the capabi
Re: (Score:2)
There was nothing speculative about it. As Magada has noted, his comment in 2001 clearly outlined the vulnerability. 65k is 65k.. It's not a very good barrier against mischief. This has seriously been known about for a while - thanks partially to djb. I find it funny, however, that it has all of a sudden become such a huge blip on the radar. His solution wasn't a perfect one, but it takes about 2^16 times longer to crack than previous implementations and it was fully compatible with what everyone was
Re: (Score:2)
No, djb's comment doesn't outline the vulnerability Kaminsky identified at all. It is purely speculative on a "blind collision" attack, i.e. assuming you can find some way to effect one. There were blind collision attacks before, and perhaps there will be other blind collision attacks in the future; it's a class of attack, not a vulnerability. Kaminsky identified a new way to effect one, and all the source port randomization djbdns
Re: (Score:2, Informative)
Here's something DJB posted to his mailing list on Thursday. Don't know if I'm allowed to post this here but what the heck:
http://cr.yp.to/djbdns/forgery.html [cr.yp.to] has, for several years, stated the results of exactly this attack:
The dnscache program uses a cryptographic generator for the ID and
query port to make them extremely difficult to predict. However,
* an attacker who makes a few billion random guesses is likely to
Re: (Score:2)
More FUD. It's hard to imagine how DNS could be less reliable than it is now, and port randomization actually decreases performance significantly without even assuring security; effective port randomization additionally starves the system for entropy making everything else the system does less secure.
DNSSEC is the only alternative currently on the table that actually add
32 bit guess vs. 16 bit guess. (Score:2)
Right. Before the fix, you had to guess a 16-bit number; after the fix, you have to guess a 32-bit number. About 10 hours on a gigabit Ethernet link should let you try the necessary 4 billion packets. This isn't an attack one could run against a client out on a DSL line, but if you were able to take over one machine in a colo, you might, over time, be able to get traffic for other machines directed to yours.
If DNS used a 64-bit or 128-bit number to tie the response to the request, and the DNS client had a
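The 16-bit vs. 32-bit arithmetic above can be sketched quickly. This is a back-of-the-envelope estimate, not a measurement: the ~1000-byte per-guess figure (spoofed reply plus retries and overhead) and the line-rate assumption are illustrative.

```python
# Guess space before and after source-port randomization, and a rough
# flood time on a gigabit link. Packet size is an assumed figure.

PACKET_BITS = 1000 * 8        # assume ~1000 bytes sent per guess (overhead included)
LINK_BPS = 1_000_000_000      # 1 Gb/s

def guesses(txid_bits, port_bits=0):
    """Size of the space an off-path attacker must search."""
    return 2 ** (txid_bits + port_bits)

def flood_hours(space):
    """Hours to send one packet per possible guess at line rate."""
    pps = LINK_BPS / PACKET_BITS
    return space / pps / 3600

old = guesses(16)        # TXID only: 65,536 possibilities
new = guesses(16, 16)    # TXID + random source port: ~4.3 billion
print(old, new, round(flood_hours(new), 1))
```

Under these assumptions the full 2^32 sweep takes on the order of ten hours, which lines up with the figure in the summary; a 64-bit tag would push the same sweep to millions of years.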
Re: (Score:3, Interesting)
This isn't an attack one could run against a client out on a DSL line, but if you were able to take over one machine in a colo, you might be able, over time, to get traffic for other machines directed to yours.
True. On the other hand, if you are on the same network segment then there are many other options available to you if you want to do evil. Blasting about 4.5 terabytes (1 Gb/s for 10 hours) at a DNS server isn't exactly a quiet attack, so if you intend to stay below the radar you're probably a lot better off trying some good old ARP spoofing or TCP hijacking instead.
Why not just do flood-checking? (Score:2)
Why not have the DNS server check for flooding?
Basically, when DNS poisoning is attempted, the attacker sends thousands of fake/forged packets that the DNS server receives and ultimately rejects, until one slips through because it guessed the correct ID/source port.
If the DNS server were to count the number of false/wrong packets from each source address, it would quickly detect when something is wrong. It could then just reject all packets from this IP and perhaps use a secondary DNS server for the specific dom
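The counting idea the poster describes could look something like this. This is a hypothetical sketch, not anything a real resolver implements; the class, method names, and the threshold of 100 are all invented for illustration.

```python
# Sketch of the flood-check idea: count replies whose TXID/source port
# don't match any outstanding query, per source IP, and quarantine a
# source once it crosses an (arbitrary) threshold.
from collections import defaultdict

THRESHOLD = 100  # bogus replies tolerated per source before quarantine

class FloodCheck:
    def __init__(self):
        self.bad_replies = defaultdict(int)
        self.blocked = set()

    def on_mismatched_reply(self, src_ip):
        """Called whenever a reply fails TXID/source-port validation."""
        self.bad_replies[src_ip] += 1
        if self.bad_replies[src_ip] > THRESHOLD:
            self.blocked.add(src_ip)   # drop further packets from this source

    def allowed(self, src_ip):
        return src_ip not in self.blocked

fc = FloodCheck()
for _ in range(150):
    fc.on_mismatched_reply("198.51.100.7")
print(fc.allowed("198.51.100.7"))  # the flooding source is now blocked
```

One obvious catch, as other posters note, is that the attacker spoofs the source address of the legitimate authoritative server, so blocking by source IP risks blocking the real server too.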
Re: (Score:2)
In addition:
An AC mentioned the following in this same thread:
"or when updating your cache, compare with your cached copy, and if different ask again to double check."
The combination of these two solutions (flood-checking and double checking) would solve the issue completely. The DNS server could do double or triple checking when it detects a flood.
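The "ask again to double check" half of that combination can be sketched as follows. This is a toy illustration of the idea, not real resolver code; `lookup` stands in for an actual upstream DNS query and the addresses are documentation-range placeholders.

```python
# Sketch of the double-check idea: re-query and only accept an answer
# when consecutive independent lookups agree, so a lone spoofed reply
# can't poison the cache by itself.

def confirmed_resolve(name, lookup, tries=2):
    """Accept an answer only if `tries` consecutive lookups agree."""
    answers = {lookup(name) for _ in range(tries)}
    if len(answers) == 1:
        return answers.pop()      # consistent: safe to cache
    return None                   # disagreement: likely under attack, retry

# Honest upstream always returns the real address.
honest = lambda name: "192.0.2.10"
# One spoofed reply slips in among the legitimate ones.
replies = iter(["203.0.113.66", "192.0.2.10"])
poisoned = lambda name: next(replies)

print(confirmed_resolve("example.com", honest))    # "192.0.2.10"
print(confirmed_resolve("example.com", poisoned))  # None
```

The cost is a doubled query load, which is why the poster suggests triggering the extra check only when a flood is detected.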
Re: (Score:2)
Not if you used double checking only when you detect an attack. Then it would only slow the DNS lookup down a little bit.
Press release from DJB (Score:2)
DJB made a press release about this:
---D. J. Bernstein, Professor, Mathematics, Statistics, and Computer Science, University of Illinois at Chicago
DNS still vulnerable, Bernstein says.
CHICAGO, Thursday 7 August 2008 - Do you bank over the Internet? If so, beware: recent Internet patches don't stop determined attackers.
Network administrators have been rushing to deploy DNS source-port randomization patches in response to an attack announced by security researcher Dan Kaminsky last month. But the inventor of
Unworthy of mention (Score:2)
Sure, because nobody is going to notice a gigabit of traffic pouring into their DNS server for 10 hours in order to get -just-one- cache poisoning.
Sorry, but this extension of the attack is simply unworthy of mention. What is worthy of mention is the danger posed by corporate NAT boxes that rewrite source ports sequentially, defeating the randomization.
Re: (Score:2, Informative)
$ apt-cache -n search power dns | wc -l
0
Re:Power DNS Recursor.. (Score:4, Informative)
% apt-cache -n search pdns-recursor
pdns-recursor - PowerDNS recursor
Granted, it *is* actually missing on several architectures because of some unimplemented system calls, but that shouldn't bother too many people.
Re:Power DNS Recursor.. (Score:5, Informative)
Consider reading the links in the article. Obfuscation isn't a fix.
The article says that DJBDNS does not suffer from this attack. It does. Everyone does. With some tweaks it can take longer than with BIND, but the overall problem is still there.
Re:BIND (Score:5, Funny)
I think you mean B0wnd
Re: (Score:2)
Here's what DJB said about this:
Last week's surveys by the DNSSEC developers ("SecSpider") have found a
grand total of 99 signed dot-com names out of the 70 million dot-com
names on the Internet.
Am I the only person amazed by this? We've had fifteen years of
forgeries, fifteen years of concentrated work on DNSSEC, and we can't
even get simple cryptographic signatures deployed. What an embarrassment
for cryptography!
Jos Backus writes:
http://cr.yp.to/djbdns/forgery.html [cr.yp.to] states:
"My top priority for djbdns is to sup
Re: (Score:2)
Your quotation attributed to djb doesn't seem to make much sense, and you don't indicate where you found it.
Re: (Score:2)
It was posted on the djbdns list : http://marc.info/?l=djbdns&m=121832806123954&w=2 [marc.info]