Networking Communications Security The Internet

BIND Still Susceptible To DNS Cache Poisoning 146

An anonymous reader writes "John Markoff of the NYTimes writes about a Russian hacker, Evgeniy Polyakov, who has successfully poisoned the latest, patched BIND with randomized ports. Port randomization was never supposed to completely solve the problem, only to make attacks harder. It was thought that with port randomization it would take roughly a week to get a hit. Using his own exploit code, two desktop computers, and a GigE link, Polyakov reduced the time to 10 hours."
  • by Anonymous Coward

    Russian hacker, Evgeniy Polyakov

    a Russian physicist

    Which one which one aaaaaarhhhhh I'm so confused.

  • by jamesh ( 87723 ) on Saturday August 09, 2008 @09:19AM (#24536787)

    With IPv6, you would have enough source addresses to add that to the 'random pool' too. Another 64K addresses would make it harder to hack.

    Does anyone else think that maybe we are approaching this problem the wrong way?

    • The source addresses would be the same though - there are only a limited number of DNS servers and it's not hard to sniff a link and work out what the common ones are... so you're not adding anything, just creating a situation where the average home user can't actually use your DNS server.

      • Re: (Score:3, Insightful)

        by jamesh ( 87723 )

        I meant the source address of your request. E.g. when your internal DNS caching server sends a DNS request, your router NATs the source address from a random pool of 64K (or more) addresses. In order for someone to spoof the reply, they would need to know the DNS request ID, the source port you used to send, and the source IP address chosen by your NAT router. It's a client side solution only.

        As a solution, it would rely on the following:
        . a useful number of DNS servers being reachable via IPv6 (not the case
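
        As a back-of-envelope sketch of the entropy argument (Python; the 64K address pool is the hypothetical NAT pool described above, not anything deployed):

          txid_bits = 16   # DNS transaction ID
          port_bits = 16   # randomized source port
          addr_bits = 16   # hypothetical NAT pool of 64K source addresses

          for label, bits in [("TXID only", txid_bits),
                              ("TXID + port", txid_bits + port_bits),
                              ("TXID + port + address pool",
                               txid_bits + port_bits + addr_bits)]:
              print(f"{label}: 1 in {2 ** bits:,} per spoofed packet")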

    • by diegocgteleline.es ( 653730 ) on Saturday August 09, 2008 @10:04AM (#24537019)

      Another 64K addresses would make it harder to hack.

      You said it, it'd make it harder, but not impossible, especially with hardware getting faster every year.

    • by Niten ( 201835 ) on Saturday August 09, 2008 @10:35AM (#24537177)

      Does anyone else think that maybe we are approaching this problem the wrong way?

      Yes, the wrong way being tacking on extra transaction ID space by means of fragile kludges such as random source port numbers and, possibly, random IPv6 addresses.

      It will require a lot more effort, but the right way to solve this problem is by improving the protocol itself. That may mean putting a much larger transaction ID field in the packets, where it cannot be mangled by NAT devices. Or it may mean delegating nameservers by IP address rather than domain name so that resolvers will no longer need to accept potentially-malicious glue records. But preferably, it means moving to a cryptographically-strong domain name system such as DNSSEC.

      • by vrt3 ( 62368 )

        I haven't studied this issue in detail, but wouldn't it help a lot to use TCP instead of UDP? Then you don't even need transaction IDs; the transaction is simply the TCP connection.

        • Yep.

        • Re: (Score:3, Informative)

          by jamesh ( 87723 )

          DNS does already work over TCP, and is used where the response will be over a certain size, e.g. a zone transfer from primary to secondary DNS server.

          The problem is one of efficiency. TCP has much higher overhead: you need three packets just to get a connection started, and then you have to keep track of the connection and shut it down properly. Three packets doesn't sound like much, but over a high-latency link (e.g. 500 ms) it makes for a huge increase in the time it takes to resolve a name.
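
          A rough sketch of that latency cost (treating the 500 ms figure above as the round-trip time and ignoring connection teardown):

            rtt = 0.5  # seconds; the high-latency link from the comment above

            udp_lookup = rtt        # one query/response exchange
            tcp_lookup = 2 * rtt    # the 3-way handshake adds a full RTT
                                    # before the query can even be sent
            print(f"UDP: {udp_lookup:.1f}s, TCP: {tcp_lookup:.1f}s "
                  f"({tcp_lookup / udp_lookup:.0f}x slower per lookup)")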

      • Re: (Score:3, Informative)

        by Paul Jakma ( 2677 )

        Or it may mean delegating nameservers by IP address rather than domain name so that resolvers will no longer need to accept potentially-malicious glue records.

        Good post. Forgive me for focusing in on this one point and nitpicking it.. ;)

        0. Glue used to have a specific meaning: records configured in a parent to help delegate a zone. You (and many people reporting on the current flaws) seem additionally to use it to refer to "additional answers" in DNS replies. While such answers often are glue records, they'

      • by Znork ( 31774 )

        but the right way to solve this problem is by improving the protocol itself.

        Which, if one reads various proposals to do just that, appears to be hampered by the group that thinks we should let the old DNS protocol be crap until people adopt, tada, DNSSEC.

        But preferably, it means moving to a cryptographically-strong domain name system such as DNSSEC.

        I'm fine with DNSSEC. As long as I get to have the root keys, m'kay?

        In the end, I think the trust issue is the killer and final showstopper for DNSSEC. Until DNS

        • Re: (Score:3, Informative)

          by Antibozo ( 410516 )

          The third party hierarchical trust you disdain is one of the primary benefits of DNSSEC, because DNSSEC can eventually replace certificates for distribution of public keys. Currently, the only PKI we have is from a third-party non-hierarchical trust—the CAs—who are really not that trustworthy. DNS, however, is already hierarchical, and it makes a lot more sense to use a hierarchical system of trust—the same system in fact—to validate it. Do you really think having hundreds of trust a

          • Re: (Score:3, Informative)

            by Znork ( 31774 )

            who are really not that trustworthy.

            I generally don't trust the CAs further than I can throw them. Who do you figure is trustworthy enough to handle it for DNS? Who could be regarded as trustworthy, no matter who in the world you ask? There seem to be some administrative problems with handing the keys to Mother Teresa.

            hundreds of trust anchors

            Having trust anchors at all is the problem. You need to verify against several independent sources, preferably sources you have some reason to trust, to avoid singl

            • Re: (Score:3, Informative)

              by Antibozo ( 410516 )

              I generally don't trust the CAs further than I can throw them.

              Yet, they are the current standard for providing end-to-end security. So how mainstream do you think your level of doubt is?

              Mind you, I don't trust the CAs either, which is why I want DNSSEC, since it can provide a superior mechanism with far fewer vectors for subversion, which I can control for my own domains, and which also is not vulnerable to cache poisoning.

              Who do you figure is trustworthy enough to handle it for DNS? Who could be regarded

    • Does anyone else think that maybe we are approaching this problem the wrong way?

      Of course 'we' are.

      Making something harder to exploit != fixing the exploit.

    • by Zocalo ( 252965 )

      Does anyone else think that maybe we are approaching this problem the wrong way?

      No, although I think that quite a few people may have the wrong end of the stick. I got the distinct impression that while it's still a good idea, using random source ports wasn't intended to be THE fix for the problem. Rather, it was just a generic, vendor-neutral workaround to enable people to secure themselves against the immediate threat without revealing enough information to Black Hats to exploit the issue. A more permanent solution, that might otherwise have entailed revealing

    • I haven't been too far into the technical aspects of this issue, but from what I gather, it is related to brute-force "predicting" of the source ports used for recursion, and injecting fraudulent responses?

      It would generate more traffic, sure, but wouldn't an immediately obvious solution be to demand multiple confirmatory replies to recursion, each request using a different randomisation algorithm for the source port used?

      • Why not simply wait for two responses (i.e. reopen the port after you got an answer and wait a few seconds)? If you get two, you know something's fishy 'cause you should only get one.

        Less traffic and not really slower.

        • Wouldn't that break if concurrent attacks were happening? Sure, you could bind the hold-down timer to a specific IP address, but then people would just start randomising their addresses.

          • It's not that I wait for the second answer.

            What is the normal flow of operation? You ask ONE question, you get ONE answer. The attacker can't keep the genuine server from answering, so you will get this one, no matter what. If you get TWO (or more) answers, something's bogus.

            One answer is what you expect, because one answer is what you get when everything runs normally. Just open the port and wait for a few seconds. If you get another answer, discard what you got and ask again. How big is the chance that he
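
            A minimal sketch of that wait-and-compare flow (Python; the 3-second grace period is an arbitrary illustration, and a real resolver would also match the TXID and question section before counting an answer):

              import socket, time

              def query_with_duplicate_check(sock, request, server, wait=3.0):
                  # Send one query, then keep listening. More than one
                  # answer means someone else is injecting responses.
                  sock.sendto(request, server)
                  answers = []
                  deadline = time.monotonic() + wait
                  while (remaining := deadline - time.monotonic()) > 0:
                      sock.settimeout(remaining)
                      try:
                          answers.append(sock.recvfrom(4096))
                      except socket.timeout:
                          break
                  if len(answers) != 1:
                      return None   # fishy (or no reply): discard and re-ask
                  return answers[0][0]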

    • Yeah, just make the transaction ID 64 bits. Fixed.

      Or go with the whole dnssec system.

    • I forget where I heard this, but 64K addresses ought to be enough for anyone.
      • by jamesh ( 87723 )

        64K was just a rough number I pulled out of the air, as 16 bits of address wouldn't be a huge slice of an end site's address space.

        You could easily make it 640k though, then it truly should be enough for anyone!

  • Why do people still use BIND? It has a track record of security vulnerabilities almost as long as Sendmail's.

    • Re: (Score:2, Funny)

      by PJCRP ( 1314653 )
      Because most networkers engage in networking-BDSM as regular practice?
    • by CustomDesigned ( 250089 ) <stuart@gathman.org> on Saturday August 09, 2008 @09:31AM (#24536855) Homepage Journal

      This has nothing to do with BIND vulnerabilities. djbdns, or whatever you feel is more secure, has exactly the same problem. It is a protocol weakness. The article mentions BIND only because it is the reference implementation for DNS.

      The most interesting idea I've seen is to use IPv6 for DNS [slashdot.org]. The oldest idea is to start using DNSSEC.

  • by segedunum ( 883035 ) on Saturday August 09, 2008 @09:25AM (#24536817)
    I might not have one of the lowest Slashdot IDs around, but I am absolutely astonished at some people's astonishment over this. DNS, by definition, is all about trusting the forwarders you are using or other DNS servers you are caching from, and trusting the DNS server you use from there. That's where the problem is, so if people are shouting and screaming about trust now then it's all a bit late.

    If your DNS server says that slashdot.org resolves to something other than 216.34.181.45 then that's where you're going to end up. There are also legitimate reasons why someone might want to do something like that, and it is part of the inherent flexibility that has made the internet and its technologies as ubiquitous and as well used as they are. No one said that there weren't downsides. If you locked everything down in the manner that some idiots will inevitably now talk about, shouting and squealing about financial institutions, then I'm willing to bet that you will lose a good portion of the flexibility that makes the 'internet' actually work on a wide scale.
    • This isn't about evil servers. It's about impersonating servers by spoofing their address, and about how the passwords built into the question/response packets aren't long enough to prevent this.
    • Re: (Score:3, Interesting)

      by gruntled ( 107194 )

      Isn't the real issue here our continued reliance on passwords that can be used more than once? When are we going to move wholeheartedly into a single-use password environment?

      Incidentally, when is somebody going to throw the fact that US banks have completely ignored the two-factor authentication requirement (part of the Patriot Act, I believe; maybe we should start sending *bankers* to Gitmo and see if *that* gets their attention) back at the finance industry when they start to squeal?

      • Isn't the real issue here our continued reliance on passwords that can be used more than once? When are we going to move wholeheartedly into a single-use password environment?

        No, that's not the real issue. Two-factor authentication does not solve the problem of DNS poisoning: the user will enter the one-time password into the fake site, which in turn will log in to the real site and transfer one million dollars to Nigeria.
        SSL does not solve the problem of DNS poisoning in a practical sense: it only works if the user opens an https:// shortcut; the large majority of users who type "paypal.com" in the address bar will not observe that the fake PayPal site they are seeing failed to redire

    • Comment removed based on user account deletion
    • Re: (Score:3, Insightful)

      by boto ( 145530 )

      I wonder why the parent is modded Insightful. You don't seem to have gotten the problem.

      The problem is not the servers being able to redirect you to a different address, but the fact that any person (not only the people who control the servers you query) can make your server direct people to anywhere.

      The problem is not about trust, but not being able to make sure who you are really getting a message from. You can't even have a trust problem if you are not sure who is talking to you.

    • You're wrong; the problem is trivially solvable. For example, 128-bit transaction IDs would pretty much solve this particular scenario.

      Unfortunately that requires a protocol change, which is a hard social problem. Adding 16 bits of source port randomization didn't require a protocol change, and they thought it was good enough. But maybe it wasn't (this particular demonstration is a little too laboratory-science for me; the flood of wrong responses would probably turn into a visible DoS attack in the real
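
      For scale, a sketch of what a 128-bit transaction ID buys (hypothetical: real DNS TXIDs are 16 bits, so this presumes exactly the protocol change described above):

        import secrets

        txid = secrets.randbits(128)   # unguessable per-query token
        years = 2 ** 128 / 1e9 / (365 * 24 * 3600)
        print(f"TXID: {txid:#x}")
        print(f"at a billion spoofed packets/sec, sweeping the space "
              f"takes ~{years:.1e} years")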

      • It would not "solve" the problem. It would just make exploiting it harder. The underlying problem is that a DNS server cannot verify whether the answer it got is actually from the server it asked. Sure, a 128-bit key would make it all a lot more difficult, with its 2^128 possible TXIDs, until someone found a problem in the random key generator or some other hidden flaw in the whole system that allows him to either make a guess at far better odds than 1:2^128 or hammer out a billion attempts per second withou

        • Digital signatures rely on jacking up the odds. It makes a lot more sense to just jack up the odds on the already essentially working system than to go to all those lengths.
    • by mellon ( 7048 )

      Well, sort of. If you have a DNSSEC-aware resolver, and you are looking up a record in a signed zone, then the man-in-the-middle attack you're proposing doesn't work, because the signatures don't check out. So it is possible to prevent the problem you're describing.

      The reason we have this problem is, very simply, that in many of the larger TLDs, the top-level zone is not signed. So there's no chain of trust, so even if you sign your zone, I have no way to get your key, because I have no chain of trust

  • GigE (Score:2, Interesting)

    It seems that this only works so quickly because he had 2 machines connected to the server via GigE. Which I would guess means most DNS servers can't be poisoned like this.
    • Given a setup like that you could poison just about any protocol unless it was using SSL... anything that has a two-way conversation expects replies, and you can inject packets into it by getting there 'first'.

      TBH though given that setup I'd just respond to ARP requests for the router and intercept the entire traffic flow. DNS poisoning not required.

  • So, if you have a GigE LAN, any trojaned machine can poison your DNS during one night...

    People at home are safe though - that's the main thing. People on the local net at home are generally known people, with access to your house (WiFi excepted), and could probably find easier ways to steal your identity, capture keystrokes, etc. And you're safe from Internet people too - at the end of my 8Mb connection, I think I'd notice a Gb of traffic heading my way, to say nothing of it taking 125 times longer anyway.

    • So, if you have a GigE LAN, any trojaned machine can poison your DNS during one night...

      People at home are safe though - that's the main thing. People on the local net at home are generally known people, with access to your house (WiFi excepted), and could probably find easier ways to steal your identity, capture keystrokes, etc. And you're safe from Internet people too - at the end of my 8Mb connection, I think I'd notice a Gb of traffic heading my way, to say nothing of it taking 125 times longer anyway.

      Unfortunately most people on ADSL don't run their own name server, and instead use their ISP's nameserver. Hopefully not too many people will have GigE access to the ISP's nameserver, so this attack probably won't work anyway.

      • by Lennie ( 16154 )

        A server at a hosting provider might be a nice place for this exploit. But everyone in the know already knew this was a possible target.

        • Re: (Score:1, Troll)

          by Tony Hoyle ( 11698 )

          Compared to ARP spoofing which is much simpler and gains you the entire traffic flow to an IP address? I wouldn't bother with a DNS attack to be honest. Any attack that requires you be on the local network is uninteresting just because there are so many damned ways to do it already.

          • by Lennie ( 16154 )

            It depends. ARP spoofing is confined to the broadcast domain (possibly a VLAN), while a DNS server is probably used by a much broader 'audience'.

    • by NetCow ( 117556 )
      No, people at home are not safe, since their ISP's nameservers are unlikely to run at people's homes... DNS servers typically reside on high-bandwidth links.
    • by Firehed ( 942385 )

      Your local machine's cache is probably safe, yes (or reasonably so). But what about your ISP's, which in all likelihood you're using when you don't have a local cache of the required information? Not only are you vulnerable to that, but so is everyone else using your ISP.

  • by Timothy Brownawell ( 627747 ) <tbrownaw@prjek.net> on Saturday August 09, 2008 @09:39AM (#24536883) Homepage Journal
    Why can't the resolvers make sure to never have multiple outstanding requests that could potentially give the same answer? Check the cache for known zone boundaries and implied non-boundaries (if the server for foo.com also answers requests for x.y.z.foo.com, there's no zone boundary in between), and only send one request crossing a particular potential boundary at a time to a particular server (like a.c.foo.com and b.c.foo.com, we don't know yet that .c.foo.com is answered by the same server as .foo.com, since nothing under that domain is in the cache).
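
    A minimal sketch of that coalescing rule (Python; the callback shape is illustrative, not from any real resolver):

      pending = {}  # (qname, server) -> callers waiting on the same answer

      def resolve(qname, server, send_query, on_answer):
          key = (qname, server)
          if key in pending:
              pending[key].append(on_answer)  # piggyback; never two
              return                          # outstanding identical queries
          pending[key] = [on_answer]
          send_query(qname, server)

      def handle_response(qname, server, answer):
          for cb in pending.pop((qname, server), []):
              cb(answer)
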
    • They do, mostly. There's a certain amount of caching built in at all levels these days (which is why, for example, on Windows you sometimes have to run ipconfig /flushdns if DHCP changes the address of a machine).

  • by CustomDesigned ( 250089 ) <stuart@gathman.org> on Saturday August 09, 2008 @09:40AM (#24536891) Homepage Journal

    The exploit depends on a GigE connection to the DNS server. So a caching server behind a T1 is going to take much longer to exploit. So running your own caching server on a T1, DSL, or cable is going to be more resistant than using the ISP DNS with a fat pipe.

    If there is actually 1 GigE of DNS traffic at an ISP, they could distribute the requests to 100 bandwidth limited servers. Then the attack would only manage to poison one of the servers in 10 hours. Even more interesting would be if the 100 servers could compare notes to detect the poisoning.
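
    A minimal sketch of the request-spreading half of that idea (Python; the shard-by-qname scheme is an illustrative choice, not from the comment):

      import hashlib

      N_SERVERS = 100   # bandwidth-limited resolvers behind the front end

      def pick_server(qname):
          # Same name always lands on the same shard, so a flood of
          # spoofed answers for one name can poison at most one server.
          digest = hashlib.sha256(qname.lower().encode()).digest()
          return int.from_bytes(digest[:4], "big") % N_SERVERS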

    • by Tony Hoyle ( 11698 ) <tmh@nodomain.org> on Saturday August 09, 2008 @09:51AM (#24536943) Homepage

      A decent firewall could be trained to recognize an attack like this and take preventative action easily enough - to even get it to work you'd have to saturate the link with packets hoping to get a 'hit'... So you can do it over GigE in 10 hours. You can attack just about any connection-based system using similar methods, but you'd have to saturate the link and it'd get noticed... especially if you did it at GigE bandwidth for 10 hours!!

      • What sort of preventative action? This already relies on the packets looking like they come from the real nameserver, so you can't just block them without cutting off large parts of the DNS hierarchy from your customers...
        • Re: (Score:3, Insightful)

          by Tony Hoyle ( 11698 )

          The packets won't look like that though, will they - at that bandwidth they'd have to be on the local network, so they'd be coming from a different source MAC (and that's pretty much the only way to do this attack anyway - any ISP worth the money will drop any packets with fake source addresses on the floor before they get routed externally, so it'd have to be an internal attack).

          Worst case you shut down the DNS server and everyone drops to the backups until the attacker is traced and shut down.

          • at that bandwidth they'd have to be on the local network

            Or be a medium-large botnet.

            (and that's pretty much the only way to do this attack anyway - any ISP worth the money will drop any packets with fake source addresses on the floor before they get routed externally, so it'd have to be an internal attack)

            So why was the original problem considered to be such a big deal? Any DNS poisoning attack requires that you pretend to be the real DNS server, so if it's only possible from the local network why was that big coordinated patch worth the effort?

          • by geniusj ( 140174 )

            There's a surprising number of providers that don't do egress source filtering. I definitely wouldn't rely on other people's security.

        • I'm no expert, but would asking twice make it ^2 harder to get a hit?
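
        For illustration (an editorial sketch, not from the thread): if the two lookups' identifiers are independent, the per-packet odds multiply, i.e. roughly square:

          p = 1 / 2 ** 32   # TXID + randomized source port, per spoofed packet
          print(f"one matching answer: {p:.2e}, "
                f"two independent matches: {p ** 2:.2e}")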

      • The proper place to handle this would be within the server code.

        Anytime you receive a response that doesn't jibe with the requestor's session ID, you should be suspicious. If you're bombarded with millions of them, you should throttle appropriately. Maybe switch to TCP queries exclusively.
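
        A sketch of that throttle (Python; the threshold and the switch_to_tcp fallback are made-up illustrations):

          MISMATCH_LIMIT = 1000   # bogus responses per window before reacting
          mismatches = 0

          def switch_to_tcp():
              print("flood detected: re-issuing queries over TCP")

          def on_response(resp_txid, outstanding):
              global mismatches
              if resp_txid not in outstanding:
                  mismatches += 1          # spoofed guess, or stale packet
                  if mismatches > MISMATCH_LIMIT:
                      switch_to_tcp()
                  return None
              return outstanding.pop(resp_txid)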

  • So, the Internet at large is safe (at least as safe as before) until most computers are connected with gigabit links?
    • The internet at large is safe until either:

      1. Everyone is connected by a gigabit cable to a common nameserver, and the admin of the nameserver is too stupid to realize that their DNS being saturated with bogus packets at GigE speeds for 10 hours is not normal.
      2. Both ISPs and routers for some reason decide to stop filtering source addresses, so that such an attack is possible without being directly connected.

      • The internet at large is safe until either:

        1. Everyone is connected by a gigabit cable to a common nameserver, and the admin of the nameserver is too stupid to realize that their DNS being saturated with bogus packets at GigE speeds for 10 hours is not normal.

        2. Both ISPs and routers for some reason decide to stop filtering source addresses, so that such an attack is possible without being directly connected.

        3. Attackers find a way to remotely deploy and control malware on hundreds of thousands of computers in

  • Good thing my ISP (TM Net/Streamyx) sucks eh? They're not even giving me the 512kbps I paid for.

    Let's see 10 hours * 1Gbps / 512kbps = 2.22 years.

    If you have a 10Mbps link that makes it 41 days.

    I think I would have made a DNS request and got the valid DNS reply into my cache before the 2 years are up. Or my connection would have gone down and I'd get a different IP by then. Thanks to TM Net for protecting me from such attacks ;).

    Either that or I'll be safe because the site would have DoSed me off the net wi
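
    A quick check of the arithmetic above (crude linear scaling of the reported 10-hour GigE figure, as in the post):

      reported_hours, link_bps = 10, 1e9   # the GigE result from the article

      for label, bps in [("512 kbps", 512e3), ("10 Mbps", 10e6)]:
          hours = reported_hours * link_bps / bps
          print(f"{label}: {hours:,.0f} hours = {hours / 24:,.1f} days")
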
  • for DNSv2.
    (whatever that means)

    • by mibh ( 920980 ) on Saturday August 09, 2008 @11:43AM (#24537509) Homepage

      It's long past time for Secure DNS, which is a combination of TSIG+TKEY, SIG(0), and DNSSEC. End to end crypto authentication. Protects not just against off-path spoofed-source attacks like Kaminsky's, but also on-disk attacks against zone files, and provider-in-the-middle attackers who remap your NXDOMAIN responses into pointers to their advertising servers.

      Sadly, it's a year away even if everybody started now, and most people want to be last not first, so very few people have started, and some of those people are saying "why bother, if it's not an instant solution there's no point to it, let's scrap the design and start over." (Had it not taken 12 years to get Secure DNS defined, then the prospect of doubling that time would not daunt me as much as it does.)

      So, everybody please start already. NSD and Unbound from NLNetLabs support DNSSEC. So does BIND, obviously. Sign your zones, and if your registrar won't accept keys from you, send them to a DLV registry [isc.org] while you wait for that. Turn on DNSSEC validation in your recursive nameservers. Write a letter to your congresscritter saying "please instruct US-DoC to give ICANN permission to sign the root DNS zone." In the time it would take for this Russian physicist's attack to work over your 512K DSL line (2.2 years, I heard?) we could completely secure the DNS, or at least the parts of DNS whose operators gave a rat's ass about security (which is not the majority, but it certainly includes your server, right?)

      • Sign your zones, and if your registrar won't accept keys from you, send them to a DLV registry while you wait for that.

        People who are interested in signing their zones may want to read up on how things work at www.dnssec.net [dnssec.net] and take a look at the Sparta tools [dnssec-tools.org]. It's really not difficult, and there is a lot of information out there.

  • I'm not surprised. Port randomization doesn't make the attack impossible, just harder. It doesn't eliminate the birthday attack; it just increases the space you have to blanket to generate a collision. The only real fix for the attack is DNSSEC, allowing the software to reject forged responses completely. Short of that, I can only think of two more things that'd help:

    • Ignore additional data in responses, or at least additional data not responsive to the query itself. This goes beyond bailiwick checking. It
  • DJB's take . . . (Score:5, Informative)

    by geniusj ( 140174 ) on Saturday August 09, 2008 @11:47AM (#24537529) Homepage

    For those that haven't seen it, djb threw up some information regarding this problem and various options a few years ago.

    http://cr.yp.to/djbdns/forgery.html [cr.yp.to]

    • Re: (Score:3, Informative)

      by vic-traill ( 1038742 )

      For those that haven't seen it, djb threw up some information regarding this problem and various options a few years ago.

      http://cr.yp.to/djbdns/forgery.html [cr.yp.to]

      I went and had a look at the thread (dated from Jul 30 2001) referenced in the excerpt at djb's site (follow the posting link in the URL above). As far as I can tell, Jim Reid was pooh-poohing the usefulness of port randomization, the approach used as an emergency backstop against Kaminsky's attack just over seven years later. To be fair, Reid was doing so in the context of advocating for Secure DNS.

      djb drives people crazy (particularly the BIND folks), but he's someone to listen to - is it the case, as I

      • by geniusj ( 140174 )

        djb drives people crazy (particularly the BIND folks), but he's someone to listen to - is it the case, as I understand from reading through these docs, that in 2001, djb's dnscache performed the port randomization that everyone's been scrambling to deploy over the past several weeks for other implementations, including BIND?

        Or am I mis-interpreting here?

        You are correct. djbdns was "not vulnerable" (in the same sense that BIND is "not vulnerable" now) to this attack.

        As you mentioned, he can be abrasive, but he's definitely contributed some valuable things. See SYN cookies [cr.yp.to] as another djb-contributed and widely-deployed solution to a problem.

    • Re: (Score:2, Informative)

      by Maniacal ( 12626 )

      Here's something DJB posted to his mailing list on Thursday. Don't know if I'm allowed to post this here but what the heck:

      http://cr.yp.to/djbdns/forgery.html [cr.yp.to] has, for several years, stated the results of exactly this attack:

      The dnscache program uses a cryptographic generator for the ID and
      query port to make them extremely difficult to predict. However,

      * an attacker who makes a few billion random guesses is likely to

      • Bernstein said that DNSSEC offers "a surprisingly low level of security" while causing severe problems for DNS reliability and performance.

        More FUD. It's hard to imagine how DNS could be less reliable than it is now, and port randomization actually decreases performance significantly without even assuring security; effective port randomization additionally starves the system for entropy, making everything else the system does less secure.

        DNSSEC is the only alternative currently on the table that actually add

  • Right. Before the fix, you had to guess a 16-bit number. After the fix, you have to guess a 32-bit number. About 10 hours on gigabit Ethernet should let you try the necessary 4 billion packets. This isn't an attack one could run against a client out on a DSL line, but if you were able to take over one machine in a colo, you might be able, over time, to get traffic for other machines directed to yours.

    If DNS used a 64-bit or 128-bit number to tie the response to the request, and the DNS client had a
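
    A sanity check of those numbers (assuming ~100-byte spoofed responses; the observed 10 hours exceeds the raw wire time because only packets that land in the race window between query and genuine reply count):

      guesses = 2 ** 32          # 16-bit TXID x 16-bit source port
      pkt_bits = 100 * 8         # ~100-byte spoofed response, on the wire
      link = 1e9                 # GigE, bits/sec

      print(f"one full sweep: {guesses * pkt_bits / link / 3600:.1f} hours")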

    • Re: (Score:3, Interesting)

      by LarsG ( 31008 )

      This isn't an attack one could run against a client out on a DSL line, but if you were able to take over one machine in a colo, you might be able, over time, to get traffic for other machines directed to yours.

      True. On the other hand, if you are on the same network segment then there are many other options available to you if you want to do evil. Blasting about 4 terabytes (1 Gb/s for 10 hours) at a DNS server isn't exactly a quiet attack, so if you intend to stay below the radar you're probably a lot better off trying some good old ARP spoofing or TCP hijacking instead.

  • Why not have the DNS server check for flooding?
    Basically, when DNS poisoning is attempted, you'll be sending thousands of fake/false packets that the DNS server receives and then ultimately rejects, until one slips through because it guessed the correct ID/source port.

    If the DNS server were to count the number of false/wrong packets from each source address, it would quickly detect when something is wrong. It could then just reject all packets from this IP and perhaps use a secondary DNS server for the specific dom

    • In addition:
      An AC mentioned the following in this same thread:
      "or when updating your cache, compare with your cached copy, and if different ask again to double check."

      The combination of these two solutions (flood-checking and double checking) would solve the issue completely. The DNS server could do double or triple checking when it detects a flood.
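
      A sketch of the flood-check half (Python; the limit is illustrative, and note that spoofed floods typically carry the genuine server's source address, so a real implementation would key on more than the source IP):

        from collections import Counter

        BAD_LIMIT = 100
        bad_guesses = Counter()
        quarantined = set()

        def accept(src_ip, txid_matches):
            if src_ip in quarantined:
                return False
            if not txid_matches:
                bad_guesses[src_ip] += 1     # wrong ID/port: count it
                if bad_guesses[src_ip] > BAD_LIMIT:
                    quarantined.add(src_ip)  # and double-check via a
                return False                 # secondary server
            return True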

  • DJB made a press release about this:

    ---D. J. Bernstein, Professor, Mathematics, Statistics, and Computer Science, University of Illinois at Chicago

    DNS still vulnerable, Bernstein says.

    CHICAGO, Thursday 7 August 2008 - Do you bank over the Internet? If so, beware: recent Internet patches don't stop determined attackers.

    Network administrators have been rushing to deploy DNS source-port randomization patches in response to an attack announced by security researcher Dan Kaminsky last month. But the inventor of

  • Sure, because nobody is going to notice a gigabit of traffic pouring into their DNS server for 10 hours in order to get -just-one- cache poisoning.

    Sorry, but this extension of the attack is simply unworthy of mention. What is worthy of mention is the danger posed by corporate NAT boxes that reorder the source ports sequentially, defeating randomization.
