
New Apache Module For Fending Off DoS Attacks

Network Dweebs Corporation writes "A new Apache DoS mod, called mod_dosevasive (short for dos evasive maneuvers) is now available for Apache 1.3. This new module gives Apache the ability to deny (403) web page retrieval from clients requesting more than one or two pages per second, and helps protect bandwidth and system resources in the event of a single-system or distributed request-based DoS attack. This freely distributable, open-source mod can be found at http://www.networkdweebs.com/stuff/security.html"
This discussion has been archived. No new comments can be posted.

  • Handling all of those requests still takes processing time and bandwidth. What is needed is some type of hardware "filter" out front that can recognize a DoS attack and throw packets away.
    • Problem is, this approach doesn't solve any problems, creates new ones, and is a great DoS tool in itself.

      This is the same problem as with all filters that automagically cut off all requests from a given IP/netblock after spotting some abuse.

      Think of a big LAN behind a masquerading firewall, or a caching proxy for a large organization, where one person can block access to the site for everyone by tripping these automatic defenses.

      Funny thing is that this broken-by-design solution has been known for years, its flaws have been known for years, and yet every once in a while we see another tool using this scheme.

      Robert
      • (yeah,
        1. write
        2. preview
        3. post
        4. think
        5. reply to your own post
        ;)

        Think of a big LAN behind a masquerading firewall, or a caching proxy for a large organization, where one person can block access to the site for everyone by tripping these automatic defenses.

        Or think of an impostor sending requests with a forged source IP.

        What? TCP sequence numbers? Impossible to impersonate a TCP session?

        Think [bindview.com] again [coredump.cx].

        Robert
      • by Anonymous Coward
        The website says: "Obviously, this module will not fend off attacks consuming all available bandwidth or more resources than are available to send 403's, but is very successful in typical flood attacks or cgi flood attacks."

        This tool wasn't designed as an end-all, be-all solution; it was designed as a starting point for cutting off extraneous requests (so you don't have a few thousand CGIs running on your server, or a few thousand page sends) and to provide a means of detection. You could easily take this code and have it talk to your firewalls to shut down the IP addresses that are being blacklisted. If you don't have decentralized content, or at the very least a distributed design, you're going to be DoS'd regardless, but this tool can at least make it take more power to do it.
    • It is all a question of scale...

      The hardware devices that you propose already exist, and they work to some extent.

      The problem is bigger than most would think. What differentiates an attack from legitimate access? How do you detect an attack and start to counter it? Do you even have the bandwidth to sink the attacking packets into a bit bucket?

      And finally, how much money are you investing in the DoS protection...

      The Apache module has, as usual, a very interesting cost/effectiveness ratio... [even if there are other, more effective solutions to the DoS problem, they are also very expensive].

      Cheers...
  • How clever is it? (Score:2, Insightful)

    by cilix ( 538057 )
    Does anyone know how clever it is? There are several things that I suppose you could do to make sure that this doesn't get in the way of normal browsing, but still catches DoS attacks. What sort of things does this module include to work intelligently? How tunable is it?

    One thing that jumps to mind is that you could have some kind of ratio between images and html which has to be adhered to for any x second period. This would hopefully mean that going to webpages with lots of images (which are all requested really quickly) wouldn't cause any problems. Also, more than one request can be made in a single http session (I think - I don't really know anything about this) so I guess you could make use of that to assess whether the traffic fitted the normal profile of a websurfer for that particular site.

    Also, is there anything you can do to ensure that several people behind a NATing firewall all surfing to the same site don't trip the anti-DOS features?

    Just thinking while I type really...
    • by The Whinger ( 255233 ) on Wednesday October 30, 2002 @10:33AM (#4564006) Homepage
      "Also, is there anything you can do to ensure that several people behind a NATing firewall all surfing to the same site don't trip the anti-DOS features?"

      Whilst not totally impossible ... the chances of this are SMALL. Same URI same minute ... possible, same URI same second ... rare I guess ...
      • here's one realistic scenario that could be seen incorrectly as a DoS attack...

        Setup: You are teaching classes to a lab full (let's say 30 for the sake of discussion) of kids in a school setting (gee, ya wonder where I work?). Let's say you instruct all your kids to go to some site with material for the astronomy class you teach. Let's assume that all the kids do as they are told: they all immediately type in the URL you gave them and request a page.

        Let's assume your school district is behind a firewall that also uses a NAT/Proxy setup. Therefore all the requests are coming en masse from one "real" IP. Wouldn't this possibly be deemed as a DoS attack by this plugin?

        .....
      • Not if the site is linked to from Slashdot... But then again, the site will be /.ed soon enough so it probably doesn't matter if it appears to happen a few seconds early....
    • One thing that jumps to mind is that you could have some kind of ratio between images and html which has to be adhered to for any x second period.

      lynx users wouldn't be too impressed.

  • by GigsVT ( 208848 ) on Wednesday October 30, 2002 @09:42AM (#4563583) Journal
    On the securityfocus incidents list, there was a guy who ran a little web site that was being DoSed by a competitor in a strange way. The much higher traffic competitor had a bunch of 1 pixel by 1 pixel frames and each one loaded a copy of the little guy's site. The effect was he was using his own users to DoS his competition.

    People suggested a javascript popup telling them the truth about what was going on, or an HTTP redirect to a very large file on the big guy's site, but Jonathan A. Zdziarski at the site linked above decided to write this patch as an ad-hoc solution.

    I'd be very careful with this patch in production, as it is ad-hoc and not tested very much at all.
    • The much higher traffic competitor had a bunch of 1 pixel by 1 pixel frames and each one loaded a copy of the little guy's site. The effect was he was using his own users to DoS his competition.
      One wonders why he didn't just use some javascript to break out of the frame jail, and then explain that users had been redirected to foo because bar was loading foo's pages? [Granted, it would have been caught eventually, but for the time being, legitimate traffic might win you a few customers...]
      • One wonders why he didn't just use some javascript to break out of the frame jail, and then explain that users had been redirected to foo because bar was loading foo's pages?


        Or break out and redirect to a goatse-esque page or something similar... Since they're viewing his competitor's site, it would appear to be his content, right?


        =tkk

      • How about just something with a referer check? If the referer is the other guy's site, do a: window.open("http://www.somedirtypornsite.com", "_top");
    • I'll go read the securityfocus list, but I'm wondering why he didn't fix this by checking the Referer header?
      • That was one suggestion, but it would still cause the web server to have to handle the requests.
        • right, but since this functionality is already there, I thought it might be lighter than the new mod - which has to maintain a list of requests (either in memory or on the fs) and then check this list every time a request comes in... I wonder what kind of IO is involved here. But that question is better answered in the source code, so off I go...
          • Well, this patch is a little more generalized too; it throttles any IP that accesses the site too quickly... Something like this would probably have throttled Nimda to some extent, and also misbehaved robots that really slam your site.

    • simple (Score:2, Interesting)

      by krappie ( 172561 )
      I work as tech support for a webhosting company. I see things like this all the time. People tend to think it's impossible to block because it's not from any one specific IP address; the requests are coming from all over. People need to learn the awesome power of mod_rewrite.

      RewriteEngine on
      # Return 403 Forbidden to any request whose Referer points at the offending site
      RewriteCond %{HTTP_REFERER} ^http://(.+\.)*bigguysite.com/ [NC]
      RewriteRule /* - [F]

      I've also seen people who had bad domain names pointed at their IPs, where you can check HTTP_HOST. I've seen recursive download programs totally crush webservers; mod_rewrite can check HTTP_USER_AGENT for that (a sketch of that variant follows below). Of course, download programs can always change their user agent string, which I guess is where this Apache module could come in handy. Good idea..
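
      For example, here's a minimal sketch of the HTTP_USER_AGENT variant; the agent string "BadGrabber" is made up for illustration, so match whatever actually shows up in your access logs:

      RewriteEngine on
      # Refuse (403 Forbidden) requests from a recursive downloader identifying itself as "BadGrabber"
      RewriteCond %{HTTP_USER_AGENT} BadGrabber [NC]
      RewriteRule .* - [F]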
      • Other examples.. I've seen one random picture on a guy's server get linked to from thehun.net. It ended up getting over 2 million requests a day and totally killed his server.

        I also like to keep any interesting multimedia files up on a shared directory accessible from apache running on my home computer. Just so any of my friends can browse through and such. Eventually, I got listed on some warez search engines...

        RewriteEngine on
        # Anyone arriving with a Referer from the warez site gets redirected to goatse.cx
        RewriteCond %{HTTP_REFERER} ^http://(.+\.)*warezsite.com/ [NC]
        RewriteRule /* http://goatse.cx/ [L,R]

        Teehee. I got removed pretty quickly.

        In the case of the 1x1 frames on every page... I wonder what would happen if you redirected them back to the original page, which would have a frame that would redirect them back to the original page.. I guess browsers probably protect against recursive frames.

        You could at least redirect their browsers back to the most resource-intensive page or script on the big guy's site, at least doubling his resource usage while barely using yours. Ah.. sweet justice.

        I like someone else's suggestion about frame-busting javascript; that'd be pretty interesting and would definitely get that frame removed right away. I sometimes wish my websites got these kinds of attacks, I'd have so much fun :D
        • I guess browsers probably protect against recursive frames.

          Sorta, though not deliberately; they are limited to somewhere between 4 and 6 levels of nesting, I believe... Same with nested tables.

  • Too slow/too fast. (Score:3, Insightful)

    by perlyking ( 198166 ) on Wednesday October 30, 2002 @10:04AM (#4563732) Homepage
    "This new module gives Apache the ability to deny (403) web page retrieval from clients requesting more than one or two pages per second."

    I can easily request a couple of pages a second, if I'm spawning off links to read in the background. On the other hand, wouldn't an automated attack be requesting much faster than 2 per second?
    • "I can easily request a couple of pages a second, if I'm spawning off links to read in the background. On the other hand, wouldn't an automated attack be requesting much faster than 2 per second?"

      Why would you spawn off links to the same page? Do you read the same content more than once? The key to the article is "the SAME page in the 2 second period".
      • Yeah, if the page is a script that gives out different content based on some parameter, you could easily do this. I would imagine that the module lets you *configure it*.. Gee, imagine being able to change a parameter?!?!

        ~GoRK
  • A possible problem? (Score:3, Interesting)

    by n-baxley ( 103975 ) <nate@baxleysIII.org minus threevowels> on Wednesday October 30, 2002 @10:12AM (#4563793) Homepage Journal
    I'm sure they've thought of this, but will this affect frame pages where the browser requests multiple pages at the same time? How about scripting and stylesheet includes, which are made as separate requests, usually right on the heels of the original page? I hope they've handled this. It seems like the number should be set higher. Maybe 10 requests a second is a better cutoff. That's probably adjustable, though. I suppose I should RTFM.
    • by Anonymous Coward
      It's not based on the # of requests; it's based on the # of requests to the same URI. It'll only blacklist you if you request the same file more than twice per second. Once you're blacklisted you can't retrieve ANY files for 10 seconds (or longer if you keep trying to retrieve files), but the only way you're going to get on the blacklist would be if all those frames were for the same page or script.
      • if all those frames were for the same page or script.
        Some silly designers tend to use multiple frames pointing at the same blank page, e.g. blank.html. These would all get blocked. I don't think you should use this new module in production, do you?
  • by NetworkDweebs ( 621769 ) on Wednesday October 30, 2002 @10:31AM (#4563987)
    Hi there,

    Just wanted to clear up a bit of misunderstanding about this module. First off, please forgive me for screwing up the story submission. What it *should* have said was "...This new module gives Apache the ability to deny (403) web page retrieval from clients requesting THE SAME FILES more than once or twice per second...". That's the way this tool works: if you request the same file more than once or twice per second, it adds you to a blacklist that prevents you from getting any web pages for 10 seconds; if you try to request more pages, it adds to that 10 seconds.
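
    As a rough illustration, a hypothetical httpd.conf snippet for this kind of module might look like the following. The directive names here are borrowed from later mod_evasive releases and are an assumption; the 1.x module for Apache 1.3 may well use different names and paths, so check its README:

    # Module/library names are assumptions; adjust to whatever the distribution actually installs
    LoadModule dosevasive_module libexec/mod_dosevasive.so

    # Hypothetical tuning knobs (names assumed, values illustrative):
    DOSPageCount 2       # more than 2 hits on the same URI per interval -> blacklist
    DOSPageInterval 1    # per-URI counting interval, in seconds
    DOSBlockingPeriod 10 # blacklisted clients get nothing but 403s for 10 seconds, extended on further hits
    DOSSystemCommand "/usr/local/sbin/blockip.sh %s" # optional hook handing the offending IP to a (hypothetical) firewall script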

    Second, I'd like to address the idea that we designed this as the "ultimate solution to DoSes". This tool should help in the event of your average DoS attack; however, to hold up under heavy distributed attacks, you'll need an infrastructure capable of handling such an attack. A web server can only handle so many 403's before it stops servicing valid requests (though the number of 403's it can handle is much greater than the number of full page or script retrievals). It's our hope that anyone serious about circumventing a DoS attack will also have a distributed model and decentralized content, along with a network built for resisting DoS attacks.

    This tool is not only useful for providing some initial frontline defense, but can (should) also be adapted to talk directly to a company's border routers or firewalls so that the blacklisted IPs can be handled before any more requests get to the server; in other words, it's a great detection tool for web-based DoS attacks.

    Anyhow, please enjoy the tool, and I'd be very interested in hearing what kind of private adaptations people have made to get it talking to other equipment on the network.
    • Here's a simple hack around your protection: simply pick 10 or so files from the server, and use your scripts to randomly fetch all 10... or 100, or 1000.
      • Funny you should mention that. We released version 1.3 on the site, which now has a separate threshold for total hits per child per second. The default is 50 objects per child per second. Even if you have a large site and a fast client connection, a browser is going to open up four or more concurrent connections, splitting the total number of objects up. Nevertheless, if 50 is still too low you can always adjust it.
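
        If that newer per-child threshold follows the same directive pattern, it would presumably be tuned with something like the following; again, the directive names are an assumption based on later mod_evasive releases:

        DOSSiteCount 50    # more than 50 objects per child per interval -> blacklist
        DOSSiteInterval 1  # site-wide (per-child) counting interval, in seconds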
    • Run a wget -r type of attack (only dump the resulting files into /dev/null). This module would seem to have no effect.
    • is blocking anyone who requests Nimda/CodeRed-related URLs.

      I currently use a scheme where I created the appropriate directories in my web document tree (/scripts, for example) and then set up 'deny from all' rules for them.

      This way, the Apache server doesn't even bother with a filesystem seek to tell that the file isn't there; it just denies the request.

      Dropping packets would be even better.
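
      One way to express the 'deny' part in Apache 1.3 configuration is a sketch like the one below; with a <Location> block the match is purely on the request URI, so the request is refused without the handler ever opening a file (the /scripts path is taken from the example above):

      # Refuse Nimda/CodeRed probe paths outright with a 403
      <Location /scripts>
          Order deny,allow
          Deny from all
      </Location>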
    • "...deny (403) web page retrieval from clients requesting THE SAME FILES more than once or twice per second."

      If your logo is at the top and the bottom of the page, that's two references within a second. But if the browser is caching images, there will only be one request to the web server. So in practice that shouldn't be a problem...unless the browser checks if the image file changed for the second reference?

    • If you're looking for an easy way to automate blocking at the border router, take a look at:

      http://www.ipblocker.org [ipblocker.org]

      With a simple command line call to a Perl script you can have the ACL on a Cisco router updated to deny traffic from the offending user.

  • Now, I'll start off by admitting I have never taken any classes on TCP/IP and only have a user's level of understanding. I can see how making a web server dump its data to you so often that it can't keep up with everyone else would be effective if it doesn't have any sort of client balancing. But I thought that DoS proper involved attacking the connection at a lower level, where you fill the TCP handler's queue with requests that never get past a certain point, so the server has a ton of socket connections waiting to be completed, handshaked, whatever happens (so many, in fact, that its queue is completely full and it cannot even open a socket connection to any more users to give them the "403" error message). That's why it's called "Denial of Service": valid clients don't even get a SLOW response from the server; they get nothing, because their TCP/IP connections never even get opened. Isn't that right?
    • There are many different types of DoS attacks, and the kind you're describing has other methods of circumvention. The type of DoS this module was designed to fight/detect is a request-based attack where a website is flooded with requests to drive up bandwidth and system load.
    • To stop a non-bandwidth bogus-request attack, you just turn on syncookies and that's that. This module is designed to stop a different kind of attack, wherein the clients are completing entire transactions too many times and thus consuming your bandwidth. There are other types of DOS attacks too -- reflection attacks (where you get a ton of ACK packets from all over the internet, using up all your bandwidth), for example, have to be stopped at the router level upstream, which prevents the server from completing any transactions as a client (over the internet; it can still get through over the LAN, of course).

  • by Anonymous Coward
    A while back I wrote an Apache module similar to this one (mod_antihak), but it protected against CodeRed bandwidth consumption. It also had a slightly more brutal method of blocking offenders: ipchains :) There are inherent problems with that, though; the 403 would be the way I'd go too if I did it all again.
  • I'm not sure how this is any different from the feature of mod_bandwidth that limits the number of requests per user per second. I'm definitely going to test it out, but it's unclear what it adds, other than not carrying all the extra overhead and functionality of mod_bandwidth.

    --CTH
  • Perhaps one of these is needed to ward off the ... effect? I suppose it would be damn easy to do; it just needs to be in the config by default.

  • If you design web sites pay attention.

    So many designers that I've run into in my travels still don't understand that when you put up Flash animations (which I can't stand 99% of the time), large PNG files, or complex front pages, especially public pages, you increase your bandwidth costs.

    Seems very simple to most, yet I am still surprised how many companies redesign their sites with gaudy graphics all over the place, and then find that ALL OF A SUDDEN after deployment their website goes down.

    I can remember many customers I used to deal with, who had fixed contracts for hosting but maintained their own content, calling up and claiming our server was slow, down, or experiencing technical difficulties.

    I would usually say: "OH REALLY, I don't see any problems with the server per se. Did you happen to modify anything lately on the site?"

    "Yes," they would reply: "We just put a Flash movie on the front page..."

    Immediately I knew what the problem was: they blew their bandwidth budget. At times I would see companies quadruple the size of their front pages, which cuts to about a quarter the number of users they can support at decent page download times, especially if they are close to their bandwidth limit as it is without the new pages.

    The bigger the pages, the more effective the DoS, or the easier the DoS is to perform.

    In the design philosophy for my company's site, you can't get access to big pages without signing in first. If you sign in a zillion times, or hit one or more pages a zillion times, that obviously isn't normal behavior, and the software on my site is intelligent enough to figure that out and disable the login, which then points you to a 2K error page.

    In any case, if you are trying to protect your website and you don't want to resort to highly technical and esoteric methods to minimize DoS attacks, you might want to start with the design of the website content.

    The lighter the weight of the pages, the harder it is for an individual to amass enough machines to prevent legitimate users from using your site.

    IMHO, Flash, applets, and other such features should be available only to registered users, with logins strictly controlled.

    Hack
