
200-400 Gbps DDoS Attacks Are Now Normal

Soulskill posted about 9 months ago | from the distributed-denial-of-sherbet dept.

Networking 92

An anonymous reader writes "Brian Krebs has a followup to this week's 400 Gbps DDoS attack using NTP amplification. Krebs, as a computer security writer, has often been the target of DDoS attacks. He was also hit by a 200Gbps attack this week (apparently, from a 15-year-old in Illinois). That kind of volume would have been record-breaking only a couple of years ago, but now it's just normal. Arbor Networks says we've entered the 'hockey stick' era of DDoS attacks, as a graph of attack volume spikes sharply over the past year. CloudFlare's CEO wrote, 'Monday's DDoS proved these attacks aren't just theoretical. To generate approximately 400Gbps of traffic, the attacker used 4,529 NTP servers running on 1,298 different networks. On average, each of these servers sent 87Mbps of traffic to the intended victim on CloudFlare's network. Remarkably, it is possible that the attacker used only a single server running on a network that allowed source IP address spoofing to initiate the requests. An attacker with a 1 Gbps connection can theoretically generate more than 200Gbps of DDoS traffic.' In a statement to Krebs, he added, 'We have an attack of over 100 Gbps almost every hour of every day.'"
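The arithmetic in the quote is easy to sanity-check. The ~200x multiplier used below is the commonly cited amplification factor for NTP's `monlist` response versus the tiny spoofed request that triggers it (an assumption; the quote itself only gives the aggregate figures):

```python
# Back-of-the-envelope check of the figures quoted above.

servers = 4529            # abused NTP servers in the attack
avg_mbps = 87             # average traffic per server toward the victim
total_gbps = servers * avg_mbps / 1000
print(f"aggregate: ~{total_gbps:.0f} Gbps")   # ~394 Gbps, i.e. "approximately 400Gbps"

# A monlist reply can be a couple of hundred times larger than the
# spoofed request, so a single well-connected host can project far
# more traffic than its own uplink carries.
amplification = 206       # commonly cited monlist factor (assumption)
uplink_gbps = 1
print(f"1 Gbps of spoofed requests -> ~{uplink_gbps * amplification} Gbps at the victim")
```

This is why the CloudFlare statement about a single 1 Gbps source is plausible: the victim absorbs the amplified replies, not the attacker's raw bandwidth.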


Well (5, Insightful)

The Cat (19816) | about 9 months ago | (#46254857)

The obvious solution is to unplug the Internet. I'm sure the government and the movie people will be thrilled.

Re:Well (5, Funny)

Travis Mansbridge (830557) | about 9 months ago | (#46254867)

Then wait 10 seconds before plugging it back in.

lol (1)

riis138 (3020505) | about 9 months ago | (#46255211)

Would you like us to send a refresh signal to your cable modem?

Re:lol (0)

Anonymous Coward | about 9 months ago | (#46257827)

Remote reset. Occasionally works with some old, fucked modems, but usually doesn't do anything. Placates the customer and gets them off the phone, though. CSRs will do it even if they look at the signal history and see it obviously fluctuating, which would generally indicate a line or node issue and not a damn modem issue.

Re:Well (2)

dreamchaser (49529) | about 9 months ago | (#46255949)

No. You need to wait at least 30 seconds to make sure the Internet's RAM is cleared and it's ready to reboot.

Re:Well (0)

Anonymous Coward | about 9 months ago | (#46256085)

Then wait 10 seconds before plugging it back in.

So you're a Time Warner customer ... now Comcast; press 1 for English

Re:Well (1)

segin (883667) | about 9 months ago | (#46258745)

I use Mediacom, you insensitive clod!

Re:Well (1)

DarwinSurvivor (1752106) | about 9 months ago | (#46256463)

It could be a corrupted configuration. Try doing a 30-30-30 reset and reloading the settings from a previous backup.

Re:Well (0)

Anonymous Coward | about 9 months ago | (#46254887)

This is why we can't have nice things!

Re:Well (2)

Burz (138833) | about 9 months ago | (#46255561)

Notice the sidebar to the Krebs article: "The value of a hacked PC".

I say unplug Windows from the Internet. The world has had enough of this already.

Re: Well (0, Insightful)

Anonymous Coward | about 9 months ago | (#46256035)

Yes.. because everyone of those NTP servers that made up the 400Gb/s attack were running Windows...

Idiot.

Re: Well (1)

Burz (138833) | about 9 months ago | (#46256125)

Yes.. because everyone of those NTP servers that made up the 400Gb/s attack were running Windows...

Idiot.

Then who is generating the forged packets, Einstein? It's not some guy hammering at NTP servers directly from his home PC or hosting account; you still need a very big network presence to generate the calibre of attacks seen lately, even with amplification.

Re: Well (-1)

Anonymous Coward | about 9 months ago | (#46256443)

Then who is generating the forged packets, Einstein? It's not some guy hammering at NTP servers directly from his home PC or hosting account; you still need a very big network presence to generate the calibre of attacks seen lately, even with amplification.

TFS says one 1gbps connection forging packets could do it. Again, what does Windows have to do with it?

Re: Well (1)

Cramer (69040) | about 9 months ago | (#46280383)

And where, exactly, does a script kiddie get a "single 1G connection"? Google Fiber? (And do you really think Google wouldn't notice a flooded connection?) Plus, firing off a DoS from a single location makes it downright trivial to kill -- there's only one machine that has to be unplugged.

No they will not be thrilled (2)

nurb432 (527695) | about 9 months ago | (#46256951)

How can you push out propaganda if your main distribution method goes away?

Re:No they will not be thrilled (1)

RockDoctor (15477) | about 9 months ago | (#46260645)

How can you push out propaganda if your main distribution method goes away?

By continuing to use your multiple other methods of distribution?

One system may be fastest or cheapest, but you have to be really, really sure that you're never going to need what you used previously before you unplug it and sell (or throw away) the hardware.

Case in point: I'm working on a 100-million-dollar ship equipped with around ten million dollars of the best, shiniest, and newest equipment for robotically handling one of the most dangerous of everyday operations. And in the last few days, the robot broke down, so we went down to the heavy tool store, broke out the manual tools (whose design hasn't changed significantly in about 70 years), and carried on working while the mechanics and electronics technicians got on with repairing the robot.

Why did we have those tools in the heavy tool store? Because someone planned for system level redundancy.

Re:No they will not be thrilled (1)

nurb432 (527695) | about 9 months ago | (#46261057)

By continuing to use your multiple other methods of distribution?

That won't be effective on the next generation ( kids today ). Time has moved on; the method must too.

Re:No they will not be thrilled (1)

RockDoctor (15477) | about 9 months ago | (#46264797)

( kids today )

Part of the job of us grey-beards is to make sure that when (not if) the things that the kids depend on get broken (including by other kids), then there's a backup system in place. You see, kids today haven't seen things fuck up completely. So they know that it's not going to happen to them because they're immortal and of infinite intelligence.

When (not if) the "kids" encounter their first major fuck up - they have friends killed in a car crash; their employer goes bust because of something completely unrelated to their actions; someone puts an excavating machine through their communications cable; or the power goes out for a week - and they have to use other techniques ... then they're starting to have their innocent youth torn away from them and they're proceeding towards adulthood and impending grey-beard-hood.

I remember a conversation with a friend in infrastructure maintenance work once - when I was bitching about having to tear down, move and re-build my laboratory's power and sensor equipment every couple of months. He was telling me that the equipment he works with is intended for an average (not maximum or mean, but mode) lifetime of 50 years. You're going to be living with that equipment for a long time, and you'd better hope that it was built with redundancy in mind.

Root cause of DDoS attacks .. (0)

Anonymous Coward | about 9 months ago | (#46257203)

@by The Cat: "The obvious solution is to unplug the Internet. I'm sure the government and the movie people will be thrilled."

The obvious solution is to unplug Microsoft Windows from the Internet. It's all those Windows desktops out there that are the root cause of these DDoS attacks ..

Re:Root cause of DDoS attacks .. (1)

Cramer (69040) | about 9 months ago | (#46280571)

Current low-hanging fruit != root cause... take away Windows(tm) and they'll target something else.

Root issue is lack of URPF and similar (5, Informative)

silas_moeckel (234313) | about 9 months ago | (#46254873)

Hosting/colo/transit providers are the real core issue. There is absolutely no reason that uRPF (or similar), or at least ingress ACLs, are not in place. Let's face it: if you're limiting the prefixes announced, you should be filtering on them as well. Anything even close to the core can do this in hardware, and for uRPF and similar there is generally no config required beyond turning it on. At the hosting/colo level, do you still have something on the public side that can't do at least ACLs in hardware? Plenty of automation packages can do this stuff in an automated fashion. The root cause is lazy and broken providers that just do not care; DDoS traffic can make some of them piles of cash, directly in transit billing or indirectly as the only people with a big enough pipe to sell DDoS protection.

Re:Root issue is lack of URPF and similar (4, Funny)

BSAtHome (455370) | about 9 months ago | (#46254907)

Indeed, reverse path filtering should be mandatory, especially because it is so easy.

Also, RFC3514 should become a part of the IP standard. Not setting the appropriate bit from the sender side should then be punishable with eternal flogging.

Re:Root issue is lack of URPF and similar (1)

Pinky's Brain (1158667) | about 9 months ago | (#46254921)

Exactly ... is there some perverse incentive at work which makes backbones not try to implement ingress/egress filtering at the internet edge?

AFAICS it would be trivial for them to require it through new contracts, put some fines on not implementing it, and all this disappears ... I'm sure there are some owned computers on core networks, but I doubt the owners would want to expose them in a vindictive DDoS attack.

Re:Root issue is lack of URPF and similar (1)

Zocalo (252965) | about 9 months ago | (#46254979)

Laziness on the part of end-user ISPs, mostly, but also because BCP38 / RFC2827 [bcp38.info] is harder to implement for larger customers that do their own routing from multiple subnets and might legitimately start to send traffic from an IP allocation you have no idea about and thus have blocked because one of their upstream links with a different provider went down. Still, there's no real excuse for not doing this on the edge of networks that are only ever going to have a single known block of IPs behind them. Just taking away the potential for home and SoHo users to spoof source addresses would have a huge impact on the ability to perform amplification-based attacks on this kind of scale.

Nothing to stop a suitably inclined end user from fixing up their own network though, just in case they do get compromised. Line #1 in my router's egress rules is always "if source IP != my subnet, drop the packet & log to syslog".
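That rule is easy to model: a BCP38-style egress check is just a membership test of each packet's source address against the subnets you actually own. A minimal sketch (the subnet values are invented for illustration):

```python
import ipaddress

# Subnets this network legitimately originates traffic from
# (example values -- substitute your own allocation).
MY_SUBNETS = [ipaddress.ip_network("203.0.113.0/24")]

def egress_ok(src_ip: str) -> bool:
    """Return True if a packet with this source address may leave the network."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in MY_SUBNETS)

# A packet claiming to be someone else gets dropped (and, per the rule
# above, logged to syslog).
print(egress_ok("203.0.113.42"))  # True  -- our own address
print(egress_ok("198.51.100.9"))  # False -- spoofed, drop it
```

If every edge network enforced this one check on outbound traffic, spoofed-source amplification attacks would have nowhere to originate.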

Re:Root issue is lack of URPF and similar (2)

Pinky's Brain (1158667) | about 9 months ago | (#46255083)

It would be hard for the backbones to do ingress filtering for those larger companies, but the companies can surely do egress filtering at each edge of their networks ... just a question of sufficient (financial) incentives.

Most of the internet edge could be ingress filtered by the core; the rest should do its own egress filtering.

Re:Root issue is lack of URPF and similar (2)

AK Marc (707885) | about 9 months ago | (#46255393)

Working for a company with 4,000,000 users, we are ingress filtered (but only over a very tiny subset of links). It works fine. Why would it fail for a larger company? We know every "legitimate" IP on or through our network, and notify those, when required. IP address ranges are static. They don't change who they are assigned to. And the number of changes to providers for those ranges is low, easily manageable for providers and users alike.

Re:Root issue is lack of URPF and similar (1)

Pinky's Brain (1158667) | about 9 months ago | (#46255481)

Well, if those companies want to have complete control and just add IP address ranges willy nilly without dealing with their providers' admins, it becomes impossible -- maybe more for political reasons than technological, but that doesn't really matter.

So I say fine ... if they don't want to let the providers do ingress filtering for them, just make it mandatory for the companies to have egress filtering on their networks (with fines for non-compliance/lack of due diligence). If there are real financial risks, the technical objections will probably quickly disappear and they'll gladly let their providers handle it.

Re:Root issue is lack of URPF and similar (1)

AK Marc (707885) | about 9 months ago | (#46256057)

You *can't* add addresses "willy nilly". You need applications and justifications for new blocks; at this point, it's almost easier to buy a company that owns IP blocks. But, as you imply, someone that doesn't want to be filtered, and refuses to filter themselves, should be kicked off the network.

Re:Root issue is lack of URPF and similar (1)

sjames (1099) | about 9 months ago | (#46255269)

In cases of multi-homing or failure recovery, the customer has to let their providers know what routes they will/may be announcing in order to get the BGP filtering set up correctly. Might as well set up the source address filtering at the same time.

Re:Root issue is lack of URPF and similar (2)

CyprusBlue113 (1294000) | about 9 months ago | (#46255333)

The problem with this becomes: what if you're a transit provider yourself? The logistics of managing that kind of filtering suck. It's why most peers don't.

There needs to be a middle ground between loose and strict, like "feasible". I don't want to accept packets for any route I have, nor do I want to drop any packet that doesn't head back out the same direction. For reasonable filtering at that level, it needs to be "allow any packets that should reasonably come from this peer, per their advisement, that I can filter". Sure, you could base it off IRR or something, but it would be much more effective if this was signaled rather than configured.

Re:Root issue is lack of URPF and similar (1)

sjames (1099) | about 9 months ago | (#46256369)

That sort of transit is a level above edge. In such a case it is probably sufficient to stipulate in any contract that the transit customer has fully implemented source filtering. Of course, if there is ever any abuse, they'll have to pay penalties and face stricter filtering.

That is already dealt with for BGP.

Re:Root issue is lack of URPF and similar (1)

CyprusBlue113 (1294000) | about 9 months ago | (#46282393)

The problem is you have to trust that peer to police their network.

It leads to a situation where one bad-actor network with content can keep the scheme from ever succeeding.

Re:Root issue is lack of URPF and similar (1)

sjames (1099) | about 9 months ago | (#46282481)

Trust but verify, and cut them off if they refuse to do the right thing.

Re:Root issue is lack of URPF and similar (1)

Cramer (69040) | about 9 months ago | (#46280687)

As one who has maintained an ISP's peering: it is nowhere near as complicated as you make it sound. Enterprise-class hardware (from Cisco, Juniper, etc.) has built-in support for unicast reverse path filtering (uRPF) that's effectively processing-free -- based on the routing table (the "FIB", or forwarding information base) -- very effectively preventing traffic from entering (or leaving) your network that doesn't belong there.

(As an end user, uRPF presents a small problem as the ISP DHCP server is a 10-net host and I null route 10/8.)
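The FIB-based check described above can be sketched as a toy model: strict uRPF requires that the route back to the source points out the interface the packet arrived on, while loose mode only requires that some route to the source exists. The forwarding table here is invented for illustration, not router code:

```python
import ipaddress

# Toy forwarding table: destination prefix -> egress interface.
FIB = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",
    ipaddress.ip_network("198.51.100.0/24"): "eth1",
}

def lookup(src_ip):
    """Longest-prefix match of the source address against the FIB."""
    addr = ipaddress.ip_address(src_ip)
    matches = [(net, ifc) for net, ifc in FIB.items() if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen, default=(None, None))[1]

def urpf_pass(src_ip, in_iface, strict=True):
    ifc = lookup(src_ip)
    if ifc is None:
        return False                    # no route back to the source at all
    return ifc == in_iface if strict else True

print(urpf_pass("203.0.113.5", "eth0"))                # True: reverse path matches
print(urpf_pass("203.0.113.5", "eth1"))                # False: strict mode, asymmetric path
print(urpf_pass("203.0.113.5", "eth1", strict=False))  # True: loose mode only needs a route
print(urpf_pass("192.0.2.1", "eth0", strict=False))    # False: unrouted (spoofed) source
```

The second and third lines illustrate the strict-vs-loose trade-off argued about in the replies below: strict mode drops legitimate asymmetric traffic, while loose mode passes anything with any route at all.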

Re:Root issue is lack of URPF and similar (1)

CyprusBlue113 (1294000) | about 9 months ago | (#46281193)

As one who has maintained an ISP's peering: it is nowhere near as complicated as you make it sound. Enterprise-class hardware (from Cisco, Juniper, etc.) has built-in support for unicast reverse path filtering (uRPF) that's effectively processing-free -- based on the routing table (the "FIB", or forwarding information base) -- very effectively preventing traffic from entering (or leaving) your network that doesn't belong there.

(As an end user, uRPF presents a small problem as the ISP DHCP server is a 10-net host and I null route 10/8.)

Yes, obviously -- and it can be implemented in two modes: strict, which is useless on an upstream peer because you don't necessarily have the best path down to them for everything you're hearing, or loose, which is again useless on an upstream peer because you might as well turn it right off.

Dude, clearly you have no idea what you just read.

Re:Root issue is lack of URPF and similar (1)

Cramer (69040) | about 9 months ago | (#46281971)

Incorrect. I'll say it again: it's not as complicated as the "haters" claim. I've maintained "BCP38" in an ISP network with transit links (aka ISP-edge routers with default routes). While it's not perfect -- because "everything" is potentially on the other side -- there are steps to be taken. (I ultimately cannot prove a packet with a source address of PayPal actually came from them unless I'm directly peered with them.) You know what's inside your network (read: as the network operator, you d*** well better know), and thus can prevent ingress traffic sourced from what's already inside your network, and prevent everything inside your network from pretending to be someone else.

And that last bit is the key point. If everyone ensured their egress traffic isn't spoofed, these sorts of things would no longer be possible. Host A wouldn't be able to send packets to NTP servers with their source set to PayPal (for example).

(Also, there are two ways to do transit... the way consumers do it -- a single default route -- and the way ISPs and big enterprises do it -- a full route table from the upstream provider.)

Re:Root issue is lack of URPF and similar (1)

CyprusBlue113 (1294000) | about 9 months ago | (#46282367)

Let me try this as simply as I can: just because you run BGP with your provider does not make you a peer or a transit network.

You just said "default route". That is a leaf node. You're at the end of the world. You are not peering. uRPF is useful when you're a leaf. It is *completely useless as a real peer* in its current form.

Let me illustrate this for you with a completely made-up scenario: you are Telia, and you peer with Abovenet in 3 places. How do you configure uRPF on those links so that it keeps spoofed packets out and doesn't break all your downstreams?

Re:Root issue is lack of URPF and similar (1)

Cramer (69040) | about 9 months ago | (#46282613)

(Per peer interface:) "ip verify unicast source reachable-via any". This is the less desirable "loose" method, and it doesn't work if you have a default route (with 0/0 matching everything). It won't necessarily stop all spoofing, but it will significantly cut it down. "via rx" is always preferred, but in this case each site may not prefer a given network through its local connection, instead crossing an internal link to another site. (This is also asymmetric routing.)

Once again... the only way to completely end this crap is for every operator to take steps to prevent their own clients from lying about who they are. uRPF works over 99% of the time; the odd-man-out multihomed setups get a fully defined ACL.

Furthermore, each BGP peering session should itself be filtered to a list of allowable prefixes -- often managed in an automated fashion through RADBs. (Every ISP I've dealt with filtered, as did I.) That DB can also be used to maintain ACLs. The address space I managed didn't change often enough for anyone (read: me) to automate it; the likes of UUNet and AT&T, though -- their prefixes change constantly.

Re:Root issue is lack of URPF and similar (1)

AK Marc (707885) | about 9 months ago | (#46255377)

for larger customers that do their own routing from multiple subnets and might legitimately start to send traffic from an IP allocation you have no idea about and thus have blocked because one of their upstream links with a different provider went down.

Assuming they aren't relying on asymmetric routing (a bad thing): if you don't know about a range being sent to you by a customer, how can they receive a reply?

Still, there's no real excuse for not doing this on the edge of networks that are only ever going to have a single known block of IPs behind them though.

This works for dynamically routed networks as well: if they aren't advertising the range through you, don't accept it. That's not going to block any legitimate traffic unless you have route filters and the customer didn't follow the process to get new ranges added to your filters.

The customers and edge ISPs should stop all this. If they don't, they should be sued for billions.

Re:Root issue is lack of URPF and similar (2)

DarkOx (621550) | about 9 months ago | (#46255471)

I agree that for edge networks there is no good reason for RPF not being enabled, but you hit the nail on the head when it comes to larger customers that have an AS (or multiple AS allocations) and IP addressing they may not share with you. It's not really as simple as just throwing a switch at most of the sites which really matter.

As far as home and SoHo users go, I don't know how the rest of the world is, but I don't know any mainstream ISP that isn't doing some kind of reverse filtering. I have not been able to get packets with spoofed source addresses to the internet on any of the cable or DSL providers I have had at my homes in the last decade. Can I send some spoofed packets to my neighbor, who is probably on the same cable segment? Very likely; maybe I could even push them around Cox's network. But they don't get to the Internet.

RPF is not going to solve the problem of these big amplification DoS attacks either. All it really takes is a handful of sites with a decent amount of upstream not running RPF or other effective egress filtering, and an amplification attack like the recent NTP jobs is possible. So it's going to come back to those sites where you can't just enable RPF and go back to playing Flappy Bird for the rest of the afternoon. Essentially this is any place where you have a significant number of customers who are multihomed -- which in turn describes many corporate entities who do not specialize in internetworking and likely have plenty of vulnerabilities attackers can use to get control of a host or two inside and launch their DDoS.

Re:Root issue is lack of URPF and similar (1)

Pinky's Brain (1158667) | about 9 months ago | (#46255691)

All it takes is to kick those handful of sites off the internet and problem solved.

Re:Root issue is lack of URPF and similar (1)

silas_moeckel (234313) | about 9 months ago | (#46255965)

RPF won't, but ACLs will, and it's trivial to take a BGP prefix list and turn it into an ACL. The more it's implemented the better it works; 100% penetration is not required for it to be effective. As you push these attackers into the undefended spaces, more and more pressure is put on them to clean up their act. Most of the source points for this seem to be hosting/colo, where the filters are pretty trivial to get in place, even if it's just on your own edge outbound.
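The "BGP prefix list in, ACL out" step really is mechanical, as this sketch shows. The wildcard-mask rule text mimics the ACL syntax common on routers, and the customer prefixes are invented for illustration:

```python
import ipaddress

# Prefixes this BGP customer is allowed to announce (example values).
customer_prefixes = ["203.0.113.0/24", "198.51.100.0/23"]

def prefix_list_to_acl(prefixes):
    """Render each accepted prefix as a source-address permit rule in
    wildcard-mask form, closed with an explicit deny -- the
    'prefix list in, ACL out' idea."""
    rules = []
    for p in prefixes:
        net = ipaddress.ip_network(p)
        rules.append(f"permit ip {net.network_address} {net.hostmask} any")
    rules.append("deny ip any any")
    return rules

for rule in prefix_list_to_acl(customer_prefixes):
    print(rule)
# permit ip 203.0.113.0 0.0.0.255 any
# permit ip 198.51.100.0 0.0.1.255 any
# deny ip any any
```

Because the same prefix data already drives the BGP route filters, generating the ingress ACL from it adds no new bookkeeping, which is the point being made above.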

Re:Root issue is lack of URPF and similar (1)

DarkOx (621550) | about 9 months ago | (#46256377)

I agree that will help a lot but it still won't solve the problem. The problems is the size of the sub that's through skateboarders just upset on out there. You can always weapon eyes local subnet cause her is no router to enforce ACL hosts and talk to each other directly. You spoof a few packets 100 or so little Soho routers out there each with 5 Mb upload and you got quite a lot of bandwidth right there. All of that traffic will indeed be sourcing local network with both the ACL's OraVerse path filtering allow to the target. Add a botnet weaponize few more subnets and you are back to a fairly high-bandwidth distributed attack.

Weapon eyes (1)

tepples (727027) | about 9 months ago | (#46272143)

You wrote both "weaponize" and "weapon eyes", and you made some reference to "skateboarders" that I couldn't puzzle apart. What's going on here? Is your device set to "wreck a nice beach" [hyperorg.com] ?

Re:Root issue is lack of URPF and similar (2)

silas_moeckel (234313) | about 9 months ago | (#46254999)

Is transit billing not a good enough one for you? Or selling their own DDoS protection, or transit bandwidth to others to do the same? Those seem like good reasons for them not to want to.

There are potentially serious issues with tier 1s putting this in place today with their peers, etc. Anything that is not a BGP speaker should have this on today; BGP-speaking clients should be given a timeline to be ready for this to be turned on (there are some broken bits out there). Tier 1 peers are another story, but if everything else is done it does not matter much.

Re:Root issue is lack of URPF and similar (0)

Anonymous Coward | about 9 months ago | (#46255035)

Because it takes time and configuration knowledge, and is likely to break something stupid that someone is reliant on. The obligatory XKCD explaining this is below.

https://xkcd.com/1172/

Couple this with the inevitable attitude of poorly mentored operations people that "if we have an intruder inside our network, we have much bigger problems" and you have an absolute refusal to perform even basic protections against attacks -- and you have people who send passwords in clear text and don't worry that Subversion stores their passwords to SourceForge open-source projects in plain text in their home directory.

Why not rate limit? (1)

Chemisor (97276) | about 9 months ago | (#46254927)

So why don't NTP servers limit their responses to, say, 1 per 10 seconds per IP address? Even with spoofing, it would not take that long to exhaust the subnet of the attack target.
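The proposed limit is easy to sketch: remember when each client address was last answered and drop anything inside the window. This is a minimal in-memory model, not ntpd's actual implementation:

```python
import time
from collections import defaultdict

# At most one reply per WINDOW seconds per client address,
# as the comment above proposes.
WINDOW = 10.0
_last_reply = defaultdict(lambda: float("-inf"))

def allow_reply(src_ip, now=None):
    """Return True if the server may answer this source address now."""
    now = time.monotonic() if now is None else now
    if now - _last_reply[src_ip] >= WINDOW:
        _last_reply[src_ip] = now
        return True
    return False          # drop: this source asked too recently

print(allow_reply("198.51.100.9", now=0.0))   # True: first request
print(allow_reply("198.51.100.9", now=3.0))   # False: inside the 10s window
print(allow_reply("198.51.100.9", now=12.0))  # True: window has passed
```

Note that against a spoofed flood the limiter keys on the victim's address (that is what the forged packets carry), so it caps each server's contribution to the attack rather than identifying the real attacker.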

Re:Why not rate limit? (3, Insightful)

Pinky's Brain (1158667) | about 9 months ago | (#46254935)

They're all buggy commodity routers which are never getting updates.

Re:Why not rate limit? (2)

mysidia (191772) | about 9 months ago | (#46255275)

They're all buggy commodity routers which are never getting updates.

Relatively recent Juniper JunOS versions respond to ntpdc monlist as well, so they're vulnerable. The only way to address these, I found, was to completely firewall off NTP on the loopback interface.

The same goes for a number of other appliances that are still technically supported, but the vendors seem so uninterested in and unconcerned about NTP issues that they are only suggesting workarounds such as "turn off NTP", with no indication that a patch will be forthcoming.

Re:Why not rate limit? (1)

greenfruitsalad (2008354) | about 9 months ago | (#46256163)

I see you've also had to deal with this plague. Stupid Juniper switches are unable to work as NTP clients only: as soon as you configure ntp settings, they reply to client requests. A few weeks ago, I learned this the hard way when all my switches suddenly became overloaded and all my BFD sessions started flapping.

Re:Why not rate limit? (0)

Anonymous Coward | about 9 months ago | (#46263321)

as soon as you configure ntp settings, they reply to client requests.

Because you failed to properly configure it.
If you need your hardware to "hold your hand" then you need to stop playing with Big Boy equipment.

term allow-ntp {
        from {
                source-address {
                        /* permitted source addresses (elided in the original post) */
                }
                protocol udp;
                port ntp;
        }
        then accept;
}

term block-ntp {
        from {
                protocol udp;
                port ntp;
        }
        then {
                discard;
        }
}

Re:Why not rate limit? (1)

klui (457783) | about 9 months ago | (#46258517)

Juniper advisory:
http://kb.juniper.net/InfoCent... [juniper.net]

JunOSe and ScreenOS unaffected.

Re:Why not rate limit? (2)

Gerald (9696) | about 9 months ago | (#46254939)

Most modern servers don't respond to the offending command (monlist) at all. Older/misconfigured servers are the problem and there are enough of them to cause trouble.

Re:Why not rate limit? (1)

mysidia (191772) | about 9 months ago | (#46255263)

So why don't NTP servers limit their responses to, say, 1 per 10 seconds per IP address?

You couldn't be bothered to spend 5 minutes reading to learn that the issue only exists on NTP implementations that allow administrative queries, and that on modern NTP implementations that's off by default?

By the way, NTP servers CAN [redhat.com] be configured with 'discard' and 'restrict limited' statements to restrict the rate at which clients can query, and to send KOD packets if a client is querying too often.

But that's not the DoS amplification issue. The NTP servers need to be configured with 'noquery' by default.

Or the ancient BSD implementations need to be upgraded to modern ones.
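Pulling the directives named above together ('restrict ... limited', 'discard', KOD, 'noquery'), a hardened ntpd configuration looks roughly like this sketch; the threshold values are illustrative, not recommendations:

```
# Refuse mode-6/7 administrative queries (the monlist vector) and
# rate-limit ordinary client traffic, answering abusers with
# Kiss-o'-Death packets.
restrict default kod limited nomodify notrap nopeer noquery
restrict -6 default kod limited nomodify notrap nopeer noquery

# Thresholds for the 'limited'/'kod' check (illustrative values).
discard minimum 2 average 5

# Localhost keeps full access.
restrict 127.0.0.1
restrict -6 ::1
```

With 'noquery' in the default restriction, the server still serves time but no longer answers the monlist-style queries that make it usable as an amplifier.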

How is a 12 year old finding out his IP address? (1)

Marrow (195242) | about 9 months ago | (#46254975)

Maybe this is another reason to use Tor or something more generic to mask your IP? Not for privacy, but to hide in the crowd. Google wants to know everything anyway... maybe they should offer a service to be a web proxy server.

Re:How is a 12 year old finding out his IP address (0)

Anonymous Coward | about 9 months ago | (#46255153)

If someone starts DDoSing Tor nodes with 200Gbps, the entire network will become unusable.

booster/stresser sites (1)

Revek (133289) | about 9 months ago | (#46255007)

These services are available to any kid with five dollars. The last one that hit my network knocked off both us and our upstream provider. They use spoofed packets to machines with services such as chargen/echo to amplify the attacks. If you contact one of these services, they will threaten you or try to extort money from you.

Re:booster/stresser sites (1)

Redmancometh (2676319) | about 9 months ago | (#46255467)

No $5 booter is going to do anything close to this kind of damage. A gigabit maybe, but that's about as high as it would go, despite their claims.

However I'm sure some will be adding NTP amplification to their "services."

Re:booster/stresser sites (1)

throwaway18 (521472) | about 9 months ago | (#46256459)

I tried one against myself for a minute last year and saw about 4Gbit/second of port 53 UDP traffic. Enough to cause problems for an amateur-hour webhosting service. Any half decent webhost can handle that these days.

Re:booster/stresser sites (1)

Redmancometh (2676319) | about 9 months ago | (#46256605)

4Gbit? Did you have a 10 gigabit port or something?

Even so that's not even in the same league as what we're talking about.

Then there's the human end (1)

rbrander (73222) | about 9 months ago | (#46255023)

I can't help but notice all the comments so far are about technical prevention. If that is possible, well, great. But for those who dodge all technical barriers and pull this off, maybe it's time for some laws equivalent to those insanely high penalties for file-sharing. It's not like a 200Gbps attack is inadvertent or accidental; it takes some deliberate effort. Make it a criminal-record, no-passport, ruin-your-employability, year-in-jail kind of crime. I suppose the 15-year-old in Illinois will have his computer taken away; what if HE were taken away?

Re:Then there's the human end (2, Interesting)

SuricouRaven (1897204) | about 9 months ago | (#46255051)

The problem with that approach is that a lot of those internet criminals are actually just immature teenagers - all they really need is a slap on the wrist to scare them straight and a good talking-to by their parents. Throwing them in jail is a good way to make sure they turn into real career criminals - if you can't get employment in legitimate work, what other choice is there? It's the same problem with heavy sentences for drug possession.

Almost every decent computer security expert dabbled in black-hating a little when they were learning, if only to prove to themselves what they could do or for the fun of adventuring into forbidden places. I used to port-scan for open netbios shares back in the win9x era - found a lot of people who had their entire C: drive open to the world. I left text files on their desktops warning them about the open access.

Re:Then there's the human end (2)

Concerned Onlooker (473481) | about 9 months ago | (#46255077)

"Almost every decent computer security expert dabbled in black-hating a little"

Oh, my. I had no idea that the computer security field was so rife with racists.

Re:Then there's the human end (1)

Jeremy Erwin (2054) | about 9 months ago | (#46255543)

Since you care to differentiate a hatter from a hater, perhaps the word you're looking for is "blackhatting". Note the spelling.

Re:Then there's the human end (1)

Anonymous Coward | about 9 months ago | (#46255119)

Almost every decent computer security expert dabbled in black-hating a little when they were learning

Nope, I was never racist about it.

Re:Then there's the human end (1)

jafiwam (310805) | about 9 months ago | (#46255155)

The problem with that approach is that a lot of those internet criminals are actually just immature teenagers - all they really need is a slap on the wrist to scare them straight and a good talking-to by their parents. Throwing them in jail is a good way to make sure they turn into real career criminals - if you can't get employment in legitimate work, what other choice is there? It's the same problem with heavy sentences for drug possession.

Almost every decent computer security expert dabbled in black-hating a little when they were learning, if only to prove to themselves what they could do or for the fun of adventuring into forbidden places. I used to port-scan for open netbios shares back in the win9x era - found a lot of people who had their entire C: drive open to the world. I left text files on their desktops warning them about the open access.

Ok, public caning in the town square. One lash for each gigabit of wasted bandwidth, plus $100 fine for each of the same.

Re:Then there's the human end (2)

rbrander (73222) | about 9 months ago | (#46255285)

Appropriate to what 1961 would have called a science-fiction crime, the punishment taken from Starship Troopers. I like it.

Re:Then there's the human end (4, Insightful)

jythie (914043) | about 9 months ago | (#46255091)

We already have pretty strict (and overused) laws involving cybercrime.

Problem is, people who do this stuff professionally are pretty much immune from being caught, and the people who do get caught are usually teenagers. While we like talking about personal responsibility, biologically, young brains really do have physical issues when it comes to impulse control and risk analysis. So punishing them harshly does not actually do any good other than satisfying a certain bloodlust.

I'm calling bullshit (0)

Anonymous Coward | about 9 months ago | (#46255551)

All this talk about young brains not being capable of knowing what consequences follow from their action, I'll call bullshit right now.

Thing is, I was a bit of a teenager once. Prone to pranks and doing things I shouldn't have. But I *always* knew what I did, the results it would have, and how people would be disadvantaged by it. I knew. I also knew I shouldn't have done it. That I did it anyway is in no way due to an inadequately developed brain, but completely due to my lack of upbringing and moral values. I knew better. Then, and now. I should have had the book thrown at me. Repeatedly, I might add.

DDOS'ing a site? Lock 'm up for, at least, twenty years. The Internet is essential for civilized life nowadays. We wouldn't be lenient on people that blew up power lines, why be soft on the cyber criminals? Arguably they commit worse crimes.

Find them, charge them, incarcerate them. Put it all over the news. Make it known the Internet is not to be tampered with, just like power lines, gas lines, and other essential infrastructure.

That will stop them. Or at least it will stop them from doing it *again*, once caught.

Re:I'm calling bullshit (1)

mars-nl (2777323) | about 9 months ago | (#46259083)

DDOS'ing a site? Lock 'm up for, at least, twenty years. The Internet is essential for civilized life nowadays. We wouldn't be lenient on people that blew up power lines, why be soft on the cyber criminals? Arguably they commit worse crimes.

Find them, charge them, incarcerate them. Put it all over the news. Make it known the Internet is not to be tampered with, just like power lines, gas lines, and other essential infrastructure.

That will stop them. Or at least it will stop them from doing it *again*, once caught.

May I remind you the internet is a global borderless network, which makes such laws impossible to implement. Also, it does not solve the problem, because there will always be a new guy stupid enough to DDOS someone. So, forget about laws. Just fix the internet.

Then introduce borders with Son of SOPA (1)

tepples (727027) | about 9 months ago | (#46272207)

May I remind you the internet is a global borderless network, which makes such laws impossible to implement.

Then perhaps the solution is to introduce borders, to implement something like SOPA except reworded to be not quite as unpalatable to civil libertarian types.

Re:Then there's the human end (1)

Jim Sadler (3430529) | about 9 months ago | (#46255633)

Punishments often do not help the criminal, but they certainly do knock down impulses in friends who see what happens to the violator. The IRS thrives off the same tactic. Catching one tax cheat probably costs the IRS more than it recovers. But everyone who sees the cheat tossed down the rabbit hole suddenly becomes more honest in their reporting. The catch is that sometimes simple errors, rather than a deliberate intent to commit the act, cause the violation.

Y2K (1)

jythie (914043) | about 9 months ago | (#46255125)

While we patch and patch, we might be getting close to the point where a real restructuring or protocol update needs to happen. Various researchers have proposed technologies that could make the internet far more resilient to stuff like this, and maybe it is time we switch over.

But I am not thinking of some nice gradual switchover, but a nice 'if you don't upgrade by X time you lose your insurance and can no longer peer'. If nothing else we could kill at least two birds with one stone... think about the massive economic activity around the Y2K update; all the money that flowed into tech and jobs for it had a ripple effect through the economy. Requiring a complete upgrade of the internet would put a real dent in the current economic downturn.

Re: Y2K (0)

Anonymous Coward | about 9 months ago | (#46256099)

We could start by enforcing IPv6 only.

Re: Y2K (0)

Anonymous Coward | about 9 months ago | (#46259777)

We could start by enforcing IPv6 only.

Yeah, and just think how big THOSE reply packets would be.

Re:Y2K (1)

satch89450 (186046) | about 9 months ago | (#46257693)

But I am not thinking of some nice gradual switchover, but a nice 'if you don't upgrade by X time you lose your insurance and can no longer peer'. If nothing else we could kill at least two birds with one stone... think about the massive economic activity around the Y2K update; all the money that flowed into tech and jobs for it had a ripple effect through the economy. Requiring a complete upgrade of the internet would put a real dent in the current economic downturn.

Another benefit: we can see the sequel, Office Space 2, and see how Initech inflates the work needed to solve the spoofed-source problem. Will it end in another fire? Will Milton come back?

Solution: Get rid of kids! (1)

Anonymous Coward | about 9 months ago | (#46255135)

Require ISPs to do checks on IP spoofing. Case closed for most DDoS attacks. Optimization always comes at a cost of security. I'm not even an expert and still know the solution, just like a kid can read and click through a premade tool, fill out some forms and do attacks.

Kids don't have the moral subroutines to understand restraint. Anyone with a minimum amount of knowledge can fire off attacks these days, it seems.
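The spoofing checks this comment asks ISPs to perform are essentially BCP38 egress filtering: drop any packet leaving your network whose source address isn't one of your own. A minimal sketch of the logic in Python (the prefixes are illustrative documentation ranges; real deployments enforce this in router ACLs or with unicast RPF, not in application code):

```python
import ipaddress

# Hypothetical prefixes assigned to an ISP's customers. Under BCP38, any
# packet leaving the network with a source address outside these prefixes
# is spoofed and should be dropped at the edge.
CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def is_spoofed(source_ip: str) -> bool:
    """True if the claimed source address belongs to no customer prefix."""
    addr = ipaddress.ip_address(source_ip)
    return not any(addr in net for net in CUSTOMER_PREFIXES)

# An NTP request forged to carry the victim's address would fail the check:
print(is_spoofed("203.0.113.7"))  # False - a legitimate customer address
print(is_spoofed("192.0.2.55"))   # True - outside every customer prefix
```

With filtering like this at every edge, a reflection attack never gets off the ground, because the forged requests are dropped before they reach any NTP server.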

Solvable (1)

gmuslera (3436) | about 9 months ago | (#46255305)

Compared with the mostly unsolvable new normal of having most of the basic internet infrastructure backdoored by a government, I'd say this is pretty benign. You can diminish it a lot by asking administrators to fix their NTP servers, or by banning their IPs. But no matter how much you try, the internet as a worldwide network is broken beyond repair; you can choose to ignore that fact (as much as you can ignore being hit by a 400Gbps attack), but it will still be broken.

Well at least it's an impartial opinion... (0)

Anonymous Coward | about 9 months ago | (#46255321)

...I mean it's not like CloudFlare's CEO could drum up any more business by exaggerating the threat of DDOSes or anything...

Re: Well at least it's an impartial opinion... (0)

Anonymous Coward | about 9 months ago | (#46256111)

Nothing he said isn't true, though. My company has 400Gb/s of combined bandwidth to provide service. 6 months ago we were fine (we only need 40 to actually provide service; the rest is for DDoS safety and redundancy), now we're in a position where we could be taken offline if we piss off the wrong people.

Better OSes, better regulations (1)

aslashdotaccount (539214) | about 9 months ago | (#46255737)

Since Windows started issuing certification warnings for third-party software, relatively fewer trojans have affected Windows boxes. The same tactic has always helped reduce the infection rate for Mac OS. iOS fares even better because all software approved by Apple for the Appstore is screened. This is one way of reducing the bandwidth available to perpetrators: reduce the pasturing grounds for bot-herders.

That 99% of all mobile malware targets Android, per Kaspersky, is evidence enough that the Appstore model works better (see the heading 'Malware for Android' in link http://www.securelist.com/en/a... [securelist.com] ). With well over a billion Android activations to date, this is a whole new playground for bandwidth bandits to exploit (and they are exploiting it very effectively). Unless Google does something to ensure that their stores are sanitized, this epidemic will continue to get worse.

Finally, penalizing countries that continue to support software piracy will also help. The main vector for the propagation of trojans is pirated software. Some countries have so much malware (take a look at the table under the title 'Local threats' in this link http://www.securelist.com/en/a... [securelist.com] ) that you have to wonder if their national bandwidth capacity is utilized for any productive use at all. Should these countries be penalized in terms of bandwidth available to them unless they proactively combat their piracy markets?

Re:Better OSes, better regulations (1)

Redmancometh (2676319) | about 9 months ago | (#46255793)

You're talking about an entirely separate issue. This is all about the NTP amplification; as the article says, a server on a gigabit port could theoretically push 200 gigabits.

I run a service that has 2 dedis on 10 gigabit ports... what if those got compromised, or were owned by someone unethical?
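The article's gigabit-to-200-gigabit claim is just multiplication. A quick sanity check of the figures quoted in the summary (only the summary's own numbers are used here):

```python
# Back-of-the-envelope arithmetic for the attack described in the summary.
servers = 4529            # misconfigured NTP servers used
avg_mbps_per_server = 87  # average traffic each sent toward the victim

total_gbps = servers * avg_mbps_per_server / 1000
print(f"Aggregate: {total_gbps:.0f} Gbps")  # 394 Gbps, i.e. 'approximately 400'

# The summary's claim that a single 1 Gbps uplink can yield 200+ Gbps of
# reflected traffic implies an amplification factor of at least:
amplification = 200 / 1
print(f"Implied amplification: {amplification:.0f}x")
```

The amplification comes from NTP's `monlist` response being vastly larger than the request that triggers it, which is why a single well-connected host behind a spoofing-friendly network is enough.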

Re:Better OSes, better regulations (1)

aslashdotaccount (539214) | about 9 months ago | (#46255941)

No one with a gigabit port will allow a single IP (spoofed or not) to make infinite consecutive NTP requests. You wouldn't even let a single IP make more than one NTP request without some form of throttling. The only way to effectively launch a massive reflective attack is if you have a whole lotta IPs under your control, and you don't achieve that (without raising suspicion) unless you have a botnet.

As for your 2 hosts on 10 gig ports, you wouldn't be running them unless you were doing some serious networking work (service related?), which means you have the skills to make sure that those machines are safe. Also, I bet you've got redundancies in place to ensure that a compromise is rectified (perhaps a daemon to completely neutralize the server if a given semaphore is not set before a set timeout?).
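The per-source throttling described above is typically a token bucket. A minimal, purely illustrative sketch in Python (a real NTP server enforces this in its packet path, e.g. ntpd's `restrict ... limited` combined with the `discard` directive):

```python
import time

class TokenBucket:
    """Per-client token bucket: roughly `rate` requests/second, with bursts
    up to `burst`. A simplified illustration of reflector-side throttling."""

    def __init__(self, rate: float = 1.0, burst: float = 3.0):
        self.rate, self.burst = rate, burst
        self.tokens = {}   # client ip -> remaining tokens
        self.last = {}     # client ip -> time of previous request

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last.get(client_ip, now)
        self.last[client_ip] = now
        # Refill in proportion to time since the last request, capped at burst.
        tokens = min(self.burst, self.tokens.get(client_ip, self.burst) + elapsed * self.rate)
        if tokens >= 1:
            self.tokens[client_ip] = tokens - 1
            return True
        self.tokens[client_ip] = tokens
        return False

bucket = TokenBucket(rate=1.0, burst=3.0)
# A rapid burst of 10 requests from one (possibly spoofed) source: only the
# first few get through, so the reflector can't amplify at full line rate.
results = [bucket.allow("192.0.2.55") for _ in range(10)]
print(results.count(True))  # 3
```

Note that with spoofed requests the "client" the reflector sees is the victim, so rate limiting caps the damage any one reflector does to any one target rather than identifying the attacker.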

Overregulation (1)

tepples (727027) | about 9 months ago | (#46272167)

iOS [fares] even better because all software approved by Apple for Appstore are screened.

But if most home computers are locked down to run only software chosen by the monopoly "App Store" chosen by the computer's manufacturer, then how will high school students enrolled in an introductory programming class complete their homework?

Finally, penalizing countries that continue to support software piracy will also help.

That or penalizing companies that refuse to sell their products at all in certain countries. In affected countries, copyright infringement is the only way to obtain a copy of the work at all.

Re: Overregulation (1)

aslashdotaccount (539214) | about 9 months ago | (#46273595)

You're right about possibilities of monopolisation. However, as long as the right legal systems and enterprising businesses exist, ventures like Android will keep popping up to balance out (and eventually crush?) 'monopolies' like iOS.

As for young aspiring coders, they can use a free student certificate to develop and deploy their software on their own (and their friends') devices. It doesn't need to get approved by the OS developer. The real issue in this regard will be the effect on the open-source market. Then again, even Linux users are heavily dependent on online centralised package repositories, which could start adopting screening schemes.

Of course, there's also the advice I always give my clients: gift horse or not, make sure them teeth aren't rotten. In other words, if you can't read code then you're not going to be able to leverage one of the most important aspects of open-source software, which is determining for yourself just how safe it is.

With regards to countries being marginalised by big software vendors, you're right about people using the excuse of disenfranchisement. But they (we, since I'm in such a country) are not willing to accept that their legal systems are too corrupt and unpredictable for software vendors to trust them. What software enters the market does so through various regional distributors, in order to reduce liabilities. Appstore has not come here because they could never settle disputes without lining the pockets of a judge. In these countries there are much more important issues that people should be concerned with than the latest flappy bird clone. If they want to enjoy the software available in mature global markets they too have to attain the same maturity.

Like I said, I'm in a country where we don't have a legal Appstore or Google Play presence. However, instead of resorting to Cydia, I get store credit to buy my apps and stuff. Of course, not everyone in my country gets paid well, so not all can afford to spend money on software. As such, more than 95% of mobile software in the market is pirated. Should we continue to advocate this piracy with the excuse of disenfranchisement, and desensitise the community to the criminality of the act? Or should people live according to their means, which they can start improving by putting effort (the effort that goes into piracy?) into improving the governance of their countries?

No high school iOS developer program (1)

tepples (727027) | about 9 months ago | (#46273747)

However, as long as the right legal systems and enterprising businesses exist, ventures like Android will keep popping up to balance out (and eventually crush?) 'monopolies' like iOS.

Then where's the 4" Wi-Fi-only tablet that can run applications designed for recent versions of Android as a competitor to the iPod touch? Or are people supposed to just buy a phone, not activate cellular data service on it, and pay for a GSM radio that they'll never use?

[In an App Store world,] how will high school students enrolled in an introductory programming class complete their homework?

As for young aspiring coders, they can use a free student certificate

Since when? This page [apple.com] states that only accredited postsecondary degree-granting institutions, not high schools, are allowed to participate. Besides, the parents would still need to buy the student a Mac on which to run Xcode; it does not run on an iPad even with a Bluetooth keyboard.

The real issue in this regard will be the effect on the open-source market. Then again, even Linux users are heavily dependent on online centralised package repositories, which could start adopting screening schemes.

Official repos already have screening schemes. But Linux distros also give system administrators the power to add third-party repos that someone other than the distro publisher screens. Ubuntu has PPAs, Android has Amazon Appstore and F-Droid, etc.

Or should people live according to their means

These "means" themselves are hard to compare between countries. Developers in the developed world expect to get paid on an exchange rate basis, while people earn wages on a purchasing power parity basis. The Balassa-Samuelson model predicts that currencies of markets without an established export industry will have disadvantageous exchange rates with more industrialized markets. Do a corrupt judicial branch and an economy oriented towards locally consumed goods and services go hand-in-hand?

Spoofing permits attacks here's the solution (1)

bl968 (190792) | about 9 months ago | (#46256107)

Then make the originating network legally and financially responsible for not filtering spoofed packets originating from its network, with an IDP (Internet Death Protocol) for any network that does not fix its filtering within 3 days of an attack launched from it.

Re:Spoofing permits attacks here's the solution (0)

Anonymous Coward | about 9 months ago | (#46257773)

Don't forget permanently removing the offending ISP's local monopoly contracts.

My work's $17K/month Tier-1 ISP line is routing private addresses (they show up on our router's link to them as the source).

Make ISPs conform to the FUCKING standards under penalty of criminal law and these things won't happen.

NTP vs NSA (0)

Anonymous Coward | about 9 months ago | (#46256575)

Could NTP amplification be used to overstress the P.R.I.S.M. server banks? Or could these unsecured networks be used to spoof packets to poison "metadata" (the NSA's version of metadata)?

Arbor Networks says... (0)

Anonymous Coward | about 9 months ago | (#46259295)

Who cares... they are still in business? Focus on making us a better O-scope and keep your tripe to yourselves.

DoS/DDoS truly IS preventable (0)

Anonymous Coward | about 9 months ago | (#46260191)

DDoS/DoS CAN be stopped (Microsoft & Amazon are set up PERFECTLY against it in fact, read on below on that note)!

---

Microsoft Windows NT-based OS settings vs. DoS:

Protect Against SYN Attacks

FROM -> http://msdn.microsoft.com/en-u... [microsoft.com]

A SYN attack exploits a vulnerability in the TCP/IP connection establishment mechanism. To mount a SYN flood attack, an attacker uses a program to send a flood of TCP SYN requests to fill the pending connection queue on the server. This prevents other users from establishing network connections.

To protect the network against SYN attacks, follow these generalized steps, explained later in this document:

Enable SYN attack protection
Set SYN protection thresholds
Set additional protections

Enable SYN Attack Protection

---

The named value to enable SYN attack protection is located beneath the registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TcpIp\Parameters.

Value name: SynAttackProtect

Recommended value: 2

Valid values: 0, 1, 2

Description: Causes TCP to adjust retransmission of SYN-ACKs. When you configure this value, the connection responses time out more quickly in the event of a SYN attack. A SYN attack is triggered when the values of TcpMaxHalfOpen or TcpMaxHalfOpenRetried are exceeded.

---

Set SYN Protection Thresholds

The following values determine the thresholds for which SYN protection is triggered. All of the keys and values in this section are under the registry key

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TcpIp\Parameters

These keys and values are:

Value name: TcpMaxPortsExhausted

Recommended value: 5

Valid values: 0–65535

Description: Specifies the threshold of TCP connection requests that must be exceeded before SYN flood protection is triggered.

Value name: TcpMaxHalfOpen

Recommended value data: 500

Valid values: 100–65535

Description: When SynAttackProtect is enabled, this value specifies the threshold of TCP connections in the SYN_RCVD state. When TcpMaxHalfOpen is exceeded, SYN flood protection is triggered.

Value name: TcpMaxHalfOpenRetried

Recommended value data: 400

Valid values: 80–65535

Description: When SynAttackProtect is enabled, this value specifies the threshold of TCP connections in the SYN_RCVD state for which at least one retransmission has been sent. When TcpMaxHalfOpenRetried is exceeded, SYN flood protection is triggered.

---

Set Additional Protections

All the keys and values in this section are located under the registry key

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TcpIp\Parameters. These keys and values are:

Value name: TcpMaxConnectResponseRetransmissions

Recommended value data: 2

Valid values: 0–255

Description: Controls how many times a SYN-ACK is retransmitted before canceling the attempt when responding to a SYN request.

Value name: TcpMaxDataRetransmissions

Recommended value data: 2

Valid values: 0–65535

Description: Specifies the number of times that TCP retransmits an individual data segment (not connection request segments) before aborting the connection.

Value name: EnablePMTUDiscovery

Recommended value data: 0

Valid values: 0, 1

Description: Setting this value to 1 (the default) forces TCP to discover the maximum transmission unit, or largest packet size, over the path to a remote host. An attacker can force packet fragmentation, which overworks the stack.

Specifying 0 forces an MTU of 576 bytes for connections from hosts not on the local subnet.

Value name: KeepAliveTime

Recommended value data: 300000

Valid values: 80–4294967295

Description: Specifies how often TCP attempts to verify that an idle connection is still intact by sending a keep-alive packet.

---
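For reference, the recommended values above can be gathered into a single table and sanity-checked against their valid ranges. This sketch only restates the Microsoft guidance quoted above; the `winreg` application step is shown in comments since it is Windows-only:

```python
# The registry guidance above, restated as one table:
# name -> (recommended value, (low, high) of the valid range).
TCPIP_PARAMS = r"SYSTEM\CurrentControlSet\Services\TcpIp\Parameters"

SETTINGS = {
    "SynAttackProtect":                     (2,      (0, 2)),
    "TcpMaxPortsExhausted":                 (5,      (0, 65535)),
    "TcpMaxHalfOpen":                       (500,    (100, 65535)),
    "TcpMaxHalfOpenRetried":                (400,    (80, 65535)),
    "TcpMaxConnectResponseRetransmissions": (2,      (0, 255)),
    "TcpMaxDataRetransmissions":            (2,      (0, 65535)),
    "EnablePMTUDiscovery":                  (0,      (0, 1)),
    "KeepAliveTime":                        (300000, (80, 4294967295)),
}

for name, (value, (low, high)) in SETTINGS.items():
    assert low <= value <= high, f"{name}: {value} outside {low}-{high}"
    print(f"{name} = {value}")

# On Windows, the application step would look like (not run here):
#   import winreg
#   with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, TCPIP_PARAMS, 0,
#                       winreg.KEY_SET_VALUE) as key:
#       for name, (value, _valid) in SETTINGS.items():
#           winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
```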

Lastly, of course, there IS the "null-route" option (you need a network with multiple IP addresses, a la multi-homed servers, BEFORE your production ones, since this must be done "upstream" of them - plus, many routers have this functionality built in, so that is another way to 'blackhole' such attacks) noted here:

http://en.wikipedia.org/wiki/N... [wikipedia.org]

The route command can do the job, per the specs/requirements noted above!

This use of the route command, however, is a MANUAL & slow/stodgy method, since it is command-line driven...

(However: A script or program using a listbox COULD automate this, given the data for the originating attack IP addresses).
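A minimal sketch of that automation (the addresses are illustrative documentation ranges, and the Linux iproute2 blackhole syntax is assumed; a real tool would hand these commands to `subprocess.run()` or push them to an upstream router):

```python
# Illustrative attacker addresses (documentation ranges, not real sources).
attackers = ["198.51.100.23", "203.0.113.99"]

def null_route_commands(ips):
    """Build iproute2 blackhole commands, one /32 per attacking address."""
    return [f"ip route add blackhole {ip}/32" for ip in ips]

# Only printed here; an operator (or wrapper script) would execute them.
for cmd in null_route_commands(attackers):
    print(cmd)
# ip route add blackhole 198.51.100.23/32
# ip route add blackhole 203.0.113.99/32
```

Note that null-routing by source only helps against attacks from a manageable set of addresses; against spoofed reflection traffic from thousands of NTP servers, upstream filtering or a CDN remains necessary.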

---

DDoS Appliances:

http://www.google.com/search?s... [google.com]

* Hope that helps...

Microsoft &/or Amazon - they have such TREMENDOUSLY POWERFUL setups for monitoring + alerting them to DoS/DDoS, they can start "shutting down" IP address sources of packets for DDoS easily, & way, Way, WAY before it's time to "panic" - it's the reason WHY "Anonymous" & the like can't "take them down" (& yes, they HAVE tried)...

For some material on what they do? See here (MS):

---

Microsoft: We're not vulnerable to DDoS attacks

http://www.networkworld.com/co... [networkworld.com]

PERTINENT QUOTE/EXCERPT:

"At Microsoft we have robust mechanisms to ensure we don't have unpatched servers. We have training for staff so they know how to be secure and be wise to social engineering. We have massively overbuilt our internet capacity, this protects us against DoS attacks. We won't notice until the data column gets to 2GB/s, and even then we won't sweat until it reaches 5GB/s. Even then we have edge protection to shun addresses that we suspect of being malicious."

---

&/or

---

Why attackers can't take down Amazon.com:

http://money.cnn.com/2010/12/0... [cnn.com]

PERTINENT QUOTE/EXCERPT:

"So Amazon (AMZN, Fortune 500) has spent years creating and refining an "elastic" infrastructure, called EC2, designed to automatically scale to handle giant traffic spikes... But Amazon's entire business model is built around handling intense traffic spikes. The holiday shopping season essentially is a month-long DDoS attack on Amazon's servers -- so the company has spent lavishly to fortify itself."

INTERESTING STUFF - Hope the read helps those of you dealing with DDoS/DoS attacks...

APK

P.S.=> Others on the page note the usage of CDN - to distribute loads & "attack surface area" which helps also...

... apk

DDOS is always big (1)

fulldecent (598482) | about 9 months ago | (#46267231)

DDOS causes more lost money than other "security" breaches. Therefore it is a top priority of companies and by extension public/private partnerships.

Of course, this is an asymmetric attack and you can't stop it. In other words, it is a democratizing attack.

When I worked with the FBI on security issues in the financial sector, I was disgusted by how little attention and funding were available to fix problems like unauthorized transactions, while so much attention is available for issues like this.
