[Tfug] (D)DoS countermeasures
Bexley Hall
bexley401 at yahoo.com
Wed May 15 23:40:52 MST 2013
Hi Yan,
On 5/15/2013 8:42 PM, Yan wrote:
>> Ah, makes sense. E.g., requesting a large object via HTTP/FTP
>> is a bigger hit than GET-ing a tiny web page, etc. Especially
>> as each such large object ties up resources for a lot longer
>> and allows the possibility of other such "loads" being placed
>> on your system.
>
> It goes a bit deeper than that, even. With a modern web app architecture, a
> big static file would likely be cached, and the server used for that could
> probably withstand a decent amount of abuse. It's not really doing anything
> to send you that big file other than disk IO, and if it's got an SSD
> (gasp!) or enough RAM to cache the whole file and an HTTP daemon tailored
> for static content, I'd imagine it could serve a lot of clients before
> buckling.
Yes, I understand. I'm not concerned with the service(s) being
exploited/taxed/etc. I can ensure they are robust.
But, I have no control over the traffic on the data link into
the facility. I.e., no amount of coding/hardware can stop
something from tying up that link and, in the process,
preventing legitimate *inbound* requests from being serviced AS
WELL AS blocking incoming replies to *outgoing* service requests.
E.g., if a request for that "large object" is legitimate, then
I will have to "waste" the required bandwidth serving it up.
(granted, I can throttle that service and put heuristics in
place to watch for "abusive clients") But, even if I "go silent"
and refuse to honor *any* incoming requests, I still can't STOP
them from consuming my inbound bandwidth.
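The throttling/heuristics idea mentioned above is often implemented as a per-client token bucket. This is a minimal sketch under my own assumptions (the rate and burst limits, and the idea of keying on a client address, are illustrative, not from the thread):

```python
import time

class TokenBucket:
    """Per-client throttle: allows a sustained `rate` requests/second,
    with short bursts of up to `burst` requests."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.state = {}          # client -> (tokens, last_timestamp)

    def allow(self, client, now=None):
        """Return True if this client's request should be served."""
        now = time.monotonic() if now is None else now
        tokens, last = self.state.get(client, (self.burst, now))
        # Refill tokens for the time elapsed since the last request.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.state[client] = (tokens - 1.0, now)
            return True
        self.state[client] = (tokens, now)
        return False

tb = TokenBucket(rate=1.0, burst=3)
print([tb.allow("1.2.3.4", now=0.0) for _ in range(4)])
```

A client that is repeatedly denied could then be flagged as "abusive" by a higher-level heuristic. Note this protects the *service*, not the link: as the paragraph above says, denied requests still consume inbound bandwidth.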
> You could still bring it down with enough load, but that's kind of a
> lamer's game: the attacker exerts as many resources as the defender does.
> That's ok if you have a big network of bots (like the Spamhaus attack), but
> such attacks are very high profile and risky.
>
> On the other hand, if you want to take down (for example) a forum and know
> (maybe from timing analysis) that their subforum search is not cached or
> not indexed or something, you'd have a much smaller number of bots do
> subforum searches over and over. Each hit takes a small GET request ("GET
> /search?subforum=blah&query=foo", just a few dozen bytes plus TCP overhead)
> but might cause the DB server to do a table scan and the app server to keep
> the connection open and waiting. You could take down the site with a much
> smaller army of bots (and MUCH less bandwidth) than what'd be required if
> you were just pulling files.
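Yan's cached-file vs. unindexed-search asymmetry is easy to see with a database's query planner. A sketch using SQLite (purely as an illustrative stand-in for a forum's database; the table and index names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (subforum TEXT, body TEXT)")

def plan(sql):
    # Ask SQLite's planner how it would execute the query.
    return " ".join(row[-1] for row in
                    conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM posts WHERE subforum = 'blah'"

scan_plan = plan(query)      # no index: a full table scan, whose
print(scan_plan)             # cost grows with the size of the table

conn.execute("CREATE INDEX idx_subforum ON posts(subforum)")
indexed_plan = plan(query)   # with an index: a cheap lookup
print(indexed_plan)
```

Each attacking GET costs a few dozen bytes, but an unindexed query costs the server work proportional to the table size; that asymmetry is what lets a small botnet take the site down.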
>
>> Understood. In theory, there are no such "well known" services
>> exported. OTOH, a DoS attack would still cripple the ability
>> to (e.g.) read your mail, access other web sites, etc.
>
> I think we're talking about two different scenarios. I mostly see DDOSes in
> the context of web site operators and such, where the employees reading
> their mail would probably not be a concern (and where it might not be
> feasible to saturate the network link). I've heard of DDOSes to residential
> connections, but not often. There was some (possibly theoretical?) mention
> of cybercrime gangs DDOSing people (both with a network DDOS and by
> flooding their phones with phone calls, the latter of which can be easily
> done by puppeteering a bunch of Skype accounts) after stealing their
> banking credentials so that the victims couldn't call their bank to lock
> things down. I'm not sure how frequently such individual-level DDOSes
> happen, though.
I think individual residences are too small of a target. Nothing to
be gained except, perhaps, "personal vendetta". And, unless the
residence/residents in question were noteworthy in some way, you
wouldn't even get any *press* from your efforts. Most residences
don't offer anything worth *hacking*.
However, I see that changing in the future as automation becomes
ubiquitous. There are already "trivial" (1970s-style) automation
schemes available that allow remote control of a few devices
within the home. But, these are (currently) few and far between.
More importantly, they can be easily secured, which means the only
*effective* attack would be a DoS attack.
But, *that* would be problematic because the attacker would never
know *when* the resident would be trying to remotely access his
systems/automation. (How often would you MANUALLY contact your home
to turn on a light, change HVAC settings, water some plants, etc.?)
However, as those systems become more *autonomous*, then *any*
time can be a time of vulnerability as the system can be accessing
public infrastructure in order to fulfill its goals. Knowing that
the system will be downloading an updated weather forecast nightly
at 11:37P allows you to interfere with that functionality by
staging your attack *between* 11:36P and 11:39P.
[How do you know that? The same reason you know "admin" is the
default password for a model XYZ wireless switch, etc. I.e., *you*
can buy the same automation system and see how it works and then
target any such deployed automation system!]
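The fixed-schedule weakness described above suggests an obvious counter that the text doesn't spell out: don't fetch at a predictable minute. A sketch of randomized jitter (the 11:37P base time is from the example above; the two-hour window is my own assumption):

```python
import random

BASE_FETCH = 23 * 3600 + 37 * 60   # 11:37P, as seconds past midnight
JITTER = 2 * 3600                  # spread fetches over +/- 2 hours

def next_fetch_time(rng=random):
    """Pick an unpredictable time for tonight's forecast download,
    so an attacker can't stage a DoS in a known 3-minute window."""
    return BASE_FETCH + rng.randint(-JITTER, JITTER)

print(next_fetch_time())
```

This only raises the attacker's cost (he must now jam a four-hour window instead of three minutes); it doesn't change the underlying problem that the link itself can be saturated.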
The problem is worse for businesses where key assets or mechanisms
may be controlled by such systems. Imagine disabling the chiller
(that makes ice overnight for the air conditioning system to use
to cool the facility over the course of the next day) one or more
times at a competitor's facility... no real theft or damage, but
a lot of unhappy employees trying to work in a building that
can't be cooled before sunset!
E.g., I can prevent you from issuing a command to disable the
chiller. But, I can't prevent you from interfering with the
system's *need* to gather external information from which it
makes such decisions.
--don