[Tfug] Static/Dynamic (IP,name) bindings

Bexley Hall bexley401 at yahoo.com
Fri Sep 14 00:16:50 MST 2012


Hi Robert,

--- On Thu, 9/13/12, Robert Hunter <hunter at tfug.org> wrote:

> I thought you were just setting up something for yourself.  Now I see
> that you are thinking in terms of a consumer product. 

Yes and no.

I'm looking at an "immediate solution".  But, also at a solution
that is appropriate for others -- especially non-techies -- to
deploy *reliably* and without fear of making themselves
vulnerable to present or future exploit.

I want to be able to release my designs for others to commercialize
(even if it's just dumping them off at Sparkfun "as is") and know
that they represent *complete* products... not something half-baked
or half-tested ("Well, it *mostly* works.  But there are these
few ways that an adversary can compromise its functionality and
these few features that don't quite work as intended...")

> Furthermore, you are describing a kind of worst-case scenario, where a
> non-techie user is beset by high-tech bad guys, who could rappel in
> from the roof, wearing night vision goggles, and satellite-fed wrist
> computers.

Note that many of the bad guys are often little more than
script-kiddies regurgitating exploits that others have discovered
or engineered.  The fact that the person employing the exploit
isn't technically capable doesn't make it any less effective!
Hint: the folks who hack casinos aren't all PhD's!  :>

Nowadays, an exploit is easily disseminated to "all the wrong
people".  This leaves a consumer of such kit at the mercy of
the vendor -- does the vendor actively patch vulnerabilities?
And, leaves the consumer with the responsibility of ensuring
that the latest patches are installed and validated *in* each
instance of a vulnerable product that he owns.

[I am *not* of the belief that devices should constantly phone
home and update themselves.  "Gee, I don't know, Honey.  It
was working *yesterday*.  And I know *I* didn't do anything to
it.  Maybe one of the kids..."]

> Well, that simplifies things tremendously: in that scenario, there is
> no such thing as a "secure system". :)

Sure there is!  You just have to decide how much functionality
you want to "guarantee" at a given level of security.

E.g., worst case, you can mount a denial of service attack on my
household thermostat so I can't legitimately command it to a
different temperature setpoint (assume I am not located on the
premises).  But, you can't *alter* the setpoint to something that
will cause collateral damage.  I.e., you can't turn the heat OFF
during an unusual cold spell.  Or, set it to 130 degrees during
the summer *cooling* season, etc.  Even if you had surreptitiously
left a super fast laptop hidden in one of my bedrooms before
I departed the house!
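The guarantee here can be enforced mechanically: whatever a (possibly
compromised) remote peer requests, the controller clamps the setpoint into
a hard safety envelope.  A minimal sketch, with hypothetical limits and
function names chosen purely for illustration:

```python
# Hypothetical sketch: enforce hard safety bounds on any remotely
# commanded thermostat setpoint.  The limits below are illustrative,
# not taken from any real product.

SAFE_MIN_F = 50   # the heat can never be commanded "off" in a cold spell
SAFE_MAX_F = 90   # nor set to, say, 130 F during the cooling season

def apply_remote_setpoint(requested_f: float) -> float:
    """Clamp a remotely requested setpoint into the safe envelope.

    A denial-of-service can still block legitimate commands, but even
    a fully compromised network peer cannot push the setpoint outside
    [SAFE_MIN_F, SAFE_MAX_F].
    """
    return max(SAFE_MIN_F, min(SAFE_MAX_F, requested_f))
```

The point is that the safety bound lives in the device itself, not in the
(untrusted) party issuing commands.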

> > Yes.  But, again, that assumes the consumer is aware of this
> > risk, understands it and is willing to invest the time and
> > money to make those changes.  "Why can't I keep things the
> > way they are?"
> 
> I don't know -- that's a marketing issue. :P

If you want people to use your designs, it's something that you
think about.  It's not enough to say, "Gee, look at this really
cool feature (that is unusable, unfinished or unreliable)!"
 
> > How many folks *actively* worry about their internet exposure?
> > Or, information leaks from their cell phones?  etc.
> 
> Probably not many -- and for those that do, probably not enough.  In a
> previous thread, someone mentioned Ken Thompson's "Trusting Trust"
> essay.  It's a must-read for anyone concerned with
> computer security.
> 
> http://cm.bell-labs.com/who/ken/trust.html
> 
> After reading Thompson's article, you may start thinking of "security"
> in relative terms.

I always have.  E.g., I've designed devices that handled money (cash).
People care far more about cash than pretty much any other intangible!
(like, say, "medical records", financial statements, etc.)  <grin>
("Sir, the slot machine in aisle 6 has been paying out continuously
for the past 30 minutes.  Do you think, perhaps, there's something
fishy going on, there?")

> Linus Torvalds said that he has three firewalls between his development
> machine and the Internet.

He's far more trusting than I!  I won't let any of my work machines
talk to the outside world.  The designs I work on often aren't my
own.  So, exposing them to theft or corruption would be immoral.

> I wonder if he has verified his tool chain.  And what about the 
> firmware and microcode of his computers?  And what about the high-tech
> rappelling bad guys? :)

You can't protect against a physical assault.  All you can do is
raise the stakes to some level where you hope the assault becomes
impractical (given other environmental criteria).

In the early 80's, we designed some of the first self-serve
gas stations (electronics in the pumps, cash register, etc.).
To support the ability for customers to "pay at the pump"
(commonplace, nowadays; not so in the 80's), we built an
"island terminal" -- basically a bill validator, credit card
reader, keypad and display.  The entire electromechanical
package was no larger than a PC motherboard, since it didn't
have much real capability/responsibility.

Yet, we packaged it in a "Hulk-sized" steel container that the
combined force of several people couldn't topple.  And, recommended
installing these behind cement filled posts sunk in the ground to
prevent vehicles from smashing into them (i.e., since people
power wasn't enough to affect them, the only real threat in an
*unattended* deployment would be someone smashing into one
with a car, truck, etc.).  Add to this, the presence of a human
attendant and closed circuit surveillance cameras as a further
deterrent....

While there have been some such brazen "attacks" on these sorts
of devices in recent years, by far, the sheer mass of the
island terminal has more than paid for itself in discouraging
"less aggressive" attacks.

OTOH, if all an adversary has to do is plug a laptop (or a tablet
or any of a number of other small computer-based devices) into
an RJ45 behind your couch to "cause you grief/pain/loss/etc."
then that's not *any* security.  Why not save yourself the grief
and just leave the key under the door mat and hope the adversary
rewards your cooperation by not trashing the place??  :>

The few times (in my life) when I've had to "reprogram" (or
initially program) the garage door opener, I've made a casual
circuit of the neighborhood before considering the job finished.
While driving around, I would repeatedly press the button on the
remote and observe the garage doors I was passing to see if,
BY CHANCE, I had stumbled on a "code" that someone nearby also
happened to be using.  (Nowadays, you don't *pick* a code but, 
rather, let the opener and remote "negotiate one" on your behalf.)
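The "negotiated" scheme works roughly like this: during pairing, the remote
and opener establish a shared secret and a counter, and the opener accepts
any code falling in a small look-ahead window past its last-seen counter
(since the remote's counter advances on missed button presses).  A
hypothetical sketch using HMAC (real openers use proprietary ciphers such
as KeeLoq; all names and parameters here are illustrative):

```python
import hmac
import hashlib

# Hypothetical rolling-code sketch: shared secret + counter, with a
# look-ahead window for resynchronization.  Not a real opener protocol.

SECRET = b"paired-during-negotiation"   # established when the remote is "learned"
WINDOW = 16                             # how far ahead the opener will look

def code_for(counter: int, secret: bytes = SECRET) -> bytes:
    """The code a remote transmits for a given counter value."""
    return hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha256).digest()[:4]

class Opener:
    def __init__(self, secret: bytes = SECRET):
        self.secret = secret
        self.counter = 0   # last counter value successfully used

    def try_open(self, code: bytes) -> bool:
        # Accept the code only if it matches some counter in the
        # look-ahead window; a replayed (old) code no longer matches.
        for ahead in range(1, WINDOW + 1):
            if hmac.compare_digest(code, code_for(self.counter + ahead, self.secret)):
                self.counter += ahead   # resynchronize to the remote
                return True
        return False
```

Because each remote pairs with its own random secret, two neighbors landing
on the same code stream is (supposedly!) vanishingly unlikely... unlike the
old fixed-code units in the story above.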

The last time I did this, I didn't get out of my driveway
on that leisurely neighborhood test circuit before watching
my neighbors' ACROSS THE STREET garage door open and close
with each successive actuation of *my* remote!  Yet, if you
were to listen to the folks who make these openers, they
would cite the HUGE number of possible combinations/codes
and the UNLIKELINESS of two units *ever* being on the same
code "by chance"!

Except for the time it happened to us!  <grin>  (I wonder how
often this happens in practice and folks never realize the
"coincidence":  "Honey, you left the garage door open last
night."  "No, I closed it when I came home."  "Well, it was
*open* this morning when I woke up!")

--don
