[Tfug] good enough is good enough (long)
Bexley Hall
bexley401 at yahoo.com
Fri Jul 12 16:51:55 MST 2013
Hi Zack,
On 7/12/2013 3:48 PM, Zack Williams wrote:
> On Fri, Jul 12, 2013 at 12:49 PM, Bexley Hall <bexley401 at yahoo.com> wrote:
>> YMMV, of course.
>
> Both opinions are right. It's the circumstances that are different.
>
> People frequently don't know what they want,
Yup. But they usually know what they DON'T want -- *after* they see
it (implemented). What a developer (product/software/hardware) has to
do is understand the problem domain so that he can propose a solution
that "makes sense" (perhaps not "ideal") in that problem domain.
When you let users design things, you end up with the camel syndrome
(i.e., a horse designed by committee). Or, really clumsy behaviors
that lead to lots of "... except when you want to..." in the manual!
(One of the first commercial products I worked on was largely designed
by marketing/sales. We had all sorts of "mode 99" type features that
got tacked on haphazardly because the original design didn't plan
for them. "oops!" Of course, explaining all these oddball "modes"
to the user just left them glassy-eyed: "Which button do I push
to make it do ...")
> and program their way to
> a solution which may be slightly or frequently substantially different
> than what was originally envisioned. I can't remember the last time
> I didn't add or remove a feature that I came up with midway through a
> programming project for one reason or another.
The problem with changing goals/directions midstream is that it is ripe
for bugs to creep in. Interfaces/modules that were developed (and
UNDER-documented!) with one approach in mind are rarely re-examined to
determine/verify their assumptions and implementations remain valid
in the "new order of things". ("Oh, I figured this would never be
longer than 256 bytes...")
[I got a letter from one of my banks many years ago claiming
that they would have to withhold 10% of my interest income (by law)
because they didn't have my SSN on file. Though my SSN was printed
on the top of the letter alongside my name and account number!
Ooops! Apparently, they figured '0' was a great way to represent
"none" in their database (instead of using NULL like any sane
DBA would) and, on top of that, had poorly implemented the
"test for 0" such that any SSN that *began* with a '0' was
considered to be "none". No harm. Except the anxiety of this
unexpected letter and the need to telephone to ask "Why?"
Given that ~10% of the US population has SSNs that begin with
'0', one has to wonder what that blunder "cost" society -- even
though it was largely borne by people like me!]
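The bank's blunder is easy to reproduce. A minimal sketch of the
sentinel-vs-NULL mistake -- the function names and the exact mechanism
are my guesses, not the bank's actual code:

```python
def has_ssn_buggy(ssn):
    # The (speculative) bug: "is this the 0 sentinel?" implemented as
    # "does it start with '0'?" -- which flags ~10% of real SSNs,
    # every one beginning with '0', as "not on file".
    return not ssn.startswith("0")

def has_ssn(ssn):
    # Saner: represent "no SSN on file" as NULL/None, not a magic zero,
    # and keep SSNs as 9-character strings so leading zeros survive.
    return ssn is not None

print(has_ssn_buggy("012345678"))  # False -- a real SSN treated as "none"
print(has_ssn("012345678"))        # True
print(has_ssn(None))               # False
```

The string-vs-integer point matters on its own: `int("012345678")`
silently becomes `12345678`, so once an SSN lands in an integer column
the leading zero is gone and the 0 sentinel can collide with real data.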
And when was the last time you saw a formal test suite to
accompany a "release" -- whether that be for a FOSS product
or a "proprietary" one. I.e., how do you *know* what you
did/assumed previously still works? This is one of the big
fallacies behind the delusion that "enough eyes" will find
your bugs -- how do you know *anyone* is looking for them?
How do I (as a contributor/developer) know that this bit
of code has NOT been examined thoroughly? Why should I
spend my time checking it if someone else already has??
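One way to make yesterday's assumptions fail loudly -- instead of
hoping "enough eyes" notice -- is to pin them down in a tiny regression
suite that ships with each release. A sketch; the `format_record`
helper is hypothetical, the point is that each stale assumption gets
its own named test:

```python
def format_record(name, ssn=None):
    # Hypothetical helper: render a record, treating a missing SSN as "none".
    return "%s: %s" % (name, ssn if ssn is not None else "none")

def test_missing_ssn_renders_as_none():
    # Pins down the "None means no SSN on file" convention.
    assert format_record("J. Williams") == "J. Williams: none"

def test_leading_zero_ssn_is_preserved():
    # Documents an "I figured this would never..."-style assumption
    # explicitly, so a midstream change that breaks it can't hide.
    assert format_record("J. Williams", "012345678") == "J. Williams: 012345678"

if __name__ == "__main__":
    test_missing_ssn_renders_as_none()
    test_leading_zero_ssn_is_preserved()
    print("all regression tests pass")
```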
"A room full of monkeys and typewriters will eventually produce
one of Shakespeare's plays!" Yeah, but which one of *them* will
be qualified to recognize it???! And, how long will we have to
put up with:
"Romero, Oh Romberto, whyforth art thoo, Rombelonie?"
until the "right" version comes along? :>
> Other engineering tasks are much better defined, and in some ways
> subtractive - you know the hard goal at the outset, work to that goal.
I contend that you can know the goal for most "projects" if
you just *think* about it -- instead of rushing to code something
just to see what's WRONG with your idea.
I worked at a place, once, where my officemate and I were hired
for the same project at about the same time. A month or so after
hire, he was in the lab, prototyping his circuit.
*Many* months went by and I was still sitting in my office in
front of a drafting table.
[My office mate was *still* in the lab trying to get his circuit
to work (i.e., "not catch fire!")]
Boss was getting *really* nervous about me. He hadn't "seen"
anything from me and was worried that I was going to jeopardize the
project (since my part of the project was in the critical path).
After 6 months, I delivered my design to a technician. He spent
a few days wiring up a prototype. I spent 3 hours fixing his
wiring errors. Applied power.
And was done.
My office mate was *still* working on his design. Suddenly, the
boss is concerned that *he* might be holding up the project!
Different approaches. I spent my time up-front thinking about
the problem domain, the things I would have to contend with,
the things that were likely to "go wrong" -- and I addressed
them all "on paper" in my design *before* it was prototyped.
So, there were no surprises once the design was produced
(aside from careless errors on the part of the technician).
And, no wasted effort as folks tried to use a "poorly functioning"
early version of an inadequate design, "released before its time".
My officemate, however, came up with an "80% solution" and
figured he could "tweak" it to get the last 20% right. But,
he had no idea if his initial approach was even appropriate
to coming up with a tweaked solution! I.e., there may have
been some fundamental flaw that he would never be able to overcome!
Because he was preoccupied trying to make it work instead of
thinking about whether it *would* work!
One of the big problems in engineering is knowing when to STOP
pursuing one line of attack and "start over". When you have
something physically tangible in your hands that "almost works"
(except for its tendency to catch fire, literally!), it's a lot
harder to walk away and start fresh. You *think* you "have more"
than you really do!
By contrast, when all you have is words and drawings on paper,
*all* you have is "paper" (and the knowledge that you have
gained while writing it). Much easier to start over. Much less
strain on the ego.
I find the same to be true writing code. If "all" I have invested
is words for a specification, I am more likely to revise it when I
discover something that won't work than if I've already coded up,
documented and debugged a few tens of KLoC!
> There are examples in software of this - cryptography is incredibly
> hard to get right and breaks badly when it's not done correctly:
> http://pilif.github.io/2013/07/why-I-dont-touch-crypto/
Yup. And, this is true of a lot of things that people *think*
they "understand". Including entire application domains!
Unfortunately, with software, there is a tendency to *think*
you understand the "black box" better than you actually do.
And, often the documentation for that black box (library, etc.)
is lacking. Or, was written by/for someone who already *knows*
how it works.
When you write a script in Python, do you have any idea A PRIORI
as to how long it will take to execute? What sorts of resources it
will consume over its lifetime? Have you even *thought* about those
things? Or, do you "discover" them later, when the script just
seems to sit there forever -- or, cores with ENOMEM?
Instead, you convince yourself that you don't *have* to know
these things -- it's not important (you'll deal with the problem
when/if it manifests... unless it manifests AFTER you've released
it and some secretary in Administration is complaining because
her machine is "just sitting there").
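For what it's worth, Python's stdlib can at least answer the runtime
and memory questions empirically -- though measuring after the fact is
still "discovery", not the a priori reasoning argued for above. A
sketch; the workload is a made-up stand-in:

```python
import time
import tracemalloc

def build_table(n):
    # Hypothetical workload whose cost you COULD reason about up front:
    # roughly n list slots plus n small string objects.
    return [str(i) for i in range(n)]

tracemalloc.start()                     # begin tracking allocations
start = time.perf_counter()
rows = build_table(100_000)
elapsed = time.perf_counter() - start
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print("rows: %d" % len(rows))
print("elapsed: %.3f s, peak allocation: %.1f MiB" % (elapsed, peak / 2**20))
```

If the measured numbers surprise you, that's exactly the gap between
what you *thought* the black box would do and what it actually does.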
If you pat yourself on the back for using that COTS crypto library
instead of rolling your own, does that mean you can safely use it to
encrypt a single byte? ("Oooops! These algorithms expect you to
be encrypting long strings! A single byte can be guessed with
a few hundred attempts!!")
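The single-byte hazard is concrete: any *deterministic*, unsalted
transformation of a tiny value falls to a dictionary attack, no matter
how strong the primitive underneath is. A sketch using HMAC as a
stand-in for such a scheme (the key and the plaintext byte are
arbitrary), assuming the attacker can query the same black box or has
precomputed a table of its outputs:

```python
import hashlib
import hmac

KEY = b"hypothetical-secret-key"

def det_encrypt(plaintext):
    # Stand-in for ANY deterministic, unsalted encryption of a short
    # value: the same input always yields the same output, which is
    # all the attack below needs.
    return hmac.new(KEY, plaintext, hashlib.sha256).digest()

# The defender "protects" a single byte...
ciphertext = det_encrypt(bytes([0x41]))

# ...and the attacker recovers it in at most 256 guesses -- no
# cryptanalysis of the underlying primitive required.
recovered = next(b for b in range(256) if det_encrypt(bytes([b])) == ciphertext)
print(recovered)  # 65, i.e. the byte 'A'
```

The usual fix is randomization (a per-message IV, nonce, or salt) so
that equal plaintexts stop producing equal ciphertexts -- which is
precisely the kind of property you have to *know* your COTS library
provides, rather than assume.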
> It's the difference between building a bridge and writing a book. If
> the bridge falls down, it's a failure. If the book has a few typos or
> plot holes, it isn't the end of the world.
I think that obscures the difference. Some bridges, you don't care
if they fall/fail; some *books* have to be absolutely free of typos!
(You probably wouldn't be happy if the county assessor had your
house recorded as owned by Jack Williams! :> )
I see it as a question of "expectations". Do you *expect* the
power windows in your new Lexus to misbehave when the key is in
the ACC position? Do you *expect* "payroll" to screw up your
deductions and shortchange your pay this week? Do you *expect*
the PoS system at the restaurant to mangle your order so the
pizza arrives with pineapple and anchovies on it?? (blech)
When clients have asked me to help them hire someone for a
"software position", one of the first things I look for is
background and likely attitude towards their "deliverables".
If the (successful) applicant thinks "some bugs are OK", then
your product and distribution system had better be consistent
with that! (Ford doesn't like having to recall cars because
of a bug in one of the many processors in the vehicle!)
If you *expect* bugs, then you will *accept* bugs. And, learn to
tolerate increasing numbers of bugs (if one is OK, surely two
must be BETTER! :> ). OTOH, if you expect things to "work as
advertised", then you will hold vendors accountable when you
encounter a product that fails to do so.
[This is what drove me away from MS products... "I don't have
time to debug YOUR products for you!" Unfortunately, it has
also kept me away from many FOSS products -- for exactly the
same reason! :< "Show me what *you* have already tested and
verified before expecting me to go on a wild goose chase..."]
I've actually considered releasing products as "generic hardware"
with a guarantee that *just* addresses the hardware. Then, let
the user load the "free" software that converts the product into
its intended purpose -- cuz no one EXPECTS guarantees on software
and I think the courts would back up the seller in this case
("The hardware is warrantied. *Your* alleged problem is with
the software -- which was 'free' and not warrantied!")
> Different goals, different methods.
Keep cool (dry?)!
--don