[Tfug] When RTFM is not enough

Bowie J. Poag tfug@tfug.org
Mon Jul 8 05:26:01 2002


> Ok, its shutup time.  Every command has its place.  Even though you don't
> know how to use it well, find makes some tasks easy.  Suppose you want to
> make a report of all setuid root or setgid group 241 files?  Don't tell me
> about /usr/local, because we know it was protected at the time.  Also
> include the creation and last modification dates (M D Y) in your report.

Brian, you ignorant slut. You have completely lost your quick wit and acumen
for *nix problem solving. :)

First of all, setuid/setgid fields AND ctime/atime information are
BOTH supported in ls! What you've asked for is as simple as adding the
appropriate ls flags in the for loop, so the output can be parsed with grep
according to those fields. Come on, Bri... I would have expected a far
better attempt at undermining my logic, especially from you. :)  You should
have proposed a situation that would have b0rken my method! Instead, you've
actually reinforced it, ya dumbass. Read below.

Anyway, like I said... have a look at the manpage for ls, specifically
the --format, --time and --sort arguments. ls supports formatted output,
much the same as find does. For all intents and purposes, they are
interchangeable utilities. It's just that one method is terribly wasteful
(find) and the other produces something useful to others (ls, du, updatedb,
etc.)

A comprehensive list of files that gets refreshed *regularly* is still the
best option. There's nothing you can do wastefully with find that you can't
do tidily with ls/du. In all but the rarest of occasions, there's a better
tool for the job than "find". Period.

This is going to be especially true in "real world" multiuser
environments. Most people perform searches on files that have been on the
system for some time, at least long enough to have been "picked up" in the
last refresh. The argument that "well, locate isn't realtime!" simply
doesn't hold water. If you're going to take that stance, then why not simply
update locate's database? "find", by its nature, is going to throw away a
massive amount of data while it does its work -- why not collect it?
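A sketch of the "collect it" idea, essentially a poor man's updatedb/locate (the cache path is made up for illustration; in practice the refresh step would be a cron job, and yes, you still pay for one tree walk per refresh rather than per query):

```shell
# Build a small demo tree, cache one traversal of it, then answer
# queries from the cache instead of re-walking the filesystem.
mkdir -p /tmp/cache-demo/src
touch /tmp/cache-demo/src/alpha.c /tmp/cache-demo/src/beta.c
find /tmp/cache-demo -type f > /tmp/cache-demo.list   # the "refresh"
grep 'alpha\.c$' /tmp/cache-demo.list                 # instant lookup
```

Every subsequent query is a grep over a flat text file, which is exactly what locate does against the database updatedb maintains.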

So...You may bow to me at your convenience.

Have your lips email me with a street address, and i'll send them Mapquest
directions to my ass. :)

Bowie

>
> find / \( -path '/usr/local' -prune -o -perm +4000 -user root -o -perm
> +2000 -gid 241 \) -printf "%CB %Cd %CY\t%TB %Td %TY\t%p\n" | grep -v
> '\W/usr/local$'
>
> Can you (Bowie) do it faster/more efficiently?  Assume a huge /usr/local.
> Also remember, as a highly paid unix admin, we don't want you spending all
> day writing some marginal, inflexible, bash/perl script.

> > Yeesh
>
> Agreed.  Anytime you need to look at file attributes beyond the filename,
> find is the best tool.  Pushing a ton of data thru a bunch of pipes is a
> suboptimal solution.  This is because pipes only have a 4k buffer.  Once
> the buffer fills, the writer will block.  Your pipeline will bottleneck on
> the slowest reader.
>
> Brian
> _______________________________________________
> tfug mailing list
> tfug@tfug.org
> http://www.tfug.org/mailman/listinfo/tfug