[Tfug] Question tres: Overbuffering overkill

John Gruenenfelder jetpackjohn at gmail.com
Thu Sep 4 00:48:46 MST 2014


Hello again TFUG,

And, once again, thank you for your advice on my VCS questions.  I have a
script/utility to write and began, as I usually would such things, writing it
as a shell script.  Then I realized that I'm attempting to learn Python and
this would be an excellent opportunity.  Not overly complex and something that
Python should excel at.  At the same time, I figure this is also a good
opportunity to experiment with Git and see what all the fuss is about.  So,
thanks for that.


And now for my third question.  This has to do with, I believe, an absurdly
excessive amount of I/O buffering being done by the system as I attempt to
copy approximately 9 GB of data from my super fast SSD to a microSD card
(plugged into my laptop's SD card slot via an adapter).

I, of course, don't have a problem with the system buffering and caching data
to improve performance, but in this case it actually broke parts of the
system, most directly my wireless network connections.

For whatever reason, my Android phone decided to eat my microSD card,
rendering it unmountable under Android as well as under Linux.  I don't know
precisely what it did; the partitions were still there, as was enough FS
metadata for the kernel to have some idea of what type of FS was present, but
try as I might I could not mount them.  So I wiped the thing, recreated the
two file systems,
and Android still wasn't happy.  So I let it format the card, and then used
Linux to shrink the vfat partition and remake the layout as it was before.
Now Android was happy enough.  The next step was to put all the data back on
the card and this is where the buffering issues appeared.
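For anyone wanting to do the same, a rough sketch of remaking a card layout
like this from Linux.  The device name, partition sizes, and the choice of
ext4 for the second file system are all assumptions here; double-check with
lsblk before running anything, since these commands destroy whatever is on
the card:

```shell
# Assumed device name -- verify with lsblk first!  These commands are
# destructive.
sudo wipefs -a /dev/mmcblk0                      # clear old FS signatures
sudo parted /dev/mmcblk0 --script \
    mklabel msdos \
    mkpart primary fat32 1MiB 24GiB \
    mkpart primary ext4 24GiB 100%
sudo mkfs.vfat -F 32 /dev/mmcblk0p1              # vfat partition Android sees
sudo mkfs.ext4 /dev/mmcblk0p2                    # second FS (ext4 assumed)
```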

The data in question was my music collection.  From the command prompt I did
something along the lines of a "cp -a" to copy all of the files and subdirs.
Immediately, pages and pages of "copy src to dest" (I had used the verbose
switch) flew across the terminal.  Obviously there was no way it could write
data to the card that rapidly so I assumed that it had simply read all of
these source files into the FS cache in memory.  After that, the messages
slowed as the data was steadily, and slowly, written to the card.
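One way to confirm the everything-read-into-cache theory while a copy like
this is running is to watch the kernel's dirty-page counters in a second
terminal.  This is plain /proc/meminfo, nothing exotic:

```shell
# "Dirty" is data sitting in the page cache waiting to be written back;
# "Writeback" is data actively in flight to the device.  During the copy,
# Dirty balloons to the size of whatever the kernel has absorbed.
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
```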

Unfortunately, for reasons I am completely unsure about, this caused immense
problems with my ongoing network connections.  At the time I was SSH'd into two
other machines, one on my LAN and the other on the Internet.  I also had my
browser open.  While this super-buffered copy was going on, I began to have
enormous latencies in my SSH connections.  Occasionally they would respond in
a semi-timely manner, but most of the time I would get no response at all.
Eventually, both connections were dropped due to, I believe, timeouts or maybe
packet loss.  At the same time I was also unable to browse to any web pages or
establish any new SSH connections.  My NFS connection to that same computer on
my LAN also timed out and eventually the automounter decided the remote
machine was unavailable.

To make things a little worse, when the file copying was finally done, the
network didn't recover.  Using Gnome's network manager, I turned off wifi,
waited a few seconds, then turned it back on.  When it reconnected to the AP
it once again behaved properly.

Back in the "old days", I can remember poor/spotty system performance when the
system would be bogged down by really heavy I/O, but that usually meant
copying large volumes of data from one HDD to another or from one HDD to
another area on the same HDD.  The data rates were much higher, the disks were
much slower, and system performance suffered.  In this case, however, the rest
of the system never skipped a beat.  The disk in question is an SSD, so the
max possible data rate is much higher, and with SATA it uses far fewer system
resources than those older drives did.  In this case, the much slower write
speed
of the SD card was the limiting factor.

Fortunately, this isn't something I do frequently, but it is still puzzling
and I'd like to have some idea of why it happened and if there is anything I can
do/configure to make it better.  If it helps, I'm not using the normal HDD I/O
scheduler, "cfq".  Rather, after reading some things on the Net about
optimizing for SSD systems, I'm using the "deadline" scheduler.  If I recall
correctly, cfq has a lot of code that worries about optimizing for seek times
and platter locations, details which have no meaning with an SSD.  Perhaps
this has had some unforeseen consequences.
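Two things I've since gathered are worth experimenting with, offered as a
sketch rather than a known fix.  First, the scheduler is set per device, so
deadline can stay on the SSD while trying cfq on just the card reader (the
sdb name below is an assumption; check lsblk).  Second, the stall pattern
looks like dirty-page writeback backlog, and capping how much dirty data the
kernel will accumulate forces earlier, smaller writebacks; the byte values
here are illustrative guesses, not tested recommendations:

```shell
# Per-device scheduler: the active one is shown in brackets.
cat /sys/block/sdb/queue/scheduler            # e.g. "noop [deadline] cfq"
echo cfq | sudo tee /sys/block/sdb/queue/scheduler

# Cap dirty data so a slow device can't fall gigabytes behind.
# Defaults are percentages of RAM (vm.dirty_ratio), which on a
# large-memory machine lets a huge backlog build ahead of a slow card.
sudo sysctl vm.dirty_background_bytes=16777216   # start writeback at 16 MB
sudo sysctl vm.dirty_bytes=50331648              # throttle writers at 48 MB
```

The sysctl settings are global and revert at reboot; putting them in
/etc/sysctl.conf would make them permanent, if they turn out to help.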

Anybody have any ideas?


As an aside, Android still isn't particularly happy with my SD card.  It
mounts it fine now, and doesn't seem to have any problem reading the data from
it, but if I look at it with a file manager, the root directory is full of a
bunch of zero byte nonsense files, most with the same bizarre name: the Greek
capital letter omega.  Hmmm...


And for another aside, something I did pick up while trying to research this
had to do with NFS performance.  I haven't been at all happy with it and I had
assumed that perhaps it was due to poor configuration on my part.  Turns out
it is simply wifi.  NFS (or, apparently, most any networked file system) does
not play well with wifi no matter how good the quality of your connection.
The protocol is chatty and latency-sensitive, so every synchronous round trip
pays wifi's latency tax, and there's not much you can do about it.  So at
least I can stop trying to "solve" that particular problem...
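That said, a few NFS mount options are sometimes suggested for softening the
effect of a higher-latency link.  This is an /etc/fstab sketch only; the
hostname, export path, mount point, and every value below are assumptions to
tune rather than known-good settings:

```shell
# /etc/fstab fragment (all names and numbers are illustrative):
# larger rsize/wsize mean fewer round trips per MB; timeo/retrans
# control how patiently the client waits before retrying.
server:/export/music  /mnt/music  nfs  tcp,rsize=32768,wsize=32768,timeo=50,retrans=3  0  0
```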

Thanks again!


-- 
--John Gruenenfelder    Systems Manager, MKS Imaging Technology, LLC.
Try Weasel Reader for PalmOS  --  http://weaselreader.org
"This is the most fun I've had without being drenched in the blood
of my enemies!"
        --Sam of Sam & Max