[Tfug] more stuff on ssds
shanna leonard
ssl at email.arizona.edu
Thu Jan 31 09:33:48 MST 2013
Bexley Hall wrote:
> Hi Shanna,
>> I like the price point for reliability of the Intel 320s - I'm planning
>> to use them in a ZFS-based storage server soon, and I fully expect that
>> if I over-provision the ZIL (ZFS Intent Log, which logs synchronous
>> writes) by 100% I will have it last a couple of years, which is all I
>> would count on from a hard drive anyway.
> I think most hard drives have expected lifetimes in the 5-8 year
> range (in "regular use"). I have drives that are easily that
> old (though they've seen much lower use).
>
> What are the consequences when your drive fails?
In my use case, the SSDs are used for read/write caching to speed up
access to a pool of drives, so the consequence of a failure is that access
times get slower. Not good, not catastrophic.
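To make that concrete, something like this is how the SSDs get attached
on a ZFS box (the pool name "tank" and the device names are just
placeholders, not my actual setup):

    # Attach an SSD partition as a separate log device (ZIL) for synchronous writes
    zpool add tank log /dev/sdb1
    # Attach another SSD as an L2ARC read cache device
    zpool add tank cache /dev/sdc1
    # Check on both devices later; the pool keeps serving data if either one dies
    zpool status tank

If a cache device or a (non-mirrored) log device drops out while the pool
is running, ZFS falls back to the spinning disks, which is exactly the
"slower, not catastrophic" failure mode above.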
> Who "notices"
> the failure and acts to repair/replace it?
Good question. I believe that the management software will give
notification in a GUI that is monitored daily in the case of complete
disk failure.
> are there "staff" actively responsible for maintaining this?
Yes.
> How much of your *personal* life would you rely on it?
I would say that, interestingly, failure is more predictable for SSDs
than for HDDs, so I would prefer them. So let's imagine I were using
this to control my own pacemaker :0
I might mirror the drives, and if it were Linux, I would install
smartmontools and use smartctl in a script something like this one:
http://blog.samat.org/2011/05/09/Monitoring-Intel-SSD-Lifetime-with-S.M.A.R.T.
and have it trigger a daily report on the readout from the Media Wearout
Indicator.
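A rough sketch of the kind of check I mean, assuming an Intel drive at
/dev/sda (the device name and the warning threshold are just placeholders):

    #!/bin/sh
    # Pull the normalized Media Wearout Indicator (SMART attribute 233) from an Intel SSD.
    # It starts at 100 on a new drive and counts down toward 1 as the NAND wears out.
    DEV=/dev/sda
    WEAR=$(smartctl -A "$DEV" | awk '/Media_Wearout_Indicator/ {print $4}')
    echo "$(date): ${DEV} Media Wearout Indicator = ${WEAR:-unknown}"
    # Complain loudly once the drive is getting close to worn out (threshold is arbitrary)
    if [ -n "$WEAR" ] && [ "$WEAR" -lt 10 ]; then
        echo "WARNING: ${DEV} is nearly worn out, time to swap it" >&2
    fi

Drop that in cron and you get the daily readout.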
I'd probably also use SLC NAND in that application :)
If it were just my house, I'd probably be comfortable with an Intel MLC
drive like the 320, smartctl reporting, and a replacement strategy
(keep a cold spare available).
OTOH, I'm comfortable using candles for an hour. A little hardship every
now and then breeds character!