[Tfug] RAID containers
Bexley Hall
bexley401 at yahoo.com
Wed May 13 10:41:20 MST 2009
--- On Tue, 5/12/09, Dean Jones <dean.jones at gmail.com> wrote:
> > I thought I would get clever and use one server to build RAID (5)
> > containers to be used in an identical server.
[snip]
> > I.e., I can't see why it *must* do this, so I assume it is just
> > a piss-poor implementation?
>
> You do not say what your connection method to the host is, so
> maybe SCSI?
Yes, AFAIK, the entire PERC line is SCSI-based.
> For FC and SAS connected arrays this behavior usually
> exists due to the possibility of having the container
> connected to multiple hosts for failover purposes.
Ah, good point! But this isn't an external array (I doubt
there is even a mechanism to export a connection to it).
> If the first server failed you want the second host to check the
> status of the RAID groups etc. to make sure nothing horrible
> happened when the other system died, then bring the arrays online
> (or bring them online and then check).
Yes, but, presumably, the controller (*identical* to the one
on which the drives were built/configured) could examine the
media and verify that they are, in fact, "not corrupted"
(since they were *built* using that same model/firmware
controller). The only thing "suspect" is that the identifiers
on the drives don't match the identifiers that the controller
had stored internally for "most recently seen containers".
> It would be nice to be able to turn that behavior off of
> course but that is up to your raid card.
Yes, it just seems like a bad implementation. I can't really
see a rational justification for anything done so
unilaterally. Especially, as I say, in the case where the
media are perfectly intact -- they just happen to have been
created on another controller.
N.B. I suspect that if I were to create the containers on this
very same controller, remove them, create another set of
containers on *other* drives, remove THOSE drives and reinstall
the original containers/drives, it would *still* complain.
(i.e., I don't think it is the fact that the drives were
initialized/configured on a controller "having a different
serial number" but, rather, they aren't the drives the
controller "most recently was configured to operate with" :< )
> My guess is that the RAID card drops a GUID or some such onto the
> RAID array, so when another controller connects it says, 'Oh,
> that isn't my array!'
It may scribble something on the drives *or* may just
store the serial numbers of the drives in local NVRAM
and verify these when next started.
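Just to make that speculation concrete, here's a rough sketch in C
of the NVRAM-serial-number variant (every name here is hypothetical;
this is *not* real PERC firmware, just an illustration of the check
being guessed at). Note that it never examines the on-disk container
metadata at all, which would explain why intact containers built on
an identical controller still get flagged as "foreign":

/* Hypothetical sketch of a "last seen drive set" check: the
 * controller records drive serials in NVRAM at configuration
 * time and, at the next start, refuses any drive set that
 * doesn't match. All names invented for illustration. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_DRIVES 8
#define SERIAL_LEN 20

/* Drive serials the controller recorded in NVRAM when the
 * containers were last configured on *this* controller. */
struct nvram_state {
    int  ndrives;
    char serial[MAX_DRIVES][SERIAL_LEN];
};

/* Returns true only if every attached drive matches a serial
 * recorded in NVRAM -- i.e., the set the controller "most
 * recently was configured to operate with". Nothing here looks
 * at the container metadata on the media itself. */
static bool drives_match_nvram(const struct nvram_state *nv,
                               char attached[][SERIAL_LEN],
                               int nattached)
{
    if (nattached != nv->ndrives)
        return false;
    for (int i = 0; i < nattached; i++) {
        bool found = false;
        for (int j = 0; j < nv->ndrives; j++) {
            if (strncmp(attached[i], nv->serial[j], SERIAL_LEN) == 0) {
                found = true;
                break;
            }
        }
        if (!found)
            return false;  /* unknown drive: flag containers "foreign" */
    }
    return true;
}

int main(void)
{
    /* Serials recorded at configuration time on controller A... */
    struct nvram_state nv = { 2, { "SER-AAA", "SER-BBB" } };
    /* ...vs. a perfectly intact container set built elsewhere. */
    char attached[MAX_DRIVES][SERIAL_LEN] = { "SER-CCC", "SER-DDD" };

    puts(drives_match_nvram(&nv, attached, 2)
         ? "containers online"
         : "foreign containers -- controller complains");
    return 0;
}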
> Personally I have moved away from hardware RAID controllers
> and onto ZFS for my storage since so much (pain) can vary from
> controller to controller.
Reliability hasn't (so far) been an issue. Rather, it appears
to be a case where using the controller in ways the
(narrow-minded, uninspired, Dilbert-esque) developer hadn't
*planned* on leads to no joy.
> Too bad the CDDL and GPL aren't compatible because it is
> quite amazing and I would like to see ZFS in Linux instead
> of LVM/MD etc.