[Tfug] "Blade" servers
Yan
zardus at gmail.com
Mon Jul 22 22:42:28 MST 2013
> These are individual *servers*? Or, blade servers??
We use a server form factor that fits 4 servers in one 2U case. Other than
being in the same case, though, they aren't related at all (i.e., no central
power management or some such).
> OK, so you're "virtually" doing something akin to my approach
> (I add actual power control to the process since "half" of the
> computational power is not collocated with the blade server
> and I have to ensure the "correct" processors are spun up to
> make the "necessary" I/O's available for the task set at hand).
>
> In my case, processors boot differently depending on their
> locations, roles, etc. E.g., one boots off a physical disk;
> some PXE boot; others boot specialized kernels from ROM/FLASH;
> etc. I can't allow the power cycling/bootstrapping to become
> a visible/perceptible activity (i.e., users would never tolerate
> waiting seconds for a node/service to come on-line -- so, most
> nodes boot and load their applications in a fraction of a second).
I think the way we'd handle that in our paradigm is to have some number of
compute nodes standing by, ready to start computing at any point. That way,
spinning up new machines would only happen if we went over that minimal
capacity. Of course, if you really have non-abstractable differences in
your nodes, this wouldn't be possible. I tend to think that any such
differences can be abstracted away using a tradeoff between generality
and efficiency. Personally, I think the tradeoff is worth it.
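
To make that concrete, here's a minimal sketch of the warm-pool idea. The
Node class, its states, and WARM_TARGET are all made up for illustration;
this isn't our actual scheduler:

    from dataclasses import dataclass

    WARM_TARGET = 4  # nodes kept booted but idle, to hide boot latency

    @dataclass
    class Node:
        name: str
        state: str = "off"  # "off", "idle", or "busy"

        def power_on(self):
            print("powering on %s (e.g., via wake-on-LAN)" % self.name)
            self.state = "idle"

        def run(self, job):
            print("%s running %s" % (self.name, job))
            self.state = "busy"

    def rebalance(nodes, pending_jobs):
        """Send jobs to warm nodes first, then refill the warm pool."""
        idle = [n for n in nodes if n.state == "idle"]
        off = [n for n in nodes if n.state == "off"]

        # Work always lands on an already-booted node, so nobody
        # waits out a power-on/boot cycle.
        for job, node in zip(pending_jobs, idle):
            node.run(job)

        # Top the warm pool back up in the background; booting can take
        # seconds here because no user is waiting on it.
        warm = sum(1 for n in nodes if n.state == "idle")
        for node in off[:max(0, WARM_TARGET - warm)]:
            node.power_on()

    cluster = [Node("node%d" % i) for i in range(8)]
    rebalance(cluster, [])         # boots the first 4 as the warm pool
    rebalance(cluster, ["job-a"])  # job-a starts instantly on a warm node
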
> Are they *capable* of being powered down and you just don't
> take advantage of that (extra complexity? *you* aren't paying
> for the electricity? etc.) Any idea as to what it costs
> to idle a processor vs. having it *do* something useful?
> (I suspect it's not a huge difference given all the other
> cruft in each box)
I'd say anything is capable. The machines can certainly wake-on-LAN and
start processing, but we don't bother. The honest answer is that we don't
pay the electricity costs :-). I would imagine that idling modern machines
(including their idle disks and so forth) costs significantly less than
keeping them utilized, but significantly more than keeping them off. No
better numbers than that here, sorry :-)
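
For what it's worth, the wake-on-LAN part is simple enough to script. This
is a sketch of the standard magic packet (6 bytes of 0xFF followed by the
target MAC repeated 16 times, sent over UDP broadcast); the MAC address
below is just a placeholder:

    import socket

    def wake_on_lan(mac, broadcast="255.255.255.255", port=9):
        """Send a WoL magic packet: 6 x 0xFF, then the MAC 16 times."""
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        if len(mac_bytes) != 6:
            raise ValueError("expected a 6-byte MAC address")
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake_on_lan("00:11:22:33:44:55")  # placeholder MAC, not a real node
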
> I think businesses have historically been much less concerned with
> energy consumption. E.g., PBX's stay lit even when the building
> is deserted; most places don't even enforce a policy of requiring
> employees to power down their PC's before leaving for the day, etc.
>
> I think businesses have a much higher -- and "narrower" -- peak
> consumption period than residences. E.g., ~10 hours (single
> shift) of very high demand followed by ~14 hours of very little
> demand. Contrast this with residences that have a spurt of
> demand early in the morning, some demand during the day (while
> some residents are "away at work"), significantly increased
> demand in the evening (meal prep, entertainment) followed by
> virtually no demand "while sleeping".
>
> And, residents tend to *feel* the cost of the energy that is
> consumed on their behalf!
>
> (Many businesses pay for electricity using different tariffs...
> power used "off hours" is often "free" -- or, comparatively so
> when contrasted with "peak" consumption)
>
> But, that doesn't mean one should ignore power requirements in
> a system's design if you have a choice! Especially if the capability
> is there (and just isn't *typically* used -- at the present time).
There's definitely a tradeoff here, though. If you spend X dollars more
designing a custom solution, will that be offset by the Y dollars in
energy savings? I'd say the answer could go either way... On top of that,
if you design this thing to run on general-purpose hardware with a
general-purpose cloud backing it, you'd *greatly* increase the number of
people who understand the components well enough to contribute, which
could positively impact the success of the project.
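
As a back-of-the-envelope version of that X-vs-Y question (every number
below is an assumption I made up for illustration, not a measurement from
our cluster):

    # Toy break-even estimate for custom power management.
    design_cost = 20000.0                # X: extra engineering dollars
    idle_watts, off_watts = 60.0, 5.0    # per node, idle vs. powered off
    nodes = 100
    off_hours_per_year = 14 * 365        # suppose nodes could be off 14 h/day
    dollars_per_kwh = 0.12

    savings_per_year = ((idle_watts - off_watts) / 1000.0
                        * off_hours_per_year * nodes * dollars_per_kwh)

    print("annual savings: $%.0f" % savings_per_year)  # ~$3373
    print("break-even: %.1f years" % (design_cost / savings_per_year))  # ~5.9

With those made-up numbers, the custom design pays for itself in about six
years, which may be longer than the hardware lives. Plug in your own
numbers and the answer really could go either way.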
- Yan