[Tfug] "Blade" servers
Yan
zardus at gmail.com
Sun Jul 21 02:25:10 MST 2013
So this probably isn't what you really want to hear, but this is the way we
do it here in the lab:
We have a bunch (I think the current number is 16) of big, beefy machines
(16 cores, 96 gigs of RAM, filled to the brim with SSDs) that are in an
OpenStack deployment. When we need resources, we spin up some VMs
(provisioned with something like SaltStack, for example), which do some
work (usually by using Celery to handle task distribution) and then shut
back down. The physical machines themselves always stay on (although, if
they're not being utilized, they idle).
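For concreteness, the "boot a VM, let it work, tear it down" step looks
roughly like this with the openstacksdk library. This is just a sketch;
the cloud, image, flavor, and network names are placeholders, not our
actual config:

    # boot_worker.py -- rough sketch using openstacksdk; all names below
    # are placeholders for illustration.
    import openstack

    conn = openstack.connect(cloud='lab-cloud')  # credentials from clouds.yaml

    def spin_up_worker(name):
        """Boot a worker VM and block until it's ACTIVE."""
        image = conn.compute.find_image('worker-image')
        flavor = conn.compute.find_flavor('m1.large')
        net = conn.network.find_network('lab-net')
        server = conn.compute.create_server(
            name=name, image_id=image.id, flavor_id=flavor.id,
            networks=[{'uuid': net.id}])
        return conn.compute.wait_for_server(server)

    def tear_down_worker(server):
        """Delete the VM once its work queue has drained."""
        conn.compute.delete_server(server)

You could drive the boot/teardown from Salt states, a cron job, or the
workers themselves when their queue runs dry.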
That's probably overkill for what you want, unless you need to automate an
entire compound of buildings, but there you go. For us, it's really nice.
Spinning up our own VMs lets us ignore the fact that we're actually sharing
the hardware, and a Celery/RabbitMQ/MongoDB task workflow abstracts away
much of the distributed-processing headache.
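If it helps, the task side of that workflow is just a Celery app pointed at
RabbitMQ for the broker and MongoDB for results, something along these lines
(the hostnames and the task body are made up for illustration):

    # tasks.py -- a minimal Celery app; the broker/backend URLs below are
    # placeholders for illustration only.
    from celery import Celery

    app = Celery(
        'tasks',
        broker='amqp://guest@rabbit-host//',           # RabbitMQ broker
        backend='mongodb://mongo-host:27017/results',  # results go to MongoDB
    )

    @app.task
    def transcode(clip_id):
        # stand-in for whatever work the VM actually does
        return clip_id

A dispatcher calls transcode.delay(clip_id), any VM running
"celery -A tasks worker" picks the job off the queue, and nothing in the
application has to care which physical box it ran on.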
I'm personally not aware of anyone who really powers physical servers on
and off on demand anymore. It's all cloud, all the time.
On Sun, Jul 14, 2013 at 8:24 PM, Bexley Hall <bexley401 at yahoo.com> wrote:
> Hi,
>
> My automation/multimedia system is designed as a physically
> distributed, loosely coupled collection of (physical) processors
> intercommunicating by way of various high-speed wired/wireless
> network media. All, save one, are diskless. All are headless.
>
> Most of these processors are "satellite" nodes servicing particular
> bits of "field" devices (i.e., cameras, speakers, irrigation valves,
> appliance controls, etc.). They provide a means for remotely locating
> the I/Os for relatively low cost (no need to run hundreds of long
> wires from all these I/Os to one central machine). I.e., put a
> little smarts on the end of a network drop and *encode* the command
> and status information as *messages* over that medium.
>
> A similar number of processors reside *in* the "switch" (more
> appropriately called a router). These ensure the integrity of
> communications going through the system (i.e., *who* can say
> *what* to *whom*). They also support powering their respective
> satellite nodes on/off (PoE) as dictated by the needs of the system.
>
> The remaining processors are colocated (currently) in a cluster
> and provide the heavier-weight services that the system requires
> (RDBMS, media transcoding, time synchronization, localization, etc.).
> The resources available to these nodes are an order of magnitude
> greater than those of the satellites (e.g., GHz vs 100s of MHz;
> GBs vs MBs; 100+W vs 10W; etc.). But there are far fewer of
> them (e.g., a dozen vs. several dozen satellites and a couple score
> in the "router").
>
> So, while they can "do more" (per MIPS?), it also costs a lot more
> to *do* it (power, radiated heat, etc.). Powering up/down one of
> these bigger processors has greater consequences -- and potential
> cost/savings.
>
> *All* of the satellite/router processors have spare capacity.
> This capacity is used to run bits of the application (i.e.,
> the processors don't *just* serialize/deserialize messages
> to/from the network). "What runs where" varies, dynamically,
> based on the system's needs. (i.e., a Grid)
>
> [I've now amply described "what I am trying to do", eh?]
>
> I rescued an old (ancient?) "BladeCenter" as a test platform
> to see how well my multimedia/automation system "scales" for
> use in, e.g., business/commercial settings (i.e., the hardware
> that I have designed is intended for homes -- 2 to 4 users -- and
> wouldn't scale well to locations with dozens/hundreds of users.
> Nor am I inclined to start designing hardware that competes with
> COTS commercial offerings... like blade servers! :> )
>
> As with anything from Big Blue, part numbers *effectively* have
> more digits than letters in the alphabet -- by the time you finish
> specifying all the scads of options! :<
>
> What I am looking for is an "overview" document that is something
> like a "programming model" would be to a programmer -- something
> that gives me an overall view of what such a system looks like in
> terms of hardware interconnects, etc. I.e., can I power down
> individual blades or do I just have to idle nodes that I want
> to "burn less power"? Can blades be added at will to increase
> the available MIPS? Does the chassis do any load balancing or
> is that up to the application? etc.
>
> I am not *too* concerned with the specifics on the box I currently
> have. It's disposable (I would never deploy anything this power
> hungry, here!) Rather, I'm looking to figure out what sorts of
> products of this sort are available COTS. I.e., are they all
> just different versions of the same basic design? Or, do different
> companies have different approaches to this sort of product?
>
> [I've seen "blade servers" in various physical form factors and
> assumed it was just a "packaging convenience". I've never looked
> at most of them to see *what* the individual blades' capabilities
> were.]
>
> Do developers think of each blade as a node sitting on the network
> talking directly with "clients"? Or, do they think that they are
> talking to an intermediary who, in turn, is talking to the clients?
> (Or, do they think one thing but reality is slightly different!)
>
> I.e., how "specialized" are developers (end users) working on a
> "Product A platform" vs those working on a "Product B platform"?
> I would assume the design of the system tries to hide many of
> these underlying issues from the application developers (?)
> (i.e., a developer would probably not be concerned with *where*
> his code was executing??)
>
> Lastly, any pointers as to where "mainstream business" is going
> with these sorts of devices (customers)? Changes in architecture
> that might be coming down the road?? Or, will all this "go away"
> for all but big "Internet Presences"?
>
> Thx,
> --don
>
> _______________________________________________
> Tucson Free Unix Group - tfug at tfug.org
> Subscription Options:
> http://www.tfug.org/mailman/listinfo/tfug_tfug.org
>