Two instances of Amahi or Cluster install
- Posts: 25
- Joined: Sat May 22, 2010 10:16 am
Two instances of Amahi or Cluster install
Is it possible to have two Amahi servers on a network, or a cluster, in case one breaks?
Re: Two instances of Amahi or Cluster install
Basically yes, you can create a cluster. Note that you will have to work with shared storage for the Amahi database and all the filesystems. There's more to it, like copying the Amahi scripts and binaries, but that's the gist of it.
I don't know if it's feasible considering the impact:
- a second machine (an exact duplicate, or at least very close to it, is recommended) constantly running as a hot spare
- investment in redundant storage (if 1 NAS/SAN breaks you need a spare that will take over seamlessly)
- investment in redundant network (if 1 switch breaks you need a spare that will take over seamlessly)
- investment in other infrastructural areas (NIC bonding/teaming, duplicate network connections everywhere on the servers, for both the front-end and back-end/storage LANs)
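To give an idea of the NIC bonding/teaming piece: on the Fedora-based Amahi of that era it comes down to a couple of ifcfg files. The interface names and addresses below are made up for illustration; adjust them to your network.

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (the bonded "virtual" NIC)
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
# active-backup = simple failover; miimon polls link state every 100 ms
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

With each slave cabled to a different switch, either a NIC or a switch can die without dropping the link.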
It might be fun as an experiment (you can test it in VMs, of course), but considering the investment you'll have to make, it makes more sense to spec a single server with more redundancy: it'll be cheaper and run more stably (clusters are finicky things).
Just for fun:
- 2x server: decent server-grade case/mainboard/CPU/PSU, 2GB RAM, RAID 1 (2x 250GB HDD) for the OS, one onboard NIC, one dual-port Gb NIC card; around $1000 each, so that's $2000
- 2x decent NAS/SAN with RAID 5: easily around $800 each, so that's another $1600
- 2x decent managed switch: around $150 each, so add another $300
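Adding it up (just shell arithmetic on the rough numbers above):

```shell
# Rough bill of materials for the cluster, from the list above
servers=$((2 * 1000))    # 2x server @ ~$1000
storage=$((2 * 800))     # 2x NAS/SAN @ ~$800
switches=$((2 * 150))    # 2x managed switch @ ~$150
total=$((servers + storage + switches))
echo "cluster total: \$$total"    # prints: cluster total: $3900
```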
If you'd build a normal server instead (or buy a decent HP/Dell/IBM/Fujitsu Siemens/Acer), you'd have the option of adding a lot of redundant parts (an additional hot-spare PSU, for instance) and hot-swapping everything. Those can be had for around $2500: I specced a Dell with a quad-core Xeon, 4GB RAM, a 4-port SAS/SATA hot-swap controller with 1x 250GB HDD (you can buy your own disks cheaper; I factored 4x 1TB into the price already), dual Gb NICs, a redundant hot-swap power supply, a 1000W UPS, and 3-year next-business-day onsite repair. That would be cheaper: compare $2500 for a nicely specced server to $3900 for the clustered solution. Then count your running costs: two of everything will use far more energy than the single server.
Then you'd have to factor in your own time for researching, testing, and rolling out the cluster solution, as well as maintaining it (that last part matters: clusters need more maintenance time than a single-server solution). Let's say you know what you're doing, so you'll have everything done in about 30 hours. In contrast, you'd set up the single server in around 2 hours, including tweaks and customizations.
Your call.

echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D2173656C7572206968616D41snlbxq' | dc
Galileo - HP Proliant ML110 G6 quad core Xeon 2.4GHz, 4GB RAM, 2x750GB RAID1 + 2x1TB RAID1 HDD