
dos and don'ts .. my edition

Posted: Wed Apr 18, 2012 5:16 pm
by sgtfoo
Here I'm gonna add dos and don'ts for virtualizing Amahi within a hosting OS or program.
I've found a few over time and I'll try to compile them here..

- don't test Amahi in a VM that bridges its network to the live Amahi... that will conflict with your production Amahi server's DHCP and DNS... at least without following the wiki on having two Amahi servers co-exist on one subnet.
- a workaround is giving the VM two virtual NICs and manually disconnecting one of them after install.
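If your test VM happens to run under plain libvirt/KVM (rather than Proxmox or VMware), that disconnect can even be done live; a hedged sketch, where the domain name `amahi-test` and the MAC address are made-up examples:

```shell
# List the test VM's interfaces to find the MAC of the bridged NIC
virsh domiflist amahi-test

# Unplug that NIC while the VM runs (MAC is hypothetical),
# so it stops answering DHCP/DNS on the production subnet
virsh detach-interface amahi-test bridge --mac 52:54:00:aa:bb:cc --live
```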

- do take advantage of virtual networks if your host program/OS allows for it (VMware has a great implementation of virtual networking)... I'm not certain about the others.

.. more to follow..

Re: dos and don'ts .. my edition

Posted: Wed Apr 18, 2012 6:19 pm
by ciscoh
With regards to Proxmox, it actually seems not too bad to me.

It's really just setting up bridges. The bridges are either internal to Proxmox or bridged to a physical interface.

In ESXi, the virtual switches are just bridges. There you add port groups (another bridge) and physical interfaces to the switch in order to keep the network internal or bridge it externally.

ESXi does give you more granular network interface control, but I'm sure there's a way to do the same thing somewhere in Proxmox... just haven't found it yet.
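For anyone curious what those Proxmox bridges look like on disk, here's a hedged sketch of the node's `/etc/network/interfaces` (addresses and NIC names are examples, not anyone's actual config):

```text
# External bridge: attached to a physical NIC, so VMs on it
# appear on the real LAN
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# Internal-only bridge: no bridge_ports, so VMs on it can talk
# to each other but never touch the physical network
auto vmbr1
iface vmbr1 inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
```

This is the Proxmox equivalent of an ESXi internal-only vSwitch: the "switch" is just a Linux bridge with no physical uplink.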

Re: dos and don'ts .. my edition

Posted: Wed Jun 20, 2012 10:44 am
by sgtfoo
some more..

- don't run your Amahi machine without a dedicated LZ (landing zone)... this is mostly just good advice for simplifying things... I messed up a bunch of things and saturated the system volume by not having a dedicated LZ....

- do set up a Greyhole LZ inside a resizable volume such as an LVM volume or a dedicated image file... very handy for managing the VM and allowing for backups

- take advantage of LVM for storage management!
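Putting those two tips together, growing an LVM-backed landing zone is quick; a hedged sketch (the `vg_hda/lv_landing` names are made up, and this assumes ext3/ext4 on the volume):

```shell
# Grow the logical volume holding the LZ by 50 GB
lvextend -L +50G /dev/vg_hda/lv_landing

# Then grow the filesystem to fill the new space
# (recent resize2fs can do this online, while mounted)
resize2fs /dev/vg_hda/lv_landing
```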

- Amahi, er... Greyhole plays well with drives routed exclusively to the VM from the host system. I'm doing this now with my 2 drives for pooling (no duplication just yet).
I might do duplication via some other method... or just do some kind of offsite copy for duplication.
In my current case I really just like Greyhole's pooling feature.

- do play with bonding interfaces at the host level so the bonded NICs are routed to the single virtio ethernet device on the VM... provides delightful balance-rr bonding throughput!
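On a Debian-based host like Proxmox, that setup is a bond bridged to the VM; a hedged sketch of `/etc/network/interfaces` (NIC names and address are examples, and it assumes the `ifenslave` package is installed):

```text
# bond0 aggregates two physical NICs in round-robin mode
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_mode balance-rr
        bond_miimon 100

# vmbr0 bridges the bond, so the VM's single virtio NIC
# rides on the aggregated link
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
```

Note that balance-rr can deliver more than one NIC's throughput to a single stream, but the switch ports generally need to be configured to tolerate it.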

Re: dos and don'ts .. my edition

Posted: Fri Sep 06, 2013 9:15 am
by sgtfoo
I'm adding to this thread as I've gained new experience, learned a few things, and the software is changing...

I wish I could keep my opinion out of it, but the thing about Greyhole is that it abstracts your data with symlinks all over the place, and with the complexity of virtualization on top of that, it's a lot more daunting than it needs to be for what should be a simpler storage model. For duplication as protection against failure, I would instead stripe LVM across mirrored drives and give your Linux file server or HDA a virtual drive that can be expanded whenever needed. Then you always have storage for your HDA shares, and it's protected and expandable, as long as you're fine with adding physical drives to the LVM setup.
Too many symlinks in my old 2 Greyhole-pooled drives seem to have lost their true file references. Something in the SQL database degraded at some point. It may even have been at my own hand, but in any case, for my own usage, I see it as an unnecessary complication.
Greyhole is a wonderful idea and application for storage pooling of drives, but if you care about your data, have more knowledge in the Linux realm, AND you like using virtualization, then Greyhole may not be the best fit.
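To make the mirrored-and-striped alternative concrete, here's a hedged sketch of building it with mdadm plus LVM; all device, VG, and LV names are hypothetical, and this wipes the listed disks:

```shell
# Build two RAID1 mirrors from four disks (device names are examples)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde

# Put LVM on top of the mirrors
pvcreate /dev/md0 /dev/md1
vgcreate vg_hda /dev/md0 /dev/md1

# Stripe a logical volume across both mirrors (-i 2 = two stripes)
# and format it; it can be extended later as mirrored PVs are added
lvcreate -i 2 -L 500G -n lv_shares vg_hda
mkfs.ext4 /dev/vg_hda/lv_shares
```

The resulting volume can then be handed to the HDA as a virtual disk and grown with `lvextend` when space runs low.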

Honestly, there has been no problem with the latest update to Proxmox, except for the nag box at login... and really, how often do I need to go in there once my VMs are up? Not often enough for the nag box to bug me. As for updates... wait longer before accepting updates from the "test" repo. The system is stable and sound and hasn't given me ANY issues. In fact, we would benefit from learning to use the qemu/KVM tools via the command line and thereby avoid the web GUI. As for Proxmox asking for money... it's for the same reason Amahi asks for money... the convenience of a well-built-and-packaged open-source software product/service.
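For what it's worth, Proxmox's `qm` tool already covers most of the day-to-day GUI tasks from the command line; a few examples (the VM ID 100 is hypothetical):

```shell
# List all VMs on this node
qm list

# Start / cleanly shut down a VM by its numeric ID
qm start 100
qm shutdown 100

# Show a VM's configuration (disks, NICs, memory)
qm config 100
```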

Now as for alternatives, running VirtualBox with phpVirtualBox is just as handy, but for some it lacks the raw power and control of a bare-metal KVM hypervisor combined with the benefits of OpenVZ containers (referring to Proxmox). Some people still swear by the free version of ESXi 5, which has always been aimed primarily at commercial support.

Another alternative... I recently came across Cloudmin, brought to us by the friends who make Webmin... which is very promising...
It's great because the GPL version supports a single KVM host. I will definitely try this out, because if in the end we just want a handy web GUI, this is spot-on once KVM has been installed on ANY compatible Linux distro.

There's also OpenNode, which is very promising, and it even uses a long-term-support OS, CentOS, so even as training for enterprise-grade RHEV it's VERY close. OpenNode just has a steeper learning curve for setup (as far as I've tried).

The prime deterrent, I would imagine, for anyone looking at alternatives is moving/migrating VMs to other systems. Unless you've used ubiquitous disk-image formats, it's hard to convert VMs and make them happy in new places... transplanting is dangerous for our data too.
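When a move is unavoidable, `qemu-img` handles most of the image-format conversions; a hedged sketch (filenames are examples, and the same pattern works for vmdk, raw, etc.):

```shell
# Convert a VirtualBox VDI to qcow2 for use under KVM/Proxmox
qemu-img convert -O qcow2 amahi-disk.vdi amahi-disk.qcow2

# Sanity-check the result (format, virtual size) before booting from it
qemu-img info amahi-disk.qcow2
```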

Check my outline thread on the other hypervisors for alternatives too.