Clustering Virtualmin

#1 Mon, 08/27/2007 - 03:19
DouglasHeckaman

Clustering Virtualmin

In order to allow Virtualmin to be used across multiple hosts, do I need to worry about any files other than those within /etc/webmin?

Also how would I license this?

Fri, 06/15/2007 - 19:31
KrisKenyon

I run a cluster with Virtualmin. The way I do it is to keep all shared configuration and user files on a separate NFS cluster, so changes are propagated automatically. I only run Virtualmin on one of the nodes, so if that node goes down, management is impacted. To provide HA for the services I use heartbeat, placing the mail-related services on one node and the web services on the other, with cross failover.
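
A minimal heartbeat (v1-style) haresources sketch for that kind of cross-failover split would look something like the following - the node names, IPs and init scripts are placeholders, not my actual config:

# /etc/ha.d/haresources (identical on both nodes)
# node1 normally owns the mail IP and services, node2 the web IP and services;
# heartbeat moves a line's resources to the surviving node when its owner fails.
node1  192.168.0.10  postfix dovecot
node2  192.168.0.11  httpd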

I hope this helps.

Thu, 07/19/2007 - 02:01 (Reply to #2)
gulmezt

<b>KrisKenyon wrote:</b>
<div class='quote'>I run a cluster with Virtualmin. The way I do it is to keep all shared configuration and user files on a separate NFS cluster, so changes are propagated automatically. I only run Virtualmin on one of the nodes, so if that node goes down, management is impacted. To provide HA for the services I use heartbeat, placing the mail-related services on one node and the web services on the other, with cross failover.

I hope this helps.

</div>

Hello
If anyone can help me or point me to a resource, I would appreciate it. I want to use Virtualmin to run a small, reliable hosting platform: Virtualmin clustering, with MySQL and mail servers and a SAN or NFS storage system. Is there anyone out there who can help, perhaps with a network diagram or similar material on how to set up clustering? Thank you very much.

Sun, 10/21/2007 - 11:45 (Reply to #3)
ghomem

Can someone provide a list of configuration files that need to be synced from the master to the slave server?

This would be very useful.

Wed, 01/23/2008 - 15:24 (Reply to #4)
merlynx

(Bump) I've been looking at this for some time and really hoping that Virtualmin will put together a howto or a plugin/feature that allows users to have a mirrored hot-swap server (at the least) and, ideally, load-balanced servers. This is something I'll be experimenting with here...

http://www.howtoforge.com/high_availability_loadbalanced_apache_cluster

I'm hoping I can use Virtualmin servers and isolate the management interface of the Virtualmin server that is going to be the "master" of the cluster...and use rsync scripts or the cluster features of Virtualmin to carry out the synchronization between the two servers.

Is this even reasonable? Has anyone tried this sort of thing? I'd like to know if this is a rabbit hole not worth the time...

Fri, 06/15/2007 - 22:36
TonyShadwick

Same type of configuration here.

I've been pondering how to cluster virtualmin's management interface for high availability and load balancing, but I haven't had a chance to seriously look into it yet.

Fri, 07/06/2007 - 14:04
Joe

Can you elaborate on how you managed the clustering using a shared NFS mount? Which files did you put on the shared mount and which other files do you have to eventually keep synchronized?

Thanks in advance!
Niclas

--

Check out the forum guidelines!

Sun, 01/27/2008 - 14:45 (Reply to #7)
merlynx

Load-balancing and fail-over have significant appeal to most sysadmins in the context of client service and disaster recovery - if there were a module that implemented web services (80/443/etc.), I would consider investing in it before I work on building out our new systems.

Joe mentioned the concept of a "hot swap" server being on the horizon, but naturally the greatest appeal is high availability AND load balancing. Right now I'm just getting the topology of our network solid - but as this develops, and if I actually get it to work, I'll be sure to fill people in.

In the meantime, unison is a good suggestion and I'll look into it.
http://www.cis.upenn.edu/~bcpierce/unison/index.html

Thanks!

Thu, 06/19/2008 - 22:38 (Reply to #8)
nickdwhite

<b>TonyShadwick wrote:</b>
<div class='quote'>Unison would be a much better choice than rsync for this. :)

That said, I had started developing the Apache Clustering module and stopped, for lack of time, and apparent lack of interest. If there's sufficient interest I'll look into finishing it...</div>
Any chance you'd be interested in finishing it or providing your code? I'm very interested in getting something like this done, but if some work has already been done, I don't want to reinvent the wheel.

Fri, 09/05/2008 - 08:27 (Reply to #9)
merlynx

(bump)
I would love it if someone who has created a "hot-swap" setup would share a few details of how they did it. I've been using rsync, but I've had no luck figuring out how to configure the file transfer or which config files to transfer over...

Thanks for any help you can offer!

Sat, 09/06/2008 - 10:20 (Reply to #10)
nickdwhite

So far, this is how I've done it.

Use 4 servers. 2 are NFS/file servers in a heartbeat/drbd setup. There are plenty of details on that around; howtoforge has lots of good guides that I recommend. (not associated with them)

The 2 NFS servers are "backend", the other 2 are "frontend".

Set up a private IP to be shared on the NFS servers with failover. The frontend servers use this IP for mounting directories, so if the primary NFS server dies, the secondary immediately takes over.
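
In heartbeat v1 terms, the failover stack on the NFS pair is a haresources line roughly like this - the IP, DRBD resource name, mount point and init script names are placeholders, not my exact config:

# /etc/ha.d/haresources on both NFS servers - nfs1 is the preferred primary
# (promote DRBD, mount it, bring up the floating IP, then start the NFS daemon)
nfs1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 IPaddr::192.168.0.50/24/eth0 nfs-kernel-server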

Frontend servers serve HTTP, HTTPS, MySQL, SMTP, POP3, IMAP, etc... These are load balanced using heartbeat and ldirectord. Again, plenty of articles on that. Static IPs for SSL domains need to be shared.

The following directories are mounted on the Frontend servers from the NFS servers - essentially "shared":
/home
/var/lib/php
/etc/postfix
/etc/apache/sites-available
/etc/apache/sites-enabled
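
As a rough sketch, the /etc/fstab entries on each frontend look something like this - the floating IP and the mount options are illustrative, not my exact settings:

# /etc/fstab on each frontend - everything mounted via the floating NFS IP
192.168.0.50:/home                       /home                       nfs  rw,hard,intr  0 0
192.168.0.50:/var/lib/php                /var/lib/php                nfs  rw,hard,intr  0 0
192.168.0.50:/etc/postfix                /etc/postfix                nfs  rw,hard,intr  0 0
192.168.0.50:/etc/apache/sites-available /etc/apache/sites-available nfs  rw,hard,intr  0 0
192.168.0.50:/etc/apache/sites-enabled   /etc/apache/sites-enabled   nfs  rw,hard,intr  0 0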

You could probably also share the /etc/webmin and /etc/usermin folders, although I haven't looked into that fully.

Set up MySQL to run on both Frontend servers with circular replication. There are good notes here: http://www.onlamp.com/pub/a/onlamp/2006/04/20/advanced-mysql-replication...
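
Roughly, the my.cnf side of circular replication looks like this - the server-ids, log name and auto_increment settings below are just the usual example values, not necessarily what the article uses:

# /etc/my.cnf on frontend1 ([mysqld] section)
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

# frontend2 is identical except:
server-id                = 2
auto_increment_offset    = 2

# then each server is pointed at the other as a master, e.g. on frontend1:
#   CHANGE MASTER TO MASTER_HOST='frontend2', MASTER_USER='repl',
#     MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=98;
#   START SLAVE;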

All 4 of my servers are the same in terms of horsepower - CPU, RAM, HDD, etc. But you could probably use slower/older machines for the NFS/Backend servers.

I tried doing this setup on just 2 servers, but had problems with NFS Client and NFS Server running on the same machines. Whenever trying to simulate a failover, the client wouldn't release files/folders for DRBD.

You need to make sure both Frontend servers are set up in the Webmin Cluster modules. Then, whenever you add a domain or user, you need to refresh the users/groups and re-synchronize them on both servers, then restart Apache and restart Postfix. Also set up domain transfers in BIND on both machines.

Mon, 10/06/2008 - 08:57 (Reply to #11)
merlynx

What if I only have two machines to use? I don't have the budget right now for "4." I really appreciate your detailed response - this is exactly what I've been trying to implement as far as backup and disaster recovery. I was hoping to see a "hot swap" feature - where ServerA.com was replicated to ServerB.com and ServerB.com was an up-to-date standby. Is DRBD my best recourse?

Wed, 10/08/2008 - 07:08 (Reply to #12)
brent_nesbitt

DRBD is great for keeping partitions on 2 separate machines in sync. I've heard it described as a kind of "distributed RAID array".

I guess this would work well if you used DRBD on your web-content partition. The only drawback is that you can only access one of the DRBD drives at any given time (this might have changed with more recent DRBD releases) - which might be just what you want. Heartbeat can take care of starting apache on the backup server when the primary one goes down.

If it were me, though, I would want to set it up somehow so that you could use both webservers to load balance all the time, and just point them at the "live" content on one server or the other.

The way I would do this is to set up an NFS server on the DRBD content, and mount it as a separate NFS mount point on both servers. (ie. one server would be mounting its own content via NFS - the other would be mounting the opposite's content. The source of this mount would automatically change whenever the other DRBD host became primary, but the content would always be the same)
Then I would point my apache instances to this mount point, and load balance them using ldirectord.

In this way you could take advantage of both apache servers whenever both machines were turned on, and if one ever went down it would automatically fail-over to the remaining server without interruption.
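
A bare-bones ldirectord.cf for that kind of two-real-server HTTP balancing would look something like this - the virtual and real IPs are placeholders:

# /etc/ha.d/ldirectord.cf (ldirectord started as a heartbeat resource)
checktimeout=10
checkinterval=5
autoreload=yes
quiescent=yes

# each real server must serve /ldirector.html containing "up" for the health check
virtual=192.168.0.100:80
        real=192.168.0.11:80 gate
        real=192.168.0.12:80 gate
        service=http
        request="ldirector.html"
        receive="up"
        scheduler=rr
        protocol=tcp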

Wed, 10/08/2008 - 23:40 (Reply to #13)
merlynx

You may be interested in this thread as well...in particular, Jamie's response to this feature request...

http://www.virtualmin.com/forums/virtualmin/clustering-virtualmin.html

Fri, 10/31/2008 - 06:51 (Reply to #14)
merlynx

I am not sure what you mean by "/etc/apache/sites-available" and "sites-enabled". What is your Linux distro? I am using CentOS. There is an /etc/httpd/ folder with config files in there for global settings and for clients, and they are in /etc/httpd/conf/.

Also - are you saying that when you run the synchronization on your front-end webservers, the /home/[user] directories get synced as well? There must be something wrong with my install; I've tried this and found a few bugs.

http://www.virtualmin.com/index.php?option=com_flyspray&Itemid=82&am...

Anywho - thanks a ton for posting this. It would be awesome to have a section in the documentation or on the wiki that covers different scenarios of this sort of implementation.

Sat, 09/06/2008 - 10:22 (Reply to #15)
nickdwhite

Sorry, didn't mean to post that 4 times. And it won't let me edit now. =/

Mon, 09/22/2008 - 11:01 (Reply to #16)
brent_nesbitt

I have a very similar setup, as follows:

7 virtual servers, built around Virtualmin GPL with some customizations:

2 - linux-ha/drbd redundant content servers making web content available via nfs
2 - front-end webservers load balanced via ldirectord, mounting above nfs content
2 - linux-ha/drbd redundant mysql DB servers
1 - Developer server with all the user accounts, Virtualmin, also mounting nfs content.

/etc/apache2 and /var/lib/php4 and /var/lib/php5 are also shared mountpoints on the apache servers. (ie. no synching necessary)

Works well, but I am facing memory shortages on the webservers at 150 domains right now which I need to resolve.
Also, because of the customizations I have made to VirtualMin, I am always nervous about upgrading when new releases come out, or even about purchasing the commercial version.

I would be interested in comments regarding performance / limitations from anyone else with a similar setup.

Wed, 10/08/2008 - 23:38 (Reply to #17)
merlynx

This is kind of what I figured...

In the Virtualmin paradigm, though, everything is per-user, and there is no single Apache content directory. Basically, to accurately provide failover AND load balancing, all the /home/[user_xyz]/public_html directories have to be in sync (as opposed to a single /var/www/htdocs/ or something like that). On top of that, in the virtual hosting model there are databases and per-user variances in the php.ini configuration, and user passwords/IDs/groups are stored elsewhere - meaning that even if /home/[user_xyz] is in sync, the users, data, and Virtualmin working parts and pointers are not necessarily all in sync.

I don't have much experience with NFS, but I think I understand what you are suggesting. I am using older gear, so my intranet is 100MB, and I would assume that this would affect performance in terms of reads/writes to the NFS-mounted volumes? Basically, everything but "boot" would need to be replicated... I've never used the load-balancing module you mentioned.

Right now I am just striving to get the hot swap to work. DRBD is the soundest means I can think of to replicate all the databases, users, files, and such - basically, the "/" partition on serverA is the same as the "/" partition on serverB.

This is a simple guide...
http://wiki.centos.org/HowTos/Ha-Drbd
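
For reference, a minimal DRBD 8-style resource definition looks roughly like this - the hostnames, partitions and IPs are placeholders:

# /etc/drbd.conf (identical on both nodes)
resource r0 {
    protocol C;                      # synchronous replication
    on serverA {
        device    /dev/drbd0;
        disk      /dev/sda3;         # the spare, unallocated partition
        address   192.168.0.1:7788;
        meta-disk internal;
    }
    on serverB {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.0.2:7788;
        meta-disk internal;
    }
}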

But of course, if you did not have the foresight to keep some of your drive partition unallocated, then you'll have to either use external metadata for DRBD or "shrink" that partition's file system.

http://www.drbd.org/users-guide/ch-internals.html
(The external metadata option is, um, not so good for recovery.)

..and of course, shrinking a mounted &quot;/&quot; is not exactly fun...

http://www.howtoforge.com/linux_resizing_ext3_partitions

I'll let you know how this adventure plays out. Many VirtualMin users have a configuration similar to the one you have mentioned. I'm a bit leery about using NFS - but I am willing to sacrifice a little performance if it means solid fail-over...

Thanks for your suggestions!

Fri, 10/31/2008 - 16:06 (Reply to #18)
merlynx

Brent,

Are you using virtualmin for your hosting environment?

I am kinda confused. In the context of Virtualmin on CentOS there is no "sites-enabled" like you mention in Debian, and I am not sure how that maps onto CentOS's Apache configuration, where Virtualmin manages the /etc/httpd/conf/httpd.conf file and the VirtualHost entries live there.

Which machine is virtualmin installed on?

You have two front-end webservers mounting the same content directories from one development server (which is clustered - for failover). That much I think I get.

#1 Development server --mounted to--> Server A
   ||                 --mounted to--> Server B
   ||
#2 [drbd clustered Dev server]

So if Server A goes down, Server B will "fail over."
If #1 goes down, #2 will fail over.
In both cases, vice versa.

Do you run virtualmin on the Development Server then?

Where do you run BIND? What does your BIND config look like here? Do you route all FTP requests to the Dev server as well? I notice you put the Postfix config on webservers A & B - does this mean that mail is routed through them but ultimately lands on the "Development" server box?

What about usermin and email sending/receiving?

I am just trying to understand how you got all the pieces to fit together...

Thanks for the dialogue - it is very helpful!

Fri, 10/31/2008 - 16:23 (Reply to #19)
Joe

<div class='quote'>CentOS there is no "sites-enabled" like you mention in Debian, and I am not sure how that maps onto CentOS's Apache configuration, where Virtualmin manages the /etc/httpd/conf/httpd.conf file and the VirtualHost entries live there.</div>

Virtualmin works with the OS, not separately from it. If the OS keeps things in separate VirtualHost files (as is the case on Debian and Ubuntu), Virtualmin does, too. If the OS keeps everything in one httpd.conf (as is the case on CentOS), Virtualmin does that.

Don't let a minor semantic difference between two systems throw you--they're doing the same thing in slightly different ways.

As for some of your questions, I don't know how Brent does it (and there are many ways, with their own positives and negatives). But, here's how I'd probably do it:

I'm assuming either server can take over the IP(s) from the other, in the event of failure. Everything gets a lot harder if you can't take over the IP.

<div class='quote'>Where do you run bind?</div>

On both systems. You need two anyway. So, have one slave to the other, and vice versa. This is no different than what the documentation for cluster slave DNS autoconfiguration tells you to do...you just do it twice, once on each machine. DNS is, <i>by far</i> the easiest service to provide redundancy for. Don't let it intimidate you--it's designed from the ground up to be highly redundant. And Webmin takes that one step farther by being able to easily promote a slave zone into a master zone, if you lose your server A and need for server B to become the master for a week while you get server A back into production, or whatever.
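
For the curious, a slave zone on server B is nothing more than a stanza like this in named.conf - the zone name, master IP and file path are placeholders:

// named.conf on server B, slaving a zone mastered on server A
zone "example.com" {
        type slave;
        masters { 192.0.2.1; };
        file "/var/named/slaves/example.com.hosts";
};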

<div class='quote'>What does your bind config look like here?</div>

Follow the autoconfiguration guide for both server A and server B, and you won't need to care. Really don't fret over BIND. It should just work.

I don't know about the other stuff, as I'm not remembering what all this thread was about. But I just wanted to point you in the right direction on BIND, because I could tell you wanted to make it way more complicated than it needs to be. ;-)

--

Check out the forum guidelines!

Mon, 11/03/2008 - 06:28 (Reply to #20)
brent_nesbitt

I have Virtualmin running on my "Developer's" server. This is the only part that I do not have redundant servers for (because everything on this server can afford to be down for a reboot here and there).

So I have the Development server set up pretty much as a standalone Virtualmin server, with Virtualmin, BIND, FTP, popa3d, and Postfix running there. Then I export the Apache configs from this server to my webservers, and I have created a custom /etc/init.d/remote-apache script to start and stop Apache on those servers from the Virtualmin server. (All my servers are running Debian Etch.)
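
A bare-bones sketch of that kind of wrapper (the hostnames are placeholders, and it assumes passwordless SSH keys from the Virtualmin box to the webservers) would be something like:

#!/bin/sh
# /etc/init.d/remote-apache - run the requested apache action on each frontend
WEBSERVERS="web1.example.com web2.example.com"

case "$1" in
    start|stop|restart|reload)
        for host in $WEBSERVERS; do
            ssh root@"$host" "/etc/init.d/apache2 $1"
        done
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|reload}"
        exit 1
        ;;
esac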

I think you understand the web content part, so I won't go over that again here.

Thu, 02/12/2009 - 21:40 (Reply to #21)
merlynx

@Nick White,

Thanks for this post. I made a graph of the setup and nearly have a path of execution. I have four servers like you (A, B, C, D).

I have a few questions. Is DRBD necessary for the NFS back-end servers (C & D)? Could I use an rsync script running from the "master" NFS server (C) to the "slave" NFS server (D) every minute or so?

Currently, the "master" NFS is the production machine running 60+ websites (server "C"). It's a bit overworked, and lags - especially when there is high demand for media (video/audio) content.

I am thinking of setting it up so that this master NFS (C) is the only place users can add/edit their sites, and the rsync script pushes from the "master" NFS (which acts as the Webmin/Virtualmin box as well) to the "slave" NFS. Does this sound feasible? Can you see any caveats with C -> rsync cron -> D?

C --exports nfs--> A and B (front-end servers)
D --mirror of-- C
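
What I have in mind for the C -> D mirror is just a root cron entry along these lines (the paths are illustrative):

# root crontab on serverC - push content and mail config to serverD every minute
# (a lock file would stop overlapping runs if a sync takes more than a minute)
* * * * * rsync -a --delete /home/ serverD:/home/
* * * * * rsync -a --delete /etc/postfix/ serverD:/etc/postfix/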

The front-end servers - is one your "primary.nameserver.com" (A) and the other your "secondary.nameserver.com" (B)? Jamie suggested I set (A) to have (B) as a slave DNS in Virtualmin, and (B) to have (A) as a slave in Virtualmin. Is this what you mean by "domain transfers"? Would it be OK to have the main NFS/Webmin server (C) configured as "primary.nameserver.com", or would all DNS need to go to the front-end servers?

Because of the nature of this configuration, I also wanted to ask if there are any "gotchas" that I should be on the lookout for in this context. You mentioned the Webmin cluster and the need to refresh all users when a new one is added on the main NFS server (C)? I wonder if there is a way to automate this as part of the process Virtualmin runs when a new user account is created, as well as the necessary Apache and Postfix (and BIND, I would assume?) restarts...

Thanks for your excellent example and any advice given will be greatly appreciated!!!

Mon, 02/16/2009 - 22:57 (Reply to #22)
merlynx

Currently I've got the four servers in the initial stages of configuration.

serverA and serverB are the web content servers, sharing 1 public IP that is the fulcrum of load balancing.

serverC and serverD are the nfs file servers for serverA and serverB.

serverC is the primary file server; this is the Virtualmin-enabled machine on which all user changes are made and with which all machines are clustered (users synchronized). serverC exports the shares for content and Postfix to serverA & serverB. serverC's content is rsynced to serverD. serverD takes over serverC's IP when serverC fails.

Right now, in the context of serverC's shares over NFSv4, Apache is failing to start, because in the CentOS implementation of httpd there are symbolic links to /var/log/httpd, /usr/lib64/httpd/modules, and /var/run, and these are not "seen" through an NFSv4-mounted export with these settings in the client's fstab:
auto,rw,async,_netdev,proto=tcp,retry=10,wsize=65536,rsize=65536,hard,intr 0 0

So I'm researching other ways to make sure that when Apache starts, it sees the right directories. I'm either going to hardcode the references in the httpd.conf file or find a setting/method for NFSv4 to resolve the links. Thoughts?
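
By "hardcode" I mean replacing the ServerRoot-relative, symlinked references with absolute paths in httpd.conf, roughly like this (module list trimmed to one example; these are the stock CentOS paths):

# httpd.conf - absolute paths instead of the ServerRoot-relative symlinks
PidFile    /var/run/httpd.pid
ErrorLog   /var/log/httpd/error_log
CustomLog  /var/log/httpd/access_log combined
LoadModule rewrite_module /usr/lib64/httpd/modules/mod_rewrite.so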

Tue, 07/24/2007 - 08:00
tabletguy

I'm also interested in this feature for next year (purchasing a second server about then).

But, this is a topic that I think could be a good addition to the documentation wiki.

Could the people who have succeeded in this write up something on the wiki (rather than here)? At that point, we can "trick" Joe and Jamie into supporting it, since it's on the documentation pages (little joke here).

I know there's some sort of clustering in Webmin, but does it handle Virtualmin files, etc. as well?

I'd like to see this officially part of the package at some point.

Wed, 08/29/2007 - 03:54
Joe

Hey Kris,

That really depends on what you're doing. Quite a bit is stored outside of /etc/webmin--all of the actual configuration files for all of the services are elsewhere, for example.

If you can give me a specific use case, I can probably be more specific--like DNS clustering is well-documented (check the wiki) and well understood, or mail with LDAP (maybe we can leash some of the folks here who are using LDAP for mail users into answering questions), or MySQL tables, etc. Clustering is a many faceted thing, and everybody needs something a little or a lot different.

--

Check out the forum guidelines!

Tue, 09/25/2007 - 17:35
sitespark

Yes, I've noticed the features in VM for a while, and I've been trying to find info on clustering in the wiki, but it doesn't seem to be there yet.

Joe - can you add this asap!?

Thu, 01/24/2008 - 03:17
TonyShadwick

Unison would be a much better choice than rsync for this. :)

That said, I had started developing the Apache Clustering module and stopped, for lack of time, and apparent lack of interest. If there's sufficient interest I'll look into finishing it...

Fri, 10/31/2008 - 07:11
brent_nesbitt

Oh yes, Debian does this with its apache2 package. They're basically just directories of symlinks to the config files you want to "activate". sites-enabled is then read by apache2.conf (httpd.conf).

/etc/apache2/conf.d
/etc/apache2/sites-available
/etc/apache2/sites-enabled (symlinks to configs in sites-available)
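
Enabling a site is just creating the symlink; Debian also ships a2ensite/a2dissite helpers that do exactly this (the site name is an example):

# by hand:
ln -s /etc/apache2/sites-available/example.com /etc/apache2/sites-enabled/example.com
# or with the helper, then reload apache:
a2ensite example.com
/etc/init.d/apache2 reload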

The way I have things set up, there is no synchronization. Both webservers mount the same web content, stored on a DRBD-based nfs cluster. I guess I forgot to mention that the Developer's server mounts this content under /home, and the webservers mount the same content under /var/www. (this makes absolute paths a bit confusing though)

Content Server A \_ shared IP --> Mounted on Webserver A \__ load balanced
Content Server B /            --> Also on Webserver B    /
                              --> Also on developer server <-- access via VPN

Fri, 02/13/2009 - 06:36
brent_nesbitt

I guess there are two reasons for the backup content server. One would be for high-availability, and the other would be for data redundancy.

Our setup is designed for high availability, and hence I can fail over the content server and reboot it for updates at any time, without concern that the websites will experience any downtime or that the data might be out of sync. My understanding is that even if a client is in the middle of deploying a site during my failover, it will continue on the second server with only a bit of a hiccup. I think you would lose this with the rsync model you mentioned.

However, if your goal is one of data redundancy, perhaps rsync would be better. In this model you would only sync the servers often enough to keep the data relatively current (ie. daily / weekly), but infrequently enough that if your server were hacked, your data deleted or had a hardware failure, these undesirable changes would NOT be instantly replicated to the other server. In our case, we depend on daily tape backups for this.

As for DNS, we just use a single VirtualMin server which also acts as our DNS server, so replication is not a factor. This server, however, is last in a chain of Active Directory DNS servers, and not queried directly by clients, so everything is theoretically cached on the AD servers which reduces the load and increases the redundancy of the DNS system.

Tue, 02/17/2009 - 03:39
brent_nesbitt

Hmmm.

I don't know how you're doing your nfs, but for me:
* My frontend servers are configured as apache servers, and they will run standalone, without nfs, with all log/module/etc directories intact. (of course there may be errors about missing content)
* I mount a single "content" share via NFS on each of my webservers, which contains only the various DocumentRoots for the various sites. This is the redundant part that fails over between servers.
* I also mount the /etc/apache2 directory from the NFS server so that both webservers use the same configs - but I think this is optional.
* I am also currently logging to mysql instead of to disk. The module for doing this is a bit flaky, so I wouldn't recommend it - but we're transitioning between webfarms, so this is an easy way to consolidate all our logs. I have done this in the past with apache logging to the local machine, and it works fine like that.

Note: in order to make virtualmin play nice with multiple apache servers I had to write a script: /etc/init.d/remote-apache, which restarts apache on both frontend servers in turn.

Wed, 07/15/2009 - 05:39 (Reply to #30)
ashishpadave

If you mount the /etc/apache2 directory from the NFS server, won't you have to restart all the webservers that share the config when the config file changes? The config file will change every time you add a new virtual host via Virtualmin. If you do have to restart, how do you do it?

Also, each VirtualHost has an IP, e.g.

<VirtualHost 10.20.144.133:80>
</VirtualHost>

How do both servers have the same IP?

Sun, 09/29/2013 - 02:13
partha06

That's what I am looking for too - that's why it needs to be name-based virtual hosting. Is there anybody who can help with how to customise Virtualmin so that it will create name-based virtual hosts?

Sun, 09/29/2013 - 10:53
partha06

Go to Webmin > Virtualmin > Module Config and replace the shared IP with *.
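
With that change, new virtual hosts get written as name-based entries, something along these lines - the domain and paths are just an example, and the NameVirtualHost directive is only needed on Apache 2.2 and older:

NameVirtualHost *:80

<VirtualHost *:80>
    ServerName   example.com
    ServerAlias  www.example.com
    DocumentRoot /home/example/public_html
</VirtualHost>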

Tue, 08/02/2016 - 05:04
soydemadrid

Hi, I realise this thread is pretty old, but I'm also looking for a 2-server solution for failover, mirroring and HA...

I found this guide which looks like a pretty awesome solution to give 2 servers failover AND load balancing!

http://gcharriere.com/blog/?p=339

My problem is I already have my server running and I'm now wanting to add a second server (which is exactly the same hardware etc) to provide the failover, load balancing and HA/mirroring...

So I don't really know how to follow any of these guides without mashing up the server that's already in production use! Or how to mirror/sync everything properly, including Virtualmin/Webmin/Usermin...

Did anybody get anywhere over the years of this discussion with a 2 server setup? And if so I'd be happy to pay for some help to get this to work.

Thanks for any help at all.
