Author Topic: High Availability in Zentyal Server 3.4  (Read 8692 times)

jkerihuel

  • Zentyal Staff
  • Zen Warrior
  • *****
  • Posts: 152
  • Karma: +18/-3
    • View Profile
High Availability in Zentyal Server 3.4
« on: February 27, 2014, 12:29:02 pm »
Good afternoon,

Some of you have already noticed that the forthcoming version of Zentyal Server 3.4 comes with High Availability for UTM services. In this regard, we have released a new article on Zentyal Labs that introduces this feature and provides a quick tutorial on a common deployment scenario.

http://labs.zentyal.org/high-availability-in-zentyal/

Any feedback about this feature from early adopters before it goes live would be much appreciated!

We have taken good note that some of you have already started :-)

Br,
Julien.
Twitter: http://twitter.com/jkerihuel
Key fingerprint = 08BA 50B1 9EFF 8E1E FB4A  24FA B2A9 D5F3 9624 1CC2

half_life

  • Bug Hunter
  • Zen Hero
  • *****
  • Posts: 867
  • Karma: +59/-0
    • View Profile
Re: High Availability in Zentyal Server 3.4
« Reply #1 on: February 27, 2014, 03:22:31 pm »
Wow!! I have already updated the testbed machine. I have many questions. How did you address quorum? A quorum disk? Can I, for instance, bring MySQL up in HA without getting into a tug of war with Zentyal about proxy configs? I haven't started poking around in the internals, so if some of these questions are way off, I am sorry. Any problems setting up HAProxy as a load balancer for HTTP, etc.?


I am off to kick the tires and snoop under the hood.


Good Job!

peterpugh

  • Guest
Re: High Availability in Zentyal Server 3.4
« Reply #2 on: February 27, 2014, 05:40:14 pm »
You can have a quorum disk on a two-node cluster. The third partition isn't essential, just recommended.
At a guess it's not using a third.

Cheapest third partition is a Raspberry Pi off eBay, my top tip anyway. :)
« Last Edit: February 27, 2014, 05:43:54 pm by peterpugh »

half_life

  • Bug Hunter
  • Zen Hero
  • *****
  • Posts: 867
  • Karma: +59/-0
    • View Profile
Re: High Availability in Zentyal Server 3.4
« Reply #3 on: February 27, 2014, 06:05:07 pm »
Quote from: peterpugh
You can have a quorum disk on a two-node cluster. The third partition isn't essential, just recommended.
At a guess it's not using a third.

Cheapest third partition is a Raspberry Pi off eBay, my top tip anyway. :)

Sorta where I was going with "How did you address quorum? A quorum disk?" :) Not a perfect solution, but it does work. Less to quiz you than to bring out a downside for those with less exposure to HA techniques: what happens when the quorum machine/disk dies? A cluster of three or more machines without a quorum disk is a better solution for those who can afford it.

HA can be done inexpensively, not cheaply.

peterpugh

  • Guest
Re: High Availability in Zentyal Server 3.4
« Reply #4 on: February 27, 2014, 09:24:45 pm »
Quote from: half_life
Sorta where I was going with "How did you address quorum? A quorum disk?" :) Not a perfect solution, but it does work. Less to quiz you than to bring out a downside for those with less exposure to HA techniques: what happens when the quorum machine/disk dies? A cluster of three or more machines without a quorum disk is a better solution for those who can afford it.

HA can be done inexpensively, not cheaply.

I think Christian knows more about HA than I do.
I took the quorum disk to be the mechanism that decides which server is the working one. It's a voting method for choosing the current live node of the cluster.
If the quorum disk dies, I presumed you would get an error that the quorum disk had died, but everything should continue as is until a fault develops. At that stage you have problems, because there is no mechanism to choose the right node.

You could place the quorum partition on two mirrored USB pens on one of the nodes if you wished, or on shared space on one of the nodes. That is the cheapest and nastiest method.
Then, if you have a client somewhere that is always available, some shared space there is sort of the next level of cheap and nasty.

If you have some form of shared NAS then this is much better, a RAID NAS better again, going all the way up to a networked, distributed RAID NAS.

I quite like the Raspberry Pi, as for £40 you get a dedicated quorum disk that is pretty robust.

I start thinking that when people say 3+ node clusters are the better option in the SMB market, they are missing the reality of that market.

We are talking about something fit for purpose that works to a plausible budget, and you can make High Availability cheaper. It looks like Zentyal have done it, and hats off to them; the HA took me by surprise and I didn't know it was planned.

Zentyal, for me, has just moved up a gear and into more markets. Definitely into the M of SMB, and probably much higher.

Really exciting news and a brilliant plus for Zentyal; this is a real biggie in my opinion. It looks like Zentyal have provided a simple and inexpensive cluster.

I am not talking about you here, half_life, but there is so much bull mud in the IT scene, like the claim that the only way to make sure data is destroyed on a hard drive is degaussing or physical destruction. Total tosh. Or Y2K, when everything was supposedly going to come to a halt. I hate the misinformation in this arena and the waste of resources it causes.

My opinion is that a two-node cluster in the SMB market is very feasible; 3+ nodes, let's say, I have never seen, and I have worked for some pretty big companies, though not at blue-chip level.

A shared RAID NAS is probably the best fit for purpose if you are in the M region.

I feel it gets a bit silly when it comes down to the "mine is bigger than yours" argument. :)

Mine's smaller than yours, as my Raspberry Pi as a dedicated quorum node with two mirrored flash drives would quite comfortably do the job for about $60.

:) It's a joke really; honest, it's huge.
   

« Last Edit: February 27, 2014, 09:39:54 pm by peterpugh »

sixstone

  • Zentyal Staff
  • Zen Hero
  • *****
  • Posts: 1417
  • Karma: +26/-0
    • View Profile
    • Sixstone's blog
Re: High Availability in Zentyal Server 3.4
« Reply #5 on: February 27, 2014, 10:07:22 pm »
Quote from: half_life
Wow!! I have already updated the testbed machine. I have many questions. How did you address quorum? A quorum disk? Can I, for instance, bring MySQL up in HA without getting into a tug of war with Zentyal about proxy configs? I haven't started poking around in the internals, so if some of these questions are way off, I am sorry. Any problems setting up HAProxy as a load balancer for HTTP, etc.?

Firstly, thanks very much for your kind words. Quorum is handled by the corosync software; you can check there how quorum is reached. Zentyal configures the cluster differently depending on whether you have two hosts or more, since quorum with two nodes works differently than with more nodes.

You can manually configure MySQL Master/Slave or Master/Master. As stated in the article, you may set up any of the resources Pacemaker supports, including MySQL.
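For anyone who wants to poke at how quorum is being reached on their own install, something along these lines should show it (a minimal sketch; the exact corosync version and file layout shipped with Zentyal 3.4 may differ, and the snippet below only illustrates the standard votequorum two-node special case, not a dump of Zentyal's generated config):

    # Show the current quorum state, expected votes and membership
    sudo corosync-quorumtool -s

    # In /etc/corosync/corosync.conf, a 2-node cluster typically relies on
    # the votequorum "two_node" special case rather than a quorum disk:
    quorum {
        provider: corosync_votequorum
        two_node: 1
    }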

My secret is my silence...

half_life

  • Bug Hunter
  • Zen Hero
  • *****
  • Posts: 867
  • Karma: +59/-0
    • View Profile
Re: High Availability in Zentyal Server 3.4
« Reply #6 on: February 27, 2014, 10:12:17 pm »
A quorum disk is a shared disk that all nodes in the cluster write to continuously. They all declare that they are still living while checking the disk to make sure all the other nodes are still alive. If you lose one node from quorum, the rest decide that node must have died and issue a STONITH (shoot the other node in the head). The kicker is which policy is set in Pacemaker for when quorum is lost. The default behavior is to stop all services managed by the cluster. You are describing the freeze policy, or "hold what you have". Which did Zentyal opt for? I don't know, I haven't gotten that far yet. A two-node cluster that stops when one node is lost wouldn't be much use though, would it?
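For reference, the behavior half_life describes here is controlled by Pacemaker's no-quorum-policy cluster property. A minimal sketch of setting it by hand with the crm shell (standard Pacemaker/crmsh usage, not anything Zentyal-specific, so treat it as illustrative only):

    # Pacemaker's default: stop all cluster-managed resources when quorum is lost
    sudo crm configure property no-quorum-policy=stop

    # "Hold what you have": keep already-running resources, start nothing new
    sudo crm configure property no-quorum-policy=freeze

    # Other accepted values are "ignore" and "suicide"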

The next scenario would be loss of quorum on a reboot. Your datacenter loses power and your backup power is exhausted. The machines shut down until power is restored. The quorum disk machine doesn't come back. What happens? Using the freeze policy isn't going to help here.

The answer in all of these cases is a third machine without a quorum disk. The machine only needs to participate in quorum, not run any services, so your Raspberry Pi could serve in the role. So could an Atom netbook.

These days my horizons are somewhat broader than they were. Where I work we use OpenStack and VMware as well as good old-fashioned Red Hat clustering. The small team I am on provides direct support to over one thousand machines (virtual and physical), while the company manages over half a million, if I read the numbers correctly. I am confident when I say that you would be surprised what small businesses leverage in the IT segment today. Three nodes running the full Zentyal stack plus 24/7/365 admin support for the platform can be had for $190 plus bandwidth charges. Does HA sound too expensive to do now?

Back on point, we most definitely agree that this IS HUGE for Zentyal. For an out-of-the-box solution, Zentyal is making huge strides. The caveat that there are a few failure modes it is susceptible to is probably very acceptable to the typical small business. A little tweaking and your run-of-the-mill sysadmin could make this a hell of a lot more robust on a budget.

Don't worry peterpugh, where I come from it's perfectly OK to say "you are full of it" as long as it is followed by "and here's why". Then we hash it out like big kids. No hurt feelings at all.

half_life

  • Bug Hunter
  • Zen Hero
  • *****
  • Posts: 867
  • Karma: +59/-0
    • View Profile
Re: High Availability in Zentyal Server 3.4
« Reply #7 on: February 27, 2014, 10:16:44 pm »
Quote from: sixstone
Quote from: half_life
Wow!! I have already updated the testbed machine. I have many questions. How did you address quorum? A quorum disk? Can I, for instance, bring MySQL up in HA without getting into a tug of war with Zentyal about proxy configs? I haven't started poking around in the internals, so if some of these questions are way off, I am sorry. Any problems setting up HAProxy as a load balancer for HTTP, etc.?

Firstly, thanks very much for your kind words. Quorum is handled by the corosync software; you can check there how quorum is reached. Zentyal configures the cluster differently depending on whether you have two hosts or more, since quorum with two nodes works differently than with more nodes.

You can manually configure MySQL Master/Slave or Master/Master. As stated in the article, you may set up any of the resources Pacemaker supports, including MySQL.



Thanks, and you are very welcome. I am more likely to pursue plans to replace MySQL with XtraDB for its out-of-the-box simplicity in setting up clustering. I have just finished installing a second node to test clustering. Now for the fun!
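For context, the out-of-the-box clustering half_life mentions comes from Galera replication in Percona XtraDB Cluster. A rough sketch of the wsrep settings involved (node names, addresses and the provider path are illustrative only and depend on your distribution and package versions):

    # my.cnf fragment on each node (illustrative values)
    [mysqld]
    binlog_format            = ROW
    default_storage_engine   = InnoDB
    innodb_autoinc_lock_mode = 2
    wsrep_provider           = /usr/lib/libgalera_smm.so
    wsrep_cluster_name       = zentyal_cluster
    wsrep_cluster_address    = gcomm://192.168.1.10,192.168.1.11
    wsrep_node_name          = node1
    wsrep_node_address       = 192.168.1.10
    wsrep_sst_method         = rsync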


Thanks

peterpugh

  • Guest
Re: High Availability in Zentyal Server 3.4
« Reply #8 on: February 27, 2014, 10:19:29 pm »
In your case of a data center offering PaaS and other solutions, I can see your argument.

I guess I am stuck in my SBS days, where I still see SBS and Zentyal very much on-site.

There was no offense meant, and the thread is very constructive as it showed two sides of a solution and opinion.

PS: I don't mind the reciprocal; there are some good minds on here from whom I learn much, and arguments are much needed but should never be taken personally.

« Last Edit: February 27, 2014, 10:21:41 pm by peterpugh »

Torsten73

  • Zen Warrior
  • ***
  • Posts: 174
  • Karma: +6/-1
    • View Profile
Re: High Availability in Zentyal Server 3.4
« Reply #9 on: February 27, 2014, 10:27:06 pm »
I have never used HA before, but I would like to see what it will be able to give.

Is it possible to do what I am thinking about:
I have 2 KVM servers connected by a fairly fast (50000/5000) IPsec tunnel. The tunnel is provided by a FritzBox-to-FritzBox VPN over a cable provider.

If I have a Zentyal 3.4 setup with HA in a VM on both KVM servers, would this be a way to go? Both Zentyals are not working as a gateway, only as DC and groupware.
So in case one system dies, the failover will switch to the second server and we can work without a break (but with less speed, due to the internet speed)?
--------------------------------------------------------------
Zentyal 3.5 (offline) under Ubuntu 12.04.3, YAVDR 0.5 as KVM host
Action Pack subscription with a running Exc. 2013 :-)

half_life

  • Bug Hunter
  • Zen Hero
  • *****
  • Posts: 867
  • Karma: +59/-0
    • View Profile
Re: High Availability in Zentyal Server 3.4
« Reply #10 on: February 27, 2014, 10:27:49 pm »
I did read the numbers wrong earlier. The 450k number is the tally for my group; the 1.3k number was the week-to-week delta. I know I haven't put my hands on more than a couple of hundred in the last few months.

half_life

  • Bug Hunter
  • Zen Hero
  • *****
  • Posts: 867
  • Karma: +59/-0
    • View Profile
Re: High Availability in Zentyal Server 3.4
« Reply #11 on: February 27, 2014, 10:35:49 pm »
Quote from: Torsten73
I have never used HA before, but I would like to see what it will be able to give.

Is it possible to do what I am thinking about:
I have 2 KVM servers connected by a fairly fast (50000/5000) IPsec tunnel. The tunnel is provided by a FritzBox-to-FritzBox VPN over a cable provider.

If I have a Zentyal 3.4 setup with HA in a VM on both KVM servers, would this be a way to go? Both Zentyals are not working as a gateway, only as DC and groupware.
So in case one system dies, the failover will switch to the second server and we can work without a break (but with less speed, due to the internet speed)?

As long as you are only using the Zentyal machine for authentication and authorization, I doubt you would notice any change at all. Use GlusterFS in replication mode, or ZFS with two separate volumes rsyncing back and forth, and you should have a usable HA file share.
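If you go the GlusterFS route, a minimal two-node replicated volume looks roughly like this (hostnames and brick paths are made up for illustration; you would still have to mount the volume wherever your shares actually live):

    # Run once on either node, after both can resolve each other
    sudo gluster peer probe node2
    # Create a 2-way replicated volume, one brick per node
    sudo gluster volume create ha_share replica 2 \
        node1:/export/ha_share node2:/export/ha_share
    sudo gluster volume start ha_share
    # Mount it with the GlusterFS client
    sudo mount -t glusterfs node1:/ha_share /srv/ha_share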

In your scenario I think it would be better if you didn't cluster email and instead let it fall back to a backup MX.

sixstone

  • Zentyal Staff
  • Zen Hero
  • *****
  • Posts: 1417
  • Karma: +26/-0
    • View Profile
    • Sixstone's blog
Re: High Availability in Zentyal Server 3.4
« Reply #12 on: February 27, 2014, 10:50:02 pm »
Quote from: half_life
The kicker is which policy is set in Pacemaker for when quorum is lost. The default behavior is to stop all services managed by the cluster. You are describing the freeze policy, or "hold what you have". Which did Zentyal opt for? I don't know, I haven't gotten that far yet. A two-node cluster that stops when one node is lost wouldn't be much use though, would it?
We opted for "hold what you have" in the 2-node cluster case, and for stopping all resources if we lose quorum in the >=3 node scenario.
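If anyone wants to confirm which policy their own cluster ended up with, standard Pacemaker tooling can query it (illustrative, not Zentyal-specific):

    # Query the current value of the no-quorum-policy cluster property
    sudo crm_attribute --type crm_config --query --name no-quorum-policy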

Great explanation of what a quorum disk is. Thanks :)
My secret is my silence...

christian

  • Guest
Re: High Availability in Zentyal Server 3.4
« Reply #13 on: February 27, 2014, 11:25:31 pm »
Quote from: half_life
In your scenario I think it would be better if you didn't cluster email and instead let it fall back to a backup MX.

You're very welcome to elaborate on this ;) (if you see what I mean :P)

I'm also very interested to read your feedback after the HA tests you may perform with 3.4.
Pacemaker + corosync is definitely a good starting point, but to me, unless I misunderstand, it doesn't (yet) provide significant added value in real-life deployments except for some very specific scenarios.

The HA scope is only FW, DHCP, DNS and OpenVPN.
As written in the beta section, I don't understand what a FW cluster could provide, given the monolithic aspect of Zentyal server.

Let me explain what I understand:

- You deploy 2 Zentyal servers, each running a basic set of services, e.g. HTTP proxy, mail and file sharing, to keep it simple. Of course, each comes with FW, DHCP and DNS.
- You set up your cluster, with floating IPs inside and outside.
- Out of the box, services are defined to be accessed using the real IPs, but as you are clever, you manually change this to use the floating IP where relevant, therefore here for... DNS and OpenVPN (see the floating-IP sketch at the end of this post). No need to define anything for DHCP :) and for the firewall... I still don't understand :-[
- If server 1 fails, what's the scenario? DHCP, DNS and OpenVPN swing over to server 2. So clients get a new lease if needed (this is very unlikely to be a problem in most organisations), and access from outside to the VPN works if you connect to the floating (external) IP, but the default route is still server 1 for internal servers; or are you using the internal floating IP as the default route? Much better like this, but the HTTP proxy is not part of your cluster, is it? So no transparent proxy... Same for mail, file sharing and the domain controller.
This rather promotes something I was not expecting in terms of deployment, which is to have one Zentyal server as a border/internet gateway and perhaps another internal server providing services internally. In such a case, from outside, thanks to the highly available VPN, you can still access services. The bad news is that Zentyal is still very monolithic in its design, meaning such a split of services across different Zentyal servers is not easy.

This is to explain that moving to HA is very interesting in principle, but without a roadmap and an understanding of what could be delivered (HA scope) and when, deployment in production is not yet achievable, even partially, in my view.
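As an aside, the floating IPs discussed above are ordinary Pacemaker IPaddr2 resources. A minimal sketch of defining one by hand with the crm shell (address, netmask and interface are made up for illustration, and Zentyal's own HA module may generate something different):

    # A floating (virtual) IP that the cluster moves between nodes
    sudo crm configure primitive p_float_ip ocf:heartbeat:IPaddr2 \
        params ip=192.168.1.100 cidr_netmask=24 nic=eth0 \
        op monitor interval=10s
    # See which node is currently hosting it
    sudo crm_mon -1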

half_life

  • Bug Hunter
  • Zen Hero
  • *****
  • Posts: 867
  • Karma: +59/-0
    • View Profile
Re: High Availability in Zentyal Server 3.4
« Reply #14 on: February 28, 2014, 12:06:23 am »
This is all rather early on for me to be commenting, but one thing to remember is the addition of HAProxy. It can and does act as a transparent load balancer. I am sorry about the mail comment; I fully intend to try running Zarafa behind such a cluster. More on this when my thoughts are a little more organized and I have played with the setup a little.
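For anyone who wants to experiment with that idea, a bare-bones HAProxy fragment for balancing HTTP across two backend nodes might look roughly like this (hostnames, ports and addresses are illustrative only; making it truly transparent to the backends needs extra work such as the PROXY protocol or TPROXY, which is beyond this sketch):

    # haproxy.cfg fragment (illustrative)
    frontend http_in
        bind 192.168.1.100:80          # e.g. the cluster's floating IP
        default_backend web_nodes

    backend web_nodes
        balance roundrobin
        server node1 192.168.1.10:80 check
        server node2 192.168.1.11:80 check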