mirror of
https://github.com/EnterpriseDB/repmgr.git
synced 2026-03-25 16:16:29 +00:00
doc: clarify witness server location
This commit is contained in:
@@ -16,15 +16,22 @@
 <para>
 A typical use case for a witness server is a two-node streaming replication
 setup, where the primary and standby are in different locations (data centres).
-By creating a witness server in the same location as the primary, if the primary
-becomes unavailable it's possible for the standby to decide whether it can
-promote itself without risking a "split brain" scenario: if it can't see either the
+By creating a witness server in the same location (data centre) as the primary,
+if the primary becomes unavailable it's possible for the standby to decide whether
+it can promote itself without risking a "split brain" scenario: if it can't see either the
 witness or the primary server, it's likely there's a network-level interruption
 and it should not promote itself. If it can see the witness but not the primary,
 this proves there is no network interruption and the primary itself is unavailable,
 and it can therefore promote itself (and ideally take action to fence the
 former primary).
 </para>
+<note>
+  <para>
+    <emphasis>Never</emphasis> install a witness server on the same physical host
+    as another node in the replication cluster managed by &repmgr; - it's essential
+    the witness is not affected in any way by failure of another node.
+  </para>
+</note>
 <para>
 For more complex replication scenarios, e.g. with multiple datacentres, it may
 be preferable to use location-based failover, which ensures that only nodes