diff --git a/doc/filelist.sgml b/doc/filelist.sgml
index ae980054..90fc8810 100644
--- a/doc/filelist.sgml
+++ b/doc/filelist.sgml
@@ -54,7 +54,6 @@
-<!ENTITY repmgrd-witness-server  SYSTEM "repmgrd-witness-server.sgml">
diff --git a/doc/repmgr.sgml b/doc/repmgr.sgml
index a4daf9db..16cac2b9 100644
--- a/doc/repmgr.sgml
+++ b/doc/repmgr.sgml
@@ -84,7 +84,6 @@
&repmgrd-automatic-failover;
&repmgrd-configuration;
&repmgrd-operation;
- &repmgrd-witness-server;
&repmgrd-bdr;
diff --git a/doc/repmgrd-automatic-failover.sgml b/doc/repmgrd-automatic-failover.sgml
index 1521fe5d..d89b6de5 100644
--- a/doc/repmgrd-automatic-failover.sgml
+++ b/doc/repmgrd-automatic-failover.sgml
@@ -13,36 +13,45 @@
providing monitoring information about the state of each standby.
-  <sect2 id="repmgrd-cascading-replication">
+  <sect2 id="repmgrd-witness-server">
   <indexterm>
    <primary>repmgrd</primary>
-    <secondary>cascading replication</secondary>
+    <secondary>witness server</secondary>
   </indexterm>

   <indexterm>
-    <primary>cascading replication</primary>
+    <primary>witness server</primary>
    <secondary>repmgrd</secondary>
   </indexterm>

-   <title>repmgrd and cascading replication</title>
+   <title>Using a witness server with repmgrd</title>

   <para>
-    Cascading replication - where a standby can connect to an upstream node and not
-    the primary server itself - was introduced in PostgreSQL 9.2. &repmgr; and
-    <application>repmgrd</application> support cascading replication by keeping track of the relationship
-    between standby servers - each node record is stored with the node id of its
-    upstream ("parent") server (except of course the primary server).
+    In a situation caused e.g. by a network interruption between two
+    data centres, it's important to avoid a "split-brain" situation where
+    both sides of the network assume they are the active segment and the
+    side without an active primary unilaterally promotes one of its standbys.
   </para>

   <para>
-    In a failover situation where the primary node fails and a top-level standby
-    is promoted, a standby connected to another standby will not be affected
-    and continue working as normal (even if the upstream standby it's connected
-    to becomes the primary node). If however the node's direct upstream fails,
-    the "cascaded standby" will attempt to reconnect to that node's parent
-    (unless <varname>failover</varname> is set to <literal>manual</literal> in
-    <filename>repmgr.conf</filename>).
+    To prevent this situation happening, it's essential to ensure that one
+    network segment has a "voting majority", so other segments will know
+    they're in the minority and not attempt to promote a new primary. Where
+    an odd number of servers exists, this is not an issue. However, if each
+    network has an even number of nodes, it's necessary to provide some way
+    of ensuring a majority, which is where the witness server becomes useful.
+   </para>
+
+   <para>
+    This is not a fully-fledged standby node and is not integrated into
+    replication, but it effectively represents the "casting vote" when
+    deciding which network segment has a majority. A witness server can
+    be set up using <xref linkend="repmgr-witness-register">. Note that it only
+    makes sense to create a witness server in conjunction with running
+    <application>repmgrd</application>; the witness server will require its own
+    <application>repmgrd</application> instance.
   </para>
  </sect2>
    <primary>repmgrd</primary>
@@ -96,4 +105,36 @@
+
+  <sect2 id="repmgrd-cascading-replication">
+   <indexterm>
+    <primary>repmgrd</primary>
+    <secondary>cascading replication</secondary>
+   </indexterm>
+
+   <indexterm>
+    <primary>cascading replication</primary>
+    <secondary>repmgrd</secondary>
+   </indexterm>
+
+   <title>repmgrd and cascading replication</title>
+
+   <para>
+    Cascading replication - where a standby can connect to an upstream node and not
+    the primary server itself - was introduced in PostgreSQL 9.2. &repmgr; and
+    <application>repmgrd</application> support cascading replication by keeping track of the relationship
+    between standby servers - each node record is stored with the node id of its
+    upstream ("parent") server (except of course the primary server).
+   </para>
+
+   <para>
+    In a failover situation where the primary node fails and a top-level standby
+    is promoted, a standby connected to another standby will not be affected
+    and continue working as normal (even if the upstream standby it's connected
+    to becomes the primary node). If however the node's direct upstream fails,
+    the "cascaded standby" will attempt to reconnect to that node's parent
+    (unless <varname>failover</varname> is set to <literal>manual</literal> in
+    <filename>repmgr.conf</filename>).
+   </para>
+  </sect2>
diff --git a/doc/repmgrd-witness-server.sgml b/doc/repmgrd-witness-server.sgml
deleted file mode 100644
index 3bebca43..00000000
--- a/doc/repmgrd-witness-server.sgml
+++ /dev/null
@@ -1,31 +0,0 @@
-<sect1 id="repmgrd-witness-server" xreflabel="Using a witness server">
- <indexterm>
-  <primary>repmgrd</primary>
-  <secondary>witness server</secondary>
- </indexterm>
-
- <title>Using a witness server with repmgrd</title>
- <para>
-  In a situation caused e.g. by a network interruption between two
-  data centres, it's important to avoid a "split-brain" situation where
-  both sides of the network assume they are the active segment and the
-  side without an active primary unilaterally promotes one of its standbys.
- </para>
- <para>
-  To prevent this situation happening, it's essential to ensure that one
-  network segment has a "voting majority", so other segments will know
-  they're in the minority and not attempt to promote a new primary. Where
-  an odd number of servers exists, this is not an issue. However, if each
-  network has an even number of nodes, it's necessary to provide some way
-  of ensuring a majority, which is where the witness server becomes useful.
- </para>
- <para>
-  This is not a fully-fledged standby node and is not integrated into
-  replication, but it effectively represents the "casting vote" when
-  deciding which network segment has a majority. A witness server can
-  be set up using <xref linkend="repmgr-witness-register">. Note that it only
-  makes sense to create a witness server in conjunction with running
-  <application>repmgrd</application>; the witness server will require its own
-  <application>repmgrd</application> instance.
- </para>
-</sect1>
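
For reference, the witness setup the relocated section describes might look like the following sketch of a `repmgr.conf` on the witness node. The node id, names, hosts, and paths below are illustrative assumptions, not values taken from this patch:

```ini
# repmgr.conf on the witness node -- hypothetical example values
node_id=4                        # any id unused by the replication nodes
node_name='witness'
conninfo='host=witness-host user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/var/lib/postgresql/data'

# The witness is then registered against the primary, and a dedicated
# repmgrd instance is started on the witness node, e.g.:
#   repmgr -f /etc/repmgr.conf witness register -h primary-host
#   repmgrd -f /etc/repmgr.conf
```

As the moved section notes, the witness only participates in voting; it is not part of the replication cluster itself.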