diff --git a/doc/filelist.sgml b/doc/filelist.sgml
index c46d8eeb..ae980054 100644
--- a/doc/filelist.sgml
+++ b/doc/filelist.sgml
@@ -54,7 +54,6 @@
-<!ENTITY repmgrd-network-split     SYSTEM "repmgrd-network-split.sgml">
diff --git a/doc/repmgr.sgml b/doc/repmgr.sgml
index 6061a918..a4daf9db 100644
--- a/doc/repmgr.sgml
+++ b/doc/repmgr.sgml
@@ -84,7 +84,6 @@
&repmgrd-automatic-failover;
&repmgrd-configuration;
&repmgrd-operation;
- &repmgrd-network-split;
&repmgrd-witness-server;
&repmgrd-bdr;
diff --git a/doc/repmgrd-automatic-failover.sgml b/doc/repmgrd-automatic-failover.sgml
index 2695881a..1521fe5d 100644
--- a/doc/repmgrd-automatic-failover.sgml
+++ b/doc/repmgrd-automatic-failover.sgml
@@ -43,5 +43,57 @@
+
+ <sect1 id="repmgrd-network-split" xreflabel="Network splits">
+  <indexterm>
+    <primary>repmgrd</primary>
+    <secondary>network splits</secondary>
+  </indexterm>
+
+  <indexterm>
+    <primary>network splits</primary>
+  </indexterm>
+
+  <title>Handling network splits with repmgrd</title>
+  <para>
+   A common pattern for replication cluster setups is to spread servers over
+   more than one datacentre. This can provide benefits such as geographically
+   distributed read replicas and DR (disaster recovery) capability. However
+   it also means there is a risk of disconnection at the network level
+   between datacentre locations, which would result in a split-brain
+   scenario if servers in a secondary datacentre were no longer able to see
+   the primary in the main datacentre and promoted a standby among themselves.
+  </para>
+  <para>
+   &repmgr; enables provision of a "<xref linkend="repmgrd-witness-server">"
+   to artificially create a quorum of servers in a particular location,
+   ensuring that nodes in another location will not elect a new primary if
+   they are unable to see the majority of nodes. However this approach does
+   not scale well, particularly with more complex replication setups, e.g.
+   where the majority of nodes are located outside of the primary
+   datacentre. It also means the witness node needs to be managed as an
+   extra PostgreSQL instance outside of the main replication cluster, which
+   adds administrative and programming complexity.
+  </para>
+  <para>
+   repmgr4 introduces the concept of <literal>location</literal>:
+   each node is associated with an arbitrary location string (default is
+   <literal>default</literal>); this is set in <filename>repmgr.conf</filename>, e.g.:
+   <programlisting>
+    node_id=1
+    node_name=node1
+    conninfo='host=node1 user=repmgr dbname=repmgr connect_timeout=2'
+    data_directory='/var/lib/postgresql/data'
+    location='dc1'</programlisting>
+  </para>
+  <para>
+   In a failover situation, repmgrd will check whether any servers in the
+   same location as the current primary node are visible. If none are,
+   repmgrd will assume a network interruption and will not promote any node
+   in any other location (it will, however, enter <quote>degraded
+   monitoring</quote> mode until a primary becomes visible).
+  </para>
+ </sect1>
diff --git a/doc/repmgrd-network-split.sgml b/doc/repmgrd-network-split.sgml
deleted file mode 100644
index 0eacadfa..00000000
--- a/doc/repmgrd-network-split.sgml
+++ /dev/null
@@ -1,48 +0,0 @@
-<sect1 id="repmgrd-network-split" xreflabel="Network splits">
- <indexterm>
-   <primary>repmgrd</primary>
-   <secondary>network splits</secondary>
- </indexterm>
-
- <title>Handling network splits with repmgrd</title>
- <para>
-  A common pattern for replication cluster setups is to spread servers over
-  more than one datacentre. This can provide benefits such as geographically
-  distributed read replicas and DR (disaster recovery) capability. However
-  it also means there is a risk of disconnection at the network level
-  between datacentre locations, which would result in a split-brain
-  scenario if servers in a secondary datacentre were no longer able to see
-  the primary in the main datacentre and promoted a standby among themselves.
- </para>
- <para>
-  &repmgr; enables provision of a "<xref linkend="repmgrd-witness-server">"
-  to artificially create a quorum of servers in a particular location,
-  ensuring that nodes in another location will not elect a new primary if
-  they are unable to see the majority of nodes. However this approach does
-  not scale well, particularly with more complex replication setups, e.g.
-  where the majority of nodes are located outside of the primary
-  datacentre. It also means the witness node needs to be managed as an
-  extra PostgreSQL instance outside of the main replication cluster, which
-  adds administrative and programming complexity.
- </para>
- <para>
-  repmgr4 introduces the concept of <literal>location</literal>:
-  each node is associated with an arbitrary location string (default is
-  <literal>default</literal>); this is set in <filename>repmgr.conf</filename>, e.g.:
-  <programlisting>
-   node_id=1
-   node_name=node1
-   conninfo='host=node1 user=repmgr dbname=repmgr connect_timeout=2'
-   data_directory='/var/lib/postgresql/data'
-   location='dc1'</programlisting>
- </para>
- <para>
-  In a failover situation, repmgrd will check whether any servers in the
-  same location as the current primary node are visible. If none are,
-  repmgrd will assume a network interruption and will not promote any node
-  in any other location (it will, however, enter <quote>degraded
-  monitoring</quote> mode until a primary becomes visible).
- </para>
-</sect1>
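
The location-based promotion rule documented above can be sketched as follows. This is an illustrative model only, not repmgr source code: `Node`, `may_promote`, and the node/location names are hypothetical, and the real repmgrd election involves additional checks (priority, LSN position, etc.).

```python
# Sketch of the location-aware failover rule described in the docs:
# a standby outside the primary's location may only take part in promotion
# if at least one node in the primary's location is still visible;
# otherwise a network split is assumed and repmgrd stays in degraded
# monitoring mode.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    location: str   # value of 'location' in repmgr.conf, e.g. 'dc1'
    visible: bool   # can this node currently be reached?


def may_promote(nodes: list[Node], primary_location: str) -> bool:
    """Return True if promotion is permissible: some node in the
    primary's location is still visible, so this is a primary failure
    rather than a network split between locations."""
    return any(n.visible for n in nodes if n.location == primary_location)


# Scenario: dc2 has lost contact with every node in dc1.
nodes = [
    Node("node1", "dc1", visible=False),  # primary, unreachable
    Node("node2", "dc1", visible=False),  # standby in primary's DC, unreachable
    Node("node3", "dc2", visible=True),   # local standby in secondary DC
]

print(may_promote(nodes, "dc1"))  # False: assume network split, do not promote
```

If `node2` were still visible from dc2, the check would pass: the primary's location is reachable, so the outage is attributed to the primary node itself rather than to an inter-datacentre network split.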