diff --git a/doc/cloning-standbys.sgml b/doc/cloning-standbys.sgml
index 4d409264..05b513a8 100644
--- a/doc/cloning-standbys.sgml
+++ b/doc/cloning-standbys.sgml
@@ -163,6 +163,173 @@
+
+
+ cloning
+ replication slots
+
+
+
+ replication slots
+ cloning
+
+ Cloning and replication slots
+
+ Replication slots were introduced with PostgreSQL 9.4 and are designed to ensure
+ that any standby connected to the primary using a replication slot will always
+ be able to retrieve the required WAL files. This removes the need to manually
+ manage WAL file retention by estimating the number of WAL files that need to
+ be maintained on the primary using wal_keep_segments.
+ Be aware, however, that if a standby is disconnected, WAL will continue to
+ accumulate on the primary until either the standby reconnects or the replication
+ slot is dropped.
+
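+
+ The amount of WAL being retained on behalf of each slot can be checked on the
+ primary. A query along the following lines (a sketch; pg_wal_lsn_diff()
+ and pg_current_wal_lsn() are available from PostgreSQL 10, with
+ differently-named equivalents in earlier releases) shows how much WAL each
+ slot is holding back from removal:
+
+ -- approximate WAL retained for each replication slot (PostgreSQL 10 and later)
+ SELECT slot_name,
+        active,
+        pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
+   FROM pg_replication_slots
+  ORDER BY slot_name;
+
+ A large and growing retained_wal value for an inactive slot
+ usually indicates a disconnected standby.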
+
+ To enable &repmgr; to use replication slots, set the boolean parameter
+ use_replication_slots in repmgr.conf:
+
+ use_replication_slots=true
+
+
+
+ Replication slots must be enabled in postgresql.conf by
+ setting the parameter max_replication_slots to at least the
+ number of expected standbys (changes to this parameter require a server restart).
+
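+
+ For example, on a primary expected to serve up to three standbys,
+ postgresql.conf might contain the following (values illustrative;
+ max_wal_senders must also be set high enough to accommodate all standbys):
+
+ # postgresql.conf - changing these settings requires a server restart
+ max_replication_slots = 3
+ max_wal_senders = 3
+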
+
+ When cloning a standby, &repmgr; will automatically generate an appropriate
+ slot name, which is stored in the repmgr.nodes table, and create the slot
+ on the upstream node:
+
+ repmgr=# SELECT node_id, upstream_node_id, active, node_name, type, priority, slot_name
+ FROM repmgr.nodes ORDER BY node_id;
+ node_id | upstream_node_id | active | node_name | type | priority | slot_name
+ ---------+------------------+--------+-----------+---------+----------+---------------
+ 1 | | t | node1 | primary | 100 | repmgr_slot_1
+ 2 | 1 | t | node2 | standby | 100 | repmgr_slot_2
+ 3 | 1 | t | node3 | standby | 100 | repmgr_slot_3
+ (3 rows)
+
+
+ repmgr=# SELECT slot_name, slot_type, active, active_pid FROM pg_replication_slots ;
+ slot_name | slot_type | active | active_pid
+ ---------------+-----------+--------+------------
+ repmgr_slot_2 | physical | t | 23658
+ repmgr_slot_3 | physical | t | 23687
+ (2 rows)
+
+
+ Note that a slot name is also generated by default for the primary, but it
+ is not actually used unless the primary is converted to a standby using e.g.
+ repmgr standby switchover.
+
+
+ Further information on replication slots in the PostgreSQL documentation:
+ https://www.postgresql.org/docs/current/interactive/warm-standby.html#STREAMING-REPLICATION-SLOTS
+
+
+
+ While replication slots can be useful for streaming replication, it's
+ recommended to monitor for inactive slots, as these will cause WAL files to
+ accumulate indefinitely on the primary, possibly filling the disk and
+ leading to server failure.
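+
+ Inactive slots can be identified, and if no longer required removed, using
+ standard PostgreSQL functions; for example (the slot name shown is
+ illustrative):
+
+ -- list slots not currently in use by any standby
+ SELECT slot_name, slot_type, restart_lsn
+   FROM pg_replication_slots
+  WHERE NOT active;
+
+ -- drop a slot whose standby has been permanently removed
+ SELECT pg_drop_replication_slot('repmgr_slot_3');
+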
+
+ As an alternative we recommend using 2ndQuadrant's Barman,
+ which offloads WAL management to a separate server, negating the need to use replication
+ slots to reserve WAL. See section
+ for more details on using &repmgr; together with Barman.
+
+
+
+
+
+
+ cloning
+ cascading replication
+
+ Cloning and cascading replication
+
+ Cascading replication, introduced with PostgreSQL 9.2, enables a standby server
+ to replicate from another standby server rather than directly from the primary,
+ meaning replication changes "cascade" down through a hierarchy of servers. This
+ can be used to reduce load on the primary and minimize bandwidth usage between
+ sites. For more details, see the
+
+ PostgreSQL cascading replication documentation.
+
+
+ &repmgr; supports cascading replication. When cloning a standby,
+ set the command-line parameter --upstream-node-id to the
+ node_id of the server the standby should connect to, and
+ &repmgr; will create recovery.conf to point to it. Note
+ that if --upstream-node-id is not explicitly provided,
+ &repmgr; will set the standby's recovery.conf to
+ point to the primary node.
+
+
+ To demonstrate cascading replication, first ensure you have a primary and standby
+ set up as shown in the .
+ Then create an additional standby server with repmgr.conf looking
+ like this:
+
+ node_id=3
+ node_name=node3
+ conninfo='host=node3 user=repmgr dbname=repmgr'
+ data_directory='/var/lib/postgresql/data'
+
+
+ Clone this standby (using the connection parameters for the existing standby),
+ ensuring --upstream-node-id is provided with the node_id
+ of the previously created standby (if following the example, this will be 2):
+
+ $ repmgr -h node2 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --upstream-node-id=2
+ NOTICE: using configuration file "/etc/repmgr.conf"
+ NOTICE: destination directory "/var/lib/postgresql/data" provided
+ INFO: connecting to upstream node
+ INFO: connected to source node, checking its state
+ NOTICE: checking for available walsenders on upstream node (2 required)
+ INFO: sufficient walsenders available on upstream node (2 required)
+ INFO: successfully connected to source node
+ DETAIL: current installation size is 29 MB
+ INFO: creating directory "/var/lib/postgresql/data"...
+ NOTICE: starting backup (using pg_basebackup)...
+ HINT: this may take some time; consider using the -c/--fast-checkpoint option
+ INFO: executing: 'pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h node2 -U repmgr -X stream '
+ NOTICE: standby clone (using pg_basebackup) complete
+ NOTICE: you can now start your PostgreSQL server
+ HINT: for example: pg_ctl -D /var/lib/postgresql/data start
+
+ then register it (note that --upstream-node-id must be provided here
+ too):
+
+ $ repmgr -f /etc/repmgr.conf standby register --upstream-node-id=2
+ NOTICE: standby node "node3" (ID: 3) successfully registered
+
+
+
+ After starting the standby, the cluster will look like this, showing that node3
+ is attached to node2, not the primary (node1):
+
+ $ repmgr -f /etc/repmgr.conf cluster show
+ ID | Name | Role | Status | Upstream | Location | Connection string
+ ----+-------+---------+-----------+----------+----------+--------------------------------------
+ 1 | node1 | primary | * running | | default | host=node1 dbname=repmgr user=repmgr
+ 2 | node2 | standby | running | node1 | default | host=node2 dbname=repmgr user=repmgr
+ 3 | node3 | standby | running | node2 | default | host=node3 dbname=repmgr user=repmgr
+
+
+
+
+ Under some circumstances when setting up a cascading replication
+ cluster, you may wish to clone a downstream standby whose upstream node
+ does not yet exist. In this case you can clone from the primary (or
+ another upstream node); provide the parameter --upstream-conninfo
+ to explicitly set the upstream's primary_conninfo string
+ in recovery.conf.
+
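+
+ For example, to clone from the primary while pointing recovery.conf
+ at a node2 which has not yet been created (a sketch; connection
+ values are illustrative):
+
+ $ repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone \
+     --upstream-node-id=2 \
+     --upstream-conninfo='host=node2 user=repmgr dbname=repmgr'
+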
+
+
+
cloning
@@ -230,6 +397,7 @@
cloning a node or executing .
-
+
+