mirror of
https://github.com/EnterpriseDB/repmgr.git
synced 2026-03-23 23:26:30 +00:00
Update standby cloning manual
@@ -163,6 +163,173 @@
</sect2>
</sect1>

<sect1 id="cloning-replication-slots" xreflabel="Cloning and replication slots">
<indexterm>
<primary>cloning</primary>
<secondary>replication slots</secondary>
</indexterm>

<indexterm>
<primary>replication slots</primary>
<secondary>cloning</secondary>
</indexterm>
<title>Cloning and replication slots</title>
<para>
Replication slots were introduced with PostgreSQL 9.4 and are designed to ensure
that any standby connected to the primary using a replication slot will always
be able to retrieve the required WAL files. This removes the need to manually
manage WAL file retention by estimating the number of WAL files that need to
be maintained on the primary using <varname>wal_keep_segments</varname>.
Be aware, however, that if a standby is disconnected, WAL will continue to
accumulate on the primary until either the standby reconnects or the replication
slot is dropped.
</para>
<para>
To enable &repmgr; to use replication slots, set the boolean parameter
<varname>use_replication_slots</varname> in <filename>repmgr.conf</filename>:
<programlisting>
use_replication_slots=true
</programlisting>
</para>
<para>
Replication slots must be enabled in <filename>postgresql.conf</filename> by
setting the parameter <varname>max_replication_slots</varname> to at least the
number of expected standbys (changes to this parameter require a server restart).
</para>
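<para>
For example, for a cluster expected to contain no more than three standbys, the
following <filename>postgresql.conf</filename> setting would suffice (the value
shown is illustrative; size it for your own topology):
<programlisting>
max_replication_slots = 3</programlisting>
</para>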
<para>
When cloning a standby, &repmgr; will automatically generate an appropriate
slot name, which is stored in the <literal>repmgr.nodes</literal> table, and create the slot
on the upstream node:
<programlisting>
repmgr=# SELECT node_id, upstream_node_id, active, node_name, type, priority, slot_name
           FROM repmgr.nodes ORDER BY node_id;
 node_id | upstream_node_id | active | node_name |  type   | priority |   slot_name
---------+------------------+--------+-----------+---------+----------+---------------
       1 |                  | t      | node1     | primary |      100 | repmgr_slot_1
       2 |                1 | t      | node2     | standby |      100 | repmgr_slot_2
       3 |                1 | t      | node3     | standby |      100 | repmgr_slot_3
(3 rows)</programlisting>

<programlisting>
repmgr=# SELECT slot_name, slot_type, active, active_pid FROM pg_replication_slots ;
   slot_name   | slot_type | active | active_pid
---------------+-----------+--------+------------
 repmgr_slot_2 | physical  | t      |      23658
 repmgr_slot_3 | physical  | t      |      23687
(2 rows)</programlisting>
</para>
<para>
Note that a slot name will be created by default for the primary, but the slot
will not actually be used unless the primary is converted to a standby using e.g.
<command>repmgr standby switchover</command>.
</para>
<para>
Further information on replication slots in the PostgreSQL documentation:
<ulink url="https://www.postgresql.org/docs/current/interactive/warm-standby.html#STREAMING-REPLICATION-SLOTS">https://www.postgresql.org/docs/current/interactive/warm-standby.html#STREAMING-REPLICATION-SLOTS</ulink>
</para>
<tip>
<simpara>
While replication slots can be useful for streaming replication, it's
recommended to monitor for inactive slots, as these will cause WAL files to
build up indefinitely, possibly leading to server failure.
</simpara>
<simpara>
As an alternative we recommend using 2ndQuadrant's <ulink url="https://www.pgbarman.org/">Barman</ulink>,
which offloads WAL management to a separate server, negating the need to use replication
slots to reserve WAL. See section <xref linkend="cloning-from-barman">
for more details on using &repmgr; together with Barman.
</simpara>
</tip>
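<para>
As a starting point for such monitoring, inactive slots can be identified with a
query along these lines (a minimal sketch; integrate it with your own monitoring
system as appropriate):
<programlisting>
SELECT slot_name, slot_type, active
  FROM pg_replication_slots
 WHERE active IS FALSE;</programlisting>
Any slot reported here belongs to a disconnected (or never-connected) standby and
is forcing the primary to retain WAL.
</para>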
</sect1>

<sect1 id="cloning-cascading" xreflabel="Cloning and cascading replication">
<indexterm>
<primary>cloning</primary>
<secondary>cascading replication</secondary>
</indexterm>
<title>Cloning and cascading replication</title>
<para>
Cascading replication, introduced with PostgreSQL 9.2, enables a standby server
to replicate from another standby server rather than directly from the primary,
meaning replication changes "cascade" down through a hierarchy of servers. This
can be used to reduce load on the primary and minimize bandwidth usage between
sites. For more details, see the
<ulink url="https://www.postgresql.org/docs/current/static/warm-standby.html#CASCADING-REPLICATION">
PostgreSQL cascading replication documentation</ulink>.
</para>
<para>
&repmgr; supports cascading replication. When cloning a standby,
set the command-line parameter <literal>--upstream-node-id</literal> to the
<varname>node_id</varname> of the server the standby should connect to, and
&repmgr; will create <filename>recovery.conf</filename> to point to it. Note
that if <literal>--upstream-node-id</literal> is not explicitly provided,
&repmgr; will set the standby's <filename>recovery.conf</filename> to
point to the primary node.
</para>
<para>
To demonstrate cascading replication, first ensure you have a primary and standby
set up as shown in the <xref linkend="quickstart">.
Then create an additional standby server with <filename>repmgr.conf</filename> looking
like this:
<programlisting>
node_id=3
node_name=node3
conninfo='host=node3 user=repmgr dbname=repmgr'
data_directory='/var/lib/postgresql/data'</programlisting>
</para>
<para>
Clone this standby (using the connection parameters for the existing standby),
ensuring <literal>--upstream-node-id</literal> is provided with the <varname>node_id</varname>
of the previously created standby (if following the example, this will be <literal>2</literal>):
<programlisting>
$ repmgr -h node2 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --upstream-node-id=2
NOTICE: using configuration file "/etc/repmgr.conf"
NOTICE: destination directory "/var/lib/postgresql/data" provided
INFO: connecting to upstream node
INFO: connected to source node, checking its state
NOTICE: checking for available walsenders on upstream node (2 required)
INFO: sufficient walsenders available on upstream node (2 required)
INFO: successfully connected to source node
DETAIL: current installation size is 29 MB
INFO: creating directory "/var/lib/postgresql/data"...
NOTICE: starting backup (using pg_basebackup)...
HINT: this may take some time; consider using the -c/--fast-checkpoint option
INFO: executing: 'pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h node2 -U repmgr -X stream '
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example: pg_ctl -D /var/lib/postgresql/data start</programlisting>

then register it (note that <literal>--upstream-node-id</literal> must be provided here
too):
<programlisting>
$ repmgr -f /etc/repmgr.conf standby register --upstream-node-id=2
NOTICE: standby node "node3" (ID: 3) successfully registered
</programlisting>
</para>
<para>
After starting the standby, the cluster will look like this, showing that <literal>node3</literal>
is attached to <literal>node2</literal>, not the primary (<literal>node1</literal>).
<programlisting>
$ repmgr -f /etc/repmgr.conf cluster show
 ID | Name  | Role    | Status    | Upstream | Location | Connection string
----+-------+---------+-----------+----------+----------+--------------------------------------
 1  | node1 | primary | * running |          | default  | host=node1 dbname=repmgr user=repmgr
 2  | node2 | standby |   running | node1    | default  | host=node2 dbname=repmgr user=repmgr
 3  | node3 | standby |   running | node2    | default  | host=node3 dbname=repmgr user=repmgr
</programlisting>
</para>
<tip>
<simpara>
Under some circumstances when setting up a cascading replication
cluster, you may wish to clone a downstream standby whose upstream node
does not yet exist. In this case you can clone from the primary (or
another upstream node); provide the parameter <literal>--upstream-conninfo</literal>
to explicitly set the upstream's <varname>primary_conninfo</varname> string
in <filename>recovery.conf</filename>.
</simpara>
</tip>
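<para>
As a sketch of such an invocation, assuming the not-yet-existing upstream will
later be reachable as <literal>node2</literal> (all host names and conninfo
values here are illustrative):
<programlisting>
$ repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone \
    --upstream-node-id=2 \
    --upstream-conninfo='host=node2 user=repmgr dbname=repmgr'</programlisting>
The clone is taken from <literal>node1</literal>, but <filename>recovery.conf</filename>
will point the standby at <literal>node2</literal> once that node is available.
</para>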
</sect1>

<sect1 id="cloning-advanced" xreflabel="Advanced cloning options">
<indexterm>
<primary>cloning</primary>
@@ -230,6 +397,7 @@
cloning a node or executing <xref linkend="repmgr-standby-follow">.
</para>
</sect2>

</sect1>


</chapter>