commit: Expand documentation

doc/cloning-standbys.sgml (new file, 162 lines)
@@ -0,0 +1,162 @@
<chapter id="cloning-standbys" xreflabel="cloning standbys">
<title>Cloning standbys</title>
<para>
</para>

<sect1 id="cloning-from-barman" xreflabel="Cloning from Barman">
<indexterm><primary>Barman</primary></indexterm>
<title>Cloning a standby from Barman</title>
<para>
<xref linkend="repmgr-standby-clone"> can use
<ulink url="https://www.2ndquadrant.com/">2ndQuadrant</ulink>'s
<ulink url="https://www.pgbarman.org/">Barman</ulink> application
to clone a standby (and also as a fallback source for WAL files).
</para>
<tip>
<simpara>
Barman (aka PgBarman) should be considered an integral part of any
PostgreSQL replication cluster. For more details see:
<ulink url="https://www.pgbarman.org/">https://www.pgbarman.org/</ulink>.
</simpara>
</tip>
<para>
Barman support provides the following advantages:
<itemizedlist spacing="compact" mark="bullet">
<listitem>
<para>
the primary node does not need to perform a new backup every time a
new standby is cloned
</para>
</listitem>
<listitem>
<para>
a standby node can be disconnected for longer periods without losing
the ability to catch up, and without causing accumulation of WAL
files on the primary node
</para>
</listitem>
<listitem>
<para>
WAL management on the primary becomes much easier as there's no need
to use replication slots, and <varname>wal_keep_segments</varname>
does not need to be set.
</para>
</listitem>
</itemizedlist>
</para>

<sect2 id="cloning-from-barman-prerequisites" xreflabel="Prerequisites for cloning from Barman">
<title>Prerequisites for cloning from Barman</title>
<para>
In order to enable Barman support for <command>repmgr standby clone</command>, the following
prerequisites must be met:
<itemizedlist spacing="compact" mark="bullet">
<listitem>
<para>
the <varname>barman_server</varname> setting in <filename>repmgr.conf</filename> is the same as the
server configured in Barman;
</para>
</listitem>
<listitem>
<para>
the <varname>barman_host</varname> setting in <filename>repmgr.conf</filename> is set to the SSH
hostname of the Barman server;
</para>
</listitem>
<listitem>
<para>
the <varname>restore_command</varname> setting in <filename>repmgr.conf</filename> is configured to
use a copy of the <command>barman-wal-restore</command> script shipped with the
<literal>barman-cli</literal> package (see below);
</para>
</listitem>
<listitem>
<para>
the Barman catalogue includes at least one valid backup for this server.
</para>
</listitem>
</itemizedlist>
</para>
<note>
<simpara>
Barman support is automatically enabled if <varname>barman_server</varname>
is set. Normally it is good practice to use Barman, for instance
when fetching a base backup while cloning a standby; in any case,
Barman mode can be disabled using the <literal>--without-barman</literal>
command line option.
</simpara>
</note>
<tip>
<simpara>
If you have a non-default SSH configuration on the Barman
server, e.g. using a port other than 22, you can set those
parameters in a dedicated Host section in <filename>~/.ssh/config</filename>
corresponding to the value of <varname>barman_host</varname> in
<filename>repmgr.conf</filename>. See the <literal>Host</literal>
section in <command>man 5 ssh_config</command> for more details.
</simpara>
</tip>
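<tip>
<para>
As an illustration (host name, port and user below are hypothetical), an entry
in <filename>~/.ssh/config</filename> matching a <varname>barman_host</varname>
value of <literal>barmansrv</literal> might look like this:
<programlisting>
Host barmansrv
    HostName barman.example.com
    Port 2222
    User barman</programlisting>
</para>
</tip>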
<para>
It's now possible to clone a standby from Barman, e.g.:
<programlisting>
NOTICE: using configuration file "/etc/repmgr.conf"
NOTICE: destination directory "/var/lib/postgresql/data" provided
INFO: connecting to Barman server to verify backup for test_cluster
INFO: checking and correcting permissions on existing directory "/var/lib/postgresql/data"
INFO: creating directory "/var/lib/postgresql/data/repmgr"...
INFO: connecting to Barman server to fetch server parameters
INFO: connecting to upstream node
INFO: connected to source node, checking its state
INFO: successfully connected to source node
DETAIL: current installation size is 29 MB
NOTICE: retrieving backup from Barman...
receiving file list ...
(...)
NOTICE: standby clone (from Barman) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example: pg_ctl -D /var/lib/postgresql/data start</programlisting>
</para>
</sect2>

<sect2 id="cloning-from-barman-restore-command" xreflabel="Using Barman as a WAL file source">
<title>Using Barman as a WAL file source</title>
<para>
As a fallback in case streaming replication is interrupted, PostgreSQL can optionally
retrieve WAL files from an archive, such as that provided by Barman. This is done by
setting <varname>restore_command</varname> in <filename>recovery.conf</filename> to
a valid shell command which can retrieve a specified WAL file from the archive.
</para>
<para>
<command>barman-wal-restore</command> is a Python script provided as part of the <literal>barman-cli</literal>
package (Barman 2.0 and later; for Barman 1.x the script is provided separately as
<command>barman-wal-restore.py</command>) which performs this function for Barman.
</para>
<para>
To use <command>barman-wal-restore</command> with &repmgr;,
assuming Barman is located on the <literal>barmansrv</literal> host
and that <command>barman-wal-restore</command> is located as an executable at
<filename>/usr/bin/barman-wal-restore</filename>,
<filename>repmgr.conf</filename> should include the following lines:
<programlisting>
barman_host=barmansrv
barman_server=somedb
restore_command=/usr/bin/barman-wal-restore barmansrv somedb %f %p</programlisting>
</para>
<note>
<simpara>
<command>barman-wal-restore</command> supports command line switches to
control parallelism (<literal>--parallel=N</literal>) and compression
(<literal>--bzip2</literal>, <literal>--gzip</literal>).
</simpara>
</note>
<note>
<para>
To use a non-default Barman configuration file on the Barman server,
specify this in <filename>repmgr.conf</filename> with <varname>barman_config</varname>:
<programlisting>
barman_config=/path/to/barman.conf</programlisting>
</para>
</note>
</sect2>
</sect1>
</chapter>
doc/command-reference.sgml (new file, 147 lines)
@@ -0,0 +1,147 @@
<chapter id="command-reference" xreflabel="command reference">
<title>repmgr command reference</title>

<para>
Overview of repmgr commands.
</para>

<sect1 id="repmgr-standby-clone" xreflabel="repmgr standby clone">
<indexterm><primary>repmgr standby clone</primary></indexterm>
<title>repmgr standby clone</title>
<para>
<command>repmgr standby clone</command> clones a PostgreSQL node from another
PostgreSQL node, typically the primary, but optionally from any other node in
the cluster or from Barman. It creates the <filename>recovery.conf</filename> file required
to attach the cloned node to the primary node (or another standby, if cascading replication
is in use).
</para>
<note>
<simpara>
<command>repmgr standby clone</command> does not start the standby; after cloning,
<command>repmgr standby register</command> must be executed to notify &repmgr; of its presence.
</simpara>
</note>

<sect2 id="repmgr-standby-clone-config-file-copying" xreflabel="Copying configuration files">
<title>Handling configuration files</title>

<para>
Note that by default, all configuration files in the source node's data
directory will be copied to the cloned node. Typically these will be
<filename>postgresql.conf</filename>, <filename>postgresql.auto.conf</filename>,
<filename>pg_hba.conf</filename> and <filename>pg_ident.conf</filename>.
These may require modification before the standby is started.
</para>
<para>
In some cases (e.g. on Debian or Ubuntu Linux installations), PostgreSQL's
configuration files are located outside of the data directory and will
not be copied by default. &repmgr; can copy these files, either to the same
location on the standby server (provided appropriate directory and file permissions
are available), or into the standby's data directory. This requires passwordless
SSH access to the primary server. Add the option <literal>--copy-external-config-files</literal>
to the <command>repmgr standby clone</command> command; by default files will be copied to
the same path as on the upstream server. Note that the user executing <command>repmgr</command>
must have write access to those directories.
</para>
<para>
To have the configuration files placed in the standby's data directory, specify
<literal>--copy-external-config-files=pgdata</literal>, but note that
any include directives in the copied files may need to be updated.
</para>
<tip>
<simpara>
For reliable configuration file management we recommend using a
configuration management tool such as Ansible, Chef, Puppet or Salt.
</simpara>
</tip>
</sect2>

<sect2 id="repmgr-standby-clone-wal-management" xreflabel="Managing WAL during the cloning process">
<title>Managing WAL during the cloning process</title>
<para>
When initially cloning a standby, you will need to ensure
that all required WAL files remain available while the cloning is taking
place. To ensure this happens when using the default <command>pg_basebackup</command> method,
&repmgr; will set <command>pg_basebackup</command>'s <literal>--xlog-method</literal>
parameter to <literal>stream</literal>,
which will ensure all WAL files generated during the cloning process are
streamed in parallel with the main backup. Note that this requires two
replication connections to be available (&repmgr; will verify sufficient
connections are available before attempting to clone, and this can be checked
before performing the clone using the <literal>--dry-run</literal> option).
</para>
<para>
To override this behaviour, in <filename>repmgr.conf</filename> set
<command>pg_basebackup</command>'s <literal>--xlog-method</literal>
parameter to <literal>fetch</literal>:
<programlisting>
pg_basebackup_options='--xlog-method=fetch'</programlisting>

and ensure that <literal>wal_keep_segments</literal> is set to an appropriately high value.
See the <ulink url="https://www.postgresql.org/docs/current/static/app-pgbasebackup.html">
pg_basebackup</ulink> documentation for details.
</para>

<note>
<simpara>
From PostgreSQL 10, <command>pg_basebackup</command>'s
<literal>--xlog-method</literal> parameter has been renamed to
<literal>--wal-method</literal>.
</simpara>
</note>
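<para>
On PostgreSQL 10 and later the equivalent override in
<filename>repmgr.conf</filename> would therefore be (a sketch, assuming the
renamed parameter is passed through unchanged):
<programlisting>
pg_basebackup_options='--wal-method=fetch'</programlisting>
</para>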
</sect2>
</sect1>


<sect1 id="repmgr-standby-register" xreflabel="repmgr standby register">
<indexterm><primary>repmgr standby register</primary></indexterm>
<title>repmgr standby register</title>
<para>
<command>repmgr standby register</command> adds a standby's information to
the &repmgr; metadata. This command needs to be executed to enable
promote/follow operations and to allow <command>repmgrd</command> to work with the node.
An existing standby can be registered using this command. Execute with the
<literal>--dry-run</literal> option to check what would happen without actually registering the
standby.
</para>

<sect2 id="rempgr-standby-register-wait" xreflabel="rempgr standby register --wait">
<title>Waiting for the registration to propagate to the standby</title>
<para>
Depending on your environment and workload, it may take some time for
the standby's node record to propagate from the primary to the standby. Some
actions (such as starting <command>repmgrd</command>) require that the standby's node record
is present and up-to-date to function correctly.
</para>
<para>
By providing the option <literal>--wait-sync</literal> to the
<command>repmgr standby register</command> command, &repmgr; will wait
until the record is synchronised before exiting. An optional timeout (in
seconds) can be added to this option (e.g. <literal>--wait-sync=60</literal>).
</para>
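<para>
For example (a sketch; the configuration file path is illustrative), to wait
up to 60 seconds for the node record to synchronise:
<programlisting>
$ repmgr -f /etc/repmgr.conf standby register --wait-sync=60</programlisting>
</para>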
</sect2>

<sect2 id="rempgr-standby-register-inactive-node" xreflabel="Registering an inactive node">
<title>Registering an inactive node</title>
<para>
Under some circumstances you may wish to register a standby which is not
yet running; this can be the case when using provisioning tools to create
a complex replication cluster. In this case, by using the <literal>-F/--force</literal>
option and providing the connection parameters to the primary server,
the standby can be registered.
</para>
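<para>
For example (a sketch; the host name and file path are illustrative),
connection parameters for the primary are supplied on the command line
together with <literal>--force</literal>:
<programlisting>
$ repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby register --force</programlisting>
</para>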
<para>
Similarly, with cascading replication it may be necessary to register
a standby whose upstream node has not yet been registered; in this case,
using <literal>-F/--force</literal> will result in the creation of an inactive placeholder
record for the upstream node, which will itself need to be registered later
with the <literal>-F/--force</literal> option.
</para>
<para>
When used with <command>repmgr standby register</command>, care should be taken that use of the
<literal>-F/--force</literal> option does not result in an incorrectly configured cluster.
</para>
</sect2>
</sect1>
</chapter>
@@ -39,4 +39,8 @@
<!ENTITY configuration SYSTEM "configuration.sgml">
<!ENTITY configuration-file SYSTEM "configuration-file.sgml">
<!ENTITY configuration-file-settings SYSTEM "configuration-file-settings.sgml">
<!ENTITY cloning-standbys SYSTEM "cloning-standbys.sgml">
<!ENTITY command-reference SYSTEM "command-reference.sgml">
<!ENTITY appendix-signatures SYSTEM "appendix-signatures.sgml">

<!ENTITY bookindex SYSTEM "bookindex.sgml">
@@ -119,10 +119,10 @@
</sect1>

<sect1 id="quickstart-repmgr-user-database">
<title>Create the repmgr user and database</title>
<para>
Create a dedicated PostgreSQL superuser account and a database for
the &repmgr; metadata, e.g.
</para>
<programlisting>
createuser -s repmgr
@@ -147,12 +147,24 @@
overridden by specifying a separate replication user when registering each node.
</para>
</note>

<tip>
<para>
&repmgr; will install the <literal>repmgr</literal> extension, which creates a
<literal>repmgr</literal> schema containing the &repmgr; metadata tables as
well as other functions and views. We also recommend that you set the
<literal>repmgr</literal> user's search path to include this schema name, e.g.
<programlisting>
ALTER USER repmgr SET search_path TO repmgr, "$user", public;</programlisting>
</para>
</tip>

</sect1>

<sect1 id="quickstart-authentication">
<title>Configuring authentication in pg_hba.conf</title>
<para>
Ensure the <literal>repmgr</literal> user has appropriate permissions in <filename>pg_hba.conf</filename> and
can connect in replication mode; <filename>pg_hba.conf</filename> should contain entries
similar to the following:
</para>
@@ -166,6 +178,7 @@
host repmgr repmgr 192.168.1.0/24 trust
</programlisting>
<para>
Note that these are simple settings for testing purposes.
Adjust according to your network environment and authentication requirements.
</para>
</sect1>
@@ -176,7 +189,7 @@
On the standby, do not create a PostgreSQL instance, but do ensure the destination
data directory (and any other directories which you want PostgreSQL to use)
exist and are owned by the <literal>postgres</literal> system user. Permissions
must be set to <literal>0700</literal> (<literal>drwx------</literal>).
</para>
<para>
Check the primary database is reachable from the standby using <application>psql</application>:
@@ -208,16 +221,226 @@
data_directory='/var/lib/postgresql/data'
</programlisting>

<para>
<filename>repmgr.conf</filename> should not be stored inside the PostgreSQL data directory,
as it could be overwritten when setting up or reinitialising the PostgreSQL
server. See sections on <xref linkend="configuration-file"> and <xref linkend="configuration-file-settings">
for further details about <filename>repmgr.conf</filename>.
</para>
<tip>
<simpara>
For Debian-based distributions we recommend explicitly setting
<literal>pg_bindir</literal> to the directory where <command>pg_ctl</command> and other binaries
not in the standard path are located. For PostgreSQL 9.6 this would be <filename>/usr/lib/postgresql/9.6/bin/</filename>.
</simpara>
</tip>
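<para>
For example (a sketch for a Debian-based PostgreSQL 9.6 installation, using
the path mentioned above), add the following to <filename>repmgr.conf</filename>:
<programlisting>
pg_bindir='/usr/lib/postgresql/9.6/bin/'</programlisting>
</para>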
<para>
See the file
<ulink url="https://raw.githubusercontent.com/2ndQuadrant/repmgr/master/repmgr.conf.sample">repmgr.conf.sample</ulink>
for details of all available configuration parameters.
</para>

</sect1>


<sect1 id="quickstart-primary-register">
<title>Register the primary server</title>
<para>
To enable &repmgr; to support a replication cluster, the primary node must
be registered with &repmgr;. This installs the <literal>repmgr</literal>
extension and metadata objects, and adds a metadata record for the primary server:
</para>

<programlisting>
$ repmgr -f /etc/repmgr.conf primary register
INFO: connecting to primary database...
NOTICE: attempting to install extension "repmgr"
NOTICE: "repmgr" extension successfully installed
NOTICE: primary node record (id: 1) registered</programlisting>

<para>
Verify the status of the cluster like this:
</para>
<programlisting>
$ repmgr -f /etc/repmgr.conf cluster show
 ID | Name  | Role    | Status    | Upstream | Connection string
----+-------+---------+-----------+----------+--------------------------------------------------------
 1  | node1 | primary | * running |          | host=node1 dbname=repmgr user=repmgr connect_timeout=2
</programlisting>
<para>
The record in the <literal>repmgr</literal> metadata table will look like this:
</para>
<programlisting>
repmgr=# SELECT * FROM repmgr.nodes;
-[ RECORD 1 ]----+-------------------------------------------------------
node_id          | 1
upstream_node_id |
active           | t
node_name        | node1
type             | primary
location         | default
priority         | 100
conninfo         | host=node1 dbname=repmgr user=repmgr connect_timeout=2
repluser         | repmgr
slot_name        |
config_file      | /etc/repmgr.conf</programlisting>
<para>
Each server in the replication cluster will have its own record. If <command>repmgrd</command>
is in use, the fields <literal>upstream_node_id</literal>, <literal>active</literal> and
<literal>type</literal> will be updated when the node's status or role changes.
</para>
</sect1>

<sect1 id="quickstart-standby-clone">
<title>Clone the standby server</title>
<para>
Create a <filename>repmgr.conf</filename> file on the standby server. It must contain at
least the same parameters as the primary's <filename>repmgr.conf</filename>, but with
the mandatory values <literal>node</literal>, <literal>node_name</literal>, <literal>conninfo</literal>
(and possibly <literal>data_directory</literal>) adjusted accordingly, e.g.:
</para>
<programlisting>
node=2
node_name=node2
conninfo='host=node2 user=repmgr dbname=repmgr connect_timeout=2'
data_directory='/var/lib/postgresql/data'
</programlisting>
<para>
Use the <literal>--dry-run</literal> option to check the standby can be cloned:
</para>
<programlisting>
$ repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --dry-run
NOTICE: using provided configuration file "/etc/repmgr.conf"
NOTICE: destination directory "/var/lib/postgresql/data" provided
INFO: connecting to source node
NOTICE: checking for available walsenders on source node (2 required)
INFO: sufficient walsenders available on source node (2 required)
NOTICE: standby will attach to upstream node 1
HINT: consider using the -c/--fast-checkpoint option
INFO: all prerequisites for "standby clone" are met</programlisting>
<para>
If no problems are reported, the standby can then be cloned with:
</para>
<programlisting>
$ repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone

NOTICE: using configuration file "/etc/repmgr.conf"
NOTICE: destination directory "/var/lib/postgresql/data" provided
INFO: connecting to source node
NOTICE: checking for available walsenders on source node (2 required)
INFO: sufficient walsenders available on source node (2 required)
INFO: creating directory "/var/lib/postgresql/data"...
NOTICE: starting backup (using pg_basebackup)...
HINT: this may take some time; consider using the -c/--fast-checkpoint option
INFO: executing:
pg_basebackup -l "repmgr base backup" -D /var/lib/postgresql/data -h node1 -U repmgr -X stream
NOTICE: standby clone (using pg_basebackup) complete
NOTICE: you can now start your PostgreSQL server
HINT: for example: pg_ctl -D /var/lib/postgresql/data start
</programlisting>
<para>
This has cloned the PostgreSQL data directory files from the primary <literal>node1</literal>
using PostgreSQL's <command>pg_basebackup</command> utility. A <filename>recovery.conf</filename>
file containing the correct parameters to start streaming from this primary server will be created
automatically.
</para>
<note>
<simpara>
By default, any configuration files in the primary's data directory will be
copied to the standby. Typically these will be <filename>postgresql.conf</filename>,
<filename>postgresql.auto.conf</filename>, <filename>pg_hba.conf</filename> and
<filename>pg_ident.conf</filename>. These may require modification before the standby
is started.
</simpara>
</note>
<para>
Make any adjustments to the standby's PostgreSQL configuration files now,
then start the server.
</para>
<para>
For more details on <command>repmgr standby clone</command>, see the
<link linkend="repmgr-standby-clone">command reference</link>.
A more detailed overview of cloning options is available in the
<link linkend="cloning-standbys">administration manual</link>.
</para>
</sect1>

<sect1 id="quickstart-verify-replication">
<title>Verify replication is functioning</title>
<para>
Connect to the primary server and execute:
<programlisting>
repmgr=# SELECT * FROM pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 19111
usesysid         | 16384
usename          | repmgr
application_name | node2
client_addr      | 192.168.1.12
client_hostname  |
client_port      | 50378
backend_start    | 2017-08-28 15:14:19.851581+09
backend_xmin     |
state            | streaming
sent_location    | 0/7000318
write_location   | 0/7000318
flush_location   | 0/7000318
replay_location  | 0/7000318
sync_priority    | 0
sync_state       | async</programlisting>
This shows that the previously cloned standby (<literal>node2</literal> shown in the field
<literal>application_name</literal>) has connected to the primary from IP address
<literal>192.168.1.12</literal>.
</para>
<para>
From PostgreSQL 9.6 you can also use the view
<ulink url="https://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-WAL-RECEIVER-VIEW">
<literal>pg_stat_wal_receiver</literal></ulink> to check the replication status from the standby.

<programlisting>
repmgr=# SELECT * FROM pg_stat_wal_receiver;
Expanded display is on.
-[ RECORD 1 ]---------+--------------------------------------------------------------------------------
pid                   | 18236
status                | streaming
receive_start_lsn     | 0/3000000
receive_start_tli     | 1
received_lsn          | 0/7000538
received_tli          | 1
last_msg_send_time    | 2017-08-28 15:21:26.465728+09
last_msg_receipt_time | 2017-08-28 15:21:26.465774+09
latest_end_lsn        | 0/7000538
latest_end_time       | 2017-08-28 15:20:56.418735+09
slot_name             |
conninfo              | user=repmgr dbname=replication host=node1 application_name=node2
</programlisting>
Note that the <varname>conninfo</varname> value is that generated in <filename>recovery.conf</filename>
and will differ slightly from the primary's <varname>conninfo</varname> as set in
<filename>repmgr.conf</filename>; among other things, it will contain the connecting
node's name as <varname>application_name</varname>.
</para>
</sect1>

<sect1 id="quickstart-register-standby">
<title>Register the standby</title>
<para>
Register the standby server with:
<programlisting>
$ repmgr -f /etc/repmgr.conf standby register
NOTICE: standby node "node2" (ID: 2) successfully registered</programlisting>
</para>
<para>
Check the node is registered by executing <command>repmgr cluster show</command> on the standby:
<programlisting>
$ repmgr -f /etc/repmgr.conf cluster show
 ID | Name  | Role    | Status    | Upstream | Location | Connection string
----+-------+---------+-----------+----------+----------+--------------------------------------
 1  | node1 | primary | * running |          | default  | host=node1 dbname=repmgr user=repmgr
 2  | node2 | standby |   running | node1    | default  | host=node2 dbname=repmgr user=repmgr</programlisting>
</para>
<para>
Both nodes are now registered with &repmgr; and the records have been copied to the standby server.
</para>
</sect1>

</chapter>
@@ -23,10 +23,9 @@

<abstract>
<para>
This is the official documentation of &repmgr; &repmgrversion; for
use with PostgreSQL 9.3 - PostgreSQL 10.
It describes the functionality supported by the current version of &repmgr;.
</para>

<para>
@@ -69,8 +68,14 @@
<title>repmgr administration manual</title>

&configuration;
&cloning-standbys;
&command-reference;
</part>


&appendix-signatures;

<![%include-index;[&bookindex;]]>
<![%include-xslt-index;[<index id="bookindex"></index>]]>

</book>