repmgr standby clone

clone a PostgreSQL standby node from another PostgreSQL node

Description

repmgr standby clone clones a PostgreSQL node from another PostgreSQL node, typically the primary, but optionally from any other node in the cluster or from Barman. It creates the replication configuration required to attach the cloned node to the primary node (or to another standby, if cascading replication is in use).

repmgr standby clone does not start the standby. After cloning a standby, the command repmgr standby register must be executed to notify repmgr of its existence.

Handling configuration files

Note that by default, all configuration files in the source node's data directory will be copied to the cloned node. Typically these will be postgresql.conf, postgresql.auto.conf, pg_hba.conf and pg_ident.conf. These may require modification before the standby is started.

In some cases (e.g. on Debian or Ubuntu Linux installations), PostgreSQL's configuration files are located outside of the data directory and will not be copied by default. repmgr can copy these files, either to the same location on the standby server (provided appropriate directory and file permissions are available), or into the standby's data directory. This requires passwordless SSH access to the primary server.

Add the --copy-external-config-files option to the repmgr standby clone command; by default files will be copied to the same path as on the upstream server. Note that the user executing repmgr must have write access to those directories. To have the configuration files placed in the standby's data directory, specify --copy-external-config-files=pgdata, but note that any include directives in the copied files may need to be updated.

When executing repmgr standby clone with the --copy-external-config-files and --dry-run options, repmgr will check the SSH connection to the source node, but will not verify whether the files can actually be copied.
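As a sketch of the steps above (hostnames, usernames and paths are placeholders, not defaults), an initial clone that also copies externally located configuration files might look like:

```shell
# Run as the postgres system user on the new standby.
# First a dry run: checks SSH and replication connections without copying anything.
repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf \
       standby clone --copy-external-config-files --dry-run

# If the checks pass, perform the actual clone, start the standby,
# then register it so repmgr knows about the new node.
repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf \
       standby clone --copy-external-config-files
pg_ctl -D /var/lib/postgresql/data start
repmgr -f /etc/repmgr.conf standby register
```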
During the actual clone operation, a check will be made before the database itself is cloned to determine whether the files can actually be copied; if any problems are encountered, the clone operation will be aborted, enabling the user to fix any issues before retrying.

For reliable configuration file management we recommend using a configuration management tool such as Ansible, Chef, Puppet or Salt.

Customising replication configuration

By default, repmgr will create a minimal replication configuration containing the following parameters:

- primary_conninfo
- primary_slot_name (if replication slots are in use)

For PostgreSQL 11 and earlier, these parameters will also be set:

- standby_mode (always 'on')
- recovery_target_timeline (always 'latest')

The following additional parameters can be specified in repmgr.conf for inclusion in the replication configuration:

- restore_command
- archive_cleanup_command
- recovery_min_apply_delay

We recommend using Barman to manage WAL file archiving. For more details on combining repmgr and Barman, in particular using restore_command to configure Barman as a backup source of WAL files, see the repmgr documentation on cloning from Barman.

Managing WAL during the cloning process

When initially cloning a standby, you will need to ensure that all required WAL files remain available while the cloning is taking place. To ensure this happens when using the default pg_basebackup method, repmgr will set pg_basebackup's --wal-method parameter to stream, which ensures all WAL files generated during the cloning process are streamed in parallel with the main backup. Note that this requires two replication connections to be available (repmgr will verify sufficient connections are available before attempting to clone, and this can be checked before performing the clone using the --dry-run option).
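As an illustrative sketch, the additional replication parameters listed above could be set in repmgr.conf like this (the Barman server name, backup name and paths are assumptions, not defaults):

```ini
# repmgr.conf fragment (hypothetical values)

# Fetch archived WAL from Barman when it is not available via streaming;
# "barman-srv" and "pg1" are placeholder Barman host and server names.
restore_command='barman-wal-restore barman-srv pg1 %f %p'

# Remove archived WAL no longer required by any standby
# (only relevant when a local WAL archive is in use).
archive_cleanup_command='pg_archivecleanup /path/to/archive %r'

# Delay application of received WAL by 5 minutes on this standby.
recovery_min_apply_delay='5min'
```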
To override this behaviour, in repmgr.conf set pg_basebackup's --wal-method parameter to fetch:

  pg_basebackup_options='--wal-method=fetch'

and ensure that wal_keep_segments (PostgreSQL 13 and later: wal_keep_size) is set to an appropriately high value. Note however that this is not a particularly reliable way of ensuring sufficient WAL is retained, and is not recommended. See the pg_basebackup documentation for details.

If using PostgreSQL 9.6 or earlier, replace --wal-method with --xlog-method.

Placing WAL files into a different directory

To ensure that WAL files are placed in a directory outside of the main data directory (e.g. to keep them on a separate disk for performance reasons), specify the location with --waldir (PostgreSQL 9.6 and earlier: --xlogdir) in the repmgr.conf parameter pg_basebackup_options, e.g.:

  pg_basebackup_options='--waldir=/path/to/wal-directory'

This setting will also be honored by repmgr when cloning from Barman (repmgr 5.2 and later).

Using a standby cloned by another method

repmgr supports standbys cloned by another method (e.g. using Barman's barman recover command). To integrate the standby as a repmgr node, once the standby has been cloned, ensure the repmgr.conf file is created for the node, and that the node has been registered using repmgr standby register.

To register a standby which is not running, execute repmgr standby register --force and provide the connection details for the primary. See the documentation for repmgr standby register for more details.

Then execute the command repmgr standby clone --replication-conf-only. This will create the recovery.conf file needed to attach the node to its upstream (in PostgreSQL 12 and later, it will append the replication configuration to postgresql.auto.conf), and will also create a replication slot on the upstream node if required. The upstream node must be running so the correct replication configuration can be obtained.
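As a sketch of the workflow just described (hostnames and paths are placeholders), integrating a standby cloned with barman recover might look like:

```shell
# On the standby cloned via "barman recover" (hypothetical hosts/paths).

# 1. Register the not-yet-running standby, providing primary connection details:
repmgr -f /etc/repmgr.conf -h primary-node -U repmgr -d repmgr \
       standby register --force

# 2. Preview the replication configuration that would be written:
repmgr -f /etc/repmgr.conf standby clone --replication-conf-only --dry-run

# 3. Write the replication configuration and create a replication slot if required
#    (the upstream node must be running):
repmgr -f /etc/repmgr.conf standby clone --replication-conf-only

# 4. PostgreSQL 12 and earlier: restart the instance;
#    PostgreSQL 13 and later: a configuration reload suffices.
```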
If the standby is running, the replication configuration will not be written unless the -F/--force option is provided.

Execute repmgr standby clone --replication-conf-only --dry-run to check the prerequisites for creating the recovery configuration, and to display the configuration changes which would be made without actually making any changes.

In PostgreSQL 13 and later, the PostgreSQL configuration must be reloaded for replication configuration changes to take effect. In PostgreSQL 12 and earlier, the PostgreSQL instance must be restarted for replication configuration changes to take effect.

Options

-d, --dbname=conninfo
  Connection string of the upstream node to use for cloning.

--dry-run
  Check prerequisites but don't actually clone the standby. If --replication-conf-only is also specified, the contents of the generated recovery configuration will be displayed but not written.

-c, --fast-checkpoint
  Force fast checkpoint (not effective when cloning from Barman).

--copy-external-config-files
  Copy configuration files located outside the data directory on the source node to the same path on the standby (default) or to the PostgreSQL data directory. Note that to be able to use this option, the repmgr user must be a superuser or member of the pg_read_all_settings predefined role. If this is not the case, provide a valid superuser with the -S/--superuser option.

--no-upstream-connection
  When using Barman, do not connect to the upstream node.

--recovery-min-apply-delay
  Set the PostgreSQL configuration parameter recovery_min_apply_delay to the provided value. This overrides any value provided via repmgr.conf. For more details on this parameter, see the PostgreSQL documentation on recovery_min_apply_delay.

-R, --remote-user
  Remote system username for SSH operations (default: current local system username).

--replication-conf-only
  Create recovery configuration for a previously cloned instance. In PostgreSQL 12 and later, the replication configuration will be written to postgresql.auto.conf. In PostgreSQL 11 and earlier, the replication configuration will be written to recovery.conf.

--replication-user
  User to make replication connections with (optional, not usually required).

-S, --superuser
  The name of a valid PostgreSQL superuser can be provided with this option.
This is only required if the --copy-external-config-files option was provided and the repmgr user is not a superuser or member of the pg_read_all_settings predefined role.

--upstream-conninfo
  primary_conninfo value to include in the recovery configuration when the intended upstream server does not yet exist. Note that repmgr may modify the provided value, in particular to set the correct application_name.

--upstream-node-id
  ID of the upstream node to replicate from (optional; defaults to the primary node).

--verify-backup
  Verify the cloned node using the pg_verifybackup utility (PostgreSQL 13 and later). This option can currently only be used when cloning directly from an upstream node.

--without-barman
  Do not use Barman even if configured.

Event notifications

A standby_clone event notification will be generated.

See also

See the repmgr documentation on cloning standbys for details about various aspects of cloning.
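As a sketch (database name and connection details are assumptions), the generated standby_clone event can be inspected in the repmgr metadata after cloning:

```shell
# Query the repmgr events table for clone events
# (assumes the repmgr metadata database is named "repmgr").
psql -d repmgr -c "SELECT node_id, event, successful, event_timestamp
                   FROM repmgr.events
                   WHERE event = 'standby_clone'"
```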