repmgr: Replication Manager for PostgreSQL
==========================================

`repmgr` is a suite of open-source tools to manage replication and failover
within a cluster of PostgreSQL servers. It enhances PostgreSQL's built-in
replication capabilities with utilities to set up standby servers, monitor
replication, and perform administrative tasks such as failover or switchover
operations.

The current `repmgr` version, 3.1.5, supports all PostgreSQL versions from
9.3, including the upcoming 9.6.

Overview
--------

The `repmgr` suite provides two main tools:

- `repmgr` - a command-line tool used to perform administrative tasks such as:
    - setting up standby servers
    - promoting a standby server to master
    - switching over master and standby servers
    - displaying the status of servers in the replication cluster

- `repmgrd` - a daemon which actively monitors servers in a replication
  cluster and performs the following tasks:
    - monitoring and recording replication performance
    - performing failover by detecting failure of the master and promoting
      the most suitable standby server
    - providing notifications about events in the cluster to a user-defined
      script which can perform tasks such as sending alerts by email

`repmgr` supports and enhances PostgreSQL's built-in streaming replication,
which provides a single read/write master server and one or more read-only
standbys containing near-real-time copies of the master server's database.

For a multi-master replication solution, please see 2ndQuadrant's BDR
(bi-directional replication) extension:

    http://2ndquadrant.com/en-us/resources/bdr/

For selective replication, e.g. of individual tables or databases from one
server to another, please see 2ndQuadrant's pglogical extension:

    http://2ndquadrant.com/en-us/resources/pglogical/

### Concepts

This guide assumes that you are familiar with PostgreSQL administration and
streaming replication concepts. For further details on streaming replication,
see the PostgreSQL documentation:

    https://www.postgresql.org/docs/current/interactive/warm-standby.html#STREAMING-REPLICATION

The following terms are used throughout the `repmgr` documentation.

- `replication cluster`

In the `repmgr` documentation, "replication cluster" refers to the network
of PostgreSQL servers connected by streaming replication.

- `node`

A `node` is a server within a replication cluster.

- `upstream node`

This is the node a standby server connects to: either the master server or,
in the case of cascading replication, another standby.

- `failover`

This is the action which occurs if a master server fails and a suitable
standby is promoted as the new master. The `repmgrd` daemon supports
automatic failover to minimise downtime.

- `switchover`

In certain circumstances, such as hardware or operating system maintenance,
it's necessary to take a master server offline; in this case a controlled
switchover is necessary, whereby a suitable standby is promoted and the
existing master removed from the replication cluster in a controlled manner.
The `repmgr` command line client provides this functionality.

- `witness server`

`repmgr` provides functionality to set up a so-called "witness server" to
assist in determining a new master server in a failover situation with more
than one standby. The witness server itself is not part of the replication
cluster, although it does contain a copy of the repmgr metadata schema
(see below).

The purpose of a witness server is to provide a "casting vote" where servers
in the replication cluster are split over more than one location. In the
event of a loss of connectivity between locations, the presence or absence
of the witness server will decide whether a server at that location is
promoted to master; this is to prevent a "split-brain" situation where an
isolated location interprets a network outage as a failure of the (remote)
master and promotes a (local) standby.

A witness server only needs to be created if `repmgrd` is in use.
### repmgr user and metadata

In order to effectively manage a replication cluster, `repmgr` needs to
store information about the servers in the cluster in a dedicated database
schema. This schema is automatically created during the first step in
initialising a `repmgr`-controlled cluster (`repmgr master register`) and
contains the following objects:

tables:

- `repl_events`: records events of interest
- `repl_nodes`: connection and status information for each server in the
  replication cluster
- `repl_monitor`: historical standby monitoring information written by
  `repmgrd`

views:

- `repl_show_nodes`: based on the table `repl_nodes`, additionally showing
  the name of the server's upstream node
- `repl_status`: when `repmgrd`'s monitoring is enabled, shows current
  monitoring status for each node

The `repmgr` metadata schema can be stored in an existing database or in its
own dedicated database. A dedicated database superuser is required to own
the meta-database as well as carry out administrative actions.
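As an illustration of how this metadata can be used, recent cluster events
can be queried directly from the `repl_events` table. A minimal sketch,
assuming a cluster named `test` (which, as described later in this document,
results in the schema name `repmgr_test`); check the table definition on
your installation for the exact column names:

    -- show the five most recent events recorded by repmgr
    SELECT node_id, event, successful, event_timestamp
      FROM repmgr_test.repl_events
     ORDER BY event_timestamp DESC
     LIMIT 5;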
Installation
------------

### System requirements

`repmgr` is developed and tested on Linux and OS X, but should work on any
UNIX-like system supported by PostgreSQL itself.

Current versions of `repmgr` support PostgreSQL from version 9.3. If you are
interested in using `repmgr` on earlier versions of PostgreSQL, you can
download version 2.1, which supports PostgreSQL from version 9.1.

All servers in the replication cluster must be running the same major
version of PostgreSQL, and we recommend that they also run the same minor
version.

The `repmgr` tools must be installed on each server in the replication
cluster.

A dedicated system user for `repmgr` is *not* required; as many `repmgr` and
`repmgrd` actions require direct access to the PostgreSQL data directory,
these commands should be executed by the `postgres` user.

Passwordless `ssh` connectivity between all servers in the replication
cluster is not required, but is necessary in the following cases:

* if you need `repmgr` to copy configuration files from outside the
  PostgreSQL data directory
* when using `rsync` to clone a standby
* to perform switchover operations
* when executing `repmgr cluster matrix` and `repmgr cluster crosscheck`

In these cases `rsync` is required on all servers too.

* * *

> *TIP*: We recommend using a session multiplexer utility such as `screen` or
> `tmux` when performing long-running actions (such as cloning a database)
> on a remote server - this will ensure the `repmgr` action won't be
> prematurely terminated if your `ssh` session to the server is interrupted
> or closed.

* * *

### Packages

We recommend installing `repmgr` using the available packages for your
system.

- RedHat/CentOS: RPM packages for `repmgr` are available via Yum through
  the PostgreSQL Global Development Group RPM repository
  ( http://yum.postgresql.org/ ). Follow the instructions for your
  distribution (RedHat, CentOS, Fedora, etc.) and architecture as detailed
  at yum.postgresql.org.

  2ndQuadrant also provides its own RPM packages, which are made available
  at the same time as each `repmgr` release, as it can take some days for
  them to become available via the main PGDG repository. See here for
  details: http://repmgr.org/yum-repository.html

- Debian/Ubuntu: the most recent `repmgr` packages are available from the
  PostgreSQL Community APT repository ( http://apt.postgresql.org/ ).
  Instructions can be found in the APT section of the PostgreSQL Wiki
  ( https://wiki.postgresql.org/wiki/Apt ).

See `PACKAGES.md` for details on building .deb and .rpm packages from the
`repmgr` source code.

### Source installation

`repmgr` source code can be obtained directly from the project GitHub
repository:

    git clone https://github.com/2ndQuadrant/repmgr

Release tarballs are also available:

    https://github.com/2ndQuadrant/repmgr/releases
    http://repmgr.org/downloads.php

`repmgr` is compiled in the same way as a PostgreSQL extension using the
PGXS infrastructure, e.g.:

    sudo make USE_PGXS=1 install

`repmgr` can be built from source in any environment suitable for building
PostgreSQL itself.

### Configuration

`repmgr` and `repmgrd` use a common configuration file, by default called
`repmgr.conf` (although any name can be used if explicitly specified). At
the very least, `repmgr.conf` must contain the connection parameters for
the local `repmgr` database; see the section `repmgr configuration file`
below for more details.

The configuration file will be searched for in the following locations:

- a configuration file specified by the `-f/--config-file` command line
  option
- `repmgr.conf` in the local directory
- `/etc/repmgr.conf`
- the directory reported by `pg_config --sysconfdir`

Note that if a file is explicitly specified with `-f/--config-file`, an
error will be raised if it is not found or not readable, and no attempt will
be made to check default locations; this is to prevent `repmgr` reading the
wrong file.

For a full list of annotated configuration items, see the file
`repmgr.conf.sample`.

The following parameters in the configuration file can be overridden with
command line options:

- `-L/--log-level`
- `-b/--pg_bindir`

### Command line options and environment variables

For some commands, e.g. `repmgr standby clone`, database connection
parameters need to be provided. As with other PostgreSQL utilities, the
following standard parameters can be used:

- `-d/--dbname=DBNAME`
- `-h/--host=HOSTNAME`
- `-p/--port=PORT`
- `-U/--username=USERNAME`

If `-d/--dbname` contains an `=` sign or starts with a valid URI prefix
(`postgresql://` or `postgres://`), it is treated as a conninfo string. See
the PostgreSQL documentation for further details:

    https://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING

Note that if a `conninfo` string is provided, values set in it will override
any provided as individual parameters. For example, with
`-d 'host=foo' --host bar`, `foo` will be chosen over `bar`.

Like other PostgreSQL utilities, `repmgr` will default to any values set in
environment variables if explicit command line parameters are not provided.
See the PostgreSQL documentation for further details:

    https://www.postgresql.org/docs/current/static/libpq-envars.html
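For example, the following two invocations supply the same connection
details, first as individual parameters and then as a single conninfo
string; the hostname and paths here are purely illustrative:

    # individual connection parameters
    repmgr -h node1 -p 5432 -U repmgr -d repmgr -D /path/to/data standby clone

    # equivalent conninfo string passed via -d/--dbname
    repmgr -d 'host=node1 port=5432 user=repmgr dbname=repmgr' -D /path/to/data standby clone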
Setting up a simple replication cluster with repmgr
---------------------------------------------------

The following section describes how to set up a basic replication cluster
with a master and a standby server using the `repmgr` command line tool. It
is assumed that PostgreSQL is installed on both servers in the cluster, that
`rsync` is available, and that passwordless SSH connections are possible
between both servers.

* * *

> *TIP*: for testing `repmgr`, it's possible to use multiple PostgreSQL
> instances running on different ports on the same computer, with
> passwordless SSH access to `localhost` enabled.

* * *

### PostgreSQL configuration

On the master server, a PostgreSQL instance must be initialised and running.
The following replication settings may need to be adjusted:

    # Enable replication connections; set this figure to at least one more
    # than the number of standbys which will connect to this server
    # (note that repmgr will execute `pg_basebackup` in WAL streaming mode,
    # which requires two free WAL senders)

    max_wal_senders = 10

    # Ensure WAL files contain enough information to enable read-only queries
    # on the standby

    wal_level = 'hot_standby'

    # Enable read-only queries on a standby
    # (Note: this will be ignored on a master but we recommend including
    # it anyway)

    hot_standby = on

    # Enable WAL file archiving

    archive_mode = on

    # Set archive command to a script or application that will safely store
    # your WALs in a secure place. /bin/true is an example of a command that
    # ignores archiving. Use something more sensible.

    archive_command = '/bin/true'

    # If cloning using rsync, or you have configured `pg_basebackup_options`
    # in `repmgr.conf` to include the setting `--xlog-method=fetch`, *and*
    # you have not set `restore_command` in `repmgr.conf` to fetch WAL files
    # from another source such as Barman, you'll need to set `wal_keep_segments`
    # to a high enough value to ensure that all WAL files generated while
    # the standby is being cloned are retained until the standby starts up.
    #
    # wal_keep_segments = 5000

* * *

> *TIP*: rather than editing these settings in the default `postgresql.conf`
> file, create a separate file such as `postgresql.replication.conf` and
> include it from the end of the main configuration file with:
> `include 'postgresql.replication.conf'`

* * *

Create a dedicated PostgreSQL superuser account and a database for the
`repmgr` metadata, e.g.

    createuser -s repmgr
    createdb repmgr -O repmgr

For the examples in this document, the name `repmgr` will be used for both
user and database, but any names can be used.

Ensure the `repmgr` user has appropriate permissions in `pg_hba.conf` and
can connect in replication mode; `pg_hba.conf` should contain entries
similar to the following:

    local   replication   repmgr                            trust
    host    replication   repmgr      127.0.0.1/32          trust
    host    replication   repmgr      192.168.1.0/24        trust

    local   repmgr        repmgr                            trust
    host    repmgr        repmgr      127.0.0.1/32          trust
    host    repmgr        repmgr      192.168.1.0/24        trust

Adjust according to your network environment and authentication
requirements.
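Before continuing, it can be worth verifying that these entries behave as
intended. A quick sketch, assuming the master is reachable as
`repmgr_node1` (the hostname used in the examples below); the second
command requests a replication-protocol connection and therefore exercises
the `replication` entries specifically:

    # test a normal connection to the repmgr database
    psql 'host=repmgr_node1 user=repmgr dbname=repmgr' -c 'SELECT 1'

    # test a replication connection
    psql 'host=repmgr_node1 user=repmgr replication=1' -c 'IDENTIFY_SYSTEM'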
On the standby, do not create a PostgreSQL instance, but do ensure an empty
directory is available for the `postgres` system user to create a data
directory.

### repmgr configuration file

Create a `repmgr.conf` file on the master server. The file must contain at
least the following parameters:

    cluster=test
    node=1
    node_name=node1
    conninfo='host=repmgr_node1 user=repmgr dbname=repmgr'

- `cluster`: an arbitrary name for the replication cluster; this must be
  identical on all nodes
- `node`: a unique integer identifying the node
- `node_name`: a unique string identifying the node; we recommend a name
  specific to the server (e.g. 'server_1'); avoid names indicating the
  current replication role like 'master' or 'standby' as the server's role
  could change.
- `conninfo`: a valid connection string for the `repmgr` database on the
  *current* server. (On the standby, the database will not yet exist, but
  `repmgr` needs to know the connection details to complete the setup
  process).

`repmgr.conf` should not be stored inside the PostgreSQL data directory, as
it could be overwritten when setting up or reinitialising the PostgreSQL
server. See the section `Configuration` above for further details about
`repmgr.conf`.

`repmgr` will create a schema named after the cluster and prefixed with
`repmgr_`, e.g. `repmgr_test`; we also recommend that you set the `repmgr`
user's search path to include this schema name, e.g.

    ALTER USER repmgr SET search_path TO repmgr_test, "$user", public;

* * *

> *TIP*: for Debian-based distributions we recommend explicitly setting
> `pg_bindir` to the directory where `pg_ctl` and other binaries not in
> the standard path are located. For PostgreSQL 9.5 this would be
> `/usr/lib/postgresql/9.5/bin/`.

* * *

### Initialise the master server

To enable `repmgr` to support a replication cluster, the master node must
be registered with `repmgr`, which creates the `repmgr` database and adds a
metadata record for the server:

    $ repmgr -f repmgr.conf master register
    [2016-01-07 16:56:46] [NOTICE] master node correctly registered for cluster test with id 1 (conninfo: host=repmgr_node1 user=repmgr dbname=repmgr)

The metadata record looks like this:

    repmgr=# SELECT * FROM repmgr_test.repl_nodes;
     id |  type  | upstream_node_id | cluster | name  |                  conninfo                   | slot_name | priority | active
    ----+--------+------------------+---------+-------+---------------------------------------------+-----------+----------+--------
      1 | master |                  | test    | node1 | host=repmgr_node1 dbname=repmgr user=repmgr |           |      100 | t
    (1 row)

Each server in the replication cluster will have its own record, which will
be updated when its status or role changes.

### Clone the standby server

Create a `repmgr.conf` file on the standby server. It must contain at least
the same parameters as the master's `repmgr.conf`, but with the values
`node`, `node_name` and `conninfo` adjusted accordingly, e.g.:

    cluster=test
    node=2
    node_name=node2
    conninfo='host=repmgr_node2 user=repmgr dbname=repmgr'

Clone the standby with:

    $ repmgr -h repmgr_node1 -U repmgr -d repmgr -D /path/to/node2/data/ -f /etc/repmgr.conf standby clone
    [2016-01-07 17:21:26] [NOTICE] destination directory '/path/to/node2/data/' provided
    [2016-01-07 17:21:26] [NOTICE] starting backup...
    [2016-01-07 17:21:26] [HINT] this may take some time; consider using the -c/--fast-checkpoint option
    NOTICE:  pg_stop_backup complete, all required WAL segments have been archived
    [2016-01-07 17:21:28] [NOTICE] standby clone (using pg_basebackup) complete
    [2016-01-07 17:21:28] [NOTICE] you can now start your PostgreSQL server
    [2016-01-07 17:21:28] [HINT] for example : pg_ctl -D /path/to/node2/data/ start

This will clone the PostgreSQL data directory files from the master at
`repmgr_node1` using PostgreSQL's `pg_basebackup` utility. A `recovery.conf`
file containing the correct parameters to start streaming from this master
server will be created automatically.
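The generated `recovery.conf` will contain something like the following
sketch; the exact contents depend on your configuration, and (as noted
below) `application_name` is taken from `node_name` when a `repmgr.conf`
file is provided:

    standby_mode = 'on'
    primary_conninfo = 'host=repmgr_node1 port=5432 user=repmgr application_name=node2'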
Note that by default, any configuration files in the master's data directory
will be copied to the standby. Typically these will be `postgresql.conf`,
`postgresql.auto.conf`, `pg_hba.conf` and `pg_ident.conf`. These may require
modification before the standby is started so that it functions as desired.

In some cases (e.g. on Debian or Ubuntu Linux installations), PostgreSQL's
configuration files are located outside of the data directory and will not
be copied by default. `repmgr` can copy these files, either to the same
location on the standby server (provided appropriate directory and file
permissions are available), or into the standby's data directory. This
requires passwordless SSH access to the master server. Add the option
`--copy-external-config-files` to the `repmgr standby clone` command; by
default files will be copied to the same path as on the upstream server. To
have them placed in the standby's data directory, specify
`--copy-external-config-files=pgdata`, but note that any include directives
in the copied files may need to be updated. (An example invocation is shown
after the note below.)

*Caveat*: when copying external configuration files, `repmgr` will only be
able to detect files which contain active settings. If a file is referenced
by an include directive but is empty, only contains comments or contains
settings which have not been activated, the file will not be copied.

* * *

> *TIP*: for reliable configuration file management we recommend using a
> configuration management tool such as Ansible, Chef, Puppet or Salt.

* * *

Be aware that when initially cloning a standby, you will need to ensure that
all required WAL files remain available while the cloning is taking place.
To ensure this happens when using the default `pg_basebackup` method,
`repmgr` will set `pg_basebackup`'s `--xlog-method` parameter to `stream`,
which will ensure all WAL files generated during the cloning process are
streamed in parallel with the main backup. Note that this requires two
replication connections to be available.

To override this behaviour, in `repmgr.conf` set `pg_basebackup`'s
`--xlog-method` parameter to `fetch`:

    pg_basebackup_options='--xlog-method=fetch'

and ensure that `wal_keep_segments` is set to an appropriately high value.
See the `pg_basebackup` documentation for details:

    https://www.postgresql.org/docs/current/static/app-pgbasebackup.html

Make any adjustments to the standby's PostgreSQL configuration files now,
then start the server.

* * *

> *NOTE*: `repmgr standby clone` does not require `repmgr.conf`, however we
> recommend providing it, as `repmgr` will set the `application_name`
> parameter in `recovery.conf` to the value provided in `node_name`, making
> it easier to identify the node in `pg_stat_replication`. It's also
> possible to provide some advanced options for controlling the standby
> cloning process; see the next section for details.

* * *
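Putting this together: a clone which also copies externally-located
configuration files into the standby's data directory, followed by starting
the server, might look like the following (hostnames and paths are those
used in the earlier example; adjust for your environment):

    repmgr -h repmgr_node1 -U repmgr -d repmgr -D /path/to/node2/data/ \
        -f /etc/repmgr.conf --copy-external-config-files=pgdata standby clone

    pg_ctl -D /path/to/node2/data/ start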
### Verify replication is functioning

Connect to the master server and execute:

    repmgr=# SELECT * FROM pg_stat_replication;
    -[ RECORD 1 ]----+------------------------------
    pid              | 7704
    usesysid         | 16384
    usename          | repmgr
    application_name | node2
    client_addr      | 192.168.1.2
    client_hostname  |
    client_port      | 46196
    backend_start    | 2016-01-07 17:32:58.322373+09
    backend_xmin     |
    state            | streaming
    sent_location    | 0/3000220
    write_location   | 0/3000220
    flush_location   | 0/3000220
    replay_location  | 0/3000220
    sync_priority    | 0
    sync_state       | async

### Register the standby

Register the standby server with:

    repmgr -f /etc/repmgr.conf standby register
    [2016-01-08 11:13:16] [NOTICE] standby node correctly registered for cluster test with id 2 (conninfo: host=repmgr_node2 user=repmgr dbname=repmgr)

Connect to the standby server's `repmgr` database and check the `repl_nodes`
table:

    repmgr=# SELECT * FROM repmgr_test.repl_nodes ORDER BY id;
     id |  type   | upstream_node_id | cluster | name  |                  conninfo                   | slot_name | priority | active
    ----+---------+------------------+---------+-------+---------------------------------------------+-----------+----------+--------
      1 | master  |                  | test    | node1 | host=repmgr_node1 dbname=repmgr user=repmgr |           |      100 | t
      2 | standby |                1 | test    | node2 | host=repmgr_node2 dbname=repmgr user=repmgr |           |      100 | t
    (2 rows)

The standby server now has a copy of the records for all servers in the
replication cluster. Note that the relationship between master and standby
is explicitly defined via the `upstream_node_id` value, which shows here
that the standby's upstream server is the replication cluster master. While
of limited use in a simple master/standby replication cluster, this
information is required to effectively manage cascading replication (see
below).

* * *

> *TIP*: depending on your environment and workload, it may take some time
> for the standby's node record to propagate from the master to the standby.
> Some actions (such as starting `repmgrd`) require that the standby's node
> record is present and up-to-date to function correctly - by providing the
> option `--wait-sync` to the `repmgr standby register` command, `repmgr`
> will wait until the record is synchronised before exiting. An optional
> timeout (in seconds) can be added to this option (e.g. `--wait-sync=60`).

* * *
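For example, to register the standby and wait up to 60 seconds for its node
record to synchronise before exiting (the timeout value is illustrative):

    repmgr -f /etc/repmgr.conf standby register --wait-sync=60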
### Using Barman to clone a standby

`repmgr standby clone` also supports Barman, the Backup and Replication
manager ( http://www.pgbarman.org/ ), as a provider of both base backups and
WAL files.

Barman support provides the following advantages:

- the primary node does not need to perform a new backup every time a new
  standby is cloned;
- a standby node can be disconnected for longer periods without losing the
  ability to catch up, and without causing accumulation of WAL files on the
  primary node;
- therefore, `repmgr` does not need to use replication slots, and the
  primary node does not need to set `wal_keep_segments`.

> *NOTE*: In view of the above, Barman support is incompatible with the
> `use_replication_slots` setting in `repmgr.conf`.

In order to enable Barman support for `repmgr standby clone`, you must
ensure that:

- the name of the server configured in Barman is equal to the
  `cluster_name` setting in `repmgr.conf`;
- the `barman_server` setting in `repmgr.conf` is set to the SSH hostname
  of the Barman server;
- the `restore_command` setting in `repmgr.conf` is configured to use a
  copy of the `barman-wal-restore.py` script shipped with Barman (see
  below);
- the Barman catalogue includes at least one valid backup for this server.

> *NOTE*: Barman support is automatically enabled if `barman_server` is set.
> Normally it is a good practice to use Barman, for instance when fetching a
> base backup while cloning a standby; in any case, Barman mode can be
> disabled using the `--without-barman` command line option.

> *NOTE*: if you have a non-default SSH configuration on the Barman server,
> e.g. using a port other than 22, then you can set those parameters in a
> dedicated Host section in `~/.ssh/config` corresponding to the value of
> `barman_server` in `repmgr.conf`. See the "Host" section in
> `man 5 ssh_config` for more details.

`barman-wal-restore.py` is a Python script provided by the Barman
development team, which must be copied to a location accessible to `repmgr`
and marked as executable; `restore_command` must then be set in
`repmgr.conf` as follows:
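A plausible sketch of such a setting, assuming the script has been installed
as `/usr/local/bin/barman-wal-restore.py`, the Barman server is reachable
via SSH as `barman`, and the server name configured in Barman matches the
cluster name `test` used in the examples above (check the script's own
documentation for the exact arguments it expects):

    # illustrative only: script path, Barman host and server name will differ
    restore_command='/usr/local/bin/barman-wal-restore.py barman test %f %p'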