Fix formatting

Commit 2cbee90f35 (parent fa7d8df534), Ian Barwick, 2014-07-07 11:51:10 +09:00

repmgr provides two binaries:

- ``repmgr``: a command-line client to manage replication and repmgr configuration
- ``repmgrd``: an optional daemon process which runs on standby nodes to monitor
  replication and node status

Each PostgreSQL node requires a repmgr configuration file; additionally
Configuration
=============

* Server configuration

  Password-less SSH logins must be enabled for the database system user
  (typically ``postgres``) between all server nodes to enable repmgr to copy
  required files.

* PostgreSQL configuration

  The master PostgreSQL node needs to be configured for replication with the
  following settings::

      wal_level = 'hot_standby'      # minimal, archive, hot_standby, or logical
      archive_mode = on              # allows archiving to be done
      hot_standby = on               # "on" allows queries during recovery

  Note that repmgr expects a default of 5000 wal_keep_segments, although this
  value can be overridden when executing the ``repmgr`` client.

  Additionally, repmgr requires a dedicated PostgreSQL superuser account
  and a database in which to store monitoring and replication data. This can
  in principle be an existing database, however it's probably advisable to
  create a dedicated repmgr database.

* repmgr configuration

  Each PostgreSQL node requires a repmgr configuration file containing
  identification and database connection information::

      cluster=test
      node=1
      conninfo='host=repmgr_node1 user=repmgr_usr dbname=repmgr_db'
      pg_bindir=/path/to/postgres/bin

  * ``cluster``: common name for the replication cluster; this must be the same on all nodes
  * ``node``: a unique, arbitrary integer identifier
  * ``name``: a unique, human-readable name
  * ``conninfo``: a standard conninfo string enabling repmgr to connect to the
    control database; user and name must be the same on all nodes, while other
    parameters such as port may differ. The ``host`` parameter *must* be a hostname
    resolvable by all nodes on the cluster.
  * ``pg_bindir``: (optional) location of PostgreSQL binaries, if not in the default $PATH

  Note that the configuration file should not be stored inside the PostgreSQL
  data directory.
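As a quick sanity check, the presence of the required parameters in a configuration file can be verified with standard shell tools. This is only a sketch: the file path and values below are illustrative assumptions, not anything repmgr itself provides.

```shell
# Write an illustrative repmgr.conf (path and values are example assumptions).
cat > /tmp/repmgr.conf <<'EOF'
cluster=test
node=1
name=node1
conninfo='host=repmgr_node1 user=repmgr_usr dbname=repmgr_db'
EOF

# Report whether each required parameter is present.
for param in cluster node name conninfo; do
  if grep -q "^${param}=" /tmp/repmgr.conf; then
    echo "present: ${param}"
  else
    echo "missing: ${param}"
  fi
done
```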
Replication setup and monitoring
================================

For the purposes of this guide, we'll assume the database user will be
``repmgr_usr`` and the database will be ``repmgr_db``, and that the following
environment variables are set on each node:

- $HOME: the PostgreSQL system user's home directory
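One way to fail fast when an assumed variable is missing is sketched below; the fallback path for $PGDATA is purely an illustrative assumption, not a repmgr default.

```shell
# Ensure the variables this guide relies on are set before proceeding.
# The PGDATA fallback path is an example assumption only.
export PGDATA="${PGDATA:-/var/lib/postgresql/data}"

: > /tmp/env_check
for var in HOME PGDATA; do
  if [ -n "$(printenv "$var")" ]; then
    echo "${var} is set" | tee -a /tmp/env_check
  else
    echo "${var} is NOT set" | tee -a /tmp/env_check
  fi
done
```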

Master setup
------------

1. Configure PostgreSQL

   - create user and database::

       CREATE ROLE repmgr_usr LOGIN SUPERUSER;
       CREATE DATABASE repmgr_db OWNER repmgr_usr;

   - configure postgresql.conf for replication (see above)
   - update pg_hba.conf::

       host repmgr_usr repmgr_db 192.168.1.0/24 trust
       host replication all 192.168.1.0/24 trust

   Restart the PostgreSQL server after making these changes.

2. Create the repmgr configuration file::

    $ cat $HOME/repmgr/repmgr.conf
    cluster=test
    conninfo='host=repmgr_node1 user=repmgr_usr dbname=repmgr_db'
    pg_bindir=/path/to/postgres/bin

3. Register the master node with repmgr::

    $ repmgr -f $HOME/repmgr/repmgr.conf --verbose master register
    [2014-07-04 10:43:42] [INFO] repmgr mgr connecting to master database

Slave/standby setup
-------------------
1. Use repmgr to clone the master::

    $ repmgr -f $HOME/repmgr/repmgr.conf -D $PGDATA -d repmgr_db -U repmgr_usr -R postgres --verbose standby clone 192.168.1.2
    Opening configuration file: ./repmgr.conf
    [2014-07-04 10:53:21] [NOTICE] for example : /etc/init.d/postgresql start

   -R is the database system user on the master node. At this point it does not
   matter if the ``repmgr.conf`` file is not found.

   This will clone the PostgreSQL database files from the master, and
   additionally create an appropriate ``recovery.conf`` file.

2. Start the PostgreSQL server.

3. Create the repmgr configuration file::

    $ cat $HOME/repmgr/repmgr.conf
    cluster=test
    conninfo='host=repmgr_node2 user=repmgr_usr dbname=repmgr_db'
    pg_bindir=/path/to/postgres/bin

4. Register the standby node with repmgr::

    $ repmgr -f $HOME/repmgr/repmgr.conf --verbose standby register
    Opening configuration file: /path/to/repmgr/repmgr.conf

Monitoring
----------

``repmgrd`` is a management and monitoring daemon which runs on standby nodes
and can automate remote actions. It can be started simply with e.g.::

    repmgrd -f $HOME/repmgr/repmgr.conf --verbose > $HOME/repmgr/repmgr.log 2>&1

or alternatively::

    repmgrd -f $HOME/repmgr/repmgr.conf --verbose --monitoring-history > $HOME/repmgr/repmgrd.log 2>&1

which will track advance or lag of the replication in every standby in the
``repl_monitor`` table.

Example log output::

    [2014-07-04 11:55:17] [INFO] repmgrd Connecting to database 'host=localhost user=repmgr_usr dbname=repmgr_db'
    [2014-07-04 11:55:17] [INFO] repmgrd Connected to database, checking its state
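Each log line follows a ``[timestamp] [LEVEL] message`` pattern, which makes the log easy to filter with standard tools. A small sketch, using the two sample lines above written to a temporary file:

```shell
# Save sample repmgrd output (copied from the example above) to a file.
cat > /tmp/repmgrd.log <<'EOF'
[2014-07-04 11:55:17] [INFO] repmgrd Connecting to database 'host=localhost user=repmgr_usr dbname=repmgr_db'
[2014-07-04 11:55:17] [INFO] repmgrd Connected to database, checking its state
EOF

# Show only WARNING/ERROR lines; report when there are none.
grep -E '^\[[^]]+\] \[(WARNING|ERROR)\]' /tmp/repmgrd.log || echo "no warnings or errors"
```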
Failover
--------

To promote a standby to master, on the standby execute e.g.::

    repmgr -f $HOME/repmgr/repmgr.conf --verbose standby promote

repmgr will attempt to connect to the current master to verify that it
is not available (if it is, repmgr will not promote the standby).

Other standby servers need to be told to follow the new master with::

    repmgr -f $HOME/repmgr/repmgr.conf --verbose standby follow

See file ``autofailover_quick_setup.rst`` for information on how to set up
automated failover.
repmgr database schema
======================

repmgr creates a small schema for its own use in the database specified in
each node's conninfo configuration parameter. This database can in principle
be any database. The schema name is the global ``cluster`` name prefixed
with ``repmgr_``, so for the example setup above the schema name is
``repmgr_test``.

The schema contains two tables:

* ``repl_nodes``: stores information about all registered servers in the cluster
* ``repl_monitor``: stores monitoring information about each node

and one view, ``repl_status``, which summarizes the latest monitoring
information for each node.
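The view can be queried like any other relation; for example (a sketch only — the exact columns depend on the repmgr version, so ``*`` is used here)::

    psql -U repmgr_usr -d repmgr_db -c 'SELECT * FROM repmgr_test.repl_status'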