Whitespace cleanup

This commit is contained in:
Ian Barwick
2014-12-18 10:32:28 +09:00
parent 94f520e1de
commit 763881b1ed
2 changed files with 60 additions and 61 deletions

View File

@@ -62,7 +62,7 @@ including how far they are lagging behind the master.
If you lose node1 you can then run this on node2::
repmgr -f /var/lib/pgsql/repmgr/repmgr.conf standby promote
To make node2 the new master. Then on node3 run::
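# hypothetical sketch: have node3 follow the newly promoted master;
# the path to repmgr.conf is assumed to match the earlier examples
repmgr -f /var/lib/pgsql/repmgr/repmgr.conf standby follow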
@@ -74,7 +74,7 @@ If now we want to add a new node, we can a prepare a new server (node4)
and run::
repmgr -D /var/lib/pgsql/9.0 standby clone node2
And if a previously failed node becomes available again, such as
the lost node1 above, you can get it to resynchronize by only copying
over changes made while it was down. That happens with what's
@@ -91,7 +91,7 @@ Installation Outline
To install and use repmgr and repmgrd follow these steps:
1. Build repmgr programs
2. Set up trusted copy between postgres accounts, needed for the
``STANDBY CLONE`` step
@@ -172,7 +172,7 @@ occur::
/usr/bin/ld: cannot find -lxslt
/usr/bin/ld: cannot find -lpam
Install the following packages to correct those::
yum install libxslt-devel
@@ -210,7 +210,7 @@ here is an example sessions demonstrating the problem case appearing::
---> Package postgresql90-devel.i386 0:9.0.2-2PGDG.rhel5 set to be updated
---> Package postgresql90-devel.x86_64 0:9.0.2-2PGDG.rhel5 set to be updated
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================
@@ -272,7 +272,7 @@ You can also make a deb package of repmgr using::
make USE_PGXS=1 deb
This will build a Debian package one level up from where you build, normally the
same directory that you have your repmgr/ directory in.
Confirm software was built correctly
@@ -301,18 +301,18 @@ Below this binary installation base directory is referred to as PGDIR.
Set up trusted copy between postgres accounts
---------------------------------------------
Initial copy between nodes uses the rsync program running over ssh. For this
to work, the postgres accounts on each system need to be able to access files
on their partner node without a password.
First generate a ssh key, using an empty passphrase, and copy the resulting
keys and a matching authorization file to a privileged user on the other system::
[postgres@node1]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/var/lib/pgsql/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/lib/pgsql/.ssh/id_rsa.
Your public key has been saved in /var/lib/pgsql/.ssh/id_rsa.pub.
The key fingerprint is:
@@ -322,7 +322,7 @@ keys and a maching authorization file to a privledged user on the other system::
[postgres@node1]$ cd ~/.ssh
[postgres@node1]$ scp id_rsa.pub id_rsa authorized_keys user@node2:
Login as a user on the other system, and install the files into the postgres
user's account::
[user@node2 ~]$ sudo chown postgres.postgres authorized_keys id_rsa.pub id_rsa
@@ -331,7 +331,7 @@ user's account::
[user@node2 ~]$ sudo mv authorized_keys id_rsa.pub id_rsa ~postgres/.ssh
[user@node2 ~]$ sudo chmod -R go-rwx ~postgres/.ssh
Now test that ssh in both directions works. You may have to accept some new
known hosts in the process.
Primary server configuration
@@ -343,13 +343,13 @@ is a sample of changes to the ``postgresql.conf`` file::
listen_addresses='*'
wal_level = 'hot_standby'
archive_mode = on
archive_command = 'cd .' # we can also use exit 0, anything that
# just does nothing
max_wal_senders = 10
wal_keep_segments = 5000 # 80 GB required on pg_xlog
hot_standby = on
Also you need to add the machines that will participate in the cluster in
``pg_hba.conf`` file. One possibility is to trust all connections from the
replication users from all internal addresses, such as::
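# hypothetical sketch of such an entry; the subnet and the "all" user
# are assumptions -- adjust them to your own network and setup
host    replication    all    192.168.1.0/24    trust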
@@ -379,19 +379,19 @@ instances on seperate servers, both running under the ``postgres`` user account
and both using the default port (5432). This walkthrough assumes the following
setup:
* A primary (master) server called "node1," running as the "postgres" user
who is also the owner of the files. This server is operating on port 5432. This
server will be known as "node1" in the cluster "test".
* A secondary (standby) server called "node2," running as the "postgres" user
who is also the owner of the files. This server is operating on port 5432. This
server will be known as "node2" in the cluster "test".
* Another standby server called "node3" with a similar configuration to "node2".
* The Postgres installation in each of the above is defined as $PGDATA,
which is represented here as ``/var/lib/pgsql/9.0/data``
Creating some sample data
-------------------------
@@ -401,7 +401,7 @@ data in this cluster to replication, you can create some like this::
createdb pgbench
pgbench -i -s 10 pgbench
Examples below will use the database name ``pgbench`` to match this.
Substitute the name of your database instead. Note that the standby
nodes created here will include information for every database in the
@@ -432,12 +432,12 @@ installation on the existing standby nodes.
* Stop any server on "node2" and "node3". You can confirm whether database
servers are running using a command like this::
ps -eaf | grep postgres
And looking for the various database server processes: server, logger,
wal writer, and autovacuum launcher.
* Go to "node2" and "node3" database directories and remove the PostgreSQL installation::
cd $PGDATA
@@ -474,7 +474,7 @@ Possible sources for a problem here include:
this situation you would be able to connect to the "node1" server
on itself, but not from any other host, and you'd just get a timeout
when trying rather than a proper error message.
* The ``pg_hba.conf`` file does not list appropriate statements to allow
this user to login. In this case you should connect to the server,
but see an error message mentioning the ``pg_hba.conf``.
@@ -582,7 +582,7 @@ Some tests you might do at this point include:
repl_status view advances accordingly.
* Verify that you can run queries against the standby server, but
cannot make insertions into the standby database.
Simulating the failure of the primary server
--------------------------------------------
@@ -590,7 +590,7 @@ Simulating the failure of the primary server
To simulate the loss of the primary server, simply stop the "node1" server.
At this point, the standby contains the database as it existed at the time of
the "failure" of the primary server. If looking at ``repl_status`` on
"node2", you should see the time_lag value increase the longer "node1"
"node2", you should see the time_lag value increase the longer "node1"
is down.
Promoting the Standby to be the Primary
@@ -612,7 +612,7 @@ restoring the original roles, type the following on node1::
repmgr -D $PGDATA -d pgbench -p 5432 -U repmgr -R postgres --verbose --force standby clone node2
Then start the "node1" server, which is now acting as a standby server.
Check
Make sure the record(s) inserted in the earlier step are still available on the
now-standby server. Confirm the database on "node1" is read-only.
@@ -647,7 +647,7 @@ Another test setup assumes you might be using the default installation of
PostgreSQL on port 5432 for some other purpose, and instead relocates these
instances onto different ports running as different users. In places where
``127.0.0.1`` is used as a host name, a more traditional configuration
would instead use the name of the relevant host for that parameter.
You can usually leave out changes to the port number in this case too.
* A primary (master) server called "prime," with a user as "prime," who is
@@ -660,8 +660,8 @@ You can usually leave out changes to the port number in this case too.
* A database exists on "prime" called "testdb."
* The Postgres installation in each of the above is defined as $PGDATA,
which is represented here with ``/data/prime`` as the "prime" server and
``/data/standby`` as the "standby" server.
You might set up such an installation by adjusting the login script for the
@@ -771,7 +771,7 @@ Some tests you might do at this point include:
repl_status view advances accordingly.
* Verify that you can run queries against the standby server, but
cannot make insertions into the standby database.
Simulating the failure of the primary server
--------------------------------------------
@@ -836,10 +836,10 @@ Once you have changed roles (with a failover or to restore original roles)
you would end up with records saying that node1 is the primary and other records
saying that node2 is the primary, which could be confusing.
Also, if you don't do anything about it the monitor history will keep growing.
For both of those reasons you sometime want to make some maintainance of the
For both of those reasons you sometimes want to perform some maintenance on the
``repl_monitor`` table.
If you want to clean the history after a few days you can execute the
CLUSTER CLEANUP command from a cron job. For example, to keep just one day of
history you can put this in your crontab::
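# hypothetical crontab entry; the paths to the repmgr binary and to
# repmgr.conf are assumptions -- adjust them to your installation
0 1 * * * /usr/bin/repmgr -f /var/lib/pgsql/repmgr/repmgr.conf cluster cleanup -k 1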
@@ -854,7 +854,7 @@ Configuration File
``repmgr.conf`` is looked for in the directory in which repmgrd or repmgr resides.
The configuration file should have 3 lines:
1. cluster: A string (single quoted) that identifies the cluster we are on

2. node: An integer that identifies our node in the cluster
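As a sketch, such a file might look like this (the values are hypothetical,
and the assumption that the third line is the node's conninfo string is
illustrative only)::

cluster='test'
node=1
conninfo='host=node1 user=repmgr dbname=pgbench'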
@@ -869,10 +869,10 @@ Command line syntax
The current supported syntax for the program can be seen using::
repmgr --help
The output from this program looks like this::
repmgr: Replicator manager
Usage:
repmgr [OPTIONS] master {register}
repmgr [OPTIONS] standby {register|clone|promote|follow}
@@ -913,7 +913,7 @@ repmgr commands
Not all of these commands need the ``repmgr.conf`` file, but they need to be able to
connect to the remote and local databases.
You can tell it which is the remote database by using the -h parameter or
as a last parameter in standby clone and standby follow. If you need to specify
a port different from the default 5432 you can use the -p parameter.
The standby is always considered to be localhost, and a second -p parameter will indicate
@@ -929,9 +929,9 @@ its port if is different from the default one.
* Registers a standby in a cluster; this needs to be executed before
repmgrd will function on the node.
* standby clone [node to be cloned]
* Does a backup via ``rsync`` of the data directory of the primary, and
creates the recovery file we need to start a new hot standby server.
It doesn't need the ``repmgr.conf`` so it can be executed anywhere on the
new node. You can change to the directory you want the new database
@@ -952,7 +952,7 @@ its port if is different from the default one.
executing ``pg_ctl``; check the server startup script you are using
and try to match what it does.
* standby promote
* Allows manual promotion of a specific standby into a new primary in the
event of a failover. This needs to be executed in the same directory
@@ -964,7 +964,7 @@ its port if is different from the default one.
That will restart your standby postgresql service.
* standby follow
* Allows the standby to base itself on the new primary passed as a
parameter. This needs to be executed in the same directory where the
@@ -973,21 +973,21 @@ its port if is different from the default one.
./repmgr standby follow
* cluster show
* Shows the role (standby/master) and connection string for all nodes configured
in the cluster, or "FAILED" if the node doesn't respond. This allows us to know
which nodes are alive, which ones need attention, and to get a notion of the
structure of the clusters we have access to. Example::
./repmgr cluster show
* cluster cleanup
* Cleans the monitor's history from the repmgr tables. This prevents the repl_monitor table
from growing excessively, which in turn affects repl_status view performance, and
keeps the disk space used by repmgr under control. This command can be run manually
or from a cron job to execute it periodically.
There is also a --keep-history (-k) option to indicate how many days of history we
want to keep, so the command will clean up history older than "keep-history" days. Example::
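# hypothetical invocation: clean up monitor history older than two days
./repmgr cluster cleanup -k 2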
@@ -1002,20 +1002,20 @@ Command line syntax
The current supported syntax for the program can be seen using::
repmgrd --help
The output from this program looks like this::
repmgrd: Replicator manager daemon
Usage:
repmgrd [OPTIONS]
Options:
--help show this help, then exit
--version output version information, then exit
--verbose output verbose activity information
--monitoring-history track advance or lag of the replication in every standby in repl_monitor
-f, --config-file=PATH path to the configuration file
repmgrd monitors a cluster of servers.
The ``--verbose`` option can be useful in troubleshooting issues with
@@ -1055,8 +1055,8 @@ in ``repl_node``, consult the ``repl_status`` view::
psql -d postgres -c "SELECT * FROM repmgr_test.repl_status"
This view shows the latest monitor info from every node.
* replication_lag: in bytes. This is how far the latest xlog record
we have received is from master.
* apply_lag: in bytes. This is how far the latest xlog record
@@ -1068,7 +1068,7 @@ Error codes
-----------
When the repmgr or repmgrd program exits, it will set one of the
following error codes:
* SUCCESS 0: Program ran successfully.
* ERR_BAD_CONFIG 1: One of the configuration checks the program makes failed.

View File

@@ -1763,7 +1763,7 @@ do_witness_create(void)
PQfinish(masterconn);
exit(ERR_BAD_CONFIG);
}
/* check if we need to create a user */
if (runtime_options.username[0] && runtime_options.localport[0] && strcmp(runtime_options.username,"postgres")!=0 )
{
@@ -1780,12 +1780,12 @@ do_witness_create(void)
exit(ERR_BAD_CONFIG);
}
}
/* check if we need to create a database */
if(runtime_options.dbname[0] && strcmp(runtime_options.dbname,"postgres")!=0 && runtime_options.localport[0])
{
/* create required db */
sprintf(script, "%s/createdb -p %s -U postgres --owner=%s %s",
options.pg_bindir, runtime_options.localport,runtime_options.username, runtime_options.dbname);
log_info("Create database for witness db: %s.\n", script);
@@ -1830,7 +1830,7 @@ do_witness_create(void)
PQfinish(masterconn);
exit(ERR_BAD_CONFIG);
}
/* reload to adapt for changed pg_hba.conf */
sprintf(script, "%s/pg_ctl %s -w -D %s reload", options.pg_bindir,
options.pgctl_options, runtime_options.dest_dir);
@@ -1842,8 +1842,7 @@ do_witness_create(void)
PQfinish(masterconn);
exit(ERR_BAD_CONFIG);
}
/* register ourselves in the master */
sqlquery_snprintf(sqlquery, "INSERT INTO %s.repl_nodes(id, cluster, name, conninfo, priority, witness) "
"VALUES (%d, '%s', '%s', '%s', %d, true)",