Compare commits

...

67 Commits

Author SHA1 Message Date
Lev Kokotov
3f16123cc5 Config docs (#37)
* config docs

* space

* space

* shards
2022-02-21 23:57:25 -08:00
Lev Kokotov
f6f5471aa0 Update README.md 2022-02-21 20:43:40 -08:00
Lev Kokotov
a6fc935040 Can pass config path as argument (#36)
* show better errors for config parsing

* lint
2022-02-21 20:41:32 -08:00
Lev Kokotov
754381fc6c Update Dockerfile 2022-02-21 17:49:32 -08:00
Lev Kokotov
b1e9a406fb Update README.md 2022-02-21 17:48:08 -08:00
Lev Kokotov
f21a3d8d8c Add server login stat; refactor for better naming (#34) 2022-02-21 17:28:50 -08:00
Lev Kokotov
f805b43a08 test session mode and config reload for real (#33)
* test session mode and config reload for real

* period

* run them at the end

* typo

* wrong dir
2022-02-21 00:16:33 -08:00
Lev Kokotov
86941d62e4 Reset query router setting to default (#32) 2022-02-21 00:00:50 -08:00
Lev Kokotov
aceb2ace24 Update issue templates 2022-02-20 23:52:50 -08:00
Lev Kokotov
303fec063b Ruby (#30)
* cop

* log
2022-02-20 23:33:04 -08:00
Lev Kokotov
64574211c6 Update CONTRIBUTING.md 2022-02-20 22:49:30 -08:00
Lev Kokotov
44b5e7eeee use logger lib; minor refactor; sv_* stats (#29) 2022-02-20 22:47:08 -08:00
Lev Kokotov
108f5715c0 License as MIT (#28)
* License as MIT

* license

* s
2022-02-20 13:49:30 -08:00
Lev Kokotov
3b795464a8 Fix client states reporting (#27)
* Fix client states reporting

* clean
2022-02-20 12:40:09 -08:00
Lev Kokotov
d4c1fc87ee Reloadable config (#26)
* Reloadable config

* readme

* live config reload

* test matrix
2022-02-19 13:57:35 -08:00
Lev Kokotov
4ca50b9a71 Update README.md 2022-02-19 08:59:20 -08:00
Lev Kokotov
a556ec1c43 More query router commands; settings last until changed again; docs (#25)
* readme

* touch up docs

* stuff

* refactor query router

* remove unused

* less verbose

* docs

* no link

* method rename
2022-02-19 08:57:24 -08:00
Lev Kokotov
bbacb9cf01 Explicit shard selection; Rails tests (#24)
* Explicit shard selection; Rails tests

* try running ruby tests

* try without lockfile

* aha

* ok
2022-02-18 09:43:07 -08:00
Lev Kokotov
aa796289bf Query parser 3.0 (#23)
* Starting query parsing

* Query parser

* working config

* disable by default

* fix tsets

* introducing log crate; test for query router; comments

* typo

* fixes for banning

* added test for prepared stmt
2022-02-18 07:10:18 -08:00
Lev Kokotov
4c8a3987fe Refactor query routing into its own module (#22)
* Refactor query routing into its own module

* commments; tests; dead code

* error message

* safer startup

* hm

* dont have to be public

* wow

* fix ci

* ok

* nl

* no more silent errors
2022-02-16 22:52:11 -08:00
Lev Kokotov
7b0ceefb96 Constants, comments, CI fixes, dead code clean-up (#21)
* constants

* server.rs docs

* client.rs comments

* dead code; comments

* comment

* query cancellation comments

* remove unnecessary cast

* move db setup up one step

* query cancellation test

* new line; good night
2022-02-15 22:45:45 -08:00
Lev Kokotov
bb84dcee64 Update README.md 2022-02-15 13:11:24 -08:00
Lev Kokotov
1c406e0fc6 Update README.md 2022-02-15 13:10:32 -08:00
Lev Kokotov
05b4cccb97 More statistics (#20)
* cleaner stats

* remove shard selection there
2022-02-15 08:18:01 -08:00
Lev Kokotov
659b1e00b8 bump statsd 2022-02-14 10:27:32 -08:00
Lev Kokotov
8e5e28a139 Some stats (#19) 2022-02-14 10:00:55 -08:00
Lev Kokotov
574ebe02b8 TODOs (#18) 2022-02-14 06:36:05 -08:00
Lev Kokotov
9c521f07c1 parse startup client parameters (#16) 2022-02-14 05:11:53 -08:00
Lev Kokotov
4aa9c3d3c7 Cleaner shutdown (#12)
* Cleaner shutdown

* mark as bad just in case although im pretty sure we dont need it

* server session duration

* test clean shutdown

* ah
2022-02-12 10:16:05 -08:00
Lev Kokotov
20ceb729a0 print session duration; connect to all servers when validating (#11) 2022-02-12 09:24:24 -08:00
Lev Kokotov
eb45d65110 extended protocol tests 2022-02-11 22:24:27 -08:00
Lev Kokotov
526b9eb666 Pass real server info to the client (#10) 2022-02-11 22:19:49 -08:00
Lev Kokotov
ab8573c94f docker image (#9)
* docker image

* nl
2022-02-11 12:02:08 -08:00
Lev Kokotov
bc5b9e422f Merge pull request #8 from levkk/default-role
add default server role; bug fix
2022-02-11 11:23:58 -08:00
Lev Kokotov
e8263430a3 nl 2022-02-11 11:21:32 -08:00
Lev Kokotov
0d369ab90a add default server role; bug fix 2022-02-11 11:19:40 -08:00
Lev Kokotov
595e564216 Merge pull request #7 from levkk/once-cell-regex
once_cell is way faster
2022-02-10 17:08:01 -08:00
Lev Kokotov
06575eae7b once_cell is way faster 2022-02-10 17:05:20 -08:00
Lev Kokotov
0bec14ba1c wrong benchmark 2022-02-10 14:25:04 -08:00
Lev Kokotov
ee9f609d4e bench setup 2022-02-10 14:13:47 -08:00
Lev Kokotov
19e9f26467 readme 2022-02-10 14:03:38 -08:00
Lev Kokotov
070c38ddc5 benchmarks 2022-02-10 14:02:24 -08:00
Lev Kokotov
1798065b76 max_workers = 4 = so much faster 2022-02-10 13:48:56 -08:00
Lev Kokotov
a3b910ea72 Merge pull request #6 from levkk/levkk-ci-tests
End-to-end CI tests
2022-02-10 11:23:10 -08:00
Lev Kokotov
b9b89db708 maybe were breaking the terminal? 2022-02-10 11:20:33 -08:00
Lev Kokotov
604af32b94 print whats going on 2022-02-10 11:16:08 -08:00
Lev Kokotov
39028282b9 hmm 2022-02-10 11:13:31 -08:00
Lev Kokotov
9d51fe8985 print whats going on 2022-02-10 11:11:56 -08:00
Lev Kokotov
12011be3ec tests 2022-02-10 11:08:57 -08:00
Lev Kokotov
86386c7377 background 2022-02-10 10:59:45 -08:00
Lev Kokotov
66c5271453 sudo 2022-02-10 10:54:06 -08:00
Lev Kokotov
17aed5dcee hmm 2022-02-10 10:53:15 -08:00
Lev Kokotov
89dc33f8aa test ci 2022-02-10 10:50:19 -08:00
Lev Kokotov
c6ccc6b6ae Merge pull request #5 from levkk/levkk-replica-primary
#4 Primary/replica selection
2022-02-10 10:40:53 -08:00
Lev Kokotov
495d6ce6c3 fmt 2022-02-10 10:38:06 -08:00
Lev Kokotov
883b6ee793 todo complete 2022-02-10 10:37:55 -08:00
Lev Kokotov
22c6f13dc7 removed atomic round-robin 2022-02-10 10:37:49 -08:00
Lev Kokotov
c1476d29da config tests 2022-02-10 09:07:10 -08:00
Lev Kokotov
8209633e05 pool fixes 2022-02-10 08:54:06 -08:00
Lev Kokotov
daf120aeac more tests 2022-02-10 08:35:25 -08:00
Lev Kokotov
6d5ab79ed3 readme 2022-02-09 21:25:17 -08:00
Lev Kokotov
fccfb40258 nl 2022-02-09 21:20:20 -08:00
Lev Kokotov
a9b2a41a9b fixes to the banlist 2022-02-09 21:19:14 -08:00
Lev Kokotov
28c70d47b6 #1 Primary/replica selection 2022-02-09 20:02:20 -08:00
Lev Kokotov
00f2d39446 Merge branch 'main' of github.com:levkk/pgcat into main 2022-02-09 06:51:47 -08:00
Lev Kokotov
4c16ba3848 some comments 2022-02-09 06:51:31 -08:00
Lev Kokotov
d530f30def Update README.md 2022-02-08 21:59:56 -08:00
34 changed files with 3018 additions and 1182 deletions

.circleci/config.yml

@@ -10,18 +10,36 @@ jobs:
# See: https://circleci.com/docs/2.0/configuration-reference/#docker-machine-macos-windows-executor
docker:
- image: cimg/rust:1.58.1
environment:
RUST_LOG: info
- image: cimg/postgres:14.0
auth:
username: mydockerhub-user
password: $DOCKERHUB_PASSWORD
environment:
POSTGRES_USER: postgres
POSTGRES_DB: postgres
# Add steps to the job
# See: https://circleci.com/docs/2.0/configuration-reference/#steps
steps:
- checkout
- restore_cache:
key: cargo-lock-2-{{ checksum "Cargo.lock" }}
- run:
name: "Lint"
command: "cargo fmt --check"
- run:
name: "Install dependencies"
command: "sudo apt-get update && sudo apt-get install -y psmisc postgresql-contrib-12 postgresql-client-12 ruby ruby-dev libpq-dev"
- run:
name: "Build"
command: "cargo build"
- run:
name: "Test"
command: "cargo test"
- run:
name: "Test end-to-end"
command: "bash .circleci/run_tests.sh"
- save_cache:
key: cargo-lock-2-{{ checksum "Cargo.lock" }}
paths:
@@ -34,4 +52,4 @@ jobs:
workflows:
build:
jobs:
- build
- build

.circleci/run_tests.sh (new file, 59 lines)

@@ -0,0 +1,59 @@
#!/bin/bash
set -e
set -o xtrace
psql -e -h 127.0.0.1 -p 5432 -U postgres -f tests/sharding/query_routing_setup.sql
./target/debug/pgcat &
sleep 1
# Setup PgBench
pgbench -i -h 127.0.0.1 -p 6432
# Run it
pgbench -h 127.0.0.1 -p 6432 -t 500 -c 2 --protocol simple
# Extended protocol
pgbench -h 127.0.0.1 -p 6432 -t 500 -c 2 --protocol extended
# COPY TO STDOUT test
psql -h 127.0.0.1 -p 6432 -c 'COPY (SELECT * FROM pgbench_accounts LIMIT 15) TO STDOUT;' > /dev/null
# Query cancellation test
(psql -h 127.0.0.1 -p 6432 -c 'SELECT pg_sleep(5)' || true) &
killall psql -s SIGINT
# Sharding insert
psql -e -h 127.0.0.1 -p 6432 -f tests/sharding/query_routing_test_insert.sql
# Sharding select
psql -e -h 127.0.0.1 -p 6432 -f tests/sharding/query_routing_test_select.sql > /dev/null
# Replica/primary selection & more sharding tests
psql -e -h 127.0.0.1 -p 6432 -f tests/sharding/query_routing_test_primary_replica.sql > /dev/null
#
# ActiveRecord tests!
#
cd tests/ruby
sudo gem install bundler
bundle install
ruby tests.rb
cd ../../
# Test session mode (and config reload)
sed -i 's/pool_mode = "transaction"/pool_mode = "session"/' pgcat.toml
# Reload config
kill -SIGHUP $(pgrep pgcat)
# Prepared statements that will only work in session mode
pgbench -h 127.0.0.1 -p 6432 -t 500 -c 2 --protocol prepared
# Attempt clean shut down
killall pgcat -s SIGINT
# Allow for graceful shutdown
sleep 1

.dockerignore (new file, 4 lines)

@@ -0,0 +1,4 @@
target/
tests/
tracing/
.circleci/

.github/ISSUE_TEMPLATE/bug_report.md (new file, 38 lines)

@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md (new file, 20 lines)

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

CONTRIBUTING.md

@@ -1,6 +1,19 @@
## Introduction
Thank you for contributing! Just a few tips here:
1. `cargo fmt` your code before opening up a PR
2. Run the "test suite" (i.e. PgBench) to make sure everything still works.
2. Run the test suite (e.g. `pgbench`) to make sure everything still works. The tests are in `.circleci/run_tests.sh`.
3. Performance is important; make sure there are no regressions in your branch vs. `main`.
Happy hacking!
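In practice, those tips boil down to something like the following before opening a PR (a sketch based on the tips above; the end-to-end script assumes a local Postgres set up as in CI):
```bash
cargo fmt                      # tip 1: format your code
cargo build && cargo test      # unit tests
bash .circleci/run_tests.sh    # tip 2: end-to-end tests (needs local Postgres)
```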
## TODOs
A non-exhaustive list of things that would be useful to implement:
#### Client authentication
MD5 is probably sufficient, but maybe others too.
#### Admin
Admin database for stats collection and pooler administration. PgBouncer gives us a nice example of how to do that, specifically how to implement `RowDescription` and `DataRow` messages, [example here](https://github.com/pgbouncer/pgbouncer/blob/4f9ced8e63d317a6ff45c8b0efa876b32161f6db/src/admin.c#L813).
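For orientation, a rough sketch of what encoding those two messages could look like in Rust, using the `bytes` crate that is already a dependency. The function names are illustrative, not existing pgcat APIs; the wire layout follows the PostgreSQL protocol documentation:
```rust
use bytes::{BufMut, BytesMut};

// Encode a RowDescription ('T') message describing one text column.
// Wire format: 'T', Int32 length, Int16 field count, then per field:
// name (cstring), table OID, column attr number, type OID, type size,
// type modifier, format code.
fn row_description(column: &str) -> BytesMut {
    let mut fields = BytesMut::new();
    fields.put_i16(1); // one field
    fields.put_slice(column.as_bytes());
    fields.put_u8(0); // name is null-terminated
    fields.put_i32(0); // table OID: none
    fields.put_i16(0); // column attribute number: none
    fields.put_i32(25); // type OID: TEXT
    fields.put_i16(-1); // type size: variable-length
    fields.put_i32(-1); // type modifier: none
    fields.put_i16(0); // format code: text

    let mut msg = BytesMut::new();
    msg.put_u8(b'T');
    msg.put_i32(4 + fields.len() as i32); // length counts itself
    msg.put_slice(&fields);
    msg
}

// Encode a DataRow ('D') message carrying one text value.
fn data_row(value: &str) -> BytesMut {
    let mut msg = BytesMut::new();
    msg.put_u8(b'D');
    msg.put_i32(4 + 2 + 4 + value.len() as i32);
    msg.put_i16(1); // one column
    msg.put_i32(value.len() as i32);
    msg.put_slice(value.as_bytes());
    msg
}
```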

Cargo.lock (generated, 79 lines changed)

@@ -11,6 +11,12 @@ dependencies = [
"memchr",
]
[[package]]
name = "arc-swap"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c5d78ce20460b82d3fa150275ed9d55e21064fc7951177baacf86a145c4a4b1f"
[[package]]
name = "async-trait"
version = "0.1.52"
@@ -22,6 +28,17 @@ dependencies = [
"syn",
]
[[package]]
name = "atty"
version = "0.2.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9b39be18770d11421cdb1b9947a45dd3f37e93092cbf377614828a319d5fee8"
dependencies = [
"hermit-abi",
"libc",
"winapi",
]
[[package]]
name = "autocfg"
version = "1.0.1"
@@ -110,6 +127,19 @@ dependencies = [
"generic-array",
]
[[package]]
name = "env_logger"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b2cf0344971ee6c64c31be0d530793fba457d322dfec2810c453d0ef228f9c3"
dependencies = [
"atty",
"humantime",
"log",
"regex",
"termcolor",
]
[[package]]
name = "futures-channel"
version = "0.3.19"
@@ -175,6 +205,12 @@ dependencies = [
"libc",
]
[[package]]
name = "humantime"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a3a5bfb195931eeb336b2a7b4d761daec841b97f947d34394601737a7bba5e4"
[[package]]
name = "instant"
version = "0.1.12"
@@ -318,16 +354,23 @@ dependencies = [
name = "pgcat"
version = "0.1.0"
dependencies = [
"arc-swap",
"async-trait",
"bb8",
"bytes",
"chrono",
"env_logger",
"log",
"md-5",
"num_cpus",
"once_cell",
"rand",
"regex",
"serde",
"serde_derive",
"sha-1",
"sqlparser",
"statsd",
"tokio",
"toml",
]
@@ -489,6 +532,24 @@ version = "1.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f2dd574626839106c320a323308629dcb1acfc96e32a8cba364ddc61ac23ee83"
[[package]]
name = "sqlparser"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b8f192f29f4aa49e57bebd0aa05858e0a1f32dd270af36efe49edb82cbfffab6"
dependencies = [
"log",
]
[[package]]
name = "statsd"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "df1efceb4bf2c0b5ebec94354285a43bbbed1375605bdf2ebe4132299434a330"
dependencies = [
"rand",
]
[[package]]
name = "syn"
version = "1.0.86"
@@ -500,6 +561,15 @@ dependencies = [
"unicode-xid",
]
[[package]]
name = "termcolor"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2dfed899f0eb03f32ee8c6a0aabdb8a7949659e3466561fc0adf54e26d88c5f4"
dependencies = [
"winapi-util",
]
[[package]]
name = "time"
version = "0.1.44"
@@ -590,6 +660,15 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ac3b87c63620426dd9b991e5ce0329eff545bccbbb34f3be09ff6fb6ab51b7b6"
[[package]]
name = "winapi-util"
version = "0.1.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "70ec6ce85bb158151cae5e5c87f95a8e97d2c0c4b001223f33a334e3ce5de178"
dependencies = [
"winapi",
]
[[package]]
name = "winapi-x86_64-pc-windows-gnu"
version = "0.4.0"

Cargo.toml

@@ -18,3 +18,10 @@ toml = "0.5"
serde = "1"
serde_derive = "1"
regex = "1"
num_cpus = "1"
once_cell = "1"
statsd = "0.15"
sqlparser = "0.14"
log = "0.4"
arc-swap = "1"
env_logger = "0.9"

Dockerfile (new file, 11 lines)

@@ -0,0 +1,11 @@
FROM rust:1.58-slim-buster AS builder
COPY . /app
WORKDIR /app
RUN cargo build --release
FROM debian:buster-slim
COPY --from=builder /app/target/release/pgcat /usr/bin/pgcat
COPY --from=builder /app/pgcat.toml /etc/pgcat/pgcat.toml
WORKDIR /etc/pgcat
ENV RUST_LOG=info
ENTRYPOINT ["/usr/bin/pgcat"]

LICENSE (694 lines changed)

@@ -1,674 +1,20 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
Copyright (c) 2022 Lev Kokotov <lev@levthe.dev>
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md (510 lines changed)

@@ -6,18 +6,64 @@
Meow. PgBouncer rewritten in Rust, with sharding, load balancing and failover support.
-**Alpha**: don't use in production just yet.
+**Alpha**: looking for alpha testers, see [#35](https://github.com/levkk/pgcat/issues/35).
## Features
| **Feature** | **Status** | **Comments** |
|--------------------------------|-----------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
| Transaction pooling | :heavy_check_mark: | Identical to PgBouncer. |
| Session pooling | :heavy_check_mark: | Identical to PgBouncer. |
| `COPY` support | :heavy_check_mark: | Both `COPY TO` and `COPY FROM` are supported. |
| Query cancellation | :heavy_check_mark: | Supported both in transaction and session pooling modes. |
| Load balancing of read queries | :heavy_check_mark: | Using round-robin between replicas. Primary is included when `primary_reads_enabled` is enabled (default). |
| Sharding | :heavy_check_mark: | Transactions are sharded using `SET SHARD TO` and `SET SHARDING KEY TO` syntax extensions; see examples below. |
| Failover | :heavy_check_mark: | Replicas are tested with a health check. If a health check fails, remaining replicas are attempted; see below for algorithm description and examples. |
| Statistics reporting | :heavy_check_mark: | Statistics similar to PgBouncer's are reported via StatsD. |
| Live configuration reloading | :construction_worker: | Reload config with a `SIGHUP` to the process, e.g. `kill -s SIGHUP $(pgrep pgcat)`. Not all settings can be reloaded without a restart. |
| Client authentication | :x: :wrench: | On the roadmap; currently all clients are allowed to connect and one user is used to connect to Postgres. |
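For instance, a client can pin a transaction to a shard with the syntax extensions from the table (the table and key values below are made up for illustration):
```sql
BEGIN;
-- Route directly by shard number with: SET SHARD TO '1';
-- ...or let the pooler hash a sharding key to pick the shard:
SET SHARDING KEY TO '1234';
SELECT * FROM users WHERE user_id = 1234;
COMMIT;
```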
## Deployment
See `Dockerfile` for an example deployment using Docker. The pooler is configured to spawn 4 workers, so 4 CPUs are recommended for optimal performance.
That setting can be adjusted to spawn as many (or as few) workers as needed.
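A minimal way to try it with Docker, assuming the `Dockerfile` in this repository (the image tag and config mount are illustrative):
```bash
docker build -t pgcat .
docker run -p 6432:6432 -v "$(pwd)/pgcat.toml:/etc/pgcat/pgcat.toml" pgcat
```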
### Config
| **Name** | **Description** | **Examples** |
|-------------------------|--------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------|
| **`general`** | | |
| `host` | The pooler will run on this host, 0.0.0.0 means accessible from everywhere. | `0.0.0.0` |
| `port` | The pooler will run on this port. | `6432` |
| `pool_size` | Maximum allowed server connections per pool. Pools are separated for each user/shard/server role. The connections are allocated as needed. | `15` |
| `pool_mode` | The pool mode to use, i.e. `session` or `transaction`. | `transaction` |
| `connect_timeout` | Maximum time to establish a connection to a server (milliseconds). If reached, the server is banned and the next target is attempted. | `5000` |
| `healthcheck_timeout` | Maximum time to pass a health check (`SELECT 1`, milliseconds). If reached, the server is banned and the next target is attempted. | `1000` |
| `ban_time` | Ban time for a server (seconds). It won't be allowed to serve transactions until the ban expires; failover targets will be used instead. | `60` |
| `statsd_address` | StatsD host and port. Statistics will be sent there every 15 seconds. | `127.0.0.1:8125` |
| | | |
| **`user`** | | |
| `name` | The user name. | `sharding_user` |
| `password` | The user password in plaintext. | `hunter2` |
| | | |
| **`shards`** | Shards are numbered starting from 0; the order in the config is preserved by the pooler to route queries accordingly. | `[shards.0]` |
| `servers` | List of servers to connect to and their roles. A server is: `[host, port, role]`, where `role` is either `primary` or `replica`. | `["127.0.0.1", 5432, "primary"]` |
| `database` | The name of the database to connect to. This is the same on all servers that are part of one shard. | |
| **`query_router`** | | |
| `default_role` | Traffic is routed to this role by default (round-robin), unless the client specifies otherwise. Default is `any`, for any role available. | `any`, `primary`, `replica` |
| `query_parser_enabled` | Enable the query parser which will inspect incoming queries and route them to a primary or replicas. | `false` |
| `primary_reads_enabled` | Enable this to allow read queries on the primary; otherwise read queries are routed to the replicas. | `true` |
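Put together, a minimal `pgcat.toml` could look like this (a sketch assembled from the table above; the values, database name, and second server are illustrative, not defaults):
```toml
[general]
host = "0.0.0.0"
port = 6432
pool_size = 15
pool_mode = "transaction"
connect_timeout = 5000      # milliseconds
healthcheck_timeout = 1000  # milliseconds
ban_time = 60               # seconds
statsd_address = "127.0.0.1:8125"

[user]
name = "sharding_user"
password = "hunter2"

[shards.0]
servers = [
    ["127.0.0.1", 5432, "primary"],
    ["127.0.0.1", 5433, "replica"],
]
database = "shard0"

[query_router]
default_role = "any"
query_parser_enabled = false
primary_reads_enabled = true
```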
## Local development
1. Install Rust (latest stable will work great).
-2. `cargo run --release` (to get better benchmarks).
+2. `cargo build --release` (to get better benchmarks).
3. Change the config in `pgcat.toml` to fit your setup (optional given next step).
-4. Install Postgres and run `psql -f tests/sharding/query_routing_setup.sql`
+4. Install Postgres and run `psql -f tests/sharding/query_routing_setup.sql` (user/password may be required depending on your setup)
5. `RUST_LOG=info cargo run --release` You're ready to go!
### Tests
-You can just PgBench to test your changes:
+Quickest way to test your changes is to use pgbench:
```
pgbench -i -h 127.0.0.1 -p 6432 && \
```
@@ -27,65 +73,154 @@ pgbench -t 1000 -p 6432 -h 127.0.0.1 --protocol extended
See [sharding README](./tests/sharding/README.md) for sharding logic testing.
## Features
| **Feature** | **Tested in CI** | **Tested manually** | **Comments** |
|-----------------------|--------------------|---------------------|--------------------------------------------------------------------------------------------------------------------------|
| Transaction pooling | :heavy_check_mark: | :heavy_check_mark: | Used by default for all tests. |
| Session pooling | :heavy_check_mark: | :heavy_check_mark: | Tested by running pgbench with `--protocol prepared` which only works in session mode. |
| `COPY` | :heavy_check_mark: | :heavy_check_mark: | `pgbench -i` uses `COPY`. `COPY FROM` is tested as well. |
| Query cancellation | :heavy_check_mark: | :heavy_check_mark: | `psql -c 'SELECT pg_sleep(1000);'` and press `Ctrl-C`. |
| Load balancing | :x: | :heavy_check_mark: | We could test this by emitting statistics for each replica and comparing them. |
| Failover | :x: | :heavy_check_mark: | Misconfigure a replica in `pgcat.toml` and watch it forward queries to spares. CI testing could include using Toxiproxy. |
| Sharding | :heavy_check_mark: | :heavy_check_mark: | See `tests/sharding` and `tests/ruby` for a Rails/ActiveRecord example. |
| Statistics reporting | :x: | :heavy_check_mark: | Run `nc -l -u 8125` and watch the stats come in every 15 seconds. |
| Live config reloading | :heavy_check_mark: | :heavy_check_mark: | Run `kill -s SIGHUP $(pgrep pgcat)` and watch the config reload. |
## Usage
### Session mode
In session mode, a client talks to one server for the duration of the connection. Prepared statements, `SET`, and advisory locks are supported. In terms of supported features, there is very little, if any, difference between session mode and talking directly to the server.
To use session mode, change `pool_mode = "session"`.
### Transaction mode
In transaction mode, a client talks to one server for the duration of a single transaction; once it's over, the server is returned to the pool. Prepared statements, `SET`, and advisory locks are not supported; the alternatives are `SET LOCAL` and `pg_advisory_xact_lock`, which are scoped to the transaction. This mode is enabled by default.
### COPY protocol
That one isn't particularly special, but it's good to mention that you can `COPY` data into and out of the server using this pooler.
### Query cancellation
Okay, this is just basic stuff, but we support cancelling queries. If you know the Postgres protocol, this might be surprising given that this is a transactional pooler; if you're new to Pg, don't worry about it, it just works.
### Load balancing of read queries
All queries are load balanced against the configured servers using the round-robin algorithm. The most straightforward configuration example would be to put this pooler in front of several replicas and let it load balance all queries.
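Conceptually, the selection is just a counter over the configured servers. A minimal sketch (names are illustrative; the real counter lives in the connection pool in `src/pool.rs`):
```rust
// Minimal round-robin sketch; illustrative only, not the pooler's actual type.
struct RoundRobin {
    counter: usize,
}

impl RoundRobin {
    /// Pick the next server, cycling through the list evenly.
    /// Panics if `servers` is empty.
    fn next<'a, T>(&mut self, servers: &'a [T]) -> &'a T {
        let server = &servers[self.counter % servers.len()];
        self.counter = self.counter.wrapping_add(1);
        server
    }
}
```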
If the configuration includes a primary and replicas, queries can be separated with the built-in query parser. The query parser inspects each incoming query and routes all `SELECT` queries to a replica, while all other queries, including explicit transactions, are routed to the primary.
The query parser is disabled by default.
#### Query parser
The query parser will do its best to determine where the query should go, but sometimes that's not possible. In that case, the client can select which server it wants using this custom SQL syntax:
```sql
-- To talk to the primary for the duration of the next transaction:
SET SERVER ROLE TO 'primary';
-- To talk to the replica for the duration of the next transaction:
SET SERVER ROLE TO 'replica';
-- Let the query parser decide
SET SERVER ROLE TO 'auto';
-- Pick any server at random
SET SERVER ROLE TO 'any';
-- Reset to default configured settings
SET SERVER ROLE TO 'default';
```
The setting will persist until it's changed again or the client disconnects.
By default, all queries are routed to the first available server; the `default_role` setting controls this behavior.
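For intuition, the read/write decision can be as simple as checking whether every statement in the query parses as a `SELECT`. A rough sketch using the `sqlparser` crate the pooler depends on (the function name here is made up):
```rust
use sqlparser::ast::Statement;
use sqlparser::dialect::PostgreSqlDialect;
use sqlparser::parser::Parser;

// Route to a replica only if every parsed statement is a plain query.
// Anything we can't parse goes to the primary, the safe default.
fn is_read_only(query: &str) -> bool {
    match Parser::parse_sql(&PostgreSqlDialect {}, query) {
        Ok(statements) => statements
            .iter()
            .all(|statement| matches!(statement, Statement::Query(_))),
        Err(_) => false,
    }
}
```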
### Failover
All servers are checked with a `SELECT 1` query before being given to a client. If a server is not reachable, it's banned and cannot serve any more transactions for the duration of the ban; queries are routed to the remaining servers instead. If all servers become banned, the ban list is cleared: this is a safety precaution against false positives. The primary can never be banned.
The ban time can be changed with `ban_time`. The default is 60 seconds.
Banning decreases error rates substantially. Worth noting, though: on busy systems, if the replicas are running too hot, failing over could shift even more load onto the remaining healthy-ish replicas and tip them over. In that case, a decision has to be made: either lose 1/x of your traffic or risk eventually losing it all. Ideally, you overprovision your system so you don't have to make that choice.
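A simplified sketch of the mechanism, using `Instant` instead of the timestamps the pooler actually stores and a plain string where the pooler uses its server address type:
```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Simplified banlist: a server is skipped while its ban is fresh.
struct Banlist {
    ban_time: Duration,
    banned: HashMap<String, Instant>, // server address -> time of ban
}

impl Banlist {
    fn ban(&mut self, address: &str) {
        self.banned.insert(address.to_string(), Instant::now());
    }

    /// True if the server may serve traffic; expired bans are lifted here.
    fn is_available(&mut self, address: &str) -> bool {
        match self.banned.get(address) {
            Some(banned_at) if banned_at.elapsed() < self.ban_time => false,
            Some(_) => {
                self.banned.remove(address); // ban expired, server is back
                true
            }
            None => true,
        }
    }
}
```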
### Sharding
We use the `PARTITION BY HASH` hashing function, the same one Postgres uses for declarative partitioning, applied to `BIGINT` fields (e.g. a `BIGSERIAL` primary key). This allows sharding the database with Postgres partitions while placing the partitions on different servers (shards). Both read and write queries can be routed to the shards using this pooler.
To route queries to a particular shard, we use this custom SQL syntax:
```sql
-- To talk to a shard explicitly
SET SHARD TO '1';
-- To let the pooler choose based on a value
SET SHARDING KEY TO '1234';
```
The sharding key will be hashed and the pooler will select the matching shard for the next transaction. The active shard lasts until it's changed again or the client disconnects. By default, queries are routed to shard 0.
For hash function implementation, see `src/sharding.rs` and `tests/sharding/partition_hash_test_setup.sql`.
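For intuition only, shard selection boils down to `hash(key) % shards`. The sketch below stands in a placeholder hash; the real `Sharder::pg_bigint_hash` replicates Postgres' hash so the pooler's routing agrees with `PARTITION BY HASH` on the server:
```rust
// Illustrative only: the hash below is a stand-in, NOT the
// Postgres-compatible one implemented in src/sharding.rs.
pub struct Sharder {
    shards: usize,
}

impl Sharder {
    pub fn new(shards: usize) -> Sharder {
        Sharder { shards }
    }

    /// Map a BIGINT sharding key to a shard index.
    pub fn shard(&self, key: i64) -> usize {
        let hash = (key as u64).wrapping_mul(0x9E37_79B9_7F4A_7C15); // placeholder hash
        (hash % self.shards as u64) as usize
    }
}
```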
#### ActiveRecord/Rails
```ruby
class User < ActiveRecord::Base
end
# Metadata will be fetched from shard 0
ActiveRecord::Base.establish_connection
# Grab a bunch of users from shard 1
User.connection.execute "SET SHARD TO '1'"
User.take(10)
# Using id as the sharding key
User.connection.execute "SET SHARDING KEY TO '1234'"
User.find_by_id(1234)
# Using geographical sharding
User.connection.execute "SET SERVER ROLE TO 'primary'"
User.connection.execute "SET SHARDING KEY TO '85'"
User.create(name: "test user", email: "test@example.com", zone_id: 85)
# Let the query parser figure out where the query should go.
# We are still on shard = hash(85) % shards.
User.connection.execute "SET SERVER ROLE TO 'auto'"
User.find_by_email("test@example.com")
```
#### Raw SQL
```sql
-- Grab a bunch of users from shard 1
SET SHARD TO '1';
SELECT * FROM users LIMIT 10;
-- Find by id
SET SHARDING KEY TO '1234';
SELECT * FROM users WHERE id = 1234;
-- Writing in a primary/replicas configuration.
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '85';
INSERT INTO users (name, email, zone_id) VALUES ('test user', 'test@example.com', 85);
SET SERVER ROLE TO 'auto'; -- let the query router figure out where the query should go
SELECT * FROM users WHERE email = 'test@example.com'; -- shard setting lasts until set again; the query parser decides the role
```
### Statistics reporting
Stats are reported using StatsD every 15 seconds. The address is configurable with `statsd_address`; the default is `127.0.0.1:8125`. The stats are very similar to what PgBouncer reports, and the names are kept similar so they're comparable.
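For reference, emitting metrics with the `statsd` crate the pooler uses looks roughly like this; the metric names below are illustrative, not the pooler's actual ones:
```rust
use std::{thread, time::Duration};

// Hypothetical reporting loop; the real counters are aggregated from
// per-client and per-server events before being flushed.
fn report_stats() {
    let client = statsd::Client::new("127.0.0.1:8125", "pgcat").unwrap();
    loop {
        client.gauge("servers.active", 1.0); // illustrative metric name
        client.incr("queries"); // illustrative metric name
        thread::sleep(Duration::from_secs(15));
    }
}
```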
### Live configuration reloading
The config can be reloaded by sending a `kill -s SIGHUP` to the process. Not all settings are currently supported by live reload:
| **Config** | **Requires restart** |
|-------------------------|----------------------|
| `host` | yes |
| `port` | yes |
| `pool_mode` | no |
| `connect_timeout` | yes |
| `healthcheck_timeout` | no |
| `ban_time` | no |
| `statsd_address` | yes |
| `user` | yes |
| `shards` | yes |
| `default_role` | no |
| `primary_reads_enabled` | no |
| `query_parser_enabled` | no |
## Missing
1. Authentication, ehem, this proxy is letting anyone in at the moment.
## Benchmarks
You can set up pgbench locally through PgCat:
```
pgbench -h 127.0.0.1 -p 6432 -i
```
Coincidentally, this uses `COPY` so you can test if that works. Additionally, we'll be running the following pgbench configurations:
1. 16 clients, 2 threads
2. 32 clients, 2 threads
3. 64 clients, 2 threads
4. 128 clients, 2 threads
All queries will be `SELECT` only (`-S`) just so disks don't get in the way, since the dataset will be effectively all in RAM.
My setup:
- 8 cores, 16 hyperthreads (AMD Ryzen 5800X)
- 32GB RAM (doesn't matter for this benchmark, except to prove that Postgres will fit the whole dataset into RAM)
### PgBouncer
#### Config
```ini
[databases]
shard0 = host=localhost port=5432 user=sharding_user password=sharding_user
[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
```
Everything else stays default.
#### Runs
```
$ pgbench -t 1000 -c 16 -j 2 -p 6432 -h 127.0.0.1 -S --protocol extended shard0
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 16
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 16000/16000
latency average = 0.155 ms
tps = 103417.377469 (including connections establishing)
tps = 103510.639935 (excluding connections establishing)
$ pgbench -t 1000 -c 32 -j 2 -p 6432 -h 127.0.0.1 -S --protocol extended shard0
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 32
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 32000/32000
latency average = 0.290 ms
tps = 110325.939785 (including connections establishing)
tps = 110386.513435 (excluding connections establishing)
$ pgbench -t 1000 -c 64 -j 2 -p 6432 -h 127.0.0.1 -S --protocol extended shard0
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 64
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 64000/64000
latency average = 0.692 ms
tps = 92470.427412 (including connections establishing)
tps = 92618.389350 (excluding connections establishing)
$ pgbench -t 1000 -c 128 -j 2 -p 6432 -h 127.0.0.1 -S --protocol extended shard0
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 128
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 128000/128000
latency average = 1.406 ms
tps = 91013.429985 (including connections establishing)
tps = 91067.583928 (excluding connections establishing)
```
### PgCat
#### Config
The only thing that matters here is the number of workers in the Tokio pool. Make sure to set it to less than the number of your CPU cores. Also account for hyper-threading: if you have it, divide the number of CPUs you see by two, so that only "real" cores serve requests.
My setup is 8 cores with 16 hyperthreads (`htop` shows 16 CPUs), so I set `worker_threads` in Tokio to 4. Too many, and it starts competing with pgbench, which is also running on the same system.
#### Runs
```
$ pgbench -t 1000 -c 16 -j 2 -p 6432 -h 127.0.0.1 -S --protocol extended
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 16
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 16000/16000
latency average = 0.164 ms
tps = 97705.088232 (including connections establishing)
tps = 97872.216045 (excluding connections establishing)
$ pgbench -t 1000 -c 32 -j 2 -p 6432 -h 127.0.0.1 -S --protocol extended
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 32
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 32000/32000
latency average = 0.288 ms
tps = 111300.488119 (including connections establishing)
tps = 111413.107800 (excluding connections establishing)
$ pgbench -t 1000 -c 64 -j 2 -p 6432 -h 127.0.0.1 -S --protocol extended
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 64
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 64000/64000
latency average = 0.556 ms
tps = 115190.496139 (including connections establishing)
tps = 115247.521295 (excluding connections establishing)
$ pgbench -t 1000 -c 128 -j 2 -p 6432 -h 127.0.0.1 -S --protocol extended
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 128
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 128000/128000
latency average = 1.135 ms
tps = 112770.562239 (including connections establishing)
tps = 112796.502381 (excluding connections establishing)
```
### Direct Postgres
Always good to have a baseline.
#### Runs
```
$ pgbench -t 1000 -c 16 -j 2 -p 5432 -h 127.0.0.1 -S --protocol extended shard0
Password:
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 16
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 16000/16000
latency average = 0.115 ms
tps = 139443.955722 (including connections establishing)
tps = 142314.859075 (excluding connections establishing)
$ pgbench -t 1000 -c 32 -j 2 -p 5432 -h 127.0.0.1 -S --protocol extended shard0
Password:
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 32
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 32000/32000
latency average = 0.212 ms
tps = 150644.840891 (including connections establishing)
tps = 152218.499430 (excluding connections establishing)
$ pgbench -t 1000 -c 64 -j 2 -p 5432 -h 127.0.0.1 -S --protocol extended shard0
Password:
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 64
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 64000/64000
latency average = 0.420 ms
tps = 152517.663404 (including connections establishing)
tps = 153319.188482 (excluding connections establishing)
$ pgbench -t 1000 -c 128 -j 2 -p 5432 -h 127.0.0.1 -S --protocol extended shard0
Password:
starting vacuum...end.
transaction type: <builtin: select only>
scaling factor: 1
query mode: extended
number of clients: 128
number of threads: 2
number of transactions per client: 1000
number of transactions actually processed: 128000/128000
latency average = 0.854 ms
tps = 149818.594087 (including connections establishing)
tps = 150200.603049 (excluding connections establishing)
```
pgcat.toml
@@ -29,6 +29,9 @@ healthcheck_timeout = 1000
# For how long to ban a server if it fails a health check (seconds).
ban_time = 60 # Seconds
# Stats will be sent here
statsd_address = "127.0.0.1:8125"
#
# User to use for authentication against the server.
[user]
@@ -43,26 +46,53 @@ password = "sharding_user"
# Shard 0
[shards.0]
# [ host, port ]
# [ host, port, role ]
servers = [
[ "127.0.0.1", 5432 ],
[ "localhost", 5432 ],
[ "127.0.0.1", 5432, "primary" ],
[ "localhost", 5432, "replica" ],
# [ "127.0.1.1", 5432, "replica" ],
]
# Database name (e.g. "postgres")
database = "shard0"
[shards.1]
# [ host, port ]
# [ host, port, role ]
servers = [
[ "127.0.0.1", 5432 ],
[ "localhost", 5432 ],
[ "127.0.0.1", 5432, "primary" ],
[ "localhost", 5432, "replica" ],
# [ "127.0.1.1", 5432, "replica" ],
]
database = "shard1"
[shards.2]
# [ host, port ]
# [ host, port, role ]
servers = [
[ "127.0.0.1", 5432 ],
[ "localhost", 5432 ],
[ "127.0.0.1", 5432, "primary" ],
[ "localhost", 5432, "replica" ],
# [ "127.0.1.1", 5432, "replica" ],
]
database = "shard2"
# Settings for our query routing layer.
[query_router]
# If the client doesn't specify, route traffic to
# this role by default.
#
# any: round-robin between primary and replicas,
# replica: round-robin between replicas only without touching the primary,
# primary: all queries go to the primary unless otherwise specified.
default_role = "any"
# Query parser. If enabled, we'll attempt to parse
# every incoming query to determine if it's a read or a write.
# If it's a read query, we'll direct it to a replica. Otherwise, if it's a write,
# we'll direct it to the primary.
query_parser_enabled = false
# If the query parser is enabled and this setting is enabled, the primary will be part of the pool of databases used for
# load balancing of read queries. Otherwise, the primary will only be used for write
# queries. The primary can always be explicitly selected with our custom protocol.
primary_reads_enabled = true
src/client.rs
@@ -2,18 +2,23 @@
/// We are pretending to the server in this scenario,
/// and this module implements that.
use bytes::{Buf, BufMut, BytesMut};
use regex::Regex;
use log::error;
use tokio::io::{AsyncReadExt, BufReader};
use tokio::net::tcp::{OwnedReadHalf, OwnedWriteHalf};
use tokio::net::TcpStream;
use tokio::net::{
tcp::{OwnedReadHalf, OwnedWriteHalf},
TcpStream,
};
use std::collections::HashMap;
use crate::config::get_config;
use crate::constants::*;
use crate::errors::Error;
use crate::messages::*;
use crate::pool::{ClientServerMap, ConnectionPool};
use crate::query_router::{Command, QueryRouter};
use crate::server::Server;
use crate::sharding::Sharder;
const SHARDING_REGEX: &str = r"SET SHARDING KEY TO '[0-9]+';";
use crate::stats::Reporter;
/// The client state. One of these is created per client.
pub struct Client {
@@ -43,8 +48,12 @@ pub struct Client {
// to connect and cancel a query.
client_server_map: ClientServerMap,
// sharding regex
sharding_regex: Regex,
// Client parameters, e.g. user, client_encoding, etc.
#[allow(dead_code)]
parameters: HashMap<String, String>,
// Statistics
stats: Reporter,
}
impl Client {
@@ -54,10 +63,12 @@ impl Client {
pub async fn startup(
mut stream: TcpStream,
client_server_map: ClientServerMap,
transaction_mode: bool,
server_info: BytesMut,
stats: Reporter,
) -> Result<Client, Error> {
let sharding_regex = Regex::new(SHARDING_REGEX).unwrap();
let config = get_config();
let transaction_mode = config.general.pool_mode.starts_with("t");
drop(config);
loop {
// Could be StartupMessage or SSLRequest
// which makes this variable length.
@@ -79,7 +90,7 @@ impl Client {
match code {
// Client wants SSL. We don't support it at the moment.
80877103 => {
SSL_REQUEST_CODE => {
let mut no = BytesMut::with_capacity(1);
no.put_u8(b'N');
@@ -87,16 +98,16 @@ impl Client {
}
// Regular startup message.
196608 => {
PROTOCOL_VERSION_NUMBER => {
// TODO: perform actual auth.
// TODO: record startup parameters client sends over.
let parameters = parse_startup(bytes.clone())?;
// Generate random backend ID and secret key
let process_id: i32 = rand::random();
let secret_key: i32 = rand::random();
auth_ok(&mut stream).await?;
server_parameters(&mut stream).await?;
write_all(&mut stream, server_info).await?;
backend_key_data(&mut stream, process_id, secret_key).await?;
ready_for_query(&mut stream).await?;
@@ -113,12 +124,13 @@ impl Client {
process_id: process_id,
secret_key: secret_key,
client_server_map: client_server_map,
sharding_regex: sharding_regex,
parameters: parameters,
stats: stats,
});
}
// Query cancel request.
80877102 => {
CANCEL_REQUEST_CODE => {
let (read, write) = stream.into_split();
let process_id = bytes.get_i32();
@@ -133,7 +145,8 @@ impl Client {
process_id: process_id,
secret_key: secret_key,
client_server_map: client_server_map,
sharding_regex: sharding_regex,
parameters: HashMap::new(),
stats: stats,
});
}
@@ -145,78 +158,132 @@ impl Client {
}
/// Client loop. We handle all messages between the client and the database here.
pub async fn handle(&mut self, pool: ConnectionPool) -> Result<(), Error> {
// Special: cancelling existing running query
pub async fn handle(&mut self, mut pool: ConnectionPool) -> Result<(), Error> {
// The client wants to cancel a query it has issued previously.
if self.cancel_mode {
let (process_id, secret_key, address, port) = {
let guard = self.client_server_map.lock().unwrap();
match guard.get(&(self.process_id, self.secret_key)) {
// Drop the mutex as soon as possible.
// We found the server the client is using for its query
// that it wants to cancel.
Some((process_id, secret_key, address, port)) => (
process_id.clone(),
secret_key.clone(),
address.clone(),
port.clone(),
),
// The client doesn't know / got the wrong server,
// we're closing the connection for security reasons.
None => return Ok(()),
}
};
// TODO: pass actual server host and port somewhere.
// Opens a new separate connection to the server, sends the backend_id
// and secret_key and then closes it for security reasons. No other interactions
// take place.
return Ok(Server::cancel(&address, &port, process_id, secret_key).await?);
}
// Active shard we're talking to.
// The lifetime of this depends on the pool mode:
// - if in session mode, this lives until client disconnects or changes it,
// - if in transaction mode, this lives for the duration of one transaction.
let mut shard: Option<usize> = None;
let mut query_router = QueryRouter::new();
// Our custom protocol loop.
// We expect the client to either start a transaction with regular queries
// or issue commands for our sharding and server selection protocols.
loop {
// Client idle, waiting for messages.
self.stats.client_idle(self.process_id);
// Read a complete message from the client, which normally would be
// either a `Q` (query) or `P` (prepare, extended protocol).
// We can parse it here before grabbing a server from the pool,
// in case the client is sending some control messages, e.g.
// SET sharding_context.key = '1234';
// SET SHARDING KEY TO 'bigint';
let mut message = read_message(&mut self.read).await?;
// Parse for special select shard command.
// SET SHARDING KEY TO 'bigint';
match self.select_shard(message.clone(), pool.shards()).await {
Some(s) => {
set_sharding_key(&mut self.write).await?;
shard = Some(s);
// Handle all custom protocol commands here.
match query_router.try_execute_command(message.clone()) {
// Normal query
None => {
if query_router.query_parser_enabled() && query_router.role() == None {
query_router.infer_role(message.clone());
}
}
Some((Command::SetShard, _)) | Some((Command::SetShardingKey, _)) => {
custom_protocol_response_ok(&mut self.write, &format!("SET SHARD")).await?;
continue;
}
Some((Command::SetServerRole, _)) => {
custom_protocol_response_ok(&mut self.write, "SET SERVER ROLE").await?;
continue;
}
Some((Command::ShowServerRole, value)) => {
show_response(&mut self.write, "server role", &value).await?;
continue;
}
Some((Command::ShowShard, value)) => {
show_response(&mut self.write, "shard", &value).await?;
continue;
}
None => (),
};
// The message is part of the regular protocol.
// self.buffer.put(message);
// Make sure we selected a valid shard.
if query_router.shard() >= pool.shards() {
error_response(
&mut self.write,
&format!(
"shard '{}' is more than configured '{}'",
query_router.shard(),
pool.shards()
),
)
.await?;
continue;
}
// Grab a server from the pool.
// None = any shard
let connection = pool.get(shard).await.unwrap();
let mut proxy = connection.0;
// Waiting for server connection.
self.stats.client_waiting(self.process_id);
// Grab a server from the pool: the client issued a regular query.
let connection = match pool.get(query_router.shard(), query_router.role()).await {
Ok(conn) => conn,
Err(err) => {
error!("Could not get connection from pool: {:?}", err);
error_response(&mut self.write, "could not get connection from the pool")
.await?;
continue;
}
};
let mut reference = connection.0;
let _address = connection.1;
let server = &mut *proxy;
let server = &mut *reference;
// Claim this server as mine for query cancellation.
server.claim(self.process_id, self.secret_key);
// Client active & server active
self.stats.client_active(self.process_id);
self.stats.server_active(server.process_id());
// Transaction loop. Multiple queries can be issued by the client here.
// The connection belongs to the client until the transaction is over,
// or until the client disconnects if we are in session mode.
loop {
// No messages in the buffer, read one.
let mut message = if message.len() == 0 {
match read_message(&mut self.read).await {
Ok(message) => message,
Err(err) => {
// Client disconnected without warning.
if server.in_transaction() {
// TODO: this is what PgBouncer does
// which leads to connection thrashing.
//
// I think we could issue a ROLLBACK here instead.
// server.mark_bad();
// Client left dirty server. Clean up and proceed
// without thrashing this connection.
server.query("ROLLBACK; DISCARD ALL;").await?;
}
@@ -229,16 +296,26 @@ impl Client {
msg
};
let original = message.clone(); // To be forwarded to the server
// The message will be forwarded to the server intact. We still would like to
// parse it below to figure out what to do with it.
let original = message.clone();
let code = message.get_u8() as char;
let _len = message.get_i32() as usize;
match code {
// Query: the simple query protocol.
'Q' => {
// TODO: implement retries here for read-only transactions.
server.send(original).await?;
// Read all data the server has to offer, which can be multiple messages
// buffered in 8196-byte chunks.
loop {
// TODO: implement retries here for read-only transactions.
let response = server.recv().await?;
// Send server reply to the client.
match write_all_half(&mut self.write, response).await {
Ok(_) => (),
Err(err) => {
@@ -252,13 +329,24 @@ impl Client {
}
}
// Release server
if !server.in_transaction() && self.transaction_mode {
shard = None;
break;
// Report query executed statistics.
self.stats.query();
// The transaction is over, we can release the connection back to the pool.
if !server.in_transaction() {
// Report transaction executed statistics.
self.stats.transaction();
// Release server back to the pool if we are in transaction mode.
// If we are in session mode, we keep the server until the client disconnects.
if self.transaction_mode {
self.stats.server_idle(server.process_id());
break;
}
}
}
// Terminate
'X' => {
// Client closing. Rollback and clean up
// connection before releasing into the pool.
@@ -272,32 +360,46 @@ impl Client {
return Ok(());
}
// Parse
// The query with placeholders is here, e.g. `SELECT * FROM users WHERE email = $1 AND active = $2`.
'P' => {
// Extended protocol, let's buffer most of it
self.buffer.put(&original[..]);
}
// Bind
// The placeholder values are here, e.g. 'user@email.com' and 'true'.
'B' => {
self.buffer.put(&original[..]);
}
// Describe
// Command a client can issue to describe a previously prepared named statement.
'D' => {
self.buffer.put(&original[..]);
}
// Execute
// Execute a prepared statement prepared in `P` and bound in `B`.
'E' => {
self.buffer.put(&original[..]);
}
// Sync
// Frontend (client) is asking for the query result now.
'S' => {
// Extended protocol, client requests sync
self.buffer.put(&original[..]);
// TODO: retries for read-only transactions.
server.send(self.buffer.clone()).await?;
self.buffer.clear();
// Read all data the server has to offer, which can be multiple messages
// buffered in 8196-byte chunks.
loop {
// TODO: retries for read-only transactions
let response = server.recv().await?;
match write_all_half(&mut self.write, response).await {
Ok(_) => (),
Err(err) => {
@@ -311,10 +413,18 @@ impl Client {
}
}
// Release server
if !server.in_transaction() && self.transaction_mode {
shard = None;
break;
// Report query executed statistics.
self.stats.query();
// Release server back to the pool if we are in transaction mode.
// If we are in session mode, we keep the server until the client disconnects.
if !server.in_transaction() {
self.stats.transaction();
if self.transaction_mode {
self.stats.server_idle(server.process_id());
break;
}
}
}
@@ -325,10 +435,13 @@ impl Client {
server.send(original).await?;
}
// CopyDone or CopyFail
// Copy is done, successfully or not.
'c' | 'f' => {
// Copy is done.
server.send(original).await?;
let response = server.recv().await?;
match write_all_half(&mut self.write, response).await {
Ok(_) => (),
Err(err) => {
@@ -337,53 +450,40 @@ impl Client {
}
};
// Release the server
if !server.in_transaction() && self.transaction_mode {
println!("Releasing after copy done");
shard = None;
break;
// Release server back to the pool if we are in transaction mode.
// If we are in session mode, we keep the server until the client disconnects.
if !server.in_transaction() {
self.stats.transaction();
if self.transaction_mode {
self.stats.server_idle(server.process_id());
break;
}
}
}
// Some unexpected message. We either did not implement the protocol correctly
// or this is not a Postgres client we're talking to.
_ => {
println!(">>> Unexpected code: {}", code);
error!("Unexpected code: {}", code);
}
}
}
// The server is no longer bound to us; we can't cancel its queries anymore.
self.release();
}
}
/// Release the server from being mine. I can't cancel its queries anymore.
pub fn release(&mut self) {
pub fn release(&self) {
let mut guard = self.client_server_map.lock().unwrap();
guard.remove(&(self.process_id, self.secret_key));
}
}
async fn select_shard(&mut self, mut buf: BytesMut, shards: usize) -> Option<usize> {
let code = buf.get_u8() as char;
match code {
'Q' => (),
// 'P' => (),
_ => return None,
};
let len = buf.get_i32();
let query = String::from_utf8_lossy(&buf[..len as usize - 4 - 1]).to_ascii_uppercase(); // Don't read the terminating null
if self.sharding_regex.is_match(&query) {
let shard = query.split("'").collect::<Vec<&str>>()[1];
match shard.parse::<i64>() {
Ok(shard) => {
let sharder = Sharder::new(shards);
Some(sharder.pg_bigint_hash(shard))
}
Err(_) => None,
}
} else {
None
}
impl Drop for Client {
fn drop(&mut self) {
self.stats.client_disconnecting(self.process_id);
}
}
src/config.rs
@@ -1,16 +1,59 @@
use arc_swap::{ArcSwap, Guard};
use log::{error, info};
use once_cell::sync::Lazy;
use serde_derive::Deserialize;
use tokio::fs::File;
use tokio::io::AsyncReadExt;
use toml;
use std::collections::HashMap;
use std::collections::{HashMap, HashSet};
use std::sync::Arc;
use crate::errors::Error;
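// The live config. ArcSwap lets every reader grab a cheap snapshot while
// the SIGHUP handler atomically swaps in a freshly parsed config.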
static CONFIG: Lazy<ArcSwap<Config>> = Lazy::new(|| ArcSwap::from_pointee(Config::default()));
#[derive(Clone, PartialEq, Deserialize, Hash, std::cmp::Eq, Debug, Copy)]
pub enum Role {
Primary,
Replica,
}
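// A `None` role acts as a wildcard: comparing a Role against an Option<Role>
// (or vice versa) matches any role when the option is None.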
impl PartialEq<Option<Role>> for Role {
fn eq(&self, other: &Option<Role>) -> bool {
match other {
None => true,
Some(role) => *self == *role,
}
}
}
impl PartialEq<Role> for Option<Role> {
fn eq(&self, other: &Role) -> bool {
match *self {
None => true,
Some(role) => role == *other,
}
}
}
#[derive(Clone, PartialEq, Hash, std::cmp::Eq, Debug)]
pub struct Address {
pub host: String,
pub port: String,
pub shard: usize,
pub role: Role,
}
impl Default for Address {
fn default() -> Address {
Address {
host: String::from("127.0.0.1"),
port: String::from("5432"),
shard: 0,
role: Role::Replica,
}
}
}
#[derive(Clone, PartialEq, Hash, std::cmp::Eq, Deserialize, Debug)]
@@ -19,6 +62,15 @@ pub struct User {
pub password: String,
}
impl Default for User {
fn default() -> User {
User {
name: String::from("postgres"),
password: String::new(),
}
}
}
#[derive(Deserialize, Debug, Clone)]
pub struct General {
pub host: String,
@@ -28,28 +80,99 @@ pub struct General {
pub connect_timeout: u64,
pub healthcheck_timeout: u64,
pub ban_time: i64,
pub statsd_address: String,
}
impl Default for General {
fn default() -> General {
General {
host: String::from("localhost"),
port: 5432,
pool_size: 15,
pool_mode: String::from("transaction"),
connect_timeout: 5000,
healthcheck_timeout: 1000,
ban_time: 60,
statsd_address: String::from("127.0.0.1:8125"),
}
}
}
#[derive(Deserialize, Debug, Clone)]
pub struct Shard {
pub servers: Vec<(String, u16)>,
pub servers: Vec<(String, u16, String)>,
pub database: String,
}
impl Default for Shard {
fn default() -> Shard {
Shard {
servers: vec![(String::from("localhost"), 5432, String::from("primary"))],
database: String::from("postgres"),
}
}
}
#[derive(Deserialize, Debug, Clone)]
pub struct QueryRouter {
pub default_role: String,
pub query_parser_enabled: bool,
pub primary_reads_enabled: bool,
}
impl Default for QueryRouter {
fn default() -> QueryRouter {
QueryRouter {
default_role: String::from("any"),
query_parser_enabled: false,
primary_reads_enabled: true,
}
}
}
#[derive(Deserialize, Debug, Clone)]
pub struct Config {
pub general: General,
pub user: User,
pub shards: HashMap<String, Shard>,
pub query_router: QueryRouter,
}
pub async fn parse(path: &str) -> Result<Config, Error> {
// let path = Path::new(path);
impl Default for Config {
fn default() -> Config {
Config {
general: General::default(),
user: User::default(),
shards: HashMap::from([(String::from("1"), Shard::default())]),
query_router: QueryRouter::default(),
}
}
}
impl Config {
pub fn show(&self) {
info!("Pool size: {}", self.general.pool_size);
info!("Pool mode: {}", self.general.pool_mode);
info!("Ban time: {}s", self.general.ban_time);
info!(
"Healthcheck timeout: {}ms",
self.general.healthcheck_timeout
);
info!("Connection timeout: {}ms", self.general.connect_timeout);
}
}
pub fn get_config() -> Guard<Arc<Config>> {
CONFIG.load()
}
/// Parse the config.
pub async fn parse(path: &str) -> Result<(), Error> {
let mut contents = String::new();
let mut file = match File::open(path).await {
Ok(file) => file,
Err(err) => {
println!("> Config error: {:?}", err);
error!("Could not open '{}': {}", path, err.to_string());
return Err(Error::BadConfig);
}
};
@@ -57,7 +180,7 @@ pub async fn parse(path: &str) -> Result<Config, Error> {
match file.read_to_string(&mut contents).await {
Ok(_) => (),
Err(err) => {
println!("> Config error: {:?}", err);
error!("Could not read config file: {}", err.to_string());
return Err(Error::BadConfig);
}
};
@@ -65,12 +188,84 @@ pub async fn parse(path: &str) -> Result<Config, Error> {
let config: Config = match toml::from_str(&contents) {
Ok(config) => config,
Err(err) => {
println!("> Config error: {:?}", err);
error!("Could not parse config file: {}", err.to_string());
return Err(Error::BadConfig);
}
};
Ok(config)
// Quick config sanity check.
for shard in &config.shards {
// We use addresses as unique identifiers,
// let's make sure they are unique in the config as well.
let mut dup_check = HashSet::new();
let mut primary_count = 0;
match shard.0.parse::<usize>() {
Ok(_) => (),
Err(_) => {
error!(
"Shard '{}' is not a valid number, shards must be numbered starting at 0",
shard.0
);
return Err(Error::BadConfig);
}
};
if shard.1.servers.len() == 0 {
error!("Shard {} has no servers configured", shard.0);
return Err(Error::BadConfig);
}
for server in &shard.1.servers {
dup_check.insert(server);
// Check that we define only zero or one primary.
match server.2.as_ref() {
"primary" => primary_count += 1,
_ => (),
};
// Check role spelling.
match server.2.as_ref() {
"primary" => (),
"replica" => (),
_ => {
error!(
"Shard {} server role must be either 'primary' or 'replica', got: '{}'",
shard.0, server.2
);
return Err(Error::BadConfig);
}
};
}
if primary_count > 1 {
error!("Shard {} has more than on primary configured", &shard.0);
return Err(Error::BadConfig);
}
if dup_check.len() != shard.1.servers.len() {
error!("Shard {} contains duplicate server configs", &shard.0);
return Err(Error::BadConfig);
}
}
match config.query_router.default_role.as_ref() {
"any" => (),
"primary" => (),
"replica" => (),
other => {
error!(
"Query router default_role must be 'primary', 'replica', or 'any', got: '{}'",
other
);
return Err(Error::BadConfig);
}
};
CONFIG.store(Arc::new(config.clone()));
Ok(())
}
#[cfg(test)]
@@ -79,9 +274,11 @@ mod test {
#[tokio::test]
async fn test_config() {
let config = parse("pgcat.toml").await.unwrap();
assert_eq!(config.general.pool_size, 15);
assert_eq!(config.shards.len(), 3);
assert_eq!(config.shards["1"].servers[0].0, "127.0.0.1");
parse("pgcat.toml").await.unwrap();
assert_eq!(get_config().general.pool_size, 15);
assert_eq!(get_config().shards.len(), 3);
assert_eq!(get_config().shards["1"].servers[0].0, "127.0.0.1");
assert_eq!(get_config().shards["0"].servers[0].2, "primary");
assert_eq!(get_config().query_router.default_role, "any");
}
}
src/constants.rs (new file, 22 lines)
@@ -0,0 +1,22 @@
/// Various protocol constants, as defined in
/// https://www.postgresql.org/docs/12/protocol-message-formats.html
/// and elsewhere in the source code.
/// Also other constants we use elsewhere.
// Used in the StartupMessage to indicate regular handshake.
pub const PROTOCOL_VERSION_NUMBER: i32 = 196608;
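// 196608 is (3 << 16) | 0, i.e. wire protocol version 3.0.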
// SSLRequest: used to indicate we want an SSL connection.
pub const SSL_REQUEST_CODE: i32 = 80877103;
// CancelRequest: the cancel request code.
pub const CANCEL_REQUEST_CODE: i32 = 80877102;
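// Both special codes above are (1234 << 16) | 567x: deliberately invalid
// version numbers that no real protocol version will ever use.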
// AuthenticationMD5Password
pub const MD5_ENCRYPTED_PASSWORD: i32 = 5;
// AuthenticationOk
pub const AUTHENTICATION_SUCCESSFUL: i32 = 0;
// ErrorResponse: A code identifying the field type; if zero, this is the message terminator and no string follows.
pub const MESSAGE_TERMINATOR: u8 = 0;
src/errors.rs
@@ -8,4 +8,5 @@ pub enum Error {
// ServerTimeout,
// DirtyServer,
BadConfig,
AllServersDown,
}
src/main.rs
@@ -1,123 +1,251 @@
// PgCat, a PostgreSQL pooler with load balancing, failover, and sharding support.
// Copyright (C) 2022 Lev Kokotov <lev@levthe.dev>
// Copyright (c) 2022 Lev Kokotov <lev@levthe.dev>
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
// Permission is hereby granted, free of charge, to any person obtaining
// a copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to
// permit persons to whom the Software is furnished to do so, subject to
// the following conditions:
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
// The above copyright notice and this permission notice shall be
// included in all copies or substantial portions of the Software.
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
// EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
// NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
// LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
// OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
// WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
extern crate arc_swap;
extern crate async_trait;
extern crate bb8;
extern crate bytes;
extern crate env_logger;
extern crate log;
extern crate md5;
extern crate num_cpus;
extern crate once_cell;
extern crate serde;
extern crate serde_derive;
extern crate sqlparser;
extern crate statsd;
extern crate tokio;
extern crate toml;
use log::{error, info};
use tokio::net::TcpListener;
use tokio::{
signal,
signal::unix::{signal as unix_signal, SignalKind},
sync::mpsc,
};
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
mod client;
mod config;
mod constants;
mod errors;
mod messages;
mod pool;
mod query_router;
mod server;
mod sharding;
mod stats;
// Support for query cancellation: this maps our process_ids and
// secret keys to the backend's.
use config::get_config;
use pool::{ClientServerMap, ConnectionPool};
use stats::{Collector, Reporter};
/// Main!
#[tokio::main]
#[tokio::main(worker_threads = 4)]
async fn main() {
println!("> Welcome to PgCat! Meow.");
env_logger::init();
info!("Welcome to PgCat! Meow.");
let config = match config::parse("pgcat.toml").await {
Ok(config) => config,
// Prepare regexes
if !query_router::QueryRouter::setup() {
error!("Could not setup query router");
return;
}
let args = std::env::args().collect::<Vec<String>>();
let config_file = if args.len() == 2 {
args[1].to_string()
} else {
String::from("pgcat.toml")
};
// Prepare the config
match config::parse(&config_file).await {
Ok(_) => (),
Err(err) => {
println!("> Config parse error: {:?}", err);
error!("Config parse error: {:?}", err);
return;
}
};
let config = get_config();
let addr = format!("{}:{}", config.general.host, config.general.port);
let listener = match TcpListener::bind(&addr).await {
Ok(sock) => sock,
Err(err) => {
println!("> Error: {:?}", err);
error!("Listener socket error: {:?}", err);
return;
}
};
println!("> Running on {}", addr);
info!("Running on {}", addr);
config.show();
// Tracks which client is connected to which server for query cancellation.
let client_server_map: ClientServerMap = Arc::new(Mutex::new(HashMap::new()));
println!("> Pool size: {}", config.general.pool_size);
println!("> Pool mode: {}", config.general.pool_mode);
println!("> Ban time: {}s", config.general.ban_time);
println!(
"> Healthcheck timeout: {}ms",
config.general.healthcheck_timeout
);
// Collect statistics and send them to StatsD
let (tx, rx) = mpsc::channel(100);
let pool = ConnectionPool::from_config(config.clone(), client_server_map.clone()).await;
let transaction_mode = config.general.pool_mode == "transaction";
tokio::task::spawn(async move {
let mut stats_collector = Collector::new(rx);
stats_collector.collect().await;
});
println!("> Waiting for clients...");
let mut pool =
ConnectionPool::from_config(client_server_map.clone(), Reporter::new(tx.clone())).await;
loop {
let pool = pool.clone();
let client_server_map = client_server_map.clone();
let server_info = match pool.validate().await {
Ok(info) => info,
Err(err) => {
error!("Could not validate connection pool: {:?}", err);
return;
}
};
let (socket, addr) = match listener.accept().await {
Ok((socket, addr)) => (socket, addr),
Err(err) => {
println!("> Listener: {:?}", err);
continue;
}
};
info!("Waiting for clients");
// Client goes to another thread, bye.
tokio::task::spawn(async move {
println!(
">> Client {:?} connected, transaction pooling: {}",
addr, transaction_mode
);
match client::Client::startup(socket, client_server_map, transaction_mode).await {
Ok(mut client) => {
println!(">> Client {:?} authenticated successfully!", addr);
match client.handle(pool).await {
Ok(()) => {
println!(">> Client {:?} disconnected.", addr);
}
Err(err) => {
println!(">> Client disconnected with error: {:?}", err);
client.release();
}
}
}
// Main app runs here.
tokio::task::spawn(async move {
loop {
let pool = pool.clone();
let client_server_map = client_server_map.clone();
let server_info = server_info.clone();
let reporter = Reporter::new(tx.clone());
let (socket, addr) = match listener.accept().await {
Ok((socket, addr)) => (socket, addr),
Err(err) => {
println!(">> Error: {:?}", err);
error!("{:?}", err);
continue;
}
};
});
}
// Client goes to another thread, bye.
tokio::task::spawn(async move {
let start = chrono::offset::Utc::now().naive_utc();
match client::Client::startup(socket, client_server_map, server_info, reporter)
.await
{
Ok(mut client) => {
info!("Client {:?} connected", addr);
match client.handle(pool).await {
Ok(()) => {
let duration = chrono::offset::Utc::now().naive_utc() - start;
info!(
"Client {:?} disconnected, session duration: {}",
addr,
format_duration(&duration)
);
}
Err(err) => {
error!("Client disconnected with error: {:?}", err);
client.release();
}
}
}
Err(err) => {
error!("Client failed to login: {:?}", err);
}
};
});
}
});
// Reload config
// kill -SIGHUP $(pgrep pgcat)
tokio::task::spawn(async move {
let mut stream = unix_signal(SignalKind::hangup()).unwrap();
loop {
stream.recv().await;
info!("Reloading config");
match config::parse("pgcat.toml").await {
Ok(_) => {
get_config().show();
}
Err(err) => {
error!("{:?}", err);
return;
}
};
}
});
// Setup shut down sequence
match signal::ctrl_c().await {
Ok(()) => {
info!("Shutting down...");
}
Err(err) => {
error!("Unable to listen for shutdown signal: {}", err);
}
};
}
/// Format chrono::Duration to be more human-friendly.
///
/// # Arguments
///
/// * `duration` - A duration of time
fn format_duration(duration: &chrono::Duration) -> String {
let seconds = {
let seconds = duration.num_seconds() % 60;
if seconds < 10 {
format!("0{}", seconds)
} else {
format!("{}", seconds)
}
};
let minutes = {
let minutes = duration.num_minutes() % 60;
if minutes < 10 {
format!("0{}", minutes)
} else {
format!("{}", minutes)
}
};
let hours = {
let hours = duration.num_hours() % 24;
if hours < 10 {
format!("0{}", hours)
} else {
format!("{}", hours)
}
};
let days = duration.num_days().to_string();
format!("{}d {}:{}:{}", days, hours, minutes, seconds)
}
src/messages.rs
@@ -1,18 +1,17 @@
use bytes::{BufMut, BytesMut};
/// Helper functions to send one-off protocol messages
/// and handle TcpStream (TCP socket).
use bytes::{Buf, BufMut, BytesMut};
use md5::{Digest, Md5};
use tokio::io::{AsyncReadExt, AsyncWriteExt, BufReader};
use tokio::net::tcp::{OwnedReadHalf, OwnedWriteHalf};
use tokio::net::TcpStream;
use tokio::net::{
tcp::{OwnedReadHalf, OwnedWriteHalf},
TcpStream,
};
use std::collections::HashMap;
use crate::errors::Error;
// This is a funny one. `psql` parses this to figure out which
// queries to send when using shortcuts, e.g. \d+.
//
// TODO: Actually get the version from the server itself.
//
const SERVER_VERSION: &str = "12.9 (Ubuntu 12.9-0ubuntu0.20.04.1)";
/// Tell the client that authentication handshake completed successfully.
pub async fn auth_ok(stream: &mut TcpStream) -> Result<(), Error> {
let mut auth_ok = BytesMut::with_capacity(9);
@@ -24,29 +23,6 @@ pub async fn auth_ok(stream: &mut TcpStream) -> Result<(), Error> {
Ok(write_all(stream, auth_ok).await?)
}
/// Send server parameters to the client. This will tell the client
/// what server version and what's the encoding we're using.
pub async fn server_parameters(stream: &mut TcpStream) -> Result<(), Error> {
let client_encoding = BytesMut::from(&b"client_encoding\0UTF8\0"[..]);
let server_version =
BytesMut::from(&format!("server_version\0{}\0", SERVER_VESION).as_bytes()[..]);
// Client encoding
let len = client_encoding.len() as i32 + 4; // TODO: add more parameters here
let mut res = BytesMut::with_capacity(64);
res.put_u8(b'S');
res.put_i32(len);
res.put_slice(&client_encoding[..]);
let len = server_version.len() as i32 + 4;
res.put_u8(b'S');
res.put_i32(len);
res.put_slice(&server_version[..]);
Ok(write_all(stream, res).await?)
}
/// Give the client the process_id and secret we generated
/// used in query cancellation.
pub async fn backend_key_data(
@@ -62,6 +38,17 @@ pub async fn backend_key_data(
Ok(write_all(stream, key_data).await?)
}
#[allow(dead_code)]
pub fn simple_query(query: &str) -> BytesMut {
let mut res = BytesMut::from(&b"Q"[..]);
let query = format!("{}\0", query);
res.put_i32(query.len() as i32 + 4);
res.put_slice(&query.as_bytes());
res
}
/// Tell the client we're ready for another query.
pub async fn ready_for_query(stream: &mut TcpStream) -> Result<(), Error> {
let mut bytes = BytesMut::with_capacity(5);
@@ -104,6 +91,51 @@ pub async fn startup(stream: &mut TcpStream, user: &str, database: &str) -> Resu
}
}
/// Parse StartupMessage parameters.
/// e.g. user, database, application_name, etc.
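/// For example, the payload "user\0alice\0database\0prod\0\0" parses to
/// {"user": "alice", "database": "prod"}.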
pub fn parse_startup(mut bytes: BytesMut) -> Result<HashMap<String, String>, Error> {
let mut result = HashMap::new();
let mut buf = Vec::new();
let mut tmp = String::new();
while bytes.has_remaining() {
let mut c = bytes.get_u8();
// Null-terminated C-strings.
while c != 0 {
tmp.push(c as char);
c = bytes.get_u8();
}
if tmp.len() > 0 {
buf.push(tmp.clone());
tmp.clear();
}
}
// Expect pairs of name and value
// and at least one pair to be present.
if buf.len() % 2 != 0 || buf.len() < 2 {
return Err(Error::ClientBadStartup);
}
let mut i = 0;
while i < buf.len() {
let name = buf[i].clone();
let value = buf[i + 1].clone();
let _ = result.insert(name, value);
i += 2;
}
// Minimum required parameters
// I want to have the user at the very minimum, according to the protocol spec.
if !result.contains_key("user") {
return Err(Error::ClientBadStartup);
}
Ok(result)
}
/// Send password challenge response to the server.
/// This is the MD5 challenge.
pub async fn md5_password(
@@ -131,6 +163,7 @@ pub async fn md5_password(
password.push(0);
let mut message = BytesMut::with_capacity(password.len() as usize + 5);
message.put_u8(b'p');
message.put_i32(password.len() as i32 + 4);
message.put_slice(&password[..]);
@@ -138,16 +171,151 @@ pub async fn md5_password(
Ok(write_all(stream, message).await?)
}
pub async fn set_sharding_key(stream: &mut OwnedWriteHalf) -> Result<(), Error> {
/// Implements a response to our custom `SET SHARDING KEY`
/// and `SET SERVER ROLE` commands.
/// This tells the client we're ready for the next query.
pub async fn custom_protocol_response_ok(
stream: &mut OwnedWriteHalf,
message: &str,
) -> Result<(), Error> {
let mut res = BytesMut::with_capacity(25);
let set_complete = BytesMut::from(&"SET SHARDING KEY\0"[..]);
let set_complete = BytesMut::from(&format!("{}\0", message)[..]);
let len = (set_complete.len() + 4) as i32;
// CommandComplete
res.put_u8(b'C');
res.put_i32(len);
res.put_slice(&set_complete[..]);
// ReadyForQuery (idle)
res.put_u8(b'Z');
res.put_i32(5);
res.put_u8(b'I');
write_all_half(stream, res).await
}
/// Send a custom error message to the client.
/// Tell the client we are ready for the next query and no rollback is necessary.
/// Docs on error codes: https://www.postgresql.org/docs/12/errcodes-appendix.html
pub async fn error_response(stream: &mut OwnedWriteHalf, message: &str) -> Result<(), Error> {
let mut error = BytesMut::new();
// Error level
error.put_u8(b'S');
error.put_slice(&b"FATAL\0"[..]);
// Error level (non-translatable)
error.put_u8(b'V');
error.put_slice(&b"FATAL\0"[..]);
// Error code: not sure how much this matters.
error.put_u8(b'C');
error.put_slice(&b"58000\0"[..]); // system_error, see Appendix A.
// The short error message.
error.put_u8(b'M');
error.put_slice(&format!("{}\0", message).as_bytes());
// No more fields follow.
error.put_u8(0);
// Ready for query, no rollback needed (I = idle).
let mut ready_for_query = BytesMut::new();
ready_for_query.put_u8(b'Z');
ready_for_query.put_i32(5);
ready_for_query.put_u8(b'I');
// Compose the two message reply.
let mut res = BytesMut::with_capacity(error.len() + ready_for_query.len() + 5);
res.put_u8(b'E');
res.put_i32(error.len() as i32 + 4);
res.put(error);
res.put(ready_for_query);
Ok(write_all_half(stream, res).await?)
}
/// Respond to a SHOW SHARD command.
pub async fn show_response(
stream: &mut OwnedWriteHalf,
name: &str,
value: &str,
) -> Result<(), Error> {
// A SELECT response consists of:
// 1. RowDescription
// 2. One or more DataRow
// 3. CommandComplete
// 4. ReadyForQuery
// RowDescription
let mut row_desc = BytesMut::new();
// Number of columns: 1
row_desc.put_i16(1);
// Column name
row_desc.put_slice(&format!("{}\0", name).as_bytes());
// Table OID: doesn't belong to any table
row_desc.put_i32(0);
// Column attribute number: not a real column
row_desc.put_i16(0);
// Text
row_desc.put_i32(25);
// Text size = variable (-1)
row_desc.put_i16(-1);
// Type modifier: none that I know
row_desc.put_i32(0);
// Format being used: text (0), binary (1)
row_desc.put_i16(0);
// DataRow
let mut data_row = BytesMut::new();
// Number of columns
data_row.put_i16(1);
// Size of the column content (length of the string really)
data_row.put_i32(value.len() as i32);
// The content
data_row.put_slice(value.as_bytes());
// CommandComplete
let mut command_complete = BytesMut::new();
// The command tag; "SELECT 1" tells the client one row was returned.
command_complete.put_slice(&b"SELECT 1\0"[..]);
// The final messages sent to the client
let mut res = BytesMut::new();
// RowDescription
res.put_u8(b'T');
res.put_i32(row_desc.len() as i32 + 4);
res.put(row_desc);
// DataRow
res.put_u8(b'D');
res.put_i32(data_row.len() as i32 + 4);
res.put(data_row);
// CommandComplete
res.put_u8(b'C');
res.put_i32(command_complete.len() as i32 + 4);
res.put(command_complete);
// ReadyForQuery
res.put_u8(b'Z');
res.put_i32(5);
res.put_u8(b'I');
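// Hypothetical usage (the handler around it is assumed): answer `SHOW SHARD`
// with the query router's currently selected shard.
show_response(&mut write_half, "shard", &query_router.shard().to_string()).await?;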


@@ -1,36 +1,39 @@
/// Pooling, failover, and the banlist.
use async_trait::async_trait;
use bb8::{ManageConnection, Pool, PooledConnection};
use bytes::BytesMut;
use chrono::naive::NaiveDateTime;
use log::{error, info, warn};
use crate::config::{Address, Config, User};
use crate::config::{get_config, Address, Role, User};
use crate::errors::Error;
use crate::server::Server;
use crate::stats::Reporter;
use std::collections::HashMap;
use std::sync::{
atomic::{AtomicUsize, Ordering},
Arc, Mutex,
};
use std::sync::{Arc, Mutex};
use std::time::Instant;
// Banlist: bad servers go in here.
pub type BanList = Arc<Mutex<Vec<HashMap<Address, NaiveDateTime>>>>;
pub type Counter = Arc<AtomicUsize>;
pub type ClientServerMap = Arc<Mutex<HashMap<(i32, i32), (i32, i32, String, String)>>>;
#[derive(Clone, Debug)]
pub struct ConnectionPool {
databases: Vec<Vec<Pool<ServerPool>>>,
addresses: Vec<Vec<Address>>,
round_robin: Counter,
round_robin: usize,
banlist: BanList,
healthcheck_timeout: u64,
ban_time: i64,
stats: Reporter,
}
impl ConnectionPool {
/// Construct the connection pool from a config file.
pub async fn from_config(config: Config, client_server_map: ClientServerMap) -> ConnectionPool {
pub async fn from_config(
client_server_map: ClientServerMap,
stats: Reporter,
) -> ConnectionPool {
let config = get_config();
let mut shards = Vec::new();
let mut addresses = Vec::new();
let mut banlist = Vec::new();
@@ -42,15 +45,26 @@ impl ConnectionPool {
.collect::<Vec<String>>();
shard_ids.sort_by_key(|k| k.parse::<i64>().unwrap());
for shard in shard_ids {
let shard = &config.shards[&shard];
for shard_idx in shard_ids {
let shard = &config.shards[&shard_idx];
let mut pools = Vec::new();
let mut replica_addresses = Vec::new();
for server in &shard.servers {
for server in shard.servers.iter() {
let role = match server.2.as_ref() {
"primary" => Role::Primary,
"replica" => Role::Replica,
_ => {
error!("Config error: server role can be 'primary' or 'replica', have: '{}'. Defaulting to 'replica'.", server.2);
Role::Replica
}
};
let address = Address {
host: server.0.clone(),
port: server.1.to_string(),
role: role,
shard: shard_idx.parse::<usize>().unwrap(),
};
let manager = ServerPool::new(
@@ -58,6 +72,7 @@ impl ConnectionPool {
config.user.clone(),
&shard.database,
client_server_map.clone(),
stats.clone(),
);
let pool = Pool::builder()
@@ -79,92 +94,162 @@ impl ConnectionPool {
banlist.push(HashMap::new());
}
assert_eq!(shards.len(), addresses.len());
let address_len = addresses.len();
ConnectionPool {
databases: shards,
addresses: addresses,
round_robin: Arc::new(AtomicUsize::new(0)),
round_robin: rand::random::<usize>() % address_len, // Start at a random replica
banlist: Arc::new(Mutex::new(banlist)),
healthcheck_timeout: config.general.healthcheck_timeout,
ban_time: config.general.ban_time,
stats: stats,
}
}
/// Connect to all shards and grab server information.
/// Return server information we will pass to the clients
/// when they connect.
/// This also warms up the pool for clients that connect when
/// the pooler starts up.
pub async fn validate(&mut self) -> Result<BytesMut, Error> {
let mut server_infos = Vec::new();
for shard in 0..self.shards() {
for _ in 0..self.servers(shard) {
let connection = match self.get(shard, None).await {
Ok(conn) => conn,
Err(err) => {
error!("Shard {} down or misconfigured: {:?}", shard, err);
continue;
}
};
let mut proxy = connection.0;
let _address = connection.1;
let server = &mut *proxy;
server_infos.push(server.server_info());
}
}
// TODO: compare server information to make sure
// all shards are running identical configurations.
if server_infos.len() == 0 {
return Err(Error::AllServersDown);
}
Ok(server_infos[0].clone())
}
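// Hypothetical startup wiring (not shown in this diff): build the pool once,
// warm it up, and keep the server parameters to greet connecting clients with.
let mut pool = ConnectionPool::from_config(client_server_map.clone(), reporter.clone()).await;
let server_info = pool.validate().await?;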
/// Get a connection from the pool.
pub async fn get(
&self,
shard: Option<usize>,
&mut self,
shard: usize,
role: Option<Role>,
) -> Result<(PooledConnection<'_, ServerPool>, Address), Error> {
// Set this to false to gain ~3-4% speed.
let with_health_check = true;
let now = Instant::now();
let addresses = &self.addresses[shard];
let shard = match shard {
Some(shard) => shard,
None => 0, // TODO: pick a shard at random
let mut allowed_attempts = match role {
// Primary-specific queries get one attempt, if the primary is down,
// nothing we should do about it I think. It's dangerous to retry
// write queries.
Some(Role::Primary) => 1,
// Replicas get to try as many times as there are replicas
// and connections in the pool.
_ => addresses.len(),
};
loop {
let index =
self.round_robin.fetch_add(1, Ordering::SeqCst) % self.databases[shard].len();
let address = self.addresses[shard][index].clone();
let exists = match role {
Some(role) => addresses.iter().filter(|addr| addr.role == role).count() > 0,
None => true,
};
if self.is_banned(&address, shard) {
if !exists {
error!("Requested role {:?}, but none are configured", role);
return Err(Error::BadConfig);
}
while allowed_attempts > 0 {
// Round-robin each client's queries.
// If a client only sends one query and then disconnects, it doesn't matter
// which replica it'll go to.
self.round_robin += 1;
let index = self.round_robin % addresses.len();
let address = &addresses[index];
// Make sure you're getting a primary or a replica
// as per request. If no specific role is requested, the first
// available will be chosen.
if address.role != role {
continue;
}
allowed_attempts -= 1;
if self.is_banned(address, shard, role) {
continue;
}
// Check if we can connect
// TODO: implement query wait timeout, i.e. time to get a conn from the pool
let mut conn = match self.databases[shard][index].get().await {
Ok(conn) => conn,
Err(err) => {
println!(">> Banning replica {}, error: {:?}", index, err);
self.ban(&address, shard);
error!("Banning replica {}, error: {:?}", index, err);
self.ban(address, shard);
continue;
}
};
if !with_health_check {
return Ok((conn, address));
}
// Check if this server is alive with a health check.
let server = &mut *conn;
let healthcheck_timeout = get_config().general.healthcheck_timeout;
self.stats.server_tested(server.process_id());
match tokio::time::timeout(
tokio::time::Duration::from_millis(self.healthcheck_timeout),
tokio::time::Duration::from_millis(healthcheck_timeout),
server.query("SELECT 1"),
)
.await
{
// Check if health check succeeded
Ok(res) => match res {
Ok(_) => return Ok((conn, address)),
Ok(_) => {
self.stats.checkout_time(now.elapsed().as_micros());
self.stats.server_idle(conn.process_id());
return Ok((conn, address.clone()));
}
Err(_) => {
println!(
">> Banning replica {} because of failed health check",
index
);
self.ban(&address, shard);
error!("Banning replica {} because of failed health check", index);
// Don't leave a bad connection in the pool.
server.mark_bad();
self.ban(address, shard);
continue;
}
},
// Health check never came back, database is really really down
Err(_) => {
println!(
">> Banning replica {} because of health check timeout",
index
);
self.ban(&address, shard);
error!("Banning replica {} because of health check timeout", index);
// Don't leave a bad connection in the pool.
server.mark_bad();
self.ban(address, shard);
continue;
}
}
}
return Err(Error::AllServersDown);
}
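// Hypothetical checkout (a sketch, error handling elided): route a read to
// shard 0, preferring a replica; retries and banning happen inside `get`.
let (mut conn, _address) = pool.get(0, Some(Role::Replica)).await?;
let server = &mut *conn; // the pooled connection derefs to the underlying Server
server.send(message).await?;
let response = server.recv().await?;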
/// Ban an address (i.e. replica). It will no longer serve
/// traffic for any new transactions. Existing transactions on that replica
/// will finish successfully or error out to the clients.
pub fn ban(&self, address: &Address, shard: usize) {
println!(">> Banning {:?}", address);
error!("Banning {:?}", address);
let now = chrono::offset::Utc::now().naive_utc();
let mut guard = self.banlist.lock().unwrap();
guard[shard].insert(address.clone(), now);
@@ -179,14 +264,25 @@ impl ConnectionPool {
/// Check if a replica can serve traffic. If all replicas are banned,
/// we unban all of them. Better to try than not to.
pub fn is_banned(&self, address: &Address, shard: usize) -> bool {
pub fn is_banned(&self, address: &Address, shard: usize, role: Option<Role>) -> bool {
let replicas_available = match role {
Some(Role::Replica) => self.addresses[shard]
.iter()
.filter(|addr| addr.role == Role::Replica)
.count(),
None => self.addresses[shard].len(),
Some(Role::Primary) => return false, // Primary cannot be banned.
};
// If you're not asking for the primary,
// all databases are treated as replicas.
let mut guard = self.banlist.lock().unwrap();
// Everything is banned, nothing is banned
if guard[shard].len() == self.databases[shard].len() {
// Everything is banned = nothing is banned.
if guard[shard].len() == replicas_available {
guard[shard].clear();
drop(guard);
println!(">> Unbanning all replicas.");
warn!("Unbanning all replicas.");
return false;
}
@@ -194,8 +290,9 @@ impl ConnectionPool {
match guard[shard].get(address) {
Some(timestamp) => {
let now = chrono::offset::Utc::now().naive_utc();
if now.timestamp() - timestamp.timestamp() > self.ban_time {
// 1 minute
let config = get_config();
// Ban expired.
if now.timestamp() - timestamp.timestamp() > config.general.ban_time {
guard[shard].remove(address);
false
} else {
@@ -210,6 +307,10 @@ impl ConnectionPool {
pub fn shards(&self) -> usize {
self.databases.len()
}
pub fn servers(&self, shard: usize) -> usize {
self.addresses[shard].len()
}
}
pub struct ServerPool {
@@ -217,6 +318,7 @@ pub struct ServerPool {
user: User,
database: String,
client_server_map: ClientServerMap,
stats: Reporter,
}
impl ServerPool {
@@ -225,12 +327,14 @@ impl ServerPool {
user: User,
database: &str,
client_server_map: ClientServerMap,
stats: Reporter,
) -> ServerPool {
ServerPool {
address: address,
user: user,
database: database.to_string(),
client_server_map: client_server_map,
stats: stats,
}
}
}
@@ -242,17 +346,36 @@ impl ManageConnection for ServerPool {
/// Attempts to create a new connection.
async fn connect(&self) -> Result<Self::Connection, Self::Error> {
println!(">> Creating a new connection for the pool");
info!(
"Creating a new connection to {:?} using user {:?}",
self.address, self.user.name
);
Server::startup(
&self.address.host,
&self.address.port,
&self.user.name,
&self.user.password,
// Put a temporary process_id into the stats
// for server login.
let process_id = rand::random::<i32>();
self.stats.server_login(process_id);
match Server::startup(
&self.address,
&self.user,
&self.database,
self.client_server_map.clone(),
self.stats.clone(),
)
.await
{
Ok(conn) => {
// Remove the temporary process_id from the stats.
self.stats.server_disconnecting(process_id);
Ok(conn)
}
Err(err) => {
// Remove the temporary process_id from the stats.
self.stats.server_disconnecting(process_id);
Err(err)
}
}
}
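// For context, bb8 calls `connect()` above whenever it needs a fresh server
// connection. A sketch of building such a pool; the database name and
// max_size value are assumptions, not settings taken from this diff:
let manager = ServerPool::new(address.clone(), user.clone(), "mydb", client_server_map.clone(), stats.clone());
let pool = Pool::builder().max_size(15).build(manager).await.unwrap();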
/// Determines if the connection is still connected to the database.

src/query_router.rs (new file, 489 lines)

@@ -0,0 +1,489 @@
use crate::config::{get_config, Role};
use crate::sharding::Sharder;
/// Route queries automatically based on explicitly requested
/// or implied query characteristics.
use bytes::{Buf, BytesMut};
use log::{debug, error};
use once_cell::sync::OnceCell;
use regex::RegexSet;
use sqlparser::ast::Statement::{Query, StartTransaction};
use sqlparser::dialect::PostgreSqlDialect;
use sqlparser::parser::Parser;
const CUSTOM_SQL_REGEXES: [&str; 5] = [
r"(?i)SET SHARDING KEY TO '[0-9]+'",
r"(?i)SET SHARD TO '[0-9]+'",
r"(?i)SHOW SHARD",
r"(?i)SET SERVER ROLE TO '(PRIMARY|REPLICA|ANY|AUTO|DEFAULT)'",
r"(?i)SHOW SERVER ROLE",
];
#[derive(PartialEq, Debug)]
pub enum Command {
SetShardingKey,
SetShard,
ShowShard,
SetServerRole,
ShowServerRole,
}
static CUSTOM_SQL_REGEX_SET: OnceCell<RegexSet> = OnceCell::new();
pub struct QueryRouter {
// By default, queries go here, unless we have better information
// about what the client wants.
default_server_role: Option<Role>,
// Number of shards in the cluster.
shards: usize,
// Which shard we should be talking to right now.
active_shard: Option<usize>,
// Should we be talking to a primary or a replica?
active_role: Option<Role>,
// Include the primary into the replica pool?
primary_reads_enabled: bool,
// Should we try to parse queries?
query_parser_enabled: bool,
}
impl QueryRouter {
pub fn setup() -> bool {
let set = match RegexSet::new(&CUSTOM_SQL_REGEXES) {
Ok(rgx) => rgx,
Err(err) => {
error!("QueryRouter::setup Could not compile regex set: {:?}", err);
return false;
}
};
match CUSTOM_SQL_REGEX_SET.set(set) {
Ok(_) => true,
Err(_) => false,
}
}
pub fn new() -> QueryRouter {
let config = get_config();
let default_server_role = match config.query_router.default_role.as_ref() {
"any" => None,
"primary" => Some(Role::Primary),
"replica" => Some(Role::Replica),
_ => unreachable!(),
};
QueryRouter {
default_server_role: default_server_role,
shards: config.shards.len(),
active_role: default_server_role,
active_shard: None,
primary_reads_enabled: config.query_router.primary_reads_enabled,
query_parser_enabled: config.query_router.query_parser_enabled,
}
}
/// Try to parse a command and execute it.
pub fn try_execute_command(&mut self, mut buf: BytesMut) -> Option<(Command, String)> {
let code = buf.get_u8() as char;
if code != 'Q' {
return None;
}
let len = buf.get_i32() as usize;
let query = String::from_utf8_lossy(&buf[..len - 5]).to_string(); // Ignore the terminating NULL.
let regex_set = match CUSTOM_SQL_REGEX_SET.get() {
Some(regex_set) => regex_set,
None => return None,
};
let matches: Vec<_> = regex_set.matches(&query).into_iter().collect();
if matches.len() != 1 {
return None;
}
let command = match matches[0] {
0 => Command::SetShardingKey,
1 => Command::SetShard,
2 => Command::ShowShard,
3 => Command::SetServerRole,
4 => Command::ShowServerRole,
_ => unreachable!(),
};
let mut value = match command {
Command::SetShardingKey | Command::SetShard | Command::SetServerRole => {
query.split("'").collect::<Vec<&str>>()[1].to_string()
}
Command::ShowShard => self.shard().to_string(),
Command::ShowServerRole => match self.active_role {
Some(Role::Primary) => String::from("primary"),
Some(Role::Replica) => String::from("replica"),
None => {
if self.query_parser_enabled {
String::from("auto")
} else {
String::from("any")
}
}
},
};
match command {
Command::SetShardingKey => {
let sharder = Sharder::new(self.shards);
let shard = sharder.pg_bigint_hash(value.parse::<i64>().unwrap());
self.active_shard = Some(shard);
value = shard.to_string();
}
Command::SetShard => {
self.active_shard = Some(value.parse::<usize>().unwrap());
}
Command::SetServerRole => {
self.active_role = match value.to_ascii_lowercase().as_ref() {
"primary" => {
self.query_parser_enabled = false;
Some(Role::Primary)
}
"replica" => {
self.query_parser_enabled = false;
Some(Role::Replica)
}
"any" => {
self.query_parser_enabled = false;
None
}
"auto" => {
self.query_parser_enabled = true;
None
}
"default" => {
// TODO: reset query parser to default here.
self.active_role = self.default_server_role;
self.query_parser_enabled = get_config().query_router.query_parser_enabled;
self.active_role
}
_ => unreachable!(),
};
}
_ => (),
}
Some((command, value))
}
/// Try to infer which server to connect to based on the contents of the query.
pub fn infer_role(&mut self, mut buf: BytesMut) -> bool {
let code = buf.get_u8() as char;
let len = buf.get_i32() as usize;
let query = match code {
'Q' => String::from_utf8_lossy(&buf[..len - 5]).to_string(),
'P' => {
let mut start = 0;
let mut end;
// Skip the name of the prepared statement.
while start < buf.len() && buf[start] != 0 {
start += 1;
}
start += 1; // Skip terminating null
// Find the end of the prepared stmt (\0)
end = start;
while end < buf.len() && buf[end] != 0 {
end += 1;
}
let query = String::from_utf8_lossy(&buf[start..end]).to_string();
query.replace("$", "") // Remove placeholders turning them into "values"
}
_ => return false,
};
let ast = match Parser::parse_sql(&PostgreSqlDialect {}, &query) {
Ok(ast) => ast,
Err(err) => {
debug!("{:?}, query: {}", err, query);
return false;
}
};
if ast.len() == 0 {
return false;
}
match ast[0] {
// All transactions go to the primary, probably a write.
StartTransaction { .. } => {
self.active_role = Some(Role::Primary);
}
// Likely a read-only query
Query { .. } => {
self.active_role = match self.primary_reads_enabled {
false => Some(Role::Replica), // If primary should not be receiving reads, use a replica.
true => None, // Any server role is fine in this case.
}
}
// Likely a write
_ => {
self.active_role = Some(Role::Primary);
}
};
true
}
/// Get the current desired server role we should be talking to.
pub fn role(&self) -> Option<Role> {
self.active_role
}
/// Get desired shard we should be talking to.
pub fn shard(&self) -> usize {
match self.active_shard {
Some(shard) => shard,
None => 0,
}
}
/// Reset the router back to defaults.
/// This must be called at the end of every transaction in transaction mode.
pub fn _reset(&mut self) {
self.active_role = self.default_server_role;
self.active_shard = None;
}
/// Should we attempt to parse queries?
pub fn query_parser_enabled(&self) -> bool {
self.query_parser_enabled
}
#[allow(dead_code)]
pub fn toggle_primary_reads(&mut self, value: bool) {
self.primary_reads_enabled = value;
}
}
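// A sketch of one routing pass tying the router to the connection pool; the
// surrounding client loop and the "SET" command tag are assumptions:
match query_router.try_execute_command(message.clone()) {
    // Custom command: answer locally, never touch a server.
    Some((_command, _value)) => custom_protocol_response_ok(&mut write_half, "SET").await?,
    None => {
        if query_router.query_parser_enabled() {
            query_router.infer_role(message.clone());
        }
        let (mut conn, _address) = pool.get(query_router.shard(), query_router.role()).await?;
        conn.send(message).await?;
    }
}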
#[cfg(test)]
mod test {
use super::*;
use crate::messages::simple_query;
use bytes::BufMut;
#[test]
fn test_defaults() {
QueryRouter::setup();
let qr = QueryRouter::new();
assert_eq!(qr.role(), None);
}
#[test]
fn test_infer_role_replica() {
QueryRouter::setup();
let mut qr = QueryRouter::new();
assert!(qr.try_execute_command(simple_query("SET SERVER ROLE TO 'auto'")) != None);
assert_eq!(qr.query_parser_enabled(), true);
qr.toggle_primary_reads(false);
let queries = vec![
simple_query("SELECT * FROM items WHERE id = 5"),
simple_query(
"SELECT id, name, value FROM items INNER JOIN prices ON item.id = prices.item_id",
),
simple_query("WITH t AS (SELECT * FROM items) SELECT * FROM t"),
];
for query in queries {
// It's a recognized query
assert!(qr.infer_role(query));
assert_eq!(qr.role(), Some(Role::Replica));
}
}
#[test]
fn test_infer_role_primary() {
QueryRouter::setup();
let mut qr = QueryRouter::new();
let queries = vec![
simple_query("UPDATE items SET name = 'pumpkin' WHERE id = 5"),
simple_query("INSERT INTO items (id, name) VALUES (5, 'pumpkin')"),
simple_query("DELETE FROM items WHERE id = 5"),
simple_query("BEGIN"), // Transaction start
];
for query in queries {
// It's a recognized query
assert!(qr.infer_role(query));
assert_eq!(qr.role(), Some(Role::Primary));
}
}
#[test]
fn test_infer_role_primary_reads_enabled() {
QueryRouter::setup();
let mut qr = QueryRouter::new();
let query = simple_query("SELECT * FROM items WHERE id = 5");
qr.toggle_primary_reads(true);
assert!(qr.infer_role(query));
assert_eq!(qr.role(), None);
}
#[test]
fn test_infer_role_parse_prepared() {
QueryRouter::setup();
let mut qr = QueryRouter::new();
qr.try_execute_command(simple_query("SET SERVER ROLE TO 'auto'"));
qr.toggle_primary_reads(false);
let prepared_stmt = BytesMut::from(
&b"WITH t AS (SELECT * FROM items WHERE name = $1) SELECT * FROM t WHERE id = $2\0"[..],
);
let mut res = BytesMut::from(&b"P"[..]);
res.put_i32(prepared_stmt.len() as i32 + 4 + 1 + 2);
res.put_u8(0);
res.put(prepared_stmt);
res.put_i16(0);
assert!(qr.infer_role(res));
assert_eq!(qr.role(), Some(Role::Replica));
}
#[test]
fn test_regex_set() {
QueryRouter::setup();
let tests = [
// Upper case
"SET SHARDING KEY TO '1'",
"SET SHARD TO '1'",
"SHOW SHARD",
"SET SERVER ROLE TO 'replica'",
"SET SERVER ROLE TO 'primary'",
"SET SERVER ROLE TO 'any'",
"SET SERVER ROLE TO 'auto'",
"SHOW SERVER ROLE",
// Lower case
"set sharding key to '1'",
"set shard to '1'",
"show shard",
"set server role to 'replica'",
"set server role to 'primary'",
"set server role to 'any'",
"set server role to 'auto'",
"show server role",
];
let set = CUSTOM_SQL_REGEX_SET.get().unwrap();
for test in &tests {
let matches: Vec<_> = set.matches(test).into_iter().collect();
assert_eq!(matches.len(), 1);
}
}
#[test]
fn test_try_execute_command() {
QueryRouter::setup();
let mut qr = QueryRouter::new();
// SetShardingKey
let query = simple_query("SET SHARDING KEY TO '13'");
assert_eq!(
qr.try_execute_command(query),
Some((Command::SetShardingKey, String::from("1")))
);
assert_eq!(qr.shard(), 1);
// SetShard
let query = simple_query("SET SHARD TO '1'");
assert_eq!(
qr.try_execute_command(query),
Some((Command::SetShard, String::from("1")))
);
assert_eq!(qr.shard(), 1);
// ShowShard
let query = simple_query("SHOW SHARD");
assert_eq!(
qr.try_execute_command(query),
Some((Command::ShowShard, String::from("1")))
);
// SetServerRole
let roles = ["primary", "replica", "any", "auto", "primary"];
let verify_roles = [
Some(Role::Primary),
Some(Role::Replica),
None,
None,
Some(Role::Primary),
];
let query_parser_enabled = [false, false, false, true, false];
for (idx, role) in roles.iter().enumerate() {
let query = simple_query(&format!("SET SERVER ROLE TO '{}'", role));
assert_eq!(
qr.try_execute_command(query),
Some((Command::SetServerRole, String::from(*role)))
);
assert_eq!(qr.role(), verify_roles[idx],);
assert_eq!(qr.query_parser_enabled(), query_parser_enabled[idx],);
// ShowServerRole
let query = simple_query("SHOW SERVER ROLE");
assert_eq!(
qr.try_execute_command(query),
Some((Command::ShowServerRole, String::from(*role)))
);
}
}
#[test]
fn test_enable_query_parser() {
QueryRouter::setup();
let mut qr = QueryRouter::new();
let query = simple_query("SET SERVER ROLE TO 'auto'");
qr.toggle_primary_reads(false);
assert!(qr.try_execute_command(query) != None);
assert!(qr.query_parser_enabled());
assert_eq!(qr.role(), None);
let query = simple_query("INSERT INTO test_table VALUES (1)");
assert_eq!(qr.infer_role(query), true);
assert_eq!(qr.role(), Some(Role::Primary));
let query = simple_query("SELECT * FROM test_table");
assert_eq!(qr.infer_role(query), true);
assert_eq!(qr.role(), Some(Role::Replica));
assert!(qr.query_parser_enabled());
let query = simple_query("SET SERVER ROLE TO 'default'");
assert!(qr.try_execute_command(query) != None);
assert!(!qr.query_parser_enabled());
}
}


@@ -1,43 +1,43 @@
#![allow(dead_code)]
#![allow(unused_variables)]
use bytes::{Buf, BufMut, BytesMut};
///! Implementation of the PostgreSQL server (database) protocol.
///! Here we are pretending to be a Postgres client.
use bytes::{Buf, BufMut, BytesMut};
use log::{error, info};
use tokio::io::{AsyncReadExt, BufReader};
use tokio::net::tcp::{OwnedReadHalf, OwnedWriteHalf};
use tokio::net::TcpStream;
use tokio::net::{
tcp::{OwnedReadHalf, OwnedWriteHalf},
TcpStream,
};
use crate::config::Address;
use crate::config::{Address, User};
use crate::constants::*;
use crate::errors::Error;
use crate::messages::*;
use crate::stats::Reporter;
use crate::ClientServerMap;
/// Server state.
pub struct Server {
// Server host, e.g. localhost
host: String,
// Server host, e.g. localhost,
// port, e.g. 5432, and role, e.g. primary or replica.
address: Address,
// Server port: e.g. 5432
port: String,
// Buffered read socket
// Buffered read socket.
read: BufReader<OwnedReadHalf>,
// Unbuffered write socket (our client code buffers)
// Unbuffered write socket (our client code buffers).
write: OwnedWriteHalf,
// Our server response buffer
// Our server response buffer. We buffer data before we give it to the client.
buffer: BytesMut,
// Server information the server sent us over on startup
// Server information the server sent us over on startup.
server_info: BytesMut,
// Backend id and secret key used for query cancellation.
backend_id: i32,
process_id: i32,
secret_key: i32,
// Is the server inside a transaction at the moment.
// Is the server inside a transaction or idle.
in_transaction: bool,
// Is there more data for the client to read.
@@ -48,34 +48,42 @@ pub struct Server {
// Mapping of clients and servers used for query cancellation.
client_server_map: ClientServerMap,
// Server connected at.
connected_at: chrono::naive::NaiveDateTime,
// Reports various metrics, e.g. data sent & received.
stats: Reporter,
}
impl Server {
/// Pretend to be the Postgres client and connect to the server given host, port and credentials.
/// Perform the authentication and return the server in a ready-for-query mode.
/// Perform the authentication and return the server in a ready for query state.
pub async fn startup(
host: &str,
port: &str,
user: &str,
password: &str,
address: &Address,
user: &User,
database: &str,
client_server_map: ClientServerMap,
stats: Reporter,
) -> Result<Server, Error> {
let mut stream = match TcpStream::connect(&format!("{}:{}", host, port)).await {
Ok(stream) => stream,
Err(err) => {
println!(">> Could not connect to server: {}", err);
return Err(Error::SocketError);
}
};
let mut stream =
match TcpStream::connect(&format!("{}:{}", &address.host, &address.port)).await {
Ok(stream) => stream,
Err(err) => {
error!("Could not connect to server: {}", err);
return Err(Error::SocketError);
}
};
// Send the startup packet.
startup(&mut stream, user, database).await?;
// Send the startup packet telling the server we're a normal Postgres client.
startup(&mut stream, &user.name, database).await?;
let mut server_info = BytesMut::with_capacity(25);
let mut backend_id: i32 = 0;
let mut server_info = BytesMut::new();
let mut process_id: i32 = 0;
let mut secret_key: i32 = 0;
// We'll be handling multiple packets, but they will all be structured the same.
// We'll loop here until this exchange is complete.
loop {
let code = match stream.read_u8().await {
Ok(code) => code as char,
@@ -88,16 +96,18 @@ impl Server {
};
match code {
// Authentication
'R' => {
// Auth can proceed
let code = match stream.read_i32().await {
Ok(code) => code,
// Determine which kind of authentication is required, if any.
let auth_code = match stream.read_i32().await {
Ok(auth_code) => auth_code,
Err(_) => return Err(Error::SocketError),
};
match code {
// MD5
5 => {
match auth_code {
MD5_ENCRYPTED_PASSWORD => {
// The salt is 4 bytes.
// See: https://www.postgresql.org/docs/12/protocol-message-formats.html
let mut salt = vec![0u8; 4];
match stream.read_exact(&mut salt).await {
@@ -105,19 +115,20 @@ impl Server {
Err(_) => return Err(Error::SocketError),
};
md5_password(&mut stream, user, password, &salt[..]).await?;
md5_password(&mut stream, &user.name, &user.password, &salt[..])
.await?;
}
// Authentication handshake complete.
0 => (),
AUTHENTICATION_SUCCESSFUL => (),
_ => {
println!(">> Unsupported authentication mechanism: {}", code);
error!("Unsupported authentication mechanism: {}", auth_code);
return Err(Error::ServerError);
}
}
}
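// For reference, the answer `md5_password` must produce is
// "md5" + md5_hex(md5_hex(password + username) + salt). A sketch with the
// md5 crate (the crate choice is an assumption, the algorithm is standard):
let inner = format!("{:x}", md5::compute(format!("{}{}", user.password, user.name)));
let mut salted = inner.into_bytes();
salted.extend_from_slice(&salt[..]); // the 4 salt bytes read above
let answer = format!("md5{:x}", md5::compute(salted));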
// ErrorResponse
'E' => {
let error_code = match stream.read_u8().await {
Ok(error_code) => error_code,
@@ -125,46 +136,62 @@ impl Server {
};
match error_code {
0 => (), // Terminator
// No error message is present in the message.
MESSAGE_TERMINATOR => (),
// An error message will be present.
_ => {
// Read the error message without the terminating null character.
let mut error = vec![0u8; len as usize - 4 - 1];
match stream.read_exact(&mut error).await {
Ok(_) => (),
Err(_) => return Err(Error::SocketError),
};
println!(">> Server error: {}", String::from_utf8_lossy(&error));
// TODO: the error message contains multiple fields; we can decode them and
// present a prettier message to the user.
// See: https://www.postgresql.org/docs/12/protocol-error-fields.html
error!("Server error: {}", String::from_utf8_lossy(&error));
}
};
return Err(Error::ServerError);
}
// ParameterStatus
'S' => {
// Parameter
let mut param = vec![0u8; len as usize - 4];
match stream.read_exact(&mut param).await {
Ok(_) => (),
Err(_) => return Err(Error::SocketError),
};
// Save the parameter so we can pass it to the client later.
// These can be server_encoding, client_encoding, server timezone, Postgres version,
// and many more interesting things we should know about the Postgres server we are talking to.
server_info.put_u8(b'S');
server_info.put_i32(len);
server_info.put_slice(&param[..]);
}
// BackendKeyData
'K' => {
// Query cancellation data.
backend_id = match stream.read_i32().await {
// The frontend must save these values if it wishes to be able to issue CancelRequest messages later.
// See: https://www.postgresql.org/docs/12/protocol-message-formats.html
process_id = match stream.read_i32().await {
Ok(id) => id,
Err(err) => return Err(Error::SocketError),
Err(_) => return Err(Error::SocketError),
};
secret_key = match stream.read_i32().await {
Ok(id) => id,
Err(err) => return Err(Error::SocketError),
Err(_) => return Err(Error::SocketError),
};
}
// ReadyForQuery
'Z' => {
let mut idle = vec![0u8; len as usize - 4];
@@ -173,34 +200,38 @@ impl Server {
Err(_) => return Err(Error::SocketError),
};
// Startup finished
// This is the last step in the client-server connection setup,
// and indicates the server is ready for us to query it.
let (read, write) = stream.into_split();
return Ok(Server {
host: host.to_string(),
port: port.to_string(),
address: address.clone(),
read: BufReader::new(read),
write: write,
buffer: BytesMut::with_capacity(8196),
server_info: server_info,
backend_id: backend_id,
process_id: process_id,
secret_key: secret_key,
in_transaction: false,
data_available: false,
bad: false,
client_server_map: client_server_map,
connected_at: chrono::offset::Utc::now().naive_utc(),
stats: stats,
});
}
// We have an unexpected message from the server during this exchange.
// Means we implemented the protocol wrong or we're not talking to a Postgres server.
_ => {
println!(">> Unknown code: {}", code);
error!("Unknown code: {}", code);
return Err(Error::ProtocolSyncError);
}
};
}
}
/// Issue a cancellation request to the server.
/// Issue a query cancellation request to the server.
/// Uses a separate connection that's not part of the connection pool.
pub async fn cancel(
host: &str,
@@ -211,33 +242,35 @@ impl Server {
let mut stream = match TcpStream::connect(&format!("{}:{}", host, port)).await {
Ok(stream) => stream,
Err(err) => {
println!(">> Could not connect to server: {}", err);
error!("Could not connect to server: {}", err);
return Err(Error::SocketError);
}
};
let mut bytes = BytesMut::with_capacity(16);
bytes.put_i32(16);
bytes.put_i32(80877102);
bytes.put_i32(CANCEL_REQUEST_CODE);
bytes.put_i32(process_id);
bytes.put_i32(secret_key);
Ok(write_all(&mut stream, bytes).await?)
}
/// Send data to the server from the client.
/// Send messages to the server from the client.
pub async fn send(&mut self, messages: BytesMut) -> Result<(), Error> {
self.stats.data_sent(messages.len());
match write_all_half(&mut self.write, messages).await {
Ok(_) => Ok(()),
Err(err) => {
println!(">> Terminating server because of: {:?}", err);
error!("Terminating server because of: {:?}", err);
self.bad = true;
Err(err)
}
}
}
/// Receive data from the server in response to a client request sent previously.
/// Receive data from the server in response to a client request.
/// This method must be called multiple times while `self.is_data_available()` is true
/// in order to receive all data the server has to offer.
pub async fn recv(&mut self) -> Result<BytesMut, Error> {
@@ -245,86 +278,96 @@ impl Server {
let mut message = match read_message(&mut self.read).await {
Ok(message) => message,
Err(err) => {
println!(">> Terminating server because of: {:?}", err);
error!("Terminating server because of: {:?}", err);
self.bad = true;
return Err(err);
}
};
// Buffer the message we'll forward to the client in a bit.
// Buffer the message we'll forward to the client later.
self.buffer.put(&message[..]);
let code = message.get_u8() as char;
let _len = message.get_i32();
match code {
// ReadyForQuery
'Z' => {
// Ready for query, time to forward buffer to client.
let transaction_state = message.get_u8() as char;
match transaction_state {
// In transaction.
'T' => {
self.in_transaction = true;
}
// Idle, transaction over.
'I' => {
self.in_transaction = false;
}
// Error, client didn't clean up!
// We should drop this server.
// Some error occurred, the transaction was rolled back.
'E' => {
self.in_transaction = true;
}
// Something totally unexpected, this is not a Postgres server we know.
_ => {
self.bad = true;
return Err(Error::ProtocolSyncError);
}
};
// There is no more data available from the server.
self.data_available = false;
break;
}
// DataRow
'D' => {
// More data is available after this message, this is not the end of the reply.
self.data_available = true;
// Don't flush yet, the more we buffer, the faster this goes.
// Up to a limit of course.
// Don't flush yet, the more we buffer, the faster this goes...
// up to a limit of course.
if self.buffer.len() >= 8196 {
break;
}
}
// CopyInResponse: copy is starting from client to server
// CopyInResponse: copy is starting from client to server.
'G' => break,
// CopyOutResponse: copy is starting from the server to the client
// CopyOutResponse: copy is starting from the server to the client.
'H' => {
self.data_available = true;
break;
}
// CopyData
// CopyData: we are not buffering this one because there will be many more
// and we don't know how big this packet could be, best not to take a risk.
'd' => break,
// CopyDone
'c' => {
self.data_available = false;
// Buffer until ReadyForQuery shows up
}
// Buffer until ReadyForQuery shows up, so don't exit the loop yet.
'c' => (),
_ => {
// Keep buffering,
}
// Anything else, e.g. errors, notices, etc.
// Keep buffering until ReadyForQuery shows up.
_ => (),
};
}
let bytes = self.buffer.clone();
// Keep track of how much data we got from the server for stats.
self.stats.data_received(bytes.len());
// Clear the buffer for next query.
self.buffer.clear();
// Pass the data back to the client.
Ok(bytes)
}
@@ -354,7 +397,7 @@ impl Server {
/// Indicate that this server connection cannot be re-used and must be discarded.
pub fn mark_bad(&mut self) {
println!(">> Server marked bad");
error!("Server marked bad");
self.bad = true;
}
@@ -364,20 +407,20 @@ impl Server {
guard.insert(
(process_id, secret_key),
(
self.backend_id,
self.process_id,
self.secret_key,
self.host.clone(),
self.port.clone(),
self.address.host.clone(),
self.address.port.clone(),
),
);
}
/// Execute an arbitrary query against the server.
/// It will use the Simple query protocol.
/// It will use the simple query protocol.
/// Result will not be returned, so this is useful for things like `SET` or `ROLLBACK`.
pub async fn query(&mut self, query: &str) -> Result<(), Error> {
let mut query = BytesMut::from(&query.as_bytes()[..]);
query.put_u8(0);
query.put_u8(0); // C-string terminator (NULL character).
let len = query.len() as i32 + 4;
@@ -388,8 +431,10 @@ impl Server {
msg.put_slice(&query[..]);
self.send(msg).await?;
loop {
let _ = self.recv().await?;
if !self.data_available {
break;
}
@@ -399,16 +444,48 @@ impl Server {
}
/// A shorthand for `SET application_name = $1`.
#[allow(dead_code)]
pub async fn set_name(&mut self, name: &str) -> Result<(), Error> {
Ok(self
.query(&format!("SET application_name = '{}'", name))
.await?)
}
/// Get the servers address.
#[allow(dead_code)]
pub fn address(&self) -> Address {
Address {
host: self.host.to_string(),
port: self.port.to_string(),
}
self.address.clone()
}
pub fn process_id(&self) -> i32 {
self.process_id
}
}
impl Drop for Server {
/// Try to do a clean shut down. Best effort because
/// the socket is in non-blocking mode, so it may not be ready
/// for a write.
fn drop(&mut self) {
self.stats.server_disconnecting(self.process_id());
let mut bytes = BytesMut::with_capacity(4);
bytes.put_u8(b'X');
bytes.put_i32(4);
match self.write.try_write(&bytes) {
Ok(_) => (),
Err(_) => (),
};
self.bad = true;
let now = chrono::offset::Utc::now().naive_utc();
let duration = now - self.connected_at;
info!(
"Server connection closed, session duration: {}",
crate::format_duration(&duration)
);
}
}


@@ -18,7 +18,7 @@ impl Sharder {
let mut lohalf = key as u32;
let hihalf = (key >> 32) as u32;
lohalf ^= if key >= 0 { hihalf } else { !hihalf };
Self::combine(0, Self::pg_u32_hash(lohalf)) as usize % self.shards as usize
Self::combine(0, Self::pg_u32_hash(lohalf)) as usize % self.shards
}
#[inline]

src/stats.rs (new file, 348 lines)

@@ -0,0 +1,348 @@
use log::info;
use statsd::Client;
/// Events collector and publisher.
use tokio::sync::mpsc::{Receiver, Sender};
use std::collections::HashMap;
use std::time::Instant;
use crate::config::get_config;
#[derive(Debug, Clone, Copy)]
enum EventName {
CheckoutTime,
Query,
Transaction,
DataSent,
DataReceived,
ClientWaiting,
ClientActive,
ClientIdle,
ClientDisconnecting,
ServerActive,
ServerIdle,
ServerTested,
ServerLogin,
ServerDisconnecting,
}
#[derive(Debug)]
pub struct Event {
name: EventName,
value: i64,
process_id: Option<i32>,
}
#[derive(Clone, Debug)]
pub struct Reporter {
tx: Sender<Event>,
}
impl Reporter {
pub fn new(tx: Sender<Event>) -> Reporter {
Reporter { tx: tx }
}
pub fn query(&self) {
let statistic = Event {
name: EventName::Query,
value: 1,
process_id: None,
};
let _ = self.tx.try_send(statistic);
}
pub fn transaction(&self) {
let statistic = Event {
name: EventName::Transaction,
value: 1,
process_id: None,
};
let _ = self.tx.try_send(statistic);
}
pub fn data_sent(&self, amount: usize) {
let statistic = Event {
name: EventName::DataSent,
value: amount as i64,
process_id: None,
};
let _ = self.tx.try_send(statistic);
}
pub fn data_received(&self, amount: usize) {
let statistic = Event {
name: EventName::DataReceived,
value: amount as i64,
process_id: None,
};
let _ = self.tx.try_send(statistic);
}
pub fn checkout_time(&self, ms: u128) {
let statistic = Event {
name: EventName::CheckoutTime,
value: ms as i64,
process_id: None,
};
let _ = self.tx.try_send(statistic);
}
pub fn client_waiting(&self, process_id: i32) {
let statistic = Event {
name: EventName::ClientWaiting,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
pub fn client_active(&self, process_id: i32) {
let statistic = Event {
name: EventName::ClientActive,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
pub fn client_idle(&self, process_id: i32) {
let statistic = Event {
name: EventName::ClientIdle,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
pub fn client_disconnecting(&self, process_id: i32) {
let statistic = Event {
name: EventName::ClientDisconnecting,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
pub fn server_active(&self, process_id: i32) {
let statistic = Event {
name: EventName::ServerActive,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
pub fn server_idle(&self, process_id: i32) {
let statistic = Event {
name: EventName::ServerIdle,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
pub fn server_login(&self, process_id: i32) {
let statistic = Event {
name: EventName::ServerLogin,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
pub fn server_tested(&self, process_id: i32) {
let statistic = Event {
name: EventName::ServerTested,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
pub fn server_disconnecting(&self, process_id: i32) {
let statistic = Event {
name: EventName::ServerDisconnecting,
value: 1,
process_id: Some(process_id),
};
let _ = self.tx.try_send(statistic);
}
}
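// A sketch of wiring a Reporter to the Collector below over a tokio mpsc
// channel; the channel capacity and task layout are assumptions:
let (tx, rx) = tokio::sync::mpsc::channel::<Event>(100_000);
let reporter = Reporter::new(tx);
let mut collector = Collector::new(rx);
tokio::task::spawn(async move { collector.collect().await });
reporter.query(); // fire-and-forget: try_send drops events when the buffer is full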
pub struct Collector {
rx: Receiver<Event>,
client: Client,
}
impl Collector {
pub fn new(rx: Receiver<Event>) -> Collector {
Collector {
rx: rx,
client: Client::new(&get_config().general.statsd_address, "pgcat").unwrap(),
}
}
pub async fn collect(&mut self) {
info!("Events reporter started");
let mut stats = HashMap::from([
("total_query_count", 0),
("total_xact_count", 0),
("total_sent", 0),
("total_received", 0),
("total_wait_time", 0),
("maxwait_us", 0),
("maxwait", 0),
("cl_waiting", 0),
("cl_active", 0),
("cl_idle", 0),
("sv_idle", 0),
("sv_active", 0),
("sv_login", 0),
("sv_tested", 0),
]);
let mut client_server_states: HashMap<i32, EventName> = HashMap::new();
let mut now = Instant::now();
loop {
let stat = match self.rx.recv().await {
Some(stat) => stat,
None => {
info!("Events collector is shutting down");
return;
}
};
// Some are counters, some are gauges...
match stat.name {
EventName::Query => {
let counter = stats.entry("total_query_count").or_insert(0);
*counter += stat.value;
}
EventName::Transaction => {
let counter = stats.entry("total_xact_count").or_insert(0);
*counter += stat.value;
}
EventName::DataSent => {
let counter = stats.entry("total_sent").or_insert(0);
*counter += stat.value;
}
EventName::DataReceived => {
let counter = stats.entry("total_received").or_insert(0);
*counter += stat.value;
}
EventName::CheckoutTime => {
let counter = stats.entry("total_wait_time").or_insert(0);
*counter += stat.value;
let counter = stats.entry("maxwait_us").or_insert(0);
// Report max time here
if stat.value > *counter {
*counter = stat.value;
}
let counter = stats.entry("maxwait").or_insert(0);
let seconds = *counter / 1_000_000;
if seconds > *counter {
*counter = seconds;
}
}
EventName::ClientActive
| EventName::ClientWaiting
| EventName::ClientIdle
| EventName::ServerActive
| EventName::ServerIdle
| EventName::ServerTested
| EventName::ServerLogin => {
client_server_states.insert(stat.process_id.unwrap(), stat.name);
}
EventName::ClientDisconnecting | EventName::ServerDisconnecting => {
client_server_states.remove(&stat.process_id.unwrap());
}
};
// It's been 15 seconds. If there is no traffic, it won't publish anything,
// but it also doesn't matter then.
if now.elapsed().as_secs() > 15 {
for (_, state) in &client_server_states {
match state {
EventName::ClientActive => {
let counter = stats.entry("cl_active").or_insert(0);
*counter += 1;
}
EventName::ClientWaiting => {
let counter = stats.entry("cl_waiting").or_insert(0);
*counter += 1;
}
EventName::ClientIdle => {
let counter = stats.entry("cl_idle").or_insert(0);
*counter += 1;
}
EventName::ServerIdle => {
let counter = stats.entry("sv_idle").or_insert(0);
*counter += 1;
}
EventName::ServerActive => {
let counter = stats.entry("sv_active").or_insert(0);
*counter += 1;
}
EventName::ServerTested => {
let counter = stats.entry("sv_tested").or_insert(0);
*counter += 1;
}
EventName::ServerLogin => {
let counter = stats.entry("sv_login").or_insert(0);
*counter += 1;
}
_ => unreachable!(),
};
}
info!("{:?}", stats);
let mut pipeline = self.client.pipeline();
for (key, value) in stats.iter_mut() {
pipeline.gauge(key, *value as f64);
*value = 0;
}
pipeline.send(&self.client);
now = Instant::now();
}
}
}
}

tests/ruby/.ruby-version (new file, 1 line)

@@ -0,0 +1 @@
2.7.1

tests/ruby/Gemfile (new file, 4 lines)

@@ -0,0 +1,4 @@
source "https://rubygems.org"
gem "pg"
gem "activerecord"

tests/ruby/Gemfile.lock (new file, 30 lines)

@@ -0,0 +1,30 @@
GEM
remote: https://rubygems.org/
specs:
activemodel (7.0.2.2)
activesupport (= 7.0.2.2)
activerecord (7.0.2.2)
activemodel (= 7.0.2.2)
activesupport (= 7.0.2.2)
activesupport (7.0.2.2)
concurrent-ruby (~> 1.0, >= 1.0.2)
i18n (>= 1.6, < 2)
minitest (>= 5.1)
tzinfo (~> 2.0)
concurrent-ruby (1.1.9)
i18n (1.10.0)
concurrent-ruby (~> 1.0)
minitest (5.15.0)
pg (1.3.2)
tzinfo (2.0.4)
concurrent-ruby (~> 1.0)
PLATFORMS
x86_64-linux
DEPENDENCIES
activerecord
pg
BUNDLED WITH
2.3.7


@@ -1,11 +1,112 @@
require 'pg'
# frozen_string_literal: true
conn = PG.connect(host: '127.0.0.1', port: 5433, dbname: 'test')
require 'active_record'
conn.exec( "SELECT * FROM pg_stat_activity" ) do |result|
puts " PID | User | Query"
result.each do |row|
puts " %7d | %-16s | %s " %
row.values_at('pid', 'usename', 'query')
# Uncomment these two to see all queries.
# ActiveRecord.verbose_query_logs = true
# ActiveRecord::Base.logger = Logger.new(STDOUT)
ActiveRecord::Base.establish_connection(
adapter: 'postgresql',
host: '127.0.0.1',
port: 6432,
username: 'sharding_user',
password: 'sharding_user',
database: 'rails_dev',
prepared_statements: false, # Transaction mode
advisory_locks: false # Same
)
class TestSafeTable < ActiveRecord::Base
self.table_name = 'test_safe_table'
end
class ShouldNeverHappenException < RuntimeError
end
class CreateSafeShardedTable < ActiveRecord::Migration[7.0]
# Disable transactions or things will fly out of order!
disable_ddl_transaction!
SHARDS = 3
def up
SHARDS.times do |x|
# This will make this migration reversible!
connection.execute "SET SHARD TO '#{x.to_i}'"
connection.execute "SET SERVER ROLE TO 'primary'"
connection.execute <<-SQL
CREATE TABLE test_safe_table (
id BIGINT PRIMARY KEY,
name VARCHAR,
description TEXT
) PARTITION BY HASH (id);
CREATE TABLE test_safe_table_data PARTITION OF test_safe_table
FOR VALUES WITH (MODULUS #{SHARDS.to_i}, REMAINDER #{x.to_i});
SQL
end
end
end
def down
SHARDS.times do |x|
connection.execute "SET SHARD TO '#{x.to_i}'"
connection.execute "SET SERVER ROLE TO 'primary'"
connection.execute 'DROP TABLE test_safe_table CASCADE'
end
end
end
SHARDS = 3
2.times do
begin
CreateSafeShardedTable.migrate(:down)
rescue Exception
puts "Tables don't exist yet"
end
CreateSafeShardedTable.migrate(:up)
SHARDS.times do |x|
TestSafeTable.connection.execute "SET SHARD TO '#{x.to_i}'"
TestSafeTable.connection.execute "SET SERVER ROLE TO 'primary'"
TestSafeTable.connection.execute "TRUNCATE #{TestSafeTable.table_name}"
end
# Equivalent to Makara's stick_to_master! except it sticks until it's changed.
TestSafeTable.connection.execute "SET SERVER ROLE TO 'primary'"
200.times do |x|
x += 1 # Postgres ids start at 1
TestSafeTable.connection.execute "SET SHARDING KEY TO '#{x.to_i}'"
TestSafeTable.create(id: x, name: "something_special_#{x.to_i}", description: "It's a surprise!")
end
TestSafeTable.connection.execute "SET SERVER ROLE TO 'replica'"
100.times do |x|
x += 1 # 0 confuses our sharding function
TestSafeTable.connection.execute "SET SHARDING KEY TO '#{x.to_i}'"
TestSafeTable.find_by_id(x).id
end
# Will use the query parser to direct reads to replicas
TestSafeTable.connection.execute "SET SERVER ROLE TO 'auto'"
100.times do |x|
x += 101
TestSafeTable.connection.execute "SET SHARDING KEY TO '#{x.to_i}'"
TestSafeTable.find_by_id(x).id
end
end
# Test wrong shard
TestSafeTable.connection.execute "SET SHARD TO '1'"
begin
TestSafeTable.create(id: 5, name: 'test', description: 'test description')
raise ShouldNeverHappenException.new('Uh oh')
rescue ActiveRecord::StatementInvalid
puts 'OK'
end


@@ -23,4 +23,4 @@ SELECT * FROM shard_0 ORDER BY id LIMIT 10;
SELECT * FROM shard_1 ORDER BY id LIMIT 10;
SELECT * FROM shard_2 ORDER BY id LIMIT 10;
SELECT * FROM shard_3 ORDER BY id LIMIT 10;
SELECT * FROM shard_4 ORDER BY id LIMIT 10;
SELECT * FROM shard_4 ORDER BY id LIMIT 10;


@@ -1,12 +1,19 @@
#!/bin/bash
set -e
# Setup all the shards.
sudo service postgresql restart
# sudo service postgresql restart
psql -f query_routing_setup.sql
echo "Giving Postgres 5 seconds to start up..."
# sleep 5
# psql -f query_routing_setup.sql
psql -h 127.0.0.1 -p 6432 -f query_routing_test_insert.sql
psql -h 127.0.0.1 -p 6432 -f query_routing_test_select.sql
psql -f query_routing_test_validate.sql
psql -e -h 127.0.0.1 -p 6432 -f query_routing_test_primary_replica.sql
psql -f query_routing_test_validate.sql


@@ -58,4 +58,4 @@ GRANT ALL ON TABLE data TO sharding_user;
\c shard2
GRANT ALL ON SCHEMA public TO sharding_user;
GRANT ALL ON TABLE data TO sharding_user;
GRANT ALL ON TABLE data TO sharding_user;


@@ -1,3 +1,5 @@
\set ON_ERROR_STOP on
SET SHARDING KEY TO '1';
INSERT INTO data (id, value) VALUES (1, 'value_1');
@@ -44,4 +46,10 @@ SET SHARDING KEY TO '15';
INSERT INTO data (id, value) VALUES (15, 'value_1');
SET SHARDING KEY TO '16';
INSERT INTO data (id, value) VALUES (16, 'value_1');
INSERT INTO data (id, value) VALUES (16, 'value_1');
set sharding key to '17';
INSERT INTO data (id, value) VALUES (17, 'value_1');
SeT SHaRDInG KeY to '18';
INSERT INTO data (id, value) VALUES (18, 'value_1');


@@ -0,0 +1,153 @@
\set ON_ERROR_STOP on
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '1';
INSERT INTO data (id, value) VALUES (1, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '1';
SELECT * FROM data WHERE id = 1;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '2';
INSERT INTO data (id, value) VALUES (2, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '2';
SELECT * FROM data WHERE id = 2;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '3';
INSERT INTO data (id, value) VALUES (3, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '3';
SELECT * FROM data WHERE id = 3;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '4';
INSERT INTO data (id, value) VALUES (4, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '4';
SELECT * FROM data WHERE id = 4;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '5';
INSERT INTO data (id, value) VALUES (5, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '5';
SELECT * FROM data WHERE id = 5;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '6';
INSERT INTO data (id, value) VALUES (6, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '6';
SELECT * FROM data WHERE id = 6;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '7';
INSERT INTO data (id, value) VALUES (7, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '7';
SELECT * FROM data WHERE id = 7;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '8';
INSERT INTO data (id, value) VALUES (8, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '8';
SELECT * FROM data WHERE id = 8;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '9';
INSERT INTO data (id, value) VALUES (9, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '9';
SELECT * FROM data WHERE id = 9;
---
\set ON_ERROR_STOP on
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '10';
INSERT INTO data (id, value) VALUES (10, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '10';
SELECT * FROM data WHERE id = 10;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '11';
INSERT INTO data (id, value) VALUES (11, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '11';
SELECT * FROM data WHERE id = 11;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '12';
INSERT INTO data (id, value) VALUES (12, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '12';
SELECT * FROM data WHERE id = 12;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '13';
INSERT INTO data (id, value) VALUES (13, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '13';
SELECT * FROM data WHERE id = 13;
---
SET SERVER ROLE TO 'primary';
SET SHARDING KEY TO '14';
INSERT INTO data (id, value) VALUES (14, 'value_1');
SET SERVER ROLE TO 'replica';
SET SHARDING KEY TO '14';
SELECT * FROM data WHERE id = 14;
---
SET SERVER ROLE TO 'primary';
SELECT 1;
SET SERVER ROLE TO 'replica';
SELECT 1;
set server role to 'replica';
SeT SeRver Role TO 'PrImARY';
select 1;


@@ -1,3 +1,5 @@
\set ON_ERROR_STOP on
SET SHARDING KEY TO '1';
SELECT * FROM data WHERE id = 1;
@@ -44,4 +46,4 @@ SET SHARDING KEY TO '15';
SELECT * FROM data WHERE id = 15;
SET SHARDING KEY TO '16';
SELECT * FROM data WHERE id = 16;
SELECT * FROM data WHERE id = 16;


@@ -8,4 +8,4 @@ SELECT * FROM data;
\c shard2
SELECT * FROM data;
SELECT * FROM data;