Use rust:bullseye base image
With the original rust:1.70-bullseye image, the container cannot be
built:
17.06 Installing /usr/local/cargo/bin/rustfilt
17.06 Installed package `rustfilt v0.2.1` (executable `rustfilt`)
17.06 error: failed to compile `cargo-binutils v0.3.6`, intermediate artifacts can be found at `/tmp/cargo-installrc6mPb`
17.06
17.06 Caused by:
17.06 package `cargo-platform v0.1.8` cannot be built because it requires rustc 1.73 or newer, while the currently active rustc version is 1.70.0
17.06 Try re-running cargo install with `--locked`
17.06 Summary Successfully installed rustfilt! Failed to install cargo-binutils (see error(s) above).
17.06 error: some crates failed to install
This is the same base image used in tests/docker/Dockerfile
* add workflow
* feat: add pgcat helm chart
* fix: set the right include into configmap
Signed-off-by: David ALEXANDRE <david.alexandre@w6d.io>
* update values and config
* prettifying config
---------
Signed-off-by: David ALEXANDRE <david.alexandre@w6d.io>
The pool maxwait metric currently behaves differently from PgBouncer's.
Today we track max_wait on each connected client; when a SHOW POOLS query is made, we iterate over the connected clients and take the maximum of their max_wait times. As a result, the pool maxwait never resets: it increases monotonically until the client with the highest max_wait disconnects.
This PR changes that behavior by tracking a wait_start time on each client: when a client enters the WAITING state, we record the time offset from connect_time. When we check a connection out of the pool, whether successfully or not, we reset wait_start.
When a SHOW POOLS query is made, we go over all connected clients and consider only those whose wait_start is non-zero; among those, we report the maximum waiting time as the pool's maxwait.
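The scheme above can be sketched in Rust. This is a minimal model, not pgcat's actual code: `ClientStats`, `pool_maxwait_us`, and the microsecond-timestamp convention are all illustrative stand-ins.

```rust
// Simplified model of the per-client wait tracking described above.
// The struct and function names are hypothetical, not pgcat's API.

#[derive(Default)]
struct ClientStats {
    // Microseconds since connect_time at which the client entered the
    // WAITING state; 0 means "not currently waiting".
    wait_start_us: u64,
}

impl ClientStats {
    // Called when the client starts waiting for a pool connection.
    fn start_waiting(&mut self, now_us: u64) {
        self.wait_start_us = now_us;
    }

    // Called after a checkout attempt, whether it succeeded or failed.
    fn checkout_done(&mut self) {
        self.wait_start_us = 0;
    }
}

// SHOW POOLS maxwait: only clients with a non-zero wait_start contribute,
// so the metric reflects clients that are waiting right now.
fn pool_maxwait_us(clients: &[ClientStats], now_us: u64) -> u64 {
    clients
        .iter()
        .filter(|c| c.wait_start_us != 0)
        .map(|c| now_us - c.wait_start_us)
        .max()
        .unwrap_or(0)
}

fn main() {
    let mut a = ClientStats::default();
    let mut b = ClientStats::default();
    a.start_waiting(100); // entered WAITING at t=100us
    b.start_waiting(400); // entered WAITING at t=400us

    // At t=1000us, client `a` has been waiting longest: 900us.
    assert_eq!(pool_maxwait_us(&[a, b], 1_000), 900);

    // After a checkout (successful or not), the wait resets and the
    // client no longer contributes to maxwait.
    let mut c = ClientStats::default();
    c.start_waiting(100);
    c.checkout_done();
    assert_eq!(pool_maxwait_us(&[c], 1_000), 0);
}
```

Unlike the old monotonically increasing metric, this value drops back to zero as soon as no client is waiting.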
* Revert "Reset wait times when checked out successfully (#656)"
This reverts commit ec3920d60f.
* Revert "Not sure how this sneaked past CI"
This reverts commit 4c5498b915.
* Revert "only report wait times from clients currently waiting to match behavior of pgbouncer (#655)"
This reverts commit 0e8064b049.
* Fix prepared statement not found when prepared stmt has error
* cleanup debug
* remove more debug msgs
* sure debugged this..
* version bump
* add rust tests
* Expose clients maxwait time in SHOW CLIENTS response via PgCat admin
Displays each client's maxwait via the maxwait_seconds and maxwait_us columns, which can be used to track down per-client wait time when the overall pool stats show waiting time. As in the pool stats, maxwait_us is displayed as a microsecond remainder alongside maxwait_seconds.
* Use maxwait instead of maxwait_seconds to match pools column name
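The seconds/remainder split described above can be sketched as follows; the helper name is illustrative, not pgcat's actual code.

```rust
// Hypothetical helper illustrating the display convention above: the
// total wait is reported as whole seconds (maxwait) plus the leftover
// microseconds (maxwait_us), matching the pool stats columns.
fn split_maxwait(total_us: u64) -> (u64, u64) {
    (total_us / 1_000_000, total_us % 1_000_000)
}

fn main() {
    // A client that has waited 2.5 seconds.
    let (maxwait, maxwait_us) = split_maxwait(2_500_000);
    assert_eq!((maxwait, maxwait_us), (2, 500_000));
    println!("maxwait: {maxwait}, maxwait_us: {maxwait_us}");
}
```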
---------
Co-authored-by: Calvin Hughes <9379992+calvinhughes@users.noreply.github.com>
* Add golang test suite to reproduce issue with unnamed parameterized prepared statements
* Allow caching of unnamed prepared statements
* Passthrough describe on portals
* Remove unneeded kill
* Update Dockerfile.ci with golang
* Move out update of Dockerfiles to separate PR
* Initial commit
* Cleanup and add stats
* Use an arc instead of full clones to store the parse packets
* Use mutex instead
* fmt
* clippy
* fmt
* fix?
* fix?
* fmt
* typo
* Update docs
* Refactor custom protocol
* fmt
* move custom protocol handling to before parsing
* Support describe
* Add LRU for server side statement cache
* rename variable
* Refactoring
* Move docs
* Fix test
* fix
* Update tests
* trigger build
* Add more tests
* Reorder handling sync
* Support a named Describe sent along with Parse and expecting results (go pgx)
* don't talk to client if not needed when client sends Parse
* fmt :(
* refactor tests
* nit
* Reduce hashing
* Reducing work done to decode describe and parse messages
* minor refactor
* Merge branch 'main' into zain/reimplment-prepared-statements-with-global-lru-cache
* Rewrite extended and prepared protocol message handling to better support mocking response packets and close
* An attempt to better handle if there are DDL changes that might break cached plans with ideas about how to further improve it
* fix
* Minor stats fixed and cleanup
* Cosmetic fixes (#64)
* Cosmetic fixes
* fix test
* Change server drop for statement cache error to a `deallocate all`
* Updated comments and added new idea for handling DDL changes impacting cached plans
* fix test?
* Revert test change
* trigger build, flakey test
* Avoid potential race conditions by changing get_or_insert to promote for pool LRU
* remove ps enabled variable on the server in favor of using an option
* Add close to the Extended Protocol buffer
---------
Co-authored-by: Lev Kokotov <levkk@users.noreply.github.com>
* Improved logging
* Improved logging for more `Address` usages
* Fixed lint issues.
* Reverted the `Address` logging changes.
* Applied the PR comment by @levkk.
* Applied the PR comment by @levkk.
* Applied the PR comment by @levkk.
* Applied the PR comment by @levkk.
It could be used to implement container health checks.
Example:
PGPASSWORD="<some-password>" psql -U pgcat -p 6432 -h 127.0.0.1 -tA -c "show version;" -d pgcat >/dev/null
The TL;DR for the change is that we allow QueryRouter to set the active shard to None. This signals to the Pool::get method that no shard is selected; get then follows the no_shard_specified_behavior config setting to decide how to route the query.
Original PR description
The ruby-pg library makes a startup query, `SET client_encoding TO ...`, if the Encoding.default_internal value is set. This query is troublesome because we cannot possibly attach a routing comment to it, so PgCat, by default, routes it to the default shard.
Everything is fine until shard 0 has issues: clients will all attempt to send this query to shard 0, which significantly increases connection latency for all clients, even those not interested in shard 0.
This PR introduces no_shard_specified_behavior, which defines the behavior when routing-by-comment is enabled but a query arrives without a comment. The allowed behaviors are:
random: picks a shard at random
random_healthy: picks a shard at random, favoring shards with the fewest recent connection/checkout errors
shard_<number>: e.g. shard_0, shard_4, etc.; always picks that specific shard
To achieve this, the PR introduces an error_count on the Address object that tracks the number of errors since the last checkout, and uses that metric to sort shards by error count before making a routing decision.
I didn't want to use address stats, to avoid introducing a routing dependency on internal stats (we might do that in the future, but I prefer to avoid it for now).
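A minimal Rust sketch of the routing decision under the three behaviors above. The enum, the Shard fields, and the rng_pick parameter (a stand-in for the random draw) are all hypothetical, not pgcat's actual types.

```rust
// Illustrative sketch of no_shard_specified_behavior routing; names are
// hypothetical. `rng_pick` stands in for a random number source.
enum NoShardBehavior {
    Random,
    RandomHealthy,
    Shard(usize),
}

struct Shard {
    id: usize,
    // Errors since the last checkout, as tracked on the Address object.
    error_count: u64,
}

// Assumes `shards` is non-empty.
fn pick_shard(shards: &[Shard], behavior: &NoShardBehavior, rng_pick: usize) -> usize {
    match behavior {
        // shard_<number>: always the same shard.
        NoShardBehavior::Shard(n) => *n,
        // random: any shard.
        NoShardBehavior::Random => shards[rng_pick % shards.len()].id,
        // random_healthy: random among shards tied for the fewest
        // recent errors.
        NoShardBehavior::RandomHealthy => {
            let min = shards.iter().map(|s| s.error_count).min().unwrap();
            let healthy: Vec<&Shard> =
                shards.iter().filter(|s| s.error_count == min).collect();
            healthy[rng_pick % healthy.len()].id
        }
    }
}

fn main() {
    let shards = [
        Shard { id: 0, error_count: 5 }, // shard 0 is having issues
        Shard { id: 1, error_count: 0 },
        Shard { id: 2, error_count: 0 },
    ];
    // random_healthy never lands on shard 0 while it is erroring.
    assert_eq!(pick_shard(&shards, &NoShardBehavior::RandomHealthy, 0), 1);
    assert_eq!(pick_shard(&shards, &NoShardBehavior::RandomHealthy, 1), 2);
    assert_eq!(pick_shard(&shards, &NoShardBehavior::Shard(2), 7), 2);
}
```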
I also changed the test environment to replace Ruby's TOML reader library: it appears to be abandoned, does not support mixed arrays (which we use in the config TOML), and does not play nicely with single-quoted regular expressions. I opted for yj, a CLI tool that converts between TOML and JSON, and refactored the tests to use it.
* Use a server parameters struct instead of server info BytesMut
* Refactor to use hashmap for all params and add server parameters to client
* Sync parameters on client server checkout
* minor refactor
* update client side parameters when changed
* Move the SET statement logic from the C packet to the S packet.
* trigger build
* revert validation changes
* remove comment
* Try fix
* Reset cleanup state after sync
* fix server version test
* Track application name through client life for stats
* Add tests
* minor refactoring
* fmt
* fix
* fmt