PgDoorman: PostgreSQL Pooler
PgDoorman is a stable, high-performance alternative to PgBouncer, Odyssey, or PgCat, created with the Unix philosophy in mind. Development has focused on performance, efficiency, and reliability. Additionally, PgDoorman provides improved driver support for languages such as Go (pgx), .NET (npgsql), and the asynchronous drivers for Python and Node.js.
Quick Start
Run PgDoorman instantly using Docker:
docker run -p 6432:6432 \
-v $(pwd)/pg_doorman.toml:/etc/pg_doorman/pg_doorman.toml \
ghcr.io/ozontech/pg_doorman
For more details, see the Installation Guide.
Why not multi-PgBouncer?
Why do we think that using multiple instances of PgBouncer is not a suitable solution? This approach has problems with reusing prepared statements and offers awkward, inefficient control over query cancellation. The main issue we have encountered, however, is that the operating system distributes new clients round-robin, but each client can disconnect at any time, leading to an imbalance across instances after prolonged use.
Why not Odyssey?
We had difficulties using NPGSQL and SCRAM, as well as with prepared statements support.
The most serious problem, however, was related to data consistency, and for a long time we were unable to solve it.
Differences from PgCat
While PgDoorman was initially based on the PgCat project, it has since evolved into a standalone solution with its own set of features. Some of the key differences include:
- Performance improvements compared to PgCat/PgBouncer/Odyssey.
- Support for extended protocol with popular programming language drivers.
- Enhanced monitoring metrics to improve visibility into database activity.
- Careful resource management to avoid memory issues (`max_memory_usage`, `message_size_to_be_stream`).
- SCRAM client/server authentication support.
- Graceful binary upgrades.
- JWT support for service-to-database authentication.
- Many micro-optimizations (for example, accounting for the fact that time spent with a client is longer than the server's busy time).
Additional Binary: patroni_proxy
This repository also includes patroni_proxy — a specialized high-performance TCP proxy for Patroni-managed PostgreSQL clusters. Unlike HAProxy + confd, it preserves existing connections during cluster topology changes and provides native role-based routing with replication lag awareness.
PgDoorman Overview
What is PgDoorman?
PgDoorman is a high-performance PostgreSQL connection pooler based on PgCat. It acts as a middleware between your applications and PostgreSQL servers, efficiently managing database connections to improve performance and resource utilization.
graph LR
App1[Application A] --> Pooler(PgDoorman)
App2[Application B] --> Pooler
App3[Application C] --> Pooler
Pooler --> DB[(PostgreSQL)]
When an application connects to PgDoorman, it behaves exactly like a PostgreSQL server. Behind the scenes, PgDoorman either creates a new connection to the actual PostgreSQL server or reuses an existing connection from its pool, significantly reducing connection overhead.
Key Benefits
- Reduced Connection Overhead: Minimizes the performance impact of establishing new database connections
- Resource Optimization: Limits the number of connections to your PostgreSQL server
- Improved Scalability: Allows more client applications to connect to your database
- Connection Management: Provides tools to monitor and manage database connections
Pooling Modes
To maintain proper transaction semantics while providing efficient connection pooling, PgDoorman supports multiple pooling modes:
Transaction Pooling
In transaction pooling mode, a client is assigned a server connection only for the duration of a transaction. Once the transaction ends, the connection is released back into the pool.
- High Efficiency: Connections are shared between clients, allowing thousands of clients to share a small pool.
- Ideal for: Applications with many short-lived connections or those that don't rely on session state.
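The sharing described above can be modeled with a tiny checkout/check-in loop. This is a toy sketch, not PgDoorman's implementation: the point is that a server connection is held only for the span of one transaction, so a small pool serves many clients.

```python
import queue

class MiniPool:
    """Toy model of transaction pooling (illustrative only, not PgDoorman's code)."""

    def __init__(self, size, connect):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())

    def run_transaction(self, fn):
        conn = self._idle.get()       # blocks if all connections are busy
        try:
            return fn(conn)           # connection is held only for this transaction
        finally:
            self._idle.put(conn)      # released at COMMIT/ROLLBACK, not at disconnect

# Two server connections serve any number of client transactions:
pool = MiniPool(2, connect=lambda: object())
results = [pool.run_transaction(lambda conn: id(conn)) for _ in range(100)]
assert len(set(results)) <= 2         # at most 2 distinct server connections were used
```

The `finally` clause is the crux: in session pooling, the equivalent "put back" would happen only when the client disconnects.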
Session Pooling
In session pooling mode, each client is assigned a dedicated server connection for the entire duration of the client connection.
- Exclusive Allocation: The connection remains exclusively allocated to that client until disconnection.
- Support for Session Features: Ideal for applications that rely on temporary tables or session variables.
Administration
PgDoorman provides comprehensive tools for monitoring and management:
- Admin Console: A PostgreSQL-compatible interface for viewing statistics and managing the pooler
- Configuration Options: Extensive settings to customize behavior for your specific needs
- Monitoring: Detailed metrics about connection usage and performance
For detailed information on managing PgDoorman, see the Admin Console documentation.
Installing PgDoorman
This guide covers different methods for installing and running PgDoorman on your system.
System Requirements
- Linux (recommended) or macOS
- PostgreSQL server (version 10 or higher)
- Sufficient memory for connection pooling (depends on expected load)
Installation Methods
Pre-built Binaries (Recommended)
The simplest way to install PgDoorman is to download a pre-built binary from the GitHub releases page.
- Download the appropriate binary for your platform
- Make the file executable: `chmod +x pg_doorman`
- Move it to a directory in your PATH: `sudo mv pg_doorman /usr/local/bin/`
- Create a configuration file (see Basic Usage for details)
Building from Source
If you prefer to build from source, you'll need to clone the repository first:
git clone https://github.com/ozontech/pg_doorman.git
cd pg_doorman
Then follow the instructions in the Contributing guide to build the project.
Docker Installation
Using the Official Docker Image (Recommended)
PgDoorman provides an official Docker image that you can use directly:
# Pull the official Docker image
docker pull ghcr.io/ozontech/pg_doorman
# Run PgDoorman with your configuration
docker run -p 6432:6432 \
-v /path/to/pg_doorman.toml:/etc/pg_doorman/pg_doorman.toml \
--rm -t -i ghcr.io/ozontech/pg_doorman
Using the Dockerfile
You can build and run PgDoorman using Docker:
# Build the Docker image
docker build -t pg_doorman -f Dockerfile .
# Run PgDoorman with your configuration
docker run -p 6432:6432 \
-v /path/to/pg_doorman.toml:/etc/pg_doorman/pg_doorman.toml \
--rm -t -i pg_doorman
Using Nix with Docker
If you use Nix, you can build a Docker image:
# Build the Docker image using Nix
nix build .#dockerImage
# Load the image into Docker
docker load -i result
# Run PgDoorman with your configuration
docker run -p 6432:6432 \
-v /path/to/pg_doorman.toml:/etc/pg_doorman/pg_doorman.toml \
--rm -t -i pg_doorman
Using Docker Compose or Podman Compose
For a more complete setup including PostgreSQL, you can use Docker Compose or Podman Compose.
A minimal compose configuration file is available in the repository examples directory.
Running with Docker Compose
docker compose up -d
Running with Podman Compose
podman-compose up -d
Verifying Installation
After installation, you can verify that PgDoorman is running correctly by:
- Checking the process: `ps aux | grep pg_doorman`
- Connecting to the admin console: `psql -h localhost -p 6432 -U admin pgdoorman`
- Running `SHOW VERSION;` in the admin console
Next Steps
After installation, see the Basic Usage guide to configure and start using PgDoorman.
PgDoorman Basic Usage Guide
PgDoorman is a high-performance PostgreSQL connection pooler based on PgCat. This comprehensive guide will help you get started with configuring, running, and managing PgDoorman for your PostgreSQL environment.
Command Line Options
PgDoorman offers several command-line options to customize its behavior when starting the service:
$ pg_doorman --help
PgDoorman: Nextgen PostgreSQL Pooler (based on PgCat)
Usage: pg_doorman [OPTIONS] [CONFIG_FILE] [COMMAND]
Commands:
generate Generate configuration for pg_doorman by connecting to PostgreSQL and auto-detecting databases and users
help Print this message or the help of the given subcommand(s)
Arguments:
[CONFIG_FILE] [env: CONFIG_FILE=] [default: pg_doorman.toml]
Options:
-l, --log-level <LOG_LEVEL> [env: LOG_LEVEL=] [default: INFO]
-F, --log-format <LOG_FORMAT> [env: LOG_FORMAT=] [default: text] [possible values: text, structured, debug]
-n, --no-color disable colors in the log output [env: NO_COLOR=]
-d, --daemon run as daemon [env: DAEMON=]
-h, --help Print help
-V, --version Print version
Available Options
| Option | Description |
|---|---|
| `-d, --daemon` | Run in the background. Without this option, the process will run in the foreground. In daemon mode, setting `daemon_pid_file` and `syslog_prog_name` is required. No log messages will be written to stderr after going into the background. |
| `-l, --log-level` | Set log level: INFO, DEBUG, or WARN. |
| `-F, --log-format` | Set log format. Possible values: text, structured, debug. |
| `-n, --no-color` | Disable colors in the log output. |
| `-V, --version` | Show version information. |
| `-h, --help` | Show help information. |
Setup and Configuration
Configuration File Structure
PgDoorman supports both YAML and TOML configuration formats. YAML is recommended for new setups. The configuration is organized into several sections:
general: # Global settings for the PgDoorman service
pools:
<name>: # Settings for a specific database pool
users:
- ... # User settings for this pool
Some parameters must be specified in the configuration file for PgDoorman to start, even if they have default values. For example, you must specify an admin username and password to access the administrative console.
Minimal Configuration Example
Here's a minimal configuration example to get you started:
YAML (recommended)
general:
host: "0.0.0.0" # Listen on all interfaces
port: 6432 # Port for client connections
admin_username: "admin"
admin_password: "admin" # Change this in production!
pools:
exampledb:
server_host: "127.0.0.1" # PostgreSQL server address
server_port: 5432 # PostgreSQL server port
pool_mode: "transaction" # Connection pooling mode
users:
- pool_size: 40
username: "doorman"
password: "SCRAM-SHA-256$4096:6nD+Ppi9rgaNyP7...MBiTld7xJipwG/X4="
TOML
[general]
host = "0.0.0.0"
port = 6432
admin_username = "admin"
admin_password = "admin"
[pools.exampledb]
server_host = "127.0.0.1"
server_port = 5432
pool_mode = "transaction"
[pools.exampledb.users.0]
pool_size = 40
username = "doorman"
password = "SCRAM-SHA-256$4096:6nD+Ppi9rgaNyP7...MBiTld7xJipwG/X4="
For a complete list of configuration options and their descriptions, see the Settings Reference Guide.
Automatic Configuration Generation
PgDoorman provides a powerful generate command that can automatically create a configuration file by connecting to your PostgreSQL server and detecting databases and users. By default, the generated config includes detailed inline comments explaining every parameter.
# View all available options
pg_doorman generate --help
# Generate a YAML configuration file (recommended)
pg_doorman generate --output pg_doorman.yaml
# Generate a TOML configuration file (for backward compatibility)
pg_doorman generate --output pg_doorman.toml
# Generate a reference config with all settings (no PG connection needed)
pg_doorman generate --reference --output pg_doorman.yaml
# Generate a reference config with Russian comments for quick start
pg_doorman generate --reference --ru --output pg_doorman.yaml
# Generate a config without comments (plain serialization)
pg_doorman generate --no-comments --output pg_doorman.yaml
The generate command supports several options:
| Option | Description |
|---|---|
--host | PostgreSQL host to connect to (uses localhost if not specified) |
--port, -p | PostgreSQL port to connect to (default: 5432) |
--user, -u | PostgreSQL user to connect as (requires superuser privileges to read pg_shadow) |
--password | PostgreSQL password to connect with |
--database, -d | PostgreSQL database to connect to (uses same name as user if not specified) |
--ssl | PostgreSQL connection to server via SSL/TLS |
--pool-size | Pool size for the generated configuration (default: 40) |
--session-pool-mode, -s | Session pool mode for the generated configuration |
--output, -o | Output file for the generated configuration (uses stdout if not specified) |
--server-host | Override server_host in config (uses the host parameter if not specified) |
--no-comments | Disable inline comments in generated config (by default, comments are included) |
--reference | Generate a complete reference config with example values, no PG connection needed |
--russian-comments, --ru | Generate comments in Russian for quick start guide |
--format, -f | Output format: yaml (default) or toml. If --output is specified, format is auto-detected from file extension. This flag overrides auto-detection |
The command connects to your PostgreSQL server, automatically detects all databases and users, and creates a complete, well-documented configuration file. This is especially useful for quickly setting up PgDoorman in new environments.
The generate command also respects standard PostgreSQL environment variables like PGHOST, PGPORT, PGUSER, PGPASSWORD, and PGDATABASE.
PgDoorman uses passthrough authentication by default: the client's cryptographic proof (MD5 hash or SCRAM ClientKey) is automatically reused to authenticate to the backend PostgreSQL server. No plaintext passwords are needed in the config; just set password to the hash from pg_shadow / pg_authid.
Set server_username and server_password only when the backend user differs from the pool username (e.g., username mapping or JWT auth):
users:
- username: "app_user" # client-facing name
password: "md5..." # hash for client authentication
server_username: "pg_app_user" # different backend PostgreSQL user
server_password: "real_password" # plaintext password for that user
See also Pool User Settings.
Reading user information from PostgreSQL requires superuser privileges to access the pg_shadow table.
Client access control (pg_hba)
PgDoorman can enforce client access rules using PostgreSQL-style pg_hba.conf semantics via the general.pg_hba parameter.
You can embed rules directly in the config or reference a file path. See the reference section for full examples.
Trust mode: when a matching rule uses trust, PgDoorman will accept connections without prompting the client for a password,
mirroring PostgreSQL behavior. TLS-related rule types are honored: hostssl requires TLS, hostnossl forbids TLS.
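The matching semantics can be sketched in a few lines of Python. This is an illustration of the rule semantics described above (first matching rule wins, TLS-sensitive rule types), not PgDoorman's actual parser; the rule tuples and function name are invented for the example.

```python
import ipaddress

# Simplified rules: (type, database, user, cidr, method). First match wins.
RULES = [
    ("hostssl", "pgdoorman", "admin", "127.0.0.1/32", "trust"),
    ("host", "exampledb", "doorman", "10.0.0.0/8", "scram-sha-256"),
]

def auth_method(database, user, addr, tls):
    ip = ipaddress.ip_address(addr)
    for rtype, db, usr, cidr, method in RULES:
        if rtype == "hostssl" and not tls:
            continue                      # hostssl rules require TLS
        if rtype == "hostnossl" and tls:
            continue                      # hostnossl rules forbid TLS
        if (db, usr) != (database, user):
            continue
        if ip in ipaddress.ip_network(cidr):
            return method                 # first match wins, as in PostgreSQL
    return None                           # no rule matched: connection is rejected

assert auth_method("pgdoorman", "admin", "127.0.0.1", tls=True) == "trust"
assert auth_method("pgdoorman", "admin", "127.0.0.1", tls=False) is None
assert auth_method("exampledb", "doorman", "10.1.2.3", tls=False) == "scram-sha-256"
```

Note how the `hostssl` + `trust` combination grants passwordless access only over TLS from the listed address.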
Running PgDoorman
After creating your configuration file, you can run PgDoorman from the command line:
$ pg_doorman pg_doorman.toml
If you don't specify a configuration file, PgDoorman will look for pg_doorman.toml in the current directory.
Connecting to PostgreSQL via PgDoorman
Once PgDoorman is running, connect to it instead of connecting directly to your PostgreSQL database:
$ psql -h localhost -p 6432 -U doorman exampledb
Your application's connection string should be updated to point to PgDoorman instead of directly to PostgreSQL:
postgresql://doorman:password@localhost:6432/exampledb
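Relative to a direct PostgreSQL connection, only the host, port, and credentials change. A quick stdlib check of what the DSN above encodes (illustrative only):

```python
from urllib.parse import urlsplit

dsn = "postgresql://doorman:password@localhost:6432/exampledb"
parts = urlsplit(dsn)

assert parts.port == 6432                      # PgDoorman's listen port, not PostgreSQL's 5432
assert parts.username == "doorman"             # must match a user in the pool config
assert parts.path.lstrip("/") == "exampledb"   # must match a configured pool name
```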
PgDoorman will handle the connection pooling transparently, so your application doesn't need to be aware that it's connecting through a pooler.
Administration
Admin Console
PgDoorman provides a powerful administrative interface that allows you to monitor and manage the connection pooler. You can access this interface by connecting to the special administration database named pgdoorman:
$ psql -h localhost -p 6432 -U admin pgdoorman
Once connected, you can view available commands:
pgdoorman=> SHOW HELP;
NOTICE: Console usage
DETAIL:
SHOW HELP|CONFIG|DATABASES|POOLS|POOLS_EXTENDED|CLIENTS|SERVERS|USERS|VERSION
SHOW LISTS
SHOW CONNECTIONS
SHOW STATS
RELOAD
SHUTDOWN
SHOW
The admin console currently supports only the simple query protocol.
Some database drivers use the extended query protocol for all commands, making them unsuitable for admin console access. In such cases, use the psql command-line client for administration.
Only the user specified by admin_username in the configuration file is allowed to log in to the admin console.
If your general.pg_hba rules allow it, the admin console can also be accessed using the trust method (no password prompt), for example:
# Allow only local admin to access the admin DB without a password
host pgdoorman admin 127.0.0.1/32 trust
Use trust with extreme caution. Always restrict it by address and, where possible, require TLS via hostssl. In production, prefer password-based methods unless you fully understand the implications.
Monitoring PgDoorman
The admin console provides several commands to monitor the current state of PgDoorman:
- `SHOW STATS` - View performance statistics
- `SHOW CLIENTS` - List current client connections
- `SHOW SERVERS` - List current server connections
- `SHOW POOLS` - View connection pool status
- `SHOW DATABASES` - List configured databases
- `SHOW USERS` - List configured users
These commands are described in detail in the Admin Console Commands section below.
Reloading Configuration
If you make changes to the pg_doorman.toml file, you can apply them without restarting the service:
pgdoorman=# RELOAD;
When you reload the configuration:
- PgDoorman reads the updated configuration file
- Changes to database connection parameters are detected
- Existing server connections are closed when they're next released (according to the pooling mode)
- New server connections immediately use the updated parameters
This allows you to make configuration changes with minimal disruption to your applications.
Admin Console Commands
The admin console provides a set of commands to monitor and manage PgDoorman. These commands follow a SQL-like syntax and can be executed from any PostgreSQL client connected to the admin console.
Show Commands
The SHOW commands display information about PgDoorman's operation. Each command provides different insights into the pooler's performance and current state.
SHOW STATS
The SHOW STATS command displays comprehensive statistics about PgDoorman's operation:
pgdoorman=> SHOW STATS;
Statistics are presented per database with the following metrics:
| Metric | Description |
|---|---|
| `database` | The database name these statistics apply to |
| `total_xact_count` | Total number of SQL transactions processed since startup |
| `total_query_count` | Total number of SQL commands processed since startup |
| `total_received` | Total bytes of network traffic received from clients |
| `total_sent` | Total bytes of network traffic sent to clients |
| `total_xact_time` | Total microseconds spent in transactions (including idle in transaction) |
| `total_query_time` | Total microseconds spent actively executing queries |
| `total_wait_time` | Total microseconds clients spent waiting for a server connection |
| `avg_xact_count` | Average transactions per second in the last 15-second period |
| `avg_query_count` | Average queries per second in the last 15-second period |
| `avg_server_assignment_count` | Average server assignments per second in the last 15-second period |
| `avg_recv` | Average bytes received per second from clients |
| `avg_sent` | Average bytes sent per second to clients |
| `avg_xact_time` | Average transaction duration in microseconds |
| `avg_query_time` | Average query duration in microseconds |
| `avg_wait_time` | Average time clients spent waiting for a server in microseconds |
Pay special attention to the avg_wait_time metric. If this value is consistently high, it may indicate that your pool size is too small for your workload.
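The `avg_*` columns are essentially the cumulative counters differenced over PgDoorman's 15-second window. A sketch of the arithmetic, assuming two snapshots of the cumulative fields named as in the table above (this is an illustration of the math, not how PgDoorman computes it internally):

```python
def derive_averages(prev, curr, interval_s=15):
    """Recompute avg_*-style metrics from two cumulative SHOW STATS snapshots."""
    dq = curr["total_query_count"] - prev["total_query_count"]
    dt_us = curr["total_query_time"] - prev["total_query_time"]
    dw_us = curr["total_wait_time"] - prev["total_wait_time"]
    return {
        "avg_query_count": dq / interval_s,          # queries per second
        "avg_query_time": dt_us / dq if dq else 0,   # microseconds per query
        "avg_wait_time": dw_us / dq if dq else 0,    # microseconds waited per query
    }

prev = {"total_query_count": 0, "total_query_time": 0, "total_wait_time": 0}
curr = {"total_query_count": 3000, "total_query_time": 600_000, "total_wait_time": 150_000}
stats = derive_averages(prev, curr)
assert stats["avg_query_count"] == 200.0   # 3000 queries / 15 s
assert stats["avg_query_time"] == 200.0    # 600000 us / 3000 queries
```

The same differencing is useful when scraping `SHOW STATS` into an external monitoring system at a custom interval.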
SHOW SERVERS
The SHOW SERVERS command displays detailed information about all server connections:
pgdoorman=> SHOW SERVERS;
| Column | Description |
|---|---|
| `server_id` | Unique identifier for the server connection |
| `server_process_id` | PID of the backend PostgreSQL server process (if available) |
| `database_name` | Name of the database this connection is using |
| `user` | Username PgDoorman uses to connect to the PostgreSQL server |
| `application_name` | Value of the application_name parameter set on the server connection |
| `state` | Current state of the connection: active, idle, or used |
| `wait` | Wait state of the connection: idle, read, or write |
| `transaction_count` | Total number of transactions processed by this connection |
| `query_count` | Total number of queries processed by this connection |
| `bytes_sent` | Total bytes sent to the PostgreSQL server |
| `bytes_received` | Total bytes received from the PostgreSQL server |
| `age_seconds` | Lifetime of the current server connection in seconds |
| `prepare_cache_hit` | Number of prepared statement cache hits |
| `prepare_cache_miss` | Number of prepared statement cache misses |
| `prepare_cache_size` | Number of unique prepared statements in the cache |
- active: The connection is currently executing a query
- idle: The connection is available for use
- used: The connection is allocated to a client but not currently executing a query
SHOW CLIENTS
The SHOW CLIENTS command displays information about all client connections to PgDoorman:
pgdoorman=> SHOW CLIENTS;
| Column | Description |
|---|---|
| `client_id` | Unique identifier for the client connection |
| `database` | Name of the database (pool) the client is connected to |
| `user` | Username the client used to connect |
| `addr` | Client's IP address and port (IP:port) |
| `tls` | Whether the connection uses TLS encryption (true or false) |
| `state` | Current state of the client connection: active, idle, or waiting |
| `wait` | Wait state of the client connection: idle, read, or write |
| `transaction_count` | Total number of transactions processed for this client |
| `query_count` | Total number of queries processed for this client |
| `age_seconds` | Lifetime of the client connection in seconds |
The age_seconds column can help identify long-running connections that might be holding resources unnecessarily. Consider implementing connection timeouts in your application for idle connections.
SHOW POOLS
The SHOW POOLS command displays information about connection pools. A new pool entry is created for each (database, user) pair:
pgdoorman=> SHOW POOLS;
| Column | Description |
|---|---|
| `database` | Name of the database |
| `user` | Username associated with this pool |
| `pool_mode` | Pooling mode in use: session or transaction |
| `cl_active` | Number of active client connections (linked to servers or idle) |
| `cl_waiting` | Number of client connections waiting for a server connection |
| `sv_active` | Number of server connections linked to clients |
| `sv_idle` | Number of idle server connections available for immediate use |
| `sv_login` | Number of server connections currently in the login process |
| `maxwait` | Maximum wait time in seconds for the oldest client in the queue |
| `maxwait_us` | Microsecond part of the maximum waiting time |
If the maxwait value starts increasing, your server pool may not be handling requests quickly enough. This could be due to an overloaded PostgreSQL server or insufficient pool_size setting.
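A rough way to pick an initial `pool_size` for transaction mode is Little's law: concurrent server connections needed ≈ transaction arrival rate × average transaction duration. This is a back-of-the-envelope heuristic, not an official PgDoorman recommendation; leave headroom for spikes.

```python
import math

def required_pool_size(peak_tps, avg_txn_ms):
    """Little's law sketch: concurrency = arrival rate x duration (rounded up)."""
    return math.ceil(peak_tps * (avg_txn_ms / 1000.0))

assert required_pool_size(2000, 10) == 20   # 2000 txn/s x 10 ms = 20 busy connections
assert required_pool_size(500, 50) == 25
```

If `maxwait` keeps climbing even with a pool sized this way, the bottleneck is usually PostgreSQL itself rather than the pooler.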
SHOW USERS
The SHOW USERS command displays information about all configured users:
pgdoorman=> SHOW USERS;
| Column | Description |
|---|---|
| `name` | Username as configured in PgDoorman |
| `pool_mode` | Pooling mode assigned to this user: session or transaction |
SHOW DATABASES
The SHOW DATABASES command displays information about all configured database pools:
pgdoorman=> SHOW DATABASES;
| Column | Description |
|---|---|
| `database` | Name of the configured database pool |
| `host` | Hostname of the PostgreSQL server PgDoorman connects to |
| `port` | Port number of the PostgreSQL server |
| `pool_size` | Maximum number of server connections for this database |
| `min_pool_size` | Minimum number of server connections to maintain |
| `reserve_pool_size` | Maximum number of additional connections allowed |
| `pool_mode` | Default pooling mode for this database |
| `max_connections` | Maximum allowed server connections (from max_db_connections) |
| `current_connections` | Current number of server connections for this database |
Monitor the ratio between current_connections and pool_size to ensure your pool is properly sized. If current_connections frequently reaches pool_size, consider increasing the pool size.
SHOW SOCKETS
The SHOW SOCKETS command displays low-level information about network sockets:
pgdoorman=> SHOW SOCKETS;
This command includes all information shown in SHOW CLIENTS and SHOW SERVERS plus additional low-level details about the socket connections.
SHOW VERSION
The SHOW VERSION command displays the PgDoorman version information:
pgdoorman=> SHOW VERSION;
This is useful for verifying which version you're running, especially after upgrades.
Control Commands
PgDoorman provides control commands that allow you to manage the service operation directly from the admin console.
SHUTDOWN
The SHUTDOWN command gracefully terminates the PgDoorman process:
pgdoorman=> SHUTDOWN;
When executed:
- PgDoorman stops accepting new client connections
- Existing transactions are allowed to complete (within the configured timeout)
- All connections are closed
- The process exits
Using the SHUTDOWN command will terminate the PgDoorman service, disconnecting all clients. Use this command with caution in production environments.
RELOAD
The RELOAD command refreshes PgDoorman's configuration without restarting the service:
pgdoorman=> RELOAD;
This command:
- Rereads the configuration file
- Updates all changeable settings
- Applies changes to connection parameters for new connections
- Maintains existing connections until they're released back to the pool
The RELOAD command allows you to modify most configuration parameters without disrupting existing connections. This is ideal for production environments where downtime must be minimized.
Signal Handling
PgDoorman responds to standard Unix signals for control and management. These signals can be sent using the kill command (e.g., kill -HUP <pid>).
| Signal | Description | Effect |
|---|---|---|
| SIGHUP | Configuration reload | Equivalent to the RELOAD command in the admin console. Rereads the configuration file and applies changes to settings. |
| SIGTERM | Immediate shutdown | Forces PgDoorman to exit immediately. Active connections may be terminated abruptly. |
| SIGINT | Graceful shutdown | Initiates a binary upgrade process. The current process starts a new instance and gracefully transfers connections. See Binary Upgrade Process for details. |
In systemd-based environments, you can use systemctl reload pg_doorman to send SIGHUP and systemctl restart pg_doorman for a complete restart.
Binary Upgrade Process
Overview
PgDoorman supports seamless binary upgrades that allow you to update the software with minimal disruption to your database connections. This document explains how the upgrade process works and what to expect during an upgrade.
Triggering a Binary Upgrade
The recommended way to trigger a binary upgrade is to send SIGUSR2 to the PgDoorman process:
kill -USR2 $(pgrep pg_doorman)
Alternatively, you can use the admin console command:
UPGRADE;
Signal Reference
| Signal | Behavior |
|---|---|
| `SIGUSR2` | Binary upgrade + graceful shutdown (recommended) |
| `SIGINT` | Binary upgrade + graceful shutdown (legacy, daemon/no-TTY only). In foreground mode with a TTY, SIGINT (Ctrl+C) performs a graceful shutdown without a binary upgrade. |
| `SIGTERM` | Immediate shutdown |
| `SIGHUP` | Reload configuration |
SIGINT still triggers binary upgrade when running in daemon mode or without a TTY (e.g. when spawned by systemd). If you are running pg_doorman interactively in a terminal, Ctrl+C will cleanly stop the process without spawning a new one. Use kill -USR2 or the UPGRADE admin command to trigger binary upgrade in foreground mode.
How the Upgrade Process Works
When PgDoorman receives the upgrade signal:
- The current PgDoorman instance validates the configuration of the new binary using the `-t` flag
- If validation passes, a new process is started:
  - Daemon mode: a new daemonized process is spawned
  - Foreground mode: the listener socket is passed to the new process via `--inherit-fd`
- The new process uses the `SO_REUSEPORT` socket option, allowing the operating system to distribute incoming traffic to the new instance
- The old instance then closes its socket for incoming connections
- Existing connections are handled gracefully during the transition
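The socket-level handover described above can be demonstrated with plain sockets. This Python sketch (PgDoorman itself is not written in Python) shows two "instances" binding the same port concurrently thanks to `SO_REUSEPORT`; it works on Linux, where the kernel then spreads incoming connections across both listeners:

```python
import socket

def listener(port):
    """Open a TCP listener with SO_REUSEPORT set, as both instances do."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen()
    return s

old = listener(0)                      # "old instance"; port 0 lets the OS pick
port = old.getsockname()[1]
new = listener(port)                   # "new instance" binds the same port concurrently
# At this point the kernel distributes new connections between both sockets.
old.close()                            # old instance stops accepting; new one takes over
new.close()
```

Without `SO_REUSEPORT` on both sockets, the second `bind()` would fail with "address already in use", forcing a listen gap during upgrades.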
systemd Integration
The recommended systemd service configuration uses SIGUSR2 for reload:
ExecReload=/bin/kill -SIGUSR2 $MAINPID
This triggers a binary upgrade when you run systemctl reload pg_doorman.
Handling Existing Connections
During the upgrade process, PgDoorman handles existing connections as follows:
- Current queries and transactions are allowed to complete within the specified `shutdown_timeout` (default: 10 seconds)
- After each query or transaction completes successfully, PgDoorman returns error code `58006` to the client
- This error code indicates to the client that it needs to reconnect to the server
- After reconnecting, clients can safely retry their queries with the new PgDoorman instance
Important Considerations
Repeating a query without receiving error code 58006 may cause problems as described in this issue. Make sure your client application properly handles reconnection scenarios.
Be careful when using client libraries like github.com/lib/pq or Go's standard database/sql package. Ensure they properly handle the reconnection process during binary upgrades.
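A reconnect-and-retry loop that reacts only to this error code might look like the following sketch. The exception and connection classes are invented stand-ins; real drivers surface SQLSTATE 58006 in driver-specific ways (for example, as an attribute on the raised exception), and retrying on other error codes is not safe in general.

```python
class PoolerShutdown(Exception):
    """Hypothetical driver exception representing SQLSTATE 58006."""

def run_with_reconnect(connect, query, max_attempts=3):
    """Re-run a query on a fresh connection when the pooler reports 58006."""
    err = None
    for _ in range(max_attempts):
        conn = connect()              # each attempt reconnects (to the new instance)
        try:
            return conn.run(query)
        except PoolerShutdown as e:
            err = e                   # 58006: safe to reconnect and retry
    raise err

# Fake connections: the first one is draining, the second belongs to the new instance.
class FakeConn:
    def __init__(self, healthy):
        self.healthy = healthy
    def run(self, query):
        if not self.healthy:
            raise PoolerShutdown()
        return "ok"

conns = iter([FakeConn(False), FakeConn(True)])
assert run_with_reconnect(lambda: next(conns), "SELECT 1") == "ok"
```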
Patroni Proxy
patroni_proxy is a specialized high-performance TCP proxy for Patroni-managed PostgreSQL clusters. Following the Unix philosophy of "do one thing and do it well", it focuses exclusively on TCP load balancing and failover for Patroni clusters.
Overview
Unlike traditional solutions like HAProxy, patroni_proxy provides seamless connection management without disrupting existing connections during cluster topology changes. When a new replica is added or removed, only the affected connections are handled — all other connections continue working without interruption.
Key Features
Zero-Downtime Connection Management
The main advantage over HAProxy is that patroni_proxy does not terminate existing connections when the upstream configuration changes. This is critical for long-running transactions and connection-heavy applications.
Hot Upstream Updates
- Automatic discovery of cluster members via the Patroni REST API (`/cluster` endpoint)
- Periodic polling with a configurable interval (`cluster_update_interval`)
- Immediate updates via the HTTP API (`/update_clusters` endpoint)
- Configuration reload via the SIGHUP signal, without a restart
Role-Based Routing
Route connections based on PostgreSQL node roles:
| Role | Description |
|---|---|
| `leader` | Primary/master node |
| `sync` | Synchronous standby replicas |
| `async` | Asynchronous replicas |
| `any` | Any available node |
Intelligent Load Balancing
- Least Connections strategy for distributing connections across backends
- Connection counters are preserved during cluster updates
- Automatic exclusion of nodes with the `noloadbalance` tag
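Least-connections selection itself is a one-liner over the per-backend counters. This is an illustrative sketch, not patroni_proxy's Rust implementation:

```python
def pick_backend(active_connections):
    """Least-connections: choose the backend currently serving the fewest clients."""
    return min(active_connections, key=active_connections.get)

counts = {"10.0.0.1:5432": 12, "10.0.0.2:5432": 7, "10.0.0.3:5432": 9}
assert pick_backend(counts) == "10.0.0.2:5432"
counts[pick_backend(counts)] += 1      # counters persist across cluster updates
```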
Replication Lag Awareness
- Configurable `max_lag_in_bytes` per port
- Automatic disconnection of clients when replica lag exceeds the threshold
- Only affects replica connections (leader has no lag)
Member State Filtering
- Only members with `state: "running"` are used as backends
- Members in `starting`, `stopped`, or `crashed` states are automatically excluded
- Dynamic state changes are handled during periodic updates
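Putting the filtering rules together, backend eligibility reduces to a simple predicate. The member dictionaries below are invented shapes for illustration; in reality patroni_proxy derives this information from the Patroni `/cluster` response:

```python
def eligible_backends(members, allowed_roles):
    """Keep only running members whose role is allowed and that accept load."""
    return [
        m["host"] for m in members
        if m["state"] == "running"                          # skip starting/stopped/crashed
        and m["role"] in allowed_roles                      # the port's roles list
        and not m.get("tags", {}).get("noloadbalance", False)
    ]

members = [
    {"host": "pg1", "role": "leader", "state": "running", "tags": {}},
    {"host": "pg2", "role": "sync", "state": "running", "tags": {"noloadbalance": True}},
    {"host": "pg3", "role": "async", "state": "starting", "tags": {}},
    {"host": "pg4", "role": "async", "state": "running", "tags": {}},
]
assert eligible_backends(members, {"sync", "async"}) == ["pg4"]
```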
Recommended Deployment Architecture
For optimal performance, we recommend a two-tier architecture:
graph TD
App1[Application A] --> PP(patroni_proxy<br/>TCP load balancing)
App2[Application B] --> PP
App3[Application C] --> PP
PP --> D1(pg_doorman<br/>pooling)
PP --> D2(pg_doorman<br/>pooling)
PP --> D3(pg_doorman<br/>pooling)
D1 --> PG1[(PostgreSQL<br/>leader)]
D2 --> PG2[(PostgreSQL<br/>sync replica)]
D3 --> PG3[(PostgreSQL<br/>async replica)]
- pg_doorman should be deployed close to PostgreSQL servers — it handles connection pooling, prepared statement caching, and protocol-level optimizations that benefit from low latency to the database
- patroni_proxy should be deployed close to application clients — it handles TCP routing and failover, distributing connections across the cluster without the overhead of connection pooling
This separation allows each component to excel at its specific task while providing both high availability and optimal performance.
Configuration
Example patroni_proxy.yaml:
# Cluster update interval in seconds (default: 3)
cluster_update_interval: 3
# HTTP API listen address for health checks and manual updates (default: 127.0.0.1:8009)
listen_address: "127.0.0.1:8009"
clusters:
my_cluster:
# Patroni API endpoints (multiple for redundancy)
hosts:
- "http://192.168.1.1:8008"
- "http://192.168.1.2:8008"
- "http://192.168.1.3:8008"
# Optional: TLS configuration for Patroni API
# tls:
# ca_cert: "/path/to/ca.crt"
# client_cert: "/path/to/client.crt"
# client_key: "/path/to/client.key"
# skip_verify: false
ports:
# Primary/master connections
master:
listen: "0.0.0.0:6432"
roles: ["leader"]
host_port: 5432
# Read-only connections to replicas
replicas:
listen: "0.0.0.0:6433"
roles: ["sync", "async"]
host_port: 5432
max_lag_in_bytes: 16777216 # 16MB
Configuration Options
| Option | Default | Description |
|---|---|---|
| `cluster_update_interval` | 3 | Interval in seconds between Patroni API polls |
| `listen_address` | 127.0.0.1:8009 | HTTP API listen address |
| `clusters.<name>.hosts` | - | List of Patroni API endpoints |
| `clusters.<name>.tls` | - | Optional TLS configuration for Patroni API |
| `clusters.<name>.ports.<name>.listen` | - | Listen address for this port |
| `clusters.<name>.ports.<name>.roles` | - | List of allowed roles |
| `clusters.<name>.ports.<name>.host_port` | - | PostgreSQL port on backend hosts |
| `clusters.<name>.ports.<name>.max_lag_in_bytes` | - | Maximum replication lag (optional) |
Usage
Starting patroni_proxy
# Start with configuration file
patroni_proxy /path/to/patroni_proxy.yaml
# With debug logging
RUST_LOG=debug patroni_proxy /path/to/patroni_proxy.yaml
Configuration Reload
Reload configuration without restart (add/remove ports, update hosts):
kill -HUP $(pidof patroni_proxy)
Manual Cluster Update
Trigger immediate update of all cluster members via HTTP API:
curl http://127.0.0.1:8009/update_clusters
HTTP API
| Endpoint | Method | Description |
|---|---|---|
| `/update_clusters` | GET | Trigger immediate update of all cluster members |
| `/` | GET | Health check (returns "OK") |
Comparison with HAProxy + confd
| Feature | patroni_proxy | HAProxy + confd |
|---|---|---|
| Connection preservation on update | ✅ Yes | ❌ No (reload drops connections) |
| Hot upstream updates | ✅ Native | ⚠️ Requires confd + reload |
| Replication lag awareness | ✅ Built-in | ⚠️ Requires custom checks |
| Configuration complexity | ✅ Single YAML | ❌ Multiple configs |
| Resource usage | ✅ Lightweight | ⚠️ HAProxy + confd processes |
| Role-based routing | ✅ Native | ⚠️ Requires custom templates |
Building
# Build release binary
cargo build --release --bin patroni_proxy
# Run tests
cargo test --test patroni_proxy_bdd
Troubleshooting
No backends available
If you see warnings like no backends available, check:
- Patroni API is accessible from patroni_proxy host
- Cluster members have `state: "running"`
- Roles in configuration match actual member roles
- If using `max_lag_in_bytes`, check replica lag values
Connection drops after update
This should not happen with patroni_proxy. If connections are being dropped:
- Check if the backend host was actually removed from the cluster
- Verify the `max_lag_in_bytes` threshold is not being exceeded
- Enable debug logging to see the detailed connection lifecycle
Troubleshooting
This guide helps you resolve common issues when using PgDoorman.
Authentication Errors When Connecting to PostgreSQL
Symptom: PgDoorman starts successfully but clients get authentication errors like password authentication failed when trying to execute queries.
If the pool username matches the backend PostgreSQL user
PgDoorman uses passthrough authentication by default — the client's cryptographic proof (MD5 hash or SCRAM ClientKey) is reused to authenticate to PostgreSQL. Make sure the password field in your config contains the exact hash from pg_authid / pg_shadow:
SELECT usename, passwd FROM pg_shadow WHERE usename = 'your_user';
Copy the hash (e.g., md5... or SCRAM-SHA-256$...) into your config's password field. The hash must match the one stored in PostgreSQL (same salt and iterations for SCRAM).
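For MD5 users, the stored verifier has a well-known shape: the literal prefix `md5` followed by `md5(password || username)` in hex. A small sketch for checking that the hash in your config matches what PostgreSQL would store (SCRAM verifiers are salted and iterated, so they are best copied verbatim from `pg_shadow` rather than recomputed):

```python
import hashlib

def md5_verifier(username: str, password: str) -> str:
    """Build the pg_shadow-style MD5 verifier: 'md5' + md5(password || username)."""
    digest = hashlib.md5((password + username).encode()).hexdigest()
    return "md5" + digest

# Hypothetical credentials for illustration
verifier = md5_verifier("myuser", "mypassword")
print(verifier)  # 'md5' followed by 32 hex characters
```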
If the pool username differs from the backend user
When the client-facing username in PgDoorman is different from the actual PostgreSQL role, passthrough cannot work — you need explicit credentials:
users:
- username: "app_user" # client-facing name
password: "md5..." # hash for client authentication
server_username: "pg_app_user" # actual PostgreSQL role
server_password: "plaintext_pwd" # plaintext password for that role
pool_size: 40
This also applies to JWT authentication where there is no password to pass through.
You can get user password hashes from PostgreSQL using: SELECT usename, passwd FROM pg_shadow;
Or use the pg_doorman generate command which automatically retrieves them.
Configuration File Not Found
Symptom: PgDoorman exits with "configuration file not found" error.
Solution: Specify the configuration file path explicitly:
pg_doorman /path/to/pg_doorman.yaml
By default, PgDoorman looks for pg_doorman.toml in the current directory.
Pool Size Too Small
Symptom: Clients experience high wait times or receive errors about too many connections.
Solution: Increase pool_size for the affected user, or check the SHOW POOLS admin command to see cl_waiting and maxwait values. If maxwait is consistently high, your pool is undersized for your workload.
If you encounter a problem not listed here, please open an issue on GitHub.
Settings
Configuration File Format
pg_doorman supports two configuration file formats:
- YAML (`.yaml`, `.yml`) - The primary and recommended format for new configurations.
- TOML (`.toml`) - Supported for backward compatibility with existing configurations.
The format is automatically detected based on the file extension. Both formats support the same configuration options and can be used interchangeably.
Example YAML Configuration (Recommended)
general:
host: "0.0.0.0"
port: 6432
admin_username: "admin"
admin_password: "admin"
pools:
mydb:
server_host: "localhost"
server_port: 5432
pool_mode: "transaction"
users:
- username: "myuser"
password: "mypassword"
pool_size: 40
Example TOML Configuration (Legacy)
[general]
host = "0.0.0.0"
port = 6432
admin_username = "admin"
admin_password = "admin"
[pools.mydb]
server_host = "localhost"
server_port = 5432
pool_mode = "transaction"
[[pools.mydb.users]]
username = "myuser"
password = "mypassword"
pool_size = 40
Generate Command
The generate command can output configuration in either format. The format is determined by the output file extension. By default, the generated config includes detailed inline comments explaining every parameter.
# Generate YAML configuration (recommended)
pg_doorman generate --output config.yaml
# Generate TOML configuration (for backward compatibility)
pg_doorman generate --output config.toml
# Generate a complete reference config without PG connection
pg_doorman generate --reference --output config.yaml
# Generate reference config with Russian comments
pg_doorman generate --reference --ru --output config.yaml
# Generate config without comments (plain serialization)
pg_doorman generate --no-comments --output config.yaml
| Flag | Description |
|---|---|
| `--no-comments` | Disable inline comments in generated config (by default, comments are included) |
| `--reference` | Generate a complete reference config with example values, no PostgreSQL connection needed |
| `--russian-comments`, `--ru` | Generate comments in Russian for the quick start guide |
| `--format`, `-f` | Output format: `yaml` (default) or `toml`. If `--output` is specified, the format is auto-detected from the file extension. This flag overrides auto-detection |
Include Files
Include files can be in either format, and you can mix formats. For example, a YAML main config can include TOML files and vice versa:
include:
files:
- "pools.yaml"
- "users.toml"
Human-Readable Values
pg_doorman supports human-readable formats for duration and byte size values, while maintaining backward compatibility with numeric values.
Duration Format
Duration values can be specified as:
- Plain numbers: interpreted as milliseconds (e.g., `5000` = 5 seconds)
- String with suffix:
  - `ms` - milliseconds (e.g., `"100ms"`)
  - `s` - seconds (e.g., `"5s"` = 5000 milliseconds)
  - `m` - minutes (e.g., `"5m"` = 300000 milliseconds)
  - `h` - hours (e.g., `"1h"` = 3600000 milliseconds)
  - `d` - days (e.g., `"1d"` = 86400000 milliseconds)
Examples:
general:
# All these are equivalent (3 seconds):
# connect_timeout: 3000 # backward compatible (milliseconds)
# connect_timeout: "3s" # human-readable
# connect_timeout: "3000ms" # explicit milliseconds
connect_timeout: "3s"
idle_timeout: "5m" # 5 minutes
server_lifetime: "1h" # 1 hour
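The duration rules above can be sketched as follows (an illustration of the documented behavior, not pg_doorman's actual parser; note that `"ms"` must be checked before `"s"`):

```python
def parse_duration_ms(value) -> int:
    """Parse a duration into milliseconds: plain numbers are ms, strings take a suffix."""
    if isinstance(value, (int, float)):
        return int(value)  # backward compatible: plain numbers are milliseconds
    units = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000, "d": 86_400_000}
    for suffix in ("ms", "s", "m", "h", "d"):  # longest suffix first
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * units[suffix]
    raise ValueError(f"unknown duration: {value!r}")

print(parse_duration_ms(3000))   # → 3000
print(parse_duration_ms("3s"))   # → 3000
print(parse_duration_ms("5m"))   # → 300000
```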
Byte Size Format
Byte size values can be specified as:
- Plain numbers: interpreted as bytes (e.g., `1048576` = 1 MB)
- String with suffix (case-insensitive):
  - `B` - bytes (e.g., `"1024B"`)
  - `K` or `KB` - kilobytes (e.g., `"1K"` or `"1KB"` = 1024 bytes)
  - `M` or `MB` - megabytes (e.g., `"1M"` or `"1MB"` = 1048576 bytes)
  - `G` or `GB` - gigabytes (e.g., `"1G"` or `"1GB"` = 1073741824 bytes)
Note: Uses binary prefixes (1 KB = 1024 bytes, not 1000 bytes).
Examples:
general:
# All these are equivalent (256 MB):
# max_memory_usage: 268435456 # backward compatible (bytes)
# max_memory_usage: "256MB" # human-readable
# max_memory_usage: "256M" # short form
max_memory_usage: "256MB"
unix_socket_buffer_size: "1MB" # 1 MB
worker_stack_size: "8MB" # 8 MB
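The byte-size rules can be sketched the same way (illustrative only; binary prefixes, 1 K = 1024, with two-letter suffixes checked before one-letter ones):

```python
def parse_bytes(value) -> int:
    """Parse a byte size: plain numbers are bytes, suffixes use binary prefixes."""
    if isinstance(value, (int, float)):
        return int(value)  # backward compatible: plain numbers are bytes
    s = value.strip().upper()  # suffixes are case-insensitive
    units = {"GB": 1024**3, "G": 1024**3, "MB": 1024**2, "M": 1024**2,
             "KB": 1024, "K": 1024, "B": 1}
    for suffix, mult in units.items():
        if s.endswith(suffix):
            return int(s[: -len(suffix)]) * mult
    raise ValueError(f"unknown byte size: {value!r}")

print(parse_bytes("256MB"))  # → 268435456
print(parse_bytes("1K"))     # → 1024
```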
General Settings
host
Listen host (TCP v4 only).
Default: "0.0.0.0".
port
Listen port for incoming connections.
Default: 5432.
backlog
TCP backlog for incoming connections. A value of zero uses `max_connections` as the TCP backlog.
Default: 0.
max_connections
The maximum number of clients that can connect to the pooler simultaneously. When this limit is reached:
- A client connecting without SSL will receive the expected error (code: `53300`, message: `sorry, too many clients already`).
- A client connecting via SSL will see a message indicating that the server does not support the SSL protocol.
Default: 8192.
max_concurrent_creates
Maximum number of server connections that can be created concurrently per pool. This setting uses a semaphore to limit parallel connection creation, which significantly improves performance during cold start and burst scenarios.
Higher values allow faster pool warm-up but may increase load on the PostgreSQL server during connection storms. Lower values provide more gradual connection creation.
Default: 4.
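The semaphore mechanism can be illustrated with a short asyncio sketch (hypothetical code, not pg_doorman's implementation): at most `max_concurrent_creates` connection attempts are in flight at once, and further attempts wait their turn.

```python
import asyncio

MAX_CONCURRENT_CREATES = 4  # matches the default
in_flight = 0
peak = 0

async def create_server_connection(sem: asyncio.Semaphore) -> None:
    """Simulate one connection attempt gated by the semaphore."""
    global in_flight, peak
    async with sem:
        in_flight += 1
        peak = max(peak, in_flight)
        await asyncio.sleep(0.01)  # stand-in for the TCP + auth handshake
        in_flight -= 1

async def warm_up(total: int) -> None:
    sem = asyncio.Semaphore(MAX_CONCURRENT_CREATES)
    await asyncio.gather(*(create_server_connection(sem) for _ in range(total)))

asyncio.run(warm_up(20))
print(peak)  # never exceeds MAX_CONCURRENT_CREATES
```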
tls_mode
The TLS mode for incoming connections. It can be one of the following:
- `allow` - TLS connections are allowed but not required. pg_doorman will attempt to establish a TLS connection if the client requests it.
- `disable` - TLS connections are not allowed. All connections are established without TLS encryption.
- `require` - TLS connections are required. pg_doorman only accepts connections that use TLS encryption.
- `verify-full` - TLS connections are required, and pg_doorman verifies the client certificate. This mode provides the highest level of security.
Default: "allow".
tls_ca_cert
The file containing the CA certificate to verify the client certificate. This is required when tls_mode is set to verify-full.
Default: None.
tls_private_key
The path to the private key file for TLS connections. This is required to enable TLS for incoming client connections. Must be used together with tls_certificate.
Default: None.
tls_certificate
The path to the certificate file for TLS connections. This is required to enable TLS for incoming client connections. Must be used together with tls_private_key.
Default: None.
tls_rate_limit_per_second
Limit the number of simultaneous attempts to create a TLS session. Any value other than zero implies that there is a queue through which clients must pass in order to establish a TLS connection. In some cases, this is necessary in order to launch an application that opens many connections at startup (the so-called "hot start").
Default: 0.
daemon_pid_file
Setting this option enables daemon mode. Comment it out if you want to run pg_doorman in the foreground.
Default: "/tmp/pg_doorman.pid".
syslog_prog_name
When specified, pg_doorman starts sending messages to syslog (using /dev/log or /var/run/syslog). Comment this out if you want to log to stdout.
Default: None.
log_client_connections
Log client connections for monitoring.
Default: true.
log_client_disconnections
Log client disconnections for monitoring.
Default: true.
worker_threads
The number of worker threads (POSIX threads) that asynchronously serve clients, which affects the performance of pg_doorman. More workers make the system faster, but only up to a limit (the CPU count).
This parameter also controls the number of shards in internal concurrent hash maps (DashMap).
The shard count is calculated as worker_threads * 4 rounded up to the nearest power of 2 (minimum 4 shards).
This is important for Kubernetes deployments where CPU count detection may be incorrect, causing unnecessary overhead.
Default: 4.
worker_cpu_affinity_pinning
Automatically assign workers to different CPUs (man 3 cpu_set).
Default: false.
tokio_global_queue_interval
Tokio runtime settings. Controls how often the scheduler checks the global task queue. Modern tokio versions handle this well by default, so this parameter is optional.
Default: not set (uses tokio's default).
tokio_event_interval
Tokio runtime settings. Controls how often the scheduler checks for external events (I/O, timers). Modern tokio versions handle this well by default, so this parameter is optional.
Default: not set (uses tokio's default).
worker_stack_size
Tokio runtime settings. Sets the stack size for worker threads. Modern tokio versions handle this well by default, so this parameter is optional.
Default: not set (uses tokio's default).
max_blocking_threads
Tokio runtime settings. Sets the maximum number of threads for blocking operations. Modern tokio versions handle this well by default, so this parameter is optional.
Default: not set (uses tokio's default).
connect_timeout
Connection timeout to server in milliseconds.
Default: 3000 (3 sec).
query_wait_timeout
Maximum time to wait for a query to complete, in milliseconds.
Default: 5000 (5 sec).
idle_timeout
Server idle timeout in milliseconds.
Default: 300000000 (5000 min).
server_lifetime
Server lifetime in milliseconds.
Default: 300000 (5 min).
retain_connections_time
Interval for checking and closing idle connections that exceed idle_timeout or server_lifetime.
The retain task runs periodically at this interval to clean up expired connections.
Default: 30000 (30 sec).
retain_connections_max
Maximum number of idle connections to close per retain cycle.
When set to 0, all idle connections that exceed idle_timeout or server_lifetime will be closed immediately.
When set to a positive value, at most that many connections will be closed per cycle across all pools.
This parameter controls how aggressively pg_doorman closes idle connections. With the default value of 3,
up to 3 connections are closed per retain cycle, providing controlled cleanup. If you need faster cleanup of
expired connections, set to 0 (unlimited) to close all expired connections in each retain cycle.
Default: 3.
server_idle_check_timeout
Time after which an idle server connection should be checked before being given to a client. This helps detect dead connections caused by PostgreSQL restart, network issues, or server-side idle timeouts.
When a connection has been idle in the pool longer than this timeout, pg_doorman will send a minimal query (;)
to verify the connection is still alive before returning it to the client. If the check fails, the connection
is discarded and a new one is obtained.
Set to 0 to disable the check (not recommended for production environments with potential network instability
or PostgreSQL restarts).
Default: 60s (60 seconds).
server_round_robin
In transaction pool mode, this chooses whether the most recently freed server backend is reused or the next one is selected. By default, the LRU (Least Recently Used) method is used, which has a positive impact on performance.
Default: false.
sync_server_parameters
If enabled, pg_doorman tries to restore the parameters set by the client via `SET` (including `application_name`) on other server backends in transaction mode. This is disabled by default (false) for performance reasons.
If you need to track `application_name` but want to avoid the overhead of constant `SET` queries to the server, consider creating a separate pool for each application and using the `application_name` parameter in the pool settings.
Default: false.
tcp_so_linger
By default, pg_doorman sends an RST instead of keeping the connection open for a long time.
Default: 0.
tcp_no_delay
Sets TCP_NODELAY to disable Nagle's algorithm for lower latency.
Default: true.
tcp_keepalives_count
Number of TCP keepalive probes. Keepalive is enabled by default, overriding OS defaults.
Default: 5.
tcp_keepalives_idle
TCP keepalive idle time in seconds. Keepalive is enabled by default, overriding OS defaults.
Default: 5.
tcp_keepalives_interval
TCP keepalive interval in seconds.
Default: 5.
tcp_user_timeout
Sets the TCP_USER_TIMEOUT socket option for client connections (in seconds). This option specifies
the maximum time that transmitted data may remain unacknowledged before TCP will forcibly close the
connection. This helps detect dead client connections faster than keepalive probes when the connection
is actively sending data but the remote end has become unreachable (e.g., network failure, client crash).
When set to a non-zero value, if data remains unacknowledged for this duration, the connection will be terminated. This is particularly useful to avoid 15-16 minute delays caused by TCP retransmission timeout when keepalive cannot help (e.g., during active data transmission).
Note: This option is only supported on Linux. On other operating systems, this setting is ignored.
Set to 0 to disable (use OS default).
Default: 60.
unix_socket_buffer_size
Buffer size for read and write operations when connecting to PostgreSQL via a unix socket.
Default: 1048576.
admin_username
Access to the virtual admin database is carried out through the administrator's username and password.
Default: "admin".
admin_password
Access to the virtual admin database is carried out through the administrator's username and password. It should be replaced with your secret.
Default: "admin".
prepared_statements
Switcher to enable/disable caching of prepared statements.
Default: true.
prepared_statements_cache_size
Cache size of prepared statements at the pool level (shared across all clients connecting to the same pool). This cache stores the mapping from query hash to rewritten prepared statement name.
Default: 8192.
client_prepared_statements_cache_size
Maximum number of prepared statements cached per client connection. This is a protection mechanism against
malicious or misbehaving clients that don't call DEALLOCATE and could cause memory exhaustion by creating
unlimited prepared statements over long-running connections.
When the limit is reached, the oldest entry is evicted from the client's cache. The evicted statement
can still be re-used later because the pool-level cache (prepared_statements_cache_size) retains the
query-to-server-name mapping.
Set to 0 to disable the limit (unlimited cache size, relies on client calling DEALLOCATE).
Default: 0 (unlimited).
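The eviction behavior described above can be sketched with an ordered map (a simplified model, not pg_doorman's code): when a client exceeds the cap, the oldest client-side entry is dropped, while the pool-level cache still remembers the query.

```python
from collections import OrderedDict

class ClientStatementCache:
    """Sketch of the per-client cap: the oldest entry is evicted at the limit."""
    def __init__(self, limit: int):
        self.limit = limit
        self.entries: "OrderedDict[str, str]" = OrderedDict()

    def prepare(self, client_name: str, server_name: str) -> None:
        if client_name in self.entries:
            self.entries.move_to_end(client_name)  # refresh recency
        self.entries[client_name] = server_name
        if self.limit and len(self.entries) > self.limit:
            self.entries.popitem(last=False)  # evict the oldest statement

cache = ClientStatementCache(limit=2)
cache.prepare("s1", "doorman_1")
cache.prepare("s2", "doorman_2")
cache.prepare("s3", "doorman_3")  # evicts s1; the pool-level cache still maps its query
print(list(cache.entries))  # → ['s2', 's3']
```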
message_size_to_be_stream
Data responses from the server (message type 'D') greater than this value will be transmitted through the proxy in small chunks (1 MB).
Default: 1048576.
scaling_warm_pool_ratio
Warm pool ratio as a percentage (0-100). When the pool size is below this threshold
of max_size, new connections are created immediately without backpressure.
Above this threshold, the pool applies fast retries and cooldown sleep to avoid
overwhelming the PostgreSQL server during connection storms.
Default: 20.
scaling_fast_retries
Number of fast retries using yield_now() for low-latency waiting when creating
connections above the warm pool threshold. Each retry takes approximately 1-5μs.
After exhausting fast retries, the pool falls back to scaling_cooldown_sleep.
Default: 10.
scaling_cooldown_sleep
Sleep duration after fast retries are exhausted during connection
creation above the warm pool threshold. This provides backpressure to prevent
overwhelming the PostgreSQL server. Set to 0 to disable (not recommended).
Accepts duration strings like "10ms", "1s", or plain numbers (milliseconds).
Default: "10ms".
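Taken together, the three `scaling_*` settings describe a simple policy. A hypothetical sketch of the decision (not the actual implementation): below the warm threshold, connections are created immediately; above it, a few cheap retries are attempted before sleeping.

```python
def creation_policy(current_size: int, max_size: int,
                    warm_pool_ratio: int = 20, fast_retries: int = 10,
                    cooldown_sleep_ms: int = 10) -> str:
    """Describe how a new server connection is created at the current pool size."""
    warm_threshold = max_size * warm_pool_ratio / 100
    if current_size < warm_threshold:
        return "create immediately (warm pool, no backpressure)"
    # Above the threshold: a few ~microsecond yield_now() retries first,
    # then a cooldown sleep to avoid overwhelming PostgreSQL in a storm.
    return f"{fast_retries} fast retries, then sleep {cooldown_sleep_ms}ms"

print(creation_policy(current_size=5, max_size=40))   # below 20% of 40 (= 8)
print(creation_policy(current_size=30, max_size=40))  # above the warm threshold
```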
max_memory_usage
We track the total amount of memory used by the internal buffers for all in-flight queries. If the limit (256 MB by default) is reached, the client receives an error.
Default: 268435456.
shutdown_timeout
During a graceful shutdown, we wait for in-flight transactions to complete within this time limit (10 seconds by default).
Default: 10000.
proxy_copy_data_timeout
Maximum time to wait for data copy operations during proxying, in milliseconds.
Default: 15000 (15 sec).
server_tls
Enable TLS for connections to the PostgreSQL server. When enabled, pg_doorman will attempt to establish TLS connections to the backend PostgreSQL servers.
Default: false.
verify_server_certificate
Verify the PostgreSQL server's TLS certificate when connecting with TLS. This setting is only relevant when server_tls is enabled.
Default: false.
hba
The list of IP addresses that are permitted to connect to pg_doorman (legacy format; see `pg_hba` below).
Default: [].
pg_hba
New-style client access control in native PostgreSQL pg_hba.conf format. This allows you to define fine-grained access rules similar to PostgreSQL, including per-database, per-user, address ranges, and TLS requirements.
You can specify general.pg_hba in three ways:
- As a multi-line string with the contents of a `pg_hba.conf` file
- As an object with `path` that points to a file on disk
- As an object with `content` containing the rules as a string
Examples:
[general]
# Inline content (triple-quoted TOML string)
pg_hba = """
# type database user address method
host all all 10.0.0.0/8 md5
hostssl all all 0.0.0.0/0 scram-sha-256
hostnossl all all 192.168.1.0/24 trust
"""
# Or load from file
# pg_hba = { path = "./pg_hba.conf" }
# Or embed as a single-line string
# pg_hba = { content = "host all all 127.0.0.1/32 trust" }
Supported fields and methods:
- Connection types: `local`, `host`, `hostssl`, `hostnossl` (TLS-aware matching is honored)
- Database matcher: a name or `all`
- User matcher: a name or `all`
- Address: CIDR form like `1.2.3.4/32` or `::1/128` (required for non-`local` rules)
- Methods: `trust`, `md5`, `scram-sha-256` (unknown methods are parsed but treated as not-allowed by the checker)
Precedence and compatibility:
- `general.pg_hba` supersedes the legacy `general.hba` list. You cannot set both at the same time; configuration validation will reject this combination.
- Rules are evaluated in order; the first matching rule decides the outcome.
Behavior of method = trust:
- When a matching rule has `trust`, PgDoorman will accept the connection without requesting a password. This mirrors PostgreSQL behavior.
- Specifically, if `trust` matches, PgDoorman will skip password verification even if the user has an `md5` or `scram-sha-256` password stored. This affects both MD5 and SCRAM flows.
- TLS constraints from the rule are respected: `hostssl` requires TLS, `hostnossl` forbids TLS.
Admin console access:
- `general.pg_hba` rules apply to the special admin database `pgdoorman` as well.
- This means you can allow admin access with the `trust` method when a matching rule is present, for example: `host pgdoorman admin 127.0.0.1/32 trust`
Notes and limitations:
- Only a minimal subset of `pg_hba.conf` is supported, sufficient for most proxy use-cases (type, database, user, address, method). Additional options (like `clientcert`) are currently ignored.
- For authentication methods other than `trust`, PgDoorman performs the corresponding challenge/response with the client.
- For Talos/JWT/PAM flows configured at the pool/user level, `trust` still bypasses the client password prompt; however, those modes may be used when `trust` does not match.
pooler_check_query
This query will not be sent to the server if it is run as a SimpleQuery. It can be used to check the connection at the application level.
Default: ";".
Pool Settings
Each entry in the pools section is the name of a virtual database that a pg_doorman client can connect to.
[pools.exampledb] # Declaring the 'exampledb' database
server_host
The directory with unix sockets or the IPv4 address of the PostgreSQL server that serves this pool.
Example: "/var/run/postgresql" or "127.0.0.1".
Default: "127.0.0.1".
server_port
The port through which PostgreSQL server accepts incoming connections.
Default: 5432.
server_database
Optional parameter that determines which database should be connected to on the PostgreSQL server.
application_name
The `application_name` parameter is sent to the server when opening a connection to PostgreSQL. It may be useful with the `sync_server_parameters = false` setting.
connect_timeout
Maximum time to allow for establishing a new server connection for this pool, in milliseconds. If not specified, the global connect_timeout setting is used.
Default: None (uses global setting).
idle_timeout
Close idle connections in this pool that have been opened for longer than this value, in milliseconds. If not specified, the global idle_timeout setting is used.
Default: None (uses global setting).
server_lifetime
Close server connections in this pool that have been opened for longer than this value, in milliseconds. Only applied to idle connections. If not specified, the global server_lifetime setting is used.
Default: None (uses global setting).
pool_mode
- `session` - Server is released back to the pool after the client disconnects.
- `transaction` - Server is released back to the pool after the transaction finishes.
Default: "transaction".
log_client_parameter_status_changes
Log information about any SET command in the log.
Default: false.
cleanup_server_connections
When enabled, the pool will automatically clean up server connections that are no longer needed. This helps manage resources efficiently by closing idle connections.
Default: true.
scaling_warm_pool_ratio
Override global scaling_warm_pool_ratio for this pool. If not specified, the global setting is used.
scaling_fast_retries
Override global scaling_fast_retries for this pool. If not specified, the global setting is used.
scaling_cooldown_sleep
Override global scaling_cooldown_sleep for this pool. If not specified, the global setting is used. Accepts duration strings like "10ms", "1s", or plain numbers (milliseconds).
Auth Query Settings
The auth_query section enables dynamic user authentication by querying a PostgreSQL database for credentials at connection time. This allows pg_doorman to authenticate users without listing them statically in the configuration file.
pools:
mydb:
auth_query:
query: "SELECT passwd FROM pg_shadow WHERE usename = $1"
user: "doorman_auth"
password: "auth_password"
There are two modes of operation:
- Dedicated mode (`server_user` is set): All dynamically authenticated users share a single connection pool that connects to PostgreSQL as `server_user`. This is the simplest setup and works well when all users need the same backend access.
- Passthrough mode (`server_user` is not set): Each dynamically authenticated user gets their own connection pool that connects to PostgreSQL using their own credentials (MD5 pass-the-hash or SCRAM ClientKey passthrough). This preserves per-user identity on the backend.
Static users (defined in the users section) are always checked first. The auth_query is only used when the username is not found among static users.
The user that runs auth queries needs access to password hashes (e.g. from pg_shadow). Do not use a superuser for this purpose. Instead, create a SECURITY DEFINER function owned by a superuser and a dedicated role with minimal privileges:
```sql
-- Create a dedicated role for auth queries
CREATE ROLE doorman_auth LOGIN PASSWORD 'strong_password';

-- Create a SECURITY DEFINER function (runs with owner's privileges)
CREATE OR REPLACE FUNCTION pg_doorman_get_auth(p_usename TEXT)
RETURNS TABLE (usename name, passwd text)
LANGUAGE sql SECURITY DEFINER
SET search_path = pg_catalog
AS $$
  SELECT usename, passwd FROM pg_shadow WHERE usename = p_usename;
$$;

-- Grant execute only to the dedicated role
REVOKE ALL ON FUNCTION pg_doorman_get_auth(TEXT) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION pg_doorman_get_auth(TEXT) TO doorman_auth;
```
Then use this function in the query parameter:
```yaml
auth_query:
query: "SELECT * FROM pg_doorman_get_auth($1)"
user: "doorman_auth"
password: "strong_password"
```
query
SQL query to fetch credentials. Must return a column named passwd or password containing the MD5 or SCRAM hash. If the query returns exactly one column, it is used regardless of name. Any extra columns are ignored. Use $1 as the placeholder for the username parameter.
Example: "SELECT passwd FROM pg_shadow WHERE usename = $1"
user
PostgreSQL username for the executor connection that runs auth queries.
password
Password for the executor user (plaintext). Can be empty if the PostgreSQL server uses trust authentication for this user.
Default: "".
database
Database for executor connections. If not specified, the pool name is used.
Default: None (uses pool name).
pool_size
Number of executor connections (connections used to run auth queries, not data connections).
Default: 2.
server_user
Backend PostgreSQL user for data connections in dedicated mode. When set, all dynamically authenticated users share one connection pool that connects as this user. When not set, passthrough mode is used.
Default: None (passthrough mode).
server_password
Plaintext password for the server_user. Only meaningful when server_user is set.
Default: None.
default_pool_size
Pool size for dynamic user data connections (per-user in passthrough mode, shared in dedicated mode).
Default: 40.
cache_ttl
Maximum cache age for successfully fetched credentials. Accepts duration strings like "1h", "30m", "300s".
Default: "1h".
cache_failure_ttl
Cache TTL for "user not found" entries (negative cache). Prevents repeated queries for non-existent users.
Default: "30s".
min_interval
Minimum interval between re-fetches for the same username after an authentication failure. Protects the backend from excessive queries during brute-force attempts.
Default: "1s".
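The caching rules above (long positive TTL, short negative TTL) can be modeled roughly like this (illustrative only; the real TTLs come from the configuration, and the hash value below is a placeholder):

```python
import time

CACHE_TTL = 3600.0         # "1h"  - successfully fetched credentials
CACHE_FAILURE_TTL = 30.0   # "30s" - negative ("user not found") entries

cache: dict = {}

def store(username: str, passwd) -> None:
    """Cache a lookup result; misses (None) get the shorter negative TTL."""
    ttl = CACHE_TTL if passwd is not None else CACHE_FAILURE_TTL
    cache[username] = (passwd, time.monotonic() + ttl)

def lookup(username: str):
    """Return (hit, passwd); expired entries count as misses."""
    entry = cache.get(username)
    if entry is None or time.monotonic() > entry[1]:
        return False, None
    return True, entry[0]

store("alice", "SCRAM-SHA-256$placeholder")
store("ghost", None)  # negative cache: avoids hammering pg_shadow for unknown users
print(lookup("alice"))
print(lookup("ghost"))
```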
Pool Users Settings
[pools.exampledb.users.0]
username = "exampledb-user-0" # A virtual user who can connect to this virtual database.
username
A virtual username who can connect to this virtual database (pool).
password
The password for the virtual pool user.
Password can be specified in MD5, SCRAM-SHA-256, or JWT format.
Also, you can create a mirror list of users using secrets from the PostgreSQL instance: select usename, passwd from pg_shadow.
auth_pam_service
The pam-service that is responsible for client authorization. In this case, pg_doorman will ignore the password value.
server_username
The real PostgreSQL username used to connect to the database server.
By default, PgDoorman uses the same username for both client authentication and server connections, using passthrough authentication: the cryptographic material from the client's authentication (MD5 hash or SCRAM ClientKey) is reused to authenticate to the backend. This eliminates the need for plaintext server_password.
Passthrough mode (recommended for identity-matching users):
- Omit both `server_username` and `server_password`
- pg_doorman reuses the client's auth proof to connect to PostgreSQL
- For MD5: the hash from `password` is used directly
- For SCRAM: the ClientKey is extracted from the client's first SCRAM auth and cached
- Requirement: the `password` verifier must match `pg_authid` on the backend (same salt/iterations for SCRAM, same hash for MD5)
Explicit credentials mode (when identities differ):
- Set `server_username` and `server_password` to the actual PostgreSQL credentials
- `server_password` requires `server_username` to be set
- `server_username` alone (without `server_password`) is allowed for trust authentication
server_password
The plaintext password for the PostgreSQL server user specified in `server_username`.
When `server_password` is not set and the user is passthrough-eligible (no `server_username`, or `server_username` equals `username`), PgDoorman uses passthrough authentication instead: the cryptographic material from the client's authentication is reused for the backend connection. This eliminates plaintext passwords from config files.
`server_password` requires `server_username` to be set.
pool_size
The maximum number of simultaneous connections to the PostgreSQL server available for this pool and user.
Default: 40.
min_pool_size
The minimum number of connections to maintain in the pool for this user. Connections are prewarmed at startup (before the first retain cycle) and then maintained by periodic replenishment. If specified, it must be less than or equal to pool_size.
Default: None.
server_lifetime
Close server connections for this user that have been opened for longer than this value, in milliseconds. Only applied to idle connections. If not specified, the pool's server_lifetime setting is used.
Default: None (uses pool setting).
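A minimal per-user sizing sketch combining the settings above (the values and the placeholder hash are illustrative):

```toml
[pools.exampledb.users.0]
username = "exampledb-user-0"
password = "md5..."       # placeholder verifier
pool_size = 40            # max simultaneous server connections for this user
min_pool_size = 5         # prewarmed at startup; must be <= pool_size
server_lifetime = 300000  # ms; overrides the pool-level setting for this user
```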
By default, PgDoorman uses passthrough authentication: the client's cryptographic proof (MD5 hash or SCRAM ClientKey) is automatically reused to authenticate to PostgreSQL. No plaintext passwords are needed in the config.
Set `server_username` and `server_password` only when the backend PostgreSQL user differs from the pool username (e.g., username mapping or JWT auth):
```yaml
users:
  - username: "app_user"             # client-facing name
    password: "md5..."               # hash for client authentication
    server_username: "pg_app_user"   # different backend PostgreSQL user
    server_password: "plaintext_pwd" # plaintext password for that user
```
Prometheus Settings
pg_doorman includes a Prometheus metrics exporter that provides detailed insights into the performance and behavior of your connection pools. This section describes how to enable and use the Prometheus metrics exporter, as well as the available metrics.
Enabling Prometheus Metrics
To enable the Prometheus metrics exporter, add the following to your configuration file:
```yaml
prometheus:
  enabled: true
  host: "0.0.0.0" # The host on which the metrics server will listen
  port: 9127      # The port on which the metrics server will listen
```
Configuration Options
| Option | Description | Default |
|---|---|---|
| `enabled` | Enable or disable the Prometheus metrics exporter. | `false` |
| `host` | The host on which the Prometheus metrics exporter will listen. | `"0.0.0.0"` |
| `port` | The port on which the Prometheus metrics exporter will listen. | `9127` |
Configuring Prometheus
Add the following job to your Prometheus configuration to scrape metrics from pg_doorman:
```yaml
scrape_configs:
  - job_name: 'pg_doorman'
    static_configs:
      - targets: ['<pg_doorman_host>:9127']
```
Replace <pg_doorman_host> with the hostname or IP address of your pg_doorman instance.
Available Metrics
pg_doorman exposes the following metrics:
System Metrics
| Metric | Description |
|---|---|
| `pg_doorman_total_memory` | Total memory allocated to the pg_doorman process, in bytes. Monitors the memory footprint of the application. |
Connection Metrics
| Metric | Description |
|---|---|
| `pg_doorman_connection_count` | Counter of new connections by type handled by pg_doorman. Types include: `plain` (unencrypted connections), `tls` (encrypted connections), `cancel` (connection cancellation requests), and `total` (sum of all connections). |
Socket Metrics (Linux only)
| Metric | Description |
|---|---|
| `pg_doorman_sockets` | Counter of sockets used by pg_doorman by socket type. Types include: `tcp` (IPv4 TCP sockets), `tcp6` (IPv6 TCP sockets), `unix` (Unix domain sockets), and `unknown` (sockets of unrecognized type). Only available on Linux systems. |
Pool Metrics
| Metric | Description |
|---|---|
| `pg_doorman_pools_clients` | Number of clients in connection pools by status, user, and database. Status values include: `idle` (connected but not executing queries), `waiting` (waiting for a server connection), and `active` (currently executing queries). Helps monitor connection pool utilization and client distribution. |
| `pg_doorman_pools_servers` | Number of servers in connection pools by status, user, and database. Status values include: `active` (actively serving clients) and `idle` (available for new connections). Helps monitor server availability and load distribution. |
| `pg_doorman_pools_bytes` | Total bytes transferred through connection pools by direction, user, and database. Direction values include: `received` (bytes received from clients) and `sent` (bytes sent to clients). Useful for monitoring network traffic and identifying high-volume connections. |
Query and Transaction Metrics
| Metric | Description |
|---|---|
| `pg_doorman_pools_queries_percentile` | Query execution time percentiles by user and database. Percentile values include: `99`, `95`, `90`, and `50` (median). Values are in milliseconds. Helps identify slow queries and performance trends across different users and databases. |
| `pg_doorman_pools_transactions_percentile` | Transaction execution time percentiles by user and database. Percentile values include: `99`, `95`, `90`, and `50` (median). Values are in milliseconds. Helps monitor transaction performance and identify long-running transactions that might impact database performance. |
| `pg_doorman_pools_transactions_count` | Counter of transactions executed in connection pools by user and database. Helps track transaction volume and identify users or databases with high transaction rates. |
| `pg_doorman_pools_transactions_total_time` | Total time spent executing transactions in connection pools by user and database. Values are in milliseconds. Helps monitor overall transaction performance and identify users or databases with high transaction execution times. |
| `pg_doorman_pools_queries_count` | Counter of queries executed in connection pools by user and database. Helps track query volume and identify users or databases with high query rates. |
| `pg_doorman_pools_queries_total_time` | Total time spent executing queries in connection pools by user and database. Values are in milliseconds. Helps monitor overall query performance and identify users or databases with high query execution times. |
| `pg_doorman_pools_avg_wait_time` | Average wait time for clients in connection pools by user and database. Values are in milliseconds. Helps monitor client wait times and identify potential bottlenecks. |
Auth Query Metrics
These metrics are only available when auth_query is configured for one or more pools.
| Metric | Description |
|---|---|
| `pg_doorman_auth_query_cache` | Auth query cache metrics by type and database. Types include: `entries` (current cached credentials), `hits` (cache lookups that found a valid entry), `misses` (cache lookups that required a PostgreSQL fetch), `refetches` (re-fetches triggered by an auth failure with stale credentials), and `rate_limited` (re-fetch attempts that were rate-limited by `min_interval`). |
| `pg_doorman_auth_query_auth` | Auth query authentication outcomes by result and database. Results include: `success` (successful authentication) and `failure` (wrong password or credential mismatch). |
| `pg_doorman_auth_query_executor` | Auth query executor metrics by type and database. Types include: `queries` (total queries executed against PostgreSQL to fetch credentials) and `errors` (queries that failed due to connection or execution errors). |
| `pg_doorman_auth_query_dynamic_pools` | Auth query dynamic pool lifecycle metrics by type and database. Types include: `current` (currently active dynamic pools), `created` (total pools created since startup), and `destroyed` (total pools garbage-collected or removed on RELOAD). Only relevant in passthrough mode. |
Server Metrics
| Metric | Description |
|---|---|
| `pg_doorman_servers_prepared_hits` | Counter of prepared statement hits on database backends by user and database. Helps track the effectiveness of prepared statements in reducing query parsing overhead. |
| `pg_doorman_servers_prepared_misses` | Counter of prepared statement misses on database backends by user and database. Helps identify queries that could benefit from being prepared to improve performance. |
Grafana Dashboard
You can create a Grafana dashboard to visualize these metrics. Here's a simple example of panels you might want to include:
- Connection counts by type
- Memory usage over time
- Client and server counts by pool
- Query and transaction performance percentiles
- Network traffic by pool
Example Queries
Here are some example Prometheus queries that you might find useful:
Connection Rate
```
rate(pg_doorman_connection_count{type="total"}[5m])
```
Pool Utilization
```
sum by (database) (pg_doorman_pools_clients{status="active"})
  / sum by (database) (pg_doorman_pools_servers{status="active"} + pg_doorman_pools_servers{status="idle"})
```
Slow Queries
```
pg_doorman_pools_queries_percentile{percentile="99"}
```
Client Wait Time
```
pg_doorman_pools_avg_wait_time
```
Auth Query Cache Hit Rate
```
pg_doorman_auth_query_cache{type="hits"}
  / (pg_doorman_auth_query_cache{type="hits"} + pg_doorman_auth_query_cache{type="misses"})
```
Auth Query Failure Rate
```
rate(pg_doorman_auth_query_auth{result="failure"}[5m])
```
Performance Benchmarks
Automated Benchmark Results
Last updated: 2026-02-23 16:21 UTC
These benchmarks are automatically generated by the CI pipeline using pgbench.
Test Environment
- Pool size: 40 connections
- Test duration: 30 seconds per test
- Instance: AWS Fargate (16 vCPU, 32 GB RAM)
- Workers: pg_doorman: 12, odyssey: 12
- pgbench jobs: 4 (global override)
- Started: 2026-02-23 14:58:04 UTC
- Finished: 2026-02-23 16:21:21 UTC
- Total duration: 1h 23m 16s
Legend
- +N%: pg_doorman is N% faster than competitor (e.g., +10% means pg_doorman is 10% faster)
- -N%: pg_doorman is N% slower than competitor (e.g., -10% means pg_doorman is 10% slower)
- ≈0%: Equal performance (difference less than 3%)
- ∞: Competitor failed (0 TPS), pg_doorman wins
- N/A: Test not supported by this pooler
- -: Test not executed
Simple Protocol
| Test | vs pgbouncer | vs odyssey |
|---|---|---|
| 1 client | -8% | -9% |
| 40 clients | +40% | -39% |
| 120 clients | x3.0 | -5% |
| 500 clients | x2.8 | +4% |
| 10,000 clients | x2.8 | +20% |
| 1 client + Reconnect | -4% | x233.7 |
| 40 clients + Reconnect | +22% | x2.1 |
| 120 clients + Reconnect | +22% | x2.2 |
| 500 clients + Reconnect | +21% | +93% |
| 10,000 clients + Reconnect | +69% | x2.3 |
| 1 client + SSL | -3% | -5% |
| 40 clients + SSL | +70% | -28% |
| 120 clients + SSL | x3.3 | ≈0% |
| 500 clients + SSL | x3.3 | +5% |
| 10,000 clients + SSL | x3.5 | +19% |
Extended Protocol
| Test | vs pgbouncer | vs odyssey |
|---|---|---|
| 1 client | ≈0% | +39% |
| 40 clients | +52% | -8% |
| 120 clients | x3.0 | +42% |
| 500 clients | x3.0 | +56% |
| 10,000 clients | x2.9 | +78% |
| 1 client + Reconnect | ≈0% | x15.5 |
| 40 clients + Reconnect | +21% | x3.1 |
| 120 clients + Reconnect | +21% | +87% |
| 500 clients + Reconnect | +21% | x2.5 |
| 10,000 clients + Reconnect | +68% | x3.2 |
| 1 client + SSL | +4% | +40% |
| 40 clients + SSL | +72% | ≈0% |
| 120 clients + SSL | x3.4 | +49% |
| 500 clients + SSL | x3.6 | +61% |
| 10,000 clients + SSL | x3.6 | +85% |
| 1 client + SSL + Reconnect | +6% | +11% |
| 40 clients + SSL + Reconnect | +93% | ≈0% |
| 120 clients + SSL + Reconnect | +95% | ≈0% |
| 500 clients + SSL + Reconnect | +91% | ≈0% |
| 10,000 clients + SSL + Reconnect | +90% | ≈0% |
Prepared Protocol
| Test | vs pgbouncer | vs odyssey |
|---|---|---|
| 1 client | -7% | -11% |
| 40 clients | +72% | -42% |
| 120 clients | x3.7 | -10% |
| 500 clients | x3.6 | +3% |
| 10,000 clients | x3.4 | +21% |
| 1 client + Reconnect | ≈0% | +48% |
| 40 clients + Reconnect | ≈0% | ∞ |
| 120 clients + Reconnect | +5% | ∞ |
| 500 clients + Reconnect | +8% | ∞ |
| 10,000 clients + Reconnect | +24% | ∞ |
| 1 client + SSL | -4% | -7% |
| 40 clients + SSL | x2.0 | -33% |
| 120 clients + SSL | x3.9 | -5% |
| 500 clients + SSL | x4.2 | +7% |
| 10,000 clients + SSL | x4.0 | +23% |
Notes
- Odyssey has poor support for extended query protocol in transaction pooling mode, resulting in significantly lower performance compared to pg_doorman and pgbouncer
- Important: The values shown are relative performance ratios, not absolute TPS numbers. While absolute TPS values may vary depending on hardware and system load, the relative ratios between poolers should remain consistent when tests are run sequentially in a short timeframe (30 seconds each). This allows for fair comparison across different connection poolers under identical conditions
Changelog
3.3.1 Feb 26, 2026
Bug Fixes:
- Fix Ctrl+C in foreground mode: Pressing Ctrl+C in foreground mode (with a TTY attached) now performs a clean graceful shutdown instead of triggering a binary upgrade. Previously, each Ctrl+C would spawn a new pg_doorman process via `--inherit-fd`, leaving orphan processes accumulating. SIGINT in daemon mode (no TTY) retains its legacy binary-upgrade behavior for backward compatibility with existing `systemd` units.
- Minimum pool size enforcement (`min_pool_size`): The `min_pool_size` user setting is now enforced at runtime. After each connection retain cycle, pg_doorman checks pool sizes and creates new connections to maintain the configured minimum. Previously, `min_pool_size` was accepted in config but never applied: pools started empty and could drop to 0 connections even with `min_pool_size` set. Replenishment stops on the first connection failure to avoid hammering an unavailable server.
New Features:
- SIGUSR2 for binary upgrade: The new dedicated signal `SIGUSR2` triggers a binary upgrade plus graceful shutdown in all modes (daemon and foreground). This is now the recommended signal for binary upgrades. The `systemd` service file has been updated to use `SIGUSR2` for `ExecReload`.
- `UPGRADE` admin command: A new admin console command that triggers a binary upgrade via SIGUSR2. Use it from `psql` connected to the admin database: `UPGRADE;`.
Improvements:
- Pool prewarm at startup: When `min_pool_size` is configured, pg_doorman now creates the minimum number of connections immediately at startup, before the first retain cycle. Previously, pools started empty and connections were only created lazily on the first client request or after the first retain interval (default 60s). This eliminates cold-start latency for the first clients connecting after a pg_doorman restart.
- Configurable connection scaling parameters: New `general` settings `scaling_warm_pool_ratio`, `scaling_fast_retries`, and `scaling_cooldown_sleep` allow tuning connection pool scaling behavior. All three can be overridden at the pool level. `scaling_cooldown_sleep` uses the human-readable `Duration` type (e.g. `"10ms"`, `"1s"`) consistent with other timeout fields.
- `max_concurrent_creates` setting: Controls the maximum number of server connections that can be created concurrently per pool. Uses a semaphore instead of a mutex for parallel connection creation.
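A sketch of the new scaling knobs under `general`. The key names come from the notes above, but the values shown are illustrative choices of mine, not documented defaults:

```toml
[general]
scaling_warm_pool_ratio = 0.5   # illustrative value
scaling_fast_retries = 3        # illustrative value
scaling_cooldown_sleep = "10ms" # Duration type, e.g. "10ms" or "1s"
max_concurrent_creates = 4      # illustrative; concurrent server creates per pool
```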
3.3.0 Feb 23, 2026
New Features:
- Dynamic user authentication (`auth_query`): PgDoorman can now authenticate users dynamically by querying PostgreSQL at connection time, so there is no need to list every user in the config. Supports `pg_shadow`, custom tables, and `SECURITY DEFINER` functions. The query must return a column named `passwd` or `password` (or any single column) containing an MD5 or SCRAM-SHA-256 hash.
- Passthrough authentication: The default mode for both static and dynamic users. PgDoorman reuses the client's cryptographic proof (MD5 hash or SCRAM ClientKey) to authenticate to the backend automatically. No plaintext `server_password` is needed in the config when the pool user matches the backend PostgreSQL user.
- Two auth_query modes:
  - Passthrough mode (default): each dynamic user gets their own backend connection pool and authenticates as themselves, preserving per-user identity on the backend.
  - Dedicated mode (`server_user` set): all dynamic users share a single backend pool under one PostgreSQL role.
- Auth query caching: DashMap-based cache with configurable TTL, double-checked locking, rate-limited refetch, and request coalescing. Supports separate TTLs for successful and failed lookups.
- `SHOW AUTH_QUERY` admin command: Displays per-pool metrics: cache entries/hits/misses, auth success/failure counters, executor stats, and dynamic pool count.
- Prometheus metrics for auth_query: New metric families `pg_doorman_auth_query_cache`, `pg_doorman_auth_query_auth`, `pg_doorman_auth_query_executor`, and `pg_doorman_auth_query_dynamic_pools`.
- Idle dynamic pool garbage collection: A background task cleans up expired dynamic pools when all connections have been idle beyond `server_lifetime`. Zero overhead for static-only configs.
- Smart password column lookup: The password column is resolved by name (`passwd` → `password` → single-column fallback) and works with `pg_shadow`, custom tables, and arbitrary single-column queries.
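As a rough illustration of the two modes described above. The `$1` parameter convention and the exact key placement are assumptions on my part; consult the generated reference config for the authoritative syntax:

```toml
[pools.exampledb]
# Dynamic auth: fetch the password verifier at connect time instead of
# listing every user statically ($1 placeholder is assumed syntax).
auth_query = "SELECT usename, passwd FROM pg_shadow WHERE usename = $1"

# Dedicated mode only: route all dynamic users through one backend role.
# server_user = "shared_backend_role"
```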
Improvements:
- `server_username` / `server_password` now optional: Previously documented as required for MD5/SCRAM hash configs. Now only needed when the backend user differs from the pool user (username mapping, JWT auth).
- Data-driven config & docs generation: `fields.yaml` is the single source of truth for all config field descriptions (EN/RU). Reference docs, annotated configs, and inline comments are all generated from it.
Testing:
- 39 new BDD scenarios (260+ steps) covering auth_query executor, end-to-end auth, HBA integration, passthrough mode, SCRAM-only auth, RELOAD/GC lifecycle, observability, and static user passthrough.
3.2.4 Feb 20, 2026
New Features:
- Annotated config generation: The `generate` command now produces well-documented configuration files with inline comments for every parameter by default. Previously it only did plain serde serialization without any documentation.
- `--reference` flag: Generates a complete reference config with example values without requiring a PostgreSQL connection. The root `pg_doorman.toml` and `pg_doorman.yaml` are now auto-generated from this flag, ensuring they always stay in sync with the codebase.
- `--format` (`-f`) flag: Explicitly choose the output format (`yaml` or `toml`). The default output format changed from TOML to YAML. When `--output` is specified, the format is auto-detected from the file extension; `--format` overrides auto-detection.
- `--russian-comments` (`--ru`) flag: Generates comments in Russian for the quick start guide. All ~100+ comment strings are translated into clear, simple Russian.
- `--no-comments` flag: Disables inline comments for minimal config output (plain serde serialization, the old default behavior).
- Passthrough authentication documentation: Documents passthrough auth as the default mode: `server_username`/`server_password` are no longer needed when the pool user matches the backend PostgreSQL user. PgDoorman reuses the client's MD5 hash or SCRAM ClientKey to authenticate to the backend automatically.
Testing:
- Config field coverage guarantee: A new test parses config struct source files (`general.rs`, `pool.rs`, `user.rs`, etc.) at compile time and verifies that every `pub` field appears in annotated output. If someone adds a new config parameter but forgets to add it to `annotated.rs`, CI will fail with a clear message listing the missing fields.
- BDD tests for the generate command: End-to-end tests that generate TOML and YAML configs, start pg_doorman with them, and verify client connectivity.
Bug Fixes:
- Fixed protocol desynchronization on prepared statement cache eviction in async mode: When asyncpg/SQLAlchemy uses `Flush` (instead of `Sync`) for pipelined `Parse`+`Describe` batches and the prepared statement LRU cache is full, eviction sends `Close`+`Sync` to the server. In async mode, `recv()` was exiting immediately when `expected_responses == 0`, leaving `CloseComplete` and `ReadyForQuery` unread in the TCP buffer. The next `recv()` call would then read these stale messages instead of the expected response, causing protocol desynchronization. Fixed by temporarily disabling async mode during eviction so that `recv()` waits for `ReadyForQuery` as the natural loop terminator.
- Fixed generated config startup failure: `syslog_prog_name` and `daemon_pid_file` are now commented out by default in generated configs. Previously they were uncommented, causing pg_doorman to fail when started in foreground mode or when syslog was unavailable.
- Fixed Go test goroutine leak: `TestLibPQPrepared` now uses `sync.WaitGroup` to wait for all goroutines before test exit, fixing sporadic panics caused by logging after test completion.
- Fixed protocol violation on flush timeout (the client now receives an ErrorResponse): When the 5-second flush timeout fires (a server TCP write blocks because the backend is overloaded or unreachable), the `FlushTimeout` error was propagating via `?` through `handle_sync_flush` → transaction loop → `handle()` without sending any PostgreSQL protocol message to the client. The TCP connection was simply dropped, causing drivers like Npgsql to report "protocol violation" due to unexpected EOF. Now pg_doorman sends a proper `ErrorResponse` with SQLSTATE `58006` and a message containing "pooler is shut down now" before closing the connection, allowing client drivers to detect the error and reconnect gracefully.
3.2.3 Feb 10, 2026
Improvements:
- Jitter for `server_lifetime` (±20%): Connection lifetimes now have a random ±20% jitter applied to prevent mass disconnections from PostgreSQL. When pg_doorman is under heavy load, it creates many connections simultaneously, which previously caused them all to expire at the same time, creating spikes of connection closures. Now each connection gets an individual lifetime calculated as `base_lifetime ± random(20%)`. For example, with `server_lifetime: 300000` (5 minutes), actual lifetimes range from 240s to 360s, spreading connection closures evenly over time.
3.2.2 Feb 9, 2026
New Features:
- Configuration test mode (`-t` / `--test-config`): Added an nginx-style configuration validation flag. Running `pg_doorman -t` or `pg_doorman --test-config` will parse and validate the configuration file, report success or errors, and exit without starting the server. Useful for CI/CD pipelines and pre-deployment configuration checks.
- Configuration validation before binary upgrade: When receiving SIGINT for graceful shutdown/binary upgrade, the server now validates the new binary's configuration using the `-t` flag before proceeding. If the configuration test fails, the shutdown is cancelled and critical error messages are logged to alert the operator. This prevents accidental downtime from deploying a binary with an invalid configuration.
- New `retain_connections_max` configuration parameter: Controls the maximum number of idle connections to close per retain cycle. When set to `0`, all idle connections that exceed `idle_timeout` or `server_lifetime` are closed immediately. The default is `3`, providing controlled cleanup while preventing connection buildup. Previously, only 1 connection was closed per cycle, which could lead to slow connection cleanup when many connections became idle simultaneously. Connection closures are now logged for better observability.
- Oldest-first connection closure: When `retain_connections_max > 0`, connections are now closed in order of age (oldest first) rather than in queue order. This ensures that the oldest connections are always prioritized for closure, providing more predictable connection rotation behavior.
- New `server_idle_check_timeout` configuration parameter: Time after which an idle server connection should be checked before being given to a client (default: 30s). This helps detect dead connections caused by a PostgreSQL restart, network issues, or server-side idle timeouts. When a connection has been idle longer than this timeout, pg_doorman sends a minimal query (`;`) to verify the connection is alive before returning it to the client. Set to `0` to disable.
- New `tcp_user_timeout` configuration parameter: Sets the `TCP_USER_TIMEOUT` socket option for client connections (in seconds). This helps detect dead client connections faster than keepalive probes when the connection is actively sending data but the remote end has become unreachable. Prevents 15-16 minute delays caused by TCP retransmission timeout. Only supported on Linux. The default is `60` seconds. Set to `0` to disable.
- Removed `wait_rollback` mechanism: The pooler no longer attempts to automatically wait for ROLLBACK from clients when a transaction enters an aborted state. This complex mechanism was causing protocol desynchronization issues with async clients and the extended query protocol. Server connections in aborted transactions are now simply returned to the pool and cleaned up normally via ROLLBACK during checkin.
- Removed savepoint tracking: Removed the `use_savepoint` flag and related logic that tracked SAVEPOINT usage. The pooler now treats savepoints as regular PostgreSQL commands without special handling.
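The new connection-hygiene parameters from this release might be combined like this. Placement under `[general]` is an assumption here; the key names and defaults come from the notes above:

```toml
[general]
retain_connections_max = 3        # idle connections closed per retain cycle (0 = close all eligible at once)
server_idle_check_timeout = "30s" # verify idle connections with ";" before reuse (0 = disable)
tcp_user_timeout = 60             # seconds; Linux-only TCP_USER_TIMEOUT (0 = disable)
```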
Bug Fixes:
- Fixed protocol desynchronization in async mode with simple prepared statements: When `prepared_statements` was disabled but clients used the extended query protocol (Parse, Bind, Describe, Execute, Flush), the pooler wasn't tracking batch operations, causing `expected_responses` to be calculated as 0. This led to the pooler exiting the response loop immediately without waiting for server responses (ParseComplete, BindComplete, etc.). Now batch operations are tracked regardless of the `prepared_statements` setting.
Performance:
- Removed timeout-based waiting in async protocol: The pooler now tracks expected responses based on batch operations (Parse, Bind, Execute, etc.) and exits immediately when all responses are received. This eliminates unnecessary latency in pipeline/async workloads.
3.1.8 Jan 31, 2026
Bug Fixes:
- Fixed ParseComplete desynchronization in pipeline on errors: Fixed a protocol desynchronization issue (especially noticeable with the .NET Npgsql driver) where synthetic `ParseComplete` messages were not being inserted if an error occurred during a pipelined batch. When the pooler caches a prepared statement and skips sending `Parse` to the server, it must still provide a `ParseComplete` to the client. If an error occurs before subsequent commands are processed, the server skips them, and the pooler now ensures all missing synthetic `ParseComplete` messages are inserted into the response stream upon receiving an `ErrorResponse` or `ReadyForQuery`.
- Fixed incorrect `use_savepoint` state persistence: Fixed a bug where the `use_savepoint` flag (which disables automatic rollback on connection return if a savepoint was used) was not reset after a transaction ended.
3.1.7 Jan 28, 2026
Memory Optimization:
- DEALLOCATE now clears the client prepared statements cache: When a client sends `DEALLOCATE <name>` or `DEALLOCATE ALL` via the simple query protocol, the pooler now properly clears the corresponding entries from the client's internal prepared statements cache. Previously, synthetic OK responses were sent but the client cache was not cleared, causing memory to grow indefinitely for long-running connections using many unique prepared statements. This fix allows memory to be reclaimed when clients properly deallocate their statements.
- New `client_prepared_statements_cache_size` configuration parameter: Added protection against malicious or misbehaving clients that don't call `DEALLOCATE` and could exhaust server memory by creating unlimited prepared statements. When the per-client cache limit is reached, the oldest entry is evicted automatically. Set to `0` for unlimited (the default, which relies on the client calling `DEALLOCATE`). Example: `client_prepared_statements_cache_size: 1024` limits each client to 1024 cached prepared statements.
3.1.6 Jan 27, 2026
Bug Fixes:
- Fixed incorrect timing statistics (xact_time, wait_time, percentiles): The statistics module was using `recent()` (the cached clock) without proper clock cache updates, causing transaction time, wait time, and their percentiles to show extremely large incorrect values (e.g., 100+ seconds instead of actual milliseconds). The root cause was that the `quanta::Upkeep` handle was not being stored, causing the upkeep thread to stop immediately after starting. Now the handle is properly retained for the lifetime of the server, ensuring `Clock::recent()` returns accurate cached time values.
- Fixed query time accumulation bug in the transaction loop: Query times were incorrectly accumulated when multiple queries were executed within a single transaction. The `query_start_at` timestamp was only set once at the beginning of the transaction, causing each subsequent query's elapsed time to include all previous queries' durations (e.g., 10 queries of 100ms each would report the last query as ~1 second instead of 100ms). Now `query_start_at` is updated for each new message in the transaction loop, ensuring accurate per-query timing.
New Features:
- New `clock_resolution_statistics` configuration parameter: Added the `general.clock_resolution_statistics` parameter (default: `0.1ms` = 100 microseconds) that controls how often the internal clock cache is updated. Lower values provide more accurate timing measurements for query/transaction percentiles, while higher values reduce CPU overhead. This parameter affects the accuracy of all timing statistics reported in the admin console and Prometheus metrics.
- Sub-millisecond precision for Duration values: Duration configuration parameters now support sub-millisecond precision:
  - New `us` suffix for microseconds (e.g., `"100us"` = 100 microseconds)
  - Decimal milliseconds support (e.g., `"0.1ms"` = 100 microseconds)
  - Internal representation changed from milliseconds to microseconds for higher precision
  - Full backward compatibility maintained: plain numbers are still interpreted as milliseconds
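A small TOML sketch of the precision options, using the values from the notes above:

```toml
[general]
clock_resolution_statistics = "0.1ms"  # default: 100 microseconds
# Equivalently, using the new microseconds suffix:
# clock_resolution_statistics = "100us"
```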
3.1.5 Jan 25, 2026
Bug Fixes:
- Fixed PROTOCOL VIOLATION with batch PrepareAsync
- Rewrote the `ParseComplete` insertion algorithm
Performance:
- Deferred connection acquisition for standalone BEGIN: When a client sends a standalone `BEGIN;` or `begin;` query (simple query protocol), the pooler now defers acquiring a server connection until the next message arrives. Since `BEGIN` itself doesn't perform any actual database operations, this optimization reduces connection pool contention when clients are slow to send their next query after starting a transaction.
  - Micro-optimized detection: first checks the message size (12 bytes), then the content using case-insensitive comparison
  - If the client sends Terminate (`X`) after `BEGIN`, no server connection is acquired at all
  - The deferred `BEGIN` is automatically sent to the server before the actual query
3.1.0 Jan 18, 2026
New Features:
- YAML configuration support: Added support for YAML configuration files (`.yaml`, `.yml`) as the primary and recommended format. The format is automatically detected based on the file extension. TOML remains fully supported for backward compatibility.
  - The `generate` command now outputs YAML or TOML based on the output file extension.
  - Include files can mix YAML and TOML formats.
  - New array syntax for users in YAML: `users: [{ username: "user1", ... }]`
- TOML backward compatibility: Full backward compatibility with the legacy TOML format `[pools.*.users.0]` is maintained. Both the legacy map format and the new array format `[[pools.*.users]]` are supported.
- Username uniqueness validation: Added validation to reject duplicate usernames within a pool, ensuring configuration correctness.
- Human-readable configuration values: Duration and byte size parameters now support human-readable formats while maintaining backward compatibility with numeric values:
  - Duration: `"3s"`, `"5m"`, `"1h"`, `"1d"` (or milliseconds: `3000`)
  - Byte size: `"1MB"`, `"256M"`, `"1GB"` (or bytes: `1048576`)
  - Example: `connect_timeout: "3s"` instead of `connect_timeout: 3000`
- Foreground mode binary upgrade: Added support for binary upgrade in foreground mode by passing the listener socket to the new process via
--inherit-fdargument. This enables zero-downtime upgrades without requiring daemon mode. - Optional tokio runtime parameters: The following tokio runtime parameters are now optional and default to
None(using tokio's built-in defaults):tokio_global_queue_interval,tokio_event_interval,worker_stack_size, and the newmax_blocking_threads. Modern tokio versions handle these parameters well by default, so explicit configuration is no longer required in most cases. - Improved graceful shutdown behavior:
  - During graceful shutdown, only clients with active transactions are now counted (instead of all connected clients), allowing faster shutdown when clients are idle.
  - After a client completes their transaction during shutdown, they receive a proper PostgreSQL protocol error (`58006 - pooler is shut down now`) instead of a connection reset.
  - Server connections are immediately released (marked as bad) after transaction completion during shutdown to conserve PostgreSQL connections.
  - All idle connections are immediately drained from pools when graceful shutdown starts, releasing PostgreSQL connections faster.
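Taken together, the YAML features above might look like the following sketch (pool and user names are illustrative; the keys mirror the TOML examples used elsewhere in this document):

```yaml
general:
  host: "0.0.0.0"
  port: 6432
  connect_timeout: "3s"      # human-readable duration (equivalent to 3000 ms)
  max_memory_usage: "256M"   # human-readable byte size

pools:
  example_db:
    server_host: "127.0.0.1"
    server_port: 5432
    # New array syntax for users (replaces the TOML map form [pools.*.users.0]):
    users: [{ username: "example_user_1", password: "secret", pool_size: 40 }]
```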
Performance:
- Statistics module optimization: Major refactoring of the `src/stats` module for improved performance:
  - Replaced `VecDeque` with HDR histograms (`hdrhistogram` crate) for percentile calculations — O(1) percentile queries instead of O(n log n) sorting, ~95% memory reduction for latency tracking.
  - Histograms are now reset after each stats period (15 seconds) to provide accurate rolling-window percentiles.
3.0.5 Jan 16, 2026
Bug Fixes:
- Fixed panic (`capacity overflow`) in startup message handling when receiving malformed messages with an invalid length (less than 8 bytes or exceeding 10MB). Such connections are now gracefully rejected with a `ClientBadStartup` error.
Testing:
- Integration fuzz testing framework: Added comprehensive BDD-based fuzz tests (`@fuzz` tag) that verify pg_doorman's resilience to malformed PostgreSQL protocol messages.
- All fuzz tests connect and authenticate first, then send malformed data to test post-authentication resilience.
CI/CD:
- Added dedicated fuzz test job in GitHub Actions workflow (without retries, as fuzz tests should not be flaky).
3.0.4 Jan 16, 2026
New Features:
- Enhanced DEBUG logging for PostgreSQL protocol messages: Added grouped debug logging that displays message types in a compact format (e.g., `[P(stmt1),B,D,E,S]` or `[3xD,C,Z]`). Messages are buffered and flushed every 100ms or 100 messages to reduce log noise.
- Protocol violation detection: Added real-time protocol state tracking that detects and warns about protocol violations (e.g., receiving ParseComplete when no Parse was pending). This helps diagnose client-server synchronization issues.
Bug Fixes:
- Fixed potential protocol violation when client disconnects during batch operations with cached prepared statements: disabled fast_release optimization when there are pending prepared statement operations.
- Fixed ParseComplete insertion for Describe flow: now correctly inserts one ParseComplete before each ParameterDescription ('t') or NoData ('n') message instead of inserting all at once.
3.0.3 Jan 15, 2026
Bug Fixes:
- Improved handling of the Describe flow for cached prepared statements: added a separate counter (`pending_parse_complete_for_describe`) to correctly insert ParseComplete messages before ParameterDescription or NoData responses when Parse was skipped due to caching.
Testing:
- Added comprehensive .NET client tests for the Describe flow with cached prepared statements (`describe_flow_cached.cs`).
- Added aggressive mixed tests combining batch operations, prepared statements, and the extended protocol (`aggressive_mixed.cs`).
3.0.2 Jan 14, 2026
Bug Fixes:
- Fixed protocol mismatch for .NET clients (Npgsql) using named prepared statements with `Prepare()`: ParseComplete messages are now correctly inserted before ParameterDescription and NoData messages in the Describe flow, not just before BindComplete.
3.0.1 Jan 14, 2026
Bug Fixes:
- Fixed protocol mismatch for .NET clients (Npgsql): prevented insertion of ParseComplete messages between DataRow messages when server has more data available.
Testing:
- Extended Node.js client test coverage with additional scenarios for prepared statements, error handling, transactions, and edge cases.
3.0.0 Jan 12, 2026
Major Release — Complete Architecture Refactoring
This release represents a significant milestone with a complete codebase refactoring that dramatically improves async protocol support, making PgDoorman the most efficient connection pooler for asynchronous PostgreSQL workloads.
New Features:
- patroni_proxy — A new high-performance TCP proxy for Patroni-managed PostgreSQL clusters:
- Zero-downtime connection management — existing connections are preserved during cluster topology changes
- Hot upstream updates — automatic discovery of cluster members via Patroni REST API without connection drops
- Role-based routing — route connections to leader, sync replicas, or async replicas based on configuration
- Replication lag awareness with configurable `max_lag_in_bytes` per port
- Least-connections load balancing strategy
Improvements:
- Complete codebase refactoring — modular architecture with better separation of concerns:
- Client handling split into dedicated modules (core, entrypoint, protocol, startup, transaction)
- Configuration system reorganized into focused modules (general, pool, user, tls, prometheus, talos)
- Admin, auth, and prometheus subsystems extracted into separate modules
- Improved code maintainability and testability
- Enhanced async protocol support — significantly improved handling of asynchronous PostgreSQL protocol, providing better performance than other connection poolers for async workloads
- Extended protocol improvements — better client buffering and message handling for extended query protocol
- xxhash3 for prepared statement hashing — faster hash computation for prepared statement cache
- Comprehensive BDD testing framework — multi-language integration tests (Go, Rust, Python, Node.js, .NET) with Docker-based reproducible environment
2.5.0 Nov 18, 2025
Improvements:
- Reworked the statistics collection system, yielding up to 20% performance gain on fast queries.
- Improved detection of `SAVEPOINT` usage, allowing the auto-rollback feature to be applied in more situations.
Bug Fixes / Behavior:
- Less aggressive behavior on write errors when sending a response to the client: the server connection is no longer immediately marked as "bad" and evicted from the pool. We now read the remaining server response and clean up its state, returning the connection to the pool in a clean state. This improves performance during client reconnections.
2.4.3 Nov 15, 2025
Bug Fixes:
- Fixed handling of nested transactions via `SAVEPOINT`: auto-rollback now correctly rolls back to the savepoint instead of breaking the outer transaction. This prevents clients from getting stuck in an inconsistent transactional state.
2.4.2 Nov 13, 2025
Improvements:
- `pg_hba` rules now apply to the admin console as well; the `trust` method can be used for admin connections when a matching rule is present (use with caution; restrict by address/TLS).
Bug Fixes:
- Fixed `pg_hba` evaluation: `local` records were mistakenly considered; PgDoorman only handles TCP connections, so `local` entries are now correctly ignored.
2.4.1 Nov 12, 2025
Improvements:
- Performance optimizations in request handling and message processing paths to reduce latency and CPU usage.
- `pg_hba` rules now apply to the admin console as well; the `trust` method can be used for admin connections when a matching rule is present (use with caution; restrict by address/TLS).
Bug Fixes:
- Corrected logic where `COMMIT` could be mishandled similarly to `ROLLBACK` in certain error states; transactional state handling is now aligned with PostgreSQL semantics.
2.4.0 Nov 10, 2025
Features:
- Added `pg_hba` support to control client access in PostgreSQL format. The new `general.pg_hba` setting supports inline content or a file path.
- Clients that enter the `aborted in transaction` state are detached from their server backend; the proxy waits for the client to send `ROLLBACK`.
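For illustration, inline `pg_hba` rules might be configured as follows (a sketch based on the `pg_hba.content` form used in this project's test fixtures; addresses and auth methods are placeholders):

```toml
[general]
host = "127.0.0.1"
port = 6432
admin_username = "admin"
admin_password = "admin"
# Inline rules in PostgreSQL pg_hba format. Only TCP (`host`) records apply;
# `local` records are ignored because PgDoorman handles TCP connections only.
pg_hba.content = """
host all all 127.0.0.1/32 trust
host all all 10.0.0.0/8   md5
"""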
Improvements:
- Refined admin and metrics counters: separated `cancel` connections and corrected the calculation of `error` connections in admin output and Prometheus metrics descriptions.
- Added configuration validation to prevent simultaneous use of the legacy `general.hba` CIDR list with the new `general.pg_hba` rules.
- Improved validation and error messages for Talos token authentication.
2.2.2 Aug 17, 2025
Features:
- Added new functionality to the `generate` command.
Bug Fixes:
- Fixed `DEALLOCATE` issues affecting pgx v5 compatibility.
2.2.1 Aug 6, 2025
Features:
- Improved Prometheus exporter functionality.
2.2.0 Aug 5, 2025
Features:
- Added Prometheus exporter functionality that provides metrics about connections, memory usage, pools, queries, and transactions
2.1.2 Aug 4, 2025
Features:
- Added Docker image `ghcr.io/ozontech/pg_doorman`.
2.1.0 Aug 1, 2025
Features:
- The new `generate` command connects to your PostgreSQL server, automatically detects all databases and users, and creates a complete configuration file with appropriate settings. This is especially useful for quickly setting up PgDoorman in new environments or when you have many databases and users to configure.
2.0.1 July 24, 2025
Bug Fixes:
- Fixed `max_memory_usage` counter leak when clients disconnect improperly.
2.0.0 July 22, 2025
Features:
- Added the `tls_mode` configuration option to enhance security with flexible TLS connection management and client certificate validation capabilities.
1.9.0 July 20, 2025
Features:
- Added PAM authentication support.
- Added `talos` JWT authentication support.
Improvements:
- Implemented streaming for COPY protocol with large columns to prevent memory exhaustion.
- Updated Rust and Tokio dependencies.
1.8.3 Jun 11, 2025
Bug Fixes:
- Fixed critical bug where the client's buffer wasn't cleared when no free connections were available in the server pool (`query_wait_timeout`), leading to incorrect response errors. #38
- Fixed Npgsql-related issue. Npgsql#6115
1.8.2 May 24, 2025
Features:
- Added `application_name` parameter in pool. #30
- Added support for `DISCARD ALL` and `DEALLOCATE ALL` client queries.
Improvements:
- Implemented link-time optimization. #29
Bug Fixes:
- Fixed panics in admin console.
- Fixed connection leakage on improperly handled errors in client's copy mode.
1.8.1 April 12, 2025
Bug Fixes:
- Fixed config value of `prepared_statements`. #21
- Fixed handling of declared cursor closure. #23
- Fixed proxy server parameters. #25
1.8.0 Mar 20, 2025
Bug Fixes:
- Fixed dependencies issue. #15
Improvements:
- Added release vendor-licenses.txt file. Related thread
1.7.9 Mar 16, 2025
Improvements:
- Added release vendor.tar.gz for offline build. Related thread
Bug Fixes:
- Fixed issues with pqCancel messages over TLS protocol. Drivers should send pqCancel messages exclusively via TLS if the primary connection was established using TLS. Npgsql follows this rule, while PGX currently does not. Both behaviors are now supported.
1.7.8 Mar 8, 2025
Bug Fixes:
- Fixed message ordering issue when using batch processing with the extended protocol.
- Improved error message detail in logs for server-side login attempt failures.
1.7.7 Mar 8, 2025
Features:
- Enhanced `show clients` command with new fields: `state` (waiting/idle/active) and `wait` (read/write/idle).
- Enhanced `show servers` command with new fields: `state` (login/idle/active), `wait` (read/write/idle), and `server_process_pid`.
- Added a 15-second proxy timeout for streaming large `message_size_to_be_stream` responses.
Bug Fixes:
- Fixed `max_memory_usage` counter leak when clients disconnect improperly.
Contributing to PgDoorman
Thank you for your interest in contributing to PgDoorman! This guide will help you set up your development environment and understand the contribution process.
Getting Started
Prerequisites
For running integration tests, you only need Docker and `make`.
Nix installation is NOT required — test environment reproducibility is ensured by Docker containers built with Nix.
For local development (optional), you will also need a Rust toolchain (`cargo`).
Setting Up Your Development Environment
- Fork the repository on GitHub
- Clone your fork: `git clone https://github.com/YOUR-USERNAME/pg_doorman.git && cd pg_doorman`
- Add the upstream repository: `git remote add upstream https://github.com/ozontech/pg_doorman.git`
Local Development
1. Build the project: `cargo build`
2. Build for performance testing: `cargo build --release`
3. Configure PgDoorman:
   - Copy the example configuration: `cp pg_doorman.toml.example pg_doorman.toml`
   - Adjust the configuration in `pg_doorman.toml` to match your setup
4. Run PgDoorman: `cargo run --release`
5. Run unit tests: `cargo test`
Integration Testing
PgDoorman uses BDD (Behavior-Driven Development) tests with a Docker-based test environment. Reproducibility is guaranteed — all tests run inside Docker containers with identical environments.
Test Environment
The test Docker image (built with Nix) includes:
- PostgreSQL 16
- Go 1.24
- Python 3 with asyncpg, psycopg2, aiopg, pytest
- Node.js 22
- .NET SDK 8
- Rust 1.87.0
Running Tests
From the project root directory:
```shell
# Pull the test image from the registry
make pull

# Or build locally (takes 10-15 minutes on the first run)
make local-build

# Run all BDD tests
make test-bdd

# Run tests with a specific tag
make test-bdd TAGS=@copy-protocol
make test-bdd TAGS=@cancel
make test-bdd TAGS=@admin-commands

# Open an interactive shell in the test container
make shell
```
Debug Mode
Enable debug output with the `DEBUG=1` environment variable:

```shell
DEBUG=1 make test-bdd TAGS=@copy-protocol
```
When `DEBUG=1` is set:
- Tracing is enabled with DEBUG level
- Thread IDs are shown in logs
- Line numbers are included
- PostgreSQL protocol details are visible
- Detailed step-by-step execution is logged
This is useful when:
- Debugging failing tests
- Understanding protocol-level communication
- Investigating timing issues
- Developing new test scenarios
Available Test Tags
| Tag | Description |
|---|---|
| `@go` | Go client tests |
| `@python` | Python client tests |
| `@nodejs` | Node.js client tests |
| `@dotnet` | .NET client tests |
| `@rust` | Rust protocol-level tests |
| `@copy-protocol` | COPY protocol tests |
| `@cancel` | Query cancellation tests |
| `@admin-commands` | Admin console commands |
| `@admin-leak` | Admin connection leak tests |
| `@buffer-cleanup` | Buffer cleanup tests |
| `@rollback` | Rollback functionality tests |
| `@hba` | HBA authentication tests |
| `@prometheus` | Prometheus metrics tests |
Writing New Tests
Tests are organized as BDD feature files in `tests/bdd/features/`. Each feature file describes test scenarios using Gherkin syntax.
Shell Tests (Recommended for Client Libraries)
Shell tests run external test commands (Go, Python, Node.js, etc.) and verify their output. This is the simplest way to test client library compatibility.
Example (`tests/bdd/features/my-feature.feature`):

```gherkin
@go @mytag
Feature: My feature description

  Background:
    Given PostgreSQL started with pg_hba.conf:
      """
      local all all trust
      host all all 127.0.0.1/32 trust
      """
    And fixtures from "tests/fixture.sql" applied
    And pg_doorman started with config:
      """
      [general]
      host = "127.0.0.1"
      port = ${DOORMAN_PORT}
      admin_username = "admin"
      admin_password = "admin"

      [pools.example_db]
      server_host = "127.0.0.1"
      server_port = ${PG_PORT}

      [pools.example_db.users.0]
      username = "example_user_1"
      password = "md58a67a0c805a5ee0384ea28e0dea557b6"
      pool_size = 40
      """

  Scenario: Test my Go client
    When I run shell command:
      """
      export DATABASE_URL="postgresql://example_user_1:test@127.0.0.1:${DOORMAN_PORT}/example_db?sslmode=disable"
      cd tests/go && go test -v -run TestMyTest ./mypackage
      """
    Then the command should succeed
    And the command output should contain "PASS"
```
Test implementation (in your preferred language):
- Go: `tests/go/mypackage/my_test.go`
- Python: `tests/python/test_my.py`
- Node.js: `tests/nodejs/my.test.js`
- .NET: `tests/dotnet/MyTest.cs`
Rust Protocol-Level Tests
For testing PostgreSQL protocol behavior at the wire level, use Rust-based tests. These tests directly send and receive PostgreSQL protocol messages, allowing precise control and comparison.
Example (`tests/bdd/features/protocol-test.feature`):

```gherkin
@rust @my-protocol-test
Feature: Protocol behavior test
  Testing that pg_doorman handles protocol messages identically to PostgreSQL

  Background:
    Given PostgreSQL started with pg_hba.conf:
      """
      local all all trust
      host all all 127.0.0.1/32 trust
      """
    And fixtures from "tests/fixture.sql" applied
    And pg_doorman started with config:
      """
      [general]
      host = "127.0.0.1"
      port = ${DOORMAN_PORT}
      admin_username = "admin"
      admin_password = "admin"
      pg_hba.content = "host all all 127.0.0.1/32 trust"

      [pools.example_db]
      server_host = "127.0.0.1"
      server_port = ${PG_PORT}

      [pools.example_db.users.0]
      username = "example_user_1"
      password = ""
      pool_size = 10
      """

  @my-scenario
  Scenario: Query gives identical results from PostgreSQL and pg_doorman
    When we login to postgres and pg_doorman as "example_user_1" with password "" and database "example_db"
    And we send SimpleQuery "SELECT 1" to both
    Then we should receive identical messages from both

  @session-test
  Scenario: Session management test
    When we create session "one" to pg_doorman as "example_user_1" with password "" and database "example_db"
    And we send SimpleQuery "BEGIN" to session "one"
    And we send SimpleQuery "SELECT pg_backend_pid()" to session "one" and store backend_pid
    # ... more steps
```
Available Rust test steps:
Protocol comparison (sends to both PostgreSQL and pg_doorman):
- `we login to postgres and pg_doorman as "user" with password "pass" and database "db"`
- `we send SimpleQuery "SQL" to both`
- `we send CopyFromStdin "COPY ..." with data "..." to both`
- `we should receive identical messages from both`
Session management (for complex scenarios):
- `we create session "name" to pg_doorman as "user" with password "pass" and database "db"`
- `we send SimpleQuery "SQL" to session "name"`
- `we send SimpleQuery "SQL" to session "name" and store backend_pid`
- `we abort TCP connection for session "name"`
- `we sleep 100ms`
Cancel request testing:
- `we create session "name" ... and store backend key`
- `we send SimpleQuery "SQL" to session "name" without waiting for response`
- `we send cancel request for session "name"`
- `session "name" should receive cancel error containing "text"`
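The cancel-request steps can be combined into a scenario like the following sketch (the scenario name, SQL, and expected error text are illustrative; the exact wording of the store-backend-key step may differ from this combined form):

```gherkin
@cancel-example
Scenario: Cancel request aborts a long-running query
  When we create session "slow" to pg_doorman as "example_user_1" with password "" and database "example_db" and store backend key
  And we send SimpleQuery "SELECT pg_sleep(30)" to session "slow" without waiting for response
  And we send cancel request for session "slow"
  Then session "slow" should receive cancel error containing "canceling statement"
```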
Adding Dependencies
If you need additional packages in the test environment, modify `tests/nix/flake.nix`:
- Add Python packages to `pythonEnv`
- Add system packages to `runtimePackages`
After modifying `flake.nix`, rebuild the image with `make local-build`.
Contribution Guidelines
Code Style
- Follow the Rust style guidelines
- Use meaningful variable and function names
- Add comments for complex logic
- Write tests for new functionality
Pull Request Process
- Create a new branch for your feature or bugfix
- Make your changes and commit them with clear, descriptive messages
- Write or update tests as necessary
- Update documentation to reflect any changes
- Submit a pull request to the main repository
- Address any feedback from code reviews
Reporting Issues
If you find a bug or have a feature request, please create an issue on the GitHub repository with:
- A clear, descriptive title
- A detailed description of the issue or feature
- Steps to reproduce (for bugs)
- Expected and actual behavior (for bugs)
Getting Help
If you need help with your contribution, you can:
- Ask questions in the GitHub issues
- Reach out to the maintainers
Thank you for contributing to PgDoorman!