Pool Coordinator
The Pool Coordinator caps total backend connections per database across all users in that pool, with priority eviction when the cap is reached. It is what PgBouncer's max_db_connections should have been: enforced fairly, with a reserve for short bursts, and per-user minimums to protect critical workloads.
This page explains the concept and when to use it. For tuning recipes and how to read SHOW POOL_COORDINATOR output, see Pool Pressure.
What problem it solves
Without a coordinator, every user-pool is independent. A pool_size of 40 across 5 users means up to 200 backend connections — leaving PostgreSQL to refuse the overflow via its own max_connections.
max_db_connections in PgBouncer caps the total but blocks new requests once the cap is reached. Whoever hit the cap first keeps their connections regardless of how heavily they use them, and slow workloads never yield to fast ones.
PgDoorman's Pool Coordinator caps the total and:
- Evicts idle connections from over-allocated users when another user needs to grow.
- Ranks users by p95 transaction time so slow pools donate first. Users with fast queries keep their reuse advantage; users with long transactions release first because their connections were idle longer anyway.
- Reserves a small overflow for short bursts. Configured separately from the main cap.
- Guarantees a per-user minimum that is never evicted. Critical workloads keep their footing during contention.
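A minimal sketch of how these rules could combine when a client asks for a backend. The function, its argument names, and the decision ordering are assumptions modeled on the behavior described above, not PgDoorman internals:

```python
# Hypothetical admission flow for one pool; names and ordering are
# assumptions, not PgDoorman's actual code.
def admit(total: int, cap: int, reserve: int,
          idle_donors: int, waited_past_reserve_timeout: bool) -> str:
    if total < cap:
        return "connect"            # under the cap: open a backend directly
    if idle_donors > 0:
        return "evict-and-connect"  # cap hit: reclaim an idle slot elsewhere
    if waited_past_reserve_timeout and total < cap + reserve:
        return "reserve-connect"    # short burst: dip into the reserve
    return "wait"                   # everything busy: queue the client

assert admit(79, 80, 16, idle_donors=0, waited_past_reserve_timeout=False) == "connect"
assert admit(80, 80, 16, idle_donors=3, waited_past_reserve_timeout=False) == "evict-and-connect"
assert admit(80, 80, 16, idle_donors=0, waited_past_reserve_timeout=True) == "reserve-connect"
assert admit(96, 80, 16, idle_donors=0, waited_past_reserve_timeout=True) == "wait"
```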
When to use it
Turn on the coordinator when:
- Multiple distinct workloads share the same database and you need an upper bound on backend connection count (PostgreSQL max_connections, RAM, file descriptors).
- One workload has bursty demand and you want it to absorb idle slots from others without crowding them out permanently.
- You operate near the PostgreSQL connection ceiling and need fair degradation rather than first-come-first-served.
You do not need it when:
- Each user's pool_size is small enough that the sum is comfortably below PostgreSQL's max_connections.
- Workloads are predictable and pre-sized.
- You want PgBouncer-level simplicity. max_db_connections without eviction is supported but discouraged for shared databases.
Configuration
```yaml
pools:
  shared_db:
    server_host: "127.0.0.1"
    server_port: 5432
    pool_mode: "transaction"

    # Total cap across all users in this pool.
    max_db_connections: 80

    # Reserve overflow above max_db_connections for short bursts.
    # Acquired only when no idle connection is available within reserve_pool_timeout.
    reserve_pool_size: 16
    reserve_pool_timeout: "3s"

    # Per-user safety net: connections never evicted from a user, even under pressure.
    # Sum across users should be ≤ max_db_connections.
    min_guaranteed_pool_size: 5

    # Eviction grace period: connections younger than this are not evicted.
    # Prevents thrashing when a workload briefly idles.
    min_connection_lifetime: "30s"

    users:
      - username: "fast_app"
        password: "md5..."
        pool_size: 40
      - username: "batch_job"
        password: "md5..."
        pool_size: 60
```
Effective ceiling: max_db_connections + reserve_pool_size = 96. The reserve absorbs sub-second spikes; if the spike persists, eviction kicks in.
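The arithmetic can be checked in a few lines. The values mirror the example configuration, and the validation rule is the one stated in the config comments; this is an illustration, not a PgDoorman API:

```python
# Capacity arithmetic for the example configuration above.
max_db_connections = 80
reserve_pool_size = 16
pool_sizes = {"fast_app": 40, "batch_job": 60}
min_guaranteed = {"fast_app": 5, "batch_job": 5}  # per-user floors (assumed equal here)

effective_ceiling = max_db_connections + reserve_pool_size
assert effective_ceiling == 96

# pool_size values may oversubscribe the cap (40 + 60 = 100 > 80);
# the coordinator arbitrates that overlap. The guaranteed floors must not:
assert sum(pool_sizes.values()) > max_db_connections
assert sum(min_guaranteed.values()) <= max_db_connections
```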
How it picks who donates
When a user requests a new backend and the cap is reached:
- Find candidates with idle connections. A user holding only active connections cannot donate — its work is in flight.
- Skip protected users. A user below min_guaranteed_pool_size is excluded.
- Skip recently-created connections. Connections younger than min_connection_lifetime are not evicted (avoids churn during minor idle gaps).
- Rank by surplus. Users with the most idle connections above their min_guaranteed_pool_size rank highest.
- Tiebreak by p95 transaction time. Among equally-idle users, the slow pool donates first. Their connections were probably idle because their next query is still being prepared upstream.
The chosen idle connection is closed; the requesting user receives a fresh connection from PostgreSQL.
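The selection steps above can be sketched as follows. The UserPool fields and the pick_donor function are a model of the described behavior, not PgDoorman's actual code:

```python
from dataclasses import dataclass

@dataclass
class UserPool:
    name: str
    idle: int                  # idle backend connections
    total: int                 # idle + active
    guaranteed: int            # min_guaranteed_pool_size
    oldest_idle_age_s: float   # age of the oldest idle connection
    p95_txn_ms: float          # p95 transaction time

def surplus(p: UserPool) -> int:
    # Idle connections sitting above the user's guaranteed floor.
    return min(p.idle, p.total - p.guaranteed)

def pick_donor(pools, min_lifetime_s: float = 30.0):
    candidates = [p for p in pools
                  if p.idle > 0                              # must have an idle slot
                  and p.total > p.guaranteed                 # floor is untouchable
                  and p.oldest_idle_age_s >= min_lifetime_s] # skip young connections
    if not candidates:
        return None  # nobody can donate; the requester waits (or uses the reserve)
    # Most surplus donates first; ties go to the slowest pool by p95.
    return max(candidates, key=lambda p: (surplus(p), p.p95_txn_ms))

fast = UserPool("fast_app", idle=2, total=20, guaranteed=5,
                oldest_idle_age_s=120.0, p95_txn_ms=8.0)
batch = UserPool("batch_job", idle=10, total=55, guaranteed=5,
                 oldest_idle_age_s=300.0, p95_txn_ms=900.0)
assert pick_donor([fast, batch]).name == "batch_job"
```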
Observability
SHOW POOL_COORDINATOR shows current state per database:
```
 database  | max_db_conn | current | reserve_size | reserve_used | evictions | reserve_acq | exhaustions
-----------+-------------+---------+--------------+--------------+-----------+-------------+-------------
 shared_db |          80 |      78 |           16 |            2 |       142 |          18 |           0
```
- evictions rising fast — one user is starved repeatedly. Either raise max_db_connections or set min_guaranteed_pool_size for that user.
- reserve_acq high — bursts are normal, but you might be undersized; consider raising max_db_connections instead of relying on the reserve.
- exhaustions non-zero — even the reserve was full. Clients hit query_wait_timeout waiting for a backend. Raise the cap.
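One way to encode that triage as a helper over counter rates. The function and its cutoff values are assumptions for illustration, not part of PgDoorman; tune thresholds to your own traffic:

```python
def triage(evictions_per_min: float, reserve_acq_per_min: float,
           exhaustions: int) -> list:
    """Map coordinator counter trends to the advice above (illustrative thresholds)."""
    advice = []
    if exhaustions > 0:
        advice.append("exhaustions non-zero: raise max_db_connections")
    if reserve_acq_per_min > 1.0:
        advice.append("frequent reserve use: consider raising max_db_connections")
    if evictions_per_min > 10.0:
        advice.append("eviction churn: raise the cap or set min_guaranteed_pool_size")
    return advice

assert triage(0.5, 0.2, 0) == []                      # quiet pool: nothing to do
assert triage(0.0, 0.0, 3) == ["exhaustions non-zero: raise max_db_connections"]
```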
Prometheus: pg_doorman_pool_coordinator{type="..."} (gauges) and pg_doorman_pool_coordinator_total{type="evictions|reserve_acquisitions|exhaustions"} (counters). See Admin commands and Prometheus reference.
Caveats
- The coordinator only operates within one pool (one database). Cross-pool / cross-database limits are not supported.
- Eviction picks idle connections; a user holding all connections in long transactions cannot donate, so other users may starve. If this is your shape, raise max_db_connections or split the workload.
- min_guaranteed_pool_size is a floor for eviction, not a min_pool_size for warm-up. The pool still has to create those connections on demand.
- Setting max_db_connections without min_guaranteed_pool_size is the PgBouncer mode — works, but starves smaller users under pressure. Always set both for shared databases.
Where to next
- Sizing recipe with worked examples: Pool Pressure → Sizing the cap.
- Tuning under load: Pool Pressure → Tuning parameters.
- Reading admin output: Admin Commands → SHOW POOL_COORDINATOR.
- Pool modes (transaction vs session): Pool Modes.