Postgres connection pooling: pgBouncer, RDS Proxy, and the math you skipped
Why max_connections is the wrong lever, how pgBouncer pool modes really differ, and the rule of thumb that tells you the right pool size.
Postgres connections are full backend processes: 8-12 MB of memory each, plus a context switch every time a query runs. Apps that open 1000 connections and actively use 50 are wasting most of them. Then someone raises max_connections and the next OOM is your fault. Connection pooling is the answer; the math behind “the right size” is the part most teams skip.
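A quick way to see how much of your connection count is idle, using nothing beyond the stock pg_stat_activity view (the backend_type column needs Postgres 10+):

```sql
-- Break open connections down by state: 'active' rows are doing work,
-- 'idle' rows are each holding ~10 MB of backend memory and doing nothing.
SELECT state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY count(*) DESC;
```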
Pool size: the working formula
The classic HikariCP recipe applies to Postgres almost unchanged:
```
pool_size = (cores * 2) + effective_spindle_count

# 8-core, NVMe (effective_spindle_count = 1):
# pool_size = 17

# 16-core, NVMe RAID-10 (effective_spindle_count = 4):
# pool_size = 36
```
At higher pool sizes you’re paying coordination overhead with diminishing returns. Real workloads rarely need more than 50; if you do, you usually need a bigger DB instead.
PgBouncer pool modes — what each really means
| Mode | When connection returns to pool | What breaks |
|---|---|---|
| session | When client disconnects | Nothing breaks. Largest pool needed. |
| transaction | End of transaction (COMMIT/ROLLBACK) | Prepared statements (without the protocol-level fix), session-level SET (SET LOCAL inside a transaction still works), session-level advisory locks |
| statement | End of every query | Almost everything. Pre-2.0 PgBouncer only — avoid. |
Transaction mode is the right default for most apps. The prepared-statements caveat is real but solved by PgBouncer 1.21+ with max_prepared_statements = 100 in the config.
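A minimal pgbouncer.ini sketch of that setup; the option names are real, while the host, database, and sizes are illustrative:

```ini
[databases]
; illustrative target: point this at your primary
appdb = host=10.0.0.12 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 17          ; from (cores * 2) + spindles above
max_client_conn = 1000          ; clients may far exceed server connections
max_prepared_statements = 100   ; protocol-level prepared statements (1.21+)
```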
RDS Proxy and PgCat: when to use which
- RDS Proxy: managed by AWS, IAM-aware, supports failover with sticky sessions. Pricing is per vCPU-hour of the underlying instance (on the order of $15/month per vCPU), and the latency hit is 1-3 ms.
- PgCat (Rust): a high-throughput drop-in replacement for PgBouncer with sharding and load balancing. Worth a look if you’re running PgBouncer at the edge of its single-thread limit (though see the sketch after this list before switching).
- PgBouncer: still the safe default for most teams. Battle-tested at petabyte scale.
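On that single-thread limit: PgBouncer can use multiple cores by running several identical processes on one port. The so_reuseport option is a real PgBouncer setting; the rest of this snippet is an illustrative sketch:

```ini
; pgbouncer.ini: start N copies of PgBouncer with this config and the
; kernel will spread incoming client connections across the processes.
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
so_reuseport = 1    ; multiple processes may bind the same port
; give each process its own unix_socket_dir (or disable unix sockets)
```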
What to monitor on the pooler
- SHOW POOLS: sustained cl_waiting > 0 means the pool is undersized.
- SHOW STATS: avg_wait_time trending up means the pool is saturated.
- Server-side: pg_stat_activity entries from the pooler's client_addr should show a stable connection count; spikes mean reconnect storms (sketch below).
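A sketch of that server-side check; the view and columns are standard, and which address belongs to your pooler is yours to fill in:

```sql
-- Connections per source address and state: the rows for the pooler's IP
-- should sit near default_pool_size and stay flat between samples.
SELECT client_addr, state, count(*) AS conns
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY client_addr, state
ORDER BY conns DESC;
```

SHOW POOLS and SHOW STATS run against PgBouncer's admin console, i.e. the special pgbouncer database on the pooler's port.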
Pitfalls
- Per-app pool isolation — give each app its own pool. One noisy app shouldn’t starve the others.
- SSL termination — terminating TLS at the pooler is fine, but make sure the PgBouncer-to-Postgres hop is also TLS (server_tls_sslmode in PgBouncer) if you have compliance requirements.
- SET search_path — works in session mode; in transaction mode the setting can land on a server connection your next query never sees, so use SET LOCAL inside a transaction instead (see the sketch after this list).
- Long-lived prepared statements — set max_prepared_statements > 0 if your app uses them; otherwise you'll see "prepared statement does not exist" errors.
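The SET LOCAL pattern referenced above, as a minimal sketch (app_schema and orders are placeholders):

```sql
-- Safe under transaction pooling: SET LOCAL is scoped to this transaction,
-- so the setting cannot leak onto a server connection another client gets.
BEGIN;
SET LOCAL search_path TO app_schema, public;
SELECT count(*) FROM orders;   -- resolved against app_schema
COMMIT;
```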
FAQ
Why not just raise max_connections? Because every connection costs 8-12 MB and scheduler overhead whether or not it is doing work; past the formula above you are buying contention, not throughput.
PgBouncer transaction mode + Hibernate / SQLAlchemy? Both work, provided prepared statements are handled: disable client-side statement caching or run PgBouncer 1.21+ with max_prepared_statements set.
Can I run PgBouncer in Kubernetes? Yes: as a per-app sidecar or a dedicated Deployment; per-app sidecars also give you the pool isolation described above.