Obsfly


Postgres connection pooling: pgBouncer, RDS Proxy, and the math you skipped

Why max_connections is the wrong knob, what pgBouncer's pool modes really change, and a practical formula for pool size.

Published · 14 min read

Each Postgres connection costs 8-12 MB of memory plus a process context switch per query. An app that opens 1,000 connections and actively uses 50 is wasting most of them. Then someone raises max_connections, and the next OOM is your fault. Connection pooling is the answer; the math behind "the right size" is the part most teams skip.

On this page
  1. Pool size: the working formula
  2. PgBouncer pool modes
  3. RDS Proxy and PgCat
  4. What to monitor
  5. Pitfalls
  6. FAQ

Pool size: the working formula

The classic HikariCP recipe applies to Postgres almost unchanged:

pool_size = (cores * 2) + effective_spindle_count

# 8-core, NVMe (effective_spindle_count = 1):
# pool_size = 17

# 16-core, NVMe RAID-10 (effective_spindle_count = 4):
# pool_size = 36

At higher pool sizes you’re paying coordination overhead with diminishing returns. Real workloads rarely need more than 50; if you do, you usually need a bigger DB instead.
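The recipe above is trivial to encode as a sanity check; a minimal sketch (the function name is ours, not from any library):

```python
def pool_size(cores: int, effective_spindle_count: int = 1) -> int:
    """HikariCP-style sizing heuristic: (cores * 2) + effective spindles."""
    return cores * 2 + effective_spindle_count

print(pool_size(8))      # 8-core, NVMe -> 17
print(pool_size(16, 4))  # 16-core, NVMe RAID-10 -> 36
```

Treat the result as a starting point, not a target: measure wait times under real load before sizing up.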

PgBouncer pool modes — what each really means

| Mode | When the connection returns to the pool | What breaks |
| --- | --- | --- |
| session | When the client disconnects | Nothing. Requires the largest pool. |
| transaction | End of transaction (COMMIT/ROLLBACK) | Prepared statements (without the protocol-level fix), session-level SET (SET LOCAL inside a transaction is fine), advisory locks |
| statement | End of every query | Almost everything. Avoid. |

Transaction mode is the right default for most apps. The prepared-statements caveat is real, but PgBouncer 1.21+ handles protocol-level prepared statements once max_prepared_statements is set (e.g. max_prepared_statements = 100) in the config.
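A minimal pgbouncer.ini sketch for transaction mode with the prepared-statement fix; host, database, and pool values are illustrative, not recommendations:

```ini
[databases]
app = host=10.0.0.5 port=5432 dbname=app

[pgbouncer]
listen_port = 6432
pool_mode = transaction
default_pool_size = 20
; Requires PgBouncer 1.21+; without this, protocol-level prepared
; statements break in transaction mode.
max_prepared_statements = 100
```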

RDS Proxy and PgCat: when to use which

  • RDS Proxy: managed by AWS, IAM-aware, and preserves client sessions across failovers. Billed per vCPU-hour of the target instance (not a flat monthly fee); the latency hit is typically 1-3 ms.
  • PgCat (Rust): high-throughput drop-in for PgBouncer with sharding and load-balancing. Worth a look if you’re running PgBouncer at the edge of its single-thread limit.
  • PgBouncer: still the safe default for most teams. Battle-tested at petabyte scale.

What to monitor on the pooler

  • SHOW POOLS: sustained cl_waiting > 0 means the pool is undersized.
  • SHOW STATS: avg_wait_time trending up means the pool is saturated.
  • Server-side: pg_stat_activity filtered to your pooler's client_addr should show a stable connection count. Spikes mean reconnect storms.
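These checks can be run by hand against PgBouncer's admin console (connect to the virtual `pgbouncer` database, port assumed to be 6432) and against Postgres itself:

```sql
-- On the pooler (e.g. psql -p 6432 pgbouncer):
SHOW POOLS;   -- watch cl_waiting; sustained > 0 means the pool is undersized
SHOW STATS;   -- watch avg_wait_time (microseconds); an upward trend = saturation

-- On Postgres: connection count per client address. A stable count from
-- the pooler's IP is healthy; spikes suggest reconnect storms.
SELECT client_addr, count(*)
FROM pg_stat_activity
GROUP BY client_addr
ORDER BY count(*) DESC;
```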

Pitfalls

  • Per-app pool isolation — give each app its own pool. One noisy app shouldn’t starve the others.
  • SSL termination — terminating TLS at the pooler is fine, but make sure the PgBouncer-to-Postgres connection is also TLS if you have compliance requirements.
  • SET search_path — works in session mode, breaks in transaction mode unless wrapped in a transaction.
  • Long-lived prepared statements — set max_prepared_statements > 0 if your app uses them; otherwise expect “prepared statement does not exist” errors.
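The search_path pitfall in concrete form; `tenant_42` and `widgets` are hypothetical names. In transaction mode, only the transaction-scoped variant is safe:

```sql
-- Unsafe in transaction mode: a session-level SET may land on one server
-- connection while your next query runs on a different one.
SET search_path TO tenant_42, public;

-- Safe: SET LOCAL is scoped to the transaction, and PgBouncer keeps the
-- transaction pinned to a single server connection until COMMIT/ROLLBACK.
BEGIN;
SET LOCAL search_path TO tenant_42, public;
SELECT * FROM widgets;  -- resolved against tenant_42
COMMIT;
```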

FAQ

Why not just raise max_connections?
Each idle Postgres connection costs ~10 MB. 2,000 connections = ~20 GB before a single query runs. Pooling reuses connections; raising the cap just defers the OOM.
PgBouncer transaction mode + Hibernate / SQLAlchemy?
Both have sane defaults that avoid features transaction mode doesn't support. SQLAlchemy: set isolation_level explicitly. Hibernate: rely on the server-side pool rather than a client-side one.
Can I run PgBouncer in Kubernetes?
Yes; the sidecar pattern works. Watch for connection storms on rolling deploys; a pre-stop hook should drain the pool gracefully.
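A hedged sketch of that pre-stop drain for the PgBouncer sidecar; the port, user, and sleep duration are assumptions. PAUSE is issued through PgBouncer's admin console:

```yaml
# Fragment of the sidecar container spec (illustrative values).
lifecycle:
  preStop:
    exec:
      command:
        - /bin/sh
        - -c
        # PAUSE waits for in-flight transactions to finish, then stops
        # handing out server connections; the sleep gives clients time
        # to reconnect elsewhere before the pod is killed.
        - psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c 'PAUSE;' && sleep 10
```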
