Obsfly

pg_stat_statements: the complete 2026 guide

Every column, every gotcha, the queries you should run today — and why pg_stat_statements is still the most useful 80 lines of telemetry in Postgres in 2026.


pg_stat_statements is the most useful 80 lines of telemetry in Postgres. It gives you a per-query-shape rollup of execution count, total time, rows touched, and (since Postgres 13) planning time — for free, in shared memory, with no agent. Every meaningful Postgres monitoring product is built on top of it.

And yet most engineers use maybe four of its seventeen columns. This is the complete 2026 guide: every column, every gotcha, the queries you should run today, and an honest take on its limits and the five new alternatives that emerged in 2024–2026.

On this page
  1. What pg_stat_statements actually is
  2. Enabling it (and the right settings)
  3. Every column, decoded
  4. Five queries to run today
  5. Gotchas that bite production
  6. Limits — and the 2026 alternatives
  7. FAQ

What pg_stat_statements actually is

It’s a contrib extension that hooks into Postgres’s query executor. Every time a query runs, Postgres normalizes the statement (replaces literals with $1, $2), hashes it into a queryid, and accumulates per-execution stats into a fixed-size in-memory hash table. You read it like a regular view.

Two important properties: it’s aggregated (no per-call rows — only sums and counts), and the hash table has a maximum size (pg_stat_statements.max, default 5,000). Once you exceed that, Postgres evicts the least-used entries — which is where most surprises come from.
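You can check how close you are to eviction directly. On PG 14+ the companion view pg_stat_statements_info counts deallocations since the last reset — a quick sketch:

```sql
-- How full is the hash table, and has anything been evicted?
SELECT
  (SELECT count(*) FROM pg_stat_statements)          AS tracked_statements,
  current_setting('pg_stat_statements.max')::int     AS max_statements,
  (SELECT dealloc FROM pg_stat_statements_info)      AS evictions_since_reset;  -- PG 14+
```

A nonzero evictions_since_reset means some query shapes have already been silently dropped.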

Enabling it (and the right settings)

pg_stat_statements ships with every contrib package and every cloud provider (RDS / Aurora / Cloud SQL / Azure / Crunchy Bridge). You enable it in two steps:

# postgresql.conf
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 10000          # default 5000 — bump for busy systems
pg_stat_statements.track = all          # 'top' (default) misses inner statements in functions
pg_stat_statements.track_planning = on  # since PG 13 — captures planning time too

Then create the extension once per database you care about:

CREATE EXTENSION pg_stat_statements;
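A quick sanity check that the extension is loaded and actually collecting:

```sql
-- Installed version, and a nonzero row count once traffic has run
SELECT extversion FROM pg_extension WHERE extname = 'pg_stat_statements';
SELECT count(*) FROM pg_stat_statements;
```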

Every column, decoded

Column | What it means | When you actually use it
userid / dbid | Owner and DB of the query | Multi-tenant DBs; almost always group by these
queryid | Stable hash of the normalized statement | The signature; join key for dashboards
query | Normalized statement text (pg_stat_statements rewrites literals) | Show the operator which query is slow
plans | Number of times the query was planned | Compare with calls to see the plan-to-execute ratio
total_plan_time | Cumulative planning time, ms | When planning is your bottleneck (huge IN lists)
mean_plan_time / min_plan_time / max_plan_time / stddev_plan_time | Planning latency stats | Same
calls | Number of executions | Most useful column. Sort by it.
total_exec_time | Cumulative exec time, ms | Sort by this to find what’s burning your DB
mean_exec_time / min_exec_time / max_exec_time / stddev_exec_time | Execution latency stats | Mean is the misleading one — see gotchas
rows | Cumulative rows returned or affected | Detect “small fast query, but it returns 100M rows”
shared_blks_hit / read / dirtied / written | Buffer pool stats | Cache hit ratio per signature
local_blks_* | Temp table buffer stats | Rare — mostly for ETL/large CTE work
temp_blks_read / written | Disk-spill stats | Sort/hash spilling to disk → bump work_mem
blk_read_time / blk_write_time | Time spent in OS read/write (needs track_io_timing = on) | IO bottleneck detection
wal_records / wal_fpi / wal_bytes | WAL volume per query (PG 13+) | Hot writes that bloat your WAL
jit_* | JIT compilation stats (PG 15+) | Diagnose “JIT made it slower”
stats_since / minmax_stats_since | When the row started accumulating (PG 17+) | Critical — see gotchas
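To see the plan-time columns in action, here’s a sketch that flags signatures whose planning dominates execution (requires track_planning = on):

```sql
SELECT
  queryid,
  calls,
  round(total_plan_time::numeric, 1) AS plan_ms,
  round(total_exec_time::numeric, 1) AS exec_ms,
  round((100 * total_plan_time /
         NULLIF(total_plan_time + total_exec_time, 0))::numeric, 1) AS plan_pct
FROM pg_stat_statements
WHERE total_plan_time > 0
ORDER BY plan_pct DESC
LIMIT 20;
```

A high plan_pct on a frequent query is the classic signature of huge IN lists or missing prepared statements.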

Five queries to run today

1. Top time-burners

SELECT
  queryid,
  substring(query for 100) AS q,
  calls,
  round(total_exec_time::numeric, 1) AS total_ms,
  round(mean_exec_time::numeric, 2)  AS mean_ms,
  rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;

Sort by total_exec_time, not mean_exec_time. A query that runs 10ms a million times costs more than a query that runs 5s ten times. Most outages live at the top of this list.
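A useful variant expresses each signature as a share of total database time, so you can say “this one query is 40% of our load” — a sketch using a window aggregate:

```sql
SELECT
  queryid,
  substring(query for 80) AS q,
  round((100 * total_exec_time /
         sum(total_exec_time) OVER ())::numeric, 1) AS pct_of_load
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```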

2. Cache hit ratio per signature

SELECT
  queryid,
  shared_blks_hit,
  shared_blks_read,
  round(100.0 * shared_blks_hit /
        NULLIF(shared_blks_hit + shared_blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_statements
WHERE shared_blks_hit + shared_blks_read > 1000
ORDER BY cache_hit_pct
LIMIT 20;

Anything below 90% wants attention; anything below 50% on a busy signature is usually a missing index.

3. Write-amplifying queries

SELECT
  queryid,
  substring(query for 80) AS q,
  calls,
  pg_size_pretty(wal_bytes) AS wal_bytes,
  wal_fpi
FROM pg_stat_statements
ORDER BY wal_bytes DESC
LIMIT 20;

High wal_fpi (full-page images) means the query is dirtying pages for the first time since the last checkpoint, so Postgres logs each full 8 kB page and bloats the WAL. Common culprit: a bulk UPDATE on a wide table right after a checkpoint.

4. Disk-spill detection

SELECT
  queryid,
  substring(query for 80) AS q,
  pg_size_pretty(temp_blks_read::bigint * 8192)    AS temp_read,
  pg_size_pretty(temp_blks_written::bigint * 8192) AS temp_written
FROM pg_stat_statements
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 20;

Anything in this list spilled a sort or hash to disk because work_mem wasn’t large enough. Either bump work_mem for that role/session, or rewrite the query.
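Bumping work_mem for the offending role rather than globally is usually the safer fix — a sketch, assuming a hypothetical role named etl_worker:

```sql
-- Takes effect for new sessions of this role only
ALTER ROLE etl_worker SET work_mem = '256MB';

-- Or just for the current session while testing a rewrite:
SET work_mem = '256MB';
```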

5. Reset and re-measure

SELECT pg_stat_statements_reset();

Reset before a release, then read after — you’ll see exactly what the new code is doing instead of an average over weeks.

Gotchas that bite production

  • queryid is unstable across major versions. Postgres 14+ uses a different hash function than 13. Don’t use queryid as a join key across an upgrade.
  • mean_exec_time hides bimodals. A query that’s 1ms when warm and 5s when cold — say 1% of calls hit cold — has a mean of ~50ms, which tells you nothing. Always look at stddev_exec_time alongside mean. Better: store HDR histograms. (See Why your Postgres p99 lies.)
  • Eviction silently drops history. Hit pg_stat_statements.max and the least-used entries vanish. You won’t see a warning. Watch pg_stat_statements_info.dealloc (PG 14+) to detect it.
  • track = top misses inner SQL. Default mode skips statements inside functions. Set track = all if you have non-trivial PL/pgSQL.
  • Cloud-provider differences. RDS exposes most columns. Aurora has a slight delay. Cloud SQL truncates the query column more aggressively. Verify column-by-column before you trust a dashboard.
  • Reset destroys the global view. If two teams share a DB and one resets to debug, the other team’s dashboards lose history. Coordinate or use pg_stat_statements_reset(userid, dbid, queryid) for surgical resets (PG 12+).
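The surgical reset mentioned above can be driven from the view itself — a sketch that resets only statements touching one (hypothetical) table, leaving everyone else’s history intact (PG 12+):

```sql
SELECT pg_stat_statements_reset(s.userid, s.dbid, s.queryid)
FROM pg_stat_statements AS s
WHERE s.query LIKE '%orders%';  -- 'orders' is a placeholder table name
```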

Limits — and the 2026 alternatives

pg_stat_statements is rolling aggregate only. It can’t answer: “what was p99 of this query at 14:35 yesterday?” or “show me the actual plan from the slow execution.” That’s where DBM agents come in.

The 2026 alternatives — none replace it, all complement it:

  • pg_stat_kcache — adds CPU and OS-level disk-IO stats per signature.
  • pg_qualstats + hypopg — index recommendation built on missing-quals stats.
  • auto_explain — captures EXPLAIN plans for slow executions, so you have plans not just stats.
  • The new pg_stat_io view (PG 16+) — splits IO by backend and context, complementary to pg_stat_statements.
  • OpenTelemetry-flavored Postgres receivers, which scrape pg_stat_statements and ship to Tempo / Loki / Grafana.
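auto_explain slots into the same postgresql.conf you already edited — a minimal sketch:

```
# postgresql.conf
shared_preload_libraries = 'pg_stat_statements,auto_explain'
auto_explain.log_min_duration = '500ms'  # log the plan of anything slower
auto_explain.log_analyze = on            # actual row counts (adds timing overhead)
```

Together they cover both halves of the question: pg_stat_statements tells you which shape is slow, auto_explain shows you the plan it ran with.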

Obsfly Query Summary is built on top of pg_stat_statements + pg_stat_kcache + auto_explain, with HDR histograms persisted in ClickHouse so you can answer time-bounded percentile questions that the in-memory view can’t.

FAQ

Does pg_stat_statements affect production performance?
Negligibly. Benchmarks typically show under 1% overhead on OLTP workloads at default settings. The hash table lives in shared memory, and updates take only lightweight per-entry locks.

Why is my queryid different from what my ORM logs?
ORMs log the un-normalized statement; pg_stat_statements stores the normalized form. To find the row that matches your query, replace literals with $1, $2, … and look it up.

Can I use it without superuser?
Yes. Grant the pg_read_all_stats role (PG 10+) to your monitoring user. They’ll see all statements without superuser.

Is it safe on a replica?
Yes — the extension runs on the replica and tracks the read queries that hit it. Replica stats are independent of primary stats.


Monitor your databases like your services.

Book a 30-minute demo. We’ll walk through your fleet and put together a 30-day offer.
