PostgreSQL Performance Tuning:
Settings Most DBAs Miss

A stock PostgreSQL installation is tuned for compatibility and minimal resource usage - not performance. The default shared_buffers is 128 MB, which is laughably small for any production workload. Here are the parameters that actually matter and the values to set them to.

The Problem with Default Configuration

PostgreSQL's default configuration dates from an era when 256 MB of RAM was generous. On a modern server with 16 GB of RAM, the defaults leave over 90% of available memory unused by the database engine. Most settings in postgresql.conf ship commented out, so PostgreSQL silently falls back to these ancient defaults.

The good news is that the most impactful parameters are well-understood and follow a consistent formula based on your server's available RAM. You don't need to be a database expert - you need a spreadsheet.

💡
Use PGTune as a starting point. The open-source PGTune tool generates a tailored postgresql.conf snippet based on your RAM, CPU count, and workload type. Use it as a baseline, not a final answer.

Memory Settings

shared_buffers

shared_buffers is the amount of memory PostgreSQL uses for caching data pages. The standard recommendation is 25% of total RAM. On a 16 GB server, set it to 4 GB. PostgreSQL also relies on the OS page cache, so you don't want to give it everything.

shared_buffers = 4GB          # 25% of RAM
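
One way to sanity-check the size you picked is the buffer-cache hit ratio. This query against pg_stat_database is a common diagnostic; on a warmed-up workload, a ratio persistently below ~99% suggests shared_buffers (or total RAM) is undersized:

```sql
-- Approximate cache hit ratio across all databases.
-- blks_hit  = reads served from shared_buffers
-- blks_read = reads that had to go to the OS cache or disk
SELECT round(
         sum(blks_hit) * 100.0
         / nullif(sum(blks_hit) + sum(blks_read), 0), 2
       ) AS cache_hit_pct
FROM pg_stat_database;
```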

effective_cache_size

This parameter doesn't allocate memory - it tells the query planner how much memory it can assume is available for caching (including the OS page cache). Set it to 50–75% of total RAM. A higher value makes the planner prefer index scans over sequential scans.

effective_cache_size = 12GB   # 75% of RAM

work_mem

work_mem controls memory per sort or hash operation within a query. The catch: a single complex query can use work_mem many times simultaneously. Don't set this globally too high. A reasonable starting point is total RAM / (max_connections * 4).

work_mem = 64MB               # Formula gives ~40MB for 16GB/100 conns;
                              # 64MB is safe if most connections sit idle

# For analytics workloads, override per session:
SET work_mem = '256MB';

maintenance_work_mem

Used by VACUUM, CREATE INDEX, and ALTER TABLE. Set it higher than work_mem - 256 MB to 1 GB is typical - because maintenance operations don't run concurrently in the same way as queries.

maintenance_work_mem = 512MB
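
Like work_mem, this can also be raised per session for a one-off heavy operation. A sketch for a large index build (the index name and column here are placeholders):

```sql
-- Temporarily raise the limit for this session only
SET maintenance_work_mem = '1GB';

-- Hypothetical large index build that benefits from the extra memory
CREATE INDEX CONCURRENTLY idx_orders_created_at ON orders (created_at);

-- Return to the server default
RESET maintenance_work_mem;
```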

WAL and Checkpoint Settings

The Write-Ahead Log (WAL) is PostgreSQL's mechanism for durability and crash recovery. Poorly tuned WAL settings cause frequent checkpoint activity, which shows up as periodic spikes in I/O.

# Allow more WAL data between checkpoints (reduces I/O spikes)
max_wal_size = 4GB
min_wal_size = 1GB

# Spread checkpoint I/O over 90% of the checkpoint interval
checkpoint_completion_target = 0.9

# Increase WAL buffer
wal_buffers = 64MB

# For SSDs - enable asynchronous commit for non-critical writes
# (small risk of losing last ~200ms of commits on crash)
# synchronous_commit = off
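
Rather than turning synchronous_commit off globally, you can relax it for individual transactions whose writes are expendable. SET LOCAL scopes the change to a single transaction, leaving the global default untouched (the audit_log table below is hypothetical):

```sql
BEGIN;
-- Applies only to this transaction; everything else stays durable
SET LOCAL synchronous_commit = off;
INSERT INTO audit_log (event) VALUES ('page_view');
COMMIT;
```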

Connection Management with PgBouncer

Every PostgreSQL connection is a separate OS process that consumes around 5–10 MB of RAM. Setting max_connections = 500 wastes gigabytes of RAM on idle processes. The solution is a connection pooler.

PgBouncer sits between your application and PostgreSQL, maintaining a small pool of real database connections and multiplexing application connections across them. Install it on the same host as PostgreSQL:

apt install pgbouncer

Configure /etc/pgbouncer/pgbouncer.ini:

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr     = 127.0.0.1
listen_port     = 6432
auth_type       = md5
auth_file       = /etc/pgbouncer/userlist.txt
pool_mode       = transaction
max_client_conn = 1000
default_pool_size = 25

With pool_mode = transaction, a real DB connection is only held for the duration of a transaction, not the entire client connection lifetime. Your application connects to PgBouncer on port 6432 instead of Postgres on 5432. Drop your PostgreSQL max_connections to something sane like 100.
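
To verify that pooling is actually working, PgBouncer exposes a virtual admin database you can query with psql (assuming the listen settings above):

```sql
-- Connect first with: psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;   -- clients active/waiting vs. real server connections per pool
SHOW STATS;   -- cumulative query and transaction counts per database
```

If cl_waiting stays consistently above zero in SHOW POOLS, your default_pool_size may be too small for the workload.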

Finding Slow Queries with pg_stat_statements

Before tuning indexes or rewriting queries, you need to know which queries are actually slow. Enable the pg_stat_statements extension:

# Add to postgresql.conf
shared_preload_libraries = 'pg_stat_statements'

# After restarting PostgreSQL, enable the extension
psql -U postgres -d mydb -c "CREATE EXTENSION IF NOT EXISTS pg_stat_statements;"

Then identify your top slow queries:

SELECT
  round(total_exec_time::numeric, 2) AS total_ms,
  calls,
  round(mean_exec_time::numeric, 2)  AS mean_ms,
  round((100 * total_exec_time /
    sum(total_exec_time) OVER ())::numeric, 2) AS percentage,
  query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
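
Counters in pg_stat_statements accumulate from the moment the extension loads, so yesterday's hot spots can mask today's. After a tuning change, reset the statistics and let fresh numbers accumulate:

```sql
-- Clear accumulated statistics so subsequent numbers reflect
-- the current configuration (requires superuser by default)
SELECT pg_stat_statements_reset();
```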

Using EXPLAIN ANALYZE

Once you've identified slow queries, use EXPLAIN (ANALYZE, BUFFERS) to see the query plan and actual execution statistics. Look for Seq Scan on large tables - these often indicate missing indexes.

EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT * FROM orders WHERE customer_id = 12345 AND status = 'pending';

-- If you see: Seq Scan on orders (cost=0.00..8432.00 rows=1)
-- Add an index:
CREATE INDEX CONCURRENTLY idx_orders_customer_status
  ON orders (customer_id, status)
  WHERE status = 'pending';  -- Partial index for better selectivity
⚠️
Always use CONCURRENTLY for production indexes. CREATE INDEX CONCURRENTLY builds the index without blocking writes. A plain CREATE INDEX takes a SHARE lock that blocks all inserts, updates, and deletes for the duration of the build (reads still work) - never use it on a table with live write traffic. The trade-off: CONCURRENTLY takes longer and, if interrupted, can leave behind an invalid index you must drop and recreate.
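
To decide which tables to examine first, pg_stat_user_tables shows where sequential scans are doing the most work. Tables with many sequential scans reading many rows are the usual candidates for new indexes:

```sql
-- Tables most often read via sequential scans
SELECT relname,
       seq_scan,            -- number of sequential scans
       seq_tup_read,        -- rows read by those scans
       idx_scan             -- index scans, for comparison
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;
```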

A Minimal Production postgresql.conf

Here's a consolidated tuning block for a dedicated 16 GB PostgreSQL server with an SSD:

# Memory
shared_buffers          = 4GB
effective_cache_size    = 12GB
work_mem                = 64MB
maintenance_work_mem    = 512MB

# WAL
wal_buffers             = 64MB
max_wal_size            = 4GB
min_wal_size            = 1GB
checkpoint_completion_target = 0.9

# Connections
max_connections         = 100

# Planner
random_page_cost        = 1.1   # SSD - lower than HDD default of 4.0
effective_io_concurrency = 200  # SSD

# Logging (find slow queries)
log_min_duration_statement = 1000  # Log queries slower than 1s
log_checkpoints            = on
shared_preload_libraries   = 'pg_stat_statements'

After editing, restart PostgreSQL: systemctl restart postgresql. Monitor with pg_stat_bgwriter and pg_stat_checkpointer (PostgreSQL 17+) to verify checkpoint frequency drops.
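
A quick check of checkpoint behavior: if requested checkpoints dominate timed ones, PostgreSQL is hitting max_wal_size before the checkpoint_timeout interval elapses, and the limit should go up further. Note that the counters moved to a new view in PostgreSQL 17:

```sql
-- PostgreSQL 16 and earlier
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;

-- PostgreSQL 17 and later
SELECT num_timed, num_requested FROM pg_stat_checkpointer;
```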
