Performance Tuning Basics
Practical recommendations for connections, SQL, soft cold start, and autoscale handoff instead of manual low-level tuning.
Updated: March 21, 2026
SPG99 already performs an important part of tuning automatically: Compute calculates the baseline PostgreSQL configuration on its own based on profile, workload type, and available resources. Therefore, the main task for the user is to work correctly with connections, queries, and observability, rather than manually set dozens of low-level parameters.
1. Control the number of connections
Even with a sufficiently high `max_connections`, too many simultaneous connections are harmful:
- overhead inside PostgreSQL increases;
- latency gets worse;
- Gateway pooling becomes less efficient;
- autoscale handoff becomes harder.
Recommended practice:
- for long-lived services, use a connection pool;
- for jobs and serverless workloads, open a connection only when needed and close it afterwards;
- do not treat a high `max_connections` value as a goal in itself.
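The pooling idea for long-lived services can be sketched in a few lines. This is an illustration of the principle only, not a recommendation to hand-roll pooling — in practice use an established pooler such as psycopg_pool or HikariCP. `TinyPool` and its `connect` factory are invented names for this sketch:

```python
import queue
from contextlib import contextmanager

class TinyPool:
    """Minimal fixed-size connection pool sketch (illustration only).

    `connect` is any zero-argument factory returning a connection-like
    object; in a real service this would be a driver connect call.
    """

    def __init__(self, connect, size=5):
        self._q = queue.Queue(maxsize=size)
        for _ in range(size):
            self._q.put(connect())  # small, bounded set created up front

    @contextmanager
    def connection(self, timeout=5.0):
        # Block waiting for a free connection instead of opening more.
        conn = self._q.get(timeout=timeout)
        try:
            yield conn
        finally:
            self._q.put(conn)  # return to the pool, never close per request
```

The point is the bound: under load, callers queue for an existing connection rather than multiplying sessions on the server.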
2. Account for soft cold start
In SPG99, a sleeping database wakes up through soft basebackup and a thin startup image. This is faster than a heavy full restore, but the first attempt can still be noticeably slower than the warm path.
Therefore:
- the driver must support retry on connect;
- critical services should be prewarmed with a normal connect/check in advance;
- not every first slow connect should be treated as “the platform is broken.”
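A minimal retry-on-connect helper can look like the sketch below. It assumes nothing about the driver beyond a zero-argument `connect` callable that raises on failure; the name `connect_with_retry` and all parameter defaults are illustrative, not an SPG99 API:

```python
import random
import time

def connect_with_retry(connect, attempts=5, base_delay=0.2):
    """Retry a connect callable with exponential backoff and jitter.

    Useful when the first connect can hit a soft cold start: the
    database is still waking up, so early attempts may fail.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the real error
            # Exponential backoff with jitter to avoid thundering herds.
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random()))
```

Most mature drivers and pools offer equivalent behavior via configuration; writing it yourself is only needed when they do not.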
3. Do not break the autoscaler with session state unless it is truly needed
The new writer autoscaler uses generation handoff. What hurts it most is when the application creates many session-heavy connections without a real reason.
The worst patterns include:
- `SET` on long-lived sessions;
- temp tables;
- `LISTEN`/`UNLISTEN`;
- named prepared statements, if they can be avoided;
- long idle but pinned connections.
If the workload is truly stateless, Gateway and the autoscaler work much more efficiently.
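If a pooled connection has nevertheless accumulated session state, standard PostgreSQL provides `DISCARD ALL` to wipe it before the connection is reused — one way to keep pooled connections effectively stateless (many poolers can issue this automatically on check-in):

```sql
-- Resets all session state: GUCs changed with SET, temp tables,
-- named prepared statements, LISTEN registrations, cached plans.
DISCARD ALL;
```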
4. Do not try to replace the managed configuration manually
The platform calculates the following on its own:
- `shared_buffers`;
- `work_mem`;
- `maintenance_work_mem`;
- `effective_cache_size`;
- `max_connections`;
- part of the parallelism parameters and internal `spg99` settings.
Therefore, the standard recommendation is:
- do not tune these parameters globally by hand;
- do not try to use forbidden environment overrides;
- if you need a local experiment for one query, use `SET LOCAL` inside a transaction and measure the effect through metrics.
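A local experiment of this kind can look like the following sketch. `SET LOCAL` applies only inside the enclosing transaction, so the managed global value is untouched; the `work_mem` value here is an arbitrary example, not a recommendation:

```sql
BEGIN;
-- Transaction-scoped override; reverts automatically at COMMIT/ROLLBACK.
SET LOCAL work_mem = '64MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- the query you are experimenting with
COMMIT;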
5. Watch long transactions
In any PostgreSQL system, long transactions are often the real source of problems:
- they hold back the `xmin` horizon;
- they interfere with vacuum;
- they bloat tables and indexes;
- they block safe downscale/handoff of the writer autoscaler.
Recommendations:
- do not keep transactions open for minutes without a reason;
- use `idle_in_transaction_session_timeout`;
- watch long transactions in Monitoring.
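In standard PostgreSQL, the sessions holding the oldest open transactions can be found via `pg_stat_activity`; a query along these lines is a reasonable starting point (`60s` below is an arbitrary example value, and `app_user` is a hypothetical role name):

```sql
-- Sessions with the longest-running open transactions:
SELECT pid,
       now() - xact_start AS xact_age,
       state,
       left(query, 60) AS current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC
LIMIT 10;

-- Kill idle-in-transaction sessions for one role after a timeout:
ALTER ROLE app_user SET idle_in_transaction_session_timeout = '60s';
```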
6. Use pg_stat_statements
In SPG99, `pg_stat_statements` is enabled by default, so for bottleneck analysis it is almost always the best place to start.
It allows you to:
- find the most expensive queries;
- see frequently repeated patterns;
- compare average time and total time.
Remember that Compute is a stateless executor, so statistics may reset after a restart. This is normal.
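A typical starting query against the view, assuming PostgreSQL 13+ column names (older versions use `total_time`/`mean_time` instead of `total_exec_time`/`mean_exec_time`):

```sql
-- Top queries by total execution time:
SELECT left(query, 60) AS query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 2)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

Sorting by `total_exec_time` surfaces the queries worth optimizing first; sorting by `mean_exec_time` instead highlights individually slow queries even if they run rarely.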
7. When it is time to move to the next size
Consider moving to the next size if you regularly see:
- CPU near its upper limit;
- insufficient memory for the working set;
- constant pool checkout timeouts;
- inability to meet the SLO even after SQL optimization.
In SPG99, it is usually more reliable to move to the next size class than to try to squeeze the current one through manual overrides.
8. Use observability as the main tuning tool
For regular diagnostics, watch:
- CPU, memory, connection, and cold-start metrics;
- active queries and locks;
- `current_profile`, `target_profile`, and `scale_state`;
- Gateway pooling and signs of pinned traffic;
- Compute/PostgreSQL logs.
The best tuning is not random tweaking, but changes based on actual data from Monitoring.
