Compute Profiles

Stable

How L1–L5 sizes work, what is actually active today, and how the writer profile affects the autoscaler and managed configuration.

Updated: March 21, 2026

A compute profile defines exactly how the platform runs PostgreSQL for your database: which resources are available to the writer, which managed parameters are calculated, and how the database behaves under load.

In the user-facing SPG99 model, there are two levels of choice:

  1. Size (size) — the product-level size chosen by the user.
  2. Compute profile (current_profile / target_profile) — the runtime writer profile the platform uses during operation and autoscale handoff.

The user-facing size model L1–L5

From the product and PAYG perspective, the platform is prepared for the following sizes:

  • L1
  • L2
  • L3
  • L4
  • L5

This is the convenient external model for choosing the class of the database.

What is actually active today

In the current production serverless contract, the writer autoscaler works in the range:

  • L1
  • L2

Therefore, for real profile handoff today, the practical rule is:

  • active writer autoscaler today = L1 <-> L2
  • PAYG and the L1–L5 line are already prepared as the product model
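This rule can be expressed as a small check. The sketch below is hypothetical (the constant names and the helper function are not part of any platform API); it only encodes the fact stated above: the L1–L5 line exists in the product model, but only L1/L2 handoffs are live today.

```python
# Hypothetical sketch: validate a writer profile handoff against the
# currently active autoscaler range (L1 <-> L2).
PRODUCT_SIZES = ["L1", "L2", "L3", "L4", "L5"]   # prepared product model
ACTIVE_AUTOSCALE_RANGE = {"L1", "L2"}            # live handoff range today

def handoff_supported(current: str, target: str) -> bool:
    """Return True if an autoscale handoff between the two profiles is live."""
    return current in ACTIVE_AUTOSCALE_RANGE and target in ACTIVE_AUTOSCALE_RANGE

print(handoff_supported("L1", "L2"))  # True: live today
print(handoff_supported("L2", "L3"))  # False: product model only, not yet active
```

A transition request outside the active range would fall back to the product model: valid as a size, but not yet a live handoff.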

Workload: OLTP or Analytics

In addition to size, the platform accounts for workload type:

  • OLTP — many short transactions, more concurrent connections, lower latency;
  • Analytics — fewer simultaneous connections, more resources per heavy query.

The workload affects computed PostgreSQL parameters and internal Compute settings.

Which parameters the platform calculates automatically

At startup, the platform automatically chooses key PostgreSQL parameters based on the CPU and memory actually allocated, the size, and the workload type. The following are typically calculated automatically:

  • shared_buffers;
  • effective_cache_size;
  • maintenance_work_mem;
  • work_mem;
  • max_connections;
  • parallelism parameters;
  • part of the internal spg99 settings related to local I/O and WAL.

This matters for two reasons:

  • the database starts faster and more predictably;
  • the user does not have to maintain dozens of low-level settings manually.
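To make the shape of this calculation concrete, here is an illustrative sketch. The platform's real formulas are internal; the numbers below are common community heuristics (e.g. shared_buffers at roughly 25% of RAM) used only to show the kind of derivation performed from allocated resources and workload type.

```python
# Illustrative only: NOT the platform's actual formulas.
# Shows how parameters can be derived from allocated CPU/RAM and workload.

def computed_parameters(ram_gb: float, cpus: int, workload: str) -> dict:
    ram_mb = int(ram_gb * 1024)
    oltp = workload == "oltp"
    return {
        "shared_buffers": f"{ram_mb // 4}MB",            # ~25% of RAM
        "effective_cache_size": f"{ram_mb * 3 // 4}MB",  # ~75% of RAM
        "maintenance_work_mem": f"{min(ram_mb // 16, 2048)}MB",
        "max_connections": 200 if oltp else 50,          # OLTP: more sessions
        "work_mem": f"{4 if oltp else 64}MB",            # analytics: bigger sorts
        "max_parallel_workers_per_gather": 1 if oltp else max(2, cpus // 2),
    }

print(computed_parameters(ram_gb=8, cpus=4, workload="analytics"))
```

Note how the workload flips the trade-off: OLTP favors many sessions with small per-query memory, analytics favors fewer sessions with large per-query memory and more parallelism.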

What current_profile and target_profile mean

The new runtime fields can be read as follows:

  • current_profile — the profile on which the active writer is actually running right now;
  • target_profile — the profile the platform is trying to reach during the current autoscale transition;
  • candidate_profile — the profile of the pre-prepared candidate generation.

In other words, the writer profile is no longer only a static size label, but part of the live runtime state of the database.
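The three fields above can be modeled as a small state object. This is a hypothetical client-side sketch: the field names follow the document, but the object shape and the derived property are assumptions, not a platform API.

```python
# Hypothetical sketch: interpreting the runtime writer profile fields.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WriterRuntime:
    current_profile: str               # profile the active writer runs on now
    target_profile: Optional[str]      # profile being reached during autoscale
    candidate_profile: Optional[str]   # pre-prepared candidate generation

    @property
    def transition_in_progress(self) -> bool:
        # A handoff is underway when the target differs from the current profile.
        return (self.target_profile is not None
                and self.target_profile != self.current_profile)

rt = WriterRuntime(current_profile="L1", target_profile="L2", candidate_profile="L2")
print(rt.transition_in_progress)  # True: handoff from L1 to L2 underway
```

Treating the profile as runtime state like this lets monitoring distinguish "stable on L1" from "moving from L1 to L2" without any low-level knobs.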

Connection limits

The exact max_connections value is calculated by the platform based on the profile and workload. In production, it is still better to rely on connection pooling rather than trying to saturate the limit.
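The pooling idea can be sketched with the standard library alone. This is a minimal illustration, not a recommendation to hand-roll a pool (use your driver's or a dedicated pooler in practice); the make_conn factory stands in for a real driver's connect() call.

```python
# Minimal pooling sketch (stdlib only): reuse a small fixed set of
# connections instead of opening one per request and chasing max_connections.
import queue
from contextlib import contextmanager

class TinyPool:
    def __init__(self, make_conn, size: int = 5):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())  # pre-open the whole pool

    @contextmanager
    def connection(self):
        conn = self._pool.get()          # block until a connection is free
        try:
            yield conn
        finally:
            self._pool.put(conn)         # return to the pool, never close

pool = TinyPool(make_conn=lambda: object(), size=3)
with pool.connection() as conn:
    pass  # run queries on conn here
```

With a pool, concurrency is bounded by the pool size, so the platform-calculated max_connections becomes headroom rather than a target to fill.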

What compute_profile is

compute_profile is a lower-level name for the Compute startup profile. It usually defines infrastructure placement details more precisely than the public size.

For most users, the rule is simple:

  • choose size;
  • do not tie yourself to internal profile names without a reason;
  • use current_profile and target_profile as runtime signals, not as manual low-level knobs.

Practical recommendation

  • For most new databases, start with L1.
  • If you need more headroom and the platform already operates with L2 handoff, use L2.
  • If you are planning broader PAYG scaling on the product side, design naming and expectations around L1–L5, but remember that the currently active autoscale handoff works specifically in the L1/L2 range.