Architecture · April 26, 2026

Platform Cache Is Free Money for Your Integration Performance - And Nobody Uses It

Most Salesforce integration teams have never provisioned a Platform Cache partition. That's a structural mistake. Here is the Partition model, the capacity numbers, and the integration patterns that turn Platform Cache into meaningful latency reduction without writing a line of external caching code.

☕ 8 min read
  • Platform Cache is available on Enterprise Edition and above — not on Professional
  • Partitions have two caches: Org (shared across sessions) and Session (per-user)
  • Minimum partition size is 5 MB; trial capacity is 10 MB; purchase in 10 MB blocks
  • Developer orgs start at 0 MB — you must request a trial cache to use it in dev
  • Use Org cache for expensive-but-shared data (rates, feature flags); Session cache for per-user derived data

Platform Cache is the most consistently underused free feature on the Salesforce platform. It’s been generally available for years. Every Enterprise-edition org has access to it. And yet, in every org I walk into, the first question I ask — “what’s your Platform Cache usage?” — is answered with some variant of “our what?”

That gap is a structural advantage for the teams that do use it. Cached FX rates instead of a nightly SOAP callout per transaction. Cached picklist metadata instead of repeated describe calls. Cached integration tokens instead of an OAuth round-trip on every outbound request. All of it sitting in a partition the platform manages for you, with the simple Cache.Org and Cache.Session APIs.

Here’s what every integration architect should know.

The Model in One Picture

Platform Cache — Two Partition Types, One Configuration
Org Cache (Cache.Org API)
  • Shared across all sessions
  • Good fits: FX rates, feature flags, metadata describe results, integration tokens, static reference data
  • Default TTL: up to 48 hours

Session Cache (Cache.Session API)
  • Scoped to a single user session
  • Good fits: per-user derived data, multi-step wizard state, expensive permission checks, user-specific computations
  • Default TTL: session lifetime

The mental model: a partition is a block of capacity you allocate. Inside the partition, data is split between Org Cache (shared across users) and Session Cache (scoped to a user’s session). You pick which one to use based on who benefits — everyone or one user.

Availability, Pricing, and the Developer Footgun

Per the Platform Cache Partitions documentation in the Apex Developer Guide and the Platform Cache Limits reference:

  • Edition: Enterprise Edition and above; not available on Professional Edition
  • Minimum partition size: 5 MB
  • Developer org default: 0 MB — you must request trial capacity to use cache in dev
  • Trial capacity: 10 MB
  • Purchase increments: 10 MB blocks
  • Provider Free capacity: 3 MB, automatically available to security-reviewed managed packages
⚠️ The 0 MB developer-org trap

This is where most teams fail before they start. A fresh Developer Edition or Developer Sandbox has 0 MB of Platform Cache. Your Apex code will compile and run, but every Cache.Org.put() silently fails to persist, and every Cache.Org.get() returns null. If your dev loop seems to show “cache doesn’t work,” this is almost always the reason. Request trial capacity in Setup → Platform Cache → “Request Trial Capacity” and you’ll get 10 MB for 30 days.

When Platform Cache Pays Off for Integrations

Integration Patterns Where Platform Cache Wins
Four repeatable wins:

  • FX rates: callout once per hour, store in Org Cache (~500 ms saved per call)
  • OAuth tokens: refresh every 45 min, store in Org Cache (~300 ms saved per call)
  • Describe calls: per-object metadata, store in Org Cache (~50 ms saved per call)
  • User context: perms and profile data, store in Session Cache (~100 ms saved per action)

Cumulative effect on a busy org: minutes of Apex CPU saved daily. Often more impact than any single query optimization.

Pattern 1 — FX Rates

If your org handles multi-currency, you are almost certainly either pulling FX rates from an external service or loading them from a custom object on every Opportunity save. Either way, you’re doing work repeatedly for data that changes hourly at most.

Put the rate table in Cache.Org with a 1-hour TTL. The first transaction of the hour loads from the source. Every subsequent transaction reads from memory.
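
A minimal cache-aside sketch of that pattern, assuming an integrations partition has been provisioned; the class, key, and loadRatesFromSource() helper are illustrative names, not a standard API:

```apex
public class FxRateCache {
    // 'local' = no namespace, 'integrations' = partition, 'FxRates' = key
    private static final String KEY = 'local.integrations.FxRates';
    private static final Integer TTL_SECS = 3600; // refresh at most hourly

    public static Map<String, Decimal> getRates() {
        // Cache hit: no callout, no SOQL
        Map<String, Decimal> rates = (Map<String, Decimal>) Cache.Org.get(KEY);
        if (rates == null) {
            // Miss or eviction: rebuild from the source of truth
            rates = loadRatesFromSource();
            Cache.Org.put(KEY, rates, TTL_SECS);
        }
        return rates;
    }

    private static Map<String, Decimal> loadRatesFromSource() {
        // Placeholder: replace with your HTTP callout or custom-object query
        return new Map<String, Decimal>();
    }
}
```

The null check doubles as the fallback path, so an evicted partition degrades to the uncached behavior rather than failing.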

Pattern 2 — OAuth Tokens for Callouts

Consider an integration to an external system with a 1-hour token lifetime. Without caching, every callout either triggers a token refresh or forces you to manage the token lifecycle manually in Apex. With Org Cache, the first request fetches the token and subsequent requests reuse it. Your effective callout count drops, and latency drops with it.

Pattern 3 — Describe Call Results

Describe calls (getDescribe(), Schema.describeSObjects()) are expensive when made inside a hot loop. Cache the relevant subset, such as the active picklist values for a field, and reuse it.
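
For example, picklist values can be cached once per day; the key and partition names below are illustrative:

```apex
public static List<String> getOppStageNames() {
    List<String> stages = (List<String>) Cache.Org.get('local.shared.OppStageNames');
    if (stages == null) {
        stages = new List<String>();
        // One describe call fills the cache for everyone
        for (Schema.PicklistEntry e :
                Opportunity.StageName.getDescribe().getPicklistValues()) {
            if (e.isActive()) { stages.add(e.getValue()); }
        }
        Cache.Org.put('local.shared.OppStageNames', stages, 86400); // 24 h TTL
    }
    return stages;
}
```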

Pattern 4 — Per-User Permission State

If you compute complex authorization state at the start of an LWC flow, store it in Cache.Session, which is already scoped to the current user's session. The next action in the same session skips the computation.
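
A sketch of that pattern, with checkPermissions() standing in for your own expensive logic (both names are illustrative):

```apex
public static Map<String, Boolean> getUserPermissions() {
    Map<String, Boolean> perms =
        (Map<String, Boolean>) Cache.Session.get('local.ui.UserPerms');
    if (perms == null) {
        // Expensive path: SOQL over permission sets, custom settings, etc.
        perms = checkPermissions();
        Cache.Session.put('local.ui.UserPerms', perms);
    }
    // Session cache is per user session, so no user-id key suffix is needed
    return perms;
}
```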

The API in Practice

The Apex surface is deliberately simple:

// Write to org cache — shared across all users.
// Keys are partition-qualified as 'namespace.partition.key'; the key name
// itself must be alphanumeric, so 'local.integrations.SAPToken' means
// key SAPToken in the integrations partition ('local' = no namespace).
Cache.Org.put('local.integrations.SAPToken', tokenString, 2700); // 45 min TTL

// Read — always handle the miss/eviction case
String token = (String) Cache.Org.get('local.integrations.SAPToken');
if (token == null) {
    token = refreshTokenFromSAP();
    Cache.Org.put('local.integrations.SAPToken', token, 2700);
}

// Session cache — scoped to the current user's session
Cache.Session.put('local.ui.UserPerms', permissionMap);
Map<String, Boolean> perms = (Map<String, Boolean>) Cache.Session.get('local.ui.UserPerms');

The key convention that works at scale: qualify every key with its partition (e.g., local.integrations.SAPToken, local.ui.UserPerms). Platform Cache key names themselves must be alphanumeric, so the partition is your namespace; it keeps collaborating code from stepping on your entries.

💡 Always plan for the null return

Cache is not durable storage. Partitions can be evicted under memory pressure, TTLs expire, and data can silently disappear. Every Cache.Org.get() must be followed by a null check and a fallback path. If you find yourself treating Platform Cache as durable, you’ve drifted into using it wrong.

The Partition Design You Actually Want

Per the Platform Cache Features documentation, you can have multiple partitions, each with its own Org/Session capacity split.

Most orgs benefit from three partitions:

  • integrations: external callout tokens, reference data (60-70% of capacity)
  • ui: LWC-level caches, wizard state (20-30%)
  • shared: org-wide configuration, feature flags (10%)
ℹ️ Partition boundaries are a governance control

Separate partitions don’t just help you reason about capacity. They let you assign clear ownership — the integrations team owns the integrations partition, the UI team owns the ui partition. Cross-team concerns don’t evict each other’s entries under memory pressure.

Real-World Scenario

🚨 When caching the wrong thing made things worse

Problem: A team cached full Account records in Org Cache to avoid SOQL queries during a save trigger. Two weeks later, users reported seeing stale account data in derived reports. Root cause: the cache was updated when the record was saved via the UI, but not when Data Loader bulk-updated the same records overnight.

Fix: Only cache slow-changing, read-mostly data. Account records fail that test — they change via multiple surfaces you can’t easily invalidate from. FX rates, OAuth tokens, and describe-call results all pass.

What Not To Cache

  • Record-level data that has multiple write paths (UI, API, Bulk API, Flow)
  • Anything subject to row-level security that varies between users
  • Sensitive data (there’s no field-level encryption for Platform Cache — use Shield or don’t cache it)
  • Data that must be strongly consistent across a transaction

Measuring the Win

Two instrumentation tricks that prove value to leadership

1. Hit rate logging. Wrap your cache reads in a helper that counts hit/miss with System.debug and pushes a metric to a custom object once an hour. After a week, you have concrete data on cache effectiveness — typically 85-98% for slow-changing data, and a clear “here’s what we saved” story for leadership.
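
A minimal wrapper along these lines (class and persistence mechanism are up to you):

```apex
public class InstrumentedCache {
    // Transaction-scoped counters; flush them to a custom object
    // or Platform Event from your own scheduled job
    public static Integer hits = 0;
    public static Integer misses = 0;

    public static Object get(String key) {
        Object value = Cache.Org.get(key);
        if (value != null) { hits++; } else { misses++; }
        System.debug('cache ' + (value != null ? 'HIT' : 'MISS') + ' ' + key);
        return value;
    }
}
```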

2. Before/after latency. Pick your busiest integration, deploy the caching behind a Custom Metadata flag, run for a week with it off, a week with it on. Compare Apex CPU time from the limits log. Real numbers beat every performance-tuning argument you’ll ever have.
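
The flag check itself is a few lines; Feature_Flag__mdt, its Is_Active__c field, and both helper methods are illustrative names for your own metadata type and code paths:

```apex
// Gate the cached path behind a Custom Metadata flag for the A/B weeks
Feature_Flag__mdt flag = Feature_Flag__mdt.getInstance('Enable_Rate_Cache');
Boolean useCache = (flag != null && flag.Is_Active__c);

Map<String, Decimal> rates = useCache
    ? getRatesFromCache()     // cached path under test
    : loadRatesFromSource();  // uncached baseline week
```

Because Custom Metadata is deployable and editable without a code change, you can flip the flag mid-experiment without a release.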

Where the Official Docs Live

Bookmark the Platform Cache Limits page in the Apex Developer Guide. It's the one that saves you from a surprise in production.



What’s the most expensive repeated query or callout in your org right now — and has it ever occurred to you to cache it?
