Platform Cache is the most consistently underused free feature on the Salesforce platform. It’s been generally available for years. Every Enterprise-edition org has access to it. And yet, in every org I walk into, the first question I ask — “what’s your Platform Cache usage?” — is answered with some variant of “our what?”
That gap is a structural advantage for the teams that do use it. Cached FX rates instead of a SOAP callout on every transaction. Cached picklist metadata instead of repeated describe calls. Cached integration tokens instead of an OAuth round-trip on every outbound request. All of it sitting in a partition the platform manages for you, behind the simple Cache.Org and Cache.Session APIs.
Here’s what every integration architect should know.
The Model in One Picture
The mental model: a partition is a block of capacity you allocate. Inside the partition, data is split between Org Cache (shared across users) and Session Cache (scoped to a user’s session). You pick which one to use based on who benefits — everyone or one user.
Availability, Pricing, and the Developer Footgun
Per the Platform Cache Partitions documentation in the Apex Developer Guide and the Platform Cache Limits reference:
| Aspect | Detail |
|---|---|
| Edition | Enterprise Edition and above. Not available on Professional Edition |
| Minimum partition size | 5 MB |
| Developer org default | 0 MB — you must request trial capacity to use cache in dev |
| Trial capacity | 10 MB |
| Purchase increments | 10 MB blocks |
| Provider Free capacity | 3 MB, automatically available to security-reviewed managed packages |
This is where most teams fail before they start. A fresh Developer Edition or Developer Sandbox has 0 MB of Platform Cache. Your Apex code will compile and run, but every Cache.Org.put() silently fails to persist, and every Cache.Org.get() returns null. If your dev loop seems to show “cache doesn’t work,” this is almost always the reason. Request trial capacity in Setup → Platform Cache → “Request Trial Capacity” and you’ll get 10 MB for 30 days.
When Platform Cache Pays Off for Integrations
Pattern 1 — FX Rates
If your org handles multi-currency, you are almost certainly either pulling FX rates from an external service or loading them from a custom object on every Opportunity save. Either way, you’re doing work repeatedly for data that changes hourly at most.
Put the rate table in Cache.Org with a 1-hour TTL. The first transaction of the hour loads from the source. Every subsequent transaction reads from memory.
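A minimal read-through sketch of that pattern; the key name, the integrations partition, and the loadRatesFromSource() helper (wrapping whatever callout or query you do today) are all illustrative:

```apex
// Read-through cache for the hourly FX table.
// 'local.integrations.fxRates' and loadRatesFromSource() are assumed names.
public static Map<String, Decimal> getFxRates() {
    Map<String, Decimal> rates =
        (Map<String, Decimal>) Cache.Org.get('local.integrations.fxRates');
    if (rates == null) {                           // first read of the hour, or eviction
        rates = loadRatesFromSource();             // callout or custom-object query
        Cache.Org.put('local.integrations.fxRates', rates, 3600); // 1-hour TTL
    }
    return rates;
}
```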
Pattern 2 — OAuth Tokens for Callouts
An integration to an external system with a 1-hour token lifetime — without caching, every callout either triggers a token refresh or you wear the lifecycle manually in Apex. With Org Cache, the first request fetches; subsequent requests reuse. Your effective callout count drops and latency drops with it.
Pattern 3 — Describe Call Results
Describe calls (for example, Opportunity.StageName.getDescribe() to fetch picklist values) are expensive when made inside a hot loop. Cache the relevant subset once per picklist and reuse it.
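A sketch of that idea for one picklist; the partition and key names are assumptions:

```apex
// Cache the Opportunity StageName picklist labels instead of re-describing per row
public static List<String> getStageLabels() {
    List<String> labels = (List<String>) Cache.Org.get('local.shared.oppStageLabels');
    if (labels == null) {
        labels = new List<String>();
        for (Schema.PicklistEntry entry :
                Opportunity.StageName.getDescribe().getPicklistValues()) {
            labels.add(entry.getLabel());
        }
        Cache.Org.put('local.shared.oppStageLabels', labels, 3600); // refresh hourly
    }
    return labels;
}
```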
Pattern 4 — Per-User Permission State
If you compute complex authorization state at the start of an LWC flow, store it in Cache.Session keyed by the user. The next action in the same session skips the computation.
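Sketched below, assuming a hypothetical computePermissionFlags() that does the expensive work; the key name is illustrative:

```apex
// Per-user authorization flags, computed at most once per session
Map<String, Boolean> perms =
    (Map<String, Boolean>) Cache.Session.get('local.ui.permFlags');
if (perms == null) {
    perms = computePermissionFlags();                   // the expensive part
    Cache.Session.put('local.ui.permFlags', perms, 1800); // 30-min TTL
}
```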
The API in Practice
The Apex surface is deliberately simple:
```apex
// Write to org cache — shared across all users.
// Qualified keys use the namespace.partition.key format (e.g. local.integrations.sapToken);
// the key name itself must be alphanumeric.
Cache.Org.put('local.integrations.sapToken', tokenString, 2700); // 45-min TTL

// Read, with the mandatory null check and fallback path
String token = (String) Cache.Org.get('local.integrations.sapToken');
if (token == null) {
    token = refreshTokenFromSAP();
    Cache.Org.put('local.integrations.sapToken', token, 2700);
}

// Session cache — scoped to the current user's session
Cache.Session.put('local.ui.userPermissions', permissionMap);
Map<String, Boolean> perms =
    (Map<String, Boolean>) Cache.Session.get('local.ui.userPermissions');
```
The key convention that works at scale: qualify every key with its partition (local.integrations.sapToken, local.ui.userPermissions) so you don't step on collaborating code.
Cache is not durable storage. Partitions can be evicted under memory pressure, TTLs expire, and data can silently disappear. Every Cache.Org.get() must be followed by a null check and a fallback path. If you find yourself treating Platform Cache as durable, you’ve drifted into using it wrong.
The Partition Design You Actually Want
Per the Platform Cache Features documentation, you can have multiple partitions, each with its own Org/Session capacity split.
Most orgs benefit from three partitions:
| Partition | Purpose | Typical allocation |
|---|---|---|
| integrations | External callout tokens, reference data | 60-70% of capacity |
| ui | LWC-level caches, wizard state | 20-30% |
| shared | Org-wide configuration, feature flags | 10% |
Separate partitions don’t just help you reason about capacity. They let you assign clear ownership — the integrations team owns the integrations partition, the UI team owns the ui partition. Cross-team concerns don’t evict each other’s entries under memory pressure.
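One way to make that ownership visible at the call site is to grab a partition handle. A sketch using the integrations partition from the table; the key name is illustrative:

```apex
// Cache.Org.getPartition returns a handle scoped to one named partition,
// so keys no longer need the 'local.partition.' prefix
Cache.OrgPartition integrations = Cache.Org.getPartition('local.integrations');
integrations.put('sapToken', tokenString, 2700);
String cached = (String) integrations.get('sapToken');
```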
Real-World Scenario
Problem: A team cached full Account records in Org Cache to avoid SOQL queries during a save trigger. Two weeks later, users reported seeing stale account data in derived reports. Root cause: the cache was updated when the record was saved via the UI, but not when Data Loader bulk-updated the same records overnight.
Fix: Only cache slow-changing, read-mostly data. Account records fail that test — they change via multiple surfaces you can’t easily invalidate from. FX rates, OAuth tokens, and describe-call results all pass.
What Not To Cache
- Record-level data that has multiple write paths (UI, API, Bulk API, Flow)
- Anything subject to row-level security that varies between users
- Sensitive data (there’s no field-level encryption for Platform Cache — use Shield or don’t cache it)
- Data that must be strongly consistent across a transaction
Measuring the Win
Two instrumentation tricks prove the value to leadership:
1. Hit rate logging. Wrap your cache reads in a helper that counts hit/miss with System.debug and pushes a metric to a custom object once an hour. After a week, you have concrete data on cache effectiveness — typically 85-98% for slow-changing data, and a clear “here’s what we saved” story for leadership.
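A sketch of such a wrapper; the class and key names are illustrative. Note the static counters reset each transaction, so rolling totals up into a custom object hourly (as suggested above) needs an async job this sketch omits:

```apex
// Read wrapper that counts cache hits and misses per transaction
public class CacheMetrics {
    public static Integer hits = 0;
    public static Integer misses = 0;

    public static Object read(String qualifiedKey) {
        Object value = Cache.Org.get(qualifiedKey);
        if (value != null) { hits++; } else { misses++; }
        System.debug('cache ' + (value != null ? 'HIT ' : 'MISS ') + qualifiedKey);
        return value;
    }
}
```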
2. Before/after latency. Pick your busiest integration, deploy the caching behind a Custom Metadata flag, run for a week with it off, a week with it on. Compare Apex CPU time from the limits log. Real numbers beat every performance-tuning argument you’ll ever have.
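A sketch of the flag check, assuming a hypothetical Feature_Flag__mdt custom metadata type with an Is_Active__c checkbox, and hypothetical cached/direct helpers for the two paths:

```apex
// Gate the caching path behind Custom Metadata so it toggles without a deploy
Feature_Flag__mdt flag = Feature_Flag__mdt.getInstance('Fx_Rate_Cache');
Boolean useCache = (flag != null) && flag.Is_Active__c;
Map<String, Decimal> rates = useCache
    ? getFxRates()            // cached read-through path
    : loadRatesFromSource();  // direct path, for the baseline week
```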
Where the Official Docs Live
- Platform Cache — Apex Developer Guide — API surface
- Platform Cache Partitions — partition design
- Platform Cache Limits — capacity, edition, eviction behaviour
- Platform Cache Features — feature overview
Bookmark the Limits page. It’s the one that saves you from a surprise in production.
What’s the most expensive repeated query or callout in your org right now — and has it ever occurred to you to cache it?