Architecture · November 18, 2025

Governor Limits Every Salesforce Architect Must Know

A complete reference to Salesforce governor limits with exact numbers, visual comparisons, and bulkification patterns that prevent hitting them.

☕ 9 min read 📅 November 18, 2025
  • All governor limits reset per transaction, not per trigger or method call
  • Bulkification is not optional; it is the single most important Apex pattern
  • CPU time limit (10 seconds synchronous) is the hardest limit to diagnose and recover from

Every Salesforce developer learns about governor limits early. Most learn about them the hard way: a trigger that works perfectly in sandbox starts throwing System.LimitException in production when 200 records are imported via Data Loader. I have been that developer. I have also been the architect reviewing the code that caused it.

Governor limits exist because Salesforce is a multi-tenant platform. A single rogue transaction that hammers the database would degrade performance for every other org sharing that server. The limits are Salesforce's mechanism for enforcing good citizenship. The good news is that once you deeply understand the limits, designing around them becomes instinct.

The Problem

A trigger that queries inside a loop works perfectly in a developer sandbox with 5 records. It passes QA with 20 records. It reaches production, and a Data Loader import of 200 Accounts triggers a System.LimitException: Too many SOQL queries: 101, crashing the entire import and leaving users locked out of saving records.

The Solution

Move all SOQL outside of loops (bulkification), use the Limits class to monitor usage at runtime, and architect data flows so that a single transaction never needs more than a fraction of the available limit budget, leaving headroom for other automations sharing the same transaction.

The Visual Limits Reference

Salesforce Per-Transaction Governor Limits

Limit                        Max
SOQL Queries                 100
DML Statements               150
SOQL Rows Returned           50,000
DML Rows Processed           10,000
Heap Size                    6 MB
CPU Time (Sync)              10 sec
CPU Time (Async)             60 sec
HTTP Callouts                100
Future Method Calls          50
Queueable Jobs Enqueued      50

All limits are per transaction. Async context limits differ; see the CPU time rows.

The Rule That Governs All Rules: Bulkification

Before I go through each limit, I want to hammer this point: all governor limits are per-transaction, not per record, not per trigger invocation. A trigger invoked for 200 records in a single DML operation has the same 100 SOQL queries available as a trigger invoked for 1 record. If your trigger queries inside a loop, a 200-record batch burns 200 of your 100 allowed queries, and the transaction fails.

This is bulkification: moving all database operations outside of loops.

💡 Pro Tip

Governor limits are shared across the entire transaction, not just your trigger. Flows, Process Builders, and validation rules running in the same save operation all draw from the same 100-SOQL and 150-DML budget. Always review all automations on an object together when diagnosing limit issues.

Why does Salesforce process records in batches of 200?

Salesforce processes DML operations in chunks of 200 records per trigger invocation, so Trigger.new contains at most 200 records at a time even when the operation affects more. This chunking applies to trigger invocations, not to governor limits: if a single DML call inserts 1,000 records, the trigger fires five times with 200 records each, and all five invocations share one transaction and one set of limits. Separate Data Loader batches, by contrast, arrive as separate API calls, and each API call is its own transaction with a fresh set of limits.
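You can watch the chunking in a debug log with a throwaway trigger like the sketch below (the trigger name and the Contact query are purely illustrative). For a single 1,000-record insert it logs five chunks, and because each chunk runs one query, the SOQL counter climbs to 5 inside the one shared transaction; a fresh Data Loader batch starts back at 1.

// Illustrative only: log chunk size and the running SOQL count for the transaction
trigger AccountChunkLogger on Account (after insert) {
    // One bulkified query per chunk; five chunks consume five queries from the shared budget
    List<Contact> related = [SELECT Id FROM Contact WHERE AccountId IN :Trigger.newMap.keySet()];
    System.debug('Records in this chunk: ' + Trigger.new.size());
    System.debug('SOQL used so far: ' + Limits.getQueries() + ' / ' + Limits.getLimitQueries());
}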

Bad Pattern

// WRONG: 1 SOQL per record, fails after 100 records
trigger AccountTrigger on Account (after insert) {
    for (Account acc : Trigger.new) {
        List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id]; // NEVER do this
        // process contacts...
    }
}

Good Pattern

// RIGHT: 1 SOQL for all records
trigger AccountTrigger on Account (after insert) {
    Set<Id> accountIds = new Map<Id, Account>(Trigger.new).keySet();
    Map<Id, List<Contact>> contactsByAccount = new Map<Id, List<Contact>>();

    for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :accountIds]) {
        if (!contactsByAccount.containsKey(c.AccountId)) {
            contactsByAccount.put(c.AccountId, new List<Contact>());
        }
        contactsByAccount.get(c.AccountId).add(c);
    }

    for (Account acc : Trigger.new) {
        List<Contact> contacts = contactsByAccount.get(acc.Id);
        if (contacts != null) {
            // process contacts...
        }
    }
}

SOQL Queries: 100 Per Transaction

One hundred synchronous SOQL queries sounds like a lot. It is not, in complex orgs. A single page load can involve multiple triggers, flows, processes, and validation rules β€” all sharing the same 100-query budget.

Patterns That Eat Your SOQL Budget

  • Trigger-in-a-loop: the most obvious offender, covered above.
  • Lazy loading in helper methods: a utility method that queries inside a loop called from a trigger eats the same budget (see the caching sketch after this list).
  • VLOOKUP in validation rules: each rule evaluation performs a lookup query against the referenced object.
  • Flows with Get Records inside loops: a Record-Triggered Flow that loops over child records and executes a “Get Records” step inside the loop behaves identically to SOQL in a loop.
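For the lazy-loading case, one option is to memoize the result in a static variable so repeated calls within the same transaction reuse a single query. A minimal sketch; the ContactQueries class and byAccount method are illustrative names, not a standard library:

public class ContactQueries {
    // Per-transaction cache: accounts already queried are never queried again
    private static Map<Id, List<Contact>> cache = new Map<Id, List<Contact>>();

    public static Map<Id, List<Contact>> byAccount(Set<Id> accountIds) {
        Set<Id> missing = new Set<Id>(accountIds);
        missing.removeAll(cache.keySet());
        if (!missing.isEmpty()) {
            for (Id accId : missing) { cache.put(accId, new List<Contact>()); }
            // One query covers every account not yet seen in this transaction
            for (Contact c : [SELECT Id, AccountId FROM Contact WHERE AccountId IN :missing]) {
                cache.get(c.AccountId).add(c);
            }
        }
        Map<Id, List<Contact>> result = new Map<Id, List<Contact>>();
        for (Id accId : accountIds) { result.put(accId, cache.get(accId)); }
        return result;
    }
}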
⚠️ Warning

Even if your Apex code is perfectly bulkified, a Flow on the same object that uses “Get Records” inside a loop can consume your remaining SOQL budget. Always audit all automations on an object together, not just your trigger.

The SOQL Rows Limit: 50,000

Even if you use only 10 queries, the transaction as a whole can retrieve at most 50,000 rows across all of them. An unfiltered SELECT Id FROM Contact on a large org will hit this. Always use selective WHERE clauses and LIMIT when appropriate.

// Add a LIMIT when you only need a count or a sample
List<Contact> sample = [SELECT Id FROM Contact WHERE AccountId IN :accountIds LIMIT 200];

DML Statements: 150 Per Transaction

The 150-statement budget covers insert, update, delete, upsert, undelete, and merge. Each call counts as one DML statement regardless of how many records it processes.

// 1 DML statement for 10,000 records: perfectly fine
insert largeListOfAccounts; // list can have up to 10,000 items

// 3 DML statements: costs 3 of your 150 budget
insert accounts;
update contacts;
delete oldOpportunities;

The DML rows limit is 10,000 per transaction, totalled across every DML statement, so a single insert call can contain at most 10,000 records. If you have more, you need to chunk them, usually in a Batch Apex job.

How does Batch Apex help with DML row limits?

Batch Apex splits your data into smaller chunks (default 200 records per execute call). Each execute invocation runs in its own transaction with a fresh set of governor limits. This means you can process millions of records; the limits apply per batch execution, not to the entire job. The tradeoff is that batch jobs are asynchronous and cannot guarantee real-time processing.

// Chunking in a Batchable class
public class LargeBatchInsert implements Database.Batchable<sObject> {
    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM Account WHERE Type = null]);
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        // scope is automatically chunked (default 200 records per batch)
        for (Account a : scope) { a.Type = 'Prospect'; }
        update scope;
    }

    public void finish(Database.BatchableContext bc) {}
}
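Starting the job is one line; the optional second argument is the scope size per execute call (200 shown here, which matches the default):

Database.executeBatch(new LargeBatchInsert(), 200);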

Heap Size: 6 MB Synchronous, 12 MB Asynchronous

The heap is the total memory used by all variables in your transaction. Six megabytes sounds generous, but large SOQL result sets, JSON serialization, and collections of complex objects can consume it fast.

The classic heap-killer is querying all fields on a large result set:

Bad Pattern

// DANGEROUS: pulls every field of every row into heap; FIELDS(ALL) in Apex
// also requires a LIMIT of 200 or fewer rows, so this query fails at runtime anyway
List<Opportunity> opps = [SELECT FIELDS(ALL) FROM Opportunity WHERE IsClosed = false];

Good Pattern

// BETTER β€” only query what you need
List<Opportunity> opps = [SELECT Id, Name, Amount, CloseDate, StageName
                           FROM Opportunity
                           WHERE IsClosed = false];
⚠️ Warning

If you are processing large data sets, use stateful Batch Apex with Database.Stateful sparingly: state is preserved between batches and counts toward the heap limit. A growing map or list in your batch class can accumulate across hundreds of batch executions and blow the heap.
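A minimal sketch of the safer shape, assuming all you need to carry across executions is a running total (the class name is illustrative):

public class AccountCleanupBatch implements Database.Batchable<sObject>, Database.Stateful {
    // A small scalar is safe to keep in stateful members; a growing List or Map here is not
    Integer recordsUpdated = 0;

    public Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator([SELECT Id FROM Account WHERE Type = null]);
    }

    public void execute(Database.BatchableContext bc, List<Account> scope) {
        for (Account a : scope) { a.Type = 'Prospect'; }
        update scope;
        recordsUpdated += scope.size();
    }

    public void finish(Database.BatchableContext bc) {
        System.debug('Total records updated: ' + recordsUpdated);
    }
}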

CPU Time: 10 Seconds Synchronous

CPU time is the hardest limit to diagnose. Unlike the SOQL and DML limits, which throw clear exceptions the moment you exceed them mid-transaction, CPU time accumulates silently until you hit the 10-second wall.

Common CPU hogs:

  • String manipulation in loops: String.replace() and JSON serialization are expensive.
  • Nested loops over large collections: O(n^2) complexity on 10,000-record operations (see the map-based sketch after the JSON patterns below).
  • Repeated JSON deserialize/serialize: deserializing the same JSON payload multiple times.

Bad Pattern

// CPU-heavy: parses every payload from scratch and re-serializes every record, even when nothing changes
for (Account acc : accounts) {
    Map<String, Object> data = (Map<String, Object>) JSON.deserializeUntyped(acc.Metadata__c);
    // process data
    acc.Metadata__c = JSON.serialize(data); // expensive serialization per record
}

Good Pattern

// CPU-lighter: parse each distinct payload once, and only pay for serialization when needed
Map<String, Map<String, Object>> parsedByPayload = new Map<String, Map<String, Object>>();
for (Account acc : accounts) {
    Map<String, Object> data = parsedByPayload.get(acc.Metadata__c);
    if (data == null) {
        data = (Map<String, Object>) JSON.deserializeUntyped(acc.Metadata__c);
        parsedByPayload.put(acc.Metadata__c, data);
    }
    // process data... then call JSON.serialize only for records you actually modified
}
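The nested-loop offender from the list above usually disappears once the inner collection is indexed by a map. A sketch, assuming opps and accounts are already in memory:

// O(n^2): for every Opportunity, scan every Account to find its parent
for (Opportunity opp : opps) {
    for (Account acc : accounts) {
        if (acc.Id == opp.AccountId) { /* process */ }
    }
}

// O(n): index the accounts once, then look each parent up in constant time
Map<Id, Account> accountsById = new Map<Id, Account>(accounts);
for (Opportunity opp : opps) {
    Account parent = accountsById.get(opp.AccountId);
    if (parent != null) { /* process */ }
}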

When in doubt, use Limits.getCpuTime() to measure usage at key points:

System.debug('CPU used: ' + Limits.getCpuTime() + ' / ' + Limits.getLimitCpuTime());

HTTP Callouts: 100 Per Transaction, Cannot Mix with DML (Usually)

Callouts from Apex have a hard constraint: you cannot make a callout after a DML statement in the same transaction while that uncommitted work is pending. The reason is that an open database transaction holds row locks, and waiting on an external HTTP response while holding those locks invites deadlocks and long-running contention.

🚨 Important

Attempting an HTTP callout after a DML statement throws a System.CalloutException. This is not a governor limit you can work around; it is a hard architectural constraint. Always separate your callout logic into an asynchronous context.

The correct pattern is to use @future(callout=true) or Queueable with Database.AllowsCallouts:

public class ExternalSyncService implements Queueable, Database.AllowsCallouts {
    List<Id> accountIds;

    public ExternalSyncService(List<Id> ids) {
        this.accountIds = ids;
    }

    public void execute(QueueableContext ctx) {
        List<Account> accounts = [SELECT Id, Name FROM Account WHERE Id IN :accountIds];
        for (Account acc : accounts) {
            // Safe to make the callout here: no DML has run earlier in this transaction
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:My_Named_Credential/accounts');
            req.setMethod('POST');
            req.setBody(JSON.serialize(acc));
            new Http().send(req);
        }
    }
}
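Enqueue the job after your synchronous DML has finished; it runs in its own transaction, so the callout never collides with pending work (accountIdsToSync stands in for whatever Id list you collected):

System.enqueueJob(new ExternalSyncService(accountIdsToSync));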

The Limits Class: Your Runtime Safety Net

Every governor limit has a corresponding check via the Limits class. I use these defensively in long-running batch jobs and in trigger frameworks that need to gracefully degrade:

public static Boolean isSafeToQuery(Integer expectedRows) {
    Boolean soqlOk    = Limits.getQueries() < Limits.getLimitQueries() - 5;
    Boolean rowsOk    = (Limits.getQueryRows() + expectedRows) < Limits.getLimitQueryRows();
    Boolean cpuOk     = Limits.getCpuTime() < (Limits.getLimitCpuTime() * 0.85);
    return soqlOk && rowsOk && cpuOk;
}
💡 Pro Tip

In a trigger framework, check isSafeToQuery() before attempting supplemental queries and log a warning if limits are running low, rather than letting the transaction crash and losing data. Graceful degradation is always better than a System.LimitException.
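In practice that looks something like the snippet below; the Case query and the expected row count of 500 are placeholders for whatever supplemental lookup your framework performs:

if (isSafeToQuery(500)) {
    List<Case> relatedCases = [SELECT Id FROM Case WHERE AccountId IN :accountIds];
    // enrich the records with the extra data...
} else {
    System.debug(LoggingLevel.WARN, 'Skipping supplemental Case query: governor limits nearly exhausted');
}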


Quick Reference: All Major Limits

Limit                       Synchronous                               Asynchronous
SOQL queries                100                                       200
SOQL rows returned          50,000                                    50,000
DML statements              150                                       150
DML rows                    10,000                                    10,000
Heap size                   6 MB                                      12 MB
CPU time                    10 seconds                                60 seconds
HTTP callouts               100                                       100
Future calls enqueued       50                                        0 (N/A)
Queueable jobs enqueued     50                                        1 (from another Queueable)
Batch Apex jobs (active)    5 (Flex Queue holds up to 100 queued)     n/a
Email sends                 10                                        10
Email sends (org/day)       5,000 or (user licenses x per-license allocation), whichever is greater   n/a

The Architecture Takeaway

Governor limits are not obstacles; they are design constraints, and design constraints produce better architectures. When limits force you to move SOQL outside loops, you end up with code that performs better even when the limits are not a concern. When limits force you to use batch processing for large data sets, you end up with code that scales to millions of records.

I like to think of governor limits as automated code review. They catch patterns that would cause performance problems in any system, and they catch them before you ever get to production.

What governor limit has given you the most trouble in production? Was it a new code issue or something that crept up as your org grew? Leave a comment; I am especially curious about heap size stories, because those are the ones that tend to be the most creative.


Test Your Knowledge

What happens when a trigger makes a SOQL query inside a for loop processing 200 records?
What is the heap size limit for synchronous Apex transactions?
Why can't you make an HTTP callout after a DML statement in the same transaction?
