
Apex · March 30, 2026

Apex Cursors: The End of Batch Apex Pain

Spring 26 brings Apex Cursors — process up to 50 million records with fine-grained control, no batch classes needed

☕ 12 min read 📅 March 30, 2026
  • Cursors handle up to 50M records with bidirectional navigation — no batch boilerplate
  • PaginationCursor is purpose-built for LWC data table pagination
  • Cursors serialize natively — chain them through Queueable jobs without Database.Stateful
  • Testing is synchronous and direct — no Test.startTest/stopTest gymnastics
  • Use Batch Apex only when you exceed 50M records or need scheduled execution

If you’ve been building on Salesforce long enough, you’ve hit this wall: you need to process millions of records, so you write Batch Apex. Then you deal with the rigid start() -> execute() -> finish() lifecycle, the lack of fine-grained control over chunk sizes, the inability to easily chain jobs, and the painful debugging cycle when something breaks at batch #847 of 2,000.

Apex Cursors change this entirely. Introduced as Beta in Summer ‘24 and reaching GA maturity in Spring ‘26, cursors give you a pointer to a SOQL result set — not the result itself. You fetch only the slice you need, when you need it.

The Pain Every Salesforce Developer Knows

The Problem

Scenario: A financial services org processes 12 million Account records nightly to recalculate risk scores. Their Batch Apex job takes 4+ hours, and when it fails at record 8 million due to a transient lock error, the entire job restarts from scratch.

The Cursor Solution

Use an Apex Cursor inside a Queueable job. Process 2,000 records per execution, track position automatically, and chain the next Queueable. If one chunk fails, retry that chunk only — no restart required.

Batch Apex vs Apex Cursors
[Diagram] Batch Apex (old way): start() locks a QueryLocator; execute() runs fixed batches (default 200, max 2,000); only 1 of 5 active jobs executes at a time; a failure restarts the entire job with no retry. Apex Cursors (new way): getCursor() returns a pointer only; fetch(pos, n) lets you control chunk size; a failed chunk can be retried and processing resumes from that chunk; up to 50M rows. Key difference: Batch loads everything into fixed batches, while a cursor stores only a pointer and you control everything.

How Apex Cursors Actually Work

An Apex Cursor is a server-side pointer to a SOQL query result set. When you call Database.getCursor(query), Salesforce executes the query and caches the result on the server — but returns only a lightweight locator object. No records are loaded into heap memory until you explicitly call cursor.fetch(position, count).

Core Syntax

// 1. Create a cursor — no records loaded yet
Database.Cursor cursor = Database.getCursor(
    'SELECT Id, Name, Amount FROM Opportunity WHERE StageName = \'Closed Won\''
);

// 2. Check total records
Integer totalRecords = cursor.getNumRecords();

// 3. Fetch in controlled chunks
Integer position = 0, chunkSize = 2000;
while (position < totalRecords) {
    List<SObject> records = cursor.fetch(position, chunkSize);
    processRecords(records);
    position += records.size();
}

Cursor Lifecycle — How Data Flows

[Diagram] (1) getCursor(SOQL) runs the query; (2) the server caches the full result set (e.g., 5,000,000 records); (3) only a lightweight pointer is returned. Your code then calls fetch(pos, n) as needed, touching only what you fetch. Memory impact: without cursors, heap is exceeded and the transaction crashes at roughly 60K records; with cursors, each fetch uses about 0.4 MB, letting you process up to 50M.

Scenario I: Mass Email Campaign

public class ReEngagementEmailJob implements Queueable {
    private Database.Cursor cursorLocator;
    private Integer position;
    private Integer batchSize = 2000;
    private Integer retryCount = 0;

    public ReEngagementEmailJob() {
        this.cursorLocator = Database.getCursor(
            'SELECT Id, Name, Email FROM Contact ' +
            'WHERE LastActivityDate < LAST_N_DAYS:400 ' +
            'AND Email != null AND HasOptedOutOfEmail = false'
        );
        this.position = 0;
    }

    // Overload used when chaining: carries the cursor and position forward
    public ReEngagementEmailJob(Database.Cursor cursorLocator, Integer position) {
        this.cursorLocator = cursorLocator;
        this.position = position;
    }

    public void execute(QueueableContext ctx) {
        Integer totalRecords = cursorLocator.getNumRecords();
        if (position >= totalRecords) return;

        try {
            List<Contact> contacts = cursorLocator.fetch(position, batchSize);
            sendReEngagementEmails(contacts);
            position += contacts.size();
            retryCount = 0;
        } catch (TransientCursorException e) {
            retryCount++;
            // After 3 failed retries, skip this chunk and reset the counter
            if (retryCount > 3) { position += batchSize; retryCount = 0; }
        }

        if (position < totalRecords) {
            System.enqueueJob(
                new ReEngagementEmailJob(cursorLocator, position)
            );
        }
    }
}

Scenario II: 15M Record Data Migration

The Problem

An enterprise org is migrating 15 million Lead records with external API enrichment (address verification). Batch Apex’s fixed 200-record chunks paired with the 100 callout limit per transaction make orchestration a nightmare.

The Cursor Solution

Use a cursor with a chunk size of 50 records — matching your callout budget. The cursor tracks your exact position, so if the API is throttled, you pause and resume precisely where you left off.

public class LeadMigrationJob implements Queueable, Database.AllowsCallouts {
    private Database.Cursor cursor;
    private Integer position;
    private static final Integer CHUNK = 50;

    // Constructor used both to start the chain and to continue it
    public LeadMigrationJob(Database.Cursor cursor, Integer position) {
        this.cursor = cursor;
        this.position = position;
    }

    public void execute(QueueableContext ctx) {
        Integer total = cursor.getNumRecords();
        if (position >= total) return;

        List<Lead> leads = cursor.fetch(position, CHUNK);
        for (Lead ld : leads) {
            AddressVerification result =
                AddressAPI.verify(ld.Street, ld.City, ld.State);
            ld.Street = result.standardizedStreet;
            ld.PostalCode = result.zip;
        }

        Database.update(leads, false);
        position += leads.size();

        if (position < total) {
            System.enqueueJob(
                new LeadMigrationJob(cursor, position)
            );
        }
    }
}

Scenario III: LWC Pagination — 500K Records

public with sharing class CasePaginationController {

    @AuraEnabled
    public static PaginationResult getPage(String serializedCursor,
                                            Integer position,
                                            Integer pageSize) {
        Database.Cursor cursorLocator;

        // First call — initialize a new cursor
        if (String.isBlank(serializedCursor)) {
            cursorLocator = Database.getCursor(
                'SELECT Id, CaseNumber, Subject, Status, Priority ' +
                'FROM Case ORDER BY CreatedDate DESC'
            );
        } else {
            // Restore cursor from serialized state
            cursorLocator = (Database.Cursor)
                JSON.deserializeStrict(serializedCursor, Database.Cursor.class);
        }

        PaginationResult result = new PaginationResult();
        result.records = cursorLocator.fetch(position, pageSize);
        result.totalRecords = cursorLocator.getNumRecords();
        result.currentPosition = position;
        result.serializedCursor = JSON.serialize(cursorLocator);
        return result;
    }

    public class PaginationResult {
        @AuraEnabled public List<Case> records;
        @AuraEnabled public Integer totalRecords;
        @AuraEnabled public Integer currentPosition;
        @AuraEnabled public String serializedCursor;
    }
}

LWC + Apex Cursor — Pagination Architecture

[Diagram] The browser (LWC) renders 50 rows per page ("Showing 101-150 of 500,000") with only 50 records in JS memory. Each page change calls the @AuraEnabled method getPage(cursor, pos, 50); on the server, CasePaginationController.cls calls cursor.fetch(position, pageSize) against the cached Database.Cursor, which is serialized between @AuraEnabled calls.

Feature Comparison

| Feature | Batch Apex | Apex Cursors |
| --- | --- | --- |
| Max Records | 50 million | 50 million per cursor |
| Chunk Size | Fixed (up to 2,000) | You choose any size |
| Error Recovery | Restarts entire job | Retry specific chunk |
| Chaining | Limited chain support | Queueable chains |
| Concurrent Limit | 5 active (only 1 executes at a time) | No concurrent limit |
| Bidirectional | Forward only | Forward and backward |
| Callout Support | Requires AllowsCallouts | Requires AllowsCallouts (same as Batch) |
| UI Pagination | Not designed for UI | PaginationCursor class |
| Heap Usage | Loads entire batch | Only fetched chunk |
| Code Complexity | 3-method interface | 2-3 lines of code |

Governor Limits to Know

Apex Cursors come with their own governor limits. Know these before deploying to production — build monitoring into your Queueable chains.

  • 50M: max rows per cursor
  • 10: fetch calls per transaction
  • 10K: cursors per 24 hours
  • 100M: total rows per day
💡 Pro Tip

Use Limits.getApexCursorRows() and Limits.getFetchCallsOnApexCursors() to monitor your cursor consumption at runtime. Pair these with OrgLimits.getMap().get('DailyApexCursors') for daily tracking.

Architect’s Best Practices

1. Right-Size Your Chunk Size

Your chunk size should be driven by what you’re doing in each transaction — not by a default. For DML-heavy operations (inserts, updates, deletes), keep chunks around 200 records to stay safely within DML limits. For read-only processing like aggregations or reporting, you can push up to 2,000 records per fetch. If you’re making external callouts, keep chunks under 100 — each transaction allows only 100 callouts, and you’ll want headroom for retries.
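As a rough sketch of that decision, the guidance above can be captured in a small helper. The class name, method, and constants here are illustrative assumptions, not platform APIs — tune them to your own org's limits profile.

```apex
// Hedged sketch: pick a cursor chunk size based on what each transaction does
public class CursorChunkSizing {
    public static Integer chunkSizeFor(Boolean doesDml, Boolean doesCallouts) {
        if (doesCallouts) {
            // 100 callouts per transaction; leave headroom for retries
            return 50;
        }
        if (doesDml) {
            // Stay safely within per-transaction DML row limits
            return 200;
        }
        // Read-only aggregation or reporting: larger fetches are fine
        return 2000;
    }
}
```

A job would then call something like `cursor.fetch(position, CursorChunkSizing.chunkSizeFor(true, false))` instead of hard-coding a size.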

2. Always Handle TransientCursorException

The platform throws TransientCursorException when the server-side cursor cache is temporarily unavailable. This is not a fatal error — it’s a signal to retry. Build a retry counter into your Queueable, and set a maximum (typically 3 retries). If all retries fail, log the failure with the exact position and skip forward. This granular recovery is the single biggest advantage over Batch Apex.
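Putting that pattern into one place, here is a minimal retry wrapper, assuming the TransientCursorException type described above. MAX_RETRIES and the debug logging are illustrative choices, not platform requirements.

```apex
// Hedged sketch: fetch with bounded retries; caller skips forward on exhaustion
private static final Integer MAX_RETRIES = 3;

private List<SObject> fetchWithRetry(Database.Cursor c, Integer pos, Integer size) {
    for (Integer attempt = 0; attempt <= MAX_RETRIES; attempt++) {
        try {
            return c.fetch(pos, size);
        } catch (TransientCursorException e) {
            // Transient: the server-side cache is temporarily unavailable
            System.debug('Transient cursor failure at position ' + pos +
                         ' (attempt ' + attempt + '): ' + e.getMessage());
        }
    }
    // All retries exhausted: log the exact position so the chunk can be replayed later
    System.debug(LoggingLevel.ERROR, 'Skipping chunk at position ' + pos);
    return new List<SObject>();
}
```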

3. Monitor with the Limits Class

Salesforce exposes detailed cursor metrics through both the Limits and OrgLimits classes. Check Limits.getApexCursorRows() within your transaction, and use OrgLimits for daily consumption tracking. Build guardrails: if you’re within 20% of the daily cursor limit, log a warning and optionally pause your chain until the next 24-hour window.

// Transaction-level checks
System.debug('Cursor rows used: ' + Limits.getApexCursorRows());
System.debug('Fetch calls used: ' + Limits.getFetchCallsOnApexCursors());
System.debug('Max fetch calls: ' + Limits.getLimitFetchCallsOnApexCursors());

// Daily org-level check
Map<String, System.OrgLimit> limitsMap = OrgLimits.getMap();
System.OrgLimit cursorLimit = limitsMap.get('DailyApexCursors');
Integer remaining = cursorLimit.getLimit() - cursorLimit.getValue();
System.debug('Daily cursors remaining: ' + remaining);

// Safety guardrail
if (remaining < 500) {
    System.debug('WARNING: Approaching daily cursor limit. Pausing chain.');
    return; // Don't enqueue next job
}
4. Don't Replace ALL Batch Apex

Batch Apex is still the right choice for simple, scheduled, set-it-and-forget-it bulk processing where you don’t need fine-grained control. A nightly job that deactivates expired users? Batch is fine. Use cursors when you need custom chunk sizes, error recovery, Queueable chaining, or UI-driven pagination. Think of cursors as a scalpel and Batch Apex as a sledgehammer — both have their place in the toolkit.

5. Serialize Cursor State for Cross-Transaction Use

Cursors can be serialized to JSON and restored in a subsequent transaction. This is what makes the Queueable chaining pattern work — each job receives the serialized cursor string, restores it, fetches the next chunk, and passes it forward.

// Serialize cursor for passing between transactions
String serializedCursor = JSON.serialize(cursorLocator);

// Restore in next transaction
Database.Cursor restored = (Database.Cursor)
    JSON.deserializeStrict(
        serializedCursor,
        Database.Cursor.class
    );

Key Takeaways

Apex Cursors fundamentally change how we approach large data processing in Salesforce. They are not a minor convenience — they solve real architectural pain points that have plagued the platform for over a decade.

Use Cursors when: you need custom chunk sizes, granular error recovery, Queueable chaining, UI pagination beyond 2,000 records, or when your Batch Apex jobs are hitting the 5-concurrent-job ceiling and blocking other processes.

Keep Batch Apex when: you have simple, scheduled, set-it-and-forget-it jobs that process records in a straightforward manner without needing fine-grained control.

The bottom line? If you’ve ever watched a Batch Apex job fail at 3 AM and restart from scratch — Apex Cursors are your answer. This feature is GA, production-ready, and available now.

ℹ️ Next Steps

Set up a scratch org, write a Queueable job with a cursor, and process a million records. The Salesforce developer documentation has complete API references for Database.Cursor, PaginationCursor, and the related Limits methods.

What’s your experience with large data processing in Salesforce? Have you tried cursors yet, or are you still wrestling with Batch Apex? I’d love to hear how you’re handling high-volume scenarios in your org.
