One of the most impactful architecture decisions I have made on a Salesforce project was replacing a synchronous web service callout inside a trigger with Platform Events. The trigger was calling an external ERP system during an Opportunity close. When the ERP was slow — which was often — the Opportunity save timed out. Sales reps were losing work. The fix took one afternoon and has been rock-solid for three years.
Platform Events are Salesforce’s native implementation of the publish/subscribe pattern. They enable decoupled, asynchronous communication between systems — within an org, between orgs, and between Salesforce and external platforms. If you are building integrations in 2026 without using Platform Events where appropriate, you are making your architecture more brittle than it needs to be.
The Problem
An Opportunity after trigger makes a synchronous HTTP callout to an ERP system to create a sales order. When the ERP is slow or unavailable, the callout times out after 120 seconds — causing the Opportunity save to fail. Sales reps lose their work, and the ERP team’s outages directly impact Salesforce users who have nothing to do with the integration.
The Solution
Replace the synchronous callout with a Platform Event. The trigger publishes an Order_Update__e event and completes instantly. A separate subscriber trigger processes the event asynchronously and enqueues the ERP callout via a Queueable job. ERP downtime no longer blocks Opportunity saves, and the 24-hour event retention means no data is lost during outages.
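The publishing side of that fix is small. A minimal sketch of what the Opportunity trigger could look like — the event fields match the Order_Update__e definition shown later in this article, and the "closed won" condition and use of Opportunity.Amount are assumptions about the org's sales process:

```apex
trigger OpportunityCloseTrigger on Opportunity (after update) {
    List<Order_Update__e> events = new List<Order_Update__e>();
    for (Opportunity opp : Trigger.new) {
        Opportunity oldOpp = Trigger.oldMap.get(opp.Id);
        // Only publish when the Opportunity transitions to won
        if (opp.IsWon && !oldOpp.IsWon) {
            events.add(new Order_Update__e(
                Order_Id__c = opp.Id,
                New_Status__c = 'Closed Won',
                Total_Amount__c = opp.Amount
            ));
        }
    }
    if (!events.isEmpty()) {
        // The save completes immediately; delivery happens asynchronously
        EventBus.publish(events);
    }
}
```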
The Publisher/Subscriber Architecture
Defining a Platform Event
Platform Events are first-class Salesforce objects with the __e suffix. You define them in Setup or via metadata, then add custom fields to carry the payload.
<?xml version="1.0" encoding="UTF-8"?>
<CustomObject xmlns="http://soap.sforce.com/2006/04/metadata">
    <label>Order Update</label>
    <pluralLabel>Order Updates</pluralLabel>
    <deploymentStatus>Deployed</deploymentStatus>
    <eventType>StandardVolume</eventType>
    <fields>
        <fullName>Order_Id__c</fullName>
        <label>Order ID</label>
        <type>Text</type>
        <length>18</length>
    </fields>
    <fields>
        <fullName>New_Status__c</fullName>
        <label>New Status</label>
        <type>Text</type>
        <length>255</length>
    </fields>
    <fields>
        <fullName>Total_Amount__c</fullName>
        <label>Total Amount</label>
        <type>Number</type>
        <precision>16</precision>
        <scale>2</scale>
    </fields>
</CustomObject>
Publishing Events from Apex
Publishing is a single EventBus.publish() call. The event is dispatched asynchronously — the caller does not wait for subscribers to process it.
public class OrderService {
    public static void closeOrders(List<Order__c> orders) {
        List<Order__c> toUpdate = new List<Order__c>();
        List<Order_Update__e> events = new List<Order_Update__e>();
        for (Order__c o : orders) {
            o.Status__c = 'Closed';
            toUpdate.add(o);
            events.add(new Order_Update__e(
                Order_Id__c = o.Id,
                New_Status__c = 'Closed',
                Total_Amount__c = o.Total__c
            ));
        }
        update toUpdate;
        // Publish all events in one bulk call
        List<Database.SaveResult> results = EventBus.publish(events);
        for (Database.SaveResult sr : results) {
            if (!sr.isSuccess()) {
                for (Database.Error err : sr.getErrors()) {
                    System.debug('Event publish failed: ' + err.getMessage());
                }
            }
        }
    }
}

Even though update toUpdate and EventBus.publish(events) appear in the same method, the event delivery to subscribers happens in a completely separate transaction. If the subscriber fails, the original update is not rolled back. This is the tradeoff of asynchronous decoupling — eventual consistency rather than transactional consistency.
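The same decoupling shows up in Apex tests: published events are not delivered to their subscriber triggers mid-test unless you force delivery. A sketch of how a test for the service above could exercise the subscriber, assuming a test-visible side effect (records created or jobs enqueued by the Order_Update__e trigger) to assert on:

```apex
@IsTest
private class OrderServiceTest {
    @IsTest
    static void publishesOrderUpdateEvents() {
        // Minimal setup; adjust required fields to your org's Order__c object
        List<Order__c> orders = new List<Order__c>{ new Order__c(Status__c = 'Open') };
        insert orders;

        Test.startTest();
        OrderService.closeOrders(orders);
        // Deliver the published platform events to subscribers now,
        // inside the test, instead of waiting for the transaction boundary
        Test.getEventBus().deliver();
        Test.stopTest();

        // Assert on the subscriber's side effects here, e.g. enqueued
        // Queueable jobs or log records created by the event trigger
    }
}
```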
Subscribing with an Apex Trigger
An Apex trigger on a Platform Event object fires after events are delivered from the bus. It runs in its own transaction context with fresh governor limits.
trigger OrderUpdateTrigger on Order_Update__e (after insert) {
    Set<String> orderIds = new Set<String>();
    for (Order_Update__e evt : Trigger.new) {
        orderIds.add(evt.Order_Id__c);
    }
    // Fresh governor limits — this is a separate async transaction
    List<Order__c> relatedOrders = [
        SELECT Id, Status__c, Total__c, Account__r.Name
        FROM Order__c
        WHERE Id IN :orderIds
    ];
    // Callouts are not allowed directly in triggers, so hand the ERP
    // sync to a Queueable job with Database.AllowsCallouts
    if (!relatedOrders.isEmpty()) {
        System.enqueueJob(new ErpSyncJob(relatedOrders));
    }
    // No manual checkpointing needed: Salesforce tracks the ReplayId
    // automatically for Apex trigger subscribers
}

Note that even though a Platform Event trigger runs in an asynchronous context, it is still a trigger, and HTTP callouts cannot be made directly from triggers. The callout has to happen in a Queueable that implements Database.AllowsCallouts (as shown above) or in a future method annotated with @future(callout=true). The Queueable pattern is the recommended one for complex integrations: it gives you better control over retry logic, error handling, and governor limit isolation.
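The ErpSyncJob referenced by the trigger is not shown in this article; a minimal sketch of what it might look like follows. The endpoint uses a hypothetical named credential (ERP_API), and the payload shape and retry handling are placeholders for whatever your ERP contract requires:

```apex
public class ErpSyncJob implements Queueable, Database.AllowsCallouts {
    private List<Order__c> orders;

    public ErpSyncJob(List<Order__c> orders) {
        this.orders = orders;
    }

    public void execute(QueueableContext ctx) {
        HttpRequest req = new HttpRequest();
        // 'ERP_API' is an assumed named credential; substitute your own
        req.setEndpoint('callout:ERP_API/sales-orders');
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/json');
        req.setTimeout(30000);
        req.setBody(JSON.serialize(orders));

        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() >= 400) {
            // Log the failure and schedule a retry; the already-saved
            // Opportunity is unaffected either way
            System.debug('ERP sync failed: ' + res.getStatus());
        }
    }
}
```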
Use Case 1: Order Processing Pipeline
The scenario I described at the start — replacing a synchronous ERP callout with an event — is a textbook Platform Event use case:
Full Order Processing Flow
- Sales rep closes an Opportunity in Salesforce
- An Apex trigger on Opportunity publishes Order_Update__e with the order details
- The Opportunity save completes instantly — no waiting for ERP
- The Platform Event trigger fires asynchronously and enqueues an ErpSyncJob
- The job calls the ERP HTTP API
- If the ERP call fails, the job logs the error and reschedules without affecting the already-saved Opportunity
The sales rep’s experience: the Opportunity closes instantly. The ERP sync happens in the background. If ERP is down, the 24-hour event retention means the event can be replayed once ERP recovers.
Use Case 2: Cross-Org Communication
If you manage multiple Salesforce orgs — a common scenario with acquired companies, separate business units, or sandbox-to-production notification patterns — Platform Events can be published from one org and consumed by another via the CometD streaming API.
// JavaScript (Node.js) subscribing to Salesforce Platform Events via CometD
const jsforce = require('jsforce');

const conn = new jsforce.Connection({
  loginUrl: 'https://login.salesforce.com'
});

async function subscribe() {
  await conn.login(process.env.SF_USERNAME, process.env.SF_PASSWORD);
  conn.streaming.topic('/event/Order_Update__e').subscribe((message) => {
    const payload = message.payload;
    console.log('Received order update:', {
      orderId: payload.Order_Id__c,
      status: payload.New_Status__c,
      amount: payload.Total_Amount__c,
      replayId: message.event.replayId
    });
    // Forward to second org via REST API
    forwardToSecondOrg(payload);
  });
}

subscribe();

The replayId is critical for reliability. Store the last processed replayId in your external system. When the subscriber reconnects after downtime, pass the stored replayId to resume from where it left off rather than replaying from the beginning.
Use Case 3: Audit Trail and Change Data Capture Alternative
Platform Events are excellent for building real-time audit trails. Publish an event every time a sensitive record changes, and have a subscriber write to an external audit log system outside Salesforce — where records cannot be deleted or modified by users with admin permissions.
trigger AccountAuditTrigger on Account (after update) {
    List<Audit_Event__e> auditEvents = new List<Audit_Event__e>();
    for (Account newAcc : Trigger.new) {
        Account oldAcc = Trigger.oldMap.get(newAcc.Id);
        // Only audit changes to financially sensitive fields
        if (newAcc.AnnualRevenue != oldAcc.AnnualRevenue ||
            newAcc.Type != oldAcc.Type) {
            auditEvents.add(new Audit_Event__e(
                Record_Id__c = newAcc.Id,
                Object_Type__c = 'Account',
                Changed_By__c = UserInfo.getUserId(),
                Old_Values__c = JSON.serialize(new Map<String, Object>{
                    'AnnualRevenue' => oldAcc.AnnualRevenue,
                    'Type' => oldAcc.Type
                }),
                New_Values__c = JSON.serialize(new Map<String, Object>{
                    'AnnualRevenue' => newAcc.AnnualRevenue,
                    'Type' => newAcc.Type
                }),
                Changed_At__c = String.valueOf(Datetime.now())
            ));
        }
    }
    if (!auditEvents.isEmpty()) {
        EventBus.publish(auditEvents);
    }
}

Platform Event vs. Change Data Capture
Salesforce also offers Change Data Capture (CDC), which automatically publishes change events for standard and custom objects without requiring any code.
Platform Events
- Setup: Custom object + code to publish
- Payload: Fully custom — you choose what fields to include
- Triggers: Any condition in your code — publish selectively
- Retention: 24 hours (Standard Volume)
- Best for: Custom workflows, cross-system integration, selective publishing
Change Data Capture
- Setup: Enable in Setup (no code needed)
- Payload: Standard — all changed fields automatically included
- Triggers: Every CUD (Create/Update/Delete) operation
- Retention: 3 days
- Best for: Replication, external sync, data warehousing
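For comparison, a CDC subscriber is also just an Apex trigger, but on the auto-generated change event object rather than a custom one. A minimal sketch for Account, assuming Change Data Capture has been enabled for Account in Setup:

```apex
trigger AccountChangeTrigger on AccountChangeEvent (after insert) {
    for (AccountChangeEvent evt : Trigger.new) {
        // Every change event carries a standard header with change metadata
        EventBus.ChangeEventHeader header = evt.ChangeEventHeader;
        // changeType is CREATE, UPDATE, DELETE, or UNDELETE
        System.debug(header.changeType + ' on records: ' + header.recordIds);
    }
}
```

Unlike the custom Audit_Event__e above, you do not choose the payload: all changed fields arrive automatically, which is exactly the tradeoff the comparison lists describe.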
Event Volume Limits
Platform Events come in two tiers. Standard Volume events: 250,000 publishes per day on Enterprise edition (more on higher editions). High Volume events: higher throughput with support for both Apex trigger subscribers and CometD/Pub/Sub API subscribers, but no rich query support on the event object.
For most integration use cases, Standard Volume is sufficient. If you are publishing tens of thousands of events per hour, evaluate High Volume events and the dedicated Pub/Sub API.
Error Handling and Idempotency
Because the subscriber runs in a separate transaction, you cannot roll back the publisher’s DML if the subscriber fails. Design your subscribers to be idempotent — processing the same event twice should produce the same result as processing it once.
// Idempotent subscriber: check if work was already done
trigger OrderSyncTrigger on Order_Update__e (after insert) {
    // Bulkify the already-processed check: one query for the whole
    // batch of events, never a query per event
    Set<String> replayIds = new Set<String>();
    for (Order_Update__e evt : Trigger.new) {
        replayIds.add(String.valueOf(evt.ReplayId));
    }
    Set<String> alreadyProcessed = new Set<String>();
    for (ERP_Sync_Log__c log : [
        SELECT Event_Replay_Id__c FROM ERP_Sync_Log__c
        WHERE Event_Replay_Id__c IN :replayIds
    ]) {
        alreadyProcessed.add(log.Event_Replay_Id__c);
    }
    List<ERP_Sync_Log__c> newLogs = new List<ERP_Sync_Log__c>();
    for (Order_Update__e evt : Trigger.new) {
        if (alreadyProcessed.contains(String.valueOf(evt.ReplayId))) {
            continue; // Event was already processed; skip silently
        }
        // Process the event and log it
        ErpSyncService.syncOrder(evt.Order_Id__c);
        newLogs.add(new ERP_Sync_Log__c(
            Order_Id__c = evt.Order_Id__c,
            Event_Replay_Id__c = String.valueOf(evt.ReplayId),
            Processed_At__c = Datetime.now()
        ));
    }
    if (!newLogs.isEmpty()) {
        insert newLogs;
    }
}

Platform Events have fundamentally changed how I approach Salesforce integrations. The decoupling they provide — not just technical decoupling but organizational decoupling between teams and systems — is worth the additional complexity.
Have you used Platform Events to replace a synchronous integration that was causing timeout issues? What was the scenario, and how did the migration go?