I have seen the consequences of uncontrolled trigger development in production orgs many times: five separate triggers on the Account object, none of them aware of the others, all firing on after update, three of them potentially causing recursion, and no way to safely disable any one of them without taking down the others. The developer who added the fifth one did not even know the first four existed.
A trigger framework solves this by establishing a single entry point per object, a consistent execution pattern, and centralized control mechanisms. Let me show you how to build one from scratch.
Why Frameworks Matter
Without a framework, every trigger on every object tends to grow into an undisciplined mix of context checks (if (Trigger.isBefore && Trigger.isInsert)), business logic, and utility calls. When a new developer joins the project, they either extend the existing messy trigger or add a second one; both outcomes make the problem worse.
The Three Laws of Trigger Design
Before writing a single line of framework code, internalize these:
- One trigger per object. Without this, no framework can save you. Salesforce does not guarantee the order of execution between multiple triggers on the same object.
- No business logic in the trigger file itself. The trigger file is a router. It calls the framework. The framework calls the handler. The handler contains the logic.
- All trigger context methods must be individually testable β Your test classes should call individual handler methods directly, not rely on DML to invoke trigger logic.
Step 1: The Base TriggerHandler Class
public virtual class TriggerHandler {

    // Static maps for recursion control: configured maximums and actual run counts per handler
    private static Map<String, Integer> maxLoopCountMap = new Map<String, Integer>();
    private static Map<String, Integer> loopCountMap = new Map<String, Integer>();

    // Static set of handler names that are currently bypassed
    @TestVisible
    private static Set<String> bypassedHandlers = new Set<String>();

    private String handlerName;
    private Boolean isTriggerExecuting;

    public TriggerHandler() {
        this.handlerName = getHandlerName();
        this.isTriggerExecuting = Trigger.isExecuting;
    }

    // Entry point called by the dispatcher
    public void run() {
        if (!isTriggerExecuting) {
            throw new TriggerHandlerException(
                'TriggerHandler.run() called outside of trigger context'
            );
        }
        if (isBypassed(this.handlerName)) {
            return;
        }
        if (!validateRun()) {
            return;
        }
        addToLoopCount();
        if (Trigger.isBefore) {
            if (Trigger.isInsert) beforeInsert();
            if (Trigger.isUpdate) beforeUpdate();
            if (Trigger.isDelete) beforeDelete();
        } else {
            if (Trigger.isInsert) afterInsert();
            if (Trigger.isUpdate) afterUpdate();
            if (Trigger.isDelete) afterDelete();
            if (Trigger.isUndelete) afterUndelete();
        }
    }

    // Virtual context methods: override the ones you need in a subclass.
    // Note there is no beforeUndelete(): Apex has no "before undelete" event.
    protected virtual void beforeInsert() {}
    protected virtual void beforeUpdate() {}
    protected virtual void beforeDelete() {}
    protected virtual void afterInsert() {}
    protected virtual void afterUpdate() {}
    protected virtual void afterDelete() {}
    protected virtual void afterUndelete() {}

    // Recursion control
    protected void setMaxLoopCount(Integer max) {
        maxLoopCountMap.put(this.handlerName, max);
    }

    protected void clearMaxLoopCount() {
        maxLoopCountMap.remove(this.handlerName);
    }

    private void addToLoopCount() {
        Integer current = loopCountMap.containsKey(this.handlerName)
            ? loopCountMap.get(this.handlerName) + 1
            : 1;
        loopCountMap.put(this.handlerName, current);
        if (maxLoopCountMap.containsKey(this.handlerName)
                && current > maxLoopCountMap.get(this.handlerName)) {
            throw new TriggerHandlerException(
                'Maximum loop count of ' + maxLoopCountMap.get(this.handlerName)
                + ' exceeded for ' + this.handlerName
            );
        }
    }

    // Extensible: subclasses can override to add custom pre-run validation
    protected virtual Boolean validateRun() {
        return true;
    }

    // Bypass mechanism
    public static void bypass(String handlerName) {
        bypassedHandlers.add(handlerName);
    }

    public static void clearBypass(String handlerName) {
        bypassedHandlers.remove(handlerName);
    }

    public static Boolean isBypassed(String handlerName) {
        return bypassedHandlers.contains(handlerName);
    }

    public static void clearAllBypasses() {
        bypassedHandlers.clear();
    }

    // Instances stringify as 'ClassName:[...]', so everything before the colon is the class name
    private String getHandlerName() {
        String instanceString = String.valueOf(this);
        return instanceString.substring(0, instanceString.indexOf(':'));
    }

    public class TriggerHandlerException extends Exception {}
}

Step 2: The Trigger Dispatcher
The dispatcher is optional; some teams put the new AccountTriggerHandler().run() call directly in the trigger. But a dispatcher is useful when you need to inject handlers dynamically (for example, from Custom Metadata) or when you want a single place to add pre/post processing for all triggers.
Simple Dispatcher
public class TriggerDispatcher {
    public static void run(TriggerHandler handler) {
        handler.run();
    }
}

Custom Metadata-Driven
public class TriggerDispatcher {
    public static void run(String objectApiName) {
        List<Trigger_Configuration__mdt> configs = [
            SELECT Handler_Class__c, Is_Active__c, Execution_Order__c
            FROM Trigger_Configuration__mdt
            WHERE Object_API_Name__c = :objectApiName
            AND Is_Active__c = true
            ORDER BY Execution_Order__c ASC
        ];
        for (Trigger_Configuration__mdt config : configs) {
            Type handlerType = Type.forName(config.Handler_Class__c);
            if (handlerType != null) {
                TriggerHandler handler = (TriggerHandler) handlerType.newInstance();
                handler.run();
            }
        }
    }
}

The Custom Metadata approach is powerful because it lets you enable and disable trigger handlers per org without a deployment, which is useful for managed package testing or staging environment control.
Step 3: The Trigger File
trigger AccountTrigger on Account (
    before insert, after insert,
    before update, after update,
    before delete, after delete,
    after undelete
) {
    TriggerDispatcher.run(new AccountTriggerHandler());
}

That is the entire trigger file. Nothing else belongs here. (Note that there is no "before undelete" event in Apex; "after undelete" is the only undelete context.)
Step 4: A Concrete Handler
public class AccountTriggerHandler extends TriggerHandler {
    private List<Account> newRecords;
    private List<Account> oldRecords;
    private Map<Id, Account> newMap;
    private Map<Id, Account> oldMap;

    public AccountTriggerHandler() {
        this.newRecords = (List<Account>) Trigger.new;
        this.oldRecords = (List<Account>) Trigger.old;
        this.newMap = (Map<Id, Account>) Trigger.newMap;
        this.oldMap = (Map<Id, Account>) Trigger.oldMap;
    }

    protected override void beforeInsert() {
        AccountService.setDefaultRegion(newRecords);
        AccountService.validateBillingCountry(newRecords);
    }

    protected override void afterInsert() {
        AccountService.createDefaultOpportunity(newMap);
        AccountService.notifyAccountTeam(newMap);
    }

    protected override void beforeUpdate() {
        AccountService.preventStatusDowngrade(newMap, oldMap);
    }

    protected override void afterUpdate() {
        // Only process records where relevant fields changed
        Map<Id, Account> changedAccounts = new Map<Id, Account>();
        for (Account acc : newRecords) {
            if (acc.Industry != oldMap.get(acc.Id).Industry) {
                changedAccounts.put(acc.Id, acc);
            }
        }
        if (!changedAccounts.isEmpty()) {
            AccountService.syncIndustryToContacts(changedAccounts);
        }
    }
}

Notice the field-change filtering in afterUpdate. Doing work for every updated record, regardless of whether the relevant fields actually changed, is one of the most common sources of performance issues in trigger logic.
Recursion Control
Recursion happens when trigger logic causes a DML operation that fires the same trigger again. The classic example: an Account trigger that updates related Contacts, and a Contact trigger that updates its parent Account.
public class RecursionGuard {
    private static Set<Id> processedAccountIds = new Set<Id>();

    public static List<Account> filterUnprocessed(List<Account> accounts) {
        List<Account> unprocessed = new List<Account>();
        for (Account acc : accounts) {
            if (!processedAccountIds.contains(acc.Id)) {
                unprocessed.add(acc);
                processedAccountIds.add(acc.Id);
            }
        }
        return unprocessed;
    }

    public static void clear() {
        processedAccountIds.clear();
    }
}

Use in the handler:
protected override void afterUpdate() {
    List<Account> unprocessed = RecursionGuard.filterUnprocessed(newRecords);
    if (unprocessed.isEmpty()) return;
    AccountService.syncToContacts(unprocessed);
}

Bypass Mechanisms
Bypass logic should operate at two levels: per-user (via Custom Permission) and per-transaction (via static flag).
Custom Permission Bypass
// In TriggerHandler.run(), before executing:
if (FeatureManagement.checkPermission('Bypass_All_Triggers')) {
    return;
}

This allows integration users or data migration profiles to bypass trigger logic without code changes.
Static Flag Bypass
// In a test or a service that needs to load data without firing triggers:
TriggerHandler.bypass('AccountTriggerHandler');
try {
    insert testAccounts;
} finally {
    TriggerHandler.clearBypass('AccountTriggerHandler');
}

Always use try/finally when setting static bypasses so they are cleared even if an exception occurs.
Comparing Popular Frameworks
Kevin O'Hara's TriggerHandler is almost identical to what I've shown above. It is the most widely adopted open-source Salesforce trigger framework and a great starting point. My version adds Custom Metadata-driven configuration, which his does not include out of the box.
fflib-apex-common is a full enterprise architecture framework that includes a Domain layer (which wraps trigger logic), a Selector layer (for SOQL), a Service layer, and a Unit of Work pattern. It is significantly more complex and requires more upfront investment, but it delivers a very consistent architecture for large teams. I recommend it for ISV packages and enterprise orgs with 5+ Salesforce developers.
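To give a feel for the difference, here is a rough sketch of the fflib Domain-layer style, assuming the fflib_SObjectDomain base class from the open-source fflib-apex-common repository (check the current source for exact method names; Region__c and the default value are illustrative assumptions, not part of fflib):

public class Accounts extends fflib_SObjectDomain {

    public Accounts(List<Account> records) {
        super(records);
    }

    // fflib routes the before-insert trigger context to this override
    public override void onBeforeInsert() {
        for (Account acc : (List<Account>) Records) {
            if (acc.Region__c == null) {
                acc.Region__c = 'EMEA'; // illustrative default only
            }
        }
    }

    // Inner Constructor class lets fflib instantiate the domain from Trigger.new
    public class Constructor implements fflib_SObjectDomain.IConstructable {
        public fflib_SObjectDomain construct(List<SObject> records) {
            return new Accounts(records);
        }
    }
}

The trigger file then becomes a single call to fflib's own dispatcher, e.g. fflib_SObjectDomain.triggerHandler(Accounts.class); — conceptually the same one-line router as in my framework, but wired into the rest of the fflib stack.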
For a solo developer or a small team, the custom framework or Kevin O'Hara's pattern gives you 90% of the benefit at 20% of the complexity.
The Problem
Your data migration script uses a service account to load 500,000 Account records into production. Every insert fires the AccountTrigger, which creates default Opportunities and sends notifications, generating hundreds of thousands of unwanted records and emails during the migration window.
The Solution
Assign a Custom Permission (e.g., Bypass_All_Triggers) to the migration service user's profile. In TriggerHandler.run(), check FeatureManagement.checkPermission('Bypass_All_Triggers') and return early if true. The migration runs clean, and the permission is removed from the service user after the load: no deployment required, and no risk of accidentally leaving a static bypass flag active.
Store your trigger bypass configuration in Custom Metadata (Trigger_Configuration__mdt) rather than only in static code. This lets you deactivate a specific handler in a production org without a deployment, which is invaluable during incident response when a trigger is causing data corruption and you need to turn it off immediately while a fix is being developed.
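One way to wire that kill switch into the framework is to override validateRun() (the pre-run hook in the base class) to consult the same hypothetical Trigger_Configuration__mdt object used by the dispatcher above — a sketch, assuming a Handler_Class__c text field holding the handler's class name:

public class AccountTriggerHandler extends TriggerHandler {

    // Consult the Custom Metadata kill switch before any context method runs
    protected override Boolean validateRun() {
        List<Trigger_Configuration__mdt> configs = [
            SELECT Is_Active__c
            FROM Trigger_Configuration__mdt
            WHERE Handler_Class__c = 'AccountTriggerHandler'
            LIMIT 1
        ];
        // No configuration record found: default to active
        return configs.isEmpty() || configs[0].Is_Active__c;
    }

    // ... context method overrides as before ...
}

A nice property of this design is that SOQL queries against Custom Metadata types do not count against the per-transaction query limit, so checking the flag on every trigger invocation is cheap.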
Testing the Framework
Your handler tests should instantiate the handler and call context methods directly rather than going through DML:
@isTest
static void testBeforeInsert_setsDefaultRegion() {
    Account acc = new Account(Name = 'Test Corp', BillingCountry = 'Germany');
    List<Account> accounts = new List<Account>{ acc };

    // Test the service method directly (the handler delegates to the service)
    AccountService.setDefaultRegion(accounts);

    System.assertEquals('EMEA', acc.Region__c,
        'Germany should map to EMEA region');
}

Only write tests that go through actual DML when you need to verify the trigger wiring itself, and even then keep those tests minimal. The business logic tests should not require DML.
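A minimal wiring test might look like the following sketch, which reuses the assumed Region__c field and Germany-to-EMEA mapping from the example above. It exists only to prove the trigger invokes the handler; everything else stays in DML-free unit tests:

@isTest
static void testTriggerWiring_beforeInsert() {
    Account acc = new Account(Name = 'Wiring Test', BillingCountry = 'Germany');

    Test.startTest();
    insert acc; // fires AccountTrigger, which routes to AccountTriggerHandler.beforeInsert()
    Test.stopTest();

    Account inserted = [SELECT Region__c FROM Account WHERE Id = :acc.Id];
    System.assertEquals('EMEA', inserted.Region__c,
        'Trigger wiring should have applied the default region');
}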
Have you implemented a trigger framework in your org, and what patterns did you find worked best for your team's size and coding style?