
Apex · December 1, 2025

Test Classes - Going Beyond 75% Coverage

Why 75% coverage is just the floor, not the goal — and how to write tests that actually catch bugs

☕ 9 min read
  • Coverage percentage measures lines hit, not logic verified — 100% coverage with zero assertions is meaningless
  • Test Data Factories and @TestSetup dramatically reduce setup boilerplate and improve maintainability
  • Always test negative paths, bulk scenarios, and async operations — not just the happy path

The 75% code coverage requirement in Salesforce is one of the most misunderstood metrics in the ecosystem. I’ve seen orgs with 98% coverage that deploy broken code regularly, and I’ve seen orgs with exactly 76% coverage that never have a production incident. The number is a gate, not a goal.

In this article I’ll walk you through what meaningful test coverage actually looks like, the patterns I use on every project, and the anti-patterns that give a false sense of security.

Why Coverage Percentage Lies to You

Salesforce measures coverage by tracking which lines of code are executed during your test runs. A line is “covered” if it runs — even if you make zero assertions about what it returned.

Consider this deeply useless but 100%-covered test:

@IsTest
static void testEverything() {
    Account a = new Account(Name = 'Test');
    insert a;
    AccountService.processAccount(a.Id);
    // No assertions. Coverage: 100%.
}
🚨 Coverage Without Assertions Is Meaningless

This test will pass after a deployment but won’t catch a single bug. The Salesforce platform doesn’t care. The deploy goes through. The bug ships. Real test quality comes from assertions that verify behavior, not lines that execute.
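Here is what the same test looks like with a real assertion. This is a sketch: Status__c and the 'Processed' value are hypothetical stand-ins for whatever field your service actually writes.

@IsTest
static void testProcessAccount_marksAccountProcessed() {
    Account a = new Account(Name = 'Test');
    insert a;

    Test.startTest();
    AccountService.processAccount(a.Id);
    Test.stopTest();

    // Re-query and assert on the behavior, not just line execution.
    // Status__c and 'Processed' are hypothetical — substitute the field
    // and value your service actually sets.
    Account result = [SELECT Status__c FROM Account WHERE Id = :a.Id];
    System.assertEquals('Processed', result.Status__c,
        'processAccount should mark the account as processed');
}

Same lines covered, but now a regression in processAccount actually fails the build.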

The Test Pyramid for Salesforce

The Salesforce Test Pyramid:

  • Unit tests (~70% of tests): triggers, services, helpers — fast, isolated, many
  • Integration tests (~20% of tests): cross-object flows, callout mocks
  • End-to-end tests (~10% of tests)

Most Salesforce projects invert this pyramid — a handful of unit tests and one giant scenario test covering “everything.” That approach is slow, brittle, and hard to debug when it fails. The goal is many small, fast unit tests at the base, with integration and end-to-end tests used sparingly.

Test Data Factory Pattern

The most impactful structural improvement you can make is adopting a Test Data Factory. Instead of constructing records inline in every test method, you centralize creation logic.

@IsTest
public class TestDataFactory {

    public static Account makeAccount(String name, Boolean doInsert) {
        Account a = new Account(
            Name = name,
            Type = 'Customer',
            BillingCity = 'San Francisco',
            BillingState = 'CA',
            BillingCountry = 'USA'
        );
        if (doInsert) insert a;
        return a;
    }

    public static List<Contact> makeContacts(Id accountId, Integer count, Boolean doInsert) {
        List<Contact> contacts = new List<Contact>();
        for (Integer i = 0; i < count; i++) {
            contacts.add(new Contact(
                FirstName = 'Test',
                LastName = 'Contact ' + i,
                AccountId = accountId,
                Email = 'test' + i + '@example.com'
            ));
        }
        if (doInsert) insert contacts;
        return contacts;
    }
}

Factories mean that when a validation rule or required field is added to your org, you fix it in one place — not across 40 test files.

Without a Factory

A new required field is added to Account. You now have to update 40 test classes individually. Developers patch their own tests but miss others. Deploys start failing in CI for unrelated changes.

With a Factory

The new required field is added to TestDataFactory.makeAccount() in one place. Every test that uses the factory immediately benefits. CI stays green. No scavenger hunt across the codebase.
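With the factory in place, a typical test body collapses to a couple of calls. A sketch using the factory methods above:

@IsTest
static void testMakeContacts_createsRequestedCount() {
    Account acc = TestDataFactory.makeAccount('Factory Corp', true);
    TestDataFactory.makeContacts(acc.Id, 3, true);

    // Required fields and defaults live in the factory, so this test
    // only states the thing it actually cares about.
    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id];
    System.assertEquals(3, contacts.size(), 'Factory should create exactly 3 contacts');
}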

Using @TestSetup Correctly

@TestSetup runs once before any test method in the class, and each test method then sees that data in a clean state: changes made by one method are rolled back before the next one runs. This is a significant performance improvement for classes with many methods.

@IsTest
private class OpportunityServiceTest {

    @TestSetup
    static void makeData() {
        Account acc = TestDataFactory.makeAccount('ACME Corp', true);
        TestDataFactory.makeContacts(acc.Id, 5, true);

        List<Opportunity> opps = new List<Opportunity>();
        for (Integer i = 0; i < 10; i++) {
            opps.add(new Opportunity(
                Name = 'Deal ' + i,
                AccountId = acc.Id,
                StageName = 'Prospecting',
                CloseDate = Date.today().addDays(30)
            ));
        }
        insert opps;
    }

    @IsTest
    static void testCloseWon_updatesARR() {
        List<Opportunity> opps = [SELECT Id, StageName FROM Opportunity LIMIT 1];
        Test.startTest();
        opps[0].StageName = 'Closed Won';
        opps[0].Amount = 50000;
        update opps[0];
        Test.stopTest();

        Account acc = [SELECT AnnualRevenue FROM Account LIMIT 1];
        System.assertEquals(50000, acc.AnnualRevenue, 'ARR should reflect closed won amount');
    }
}
💡 Performance Tip

A test class with 20 methods that each insert their own Account + Contacts runs those DML operations 20 times. With @TestSetup, the inserts happen once and the data is rolled back to a clean snapshot for each method. For large test classes this can cut test execution time by 50% or more.

⚠️ Warning

Do not assign records created or queried in @TestSetup to static variables — static state is reset before each test method runs, so those references will be null inside your tests. Query for the setup data inside each test method instead.
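A sketch of the pitfall — the static field assigned in @TestSetup does not survive into the test methods, so each method must re-query:

@IsTest
private class StaticCachingPitfallTest {

    // Anti-pattern: assigned in @TestSetup, but static state is reset
    // before each test method runs, so this is null in every test.
    static Account cachedAccount;

    @TestSetup
    static void makeData() {
        cachedAccount = TestDataFactory.makeAccount('Setup Corp', true);
    }

    @IsTest
    static void testUsesFreshQuery() {
        // cachedAccount == null here. Re-query the setup data instead:
        Account acc = [SELECT Id FROM Account WHERE Name = 'Setup Corp' LIMIT 1];
        System.assertNotEquals(null, acc, 'Setup data should be re-queried, not cached');
    }
}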

Test.startTest() and Test.stopTest() — What They Actually Do

Most developers know Test.startTest() provides a fresh set of governor limits for the code between startTest() and stopTest(). What’s less understood is that Test.stopTest() also forces async operations to complete synchronously. This is essential for testing:

  • Future methods
  • Queueable jobs
  • Scheduled jobs
  • Batch Apex
@IsTest
static void testAsyncEmailJob() {
    Account acc = [SELECT Id FROM Account LIMIT 1];

    Test.startTest();
    // This enqueues a job
    System.enqueueJob(new AccountEmailQueueable(acc.Id));
    Test.stopTest();
    // By here, the queueable has run

    List<EmailMessage> emails = [SELECT Id FROM EmailMessage WHERE RelatedToId = :acc.Id];
    System.assertEquals(1, emails.size(), 'One email should have been sent');
}
⚠️ Warning

If you query for async results BEFORE Test.stopTest(), you’ll always get zero results and wonder why your test is broken.

Mocking HTTP Callouts

Any test that calls an external service will fail in Salesforce unless you implement HttpCalloutMock. This is not optional — it’s required for the test to even run.

The Mock Class

@IsTest
global class MockHttpResponse implements HttpCalloutMock {
    global HTTPResponse respond(HTTPRequest req) {
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setStatusCode(200);
        res.setBody('{"status":"success","id":"abc123"}');
        return res;
    }
}

Using It in a Test

@IsTest
static void testCallout_successResponse() {
    Test.setMock(HttpCalloutMock.class, new MockHttpResponse());

    Test.startTest();
    String result = ExternalApiService.sendRecord('001xx000003GYkd');
    Test.stopTest();

    System.assertEquals('abc123', result, 'Should return the ID from the response');
}

For more complex scenarios, I parameterize the mock so a single class can simulate different HTTP status codes based on what the test needs — 200, 400, 500, timeout — rather than creating a separate mock class for each scenario.
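A sketch of that parameterized mock — ConfigurableHttpMock is a hypothetical name, with the status code and body passed in by the test:

@IsTest
global class ConfigurableHttpMock implements HttpCalloutMock {
    private Integer statusCode;
    private String body;

    global ConfigurableHttpMock(Integer statusCode, String body) {
        this.statusCode = statusCode;
        this.body = body;
    }

    global HTTPResponse respond(HTTPRequest req) {
        HttpResponse res = new HttpResponse();
        res.setHeader('Content-Type', 'application/json');
        res.setStatusCode(statusCode);
        res.setBody(body);
        return res;
    }
}

A test that needs the failure path then just constructs it differently, e.g. Test.setMock(HttpCalloutMock.class, new ConfigurableHttpMock(500, '{"error":"Internal"}'));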

Testing Triggers — The Bulk Scenario

🚨 The #1 Testing Gap

One of the most common gaps I see in test classes is that triggers are tested with a single record. Every trigger test should include a bulk scenario with 200 records, because that’s exactly where governor limit bugs hide.

Single-Record Test (Bad)

The test inserts 1 Account, the trigger runs, everything passes. A SOQL query inside a for loop goes undetected. In production, a data import of 200 records blows past the synchronous SOQL limit and the whole operation fails with "Too many SOQL queries: 101".
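The kind of bug that slips through a single-record test looks like this — a sketch of a trigger handler body:

// Bug a single-record test never catches: one SOQL query per record.
// With 200 records this alone exceeds the synchronous query limit.
for (Account acc : Trigger.new) {
    List<Contact> contacts = [SELECT Id FROM Contact WHERE AccountId = :acc.Id];
    // ... per-record processing
}

// Bulkified version: one query for the entire batch via a subquery.
Map<Id, Account> accountsWithContacts = new Map<Id, Account>(
    [SELECT Id, (SELECT Id FROM Contacts) FROM Account WHERE Id IN :Trigger.new]
);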

Bulk Test (Good)

@IsTest
static void testAccountTrigger_bulkInsert() {
    List<Account> accounts = new List<Account>();
    for (Integer i = 0; i < 200; i++) {
        accounts.add(new Account(Name = 'Bulk Account ' + i));
    }

    Test.startTest();
    insert accounts; // This is the critical bulk test
    Test.stopTest();

    List<Account> inserted = [SELECT Id, Name FROM Account WHERE Name LIKE 'Bulk Account%'];
    System.assertEquals(200, inserted.size(), 'All 200 accounts should have been inserted');
}

This single test has caught more production bugs for me than any other pattern.

Negative Path Testing

The happy path test is the first one you write. But the tests that actually prevent regressions are the negative paths — what happens when the input is wrong, the record is in the wrong state, or the user lacks permission.

@IsTest
static void testApproval_throwsExceptionWhenAlreadyApproved() {
    Opportunity opp = [SELECT Id, StageName FROM Opportunity LIMIT 1];
    opp.StageName = 'Closed Won';
    update opp;

    Boolean exceptionThrown = false;
    try {
        Test.startTest();
        ApprovalService.submitForApproval(opp.Id);
        Test.stopTest();
    } catch (ApprovalService.AlreadyApprovedException e) {
        exceptionThrown = true;
        System.assert(e.getMessage().contains('already approved'), 'Exception message should be descriptive');
    }
    System.assert(exceptionThrown, 'Exception should have been thrown for closed opportunity');
}

Testing that exceptions are thrown correctly is just as important as testing the success path. If your error handling is broken, your users get cryptic System.NullPointerException messages instead of helpful guidance.

Anti-Patterns to Avoid

The Four Worst Testing Anti-Patterns

SeeAllData = true: This makes your tests depend on whatever data happens to exist in the org. Tests pass in sandbox, fail in production, pass again the next day. Never use it except in the narrow cases where it's genuinely required (such as certain CPQ or Knowledge object tests).

// Never do this
@IsTest(SeeAllData=true)
static void myBadTest() { ... }

Hardcoded IDs: Using record type IDs, profile IDs, or any other IDs hardcoded in test classes is a deployment nightmare. Always query for them by name.
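For record types you don't even need SOQL — resolve the ID from the schema describe at runtime. A sketch, where 'Enterprise' is a hypothetical record type developer name:

// Brittle: this ID is different in every org
// Id rtId = '012000000000AAAAAA';

// Stable: resolve by DeveloperName from the describe result (no SOQL consumed)
Id rtId = Schema.SObjectType.Account
    .getRecordTypeInfosByDeveloperName()
    .get('Enterprise')
    .getRecordTypeId();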

Testing private methods via @TestVisible abuse: If you find yourself marking internal helper methods @TestVisible to test them directly, that’s a signal that your class is doing too much. Restructure so behavior is tested through the public interface.

The single trailing assertion: I often see a 200-line test method with one System.assert at the very end. If it fails, you have no idea which of the 50 operations actually went wrong. One test method should test one behavior, with focused, specific assertions.

💡 Naming Convention That Pays Off

Use the pattern testMethod_scenario_expectedOutcome for every test method name. For example: testSubmitApproval_alreadyApproved_throwsException. When a test fails in CI, you can read the method name alone and know exactly what broke without opening the file.

The Maintenance Factor

The most underrated quality of a good test suite is maintainability. Tests that are hard to update become tests that are disabled, commented out, or worked around when deadlines hit.

I use three rules to keep test classes maintainable:

  1. Each test method name describes exactly what behavior it verifies (use the pattern testMethod_scenario_expectedOutcome)
  2. All data setup goes through the factory — never inline new Account(...) in a test body
  3. Each test class tests exactly one class or trigger — no cross-class scenario tests unless you’re explicitly writing integration tests and labeling them as such

What Does Good Coverage Actually Look Like?

When I review a test suite I look for:

  • Coverage above 90% on business logic classes (services, handlers, utilities)
  • Every public method has at least one positive and one negative test
  • All branches of if/else and switch statements are covered
  • Bulk tests with 200 records for every trigger
  • Callout mocks for every HTTP integration
  • Test.startTest()/stopTest() wrapping every async operation

The 75% floor exists so that code with no tests at all cannot be deployed. It was never meant to define what “good” looks like. Good looks like a test suite you trust — one where a failing test genuinely tells you something broke.


What does your current test suite look like? Are you writing assertions that would actually catch a bug, or just executing lines to hit the number? Drop your approach in the comments — I’d love to hear how your team handles negative path coverage.

