Every customer-success leader I’ve worked with has asked the same question at some point: “Can you just tell me which accounts are about to churn?”
The standard answer involves a data science hire, a warehouse, a churn model in Python, and a six-month rollout. Everybody nods, nobody does it, and the CSMs go back to sorting accounts by last-activity date.
The thing is — Salesforce already ships the pieces to do this natively. Einstein Prediction Builder trains a churn model against the records already in your org. Next Best Action turns that score into ranked, actionable recommendations that show up on the Account page, where your CSMs actually work. No warehouse. No data scientist. No custom code.
This is the playbook I give clients who want a working churn score in two weeks.
The Architecture
Step 1 — Prepare Your Historical Data
Einstein Prediction Builder needs enough history to learn from. The official requirements, per Salesforce’s Einstein Prediction Builder documentation:
- At least 400 records in the object you’re predicting against (usually Account)
- For a binary prediction (Yes/No) — at least 100 positive examples and 100 negative examples
For churn, “positive” means “this account churned.” “Negative” means “this account is still active.” If you don’t already flag churned accounts explicitly, you need two things:
- A boolean or picklist field that clearly identifies churned accounts — e.g., `Is_Churned__c` (checkbox) or `Status` = "Churned"
- A way to populate it historically — typically derived from contract end dates, zero revenue in the last N months, or an explicit status change
If you’ve only been tracking churn for two months, you don’t have enough history. Wait until you have 100+ clean churned records before building the model — an undertrained model does more damage than no model, because CSMs learn to distrust it and it becomes shelfware.
If the record count is there but the labelling is messy, your first week of work is data hygiene. Einstein can’t predict what you can’t cleanly label.
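To make the labelling rule concrete, here is a minimal sketch of the "derive churn from contract end dates and revenue" approach described above. The field names, grace window, and data shape are assumptions for illustration, not Salesforce APIs — in practice this logic would live in a Flow, a batch job, or a one-off data load.

```python
from datetime import date, timedelta

# Hypothetical account export: (account_id, contract_end_date, revenue_last_6mo)
accounts = [
    ("001A", date(2023, 1, 31), 0),
    ("001B", date(2024, 11, 30), 12000),
    ("001C", date(2023, 6, 30), 500),
]

GRACE = timedelta(days=90)  # assumption: no renewal within 90 days of contract end
today = date(2024, 12, 1)

def is_churned(contract_end, revenue_last_6mo):
    """Label an account churned if its contract lapsed past the grace
    window AND it generated no revenue since."""
    return contract_end + GRACE < today and revenue_last_6mo == 0

labels = {acct: is_churned(end, rev) for acct, end, rev in accounts}
print(labels)  # 001A churned; 001B still under contract; 001C lapsed but paying
```

Whatever rule you pick, apply it consistently across the whole history — a label that means different things in different quarters will poison the training data.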
Step 2 — Build the Prediction
Inside Setup, go to Einstein Prediction Builder and create a new prediction. The flow is:
- Pick the object — Account (or your chosen churn-tracking object)
- Define what you're predicting — the binary field (`Is_Churned__c`)
- Segment the records Einstein learns from — typically: accounts older than 6 months (so they have enough history), excluding any that are already churned this month
- Pick which fields Einstein can consider — start broad, let Einstein pick
- Review, save, deploy
Einstein automatically:
- Splits your historical data into training and holdout sets
- Tests multiple algorithms and picks the best one
- Writes a score field to your object (e.g., `Likelihood_To_Churn`) on every record, refreshed on the schedule you pick
- Surfaces model-quality metrics (AUC, top predictors) so you can judge trustworthiness before rolling out
An AUC of 0.5 means the model is no better than a coin flip, and anything below 0.7 is a warning sign that it isn't meaningfully better than guessing. Before you surface scores to CSMs, make sure the AUC is at least in the 0.75-0.85 range. Below that, fix the data, not the model.
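If you want to sanity-check what the AUC number means before trusting Einstein's report, it can be computed by hand: AUC is the probability that a randomly chosen churned account scores higher than a randomly chosen retained one. The scores below are made-up examples, not real Einstein output.

```python
def auc(scores_pos, scores_neg):
    """Probability a random churned account outscores a random retained one
    (ties count half). 0.5 = coin flip; 1.0 = perfect separation."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores: churned accounts vs. retained accounts
churned  = [88, 74, 66, 91, 59]
retained = [35, 42, 61, 28, 70, 50]
print(round(auc(churned, retained), 2))  # 0.9 — strong separation
```

An AUC this high means the score bands in the next step will actually separate risky accounts from healthy ones; at 0.6, the bands would be mostly noise.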
Step 3 — Create the Recommendation Records
Per Salesforce's Get Started with Einstein Next Best Action documentation, recommendations live as rows in the standard Recommendation object. Each one needs:
- Name — internal identifier
- Description — what the CSM sees
- Acceptance Label — the button text (“Schedule Executive Check-in”)
- Rejection Label — the dismiss text (“Not now”)
- Action — what happens when accepted — a Flow, an Apex invocable, or a URL
For churn, the recommendations that tend to work are narrowly-defined and tied to specific scoring thresholds:
| Score Range | Recommendation | Action |
|---|---|---|
| 80-100 | "High churn risk — escalate to CSM lead" | Screen Flow creating a task assigned to the CS manager |
| 60-80 | "Schedule a health check" | Screen Flow booking a meeting |
| 40-60 | "Send the Q2 product update email" | Screen Flow triggering a Marketing Cloud journey |
| 0-40 | (no recommendation — account is healthy) | — |
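The banding logic in the table is simple enough to sketch in a few lines. This is an illustration of the threshold mapping, not Salesforce code — in the real build, this branching lives in the Strategy Flow's Decision elements (Step 4).

```python
# Score bands from the table above, highest floor first.
BANDS = [
    (80, "High churn risk — escalate to CSM lead"),
    (60, "Schedule a health check"),
    (40, "Send the Q2 product update email"),
]

def recommend(score):
    """Return the recommendation name for a churn score, or None for a
    healthy (0-40) account."""
    for floor, name in BANDS:
        if score >= floor:
            return name
    return None

print(recommend(85))  # escalation band
print(recommend(45))  # re-engage band
print(recommend(20))  # None — no recommendation fires
```

Note that each score falls into exactly one band — that exclusivity is what prevents the "one recommendation fires for everything" failure described later.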
Step 4 — Build the Strategy Flow
Salesforce’s current direction, per the NBA Considerations doc, is to use Flow Builder (Recommendation Strategy flow type) rather than the older Strategy Canvas UI. Strategy Builder still works but is no longer recommended for new builds.
Create a Flow of type Recommendation Strategy. The flow receives context (typically a record ID) and returns a collection of Recommendation records. Inside the flow you:
- Load all Recommendations for this object
- Filter them based on the churn score and other business rules (contract value, renewal date, CSM assignment)
- Sort by priority
- Return the top N (typically 3)
Example Recommendation Strategy Flow structure
[Start]
↓
[Get Recommendations] — all churn-related Recommendation records
↓
[Get Account] — load the current Account record
↓
[Decision: churn score]
├── > 80 → filter to "Escalate" recommendations
├── 60-80 → filter to "Health check" recommendations
├── 40-60 → filter to "Re-engage" recommendations
└── < 40 → return empty collection
↓
[Decision: contract value]
├── > $50k → prioritize executive-level actions
└── < $50k → prioritize CSM-level actions
↓
[Sort + Limit to top 3]
↓
[Return collection]

This is where most of the business sophistication lives. You can (and should) combine the Einstein score with factors Einstein can't know — contract size, strategic-account flag, industry, renewal cycle. This is the difference between a score nobody acts on and a score that changes behaviour.
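The filter-sort-limit pipeline the diagram describes can be sketched outside Salesforce to reason about the business rules before building the Flow. The record shapes, band names, and $50k threshold are stand-ins drawn from the diagram, not real Recommendation fields.

```python
# Hypothetical in-memory stand-ins for Recommendation records.
recommendations = [
    {"name": "Escalate to CS lead", "band": "escalate", "priority": 1, "exec_level": True},
    {"name": "Book health check",   "band": "health",   "priority": 2, "exec_level": False},
    {"name": "Send Q2 update",      "band": "reengage", "priority": 3, "exec_level": False},
]

def strategy(churn_score, contract_value, recs, top_n=3):
    """Mirror the Strategy Flow: pick a band from the score, re-weight by
    contract value, sort by priority, return the top N."""
    if churn_score > 80:
        band = "escalate"
    elif churn_score >= 60:
        band = "health"
    elif churn_score >= 40:
        band = "reengage"
    else:
        return []  # healthy account: empty collection, component shows nothing

    matched = [r for r in recs if r["band"] == band]
    # Large contracts surface executive-level actions first.
    exec_first = contract_value > 50_000
    matched.sort(key=lambda r: (r["exec_level"] != exec_first, r["priority"]))
    return matched[:top_n]

print([r["name"] for r in strategy(85, 120_000, recommendations)])
print(strategy(30, 120_000, recommendations))  # healthy: []
```

The same structure — load, filter, sort, limit — maps one-to-one onto Get Records, Decision, Collection Sort, and Assignment elements in Flow Builder.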
Step 5 — Surface on the Account Page
The final piece is the Einstein Next Best Action component from Salesforce’s NBA Component documentation, added to the Account record page via Lightning App Builder. You configure it with:
- Which Recommendation Strategy Flow to run
- How many recommendations to display
- The page context (passed automatically)
When a CSM opens an Account, the component fires the Strategy Flow, passes the current Account ID, receives the ranked recommendations, and displays them.
Each recommendation has an “Accept” and “Reject” button. Clicking Accept runs the action (Flow, Apex, or URL). Clicking Reject logs the rejection so you can analyse which recommendations are being ignored.
After two weeks of production use, look at the Recommendation rejection data — which recommendations are being ignored, and by whom. It tells you more about what your CSMs actually need than any focus group. Kill the ones with 80%+ rejection rates; they’re noise.
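The rejection-rate analysis is a simple aggregation once you export the accept/reject events. A hedged sketch, with a made-up event log and an assumed 80% kill threshold:

```python
from collections import Counter

# Hypothetical accept/reject log exported from the Recommendation reaction data.
events = [
    ("Schedule a health check", "accept"),
    ("Schedule a health check", "reject"),
    ("Send the Q2 product update email", "reject"),
    ("Send the Q2 product update email", "reject"),
    ("Send the Q2 product update email", "reject"),
    ("Send the Q2 product update email", "reject"),
    ("Send the Q2 product update email", "accept"),
]

shown = Counter(name for name, _ in events)
rejected = Counter(name for name, outcome in events if outcome == "reject")

rates = {name: rejected[name] / shown[name] for name in shown}
noisy = [name for name, rate in rates.items() if rate >= 0.8]
print(noisy)  # candidates to kill or rewrite
```

Segmenting the same rates by CSM or by account tier usually reveals whether a recommendation is universally noisy or only misfiring for one segment.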
The Cost Side
Salesforce’s public documentation confirms every org gets 5,000 strategy requests per month at no additional cost, with Einstein Prediction Builder and NBA included in most Enterprise and Unlimited editions. The strategy request counter is in Setup.
If you’re running Customer Success for a team of 30 CSMs, each opening five accounts a day, you’re at ~3,000 strategy requests per month — well under the free tier. If you’re at 60+ CSMs with high account-open volume, you’ll cross the threshold and need to negotiate additional capacity.
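The capacity math above is worth writing down explicitly, since it's the check that tells you whether you stay inside the free tier. The ~20 business days per month is my assumption; each account open is assumed to fire one strategy request.

```python
# Back-of-envelope strategy-request volume, using the article's numbers.
csms = 30
accounts_per_day = 5           # each account open fires one strategy request
working_days_per_month = 20    # assumption: ~20 business days
free_tier = 5_000              # monthly strategy requests at no additional cost

monthly_requests = csms * accounts_per_day * working_days_per_month
print(monthly_requests, monthly_requests <= free_tier)
```

Re-run the same arithmetic with 60 CSMs and the total doubles to 6,000 — over the free tier, which is the threshold case mentioned above.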
The 80/20 Rollout
The sequence that consistently works:
- Week 1 — Data hygiene. Validate that 100+ churned records exist and are cleanly labelled
- Week 2 — Build the prediction in Einstein Prediction Builder. Check AUC. Review top predictors with the CS team to build trust
- Week 3 — Build 3-5 recommendations and the Strategy Flow. Get the score on test accounts first
- Week 4 — Pilot with 2-3 CSMs, collect feedback, tune the strategy flow, then roll out broadly
The single biggest mistake I see is trying to ship all of this at once. Getting Prediction Builder to produce a score is only half the job; the NBA rollout is the other half, and rolling it out with poorly tuned recommendations is what tanks CSM adoption.
Real-World Scenario
Problem: A SaaS company built a churn model (AUC 0.82, genuinely good), wired it into NBA with a single recommendation — “Schedule executive outreach” — that fired for every score above 50. CSMs hit the recommendation 40+ times a day. After two weeks, they reflexively clicked Reject on everything. Adoption: zero.
Root cause: One recommendation, wide threshold, no nuance. Every account looked the same.
Fix: Rebuild with three graduated thresholds (80+, 60-80, 40-60), three different actions, and segmentation by contract value in the Strategy Flow. Rejection rate dropped from 95% to under 20%. Adoption recovered.
What to Avoid
Four anti-patterns I see repeatedly:
1. Skipping the AUC check. If the model isn’t materially better than guessing, you’re pushing noise at your CSMs. Fix the labelling first.
2. One giant recommendation. Differentiate by score band and by segment. A CSM for a $10k account needs different actions than a CSM for a $500k account.
3. Rolling out to everyone immediately. Pilot with the CSMs who are most patient and most curious. Their feedback shapes the Strategy Flow before it hits the less-patient adopters.
4. Ignoring the rejection data. If a recommendation is being rejected 90% of the time, it’s not a CSM problem — it’s a recommendation problem. Kill it or rewrite it.
Where the Official Docs Live
- Get Started with Einstein Next Best Action — the authoritative setup guide
- Einstein Next Best Actions Considerations — limits, editions, recommended flow type
- Einstein Prediction Builder — record requirements, supported objects
- Einstein Next Best Action Component — surfacing recommendations on a page
Read the Considerations doc first. It’s the one that saves you from discovering a limit the wrong way.
What’s stopping you from building your own version of this — is it the data, the buy-in from CS leadership, or something else entirely?