I spent years doing Salesforce deployments the manual way: export from dev sandbox, upload change set to QA, pray nothing breaks, export from QA, upload to production during a maintenance window at 11 PM on a Sunday. Every developer on the team had their own sandbox and nobody really knew what was in it. Deployments were tense, unpredictable events.
The move to a proper CI/CD pipeline changed everything. Deployments became boring, and boring deployments are what you want.
In this article I'll walk you through the pipeline architecture I use today, the SF CLI commands that make it work, and full YAML examples for both GitHub Actions and Bitbucket Pipelines.
The Pipeline Architecture
The principle is simple: every change starts in a scratch org or developer sandbox, flows through automated CI checks on every pull request, and then progresses through environments only after tests pass and humans approve. Nothing goes directly to production.
Change Set Approach (Old)
Developer makes changes in their personal sandbox, manually builds a change set, uploads to QA. Another developer's changes conflict because both edited the same Flow. Nobody knows whose version is correct. The Sunday night production window turns into a 3-hour debugging session.
CI/CD Pipeline (Modern)
Every change is committed to a feature branch and goes through an automated validation on a clean scratch org before it can be reviewed. Conflicts are caught at PR merge time, not during a production window. Deployments are triggered by a pipeline click, not a manual process at 11 PM.
Source-Driven Development with SF CLI
The foundation of any modern Salesforce DevOps practice is treating your Git repository as the source of truth for all metadata. This means using the Salesforce CLI (sf) to pull metadata into your repo and push from your repo to orgs, never making changes directly in a sandbox that aren't tracked in source control.
Initial Project Setup
# Create a new SFDX project
sf project generate --name my-salesforce-project
cd my-salesforce-project
# Authorize your Dev Hub (enables scratch org creation)
sf org login web --set-default-dev-hub --alias DevHub
# Create a scratch org
sf org create scratch \
  --definition-file config/project-scratch-def.json \
  --alias MyScratchOrg \
  --duration-days 30 \
  --set-default
# Push your source to the scratch org
sf project deploy start
# Pull changes from scratch org back to your repo
sf project retrieve start
The project-scratch-def.json file defines what your scratch org looks like, including which features are enabled, which edition it simulates, and any org preferences:
{
  "orgName": "My Project",
  "edition": "Developer",
  "features": ["EnableSetPasswordInApi", "Communities"],
  "settings": {
    "lightningExperienceSettings": {
      "enableS1DesktopEnabled": true
    },
    "mobileSettings": {
      "enableS1EncryptedStoragePref2": false
    }
  }
}
Project Structure
Your sfdx-project.json tells the CLI which directories contain your metadata and how to organize it:
{
  "packageDirectories": [
    {
      "path": "force-app",
      "default": true
    }
  ],
  "name": "my-salesforce-project",
  "namespace": "",
  "sourceApiVersion": "60.0"
}
Metadata lives under force-app/main/default/, organized by type:
force-app/main/default/
├── classes/
│   ├── AccountService.cls
│   └── AccountService.cls-meta.xml
├── lwc/
│   └── accountSummary/
│       ├── accountSummary.html
│       ├── accountSummary.js
│       └── accountSummary.js-meta.xml
├── flows/
│   └── Account_Auto_Create_Contact.flow-meta.xml
└── objects/
    └── Account/
        └── fields/
            └── Custom_Field__c.field-meta.xml
When you pull metadata from a sandbox with sf project retrieve start, do it on a feature branch, never directly on main. This ensures the change goes through your PR review and CI validation before being treated as the source of truth. Pulling straight to main bypasses all your pipeline gates and re-introduces the "it works in sandbox" problem you built the pipeline to solve.
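As a concrete sketch of that workflow (the branch name and the DevSandbox alias are hypothetical placeholders for your own):

```shell
# Capture sandbox changes on a feature branch, never on main
git checkout -b feature/retrieve-sandbox-changes
sf project retrieve start --target-org DevSandbox
git add force-app
git commit -m "Retrieve tracked changes from dev sandbox"
git push origin feature/retrieve-sandbox-changes
# Then open a PR so the change passes CI validation before merging to main
```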
GitHub Actions Pipeline
Here is a complete GitHub Actions workflow that validates on every pull request and deploys on merge to main.
name: Salesforce CI/CD

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  SF_CLI_VERSION: latest

jobs:
  validate:
    name: Validate Metadata
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    steps:
      - name: Checkout source
        uses: actions/checkout@v4
      - name: Install SF CLI
        run: npm install -g @salesforce/cli@${{ env.SF_CLI_VERSION }}
      - name: Authenticate Dev Hub
        run: |
          echo "${{ secrets.SFDX_AUTH_URL_DEVHUB }}" > ./DEVHUB_SFDX_URL.txt
          sf org login sfdx-url --sfdx-url-file ./DEVHUB_SFDX_URL.txt --alias DevHub --set-default-dev-hub
      - name: Create scratch org
        run: |
          sf org create scratch \
            --definition-file config/project-scratch-def.json \
            --alias CIScratch \
            --duration-days 1 \
            --set-default
      - name: Push source to scratch org
        run: sf project deploy start
      - name: Run Apex tests
        run: |
          sf apex run test \
            --test-level RunLocalTests \
            --output-dir ./test-results \
            --result-format human \
            --code-coverage \
            --wait 30
      - name: Delete scratch org
        if: always()
        run: sf org delete scratch --no-prompt --target-org CIScratch

  deploy:
    name: Deploy to QA
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    steps:
      - name: Checkout source
        uses: actions/checkout@v4
      - name: Install SF CLI
        run: npm install -g @salesforce/cli
      - name: Authenticate QA sandbox
        run: |
          echo "${{ secrets.SFDX_AUTH_URL_QA }}" > ./QA_SFDX_URL.txt
          sf org login sfdx-url --sfdx-url-file ./QA_SFDX_URL.txt --alias QA
      - name: Deploy to QA
        run: |
          sf project deploy start \
            --target-org QA \
            --test-level RunLocalTests \
            --wait 30
      - name: Notify on success
        if: success()
        run: echo "Deployment to QA successful"
The --code-coverage flag on sf apex run test only collects coverage data and reports it; it does not enforce the 75% minimum threshold. That gate is enforced by Salesforce during sf project deploy start when deploying to a production org. In the CI validation step above, coverage is collected for reporting purposes, but the actual enforcement happens at deployment time.
Bitbucket Pipelines
image: node:20

pipelines:
  pull-requests:
    '**':
      - step:
          name: Validate (Scratch Org)
          caches: [node]
          script:
            - npm install -g @salesforce/cli
            - echo $SFDX_AUTH_URL_DEVHUB > devhub_url.txt
            - sf org login sfdx-url --sfdx-url-file devhub_url.txt --alias DevHub --set-default-dev-hub
            - sf org create scratch --definition-file config/project-scratch-def.json --alias CIScratch --duration-days 1 --set-default
            - sf project deploy start
            - sf apex run test --test-level RunLocalTests --code-coverage --wait 30
            - sf org delete scratch --no-prompt --target-org CIScratch
  branches:
    main:
      - step:
          name: Deploy to QA
          caches: [node]
          deployment: QA
          script:
            - npm install -g @salesforce/cli
            - echo $SFDX_AUTH_URL_QA > qa_url.txt
            - sf org login sfdx-url --sfdx-url-file qa_url.txt --alias QA
            - sf project deploy start --target-org QA --test-level RunLocalTests --wait 30
  custom:
    deploy-production:
      - step:
          name: Deploy to Production (Manual Trigger)
          deployment: Production
          script:
            - npm install -g @salesforce/cli
            - echo $SFDX_AUTH_URL_PROD > prod_url.txt
            - sf org login sfdx-url --sfdx-url-file prod_url.txt --alias Prod
            - sf project deploy start --target-org Prod --test-level RunLocalTests --wait 60
Store your org auth URLs as repository secrets (SFDX_AUTH_URL_DEVHUB, SFDX_AUTH_URL_QA, SFDX_AUTH_URL_PROD). Generate them with:
sf org display --target-org YourOrg --verbose --json
# Copy the "sfdxAuthUrl" value from the output
The production pipeline uses Bitbucket's manual trigger (custom:) so it only runs when someone explicitly initiates it from the Pipelines UI, not automatically on every merge. This is the right pattern for production deployments.
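If jq is available, you can extract the value directly rather than copying it by hand (the .result.sfdxAuthUrl path assumes the standard --json envelope; confirm it against your CLI version's output):

```shell
# Print just the auth URL, ready to paste into your CI secret store
sf org display --target-org YourOrg --verbose --json | jq -r '.result.sfdxAuthUrl'
```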
Scratch Orgs vs Sandboxes in the Pipeline
Scratch Orgs (CI)
I use scratch orgs for CI validation because they are:
- Completely clean: no legacy data, no configuration drift from manual changes
- Cheap: free with Dev Hub, you can create dozens simultaneously
- Fast to spin up: typically 2-4 minutes
- Disposable: delete after the pipeline run, no maintenance cost
Sandboxes (QA/UAT)
I use sandboxes (Full and Partial) for QA and UAT because:
- They contain a copy of production data, which is necessary for integration testing
- Stakeholders and QA testers need a stable environment between deploys
- Some integrations require real external system connectivity that scratch orgs can't replicate
Dev Hub has a limit on the number of active scratch orgs (6 for Developer Edition, 40 for Enterprise/Unlimited Edition). If your pipeline creates scratch orgs but doesn't delete them, you'll eventually hit the limit and CI will start failing with an unhelpful quota error. The if: always() condition on the delete step in the GitHub Actions example ensures the org is deleted even when the pipeline fails mid-run.
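You can check how close you are to the ceiling before it bites. A sketch using two CLI commands; verify the flags against your installed version:

```shell
# Show the Dev Hub's scratch org allocations (ActiveScratchOrgs, DailyScratchOrgs)
sf limits api display --target-org DevHub | grep -i scratch

# Prune local auth entries for expired or deleted scratch orgs
sf org list --clean --no-prompt
```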
Handling Destructive Changes
When you delete metadata (a field, a class, a flow), you need a destructiveChanges.xml file to tell Salesforce what to remove. Without it, the deleted file is simply ignored and the metadata remains in the target org.
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>OldClass</members>
        <name>ApexClass</name>
    </types>
    <version>60.0</version>
</Package>
With the SF CLI, you can generate a destructive-changes manifest for the components you want to remove:
# Generate a destructiveChangesPre.xml for the components to delete
sf project generate manifest \
  --metadata "ApexClass:OldClass" \
  --type pre \
  --output-dir force-app/main/default/destructiveChanges
The pre type runs the destructive changes before the deployment (removes old code before deploying new code that might conflict). Use post for removing obsolete components after the new code is deployed.
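To apply the manifest during a deployment, sf project deploy start accepts pre- and post-destructive-changes flags alongside --manifest. A sketch, where the package.xml path and output directory are assumptions from the layout above:

```shell
# Delete OldClass first, then deploy the rest of the package
sf project deploy start \
  --manifest manifest/package.xml \
  --pre-destructive-changes force-app/main/default/destructiveChanges/destructiveChangesPre.xml \
  --target-org QA \
  --test-level RunLocalTests \
  --wait 30
```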
A deprecated Apex class is deleted from the repo. The pipeline deploys successfully. But the old class still exists in the QA and Production orgs, still consuming storage, still potentially being called by anonymous scripts or scheduled jobs nobody documented. Technical debt accumulates invisibly. Always commit destructiveChanges.xml alongside the deletion to keep every environment in sync.
The Mindset Shift That Makes It Work
The biggest obstacle to adopting Salesforce DevOps is not technical; it's cultural. Salesforce developers are accustomed to making quick declarative changes directly in sandboxes. Flow changes, page layout tweaks, validation rule edits: they happen "in the org" rather than in source control.
The pipeline model requires committing to one rule: no metadata changes are made directly in QA, UAT, or Production. Everything starts in source control. If a hotfix needs to go to production, it still goes through the pipeline, with an expedited process if needed.
This discipline is what makes the pipeline valuable. Without it, you have automation on top of chaos, and the deployment still fails when sandbox drift causes conflicts.
What stage of this journey is your team on? Fully automated, partially automated, or still on change sets? I'm curious what the biggest blocker has been for teams trying to make this transition; share it in the comments.