Tuesday, 13 January 2026

Run Relevant Tests in Spring '26

Image created by GPT5.2 based on a prompt by Bob Buzzard

Introduction


The Spring '26 release of Salesforce introduces a new way to execute tests when deploying - Run Relevant Tests. Specifying this option essentially hands over responsibility to Salesforce to identify and execute the tests associated with any Apex code in the deployment payload. Note that this is in beta in Spring '26, and everything that follows is based on trying it out in a pre-release org in the first half of January 2026, so it's really early days for it.

This is the archetypal double-edged sword in my view. On the one hand, deployments can run way faster with little or no human effort, especially compared to manually specifying the tests that should be executed. On the other hand, it's abdicating responsibility for the quality of the deployment. Given that we're already looking at abdicating responsibility for the development of software to AI tools, does ceding another aspect of the lifecycle really matter?

Who Chooses the Tests?


While conceptually this is something that should require many sleepless nights and lengthy discussions, I think in the majority of cases, especially outside of ISVs, it really doesn't matter. Yes, the test engine might not get it 100% right 100% of the time, but neither will most developers. If we're being honest, the test suite itself is likely sub-optimal, especially in mature orgs at the Enterprise level where lots of disconnected parties have focused on getting things live over the years. In this scenario, if the odd test gets missed or an extra test gets run, it doesn't really change much from the quality perspective.

This is not the case for ISVs though, who tend to put a lot of effort into designing a robust test suite, given that they need their solution to function under pretty much any scenario. It's likely also not true for recent orgs that have been following good DevOps and Quality Assurance principles from the start, given that Salesforce and third-party tooling now make this relatively straightforward to achieve. In these cases, the two new parameters for the @IsTest annotation allow tight coupling of tests to classes/deployments. Note that these only apply when the test level for the deployment is RunRelevantTests:

  • @IsTest(critical=true)

    I really like this one. If you've ever built an app that works with real money, you'll know that there are areas that must not fail or losses will be incurred. Executing tests for key areas, regardless of what changed, is a nice new feature.

  • @IsTest(testFor='<classes_and_triggers>')

    This is for the well-managed codebases. It allows you to guarantee that this test class will be executed if new/modified versions of any of the identified dependencies are included in the payload. While this might feel like development overhead, my view is it's exactly what is needed in a robust test suite. Good development teams will likely hold this information elsewhere anyway, and apply it via RunSpecifiedTests, so I can see in many cases it will shortcut that process. There's a short sketch of both annotations after this list.
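
Here's a rough sketch of how the two annotations might look in practice. The class names, method names and dependency list are my own invention for illustration, and I'm assuming from the docs placeholder that testFor takes a comma-separated list and that critical can be applied at the class level.

    // Hypothetical critical test - selected on every RunRelevantTests
    // deployment regardless of what changed (assuming class-level usage)
    @IsTest(critical=true)
    private class PaymentProcessingTest {
        @IsTest
        static void chargesTheCorrectAmount() {
            // test body
        }
    }

    // Hypothetical test tied to the code it covers - selected whenever a new or
    // modified version of either dependency appears in the deployment payload
    @IsTest(testFor='InvoiceService,InvoiceTrigger')
    private class InvoiceServiceTest {
        @IsTest
        static void generatesAnInvoice() {
            // test body
        }
    }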

How Does Salesforce Choose?


This is the $64,000 question, but right now we just don't know. The docs say:
    "the RunRelevantTests test engine analyzes the deployment payload and automatically runs a subset of tests based on that analysis."
which tells us what happens, but gives no detail about the analysis carried out. This isn't unusual in my experience, and by the time this feature goes GA I'd expect significantly more information to be available. That said, I can't just sit idly by and wait, so I've been doing some digging using a sample codebase with the following actors:
  • OpportunityUtils
    The protagonist in my little drama is a class named OpportunityUtils. This implements an interface (OpportunityUtilsIF) with a single method, getBigDeals(), which receives a collection of opportunities and returns a new collection containing just those opportunities with a value greater than or equal to 250,000. (A rough sketch of all of these classes appears after the list.)

    There is a dedicated test class (OpportunityUtilsTest) which directly instantiates the class and executes a zero/one test. 
  • OpportunityEOD
    This contains a single method (EODProcessing) that extracts all opportunities created today, creates an instance of OpportunityUtils, extracts just the big deals, appends ' - BIG DEAL' to their name and updates them.

    There is a dedicated test class (OpportunityEODTest) that inserts test opportunities of varying values, instantiates OpportunityEOD and executes the EODProcessing method, then extracts all opportunities from the database that are greater than or equal to 250,000 and asserts each name contains ' - BIG DEAL'.

  • OpportunityWrapLevel1
    This contains a single method (EODProcessingWrapLevel1) that instantiates the OpportunityEOD class and executes the EODProcessing method.

    There is a dedicated test class (OpportunityWrapLevel1Test) that inserts test opportunities of varying values, instantiates OpportunityWrapLevel1, executes the EODProcessingWrapLevel1 method, then extracts all opportunities from the database that are greater than or equal to 250,000 and asserts each name contains ' - BIG DEAL'.

  • OpportunityEODInjection
    This ups the ante somewhat, as it contains a replica of the EODProcessing() method, but rather than directly instantiating OpportunityUtils it is passed a parameter implementing the OpportunityUtilsIF interface. There is a dedicated test class (OpportunityEODInjectionTest) that delegates to a test factory to dynamically create an instance of OpportunityUtils based on the class name - at no point is OpportunityUtils directly referred to. The test mirrors the other EOD tests, inserting opportunities, carrying out the EOD processing and verifying that ' - BIG DEAL' is appended where appropriate.
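
For reference, this is roughly what the production classes look like - a sketch reconstructed from the descriptions above rather than the exact source, assuming the opportunity value is the standard Amount field and guessing at the precise method signatures.

    // The shared interface
    public interface OpportunityUtilsIF {
        List<Opportunity> getBigDeals(List<Opportunity> opportunities);
    }

    // Direct implementation - returns the opportunities at or above the threshold
    public class OpportunityUtils implements OpportunityUtilsIF {
        public List<Opportunity> getBigDeals(List<Opportunity> opportunities) {
            List<Opportunity> bigDeals = new List<Opportunity>();
            for (Opportunity opp : opportunities) {
                if (opp.Amount >= 250000) {
                    bigDeals.add(opp);
                }
            }
            return bigDeals;
        }
    }

    // End of day processing - directly instantiates OpportunityUtils
    public class OpportunityEOD {
        public void EODProcessing() {
            List<Opportunity> todaysOpps =
                [SELECT Id, Name, Amount FROM Opportunity WHERE CreatedDate = TODAY];
            List<Opportunity> bigDeals = new OpportunityUtils().getBigDeals(todaysOpps);
            for (Opportunity opp : bigDeals) {
                opp.Name += ' - BIG DEAL';
            }
            update bigDeals;
        }
    }

    // One step removed from OpportunityUtils in the dependency chain
    public class OpportunityWrapLevel1 {
        public void EODProcessingWrapLevel1() {
            new OpportunityEOD().EODProcessing();
        }
    }

    // Injection variant - the implementation is passed in, so OpportunityUtils
    // is never referenced directly
    public class OpportunityEODInjection {
        public void EODProcessing(OpportunityUtilsIF utils) {
            List<Opportunity> todaysOpps =
                [SELECT Id, Name, Amount FROM Opportunity WHERE CreatedDate = TODAY];
            List<Opportunity> bigDeals = utils.getBigDeals(todaysOpps);
            for (Opportunity opp : bigDeals) {
                opp.Name += ' - BIG DEAL';
            }
            update bigDeals;
        }
    }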

After deploying these to a Spring '26 pre-release developer edition, I then changed the OpportunityUtils code to consider opportunities with a value of 300,000 and over as big deals, then tried to deploy it using the new -l RunRelevantTests Salesforce CLI option. There were unit test failures, but which ones?
  • OpportunityUtilsTest
    This test class was chosen, and the test for a single big deal failed. All as expected.

  • OpportunityEODTest
    This test class was chosen, and the EOD processing test failed. All as expected.

  • OpportunityWrapLevel1Test
    This test class was not chosen. I was surprised at this, as there is a dependency chain leading to OpportunityUtils.
    In case the selection of OpportunityEODTest skewed the results, I removed the tests from that class and re-ran from the start. It still wasn't chosen, which suggests to me that currently only tests for classes with direct dependencies on the changed Apex will be chosen.

  • OpportunityEODInjectionTest
    This test class was not chosen. This does not surprise me at all, as it would be really hard to pick up dynamically instantiated instances. The example I've given is straightforward, but the name could be generated by combining strings, through a lookup collection, or even configuration, so the only way to tell is to actually run the real code. I think this is a scenario where it would be up to the developer to ensure that this code is tested when its dependencies change, via the @IsTest(testFor='...') annotation - there's a sketch of what that might look like after this list.
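
This is roughly how I'd expect that to look - a sketch only, with the test factory from my actual code replaced by an inline Type.forName() call, and again assuming testFor accepts a comma-separated list of dependencies.

    // Explicitly tie this test to OpportunityUtils (and the injection class) so it
    // is selected when either appears in a RunRelevantTests deployment, even though
    // the test never references OpportunityUtils directly
    @IsTest(testFor='OpportunityUtils,OpportunityEODInjection')
    private class OpportunityEODInjectionTest {
        @IsTest
        static void appendsBigDealToLargeOpportunities() {
            // Insert opportunities either side of the big deal threshold
            List<Opportunity> opps = new List<Opportunity>{
                new Opportunity(Name='Small', StageName='Prospecting',
                                CloseDate=Date.today(), Amount=100000),
                new Opportunity(Name='Large', StageName='Prospecting',
                                CloseDate=Date.today(), Amount=500000)
            };
            insert opps;

            // The implementation is created dynamically from its name - nothing
            // here references OpportunityUtils directly, which is why the test
            // engine can't see the dependency
            OpportunityUtilsIF utils =
                (OpportunityUtilsIF) Type.forName('OpportunityUtils').newInstance();

            Test.startTest();
            new OpportunityEODInjection().EODProcessing(utils);
            Test.stopTest();

            // Every big deal should have had the suffix appended to its name
            for (Opportunity opp : [SELECT Name FROM Opportunity WHERE Amount >= 250000]) {
                Assert.isTrue(opp.Name.contains(' - BIG DEAL'));
            }
        }
    }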

I don't think this is too bad at all. I'd have liked the tests from the dependency chain to be picked up too, but I can see that is a bit of a balancing act. If every class that can possibly reach the changed code is executed, you could end up executing all of your tests every time. Taking the opposing view, if classes that could be impacted by a change aren't tested, what does that mean for our confidence in the deployed code? In this case I'm assuming good intentions, given this is in beta, and expecting this to be tightened up as the functionality makes its way towards general availability.

Conclusion


I like this new feature, but it requires careful consideration before relying on it for production (which, of course, you can't do yet, as it's in beta!). Personally I like to execute all tests whenever I deploy, but that isn't always feasible, especially if there is a large set of tests that have to execute serially and a high-frequency release cadence. In that scenario I'd likely use the @IsTest(testFor='...') annotation approach to retain tight control. If, however, I was working in a mature org that showed clear signs of the big ball of mud anti-pattern, I'd happily leave it up to Salesforce.

Oh, and the UI hasn't quite caught up with this new functionality, as the deployment status always says that tests weren't required even if some of them failed:


so if you want to know which tests were actually picked, you need to use the Salesforce CLI with the --json flag and parse the output.

If you are interested in learning more about Apex testing, check out my in-progress book Software Testing on the Salesforce Platform.

More Information