Wednesday, 26 May 2021

The Impact of System.debug

TL;DR

Your System.debug statements are consuming resources even when you aren't capturing debug logs. 
If you are passing complex datatypes as parameters to System.debug then they will consume more resources, as they will be converted to strings before being discarded (because you aren't capturing debug logs!)

Introduction

This week the London Salesforce Developers hosted Paul Battisson and his Turn it up to 11 - Improving the Performance of Your Apex code talk, which reminded me of something that has been on my list of things to look at - the impact of System.debug statements on your code.

In Paul's talk he points out that the act of measuring the consumed CPU time and writing the metric to the debug log consumes a small amount of CPU time. And many Apex developers know that turning on debug logging in Salesforce impacts the duration of a transaction, sometimes significantly if the debug log generates multiple megabytes of information - just leave the developer console open while you use your code-heavy application if you haven't experienced this.

What is less well known is that System.debug statements have an impact on your transactions even if you aren't capturing debug logs, as I will endeavour to show in the rest of this blog.

Methodology

I have a simple Apex class with a static method that loops 5,000 times, incrementing an index from 0 to 4999. A string that starts out life empty has the index concatenated with it each time through the loop, using the + operator. 

At the start of the method, the current time in milliseconds and the CPU time consumed are captured, and at the end of the method this is repeated and the differences (the elapsed milliseconds and the consumed CPU time for the method) are stored in a custom object.

public class LoggingOn 
{
    public static void LoggingOn()
    {
        Limits_Consumed__c lc=new Limits_Consumed__c();
        Long startMillis=System.now().getTime();
        Long startCPU=Limits.getCpuTime();
        
        String message='';
        for (Integer idx=0; idx<5000; idx++)
        {
            message+=idx;
        }
        
        lc.Total_Millis__c=System.now().getTime() - startMillis;
        lc.CPU_Time__c=Limits.getCpuTime() - startCPU;
        insert lc;
    }
}

The Tests

For each test, the method was executed twice:

  1. Via execute anonymous from the developer console, with debug logging enabled and the log level at the default of SFDC_DevConsole.
  2. As the only code in a trigger before update of a record (this was the quickest way I could execute the Apex outside of execute anonymous - the trigger is essentially a carrier for it; see the sketch below), with debug logging disabled.
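
The carrier trigger was nothing more than a call to the static method, along the lines of this sketch (the trigger name is invented and I'm assuming the Account object - any object with a record that can be updated would do):

trigger LoggingOnCarrier on Account (before update) 
{
    // the trigger exists purely to execute the method outside of execute anonymous
    LoggingOn.LoggingOn();
}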

1. No Debug Statements

Scenario 1 took 387 milliseconds elapsed time and consumed 154 milliseconds of CPU

Scenario 2 took 44 milliseconds elapsed time and consumed 45 milliseconds of CPU

I then added debug statements and ran the method via the two scenarios.

2. Adding a Debug Statement

This used System.debug to log the message variable each time through the loop:

System.debug('Message = ' + message);

Scenario 1 took 736 milliseconds elapsed time and consumed 285 milliseconds of CPU - you can clearly see the impact of debug statements when logging is enabled here.

Scenario 2 took 49 milliseconds elapsed time and 45 milliseconds of CPU - very little change here.

3. Extending the Debug Statement

The debug statement was extended to include the name of the current user:

System.debug('Message = ' + message + ', User ' + UserInfo.getName());

Scenario 1 took 1281 milliseconds elapsed time and consumed 328 milliseconds of CPU. An interesting point here is that after a very short time the message was being truncated and the user's name wasn't appearing, but the work was still being done to retrieve it and build the full debug string each time through the loop.

Scenario 2 took 120 milliseconds elapsed time and 109 milliseconds of CPU - starting to ramp up.

4. Including an sObject

At the start of the method, prior to the starting CPU and elapsed time being captured, an account record with four fields (Id, Name, AccountNumber, CreatedDate) was retrieved from the database. This was added to the debug statement:

System.debug('Message = ' + message + ', User ' + UserInfo.getName() + ', Account = ' + acc);

Scenario 1 took 1504 milliseconds elapsed time and consumed 494 milliseconds of CPU, but again most of the time the account record details weren't written to the log. 

Scenario 2 took 185 milliseconds elapsed time and 180 milliseconds of CPU - around a 50% increase.

5. Splitting Into Multiple Debug Statements

To ensure that the information could be seen in the debug logs, the single statement was split into three:

System.debug('Message = ' + message);
System.debug('User = ' + UserInfo.getName());
System.debug('Account = ' + acc);

Scenario 1 took 1969 milliseconds elapsed time and consumed 656 milliseconds of CPU, and this time all of the pertinent information was recorded each time through. 

Scenario 2 took 198 milliseconds elapsed time and 192 milliseconds of CPU - a small increase.

6. Adding an Array

The query to retrieve the record was extended to include the Id, Subject and Description of its two related cases, and an additional statement was added to debug them:

System.debug('Message = ' + message);
System.debug('User = ' + UserInfo.getName());
System.debug('Account = ' + acc);
System.debug('Cases = ' + acc.cases);

Scenario 1 took 2563 milliseconds elapsed time and consumed 1136 milliseconds of CPU - closing in on double the CPU, so clearly logging an array has quite an impact. 

Scenario 2 took 368 milliseconds elapsed time and 353 milliseconds of CPU, again close on double.

Reviewing the Numbers

Here are the numbers for the tests in an easy-to-consume format - all times are in milliseconds, the 1 and 2 suffixes refer to scenarios 1 and 2, and the percentage increase from the previous test is shown in brackets.

Test   Elapsed 1      CPU 1         Elapsed 2     CPU 2
1      387            154           44            45
2      736 (47%)      285 (46%)     49 (11%)      45 (0%)
3      1281 (60%)     328 (15%)     120 (144%)    109 (142%)
4      1504 (17%)     494 (50%)     185 (54%)     180 (65%)
5      1969 (31%)     656 (33%)     198 (7%)      192 (7%)
6      2563 (30%)     1136 (73%)    368 (86%)     353 (84%)

Looking at the scenario 1 results, we can see that the time increases with the number of debug statements and complexity of datatypes. This makes perfect sense as I'm doing more work to find out granular details about the state as the method proceeds.

Looking at the scenario 2 results, we can see that by not capturing debug logs the numbers are quite a lot smaller. However, the time still increases as the number of debug statements and the complexity of the datatypes increase, even though I'm not using any of the information.

By leaving the debug statements in, I'm forcing various datatypes to be converted to strings in order to pass them as parameters to the System.debug() method, which then does nothing with them. In the case of test 6, my transaction is over a third of a second longer due to a bunch of work that adds zero value - my users will be pleased!
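
To make that concrete, each statement is effectively doing something like the following before System.debug is even invoked - a sketch of the equivalent work rather than literally what the runtime generates:

String casesParam='Cases = ' + acc.cases;   // the related cases are converted to a string and concatenated
System.debug(casesParam);                   // ...and the result is thrown away, as nothing is capturing the log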

Delete All the Debugs?

So is the solution to delete all the debug statements before deploying to production? In many cases, yes. Often System.debug() statements are leftovers from problems encountered during development and can be safely deleted. But not in all cases.

If you are trying to track down an intermittent problem, maybe not. It might be useful to have the debug statements in there so that you can quickly enable logging for the user having the problem and see exactly what is happening. The downside to this is that the transaction is slowed down for everyone all the time, not just for the user encountering the problem while they are encountering it.

A better solution may be to allow the system administrator to control whether the System.debug() statements are executed through configuration. That way they are there if needed, but only executed when there is a problem to be tracked down.

To carry out a fairly blunt test of this, I created a custom metadata type with a checkbox to indicate if debugging is enabled. The custom metadata type is retrieved after the starting elapsed and CPU time have been captured, and is checked before each of the debug statements is executed, to try to simulate a little of the real world. My method now looks as follows:

public class LoggingOn 
{
    public static void LoggingOn()
    {
        Account acc=[select id, name, accountnumber, createddate,
                     (select id, subject, description from Cases)
                     from account where id='0011E00001ogFnaQAE'];

        Limits_Consumed__c lc=new Limits_Consumed__c();
        Long startMillis=System.now().getTime();
        Long startCPU=Limits.getCpuTime();
        
        Debugging__mdt systemDebugging=Debugging__mdt.getInstance('System');
        String message='';
        for (Integer idx=0; idx<5000; idx++)
        {
            message+=idx;
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Message = ' + message);
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('User ' + UserInfo.getName());
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Account = ' + acc);
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Cases = ' + acc.cases);
            }
        }
        
        lc.Total_Millis__c=System.now().getTime() - startMillis;
        lc.CPU_Time__c=Limits.getCpuTime() - startCPU;
        insert lc;
    }
}

Executing the final tests again with the custom metadata type set to disable debug statements showed a significant improvement:

Scenario 1 took 475 milliseconds elapsed time (-78%), and consumed 213 milliseconds of CPU (-81%)

Scenario 2 took 66 milliseconds elapsed time (-82%), and consumed 62 milliseconds of CPU (-82%)

Conclusion

Now I'm not suggesting that everyone starts building configuration frameworks to allow individual debug statements to be toggled on or off for specific users - for any decent sized Salesforce implementation, this would be a huge amount of work and an ongoing maintenance headache. What I am saying is think about the debug statements that you leave in your code when you deploy your code to production. While it might not feel like you need to worry about a few statements that debug record details, once you start to scale the volume it can slow down the user experience. Studies have shown that when an application is slow, the users also perceive it to be buggy regardless of whether that is actually the case - once a user takes against your app, they go all in!


Saturday, 15 May 2021

AuraEnabled Apex and User Access


  

Since the Winter 21 release of Salesforce, users have to be given explicit access to AuraEnabled classes that are used as controllers for Aura Components. BrightSIGN is no exception to this, but recently I encountered some strange behaviour around this that took a couple of days to get to the bottom of.

The Problem

A subscriber had reported an issue that when saving their signature image, they received an alert containing the text 'Save Failed: Null'. This sounded fairly straightforward - if an error occurs I trap it and return the details to the front end, where it triggers that alert. However, upon checking the controller, there was no code that set the error details as null. They were initialised as a success message and then switched to explicit error messages if something went wrong.
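
To illustrate the pattern - this is a simplified sketch rather than the actual BrightSIGN controller, and the class, method and parameter names are invented - the return value starts life as a success message and is only ever replaced with explicit error details, so it should never come back as null:

public with sharing class SignatureSaveController 
{
    @AuraEnabled
    public static String SaveSignature(Id parentRecordId, String imageBody)
    {
        String result='SUCCESS';
        try
        {
            // attach the signature image to the parent record here
        }
        catch (Exception e)
        {
            result='Save Failed: ' + e.getMessage();
        }
        return result;
    }
}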

Maybe it was an exception that I wasn't trapping correctly, although I couldn't see how. In order to save a signature for a record the user needed update access, so I tried a number of tests:

  • Using a parent record id that didn't exist
  • No permission to access the parent record
  • Read only permission on the parent record
  • Parent record not shared with the user
  • Parent record shared read only with the user
  • Trigger on attachment save that results in a catchable exception
  • Trigger on attachment save that breaches a governor limit and generates an uncatchable exception

In every case, I either received a detailed error message that explained the problem, or the server call returned an error and stack trace.

The subscriber was seeing this in a community, so I re-ran the tests with different licenses, but was still unable to reproduce the problem.

Access to AuraEnabled 


I'd been doing my testing in a new developer org where I'd installed the package for all users, so when I checked I found that everyone had explicit access to the Apex controller with the AuraEnabled method. It occurred to me that this might not be the case for the subscriber, but sadly, when I removed access I got a sensible error telling me that I didn't have access to the class. Just for the sake of completeness, I asked the subscriber to check this. Lo and behold, their user didn't have access to the class, and when they added it everything started working.

Looking at my client side code, this meant that when the callback from the server was invoked, the getState() method returned SUCCESS, indicating the action had executed successfully. However, the getReturnValue() method returned null, which meant it hadn't hit my method, as the return value was never set to null.

So it appeared that when the update was enforced to remove their users' access to the classes, it didn't completely remove it. Instead it looked like the user could successfully hit the server but not actually execute the method, just receiving a null response. A touch of the uncertainty principle, as there was both access and no access!

So if you are getting a null response from a controller from one of your Aura components and you don't understand why, check the class access - it might save you a day or two of head scratching!

My First (No Longer) Deleted Post

(Update: My post was reinstated - see the update at the end of this missive)

This week I experienced a first on Blogger - one of my posts was deleted. Out of the blue I received an email that my post had been flagged for breaching a content policy (Malware and Viruses) and deleted. 

Just like that, the post was gone. Of course on the internet nothing is ever gone that quickly, if ever! There are various sites out there that scrape the contents of blogs and host it themselves, and I cross post most of my content to Medium, but it's gone from my blog.

The post itself was about my first steps into using Substack - could it be that Blogger were worried about their users following me, Pied Piper style, over to an upstart competitor? This seems highly unlikely given the scale of Blogger - even if everyone that has ever read my blog decided to switch on the basis of one post, I'm fairly sure it wouldn't move the needle in any way that would be visible without a microscope.

What's highly likely is that there are automated tools involved that have picked up a false positive. 

Why do I think it's a false positive? Because as I mentioned above, the same post is hosted elsewhere and I've been through it with a fine-tooth comb, and all I can find is links to news sites or some of my pages that host Salesforce tools and content. Nothing that jumps out as a problem, and in every case the reader has to click a link to navigate to the content. 

Now don't get me wrong, I'm not crying about this - I don't pay to use Blogger so they can do what they like with their own platform. I'd really like to know some more details of what is wrong so that I don't do it again, but if they can't or don't want to tell me then there's not a lot I can do. I've been on here 10 years and this is the first incident of this nature, so it's hardly a vendetta.

The upside to this is that it made me realise that I haven't backed up my blog for a while, so I've taken care of that. If the balloon does go up and my blog suddenly disappears, at least I haven't lost all the ramblings I've spent years crafting!

Back up your work people, wherever it may be!

Update: I replied to the email from Blogger about my post being deleted, explaining that I'd checked the content and didn't know which part was causing the issue, and asking for more information so I could avoid the same problem in the future. 6 hours or so later I received another email telling me that it had been looked at again and reinstated. I might not be so lucky next time, so I'll be backing up regularly!

Saturday, 8 May 2021

The CLI GUI takes on Packages


Introduction

Second generation packaging via the Salesforce CLI is awesome, but it's the area where I spend the most time trying to remember what parameters the commands support, as well as figuring out the ids or names of the packages I'm working with.

I've actually had this functionality in the GUI for a while now, but I hadn't been using it in anger across all the various package types, so I didn't publicise it. Since then I've used it to create managed, unlocked and org dependent packages, and base and extension packages, so I think it's good enough to unleash on the world. The examples below are based on the org-dependent package that I created for London's Calling 2021.

Commands

The updated commands configuration file in the repo adds support for five package commands in their own tab:


(I have them in the order that I typically use them on a day to day basis, but they can easily be reordered by updating the commands.js file).

Create Package



Typically only used when I start a project, the Create Package command will default to the configured dev hub if there is one, but I can change it if for some reason I want to use another. Only dev hub orgs appear in the select list. The Path parameter will be used to update the packageDirectories property from the sfdx-project.json file.

Create Version

This is probably the command I use most when developing packages - creating a new version in the hope that I've finally fixed that elusive bug, but expecting further disappointment! 


When I click the Get Packages button, the available packages managed by the dev hub are retrieved and the sfdx-project.json checked to see if there is already a package defined that it can default to. I can choose any other package if I need to.


The other parameter of note is Definition File - when you create a second generation package, there's no developer edition where the code lives, so something like a scratch org is created as part of the upload process, where the code is deployed, tests run etc. Most of the time I just specify my existing scratch org definition file (config/project-scratch-def.json), but very occasionally I'll have a different shape required for the upload and create a dedicated file.

I can specify a key (password) required to install the package, or bypass this. And I always specify a long wait, as if the command doesn't finish synchronously then the details of the version don't get written to the sfdx-project.json.

Promote Version

There's always a bit of Stockholm Syndrome about this command when developing a package. At the start, I'll do pretty much anything rather than promote the version, as that fixes the functionality and commits me to the package contents, but after a few versions I'll be executing it without a second thought.


As before, the Get Packages button retrieves the packages being administered by the dev hub. Get Package Versions retrieves the versions for the selected package, saving me remembering names or ids. It also only shows my versions that haven't been promoted:


List Packages

A simple command - as its name suggests, it lists the packages currently being administered by the dev hub.


The package information is shown in the log panel, which automatically opens when the command is executed:


List Package Versions

Again, does exactly what it says on the tin. 


This time, though, I can reduce the results based on when the version was created or modified, and limit them to released versions only. These parameters might seem unnecessary when you are starting out with a package, but after a few years and 100+ versions you'll be glad of them.

Wrap Up

As usual, I've tested these commands on macOS and Windows 10, but if you come across any issues feel free to raise them at the GitHub repository.

Friday, 30 April 2021

Remote Work - The End of the Beginning?


As we head into May in the United Kingdom, COVID-19 cases and deaths have settled at around 2-2.5k and 10-40 respectively, so the planned relaxing of lockdown looks set to continue. Vaccine data from around the world makes us cautiously optimistic that the impact of catching the virus is significantly reduced, so it could be that once we re-open then we stay open. While this is very positive news, it does apply pressure to those of us who have been regularly writing about remote work, as the clock is ticking down on the "all remote all the time" approach, and interest in hearing about it will start to wane.

I've read a number of research items suggesting that people feel less productive when they work from home, in part because when they hit a problem they can't just look up and ask someone a question. Instead they either have to self-serve via Google or track someone down in chat and type out their question. While I can see the impact, it isn't the whole story. As the person who was usually on the receiving end of said questions, answering a large number of them every day, I'm often a lot more productive, as I can more easily get my head down and concentrate on some deep work. I can also choose to ignore my email/chat/phone much more easily than I can ignore someone who comes and stands next to me and starts talking! Obviously helping people is a huge part of my job, so I don't expect to go off the grid for days (or even hours) at a time, but even breaking off to tell someone I'm busy and ask if I can come back to them later disturbs my train of thought. I'd really like to see the numbers from both sides to determine the real impact!

The "Work from Anywhere" narrative is being dialled back to a hybrid approach of "Work from Anywhere  Half the Time and the Office the Other Half", which kind of ruins the anywhere aspect. If you have to go into the office 2-3 days a week, you'll need to keep living near to that office unless you really fancy commuting from the beach for several hours a day. To be fair, some companies are sticking to the fully remote option, but typically caveating with "where your role allows it", and what's the betting a lot of roles turn out not to allow it after all.

From a pure getting things done perspective, I feel like I've always embraced working from anywhere. Prior to the pandemic I used to do a fair bit of travelling, including a few trips every year to the US. This would typically involve several hours on buses/trains/tubes, arriving at an airport three or four hours early and then sitting for ten hours on a plane. With a little planning I could churn out work wherever I found myself, writing designs or documentation when I was offline and Apex and Lightning Components when I had connectivity. 

From a leadership, collaboration, and support perspective it's less than ideal - Zoom calls and messaging tools can cover a lot of this, but sometimes there really is no substitute for a face to face meeting where you can scribble on a whiteboard. But you don't need that all day every day either, which is why I think the classic remote work approach is likely to be rolled out more going forward - employees are based at home and don't have a desk in an office, but they spend a day or two a month meeting with their colleagues in person. 

One thing is for sure: remote work isn't settled as we tentatively venture back into different houses, pubs, and offices. In November 1942 Churchill gave a speech at the Lord Mayor's Luncheon at Mansion House where he said:

Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.

and that is where I feel we are with remote work. The beginning was the momentous turn of events that saw us pivot to "as remote as possible" with very little planning, and stick with it for over a year. We are now coming to the end of the beginning, as offices will soon be able to reopen. When that happens we enter the middle, as we try out lots of changes and tweak things as we go. The end, when we all agree that we have nailed it and remote work is as efficient as it can be, is still some way off. And that will be the norm until the next momentous event that derails everything!


Saturday, 24 April 2021

Book Review - Becoming a Salesforce Certified Technical Architect


 (Disclaimer: I didn’t purchase my copy of this book - I was sent a copy to review by Packt Publishing)

Introduction

Salesforce describes Certified Technical Architects as follows:

Salesforce Technical Architects serve as executive-level strategic advisors who focus on business transformation with unrivaled domain expertise in functional, platform and integration architecture. They communicate technical solutions and design tradeoffs effectively to business stakeholders, and provide a delivery framework that ensures quality and success.

Based on that description, it's not surprising that there's a huge amount of interest in becoming one. The path isn't straightforward though, with an awful lot of candidates falling at the last hurdle - the review board. This isn't a new development - in the 12 months after I qualified I heard about two week-long review board events where all 20 candidates failed. While that's obviously bad for the candidates, it's also not great for the judges - failing people isn't a fun job, and when you have to do it repeatedly the sessions become a chore.

One reason people fail is that they haven't got experience of presenting a solution and being judged on it. That has improved with community efforts to come up with example scenarios and run practice review boards. While this is good experience for the day of the board, it's not overly helpful in terms of preparation. And that is where this book comes in.

The Book for When You Book

This is the book you want to buy when you are about to book your review board slot. It won't turn you into a Technical Architect - you still need the experience and learning - but it will get you ready for the board. First off, it looks at what expertise you need to become a CTA - if you don't recognise yourself here and spot gaps, maybe hold off for a little while. It then takes you through the structure of the review board session and gives some useful tactics for how to approach it.

Then there are a bunch of chapters around the various areas that you need to shine at in order to pass the board, with mini-scenarios that are worked through with you. Rather than reading straight through, I'd strongly advise having a go yourself and then comparing your solution with the exemplar. If you have a different view, that doesn't mean you are wrong; just make sure you can justify it and your justification stacks up - it's highly likely that you'll be quizzed about the benefits of your approach over the other possible solutions.

The book then finishes off with a couple of full mock scenarios, with example solutions and presentation artefacts and script. Again, look to make the most of this by trying it yourself under exam conditions.

Allow Enough Time


I'm already a CTA so I didn't have to take on each of the scenarios and mocks, but it still took me a couple of months to read through the book in my free time and review it. Don't expect to skim this the night before the board, or to rip through it the weekend before. Set aside enough time to work through it properly and you'll reap the benefit.

The Key Advice

The most important piece of advice in the book, which crops up multiple times, is to own your solution. You've considered all the options and have come up with the best possible solution, so be proud of it when you present, and defend it with everything you have. 

In Summary

You can boost your chances of becoming a CTA with this book. It's as simple as that. Tameem Bahri has done a great job with it.




Saturday, 17 April 2021

Unpackaged Metadata for Package Creation Tests in Spring 21

Introduction

The Spring 21 release of Salesforce introduced a useful new feature for those of us using second generation managed or unlocked packages (which really should be all of us now) - the capability to have metadata available at the time of creating a new package version, but which doesn't get added to the package.

I've written before about how I use unpackaged directories for supplementary code, but this isn't code that is needed at creation time, it's typically utilities to help the development process or sample application code to make sure that the package code works as expected when integrated elsewhere. 

The Scenario

To demonstrate the power of the new unpackaged concept, here's my scenario:

I have a package that defines a framework for calculating the discount on an Opportunity through plug and play rule classes. The package provides a global interface that any rule class has to implement:

public interface DiscountRuleIF 
{
    Decimal GetDiscountPercent(Opportunity opp);
}

 The rules are configured through a Discount_Rule__c custom object, which has the following important fields:

  • Rule_Class__c - the name of the class with the rule interface implementation
  • Rule_Class_Namespace__c - the namespace of the class, in case the class lives in another package

And there's the rule manager, which retrieves all of the rules, iterates them, instantiates the implementing class and executes the GetDiscountPercent method on the dynamically created instance, eventually calculating the total discount across all rules:

global with sharing class DiscountManager 
{
    global Decimal GetTotalDiscount(Opportunity opp)
    {
        List<Discount_Rule__c> discountRules=[select id, Rule_Class__c, Rule_Class_Namespace__c
                                              from Discount_Rule__c];
        Decimal totalDiscount=0;

        for (Discount_Rule__c discountRule : discountRules)
        {
            Type validationType = Type.forName(discountRule.Rule_Class_Namespace__c,discountRule.Rule_Class__c);        
            DiscountRuleIF discountRuleImpl = (DiscountRuleIF) validationType.newInstance();
    
             totalDiscount+=discountRuleImpl.GetDiscountPercent(opp);

        }

        return totalDiscount;
    }
}

So the idea is that someone installs this package, creates rules locally (or installs another package that implements the rules), and when an opportunity is saved, the discount is calculated and some action taken - probably discounting the price, but they are only limited by their imagination.
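
As an illustration, a rule class that plugs into the framework might look something like this - a minimal sketch where the class name and the flat five percent discount are invented, and namespace prefixes are omitted:

public with sharing class FivePercentDiscountRule implements DiscountRuleIF
{
    // every opportunity gets a flat five percent discount, regardless of its details
    public Decimal GetDiscountPercent(Opportunity opp)
    {
        return 5;
    }
}

This would then be registered by creating a Discount_Rule__c record with Rule_Class__c set to 'FivePercentDiscountRule', leaving Rule_Class_Namespace__c blank as the class lives in the local org.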

Testing the Package

I don't want to include any rules with my package - it purely contains the framework to allow rules to be plugged in. Sadly, this means that I won't be able to test my rule manager class, as I won't have anything that implements the discount rule interface. For the purposes of checking my class, I can define some supplementary code using the technique from my earlier post via the following sfdx-project.json:

{
    "packageDirectories": [
        {
            "path": "force-app",
            "default": true,
            "package": "UNPACKMETA",
            "versionName": "ver 0.1",
            "versionNumber": "0.1.0.NEXT"
        },
        {
            "path": "example",
            "default": false
        }
    ],
    "namespace": "BGKBTST",
    "sfdcLoginUrl": "https://login.salesforce.com",
    "sourceApiVersion": "51.0"
}

Then I put my sample implementation and associated test classes into example/force-app/main/classes, and when I execute all tests I get 100% coverage.  I can also create a new package version with no problems. Unfortunately, when I promote my package version via the CLI, the wheels come off:

Command failed
The code coverage required to promote this version has not been met.
Please add additional test coverage and ensure the code coverage check
passes during version creation.

My sample implementation and the associated test code have been successfully excluded from the package, but that means I don't have the required test coverage. Before Spring 21 I'd sigh and pollute my package with code that I didn't want, just to satisfy the test coverage requirement. It would be a small sigh, but they add up over the years.
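
For context, the kind of test class that sits alongside the sample implementation in the example directory might look something like the following - a minimal sketch that assumes the FivePercentDiscountRule class sketched earlier, and uses an in-memory Opportunity as the rules only inspect the record:

@isTest
private class DiscountManagerTest 
{
    @isTest
    static void testTotalDiscountAcrossRules()
    {
        // register the sample rule - the namespace field is left blank as the class is local
        insert new Discount_Rule__c(Rule_Class__c='FivePercentDiscountRule');

        // an in-memory opportunity is enough, as nothing needs to be read back from the database
        Opportunity opp=new Opportunity(Name='Test Opportunity', StageName='Prospecting',
                                        CloseDate=Date.today());

        Decimal totalDiscount=new DiscountManager().GetTotalDiscount(opp);

        System.assert(totalDiscount == 5, 'Expected a flat five percent discount');
    }
}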

The unpackagedMetadata property

The new Spring 21 feature is activated by adding the unpackagedMetadata property to my sfdx-project.json, inside the default package directory stanza:

{
    "packageDirectories": [
        {
            "path": "force-app",
            "default": true,
            "package": "UNPACKMETA",
            "versionName": "ver 0.1",
            "versionNumber": "0.1.0.NEXT",
            "unpackagedMetadata": {
                "path": "example"
            }
        },
        {
            "path": "example",
            "default": false
        }
    ],
    "namespace": "BGKBTST",
    "sfdcLoginUrl": "https://login.salesforce.com",
    "sourceApiVersion": "51.0",
}

Note that I still have my supplementary directory defined, but I'm also flagging it as metadata that should be used at create time to determine the test coverage, but not included in the package.

Having created a new version with this configuration, I'm able to promote it without any problem. Just to double check, I installed the package into a scratch org and the unpackaged metadata was, as expected, not present.

Reduced Coverage in the Subscriber Org

One side effect of this is that the code coverage across all namespaces will be reduced in the subscriber org. Depending on your point of view, this may cause you as the package author some angst. It may also cause the admin who installed the package into their org some angst. 

Personally I don't think it matters - the coverage of managed package code doesn't have any impact on the subscriber org, and I don't think it's possible to craft tests inside a managed package that are guaranteed to pass regardless of what org they are installed into. I could do it for my sample scenario, but if it's anything more complex than that, the tests are at the tender mercy of the org configuration. I create an in-memory Opportunity to test my discount manager, but if I had to insert it into the database, there's any amount of validation or expected data that could trip me up. While there are techniques that could be used to help ensure my tests can pass, mandating that everyone who installs my package has to have used them across all their automation seems unlikely to be met with a positive reaction.

It is more likely to mean that running all tests across all namespaces will succeed, as the tests that could easily be derailed by the org configuration can be left out of the package, leaving just the ones that are completely isolated to the package code in place - so swings and roundabouts.

What it does mean is that the package author has the choice as to whether the tests are packaged or not, which seems the way it should be to me.
