Saturday, 31 July 2021

The CLI GUI Plays Favourites

 


Introduction


My last change to the CLI GUI was to add the capability to decode a Salesforce CLI command string and regenerate the command page for it. This was the first step towards favourites functionality, so that I could save frequently used commands and quickly re-run them. As usual, it wasn't quite as straightforward as that, but this afternoon I pushed an update to the repository to add support for favourites.

A Few of My Favourite Things


If you are already using the CLI GUI, you'll need to run npm install, as I'm using a different mechanism to convert the command string into its component parameters - string-argv. Previously I had a very complicated regular expression that I found online, but it didn't handle string parameters containing spaces, or MacOS directories, particularly well.

The first change you'll notice when the GUI starts is the new datalist and a couple of buttons. The datalist allows you to select from the favourites you've saved, and you can either Open the command window with the favourite decoded, or run it immediately. As an aside, running a favourite that does something destructive or that can't be undone (like deleting a scratch org) is a pretty dangerous thing to do, so you'll be asked to confirm it. I really only use this for opening orgs without having to go via the command window:


Obviously there won't be anything in the datalist yet, as you'll need to create a favourite or two first.

Favourites are added and removed from the command window, which now has a section at the bottom of the page for this. Once you have set the various parameters, give it a name and click the Save Favourite button:


Note that the directory you are working in is saved with the favourite.

Returning to the main screen, the new favourite is now available in the datalist, and selecting it enables the buttons:


Clicking the Open button opens the command window with the favourite details, and a button to remove it:



Clicking the Run button first asks you to confirm you really want to do it:



If you choose to continue, the command will execute immediately, in the directory you were in when you saved the favourite, and the output is shown in a modal similar to that of the command window:



Right now there's no mechanism to update a favourite - you have to remove the existing one and then save the updated command with the same name. Yes, it's a couple more clicks, but think of how grateful you'll be if I do add this capability.

The latest code is in the Github repository, and this week I'm back to the usual test environments of MacOS and Windows 10.  I haven't flogged it to within an inch of its life though, so I'm sure if you look hard enough you'll find something that doesn't work. If you do, feel free to raise an issue.


Saturday, 10 July 2021

The CLI GUI Decodes Command Strings

Introduction

Something I've been meaning to add to the CLI GUI for a while is the ability to enter a CLI command string and have it decoded into a command page with the appropriate parameters set. This allows someone to understand more about a command without having to run a help command and match the flags to parameters themselves, and it also means the additional instructions defined in the GUI configuration file can be displayed.

I finally got around to adding that capability this week, which wasn't quite as clunky as I was expecting it to be, although to be fair it is still a little clunky.

The Solution

The decoder is offered as part of a new group named 'Built In' - this is so named because it is added to the configured set of commands in the code and will always be present.


Clicking this opens up the command window with a single input to capture the command string:

In this case it's the command I've used in the past to create an org dependent package, having made sure to write it down for the next time I needed it.

Clicking the Decode Command button changes the page to reflect this command, along with any additional instructions. This is also where the first clunky bit appears: I always use the short version of the flags, so the command generated on this page doesn't match the pasted one character for character, although it is effectively the same.


The other clunky part is excluding certain functionality from the page when the command is the decoder, but unless I created a whole new page just for that, I was always going to have to do something along those lines. My command configuration file also now knows about the long and short version of every flag, so that it can decode either. 

The Latest Code

I've pushed the latest code to the Github repository - at the moment it's only tested against MacOS, as I won't have access to my Windows machine for a day or so. I can't think of any reason why these changes wouldn't work equally well on Windows, but if you hit problems before I get a chance to test, raise an issue in the repo.

One thing to bear in mind is that this can only decode the commands and parameters that it knows about. For example, I haven't configured any of the commands to extract data, so there will be no match found when decoding those. 




Saturday, 3 July 2021

Low Code Still Needs Unit Tests

Introduction

First off, let me make clear this is not dumping on low code or some King Canute-like attempt to push back the advance of clicks not code in favour of having to write every tiny automation in Apex. When I started working with Salesforce back in 2008, I didn't look at workflow rules and think "Oh nose, I wanted to write a bunch of code to update the contents of a rich text area field", I thought "That's cool, someone else can configure the simple automation and I get to work on the more challenging scenarios".

On the developer side we've spent years convincing people that unit tests are a good thing, and we even have test driven development that turns the requirements into tests as a first step, after which we write the code that allows the tests to pass. Some people still moan about having to write tests to deploy to production, but most of us recognise this is a good thing and we'll be grateful we had to do it at some point in the future. 

So when I see messaging (from Salesforce and many others) that not being able to write unit tests is a benefit of whatever Low Code platform is being talked about, it strikes me as an attempt to brand a negative as a positive without explaining the downsides. You might be able to get things into production quicker, but realistically you should be swapping unit test effort for extended QA and UAT effort, not just trousering the difference. If you don't, it's quite likely you are getting bugs into production quicker!

Unit Testing is Important

Unit testing is basically writing code that verifies that other code works as expected. Now that low code includes more and more business logic, and the cyclomatic complexity is increasing, unit testing is needed more than ever. As an aside, I felt the same when Aura components were first introduced and moved a bunch of business logic to the front end, so I did a talk at Dreamforce on unit testing those.

Unit testing brings the following benefits, which apply equally to all types of code:

  • The nearer to the developer a bug is found, the less it costs to rectify.
  • With good test coverage and appropriate assertions, you can make changes with confidence that if you break behaviour that someone else is relying on, one or more tests will fail (this is the main benefit for me).
  • If someone else makes a change that breaks your code (adding a validation rule is the classic situation here), the next time the unit tests run this will be picked up. 
  • It keeps your users happier, as more bugs are found during the development cycle rather than after the system has been released to them.
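
For anyone coming at this from the declarative side, here's a minimal, contrived sketch of what "code that verifies other code works as expected" looks like in Apex - a hypothetical DiscountCalculator class and a test that asserts its behaviour (in a real org these would be two separate classes):

public class DiscountCalculator
{
    // Applies a flat 10% discount to the supplied amount
    public static Decimal applyDiscount(Decimal amount)
    {
        return amount * 0.9;
    }
}

@isTest
private class DiscountCalculatorTest
{
    @isTest
    static void testTenPercentDiscountApplied()
    {
        Decimal result=DiscountCalculator.applyDiscount(100);

        // If someone changes the discount logic, this assertion fails and the
        // change gets looked at before it reaches the users
        System.assertEquals(90.0, result, 'Expected a 10% discount on 100');
    }
}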

Testing No and Low Code

No Code tends to be discrete units that can be used to apply automation, like a workflow rule that sends an email and updates some fields when a record transitions to a specific state, or a validation rule that checks dependencies between fields. There's no point in unit testing that with many different combinations of record fields, for the same reason that in Apex we don't write unit tests to check that the compiler and runtime are working as expected. The workflow engine is tested by Salesforce and we can trust it. Manually running a few records through is fine for testing a scenario like this, as the logic that causes the workflow or validation rule to fire is typically very small and rarely changes.

In Low Code, a number of these kinds of items are assembled into more complex automation in a flow, including conditional logic and loops, which is much closer to traditional code. The medium of expression might be different, but we are now in a world of complexity and reuse, especially if it's a subflow that can be dropped into any other flow. In this world it makes sense to run the flow with a variety of inputs, with the underlying database in a variety of states, as users with a variety of profiles, and with many different combinations of these. Doing all this manually every time there is a change isn't going to be practical. Doing it via test automation tools is possible, but that tends to test an entire transaction rather than the smallest unit, and it can't happen in production as the changes aren't rolled back at the end of the test.

Right now the only way to do this in a genuine test environment is to write Apex tests. This somewhat defeats a key benefit of flow for smaller organisations - they would still need a developer on staff to write the unit tests for the flows that the admins have created. (Slightly off topic, I also think it would be hard to find a developer who wanted that job for any length of time, so you'd have to be prepared for some significant churn.) Apex can't hook directly into the flow engine in all cases though, so for record-triggered flows there's a compromise: the transaction must actually complete, after which verification that the flow had the desired effect can take place. This is more like an integration test, as there are other aspects of automation taking place that could impact the work done by the flow, but it's definitely better than nothing. Once you have Apex tests, you can take advantage of many existing solutions for scheduled execution, making them part of a deployment pipeline and so on.
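
To make that concrete, here's a minimal sketch of the shape such a test takes for a record-triggered flow. The flow itself, the Discount_Approved__c checkbox and the stage values are all hypothetical examples rather than anything from a real org - the point is that the transaction completes and the flow's effect is then verified:

@isTest
private class OpportunityDiscountFlowTest
{
    @isTest
    static void testFlowApprovesDiscount()
    {
        // Hypothetical scenario - a record-triggered flow is assumed to tick
        // Discount_Approved__c when an opportunity reaches Negotiation/Review
        Opportunity opp=new Opportunity(Name='Flow Test',
                                        StageName='Prospecting',
                                        CloseDate=Date.today().addMonths(1));
        insert opp;

        opp.StageName='Negotiation/Review';
        update opp;    // the flow fires as part of this transaction

        // Verify the effect once the transaction has completed - note that any
        // other automation on Opportunity could also have contributed
        opp=[select Discount_Approved__c from Opportunity where Id=:opp.Id];
        System.assertEquals(true, opp.Discount_Approved__c,
                            'Expected the flow to approve the discount');
    }
}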

What Might the Future Hold

Ideally what we need is a Low Code test builder that can execute flows/subflows in isolation and generates actual unit tests, with assertions, that run in an isolated environment where the transactions are rolled back upon completion. Generating Apex seems like a reasonable approach to me, as does targeting another engine. Given all the effort that has gone into flow over the last few years, it feels like this should be doable.

Many years ago when Salesforce was seen as shadow IT, my approach to bring the technology departments into the fold was to tell them how their governance and best practice was needed to ensure that things were done properly and were able to scale. Bringing best practice from the Pro Code world feels like a good way to get everyone on board with the Low Code approach.


Saturday, 26 June 2021

Extracting Limits Information from Test Debug Logs

(The follow-up to No Limits by 2 Unlimited)

Introduction

In large enterprise Salesforce implementations there's often a particular piece of functionality that is limit-poor - it consumes most of one or more limits as a matter of course.  It might have to retrieve information from myriad standard and custom objects, update a whole load of other objects and do a bunch of processing in between that consumes a lot of CPU time, and when it is invoked with a large number of records it flies close to the maximum values like Icarus flying too close to the sun. Often these areas will have unit tests so that unlike Icarus they don't fall into the sea and drown if the code is changed to add a few more SOQL queries.

Often these tests will try to ensure the published maximum number of records can still be handled, but the problem there is that by the time the test fails you have nowhere to go. Another approach is to determine how much of each limit is currently consumed and fail the test if it goes over a threshold, which leads to brittle tests that fail for minor changes or, in the case of CPU time, flappy tests that sometimes pass and sometimes fail even though the code hasn't changed, depending on what else is happening on the Salesforce instance.
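
To illustrate that second approach, the test ends up asserting against limit budgets, along the lines of the sketch below - InvoiceGenerator and the budget figures are hypothetical stand-ins for the limit-poor functionality:

@isTest
private class InvoiceGenerationLimitsTest
{
    @isTest
    static void testLimitBudgetsNotExceeded()
    {
        Integer queriesBefore=Limits.getQueries();
        Integer cpuBefore=Limits.getCpuTime();

        // Hypothetical limit-poor functionality under test
        InvoiceGenerator.generateMonthlyInvoices();

        // Brittle - a couple of extra SOQL queries, or a busy instance inflating
        // the CPU time, fails the test even though nothing is functionally wrong
        System.assert(Limits.getQueries() - queriesBefore <= 80,
                      'SOQL query budget exceeded');
        System.assert(Limits.getCpuTime() - cpuBefore <= 8000,
                      'CPU time budget exceeded');
    }
}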

Ideally there would be some way to figure out when the consumed limits change, but without the nuclear option of failing the unit test. In Apex, everything a test does is rolled back when the test completes, regardless of success or failure, so sending notifications isn't an option, nor is saving the limit information to an object. This is something I've been wrestling with for years and haven't been able to come up with a solution for. Until now. And in a turn of events that will surprise nobody, it involves a Salesforce CLI plug-in.

The Plug-in

The limitlog plug-in executes the unit test supplied by the user, then relies on the fact that unit tests always generate a debug log - from the Salesforce help on Debug Logs Order of Precedence:

If you don’t have active trace flags, synchronous and asynchronous Apex tests execute with the default logging levels.

The most recent log for the user is then retrieved, and the limits information parsed from it. There's a bit of a wrinkle around this if your org has a namespace, so the plug-in allows you to specify one if you need something other than the default.

The limits information is then returned in JSON format or written to a Salesforce custom object - you can find the metadata for this at the limitlogsalesforce Github repository. 

The metadata also includes a flow that checks the limit information from the previous run for the unit test and sends me an email if anything is different:

So now I can add the plug-in to my continuous integration operations and get notified if the limits consumption has gone up, without having to force unit test failures. 

Installing and Executing

If you want to capture the limit information in Salesforce, the first thing to do is spin up a scratch org or developer edition, clone the limitlogsalesforce Github repository and push/deploy to this org. Make sure to authenticate against it via the CLI.

Next, identify or create a unit test that has some limit impact - this can be in the same developer/scratch org or a different one.

Then install the plug-in into your CLI:

    sfdx plugins:install limitlog

Then execute the test command from the plug-in as follows:

sfdx bblimlog:test -n LimitLogSample -r KABDEV -t LimitLogSampleTest.DoWorkTest -s KAB_TUTORIAL -u LIMITLOG -w

where:

  • -n LimitLogSample is the unique name for this test - the flow uses this to find the previous record and figure out if anything has changed.
  • -r KABDEV is the username to run the tests as. If your tests are in the same org as you are writing the data, you can omit this.
  • -t LimitLogSampleTest.DoWorkTest is the unit test method to run.
  • -s KAB_TUTORIAL is the namespace of my dev org - if you don't have a namespace, omit this to pick up the default.
  • -u LIMITLOG is the username for the org that the test information will be written to.
  • -w says to write the limits objects to Salesforce. If you don't supply this, make sure to add the --json switch to see the output.

You will then see output similar to the following:

Executing test LimitLogSampleTest.DoWorkTest
Retrieving test log file
Extracting limits information
Writing limits information to Salesforce
Limits information written to Salesforce

and the limits record will be available in the Salesforce UI:


and here's the list of records in my org that caused the notifications to be sent when the limits increased:


In future blogs I'll go through how this works under the hood, but this feels like enough for now. As this is the first version of the plug-in I'm sure there will be some scenarios it doesn't handle - if you come across one of these, please raise an issue in the plug-in source repo and I'll see what I can do.

Friday, 18 June 2021

Permission Set Group Assignments with Expiration Dates

Introduction

Permission set group assignments with expiration dates are in beta in Summer 21, and do pretty much what it says on the tin - when a permission set group is assigned to a user, an expiration date can be specified after which the assignment will be removed. This might seem like a small change, but it's a powerful one. The release notes mention the obvious use case - someone needing additional permissions for the duration of a project, but there are a fair few more that spring to my mind. 

Use Cases


Pilot Program 

If you are trialling some new functionality and you want everyone to try it out and give feedback by a certain date, provide access to the app/tab/custom objects through a permission set group and set the expiration date to the pilot end date. The artificial scarcity introduced by a hard deadline usually focuses effort.

Scheduled Elevated Permissions

Does your accounts team need access to a bunch of Salesforce data to carry out invoicing, but only on the last day of the month? You can create permission set assignments programmatically, so just set up a scheduled Apex job that supplies the expiration date, along the lines of:

PermissionSetAssignment psa=new PermissionSetAssignment(
                                  AssigneeId='00580000001ju2CAAQ',
                                  PermissionSetGroupId='0PG1E000000PAv7WAG',
                                  ExpirationDate=Date.newInstance(2021, 06, 20));
    
insert psa;

and there's no need to clean up the access later.
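
A sketch of the scheduled job wrapper might look something like the following - the hardcoded ids and the one-day window are purely illustrative, not something you'd want in real life:

public class MonthEndAccessScheduler implements Schedulable
{
    public void execute(SchedulableContext sc)
    {
        // Grant the invoicing permission set group until the end of tomorrow -
        // the ids are hardcoded for illustration only
        PermissionSetAssignment psa=new PermissionSetAssignment(
                                          AssigneeId='00580000001ju2CAAQ',
                                          PermissionSetGroupId='0PG1E000000PAv7WAG',
                                          ExpirationDate=Date.today().addDays(1));
        insert psa;
    }
}

Scheduling it for 6am on the last day of each month is then a one-liner from execute anonymous, something like System.schedule('Month end invoicing access', '0 0 6 L * ?', new MonthEndAccessScheduler());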

Elevated Permissions for an Event

When attending a trade show, you often want those attending to be able to capture leads even though that isn't their usual role. Assign them the permission and expire it at the end of the day of the event and it's one less thing to worry about afterwards.

Additional Help for New Joiners

When new staff are being onboarded to Salesforce, it can be useful to give them additional information or a way to get help on the pages that they regularly use, but once they have found their feet you can remove the stabilisers (training wheels for our friends across the pond). Create a group with a permission set containing a New Joiners custom permission and assign that to new joiners with an expiration date of two weeks' time, and you can conditionally display information based on that.


 Like an explanation of what a Lead is and how it is worked
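
Checking the custom permission from Apex is a one-liner via FeatureManagement - a minimal sketch, assuming a hypothetical custom permission with the API name New_Joiner (a Lightning Web Component could equally import the custom permission directly):

public with sharing class NewJoinerHelpController
{
    // New_Joiner is a hypothetical custom permission API name
    @AuraEnabled(cacheable=true)
    public static Boolean isNewJoiner()
    {
        return FeatureManagement.checkPermission('New_Joiner');
    }
}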


Maybe Another


This one I haven't really thought through, but it relates to the sudo command, which on Unix-like systems allows you to take on the security privileges of another user, typically the super user. Using the programmatic assignment from above, users with the appropriate permissions could elect to give themselves additional permissions for the day, allowing them to carry out destructive changes when they need to, but stopping them from accidentally deleting records on other occasions.

The One that Got Away


My favourite use case that I came up with turned out not to be possible. It was going to be punishing bad actors and the example was removing the ability for a user to change the amount or probability on opportunities if they were doing this too often. A muting permission set seemed like it would offer this capability, but sadly these can only mute other permissions granted from the same permission set group rather than remove a permission from a user regardless of where it came from. Shame.



 

Wednesday, 26 May 2021

The Impact of System.debug

TL;DR

Your System.debug statements are consuming resources even when you aren't capturing debug logs. 
If you are passing complex datatype parameters to System.debug then they will consume more resources, as they will be converted to strings (before being discarded as you aren't capturing debug logs!)

Introduction

This week the London Salesforce Developers hosted Paul Battisson and his Turn it up to 11 - Improving the Performance of Your Apex Code talk, which reminded me of something that has been on my list of things to look at - the impact of System.debug statements on your code.

In Paul's talk he points out that the act of measuring the consumed CPU time and writing the metric to the debug log consumes a small amount of CPU time. And many Apex developers know that turning on debug logging in Salesforce impacts the duration of a transaction, sometimes significantly if the debug log generates multiple megabytes of information - just leave the developer console open while you use your code-heavy application if you haven't experienced this.

What is less well known is that System.debug statements have an impact on your transactions even if you aren't capturing debug logs, as I will endeavour to show in the rest of this blog.

Methodology

I have a simple Apex class with a static method that loops 5,000 times, incrementing an index from 0 to 4999. A string that starts out life empty has the index concatenated with it each time through the loop, using the + operator. 

At the start of the method, the current time in milliseconds and the CPU time consumed are captured, and at the end of the method this is repeated and the difference (the elapsed milliseconds and consumed CPU time for the method) are stored in a custom object.

public class LoggingOn 
{
    public static void LoggingOn()
    {
        Limits_Consumed__c lc=new Limits_Consumed__c();
        Long startMillis=System.now().getTime();
        Long startCPU=Limits.getCpuTime();
        
        String message='';
        for (Integer idx=0; idx<5000; idx++)
        {
            message+=idx;
        }
        
        lc.Total_Millis__c=System.now().getTime() - startMillis;
        lc.CPU_Time__c=Limits.getCpuTime() - startCPU;
        insert lc;
    }
}

The Tests

For each test, the method was executed twice:

  1. Via execute anonymous in the developer console, with debug logging enabled and the log level at the default of SFDC_DevConsole.
  2. As the only code in a trigger before update of a record, with debug logging disabled (this was the quickest way I could execute the Apex outside of execute anonymous - the trigger is essentially a carrier for it).

1. No Debug Statements

Scenario 1 took 387 milliseconds elapsed time and consumed 154 milliseconds of CPU

Scenario 2 took 44 milliseconds elapsed time and consumed 45 milliseconds of CPU

I then added debug statements and ran the method via the two scenarios.

2. Adding a Debug Statement

This used System.debug to log the message variable each time through the loop:

System.debug('Message = ' + message);

Scenario 1 took 736 milliseconds elapsed time and consumed 285 milliseconds of CPU - you can clearly see the impact of debug statements when logging is enabled here.

Scenario 2 took 49 milliseconds elapsed time and 45 milliseconds of CPU - very little change here.

3. Extending the Debug Statement

The debug statement was extended to include the name of the current user:
System.debug('Message = ' + message + ', User ' + UserInfo.getName());

Scenario 1 took 1281 milliseconds elapsed time and consumed 328 milliseconds of CPU. An interesting aspect here is that after a very short period of time, the message aspect was being truncated and the user's name wasn't appearing, but the work was still being done to retrieve it and build the full debug string each time through the loop.

Scenario 2 took 120 milliseconds elapsed time and 109 milliseconds of CPU - starting to ramp up.

4. Including an sObject

At the start of the method, prior to the starting CPU and elapsed time being captured, an account record with four fields (Id, Name, AccountNumber, CreatedDate) was retrieved from the database. This was added to the debug statement:

System.debug('Message = ' + message + ', User ' + UserInfo.getName() + ', Account = ' + acc);

Scenario 1 took 1504 milliseconds elapsed time and consumed 494 milliseconds of CPU, but again most of the time the account record details weren't written to the log. 

Scenario 2 took 185 milliseconds elapsed time and 180 milliseconds of CPU - around a 50% increase.

5. Splitting Into Multiple Debug Statements

To ensure that the information could be seen in the debug logs, the single statement was split into three:

System.debug('Message = ' + message);
System.debug('User = ' + UserInfo.getName());
System.debug('Account = ' + acc);

Scenario 1 took 1969 milliseconds elapsed time and consumed 656 milliseconds of CPU, and this time all of the pertinent information was recorded each time through. 

Scenario 2 took 198 milliseconds elapsed time and 192 milliseconds of CPU - a small increase.

6. Adding an Array

The query to retrieve the record was extended to include the Id, Subject and Description of its 2 related cases, and an additional statement added to debug this:
System.debug('Message = ' + message);
System.debug('User = ' + UserInfo.getName());
System.debug('Account = ' + acc);
System.debug('Cases = ' + acc.cases);

Scenario 1 took 2563 milliseconds elapsed time and consumed 1136 milliseconds of CPU - closing in on double the CPU, so clearly logging an array has quite an impact. 

Scenario 2 took 368 milliseconds elapsed time and 353 milliseconds of CPU, again close on double.

Reviewing the Numbers

Here are the numbers for the tests in an easy to consume format, with the percentage increase from the previous test shown in brackets.

Test | Elapsed 1 (ms) | CPU 1 (ms) | Elapsed 2 (ms) | CPU 2 (ms)
1    | 387            | 154        | 44             | 45
2    | 736 (47%)      | 285 (46%)  | 49 (11%)       | 45 (0%)
3    | 1281 (60%)     | 328 (15%)  | 120 (144%)     | 109 (142%)
4    | 1504 (17%)     | 494 (50%)  | 185 (54%)      | 180 (65%)
5    | 1969 (31%)     | 656 (33%)  | 198 (7%)       | 192 (7%)
6    | 2563 (30%)     | 1136 (73%) | 368 (86%)      | 353 (84%)

(Elapsed 1 / CPU 1 are scenario 1 with logging enabled, Elapsed 2 / CPU 2 are scenario 2 with logging disabled; all times in milliseconds.)

Looking at the scenario 1 results, we can see that the time increases with the number of debug statements and complexity of datatypes. This makes perfect sense as I'm doing more work to find out granular details about the state as the method proceeds.

Looking at the scenario 2 results, we can see that by not capturing debug logs, the numbers are quite a lot smaller. However, the time still increases as the number of debug statements and the complexity of the datatypes increase, even though I'm not using any of the information.

By leaving the debug statements in, I'm forcing various datatypes to be converted to strings in order to pass them as parameters to the System.debug() method, which will do nothing with them. In the case of test 6, my transaction is over a third of a second longer due to a bunch of work that adds zero value - my users will be pleased!

Delete All the Debugs?

So is the solution to delete all the debug statements before deploying to production? In many cases, yes. Often System.debug() statements are leftovers from problems encountered during development and can be safely deleted. But not in all cases.

If you are trying to track down an intermittent problem, maybe not. It might be useful to have the debug statements in there so that you can quickly enable logging for the user having the problem and see exactly what is happening. The downside to this is the transaction is slowed down for everyone all the time, not just the user encountering the problem when they are encountering it.

A better solution may be to allow the system administrator to control whether the System.debug() statements are executed through configuration. That way they are there if needed, but only executed when there is a problem to be tracked down.

To carry out a fairly blunt test of this, I created a custom metadata type with a checkbox to indicate if debugging is enabled. The custom metadata type is retrieved after the starting elapsed and CPU time have been captured, and is checked before each of the debug statements is executed, to try to simulate a little of the real world. My method now looks as follows:

public class LoggingOn 
{
    public static void LoggingOn()
    {
        Account acc=[select id, name, accountnumber, createddate,
                     (select id, subject, description from Cases)
                      from account where id='0011E00001ogFnaQAE'];

        Limits_Consumed__c lc=new Limits_Consumed__c();
        Long startMillis=System.now().getTime();
        Long startCPU=Limits.getCpuTime();

        Debugging__mdt systemDebugging=Debugging__mdt.getInstance('System');
        String message='';
        for (Integer idx=0; idx<5000; idx++)
        {
            message+=idx;
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Message = ' + message);
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('User ' + UserInfo.getName());
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Account = ' + acc);
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Cases = ' + acc.cases);
            }
        }

        lc.Total_Millis__c=System.now().getTime() - startMillis;
        lc.CPU_Time__c=Limits.getCpuTime() - startCPU;
        insert lc;
    }
}

Executing the final tests again with the custom metadata type set to disable debug statements showed a significant improvement:

Scenario 1 took 475 milliseconds elapsed time (-78%), and consumed 213 milliseconds of CPU (-81%)

Scenario 2 took 66 milliseconds elapsed time (-82%), and consumed 62 milliseconds of CPU (-82%)

Conclusion

Now I'm not suggesting that everyone starts building configuration frameworks to allow individual debug statements to be toggled on or off for specific users - for any decent sized Salesforce implementation, this would be a huge amount of work and an ongoing maintenance headache. What I am saying is think about the debug statements that you leave in your code when you deploy to production. While it might not feel like you need to worry about a few statements that debug record details, once you start to scale the volume it can slow down the user experience. Studies have shown that when an application is slow, users also perceive it to be buggy regardless of whether that is actually the case - once a user takes against your app, they go all in!


Saturday, 15 May 2021

AuraEnabled Apex and User Access


  

Since the Winter 21 release of Salesforce, users have to be given explicit access to AuraEnabled classes that are used as controllers for Aura Components. BrightSIGN is no exception to this, but recently I encountered some strange behaviour around this that took a couple of days to get to the bottom of.

The Problem

A subscriber had reported an issue that when saving their signature image, they received an alert containing the text 'Save Failed: Null'. This sounded fairly straightforward - if an error occurs I trap it and return the details to the front end, where it triggers that alert. However, upon checking the controller, there was no code that set the error details to null. They were initialised as a success message and then switched to explicit error messages if something went wrong.
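
For context, the controller follows a common pattern, along the lines of the simplified sketch below - the names are hypothetical and this isn't the actual BrightSIGN code, but the key point is that the response starts life as a success and is only switched to an error if something goes wrong, so a null should never come back:

public with sharing class SignatureController
{
    public class SaveResult
    {
        @AuraEnabled
        public Boolean success {get; set;}
        @AuraEnabled
        public String message {get; set;}
    }

    @AuraEnabled
    public static SaveResult saveSignature(Id parentId, String imageBody)
    {
        // Initialised as a success - only changed if something goes wrong
        SaveResult result=new SaveResult();
        result.success=true;
        result.message='Signature saved';

        try
        {
            insert new Attachment(ParentId=parentId,
                                  Name='Signature.png',
                                  Body=EncodingUtil.base64Decode(imageBody));
        }
        catch (Exception e)
        {
            result.success=false;
            result.message='Save Failed: ' + e.getMessage();
        }

        return result;
    }
}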

Maybe it was an exception that I wasn't trapping correctly, although I couldn't see how. In order to save a signature for a record the user needed update access, so I tried a number of tests:

  • Using a parent record id that didn't exist
  • No permission to access the parent record
  • Read only permission on the parent record
  • Parent record not shared with the user
  • Parent record shared read only with the user
  • Trigger on attachment save that results in a catchable exception
  • Trigger on attachment save that breaches a governor limit and generates an uncatchable exception

In every case, I either received a detailed error message that explained the problem, or the server call returned an error and stack trace. 

The subscriber was seeing this in a community, so I re-ran the tests with different licenses, but was still unable to reproduce the problem.

Access to AuraEnabled 


I'd been doing my testing in a new developer org where I'd installed the package for all users, so when I checked I found that everyone had explicit access to the Apex controller with the AuraEnabled method, but it occurred to me that might not be the case for the subscriber. Sadly, when I removed access, I got a sensible error telling me that I didn't have access to the class. Just for the sake of completeness, I asked the subscriber to check this. Lo and behold, their user didn't have access to the class, and when they added it everything started working. Looking at my client side code, this meant that when the callback from the server was invoked, the getState() method returned SUCCESS, which means the action executed successfully. However, the getReturnValue() method returned null, which meant that it hadn't hit my method, as the return value was never set to null.

So it appeared that when the update was enforced to remove their users' access to the classes, it didn't completely remove it. Instead it looked like the user could successfully hit the server but not actually execute the method, just receiving a null response instead. A touch of the uncertainty principle, as there was both access and no access!

So if you are getting a null response from a controller from one of your Aura components and you don't understand why, check the class access - it might save you a day or two of head scratching!
