Friday, 14 February 2020

Before Save Flows and Apex Triggers Part 2

TL;DR: Flows consume CPU time, but you might not find it easy to check how much.

Introduction


A couple of weeks ago I wrote a blog post about before save flows and triggers. It's fair to say that this caused some discussion in the Salesforce community. Salesforce themselves reached out to me for more information, as the results weren't what they were expecting to see, so I sent them details of the various tests that I'd run. The funny thing was, when I set out to write the post the CPU times were intended to be secondary - the message was supposed to be about being aware of the impact of mixing tools and thinking about what is already present when adding new automation. So that went well.

Since I wrote the post I've been testing in pre-release, scratch and sandbox orgs, with varying log levels, and every time the results came out much the same. There were some differences in the absolute numbers, but the flows always appeared to use a very small amount of CPU.

I then got a comment on the original post from the legendary Dan Appleman, which included details of the test that he'd run. Here's the key part of it:
This is inserting the opportunities using anonymous Apex, with a Limits.getCPUTime statement before and after the insert, with the debug log captured at Error or None level.
The difference for me was that I wasn't outputting any CPU time during the transaction - I just had the debug levels turned all the way down and looked at the final CPU total from the cumulative limits messages, mainly because whenever I've tried to debug CPU time usage in the past, I've found the results questionable.

That did mean my setup/teardown code was included in the calculation, but as that was the same in all cases I figured while it might change the exact number, the differences would still stand out.

Logging the CPU Time

Adding the following statement to my anonymous Apex before the insert:

System.debug(LoggingLevel.ERROR, 'Consumed CPU = ' + Limits.getCPUTime());

had no effect - I was still seeing around 100 msec for the flow, and around 1500 msec for the equivalent trigger. The log message gave the same CPU time as the cumulative limits message.

Adding it after the insert changed everything - the CPU usage was logged as 2100 msec and the cumulative limits messages showed the same value. So flows were consuming CPU time, it just wasn't being surfaced without some additional logging being present. This was a real surprise to me - while figuring out CPU time is always a little fraught, I've not had cause to doubt the cumulative limits before now. Note that this only had an impact in flow-only scenarios; with Apex it didn't change anything materially.
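For context, here's a minimal sketch of the shape of the anonymous Apex I was executing once both debug statements were in place - the field values are placeholders rather than my actual test data:

// build a batch of test opportunities (sample values only)
List<Opportunity> opps=new List<Opportunity>();
for (Integer idx=0; idx<1000; idx++)
{
    opps.add(new Opportunity(Name='Test ' + idx,
                             StageName='Prospecting',
                             CloseDate=Date.today().addDays(30),
                             Amount=100));
}

System.debug(LoggingLevel.ERROR, 'Consumed CPU before insert = ' + Limits.getCpuTime());
insert opps;
System.debug(LoggingLevel.ERROR, 'Consumed CPU after insert = ' + Limits.getCpuTime());

// teardown
delete opps;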

I then changed my anonymous Apex so that the entire test was in a loop, and started logging the CPU time at various points. Regardless of when I did it, the cumulative CPU time message always reflected the value from the last time I logged it. So if I only logged the first time through, the cumulative would show as 2100; if it was the second time through, the cumulative would go up to 3500, and so on.

Based on this testing it appears that if there is no additional Apex automation (triggers), then the act of accessing the CPU time sets that value into the cumulative limit tracker. I verified this by changing the log messages to just capture the value to a local variable, and the cumulative limit messages continued to show the actual CPU rather than a very small number.

In Conclusion


Flows do consume CPU time. They don't have a pass that excuses them from CPU duty and there isn't a conspiracy to drive everyone away from Apex. In my cursory testing with simple flows and triggers, triggers were around 33% faster - I intend to do some more testing to try to get some better figures, but right now I need to look at something else for a change!

Getting an accurate report of how much they consume isn't as straightforward as it appears, and figuring out the impact in a transaction that doesn't involve any Apex may present a challenge.





Saturday, 8 February 2020

Spring 20 Default Record Create Values


Introduction


Spring 20 introduces something rather reminiscent of URL hacks - the capability to provide default values via URL parameters when creating a record. Not quite the same though - for one thing this is a supported mechanism, whereas Salesforce constantly warned us against using URL hacks in Classic. For another, this is currently only documented as working on record create, whereas URL hacks worked (although they weren't supported, might not work in the future, etc.) across most of the later Classic pages.

How It's Done


The first thing you need is the URL to the create page - this takes the form:

https://<salesforce_instance>/lightning/o/<object_api_name>/new

e.g. for Account this is :

https://<salesforce_instance>/lightning/o/Account/new

While for a custom object with an api name of Webinar__c it is:

https://<salesforce_instance>/lightning/o/Webinar__c/new

Next, you need the parameter that defines the defaults:

?defaultFieldValues=
and then the list of name=value pairs for the fields - again, these are the API names, so the record name would be:

Name=Test
while a custom field would have the API name, including the __c suffix:

Description__c=Test+record+for+blog
(note that I've used '+' to indicate a space in the query part of the URL - I could just as well have used '%20')
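If you're building the parameter value in Apex rather than by hand, EncodingUtil will handle the encoding for you - a quick sketch, using the description text above as sample data:

String encoded=EncodingUtil.urlEncode('Test record for blog', 'UTF-8');
System.debug(encoded); // Test+record+for+blog - spaces encoded as '+'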

Multiple Values

To add multiple default parameter values, use the comma ',' character to separate them - don't use '&' as this will indicate the defaultFieldValues parameter has finished and a new parameter is starting!

Name=Test,Description__c=Test+record+for+blog,

All Together Now


Putting together all the elements identified above, I have the following URL:


https://kabdev.lightning.force.com/lightning/o/Webinar__c
/new?defaultFieldValues=Name=Test,
Description__c=Test+record+for+blog,Planned_Duration__c=60

Entering this in my Spring 20 org takes me to the record create page for the Webinar custom object, with the defaults pre-populated:



Relationship Fields

My Webinar__c custom object has a lookup to a custom survey object, with the relationship field named Survey__c. When entering information on the create page, I type the name of the related record and choose the entry from the list, so it's tempting to specify the name of the record in the URL. That doesn't end well:



Leaving aside the frankly hilarious idea that I could take an internal error id from Salesforce to my administrator and that would help in any way, it clearly doesn't like the text. This is because the relationship field needs an ID, so if I specify one of those it populates correctly.
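So the parameter needs to look something like the following, where <survey_record_id> stands in for the 15 or 18 character ID of an existing survey record:

Survey__c=<survey_record_id>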



Out of curiosity I tried a non-existent ID to see if the error message is any more helpful - it is, and tells me that it can't find the record that the ID refers to, which is much better.


I must also say that in both error cases I love that the plucky Save button sticks around even though things have gone badly wrong. Clicking it doesn't do anything, in case you are wondering.








Friday, 31 January 2020

Spring 20 Before Save Flows and Apex Triggers


Introduction

Spring 20 introduces the concept of the before record save flow, analogous to the before insert/update triggers that we’ve had for over a decade now. Like those triggers, the flow can make additional changes to the record without needing to save it to the database again once it has finished its work. Avoiding this save makes things a lot faster - a claimed 10 times faster than doing similar work in Process Builder. What the release notes don’t tell us is how they compare to Apex triggers, which was the kind of thing I was a lot more interested in.


Scenarios

I've tried a couple of relatively simple scenarios, both of which I've encountered in the real world:
  1. Changing the name of an opportunity to reflect things like the amount and close date. All activity happens on the record being inserted, so this is very simple.
  2. Changing the name of an opportunity as above, but including the name of the account the opportunity is associated with, so an additional record has to be retrieved.
In order to push the trigger/flow to a reasonable degree, I'm inserting a thousand opportunities which are round-robined across two hundred accounts, and immediately deleting them.
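As a rough sketch, the anonymous Apex harness looks something like the following - the account query and field values are simplified stand-ins for my actual script:

List<Account> accounts=[select Id from Account limit 200];
List<Opportunity> opps=new List<Opportunity>();
for (Integer idx=0; idx<1000; idx++)
{
    opps.add(new Opportunity(Name='Test ' + idx,
                             StageName='Prospecting',
                             CloseDate=Date.today().addDays(30),
                             Amount=100,
                             AccountId=accounts[Math.mod(idx, accounts.size())].Id));
}
insert opps;
delete opps;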

Scenario 1


Flow

My flow is about as simple as it gets:



The assignment appends '-{opportunity amount}' to the record name:


At the end of the transaction, I have the following limit stats:

Number of SOQL queries: 2 out of 100
Number of query rows: 1219 out of 50000
Maximum CPU time: 116 out of 10000


Trigger

The trigger is also very simple:

trigger Opp_biu on Opportunity (before insert, before update) 
{
    for (Opportunity opp : trigger.new)
    {
        opp.Name=opp.Name + '-' + opp.Amount;
    }
}

and this gives the following limit stats:

Number of SOQL queries: 2 out of 100
Number of query rows: 1219 out of 50000
Maximum CPU time: 1378 out of 10000

So in this case the trigger consumes over a thousand more milliseconds of CPU time. Depending on what else is going on in my transaction, this could be the difference between success and failure.


Scenario 2


Flow

There's a little more to the flow this time:


The Get Account element retrieves the account record associated with the opportunity - I only extract the Name field as that is all I use in my opportunity name: 


I also have a formula that generates the opportunity name, and this is used by the Assignment action:
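The formula itself isn't shown here, but it's along these lines - this assumes the Get Records element has the API name Get_Account, so treat it as illustrative rather than the exact resource:

{!Get_Account.Name} & "-" & TEXT({!$Record.CloseDate}) & "-" & TEXT({!$Record.Amount})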

and this gives the following limit stats:

Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 111 out of 10000


Trigger

The trigger mirrors the flow, with a little extra code to ensure it is bulkified:

trigger Opp_biu on Opportunity (before insert, before update) 
{
    Set<Id> accountIds=new Set<Id>();
    for (Opportunity opp : trigger.new)
    {
        accountIds.add(opp.AccountId);
    }
    
    Map<Id, Account> accountsById=new Map<Id, Account>(
        [select id, Name from Account where id in :accountIds]);
    for (Opportunity opp : trigger.new)
    {
        Account acc=accountsById.get(opp.AccountId); 
        opp.Name=acc.Name + '-' + opp.CloseDate + '-' + opp.Amount;
    }
}

which gives the following limit stats:

Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 1773 out of 10000

Aside from telling us that CPU time isn't an exact science - it actually went down slightly this time - the flow is pretty much the same in spite of the additional work. The trigger, on the other hand, has consumed around another 400 milliseconds.


All Flow All the Time?

So based on this, should all before insert/update functionality be migrated to flows? As always, the answer is it depends.

One thing it depends on is whether you can do everything you need in the flow - per Salesforce Process Builder best practice:

For each object, use one automation tool.

If an object has one process, one Apex trigger, and three workflow rules, you can’t reliably predict the results of a record change.

It can also get really difficult to debug problems if you have your business logic striped across multiple technologies, especially if some aspects of it are trivial to change in production.

Something that is often forgotten with insert/update automation is what should happen when a record is restored from the recycle bin. In many ways this can be considered identical to inserting a new record. Triggers offer an after undelete variant to allow automated actions to take place - you don't currently have this option in the no code world.
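For completeness, here's a minimal sketch of the shape such a trigger takes - the naming logic is purely illustrative, and as records are read-only in after triggers, any changes need a separate update:

trigger Opp_undelete on Opportunity (after undelete)
{
    List<Opportunity> restored=new List<Opportunity>();
    for (Opportunity opp : Trigger.new)
    {
        restored.add(new Opportunity(Id=opp.Id,
                                     Name=opp.Name + '-' + opp.Amount));
    }
    update restored;
}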


One More Thing

A word of warning - you might be tempted to implement your next simple before save requirement as a flow regardless of existing automation. Let's say a consultant developer created a trigger for you similar to mine above, and now you need to make an additional change to the record. If you do this with a flow, make sure to test it thoroughly. Out of curiosity, I tested combining my trigger that sets the opportunity name with a flow that tweaks the amount by a very small amount.

The limit stats for this were frankly terrifying:

Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 8404 out of 10000 *****

So the CPU time has increased fivefold by adding in a flow that by itself consumes almost nothing!






Saturday, 18 January 2020

Going GUI over the Salesforce CLI Part 2

Introduction

In part 1 of this series I introduced my Salesforce CLI GUI with some basic commands. In this instalment I’ll cover some of the Electron specifics and add a couple of commands.

Electron

Electron is an open source framework for building cross platform desktop applications with JavaScript, HTML and CSS. The Chromium rendering engine handles the UI and the logic is managed by the Node JS runtime. I found this particularly attractive as I spend a lot of time these days writing code for Node - wrappers around the command line, for example, or plugins for the Salesforce CLI. I’m also keen to do as much in JavaScript as I can as it helps my Lightning Web Components development too.

An Electron application has a main process and a number of renderer processes. The main process creates the web pages that make up the application UI, and each application has exactly one main process. Each page that is created has its own renderer process to manage the page. Each renderer process only knows about the web page it is managing and is isolated from the other pages.

Renderer processes can’t access the operating system APIs - they have to ask the main process to do this on their behalf.

CLI GUI Main Process

The main process for my CLI GUI loads up the JSON file that configures the available commands and runs a couple of CLI commands to determine whether the default user and default dev hub user have been set. It then registers a callback for the application's ready event, which fires once the application is fully launched and ready to display the user interface:

app.on('ready', () => {
    mainWindow=new BrowserWindow({
        width: 800,
        height: 600,
        webPreferences: {
            nodeIntegration: true
        }
    });

    let paramDir=process.argv[2];
    if (paramDir!==undefined) {
        changeDirectory(paramDir);
    }

    mainWindow.webContents.loadURL(`file://${__dirname}/home.html`);
    windows.add(mainWindow);
});

The ready handler creates a new BrowserWindow instance, specifying that node integration should be enabled for the renderer process that manages the page. It then loads the home.html page, aka the GUI home page:

CLI GUI Home Page Renderer

The Node JavaScript behind this page (home.js in the repo) gains access to the main process via the following imports:

const { remote, ipcRenderer } = require('electron');
const mainProcess = remote.require('./main.js');

Remote gives access to a lot of the modules available in the main process (essentially proxying) and mainProcess provides access to the functions and properties of the main process instance that created the renderer. The renderer then gets a reference to its window so that it can output the dynamic content:

const currentWindow = remote.getCurrentWindow();

It then iterates the command groups, creating a Salesforce Lightning Design System tab for each one, then iterates the commands inside the group, adding buttons for each of those. There’s a fair bit of code around that to set the various attributes, so a cut down version is shown here:

for (let group of mainProcess.commands.groups) {
    if (0===count) {
        classes.push('slds-is-active');
    }
    let tabsEle=document.createElement('li');
    tabsEle.id='tab-'+ group.name;

    let tabLinkEle=document.createElement('a');
    tabLinkEle.id='tab-' + group.name + '-link';
    tabLinkEle.innerText=group.label;

    tabsEle.appendChild(tabLinkEle);

    const tabContainer=document.querySelector('#tabs');
    tabContainer.appendChild(tabsEle);
    let contentEle=document.createElement('div');
    contentEle.classList.add('slds-' + (0===count?'show':'hide'));
    contentEle.setAttribute('role', 'tabpanel');

    let gridEle=document.createElement('div');
    gridEle.classList.add('slds-grid');
    contentEle.appendChild(gridEle);

    const tabsContentContainer=document.querySelector('#tab-contents');
    tabsContentContainer.appendChild(contentEle);

    for (let command of group.commands) {

        let colEle=document.createElement('div');
        colEle.classList.add('slds-col');
        colEle.classList.add('slds-size_1-of-4');

        let colButEle=document.createElement('button');
        colButEle.id=command.name + '-btn';
        colButEle.innerText=command.label;
        colEle.appendChild(colButEle);
        gridEle.appendChild(colEle);
    }
}

This allows the commands to be dynamically generated based on configuration, rather than having a hardcoded set that are the same for everyone in the world and requiring code changes to add or remove commands. Of course there is code specific to each of the commands, but there are a lot of similarities between the commands which allows a lot of code re-use.

Note that I’m using the DOM API to add the elements, rather than HTML snippets, partly because it is slightly faster to render, but mostly because while it takes longer to write the initial version, it’s much easier to maintain going forward.

Note also that the buttons and tabs have dynamically generated ids, based on the command names. This allows me to add the event handlers for when a user clicks on a tab or a button:

for (let group of mainProcess.commands.groups) {
    group.link=document.querySelector('#tab-' + group.name + '-link');
    group.link.addEventListener('click', () => {
        activateTab(group);
    });
    group.tab=document.querySelector('#tab-' + group.name);
    group.content=document.querySelector('#tab-' + group.name + '-content');

    for (let command of group.commands) {
        command.button=document.querySelector('#' + command.name + '-btn');
        command.button.addEventListener('click', () => {
            mainProcess.createWindow('command.html', 900, 1200, 10, 10, {command: command, dir: process.cwd()});
        });
    }
}

The more interesting handler is that for the commands - this invokes a method from the main process to open a new window, which again has been cut down to the salient points:

const createWindow = exports.createWindow = (page, height, width, x, y, params) => {
    let newWindow = new BrowserWindow({ x, y, show: false,
        width: width,
        height: height,
        webPreferences: {
            nodeIntegration: true
        }});

    newWindow.loadURL(`file://${__dirname}/` + page);

    newWindow.once('ready-to-show', () => {
        newWindow.show();
        if (params!==undefined) {
            newWindow.webContents.send('params', params);
        }
    });

    newWindow.on('close', (event) => {
        windows.delete(newWindow);
        newWindow.destroy();
    });
};

The new window is created much like the home window, and loads the page - command.html in this example. Unlike the home page, once the new window receives the ready-to-show event, a handler sends any additional parameters passed to this method to the window - in this case the command that needs to be exposed and the current directory that it will be run in. There’s also a close handler that destroys the window, cleaning up the renderer process.

New Commands

The latest repo code contains a couple of new commands in a Debugging group. As before, I’ve tested this on MacOS and Windows 10.

List Log Files

Choose the username/alias for the org whose log files you are interested in:

Clicking on the ‘List’ button pulls back brief details of the available log files:

Get Log File

To retrieve the contents of a specific log file, first find out which ones are available from a specific org:

Then select the specific file from the dropdown:

And click the 'Get' button to display the contents:


Sunday, 12 January 2020

SalesforceWay Podcast

In mid-2019, Xi Xiao from Finland reached out to ask me if I’d appear on his podcast, SalesforceWay. I’ve done a podcast or two in the past, but not for some time, so I accepted with gusto. As is so often the case, I had a stack of work that I was already behind on and some half-completed community initiatives that needed some focus. Luckily Xi was a patient man, and shortly (a quarter of a year or so!) afterwards we recorded the episode.

Anyone who has been following my blog over the last couple of years will know that I’m a big fan of the Salesforce CLI, so this seemed like a great topic to talk about. For those who aren’t familiar with the podcast format, I always feel it’s kind of a cross between an interview and a collection of war stories. As a rule they are intended to enlighten rather than train, involving a conversation around a topic and pointing the listener at where they might find out more information.

While my episode is clearly the one you should listen to first, there are a ton of great episodes available, so don’t stop at one! There are also more on the way - I know this as I recommended a few of the guests. Xi is always looking for more, so if you have a topic that you think other Salesforce developers would be interested in, reach out to Xi, or if you are shy then get in touch with me via the usual channels (or the comments section of this post) and I’ll arrange an introduction. You won’t regret it - it’s a lot of fun!
