Friday 14 February 2020

Before Save Flows and Apex Triggers Part 2


TL;DR Flows consume CPU time, but you might not find it easy to check how much.

Introduction


A couple of weeks ago I wrote a blog post about before save flows and triggers. It's fair to say that this caused some discussion in the Salesforce community. Salesforce themselves reached out to me for more information, as the results weren't what they were expecting to see, so I sent them details of the various tests that I'd run. The funny thing was that when I set out to write the post, the CPU times were intended to be secondary - the message was supposed to be about being aware of the impact of mixing tools, and thinking about what is already present when adding new automation. So that went well.

Since I wrote the post I've been testing in pre-release, scratch and sandbox orgs, with varying log levels, and every time the results came out much the same. There were some differences in the absolute numbers, but the flows always appeared to use a very small amount of CPU.

I then got a comment on the original post from the legendary Dan Appleman, which included details of the test that he'd run.  Here's the key part of it:
This is inserting the opportunities using anonymous Apex, with a Limits.getCpuTime() statement before and after the insert, with the debug log captured at Error or None level.
The difference for me was that I wasn't outputting any CPU time during the transaction - I just had the debug levels turned all the way down and looked at the final CPU total from the cumulative limits messages, mainly because whenever I've tried to debug CPU time usage in the past, I've found the results questionable.

That did mean my setup/teardown code was included in the calculation, but as that was the same in all cases I figured that while it might change the exact numbers, the differences would still stand out.

Logging the CPU Time

Adding the following statement to my anonymous Apex before the insert:

System.debug(LoggingLevel.ERROR, 'Consumed CPU = ' + Limits.getCpuTime());

had no effect - I was still seeing around 100 msec for the flow, and around 1500 msec for the equivalent trigger. The log message gave the same CPU time as the cumulative limits message.

Adding it after the insert, though, suddenly changed everything - the CPU usage was logged as 2100 msec and the cumulative limits messages had the same value. So flows were consuming CPU time, it just wasn't being surfaced without some additional logging being present. This was a real surprise to me - while figuring out CPU time is always a little fraught, I've not had cause to doubt the cumulative limits before now. Note that this only had an impact in flow-only scenarios; with Apex it didn't change anything materially.
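In outline, the test looked something like the following - a sketch rather than the precise script I ran, with the field values and record count purely illustrative:

```apex
// Sketch of the anonymous Apex harness - object, field values and
// record count are illustrative, not the exact script from my tests.
List<Opportunity> opps = new List<Opportunity>();
for (Integer idx = 0; idx < 1000; idx++) {
    opps.add(new Opportunity(Name = 'Test ' + idx,
                             StageName = 'Prospecting',
                             CloseDate = Date.today().addDays(30)));
}

System.debug(LoggingLevel.ERROR, 'CPU before insert = ' + Limits.getCpuTime());
insert opps;
// This is the statement that surfaces the flow's CPU consumption -
// without it, the cumulative limits showed a very small number.
System.debug(LoggingLevel.ERROR, 'CPU after insert = ' + Limits.getCpuTime());
```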

I then changed my anonymous Apex so that the entire test was in a loop, and started logging the CPU at various points. Regardless of when I did it, the cumulative CPU time message always reflected the value from the last time I logged it. So if I only logged the first time through, the cumulative would show as 2100 msec; if it was the second time through, the cumulative would go up to 3500 msec, and so on.
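The looped version was along these lines - again a sketch with illustrative counts, here sampling the CPU only on the first pass:

```apex
// Sketch of the looped test - each iteration inserts a fresh batch.
// The cumulative CPU figure reflected the last getCpuTime() call made.
for (Integer run = 0; run < 3; run++) {
    List<Opportunity> opps = new List<Opportunity>();
    for (Integer idx = 0; idx < 1000; idx++) {
        opps.add(new Opportunity(Name = 'Run ' + run + ' rec ' + idx,
                                 StageName = 'Prospecting',
                                 CloseDate = Date.today().addDays(30)));
    }
    insert opps;
    if (run == 0) {
        // Only sampling on the first pass - the cumulative limits message
        // then showed the total as of this point, not the true final figure.
        System.debug(LoggingLevel.ERROR, 'CPU = ' + Limits.getCpuTime());
    }
}
```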

Based on this testing it appears that if there is no additional Apex automation (triggers), then the act of accessing the CPU time sets that value into the cumulative limit tracker. I verified this by changing the log messages to simply capture the value into a local variable, and the cumulative limits messages continued to show the actual CPU time rather than a very small number.
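That variation amounted to replacing the debug statement with a bare read - no output at all, just accessing the value:

```apex
insert opps;
// No debug output at all - merely reading the value was enough for the
// cumulative limits message to report the real CPU consumption.
Integer cpuSoFar = Limits.getCpuTime();
```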

In Conclusion


Flows do consume CPU time. They don't have a pass that excuses them from CPU duty, and there isn't a conspiracy to drive everyone away from Apex. In my cursory testing with simple flows and triggers, triggers were around 33% faster. I intend to do some more testing to try to get some better figures, but right now I need to look at something else for a change!

Getting an accurate report of how much CPU time flows consume isn't as straightforward as it appears, and figuring out the impact in a transaction that doesn't involve any Apex may present a challenge.





5 comments:

  1. This is absolutely fascinating - it never occurred to me that the CPU time limit shown in the debug log might be impacted by the presence (or lack) of Apex or a query to GetCPUTime.
    This may also explain in part something we have seen in the past - that when CPU limits occur within automation, they usually happen as soon as Apex code is hit (and the CPU timeout is blamed on the Apex, not the automation). This has always been painful as an ISV partner, as the blame in the Apex exception would always go to whatever trigger was unfortunate enough to be hit - having nothing to do with actual usage. Things have gotten better in recent releases - sometimes automation itself raises the CPU timeout, but your results suggest that the metering of CPU time in automation is less certain than within Apex. This would also explain strange results in debug logs - where CPU time after automation is still low, but jumps dramatically after some code executes.
    I think this is a significant discovery and will be enormously helpful to anyone trying to figure out from debug logs where CPU time is actually being consumed!

    ReplyDelete
  2. Thanks Bob, this is really helpful. We have been facing the same issue with Process Builders, as they also consume CPU time. We have seen several scenarios where CPU time is consumed while processing a Process Builder even if its condition is not satisfied, and this behaviour is not consistent for the same transaction.

    ReplyDelete
  3. Thanks for this blog. Over the last two major releases we have had to revisit every PB and flow to optimise, as we suddenly started hitting Apex CPU timeouts. It's been difficult to get actual timings, however SFDC support did help with some back-end logs. Rahul, are you on LinkedIn? Did you optimise your PBs? Would like to get in touch. Steve

    ReplyDelete
  4. Keir, these are great insights, but I'm still confused about the thing you raised as part of the 1st blog.

    I understand we need to be careful about using Flows and other automations, including Apex, together.
    But even if I just consider using Flows and then have to create triggers for some advanced logic, will I get into the CPU Time Out related issues?

    It's the specific combination of these 2 components (and maybe others along with Flows) that worries me.

    Should I be worried? Can I use these 2 together without hitting CPU time out issues?

    ReplyDelete
    Replies
    1. You can use them together without hitting CPU timeouts, but my tests suggest that you may see a jump in CPU when you add the trigger, as the flow is suddenly counted. I was inserting a thousand records, so the CPU times registered were fairly large numbers, to make comparisons sensible. It will depend on exactly what you are doing and your volumes, but it's certainly possible to combine the two without issue. Test with indicative data volumes!

      Delete