Introduction
The Spring 21 Salesforce release includes the following update, which will be enforced in Summer 22:
Accurately Measure the CPU Time Consumption of Flows and Processes (Update)
With this update enabled, Salesforce accurately measures, logs, and limits the CPU time consumed by all flows and processes. Previously, the CPU time consumed was occasionally incorrect or misattributed to other automation occurring later in the transaction, such as Apex triggers. Now you can properly identify performance bottlenecks that cause the maximum per-transaction CPU time consumption limit to be exceeded. Also, because CPU time is now accurately counted, flows and processes fail after executing the element, criteria node, or action that pushes a transaction over the CPU limit. We recommend testing all complex flows and processes, which are more likely to exceed this limit.
I was particularly pleased to see this, as it confirmed what I'd been seeing since before-save flows were introduced in Spring 20, namely that the reporting around flow CPU consumption was off. You can read about the various findings by working back through my earlier blog posts, but the bottom line was that the debug logs reported hardly any CPU consumption for a flow unless you added an Apex debug statement about it, in which case the figure suddenly jumped up to the actual value. In effect, the act of looking at how much CPU had been consumed caused it to be consumed, at least from a reporting perspective. For a real transaction the CPU had obviously been consumed, but it wasn't always clear what had consumed it, which no doubt led to a lot of finger-pointing between low-code and pro-code developers, or ISVs and internal admins.
To close the loop on this, I was keen to re-run some of my previous tests to see what the "real" figures looked like - or as real as you can get, given that performance profiling of Salesforce automation is affected by so many factors outside your control. When looking at the results below, always remember that the actual values are likely to differ from org to org - the figures from my latest tests in a scratch org are way faster than those from a year ago, but your mileage may vary.
Also, don't forget to apply the update - I forgot and couldn't make head or tail of my first test results!
Methodology
The methodology was the same as for the original before-save testing in Spring 20 - I'm inserting a thousand opportunities which are round-robined across two hundred accounts and immediately deleting them. This means the figures are inflated by the setup/teardown code, but that is constant across all runs. The debug level was turned down to the minimum, and I ran each test five times and took the average.
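For reference, the anonymous Apex harness looks something along these lines - a sketch rather than my exact test code, with illustrative field values:

```apex
// Illustrative harness - insert a thousand opportunities round-robined
// across two hundred existing accounts, then delete them immediately.
List<Account> accounts = [SELECT Id FROM Account LIMIT 200];

List<Opportunity> opps = new List<Opportunity>();
for (Integer idx = 0; idx < 1000; idx++) {
    opps.add(new Opportunity(
        Name = 'Perf Test ' + idx,
        AccountId = accounts[Math.mod(idx, accounts.size())].Id,
        StageName = 'Prospecting',
        CloseDate = Date.today().addDays(30)
    ));
}

insert opps;  // the automation under test fires here
delete opps;  // teardown - constant across all runs, as is the setup
```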
The simple flow to update the opportunity name came out at 1172 msec. Just out of curiosity, I added a debug statement to the Apex code that does the testing, and lo and behold the average went up to 1374 msec. Here we go again, I thought. Before I published and started fighting on the internet again, I disabled all automation and ran the code again - inserting and deleting the opportunities with and without the debug statement. The averages were: no debug statement - 1108 msec, with debug - 1140 msec. Nothing conclusive, but it's definitely having an impact. Finally, I enabled my simple trigger to do the same thing as the flow and tested this with and without the debug statement. Average without - 897 msec, with - 1216 msec. Given that the average without the debug statement was lower than the average when all automation had been turned off, I decided that the debug statement ensures an accurate report, especially as it has a similar impact across both flow and trigger numbers. Once again, it's really hard to profile Salesforce performance!
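For context, the simple trigger that mirrors the flow is along these lines (a sketch - the trigger name and name suffix are illustrative):

```apex
// Sketch of a before insert trigger that mirrors the simple flow -
// updating the opportunity name before the record is saved.
trigger OpportunityNameUpdate on Opportunity (before insert) {
    for (Opportunity opp : Trigger.new) {
        opp.Name = opp.Name + ' - trigger';
    }
}
```

The debug statement added to the test harness was simply a request for the consumed CPU time, along the lines of `System.debug('CPU consumed: ' + Limits.getCpuTime());` - historically the statement that flipped the reported figure from near-zero to the actual value.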
Results
Test | Flow (msec) | Trigger (msec) | Difference (msec) |
---|---|---|---|
No automation | 1140 | 1140 | 0 |
Update record name | 1374 | 1216 | 158 |
Update record name with account info (lookup) | 2133 | 1252 | 881 |
Combined trigger (set name) and flow (change amount) | 1459 | 1459 | 0 |
As expected, triggers are faster than flows, which is good because that's what Salesforce say! The differences aren't huge for a single record, but once you scale up to a thousand records with a little complexity they can become significant - 881 msec might not sound like a lot, but it's almost 9% of your 10,000 msec CPU allowance for a transaction.
Mixing flows and triggers doesn't bring any penalty, for CPU at least. It does make your automation harder to understand, and may mean you can't guarantee the order in which the various types fire, so it's best avoided regardless of the effect on CPU time.
Proceed With Caution
In a follow-up post written when after-save flows were introduced, I found that I could carry out way more work in a transaction using flows rather than triggers, as whatever checks CPU time from a limits perspective was subject to the same reporting issue. Once this update is activated, flow CPU consumption counts against the limit from the start. In Summer 20 I could insert 2-3,000 opportunities and stay under the 10,000 msec CPU limit; with the update activated I'm hitting it at the 1-2,000 mark, so you might find you've inadvertently been breaking the limit because the platform didn't notice and let you. Definitely run some tests before enabling this in production and, per the update information:
You can enable the update for a single flow or process by configuring it to run in API version 51.0 or later.
so you can start testing on a flow-by-flow basis.
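A simple way to gauge the impact is to bump a single flow to API version 51.0 and watch the CPU delta around the DML in anonymous Apex - a sketch using the standard Limits methods, with the same illustrative data setup as before:

```apex
// Measure the CPU consumed by a bulk insert while one flow
// runs at API version 51.0 (data setup is illustrative).
List<Opportunity> opps = new List<Opportunity>();
for (Integer idx = 0; idx < 1000; idx++) {
    opps.add(new Opportunity(
        Name = 'Limit Test ' + idx,
        StageName = 'Prospecting',
        CloseDate = Date.today().addDays(30)
    ));
}

Integer cpuBefore = Limits.getCpuTime();
insert opps;
System.debug(LoggingLevel.ERROR,
        'CPU consumed by insert: ' + (Limits.getCpuTime() - cpuBefore) +
        ' msec of ' + Limits.getLimitCpuTime() + ' msec allowed');
delete opps;
```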
Hopefully this is the last post I'll have to write on this topic - it's been a wild ride, but it feels like we've reached the end of the road.
Comments

Great post Bob. Nice to see more truthful accountability with flows and platform limits. I truly believe that most flow designers are oblivious to platform limits and quickly point the finger at Apex triggers instead of recognising that each automation requires CPU processing time and pushes towards the platform limit. Test, test, and automate your tests! 😃
So, if the API version for running a flow is 49.0, will this update accurately measure CPU time, or will it not be considered? The part that is confusing me is that the doc mentions this point: "Why: Salesforce accurately measures, logs, and limits the CPU time consumed by flows and processes that are configured to run in API version 51.0 or later. With this update enabled, the behavior applies to all flows and processes, regardless of their run-time API versions."
What this is saying is:
- If you have applied the update, then all flows will accurately measure CPU time
- If you haven't applied the update, only flows with API version 51+ will accurately measure CPU time
So before applying the update, you can move individual flows to API 51+ to check out the consequences, which may involve breaking the CPU governor limit.
Great article. My Salesforce org is 6 months old and I checked each flow - the 'API Version for Running the Flow' is greater than 50 for all of them. I checked this via Setup --> Flows --> clicked the down arrow for each flow --> clicked on 'View Details and Versions', and it says 52 for the API version for running the flow.
Question 1 - Does this enforcement impact my org?
Question 2 - Is there anything else I need to check? For Apex classes or triggers? Or Process Builders etc.?
Question 3 - Any other testing I need to do?
Question 4 - Do I still need to enable this setting? And any considerations before enabling it?
And any suggestions on how to test whether a flow is complex and exceeding the limit, when its 'API Version for Running the Flow' is 49?