Spring '20 Before Save Flows and Apex Triggers
Introduction
Spring '20 introduces the concept of the before record save flow, analogous to the before insert/update trigger that we've had for over a decade now. Like these triggers, the flow can make additional changes to the records without needing to save them back to the database once it has finished its work. Avoiding this save makes things a lot faster - a claimed 10 times faster than doing similar work in Process Builder. What the release notes don't tell us is how these flows compare to Apex triggers, which is what I was a lot more interested in.
Scenarios
I've tried a couple of relatively simple scenarios, both of which I've encountered in the real world:
- Changing the name of an opportunity to reflect details such as the amount and close date. All activity happens on the record being inserted, so this is very simple.
- Changing the name of an opportunity as above, but including the name of the account the opportunity is associated with, so an additional record has to be retrieved.
In order to push the trigger/flow to a reasonable degree, I'm inserting a thousand opportunities which are round robin'ed across two hundred accounts and immediately deleting them.
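For reference, a harness along these lines (shown here as anonymous Apex, with placeholder record values rather than the exact data used in the tests) reproduces this setup:

// query the accounts to spread the opportunities across (the first SOQL query in the stats)
List<Account> accounts = [select Id from Account limit 200];
List<Opportunity> opps = new List<Opportunity>();
for (Integer idx = 0; idx < 1000; idx++) {
    // round robin the opportunities across the accounts
    opps.add(new Opportunity(Name='Perf Test ' + idx,
                             StageName='Prospecting',
                             CloseDate=Date.today().addDays(30),
                             Amount=1000 + idx,
                             AccountId=accounts[Math.mod(idx, accounts.size())].Id));
}
insert opps;
// retrieve and delete the opportunities (the second SOQL query)
delete [select Id from Opportunity where Id in :opps];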
Scenario 1
Flow
My flow is about as simple as it gets. The assignment appends '-{opportunity amount}' to the record name:
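The screenshots aren't reproduced here, but the assignment amounts to setting the record name to a formula along these lines (the resource references are illustrative rather than taken from the actual flow):

    {!$Record.Name} & '-' & TEXT({!$Record.Amount})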
At the end of the transaction, I have the following limit stats:
Number of SOQL queries: 2 out of 100
Number of query rows: 1219 out of 50000
Maximum CPU time: 116 out of 10000
Trigger
The trigger is also very simple:

trigger Opp_biu on Opportunity (before insert, before update) {
    for (Opportunity opp : trigger.new) {
        opp.Name=opp.Name + '-' + opp.Amount;
    }
}
and this gives the following limit stats:
Number of SOQL queries: 2 out of 100
Number of query rows: 1219 out of 50000
Maximum CPU time: 1378 out of 10000
So in this case the trigger consumes over a thousand more milliseconds of CPU time. Depending on what else is going on in my transaction, this could be the difference between success and failure.
Scenario 2
Flow
There's a little more to the flow this time:
The Get Account element retrieves the account record associated with the opportunity - I only extract the Name field as that is all I use in my opportunity name:
I also have a formula that generates the opportunity name, and this is used by the Assignment action:
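The formula screenshot isn't reproduced here, but based on the trigger below it would be something along these lines, assuming the Get Account element has the API name Get_Account:

    {!Get_Account.Name} & '-' & TEXT({!$Record.CloseDate}) & '-' & TEXT({!$Record.Amount})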
and this gives the following limit stats:
Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 111 out of 10000
Trigger
The trigger mirrors the flow, with a little extra code to ensure it is bulkified:

trigger Opp_biu on Opportunity (before insert, before update) {
    Set<Id> accountIds=new Set<Id>();
    for (Opportunity opp : trigger.new) {
        accountIds.add(opp.AccountId);
    }
    Map<Id, Account> accountsById=new Map<Id, Account>(
        [select id, Name from Account where id in :accountIds]);
    for (Opportunity opp : trigger.new) {
        Account acc=accountsById.get(opp.AccountId);
        opp.Name=acc.Name + '-' + opp.CloseDate + '-' + opp.Amount;
    }
}
which gives the following limit stats:
Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 1773 out of 10000
Aside from telling us that CPU time isn't an exact science - it actually went down slightly this time - the flow result is pretty much the same in spite of the additional work. The trigger, on the other hand, has consumed roughly another 400 milliseconds.
All Flow All the Time?
So based on this, should all before insert/update functionality be migrated to flows? As always, the answer is it depends. One thing it depends on is whether you can do everything you need in the flow - per Salesforce Process Builder best practice:
For each object, use one automation tool.
If an object has one process, one Apex trigger, and three workflow rules, you can’t reliably predict the results of a record change.
It can also get really difficult to debug problems if you have your business logic striped across multiple technologies, especially if some aspects of it are trivial to change in production.
Something that is often forgotten with insert/update automation is what should happen when a record is restored from the recycle bin. In many ways this can be considered identical to inserting a new record. Triggers offer an after undelete variant to allow automated actions to take place - you don't currently have this option in the no code world.
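For completeness, an after undelete trigger follows the familiar pattern - this is just a sketch, and the field update it applies is illustrative rather than taken from the scenarios above:

trigger Opp_undelete on Opportunity (after undelete) {
    // records are read-only in after triggers, so apply changes via a separate update
    List<Opportunity> toUpdate = new List<Opportunity>();
    for (Opportunity opp : trigger.new) {
        toUpdate.add(new Opportunity(Id=opp.Id,
                                     Description='Restored from the recycle bin'));
    }
    update toUpdate;
}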
One More Thing
A word of warning - you might be tempted to implement your next simple before-save requirement as a flow regardless of any existing automation. Let's say a consultant developer created a trigger for you similar to mine above, and now you need to make an additional change to the record. If you do this with a flow, make sure to test it thoroughly. Out of curiosity, I tested combining my trigger that sets the opportunity name with a flow that tweaks the amount by a very small amount.
The limit stats for this were frankly terrifying:
Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 8404 out of 10000 *****
So the CPU time has increased nearly fivefold by adding in a flow that by itself consumes almost nothing!
Not sure if I'm misreading, but in your Scenario 1 you conclude the trigger "consumes over a thousand more milliseconds of CPU time". But you also have 2 SOQL queries... where are they coming from in your trigger?
It's the setup/teardown code -
--- snip ---
In order to push the trigger/flow to a reasonable degree, I'm inserting a thousand opportunities which are round robin'ed across two hundred accounts and immediately deleting them.
--- snip ---
So the first query gets the accounts that will be used for the opportunities and the second retrieves the opportunities to delete.
That last example is something we indeed need to better understand.
Hi Bob:
I tried to repro your results, and I think you may have something else going on here.
On the first scenario, bracketing an insertion of 2000 opportunities on a new scratch org, I get about 2 seconds of CPU time with and without the trigger, and about 3 seconds with the flow.
This is inserting the opportunities using anonymous Apex, with a Limits.getCpuTime() call before and after the insert, and the debug log captured at Error or None level.
Also, I have no SOQL queries - the flow itself does not create them, nor does the trigger - which suggests there's something else going on in the org.
So based on my measurements, triggers are still faster than before-trigger flows by far.
My suspicion is that there's an issue with your methodology of measuring the flow CPU time - as data insertions themselves take CPU time these days, and I'm not seeing any way to insert 2000 records in 100ms (even with the flow and trigger both inactive).
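For reference, the measurement approach described above amounts to something like this in anonymous Apex - a sketch with placeholder record values:

List<Opportunity> opps = new List<Opportunity>();
for (Integer idx = 0; idx < 2000; idx++) {
    opps.add(new Opportunity(Name='Perf Test ' + idx,
                             StageName='Prospecting',
                             CloseDate=Date.today().addDays(30)));
}
// capture CPU time either side of the insert and log the difference
Integer cpuBefore = Limits.getCpuTime();
insert opps;
System.debug(LoggingLevel.ERROR, 'CPU consumed by insert: ' + (Limits.getCpuTime() - cpuBefore));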
Hi Dan,
There definitely seem to be differences between orgs. For example, when I deactivated all my flows and triggers, the logs on my pre-release org showed 0 CPU time consumed for inserting the records and deleting them afterwards, so in that case my org seems to be more efficient than yours!
I can't see how there could be anything else going on in the org, as I've reproduced much the same figures in a preview scratch org that was empty apart from the various flows and triggers that I deployed.
It's interesting that your example didn't have any SOQL queries - I did, as that was the original scenario I was testing - retrieving a field from a related record. I would definitely expect this to consume additional CPU as the set of record ids for the query has to be generated by iterating the trigger. The flow didn't really change in terms of CPU for me - I'd be curious as to whether triggers are still the winner in your org in that case.
I'm hoping to find some time to do more testing around this across a number of orgs at different times of day - given the non-deterministic calculation of CPU time that still won't prove anything, but it should hopefully lessen the impact of local conditions.
I think it may be a case of both what we are comparing and what we are measuring - obviously I didn't recreate your exact scenario. Perhaps after the holiday weekend we can compare notes :-) I would recommend trying the measurement methodology I use (inserting the records using anonymous Apex with getCpuTime calls before and after, then measuring the difference between different scenarios). I've seen some truly bizarre results in debug logs - where there are disconnects between elapsed time and CPU time reported - especially with automation involved.
Hi Dan and Bob,
Were you able to figure out the cause of the difference in your observed results? I would be very interested to hear the results if either of you get a chance!
Thank you both for all that you do for the community,
James
I’ve got another perspective on this I’ll have to publish. I got curious about Apex vs PB/Flow performance and ran a series of tests but did not use CPU time... I’ll have to find some time to revisit and share my results and see what other ideas we can come up with to test.
Hi Bob,
Is there any way to add custom errors inside the before save flow, just like addError in a trigger?
Thanks,
Pramodh