TL;DR
Your System.debug statements are consuming resources even when you aren't capturing debug logs.
If you pass complex datatype parameters to System.debug, they will consume more resources still, as they will be converted to strings before being discarded (because you aren't capturing debug logs!).
Introduction
This week the London Salesforce Developers hosted Paul Battisson and his "Turn
it up to 11 - Improving the Performance of Your Apex code" talk, which reminded
me of something that has been on my list of things to look at - the impact of
System.debug statements on your code.
In Paul's talk he points out that the act of measuring the consumed CPU time
and writing the metric to the debug log consumes a small amount of CPU time.
And many Apex developers know that turning on debug logging in Salesforce
impacts the duration of a transaction, sometimes significantly if the debug
log generates multiple megabytes of information - just leave the developer
console open while you use your code-heavy application if you haven't
experienced this.
What is less well known is that System.debug statements have an impact on your
transactions even if you aren't capturing debug logs, as I will endeavour to
show in the rest of this blog.
Methodology
I have a simple Apex class with a static method that loops 5,000 times,
incrementing an index from 0 to 4999. A string that starts out life empty has
the index concatenated with it each time through the loop, using the +
operator.
At the start of the method, the current time in milliseconds and the CPU time
consumed are captured; at the end of the method this is repeated and the
differences (the elapsed milliseconds and consumed CPU time for the method) are
stored in a custom object.
public class LoggingOn
{
    public static void LoggingOn()
    {
        Limits_Consumed__c lc=new Limits_Consumed__c();

        // capture the starting elapsed and CPU time
        Long startMillis=System.now().getTime();
        Long startCPU=Limits.getCpuTime();

        // build a string by concatenating the index 5,000 times
        String message='';
        for (Integer idx=0; idx<5000; idx++)
        {
            message+=idx;
        }

        // store the elapsed and CPU time consumed in a custom object
        lc.Total_Millis__c=System.now().getTime() - startMillis;
        lc.CPU_Time__c=Limits.getCpuTime() - startCPU;
        insert lc;
    }
}
The Tests
For each test, the method was executed twice:
- Via execute anonymous in the developer console, with debug logging enabled and the log level at the default of SFDC_DevConsole.
- As the only code in a before update trigger on a record (this was the quickest way I could execute the Apex outside of execute anonymous - the trigger is essentially a carrier for it), with debug logging disabled. A sketch of both invocations follows this list.
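Here is a minimal sketch of the two invocations - the carrier trigger is assumed to be a before update trigger on Account, but the actual object and trigger name don't matter:

// Scenario 1 - run from execute anonymous in the developer console
LoggingOn.LoggingOn();

// Scenario 2 - the 'carrier' trigger, in its own file, with debug logging disabled
trigger LoggingOnCarrier on Account (before update)
{
    LoggingOn.LoggingOn();
}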
1. No Debug Statements
Scenario 1 took 387 milliseconds elapsed time and consumed 154 milliseconds of
CPU
Scenario 2 took 44 milliseconds elapsed time and consumed 45 milliseconds of
CPU
I then added debug statements and ran the method via the two scenarios.
2. Adding a Debug Statement
This used System.debug to log the message variable each time through the loop:
System.debug('Message = ' + message);
Scenario 1 took 736 milliseconds elapsed time and consumed 285 milliseconds of
CPU - you can clearly see the impact of debug statements when logging is
enabled here.
Scenario 2 took 49 milliseconds elapsed time and 45 milliseconds of CPU - very
little change here.
3. Extending the Debug Statement
The debug statement was extended to include the name of the current user:
System.debug('Message = ' + message + ', User ' + UserInfo.getName());
Scenario 1 took 1281 milliseconds elapsed time and consumed
328 milliseconds of CPU. An interesting aspect here is that after a very
short time the message was being truncated in the log and the user's name
wasn't appearing, but the work to retrieve the name and build the full debug
string was still being done each time through the loop.
Scenario 2 took 120 milliseconds elapsed time and 109 milliseconds
of CPU - starting to ramp up.
4. Including an sObject
At the start of the method, prior to the starting CPU and elapsed time being
captured, an account record with four fields (Id, Name, AccountNumber,
CreatedDate) was retrieved from the database. This was added to the debug
statement:
System.debug('Message = ' + message + ', User ' + UserInfo.getName() + ', Account = ' + acc);
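For reference, the account query behind this test would have looked something like the following (reusing the record Id that appears in the final version of the class later in the post):

Account acc=[select Id, Name, AccountNumber, CreatedDate
             from Account
             where Id='0011E00001ogFnaQAE'];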
Scenario 1 took 1504 milliseconds elapsed time and consumed
494 milliseconds of CPU, but again most of the time the account record
details weren't written to the log.
Scenario 2 took 185 milliseconds elapsed time and 180 milliseconds of CPU
- around a 50% increase.
5. Splitting Into Multiple Debug Statements
To ensure that the information could be seen in the debug logs, the single
statement was split into three:
System.debug('Message = ' + message);
System.debug('User = ' + UserInfo.getName());
System.debug('Account = ' + acc);
Scenario 1 took 1969 milliseconds elapsed time and consumed
656 milliseconds of CPU, and this time all of the pertinent information
was recorded each time through.
Scenario 2 took 198 milliseconds elapsed time and 192 milliseconds
of CPU - a small increase.
6. Adding an Array
The query to retrieve the record was extended to include the Id, Subject and
Description of its two related cases, and an additional statement was added to
debug this:
System.debug('Message = ' + message);
System.debug('User = ' + UserInfo.getName());
System.debug('Account = ' + acc);
System.debug('Cases = ' + acc.cases);
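The extended query is the one that appears in the final version of the class later in the post - something like:

Account acc=[select Id, Name, AccountNumber, CreatedDate,
                    (select Id, Subject, Description from Cases)
             from Account
             where Id='0011E00001ogFnaQAE'];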
Scenario 1 took 2563 milliseconds elapsed time and consumed
1136 milliseconds of CPU - closing in on double the CPU, so clearly
logging an array has quite an impact.
Scenario 2 took 368 milliseconds elapsed time and 353 milliseconds
of CPU, again close on double.
Reviewing the Numbers
Here are the numbers for the tests in an easy to consume format, with the
percentage increase from the previous test shown in brackets.
Test | Scenario 1 Elapsed (ms) | Scenario 1 CPU (ms) | Scenario 2 Elapsed (ms) | Scenario 2 CPU (ms)
---- | ---- | ---- | ---- | ----
1 | 387 | 154 | 44 | 45
2 | 736 (90%) | 285 (85%) | 49 (11%) | 45 (0%)
3 | 1281 (74%) | 328 (15%) | 120 (144%) | 109 (142%)
4 | 1504 (17%) | 494 (50%) | 185 (54%) | 180 (65%)
5 | 1969 (31%) | 656 (33%) | 198 (7%) | 192 (7%)
6 | 2563 (30%) | 1136 (73%) | 368 (86%) | 353 (84%)
Looking at the scenario 1 results, we can see that the time increases with the
number of debug statements and complexity of datatypes. This makes perfect
sense as I'm doing more work to find out granular details about the state as
the method proceeds.
Looking at the scenario 2 results, we can see that by not capturing debug
logs, the numbers are quite a lot smaller. However, the time is increasing as
the number of debug statements and complexity of datatypes increase, even though I'm not using any of the information.
By leaving the debug statements in, I'm forcing various datatypes to be converted
to strings purely so they can be passed as parameters to the System.debug() method,
which then does nothing with them. In the case of test 6, my
transaction is over a third of a second longer due to a bunch of work that
adds zero value - my users will be pleased!
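To make the hidden cost concrete: the debug message has to be built before System.debug is ever invoked, whether or not a log is being captured. Conceptually it works out to something like the following simplified sketch (not the platform's actual implementation):

// the concatenation converts acc to a string and builds the full message
// BEFORE System.debug is called - this work happens even when no log is captured
System.debug('Account = ' + acc);

// broadly equivalent to
String debugMessage = 'Account = ' + String.valueOf(acc);
System.debug(debugMessage);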
Delete All the Debugs?
So is the solution to delete all the debug statements before deploying to
production? In many cases, yes. Often System.debug() statements are leftovers
from problems encountered during development and can be safely deleted. But
not in all cases.
If you are trying to track down an intermittent problem, maybe not. It might
be useful to have the debug statements in there so that you can quickly enable
logging for the user having the problem and see exactly what is happening. The
downside to this is that the transaction is slowed down for everyone all the time,
not just for the user encountering the problem while they are encountering it.
A better solution may be to allow the system administrator to control whether
the System.debug() statements are executed through configuration. That way
they are there if needed, but only executed when there is a problem to be
tracked down.
To carry out a fairly blunt test of this, I created a custom metadata type
with a checkbox to indicate if debugging is enabled. The custom metadata type
is retrieved after the starting elapsed and CPU time have been captured, and
is checked before each of the debug statements is executed, to try to simulate
a little of the real world. My method now looks as follows:
public class LoggingOn
{
    public static void LoggingOn()
    {
        // retrieve the account and its related cases before any timing starts
        Account acc=[select id, name, accountnumber, createddate,
                            (select id, subject, description from Cases)
                     from account where id='0011E00001ogFnaQAE'];

        Limits_Consumed__c lc=new Limits_Consumed__c();
        Long startMillis=System.now().getTime();
        Long startCPU=Limits.getCpuTime();

        // retrieve the custom metadata record that controls debugging
        Debugging__mdt systemDebugging=Debugging__mdt.getInstance('System');

        String message='';
        for (Integer idx=0; idx<5000; idx++)
        {
            message+=idx;

            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Message = ' + message);
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('User ' + UserInfo.getName());
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Account = ' + acc);
            }
            if (systemDebugging.Enable_Debugging__c)
            {
                System.debug('Cases = ' + acc.cases);
            }
        }

        lc.Total_Millis__c=System.now().getTime() - startMillis;
        lc.CPU_Time__c=Limits.getCpuTime() - startCPU;
        insert lc;
    }
}
Executing the final tests again with the custom metadata type set to disable debug statements showed a significant improvement:
Scenario 1 took 475 milliseconds elapsed time (-78%), and consumed 213 milliseconds of CPU (-81%)
Scenario 2 took 66 milliseconds elapsed time (-82%), and consumed 62 milliseconds of CPU (-82%)
Conclusion
Now I'm not suggesting that everyone starts building configuration frameworks to allow individual debug statements to be toggled on or off for specific users - for any decent sized Salesforce implementation, this would be a huge amount of work and an ongoing maintenance headache. What I am saying is: think about the debug statements that you leave in your code when you deploy to production. While it might not feel like you need to worry about a few statements that debug record details, once you start to scale the volume they can slow down the user experience. Studies have shown that when an application is slow, users also perceive it to be buggy regardless of whether that is actually the case - once a user takes against your app, they go all in!