Sunday, 21 February 2021

London Salesforce Developers Want Your Spring 21 Favourites

Since the UK went into its first lockdown in March 2020, the London Salesforce Developers have met virtually over Zoom. This works fine from the perspective of the talks and Q&A, but one area that is a real challenge to replicate is the casual conversation. Sometimes this is just a general catch-up to talk about what we've been working on recently, which is something we can just about manage without. More problematic is that we aren't sharing the cool new features that we've just learned about, and that just isn't acceptable.

For that reason, our March 2021 event will be nothing but our members sharing their favourite feature from the Spring 21 release of Salesforce - we want to hear what you are excited about, and why!

So if you've spotted a hidden gem, sign up for our session on March 10th 2021 and tell us all about it. Don't delay - if someone else gets in before you, they'll get to talk about it!

You can sign up for the event here - once registered you'll need to fill in another form with details of what you want to talk about. You can also put in a backup choice or two, in case someone got in early and grabbed your favourite.

The event takes place on March 10th from 18:00 to 20:00 GMT - we'd love to hear from some internationals, so if you can make the timing work then please join us.

Thursday, 11 February 2021

Spring 21 - AuraEnabled Apex and Sharing

(Updated 11/02 to fix typo on inherited sharing when Apex is invoked by starting a transaction. Mea culpa)


The Spring 21 release of Salesforce includes an update that may change the behaviour of your Apex classes that are used as controllers for Aura or Lightning Web Components. If your org was created after the Spring 18 Salesforce release, or you activated the (now retired) update

   Use without sharing for @AuraEnabled Apex Controllers with Implicit Sharing 

then by default your controllers run as without sharing, which means that they don't take into account sharing settings for the user making the request and allow access to all records. 

Once Spring 21 goes live, the

   Use with sharing for @AuraEnabled Apex Controllers with Implicit Sharing (Update, Enforced)

will be applied and this behaviour will be reversed - the default will be with sharing and access will only be allowed for records owned by, or shared with, the user making the request. 

Why the Change

In a word, security. This update makes your components secure by default - if you forget to specify with sharing or without sharing, the principle of least privilege is applied and the most restrictive option is chosen. 

The absence of a sharing keyword can also be considered a sharing keyword

I'm really not a fan of acts of omission driving behaviour, especially when that behaviour isn't guaranteed. Prior to the Spring 21 release, if you didn't specify the type of sharing, there was no way to tell by inspecting the code itself what would happen. Anyone debugging an issue around sharing would have to know when the org was provisioned, or find out whether the earlier update had been applied - always assuming they could get access to production to find out!

Historically, one reason to omit the sharing was to allow the code to inherit the sharing from its calling Apex. This allowed a class to execute as though:

  • with sharing is defined, if called from a class defined as with sharing
  • without sharing is defined, if called from a class defined as without sharing
which gives a great degree of flexibility, with the trade-off that exactly the same behaviour applies whether you forgot the sharing declaration or intentionally excluded it. A comment to clarify the intent could help here, but that's something else to remember.

Inherited Sharing

Winter 19 made a great step forward for forgetful programmers with the introduction of the inherited sharing keyword. This explicitly states that the class will inherit the sharing from the calling code, so there's no need for anyone to try to infer what the missing sharing keyword might mean.

A slight wrinkle to this is what inherited sharing means when the calling code is not Apex - i.e. when it is the entry point for a transaction and thus executed by the Salesforce platform. A great example of this is an @AuraEnabled class used as a controller for an Aura or Lightning Web Component - which is where we came into this post!

The good news is that the Apex docs explicitly call this out - inherited sharing means with sharing when it is the entry point for a transaction - the principle of least privilege again, but clearly documented so that everyone knows what behaviour to expect.

Call to action

So do yourself and your team a favour: when you are checking your @AuraEnabled classes to see if they will be affected by the Spring 21 update, if you find any without a sharing keyword, add one to make it clear what sharing is being applied. Your future self will thank you, and it also means that Salesforce can flip-flop around what the absence of a sharing keyword should mean and your code remains unaffected.
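To make the intent unmissable, declare the sharing mode explicitly on the class - a minimal sketch, with invented class and method names (each class would live in its own file in a real org):

```apex
// Explicit sharing - record access respects the running user's sharing settings
public with sharing class AccountListController {
    @AuraEnabled(cacheable=true)
    public static List<Account> getAccounts() {
        return [SELECT Id, Name FROM Account ORDER BY Name LIMIT 50];
    }
}

// Alternatively, inherit the sharing mode from the caller - this runs as
// 'with sharing' when the class is the entry point for the transaction
public inherited sharing class AccountService {
    public static List<Account> fetchAccounts() {
        return [SELECT Id, Name FROM Account ORDER BY Name LIMIT 50];
    }
}
```

Either way, the behaviour is visible in the code itself rather than depending on when the org was provisioned.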


Saturday, 6 February 2021

Org Documentor - Flag Non-Display Fields


Towards the end of 2020, I pushed an update to the Org Documentor plug-in to include details of the page layouts that a field is referenced in. When I posted this on LinkedIn, I got the following comment from Anand Narasimhan (a blast from the past from the early days of the CTA program):

This chimed with some of the comments I'd received when I was asked to add the page layout details, around helping to retire old fields that weren't used any more. It didn't seem like it would take a huge amount of work to implement, so I added an issue to the GitHub repository and forgot about it until today.


It certainly didn't take a huge amount of effort. As detailed when adding the page layout reference information, I build up a map of the page layouts that reference a field, keyed by the field name. As I'm building a complex JavaScript object to pass to the EJS templating framework, I add the list of page layouts to a field property named pageLayoutInfo. It was then simply a matter of setting the background colour for the field if the pageLayoutInfo property was empty. The slight complication was that if the field had already been determined to be in error (missing description) or warning (todo or deprecated in the field description) then it would already have a background colour, and I wanted to leave that in place.

All told, this was 4 lines of code (could be reduced to 3 with a wider screen ;):
  if ( (!field.pageLayoutInfo) && 
       (''==field.background) ) {
      field.background=highlightColor;  // set the pink highlight (variable name illustrative)
  }

I then added a field to the sample metadata that isn't present on any page layouts - Internal Description - and regenerated the report, which highlights the field in pink as expected:

Bonus Changes

In response to an issue raised by the community, I also added the ability to configure the name of the report subdirectory for a metadata type via the reportDirectory property in the configuration file. The sample repository has been updated to write the pages pertaining to the objects metadata to the objs directory. If you don't provide the reportDirectory property, it will default to the metadata type name - e.g. objects, triggers. I've also added an issue to document the configuration file properties, as right now there is an example file and I leave everyone to draw their own conclusions.
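As an illustration, the relevant stanza of the configuration file might look something like this - a sketch based on the description above, so consult the example file in the repository for the definitive structure:

```json
{
    "objects": {
        "name": "objects",
        "description": "Custom Objects",
        "reportDirectory": "objs"
    }
}
```

Omit reportDirectory and the pages are written to a directory named after the metadata type - objects in this case.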

I also fixed a bug in the Aura enabled pages that detail the Apex controller classes for Aura components - if the component extended a super component it all went to hell, but now it handles that correctly.

Updated Plug-in

Version 3.4.6 of the plug-in has this new functionality and can be found on NPM.

If you already have the plug-in installed, just run sfdx plugins:update to upgrade to 3.4.6 - run sfdx plugins once you have done that to check the version.

If you aren't already using it, check out the dedicated page to read more about how to install and configure it.

The source code for the plug-in can be found in the GitHub repository.


Sunday, 31 January 2021

How Often Should You Blog?

How often should you blog? This is a question that, while not often asked, has many answers. The reason it's not often asked is probably because you just have to wait a couple of days and it will pop up in one of the newsletters you somehow signed up for years ago and haven't got around to cancelling. The problem is, it will be answering the question from the perspective of someone else - maybe someone that is trying to grow an already large online following, sell online courses, or forge a new career as a writer. It might even be answered from the perspective of a company that is blogging as part of a marketing initiative.

Even if the answer is from someone trying to achieve the exact same thing that you are, they won't be you, shaped by your own experiences and subject to your own personal ups and downs. For this reason I would advise never following someone else's advice slavishly - pick some approaches based on what appeals to you, and try a selection of them out. You should also apply this to my advice: it's typically based on what has (mostly) worked for me, and I'm not you.

Every Day

I've seen a number of write-ups from people who blog every day. Some reported less traffic, not more - readers deserted them as they weren't able to maintain the quality, or just got overloaded even though the blogger felt that their process had improved. Others found that their engagement went up, sometimes by as much as 1,000%, although the real success stories tend to be a few years old, possibly because the platforms they were writing on were less crowded so a concerted effort could make a real difference.

Blogging every day definitely isn't for me, as I'm absolutely certain the quality would go downhill. It's easy to underestimate the effort required to write posts at a decent cadence, and I can't tell you the number of times I thought I had a series of 10+ posts on a topic, only to start running out of ideas by part 4 or 5. The only way I could make this happen is by flipping my blog to more of a journal, where I just write about my day, but I'd get bored with that before too long, never mind any readers. 

I am tempted though, as I'm curious whether there's a point where it becomes easy to dash out a post during a coffee break, and if in the future I have fewer demands on my time I might try it. I probably won't try it out on this blog though; I'd likely create a new space on something like Medium, so as not to alienate my current audience. They are likely to be interested in Salesforce, software development and devops (maybe remote working right now), because that's what I've written about for the last decade, and probably wouldn't enjoy the change to a diary approach. That doesn't mean it won't work for you, so it might be worth a try if you think you can take the pace. Most of the accounts of daily blogging I read are about people experimenting with it for 1-3 months, but there are a few that stick with it for years. I doubt they are doing this on top of a day job and a demanding family life, but who knows.

Several Times a Week

Marketing Insider Group recommend 2-4 posts per week to get the top results in terms of traffic and conversions, but this is for companies that are trying to convert blog readers into customers. While I could aim for that, I'm not sure what I'd do with a customer, given the only thing I really have to sell is the Visualforce Development Cookbook.  2-4 posts per week is also a lot given that I'm writing them all myself, and I already have a full time job that consumes quite a bit of my time. I could probably do a couple of posts a week for a short period of time, but I feel like I'd run out of ideas after a few months. I'd also start to resent that I'd put myself in this position and view it as a chore, which never leads to good outcomes in creative endeavours.

Once a Week

This is what I aim for, and I must say it's quite easy to achieve during a pandemic. Looking back at my posts over the last 12 months, I was hitting more than one a week during the periods that we were locked down, or restricted in terms of travel. When we opened up, however, I dropped down to one or two a month as I had more pressing demands on my time.  

When I Have Something to Say

This is my sweet spot. Rather than putting pressure on myself to write at a particular cadence, and beating myself up when I inevitably fall behind, I write longer and more considered posts when I have something to say. If I have something to say, it will typically be about a topic that I'm interested in, that I care about, or that I know a fair bit about. When it's all three, I can't get the words out fast enough, but even one of them provides the motivation. For a long time I didn't realise this though, and I experimented with a number of other approaches to see if I enjoyed them more, or if they had a positive effect. I reconsider my approach whenever I read a post about blogging every day, every week, or points in between, because what works for me now may not be ideal in a year or two.

There does seem to be some level of agreement that blogging with a cadence can be helpful, but I'm not sure how well that applies to technical blogs like this one. I can't imagine that if I switched to posting every Friday that there would be a bunch of people out there waiting for the white smoke to indicate a new post had been published, which is why I tend to publish as soon as I'm happy with what I've written. Again, I've tried a number of different approaches to this, trying to find the days and times that get maximum engagement, and I've never really found it makes much difference. Your mileage may vary.

Saturday, 30 January 2021

Org Documentor - Custom Header Colors



In my last post about the Org Documentor, I detailed the first customisation option for the template content outside of the actual org detail - the title and subtitle of the header page. Something that also seemed useful to configure was the header background colour and text colour - I like the current purple/dark grey combo, but I can't imagine everyone does.

What I wasn't sure about at the time was how to go about this. The colours come from the styles.ejs file, which is included into the various templates as required. I didn't want to move away from this to have specific style stanzas in the template itself, so I considered rewriting the styles file. The downside to this was that it would only let me override the colours once, and that override would apply to all pages, whereas I liked the idea of being able to override at any level.

And I've used the US spelling in the title of this post, and for my property names, as otherwise it jars against the CSS names. Hate on me if you want.


So I did some digging around what was possible when including files in EJS, and it turned out I needn't have worried. I can pass a JavaScript object as part of the include call, and use the properties of this just like I would in the main template:
   <%- include ('./common/styles', {header: content.header}) %>
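On the receiving side, the included styles file can then use the passed property like any other template data - a sketch, with the selector name assumed:

```
.header {
    background-color: <%= header.backgroundColor %>;
    color: <%= header.color %>;
}
```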
While it might seem like a simple change to add a header property to the JavaScript object being passed to the template, there was a little more to it as:
  • This is a Salesforce CLI plug-in, and is written in TypeScript. This means that if I want to add properties to an object, I have to declare them, as opposed to JavaScript where I'd just chuck them on the end.
  • I want to be able to override the header colours on a per-page basis, falling back on the parent page if no override is defined, and eventually falling back on default values, which meant I needed to add this to every one of my content objects that generates the documentation pages. 
First off I defined the HeaderStyle custom type:
interface HeaderStyle {
    backgroundColor: string;
    color: string;
}
Once I have this, I can add a property to each of my content types:
interface IndexContent {
    links: Array<ContentLink>;
    title: string;
    subtitle: string;
    header: HeaderStyle;
}
Note: there are other options around this in TypeScript - I could define the property type as any, which would allow me to put whatever I like in there, much like regular JavaScript. A benefit of strong typing is that VS Code flags up anywhere I've used my content types but haven't added the header property, which saves me figuring it out manually.

I then changed the code that processes the configuration file to look for backgroundColor and color entries at the top level, at each processor level (objects, triggers, auraenabled) and at each group level, defaulting the values from the parent, if defined, and eventually falling back on the standard colours.
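The fallback itself is straightforward to express - this is a sketch of the idea rather than the actual plug-in code, so the function and constant names (and the default colour values) are my own:

```typescript
interface HeaderStyle {
    backgroundColor: string;
    color: string;
}

// Stand-ins for the standard colours - the real values live in the plug-in
const DEFAULT_STYLE: HeaderStyle = {
    backgroundColor: '#5c4f7c',
    color: '#2b2b2b'
};

// Each level of the configuration may define neither, one, or both properties;
// anything missing is inherited from the parent, then from the defaults
function resolveHeaderStyle(own: Partial<HeaderStyle>,
                            parent: Partial<HeaderStyle>): HeaderStyle {
    return {
        backgroundColor: own.backgroundColor ?? parent.backgroundColor ?? DEFAULT_STYLE.backgroundColor,
        color: own.color ?? parent.color ?? DEFAULT_STYLE.color
    };
}

// The objects level overrides both values, so nothing is inherited
const objectsStyle = resolveHeaderStyle(
    { backgroundColor: '#ff8b00', color: '#ffffff' },
    {}
);
```

Running the same resolution at every level gives the per-page override with graceful fallback described above.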

Finally, I updated the configuration file of my sample Salesforce metadata to override the objects page:

    "title": "Instance Overview",
    "subtitle": "From the metadata source",
    "objects": {
        "name": "objects",
        "description": "Custom Objects", 
        "backgroundColor" : "#ff8b00",
        "color": "#ffffff",

and after generating my report, I get the default on the Home and most other pages:

while on the objects page, I get a particularly fetching orange background:

Choose your colours with care though, as right now the breadcrumb is using the default Bootstrap colours, so it's easy to make it hard to see.

Updated Plug-in

Version 3.4.4 of the plug-in has this new functionality and can be found on NPM.

If you already have the plug-in installed, just run sfdx plugins:update to upgrade to 3.4.4 - run sfdx plugins once you have done that to check the version.

If you aren't already using it, check out the dedicated page to read more about how to install and configure it.

The source code for the plug-in can be found in the GitHub repository.


Sunday, 24 January 2021

Salesforce Flows, Triggers and CPU - One. More. Time.


The Spring 21 Salesforce release includes the following update, which will be enforced in Summer 22:

Accurately Measure the CPU Time Consumption of Flows and Processes (Update)

With this update enabled, Salesforce accurately measures, logs, and limits the CPU time consumed by all flows and processes. Previously, the CPU time consumed was occasionally incorrect or misattributed to other automation occurring later in the transaction, such as Apex triggers. Now you can properly identify performance bottlenecks that cause the maximum per-transaction CPU time consumption limit to be exceeded. Also, because CPU time is now accurately counted, flows and processes fail after executing the element, criteria node, or action that pushes a transaction over the CPU limit. We recommend testing all complex flows and processes, which are more likely to exceed this limit.

which I was particularly pleased to see, as it was confirmation of what I'd been seeing since before-save flows were introduced in Spring 20, namely that the reporting around flow CPU consumption was off. You can read about the various findings by working back through the blog posts, but the bottom line was that the debug logs reported hardly any CPU consumption for a flow unless you added an Apex debug statement about it, in which case it suddenly jumped up and reported the actual value. In effect the act of looking at how much CPU had been consumed caused it to be consumed, at least from a reporting perspective. For a real transaction the CPU obviously had been consumed, but it wasn't always clear what had consumed it, which no doubt led to a lot of finger pointing between low and pro coders, or ISVs and internal admins.

To close the loop on this, I was really keen to re-run some of my previous tests to see what the "real" figures looked like, or as real as you can get, given that performance profiling of Salesforce automation is affected by so many factors outside your control.  When looking at the results below, always remember that the actual values are likely to be different from org to org - the figures I'm seeing from my latest tests are way faster than a year ago in a scratch org, but your mileage may vary.

Also, don't forget to apply the update - I forgot, and couldn't make head or tail of my first test result!


The methodology was the same as for the original before-save testing in Spring 20 - I'm inserting a thousand opportunities which are round-robined across two hundred accounts and immediately deleting them. This means the figures are inflated by the setup/teardown code, but that is constant across all runs. The debug level was turned down to the minimum, and I ran each test five times and took the average.
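For reference, the harness is along these lines - an illustrative anonymous Apex sketch rather than the exact test code, so the names and stage/date values are assumptions:

```apex
// Insert a thousand opportunities, round-robined across two hundred
// existing accounts, then delete them again
List<Account> accounts = [SELECT Id FROM Account LIMIT 200];
List<Opportunity> opps = new List<Opportunity>();
for (Integer idx = 0; idx < 1000; idx++) {
    opps.add(new Opportunity(Name = 'Perf Test ' + idx,
                             StageName = 'Prospecting',
                             CloseDate = Date.today().addDays(30),
                             AccountId = accounts[Math.mod(idx, accounts.size())].Id));
}
Long elapsedStart = System.currentTimeMillis();
insert opps;
delete opps;
System.debug('Elapsed: ' + (System.currentTimeMillis() - elapsedStart) + ' msec');
```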

The simple flow to update the opportunity name came out at 1172 msec. Just out of curiosity, I added a debug statement to the Apex code that does the testing, and lo and behold the average went up to 1374 msec. Here we go again, I thought. Before I published and started fighting on the internet again, I disabled all automation and ran the code again - inserting and deleting the opportunities with and without the debug statement. The averages were: no debug statement - 1108 msec, with debug - 1140 msec. Nothing conclusive, but it's definitely having an impact. Finally, I enabled my simple trigger to do the same thing as the flow and tested this with and without the debug statement. Average without - 897 msec, with - 1216 msec. Given that the average without the debug statement was lower than the average when all automation had been turned off, I decided that the debug statement ensures an accurate report, especially as it has a similar impact across both flow and trigger numbers. Once again, it's really hard to profile Salesforce performance!


Test                                                   Flow   Trigger   Difference
No automation                                          1140   1140      0
Update record name                                     1374   1216      158
Update record name with account info (lookup)          2133   1252      881
Combined trigger (set name) and flow (change amount)   1459   1459      0

As expected, triggers are faster than flows, which is good because that's what Salesforce say! The differences aren't huge for a single record, but once you scale up to a thousand records with a little complexity they can become significant - 881 msec might not sound like a lot, but it's nearly 9% of your 10,000 msec allowance for a transaction. 

Mixing flow and triggers doesn't bring any penalty, for CPU at least. It makes your automation harder to understand, and may mean you can't guarantee the order that the various types fire in, so it's best avoided regardless of the effect on CPU time.

Proceed With Caution

In a follow-up post written when after-save flows were introduced, I found that I could carry out way more work in a transaction using flows rather than triggers, as whatever checks CPU time from a limits perspective was subject to the same reporting issue. Once this update is activated, the flow CPU counts from the beginning of the transaction. In Summer 20 I could insert 2-3000 opportunities and keep under the 10000 msec CPU limit; with this update activated I'm hitting it at the 1-2000 mark, so you might find you've inadvertently been breaking the limit because the platform didn't notice and let you. Definitely run some tests before enabling this in production and, per the update information:

      You can enable the update for a single flow or process by configuring it to run in API version 51.0 or later.

so you can start testing on a flow by flow basis.

Hopefully this is the last post I'll have to write on this topic - it's been a wild ride, but it feels like we've reached the end of the road. 


Saturday, 16 January 2021

Remote Working - together, isolated


As we move into the third week of 2021, the UK is working through the world's worst movie franchise - now showing: Lockdown 3 - Lockdown Harder (but not as hard as Lockdown 1). We're all hopeful that Lockdown 4 - Lockdown without a Vaccine is permanently cancelled, but that remains to be seen.

Everyone that is able to is back working from home and ordering everything they need via the internet. I saw this situation described on Twitter recently as "the middle class stay at home while the working class bring them things", which sums it up quite nicely.

The Tech Wars aren't Over

But they have changed.  Back in the day it used to be desktop operating systems - Linux, Windows or Mac. Then came the rise of the mobile device and it was iOS or Android. Sometimes it was programming languages - C++ or Java. In Salesforce we have the UI - Lightning or Classic, and the development paradigm - No, Low, or Pro Code.

In the world of remote work we can now argue about chairs, air purifiers, mechanical versus membrane keyboards, standing desks, microphones and webcams, along with side skirmishes about LED lighting and cable routing. If you aren't engaged in any of these battles, you might want to read some of the billions of blog posts that have been written in the last 10 months about "the perfect home working setup". There's probably been another couple of hundred written while you've been reading this paragraph, so don't delay.

The Great Normalisation (and other upsides)

One upside is the acceptance that working from home doesn't mean your home suddenly becomes a professional office. Children and pets wandering into video calls is the new normal and not something to be mortified about. In fact, it's not just accepted, but often a welcome break for the hostages desperately waiting for the call to end. A home is a shared resource, and we must all be good multi-tenant citizens and not hog the space.

Community events are mostly virtual, including the Dream'/London's Calling behemoths. While we miss seeing each other in person, this is another upside. When we had to fly all over the world to attend or present, it definitely favoured those earning Western European/US style salaries with passports that gave visa free access to lots of countries. If you were from a wealthy country, you had far more opportunity to raise your profile in the community than someone who needed to wait months for a visa and spend a week's wages on one night in a London hotel. 

What Does the Future Hold

(This is obviously forward looking, and the following is nothing more than my opinion)

There will be no in-person Dreamforce in 2021. The figures in the US are still horrific, and Europe isn't that much better. While it seems likely the vaccine will reduce the death toll by protecting those most at risk, it will take more than a couple of months and, as usual, the developed world will take priority. I can't see Salesforce wanting to be the company that brings 100k+ people from anywhere to San Francisco and gets blamed for the next flare up. There will almost certainly be some testing requirements for any in-person events for the rest of the year, and the logistics of that would be overwhelming at that scale - anyone who has been to a World Tour event with metal detectors can attest to that. A gradual return to normal event attendance seems more likely to me, and I can't see a socially distanced Dreamforce with 5k attendees appealing to Marc Benioff and co.

The same goes for the London Salesforce Developers. In the UK right now (16th Jan) we are seeing around 50k+ new cases a day, which is flattening but will take some time to go down. We are likely to stay in a highly restricted environment for a couple of months, followed by a gradual easing. It's hard to see any company wanting to open up their offices to a bunch of people who could have been mixing with anyone, and speaking as an organiser, I don't want to be remembered as one of the team behind a super-spreader event.

Realistically it feels to me like it will be 2022 before we restart the kind of events that aren't vital to an industry, and it will only be when things like football, rugby and the theatre are running at full capacity that the community-minded events start getting back up to speed. I also think that hybrid events, with a mix of in-person and virtual sessions, will be the future. Before then, if we can get back to some kind of normality where I can mix with one or two of my friends and neighbours in my local pub, I'll be happy.

Looking a bit further out, I don't think the remote working genie is going back into the bottle. While there will be a return to the office after the pandemic is over, people new to the job market have only known this, will have mostly enjoyed the experience, and will be expecting this to be offered by any company they talk to in the future. They will eventually be the generation of leaders, and will be pushing for more remote when they are in a position of influence. It might take a decade or two, but the days of everyone commuting to an office will seem as quaint as taking your celebration food to the village bakehouse for cooking now does to those of us who live in Europe.
