Friday, 30 April 2021

Remote Work - The End of the Beginning?


As we head into May in the United Kingdom, COVID-19 cases and deaths have settled at around 2-2.5k and 10-40 per day respectively, so the planned relaxing of lockdown looks set to continue. Vaccine data from around the world makes us cautiously optimistic that the impact of catching the virus is significantly reduced, so it could be that once we re-open, we stay open. While this is very positive news, it does apply pressure to those of us who have been regularly writing about remote work, as the clock is ticking down on the "all remote all the time" approach, and interest in hearing about it will start to wane.

I've read a number of research items suggesting that people feel less productive when they work from home, in part because when they hit a problem they can't just look up and ask someone a question. Instead they either have to self-serve via Google or track someone down in chat and type out their question. While I can see the impact, it isn't the whole story. As the person who was usually on the receiving end of said questions, answering a large number every day, I'm often a lot more productive as I can more easily get my head down and concentrate on some deep work. I can also choose to ignore my email/chat/phone much more easily than I can ignore someone who comes and stands next to me and starts talking! Obviously helping people is a huge part of my job, so I don't expect to go off the grid for days (or even hours) at a time, but even breaking off to tell someone I'm busy and ask if I can come back to them later disturbs my train of thought. I'd really like to see the numbers from both sides to determine the real impact!

The "Work from Anywhere" narrative is being dialled back to a hybrid approach of "Work from Anywhere  Half the Time and the Office the Other Half", which kind of ruins the anywhere aspect. If you have to go into the office 2-3 days a week, you'll need to keep living near to that office unless you really fancy commuting from the beach for several hours a day. To be fair, some companies are sticking to the fully remote option, but typically caveating with "where your role allows it", and what's the betting a lot of roles turn out not to allow it after all.

From a pure getting things done perspective, I feel like I've always embraced working from anywhere. Prior to the pandemic I used to do a fair bit of travelling, including a few trips every year to the US. This would typically involve several hours on buses/trains/tubes, arriving at an airport three or four hours early and then sitting for ten hours on a plane. With a little planning I could churn out work wherever I found myself, writing designs or documentation when I was offline and Apex and Lightning Components when I had connectivity.

From a leadership, collaboration, and support perspective it's less than ideal - Zoom calls and messaging tools can cover a lot of this, but sometimes there really is no substitute for a face to face meeting where you can scribble on a whiteboard. But you don't need that all day every day either, which is why I think the classic remote work approach is likely to be rolled out more going forward - employees are based at home and don't have a desk in an office, but they spend a day or two a month meeting with their colleagues in person. 

One thing is for sure, remote work isn't settled as we tentatively venture back into different houses, pubs, and offices. In November 1942 Churchill gave a speech at the Lord Mayor's Luncheon at Mansion House where he said:

Now this is not the end. It is not even the beginning of the end. But it is, perhaps, the end of the beginning.

and that is where I feel we are with remote work. The beginning was the momentous turn of events that saw us pivot to "as remote as possible" with very little planning, and stick with it for over a year. We are now coming to the end of the beginning, as offices will soon be able to reopen. When that happens we enter the middle, as we try out lots of changes and tweak things as we go. The end is still some way off - that's when we all agree that we have nailed it and remote work is as efficient as it can be. And that will be the norm until the next momentous event that derails everything!


Saturday, 24 April 2021

Book Review - Becoming a Salesforce Certified Technical Architect


(Disclaimer: I didn't purchase my copy of this book - I was sent a copy to review by Packt Publishing)

Introduction

Salesforce describes Certified Technical Architects as follows:

Salesforce Technical Architects serve as executive-level strategic advisors who focus on business transformation with unrivaled domain expertise in functional, platform and integration architecture. They communicate technical solutions and design tradeoffs effectively to business stakeholders, and provide a delivery framework that ensures quality and success.

Based on that description, it's not surprising that there's a huge amount of interest in becoming one. The path isn't straightforward though, with an awful lot of candidates falling at the last hurdle - the review board. This isn't a new development - in the 12 months after I qualified I heard about two week-long review board events where all 20 candidates failed. While that's obviously bad for the candidates, it's also not great for the judges - failing people isn't a fun job, and when you have to do it repeatedly the sessions become a chore.

One reason people fail is that they haven't got experience of presenting a solution and being judged on it. That has improved with community efforts to come up with example scenarios and run practice review boards. While this is good experience for the day of the board, it's not overly helpful in terms of preparation. And that is where this book comes in.

The Book for When You Book

This is the book you want to buy when you are about to book your review board slot. It won't turn you into a Technical Architect - you still need the experience and learning - but it will get you ready for the board. First off, it looks at what expertise you need to become a CTA - if you don't recognise yourself here and spot gaps, maybe hold off for a little while. It then takes you through the structure of the review board session and gives some useful tactics for how to approach it.

Then there are a bunch of chapters around the various areas that you need to shine at in order to pass the board, with mini-scenarios that are worked through with you. Rather than reading straight through, I'd strongly advise having a go yourself and then comparing your solution with the exemplar. If you have a different view, that doesn't mean you are wrong - just make sure you can justify it and your justification stacks up. It's highly likely that you'll be quizzed about the benefits of your approach over the other possible solutions.

The book then finishes off with a couple of full mock scenarios, with example solutions, presentation artefacts and scripts. Again, look to make the most of this by trying it yourself under exam conditions.

Allow Enough Time


I'm already a CTA so I didn't have to take on each of the scenarios and mocks, but it still took me a couple of months to read through the book in my free time and review it. Don't expect to skim this the night before the board, or to rip through it the weekend before. Set aside enough time to work through it properly and you'll reap the benefit.

The Key Advice

The most important piece of advice in the book, which crops up multiple times, is to own your solution. You've considered all the options and have come up with the best possible solution, so be proud of it when you present, and defend it with everything you have. 

In Summary

You can boost your chances of becoming a CTA with this book. It's as simple as that. Tameem Bahri has done a great job with it.




Saturday, 17 April 2021

Unpackaged Metadata for Package Creation Tests in Spring 21

Introduction

The Spring 21 release of Salesforce introduced a useful new feature for those of us using second generation managed or unlocked packages (which really should be all of us now) - the capability to have metadata available at the time of creating a new package version, but which doesn't get added to the package.

I've written before about how I use unpackaged directories for supplementary code, but this isn't code that is needed at creation time - it's typically utilities to help the development process, or sample application code to make sure that the package code works as expected when integrated elsewhere.

The Scenario

To demonstrate the power of the new unpackaged concept, here's my scenario:

I have a package that defines a framework for calculating the discount on an Opportunity through plug and play rule classes. The package provides a global interface that any rule class has to implement:

public interface DiscountRuleIF 
{
    Decimal GetDiscountPercent(Opportunity opp);
}

The rules are configured through a Discount_Rule__c custom object, which has the following important fields:

  • Rule_Class__c - the name of the class with the rule interface implementation
  • Rule_Class_Namespace__c - the namespace of the class, in case the class lives in another package
And there's the rule manager, which retrieves all of the rules, iterates them, instantiates the implementing class and executes the GetDiscountPercent method on the dynamically created class, eventually calculating the total discount across all rules:
global with sharing class DiscountManager 
{
    global Decimal GetTotalDiscount(Opportunity opp)
    {
        // retrieve all of the configured rules
        List<Discount_Rule__c> discountRules=[select id, Rule_Class__c, Rule_Class_Namespace__c
                                              from Discount_Rule__c];
        Decimal totalDiscount=0;

        for (Discount_Rule__c discountRule : discountRules)
        {
            // dynamically instantiate the configured class and execute its rule
            Type validationType = Type.forName(discountRule.Rule_Class_Namespace__c, discountRule.Rule_Class__c);
            DiscountRuleIF discountRuleImpl = (DiscountRuleIF) validationType.newInstance();
            totalDiscount+=discountRuleImpl.GetDiscountPercent(opp);
        }

        return totalDiscount;
    }
}

So the idea is that someone installs this package, creates rules locally (or installs another package that implements the rules), and when an opportunity is saved, the discount is calculated and some action taken - probably discounting the price, but they are only limited by their imagination.
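
To make that concrete, here's the kind of rule a subscriber might create locally - the class name and discount logic are hypothetical, purely for illustration, and I'm assuming the installed package exposes the interface under its namespace:

public with sharing class VolumeDiscountRule implements BGKBTST.DiscountRuleIF
{
    // hypothetical rule - an extra 5% discount for large opportunities
    public Decimal GetDiscountPercent(Opportunity opp)
    {
        return (opp.Amount!=null && opp.Amount>100000) ? 5 : 0;
    }
}

This would be wired in with a Discount_Rule__c record that has Rule_Class__c set to 'VolumeDiscountRule' and Rule_Class_Namespace__c left empty, as the class lives in the subscriber org rather than in another package.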

Testing the Package

I don't want to include any rules with my package - it purely contains the framework to allow rules to be plugged in. Sadly, this means that I won't be able to test my rule manager class, as I won't have anything that implements the discount rule interface. For the purposes of checking my class, I can define some supplementary code using the technique from my earlier post via the following sfdx-project.json:

{
    "packageDirectories": [
        {
            "path": "force-app",
            "default": true,
            "package": "UNPACKMETA",
            "versionName": "ver 0.1",
            "versionNumber": "0.1.0.NEXT"
        },
        {
            "path": "example",
            "default": false
        }
    ],
    "namespace": "BGKBTST",
    "sfdcLoginUrl": "https://login.salesforce.com",
    "sourceApiVersion": "51.0"
}

Then I put my sample implementation and associated test classes into example/force-app/main/classes, and when I execute all tests I get 100% coverage.  I can also create a new package version with no problems. Unfortunately, when I promote my package version via the CLI, the wheels come off:

Command failed
The code coverage required to promote this version has not been met.
Please add additional test coverage and ensure the code coverage check
passes during version creation.

My sample implementation and the associated test code have been successfully excluded from the package, but that means I don't have the required test coverage. Before Spring 21 I'd sigh and pollute my package with code that I didn't want, just to satisfy the test coverage requirement. It would be a small sigh, but they add up over the years.
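
For illustration, the kind of unpackaged code being excluded here might look something like the sketch below - hypothetical class names, a sample rule plus a test that exercises the rule manager through it (shown together for brevity; they'd be separate class files):

// sample rule in the example directory - excluded from the package
public with sharing class SampleDiscountRule implements DiscountRuleIF
{
    public Decimal GetDiscountPercent(Opportunity opp)
    {
        return 10;
    }
}

@isTest
private class DiscountManagerTest
{
    @isTest
    static void TestTotalDiscount()
    {
        // configure the manager to pick up the sample rule from the local namespace
        insert new Discount_Rule__c(Rule_Class__c='SampleDiscountRule');

        // in-memory opportunity - no insert, so org configuration can't derail the test
        Opportunity opp=new Opportunity(Name='Test');
        System.assertEquals(10, new DiscountManager().GetTotalDiscount(opp));
    }
}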

The unpackagedMetadata property

The new Spring 21 feature is activated by adding the unpackagedMetadata property to my sfdx-project.json, inside the default package directory stanza:

{
    "packageDirectories": [
        {
            "path": "force-app",
            "default": true,
            "package": "UNPACKMETA",
            "versionName": "ver 0.1",
            "versionNumber": "0.1.0.NEXT",
            "unpackagedMetadata": {
                "path": "example"
            }
        },
        {
            "path": "example",
            "default": false
        }
    ],
    "namespace": "BGKBTST",
    "sfdcLoginUrl": "https://login.salesforce.com",
    "sourceApiVersion": "51.0",
}

Note that I still have my supplementary directory defined, but I'm also flagging it as metadata that should be used at create time to determine the test coverage, but not included in the package.

Having created a new version with this configuration, I'm able to promote it without any problem. Just to double-check, I installed the package into a scratch org and the unpackagedMetadata was, as expected, not present.

Reduced Coverage in the Subscriber Org

One side effect of this is that the code coverage across all namespaces will be reduced in the subscriber org. Depending on your point of view, this may cause you as the package author some angst. It may also cause the admin who installed the package into their org some angst. 

Personally I don't think it matters - the coverage of managed package code doesn't have any impact on the subscriber org, and I don't think it's possible to craft tests inside a managed package that are guaranteed to pass regardless of what org they are installed into. I could do it for my sample scenario, but if it's anything more complex than that, the tests are at the tender mercy of the org configuration. I create an in-memory Opportunity to test my discount manager, but if I had to insert it into the database, there's any amount of validation or expected data that could trip me up. While there are techniques that could be used to help ensure my tests can pass, mandating that everyone who installs my package has to have used them across all their automation seems unlikely to be met with a positive reaction.

It's also more likely that running all tests across all namespaces will succeed, as the tests that could easily be derailed by the org configuration can be left out of the package, leaving in place only those that are completely isolated to the package code - so swings and roundabouts.

What it does mean is that the package author has the choice as to whether the tests are packaged or not, which seems the way it should be to me.







Saturday, 10 April 2021

Bake It Till You Make It

 


Introduction

As we start to come out of the third lockdown in the UK, blinking in the not so bright light of yet another overcast day, it's important to keep occupied. BrightGen have been pretty good at finding fun things to do, and the baking challenge returned for Spring. We all get an allowance for our ingredients, so the only outlay is our time and loving attention.

I'm not that fussed about cakes, and we'd only just finished up the last couple of slices of my Great BrightGen Bake Off entry from November 2020 (it was in the freezer rather than rotting in a tin!) so I decided to go savoury with beef loaf en croute. I've made this many times over the years, but this time was different. First I was making the pastry from scratch. I haven't done this in about 10 years as the supermarket version is just as good, but that didn't seem in the spirit of the challenge. The second difference was that it really mattered if it came out okay - this wasn't just for guests to eat at a sophisticated dinner party, this was going on social media!

The Hashtag

In the previous incarnation of the BrightGen bake off, the hashtag proved a bit of a challenge. That's supposed to say GBGBO - Great BrightGen Bake Off. It's just about readable, but hardly a triumph. I wasn't expecting much better this time, but I was determined to try!


After making the pastry and leaving it to rest in the fridge for a couple of hours, I nicked off the ends and created the hashtag lettering - minus the hash symbol, as that's just dead space. Not too bad, but boy do you have to be quick with puff pastry or it sticks to everything, and there are a lot more letters this time!

Next it was time to make the meat loaf - minced beef, onions, apples, breadcrumbs, stock and a few herbs, then 45 minutes in a bain-marie and it was cooling. This is by far the easiest part of the whole process, but make sure you leave it to cool down completely or you'll leave half of it in the tin - I know this for a fact as I've spent more than one evening frantically trying to piece it back together like the world's worst game of Tetris.


Several hours later I returned to assemble, only to experience the revenge of the hashtag (if you saw the outcome on social media, just forget that for a couple of paragraphs and play along with my artificial jeopardy) - it didn't fit on the loaf.


Nothing to be done now, as the whole thing was assembled, and the pastry was milk-washed and already starting to sag. Into the oven and hoping for the best.

Even if I say so myself, it came out alright:


Baker's Remorse

What happened? How did it go so wrong? I knew the letters were large as I wanted them to fill the top of the loaf, but did I really think I could fit 12 letters of that size onto the surface area of a 1 pound loaf tin?

If only I’d made a smaller one to take the second part of the hashtag.

Oh wait.

That’s right.

I did!


As I'm still shopping for elderly parents, whenever I cook anything like this I make extra to send to them with their weekly groceries, and it turns out the 1/2 pound loaf tin surface area is perfect for the last 4 letters.

Here are the finished articles and a sneak peek at the official cutting:







Saturday, 3 April 2021

Unbounded Queries [Featuring Spring 21 FIELDS()]


Introduction

FIELDS() is a new SOQL function in the Spring 21 Salesforce release that allows you to query standard, custom or all fields for an sObject without having to know the names. Could it be, are we finally getting the equivalent of SELECT * so beloved of developers working with an SQL database? Can we retire our Apex code that figures out all of the field names using describe calls? Digging into the docs, the answer is no. You can pull back the standard fields, but not all of them.
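
For reference, here's what the new syntax looks like in Apex - a minimal sketch, noting that per the docs only the STANDARD variant is supported in Apex, with FIELDS(ALL) and FIELDS(CUSTOM) restricted to bounded queries via the APIs:

// FIELDS(STANDARD) pulls back all standard fields without naming them
List<Contact> contacts=[select FIELDS(STANDARD) from Contact limit 10];
System.debug(contacts);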

Disappointing, right? Maybe at first glance, but from a future proofing your Apex code perspective, absolutely not. It's actually saving you from having unbounded queries lurking like a time bomb in your Salesforce instance, waiting to blow the heap limit on an unsuspecting user.

Unbounded Queries

Simply put, an unbounded query doesn't limit the results that may be returned from the database. 

Most of the time, we (and the rest of the developer world) think of unbounded queries as those without a restrictive WHERE clause, or with no WHERE clause at all. These queries start out life doing their best to be helpful - backing custom pages with search capabilities where the user can choose some criteria to restrict the number of records, but can click search straight away if they want to. When there are 20 records in the system, this isn't a problem. Once you've been live for 5 years and you have 15,000 records, it's an accident waiting to happen. I think of this as vertically unbound, as something needs to be done to limit the depth of the record list retrieved.
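
For example, a trivial sketch of the vertical case:

// vertically unbounded - retrieves every matching record, however many there are
List<Account> allAccounts=[select Id, Name from Account];

// bounded depth - the LIMIT clause caps the number of records retrieved
List<Account> someAccounts=[select Id, Name from Account limit 1000];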

So putting in a LIMIT clause to your query solves the problem and future proofs your application? Not quite. Queries can be unbound horizontally as well as vertically - each record can become so wide that the total number that can be retrieved is massively reduced. I view this as a problem that impacts Salesforce implementations more than other technologies, as it's so easy to add fields to an sObject and change the database schema. No need to involve a DBA, just a few clicks in Setup. The odd field here and there racks up over the years, and if your query is retrieving all fields on a record, the size of each record will become a problem, even if you are restricting the number of records that you retrieve.

Some Numbers

To reflect the 'SELECT *' concept, I threw together some Apex that dynamically creates a query by iterating the field map returned by the schema describe for an sObject. I created 500 Contact sObjects with a variety of fields present, then executed some anonymous Apex to query them back, iterate them, and debug the records and the heap consumed. I took the simplistic view that the entire heap was down to the records retrieved, so all calculations around the maximum records will probably be over optimistic, but I'm sure you get the point.
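
Something along these lines - a sketch of the describe-driven approach, not necessarily the exact code behind the numbers below:

// build a 'SELECT *' style query from the schema describe for Contact
Map<String, Schema.SObjectField> fieldMap=Schema.SObjectType.Contact.fields.getMap();
List<String> fieldNames=new List<String>(fieldMap.keySet());
String query='select ' + String.join(fieldNames, ', ') + ' from Contact';

// query the records back, iterate and debug them, then report the heap consumed
for (Contact cont : (List<Contact>) Database.query(query))
{
    System.debug(cont);
}
System.debug('Heap consumed = ' + Limits.getHeapSize());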

With no contacts, the heap was a svelte 1968 bytes. 

With 500 contacts with the First and Last Name fields defined as 'Keir Bowden', the heap was 250,520 bytes, so approx 500 bytes per record. This sounds like a lot for a couple of fields, but there are quite a few standard fields that will be retrieved too - Id, IsDeleted, OwnerId, HasOptedOutOfEmail, HasOptedOutOfFax, DoNotCall, CreatedById, CreatedDate, LastModifiedById, LastModifiedDate, SystemModstamp, IsEmailBounced, PhotoUrl. A whole bunch of stuff that I probably don't care about. Not too much to worry about though, as at that rate I can retrieve approximately 12,000 contacts before blowing the heap. This is a good example of why adding a LIMIT clause to a query is a good idea - 12,000 contacts really isn't that many for a decent sized organisation that has been around for a few years!

Adding a Salutation of 'Mr' to all of the records caused the heap to creep up to 253,864 bytes - 508 per record. A small increase, but one that has a material impact on my max records - I'm down to 11,811 - a drop of 189 records through a single field that maybe I don't need for my processing.

I then added a unique Email Address and a Lead Source of Purchased List - moving the heap needle to 288,454 bytes - an increase of 70 bytes per record over the previous run. I'm now down to 10,380 records, so I'll have to process 1,431 fewer records in my transaction to avoid the limit. All because of fields that are present on the record, not because of the fields that I want.

I upped the ante on my next change - I decided my Sales Reps wanted a notes style field on the record where they could record their unstructured thoughts about the customer. So I added a Text Area (Long) field, and added 1,000 characters to each of my records, which isn't a lot for free text - if you are doubting that, consider that at this point in this blog post I am around 5,000 characters in. The impact on the heap was much as expected, and my heap use is now at 794,946 bytes - a hefty 1,600 bytes per record. I'm now down to 3,750 records per transaction! I think it's highly unlikely that I have any interest in the Sales Reps' random musings, but I'm down to around 31% of the records in no small part because of them.

Hopefully there's no need for me to labour the point - specifying a LIMIT clause is only part of the solution, and it's unlikely to help if the size of each record is something that you are exerting no control over.

Other Problems With Unbounded Queries

Even when they aren't breaking limits, unbounded queries bring unwanted behaviour:

  • If you are sending the records to the front end, you could end up transmitting way more data than you need to, taking longer and putting more stress on the client, which has to manage the bloated records.
  • It makes the code harder to read - if the only query is to retrieve all fields, you need to dig into how the results are used to figure out which fields are actually needed in specific scenarios.
  • You lose static bindings to field names. This is unlikely to be a problem, as you'll almost certainly be referring to the fields you want to process by name elsewhere in your Apex code. But if you are processing fields via dynamic access, then if an admin deletes one of the fields you are relying on, the first anyone will know about it is when you try to access the field and it's not there any more - see the sketch after this list.
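
A quick illustration of the difference, using a hypothetical Region__c custom text field:

Contact cont=[select Region__c from Contact limit 1];

// static binding - if an admin deletes Region__c, the code fails to compile/deploy
String region=cont.Region__c;

// dynamic access - the deleted field is only discovered at run time
Object dynamicRegion=cont.get('Region__c');
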
Even if you absolutely, definitely, have to have all of the fields that are currently defined on an sObject, before using an unbounded query ask yourself if you absolutely, definitely, have to have all the fields that could possibly be defined in the future - including those that really don't belong on the contact, but were easier to add than making some fundamental changes to the data model! Yes, you'll save yourself some typing, but you'll be storing up problems for the future. Maybe not today, maybe not tomorrow, but some day it's going to go pop!

So Never All Fields?


As always, never say never. For example, I'll sometimes use this technique in debug Lightning Web Components where I want to dump everything out about an object that is in use on a page, even the fields that aren't currently being retrieved. In pretty much every case where I do use this approach, the query will return a maximum of 1 record!



Thursday, 1 April 2021

Einstein Not Your Worst Action



Einstein Next Best Action has been a feature of Salesforce for a while now, bringing the right recommendations to the right people at the right time. But not everyone wants to provide a stellar service to their customers - perfect is the enemy of good, after all - and sometimes good enough is good enough.

To satisfy this demand, Bob Buzzard Enterprises is pleased to launch Einstein Not Your Worst Action - Artificial Mediocrity for the apathetic masses. Using some, but nothing like all, of the power of Einstein, you can bring tolerable recommendations to people who might be interested within a few hours. 

Rather than Recommendations and Strategies, we favour Suggestions and Tactics. No need to plan long term, just knee-jerk react based on incomplete information and move on, secure in the knowledge that you did an okay job. 

Einstein Not Your Worst Action is available for Salesforce Limited edition for one day only - contact your Unaccountable Executive to find out some of the details.




Saturday, 27 March 2021

Org Documentor - Field Count Badges



Introduction


With large, mature implementations, it's sometimes difficult to keep track of how close you are getting to the field limits - for example, the total number of fields on an object, which varies from instance to instance, or the total number of relationship fields, which is 40 unless Salesforce are willing to raise it for you. This is compounded when you have multiple development streams working in parallel, as the total number of relationship fields is only known when the various branches are merged ready for cutting a release branch.

As Peter Drucker said, if you can't measure it you can't improve it, so the Org Documentor is getting into the measurement business.


Field Count Badges


The latest release (3.6.0) of the Org Documentor aims to help with this, by adding Bootstrap badges containing field counts to the object documentation. Right now it's the total number of fields on the object and the total number of relationship fields, as the screen grab below from the sample metadata output shows:


If there are no relationship fields, the badge isn't rendered:


Right now the badges don't change colour to flag up that the count is close to the limit, because there's no way to know if that is the case - the Documentor is designed to work against object metadata and doesn't have access to any orgs that the metadata is installed into. Flagging the total number of fields as close to the Enterprise Edition limit isn't helpful if it will only be installed into Unlimited Edition, and vice versa is even worse. It's also possible that there are other fields on the object that aren't version controlled - added by a managed package for example.

So I decided to leave it as the standard Bootstrap secondary badge colour, which will remain the case unless I get feedback from users that they'd like something else. I'm also open to suggestions for additional badges - if you have any strong feelings on either of these topics, please create an issue at the plug-in Github repo.

As always, you can view the latest output for the sample metadata on the public site.


Updated Plug-in


Version 3.6.0 of the plug-in has this new functionality and can be found on NPM.

If you already have the plug-in installed, just run sfdx plugins:update to upgrade to 3.6.0 - run sfdx plugins once you have done that to check the version.

The source code for the plug-in can be found in the Github repository.




Thursday, 25 March 2021

The Value of Diminishing


I've been reading recently about diminished reality, where hardware devices and/or software remove items or stimuli from your environment. What drew me in was the diminished aspect, which I originally assumed was a joke about a device that would make things worse, but that isn't the case. While we hear a lot about augmented reality, where your experience is improved by additional information about the objects in view, diminished reality is all about improving your experience by hiding things. Not the things you are interested in, but the things that are a distraction, or simply not relevant.

A lot of us already have diminished reality devices and use them a lot - we call them noise cancelling headphones. They remove external sounds that would distract us, improving our listening experience, although I've never thought of them in that way.

There are plenty of practical applications for diminished reality, for example seeing what rooms look like without the furniture and clutter when viewing real estate. Handle with care though - even though diminished reality removes the coffee table from your view, you'll still bark your shin when you walk into it. I could even use it to remove my lockdown locks when I'm on video calls, as these appear to be a major distraction for anyone I'm talking to!

This got me thinking about improving the user experience by removing rather than adding when working with Salesforce data. The customer 360 view without the clutter from the individual's entire history with your company, just showing the important engagements and key information. In my experience, most of us focus on pushing more information onto a page rather than less, but that isn't always a good thing as eventually it can end up overwhelming. Dividing information up into sections helps a little, but it's still up to the user to sort wheat from chaff.

What constitutes clutter would be quite subjective though, so ideally everyone would be able to define their own version of what matters. Imagine the ability on a per-object basis to provide different lenses on a record - click a button to change the view of the record based on metadata configuration that you define to effortlessly breeze through the Sales, Marketing, Service view, or combinations. 

We can do this to some degree already with page layouts, by switching applications, but it's not real time and not all users necessarily have access to all layouts. If less is more, then many versions of less is much more!


Saturday, 20 March 2021

London's Calling 2021



London's Calling 2021 was an all remote affair - the 2020 edition started this trend, but there was a small in person presence as the physical facilities were all in place. A sound decision this year, as we are currently under restrictions that mean we can meet one person for exercise/coffee in a public open space. I guess we could all have our own mini-event where two of us get together and present a talk to each other, but quite hard to scale I feel.

It was a very different day for me - typically I'd be up around 05:30 to walk the dogs before catching the bus to the station for the trip up to London. I still did the walk, but with an extra hour or two in bed. At 07:30 rather than wandering around near The Brewery waiting for it to open, I was sat on the sofa refreshing the Attendify app waiting for it to open! The commute was a lot easier though, and I could spend most of the day barefoot which I'd be reluctant to try in the real city. Lunch was a lot easier to come by as well - hardly any queueing in my kitchen, although the menu wasn't as varied.

The sessions were pre-recorded and replayed as live with the speaker on hand in the chat to answer questions. This is definitely my preferred route for these kinds of events - while I like the "anything can happen" excitement of presenting live, home broadband connections just aren't solid enough to rely on for 50+ sessions across a whole day. Mine decided to drop just after I joined the BrightGen sponsor room and it took me a couple of minutes to reconnect via my phone - I suspect I'd have lost most of my audience during that time. You don't get a lot of questions as the schedule is pretty compressed and most people are jumping on to the next session rather than sticking around to chat, but remember that all of us speakers are contactable via the usual channels after the event. The deck from my talk is available on SlideShare.

The keynote was, as always, thought provoking. The key takeaway for me was that I'm Batman! Or all of us attendees are Batman. Or more accurately, the skill sets that the attendees have will be key to the new normal. A talk will always go down well when the audience are told they are super heroes. 

A common thread between physical and virtual events is not being able to make most of the sessions that look interesting, and this was no exception. Typically half a dozen sessions on at any one time meant that I had to make my decisions based on whether I wanted to ask any questions. The organisers have done a sterling job to make the sessions available immediately after the event, so I'll be catching up over the coming week.

The bit I really missed was the networking - there's an element of this through the Attendify activity feed and the sponsor rooms, but nothing like the in person experience of bumping into someone in a corridor that you haven't seen for ages and hearing about what they've been up to. London's Calling brought this into sharp relief as I've known a lot of the attendees for years but don't catch up with them in person that much, even when there isn't a pandemic.

The other part that is very difficult to replicate is the after party, although if we'd had one of those I might not be sitting here writing this post at 07:00 on Saturday morning!

There's definitely an upside to the all remote approach - no travel means the opportunity to speak for those who don't have the spare funds to jet around the world. Likewise no visas means individuals aren't at the mercy of government departments, and I hope that when things are back to "normal" next year(?) some aspect of this will be retained for a hybrid event - maybe a virtual room or two so the physical audience can listen to virtual speakers, and a virtual ticket so that people who can't get over to London can still get access to the content and join in via the app. I'll be watching what happens with great interest.

The event might have been a little different in its location and execution, but once again it was awesome. Kudos to the organisers, all volunteers let's not forget, for showing some of the bigger events how things should be done. One pure virtual is enough for me though, and I can't wait for next year when we'll (hopefully) be able to get together in person.


Thursday, 18 March 2021

The Documentor Documented (and Moar Detail)



This week I closed a couple of issues on the Org Documentor, and added some more flexibility around the output.

Moar Detail


The first issue was a request from Andy Ognenoff for some additional fields - relatively straightforward as three came straight from the CustomField metadata and one combined a couple of fields if the first was true. 

This gave rise to the need for a little more flexibility, as the pages started to get pretty wide with the number of columns and the length of the detail rendered. So the latest version of the plug-in allows you to configure the columns that will be displayed - at the group level, for all objects, or you can leave configuration alone and pick up the default of all columns.

Here's the configuration from my sample metadata repo:
"groups": {
    "events": {
        "name":"events", 
        "columns":["Name", "Label", "Type", "Description", "Info", "Page Layouts", "Encrypted"],
        "description": "Objects associated with events",
        "objects": "Book_Signing__c, Author_Reading__c"
    },
    "other": {
       "name":"other", 
       "columns":["Name", "Label", "Type", "Description", "Info", "Page Layouts", "Security", "Compliance", "In Use", "Encrypted"],
       "title":"Uncategorised Objects",
       "description": "Objects that do not fall into any other category"
    }
}
and the associated output, first for events, which just adds the Encrypted field (which is empty as I don't have Shield enabled!):




and then a new Email_Address__c field on the Author__c custom object, which includes compliance fields:



As always, you can view the output for the sample metadata on the public site.

The Documentor, Documented


The other issue that I finally got around to was documenting the configuration file. I used a Google Site for this, as I find them extremely easy to spin up and fill in. You don't get the same level of control that you would over something like Github pages, but if you want something out there quickly then they are hard to beat. There's not too much in there right now, mostly around setup and configuration, but more will come. If there's something that you particularly want to see, then create an issue in the Github repo and I'll see what I can do.

Updated Plug-in


Version 3.5.1 of the plug-in has this new functionality and can be found on NPM.

If you already have the plug-in installed, just run sfdx plugins:update to upgrade to 3.5.1 - run sfdx plugins once you have done that to check the version.

The source code for the plug-in can be found in the Github repository.


Tuesday, 16 March 2021

A Year Remote


Today (16th March 2021) marks the one year anniversary of BrightGen pivoting to a fully remote workforce. We knew it was possible, as we'd carried out a trial run the week before, but we didn't know how well it would scale, or indeed how long it would last. It's pretty safe to say that none of us were expecting it to go on this long! The scaling worked surprisingly well - people's internet connections held up, kitchens and bedrooms were pressed into service as home offices, and we onboarded new joiners pretty seamlessly. Not perfect, and not everyone's first choice, but it was a very smooth transition.

For some, including me, the biggest challenge is stopping working from home from turning into residing at your office. When you live and work in the same space, the boundaries become blurred, and if you have few distractions (because you are in the middle of a pandemic lockdown, maybe) then it's easy to start working as soon as you get up and not stop until well into the evening. It's also easy to end up in a state of always on, where you check your email and instant messages every waking hour and jump straight into solving problems. This really can't scale, so you have to keep reminding yourself to take breaks and to set a time to switch off. The BrightGen team have done a great job of providing distractions, including virtual coffee breaks to mix us up and get us talking, pub and Kahoot quizzes, Taskmaster and more.

In the UK the vaccine rollout is going surprisingly well, and confidence is growing that we will be able to open up the country on the government's target timeline including, at some point, our offices. This presents us with an opportunity that doesn't come along too often - to redefine how work gets done. We've blown it all up for the last year, so we should spend some time thinking about how we can put it back together for everyone's benefit. The good news (in my opinion) is that simply going back to the way we worked before isn't going to happen. The genie is out of the bottle!



Saturday, 6 March 2021

The Substack Experiment

What is Substack?

I've been hearing a lot about Substack recently, mostly on technology podcasts and particularly on those that sit at the intersection of media and technology. In some quarters, Substack is seen as a way for writers/journalists to connect directly with their readers and monetise their output, and by all accounts some people are doing well with the top 10 newsletters bringing in $7 million a year.

I'm fairly certain that those writers didn't start from scratch on Substack - most of the stand out successes are well known writers with a large existing following and a great reputation. They brought their existing audience over with them, but have then been able to define their own price for access to their content and keep around 85% of the money. They are also the first movers and have no doubt captured quite a bit of the audience that is prepared to pay, and if my experience with the Internet is anything to go by, there will be a limited set of giants and a large number scraping by.

I suspect there's also going to be a limit to the number of separate subscriptions that anyone is prepared to pay. I don't know the specific numbers, but if I guess at $10/writer/month for the top tier, then subscribing to 10 quickly turns into real money, especially when compared to subscribing to traditional news media where you get access to a whole host of writers. Those that haven't decamped to Substack at any rate. By all accounts Substack expect to offer bundles, but based on writers wishing to bundle rather than pivoting to aggregation themselves.

I'd also have some personal concerns about limiting my reading in this way - it makes it far more likely that I'd descend into an echo-chamber and assume the whole world felt the same way I do about everything. I quite like reading things that I really disagree with, even if they make me angry sometimes, but if that took time away from the content I was paying for, I'd probably cut right down on it.

My Substack

All of this monetisation is very interesting, but in my case highly likely to be theoretical. I can't see myself trying to monetise my audience, particularly as the number of writers I would be prepared to subscribe to is pretty close to zero, and if I'm not doing it I can't really expect anyone else to. And, of course, clearly nobody would pay to read my ramblings, especially as I don't really know what I'm going to use it for.

I have signed up for Substack - in keeping with my history of creative names like the Bob Buzzard Blog you are currently reading, I've gone for the Bob Buzzard Stack - you can sign up to join me on the journey.

Right now I don't have a great sense of what I'm going to use it for, but as it's a regular newsletter it seems likely there will be links to things that interest me that I've found since the last issue, probably with some of my thoughts about those things or my opinion about something in the news. And the mystery of why I don't expect to monetise this deepens!

I'll try to post regularly, but I make no promises about the cadence. Feast or famine seems the most likely outcome, but we'll see.

Why Substack?

Because it's there. Because it's popular. Because it's new and I like learning about new things. And because it's free - you only pay if you charge, and even then it's a cut of what you make rather than a flat fee.

I've tried other regular posting concepts before, like Medium Series, but I didn't really warm to the format and, maybe because of that, I didn't get a lot of interest. Series are deprecated now, so even if I wanted to continue with them I couldn't. I also added a News section to my Toolbox, but then it gets hard to provide easy access to the older links, and I didn't really want to build out a whole front end for it. 

My Substack may well go the same way, but you can't hit the ball if you don't swing, so I'm giving it a go.


Sunday, 21 February 2021

London Salesforce Developers Want Your Spring 21 Favourites

Since the UK went into its first lockdown in March 2020, the London Salesforce Developers have met virtually over Zoom. This works fine from the perspective of the talks and Q&A, but one area that is a real challenge to replicate is the casual conversations. Sometimes this is just general catch ups to talk about what we've been working on recently, which is something we can just about manage without. More problematic is that we aren't sharing the cool new features that we've just learned about, and that just isn't acceptable.

For that reason, our March 2021 event will be nothing but our members sharing their favourite feature from the Spring 21 release of Salesforce - we want to hear what you are excited about, and why!

So if you've spotted a hidden gem, sign up for our session on March 10th 2021 and tell us all about it. Don't delay - if someone else gets in before you, they'll get to talk about it!

You can sign up for the event here - once registered you'll need to fill in another form with details of what you want to talk about. You can also put in a backup choice or two, in case someone got in early and grabbed your favourite.

The event takes place on March 10th from 18:00 to 20:00 GMT - we'd love to hear from some internationals, so if you can make the timing work then please join us.


Thursday, 11 February 2021

Spring 21 - AuraEnabled Apex and Sharing


(Updated 11/02 to fix typo on inherited sharing when Apex is invoked by starting a transaction. Mea culpa)

Introduction

The Spring 21 release of Salesforce includes an update that may change the behaviour of your Apex classes that are used as controllers for Aura or Lightning Web Components. If your org was created after the Spring 18 Salesforce release, or you activated the (now retired) update

   Use without sharing for @AuraEnabled Apex Controllers with Implicit Sharing 

then by default your controllers run as without sharing, which means that they don't take into account sharing settings for the user making the request and allow access to all records. 

Once Spring 21 goes live, the

   Use with sharing for @AuraEnabled Apex Controllers with Implicit Sharing (Update, Enforced)

will be applied and this behaviour will be reversed - the default will be with sharing and access will only be allowed for records owned by, or shared with, the user making the request. 

Why the Change

In a word, security. This update makes your components secure by default - if you forget to specify with sharing or without sharing, the principle of least privilege is applied and the most restrictive option is chosen. 

The absence of a sharing keyword can also be considered a sharing keyword

I'm really not a fan of acts of omission driving behaviour, especially when that behaviour isn't guaranteed. Prior to the Spring 21 release, if you don't specify the type of sharing, there's no way to tell by inspecting the code itself what will happen. Anyone debugging an issue around sharing would have to know when the org was provisioned, or find out whether the earlier update had been applied, always assuming they could get access to production to find out!

Historically, one reason to omit the sharing was to allow the code to inherit the sharing from its calling Apex. This allowed a class to execute as though:

  • with sharing is defined, if called from a class defined as with sharing
  • without sharing is defined, if called from a class defined as without sharing
which gives a great degree of flexibility, with the trade-off that the exact same behaviour applies if you forgot the sharing declaration rather than intentionally excluding it. A comment to clarify the intent could help here, but that's something else to remember.

Inherited Sharing


Winter 19 made a great step forward for forgetful programmers with the introduction of the inherited sharing keyword. This explicitly states that the class will inherit the sharing from the calling code, so no need for anyone to try to infer what the missing sharing keywords might mean.

A slight wrinkle to this is what does inherited sharing mean when the calling code is not Apex - i.e. when it is the entry point for a transaction and thus executed by the Salesforce platform? A great example of this is an @AuraEnabled class used as a controller for an Aura or Lightning Web Component, aka where we came in to this post! 

The good news is that the Apex docs explicitly call this out - inherited sharing means with sharing when it is the entry point for a transaction - the principle of least privilege again, but clearly documented so that everyone knows what behaviour to expect.

Call to action

So do yourself and your team a favour, and when you are checking your @AuraEnabled classes to see if they will be affected by the Spring 21 update, if you find any without a sharing keyword, add one to make it clear what sharing is being applied. Your future self will thank you, and it also means that Salesforce can flip flop around what the absence of a sharing keyword should be and your code remains unaffected.
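
By way of illustration, here's a minimal sketch of a controller with the sharing made explicit - the class is hypothetical, and assumes inheriting the caller's sharing is the intent:

// explicit sharing declaration - behaviour no longer depends on org age or release updates
public inherited sharing class OpportunityController
{
    @AuraEnabled(cacheable=true)
    public static List<Opportunity> GetOpportunities()
    {
        // as the entry point for the transaction, inherited sharing runs as with sharing
        return [select Id, Name from Opportunity limit 100];
    }
}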



Saturday, 6 February 2021

Org Documentor - Flag Non-Display Fields


Introduction

Towards the end of 2020, I pushed an update to the Org Documentor plug-in to include details of the page layouts that a field is referenced in. When I posted this on LinkedIn, I got the following comment from Anand Narasimhan (a blast from the past from the early days of the CTA program):


This chimed with some of the comments I'd received when I was asked to add this feature, around helping to retire old fields that weren't used any more. It didn't seem like it would take a huge amount of work to implement, so I added an issue to the Github repository and forgot about it until today.

Solution


It certainly didn't take a huge amount of effort. As detailed when adding the page layout reference information, I build up a map of the page layouts that reference a field, keyed by the field name. As I'm building a complex JavaScript object to pass to the EJS templating framework, I add the list of page layouts to a field property named pageLayoutInfo. It was then simply a matter of setting the background colour for the field if the pageLayoutInfo property was empty. The slight complication was that if the field had already been determined to be in error (missing description) or warning (todo/deprecated in the field description), it would already have a background colour and I wanted to leave that in place.

All told, this was 4 lines of code (could be reduced to 3 with a wider screen ;):
  if ( (!field.pageLayoutInfo) && 
       (''==field.background) ) {
     field.background='#f5dfea';  
  } 

I then added a field to the sample metadata that isn't present on any page layouts - Internal Description - and regenerated the report, which highlights the field in pink as expected:


Bonus Changes


In response to an issue raised by the community, I also added the ability to configure the name of the report subdirectory for a metadata type via the reportDirectory property in the configuration file. The sample repository has been updated to write the pages pertaining to the objects metadata to the objs directory. If you don't provide the reportDirectory property, it will default to the metadata type name - e.g. objects, triggers. I've also added an issue to document the configuration file properties, as right now there is an example file and I leave everyone to draw their own conclusions.

I also fixed a bug in the aura enabled pages that detail the Apex controller classes for aura components - if the component extended a super component it all went to hell, but now it handles that correctly.

Updated Plug-in


Version 3.4.6 of the plug-in has this new functionality and can be found on NPM.

If you already have the plug-in installed, just run sfdx plugins:update to upgrade to 3.4.6 - run sfdx plugins once you have done that to check the version.

If you aren't already using it, check out the dedicated page to read more about how to install and configure it.

The source code for the plug-in can be found in the Github repository.





Sunday, 31 January 2021

How Often Should You Blog?


How often should you blog? This is a question that, while not often asked, has many answers. The reason it's not often asked is probably because you just have to wait a couple of days and it will pop up in one of the newsletters you somehow signed up for years ago and haven't got around to cancelling. The problem is, it will be answering the question from the perspective of someone else - maybe someone that is trying to grow an already large online following, sell online courses, or forge a new career as a writer. It might even be answered from the perspective of a company that is blogging as part of a marketing initiative.

Even if the answer is from someone trying to achieve the exact same thing that you are, they won't be you, shaped by your own experiences and subject to your own personal ups and downs. For this reason I would advise never following someone else's advice slavishly - instead, pick some approaches based on what appeals to you, and try a selection of them out. You should also apply this to my advice - it's typically based on what has (mostly) worked for me, and I'm not you.

Every Day

I've seen a number of write ups from people who blog every day. Some reported less traffic, not more - readers deserted them as they weren't able to maintain the quality, or just got overloaded even though the blogger felt that their process had improved. Others found that their engagement went up, sometimes by as much as 1,000%, although the real success stories tend to be a few years ago, possibly because the platforms they were writing on were less crowded so a concerted effort could make a real difference.

Blogging every day definitely isn't for me, as I'm absolutely certain the quality would go downhill. It's easy to underestimate the effort required to write posts at a decent cadence, and I can't tell you the number of times I thought I had a series of 10+ posts on a topic, only to start running out of ideas by part 4 or 5. The only way I could make this happen is by flipping my blog to more of a journal, where I just write about my day, but I'd get bored with that before too long, never mind any readers. 

I am tempted though, as I'm curious whether there's a point where it becomes easy to dash out a post during a coffee break, and if in the future I have fewer demands on my time I might try it. I probably won't try it out on this blog though - I'd likely create a new space on something like Medium, so as not to alienate my current audience. They are likely to be interested in Salesforce, software development and devops (maybe remote working right now), because that's what I've written about for the last decade, and probably wouldn't enjoy the change to a diary approach. That doesn't mean it won't work for you, so it might be worth a try if you think you can take the pace. Most of the accounts of daily blogging I read are about people experimenting with it for 1-3 months, but there are a few that stick with it for years. I doubt they are doing this on top of a day job and a demanding family life, but who knows.

Several Times a Week

Marketing Insider Group recommend 2-4 posts per week to get the top results in terms of traffic and conversions, but this is for companies that are trying to convert blog readers into customers. While I could aim for that, I'm not sure what I'd do with a customer, given the only thing I really have to sell is the Visualforce Development Cookbook.  2-4 posts per week is also a lot given that I'm writing them all myself, and I already have a full time job that consumes quite a bit of my time. I could probably do a couple of posts a week for a short period of time, but I feel like I'd run out of ideas after a few months. I'd also start to resent that I'd put myself in this position and view it as a chore, which never leads to good outcomes in creative endeavours.

Once a Week

This is what I aim for, and I must say it's quite easy to achieve during a pandemic. Looking back at my posts over the last 12 months, I was hitting more than one a week during the periods that we were locked down, or restricted in terms of travel. When we opened up, however, I dropped down to one or two a month as I had more pressing demands on my time.  

When I Have Something to Say

This is my sweet spot. Rather than putting pressure on myself to write at a particular cadence, and beating myself up when I inevitably fall behind, I write longer and more considered posts when I have something to say. If I have something to say, it will typically be about a topic that I am interested in, care about, or know a fair bit about. When it's all three, I can't get the words out fast enough, but even if it's just one of them the motivation is there. For a long time I didn't realise this though, and I experimented with a number of other approaches to see if I enjoyed them more, or if they had a positive effect. I reconsider my approach whenever I read a post about blogging every day, every week, or points in between, because what works for me now may not be ideal in a year or two.

There does seem to be some level of agreement that blogging with a cadence can be helpful, but I'm not sure how well that applies to technical blogs like this one. I can't imagine that if I switched to posting every Friday that there would be a bunch of people out there waiting for the white smoke to indicate a new post had been published, which is why I tend to publish as soon as I'm happy with what I've written. Again, I've tried a number of different approaches to this, trying to find the days and times that get maximum engagement, and I've never really found it makes much difference. Your mileage may vary.