
Wednesday, 29 December 2021

2021 Year in Review - Part 1

London's Calling Caricatures - 1 year and zero haircuts apart

2021 began in the UK much as we'd spent quite a bit of 2020: in lockdown (Lockdown 3, just when you thought it was safe to go inside someone else's house again) and doing everything over videoconference, which was starting to get a bit samey.

January

Those of us who had submitted London's Calling presentations heard back, and I was relieved to see that I'd made the cut.

The London Salesforce Developers held two events in January - the first for the graduation of another round of Speaker Academy candidates, and the second for Apex Trigger Essentials courtesy of my co-organiser Erika McEvilly. The combination of Erika and starting from the ground up with triggers clearly resonated, as we had a record 163 signups for this event.

I made a bold, and incorrect, prediction that there wouldn't be an in-person Dreamforce in 2021. I got it right that Salesforce wouldn't bring 100k+ people to San Francisco from all over the world, but got it wrong in thinking they wouldn't be happy with a 5k event.

February

The Spring 21 release of Salesforce went live, including an update that corrected the CPU tracking for flows - this is enforced in the Summer 22 release, so if you haven't tested with the new behaviour the clock is ticking. As is the tradition here at BrightGen, we ran our release webinar and gave everyone a chance to spend more time in front of a screen, but at least not on camera. Yay!

London's Calling ticked ever closer, and session recording got underway. In keeping with the scale of the thing I was accompanied by a member of the content team and an AV specialist to keep things on track. I have to say I still prefer the adrenaline rush of doing everything live - there's nothing like wondering if the wifi will hold up to get the heart racing. That said, from the point of view of the organisers I can see that trying to co-ordinate a ton of speakers to present live from around the world would be a total nightmare.

The London Salesforce Developers were treated to a session on respecting data protection laws in Salesforce - a dry topic, but also an important one that isn't going away.

I also launched a Substack in February, which is proving invaluable for reminding me what was going on back then!

March

The wait was finally over and London's Calling was here - another chance to spend a whole day in front of a screen! As always it felt like the hardest thing to recreate online was the expo, but at least we at BrightGen had the caricatures, which do translate pretty well, as you can see at the start of this post. My session was on Org Dependent Packages, which I still think are pretty awesome, especially for large enterprises with mature orgs. You can find recordings of all of the sessions on the YouTube channel, and there is some great stuff there, so it's well worth a few hours digging around.

When it's London's Calling month, we at the London Salesforce Developers try to keep our event lightweight as we feel like there's plenty of learning around already. To this end we decided to crowd-source our members' favourite Spring 21 features, which didn't get a huge take-up. To punish them, I did most of the presenting on my favourite features, which should motivate people to either get more involved or not show up in the future!

In an unexpected turn of events, the data recovery service came back from the dead, having been retired in July 2020. A fine example of listening to your customers. There were also rumours that Marc Benioff was considering stepping down and handing over to Bret Taylor. Not entirely wrong as it turned out, but not exactly correct either.

March also marked a whole year fully remote - little did I realise that there was plenty more of this to come.


Sunday, 5 December 2021

JavaScript for Apex Programmers Part 2 - Methods and Functions


(for the full Gertrude Stein quote on art having no function and not being necessary, see : https://www.scottedelman.com/2012/04/26/art-has-no-function-it-is-not-necessary/)

Introduction

In the first part of this occasional series on JavaScript for Apex Programmers, we looked at my chequered history with JavaScript, then the difference between Apex and JavaScript types. In this instalment we'll look at methods and functions.

Definitions

A method is associated with an object. It's either part of an object instance (a non-static, or instance, method) or part of the class that objects are created from (a static method). A function is an independent collection of code that can be called by name from anywhere. The terms are often used interchangeably (by me especially!) but the difference is important when considering JavaScript and Apex.

Apex

Apex only has methods. All Apex code called by its name is either a method being invoked on an object instance (that you or the platform created) or a static method in a class.

...

Almost all Apex code.

...

Apart from triggers. 

...

Here's an account trigger with a named collection of code that I can call later in the trigger:

trigger AccountMethod on Account (before insert) 
{
    void logMessage(String message) 
    {
        System.debug('Message : ' + message);
    }
    
    logMessage('In account trigger');
}

And if I use execute anonymous to insert an account, I see the message appear as expected:



While this might look like a function, it's actually a static method on the trigger, as I found out by changing the call to this.logMessage('In account trigger');



It's not a useful static method though, as it can't be called from outside of this trigger. I suppose it could be used to organise code in the trigger to be more readable, but code shouldn't live in triggers so you'd do far better to put it in a utility or service class.
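By way of illustration, a minimal sketch of that approach - LoggingUtil is a hypothetical name for the purposes of this post, not anything from the platform:

public class LoggingUtil 
{
    // available to any Apex code, rather than locked away inside one trigger
    public static void logMessage(String message) 
    {
        System.debug('Message : ' + message);
    }
}

and the trigger shrinks to a single call to LoggingUtil.logMessage('In account trigger');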

That interesting digression over with, as far as we are concerned Apex has methods in objects or classes, and I'll proceed on that basis.

If you want to pass some Apex code to be executed by another method, you have to pass the object to the method. An example of this that we come across pretty often is scheduled Apex:

MySchedulable mySch = new MySchedulable();
String cronStr = '21 00 9 9 1 ?';
String jobID = System.schedule('My Job', cronStr, mySch);
The platform will call the execute() method on the mySch instance of the MySchedulable class that I pass to it.
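For completeness, a minimal sketch of what MySchedulable might look like - any class that implements the Schedulable interface can be passed to System.schedule():

public class MySchedulable implements Schedulable 
{
    // invoked by the platform when the scheduled time arrives
    public void execute(SchedulableContext sc) 
    {
        System.debug('Scheduled job running');
    }
}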

 

JavaScript

JavaScript, as I find is so often the case, is a lot more powerful and a lot more confusing. JavaScript has both methods and functions, but under the hood methods are functions stored as a property of an object.

Functions are themselves object instances - every function you create is actually an instance of the Function object. Which means they can have properties and methods like other objects. And those methods are actually functions stored as properties. And so on.

The good news is that you don't need to care about most of this when using JavaScript in Salesforce. In my experience, what you really need to know is:

Functions are First Class Citizens

In JavaScript, functions can be assigned to variables:

let log=function(message) { 
    console.log(message);
}
   
log('Hello');

Hello

passed as parameters to other functions:

let consoleLogger=function(message) {
    console.log(message);
}

let log=function(logger, message) {
    logger(message);
}

log(consoleLogger, 'Message to console');

Message to console

and returned as the result of a function:

function getConsoleLogger() {
    return function(message) {
        console.log(message);
    }
}

let consoleLogger=getConsoleLogger();

consoleLogger('Message for console');

Message for console

Functions can be Anonymous

When you create callback functions in JavaScript for very simple, often one-off use, they quickly start to proliferate and become difficult to distinguish from each other. Anonymous functions are defined where they are needed/used and don't become a reusable part of the application. Using a very simplistic example, for some reason I want to process an array of numbers and multiply each entry by itself. I'm going to use the map() method from the Array object, which creates a new array by executing a function I supply on every element in the source array. If I do this with named functions:

function multiply(value) {
    return value*value;
}
let numbers=[1, 2, 3, 4];
let squared=numbers.map(multiply);
console.log(squared);
[1, 4, 9, 16]

If I don't need the multiply function anywhere else, it's being exposed for no good reason, so I can replace it with an anonymous function that I define when invoking the map method:

let numbers=[2, 4, 6, 8];
let squared=numbers.map(function(value){return value * value});
console.log(squared);
[4, 16, 36, 64]

My anonymous function has no name and cannot be used anywhere else. It's also really hard to debug if you have a bunch of anonymous functions in your stack, so exercise a little caution when using them.

Arrow Functions improve on Anonymous

Especially for simple functions. Arrow functions (sometimes called fat arrow functions) give you a more succinct way to create anonymous functions.

numbers.map(function(value){return value * value});

I can lose a lot of the boilerplate text and just write:

numbers.map(value=>value*value);

Breaking this down:

  • I don't need the function keyword - I replace it with =>
  • I don't need parentheses around my parameter, I just put it to the left of =>
    Note that if I have no parameters, or more than one, I do need parentheses
  • I don't need the braces, as long as the code fits onto a single line
  • I don't need the return statement, again as long as the code fits onto a single line. The result of my expression on the right hand side of => is implicitly returned
Thus arrow functions can look pretty different to regular functions:
let multiply=function(value) {
    return value * value;
}

let arrowMultiply=value=>value*value;

or quite similar

let addAndLog=function(first, second) {
    let result=first + second;
    console.log('Result = ' + result);
    return result;
}

let arrowAddAndLog=(first, second)=>{
    let result=first + second;
    console.log('Result = ' + result);
    return result;
}

Arrow functions have a few gotchas too - the major one is that they don't get their own 'this'. Instead, 'this' is captured from the enclosing scope where the arrow function is defined, and you can't change it with call, apply or bind - at the top level of a script in a browser, that means the Window object.
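A quick sketch of the difference, assuming it's run in a browser console where the enclosing 'this' is the Window:

let counter={
    count: 0,
    // arrow function - 'this' is inherited from the enclosing scope
    // (the Window), not the counter object
    logCountArrow: () => console.log(this.count),
    // regular function - 'this' is the object the method is invoked on
    logCountRegular: function() { console.log(this.count); }
};

counter.logCountArrow();   // undefined - the Window has no count property
counter.logCountRegular(); // 0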

Functions have a Context

There's quite a bit to this (pun intended!) and as I mentioned in the first instalment, this isn't intended to be a JavaScript tutorial, so I can't see any point in replicating the Mozilla Developer Network content. Instead I'll just point you at it. The key thing to remember is 'this' depends on how the function is called, not where it is declared, so if you pass an object method as a callback function, when it is invoked 'this' won't refer to the original object, but whatever object is now invoking it. I'd recommend spending some time getting to grips with the context, otherwise you'll likely spend a lot more time trying to figure out why things don't work.
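As a minimal sketch of why this matters for callbacks, again assuming a browser console:

let account={
    name: 'TX Test',
    logName: function() { console.log(this.name); }
};

account.logName();          // 'TX Test' - invoked via the object

let callback=account.logName;
callback();                 // not 'TX Test' - 'this' is now supplied by the
                            // caller (the Window in a browser console), so
                            // you'd use callback.bind(account) to pin it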

Related Posts

JavaScript for Apex Programmers Part 1 - Typing

Sunday, 28 November 2021

Salesforce++ Holiday Highlights

With the holiday season fast approaching, it's time to take a look at the feast of programming coming in the next few weeks, starting with one from my side of the pond.

The Great British Break Point


Amateur developers compete for the crown of Britain's Top Debugger. This week focuses on the user experience, where the breakers are tasked with crafting the perfect break point to identify why a user cannot successfully create an opportunity and its related products in a single transaction. Judges Paul Cricklewood and Prue L33t are on hand to deliver the verdict. 

On your marks ... get set ... break!

Bob and Mate: Plus 8


Introduction to simple formulas featuring me, Bob Buzzard, and an acquaintance from the Salesforce ecosystem. December's episode shows how to add 8 to various numeric fields, either directly or by calculating the value 8 using advanced mathematical operations like addition and multiplication. 

Licensed at First Sight


Five prospect companies who have never seen Salesforce before are matched up to license packs by a team of experts. Cameras follow the users as they get their first sight of the system on go-live day. Look out for the follow-up program in 8 weeks' time, when the prospects decide if they want to stay licensed or break up their contract.

Unlike other matchmaking shows, there is no cash prize for prospects who stay with their licenses - quite the reverse as they are then liable for the full cost of the license pack, even the ones they don't want!

Batched


Follow two Apex specialists as they remedy extreme asynchronous processing gone wrong. Whether it's a maximum scope of 1 record, or exceeding the 50 million records per day processing limit, there's always hope.

Film of the Month - Hidden Triggers (2019)



Documentary featuring the unsung trigger heroes that keep enterprises moving. Whether it's overcoming limitations with roll-up summaries, or simply copying an updated field from one sObject type to another, if these triggers failed then western civilisation would quickly grind to a halt. Filmed over five years with unparalleled access to version control, see for the first time how updates to these triggers are deployed and tested.

Contains scenes of mild jeopardy and swearing at failed deployments.

See Also


If you enjoyed this post, you might like Salesforce++ Top Picks. And you might also like to question some of your life choices.

Sunday, 24 October 2021

London Salesforce Developers - Back in Person



20th October 2021 was a momentous day for the London Salesforce Developers Trailblazer group, as we met in person for the first time since 12th February 2020 - 88 weeks later!

We've still been running our events - like most of the developers around the world we had switched to Zoom, but fatigue was hitting, and the excitement of having people join from around the world was fading fast. After the September Dreamforce viewing party event, we organisers felt that enough was enough and it was time to go out into the world and once again mingle with the three-dimensional people.

Cloud Orca were our generous hosts, in the Loading Bay event area of their Techspace offices in Shoreditch: 

Your organisers and Cloud Orca CEO Ed Rowland (right)

It was wonderful to see everyone, and very strange to be in such proximity again. Dare I say it felt like we were returning to normal, although it will take a few more of these before it starts to feel normal again. Luckily we humans are nothing if not adaptable, so I'm sure by early next year we'll have forgotten what a virtual event feels like.

One thing we were fairly sure of was that people wanted to talk to each other rather than listen to us, so we kept the presentation side of the evening short and sweet. Amnon kicked us off with minimal slides to welcome everyone back, and call out some key community news, including:

then a few minutes from yours truly on the key developer features from the Winter 22 release

As part of my talk I gave a demo of dynamic interactions - while this wasn't recorded, you can find the code in my Winter 22 Github repository. I may write up another blog post about this, although there wasn't an awful lot more ground covered than my last post on the topic, aside from a slightly more plausible use case! It did feel good to be demoing to a live audience again - demoing virtually is fine, but you do feel somewhat removed from the audience.

Not quite Rick Wakeman - photo posted by Louise Lockie on Twitter

The RSVPs were a little lower than before the pandemic, which isn't surprising as not everyone is keen to start mixing again. Dropouts were way down though, so thus far it looks like those who sign up to come along aren't doing so lightly. 

If you are interested in coming along to our next meetup, make sure to join our Trailblazer Community Group, and if you'd like to see what we've been up to in the past and receive future recordings, follow our YouTube channel.


 

Saturday, 2 October 2021

Transaction Boundaries in Salesforce (Apex and Flow)

Introduction

Winter 22 introduces the capability to roll back pending record changes when a flow element fails at run time. I had two reactions to this - first, that's good; second, how come it didn't always do that? It took me back to the (very few) days when I did Visual Basic work and found out that boolean expressions didn't short circuit - it's something that has been such an integral part of other technologies that I've worked with, it never occurred to me that this would be different.

The original idea for this post was to call out that the roll back only applies to the current transaction, and tie in how Apex works as a comparison. But then it occurred to me that in the Salesforce ecosystem there are many people teaching themselves Apex who could probably use a bit more information about how transactions work. So, in the words Henry James wrote in his obituary for George du Maurier (referring to the novel Trilby):

"the whole phenomenon grew and grew till it became, at any rate for this particular victim, a fountain of gloom and a portent of woe"

Transactions

A transaction is a collection of actions that are applied as a single unit. Transactions ensure the integrity of the database in the face of exceptions, errors, crashes and power failures. The characteristics of a transaction are known by the acronym ACID:

  • Atomic - the work succeeds entirely or fails entirely. Regardless of what happens in terms of failure, the database cannot be partially updated.
  • Consistent - when a transaction has completed, the database is in a valid state in terms of all rules and constraints. This doesn't guarantee the data is correct, just that it is legal from the database perspective.
  • Isolated - the changes made during a transaction are not visible to other users/requests until the transaction completes. 
  • Durable - once a transaction completes, the changes made are permanent. 

Transactions in Apex


Apex is different to a number of languages that I've used in the past, as it is tightly coupled with the database. For this reason you don't have to worry about transaction management unless you want to take control. When a user makes a request that invokes your code, for example a Lightning Web Component calling an @AuraEnabled method, your code is already in a transaction context. 

When a request completes without error, the transaction is automatically committed and all changes made during the request are applied permanently to the database.  This also causes work such as sending emails and publishing certain types of platform events to take place. Sending an email about work that didn't actually happen doesn't make a lot of sense, although this caused us plenty of angst in the past as we tried to find ways to log to an external system that a transaction had failed (Publish Immediately platform events finally gave us a mechanism to achieve this).
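As a sketch of that mechanism - Log_Event__e and Message__c are hypothetical names for a platform event configured with the Publish Immediately behaviour:

// the event is published even if the enclosing transaction later rolls
// back, so the failure can still be recorded externally
Database.SaveResult publishResult=EventBus.publish(
    new Log_Event__e(Message__c='Request failed'));

if (!publishResult.isSuccess()) 
{
    System.debug('Unable to publish log event');
}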

When a request encounters an error, the transaction is automatically rolled back and all changes made during the request are discarded. Often the user receives an ugly stack trace, which is where you might decide that you need a bit more control over the transaction.

Catching Exceptions


By catching an exception, you can interrogate the exception object to find out what actually happened and formulate a nice error message to send to the user. However, unless you surface this message to the user, you have changed the behaviour of the transaction, maybe without realising it. For example, if you catch an exception and return a message encapsulating what happened:
Account acc=new Account(Name='TX Test');
Contact cont=new Contact(FirstName='Keir', LastName='Bowden');
String result='SUCCESS';
try {
   insert acc;
   cont.AccountId=acc.Id;
   insert cont;
}
catch (Exception e) {
    result='Error ' + e.getMessage();
}

return result;
In my dev org, I have a validation rule requiring that one of email or phone must be defined, so the insert of the contact fails and I see a fairly nice error message:


Unfortunately, this isn't the only issue with my code - as I caught the exception, the request didn't fail so the transaction wasn't automatically rolled back. While from the user's perspective the request failed, the first part of it succeeded and an account named TX Test was created:


and every time the user retries the request, I'll get another TX Test account and they will get another error message.  

Taking Back Control


Luckily Apex has my back on this, and if I want to (and I know what I'm doing) I can take control of the transaction and apply some nuance to it, rather than everything succeeding or failing.

Savepoints allow you to create what can be thought of as sub or nested transactions. A savepoint identifies a point within a transaction that can be rolled back to, undoing all work after that point but retaining all work prior to that point.  Changing my snippet from above to use a savepoint and rollback, I can ensure that all of my changes succeed or fail:
Savepoint preAccountSavePoint=Database.setSavepoint();
Account acc=new Account(Name='TX Test');
Contact cont=new Contact(FirstName='Keir', LastName='Bowden');
String result='SUCCESS';
try {
   insert acc;
   cont.AccountId=acc.Id;
   insert cont;
}
catch (Exception e) {
    result='Error ' + e.getMessage();
    Database.rollback(preAccountSavePoint);
}

return result;
The user's experience remains the same - they receive a friendly error message that the insert failed.  This time, though, there is no need to rid my database of the troublesome account. Prior to carrying out any DML I have created a Savepoint, and in my exception handler I rollback to that Savepoint, undoing all of the work in between - the insert of the account that was successful.  Note that rolling back the transaction has no effect on my local variables - prior to the return statement I have an account in memory that has an Id assigned from the database, but that doesn't exist any more. Of course this also leaves any other work that happened in the transaction outside of my code in place, which may or may not be what I want, so taking control of transactions shouldn't be done lightly.

Savepoints are also useful if there are a number of courses of action that are equally valid for your code to take. You set a Savepoint and try option 1, if that doesn't work you rollback and try option 2 and so on. This is pretty rare in my experience, as usually there is a specific requirement around a user action, but it does happen, like when option 2 is writing the details of how and why option 1 failed.
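A sketch of that pattern, with option1 and option2 standing in for whatever the real courses of action are:

Savepoint sp=Database.setSavepoint();
try 
{
    // hypothetical preferred course of action
    option1();
}
catch (Exception e) 
{
    // undo any partial work from option 1, then fall back to option 2 -
    // in this case recording why option 1 failed
    Database.rollback(sp);
    option2(e.getMessage());
}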
 
You can also set multiple Savepoints, each rolling back less and less work, and choose how far to roll back in the event of an error. Your co-workers probably won't thank you for this, and are more likely to see it as combining Circles of Hell into your very own Inferno.  If you do choose to go this route, note that when you roll back to a savepoint, any that you created since then are no longer valid, so you can't switch between different versions of the database at will.

Savepoints only work in the current transaction, so if you execute a batch job that requires 10 batches to process all of the records, rolling back batch 10 has no effect on batches 1-9, as those took place in their own transactions which completed successfully. 

Rolling Back Flow Transactions

After that lengthy diversion into Apex, hopefully you'll understand why I expect everything to roll back the transaction automatically when there is an error - it's been a long time since it has been my problem.  Per the Winter 22 release notes, flow doesn't do this:

Previously, when a transaction ended, its pending record changes were saved to the database even if a flow element failed in the transaction

and bear in mind that is still the case - what has been added is a new Roll Back Records element you can use in a fault path. Why wasn't this changed to automatically roll back on error? For the same reason that Visual Basic didn't start short circuiting boolean expressions - there's a ton of existing solutions out there and some of them will rely on flow working this way. While it's not ideal, nor is introducing a breaking change to existing customer automation. 

Something else to bear in mind is that this rolls back the current transaction, not necessarily all of the work that has taken place in the flow. Per the Flows in Transactions Salesforce Help article, a flow transaction ends when a Screen, Local Action or Pause element is executed. Continuing with my Account and Contact approach, if you've inserted the Account and then use a Screen element to ask the user for the Contact name, the Account is committed to the database and will persist regardless of what happens to your attempt to create a Contact.  Much like the batch Apex note above, rolling back the second (Contact) transaction has no effect on the first (Account) transaction as that has already completed successfully.

Enjoying Transactions?

Try distributed transactions:

And then scale them up across many microservices.

Many years ago I used to have to manage transactions myself, and in one particular case I had to work on distributing transactions across a number of disparate systems. It was interesting and challenging work, but I don't miss it!


Tuesday, 14 September 2021

Dynamic Interactions in Winter 22

Introduction

If all goes according to plan, Dynamic Interactions go GA in Winter 22. The release notes have this to say about them:

With Dynamic Interactions, an event occurring in one component on a Lightning page, such as the user clicking an item in a list view, can update other components on the page.

Which I think is very much underselling it, as I'll attempt to explain in the rest of this post.

Sample App

You can find the code for the sample app at the Github repository. There's not a huge amount to it: you choose an Account, and another component gets the Id and Name of the Account and retrieves the Opportunities associated with it:

Component Decoupling

What Dynamic Interactions actually allow us to do is assemble disparate components into custom user interfaces while retaining full control over the layout. The components can come from different developers, and we can add or remove components that interact with each other without having to update any source code, or with the components having the faintest idea about what else is on the page. 

This is something I've wanted for years, but was never able to find a solution that didn't require my custom components to know something about what was happening on the page.

The original way of providing a user with components that interact with each other was to create a container component and embed the others inside it. The container knows exactly which components are embedded, and often owns the data that the other components work on. There's no decoupling, and no opportunity to change the layout as you get the container plus all its children or nothing.

Lightning Message Service was the previous game changer, and that allowed components to be fairly loosely coupled. They would publish events when something happened to them, and receive events when something happened elsewhere that they needed to act on. They were still coupled through the messages that were sent and received though - any component that wished to participate had to know about the message channels that were in use and make sure they subscribed to and published on those. Good luck taking components developed by a third party and dropping those in to enhance your page. It did allow the layout to be easily changed and components - as long as they knew about the channels and messages - to be added and removed without changing any code. I was planning a blog post on this, but masterful inactivity has once again saved me the trouble of writing it and then having to produce another post recommending against that approach.

With Dynamic Interactions, all that needs to happen is that components publish events when things of interest happen to them, and expose public properties that can be updated when things they should be interested in happen - with that, the dream of decoupled components is realised.  The components don't have to listen for each other's events; that is handled by the Lightning App Builder page. As the designer of the page, I decide what should happen when a specific component fires a particular event. Essentially I use the page builder to wire the components to each other, through configuration.

Back to the Sample App

My app consists of two components (no container needed):

  • chooseAccount - this retrieves all the accounts in the system and presents the user with a lightning-combobox so they can pick one. In the screenshot above, it's on the left hand side. When the user chooses an account, an accountselected CustomEvent is fired with the details - all standard LWC:
        this.dispatchEvent(
                new CustomEvent(
                    'accountselected', 
                    {detail: {
                        recordId: this.selectedAccountId,
                        recordName: this.selectedAccountName
                    }
                })
        );
  • accountInfo - this retrieves the opportunity information for the recordId that is exposed as a public property, again all standard and, thanks to reactive properties, I don't have to manually take action when the id changes:
    @api get recordId() {
        return this._recordId;    
    }

    set recordId(value) {
        if (value) {
            this._recordId=value;
            this.hasAccount=true;
        }
    }
    
		....
        
    @wire(GetOpportunitiesForAccount, {accountId: '$_recordId'})
    gotOpportunities(result){
        if (result.data) {
            this.opportunities=result.data;
            this.noOpportunitiesFound=(0==this.opportunities.length);
        }
    }

and the final step is to use the Lightning App Builder to define what happens when the accountSelected event fires. I edit the page and click on the chooseAccount component, and there's a new tab next to the (lack of) properties that allows me to define interactions for the component - the Account Selected event:


and I can then fill in the details of the interaction:


In this case I'm targeting the accountInfo component and setting its public properties recordId and recordName to their namesakes from the published event. If I had additional components which cared about an account being selected, I'd create additional interactions to change their state to reflect the selection.

I now have two components communicating with each other, without either of them knowing anything about the other one, using entirely standard functionality. I can wire up additional components, move them around, or delete components at will.  

Conclusion


What I regularly find myself producing is highly custom user interfaces that allow multiple records to be managed on a single page. For this use case, Dynamic Interactions are nothing short of a game changer, and I'm certain that this will be my go-to solution.


Friday, 10 September 2021

JavaScript for Apex Programmers Part 1 - Typing

Background

When I started working with Salesforce way back in 2008, I had a natural affinity for the Apex programming language, as I'd spent the previous decade working with Object Oriented languages - first C++, then 8 years or so with Java. Visualforce was also a very easy transition, as I had spent a lot of time building custom front ends using Java technologies - servlets first before moving on to JavaServer Pages (now Jakarta Server Pages), which had a huge amount of overlap with the Visualforce custom tag approach. 

One area where I didn't have a huge amount of experience was JavaScript. Oddly, I had a few years' experience with server side JavaScript due to maintaining and extending the OpenMarket TRANSACT product, but that was mostly small tweaks added to existing functionality, and nothing that required me to learn much about the language itself, such as it was back then.

I occasionally used JavaScript in Visualforce to do things like refreshing a record detail from an embedded Visualforce page, Onload Handling or Dojo Charts. All of these had something in common though, they were snippets of JavaScript that were rendered by Visualforce markup, including the data that they operated on. There was no connection with the server, or any kind of business logic worthy of the name - everything was figured out server side. 

Then came JavaScript Remoting, which I used relatively infrequently for pure Visualforce, as I didn't particularly like striping the business logic across the controller and the front end, until the Salesforce1 mobile app came along. Using Visualforce, with its server round trips and re-rendering of large chunks of the page, suddenly felt clunky compared to doing as much as possible on the device, and I was seized with the zeal of the newly converted. I'm pretty sure my JavaScript still looked like Apex code that had been through some automatic translation process, as I was still getting to grips with the JavaScript language, much of which was simply baffling to my server side conditioned eyes.

It wasn't long before I was looking at jQuery Mobile to produce Single Page Applications where maintaining state is entirely the job of the front end, which quickly led me to Knockout.js as I could use bindings again, rather than having to manually update elements when data changed. This period culminated in my Dreamforce 2013 session on Mobilizing your Visualforce Application with jQuery Mobile and Knockout.js.

Then in 2015, Lightning Components (now Aura Components) came along, where suddenly JavaScript got real. Rather than rendering via Visualforce or including from a static resource, my pages were assembled from re-usable JavaScript components. While Aura didn't exactly encourage its developers down the modern JavaScript route, its successor - Lightning Web Components - certainly did.

All this is rather a lengthy introduction to the purpose of this series of blogs, which are intended to (try to) explain some of the differences and challenges when moving to JavaScript from an Apex background. This isn't a JavaScript tutorial, it's more about what I wish I'd known when I started. It's also based on my experience, which as you can see from above, was a somewhat meandering path. Anyone starting their journey should find it a lot more straightforward now, but there's still plenty there to baffle!

Strong versus Weak (Loose) Typing

The first challenge I encountered with JavaScript was the difference in typing. 

Apex

Apex is a strongly typed language, where every variable is declared with the type of data that it can store, and that cannot change during the life of the variable. 

    Date dealDate;

In the line above, dealDate is declared of type Date, and can only store dates. Attempts to assign it DateTime or Boolean values explicitly will cause compiler errors:

    dealDate=true;         // Illegal assignment from Boolean to Date
    dealDate=System.now(); // Illegal assignment from DateTime to Date

while attempts to assign something that might be a Date, but turns out not to be at runtime will throw an exception:

    Object candidate=System.now();
    dealDate=(Date) candidate; // System.TypeException: Invalid conversion from runtime type Datetime to Date

JavaScript

JavaScript is a weakly typed language, where values have types but variables don't.  You simply declare a variable using var or let, then assign whatever you want to it, changing the type as you need to:

    let dealDate;
    dealDate=2;               // dealDate is now a number
    dealDate='Yesterday';     // dealDate is now a string
    dealDate=new Date();      // dealDate is now a Date object

The JavaScript interpreter assumes that you are happy with the value you have assigned the variable and will use it appropriately. If you use it inappropriately, this will sometimes be picked up at runtime and a TypeError thrown. For example, attempting to run the toUpperCase() string method on a number primitive:

    let val=1;
    val.toUpperCase();
    Uncaught TypeError: val.toUpperCase is not a function

However, as long as the way you are attempting to use the variable is legal, inappropriate usage often just gives you an unexpected result. Take the following, based on a simplified example of something I've done a number of times - I have an array and I want to find the position of the value 3.

    let numArray=[1,2,3,4];
    numArray.indexOf[3];

which returns undefined, rather than the expected position of 2.

Did you spot the error? I used the square bracket notation instead of round brackets to demarcate the parameter. So instead of executing the indexOf function, JavaScript was quite happy to treat the function as an array and return me the element at index 3, which doesn't exist.
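For completeness, the corrected version:

    let numArray=[1,2,3,4];
    numArray.indexOf(3);

which returns 2, as expected.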

JavaScript also does a lot more automatic conversion of types, making assumptions that might not be obvious.  To use the + operator as an example, this can mean concatenation for strings or addition for numbers, so there are a few decisions to be made:

lhs + rhs

1. If either of lhs/rhs is an object, it is converted to a primitive string, number or boolean

2. If either of lhs/rhs is a string primitive, the other is converted to a string (if necessary) and they are concatenated

3. lhs and rhs are converted to numbers (if necessary) and they are added

Which sounds perfectly reasonable in theory, but can surprise you in practice:

    1 + true 

result: 2, true is converted to the number 1

    5 + '4'  

result '54', rhs is a string so 5 is converted to a string and concatenated.

    false + 2

result: 2, false is converted to the number 0

    5 + 3 + '5'

result '85' - 5 + 3 adds the two numbers to give 8, which is then converted to a string to concatenate with '5'

    [1992, 2015, 2021] + 9

result '1992,2015,20219' - lhs is an object (array) which is converted to a primitive using the toString method, giving the string '1992,2015,2021', 9 is converted to the string '9' and the two strings are concatenated

Which is Better?

This isn't my first rodeo, so I know better than to take a side. We can't even agree on what strong and weak typing really mean, so deciding whether one is preferable is an impossible task. In this case it doesn't matter, as Apex and JavaScript aren't going to change!

Strongly typed languages are generally considered safer, especially for beginners, as more errors are trapped at compile time. There may also be some performance benefits as you have made guarantees to the compiler that it can use when applying optimisation, but this is getting harder to quantify and in reality is unlikely to be a major performance factor in any code that you write.

Weakly typed languages are typically more concise,  and the ability to pass any type as a parameter to a function can be really useful when building things like loggers.

Personally I take the view that code is written for computers but read by humans, so anything that clarifies intent is good. If I don't have strong typing, I'll choose a naming convention that makes the type of my variables clear, and I'll avoid re-using variables to hold different types even if the language allows me to.
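For example, something along these lines - my own convention rather than any JavaScript standard:

    // the suffix makes the expected type clear to the reader
    let dealDateStr='2021-12-29';

    // a new variable for the converted value, rather than re-using dealDateStr
    let dealDate=new Date(dealDateStr);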




Saturday, 28 August 2021

The certificate associated with the consumer key has expired

This week a couple of my continuous integration builds started failing. This in itself isn't unusual - these are typically end-to-end builds that create scratch orgs, set up standing data, and run a bunch of tests, so it doesn't take much to tip the odd one over.  I didn't find anything helpful about the error message online though, so I'm writing this post so that it will appear as a match for the next person who is trying to find out more!

The error was something I hadn't seen before - "The certificate associated with the consumer key has expired."  Googling didn't bring up much - one person had reported it before, and they had got around it by removing their CLI installation and starting again. Not an option for me as the CLI setup on my CI machine would take a fair amount of effort to recreate. Time to start digging.

The first place I looked was the CLI itself - I typically don't update this much on my CI machine, as new releases haven't always worked reliably over the last year or so. It seemed entirely plausible that something embedded in the CLI had expired, so I updated everything and waited. Sadly this didn't fix my scratch builds, but it did break one of my static code analysis jobs, as a rule had switched from Java to XPath, and I had references to the Java class in one of my custom configurations. That was a relatively quick fix, so shortly afterwards I was no worse off than the day before.

Next was the JWT grant for the org that I'm using as a dev hub for these builds. I was able to query data from the org without any problem, so it didn't seem to be that. Then I tried creating a scratch org and got the same error, so it seemed likely that it was related, but not as obviously as it might have been. 

Once I'd remembered how to access a connected app's configuration, I could see that the self-signed certificate for the app had expired about 9 hours before the build started failing.  Clearly I had found the problem.  

My next thought was that I would have to go through the whole JWT grant again - not something I look forward to, mainly because I don't do it that often and I always remember it being worse than it is. The first thing I needed to do though, was create a new self-signed certificate for the connected app, which I duly did. I was tempted to make the certificate last 10 years (apparently openssl self-signed certs can go out for around 75 years), but that felt like trading security for convenience, which is never a good thing to do. Once I'd updated the cert I decided I'd have a quick go at creating a new scratch org and it worked! No need to generate a new grant, I was off to the races. I then encountered a problem deleting scratch orgs, but this is something I'm also seeing on another machine that is authorised to a different dev hub via web login, so it feels like that is a different issue. I can also work around it with some scheduled Apex, so I'm happy to wait and see if it goes away!
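For reference, the kind of openssl command involved - the filenames are just examples, and the days parameter controls how long you have before going through all this again:

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout server.key -out server.crt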




Saturday, 21 August 2021

The Public Nature of Modern Programming

Then

When I got my first programming job, way back in 1987, it was quite a private and solitary occupation. The Design Authority for my project gave me a detailed design for a procedure, including explicit definitions of the interfaces that I would use to interact with the rest of the system. I would then write the code for the procedure, test it locally and check it into version control. If I didn't need any clarification, I wouldn't speak to anyone as part of this process. I'd chat to my colleagues that I shared an office with, but in terms of doing my work it would be me and a screen.

My code wouldn't be looked at by anyone else unless there was an issue when the next build took place, which there always was, as there were hundreds of new procedures that had never been slammed together before. The build team would take the first stab at fixing the issues, and if they were successful then I wouldn't know anything about this. If they were unsuccessful we would then enter a period of confusion as they typically wanted help with their fix rather than the underlying problem, and I would wonder where this code they were talking about had come from. 

My working life was spent solving problems that others were probably spending time solving, but it would never have occurred to me to share information about what I was doing, nor would I have had the first clue about how to go about sharing it. 

Outside of work, I would almost never talk to anyone about programming unless I was catching up with someone I'd been at college with, but it would be very superficial and typically about which languages and business areas we were working in, before getting down to the serious business of talking about sport. I'd also rarely do any programming outside of work. I might fiddle around a bit with some Basic, Pascal or 6502 assembler on my BBC Model B, but nothing serious. And certainly nothing that I'd consider telling anyone else about.  Some people wrote games and sent the source into magazines that might print it, but as a professional programmer I already got paid to write code so didn't need this outlet. 

Programming was a job like any other - I went to an office, worked there for a set number of hours a day, then went home and did whatever else I was interested in at the time. Unless you were a close friend or family member, you wouldn't have known that I programmed computers for a living, and unless you were a programmer yourself, it was hard for me to communicate exactly what I did every day.

The stereotype of a programmer was someone sitting in a darkened basement, being fed written requirements, and spending their days staring at a screen and typing. Which was pretty accurate.

Now

Programming in 2021 has changed beyond all recognition. It's now a collaborative and very public occupation. Requirements are rarely written down, and if they are it is often the programmers doing that after speaking to "the business" to find out exactly what is needed. Outside of integrating with external systems, integrating with the rest of the code in the project involves collaboration with the other programmers and the interfaces change when it becomes apparent they need to. Code is reviewed before being added to version control and continuous integration flags up problems as soon as they occur. The idea that a team of humans would need to kick off a build and triage the problems seems quaint and incredibly inefficient now. 

For me, the biggest change is that programmers expect, and are expected, to have a public profile. 

When we solve a problem, we write a blog post about it, redacting any detail that might expose which customer this impacted. We cross post on Substack and Medium. There might be a Github repository with the sample code for the blog post - there will almost certainly be other Github repositories to showcase our work, sometimes complete applications that anyone is welcome to copy. 

We also perform for audiences now, as we share what we've been doing at meet ups, conferences (both community and vendor organised), podcasts, webinars and recorded sessions. I'm pretty sure for the first 5-10 years of my career I didn't even show a customer what I'd built for them, it just got rolled into demos and training carried out by others. Now I show complete strangers from around the world and it seems perfectly normal.

Which is Better?

From the point of view of solving problems and producing solutions, it's almost unrecognisable now. If you hit a problem with anything other than bleeding edge technology, it's incredibly likely that someone else has already hit it and fixed it. And blogged about the fix. And spoken at a conference about it. And maybe created a Github repository with all the code you could possibly need going forward. 

The flip side is that programming is now more of a lifestyle choice than a job. There is an expectation that you will be working on side projects, writing posts and presenting at conferences. Often this will be on top of your actual job, which means that your social life is merely an extension of your professional life and often indistinguishable. Going to the office and working a set number of hours is typically only part of the programmer life these days, which can bring a lot of pressure, especially if you don't see it as a calling.  It can also be a full time job keeping up with the latest developments - frameworks rise and fall, platforms come in and out of favour, or the baddies get a new CEO and become the goodies so you grudgingly have to skill up on their products.

Personally I find it a lot better now than it was when I started, but I'm very lucky in that (a) I enjoy what I do, and (b) I can dedicate large chunks of my own time to all the extra-curricular activities without making my personal life difficult. I'm sure nobody wants to go back to the days when the helpful information was silo'd or didn't exist, but I'd imagine there are some out there who would quite like a return to the days when programming was something you did in private.



Saturday, 14 August 2021

Salesforce++ Top Picks


Salesforce++ is the new streaming service that marries up the excitement of enterprise software with the creativity and spontaneity of reality TV, showcasing exclusive original content that other services can only dream about. 

We've watched them all so you don't have to, so read on for our pick of the bunch from August's programming.

90 Day Licenseé

In this show, Salesforce Account Executives are paired up with real life prospects and have 90 days to turn them into paying customers.

As we catch up with the couples in August, Malcolm introduces Clare to his family for the first time. Malcolm's sister, Evelyn, questions Clare's intentions, feeling that she is leading Malcolm on to get access to his first call deck.

If you enjoy this show, watch out for 90 Day Licenseé : Happy Ever After? to find out more about previous prospects - did they get the agreement they were looking for, and are they still together with their AE?

American Picklists


Following Salesforce Admins Becky and Taylor from coast to coast as they track down weird and wonderful picklists, and talk to the admins responsible for creating and maintaining them.

August - Becky and Taylor head to rural Iowa to meet solo admin Herman, who works for a local non-profit. While clearing out some old applications that hadn't been opened for years, Herman stumbled across a mint-condition picklist produced for training purposes in 2012, made up of the names of Disney characters. 

Miami ISVs


Drama featuring Sonny Crackit and Rico Tabs, two software engineers based in Miami who spend their weeks helping ISVs fine-tune their AppExchange offerings, and their weekends on sun-drenched beaches.

In this month's episode, Crackit suffers a concussion after crashing his jet-ski and believes himself to be Sonny Crockett from the Miami Vice TV series. Tabs faces a race against time to stop his colleague from blowing the profits from their last engagement by renting a Ferrari Daytona Spyder.

Deadliest Batch


(Documentary series following the real-life experience of several teams of hard-bitten Apex developers who make a living writing batch jobs for demanding customers)

In August, there's trouble at the family operations. At Munchausen the Slopestring brothers fall out over whether to use Database.Stateful or write information back to the database at the end of each execute method. Meanwhile, at Winter Cove, Calamity Jane Hitchcoski's development team grind around the clock to fill their record quota before tax season ends.

Say Yes to the Apex


Reality series following events at Grossmeadow Software, where the developers try to find the perfect Apex solutions for a different admin and their entourage every week.

August : Can Team Leader Jackie design the perfect Apex for newly single admin Roberto? Roberto has dreamt about replacing his ageing workflow rules with a stylish Apex solution, and is keen to make it happen this year as a tribute to his step-uncle, who died 15 years before Roberto was born. He'll be deploying the code from Honolulu, so is keen that it has a Hawaiian feel to it.


Film of the Month: Bad Multi-Tenant




[While it might not feel like it, streaming services aren't all about reality shows. Salesforce++ features a mix of new and classic movies]

August gets off to a blistering start with a hard hitting classic from the early 90s. Harry Cortez stars as MT, a Salesforce Architect who delights in exploiting loopholes to use more than his fair share of Salesforce resources.  As he closes in on rock bottom, he is given a shot at redemption when he stumbles across a post on Stack Exchange asking for help writing the unit tests for an Apex trigger.

Contains scenes of limit abuse and unbounded SOQL queries.



Sunday, 8 August 2021

Avoiding Returnaggedon


How a company has treated its staff over the last 18 months, and how they treat them over the next few months as traditional workplaces open up, will have a huge impact on their future. Forcing teams back into the office is likely to result in Returnaggedon, where they come back just long enough to hand in their notice.

Now that most legal restrictions in the UK around COVID-19 have been lifted, thoughts are inevitably turning to what happens next in terms of remote work. Many of us are drifting back in for the odd day here and there for meetings, while some CEOs seem determined to revert to how things were before. Apparently working from home doesn't work for those who want to hustle, and if you can go into a restaurant in New York City, you can come to the office. More often than not, though, the impending switch back to the old normal was immediately pushed back as the delta variant proved to be no respecter of desires or plans - a couple of days ago Amazon were the latest, moving their return from September 2021 to January 2022.

Expecting everything to go back to how it was because that's how you liked it seems rather short sighted to me. While many reasons may be given for why everyone needs to be in the same physical location to get their work done, a lot of this will come down to trust, or lack thereof. The pandemic forced many managers to trust that their people were working even when they couldn't stand over them and, while they paid lip service to how well it was all going, that trust really wasn't there. Hence some can't wait to get everyone back in the office where their every move can be watched. Trust begets trust, however, and many employees report not trusting their leadership to manage the return to work safely - last September a survey found only 14% trusted their CEOs and senior leaders to make the correct call, which is pretty shocking. 

It's not just about safety though. People have adjusted to the benefits of part- or full-time remote working and are reluctant to put the genie back in the bottle.  No commute and saving money are the main reasons that most people want to stay remote, and they won't give that up easily, especially if they can avoid it by simply changing jobs. And people are changing jobs. A lot. So much that it's being called the Great Resignation. A Microsoft survey of 30,000 workers around the world found that 41% of them were considering quitting or changing profession. Imagine losing 41% of your workforce - it's hard to see that as anything other than catastrophic, given how long it takes to fully onboard new employees. While most executives say they don't want things to go back to how they were, around 70% of them want people in the office 3+ days a week, so it's clear that many want it to look an awful lot like it did before, with a few bones thrown to pretend they embrace remote work.

In Salesforce world, the competition for talent has always been intense, and somehow gets hotter every year. Companies need to offer people what they are looking for, or they will miss out to those that do. A survey of 1,000 US office workers in May said 39% would consider quitting if their employer didn't show flexibility. If you add to that how easily people can walk into another Salesforce job, it's clear that if employees want to continue with this way of working, an accommodation must be found.  If plans aren't in place to offer remote/hybrid working going forward, it's going to be a bumpy ride!



Saturday, 31 July 2021

The CLI GUI Plays Favourites

 


Introduction


My last change to the CLI GUI was to add the capability to decode a Salesforce CLI command string and regenerate the command page for it. This was the first step towards favourites functionality, so that I could save frequently used commands and quickly re-run them. As usual, it wasn't quite as straightforward as that, but this afternoon I pushed an update to the repository to add support for favourites.

A Few of My Favourite Things


If you are already using the CLI GUI, you'll need to run npm install as I'm using a different mechanism to convert the command string into its component parameters - string-argv. Previously I had a very complicated regular expression that I found online, but it didn't handle string parameters containing spaces too well, or macOS directories.
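At the time of writing, string-argv exposes a parseArgsStringToArgv function - a quick sketch of the kind of usage involved, rather than the actual CLI GUI code:

const { parseArgsStringToArgv } = require('string-argv');

// the quoted query string, spaces and all, survives as a single entry
// in the resulting argument array
const args=parseArgsStringToArgv('sfdx force:data:soql:query -q "select Id from Account"');
console.log(args.length);  // 4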

The first change you'll notice when the GUI starts is the new datalist and a couple of buttons. The datalist allows you to select from the favourites you've saved, and you can either Open the command window with the favourite decoded, or run it immediately. As an aside, running a favourite that does something destructive or that can't be undone (like deleting a scratch org) is a pretty dangerous thing to do, so you'll be asked to confirm it. I really only use this for opening orgs without having to go via the command window:


Obviously there won't be anything in the datalist yet, as you'll need to create a favourite or two first.

Favourites are added and removed from the command window, which now has a section at the bottom of the page for this. Once you have set the various parameters, give it a name and click the Save Favourite button:


Note that the directory you are working in is saved with the favourite.

Returning to the main screen, the new favourite is now available in the datalist, and selecting it enables the buttons:


Clicking the Open button opens the command window with the favourite details, and a button to remove it:



Clicking the Run button first asks you to confirm you really want to do it:



If you choose to continue, the command will immediately execute, in the directory you were in when you saved the favourite, and the output shown in a similar modal to that of the command window:



Right now there's no mechanism to update a favourite - you have to remove the existing one and then save the updated command with the same name. Yes it's a couple more clicks, but think of how grateful you'll be if I do add this capability.

The latest code is in the Github repository, and this week I'm back to the usual test environments of MacOS and Windows 10.  I haven't flogged it to within an inch of its life though, so I'm sure if you look hard enough you'll find something that doesn't work. If you do, feel free to raise an issue.
