Wednesday, 29 December 2021

2021 Year in Review - Part 1

London's Calling Caricatures - 1 year and zero haircuts apart

2021 began in the UK much as we'd spent a good chunk of 2020 - in lockdown (Lockdown 3, just when you thought it was safe to go inside someone else's house again) and doing everything over videoconference, which was starting to get a bit samey.

January

Those of us that had submitted London's Calling presentations heard back, and I was relieved to see that I'd made the cut. 

The London Salesforce Developers held two events in January - the first for the graduation of another round of Speaker Academy candidates, and the second for Apex Trigger Essentials courtesy of my fellow co-organiser Erika McEvilly. The combination of Erika and starting from the ground up with triggers clearly resonated, as we had a record 163 signups for this event. 

I made a bold, and incorrect, prediction that there wouldn't be an in person Dreamforce in 2021. I got it right that Salesforce wouldn't bring 100k+ people to San Francisco from all over the world, but got it wrong that they'd be happy with a 5k event.

February

The Spring 21 release of Salesforce went live, including an update that corrected the CPU tracking for flows - this is enforced in the Summer 22 release, so if you haven't tested with the new behaviour the clock is ticking. As is the tradition, here at BrightGen we ran our release webinar and gave everyone a chance to spend more time in front of a screen, but at least not on camera. Yay!

London's Calling ticked ever closer, and session recording got underway. In keeping with the scale of the thing I was accompanied by a member of the content team and an AV specialist to keep things on track. I have to say I still prefer the adrenaline rush of doing everything live - there's nothing like wondering if the wifi will hold up to get the heart racing. That said, from the point of view of the organisers I can see that trying to co-ordinate a ton of speakers presenting live from around the world would be a total nightmare.

The London Salesforce Developers were treated to a session on respecting data protection laws in Salesforce - a dry topic, but also an important one that isn't going away.

I also launched a Substack in February, which is proving invaluable for reminding me what was going on back then!

March

The wait was finally over and London's Calling was here - another chance to spend a whole day in front of a screen! As always it felt like the hardest thing to recreate online was the expo, but at least we at BrightGen had the caricatures, which do translate pretty well, as you can see at the start of this post. My session was on Org Dependent Packages, which I still think are pretty awesome, especially for large enterprises with mature orgs. You can find recordings of all of the sessions on the YouTube channel, and there is some great stuff there, so it's well worth a few hours digging around.

When it's London's Calling month, we at the London Salesforce Developers try to keep our event lightweight as we feel like there's plenty of learning around already. To this end we decided to crowd-source our members' favourite Spring 21 features, which didn't get a huge take-up. To punish them, I did most of the presenting on my favourite features, which should motivate people to either get more involved or not show up in the future!

In an unexpected turn of events, the data recovery service came back from the dead, having been retired in July 2020. A fine example of listening to your customers. There were also rumours that Marc Benioff was considering stepping down and handing over to Bret Taylor. Not entirely wrong as it turned out, but not exactly correct either.

March also marked a whole year fully remote - little did I realise that there was plenty more of this to come.


Sunday, 5 December 2021

JavaScript for Apex Programmers Part 2 - Methods and Functions


(for the full Gertrude Stein quote on art having no function and not being necessary, see : https://www.scottedelman.com/2012/04/26/art-has-no-function-it-is-not-necessary/)

Introduction

In the first part of this occasional series on JavaScript for Apex Programmers, we looked at my chequered history with JavaScript, then the difference between Apex and JavaScript types. In this instalment we'll look at methods and functions.

Definitions

A method is associated with an object. It's either part of an object instance (non-static) or part of the class that objects are created from (static method). A function is an independent collection of code that can be called by name from anywhere. These are often used interchangeably (by me especially!) but the difference is important when considering JavaScript and Apex.

Apex

Apex only has methods. All Apex code called by its name is either a method being invoked on an object instance (that you or the platform created) or a static method in a class.

...

Almost all Apex code.

...

Apart from triggers. 

...

Here's an account trigger with a named collection of code that I can call later in the trigger:

trigger AccountMethod on Account (before insert) 
{
    void logMessage(String message) 
    {
        System.debug('Message : ' + message);
    }
    
    logMessage('In account trigger');
}

And if I insert an account via execute anonymous, I see the message appear as expected:



While this might look like a function, it's actually a static method on the trigger, as I found out by changing the call to this.logMessage('In account trigger');



It's not a useful static method though, as it can't be called from outside of this trigger. I suppose it could be used to organise code in the trigger to be more readable, but code shouldn't live in triggers so you'd do far better to put it in a utility or service class.

That interesting digression over with, as far as we are concerned Apex has methods in objects or classes, and I'll proceed on that basis.

If you want to pass some Apex code to be executed by another method, you have to pass the object to the method. An example of this that we come across pretty often is scheduled Apex:

MySchedulable mySch = new MySchedulable();
String cronStr = '21 00 9 9 1 ?';
String jobID = System.schedule('My Job', cronStr, mySch);
The platform will call the execute() method on the mySch instance of the MySchedulable class that I pass to it.
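The MySchedulable class isn't shown above - here's a minimal sketch, assuming all it needs to do is log when the job fires; any class implementing the Schedulable interface and its single execute method will do:

global class MySchedulable implements Schedulable {
    // Called by the platform when the scheduled time arrives
    global void execute(SchedulableContext sc) {
        System.debug('Scheduled job fired');
    }
}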

 

JavaScript

JavaScript, as I find so often the case, is a lot more powerful and a lot more confusing. JavaScript has both methods and functions, but under the hood methods are functions stored as a property of an object. 

Functions are also object instances - every function you create is actually an instance of the Function object. Which means they can have properties and methods like other objects. And those methods are actually functions stored as properties. And so on.
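A quick sketch of both points - a method is just a function stored as a property, and the function itself is an object you can hang properties off:

let logger = {
    log: function(message) { console.log(message); }  // a function stored as a property - a method
};

logger.log('Hello');          // invoked as a method of logger
let fn = logger.log;          // the underlying function is a value in its own right
fn('Hello again');            // invoked as a plain function

fn.callCount = 0;                     // functions are objects, so they can carry properties too
console.log(fn instanceof Function);  // true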

The good news is that you don't need to care about most of this when using JavaScript in Salesforce. In my experience, what you really need to know is:

Functions are First Class Citizens

In JavaScript, functions can be assigned to variables:

let log=function(message) {
    console.log(message);
}
   
log('Hello');

Hello

passed as parameters to other functions:

let consoleLogger=function(message) {
    console.log(message);
}

let log=function(logger, message) {
    logger(message);
}

log(consoleLogger, 'Message to console');

Message to console

and returned as the result of a function:

function getConsoleLogger() {
    return function(message) {
        console.log(message);
    }
}

let consoleLogger=getConsoleLogger();

consoleLogger('Message for console');

Message for console

Functions can be Anonymous

When you create callback functions in JavaScript for very simple, often one-off use, they quickly start to proliferate and become difficult to distinguish from each other. Anonymous functions are defined where they are needed/used and don't become a reusable part of the application. Using a very simplistic example, for some reason I want to process an array of numbers and multiply each entry by itself. I'm going to use the map() method from the Array object, which creates a new array by executing a function I supply on every element in the source array. If I do this with named functions:

function multiply(value) {
    return value*value;
}
let numbers=[1, 2, 3, 4];
let squared=numbers.map(multiply);
console.log(squared);
[1, 4, 9, 16]

If I don't need the multiply function anywhere else, it's being exposed for no good reason, so I can replace it with an anonymous function that I define when invoking the map method:

let numbers=[2, 4, 6, 8];
let squared=numbers.map(function(value){return value * value});
console.log(squared);
[4, 16, 36, 64]

My anonymous function has no name and cannot be used anywhere else. It's also really hard to debug if you have a bunch of anonymous functions in your stack, so exercise a little caution when using them.
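One mitigation, if you do find yourself debugging: a function expression can be given a name that shows up in stack traces without becoming reusable elsewhere - a minimal sketch:

let numbers=[2, 4, 6, 8];
let squared=numbers.map(function square(value) {
    return value * value;  // 'square' appears in stack traces, but isn't visible outside the function
});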

Arrow Functions improve on Anonymous

Especially for simple functions. Arrow functions (sometimes called fat arrow functions) give you a more succinct way to create anonymous functions.

numbers.map(function(value){return value * value});

I can lose a lot of the boilerplate text and just write:

numbers.map(value=>value*value);

Breaking this down:

  • I don't need the function keyword - I replace it with =>
  • I don't need parentheses around my parameter, I just put it to the left of =>
    Note that if I have no parameters, or more than one, I do need parentheses
  • I don't need the braces, as long as the code fits onto a single line
  • I don't need the return statement, again as long as the code fits onto a single line. The result of my expression on the right hand side of => is implicitly returned

Thus arrow functions can look pretty different to regular functions:

let multiply=function(value) {
    return value * value;
}

let arrowMultiply=value=>value*value;

or quite similar:

let addAndLog=function(first, second) {
    let result=first + second;
    console.log('Result = ' + result);
    return result;
}

let arrowAddAndLog=(first, second)=>{
    let result=first + second;
    console.log('Result = ' + result);
    return result;
}

Arrow functions have a few gotchas too - the major one is that 'this' is taken from the scope the arrow function was defined in (which for top-level code in a browser is the Window object), and you can't change it via call, apply or bind.
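A small sketch of the difference - the regular method picks up the object it's called on, while the arrow function reaches out to the enclosing scope:

let counter = {
    label: 'My Counter',
    regular: function() { console.log(this.label); },
    arrow: () => { console.log(this.label); }
};

counter.regular();  // 'My Counter' - this is the object the method was called on
counter.arrow();    // undefined - this comes from the scope the arrow was defined in, not counter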

Functions have a Context

There's quite a bit to this (pun intended!) and as I mentioned in the first instalment, this isn't intended to be a JavaScript tutorial, so I can't see any point in replicating the Mozilla Developer Network content. Instead I'll just point you at it. The key thing to remember is 'this' depends on how the function is called, not where it is declared, so if you pass an object method as a callback function, when it is invoked 'this' won't refer to the original object, but whatever object is now invoking it. I'd recommend spending some time getting to grips with the context, otherwise you'll likely spend a lot more time trying to figure out why things don't work.
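As a quick illustration of how easily the context gets lost, and the bind method that fixes it:

let account = {
    label: 'TX Test',
    logLabel: function() { console.log(this.label); }
};

account.logLabel();                                 // 'TX Test' - this is account
let callback = account.logLabel;
callback();                                         // undefined - context lost, this is no longer account
let boundCallback = account.logLabel.bind(account);
boundCallback();                                    // 'TX Test' - this is fixed to account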

Related Posts

JavaScript for Apex Programmers Part 1 - Typing

Sunday, 28 November 2021

Salesforce++ Holiday Highlights

With the holiday season fast approaching, it's time to take a look at the feast of programming coming in the next few weeks, starting with one from my side of the pond.

The Great British Break Point


Amateur developers compete for the crown of Britain's Top Debugger. This week focuses on the user experience, where the breakers are tasked with crafting the perfect break point to identify why a user cannot successfully create an opportunity and its related products in a single transaction. Judges Paul Cricklewood and Prue L33t are on hand to deliver the verdict. 

On your marks ... get set ... break!

Bob and Mate: Plus 8


Introduction to simple formulas featuring me, Bob Buzzard, and an acquaintance from the Salesforce ecosystem. December's episode shows how to add 8 to various numeric fields, either directly or by calculating the value 8 using advanced mathematical operations like addition and multiplication. 

Licensed at First Sight


Five prospect companies who have never seen Salesforce before are matched up to license packs by a team of experts. Cameras follow the users as they get their first sight of the system on go live day. Look out for the follow-up program in 8 weeks' time, when the prospects decide if they want to stay licensed or break up their contract.

Unlike other matchmaking shows, there is no cash prize for prospects who stay with their licenses - quite the reverse as they are then liable for the full cost of the license pack, even the ones they don't want!

Batched


Follow two Apex specialists as they remedy extreme asynchronous processing gone wrong. Whether it's a maximum scope of 1 record, or exceeding the 50 million records per day processing limit, there's always hope.

Film of the Month  - Hidden Triggers (2019)



Documentary featuring the unsung trigger heroes that keep enterprises moving. Whether it's overcoming limitations with roll up summaries, or simply copying an updated field from one sObject type to another, if these triggers failed then western civilisation would quickly grind to a halt. Filmed over five years with unparalleled access to version control, see for the first time how updates to these triggers are deployed and tested.

Contains scenes of mild jeopardy and swearing at failed deployments.

See Also


If you enjoyed this post, you might like Salesforce++ Top Picks. And you might also like to question some of your life choices.

Sunday, 24 October 2021

London Salesforce Developers - Back in Person



20th October 2021 was a momentous day for the London Salesforce Developers Trailblazer group, as we met in person for the first time since 12th February 2020 - 88 weeks later!

We've still been running our events - like most of the developers around the world we had switched to Zoom, but fatigue was hitting, and the excitement of having people join from around the world was fading fast. After the September Dreamforce viewing party event, we organisers felt that enough was enough and it was time to go out into the world and once again mingle with the three-dimensional people.

Cloud Orca were our generous hosts, in the Loading Bay event area of their Techspace offices in Shoreditch: 

Your organisers and Cloud Orca CEO Ed Rowland (right)

It was wonderful to see everyone, and very strange to be in such proximity again. Dare I say it felt like we were returning to normal, although it will take a few more of these before it starts to feel normal again. Luckily we humans are nothing if not adaptable, so I'm sure by early next year we'll have forgotten what a virtual event feels like.

One thing we were fairly sure of was that people wanted to talk to each other rather than listen to us, so we kept the presentation side of the evening short and sweet. Amnon kicked us off with a minimum of slides to welcome everyone back, and call out some key community news, including:

then a few minutes from yours truly on the key developer features from the Winter 22 release

As part of my talk I gave a demo of dynamic interactions - while this wasn't recorded, you can find the code in my Winter 22 Github repository. I may write up another blog post about this, although there wasn't an awful lot more ground covered than my last post on the topic, aside from a slightly more plausible use case! It did feel good to be demoing to a live audience again - demoing virtually is fine, but you do feel somewhat removed from the audience.

Not quite Rick Wakeman - photo posted by Louise Lockie on Twitter

The RSVPs were a little lower than before the pandemic, which isn't surprising as not everyone is keen to start mixing again. Dropouts were way down though, so thus far it looks like those who sign up to come along aren't doing so lightly. 

If you are interested in coming along to our next meetup, make sure to join our Trailblazer Community Group, and if you'd like to see what we've got up to in the past and receive future recordings, follow our YouTube channel.


 

Saturday, 2 October 2021

Transaction Boundaries in Salesforce (Apex and Flow)

Introduction

Winter 22 introduces the capability to roll back pending record changes when a flow element fails at run time. I had two reactions to this - first, that's good; second, how come it didn't always do that? It took me back to the (very few) days when I did Visual Basic work and found out that boolean expressions didn't short circuit - it's something that has been such an integral part of other technologies that I've worked with, it never occurred to me that this would be different.

The original idea for this post was to call out that the roll back only applies to the current transaction, and tie in how Apex works as a comparison. But then it occurred to me that in the Salesforce ecosystem there are many people teaching themselves Apex who could probably use a bit more information about how transactions work. So, in the words Henry James wrote in his obituary for George du Maurier (referring to the novel Trilby):

"the whole phenomenon grew and grew till it became, at any rate for this particular victim, a fountain of gloom and a portent of woe"

Transactions

A transaction is a collection of actions that are applied as a single unit. Transactions ensure the integrity of the database in the face of exceptions, errors, crashes and power failures. The characteristics of a transaction are known by the acronym ACID:

  • Atomic - the work succeeds entirely or fails entirely. Regardless of what happens in terms of failure, the database cannot be partially updated.
  • Consistent - when a transaction has completed, the database is in a valid state in terms of all rules and constraints. This doesn't guarantee the data is correct, just that it is legal from the database perspective.
  • Isolated - the changes made during a transaction are not visible to other users/requests until the transaction completes. 
  • Durable - once a transaction completes, the changes made are permanent. 

Transactions in Apex


Apex is different to a number of languages that I've used in the past, as it is tightly coupled with the database. For this reason you don't have to worry about transaction management unless you want to take control. When a user makes a request that invokes your code, for example a Lightning Web Component calling an @AuraEnabled method, your code is already in a transaction context. 

When a request completes without error, the transaction is automatically committed and all changes made during the request are applied permanently to the database.  This also causes work such as sending emails and publishing certain types of platform events to take place. Sending an email about work that didn't actually happen doesn't make a lot of sense, although this caused us plenty of angst in the past as we tried to find ways to log to an external system that a transaction had failed (Publish Immediately platform events finally gave us a mechanism to achieve this).
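As a sketch of that mechanism - Error_Log__e here is a hypothetical platform event with its Publish Behavior set to Publish Immediately, so it is delivered even if the transaction subsequently rolls back:

// Error_Log__e is hypothetical - a platform event configured as Publish Immediately
Error_Log__e event = new Error_Log__e(Message__c='Something went badly wrong');
Database.SaveResult result = EventBus.publish(event);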

When a request encounters an error, the transaction is automatically rolled back and all changes made during the request are discarded. Often the user receives an ugly stack trace, which is where you might decide that you need a bit more control over the transaction.

Catching Exceptions


By catching an exception, you can interrogate the exception object to find out what actually happened and formulate a nice error message to send to the user. However, unless you surface this message to the user, you have changed the behaviour of the transaction, maybe without realising it. For example, if you catch an exception and return a message encapsulating what happened:
Account acc=new Account(Name='TX Test');
Contact cont=new Contact(FirstName='Keir', LastName='Bowden');
String result='SUCCESS';
try {
   insert acc;
   cont.AccountId=acc.Id;
   insert cont;
}
catch (Exception e) {
    result='Error ' + e.getMessage();
}

return result;
In my dev org, I have a validation rule requiring that one of email or phone be defined, so the insert of the contact fails and I see a fairly nice error message:


Unfortunately, this isn't the only issue with my code - as I caught the exception, the request didn't fail so the transaction wasn't automatically rolled back. While from the user's perspective the request failed, the first part of it succeeded and an account named TX Test was created:


and every time the user retries the request, I'll get another TX Test account and they will get another error message.  

Taking Back Control


Luckily Apex has my back on this, and if I want to (and I know what I'm doing) I can take control of the transaction and apply some nuance to it, rather than everything succeeding or failing.

Savepoints allow you to create what can be thought of as sub or nested transactions. A savepoint identifies a point within a transaction that can be rolled back to, undoing all work after that point but retaining all work prior to that point. Changing my snippet from above to use a savepoint and rollback, I can ensure that all of my changes succeed or fail:
Savepoint preAccountSavePoint=Database.setSavepoint();
Account acc=new Account(Name='TX Test');
Contact cont=new Contact(FirstName='Keir', LastName='Bowden');
String result='SUCCESS';
try {
   insert acc;
   cont.AccountId=acc.Id;
   insert cont;
}
catch (Exception e) {
    result='Error ' + e.getMessage();
    Database.rollback(preAccountSavePoint);
}

return result;
The user's experience remains the same - they receive a friendly error message that the insert failed.  This time, though, there is no need to rid my database of the troublesome account. Prior to carrying out any DML I have created a Savepoint, and in my exception handler I rollback to that Savepoint, undoing all of the work in between - the insert of the account that was successful.  Note that rolling back the transaction has no effect on my local variables - prior to the return statement I have an account in memory that has an Id assigned from the database, but that doesn't exist any more. Of course this also leaves any other work that happened in the transaction outside of my code in place, which may or may not be what I want, so taking control of transactions shouldn't be done lightly.

Savepoints are also useful if there are a number of courses of action that are equally valid for your code to take. You set a Savepoint and try option 1; if that doesn't work you roll back and try option 2, and so on. This is pretty rare in my experience, as usually there is a specific requirement around a user action, but it does happen, like when option 2 is writing the details of how and why option 1 failed.
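A minimal sketch of that pattern - the attemptOption methods are hypothetical stand-ins for whatever the two courses of action are:

Savepoint sp = Database.setSavepoint();
try {
    attemptOption1();           // hypothetical - may throw
}
catch (Exception e) {
    Database.rollback(sp);      // undo any partial work from option 1
    attemptOption2(e);          // hypothetical - e.g. record how and why option 1 failed
}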
 
You can also set multiple Savepoints, each rolling back less and less work, and choose how far to roll back in the event of an error. Your co-workers probably won't thank you for this, and are more likely to see it as combining Circles of Hell into your very own Inferno. If you do choose to go this route, note that when you rollback to a savepoint, any that you created since then are no longer valid, so you can't switch between different versions of the database at will.

Savepoints only work in the current transaction, so if you execute a batch job that requires 10 batches to process all of the records, rolling back batch 10 has no effect on batches 1-9, as those took place in their own transactions which completed successfully. 
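To make the boundary concrete, here's a skeleton batch class - each call to execute below runs in its own transaction, committed or rolled back independently of the others:

global class MyBatch implements Database.Batchable<sObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        return Database.getQueryLocator('SELECT Id FROM Account');
    }

    // Each invocation of execute is a separate transaction - rolling one
    // back has no effect on batches that have already committed
    global void execute(Database.BatchableContext bc, List<sObject> scope) {
        // process this batch of records
    }

    global void finish(Database.BatchableContext bc) {
    }
}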

Rolling Back Flow Transactions

After that lengthy diversion into Apex, hopefully you'll understand why I expect everything to roll back the transaction automatically when there is an error - it's been a long time since it has been my problem. Per the Winter 22 release notes, flow doesn't do this:

Previously, when a transaction ended, its pending record changes were saved to the database even if a flow element failed in the transaction

and bear in mind that is still the case - what has been added is a new Roll Back Records element you can use in a fault path. Why wasn't this changed to automatically roll back on error? For the same reason that Visual Basic didn't start short circuiting boolean expressions - there's a ton of existing solutions out there and some of them will rely on flow working this way. While it's not ideal, nor is introducing a breaking change to existing customer automation. 

Something else to bear in mind is that this rolls back the current transaction, not necessarily all of the work that has taken place in the flow. Per the Flows in Transactions Salesforce Help article, a flow transaction ends when a Screen, Local Action or Pause element is executed. Continuing with my Account and Contact approach, if you've inserted the Account and then use a Screen element to ask the user for the Contact name, the Account is committed to the database and will persist regardless of what happens to your attempt to create a Contact.  Much like the batch Apex note above, rolling back the second (Contact) transaction has no effect on the first (Account) transaction as that has already completed successfully.

Enjoying Transactions?

Try distributed transactions.

And then scale them up across many microservices.

Many years ago I used to have to manage transactions myself, and in one particular case I had to work on distributing transactions across a number of disparate systems. It was interesting and challenging work, but I don't miss it!

Tuesday, 14 September 2021

Dynamic Interactions in Winter 22

Introduction

If all goes according to plan, Dynamic Interactions go GA in Winter 22. The release notes have this to say about them:

With Dynamic Interactions, an event occurring in one component on a Lightning page, such as the user clicking an item in a list view, can update other components on the page.

Which I think is very much underselling it, as I'll attempt to explain in the rest of this post.

Sample App

You can find the code for the sample app at the Github repository. There's not a huge amount to it - you choose an Account, and another component gets the Id and Name of the Account and retrieves the Opportunities associated with it:

Component Decoupling

What Dynamic Interactions actually allow us to do is assemble disparate components into custom user interfaces while retaining full control over the layout. The components can come from different developers, and we can add or remove components that interact with each other without having to update any source code, or with the components having the faintest idea about what else is on the page. 

This is something I've wanted for years, but was never able to find a solution that didn't require my custom components to know something about what was happening on the page.

The original way of providing a user with components that interact with each other was to create a container component and embed the others inside it. The container knows exactly which components are embedded, and often owns the data that the other components work on. There's no decoupling, and no opportunity to change the layout as you get the container plus all its children or nothing.

Lightning Message Service was the previous game changer, and that allowed components to be fairly loosely coupled. They would publish events when something happened to them, and receive events when something happened elsewhere that they needed to act on. They were still coupled through the messages that were sent and received though - any component that wished to participate had to know about the message channels that were in use and make sure they subscribed to and published on those. Good luck taking components developed by a third party and dropping those in to enhance your page. It did allow the layout to be easily changed and components, as long as they knew about the channels and messages, to be added and removed without changing any code. I was planning a blog post on this, but masterful inactivity has once again saved me the trouble of writing it and then having to produce another post recommending against that approach.

With Dynamic Interactions, all that needs to happen is that components publish events when things of interest happen to them, and expose public properties that can be updated when things they should be interested in happen - the dream of decoupled components is realised. The components don't have to listen for each other's events; that is handled by the Lightning App Builder page. As the designer of the page, I decide what should happen when a specific component fires a particular event. Essentially I use the page builder to wire the components to each other, through configuration.

Back to the Sample App

My app consists of two components (no container needed):

  • chooseAccount - this retrieves all the accounts in the system and presents the user with a lightning-combobox so they can pick one. In the screenshot above, it's on the left hand side. When the user chooses an account, an accountselected CustomEvent is fired with the details - all standard LWC:
        this.dispatchEvent(
                new CustomEvent(
                    'accountselected', 
                    {detail: {
                        recordId: this.selectedAccountId,
                        recordName: this.selectedAccountName
                    }
                })
        );
  • accountInfo - this retrieves the opportunity information for the recordId that is exposed as a public property, again all standard and, thanks to reactive properties, I don't have to manually take action when the id changes:
    @api get recordId() {
        return this._recordId;    
    }

    set recordId(value) {
        if (value) {
            this._recordId=value;
            this.hasAccount=true;
        }
    }
    
		....
        
    @wire(GetOpportunitiesForAccount, {accountId: '$_recordId'})
    gotOpportunities(result){
        if (result.data) {
            this.opportunities=result.data;
            this.noOpportunitiesFound=(0==this.opportunities.length);
        }
    }

and the final step is to use the Lightning App Builder to define what happens when the accountselected event fires. I edit the page and click on the chooseAccount component, and there's a new tab next to the (lack of) properties that allows me to define interactions for the component - the Account Selected event:


and I can then fill in the details of the interaction:


In this case I'm targeting the accountInfo component and setting its public properties recordId and recordName to their namesakes from the published event. If I had additional components which cared about an account being selected, I'd create additional interactions to change their state to reflect the selection.

I now have two components communicating with each other, without either of them knowing anything about the other one, using entirely standard functionality. I can wire up additional components, move them around, or delete components at will.  

Conclusion


What I regularly find myself producing is highly custom user interfaces that allow multiple records to be managed on a single page. For this use case, Dynamic Interactions are nothing short of a game changer, and I'm certain that this will be my go-to solution.




Friday, 10 September 2021

JavaScript for Apex Programmers Part 1 - Typing

Background

When I started working with Salesforce way back in 2008, I had a natural affinity for the Apex programming language, as I'd spent the previous decade working with Object Oriented languages - first C++, then 8 years or so with Java. Visualforce was also a very easy transition, as I had spent a lot of time building custom front ends using Java technologies - servlets first before moving on to JavaServer Pages (now Jakarta Server Pages), which had a huge amount of overlap with the Visualforce custom tag approach. 

One area where I didn't have a huge amount of experience was JavaScript. Oddly I had a few years experience with server side JavaScript due to maintaining and extending the OpenMarket TRANSACT product, but that was mostly small tweaks added to existing functionality, and nothing that required me to learn much about the language itself, such as it was back then. 

I occasionally used JavaScript in Visualforce to do things like refreshing a record detail from an embedded Visualforce page, Onload Handling or Dojo Charts. All of these had something in common though, they were snippets of JavaScript that were rendered by Visualforce markup, including the data that they operated on. There was no connection with the server, or any kind of business logic worthy of the name - everything was figured out server side. 

Then came JavaScript Remoting, which I used relatively infrequently for pure Visualforce, as I didn't particularly like striping the business logic across the controller and the front end, until the Salesforce1 mobile app came along. Using Visualforce, with its server round trips and re-rendering of large chunks of the page, suddenly felt clunky compared to doing as much as possible on the device, and I was seized with the zeal of the newly converted. I'm pretty sure my JavaScript still looked like Apex code that had been through some automatic translation process, as I was still getting to grips with the JavaScript language, much of which was simply baffling to my server side conditioned eyes.

It wasn't long before I was looking at jQuery Mobile to produce Single Page Applications where maintaining state is entirely the job of the front end, which quickly led me to Knockout.js as I could use bindings again, rather than having to manually update elements when data changed. This period culminated in my Dreamforce 2013 session on Mobilizing your Visualforce Application with jQuery Mobile and Knockout.js.

Then in 2015, Lightning Components (now Aura Components) came along, where suddenly JavaScript got real. Rather than rendering via Visualforce or including from a static resource, my pages were assembled from re-usable JavaScript components. While Aura didn't exactly encourage its developers down the modern JavaScript route, its successor - Lightning Web Components - certainly did.

All this is rather a lengthy introduction to the purpose of this series of blogs, which are intended to (try to) explain some of the differences and challenges when moving to JavaScript from an Apex background. This isn't a JavaScript tutorial, it's more about what I wish I'd known when I started. It's also based on my experience, which as you can see from above, was a somewhat meandering path. Anyone starting their journey should find it a lot more straightforward now, but there's still plenty there to baffle!

Strong versus Weak (Loose) Typing

The first challenge I encountered with JavaScript was the difference in typing. 

Apex

Apex is a strongly typed language, where every variable is declared with the type of data that it can store, and that cannot change during the life of the variable. 

    Date dealDate;

In the line above, dealDate is declared of type Date, and can only store dates. Attempts to assign it DateTime or Boolean values explicitly will cause compiler errors:

    dealDate=true;         // Illegal assignment from Boolean to Date
    dealDate=System.now(); // Illegal assignment from DateTime to Date

while attempts to assign something that might be a Date, but turns out not to be at runtime will throw an exception:

    Object candidate=System.now();
    dealDate=(Date) candidate; // System.TypeException: Invalid conversion from runtime type Datetime to Date

JavaScript

JavaScript is a weakly typed language, where values have types but variables don't.  You simply declare a variable using var or let, then assign whatever you want to it, changing the type as you need to:

    let dealDate;
    dealDate=2;               // dealDate is now a number
    dealDate='Yesterday';     // dealDate is now a string
    dealDate=new Date();      // dealDate is now a date

The JavaScript interpreter assumes that you are happy with the value you have assigned the variable and will use it appropriately. If you use it inappropriately, this will sometimes be picked up at runtime and a TypeError thrown. For example, attempting to run the toUpperCase() string method on a number primitive:

    let val=1;
    val.toUpperCase();
    Uncaught TypeError: val.toUpperCase is not a function

However, as long as the way you are attempting to use the variable is legal, inappropriate usage often just gives you an unexpected result. Take the following, based on a simplified example of something I've done a number of times - I have an array and I want to find the position of the value 3.

    let numArray=[1,2,3,4];
    numArray.indexOf[3];

which returns undefined, rather than the expected position of 2.

Did you spot the error? I used the square bracket notation instead of round brackets to demarcate the parameter. So instead of executing the indexOf function, JavaScript was quite happy to treat the function as an object and return its property named 3, which doesn't exist.

JavaScript also does a lot more automatic conversion of types, making assumptions that might not be obvious.  To use the + operator as an example, this can mean concatenation for strings or addition for numbers, so there are a few decisions to be made:

lhs + rhs

1. If either of lhs/rhs is an object, it is converted to a primitive string, number or boolean

2. If either of lhs/rhs is a string primitive, the other is converted to a string (if necessary) and they are concatenated

3. lhs and rhs are converted to numbers (if necessary) and they are added

Which sounds perfectly reasonable in theory, but can surprise you in practice:

    1 + true 

result: 2, true is converted to the number 1

    5 + '4'  

result: '54', rhs is a string so 5 is converted to a string and concatenated.

    false + 2

result: 2, false is converted to the number 0

    5 + 3 + '5'

result: '85' - 5 + 3 adds the two numbers to give 8, which is then converted to a string to concatenate with '5'

    [1992, 2015, 2021] + 9

result: '1992,2015,20219' - lhs is an object (array) which is converted to a primitive using the toString method, giving the string '1992,2015,2021', 9 is converted to the string '9' and the two strings are concatenated

Which is Better?

Is this my first rodeo? We can't even agree on what strong and weak typing really mean, so deciding whether one is preferable to the other is an impossible task. In this case it doesn't matter, as Apex and JavaScript aren't going to change!

Strongly typed languages are generally considered safer, especially for beginners, as more errors are trapped at compile time. There may also be some performance benefits as you have made guarantees to the compiler that it can use when applying optimisation, but this is getting harder to quantify and in reality is unlikely to be a major performance factor in any code that you write.

Weakly typed languages are typically more concise, and the ability to pass any type as a parameter to a function can be really useful when building things like loggers.
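A tiny sketch of that - one log function copes with whatever it is given, no overloads required:

function log(value) {
    // JSON.stringify gives a readable rendering regardless of the value's type
    console.log(new Date().toISOString() + ' : ' + JSON.stringify(value));
}

log('starting up');                       // a string
log(42);                                  // a number
log({name: 'TX Test', active: true});     // an object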

Personally I take the view that code is written for computers but read by humans, so anything that clarifies intent is good. If I don't have strong typing, I'll choose a naming convention that makes the type of my variables clear, and I'll avoid re-using variables to hold different types even if the language allows me to.
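As a closing example of both habits - hypothetical names, but the suffix carries the type information the language won't enforce, and the Date gets its own variable:

let dealDateStr = '2021-12-29';           // the Str suffix signals this always holds a string
let dealDate = new Date(dealDateStr);     // a new variable for the Date, rather than re-using dealDateStr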