Tuesday, 14 September 2021

Dynamic Interactions in Winter 22

Introduction

If all goes according to plan, Dynamic Interactions go GA in Winter 22. The release notes have this to say about them:

With Dynamic Interactions, an event occurring in one component on a Lightning page, such as the user clicking an item in a list view, can update other components on the page.

Which I think is very much underselling it, as I'll attempt to explain in the rest of this post.

Sample App

You can find the code for the sample app at the Github repository. There's not a huge amount to it: you choose an Account, and another component receives the Id and Name of that Account and retrieves the Opportunities associated with it:

Component Decoupling

What Dynamic Interactions actually allow us to do is assemble disparate components into custom user interfaces while retaining full control over the layout. The components can come from different developers, and we can add or remove components that interact with each other without having to update any source code, and without the components having the faintest idea about what else is on the page. 

This is something I've wanted for years, but I was never able to find a solution that didn't require my custom components to know something about what was happening on the page.

The original way of providing a user with components that interact with each other was to create a container component and embed the others inside it. The container knows exactly which components are embedded, and often owns the data that the other components work on. There's no decoupling, and no opportunity to change the layout, as you get the container plus all its children or nothing. 

Lightning Message Service was the previous game changer, and that allowed components to be fairly loosely coupled. They would publish events when something happened to them, and receive events when something happened elsewhere that they needed to act on. They were still coupled through the messages that were sent and received though - any component that wished to participate had to know about the message channels in use, and make sure it subscribed to and published on those. Good luck taking components developed by a third party and dropping those in to enhance your page. It did allow the layout to be easily changed, and components - as long as they knew about the channels and messages - to be added and removed without changing any code. I was planning a blog post on this, but masterful inactivity has once again saved me the trouble of writing it and then having to produce another post recommending against that approach.

With Dynamic Interactions, all that needs to happen is that components publish events when things of interest happen to them, and expose public properties that can be updated when something they should be interested in happens elsewhere - the dream of decoupled components is realised. The components don't have to listen for each other's events; that is handled by the Lightning App Builder page. As the designer of the page, I decide what should happen when a specific component fires a particular event. Essentially I use the App Builder to wire the components to each other, through configuration.

Back to the Sample App

My app consists of two components (no container needed):

  • chooseAccount - this retrieves all the accounts in the system and presents the user with a lightning-combobox so they can pick one. In the screenshot above, it's on the left hand side. When the user chooses an account, an accountselected CustomEvent is fired with the details - all standard LWC:
        this.dispatchEvent(
            new CustomEvent('accountselected', {
                detail: {
                    recordId: this.selectedAccountId,
                    recordName: this.selectedAccountName
                }
            })
        );
  • accountInfo - this retrieves the opportunity information for the recordId that is exposed as a public property, again all standard and, thanks to reactive properties, I don't have to manually take action when the id changes:
    @api get recordId() {
        return this._recordId;    
    }

    set recordId(value) {
        if (value) {
            this._recordId=value;
            this.hasAccount=true;
        }
    }
    
		....
        
    @wire(GetOpportunitiesForAccount, {accountId: '$_recordId'})
    gotOpportunities(result){
        if (result.data) {
            this.opportunities=result.data;
            this.noOpportunitiesFound=(0==this.opportunities.length);
        }
    }

and the final step is to use the Lightning App Builder to define what happens when the accountselected event fires. I edit the page and click on the chooseAccount component, and there's a new tab next to the (lack of) properties that allows me to define interactions for the component - in this case the Account Selected event:


and I can then fill in the details of the interaction:


In this case I'm targeting the accountInfo component and setting its public properties recordId and recordName to their namesakes from the published event. If I had additional components which cared about an account being selected, I'd create additional interactions to change their state to reflect the selection.
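For illustration, here's a minimal sketch of what such an additional component might look like - the name accountActivity is entirely hypothetical and not part of the sample app. All it has to do to participate is expose public properties matching the shape of the event payload; it needs no knowledge of chooseAccount or its events:

    // accountActivity.js - hypothetical component, sketched purely to show the shape required
    import { LightningElement, api } from 'lwc';

    export default class AccountActivity extends LightningElement {
        // public properties that an interaction defined in the App Builder
        // can populate from the accountselected event's detail
        @api recordId;
        @api recordName;
    }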

I now have two components communicating with each other, without either of them knowing anything about the other one, using entirely standard functionality. I can wire up additional components, move them around, or delete components at will.  

Conclusion


I regularly find myself producing highly custom user interfaces that allow multiple records to be managed on a single page. For this use case, Dynamic Interactions are nothing short of a game changer, and I'm certain that this will be my go-to solution. 


Friday, 10 September 2021

JavaScript for Apex Programmers Part 1 - Typing

Background

When I started working with Salesforce way back in 2008, I had a natural affinity for the Apex programming language, as I'd spent the previous decade working with Object Oriented languages - first C++, then 8 years or so with Java. Visualforce was also a very easy transition, as I had spent a lot of time building custom front ends using Java technologies - servlets first before moving on to JavaServer Pages (now Jakarta Server Pages), which had a huge amount of overlap with the Visualforce custom tag approach. 

One area where I didn't have a huge amount of experience was JavaScript. Oddly, I had a few years' experience with server side JavaScript due to maintaining and extending the OpenMarket TRANSACT product, but that was mostly small tweaks added to existing functionality, and nothing that required me to learn much about the language itself, such as it was back then. 

I occasionally used JavaScript in Visualforce to do things like refreshing a record detail from an embedded Visualforce page, Onload Handling or Dojo Charts. All of these had something in common though: they were snippets of JavaScript that were rendered by Visualforce markup, including the data that they operated on. There was no connection with the server, or any kind of business logic worthy of the name - everything was figured out server side. 

Then came JavaScript Remoting, which I used relatively infrequently for pure Visualforce, as I didn't particularly like striping the business logic across the controller and the front end, until the Salesforce1 mobile app came along. Using Visualforce, with its server round trips and re-rendering of large chunks of the page, suddenly felt clunky compared to doing as much as possible on the device, and I was seized with the zeal of the newly converted. I'm pretty sure my JavaScript still looked like Apex code that had been through some automatic translation process, as I was still getting to grips with the JavaScript language, much of which was simply baffling to my server side conditioned eyes. 

It wasn't long before I was looking at jQuery Mobile to produce Single Page Applications where maintaining state is entirely the job of the front end, which quickly led me to Knockout.js as I could use bindings again, rather than having to manually update elements when data changed. This period culminated in my Dreamforce 2013 session on Mobilizing your Visualforce Application with jQuery Mobile and Knockout.js.

Then in 2015, Lightning Components (now Aura Components) came along, where suddenly JavaScript got real. Rather than rendering via Visualforce or including from a static resource, my pages were assembled from re-usable JavaScript components. While Aura didn't exactly encourage its developers down the modern JavaScript route, its successor - Lightning Web Components - certainly did.

All this is rather a lengthy introduction to the purpose of this series of blogs, which is intended to (try to) explain some of the differences and challenges when moving to JavaScript from an Apex background. This isn't a JavaScript tutorial; it's more about what I wish I'd known when I started. It's also based on my experience which, as you can see from above, was a somewhat meandering path. Anyone starting their journey should find it a lot more straightforward now, but there's still plenty there to baffle!

Strong versus Weak (Loose) Typing

The first challenge I encountered with JavaScript was the difference in typing. 

Apex

Apex is a strongly typed language, where every variable is declared with the type of data that it can store, and that cannot change during the life of the variable. 

    Date dealDate;

In the line above, dealDate is declared of type Date, and can only store dates. Attempts to assign it DateTime or Boolean values explicitly will cause compiler errors:

    dealDate=true;         // Illegal assignment from Boolean to Date
    dealDate=System.now(); // Illegal assignment from DateTime to Date

while attempts to assign something that might be a Date, but turns out not to be at runtime will throw an exception:

    Object candidate=System.now();
    dealDate=(Date) candidate; // System.TypeException: Invalid conversion from runtime type Datetime to Date

JavaScript

JavaScript is a weakly typed language, where values have types but variables don't.  You simply declare a variable using var or let, then assign whatever you want to it, changing the type as you need to:

    let dealDate;
    dealDate=2;            // dealDate is now a number
    dealDate='Yesterday';  // dealDate is now a string
    dealDate=new Date();   // dealDate is now a Date object

The JavaScript interpreter assumes that you are happy with the value you have assigned the variable and will use it appropriately. If you use it inappropriately, this will sometimes be picked up at runtime and a TypeError thrown. For example, attempting to run the toUpperCase() string method on a number primitive:

    let val=1;
    val.toUpperCase();  // Uncaught TypeError: val.toUpperCase is not a function

However, as long as the way you are attempting to use the variable is legal, inappropriate usage often just gives you an unexpected result. Take the following, based on a simplified example of something I've done a number of times - I have an array and I want to find the position of the value 3.

    let numArray=[1,2,3,4];
    numArray.indexOf[3];

which returns undefined, rather than the expected position of 2.

Did you spot the error? I used the square bracket notation instead of round brackets to demarcate the parameter. So instead of executing the indexOf function, JavaScript was quite happy to treat the square brackets as a property lookup on the function itself, returning the value of its property named 3 - which doesn't exist, hence the undefined.
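With the round brackets restored, the call behaves as expected:

    let numArray=[1,2,3,4];
    numArray.indexOf(3);   // returns 2 - the position of the value 3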

JavaScript also does a lot more automatic conversion of types, making assumptions that might not be obvious.  To use the + operator as an example, this can mean concatenation for strings or addition for numbers, so there are a few decisions to be made:

lhs + rhs

1. If either of lhs/rhs is an object, it is converted to a primitive string, number or boolean

2. If either of lhs/rhs is a string primitive, the other is converted to a string (if necessary) and they are concatenated

3. Otherwise, lhs and rhs are converted to numbers (if necessary) and they are added

Which sounds perfectly reasonable in theory, but can surprise you in practice:

    1 + true 

result: 2, true is converted to the number 1

    5 + '4'  

result: '54', rhs is a string so 5 is converted to a string and concatenated.

    false + 2

result: 2, false is converted to the number 0

    5 + 3 + '5'

result: '85' - 5 + 3 adds the two numbers to give 8, which is then converted to a string to concatenate with '5'

    [1992, 2015, 2021] + 9

result: '1992,2015,20219' - lhs is an object (array) which is converted to a primitive using the toString method, giving the string '1992,2015,2021', 9 is converted to the string '9' and the two strings are concatenated
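If coercion surprises like these are a concern, making the conversions explicit keeps the intent obvious - a purely illustrative sketch:

    let total=5 + Number('4');                   // 9 - convert to a number before adding
    let label=String(5) + '4';                   // '54' - convert to a string before concatenating
    let years=[1992, 2015, 2021].join(',') + 9;  // '1992,2015,20219' - the array conversion is now deliberate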

Which is Better?

Do you think this is my first rodeo? We can't even agree on what strong and weak typing really mean, so deciding whether one is preferable to the other is an impossible task. In this case it doesn't matter anyway, as Apex and JavaScript aren't going to change!

Strongly typed languages are generally considered safer, especially for beginners, as more errors are trapped at compile time. There may also be some performance benefits as you have made guarantees to the compiler that it can use when applying optimisation, but this is getting harder to quantify and in reality is unlikely to be a major performance factor in any code that you write.

Weakly typed languages are typically more concise,  and the ability to pass any type as a parameter to a function can be really useful when building things like loggers.
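As a purely illustrative sketch (the function and names here are my own, not from any particular library), a simple logger can accept whatever it is given and decide how to render it:

    function log(value) {
        // objects (including arrays) get stringified, everything else is converted to a string
        let output=(typeof value === 'object' && value !== null) ? JSON.stringify(value) : String(value);
        console.log(output);
    }

    log('starting up');             // starting up
    log(42);                        // 42
    log({name: 'Acme', open: 3});   // {"name":"Acme","open":3}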

Personally I take the view that code is written for computers but read by humans, so anything that clarifies intent is good. If I don't have strong typing, I'll choose a naming convention that makes the type of my variables clear, and I'll avoid re-using variables to hold different types even if the language allows me to.




Saturday, 28 August 2021

The certificate associated with the consumer key has expired

This week a couple of my continuous integration builds started failing. This in itself isn't unusual - these are typically end to end builds that create scratch orgs, set up standing data, run a bunch of tests, so it doesn't take much to tip the odd one over.  I didn't find anything helpful about the error message online though, so I'm writing this post so that it will appear as a match for the next person that is trying to find out more!

The error was something I hadn't seen before - "The certificate associated with the consumer key has expired.". Googling didn't bring up much - one person had reported it before, and they had got around it by removing their CLI installation and starting again. Not an option for me, as the CLI setup on my CI machine would take a fair amount of effort to recreate. Time to start digging.

The first place I looked was the CLI itself - I typically don't update this much on my CI machine, as it hasn't been the most stable of tools when new releases land over the last year or so. It seemed entirely plausible that something embedded in the CLI had expired, so I updated everything and waited. Sadly this didn't fix my scratch builds, but it did break one of my static code analysis jobs, as a rule had switched from Java to XPath, and I had references to the Java class in one of my custom configurations. That was a relatively quick fix, so shortly I was no worse off than the day before.

Next was the JWT grant for the org that I'm using as a dev hub for these builds. I was able to query data from the org without any problem, so it didn't seem to be that. Then I tried creating a scratch org and got the same error, so it seemed likely that it was related, but not as obviously as it might have been. 

Once I'd remembered how to access a connected app's configuration, I could see that the self-signed certificate for the app had expired about 9 hours before the build started failing.  Clearly I had found the problem.  

My next thought was that I would have to go through the whole JWT grant again - not something I look forward to, mainly because I don't do it that often and I always remember it being worse than it is. The first thing I needed to do though, was create a new self-signed certificate for the connected app, which I duly did. I was tempted to make the certificate last 10 years (apparently openssl self-signed certs can go out for around 75 years), but that felt like trading security for convenience, which is never a good thing to do. Once I'd updated the cert I decided I'd have a quick go at creating a new scratch org and it worked! No need to generate a new grant, I was off to the races. I then encountered a problem deleting scratch orgs, but this is something I'm also seeing on another machine that is authorised to a different dev hub via web login, so it feels like that is a different issue. I can also work around it with some scheduled Apex, so I'm happy to wait and see if it goes away!




Saturday, 21 August 2021

The Public Nature of Modern Programming

Then

When I got my first programming job, way back in 1987, it was quite a private and solitary occupation. The Design Authority for my project gave me a detailed design for a procedure, including explicit definitions of the interfaces that I would use to interact with the rest of the system. I would then write the code for the procedure, test it locally and check it into version control. If I didn't need any clarification, I wouldn't speak to anyone as part of this process. I'd chat to the colleagues that I shared an office with, but in terms of doing my work it would be me and a screen. 

My code wouldn't be looked at by anyone else unless there was an issue when the next build took place, which there always was, as there were hundreds of new procedures that had never been slammed together before. The build team would take the first stab at fixing the issues, and if they were successful then I wouldn't know anything about this. If they were unsuccessful we would then enter a period of confusion as they typically wanted help with their fix rather than the underlying problem, and I would wonder where this code they were talking about had come from. 

My working life was spent solving problems that others were probably spending time solving, but it would never have occurred to me to share information about what I was doing, nor would I have had the first clue about how to go about sharing it. 

Outside of work, I would almost never talk to anyone about programming unless I was catching up with someone I'd been at college with, but it would be very superficial and typically about which languages and business areas we were working in, before getting down to the serious business of talking about sport. I'd also rarely do any programming outside of work. I might fiddle around a bit with some Basic, Pascal or 6502 assembler on my BBC Model B, but nothing serious. And certainly nothing that I'd consider telling anyone else about.  Some people wrote games and sent the source into magazines that might print it, but as a professional programmer I already got paid to write code so didn't need this outlet. 

Programming was a job like any other - I went to an office, worked there for a set number of hours a day, then went home and did whatever else I was interested in at the time. Unless you were a close friend or family member, you wouldn't have known that I programmed computers for a living, and unless you were a programmer yourself, it was hard for me to communicate exactly what I did every day.

The stereotype of a programmer was someone sitting in a darkened basement, being fed written requirements, and spending their days staring at a screen and typing. Which was pretty accurate.

Now

Programming in 2021 has changed beyond all recognition. It's now a collaborative and very public occupation. Requirements are rarely written down, and if they are it is often the programmers doing that after speaking to "the business" to find out exactly what is needed. Outside of integrating with external systems, integrating with the rest of the code in the project involves collaboration with the other programmers and the interfaces change when it becomes apparent they need to. Code is reviewed before being added to version control and continuous integration flags up problems as soon as they occur. The idea that a team of humans would need to kick off a build and triage the problems seems quaint and incredibly inefficient now. 

For me, the biggest change is that programmers expect, and are expected, to have a public profile. 

When we solve a problem, we write a blog post about it, redacting any detail that might expose which customer this impacted. We cross post on Substack and Medium. There might be a Github repository with the sample code for the blog post - there will almost certainly be other Github repositories to showcase our work, sometimes complete applications that anyone is welcome to copy. 

We also perform for audiences now, as we share what we've been doing at meet ups, conferences (both community and vendor organised), podcasts, webinars and recorded sessions. I'm pretty sure for the first 5-10 years of my career I didn't even show a customer what I'd built for them - it just got rolled into demos and training carried out by others. Now I show complete strangers from around the world and it seems perfectly normal.

Which is Better?

From the point of view of solving problems and producing solutions, it's almost unrecognisable now. If you hit a problem with anything other than bleeding edge technology, it's incredibly likely that someone else has already hit it and fixed it. And blogged about the fix. And spoken at a conference about it. And maybe created a Github repository with all the code you could possibly need going forward. 

The flip side is that programming is now more of a lifestyle choice than a job. There is an expectation that you will be working on side projects, writing posts and presenting at conferences. Often this will be on top of your actual job, which means that your social life is merely an extension of your professional life and often indistinguishable. Going to the office and working a set number of hours is typically only part of the programmer life these days, which can bring a lot of pressure, especially if you don't see it as a calling.  It can also be a full time job keeping up with the latest developments - frameworks rise and fall, platforms come in and out of favour, or the baddies get a new CEO and become the goodies so you grudgingly have to skill up on their products.

Personally I find it a lot better now than it was when I started, but I'm very lucky in that (a) I enjoy what I do, and (b) I can dedicate large chunks of my own time to all the extra-curricular activities without making my personal life difficult. I'm sure nobody wants to go back to the days when the helpful information was siloed or didn't exist, but I'd imagine there are some out there who would quite like a return to the days when programming was something you did in private.



Saturday, 14 August 2021

Salesforce++ Top Picks


Salesforce++ is the new streaming service that marries up the excitement of enterprise software with the creativity and spontaneity of reality TV, showcasing exclusive original content that other services can only dream about. 

We've watched them all so you don't have to, so read on for our pick of the bunch from August's programming.

90 Day Licenseé

In this show, Salesforce Account Executives are paired up with real life prospects and have 90 days to turn them into paying customers.

As we catch up with the couples in August, Malcolm introduces Clare to his family for the first time. Malcolm's sister, Evelyn, questions Clare's intentions, feeling that she is leading Malcolm on to get access to his first call deck.

If you enjoy this show, watch out for 90 Day Licenseé: Happy Ever After? to find out more about previous prospects - did they get the agreement they were looking for, and are they still together with their AE?

American Picklists


Following Salesforce Admins Becky and Taylor from coast to coast as they track down weird and wonderful picklists, and talk to the admins responsible for creating and maintaining them.

August - Becky and Taylor head to rural Iowa to meet solo admin Herman, who works for a local non-profit. While clearing out some old applications that hadn't been opened for years, Herman stumbled across a mint-condition picklist produced for training purposes in 2012, made up of the names of Disney characters. 

Miami ISVs


Drama featuring Sonny Crackit and Rico Tabs, two software engineers based in Miami who spend their weeks helping ISVs fine-tune their AppExchange offerings, and their weekends on sun-drenched beaches.

In this month's episode, Crackit suffers a concussion after crashing his jet-ski and believes himself to be Sonny Crockett from the Miami Vice TV series. Tabs faces a race against time to stop his colleague from blowing the profits from their last engagement by renting a Ferrari Daytona Spyder.

Deadliest Batch


Documentary series following the real-life experience of several teams of hard-bitten Apex developers who make a living writing batch jobs for demanding customers.

In August, there's trouble at the family operations. At Munchausen the Slopestring brothers fall out over whether to use Database.Stateful or write information back to the database at the end of each execute method. Meanwhile, at Winter Cove, Calamity Jane Hitchcoski's development team grind around the clock to fill their record quota before tax season ends.

Say Yes to the Apex


Reality series following events at Grossmeadow Software, where the developers try to find the perfect Apex solutions for a different admin and their entourage every week.

August : Can Team Leader Jackie design the perfect Apex for newly single admin Roberto? Roberto has dreamt about replacing his ageing workflow rules with a stylish Apex solution, and is keen to make it happen this year as a tribute to his step-uncle, who died 15 years before Roberto was born. He'll be deploying the code from Honolulu, so is keen that it has a Hawaiian feel to it.


Film of the Month: Bad Multi-Tenant




While it might not feel like it, streaming services aren't all about reality shows. Salesforce++ features a mix of new and classic movies.

August gets off to a blistering start with a hard hitting classic from the early 90s. Harry Cortez stars as MT, a Salesforce Architect who delights in exploiting loopholes to use more than his fair share of Salesforce resources.  As he closes in on rock bottom, he is given a shot at redemption when he stumbles across a post on Stack Exchange asking for help writing the unit tests for an Apex trigger.

Contains scenes of limit abuse and unbounded SOQL queries.



Sunday, 8 August 2021

Avoiding Returnaggedon


How a company has treated its staff over the last 18 months, and how they treat them over the next few months as traditional workplaces open up, will have a huge impact on their future. Forcing teams back into the office is likely to result in Returnaggedon, where they come back just long enough to hand in their notice.

Now that most legal restrictions in the UK around COVID-19 have been lifted, thoughts are inevitably turning to what happens next in terms of remote work. Many of us are drifting back in for the odd day here and there for meetings, while some CEOs seem determined to revert back to how things were before. Apparently working from home doesn't work for those who want to hustle, and if you can go into a restaurant in New York City, you can come to the office. More often than not though, the impending switch back to the old normal has been promptly pushed back as the delta variant proved to be no respecter of desires or plans - a couple of days ago Amazon were the latest to push their return date back, from September 2021 to January 2022.

Expecting everything to go back to how it was because that's how you liked it seems rather short sighted to me. While many reasons may be given for why everyone needs to be in the same physical location to get their work done, a lot of this will come down to trust, or lack thereof. The pandemic forced many managers to trust that their people were working even when they couldn't stand over them and, while they paid lip service to how well it was all going, that trust really wasn't there. Hence some can't wait to get everyone back in the office where their every move can be watched. Trust begets trust, however, and many employees report not trusting their leadership to manage the return to work safely - last September a survey found only 14% trusted their CEOs and senior leaders to make the correct call, which is pretty shocking. 

It's not just about safety though. People have adjusted to the benefits of part or full time remote working and are reluctant to put the genie back in the bottle. No commute and saving money are the main reasons that most people want to stay remote, and they won't give that up easily, especially if they can avoid it by simply changing jobs. And people are changing jobs. A lot. So much that it's being called the Great Resignation. A Microsoft survey of 30,000 workers around the world found that 41% of them were considering quitting or changing profession. Imagine losing 41% of your workforce - it's hard to see that as anything other than catastrophic, given how long it takes to fully onboard new employees. While most executives say they don't want things to go back to how they were, around 70% of them want people in the office 3+ days a week, so it's clear that many want it to look an awful lot like it did before, with a few bones thrown to pretend they embrace remote work.

In Salesforce world, the competition for talent has always been intense, and somehow gets hotter every year. Companies need to offer people what they are looking for, or they will miss out to those that do. A survey of 1,000 US office workers in May found that 39% would consider quitting if their employer didn't show flexibility around remote working. If you add to that how easily people can walk into another Salesforce job, it's clear that if employees want to continue with this way of working, an accommodation must be found. If plans aren't in place to offer remote/hybrid working going forward, it's going to be a bumpy ride!



Saturday, 31 July 2021

The CLI GUI Plays Favourites

 


Introduction


My last change to the CLI GUI was to add the capability to decode a Salesforce CLI command string and regenerate the command page for it. This was the first step towards favourites functionality, so that I could save frequently used commands and quickly re-run them. As usual, it wasn't quite as straightforward as that, but this afternoon I pushed an update to the repository to add support for favourites.

A Few of My Favourite Things


If you are already using the CLI GUI, you'll need to run npm install, as I'm using a different mechanism to convert the command string into its component parameters - string-argv. Previously I had a very complicated regular expression that I found online, but it didn't handle string parameters containing spaces, or MacOS directories, too well.
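By way of illustration, this is roughly the kind of parsing string-argv takes care of - a minimal sketch rather than the GUI's actual code, and the command string is just an example:

    const { parseArgsStringToArgv } = require('string-argv');

    // quoted parameters containing spaces survive the split as single entries
    let args=parseArgsStringToArgv('sfdx force:org:open -u "My Scratch Org"');
    console.log(args);
    // [ 'sfdx', 'force:org:open', '-u', 'My Scratch Org' ]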

The first change you'll notice when the GUI starts is the new datalist and a couple of buttons. The datalist allows you to select from the favourites you've saved, and you can either Open the command window with the favourite decoded, or run it immediately. As an aside, running a favourite that does something destructive or that can't be undone (like deleting a scratch org) is a pretty dangerous thing to do, so you'll be asked to confirm it. I really only use this for opening orgs without having to go via the command window:


Obviously there won't be anything in the datalist yet, as you'll need to create a favourite or two first.

Favourites are added and removed from the command window, which now has a section at the bottom of the page for this. Once you have set the various parameters, give it a name and click the Save Favourite button:


Note that the directory you are working in is saved with the favourite.

Returning back to the main screen, the new favourite is now available in the datalist, and selecting it enables the buttons:


Clicking the Open button opens the command window with the favourite details, and a button to remove it:



Clicking the Run button first asks you to confirm you really want to do it:



If you choose to continue, the command will immediately execute in the directory you were in when you saved the favourite, and the output is shown in a similar modal to that of the command window:



Right now there's no mechanism to update a favourite - you have to remove the existing one and then save the updated command with the same name. Yes it's a couple more clicks, but think of how grateful you'll be if I do add this capability.

The latest code is in the Github repository, and this week I'm back to the usual test environments of MacOS and Windows 10.  I haven't flogged it to within an inch of its life though, so I'm sure if you look hard enough you'll find something that doesn't work. If you do, feel free to raise an issue.
