Friday, 15 March 2019

O Trailhead! myTrailhead!

Introduction

After a fairly lengthy wait (it was announced at Dreamforce 17 and 18), myTrailhead officially launched on March 6th 2019. Simply put, it allows you to provide many of the Trailhead features that you know and love (minus some, plus some others) within your organisation, to create your own continuous learning system. Trails, modules, badges and points are all there.

The main addition is you can add items such as corporate presentations and documents to a trail, so you can re-use existing collateral without having to migrate it to Trailhead quiz format. You can also combine standard Salesforce Trailhead modules into your own trails, so there’s no need to reinvent the wheel.

The big minus is that (I think) the automated checking of code/configuration assignments isn't (currently) available. While I've seen references to Trail Checker, as far as I can tell this is more of a manual check than the kind of automation we are familiar with from Superbadges. I can't say I'm surprised by this - turning a verification system of this nature into a product is no small undertaking - but it would be a very powerful addition for those of us regularly onboarding admins and developers to maintain our existing Salesforce applications.

Familiarity Goes a Long Way

The familiar styling should help adoption. Trailhead is extremely widely used by Salesforce admins, devs, and users (and many of their children, for some reason) and a UI/UX that they are already intimately familiar with will immediately remove some barriers. I’m also pretty sure that the chance to earn more badges and points will be jumped on with alacrity.

Content is King

myTrailhead also includes content management - modules and trails are combined into releases that can be published (accessible to your users) or unpublished (the next iteration, currently being created or reviewed), giving fine grained control on when users can access content and ensuring that it is fit for purpose. As mentioned earlier, you can link to existing Trailhead content so you have a head start in terms of material that isn’t specific to your business.

That is the content management mechanism provided by myTrailhead. Creating the content, however, is a task that is all yours, and one that should absolutely not be underestimated. Maintaining a library of engaging training content, and ensuring it is kept up to date when underlying processes or applications change, can be a huge task. I can't imagine how long the Trailhead content team spend on this for a new Salesforce release, but one thing I can tell you is that they don't ask the developers to do it in their spare time - so if you want a system that will be used, be prepared to invest in a content team of your own. By the same token, if your library consists of half a dozen badges from a couple of years ago, don't be surprised if nobody is interested in earning them.

Licensing

Up until now I’ve been broadly positive about this, which might come as a surprise to regular readers of this blog, and here comes the negativity. I think Salesforce have made a bit of a mistake with the licensing.

myTrailhead is a $25/user/month add-on. While this isn’t cheap, that isn’t where my beef lies. Instead it’s the fact that it’s an add-on to an Enterprise Sales/Service/Platform license. So if I’m part of a large company (I’m not) and I want to use myTrailhead for staff that aren’t Salesforce users, I have to buy them a Salesforce license they will never use and then add myTrailhead on top of it. And of course I won’t do that, as it makes the cost prohibitive. Instead I’ll do it all on another platform or run two training systems, one for Salesforce users and one for everyone else, which will cause confusion. Or I’ll buy 10 licenses and continually cycle them around users when new training content becomes available.

A standalone license including myTrailhead and Chatter, for example, would make much more sense from a purchasing perspective and allow me to train all my users through a single system. Hopefully the current approach is just being used to throttle uptake in the short term and we'll see more licensing options in the future; otherwise I fear that it will only achieve a fraction of its potential.

Monday, 21 January 2019

Unsaved Changes in Spring 19

Introduction

As I mentioned in my last post, it's often the little things in a release that make all the difference. Another small change in the Spring 19 release is the lightning:unsavedChanges aura component (yes, aura - remember that in Spring 19 Lightning Web Components are GA and Lightning Components are renamed Aura Components). The release notes are somewhat understated - "Notify the UI about unsaved changes in your component" - but what this means is that I can retire a bunch of code/components that I've written in the past to stop the user losing their hard earned changes to a record. Even better, I'm wiring into the standard Lightning Experience behaviour, so I don't need to worry if Salesforce change this behaviour - I'll just pick it up automatically.

Example

My example component is a simple form with a couple of inputs - first and last name:

<aura:component implements="flexipage:availableForAllPageTypes">
    <aura:attribute name="firstname" type="String" />
    <aura:attribute name="lastname" type="String" />
    <aura:handler name="change" value="{!v.firstname}" action="{!c.valueChanged}"/>
    <aura:handler name="change" value="{!v.lastname}" action="{!c.valueChanged}"/>

    <lightning:card variant="narrow" title="Unsaved Example">
        <div class="slds-p-around_medium">
            <lightning:input type="text" label="First Name" value="{!v.firstname}" />
            <lightning:input type="text" label="Last Name" value="{!v.lastname}" />
            <lightning:button variant="brand" label="Save" title="Save" onclick="{! c.save }" />
        </div>
    </lightning:card>

    <lightning:unsavedChanges aura:id="unsaved"
                onsave="{!c.save}"
                ondiscard="{!c.discard}" /> 
</aura:component>

I have change handlers on the first name and last name attributes, so that I can notify the container that there are unsaved changes. I achieve this via the embedded lightning:unsavedChanges component, which exposes a method named setUnsavedChanges. In my change handler I invoke this method:

valueChanged : function(component, event, helper) {
    var unsaved = component.find("unsaved");
    unsaved.setUnsavedChanges(true, { label: 'Unsaved Example' });
}

So now if I add this component to a Lightning App page, enter some text and then try to click on another tab, I get a nice popup telling me that I may be making a big mistake. It also displays the label that I passed to the setUnsavedChanges method, so that if there are multiple forms on the page the user can easily identify which one I am referring to.

[Screenshot: the unsaved changes popup, showing the 'Unsaved Example' label]

Of course, my Evil Co-Worker immediately spotted that they could pass the label of a different component and keep the user in a loop where they believe they have saved the changes but keep getting the popup. If I'm honest I expected something a bit more evil from them, like calling a method that clears the unsaved changes without saving the record when the user clicks the Save button - clearly they are getting lazy as well as Evil in their old age.

What’s even nicer is that if I click the Discard Changes or Save buttons, my controller methods to discard or save the record are invoked, because I specified these on the component:

<lightning:unsavedChanges ... onsave="{!c.save}" ondiscard="{!c.discard}" /> 

so I have full control over what happens.
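
For reference, here's a minimal sketch of what those controller methods might look like - the record handling itself is elided, but the key point is telling the container the changes are safe again via setUnsavedChanges(false):

save : function(component, event, helper) {
    // persist the record here (e.g. via a server-side Apex action),
    // then clear the unsaved state so the container stops prompting
    component.find("unsaved").setUnsavedChanges(false);
},

discard : function(component, event, helper) {
    // throw away the in-progress edits and clear the unsaved state
    component.set("v.firstname", "");
    component.set("v.lastname", "");
    component.find("unsaved").setUnsavedChanges(false);
}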

This kind of interaction with the user is possible when I click on a link that is managed by the Lightning Experience, but not so much if I reload the page or navigate to a bookmark. In this scenario I’m at the mercy of the beforeunload event, which is browser specific and much more limited.

[Screenshot: the browser's generic leave-page confirmation dialog]

This behaviour can't be customised - the idea being that you can't hold a user on your page against their will, regardless of whether you are doing it for their own good.
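
For the avoidance of doubt, this is about as far as the standard mechanism goes - you can ask the browser to show its generic confirmation dialog, but any custom message you supply is ignored by modern browsers:

window.addEventListener('beforeunload', function(event) {
    // triggers the browser's built-in "leave page?" dialog;
    // the returnValue text itself is not displayed by modern browsers
    event.preventDefault();
    event.returnValue = '';
});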

On another note, when I created the original Lightning App page for this I used a single column, but the inputs then spanned the whole page. Thanks to the Spring 19 feature of changing Lightning page templates, I was able to switch to header, body and right sidebar in a few seconds, rather than having to create a whole new page. I sense that feature is the gift that keeps on giving.

Saturday, 12 January 2019

Change Lightning Page Template in Spring 19

Introduction

Sometimes it’s the little things that make all the difference, and in the Spring 19 release of Salesforce there’s a small change that will save me a bunch of time - support for changing the template of a Lightning Page. I seem to spend a huge amount of time recreating Lightning pages once I add a component that needs more room than is available. Often this is historic Bob's fault, because I’ve written a custom component that needs full width, but when I originally created the page I didn’t foresee this and chose a template that is now obviously unsuitable.

Changing Template

Here’s my demo page - as you can see the dashboard is a bit squashed as I’ve gone for main region and sidebar:

[Screenshot: the demo page, with the dashboard squashed into the main region next to the sidebar]

Changing template is pretty simple - in the Lightning App builder (it can’t be long before this is renamed the Lightning Page builder can it?) edit the page and find the new option on the right hand side:

[Screenshot: the change template option in the Lightning App Builder]

Clicking this opens the change template dialog - initially, much like creating a page, I get to choose from a list. I've gone for header and right sidebar so that my dashboard can take the full width header section:

[Screenshot: the change template dialog, with header and right sidebar selected]

The next page is a departure from previous experience, in that I need to tell the dialog how to map the components from the old template to the new. I've left the defaults in place, aside from the main region (which contains my dashboard), which I map to the new header region:

[Screenshot: mapping components from the old template regions to the new]

Once I've saved the page, I can view it and see the dashboard in its full width glory:

[Screenshot: the dashboard at full width in the header region]

Deploying Changes

While it's great that we can do this through the UI, if I change the template of a Lightning Page in a sandbox I'd like to be able to deploy it via the Metadata API to production. When I tried this initially I had my API version set to 44.0 in my package.xml, which meant that I wasn't hitting the Spring 19 Metadata API, but the Winter 19 version, which doesn't support template changes. Once I updated that, it all worked fine. If you are wondering how I can make such an obvious mistake, allow me to point you at this post, which explains everything.
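
If you want to check this in your own project, the version lives at the bottom of package.xml - here's a minimal example that retrieves Lightning Pages (FlexiPage is the underlying metadata type) at the Spring 19 API version:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>FlexiPage</name>
    </types>
    <!-- 45.0 is Spring 19 - 44.0 and earlier won't carry template changes -->
    <version>45.0</version>
</Package>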

Saturday, 1 December 2018

Apex Snippets in VS Code

Introduction

As I’ve written about before, I switched over to using Microsoft Visual Studio Code as soon as I could figure out how to wire the Salesforce CLI into it for metadata deployment. I’m still happy with my decision, although I do wish that more features were also supported when working in the metadata API format rather than the new source format. When using an IDE with plugins for a specific technology, such as the Salesforce Extension Pack, it’s easy to focus on what the plugins provide and forget to look at the standard features. Easy for me anyway. 

Snippets

Per the docs, "Code snippets are small blocks of reusable code that can be inserted in a code file using a context menu command or a combination of hotkeys". What this means in practice is that if I type in 'try', I'll get offered the following snippets, which I can move between with the up and down arrow (cursor) keys:

[Screenshot: the snippets offered after typing 'try']

Note the information icon on the first entry - clicking this shows me the documentation for each snippet and what will be inserted if I choose the snippet. This is particularly important if I install an extension containing snippets from the marketplace that turns out to have been authored by my Evil Co-Worker - I can see exactly what nefarious code would be inserted before I continue:

[Screenshot: snippet documentation displayed via the information icon]

Which is pretty cool - with the amount of code I write, saving a few keystrokes here and there can really add up.

User Defined Snippets

The good news is that you aren't limited to the snippets provided by the IDE or plugins - creating user defined snippets is, well, a snip (come on, you knew I was going there). On MacOS you access the Code -> Preferences -> User Snippets menu option:

[Screenshot: the User Snippets menu option]

and choose the language - Apex in my case - and start creating your own snippet.

Example

Here’s an example from my apex snippets:

	"SObject_List": {
		"prefix": "List<",
		"body": [
			"List<${1:object}> ${2:soList}=new List<${1}>();"
		],
		"description":"List of sObjects"
	}

Breaking up the JSON:

  • The name of my snippet is "SObject_List".
  • The prefix is "List<" - this is the sequence of characters that will activate offering the snippet to the user.
  • The body of the snippet is "List<${1:object}> ${2:soList}=new List<${1}>();"
    • ${1} and ${2} are tabstops, which are cursor locations. When the user chooses the snippet, their cursor will initially be placed at ${1} so they can enter a value, then they hit tab and move to ${2} etc.
    • ${1:object} is a tabstop with a placeholder - in this case the first tabstop will contain the value "object" for the user to change.
    • If you use the same tabstop in several places, when the user updates the first instance this will change all other references (there's a concrete example of this after the list).
  • Description is the detail that will be displayed if the user has hit the information icon.
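
To make the mirrored tabstop behaviour concrete, here's a second snippet along the same lines - a hypothetical one, not from my actual collection. ${1} and ${2} each appear twice in the body, so typing the key and value types once fills in both ends of the declaration:

	"SObject_Map": {
		"prefix": "Map<",
		"body": [
			"Map<${1:Id}, ${2:object}> ${3:soMap}=new Map<${1}, ${2}>();"
		],
		"description": "Map of sObjects keyed by Id"
	}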

The following video shows the snippet in use while creating an Apex class - note how I only type ‘Contact’ once, but both instances of ‘object’ get changed.

Nice eh? And all done via a configuration file.

Sunday, 18 November 2018

Wrapping the Salesforce CLI

Introduction

(This post is aimed at beginners to the command line and scripting - all of the examples in this post are for MacOS)

Anyone who has read my blog or attended my recent talks at the London Salesforce Developers or Dreamforce will know I’m a big fan of the Salesforce CLI. I use it for pretty much everything I do around metadata and testing on Salesforce, and increasingly for things that don’t involve directly interacting with Salesforce. I think everyone should use it, but I also realise that not everyone is that comfortable with the command line, especially if their career didn’t start with it!

The range of commands and number of options can be daunting - for example, to deploy local metadata to production and execute local tests, waiting for up to 2 minutes for the command to complete:

sfdx force:mdapi:deploy -d src -w 2 -u keir.bowden@sfdx.deploy -l RunLocalTests

If you are a developer chances are you’ll be executing commands like this fairly regularly, but for infrequent use, it’s quite a bit to remember. If you have colleagues that need to do this, consider creating a wrapper script so that they don’t have to care about the detail.

Wrapper Script

A wrapper script typically encapsulates a command and a set of parameters, simplifying the invocation and easing the burden of remembering the correct sequence. A wrapper script for the Salesforce CLI can be written in any language that supports executing commands - I find that the easiest to get started with is bash, as it's built in to MacOS.

A Bash wrapper script to simplify the deploy command is as follows:

#!/bin/bash

echo "Deploying to production as user $1"

sfdx force:mdapi:deploy -d src -w 2 -u $1 -l RunLocalTests

Taking the script a line at a time:

#!/bin/bash

This is known as a shebang - it tells the operating system to execute the script using the '/bin/bash' command, passing the script itself as a parameter.

echo "Deploying to production as user $1"

This outputs a message to the user, telling them the action that is about to be taken. The '$' character accesses the positional parameters, or arguments, passed to the script. '$0' is set to the name that the script is executed with, '$1' is the first argument, '$2' the second and so on. The script expects the 'targetusername' for the deployment to be passed as the first argument, and wastes no time checking whether this is the case or not :)

sfdx force:mdapi:deploy -d src -w 2 -u $1 -l RunLocalTests

This executes the embedded Salesforce CLI command, once again accessing the parameter at position 1 via '$1' to set the 'targetusername' of the command.
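
As the script currently trusts the caller completely, a sensible hardening step would be a guard at the top - a minimal sketch:

#!/bin/bash

# fail fast if no target username was supplied
if [ -z "$1" ]
then
    echo "Usage: $0 <targetusername>"
    exit 1
fi

echo "Deploying to production as user $1"

sfdx force:mdapi:deploy -d src -w 2 -u "$1" -l RunLocalTests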

Executing the Wrapper Script

The script assumes it is being executed from the project directory (the parent directory of src), so I’ve put it there, named as ‘deployprod’.

To execute the script, I type './deployprod' in the terminal - the './' prefix simply tells the shell that the command is located in the current directory.

Attempting to execute the script after I create it shows there is still some work to do:

> ./deployprod keir.bowden@sfdx.deploy
-bash: ./deployprod: Permission denied

In order to allow the wrapper script to be executed as a command, I need to make it executable, via the chmod command:

> chmod +x deployprod

Re-running the command then produces the expected output:

> ./deployprod keir.bowden@sfdx.deploy

Deploying to production as user keir.bowden@sfdx.deploy
2884 bytes written to /var/folders/tn/q5mzq6n53blbszymdmtqkflc0000gs/T/src.zip using 60.938ms
Deploying /var/folders/tn/q5mzq6n53blbszymdmtqkflc0000gs/T/src.zip...

etc

So in future, when the user wants to deploy to production, they simply type:

./deployprod keir.bowden@sfdx.deploy

rather than

sfdx force:mdapi:deploy -d src -w 2 -u keir.bowden@sfdx.deploy -l RunLocalTests

which is a lot less for them to remember and removes any chance that they might specify the wrong value for the -d or -l switches.

Of course there is always the chance that my Evil Co-Worker will update the script for nefarious purposes (to carry out destructive changes, for example) and the user will be none the wiser unless they inspect the output in detail (which they never will unless they see an error, trust me), but the risk of this can be mitigated by teaching users good security practices around allowing access to their machines. And reminding them regularly of the presence of the many other evil people that they don't happen to work with.

All Bash, all the time?

I created the original command line tools on top of the Force CLI for deployment of our BrightMedia appcelerator using Bash, and it allowed us to get big quick. However, there were a couple of issues:

  1. Only I knew how to write bash scripts
  2. Bash isn’t the greatest language for carrying out complex business logic
  3. When the Salesforce CLI came along I wanted to parse the resulting JSON output, and that is quite a struggle in Bash.

Point 1 may or may not be an issue in your organisation, though I’d wager that you won’t find a lot of bash scripting experience outside of devops teams these days. Points 2 and 3 are more important - if you think you’ll be doing more than simple commands (or blogs about those simple commands!) then my advice would be to write your scripts in Node JS. You’ll need to be comfortable writing JavaScript, and you have to do a bit more in terms of installation, but in the long run you’ll be able to accomplish a lot more.
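
To give a flavour of why Node works well here, most Salesforce CLI commands take a --json switch, and consuming the output becomes trivial - a minimal sketch (the exact shape of the response varies between commands and CLI versions):

const { execSync } = require('child_process');

// run the CLI with --json and parse the structured response
const output = execSync('sfdx force:org:list --json', { encoding: 'utf8' });
const response = JSON.parse(output);

// the result is now a plain object we can interrogate with regular JavaScript
console.log(JSON.stringify(response.result, null, 2));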

Bash does allow you to get somewhat complex quite quickly, so you’ll likely be some way down the wrong path when you realise it - don’t be tempted to press on. The attitude that “we’ve invested so much in doing the wrong thing that we have to see it through” never pays off!

 

Monday, 5 November 2018

Situational Awareness

Introduction

This is the eighth post in my 'Building my own Learning System' series, in which I finally get to implement one of the features that started me down this path in the first place - the 'Available Training' component.

The available training component has situational awareness to allow it to do its job properly. Per Wikipedia, situational awareness comprises three elements:

  • Perception of the elements in the environment - who the user is and where they are in an application
  • Comprehension of the situation - what they are trying to do and what training they have taken
  • Projection of future status - if there is more training available they will be able to do a better job

Thus rather than telling the user that there is training available, regardless of whether they have already completed it, this component tells the user that there is training, how much of it they have completed, and gives them a simple way to continue. Push versus pull if you will.

Training System V2.0

Some aspects were already in place in V1 of the training system - I know who the user is based on their email address, for example. However, I only knew what training they had taken for the current endpoint, so some work was required there. Polling all the endpoints to find out information seemed like something that could be useful to the user, which led to sidebar search. Sidebar search allows the user to enter terms and search for paths or topics containing those terms across all endpoints:

[Screenshot: sidebar search for paths and topics]

Crucially the search doesn't just pull back the matching paths - it also includes information about the progress of the currently logged in user for those paths. In this case, like so much of my life, I've achieved nothing:

[Screenshot: search results showing no progress on the matching paths]

In order to use this search, the available training component needs to know what the user is trying to do within the app. It’s a bit of a stretch for it to work this out itself, so it takes a topic attribute. 

I also want the user to have a simple mechanism to access the training that they haven't taken, so the available training component also takes an attribute containing the URL of a page hosting the training single page application.

Examples

Hopefully everyone read that heading in the voice of Jules from Pulp Fiction, otherwise the fact that there's only a single example means it doesn't make a lot of sense.

An example of using the new component is from my BrightMedia dev org. It’s configured in the order booking page as follows:

<c:TrainingAvailable modalVisible="{!v.showTraining}" 
    trainingSPAURL="https://bgmctest-dev-ed.lightning.force.com/lightning/n/Training" 
    topic="Order Booking" />

where trainingSPAURL is the location of the single page application.
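
The modalVisible attribute is bound to showTraining on the hosting page, so all that page needs is a controller action to flip it - a minimal sketch (the handler name is my own):

toggleTraining : function(component, event, helper) {
    // flip the attribute that the TrainingAvailable component's
    // modalVisible attribute is bound to
    component.set("v.showTraining", !component.get("v.showTraining"));
}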

Also in my booking page I have an info button (far right): 

[Screenshot: the order booking page, with the info button on the far right]

clicking this toggles the training available modal:

[Screenshot: the training available modal]

 

This shows that there are a couple of paths that I haven't started. Clicking the 'Open Training' button takes me to the training page with the relevant paths pre-loaded:

[Screenshot: the training page with the relevant paths pre-loaded]

Saturday, 13 October 2018

All Governor Limits are Created Equal

Introduction

Not all Salesforce governor limits inspire the same fear in developers. Top of the pile are DML statements and SOQL queries, followed closely by heap size, while the rest are relegated to afterthought status, typically only thought about when they are breached. There's a good reason for this - the limits that we wrestle with most often bubble to the top of our thoughts when designing a solution. Those that we rarely hit get scant attention, probably because on some level we assume that if we aren't breaching these limits regularly, we must have some kind of superpower to write code that is inherently defensive against them. Rather than what it probably is - either the limit is very generous, or dumb luck.

This can lead to code being written that is skewed to defending against a couple of limits, but will actually struggle to scale due to the lack of consideration for all limits. To set expectations, the example that follows is contrived - a real world example would require a lot more code and shift the focus away from the point I'm trying to make.

The Example

For some reason, I want to create two lists in my Apex class - one that contains all leads in the system where the last name starts with the letter ‘A’ and another list containing all the rest. Because I’m scared of burning SOQL queries, I query all the leads and process the results:

List<Lead> leads=[select id, LastName from Lead];
List<Lead> a=new List<Lead>();
List<Lead> btoz=new List<Lead>();
for (Lead ld : leads)
{
    String lastNameChar1=ld.LastName.toLowerCase().substring(0,1);
    if (lastNameChar1=='a')
    {
        a.add(ld);
    }
    else 
    {
        btoz.add(ld);
    }
}

System.debug('A size = ' + a.size());
System.debug('btoz size = ' + btoz.size());

The output of this shows that I’ve succeeded in my mission of hoarding as many SOQL queries as I can for later processing:

[Screenshot: debug log showing 1% of SOQL queries and 7% of CPU time consumed]

But look towards the bottom of the screenshot - while I've only used 1% of my SOQL queries, I've gone through 7% of my CPU time limit. Depending on what else needs to happen in this transaction, I might have created a problem now or in the future. But I don't care, as I've satisfied my requirement of minimising SOQL queries.

If I fixate a bit less on the SOQL query limit, I can create the same lists in a couple of lines of code but using an additional query:

List<Lead> a=[select id, LastName from Lead where LastName like 'a%'];
List<Lead> btoz=[select id, LastName from Lead where ID not in :a];
System.debug('A size = ' + a.size());
System.debug('btoz size = ' + btoz.size());

Because the CPU time limit doesn't include time spent in the database, I don't consume any of that limit:

[Screenshot: debug log showing the extra SOQL query but no CPU time consumed]

Of course I have consumed another SOQL query though, which might create a problem now or in the future. There’s obviously a trade-off here and fixating on minimising the CPU impact and ignoring the impact of additional SOQL queries is equally likely to cause problems.
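
If you want to see these trade-offs for yourself while experimenting, the standard Apex Limits class will show consumption at any point in the transaction - for example:

// snapshot consumption before the interesting work
Integer queriesBefore = Limits.getQueries();
Integer cpuBefore = Limits.getCpuTime();

List<Lead> a = [select Id, LastName from Lead where LastName like 'a%'];
List<Lead> btoz = [select Id, LastName from Lead where Id not in :a];

System.debug('Queries used: ' + (Limits.getQueries() - queriesBefore) +
             ' of ' + Limits.getLimitQueries());
System.debug('CPU used: ' + (Limits.getCpuTime() - cpuBefore) +
             'ms of ' + Limits.getLimitCpuTime() + 'ms');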

Conclusion

When designing solutions, take all limits into account. Try out various approaches and see what the impact of the trade-offs is, and use multiple runs with the same data to figure out the impact on the CPU limit, as my experience is that this can vary quite a bit. There’s no silver bullet when it comes to limits, but spreading the load across all of them should help to stretch what you can achieve in a single transaction. Obviously this means the design time takes a bit longer, but there’s an old saying that programming is thinking not typing, and I find this to be particularly true when creating Salesforce applications that have to perform and scale for years. The more time you spend thinking, the less time you’ll spend trying to fix production bugs when you hit the volumes that everybody was convinced would never happen.