Sunday, 5 July 2020

Passing the Salesforce JavaScript Developer 1 Certification



This week (2nd July 2020) I passed the new (at the time of writing) JavaScript Developer 1 Salesforce certification. This was somewhat different to other certs and will definitely get you out of your comfort zone - if you are a JavaScript dev, then you'll have your work cut out with the Superbadge, while if you are a Salesforce dev, the exam will make you realise how much there is to JavaScript!

Format

Like Platform Developer 2, this certification requires the completion of an exam and Trailhead badges. I took the beta exam - 150 questions over 3 hours and you don't get the section breakdown, just a pass or fail. An added challenge was I took my exam a few days before the UK went into complete lockdown, so I wasn't exactly at my most relaxed travelling to the testing centre. Luckily when I arrived at Stratford, instead of the usual huge crowds of commuters and shoppers at 9am, I pretty much had the place to myself!



The other downside to a beta is that nothing is known about the cert in advance - in this case I didn't know the exam would be pure JavaScript and not cover any Salesforce features. Once I received my beta results it was clear why this was the case - a new Lightning Web Components Specialist superbadge was now part of the certification requirements - a lovely surprise!

Preparation

As with a number of certs that I've taken in the past, a lot of the preparation had actually taken place over the previous months and years in my regular work and side projects. Aura Components, Lightning Web Components, Salesforce CLI plug-ins, node scripts to orchestrate operations and Electron apps all had an element of JavaScript, and some, like Electron, were nothing but. So for me it wasn't so much learning JavaScript, but more about brushing up on the areas that I hadn't used for a while and making sure I was up on the latest best practice.



The most useful preparation was doing - building command line tools in node or GUI applications in Electron (node again) proved invaluable. I learned about modules to reuse code, asynchronous techniques to interact with the filesystem and other system commands, and context (aka this) because it is fundamental to the language and without understanding how it works you are in a world of pain!
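
As an illustration (my own snippet, not exam content), this is the kind of context behaviour that's worth having completely internalised before you sit the exam:

const counter = {
    count: 0,
    incrementLater() {
        // the arrow function keeps the context of incrementLater, so this
        // refers to counter - a regular function here would get its own this
        setTimeout(() => {
            this.count++;
            console.log('count is now ' + this.count);
        }, 100);
    }
};

counter.incrementLater(); // logs "count is now 1"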

My advice would be to read a book / take a course on Electron - it's a really enjoyable framework to develop against which makes it easy to stay focused.

Focus Areas

Looking back over my study notes, these are the areas that I focused on for the exam - as I took the beta exam and it was several months ago, your mileage may vary if you just cover these!

  • Variable naming rules and differences in strict mode
  • Data types - primitives, composites and specials
  • Operators and the effect of null/undefined/NaN
  • Scope, execution context, and the principle of least access - this is incredibly important in JavaScript, so spend time on it!
  • Context (and the fact that execution context is different!)
  • Explicit and implicit type conversion and what happens when the conversion fails
  • Strings - worth spending some time on this as there are a number of methods that do similar things (and sometimes have similar names!) with subtle differences
  • Functions - another area to spend time on - functions are first class citizens in JavaScript so can be used in different ways than in a language like Apex. Make sure you understand higher order functions and function expressions, and don't forget ES6 arrow functions!
  • Objects, arrays, maps and how to iterate/enumerate properties and values
  • DOM - worth spending a lot of time on this as it will stand you in good stead when writing LWC
  • Modules - import and export, importing individual items or as an object, re-exporting from submodules and dynamic module loading
  • Events - again, worth spending time on as you'll need this when developing JavaScript
  • Error handling
  • Asynchronous techniques - callbacks, Promises and async/await. Read up on Promises as they have more methods than .then() and .catch()!
  • Debugging
  • Testing JavaScript - I've done a lot of work with Jasmine and Mocha in the past, so my study here was around the limitations of testing in JavaScript and the boundaries between stubs/spies/mocks.
  • Decorators, iterators, generators - I found generators the hardest to wrap my head around.
  • Changing default context with apply/call/bind
  • Double bang (!!) - something I hadn't come across before (see the snippet after this list)
  • Array methods - map/reduce/filter/some/every/find
  • Transpilers
  • How a newline doesn't always equal a semi-colon
  • Switch statements and equality
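
A quick illustration of a couple of the items above (my own example, not taken from the exam) - the double bang and changing context with call/bind:

// double bang coerces any value to its boolean equivalent
console.log(!!'hello');   // true
console.log(!!0);         // false

// changing the default context with call and bind
function describe(prefix) {
    return prefix + ': ' + this.name;
}

const cert = { name: 'JavaScript Developer I' };
console.log(describe.call(cert, 'Exam'));   // "Exam: JavaScript Developer I"

const boundDescribe = describe.bind(cert);
console.log(boundDescribe('Superbadge'));   // "Superbadge: JavaScript Developer I"
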
Other Tips

The usual tips for certs apply here - don't panic if you don't know the right answer - see if you can eliminate some wrong answers to narrow things down, and make sure to mark any that you are unsure about so you can check back later. You won't know everything, so don't get despondent if you get a few wrong in a row.

Related Posts




Wednesday, 10 June 2020

Generating PDFs with a Salesforce CLI Plug-In



Introduction


Yeah, I know, another week another plug-in. Unfortunately this is likely to continue for some time, as my  plug-ins are only really limited by my imagination. I think this one is pretty cool though and I hope, dear reader, you agree.

At Dreamforce 2019, Salesforce debuted Evergreen - a technology to allow Heroku microservices to be invoked from Apex (and other places) seamlessly. One of the Evergreen use cases is creating a PDF from Salesforce data, and one of the strap lines around this is that with Evergreen you have access to the entire Node package ecosystem. Every time I heard this I'd think that's true, but CLI plug-ins have the same access, as they are built on node, and like Evergreen there is no additional authentication required.

Essentially this is the opposite of Evergreen - rather than invoking something that lives on another server that returns a PDF file, I invoke a local command that writes a file to the local file system. I toyed with the idea of calling this plug-in Deciduous, but decided that was a bit of a lengthy name to type, and I'd have to explain it a lot! (and yes I know that Evergreen is a lot more than creating a PDF, but I liked the image so decided to go with it).

The plug-in is available on NPM - you can install it by executing : sfdx plugins:install bbpdf

EJS Templates


There are a couple of ways to create PDFs with node - using something like PDFKit to add the individual elements to the document, or my preferred route - generating the document from an HTML page. And if I'm generating HTML from Salesforce data or metadata, I'm going to be using EJS templates.

I'm not going to go into too much detail in this post about the creation of the HTML via EJS, but if you want more information check out my blog post on how I used it in my bbdoc plug-in to create HTML documents from Salesforce metadata.  

My plug-in requires two flags to specify the template. The name of the template itself, specified via the --template/-t flag, and the templates directory, specified via the --template-dir/-d flag. The reason I need the directory is that any css or images required by the template will be specified relative to the template's location. You can see this working by cloning the samples repo and trying it out.

Salesforce Data


My plug-in provides two mechanisms to retrieve data from Salesforce.

Query Parameter


This is for the simple use case where the template requires a single record. The query to retrieve the record is supplied via the --query/-q flag.

Querying Salesforce data from a plug-in is quite straightforward - I specify that I need a username, which guarantees me access to an org:

protected static requiresUsername = true;

then I create a connection:

const conn = this.org.getConnection();

and finally I run the query, based on the supplied parameter,  and check the results:

const result = await conn.query<object>(this.flags.query);

if (!result.records || result.records.length <= 0) {
   ...
}

Once I have the record, I need to pass the Salesforce data to the template in an object, under the property name expected by the template - this name is provided by the user through the --sobject/-s flag.
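
A minimal sketch of that wiring (not the actual plug-in source - the variable names are illustrative and the values would come from the flags):

import { renderFile } from 'ejs';
import { join } from 'path';

// values taken from the --sobject, --template-dir and --template flags
const sobjectName = 'contact';
const templateDir = 'templates';
const templateName = 'contact.ejs';

// expose the single queried record to the template under the property name
// the template expects, e.g. <%= contact.FirstName %>
const data = {};
data[sobjectName] = result.records[0];   // result is the query result from the snippet above

renderFile(join(templateDir, templateName), data, (err, html) => {
    // html is then fed into the PDF generation step described below
});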

This is a bit clunky though, so here's the preferred route.

Query File


A query file is specified via the --query-file/-f flag.  This is a JSON format file containing an object with a property per query:

{
    "contact": {
        "single": true,
        "query": "select Title, FirstName, LastName from Contact where id='00380000023TUDeAAO'"
    },
    "account": {
        "single": true,
        "query": "select id, Name from Account where id='0018000000eqTV7AAM'"
    }
}

The property name is the name that will be supplied to the EJS template - in this example there will be two properties passed to the template - contact and account.  The single property specifies whether the query will result in a single record or an array, and the query property specifies the actual SOQL query that will be executed. Using this mechanism, any number of records or arrays of records can be passed through to the template.
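
A sketch of how the query file might be processed (an assumed implementation inside the command's async run method, rather than the real plug-in code):

import { readFileSync } from 'fs';

const queries = JSON.parse(readFileSync(queryFile, 'utf-8'));   // queryFile from the --query-file flag
const data = {};

for (const [name, entry] of Object.entries(queries)) {
    // each property becomes a template property of the same name
    const result = await conn.query(entry.query);   // conn from this.org.getConnection()
    data[name] = entry.single ? result.records[0] : result.records;
}

// data.contact and data.account are then passed to the EJS template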

Generating the PDF


In a recurring theme in this post, there are a couple of ways to generate the PDF from the HTML. The simplest in terms of setup and code is to use the html-pdf package, which is a wrapper around PhantomJS. While simple, this is sub-optimal - the package hasn't been updated for a while, development on Phantom is suspended and when I added this to my plug-in, npm reported a critical vulnerability, so clearly another way would be required.

The favoured mechanism on the interwebs was to use Puppeteer (headless chromium node api), so I set about this. After some hoop jumping around creating temporary directories and copying the templates and associated files around, it wasn't actually too bad. I loaded the HTML from the file that I'd created and then asked for a PDF version of it:

const page = await browser.newPage();

await page.goto('file:///' + htmlFile, {waitUntil: 'networkidle0'});
const pdf = await page.pdf({ format: 'A4' });

One area where I did go off-piste slightly was not using the Chromium bundled with Puppeteer - instead I rely on the user defining an environment variable pointing at their local installation of Chrome, and I use that. Mainly because I didn't want my plug-in to have to download 200-300 MB before it could be used. This may cause problems down the line, so caveat emptor.

The environment variable in question is PUPPETEER_EXECUTABLE_PATH. Here are the definitions from my Macbook Pro running MacOS Catalina:

export PUPPETEER_EXECUTABLE_PATH="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"

and my Surface Pro running Windows 10:

setx PUPPETEER_EXECUTABLE_PATH "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"

The PDF is then written to the location specified by the --output/-o flag.
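
Putting those pieces together, a minimal sketch of the Puppeteer code (simplified - the real plug-in adds the temporary directory handling and error checking mentioned above):

import * as puppeteer from 'puppeteer';
import { writeFileSync } from 'fs';

const browser = await puppeteer.launch({
    // use the locally installed Chrome rather than the bundled Chromium
    executablePath: process.env.PUPPETEER_EXECUTABLE_PATH
});

const page = await browser.newPage();
await page.goto('file:///' + htmlFile, {waitUntil: 'networkidle0'});
const pdf = await page.pdf({ format: 'A4' });

writeFileSync(outputFile, pdf);   // outputFile comes from the --output flag
await browser.close();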

The next hoop to jump through came up when I installed the plug-in - Puppeteer is currently at 3.3.0 which requires Node 10.18.1 but the Salesforce CLI node version is 10.15.3. Luckily I was able to switch to Puppeteer version 2.1.1 without introducing any further issues.

Samples Repo




There's a lot of information in this post and a number of flags to wrap your head around - if you'd like to jump straight in with working examples, check out the samples repo. This has a couple of example commands to try out - just make sure you've set your chrome location correctly - and the PDFs that were generated when I ran the plug-in against my dev orgs.

Related Posts





Saturday, 30 May 2020

Documentor Plugin - Triggers (and Bootstrap)



Introduction

In previous posts about documenting my org with a Salesforce CLI plugin the focus has been on extracting and processing the object metadata. Now that area has been well and truly covered, it's time to move on. The next target in my sights is triggers - not for any particular reason, just that the real projects that I use this on have triggers that I want to show in my docs!

Trigger Metadata

Trigger metadata is split across two files - the .trigger file, which contains the actual Apex trigger code, and the .trigger-meta.xml file, which contains the supporting information, such as whether it is active and its API version. The .trigger file itself contains the object that it is acting on and the action(s) that will cause it to fire:

trigger Book_ai on Book__c (before insert) {

}

Luckily the syntax is rigid - the object comes after the on keyword and the actions appear in brackets after that, so just using standard JavaScript string methods I can locate the boundaries and pull out the information that I want - and not just to display it.
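
Something along these lines (a simplified sketch rather than the actual plug-in code):

import { readFileSync } from 'fs';

const body = readFileSync(triggerFile, 'utf-8');   // contents of the .trigger file

const onPos = body.indexOf(' on ');
const openBracket = body.indexOf('(', onPos);
const closeBracket = body.indexOf(')', openBracket);

const objectName = body.substring(onPos + 4, openBracket).trim();
const actions = body.substring(openBracket + 1, closeBracket)
                    .split(',')
                    .map(action => action.trim());

// for the Book_ai example above: objectName = 'Book__c', actions = ['before insert']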

Checking for Duplicates

As we all know, when there is more than one trigger for an object and action, the order they will fire in is not guaranteed. If you are a gambling aficionado then you may find this enjoyable, but most of us prefer a dull, predictable life when it comes to our business automation.

Once I've processed the trigger bodies and extracted the object names and actions, one of the really nice side-effects is that I can easily figure out if there are any duplicates and I can call these out on the triggers page (I've updated the sample metadata to add duplicate triggers - they don't actually do anything so wouldn't really be an issue anyway!) :
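
A sketch of how that duplicate check might work (an assumed implementation) - key the triggers by object and action, and any key with more than one entry is a duplicate to call out:

const triggersByObjectAndAction = {};

for (const trigger of triggers) {               // triggers parsed as above
    for (const action of trigger.actions) {
        const key = trigger.objectName + ':' + action;
        (triggersByObjectAndAction[key] = triggersByObjectAndAction[key] || []).push(trigger.name);
    }
}

const duplicates = Object.entries(triggersByObjectAndAction)
                         .filter(([, names]) => names.length > 1);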

Bootstrap

Those that have been following this series of posts may notice that the styling looks a little better now than in earlier versions - this is because I've moved over to the Bootstrap library, pulled in via the CDN so I don't need to worry about including it in my plug-in. 

I've also updated the main page to use cards with badges to make it clear if there are any problems with the processed metadata :



Which I think we can all agree looks a lot better. You can see the latest version of the HTML generated from the sample metadata at https://bbdoc-example.herokuapp.com/index.html

I've also moved some duplicated markup out to an EJS include, which is as easy as I hoped it would be. The markup goes into a file with a .ejs suffix like any other - here's the footer from my pages:

<div class="pt-3"></div>
<footer class="footer">
    <div class="container">
        <div class="row justify-content-between">
            <div class="col-6">
                <p class="text-left"><small class="text-muted">Generated <%=footer.generatedDate%></small>
                </p>
            </div>
            <div class="col-6">
                <p class="text-right"><small class="text-muted">Bob Buzzard's Org Documentor
                        v<%=footer.version%></small></p>
            </div>
        </div>
    </div>
</footer>

And I can include this just by specifying the path:

<%- include ('../common/footer') %>

Note that I've put the footer in a common subdirectory that is a sibling of where the template is stored. Note also that I've used the <%- tag, which says to include the raw html output from the include. Finally, note that I don't pass anything to the footer - it has access to exactly the same properties as the page that includes it.

As usual, I've tested the plug-in against the sample metadata repo on MacOS and Windows 10. You can find the latest version (3.3.1) on NPMJS at the link shown below.

Related Posts



 

Saturday, 23 May 2020

Going GUI over the Salesforce CLI Part 3

Introduction


In part 1 of this series I introduced my Salesforce CLI GUI with some basic commands.
In part 2 I covered some of the Electron specifics and added a couple of commands around listing and retrieving log files.
In this instalment I'll look at how a command is constructed - the configuration of the parameters and how these are specified by the user.

Command Configuration


As mentioned in part 1, the commands are configured via the app/shared/commands.js file. This defines the groupings and the commands that belong in those groupings, which are used to generate the tabs and tiles that allow the user to execute commands. For example, the debug commands added in the last update have the following configuration (detail reduced otherwise this post will be huge!) :

{
    name : 'debug',
    label: 'Debugging',
    commands : [
        {
            name: 'listlogs',
            label: 'List Logs',
            icon: 'file',
              ///////// detail removed //////////
        },
        {
            name: 'getlog',
            label: 'Get Log File',
            icon: 'file',
              ///////// detail removed //////////       
        }
    ]
}

which equates to a tab labelled Debugging and two command tiles - List Logs and Get Log File :


clicking on a command tile opens up a new window to allow the command to be defined, and here is where the rest of the configuration detail comes into play :


the screenshot above is from the List Logs command, which has the following configuration (in its entirety):

{
    name: 'listlogs',
    label: 'List Logs',
    icon: 'file',
    startMessage: 'Retrieving log file list',
    completeMessage: 'Log file list retrieved',
    command: 'sfdx',
    subcommand: 'force:apex:log:list',
    instructions: 'Choose the org that you wish to extract the log file details for and click the \'List\' button',
    executelabel: 'List',
    refreshConfig: false,
    refreshOrgs: false,
    json: true,
    type: 'brand',
    resultprocessor: 'loglist',
    polling: {
        supported: false
    },
    overview : 'Lists all the available log files for a specific org.',
    params : [
        {
            name : 'username',
            label: 'Username',
            type: 'org',
            default: false,
            variant: 'all',
            flag: '-u'
        }
    ]        
}

the attributes are used in the GUI as follows:

  • name - unique name for the command, used internally to generate ids and as a map key
  • label - the user friendly label displayed in the tile
  • icon - the SLDS icon displayed at the top left of the command page
  • startMessage - the message displayed in the log modal when the command starts
  • completeMessage - the message displayed in the log modal when the command successfully completes
  • command - the system command to be run to execute the command - all of the examples thus far use sfdx 
  • subcommand - the subcommand of the executable
  • instructions - the text displayed in the instructions panel below the header. This is specific to defining the command rather than providing help about the underlying sfdx command
  • executeLabel - the user friendly label on the button that executes the command
  • refreshConfig - should the cached configuration (default user, default dev hub user) be refreshed after running this command - this is set to true if the command changes configuration
  • refreshOrgs - should the cached organisations be updated after running this command - set to true if the command adds an org (create scratch org, login to org) or removes one (delete scratch org, logout of org)
  • json - does the command support JSON output
  • type - the type of styling to use on the command tile
  • resultProcessor - for commands that produce specific output that must be post-processed before displaying to the user, this defines the type of processor
  • polling - is there a mechanism for polling the status of the command while it is running
  • overview - text displayed in the overview panel of the help page for this command
  • params - the parameters supported by the command.

Parameters


In this case there is only one parameter, but it's a really important one - the username that will define which org to retrieve the list of logs from. This was a really important feature for me - I tend to work in multiple orgs on a daily (hourly!) basis, so I didn't want to have to keep running commands to switch between them. This parameter allows me to choose from my cached list of orgs to connect to, and has the following attributes:

  • name - unique name for the parameter
  • label  - the label for the input field that will capture the choice
  • type - the type of the parameter - in this case the username for the org connection
  • variant - which variant of orgs should be shown :
    •  hub - dev hubs only
    • scratch - scratch orgs only
    • all - all orgs
  • default - should the default username or devhubusername be selected by default
  • flag - the flag that will be passed to the sfdx command with the selected value

Constructing the Parameter Input


As I work across a large number of orgs, and typically a number of personas within those orgs, the username input is implemented as a datalist - a dropdown that I can also type in to reduce the available options - here's what happens if I limit to my logins :


as the page is constructed, the parameter is converted to an input by generating the datalist entry and then adding the scratch/hub org options as required:

const datalist=document.createElement('datalist');
datalist.id=param.name+'-list';

for (let org of orgs.nonScratchOrgs) {
    if ( (('hub'!=param.variant) || (org.isDevHub)) && ('scratch' != param.variant) ) {
        addOrgOption(datalist, org, false);
    }
}

if ('hub' != param.variant) {
    for (let org of orgs.scratchOrgs) {
         addOrgOption(datalist, org, true);
    }                    
}

formEles.contEle.appendChild(datalist);

The code that generates the option pulls the appropriate information from the cached org details to generate a label that will be useful:

let label;
if (org.alias) {
    label=org.alias + ' (' + org.username + ')';
}
else {
    label=org.username;
}

if (scratch) {
    label += ' [SCRATCH]';
}
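
The pieces not shown above look something like this (a sketch based on how datalists work, rather than the actual GUI source) - the label goes into an option appended to the datalist, and the input is tied to the datalist via its list attribute:

const option = document.createElement('option');
option.value = org.username;        // the value that ends up being passed via the -u flag
option.label = label;
datalist.appendChild(option);

// elsewhere, the input for the parameter references the datalist by id
const input = document.createElement('input');
input.setAttribute('list', datalist.id);
formEles.contEle.appendChild(input);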

and this feels like a reasonable place to end this post - in the next part I'll show how the command gets executed and the results processed.

Related Posts






Friday, 15 May 2020

Documenting from the metadata source with a Salesforce CLI Plug-In - Part 5


Introduction

In Part 1 of this series, I explained how to generate a plugin and clone the example command.
In Part 2 I covered finding and loading the package.json manifest and my custom configuration file.
In Part 3, I explained how to load and process source format metadata files.
In Part 4, I showed how to enrich fields by pulling in information stored outside of the field definition file.

In this week's exciting episode, I'll put it all together and generate the HTML output. This has been through several iterations :

  • When I first came up with the idea, I did what I usually do which is hardcode some very basic HTML in the source code, as at this point I'm rarely sure things are going to work.
  • Prior to my first presentation of the plug-in, I decided to move the HTML out to files that were easily changed, but still as fragments. This made showing the code easier, but meant I had to write a fair bit of boilerplate code - it was all I really had time for, though.
  • Once I'd done the first round of talks, I had the time to revisit and move over to a template framework, which was always going to be the correct solution. This was the major update to the plug-in I referred to in the last post.
Template Framework

I'd been using EJS in YASP (Yet Another Side Project) and liked how easy it was to integrate. EJS stands for Embedded JavaScript templates, and it's well named - you embed plain old JavaScript into HTML markup and call a function, passing the template and the data that needs to be processed by the JavaScript. If you've written Visualforce pages or email templates in the past, this pattern is very familiar (although you have to set up all of the data that you need prior to passing it to the generator function - it won't figure out what is needed based on what you've used like Visualforce will!).

A snippet from my HTML page that lists the groups of objects:

<% content.groups.forEach(function(group){ %>
    <tr>
        <td><%= group.title %></td>
        <td><a href="<%= group.link %>">Click to View</a></td>
    </tr>
<% }); %>

and as long as I pass the function that generates the HTML an object with a property of content, that matches the expected structure, I'm golden.

To use EJS in my plug-in, I install the node module:

npm install --save ejs

(Note that I'm using the --save option, as otherwise this won't be identified as a dependency and thus won't be added to the plug-in. Exactly what happened when I created the initial version and installed it on my Windows machine!)

I add the following line to my Typescript file to import the function that renders the HTML from the template:

import { renderFile } from 'ejs';

and I can then call renderFile(template, data) to asynchronously generate the HTML.

Preparing the Data

As Salesforce CLI plug-ins are written in Typescript by default, it's straightforward to keep track of the structure of your objects as you can strongly type them. Continuing with the example of my groups, the data for these is stored in an object with the following structure:

interface ObjectsContent {
    counter: number;
    groups: Array<ObjectGroupContent>;
    footer?: object;
}

Rather than pass this directly, I store this as the content of a more generic object that includes some standard information, like the plug-in version and the date the pages were generated:
let data={
             content: content,
             footer: 
             {
                 generatedDate : this.generatedDate,
                 version : this.config.version
             }
         };  

Rendering the HTML

Once I have the data in the format I need, I execute the rendering function passing the template filename - note that as this is asynchronous I have to return a promise to my calling function:

return new Promise((resolve, reject) => {
    renderFile(templateFile, data, (err, html) => {
        if (err) {
            reject(err);
        }
        else {
            resolve(html);
        }
    })
})  

and my calling function loops through the groups and gets the rendered HTML when the promise resolves, which it writes to the report directory:
this.generator.generateHTML(join('objects', 'objects.ejs'), this.content)
.then(html => {
    writeFileSync(this.indexFile, html);    
});

Moar Metadata

The latest version of the plugin generates HTML markup for an object's Validation Rules and Record types - the example repo of Salesforce metadata has been updated to include these and the newly generated HTML is available on Heroku at : https://bbdoc-example.herokuapp.com/index.html


Related Posts







Tuesday, 12 May 2020

Summer 20 After Save Flows and CPU Time


Introduction

(Be honest, you knew this one was coming)

Summer 20 sees more #LowCodeLove with the introduction of flows that run after records are saved. After the fun and games when I wrote a post comparing before save flow and apex trigger performance, and then a follow up detailing what I thought was actually happening, I was keen to get my hands on this to see if anything was different. Because I love arguing with people on twitter. It seriously went on for days last time.

Example Flow

My example flow was once again refreshingly simple - when an opportunity is inserted, update a field on the associated account called Most Recent Opportunity with the ID of the new record - thus it had to be an after save flow as the record ID needed to be populated. For the purposes of the testing I also assumed the account lookup would always be populated on the opportunity.

As before, I turned the log level way down, ran my anonymous apex to insert one thousand opportunities spread across five accounts and then delete them, and checked the log file.

If what I was looking for was more confusion and perhaps controversy, I wasn't disappointed:


As this wasn't my first rodeo, I then put the insert/delete of the thousand records inside a loop and kept increasing the iterations - starting with 1, just in case that changed things.
  • No loop (scenario above) - 0
  • 1 x - 0 
  • 2 x - 6869
  • 3 x - CPU limit exceeded - 19696
So unless we get the first thousand or so iterations for free, the next thousand are expensive and the thousand after that hugely expensive - I'd say the CPU logging is equally off for after save flows.

I then added a single line of apex to the end of the anonymously executed code:
Long newTime=Limits.getCPUTime();
This changed things quite dramatically:
  • No loop - 6865
  • 1 x - 7115
  • 2 x - CPU time exceeded - 15779
Now this could be interpreted as meaning that getting the CPU time consumed to date via the Limits.getCPUTime method is incredibly expensive, but adding these calls inside the various loops I used to insert the data gave an increase of a couple of milliseconds, so that can be safely excluded.

Conclusion

Nothing I've seen in this latest experiment has changed my view that CPU is being consumed in flow automation, but it only gets tracked when it is accessed. There is an additional wrinkle though, in that I do appear to be able to squeeze more work in if I don't cause the CPU to be tracked - I can successfully complete the two thousand inserts in this scenario, rather than breaching the CPU time quite spectacularly when I do track it.

This makes sense to me, as CPU isn't a hard limit - you can spike past it as long as the Salesforce infrastructure has capacity to handle it, so there is some checking around the CPU used and if this is awry  the transaction will be allowed to proceed a little further than it otherwise might. It could also be coincidence, as the size of spikes allowed varies constantly, and I may be very close to succeeding in the cases where I breach, but I reran my tests a few times with much the same results.

Of course I don't really think I'm squeezing more work in - I'm taking advantage of what looks like a gap in the CPU tracking under certain circumstances. It would be a courageous decision (from Yes, Minister - a brave decision will cost you a few votes, a courageous decision will lose you the election!) to put something into production that relied on it - a single line of Apex added somewhere in the transaction would blow it out of the water, and I'm sure that at some point Salesforce will rectify this, and hopefully some of the other issues around CPU monitoring!

I'd also really love to see some of the logging that Salesforce captures internally - conjecture is good fun, but it would be nice to understand what is really happening.

Related Posts








Sunday, 10 May 2020

Lightning Web Components and Flows



Introduction


This week (8th May 2020, for anyone reading this in a different week!) saw the Salesforce Low Code Love online event,  showcasing low code tools and customer stories. This was very timely for me, as I'd finally found some time to try out embedding Lightning Web Components in Lightning Flows.

I don't spend a lot of time writing flows - not because I don't want to, but because it's not particularly good use of my time, which is better spent on architecture, design and writing JavaScript or Apex code. Some developers don't like working on flows, and I suspect there are a couple of reasons:

  1. It slows them down - they find that expressing complex logic in a visual tool requires a lot of small steps that could be expressed far more efficiently in code. In this case I'd suggest that a step backwards is required to decide if flow is the best tool to write that particular piece of logic in - which is not saying the whole flow should be discarded, but maybe this aspect should be treated as a micro-service and moved to another technology. In much the same way that Evergreen will allow us to move processing that is better suited to another platform outside of Salesforce - dip out of flow for the complex work that is unlikely to change, leaving the simpler steps that need regular tweaking.

  2. They can't be unit tested in isolation. This is probably my biggest peeve - if I write some Apex code to automate a business process, I can unit test it with a multitude of different inputs and configuration, and easily repeat those tests when I make a change. While I can include auto-launched flows in my unit test, they are mixed in with the other automation that is present and so may experience side effects, while screen flows are entirely manual so I'd need to use a tool like ProVar to create automated UI tests. I get that it's tricky, as screen flows span multiple transactions, but it feels like something that needs to be solved if low-code tools are to gain real currency with IT departments.
    The old maxim that the further away from the developer you find a bug, the more expensive it is to fix still holds true.

Scenario

The first thing I should say is that Lightning Web Components can only be embedded in screen flows, so for headless flows you'll be using Apex actions.  In my example my web component simply displays a message and automatically navigates without any user interaction, so would be a candidate for an Apex action, but I quite like the idea of being able to interact with the user if I need to, and I was also curious if there were any issues with my approach.

Drawing on my Building my own Learning System work, I decided to implement a simple quiz where the user is presented with a number of questions that need to be answered via a radio button, for single answer, or checkbox group, where multiple selections are required. Once the user has made their selection(s), this is compared with the correct answers and their score updated accordingly. Once all questions have been answered, the user is told how they got on.

The Flow

Building the flow was very straightforward to begin with - getting the questions from the database, looping through them, updating a counter so I could show the question and a screen to ask the question, with conditionally rendered inputs depending on the question type:



There was some behind the scenes work with variables to create the choices for the radiobutton/checkbox groups, but nothing untoward.  When marking the answers however, things got a lot more tricky. I had to retrieve and iterate the answers, identify the type of the user's selection (which could be the radio button or checkbox output from the screen), check the answer against their selection and then update their score appropriately. There's probably a few minor variations on this, but expressing all of those steps visually ended up with quite a complex looking flow:


Once you have multiple decisions inside a loop, the sheer number of connectors makes it difficult to lay out the flow nicely, and adding custom boolean logic to a decision quickly gets ugly - in this case I'm checking if the user failed on this answer by not choosing an answer that is correct or choosing an answer that is incorrect:





One of the advantages of building functionality in flow is that it is easier to change, but understanding what is going on here and making changes would not be simple and there would be a fair amount of retesting required. This kind of thing is the worst of both worlds - it slows development, limits the capabilities but isn't easy to rework.

With this many small moving parts to carry out some processing that is highly unlikely to change that often, flow seems like the wrong tool to create the marking functionality in. I could move some of it into a sub-flow, but that feels like trying to hide how much the flow is doing rather than genuine functional decomposition.

Embedded Lightning Component

So I decided to replace the marking aspect with a Lightning Web Component - this will take the question and the various inputs that a user can supply and figure out whether they answered correctly. It will return true or false into an output variable.

To embed a Lightning Web Component in a flow, it needs to specify lightning__FlowScreen as a target in the -meta.xml file:

<targets>
    <target>lightning__FlowScreen</target>
</targets>

and to interact with the flow via input/output variables, these must be declared in the target config stanza for the lightning__FlowScreen target:

<targetConfigs>
    <targetConfig targets="lightning__FlowScreen">
        <property name="questionId" type="String"  /> 
        <property name="radioChosen" type="String" />
        <property name="selectChosen" type="String" />
        <property name="correct" type="Boolean" />
    </targetConfig>
</targetConfigs>

I have three input properties :

  • questionId - the id of the question that has been answered
  • radioChosen - the selected value for a radio button question 
  • selectChosen - the selected value for a checkbox group question 
And one output property:
  • correct - did the user answer the question correctly

As well as detailing the properties in the metadata file, I need appropriate public functions in the component JavaScript class. For input properties I need a public getter:

@api
get questionId() 
{
    return this._questionId;
}

while for output properties I need a public setter:

@api
set correct(value)
{
    this._correct=value;
}

I also need to fire an event to tell flow that the value of the output parameter has changed, which I'll cover in a moment.

When my input properties are set, I need to mark the question, but I have no control over the order that they are set in. I'm marking via an Apex method so I don't want to call that when I've only received the question id.  Each of my setters calls the function to mark the question, but before doing anything this checks that I've received the question id and one of the checkbox or radio button option sets:

markQuestion()
{
    if ( (this._questionId) && 
         (this._radioChosen || this._selectChosen) )
    {
        MarkQuestionApex({questionId: this._questionId, 
                          answers: (this._radioChosen?this._radioChosen:this._selectChosen)})

This invokes the following Apex method

@AuraEnabled
public static Boolean MarkQuestion(String questionId, String answers)
{
    Question__c question=[select id, 
                           (select id, Name, Correct__c from Answers__r)
                          from Question__c
                          where id=:questionId];

    Boolean correct=true;
    for (Answer__c answer : question.Answers__r)
    {
        if ( ((!answer.Correct__c) && (answers.contains(answer.Name))) ||
             ((answer.Correct__c) && (!answers.contains(answer.Name))) )
        {
            correct=false;
        }
    }

    return correct;
}

Now I know I'm biased as I like writing code, but this seems a lot easier to understand - small things like retrieving the question and its related answers in a single operation, and writing boolean logic in a single expression rather than combining conditions defined elsewhere, make a big difference. I can also write unit tests for this method and give it a thorough workout before committing any changes.

My function from my Lightning Web Component receives the result via a promise and fires the event to tell flow that the value of the correct property has been changed:

MarkQuestionApex({questionId: this._questionId, 
                  answers: (this._radioChosen?this._radioChosen:this._selectChosen)})
.then(result => {
    const attributeChangeEvent = new FlowAttributeChangeEvent('correct', result);
    this.dispatchEvent(attributeChangeEvent);

    const navigateNextEvent = new FlowNavigationNextEvent();
    this.dispatchEvent(navigateNextEvent);
})

It also fires the event to navigate to the next screen (the next question or the done screen) as I don't need anything from the user.

I add this to my flow and wire up the properties via a screen component - my markQuestion component appears in the custom list, and I specify the inputs and outputs as I would for any other flow component:



Embedding this in my flow simplifies things quite a bit:


Now any admin coming to customise this flow can easily see what is going on - the question marking that doesn't change is encapsulated in a single screen component and they can easily add more screens - to display the running total for example, or cut the test short if the user gets too many questions wrong. Note that as this is another screen, the user will be taken to it when they click 'Next' - I display a 'Marking question' message, but if you don't want this then an Apex Action would be a better choice.

You can find the metadata components at my Low Code Github repository, but if you want to see the two flows in action, here's a video:




Related Posts





Saturday, 2 May 2020

Speaker Academy is back - and it's twice as good!


We're Back!

A few years ago, Jodi Wagner and I started up a community effort called Speaker Academy, after we noticed that most of the events that we attended had the same faces speaking each time. As I was one of those faces I didn't necessarily see it as a bad thing, but I know it's possible to have too much of a good thing!

We figured that a lot of people liked the general idea of raising their profile and sharing their experience, but hated the thought of being up on stage in front of an audience. That's pretty much how I felt about it when I started public speaking many years ago - I could see all the upsides and very much wanted those, but the potential pitfalls seemed overwhelming. I got past this by forcing myself to speak whenever the opportunity arose, and reading lots of articles about conquering your fear of failure, but it wasn't the smoothest road to travel.

So we offered candidates the chance to take advantage of our experience through a multi-week course, with a mix of presentations, hands on exercises, and dry runs of real talks with feedback. Our candidates used to graduate by presenting at a special meetup of the London Salesforce Developers, but since we went online and are pulling in people from Europe and beyond, we now ask candidates to speak at an event of their choosing and send us some pictures to prove it happened!

We've run a couple of iterations online, and each time had to rework the curriculum a little. It turns out people are much more comfortable sat in their own home in front of a screen, so we had to figure out new ways to take them out of their comfort zone. The expectations of event organisers also changed, so we had to increase our focus on things like writing an abstract.

Double Trouble!




We also ended up on different continents when Jodi moved back to the US, which made the logistics more challenging. For the last round, Jodi very kindly gave up an hour of her morning to fit around us starting just after work in the UK, but it was clear to us both that this was a short term solution.

We decided to split our teaching efforts up - we'd each find new co-hosts and run the program in parallel in Europe and the US, and I'm very pleased to announce that my co-host for the next iteration run out of the UK is Julia Doctoroff. Julia is well known in the Salesforce community and is a graduate of the very first round of Speaker Academy, and I'm excited to be facilitating with her.

The Next Round

Our next iteration of Speaker Academy Europe will be starting in the next couple of weeks - if you are interested in being a part of it, fill out the interest form to let us know. This requires a commitment to attend a weekly class of an hour or so and involves homework, so please don't sign up if you can't commit. If you drop out half way through you don't get to graduate, you've blocked someone else who could have made better use of the opportunity, and it's unlikely we'll let you back in to a future iteration.

As we give 1-1 feedback and mentoring, we are limited in terms of the number that we can accept per iteration, but if you don't make this one you will move up the list for future iterations and eventually you will get lucky!



 






Sunday, 26 April 2020

Documenting from the metadata source with a Salesforce CLI Plug-In - Part 4


Introduction


In Part 1 of this series, I explained how to generate a plugin and clone the example command.
In Part 2 I covered finding and loading the package.json manifest and my custom configuration file.
In Part 3, I explained how to load and process source format metadata files.

In this episode I'll show how to enrich the field information by pulling in information from elsewhere in the metadata - specifically global or standard value sets. When I'm using a global value set for picklist values, all I have in my field XML metadata is a reference to the set:

<valueSet>
    <restricted>true</restricted>
    <valueSetName>Genre</valueSetName>
</valueSet>

and the actual values live in the (in this case) Genre.globalValueSet-meta.xml file in the globalValueSets folder.

When I create the HTML report for the object metadata, I want to pull these values in, otherwise whoever is viewing the report has to go into the Salesforce org to figure out just which Genres are available.

Figuring the Value Set Type


Fields using standard value sets don't actually specify the value set name - instead it has to be derived from the field using the information in the metadata API documentation. I try to find a standard value set matching the field, and if none exists then I fall back to loading the named global value set. The actual loading is the same, and should be familiar to anyone who read part 2 - I use fast-xml-parser to load and parse the XML - however, as multiple fields can refer to the same value set, I cache the parsed metadata:

let getGlobalValueSet = (dir, name) => {
    let valueSet=globalValueSetByName[name];
    if (null==valueSet) {
        globalValueSetByName[name] = valueSet = 
             parseXMLToJS(join(dir, 'globalValueSets', name + 
                                    '.globalValueSet-meta.xml'));
    }

    return valueSet;
}
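
The dispatch between the two types looks something like this (a sketch with assumed helper names - standardValueSetName and getStandardValueSet aren't the real function names from the plug-in):

let getValueSetForField = (dir, objectName, field) => {
    // derive the standard value set name from the object/field, per the
    // mapping in the Metadata API documentation
    const standardName = standardValueSetName(objectName, field.fullName);
    if (standardName) {
        return getStandardValueSet(dir, standardName);
    }
    // no standard value set - fall back to the named global value set
    return getGlobalValueSet(dir, field.valueSet.valueSetName);
}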

I then assign this to a new property on the field named gvs. As I'm processing the metadata in an offline mode, if I can't find the global value set, the property will be null.

Adding the Information to the Field


When processing the field, I expand the values from the value set by iterating them and extracting the names - note that if the gvs property is not populated then I output a message that it isn't in version control:


var vsName=field.valueSet.valueSetName;
result+='<b>Global Value Set (' + vsName +')</b><br/>';
if (field.gvs) {
   field.gvs.GlobalValueSet.customValue.
      forEach(item => result+='&nbsp;&nbsp;' + item.fullName + '<br/>');
}
else {
   result+='Not version controlled';
}

This is a relatively simple piece of enrichment, just pulling some information from another XML metadata file and inserting that in the row for the field. However, it doesn't have to be metadata that is included - it can be additional information, a diagram, anything that can be written out to HTML - quite powerful once you realise that.


One More Thing


The version of the plugin is now at 2.0.3. From the user perspective nothing should change, but the mechanism for generating the report is completely different, which I'll write more about in the next exciting episode.


One More One More Thing



You can view the output for the example repo of Salesforce metadata on Heroku at : https://bbdoc-example.herokuapp.com/index.html

Related Posts




 



Saturday, 18 April 2020

Documenting from the metadata source with a Salesforce CLI Plug-In - Part 3


Introduction

In Part 1 of this series, I explained how to generate a plugin and clone the example command. In Part 2 I covered finding and loading the package.json manifest and my custom configuration file.

In this instalment, I'll show how to load and process source format metadata files - a key requirement when documenting from the metadata source, I'm sure you'll agree.

Walking Directories

In order to process the metadata files, I have to be able to find and iterate them. As I'm processing object files, I also need to process the fields subdirectory that contains the metadata for the custom fields that I'm including in my document. I'll use a number of standard functions provided by Node in order to achieve this.

I know the top level folder name for the source format data, as it is the parameter of the -s (--source-dir) switch passed to my plugin command. To simplify the sample code, I'm pretending I know that the custom objects are in the objects subfolder, which isn't a bad guess, but in the actual code this information is pulled from the configuration file. To generate the combined full pathname, I use the standard path.join function, as this figures out which operating system I'm running on and uses the correct separator:

import { join } from 'path';
...
let objSourceDir=join(sourceDir, 'objects');

I then check that this path exists and is indeed a directory - I use the standard fs.lstatSync to check that the path is a directory, but a homegrown function to check that the path exists:

import { lstatSync} from 'fs';
import { fileExists } from '../../shared/files';
 ...
if ( (fileExists(objSourceDir)) && 
     (lstatSync(objSourceDir).isDirectory()) ) {
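
The fileExists function isn't shown in this post - a minimal sketch of what such a helper might look like (an assumption, not the actual shared/files code):

import { accessSync } from 'fs';

export let fileExists = (path) => {
    try {
        accessSync(path);
        return true;
    }
    catch (err) {
        return false;
    }
}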

Once I know the path is there and is a directory, I can read the contents, again using a standard function - fs.readdirSync - and iterate them :

import { appendFileSync, mkdirSync, readdirSync } from 'fs';
  ...
let entries=readdirSync(objSourceDir);

for (let idx=0, len=entries.length; idx<len; idx++) {
    let entry=entries[idx];
    let entryPath=join(objSourceDir, entry);
}

Reading Metadata

The Salesforce metadata is stored as XML format files, which presents a slight challenge when working in JavaScript. I can use the DOM parser, but I find the code that I end up writing looks quite ugly, with lots of chained functions. My preferred solution is to parse the XML into a JavaScript object using a third party tool - there are a few of these around, but my current favourite is fast-xml-parser because

  • it's fast (clue is in the name)
  • it's synchronous - this isn't a huge deal when writing a CLI plugin, as the commands are async and can thus await asynchronous functions, but when I'm working with the command line I tend towards synchronous
  • it's popular - 140k downloads this week
  • it's MIT licensed
It's also incredibly easy to use. After installing it by running npm install --save fast-xml-parser, I just need to import the parse function, load the XML format file into a string and parse it!

import { parse } from 'fast-xml-parser';
import { readFileSync } from 'fs';

let objFileBody=readFileSync(join(entryPath, entry + '.object-meta.xml'), 'utf-8');
let customObject=parse(objFileBody);

My customObject variable now contains a JavaScript object representation of the XML file, so I can access the XML elements as properties:

let label=customObject.CustomObject.label;
let description=customObject.CustomObject.description;

Which allows me to extract the various properties that I am interested in for my document.
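
For reference, the parsed object for a sample custom object looks roughly like this (an illustrative shape only - the exact properties depend on the object's metadata):

const customObject = {
    CustomObject: {
        label: 'Book',
        description: 'Represents a book in the catalogue',
        nameField: { label: 'Book Name', type: 'Text' },
        deploymentStatus: 'Deployed'
    }
};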

In the next exciting episode I'll show how to enrich the objects by pulling in additional information from other files.

Related Posts