Friday, 26 January 2018

SFDX and the Metadata API Part 4 - VSCode Integration


Introduction

In the previous instalments of this blog series I’ve shown how to deploy metadata, script the deployment to avoid manual polling, and carry out destructive changes. All key tasks for any developer, but executed from the command line. On a day-to-day basis I, like just about every other developer in the Salesforce ecosystem, spend large parts of the day working on code in an IDE. As it has Salesforce support (albeit still somewhat fledgling), I’ve switched over completely to the Microsoft VSCode IDE. The Salesforce extension does provide a mechanism to deploy local changes, but at the time of writing (Jan 2018) only to scratch orgs, so a custom solution is required to target other instances.

In the examples below I’m using the deploy.js Node script that I created in SFDX and the Metadata API Part 2 - Scripting as the starting point.

Sample Code

My sample class is so simple that I can’t think of anything to say about it, so here it is:

public with sharing class VSCTest1 {
    public VSCTest1() {
        Contact me;
    }
}

and the package.xml to deploy this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <version>40.0</version>
</Package>

VSCode Terminal

VSCode has a nice built-in terminal in the lower panel, so the simplest and least integrated solution is to run my commands through this. It works, and I get my set of results, but it’s clunky.
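For the record, with the deploy.js script from Part 2 sitting in the project root, the command in question is just:

node deploy.js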

[Screenshot: deploy script output in the VSCode integrated terminal]

VSCode Tasks

If I’m going to execute deployments from my IDE, what I’d really like is a way to start them from a menu or shortcut key combination. Luckily the designers of VSCode have foreseen this and provide the concept of Tasks. Simply put, a Task is a way to configure VSCode with the details of an external process that compiles, builds, tests and so on. Once configured, the process is available via the Tasks menu and can also be set up as the default build step.

To configure a Task, select the Tasks -> Configure Tasks menu option and choose the Create tasks.json file from template option in the command bar dropdown:

[Screenshot: the Tasks -> Configure Tasks menu option]

Then select Others from the resulting menu of Task types:

[Screenshot: selecting Others from the menu of Task types]

This generates a boilerplate tasks.json file with minimal information, to which I then add the details of my Node deploy script:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": “build",
            "type": "shell",
            "command": "node",
            "args":["deploy.js"]
        }
    ]
}

I then execute this via the Tasks -> Run Task menu, choosing 'build' from the command bar dropdown and selecting 'Continue without scanning the task output'.

This executes my build in the terminal window much as before, but saves me having to remember and enter the command each time:

[Screenshot: the build task output in the terminal window]

Sadly I can’t supply parameters to the command when executing it, so if I need to deploy to multiple orgs I have to create multiple entries in the tasks.json file. For the purposes of this blog, though, let’s imagine I’m living a very simple life and only ever work in a single org!
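That said, if I did need a second org, tasks.json would simply gain an entry per org. Something like the following sketch, which assumes a hypothetical tweak to deploy.js to accept the target username as a command line argument:

{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "build-dev",
            "type": "shell",
            "command": "node",
            "args": ["deploy.js", "dev-user@example.com"]
        },
        {
            "label": "build-uat",
            "type": "shell",
            "command": "node",
            "args": ["deploy.js", "uat-user@example.com"]
        }
    ]
}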

Capturing Errors

Executing my command from inside VSCode is the first part of an integrated experience, but I still have to check the output myself to figure out whether there are any errors and which files they are located in. For that true developer experience I’d like feedback from the build stage to be immediately reflected in my code. To capture an error I first need to generate one, so I set my class up to fail:

public with sharing class VSCTest1 {
    public VSCTest1() {
        Contact me;
        // this will fail
        me.do();
    }
}

VSCode Tasks can pick up errors, but this requires a bit more effort than simple configuration.

Tasks detect errors via ProblemMatchers - these take a regular expression used to parse the error strings produced by the command and extract useful information, such as the filename, line and column numbers, and the error message.

While my deploy script has access to the error information, it’s in JSON format, which the ProblemMatcher can’t process. That's not a great problem though, as my Node script can extract the errors from the JSON and output them in a regexp-friendly format.

Short Diversion into the Node Script

As I’m using execFileSync to run the SFDX command from my deploy script, a non-zero result (which SFDX returns if there are failures on the deployment) causes an exception to be thrown, halting the script. To get around this without resorting to executing the command asynchronously and capturing stdout, stderr etc, I simply send the error stream output to a file and catch the exception, if there is one. I then check the error output: if it was a failure on deployment, I just use that instead of the regular output stream; if it was a “real” exception, I let the command fail. This is all handled by a single function that also turns the captured response into a JavaScript object:

var fs=require('fs');
var child_process=require('child_process');

// execute the command synchronously, redirecting stderr to a file. If the
// command exits non-zero, check whether the error output contains a JSON
// response (a deployment failure) - if so, use that, otherwise rethrow
function execHandleError(cmd, params) {
    var result;
    var resultJSON;
    var err;
    try {
        err=fs.openSync('/tmp/err.log', 'w');
        resultJSON=child_process.execFileSync(cmd, params, {stdio: ['pipe', 'pipe', err]});
        result=JSON.parse(resultJSON);
        fs.closeSync(err);
    }
    catch (e) {
        fs.closeSync(err);
        // the command returned non-zero - this may mean the metadata operation
        // failed, or there was an unrecoverable error.
        // Is there an opening brace in the error output?
        var errMsg=''+fs.readFileSync('/tmp/err.log');
        var bracePos=errMsg.indexOf('{');
        if (-1!=bracePos) {
            // deployment failure - parse the JSON from the error stream instead
            resultJSON=errMsg.substring(bracePos);
            result=JSON.parse(resultJSON);
        }
        else {
            // a "real" exception - let the command fail
            throw e;
        }
    }

    return result;
}
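By way of example, a deployment along the lines of Part 2 can then be kicked off through this function - a sketch, with an illustrative directory and a placeholder username:

// kick off the deployment - the directory and username are placeholders
var result=execHandleError('sfdx',
                           ['force:mdapi:deploy', '-d', 'src',
                            '-u', 'user@example.com', '--json']);
console.log('Deployment id = ' + result.result.id);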

Once my deployment has finished, I check to see if it failed and, if it did, extract the failures from the JSON response:

if ('Failed'===result.result.status) {
    if (result.result.details.componentFailures) {
        // handle if single or array of failures
        var failureDetails;
        if (Array.isArray(result.result.details.componentFailures)) {
            failureDetails=result.result.details.componentFailures;
        }
        else {
            failureDetails=[];
            failureDetails.push(result.result.details.componentFailures);
        }
        ...
    }
    ...
}

I then iterate the failures and output text versions of them:

for (var idx=0; idx<failureDetails.length; idx++) {
    var failure=failureDetails[idx];
    console.log('Error: ' + failure.fileName +
                ': Line ' + failure.lineNumber +
                ', col ' + failure.columnNumber +
                ' : ' + failure.problem);
}
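For the failing class above, this produces output along the following lines (the message text here is illustrative rather than the exact compiler error):

Error: src/classes/VSCTest1.cls: Line 5, col 12 : Method does not exist or incorrect signature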

Back in the Room

Rerunning the task shows any errors that occur:

[Screenshot: deployment failure output from the task]

I can then create my regular expression to extract information from the failure text - I used Regular Expressions 101 to create this, as it allows me to baby-step my way through building the expression. Once I’ve got the regular expression down, I add the ProblemMatcher stanza to tasks.json:

"problemMatcher": {
    "owner": "BB Apex",
    "fileLocation": [
        "relative",
        "${workspaceFolder}"
    ],
    "pattern": {
        "regexp": "^Error: (.*): Line (\\d)+, col (\\d)+ : (.*)$",
        "file": 1,
        "line": 2,
        "column": 3,
        "message": 4
    }
}
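For clarity, this stanza sits inside the task entry itself, so the build task now looks something like this (abbreviated):

{
    "label": "build",
    "type": "shell",
    "command": "node",
    "args": ["deploy.js"],
    "problemMatcher": {
        "owner": "BB-apex",
        ...
    }
}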

Now when I rerun the deployment, the Problems tab contains the details of the failures surfaced by the script:

[Screenshot: failure details surfaced in the Problems tab]

and I can click on the error to be taken to the location in the offending file.

There’s a further wrinkle to this, in that Lightning components report errors in a slightly different format - the row/column in the result is undefined, but if it is known it appears in the error message on the following line, e.g.

Error: src/aura/TakeAMoment/TakeAMomentHelper.js: Line undefined, col undefined : 0Ad80000000PTL3:8,2: ParseError at [row,col]:[9,2]
Message: The markup in the document following the root element must be well-formed.

This is no problem for my task, as the problemMatcher attribute can specify an array of matchers, so I just add another one with an appropriate regular expression:

"problemMatcher": [ {
        "owner": "BB-apex",
        ...
    },
    {
        "owner": "BB-lc",
        "fileLocation": [
            "relative",
            "${workspaceFolder}"
        ],
        "pattern": [ {
            "regexp": "^error: (.*): Line undefined, col undefined : (.*): ParseError at \\[row,col\\]:\\[(\\d+),(\\d+)]$",
            "file": 1,
            "line": 3,
            "column": 4,
        },
        {
            "regexp":"^(.*$)",
            "message": 1
        } ]
    }],

Note that I also specify an array of patterns to match the first and second lines of the error output. If the error message was spread over 5 lines, I’d have 5 of them.

You can view the full deploy.js file at the following gist, and the associated tasks.json.

Default Build Task

Once the tasks.json file is in place, you can set this up as the default build task by selecting the Tasks -> Configure Default Build Task menu option and choosing 'build' from the command bar dropdown. Thereafter, just use the keyboard shortcut (Ctrl+Shift+B, or Cmd+Shift+B on a Mac) to execute the default build.
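Under the covers this just adds a group entry to the chosen task in tasks.json, along these lines:

{
    "label": "build",
    "type": "shell",
    "command": "node",
    "args": ["deploy.js"],
    "group": {
        "kind": "build",
        "isDefault": true
    }
}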

Saturday, 13 January 2018

Building My Own Learning System - Part 1


Introduction

Before I get started on this post, I want to make one thing clear. This is not Trailhead. It’s not Bob Buzzard’s Trailhead. It’s not a clone or wannabe of Trailhead. While it would be fun to build a clone of Trailhead, all it would be is an intellectual exercise to see how close I could get. So that’s not what I did. I didn’t build my own Trailhead. Are we clear on that? Nor is it MyTrailhead, although it could be used in that way. But again, I’m not looking to clone an existing solution, even if it is still in pilot and likely to stay there for a couple of releases. I’m coming at this from a different angle, as will hopefully become clear from this and subsequent blog posts. Put the word Trailhead out of your mind.

All that said, I was always going to build my own training system. Pretty much every post I’ve written about Trailhead had a list of things I’d like to see, and I can only suppress the urge to write code in this space for so long. This might mean that I moderate my demands, realising how difficult things really are when you have to implement them rather than just think about them in abstract form.

The Problem

Trailhead solves the problem of teaching people about Salesforce at scale, with content that comes from the source and is updated with each release. MyTrailhead is about training/onboarding people into your organisation. The problem I was looking to solve was somewhat different, although closer to MyTrailhead. I wanted a way to onboard people from inside and outside my organisation onto a specific application or technology, but without sending everyone through the same process.

For example, regular readers of this blog or my Medium posts will know that I run product development at BrightGen, and that we have a mature Full Force solution in BrightMedia. We also have a bunch of collateral and training material around BrightMedia that I’d like to surface to various groups of people:

  • Internal BrightGen sales team
  • Internal BrightGen developers
  • External customer users

I don’t particularly want a single training system, as this would mean giving external users access to internal systems. It’s also likely that I’ll have a bunch of training information that isn’t BrightMedia specific, and I don’t really want to colocate this with everything else.

Essentially what I’m looking for is a training client that can connect to multiple endpoints, each endpoint containing content specific to a product/application/team. That, and a way to limit who can access the content, allows me to colocate the content with the application, potentially in the packaging org that contains the application.

The First Stirrings of the Solution

Data Model

As the client won’t be accessing data from the same Salesforce org, or potentially any Salesforce org, my front end is backed by a custom Apex class data model rather than sObjects:

[Screenshot: the learning system data model]

I’ve deliberately chosen names that are different to Trailhead, because as we all know this isn’t Trailhead. I was very tempted to use insignia rather than badge, as I think that gives it a somewhat British feel, but in the end I decided that would confuse people. Each path has topics associated with it so that I can see how strong a candidate is in a particular field. The path and associated steps are essentially the learning template, while the candidate path/step tracks the progress of a candidate through the path. A path has a badge associated with it and once a candidate completes all steps in the path they are awarded the badge. The same(ish) data model as myriad training systems around the globe.

The records that back this data model live in the content endpoint. Thus the candidate doesn’t have a badge count per se; instead they have a badge count per functional area. In the BrightGen scenario they will have a badge count for BrightMedia, and a separate badge count for other product areas. They can also have multiple paths in progress, striped across content endpoints.
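To make the shape of this more concrete, here’s a purely illustrative sketch of how such classes might look - the names and fields are guesses based on the description above, not the real code, which isn’t ready to share yet:

// purely illustrative - names and fields are guesses, not the real code
public class Badge {
    public String name;
}

public class Step {
    public String name;
    public String content;
}

public class Path {
    public String name;
    public List<String> topics;   // used to gauge how strong a candidate is in a field
    public List<Step> steps;
    public Badge badge;           // awarded once all steps are complete
}

// tracks a candidate's progress through a single step
public class CandidateStep {
    public Step step;
    public Boolean complete;
}

// tracks a candidate's progress through a path
public class CandidatePath {
    public Path path;
    public List<CandidateStep> candidateSteps;
    public Boolean complete;
}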

User Interface

I created the front end to work against these custom classes as a single-page application. As the user selects paths and steps, the page re-renders itself to show the appropriate detail. I’m still tweaking this, so I’ll cover the details in the next post in the series.

Show me the Code

I don’t plan to share any code in these posts until the series is complete, mainly because it isn’t ready yet - I’m pretty sure I’ve got the concepts straight in my head, but the detail keeps changing as I think of different ways of doing things. Once it’s done, I’ll open source the whole thing on GitHub.