Monday, 31 August 2020

Now What for Dreamforce?



The cancellation of the in-person 2020 Dreamforce conference brought my 10-year attendance streak to a shuddering halt. I'll shortly be taking my (late) summer holiday, and it will be the first one I can remember where I'm not building applications, putting together presentations for one or more sessions, or joining conference calls to cover how round-table events will be run. While it will be nice to have that time back, I'll very much miss the trip to San Francisco and the chance to catch up with a bunch of people that I rarely see in person.

We're now notionally a couple of months out from the digital version of Dreamforce, which has already been put in doubt by comments made by Marc Benioff to The Information. For what it's worth, I don't think a like-for-like replacement of Dreamforce will take place online, nor would it make a lot of sense to do that.

Duration

Dreamforce is a 4 day event, often bookended by additional summits (think CTA) or training events (get certified for half price!). For those outside North America it typically involves a week or more out of the office - and this aspect is key - we've all flown over with little more to do than attend the conference and network.

For a virtual event, nobody is taking a week off work to sit in front of their screen watching presentations. The best that can be hoped for is that the audience watches some of the sessions live, fitting them in around their real work.

Location

Dreamforce takes place in San Francisco, which is a popular destination in its own right. A virtual event takes place in your own home (or possibly in an office now that some have reopened) which doesn't have the same cachet. 

Attendees are also going to be in their own timezone, rather than struggling to adjust to PST. They aren't likely to stay up until 2am to see the latest report builder enhancements, but by the same token they won't be jet-lagged and should be able to stay awake.

Networking

This is one of the biggest attractions for me - the chance to meet a whole bunch of people, from product managers (to get answers to burning questions) to fellow developers from the community (to find out what cool things others are working on). Hang around the Trailhead forest for a few hours and you'll be amazed how many people you can tick off your list. This just doesn't happen at virtual events - you can fire off questions via videoconference chat channels, but the one-to-one connection simply won't be an option at the scale you can achieve in person.

I'd also include things like parties, the Gala and after-parties in the networking column, and clearly none of that is going to happen virtually. You might see some virtual happy hour concept where a group of people sip drinks and try not to talk over each other on a Zoom call, but it won't be the same.

Expo

I've attended a bunch of virtual events this year, sometimes because I'm interested in the topic and sometimes just to see the approach. In my opinion, one area that consistently struggles is the expo. People dialling in from home to watch a session on a specific topic typically sign off afterwards rather than browsing a virtual expo. If you are interested in a product and know Dreamforce is coming up, you might wait to talk to the vendor in person. When you can't do that, you'll sign up for a virtual demo at the first possible opportunity rather than wait for a day when you are already going to be stuck in front of a screen for hours. There is also the swag, or lack thereof. From personal experience, I can tell you that a surprising number of people at any event expo just want free swag, and they aren't joining your virtual expo demo without some incentive.

This feels to me like one area that will have to be completely rethought in these continuing Covid times - simply taking the expo concept online isn't cutting it. Organisers will still want sponsor dollars, but they'll have to find a new mechanism. Some kind of session sponsorship is my guess, which showcases the partner's offering alongside the main product, in this case Salesforce. 

Community Content

This is the other area that will need to be rethought. If it will be a struggle to get people to join sessions involving product management, I really can't see Salesforce giving up space to speakers from the community. They won't have loads of breakout rooms to fill, so I can see them keeping the vast majority of virtual space to themselves. Maybe this will push the community content completely to community events like London's Calling or the many flavours of Dreamin'.

Diluting the Brand

Dreamforce is the Salesforce event of the year. It's something that everyone wants to go to and always sells out. By running something less than this online, the name is devalued. I'm sure the numbers would still be pretty good, because of the sheer reach that Salesforce has, but it would always be compared to the in person event and found wanting.

Now What?

Instead of trying to recreate Dreamforce in a virtual setting, do something different. It could still leverage the Dreamforce name, but not try to be a like for like replacement. Rather than squashing a ton of content into 4 days that we won't have time to consume, spread it out - maybe as a half-day event around a theme every month or six weeks - Dreamforce Bi-Fortnightly has a nice ring, as does Dreamforce Semi-Quarterly. The sessions need to be useful though - if they aren't the audience will vote with their feet, and they won't have the ticket cost to guilt them into staying.

The one exception to the above is the keynote. This is something that should still happen as a one-off with attendant folderol. We need this to see the performance versus the previous year, the focus for the next year and some key customer stories. Also, without the keynote I won't be able to hear about the new features that I won't make the pilot for and thus won't get access to for a couple of years, and I need that envy to keep my interest up. I'm not sure how well the multiple product keynotes would work - maybe a half-day keynotes event that starts with Marc Benioff and friends and hands over to the various clouds, or perhaps each Dreamforce Semi-Quarterly includes a keynote and has a major focus on a specific cloud.


Saturday, 29 August 2020

Adding IP Ranges from a Salesforce CLI Plug-in


Introduction

I know, another week, another post about a CLI plug-in. I really didn't intend this to be the case, but I recently submitted a (second generation) managed package for security review and had to open up additional IP addresses to allow the security team access to a test org. I'm likely to submit more managed packages for review in the future, so I wanted to find a scriptable way of doing this. 

It turned out to be a little more interesting than I'd originally expected, for a couple of reasons.

It Requires the Metadata API


To add an IP range, you have to deploy security settings via the metadata API. I have done this before, to enable/disable parallel Apex unit tests, but this was a little different. If I create a security settings file with just the new range(s) and deploy it, per the Metadata API docs, all existing IP ranges will be removed:

    To add an IP range, deploy all existing IP ranges, including the one you
    want to add. Otherwise, the existing IP ranges are replaced with the ones
    you deploy. 

Definitely not what I want!

It Requires a Retrieve and Deploy


In order to get the existing IP addresses, I have to carry out a metadata retrieve of the security settings from the Salesforce org, add the ranges, then deploy them. No problem here - I can simply use the retrieve method on the metadata class that I'm already using to deploy. Weirdly, unlike the deploy function, the retrieve function doesn't return a promise; instead it expects me to supply a callback. I couldn't face returning to callback hell after the heaven of async/await in my plug-in development, so I used Node's util.promisify function, which turns it into a function that returns a promise. Very cool.

import { promisify } from 'util';

const asyncRetrieve = promisify(conn.metadata.retrieve);
const retrieveCheck = await asyncRetrieve.call(...);

The other interesting aspect is that I get the settings back in XML format, but I want a JavaScript object to manipulate, which I then need to turn back into XML to deploy.

To turn XML into JavaScript I use fast-xml-parser, as this has stood me in good stead with my Org Documentor. To get at the NetworkAccess element:

import { parse } from 'fast-xml-parser';

  ...

const settings = readFileSync(join(tmpDir, 'settings', 'Security.settings'), 'utf-8');
const md = parse(settings);

let networkAccess = md.SecuritySettings.networkAccess;
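
Adding a range is then a matter of manipulating the JavaScript object before converting back - a minimal sketch, bearing in mind that fast-xml-parser hands back a single object rather than an array when there's only one existing range (treat the details as an illustration rather than the exact plug-in code):

// coerce to an array - a lone existing range is parsed as a single object
if (!Array.isArray(networkAccess.ipRanges)) {
    networkAccess.ipRanges = networkAccess.ipRanges ? [networkAccess.ipRanges] : [];
}

// add the new range alongside the existing ones
networkAccess.ipRanges.push({start: '192.168.2.1', end: '192.168.2.255'});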

Once I've added my new ranges, I convert back to XML using xml2js:

import { Builder } from 'xml2js';

  ...

const builder = new Builder(
   {renderOpts:
      {pretty: true,
       indent: '    ',
       newline: '\n'},
    stringify: {
       attValue(str) {
           return str.replace(/&/g, '&amp;')
                     .replace(/"/g, '&quot;')
                     .replace(/'/g, '&apos;');
       }
    },
    xmldec: {
        version: '1.0', encoding: 'UTF-8'
    }
});

const xml = builder.buildObject(md);

The Plug-In


This is available as a new command on the bbsfdx plug-in - if you have it installed already, just run 

sfdx plugins:update 

to update to version 1.4.

If you don't have it installed already (I won't lie, that hurts) you can run:

sfdx plugins:install bbsfdx
 
and to add one or more ranges: 

sfdx bb:iprange:add -r 192.168.2.1,192.168.2.4:192.168.2.255 -u <user>

The -r switch defines the range(s) - comma separate them. For a range, separate the start and end addresses with a colon. For a single address, just skip the colon and the end address.
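
Parsing that switch boils down to something like this sketch (assuming the value lands in this.flags.range - the real plug-in code may differ):

const ranges = this.flags.range.split(',').map(entry => {
    const [start, end] = entry.split(':');
    return {start, end: end || start};  // a single address has matching start and end
});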



Saturday, 22 August 2020

App Builder Page Aware Lightning Web Component

Introduction

This week I've been experimenting with decoupled Lightning Web Components in app builder pages, now that we have the Lightning Message Service to allow them to communicate with each other without being nested. 

A number of the components are intended for use in record home and app pages, which can present challenges, in my case around initialisation. Some of my components need to initialise by making a callout to the server, but if they are part of a record home page then I want to wait until the record id has been supplied. Ideally I want my component to know what type of page it is currently being displayed in and take appropriate action.

Inspecting the URL

One way to achieve this is to inspect the URL of the current page, but this is a pretty brittle approach - if Salesforce change the URL scheme, it stands a good chance of breaking. I could use the navigation mixin to generate URLs from page references and compare those with the current URL, but that seems a little clunky and adds a delay while I wait for the promises to resolve.

targetConfig Properties

The solution I found with the least impact was to use targetConfig stanzas in the component's js-meta.xml configuration file. From the docs, these allow you to:

Configure the component for different page types and define component properties. For example, a component could have different properties on a record home page than on the Salesforce Home page or on an app page.

It was this paragraph that gave me the clue - different properties depending on the page type!

You can define the same property across multiple page types, but define different default values depending on the specific page type. In my case, I define a pageType property and default to the type of page that I am targeting:

<targetConfigs>
    <targetConfig targets="lightning__RecordPage">
        <property label="pageType" name="pageType" type="String" default="record" required="true"/>
    </targetConfig>
    <targetConfig targets="lightning__AppPage">
        <property label="pageType" name="pageType" type="String" default="app" required="true"/>
    </targetConfig>
    <targetConfig targets="lightning__HomePage">
        <property label="pageType" name="pageType" type="String" default="home" required="true"/>
    </targetConfig>
</targetConfigs>

so for a record page, the pageType property is set as 'record' and so on.

In my component, I expose page type as a public property with getter and setter methods (you only need the @api decorator on one of the methods, and the convention right now seems to be the getter):

@api get pageType() {
    return this._pageType;
}

set pageType(value) {
    this._pageType=value;
    this.details+='I am in a(n) ' + this._pageType + ' page\n';
    switch (this._pageType) {
        case 'record' :
            this.details+='Initialisation will happen when the record id is set (which may already have happened)\n';
            break;
        case 'app' :
        case 'home' :
            this.details+='Initialising\n';
            break;
    }
}

and similar for the record id, so that I can take some action when that is set:

@api get recordId() {
    return this._recordId;
}

set recordId(value) {
    this._recordId=value;
    this.details+='I have received record id ' + this._recordId + ' - initialising\n';
}
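
Tying the two setters together, a hypothetical helper (my sketch, not from the actual component) could gate initialisation so it fires exactly once, whichever order the properties arrive in - both setters would call it:

initialiseIfReady() {
    // record pages need the record id before the callout; app/home pages can go immediately
    if (!this.initialised &&
        (this._pageType === 'app' || this._pageType === 'home' ||
         (this._pageType === 'record' && this._recordId))) {
        this.initialised = true;
        // make the server callout here
    }
}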

then I can add the component to the record page for Accounts:



a custom app page:



and the home page for the sales standard application:

and in each case the component knows what type of page it has been added to. 

Of course this isn't foolproof - my Evil Co-Worker could edit the pages and change the values in the app builder, leading to all manner of hilarity as my components wait forlornly for the record id that never comes. I could probably extend my Org Documentor to process the flexipage metadata and check the values haven't been changed, but in reality this is fairly low impact sabotage and probably better that the Evil Co-Worker focuses on this rather than something more damaging.

Show Me The Code!

You can find the code at the GitHub repo.



Saturday, 15 August 2020

Org Documentor and AuraEnabled Classes


Introduction

In the Winter '21 release of Salesforce, access to AuraEnabled methods in Apex classes will be restricted to users with profiles or permission sets that grant access to those classes. If you've been working in a sandbox recently you've probably encountered this already, as the critical update for this was enabled on August 8th.

Figuring out who (if anyone) has access to classes isn't straightforward, as you have to trawl through the profiles and permission sets and check each one. Salesforce have created an Aura Enabled Scanner application that can be installed via an unlocked package, which checks packaged and unpackaged code, but it does require that you log in to Salesforce each time you need to check things.

This seemed like a good candidate for my Org Documentor Salesforce CLI Plug-In - something that can be run on a schedule, check that any new classes used as controllers for Aura or Lightning Web Components are accessible, and show which profiles/permission sets have access.


Configuration

To add processing for AuraEnabled classes, an additional stanza is required in the configuration file passed to the bbdoc command - here's what it looks like in my example repo:

"auraenabled": {
    "name": "auraenabled",
    "description": "AuraEnabled Class Access", 
    "subdirectory": ".",
    "image": "images/auraenabled.png",
    "suffix":"object",
    "groups": {
        "other": {
            "name":"other", 
            "title":"AuraEnabled Components",
            "description": "All components with Apex controllers"
        }
    }
}

As with other metadata types, you can specify multiple groups to slice up your components into functional areas - I've lumped them all into one group as I only have one component!


Output

The index (home) page for the org report displays a new card for the AuraEnabled metadata:


An error badge is displayed if there are one or more classes used as controllers for components that aren't accessible from any profile or permission set. 

Clicking in to the details shows the groups and errors:


and clicking into the group shows the detail for each component, with classes that aren't accessible highlighted as errors:



Note that if a Lightning Web Component accesses multiple Apex classes, there will be a row for each class with the same component name in the report, as shown above.


Processing

I was quite pleased by how little code I had to write to add support for this:

  • A classes map structure is created in memory, profilesAndPermSetsByClassname, where each entry has the Apex class name as a key and a value object containing lists of permission sets and profiles with access to the class. This is generated by loading all of the profile and permission set metadata and iterating the ClassAccess sections. 

  • The component groups are iterated, and for each entry in the group:

    • If it is an Aura Component, the controller attribute from the <aura:component/> tag is extracted

    • If it is a Lightning Web Component (which can access multiple Apex classes), @salesforce/apex/ lines are identified and the Apex class name extracted - see the sketch after this list. 

      For each class:

      • The entry for the class in the profilesAndPermSetsByClassname map is extracted. If this is null or both of the profile and permission set lists are empty, the classname is added to the error collection displayed on the group page.

      • A row is added to the JavaScript object backing the group page showing the profiles/permission sets that have access, adding the error highlight colour if there are none.
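
Extracting the class names from a Lightning Web Component boils down to matching the Apex import lines - a minimal sketch, assuming source holds the contents of the component's JavaScript file (the real plug-in code may differ):

// import lines look like: import getData from '@salesforce/apex/MyController.getData';
const matches = source.match(/@salesforce\/apex\/\w+\.\w+/g) || [];
const classNames = matches.map(m => m.split('/')[2]    // 'MyController.getData'
                                     .split('.')[0]);  // 'MyController'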


Plug-In

The updated bbdoc Salesforce CLI plug-in providing AuraEnabled support can be found on NPM, and the source code is available in the GitHub repository. 


Why not use the AuraEnabledScanner?

You absolutely should - I do, to configure access. This works with the AuraEnabledScanner rather than replacing it. As development continues, new classes and components are added to your codebase. The Org Documentor flags up any that don't have appropriate access, and you can then log in to an org that your metadata is deployed to, run the scanner and fix up the access. 

The scanner also handles managed classes which the Org Documentor doesn't, as it works against your metadata on disk rather than everything installed in a specific org instance.



Saturday, 8 August 2020

New bbsfdx Plug-In Cleanup Commands



Introduction


bbsfdx is my Salesforce CLI utilities plug-in. It receives infrequent updates, typically when I find myself doing the same thing repeatedly via workbench or the like. It doesn't have many functions, as I try to keep it to utilities - anything new goes into its own dedicated plug-in (e.g. mentz, bbdoc, bbpdf). This week I found the need to add a couple of functions and published V1.3 of the plug-in.

New Commands

bb:devconsole:delete

When the developer console won't load, up until now I've googled for the help article and followed the instructions to remove the IDEWorkspace record for my user via the workbench. 

This happened a couple of times in the last week, so I decided I needed a quicker way. 
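
Now it's a single command, following the same pattern as the other commands in the plug-in (the -u flag choosing the org):

sfdx bb:devconsole:delete -u <user>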

bb:logs:delete 

Since the Summer '19 release, we can store up to 1,000MB of debug logs, so I don't often fill up the storage. That said, some of the areas I work in generate logs close to the 20MB per-log limit, and it doesn't take too many of those to start causing a problem. 

Up until now I'd query and delete via the developer console, but a CLI command is a little more efficient. This command also takes a -a flag to delete all logs in the org, rather than just those for my user.
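
Deleting my logs, or everyone's, then looks something like this (assuming the same -u org selection as the other commands):

sfdx bb:logs:delete -u <user>
sfdx bb:logs:delete -a -u <user>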

Upgrading


Regardless of which plug-in you need to upgrade, you run the same command:

sfdx plugins:update

this checks for newer versions of all of your installed plug-ins and, if it finds any, installs them.



Saturday, 1 August 2020

Visual Studio Codespaces - the tech behind Code Builder



Salesforce launched Code Builder in June 2020, just before the 2020 TrailheaDX virtual conference, and it's fair to say there was some excitement around this. Those expecting a Developer Console Mk 2 must have been delighted to see a full-featured IDE that can run in the browser.  Even better, we confirmed with the product team that as the IDE is effectively running in a virtual machine, any required extensions and Salesforce CLI plug-ins can be installed, so you can recreate your local environment with all your favourite features.

I was certainly excited about this and reached out in a number of directions to try to get on the pilot, all of which sadly failed - I needed to be nominated by an Account Executive, and experience has taught me that at our number of licenses our AE typically doesn't even know that pilots are available, much less how to nominate me for one.

Undaunted, I took a different approach and started looking at the technology that powers Code Builder - Visual Studio Codespaces.  

Visual Studio Codespaces

Formerly known as Visual Studio Online, the description from the official Microsoft site sums it up very well:

Cloud-hosted dev environments accessible from anywhere

There's some signup and setup required to start using Codespaces, as detailed below.

Sign up for Azure

A Microsoft account and Azure subscription are needed, as Azure hosts the virtual environments (machines). I chose the free subscription, which gives me 12 months of free Linux virtual machine access. Compute hours are limited, but in all likelihood I'll use this rarely, and, assuming it doesn't require me to spend a bunch of money, little or never once Code Builder is available.

VS Code Extensions

I then had to install a couple of extensions:
  • Azure Account, which gives VS Code access to the Azure subscription I've just created. Signing in to this took ages, and the problem was eventually tracked down to some kind of conflict with the Skype account I was logged in to at the same time. The Skype account is linked to my Microsoft account in the same way as Azure is, but VS Code wasn't having it. In the end I logged out of Skype and was able to log in to Azure. Since then I've been able to log in to Skype and Azure in any order, so it may have been a glitch with one of the connections at that point in time. If in doubt, sign out of everything!
  • Visual Studio Codespaces, which allows VS Code to manage and connect to remote dev environments - you don't have to use the browser as a front-end with Codespaces, and I can't see any reason you wouldn't be able to do the same with Code Builder, but time will tell.  
You also have to sign in to Codespaces, which you can do via the button on the left of VS Code:


Once you've signed in, choosing to create a new Codespace will take you through the process of creating a new plan - the key questions are the Azure region (make sure to choose one close to you) and the default instance type (I went for Linux).

Putting it Together

Once the accounts and extensions are in place, VS Code gains a new icon at the bottom left:



Clicking this opens the Codespaces menu: 


Choosing 'Create New Codespace' generates a new remote environment, after capturing some information from me. First, what type of environment I want - I've only used the Default settings and had no problems to date.


I then have a choice of creating an empty Codespace, or populating it with the contents of a repository - I'm using my Curated repo (currently private, but the contents of the repo aren't really important for this post - if you are curious what it is, it's my toolbox):


Then an easy question to answer - the Codespace name.


Once all the questions are answered, the extension gets to work on a new Codespace:


As well as the progress bar, a new panel opens on the right of VS Code to show the steps being completed (assuming all goes well - I've had one or two failures, but most of the time it's a breeze):


Once the setup is complete, clicking the Connect button opens the remote workspace through my local VS Code installation, which looks pretty much the same as a local workspace, bar a larger green element on the bottom left of the window:


and the fact that any changes I make here are not reflected in my local workspace unless I round trip them to the version control system. 

This is all well and good, and if I don't have a particularly powerful machine it's a nice way to be able to work on multiple projects at the same time and push a lot of the computing requirement to the cloud. The more interesting environment, however, is the browser.

Codespaces in the Browser

To access my Codespace from a browser, I have to log in to the Visual Studio Codespaces web site at https://online.visualstudio.com/login, using the Microsoft account that I attached my Azure subscription to. As an aside, I only seem to be able to log in via a regular Chrome window, not an incognito one. I've no idea whether that is intentional, but it's been consistently the case since I started playing with Codespaces.

Once I've logged in I can see all of my current Codespaces, and I have the option to create more directly from the web site:



Clicking on my new CuratedBlog Codespace opens the IDE in the browser:



And as this is a virtual machine, the terminal works in the browser too!


That's all there is to it?

Not quite - while this gives me a Visual Studio Codespace primed with my Salesforce application metadata and running in the cloud, it doesn't know much about Salesforce, so I have to do a little more setup - note you can configure your Codespace environment via a devcontainer.json file in your project directory, so you wouldn't have to do this every time if you were spinning up multiple codespaces. I haven't looked at this in detail.
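
As a rough illustration - I haven't verified this exact file - a devcontainer.json that pulls in the Salesforce extension pack and CLI when the Codespace is created might look something like:

{
    "name": "Salesforce Project",
    "extensions": ["salesforce.salesforcedx-vscode"],
    "postCreateCommand": "npm install --global sfdx-cli"
}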


Once this is done, you have the tools to interact with Salesforce orgs, but you still need to authenticate against them. This is the slightly long-winded part: while it is technically possible to run a browser in a docker container, it doesn't look straightforward, so you'll likely end up using the JWT flow. This isn't particularly difficult, as long as you are familiar with the command line and OpenSSL commands, but as anyone who has set up a CI machine will testify, there are connected apps, keys and certs to set up.
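
Once the connected app, key and cert are in place, the final grant from the Codespace terminal is the standard one - something along these lines, substituting your own consumer key, key file and username:

sfdx force:auth:jwt:grant --clientid <consumer key> --jwtkeyfile server.key --username <user> --instanceurl https://login.salesforce.com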

A similar problem presents itself once you have authenticated against your dev hub and created a scratch org - clicking the icon to open the org can't open a browser, so doesn't really do anything. The upside here is that it will typically tell you the URL that it is attempting to open, which includes a session id, so you can copy/paste that into your local browser to access the scratch org, even though you haven't set up any OAuth for this org. The URL will appear in the Output tab:


This is where I expect Salesforce to add the Code Builder value - generating a Codespace that has the extensions and CLI pre-configured and already authorised against the org, as well as some other items that I haven't stumbled across yet.

Wrapping Up

While Codespaces require a bit of effort, until Code Builder is more widely available they are a really good way to be able to develop from any device that has a supported browser and internet connection.  In fact I was able to use Chrome on my iPhone XS to push changes to a scratch org. The UI was terrible so I wouldn't recommend it, but as an intellectual exercise it was fun.





Sunday, 5 July 2020

Passing the Salesforce JavaScript Developer 1 Certification



This week (2nd July 2020) I passed the new (at the time of writing) JavaScript Developer 1 Salesforce certification. This was somewhat different to other certs and will definitely get you out of your comfort zone - if you are a JavaScript dev, then you'll have your work cut out with the Superbadge, while if you are a Salesforce dev, the exam will make you realise how much there is to JavaScript!

Format

Like Platform Developer 2, this certification requires the completion of an exam and Trailhead badges. I took the beta exam - 150 questions over 3 hours, and you don't get the section breakdown, just a pass or fail. An added challenge was that I took my exam a few days before the UK went into complete lockdown, so I wasn't exactly at my most relaxed travelling to the testing centre. Luckily when I arrived at Stratford, instead of the usual huge crowds of commuters and shoppers at 9am, I pretty much had the place to myself!



The other downside to a beta is that nothing is known about the cert in advance - in this case I didn't know the exam would be pure JavaScript and not cover any Salesforce features. Once I received my beta results it was clear why this was the case - a new Lightning Web Components Specialist superbadge was now part of the certification requirements - a lovely surprise!

Preparation

Like a number of certs that I've taken in the past, a lot of the preparation had actually taken place in the previous months and years in my regular work and side projects. Aura Components, Lightning Web Components, Salesforce CLI plug-ins, node scripts to orchestrate operations and Electron apps all had an element of JavaScript, and some, like Electron, were nothing but. So for me it wasn't so much learning JavaScript, but more about brushing up on the areas that I hadn't used for a while and making sure I was up on the latest best practice.



The most useful preparation was doing - building command line tools in node or GUI applications in Electron (node again) proved invaluable. I learned about modules to reuse code, asynchronous techniques to interact with the filesystem and other system commands, and context (aka this) because it is fundamental to the language and without understanding how it works you are in a world of pain!

My advice would be to read a book / take a course on Electron - it's a really enjoyable framework to develop against which makes it easy to stay focused.

Focus Areas

Looking back over my study notes, these are the areas that I focused on for the exam - as I took the beta exam and it was several months ago, your mileage may vary if you just cover these!

  • Variable naming rules and differences in strict mode
  • Data types - primitives, composites and specials
  • Operators and the effect of null/undefined/NaN
  • Scope, execution context, and the principle of least access - this is incredibly important in JavaScript, so spend time on it!
  • Context (and the fact that execution context is different!)
  • Explicit and implicit type conversion and what happens when the conversion fails
  • Strings - worth spending some time on this as there are a number of methods that do similar things (and sometimes have similar names!) with subtle differences
  • Functions - another area to spend time on - functions are first class citizens in JavaScript so can be used in different ways to a language like Apex. Make sure you understand higher order functions, function expressions and don't forget ES6 arrow functions!
  • Objects, arrays, maps and how to iterate/enumerate properties and values
  • DOM - worth spending a lot of time on this as it will stand you in good stead when writing LWC
  • Modules - import and export, importing individual items or as an object, re-exporting from submodules and dynamic module loading
  • Events - again, worth spending time on as you'll need this when developing JavaScript
  • Error handling
  • Asynchronous techniques - callbacks, Promises and async/await. Read up on Promises as they have more methods than .then() and .catch()!
  • Debugging
  • Testing JavaScript - I've done a lot of work with Jasmine and Mocha in the past, so my study here was around the limitations of testing in JavaScript and the boundaries between stubs/spies/mocks.
  • Decorators, iterators, generators - I found generators the hardest to wrap my head around.
  • Changing default context with apply/call/bind
  • Double bang (!!) - something I hadn't come across before - both illustrated in the sketch after this list
  • Array methods - map/reduce/filter/some/every/find
  • Transpilers
  • How a newline doesn't always equal a semi-colon
  • Switch statements and equality
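
To illustrate a couple of those - my example rather than an exam question:

const counter = {
    count: 3,
    report() { return this.count; }
};

// bind fixes the context, so report works however it is invoked later
const boundReport = counter.report.bind(counter);
console.log(boundReport());     // 3

// double bang coerces any value to its boolean equivalent
console.log(!!counter.count);   // true
console.log(!!'');              // false
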
Other Tips

The usual tips for certs apply here - don't panic if you don't know the right answer - see if you can eliminate some wrong answers to narrow things down, and make sure to mark any that you are unsure about so you can check back later. You won't know everything, so don't get despondent if you get a few wrong in a row.





Wednesday, 10 June 2020

Generating PDFs with a Salesforce CLI Plug-In



Introduction


Yeah, I know, another week, another plug-in. Unfortunately this is likely to continue for some time, as my plug-ins are only really limited by my imagination. I think this one is pretty cool though, and I hope, dear reader, you agree.

At Dreamforce 2019, Salesforce debuted Evergreen - a technology to allow Heroku microservices to be invoked from Apex (and other places) seamlessly. One of the Evergreen use cases is creating a PDF from Salesforce data, and one of the strap lines around this is that with Evergreen you have access to the entire Node package ecosystem. Every time I heard this I'd think that's true, but CLI plug-ins have the same access, as they are built on node, and like Evergreen there is no additional authentication required.

Essentially this is the opposite of Evergreen - rather than invoking something that lives on another server that returns a PDF file, I invoke a local command that writes a file to the local file system. I toyed with the idea of calling this plug-in Deciduous, but decided that was a bit of a lengthy name to type, and I'd have to explain it a lot! (and yes, I know that Evergreen is a lot more than creating a PDF, but I liked the image so decided to go with it).

The plug-in is available on NPM - you can install it by executing: sfdx plugins:install bbpdf

EJS Templates


There are a couple of ways to create PDFs with node - using something like PDFKit to add the individual elements to the document, or my preferred route - generating the document from an HTML page. And if I'm generating HTML from Salesforce data or metadata, I'm going to be using EJS templates.

I'm not going to go into too much detail in this post about the creation of the HTML via EJS, but if you want more information check out my blog post on how I used it in my bbdoc plug-in to create HTML documents from Salesforce metadata.  

My plug-in requires two flags to specify the template. The name of the template itself, specified via the --template/-t flag, and the templates directory, specified via the --template-dir/-d flag. The reason I need the directory is that any css or images required by the template will be specified relative to the template's location. You can see this working by cloning the samples repo and trying it out.

Salesforce Data


My plug-in provides two mechanisms to retrieve data from Salesforce.

Query Parameter


This is for the simple use case where the template requires a single record. The query to retrieve the record is supplied via the --query/-q flag.

Querying Salesforce data from a plug-in is quite straightforward: I specify that I need a username, which guarantees me access to an org:

protected static requiresUsername = true;

then I create a connection:

const conn = this.org.getConnection();

and finally I run the query, based on the supplied parameter, and check the results:

const result = await conn.query<object>(this.flags.query);

if (!result.records || result.records.length <= 0) {
   ...
}

Once I have the record, I need to pass the Salesforce data to the template in an object with the property name the template expects - this name is provided by the user through the --sobject/-s flag.

This is a bit clunky though, so here's the preferred route.

Query File


A query file is specified via the --query-file/-f flag. This is a JSON format file containing an object with a property per query:

{
    "contact": {
        "single": true,
        "query": "select Title, FirstName, LastName from Contact where id='00380000023TUDeAAO'"
    },
    "account": {
        "single": true,
        "query": "select id, Name from Account where id='0018000000eqTV7AAM'"
    }
}

The property name is the name that will be supplied to the EJS template - in this example there will be two properties passed to the template - contact and account.  The single property specifies whether the query will result in a single record or an array, and the query property specifies the actual SOQL query that will be executed. Using this mechanism, any number of records or arrays of records can be passed through to the template.
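
The template then references those properties directly - a minimal fragment for the queries above might look like this (my illustration rather than one of the shipped samples):

<p><%= contact.Title %> <%= contact.FirstName %> <%= contact.LastName %></p>
<p>Account: <%= account.Name %></p>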

Generating the PDF


In a recurring theme in this post, there are a couple of ways to generate the PDF from the HTML. The simplest in terms of setup and code is to use the html-pdf package, which is a wrapper around PhantomJS. While simple, this is sub-optimal - the package hasn't been updated for a while, development on Phantom is suspended and when I added this to my plug-in, npm reported a critical vulnerability, so clearly another way would be required.

The favoured mechanism on the interwebs was to use Puppeteer (a node API for headless Chromium), so I set about this. After some hoop jumping around creating temporary directories and copying the templates and associated files around, it wasn't actually too bad. I loaded the HTML from the file that I'd created and then asked for a PDF version of it:

import puppeteer from 'puppeteer';

// assumption: launch picks up the local chrome install via PUPPETEER_EXECUTABLE_PATH (see below)
const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.goto('file:///' + htmlFile, {waitUntil: 'networkidle0'});
const pdf = await page.pdf({ format: 'A4' });

One area where I did go off-piste slightly was not using the chromium bundled with puppeteer; instead I rely on the user defining an environment variable pointing at their local installation of chrome, and I use that. Mainly because I didn't want my plug-in to have to download 200-300MB before it could be used. This may cause problems down the line, so caveat emptor.

The environment variable in question is PUPPETEER_EXECUTABLE_PATH. Here are the definitions from my Macbook Pro running MacOS Catalina:

export PUPPETEER_EXECUTABLE_PATH="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"

and my Surface Pro running Windows 10:

setx PUPPETEER_EXECUTABLE_PATH "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"

The PDF is then written to the location specified by the --output/-o flag.

The next hoop to jump through came up when I installed the plug-in - Puppeteer is currently at 3.3.0 which requires Node 10.18.1 but the Salesforce CLI node version is 10.15.3. Luckily I was able to switch to Puppeteer version 2.1.1 without introducing any further issues.

Samples Repo




There's a lot of information in this post and a number of flags to wrap your head around - if you'd like to jump straight in with working examples, check out the samples repo. This has a couple of example commands to try out - just make sure you've set your chrome location correctly - and the PDFs that were generated when I ran the plug-in against my dev orgs.






Saturday, 30 May 2020

Documentor Plugin - Triggers (and Bootstrap)



Introduction

In previous posts about documenting my org with a Salesforce CLI plugin the focus has been on extracting and processing the object metadata. Now that area has been well and truly covered, it's time to move on. The next target in my sights is triggers - not for any particular reason, just that the real projects that I use this on have triggers that I want to show in my docs!

Trigger Metadata

Trigger metadata is split across two files - the .trigger file, which contains the actual Apex trigger code, and the .trigger-meta.xml file, which contains the supporting information - whether it is active, its API version. The .trigger file itself contains the object that it is acting on and the action(s) that will cause it to fire:

trigger Book_ai on Book__c (before insert) {

}

Luckily the syntax is rigid - the object comes after the on keyword and the actions appear in brackets after that, so just using standard JavaScript string methods I can locate the boundaries and pull out the information that I want - and not just to display it.
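
Something along these lines does the job - a sketch rather than the actual plug-in code, assuming body holds the contents of the .trigger file:

const onIdx = body.indexOf(' on ') + 4;
const openIdx = body.indexOf('(', onIdx);
const closeIdx = body.indexOf(')', openIdx);

const objectName = body.substring(onIdx, openIdx).trim();   // 'Book__c'
const actions = body.substring(openIdx + 1, closeIdx)
                    .split(',')
                    .map(action => action.trim());          // ['before insert']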

Checking for Duplicates

As we all know, when there is more than one trigger for an object and action, the order they will fire in is not guaranteed. If you are a gambling aficionado then you may find this enjoyable, but most of us prefer a dull, predictable life when it comes to our business automation.

Once I've processed the trigger bodies and extracted the object names and actions, one of the really nice side-effects is that I can easily figure out if there are any duplicates and call these out on the triggers page (I've updated the sample metadata to add duplicate triggers - they don't actually do anything so wouldn't really be an issue anyway!):
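
With the object and actions extracted, duplicate detection is a simple accumulation exercise - a sketch, assuming an array of parsed trigger details with name, objectName and actions properties:

const triggersByKey = new Map();
for (const trig of triggers) {
    for (const action of trig.actions) {
        const key = trig.objectName + ':' + action;
        triggersByKey.set(key, (triggersByKey.get(key) || []).concat(trig.name));
    }
}

// any object/action combination with more than one trigger is called out on the page
const duplicates = [...triggersByKey.entries()].filter(([, names]) => names.length > 1);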

Bootstrap

Those that have been following this series of posts may notice that the styling looks a little better now than in earlier versions - this is because I've moved over to the Bootstrap library, pulled in via the CDN so I don't need to worry about including it in my plug-in. 

I've also updated the main page to use cards with badges to make it clear if there are any problems with the processed metadata:



Which I think we can all agree looks a lot better. You can see the latest version of the HTML generated from the sample metadata at https://bbdoc-example.herokuapp.com/index.html

I've also moved some duplicated markup out to an EJS include, which is as easy as I hoped it would be. The markup goes into a file with a .ejs suffix like any other - here's the footer from my pages:

<div class="pt-3"></div>
<footer class="footer">
    <div class="container">
        <div class="row justify-content-between">
            <div class="col-6">
                <p class="text-left"><small class="text-muted">Generated <%=footer.generatedDate%></small>
                </p>
            </div>
            <div class="col-6">
                <p class="text-right"><small class="text-muted">Bob Buzzard's Org Documentor
                        v<%=footer.version%></small></p>
            </div>
        </div>
    </div>
</footer>

And I can include this just by specifying the path:

<%- include ('../common/footer') %>

Note that I've put the footer in a common subdirectory that is a sibling of where the template is stored. Note also that I've used the <%- tag, which says to include the raw HTML output from the include. Finally, note that I don't pass anything to the footer - it has access to exactly the same properties as the page that includes it. 

As usual, I've tested the plug-in against the sample metadata repo on MacOS and Windows 10. You can find the latest version (3.3.1) on NPMJS at the link shown below.


Saturday, 23 May 2020

Going GUI over the Salesforce CLI Part 3

Introduction


In part 1 of this series I introduced my Salesforce CLI GUI with some basic commands.
In part 2 I covered some of the Electron specifics and added a couple of commands around listing and retrieving log files.
In this instalment I'll look at how a command is constructed - the configuration of the parameters and how these are specified by the user.

Command Configuration


As mentioned in part 1, the commands are configured via the app/shared/commands.js file. This defines the groupings and the commands that belong in those groupings, which are used to generate the tabs and tiles that allow the user to execute commands. For example, the debug commands added in the last update have the following configuration (detail reduced otherwise this post will be huge!) :

{
    name : 'debug',
    label: 'Debugging',
    commands : [
        {
            name: 'listlogs',
            label: 'List Logs',
            icon: 'file',
              ///////// detail removed //////////
        },
        {
            name: 'getlog',
            label: 'Get Log File',
            icon: 'file',
              ///////// detail removed //////////       
        }
    ]
}

which equates to a tab labelled Debugging and two command tiles - List Logs and Get Log File:


clicking on a command tile opens up a new window to allow the command to be defined, and here is where the rest of the configuration detail comes into play:


the screenshot above is from the List Logs command, which has the following configuration (in its entirety):

{
    name: 'listlogs',
    label: 'List Logs',
    icon: 'file',
    startMessage: 'Retrieving log file list',
    completeMessage: 'Log file list retrieved',
    command: 'sfdx',
    subcommand: 'force:apex:log:list',
    instructions: 'Choose the org that you wish to extract the log file details for and click the \'List\' button',
    executelabel: 'List',
    refreshConfig: false,
    refreshOrgs: false,
    json: true,
    type: 'brand',
    resultprocessor: 'loglist',
    polling: {
        supported: false
    },
    overview : 'Lists all the available log files for a specific org.',
    params : [
        {
            name : 'username',
            label: 'Username',
            type: 'org',
            default: false,
            variant: 'all',
            flag: '-u'
        }
    ]        
}

the attributes are used in the GUI as follows:

  • name - unique name for the command, used internally to generate ids and as a map key
  • label - the user friendly label displayed in the tile
  • icon - the SLDS icon displayed at the top left of the command page
  • startMessage - the message displayed in the log modal when the command starts
  • completeMessage - the message displayed in the log modal when the command successfully completes
  • command - the system command to be run to execute the command - all of the examples thus far use sfdx 
  • subcommand - the subcommand of the executable
  • instructions - the text displayed in the instructions panel below the header. This is specific to defining the command rather than providing help about the underlying sfdx command
  • executeLabel - the user friendly label on the button that executes the command
  • refreshConfig - should the cached configuration (default user, default dev hub user) be refreshed after running this command - this is set to true if the command changes configuration
  • refreshOrgs - should the cached organisations be updated after running this command - set to true if the command adds an org (create scratch org, login to org) or removes one (delete scratch org, logout of org)
  • json - does the command support JSON output
  • type - the type of styling to use on the command tile
  • resultProcessor - for commands that produce specific output that must be post-processed before displaying to the user, this defines the type of processor
  • polling - is there a mechanism for polling the status of the command while it is running
  • overview - text displayed in the overview panel of the help page for this command
  • params - the parameters supported by the command.

Parameters


In this case there is only one parameter, but it's a really important one - the username that defines which org to retrieve the list of logs from. This was a key feature for me - I tend to work in multiple orgs on a daily (hourly!) basis, so I didn't want to have to keep running commands to switch between them. This parameter allows me to choose from my cached list of orgs to connect to, and has the following attributes:

  • name - unique name for the parameter
  • label - the label for the input field that will capture the choice
  • type - the type of the parameter - in this case the username for the org connection
  • variant - which variant of orgs should be shown:
    • hub - dev hubs only
    • scratch - scratch orgs only
    • all - all orgs
  • default - should the default username or devhubusername be selected by default
  • flag - the flag that will be passed to the sfdx command with the selected value

Constructing the Parameter Input


As I work across a large number of orgs, and typically a number of personas within those orgs, the username input is implemented as a datalist - a dropdown that I can also type in to reduce the available options - here's what happens if I limit to my logins:


as the page is constructed, the parameter is converted to an input by generating the datalist entry and then adding the scratch/hub org options as required:

const datalist=document.createElement('datalist');
datalist.id=param.name+'-list';

for (let org of orgs.nonScratchOrgs) {
    if ( (('hub'!=param.variant) || (org.isDevHub)) && ('scratch' != param.variant) ) {
        addOrgOption(datalist, org, false);
    }
}

if ('hub' != param.variant) {
    for (let org of orgs.scratchOrgs) {
         addOrgOption(datalist, org, true);
    }                    
}

formEles.contEle.appendChild(datalist);

The code that generates the option pulls the appropriate information from the cached org details to generate a label that will be useful:

let label;
if (org.alias) {
    label=org.alias + ' (' + org.username + ')';
}
else {
    label=org.username;
}

if (scratch) {
    label += ' [SCRATCH]';
}

and this feels like a reasonable place to end this post - in the next part I'll show how the command gets executed and the results processed.
