
Sunday, 26 April 2020

Documenting from the metadata source with a Salesforce CLI Plug-In - Part 4


Introduction


In Part 1 of this series, I explained how to generate a plugin and clone the example command.
In Part 2 I covered finding and loading the package.json manifest and my custom configuration file.
In Part 3, I explained how to load and process source format metadata files.

In this episode I'll show how to enrich the field information by pulling in information from elsewhere in the metadata - specifically global or standard value sets. When I'm using a global value set for picklist values, all I have in my field XML metadata is a reference to the set:

<valueSet>
    <restricted>true</restricted>
    <valueSetName>Genre</valueSetName>
</valueSet>

and the actual values live in the (in this case) Genre.globalValueSet-meta.xml file in the globalValueSets folder.
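
For reference, a global value set file looks something like this - the structure is what matters here, as the values below are illustrative rather than lifted from the example repo:

<?xml version="1.0" encoding="UTF-8"?>
<GlobalValueSet xmlns="http://soap.sforce.com/2006/04/metadata">
    <customValue>
        <fullName>Fantasy</fullName>
        <default>false</default>
        <label>Fantasy</label>
    </customValue>
    <customValue>
        <fullName>Crime</fullName>
        <default>false</default>
        <label>Crime</label>
    </customValue>
    <masterLabel>Genre</masterLabel>
    <sorted>false</sorted>
</GlobalValueSet>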

When I create the HTML report for the object metadata, I want to pull these values in, otherwise whoever is viewing the report has to go into the Salesforce org to figure out just which Genres are available.

Figuring the Value Set Type


Fields using standard value sets don't actually specify the value set name; instead, it has to be derived from the field using the information in the Metadata API documentation. I try to find a standard value set matching the field, and if none exists I fall back to loading the named global value set. The actual loading is the same, and should be familiar to anyone who read Part 2 - I use fast-xml-parser to load and parse the XML - however, as multiple fields can refer to the same value set, I cache the parsed metadata:

// cache of parsed global value sets, keyed by name
let globalValueSetByName={};

let getGlobalValueSet = (dir, name) => {
    let valueSet=globalValueSetByName[name];
    if (null==valueSet) {
        // not seen this value set before - parse it and add it to the cache
        globalValueSetByName[name] = valueSet = 
             parseXMLToJS(join(dir, 'globalValueSets', name + 
                                    '.globalValueSet-meta.xml'));
    }

    return valueSet;
}

I then assign this to a new property on the field named gvs. As I'm processing the metadata in an offline mode, if I can't find the global value set, the property will be null.
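
Putting it together, here's a rough sketch of how the field ends up with its gvs property. This isn't the actual plug-in code - deriveStandardValueSetName and getStandardValueSet are hypothetical helpers standing in for the standard value set handling described above, with getStandardValueSet loading from the standardValueSets folder in the same way that getGlobalValueSet does:

let enrichField = (dir, field) => {
    if (field.valueSet) {
        // try to derive a standard value set name from the field first
        let svsName=deriveStandardValueSetName(field);
        if (svsName) {
            field.gvs=getStandardValueSet(dir, svsName);
        }
        else {
            // no standard value set matches - fall back to the named global value set
            field.gvs=getGlobalValueSet(dir, field.valueSet.valueSetName);
        }
    }
}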

Adding the Information to the Field


When processing the field, I expand the values from the value set by iterating them and extracting the names - note that if the gvs property isn't populated, I output a message that the value set isn't in version control:


var vsName=field.valueSet.valueSetName;
result+='<b>Global Value Set (' + vsName +')</b><br/>';
if (field.gvs) {
   field.gvs.GlobalValueSet.customValue.
      forEach(item => result+='&nbsp;&nbsp;' + item.fullName + '<br/>');
}
else {
   result+='Not version controlled';
}

This is a relatively simple piece of enrichment - pulling some information from another XML metadata file and inserting it into the row for the field. However, what gets included doesn't have to be metadata - it can be additional information, a diagram, anything that can be written out to HTML - quite powerful once you realise that.


One More Thing


The version of the plugin is now at 2.0.3. From the user perspective nothing should change, but the mechanism for generating the report is completely different, which I'll write more about in the next exciting episode.


One More One More Thing



You can view the output for the example repo of Salesforce metadata on Heroku at: https://bbdoc-example.herokuapp.com/index.html



Saturday, 18 April 2020

Documenting from the metadata source with a Salesforce CLI Plug-In - Part 3


Introduction

In Part 1 of this series, I explained how to generate a plugin and clone the example command. In Part 2 I covered finding and loading the package.json manifest and my custom configuration file.

In this instalment, I'll show how to load and process source format metadata files - a key requirement when documenting from the metadata source, I'm sure you'll agree.

Walking Directories

In order to process the metadata files, I have to be able to find and iterate them. As I'm processing object files, I also need to process the fields subdirectory that contains the metadata for the custom fields that I'm including in my document. I'll use a number of standard functions provided by Node in order to achieve this.

I know the top level folder name for the source format data, as it is the parameter of the -s (--source-dir) switch passed to my plugin command. To simplify the sample code, I'm pretending I know that the custom objects are in the objects subfolder, which isn't a bad guess, but in the actual code this information is pulled from the configuration file. To generate the combined full pathname, I use the standard path.join function, as this figures out which operating system I'm running on and uses the correct separator:

import { join } from 'path';
...
let objSourceDir=join(sourceDir, 'objects');

I then check that this path exists and is indeed a directory - I use the standard fs.lstatSync to check that the path is a directory, but a homegrown function to check that it exists:

import { lstatSync } from 'fs';
import { fileExists } from '../../shared/files';
 ...
if ( (fileExists(objSourceDir)) && 
     (lstatSync(objSourceDir).isDirectory()) ) {
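
fileExists is the homegrown helper mentioned above - the real implementation lives in the shared/files module, but a minimal version could be as simple as:

import { existsSync } from 'fs';

// thin wrapper so the calling code reads naturally
let fileExists = (path) => existsSync(path);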

Once I know the path is there and is a directory, I can read the contents - again using a standard function, fs.readdirSync - and iterate them:

import { appendFileSync, mkdirSync, readdirSync } from 'fs';
  ...
let entries=readdirSync(objSourceDir);
for (let idx=0, len=entries.length; idx<len; idx++) {
    let entry=entries[idx];
    let entryPath=join(objSourceDir, entry);
}

Reading Metadata

The Salesforce metadata is stored as XML format files, which presents a slight challenge when working in JavaScript. I can use the DOM parser, but I find the code that I end up writing looks quite ugly, with lots of chained functions. My preferred solution is to parse the XML into a JavaScript object using a third party tool - there are a few of these around, but my current favourite is fast-xml-parser because

  • it's fast (clue is in the name)
  • it's synchronous - this isn't a huge deal when writing a CLI plugin, as the commands are async and can thus await asynchronous functions, but when I'm working with the command line I tend towards synchronous
  • it's popular - 140k downloads this week
  • it's MIT licensed

It's also incredibly easy to use. After installing it by running npm install --save fast-xml-parser, I just need to import the parse function, load the XML format file into a string and parse it!

import { parse } from 'fast-xml-parser';
import { readFileSync } from 'fs';

let objFileBody=readFileSync(
    join(entryPath, entry + '.object-meta.xml'),
    'utf-8');
let customObject=parse(objFileBody);

My customObject variable now contains a JavaScript object representation of the XML file, so I can access the XML elements as properties:

let label=customObject.CustomObject.label;
let description=customObject.CustomObject.description;

This allows me to extract the various properties that I'm interested in for my document.
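
The fields subdirectory mentioned earlier gets the same treatment - each custom field has its own <fieldname>.field-meta.xml file. A simplified sketch (not the exact plug-in code) of processing it:

let fieldsDir=join(entryPath, 'fields');
if ( (fileExists(fieldsDir)) && (lstatSync(fieldsDir).isDirectory()) ) {
    for (let fieldFile of readdirSync(fieldsDir)) {
        // parse the field XML into a JavaScript object, rooted at CustomField
        let field=parse(readFileSync(join(fieldsDir, fieldFile), 'utf-8'));
        let fieldLabel=field.CustomField.label;
    }
}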

In the next exciting episode I'll show how to enrich the objects by pulling in additional information from other files.

Monday, 13 April 2020

The Virtual Conference


Introduction

As the COVID-19 virus continues its inexorable march around the world, the impact on events is ramping up. Many are postponing until later in the year, which feels likely to lead to some cancellations as three times the usual number of events compete for the same attendees.

Others are moving online and offering an entirely virtual experience, which is very different to events I've experienced before. Of course there has always been an element of virtual attendance to many Salesforce events - the Salesforce and TrailheaDX keynotes have been streamed live for a few years now, but this is more akin to a broadcast of the session as there is a large audience in attendance. A virtual conference moves every session online, and all of the networking.

The Engagement Challenge

Attending an event in person makes engagement fairly straightforward. You are out of the office for the day (or days, when it's something like Dreamforce) and in a physical location, with nowhere else to go. While some people do attend a physical conference briefly and then head off to the nearest bar/casino/beach, for the sake of this post I'm assuming that is a negligible number and ignoring them. As this is what you are doing for the day, you'll likely be fully committed and trying to get the most out of it. If nothing else, it will justify the time out of the day job and mean you are allowed to go to the next conference. You might have to get on your phone/laptop if something comes up, but it's likely to be kept to a minimum.

For a virtual conference, will you really take that time out of work to virtually attend, or will you try to juggle the day job and picking up a few key sessions? If the sessions are going to be recorded and published at a later date, can you just wait and consume them at your leisure? In my view, to get people to actually attend rather than just view content in the background or time-shifted, you need some scarcity in there. Invert events like Dreamforce and don't broadcast/record the keynote, instead just give access to those that bought (?) a ticket. Using a professional keynote speaker will make this easier, as they usually don't allow recording/broadcasting of their content anyway!

The Networking Challenge

Networking is a bigger draw than talks at some events, especially those that are more focused on selling than educating. When you are all in the same physical location, you'll end up having a lot of accidental conversations as you pass each other collecting lunch or in the expo hall. In a virtual event, these kinds of conversations are less likely (although not impossible - many of the tools support ad-hoc rooms/tables for conversations), and so more planning is required. I think we'll see a more open attendee list as virtual events continue, allowing attendees to plan who they want to talk to and when. If you are running a virtual event, you'll definitely want to integrate the tools that allow networking, as these conversations will happen anyway and if you don't offer the mechanism, they'll move to an external backchannel and you'll miss the opportunity to be part of it.

The Session Challenge

What should a session at a virtual conference look like? There are a number of options:

  • Present live in a studio with an audience. The audience will likely consist of other speakers, plus any event staff who have some spare time. This will probably give the nearest equivalent to being part of a physical session, but will also come with the price of the studio, equipment and staff.  
  • Present live from home office. The easiest to set up, as the onus is entirely on the speaker to make sure they have good equipment and connectivity. The most difficult to control, for exactly the same reason. 
  • Pre-record and present as live. This strikes me as the safest option - get the speaker to pre-record their presentation, but have them live for their introduction and Q&A session after the event. I'm not the biggest fan of the pre-recorded demo in a physical environment, but I think it makes a lot of sense when virtual.  If done well, and if the speaker is on camera and doesn't suddenly change costume for the pre-recorded section, nobody will be any the wiser anyway, and it gives a guarantee of quality.
  • Pre-record and playback without the speaker. I think this is probably my least favoured approach, as there's really no difference to watching a talk recording on YouTube or a similar site, and I'm pretty sure I'd go for a session with live interaction if I had to choose between two at the same time.

The Upsides

Thus far I've been focused on problems - no surprise there, as a CTA a lot of my job is looking at as-is and to-be and trying to figure out what is already struggling, what won't scale in the future and generally trying to identify problematic areas. But there are a number of upsides to the virtual conference:
  • Less to organise.
    If you are responsible for running a conference, doing it virtually means you don't have to worry about venue(s) (unless you go for studio sessions), catering, swag, or stands. It doesn't all go away, but a lot of the costly stuff does.
  • Lower cost for attendees.
    Attending an event in a physical location requires travel, accommodation and subsistence, all of which cost. If the event is in an expensive location like London or San Francisco, potential attendees from parts of the world with lower earnings are at a disadvantage. Spending a week's earnings to attend a conference in person is one thing, but it's quite another if it's several months' salary.
  • Lower ticket prices.
    Potentially free, but unlikely for studio sessions or events involving professional speakers.
  • Wider pool of attendees.
    Theoretically anyone in the world with an internet connection and an interest in your content, although in reality limited to the timezones that your event reasonably overlaps and how well the technology you choose scales.
  • Easier on nervous speakers.
    Virtual events will be a pipeline for speakers that aren't yet confident enough to stand up on stage in front of an audience. They can get started from the comfort of their own home to build up valuable experience before trying local, in-person events such as Developer or Admin groups.
  • Lower environmental impact.
    Not having people fly in from all over the world certainly adds to a conference's green credentials.

The Future

So what does the future hold? Will all events be virtual going forward? I can't see that happening, especially for physical events like Dreamforce that sell out quickly and generate a huge amount of interest in Salesforce's products. I can see more of the community Dreamin' type events going this route - it's a good way to start something that can build into a physical event if there is enough interest. I do believe that many events in the future will be a hybrid of physical and virtual, both from the attendee and speaker perspective.

Note: Some of the above post was informed by my experience at London's Calling, which pivoted from a physical to a virtual event in about a week - a fantastic effort by the organisers.

Friday, 10 April 2020

Documenting from the metadata source with a Salesforce CLI Plug-In - Part 2

Introduction


In Part 1 of this series, I explained how to generate a plugin and clone the example command. In this instalment I'll look at customising the cloned command to locate and load the package.json manifest, and to load a configuration file that defines how my objects should be divided up for documentation purposes. All loading, all the time.

Customising the Command

Renaming

The first thing to do when customising the command is to rename it - as I cloned the sample org command, the class in the source code is still named Org, even though the command is externally known as bbdoc:doc thanks to the folder structure:

export default class Org extends SfdxCommand {

so I change the class name to Doc.

Flags

The next thing is to change the flags supported by the command to those that I need for documentation purposes. The flags for the org command are defined as follows:

protected static flagsConfig = {
  // flag with a value (-n, --name=VALUE)
  name: flags.string({char: 'n', description: messages.getMessage('nameFlagDescription')}),
  force: flags.boolean({char: 'f', description: messages.getMessage('forceFlagDescription')})
};

This shows that there is slightly more to this than changing the flag names - notice the description properties aren't hardcoded strings. Instead they are retrieved via a messages object. Here's the setup for the messages object:


// Initialize Messages with the current plugin directory
Messages.importMessagesDirectory(__dirname);

// Load the specific messages for this file. Messages from @salesforce/command, @salesforce/core,
// or any library that is using the messages framework can also be loaded this way.
const messages = Messages.loadMessages('sample', 'org');


The final line shows that the messages are loaded based on the command name - org. In the generated plugin, messages are stored in the messages folder, and the command-specific messages live in a file named <command>.json - so for the org command, messages/org.json, which has the following contents:

{
  "commandDescription": "print a greeting and your org IDs",
  "nameFlagDescription": "name to print",
  "forceFlagDescription": "example boolean flag",
  "errorNoOrgResults": "No results found for the org '%s'."
}

So as part of defining my flags, I also need to define their descriptions and the description of the command itself. The flags for my command are:

  • config       - configuration file
  • report-dir  - directory to store the generated report
  • source-dir - source directory containing the org metadata

To hold these descriptions, I create a doc.json file:

{
  "commandDescription": "generate documentation for an org",
  "configFlagDescription": "configuration file",
  "reportDirFlagDescription": "directory to store the generated report",
  "sourceDirFlagDescription": "source directory containing the org metadata"
}

I initialise the messages with this new file:

const messages = Messages.loadMessages('sample', 'doc');

and update the flags definition in the source code to:

protected static flagsConfig = {
  // flags with a value (-c --config=VALUE, -r --report-dir=VALUE, -s --source-dir=VALUE)
  "config": flags.string({char: 'c', description: messages.getMessage('configFlagDescription')}),
  "report-dir": flags.string({char: 'r', required: true, description: messages.getMessage('reportDirFlagDescription')}),
  "source-dir": flags.string({char: 's', required: true, description: messages.getMessage('sourceDirFlagDescription')})
};
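
With the flags defined, the command can be run via bin/run along the lines of the following - the paths here are purely illustrative:

bin/run bbdoc:doc -s force-app/main/default -r report -c bbdoc-config.json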

Reading the package.json Manifest

The version number of my plugin is stored in the package.json manifest file and I want to include this in the generated HTML, so it's time to locate and read that file. I know the package.json is at the root of my plugin directory, but I don't know where the plugin has been installed. Node has a standard File System module, fs, that has been extended by Salesforce to add a handy utility function - traverseForFile. This starts from the folder the command lives in and makes its way up the folder hierarchy until it finds the requested file, or hits the root directory for the disk. I execute this to find the location of the package.json file, which is in the plugin root:

const pluginRoot=await fs.traverseForFile(__dirname, 'package.json');

Having found the directory, I then read the package.json file. Note that I'm using the join function from the standard Path module - this allows me to build a path without worrying about the operating system or path separator. Note also that, as reading a file is an asynchronous operation, I use the await keyword to pause the command until the read is complete.

// get the version
const packageJSON=await fs.readFile(join(pluginRoot, 'package.json'), 'utf-8');

Having read the JSON from the file, I parse it into a JavaScript object using the standard JSON.parse function and extract the version property:

const pkg=JSON.parse(packageJSON);
let version=pkg.version;

Loading the Configuration

I find I'm using configuration files more and more often with plugins. While I could add my configuration to sfdx-project.json, and I do if it is only a few items, when I start getting into lengthy configuration it feels like I'm polluting that file with things that don't belong there.

For this plugin and the example Salesforce metadata, I have the following configuration in bbdoc-config.json:

{
    "objects": {
        "name": "objects",
        "description": "Custom Objects", 
        "subdirectory": "objects",
        "suffix":"object",
        "groups": {
            "events": {
                "name":"events", 
                "title":"Event Objects",
                "description": "Objects associated with events",
                "objects": "Book_Signing__c, Author_Reading__c"
            }
            ,
            "other": {
                "name":"other", 
                "title":"Uncategorised Objects",
                "description": "Objects that do not fall into any other category"
            }
        }
    }
}

The objects property contains the configuration for generating the HTML document from the object metadata. It's in its own property as I intend to add more capability in the future. There's some information about the metadata, where it can be found, the suffix for the metadata files, and then a couple of groupings for the objects - the event specific objects and the rest.

The configuration file is optional - if one isn't provided there will be a single object grouping of uncategorised, so I check if the flag was supplied before trying to load any file.
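
The default configuration is just a hardcoded object with the single uncategorised group - something along these lines (a sketch rather than the actual plug-in code):

const defaultConfig = {
    "objects": {
        "name": "objects",
        "description": "Custom Objects",
        "subdirectory": "objects",
        "suffix": "object",
        "groups": {
            "other": {
                "name": "other",
                "title": "Uncategorised Objects",
                "description": "Objects that do not fall into any other category"
            }
        }
    }
};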

// load the config, using the default if nothing provided via flags
let config;

if (!this.flags.config) {
  this.ux.log('Using default configuration');
  config=defaultConfig;
}
else {
  config=await fs.readJson(this.flags.config);
}

This time I'm using another Salesforce extension to the fs module - readJson - which reads and parses the file into an object in one go. I should probably refactor the loading of the package.json file to use this!
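
That refactor would reduce the package.json handling to something like:

const pkg=await fs.readJson(join(pluginRoot, 'package.json'));
let version=pkg.version;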

In the next instalment, I'll show how to load and process the metadata source files. As before, if you can't wait, you can see the plug-in source on GitHub or install it yourself from npm.


Saturday, 4 April 2020

Documenting from the metadata source with a Salesforce CLI Plug-In - Part 1

Introduction


(This series accompanies the talk that I gave to an empty room and a camera crew at London's Calling 2020 and a revised version to the virtual meetup Helsinki Developer Group the following week)

As regular readers of this blog know, I'm a huge fan of the Salesforce CLI. I reckon getting on for half of the talks/blogs that I've done in the last couple of years have been around using or extending it. I've even wrapped it in a GUI that I use multiple times every day, to open orgs, create scratch orgs or work with packages. I'm always looking for new ways to leverage it, and a year or so ago I was asked to create a document with details of the objects and fields from one of my Salesforce orgs. While I could do this manually, that's a lot of effort for something that will almost certainly be wrong in a couple of days. A better idea would be to automatically generate the document from the metadata.

BP (Before Plug-Ins)


This predated plug-ins, so I wrote a bunch of NodeJS code that behaved as a CLI in its own right, using the Commander package. While this is fine, it's another tool that needs to be installed and maintained, so once plug-ins came along I added this to my list of things to migrate.

AP (After Plug-Ins)


Now that the Salesforce CLI supports plug-ins, that is always my first choice for anything of this nature. My team all have the CLI installed as that is how we develop and deploy on Salesforce, so distribution is simplified as the container is already on everyone's machine. All they need to do is run the plugins:install command to add a plug-in, and if I push an update they can upgrade via the plugins:update command.
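
For example, assuming the plug-in is published to npm as bbdoc, that looks like:

sfdx plugins:install bbdoc
sfdx plugins:update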

The Metadata


My metadata is in source format and is available in a GitHub repo. The object model is based on a bookstore and has the following items:


  • Book - details of a specific book
  • Author - an individual who has written one or more books
  • Publisher - an organisation that publishes books from multiple authors
  • Book Signing - an event where an author signs copies of their book that customers have purchased
  • Author Reading - an event where an author reads from their book to an invited audience 

I want to create an HTML document that pulls information from the metadata, separating the objects into two groups - Events (Book Signing and Author Reading) and the rest.

Creating a Plug-In

Creating a plug-in is as simple as executing the plugins:generate command and answering some questions. The following example creates a plug-in named 'sample'. Note that the plug-in is generated in the current working directory, so I create the sample directory first, or everything ends up in the root of my drive!

kbowden@Keirs-MacBook-Pro ~ *$ mkdir sample
kbowden@Keirs-MacBook-Pro ~ *$ cd sample
kbowden@Keirs-MacBook-Pro sample *$ sfdx plugins:generate

     _-----_     ╭──────────────────────────╮
    |       |    │     Time to build an     │
    |--(o)--|    │     sfdx-cli plugin!     │
   `---------´   │      Version: 1.1.2      │
    ( _´U`_ )    ╰──────────────────────────╯
    /___A___\   /
     |  ~  |
   __'.___.'__
 ´   `  |° ´ Y `

? npm package name sample
? description Sample plug-in for blog
? author Keir Bowden @keirbowden
? version 0.0.0
? license MIT
? Who is the GitHub owner of repository (https://github.com/OWNER/repo) keirbowden
? What is the GitHub name of repository (https://github.com/owner/REPO) sample

Once I've answered the questions, the command creates some files in the directory and installs a bunch of packages. You don't need to worry about the details, just make sure that the following is output when it completes to indicate the command was successful:

Created sample in /Users/kbowden/sample

(replacing sample and the directory with your specific details, obviously!)

Executing a Plug-In Command


When you generate a plug-in, you get a command - hello:org.  This connects to the Salesforce org associated with the default user, or the user that you supply when running the command, and retrieves some information. To test the command, you can use the bin/run script, which avoids having to install into the Salesforce CLI while you are building your plug-in. The output below is from my Mentz code mentoring org:

kbowden@Keirs-MacBook-Pro sample *$ bin/run hello:org -u MENTZLIVE
Hello world! This is org: Bob Buzzard
My hub org id is: 00D30000000J2G8EAK

Creating a New Command


The easiest way to create a new command is to copy the example and change it to your requirements. I'd very much advise taking baby steps when you are doing this for the first time, so try to keep the command working and re-run it when making changes.

The example command lives in the src/commands folder - the hello folder defines the topic, and org.ts contains the command source code. Copying this to src/commands/bbdoc/doc.ts adds the bbdoc:doc command:

kbowden@Keirs-MacBook-Pro sample *$ bin/run bbdoc:doc -u MENTZLIVE
Hello world! This is org: Bob Buzzard
My hub org id is: 00D30000000J2G8EAK

So running a couple of commands, answering a few questions and copying a file gives a whole new (if familiar) plug-in and command. At this point the plug-in is complete and can be published to npm. It's unlikely to see much uptake, but there's nothing else that needs to be done to make it distributable.

In the next instalment, I'll start the customisation of the bbdoc:doc command. If you can't wait, you can see the plug-in source on GitHub or install it yourself from npm.
