Saturday, 7 September 2019

Mentz - The Story Continues

Introduction

It’s been over three months since I launched Mentz on an unsuspecting Salesforce ecosystem, and the results have far exceeded my expectations. I’d have been quite happy with a couple of people attempting the challenges that I mentored myself, but it’s fair to say we are well past that. At the time of writing (September 2019) we have 90 mentees, 24 mentors and 45 solutions that have been mentored. The standard of mentoring is incredible - a lot of very smart people are putting a lot of effort into helping others in their development journey.

Release, Review, Repeat

  • Creating and maintaining the tooling around Mentz has been an interesting aspect for me, not least because of how wrong I’ve been about some of it. 
  • My original plan was to have two stages in a solution lifecycle - mentoring and publication. A mentee would iterate on their solution based on mentor feedback and when they were completely happy with it, publish it for the wider mentee community to see and comment on. This was pretty much entirely unsuccessful and just caused confusion about where to post solutions. We now have a single place where solutions are published which anyone can access.
  • Solutions were originally uploaded as chatter files until one of the Mentors asked me to turn on “Allow Inclusion of Code Snippets from UI” - now if the solution fits into the 10k chatter message limit it is uploaded as a snippet, which is a lot easier to respond to (thanks Adam Lasek).
  • The challenges typically involved a class with multiple methods to be built out. While I always created my own (unpublished) reference solutions, that didn’t take me very long as I’d come up with the scenario. When a few solutions from mentees stacked up and I mentored them on a weekend, it took me almost all day! So I created a couple of short challenges to see what kind of reception they would get.
  • I used to regularly post into the Mentor group to let everyone know if there were any solutions awaiting a response. This always led to apologetic replies from the Mentors, which wasn’t what I was after at all - I just wanted to avoid them having to poll the org to see if there was anything they could help with. I replaced this with a lightning web component rendered in the Mentors group home page that listed any unanswered solutions, which seems to have helped.

There have also been some other changes to try to make things easier/more interesting:

  • A Suggestion Box repository for mentees (or anyone really) to suggest challenges they’d like to take
  • The ability to lock a solution to a single mentor. The idea here is that a mentor claims a solution and is the only one (aside from the original author) that can respond to it. I haven’t turned this on yet as AFAIK we’ve only had one instance where two mentors were working on responses to the same solution at the same time.
  • Mentee and Mentor leaderboards - people seem to like these so I’ve added them to the group home pages. I’m at the top of the Mentor leaderboard, but only because I mop up any solutions that haven’t had a response after a few days. I don’t have to do this, but I do feel a sense of responsibility having enticed mentees to join.
  • The Mentz Salesforce CLI plugin now has a challenges topic that lists the available challenges (optionally including those already completed) and clones the repo for the user:
            $ sfdx mentz:publish --targetusername myOrg@example.com --all

              Select a challenge
              1) COLLECTION SIMPLE 1
              2) CONDITIONAL SIMPLE 1 (Completed)
              0) Quit

              Choose a challenge: 1
              Cloning repository = https://github.com/mentzbb/SimpleCollections1
              ...
             Done

What’s Next

As I wrote in my original Mentz blog post:

If things do take off, I don’t want to handle everything out of a single org myself, as that will limit scale. Instead I'll make the code available as a package so that others can host their own instance of Mentz, in their own org. We'll all use a common set of challenges, but the actual mentoring will be distributed.

With 90 mentees it feels like my first org is getting pretty close to as big as I want it to get, so it’s time to test the waters and see if anyone else wants to host Mentz. Here are a few things to think about before you jump in:

  • This will be a developer edition set up by me that you then take over - this isn’t (only) because I’m a megalomaniac, but more because I want to go through the setup a few times to get it documented before leaving people to face it on their own. The Mentz code will be deployed as unpackaged code - it might be packaged up in the future, but at present packaging wouldn’t add much and would just create more work for me :) It also allows me to keep an eye on the standard of mentoring, as it’s been stellar so far and I’d like to keep it up there.
  • There’s not very much housekeeping - mostly it’s emailing the Mentee/Mentor requestors and then executing a lightning action to convert the request to a user. I usually do half a dozen or so a week.
  • The challenges are the same for every Mentee, all that changes is the org they post solutions to and who mentors.
  • If you host a Mentz instance, be prepared to act as the Mentor of last resort - this doesn’t happen often, but I think it’s quite important to make sure that posts are getting answered. I usually allow about a week for the mentors to dive in and then start picking things up myself, usually over the weekend.
  • Lots of people will register an interest and never even log in - remember that this is Mentz where we do what we want, so this is absolutely fine. Never try to make people do anything, although it’s okay to check from time to time to make sure they aren’t trying and struggling/failing.
  • You need to have some reach to attract Mentees/Mentors - I’d imagine it’s a bit dispiriting to announce this is happening and receive zero interest :)

If this sounds like the sort of thing you’d be interested in, fill in the short form at https://bobbuzz.me.uk/MentzHosting - it will almost certainly take me a week or two to do anything so don’t panic if you don’t hear back quickly.


Sunday, 1 September 2019

Parallel Apex Unit Tests and Salesforce CLI Plugins

Introduction

In the Salesforce Winter 20 release notes was something I’ve been looking forward to for a few years - the ability to turn off parallel Apex unit test execution. By default parallel unit tests are enabled (somewhat confusingly, indicated by the fact that the Disable Parallel Apex Testing option is not checked), and this typically results in a number of failures in my automated test runner package, as the tests can’t get exclusive access to write to the Account and other object tables.

This setting is one of the last items that I have to manually turn on when creating a scratch org, and that has been annoying me for a while. I could automate it via Selenium, but that feels like overkill, so I was very pleased to see the new ApexSettings metadata type. Among other features, this has the enableDisableParallelApexTesting field, which allows me to check or uncheck the Disable Parallel Apex Testing checkbox, albeit via a metadata API deployment. The release notes also mentioned the to-be-deprecated OrgPreferenceSettings metadata type, and it turns out that this also has a mechanism for turning off parallel testing via the DisableParallelApexTesting setting. So there was no need for me to wait until Winter 20, as long as I could switch between the two mechanisms based on the API version the org is at.
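
To give an idea of what gets deployed, the Winter 20 flavour boils down to a settings file along these lines - a sketch based on the code later in this post, with enableDisableParallelApexTesting set to true to disable parallel test execution (false re-enables it):

<?xml version="1.0" encoding="UTF-8"?>
<ApexSettings xmlns="http://soap.sforce.com/2006/04/metadata">
    <enableDisableParallelApexTesting>true</enableDisableParallelApexTesting>
</ApexSettings>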

Salesforce CLI Plugin

Regular readers of this blog will know that I’m a huge fan of the Salesforce CLI - I use it all the time, and when I’m looking to do anything around developer tooling I always try to create a CLI plugin to host it. This was no different, although it presented a couple of challenges that I hadn’t taken on before:

  • Determining the API version that the org is at. If this is 46 or less (not sure how it would be possible to be on an earlier version of the API than Summer 19, but if Salesforce ever allow it I wanted to be covered) I need to deploy an OrgPreferenceSettings metadata type; if it’s 47 or higher, an ApexSettings.
  • Metadata deployment from inside a plugin. 

After I scaffolded a new bbsfdx plugin and copied the commands/hello/org.ts example command to bb/test/parallel.ts, I set about solving them.

Determine the API Version

Whenever I’m doing anything new with a CLI plugin, my first port of call is the reference documentation for the Salesforce DX Core Library, and I wasn’t disappointed. I can find the API version for the org via the Connection.retrieveMaxApiVersion function. Getting a Connection object is simple in a CLI plugin - just specify the requiresUsername property as true and a connection comes up with the rations via the org property supplied by the plugin. Getting the target API version is as simple as:

const conn = this.org.getConnection();
const apiVersion = await conn.retrieveMaxApiVersion();

So far so good.
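
For context, here’s roughly where that code lives - a minimal sketch of the command class, assuming the standard SfdxCommand base class that the plugin generator scaffolds (the class name and log message are illustrative rather than the actual plugin code):

import { SfdxCommand } from '@salesforce/command';
import { AnyJson } from '@salesforce/ts-types';

export default class Parallel extends SfdxCommand {
  // asking the framework for a username means this.org is populated for us
  protected static requiresUsername = true;

  public async run(): Promise<AnyJson> {
    const conn = this.org.getConnection();
    const apiVersion = await conn.retrieveMaxApiVersion();
    this.ux.log('Target org is at API version ' + apiVersion);
    return { apiVersion };
  }
}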

Metadata Deployment

The simplest way to do this is to execute an existing Salesforce CLI deployment command, either force:source:deploy or force:mdapi:deploy, but I’m not a fan of this approach. Spawning a process to execute a Salesforce CLI command from within a CLI plugin seems clunky and inelegant, and it binds me to a command that I don’t control and which may be retired. I should be able replicate anything the standard commands do as I have access to the same underlying libraries.

This time the core library docs weren’t much help - there was a metadata property on the Connection object, but it didn’t have any detail, so time to look elsewhere. The Core library is built on Shinichi Tomita’s JSforce library, so I headed over to the docs for that. The API reference for the Metadata class was exactly what I was looking for, specifically the deploy method.

To deploy metadata, I need a zip file containing a manifest (package.xml) and the metadata files themselves in the directory structure mandated by the metadata API. In order to achieve this, I create a temporary directory and write the appropriate information depending on the API version (stored as a float value in the fltVersion variable):
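
The code below relies on targetDir, settingsDir and attr having been set up beforehand. Roughly speaking that looks like the following - a sketch with illustrative names, and a hypothetical disableParallelTesting boolean standing in for the command’s input:

import { createReadStream, mkdirSync, mkdtempSync, writeFileSync } from 'fs';
import { join } from 'path';
import { tmpdir } from 'os';

// temporary working area that will be zipped up and deployed
const tmpDir = mkdtempSync(join(tmpdir(), 'bbsfdx-'));
const targetDir = join(tmpDir, 'pkg');
const settingsDir = join(targetDir, 'settings');
mkdirSync(settingsDir, { recursive: true });

// 'true' turns parallel test execution off in both metadata flavours, 'false' turns it back on;
// disableParallelTesting is a stand-in for however the command captures the user's choice
const attr = disableParallelTesting ? 'true' : 'false';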

let packageFile=join(targetDir, 'package.xml');
let packageContents='<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<Package xmlns="http://soap.sforce.com/2006/04/metadata">\n' +
    '  <types>\n';

let fltVersion=parseFloat(apiVersion);

if (fltVersion>46) {
  // Winter 20 or later - deploy the ApexSettings metadata type
  packageContents+='    <members>Apex</members>\n' +
                   '    <name>Settings</name>\n' +
                   '  </types>\n' +
                   '  <version>47.0</version>\n' +
                   '</Package>';
  let apex=join(settingsDir, 'Apex.settings');
  writeFileSync(apex,
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<ApexSettings xmlns="http://soap.sforce.com/2006/04/metadata">\n' +
    '  <enableDisableParallelApexTesting>' + attr + '</enableDisableParallelApexTesting>\n' +
    '</ApexSettings>\n'
  );
}
else {
  // Summer 19 or earlier - fall back to the OrgPreferenceSettings metadata type
  packageContents+='    <members>OrgPreference</members>\n' +
                   '    <name>Settings</name>\n' +
                   '  </types>\n' +
                   '  <version>46.0</version>\n' +
                   '</Package>';
  let orgPref=join(settingsDir, 'OrgPreference.settings');
  writeFileSync(orgPref,
    '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<OrgPreferenceSettings xmlns="http://soap.sforce.com/2006/04/metadata">\n' +
    '  <preferences>\n' +
    '    <settingName>DisableParallelApexTesting</settingName>\n' +
    '    <settingValue>' + attr + '</settingValue>\n' +
    '  </preferences>\n' +
    '</OrgPreferenceSettings>\n'
  );
}

writeFileSync(packageFile, packageContents);

Now I have my directory structure, I need to zip it. Searching on npm for zip packages returns a lot of results, so how to choose? I bounced around sites like Stack Exchange to see what others were using and eventually settled on compressing for a couple of reasons. First, it supports other compression formats as well as zip, and I might need that flexibility in the future; second, it has a really simple API and is already promisified. After installing it into my plugin node modules and importing it into the parallel.ts file, generating a zip file is a couple of lines:

const zipFile=join(tmpDir, 'pkg.zip');
await compressing.zip.compressDir(targetDir, zipFile);

Getting there. Back to the docs for the metadata deploy function - it wants a zip stream rather than a filename, so I create one of those and add the code to deploy the metadata:

let zipStream=createReadStream(zipFile);
let result=await conn.metadata.deploy(zipStream, {});

The deploy function returns information about the deployment job, so I then enter a loop to poll for the status until it is finished:

let done=false;

let deployResult:DeployResult;
while (!done) {
  deployResult=await conn.metadata.checkDeployStatus(result.id);
  done=deployResult.done;
  if (!done) {
    this.ux.log(deployResult.status + messages.getMessage('sleeping'));
    await new Promise(sleep => setTimeout(sleep, 5000));
  }
}

And there it is - a plugin to enable or disable parallel Apex unit testing in under a couple of hundred lines of TypeScript (and obviously a ton of existing node modules that allow me to stand on the shoulders of giants).

Where’s the Code?

The full code for the plugin can be found in the GitHub repository at https://github.com/keirbowden/bbsfdx

The plugin itself is published on npm at https://www.npmjs.com/package/bbsfdx - it has been tested on macOS.

To install the plugin into your version of sfdx, execute:

sfdx plugins:install bbsfdx
