Thursday, 31 December 2020

2020 Year in Review - Part 1


January

2020 started quietly, like any other year. The first London meetup was the Einstein Analytics group, where the number of people doing Dry January meant that there was more than enough booze for the rest of us. The following day was the London Salesforce Developers, where 60-odd people headed over to PwC's offices to hear about Heroku and the Spring 20 release of Salesforce. London's Calling needed bios and headshots, and tickets were selling like hotcakes.

There were also reports of a highly contagious virus in the Chinese city of Wuhan - this didn't feel like something we had to worry about as (a) it was a long way away and (b) we'd seen this kind of thing before with SARS, swine flu and bird flu. Towards the end of the month there were a few cases in other countries, but easily explained through international travel. On the last day of the month the UK recorded its first case, along with fellow European countries Sweden and Spain. Again, very much a case of someone bringing it with them on holiday, and easily contained.

February

Two meetups for the Developers in February. First a fireside chat with Wade Wegner at Salesforce Tower on the 5th, talking about Evergreen (now Salesforce Functions) and more. Little did we know this would be the last trip to the tower of 2020. Then on the 12th we had our Dreamforce Global Gathering at Deloitte's offices in Clerkenwell Green. Little did we know (2) that this would be the last time we would meet in person.

Spring 20 went live and I presented a release webinar with my colleague Clive Platt. Little did we know ... you get the picture. This brought the new mobile app to all users, and introduced the WITH SECURITY_ENFORCED SOQL clause and the Security.stripInaccessible() method to bolster security in Apex. What a time to be alive!
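As a quick refresher on those two features - a minimal sketch, with illustrative object and field names:

// WITH SECURITY_ENFORCED throws a QueryException if the running user
// lacks read access to any object or field in the query
List<Account> accounts=[select Id, Name from Account WITH SECURITY_ENFORCED];

// stripInaccessible silently removes fields the user can't read instead of throwing
SObjectAccessDecision decision=Security.stripInaccessible(AccessType.READABLE, accounts);
List<Account> safeAccounts=(List<Account>) decision.getRecords();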

The COVID-19 virus was still around - by Feb 6 there were 30-odd cases in Europe, which led me to mention it when sending my London's Calling slide deck - a throwaway remark hoping that it didn't derail us. It still seemed a very low risk, and likely something that would level out quickly - in the UK we only had a couple of cases. Fast forward to the end of the month and we still had under 20 cases. Looking over to Europe, though, the case count in Italy was rapidly increasing. Around this time things started to change as we realised that it was likely to be problematic, although human nature still made us expect that things wouldn't be too bad. At BrightGen we started planning for how we'd shift most or all of our workforce to remote working, with a dry run day scheduled for March.

March

The London Salesforce Developers decided not to hold a meetup in March, as the London's Calling community conference was taking place and that typically consumes most of the community's energy. At BrightGen we'd had our dry run day in early March with everyone working from home, and it had been pretty straightforward. Which was handy, as we went completely remote the following week, although with offices still open for the odd face-to-face meeting.

The Salesforce World Tour London, due to take place on the third Thursday in May, was postponed. By the end of the month TrailheaDX had gone virtual and become a half-day or so event.

As everywhere started to shut down, or operate at a much reduced capacity, I had to make a trip to a testing centre in Stratford to take my JavaScript Developer 1 exam. After a weird journey where I shared a whole carriage with about 10 other people in rush hour, I arrived at a deserted Stratford that would usually be bustling. I had an hour to spare in a ghost town where not much was open and there was nowhere for a casual visitor to sit inside with a coffee and carry out some last-minute revision. So I wandered the streets while reviewing my notes and did my best to avoid going within 6 feet of anyone else. After the test I headed back home on a carriage with about 8 other people, as the London's Calling speaker drinks scheduled for that evening had been cancelled due to the ever-increasing spread of the virus.

London's Calling had tried to stay in-person, but eventually gave up the battle and went all remote, with the exception of a few speakers, including myself. This involved another trip to London, this time sharing the carriage with 5 other people and arriving at an eerily quiet Liverpool Street that resembled the zombie apocalypse, rather than its usual rush hour bustle at that time of the morning. A strange day then commenced of a group of speakers shuffling from one room to another to support each other, moving the chairs into corners of the room and generally avoiding each other while still trying to network. I personally had the pleasure of presenting to the cavernous and almost totally empty keynote room - this was actually a great experience, as I still had to give my best in the absence of an audience and in a somewhat uncomfortable environment. Whatever doesn't kill you makes you stronger. I'm not sure the organisers would agree though, having pivoted from an in-person to a virtual event at a few days' notice, and spending a lot of the day finding out the limitations of the various livestream event platforms!

After a happy hour involving a live band and highly distanced drinks and dancing, I headed home sharing the carriage with 3 other people, wondering how long the trains would continue to run with so few passengers. This was also the last day that the pubs were allowed to open, in an attempt to slow the spread of the virus, and they were packed both inside and out until closing time. It didn't seem like this was going to help much, and it didn't. A mere three days later we were locked down and only allowed out for an hour's exercise a day.

I'd been asked to present at the Finland Developer Group on 25th March. It was always going to be remote, but as I was preparing my talk it started to feel like a vision of my future. Only the short-term future, I assumed, which turned out to be all kinds of wrong!

Saturday, 12 December 2020

Org Documentor - Fields Usage in Page Layouts




The November meetup of the London Salesforce Developers saw us sharing our favourite tools, those we use regularly and those we have written ourselves. I showed the Org Documentor, and got a great question from Lawrence Newcombe - does the information output for the custom fields include details of which page layouts they are used on? The answer was no, but it was definitely doable. And here we are less than a month later and it's done!

There's no additional configuration required, but as always remember that it only works against the metadata that you have retrieved from your org. If fields or page layouts aren't present on disk, they won't appear in the report.
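For example, a retrieve along these lines pulls the objects and layouts down before generating the documentation - the metadata type list is illustrative and should match your own setup:

$ sfdx force:source:retrieve -m CustomObject,Layout

Anything not retrieved is simply invisible to the plug-in.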

Output


I've had to tweak the format of the report slightly, as the layout names can be quite long so I needed more real estate. For those that care about the Bootstrap side of things, the main bodies of the pages now use the container-fluid class rather than container, as this allows them to take up most of the width of the page.

The fields table now contains a new column - Page Layouts - which has the layout name and the behaviour.





As the names can be quite long and there can be multiples, the behaviour is in bold and there's a separator between each layout. 

This output was generated from the sample metadata and can be viewed online.

Processing


Much like the last enhancement I added, aura-enabled classes, I was again pleased by how little code needed adding. I loaded the page layouts from the source folder and iterated the layoutItems entries. For each of these I created an ObjectPageLayoutData record and stored it in an array associated with the object and field combination. I then cached this information in a map, keyed by the object name, followed by a ':', followed by the field name.
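A minimal sketch of the shape of this processing - the names are illustrative rather than the actual plug-in code, which lives in the Github repository:

// Sketch only - property names depend on the parsed layout metadata
const layoutsByField=new Map();
for (const layout of layouts) {
    for (const item of getLayoutItems(layout)) {  // hypothetical helper that walks the layout sections
        const key=objectName + ':' + item.field;
        const pageLayoutData={name: layout.name, behaviour: item.behavior};  // an ObjectPageLayoutData record
        if (!layoutsByField.has(key)) {
            layoutsByField.set(key, []);
        }
        layoutsByField.get(key).push(pageLayoutData);
    }
}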

When generating the field record, after it has been enriched with any extra information, I then retrieve the cached ObjectPageLayoutData array for the object name and field combination, and add this to the object information that is passed to EJS to generate the HTML.

In the HTML template, I check to see if there are any ObjectPageLayoutData entries, as fields don't have to be present on layouts, and if there are some, I iterate them and output the layout name and the behaviour (Required/Edit/Readonly). As there can be multiple entries, and the names can be quite lengthy, I add a thematic break tag (<hr/>) after all but the last item in the array (and I also think it's pretty cool that in an EJS HTML template I can iterate an array and tell if I am on the last entry!).
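The template logic looks something like this - a sketch with illustrative property names:

<% field.pageLayouts.forEach((layout, index) => { %>
    <b><%= layout.behaviour %></b> : <%= layout.name %>
    <% if (index < field.pageLayouts.length-1) { %><hr/><% } %>
<% }); %>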

Plug-in


Version 3.3 of the plug-in has this new functionality and can be found on NPM.

If you already have the plug-in installed, just run sfdx plugins:update to upgrade to 3.3.

If you aren't already using it, check out the dedicated page to read more about how to install and configure it.

The source code for the plug-in can be found in the Github repository.


Saturday, 5 December 2020

Salesforce buys Slack


A week after the news broke that Salesforce were looking to acquire Slack, the deal is done - we knew it would be, so that it could be announced at the Dreamforce to You keynote. It turns out that talks started some time ago, but with Slack talking to Salesforce about acquiring Quip. This would have been a real surprise, as I can't recall Salesforce ever selling on an acquisition, although they have shut a few down (including my favourite todo list, do.com).

In my previous post I touched a little on why Salesforce would need another tool to collaborate outside of the Salesforce instance, and I've been thinking more about this.

Chatter

Chatter is a great tool for collaboration around Salesforce data, allowing you to combine structured information in the record detail and related records with unstructured data in the chat. You can see the historic discussion around changes to the record, not just which fields changed, and understand more about the lifecycle of a record. Did a case involve pulling in additional teams? Did a sales manager have to chase up a rep to progress an opportunity? All of this is valuable information which is hard to capture on a record without having a million related records.

Chatter groups are also useful in the context of Salesforce data, as being able to @mention a group regarding a record is powerful - using the same example from above, if there was a case that required a specialist team, you could get their attention with a single post. They are less good at general collaboration in my view - while I use groups a lot, it's for headline type posts - announcements or asking if anyone has seen a specific problem before. But only for the initial contact. Once the protagonists are identified, any further collaboration is typically carried out via Google Chat, Slack, email or videoconference. Essentially I find them a way to get the attention of the right collection of people.

I don't think Chatter was helped by the investment drying up after a few bumper years. The desktop application lagged way behind the web interface, and was eventually withdrawn, even though it was pretty popular. But the key sticking point to expanding to the wider company is the org-centric aspect. If you have multiple orgs, then people need to maintain multiple logins, which is a bit easier with the mobile app as you can easily switch, but still not ideal. You also need to be very clear which is the correct org for collaboration, even if it's around something happening in another org. There were some third party solutions around providing a single view of Chatter across orgs, but generally these replicated posts across orgs and required additional licenses for the ISV product, and often something like Heroku orchestrating the transfers. You also can't integrate inside the feed like you can with a tool like Slack, although you can do a bit more around the feed with Lightning Experience pages. Automatic notifications also felt quite antiquated, as for a long time you had to construct a post through the Connect API, @mention someone, then detail what it was you wanted to alert them about. This has improved since the advent of Process Builder and custom notifications, but was introducing features that other tools had had for years. Once Quip came along, it felt like Chatter was heading down the maintenance-only route.

Employee Communities

Communities were another attempt to introduce collaboration across the enterprise, and replace the intranet, that didn't make it. What most companies wanted was a simple way to create a related set of pages (Google Sites anyone?) for intranet type content and a way to collaborate on the content. What they got was a $20/user/month platform-lite style license that pretty much guaranteed limited take-up. For example, I was working with a 4,000-person company who wanted to migrate their intranet to Salesforce. They had 300 or so existing Salesforce users, so needed around 3,700 community licenses, which would lead to a bill of around $75,000 per month, for an intranet that people would use to find someone's phone number or figure out how to book holiday! Yes, they could access up to 10 custom objects, but the vast majority had no interest in that. Without reasonably priced, login-based, entry level licenses, the employee community was always going to be priced out of the market. The employee community license was eventually withdrawn and users transitioned to a platform user license with a company communities for force.com permission set, which feels like what was actually being sold.

Slack

Against a dedicated solution like Slack, aimed at the entire company rather than specific groups of users, it was always going to be a difficult battle. Acquiring Slack gives Salesforce access to the entire enterprise, especially if they stick with the free tier. Yes, you don't get the history, but important information should not be stored in chat logs. If people have to search back through the history of chats they may not have been involved in to find something they need, you're doing it wrong.

Where the chat history is important, as mentioned earlier, is when it is colocated with a record, and this is why I think Chatter will live on after the acquisition - people won't switch to another tool to search for messages related to the record they are currently viewing.

Slack may have lost $147 million over the last two quarters and may not have seen the growth that other tools have during the pandemic, but they also haven't had the unstoppable force of Salesforce Sales and Marketing pushing it. According to Marc Benioff, 90% of Slack customers are also Salesforce customers. Once the machine is tooled up to sell Slack, I'd expect the numbers to trend towards this in reverse - a massive existing customer base, existing access to the upper echelons of leadership, and a motivated (incentivised) Sales operation will, I believe, lead to a large number of Salesforce customers becoming Slack customers.



Saturday, 28 November 2020

Salesforce and Slack - the End of the Beginning


The big news this week (the last week of November 2020) is that Salesforce are in "advanced talks" about acquiring Slack. While I wasn't expecting this particular deal, I was expecting something of this nature - though I thought it would be in the realm of video rather than chat, with Salesforce trying to buy Zoom.

I expected something because of the uptake that Microsoft Teams has seen since the pandemic started - from a disputed 20 million daily active users in November 2019, to 75 million in May, then jumping to 115 million by November. The combination of file sharing, chat, collaboration, and video clearly has a lot of appeal, even though Microsoft referred to it as a "digital translation of an open office space", which I'd imagine would put a lot of people off using it! Given that we'll almost exclusively be using technology to communicate and collaborate for the foreseeable future, Salesforce had to be regarding the way Teams was embedding itself into the enterprise with envious eyes.

Slack hasn't seen the same explosion of usage and its share price has recently dropped, making it vulnerable to a takeover bid. From the Salesforce perspective, this is an opportunity to get across the entire enterprise rather than being constrained by the org - something the Chatter Free license attempted a number of years ago, but in my view foundered because Chatter just wasn't a great collaboration tool outside the context of Salesforce data. There was also the Company Community attempt to replace the intranet and more, but that was wildly expensive for a large organisation where most employees weren't Salesforce users.

While an acquisition will widen the Salesforce footprint, and no doubt bring deeper integration between the two tools, there will still be the need for Yet Another License for videoconference software. Meetings moves some of the experience inside Salesforce, but it's still someone else's application.

There's also Google Workspace to compete with - Meet and Chat have come on a lot in the last year, and complement Docs, Mail, Calendar and co. very well. Again, it's one Workspace license and all of those tools are bundled.

So in my view, buying Slack makes sense, but it's not the end - rather it needs to be part of a larger program to provide all the tools needed for enterprise collaboration in a single offering. That's what Microsoft and Google are doing and Salesforce is playing catch up. To paraphrase Winston Churchill, this needs to be the end of the beginning, not the beginning of the end.




Sunday, 1 November 2020

Giving the CLI Scanner the GUI Treatment

Introduction

As I've written about in recent posts, I'm using the Salesforce CLI Scanner to perform static code analysis on a number of my projects. As it's clearly part of my day-to-day use of the Salesforce CLI, I decided to add it to my CLI GUI.

TL;DR: pull the latest code from the Github repository, or clone it and follow the instructions to carry out the initial setup. When it starts up you'll have the Scanner command group.

Commands

I added the scanner as a new command group, with a couple of commands - listing the rules and running the scanner.

Listing Rules

Listing the rules is fairly straightforward as I used existing parameter types to capture the categories and language, should the user wish to provide them:



and only needed to include a function to process the returned JSON to extract the rule details and dump them to the log panel:


Running the Scanner


Running the scanner was a little more complex. First, I usually want to pick the categories, but I don't want to have to remember the exact names, so I needed a mechanism to allow me to select from a list of options. I already had a good starting point for this in the Debugging command group, which allows the user to choose which log file to retrieve. Under the hood I run the scanner:rule:list command, process the output to extract the unique category names, then build a select element using the categories as options. As I can choose multiple options, I set the multiple attribute and give it a size of 7. I don't always want to choose though, and didn't want to have to wait every time for the command to run, so I gave myself a button to click:
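The gist of the category handling is something like the following sketch - it assumes the JSON output of scanner:rule:list contains a result array where each rule carries its categories (the real code lives in the CLI GUI repository):

// Sketch only - extract unique categories and build a multi-select element
const { execSync } = require('child_process');

const response=JSON.parse(execSync('sfdx scanner:rule:list --json').toString());
const categories=[...new Set(response.result.flatMap(rule => rule.categories))];

const select=document.createElement('select');
select.multiple=true;
select.size=7;
for (const category of categories) {
    const option=document.createElement('option');
    option.value=option.text=category;
    select.appendChild(option);
}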



I also typically want to open the file that the output is sent to, so I added this capability too:




Aside from this, most of my effort went into figuring out how to pass enquoted parameters to the scanner:run command (to define the targets, for example) on macOS and Windows 10. I learned, for example, that Node's execFileSync will not spawn a shell to run the command, thus any spaces I provide will be assumed to separate different arguments to be passed to the command, regardless of any quotes that I might add in a futile attempt to influence it. execSync, on the other hand, will spawn a shell, which will know that an enquoted string is a single parameter, allowing the command to succeed. I wouldn't be surprised if there is still the odd issue around this in there, so if you find one let me know by raising an issue at the Github repo.
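To make the difference concrete, here's a minimal example - the target path is illustrative:

const { execFileSync, execSync } = require('child_process');

// No shell, so arguments must be passed as an array - quotes inside a single
// command string won't group anything
const viaFile=execFileSync('sfdx', ['scanner:run', '--target', 'force-app/**/*.cls']);

// Spawns a shell, which treats the quoted target as a single parameter
const viaShell=execSync('sfdx scanner:run --target "force-app/**/*.cls"');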


Saturday, 24 October 2020

Working with the Salesforce CLI Scanner

Introduction

A couple of weeks ago I had my first exposure to the Salesforce CLI Scanner and it's fair to say I quite liked it - two blog posts in two days is above my usual cadence. The first thing I wanted to do was find out if I could tune it to my specific requirements, which I was pleased to find I could by creating custom XPath rules and installing them into my CLI instance.

The next job was to introduce them into my workflow and take action when issues were flagged up, which I've been working on in what time I could carve out.

Customising the Default Rulesets

Some of the default rules from the CLI Scanner weren't applicable to all of my projects - an example being the CRUD/FLS enforcement. If I'm not targeting the AppExchange, a lot of the time my code needs to be able to carry out actions on specific objects and fields regardless of the user's profile - I'm relying on running in system mode. I can't see the point in letting this be flagged up as warnings and then hiding or ignoring them, so I wanted to exclude this specific rule. At the time of writing (October 2020) the CLI Scanner doesn't allow rules to be excluded or included at run time, so I needed a way to remove them from the ruleset.

After a small amount of digging around, I found the default ruleset configurations in my ~/.sfdx-scanner directory, so I copied them and removed the CRUD rules. Executing the scanner on my sample codebase showed this had no effect, so I checked the file to make sure I'd removed all instances. I found a couple, removed them and executed the scanner again, with the same result. Checking the files again showed the CRUD rules were still in place, and this time I knew that I'd definitely removed all references, so I realised that these configuration files were regenerated at run time and any changes I made were overwritten. Back to the drawing board.

Cloning the Default Rulesets

As PMD is open source, the rulesets are available in the Github repo, so creating a copy was fairly straightforward. I edited the rulesets to remove the rules that I wasn't interested in, and to change the name - I didn't want to replace the defaults as there are times when I want all the rules applied. 

Something else I learned since my original post is that if you are relying on existing Java rule classes, you don't need to build a JAR file containing the ruleset XML, you can just install the XML file itself, so that saved me a little time. Once I had the custom rulesets in place I was good to go.
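So instead of packaging a JAR, installing the cloned ruleset is a single command - the path here is illustrative:

$ sfdx scanner:rule:add -l apex -p ./category/apex/clonedrules.xml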

Adding to the Workflow

The CLI Scanner is now part of my overnight Continuous Integration processing on our newly launched BrightFORUM event management accelerator. This doesn't have a vast amount of custom code, and we've had a very clear set of standards and rules applied at code review time, so I wasn't expecting many issues, and I was pleased to find that was the case. All the violations were level 3 or lower as well - I'd have been pretty annoyed with myself if there were any higher than that.

Satisfying the Scanner

Clearing the violations was an interesting exercise. Some were very simple, like unused local variables. Most of the time this was overzealousness - retaining a reference that wasn't used later - but sometimes it was genuine leftovers from a previous approach. As these had been through code review, I can't see that we'd have picked them up manually in the future unless we'd made further changes.

Others required more effort, like cyclomatic or cognitive complexity, which typically means methods that are trying to do too much at once. An automated approach helps here, as it's not down to individual developers to decide that a method is a bit long or a bit complicated and then convince others that is the case. A tool flagging this up takes all the emotion out of it. Fixing these required some refactoring, which I had no problem undertaking as we have excellent test coverage to confirm I didn't break anything inadvertently.

Suppressing Violations

I didn't change the code in every case though - for example, I had a potential SOQL injection flagged up, as the fields in the query were dynamically generated. The string being used in the query had been generated from the built-in schema classes a couple of lines earlier, so it was guaranteed to contain only valid field names and there was no chance of unescaped user input or the like being included. I could bow to the scanner and escape the string anyway, but that would suggest to anyone reading the code that it was coming from an untrusted source, which wasn't the case. This gave me a chance to try out an Apex annotation that two weeks ago I had no idea existed - @SuppressWarnings.

From the Apex docs:

This annotation does nothing in Apex but can be used to provide information to third party tools.

To turn off the violation, I added the following annotation to the method that contained the SOQL query:

    @SuppressWarnings('PMD.ApexSOQLInjection')

and the violation is gone. You can suppress multiple warnings by comma separating them inside the single quotes, and you can also apply this at the class level to suppress all warnings of the specified type(s).
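For example, to also suppress the complexity warnings (the second rule name here is purely illustrative):

    @SuppressWarnings('PMD.ApexSOQLInjection, PMD.CyclomaticComplexity')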

This is something that should definitely be used with care - once I've suppressed SOQL injection warnings for a method, any code added that introduces a genuine SOQL injection vulnerability won't get picked up. I'm typically adding a comment to explain why I've suppressed the warning and to say that the method shouldn't be changed without taking this into consideration. I'm also putting regular time into my calendar to revisit the suppress warnings annotations and confirm that I don't have a better approach.

I wouldn't use this annotation to exclude a class from all rules - that could be done, but it would be a big list of suppressions and it would be simpler to exclude the class from those processed. I'd also add a ticket that there is technical debt to be addressed in that class, so that it doesn't get forgotten about and embarrass me in the future.

Takeaways

The key takeaway from my last couple of weeks' experience is that automated code scanning leads to cleaner code - things that aren't needed are gone, things that were hard to understand are now easier, and I have some custom rules that ensure that concerns are appropriately separated. Including it in my CI ensures that I pick up any issues as soon as they are added to the codebase, and I also plan to have the development team run the scanner before pushing their work. Aside from the initial effort to set this up, it's pretty much zero effort for a lot of benefit.

Static code analysis shouldn't replace manual code reviews though - an automated tool is configured to look for specific code/markup that indicates problems. A developer carrying out a code review understands the purpose of the code and can determine if it looks like it does what it is supposed to. Code reviews are also an excellent way to share knowledge in a development team. By adding automation you free up code reviewers from having to check whether every variable is used and figure out what makes a method too complex. By using both you'll catch more problems earlier, so what's not to like?


Saturday, 17 October 2020

Automation with Platform Events


Introduction

During the Trailblazers Innovate live session on 13th October, Wade Wegner made a comment about modern development being event driven rather than trigger driven, which is an approach I've been following for a while now, as in my view it maximises flexibility. Most of us are experiencing event driven development on the client side as we are working more and more with JavaScript, where user interactions generate events. It also applies server side, where platform events can be used to drive automation.

(As an aside, if you google trigger driven development with Salesforce, one of the top hits is an April Fool's Day post of mine about moving all of your code into triggers and faking record changes to make it run. Don't do that!)

Trigger Driven

The trigger approach (and I'm using this term to cover pro/low/no code mechanisms including Apex triggers, process builders, flows, workflow rules) typically means that you (via the Salesforce platform) monitor records and react to specific changes involving those records. 

An example of this, loosely based on some of the code behind my toolbox site where I keep details of really useful sites, tools, documents etc, would be to notify me if the number of clicks on an Entry goes up. I can add automation that fires when an Entry record is updated and the clicks field has changed, and sends me an email to that effect. This works a treat, and I'd imagine there are about a billion automations similar to this out in the wild.

The downside to this is that the automation is tightly coupled to the data model. If I decided that I wanted to move the dynamic information out of the Entry and into another object, let's call it EntryInteraction, my automation has to reflect the change to the data model. I need something that monitors EntryInteraction records and emails me when the clicks field on that changes.

Event Driven

In the event approach, rather than monitor records, my automation subscribes to a platform event that is fired when the clicks are updated. It neither knows nor cares where the clicks are stored - it could be off platform - it just knows that when it receives this event, the clicks changed somewhere, so it should send the email. The code that updates the clicks fires this platform event, and if the location of the clicks changes, that code changes. But regardless of what the code does, the event that it fires stays the same.
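A sketch of what firing such an event might look like - Clicks_Updated__e is a hypothetical platform event with a couple of custom fields:

// Subscribers neither know nor care where the clicks are stored
Clicks_Updated__e clicksEvent=new Clicks_Updated__e(Entry_Name__c=entry.Name,
                                                    Clicks__c=newClickCount);
Database.SaveResult result=EventBus.publish(clicksEvent);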

The automation is now decoupled from the data model and knows nothing of the internals of how the application works. As the application author, I don't need to worry about dependencies on automation, and I can change my data model as I want to reflect my changing needs. A much better situation, I'm sure you will agree.


Sunday, 11 October 2020

Salesforce CLI Scanner Custom XPath Rules - Part 2


Introduction

In Part 1 of this post I showed how to create an XPath expression, via the PMD designer, that looks for SOQL queries outside of test methods or accessor classes. This in itself was good fun, and there's a definite rush when you finally come up with the appropriate runes, but is of limited value. In this post I'll embed the expression in a custom rule and add it to the Salesforce CLI Scanner.

Rule

A rule needs to be inside a ruleset XML file. The PMD site has excellent documentation on this, so I won't reproduce it here for the sake of padding my post out.

A key aspect of my rule is that it's an XPath expression to be evaluated against Apex code, so I need to add that detail to the definition so that the appropriate class can be instantiated to handle it:

<rule name="SoqlOnlyInTestAndAccessorClasses" 
      language="apex" 
      message="SOQL queries can only appear in test methods and accessor classes"
      class="net.sourceforge.pmd.lang.apex.rule.ApexXPathRule">

and the XPath expression itself appears in the rule properties, along with the XPath version:

<properties>
  <property name="version" value="2.0"/>
  <property name="xpath">
    <value>
      <![CDATA[
//UserClass[not(ends-with(@Image, 'Accessor'))]/Method/ModifierNode[@Test=false()]/..//
(SoqlExpression | MethodCallExpression[lower-case(@FullMethodName)='database.query'])
]]>
    </value>
  </property>
</properties>

Packaging the Rule

The ruleset xml needs to be packaged as a JAR file (Java ARchive) using the tool of the same name. If I'd written a custom Java rule, I'd need to compile the class and include that in the JAR, but as I'm relying on a pre-existing class (net.sourceforge.pmd.lang.apex.rule.ApexXPathRule) I can just package the ruleset xml file.

Per the official Authoring Custom Rules documentation, my ruleset file lives at:

    ./category/apex/bobbuzzard.xml

so to create my JAR I execute the following command, which gives me plenty of information about what is being added:

$ jar -cvf bobbuzzard.jar ./category

added manifest
adding: category/(in = 0) (out= 0)(stored 0%)
adding: category/apex/(in = 0) (out= 0)(stored 0%)
adding: category/apex/bobbuzzard.xml(in = 1105) (out= 545)(deflated 50%)

Installing the Rule

Once I've packaged the rule, I can install it using the scanner:rule:add Salesforce CLI command, specifying the language as Apex and the JAR I just created:

$ sfdx scanner:rule:add -l apex -p bobbuzzard.jar 
Successfully added rules for apex.
1 Path(s) added: /Users/kbowden/PMDRULE/bobbuzzard.jar

Per the official docs, I list the rules to ensure it went in correctly (when I was figuring all this out but didn't have my ruleset.xml file set up right, the command would succeed but the rule wouldn't be added, so verifying via the list command is definitely a step worth taking). So that I don't have to wade through a ton of output, I pipe it through grep to exclude any lines that don't contain the characters Buzzard:

$ sfdx scanner:rule:list | grep Buzzard
SoqlOnlyInTestAndAccessorClasses          apex         Bob Buzzard     pmd

so all is well - my rule is all present and correct.

Running the Rule

All that is left is to run my rule against some sample Apex classes. The classes, the ruleset and the JAR are available at the Github repository.

I have:

  • ScannerExample.cls - this has a SOQL expression and a Database.query method call, so I expect it to be flagged twice.
  • ScannerExampleAccessor.cls - as above, but as it's a class name ending in Accessor I don't expect it to be flagged.
  • ScannerExampleTest.cls - a test class with one SOQL expression in a test method and one in a regular method. I expect it to be flagged once, regarding the regular method.

When running the scanner, I set the category to "Bob Buzzard" as I only want the code evaluated against my rule:

$ sfdx scanner:run --target './**/*.cls' -c "Bob Buzzard"
LOCATION                                                  DESCRIPTION                                          CATEGORY     U R L
────────────────────────────────────────────────────────  ───────────────────────────────────────────────────  ───────────  ─────
force-app/main/default/classes/ScannerExample.cls:5         SOQL queries can only appear in test methods and   Bob Buzzard
                                                            accessor classes
force-app/main/default/classes/ScannerExample.cls:10        SOQL queries can only appear in test methods and   Bob Buzzard
                                                            accessor classes
force-app/main/default/classes/ScannerExampleTest.cls:19    SOQL queries can only appear in test methods and   Bob Buzzard
                                                            accessor classes
and there we have it - the lines that break my rule have been identified.


Saturday, 10 October 2020

Salesforce CLI Scanner Custom XPath Rules - Part 1

Introduction

Earlier this week (6th October 2020) a tweet popped up in my feed about the Salesforce CLI Scanner, which obviously caught my attention as I seem to have some kind of tunnel vision on the CLI and plug-ins right now. I'd seen something about this before, but at the time it felt like it was aimed more at ISVs to help them pass the security review. This wasn't something I had any immediate need for, so it went on my never-ending mental list of things to look at when I have time.

The tweet linked to a Salesforce Developers Blog post, and a quick skim read of that suggested there was plenty more to it, so I set aside some time to read it properly and take a look at the code, which was well worth the investment. The Plug-in docs cover installation and running, so I won't go into them here.

Static Code Analysis

Simply put, a static code analysis tool examines source code before the code is run, evaluating it against a set of rules. Assuming you've crafted your rules correctly, it will find every occurrence of code that breaks a rule. Essentially a code review carried out by a Terminator:

    "It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear."

The CLI Scanner combines a bunch of static code analysis tools into a single plug-in. The power of the CLI is that this can be installed in a single command using the CLI itself. As the CLI is pretty ubiquitous, it's straightforward for most of us to install the scanner and start using it immediately. 

Apex

While the scanner supports other languages, I'm going to stick to Apex for this post, as it's where most of my code lives right now.

The scanner evaluates Apex using the Apex rules for PMD originally created by Robert Sösemann, part of the open-source PMD source code analyser. I'd looked at PMD several years ago and got the basics working, but things weren't quite as slick back then so I didn't proceed with it. I was also under the impression that I had to write any custom rules in Java, which I wasn't particularly interested in at the time.

Once I had the scanner working I decided to check out the custom rules again, as I was definitely going to need some to apply the BrightGen house rules. The documentation again suggests that Java is the way to go, so I went to the PMD site to learn more about it. Reading through the documentation I was delighted to stumble on the following:

XML rule definition

New rules must be declared in a ruleset before they’re referenced. This is the case for both XPath and Java rules. To do this, the rule element is used, but instead of mentioning the ref attribute, it mentions the class attribute, with the implementation class of your rule.

  • For Java rules: this is the class extending AbstractRule (transitively)
  • For XPath rules: this is net.sourceforge.pmd.lang.rule.XPathRule


My old friend XPath - the mechanism I've used many times in the past with Selenium automated testing. As long ago as March 2019 I was telling people to learn about this, as evidenced by Don Robins' tweet from my London's Calling session:


It now felt like I had a fighting chance of coming up with a custom rule in fairly short order, although obviously in PMD I'm not working with HTML. Instead the source code is parsed into an Abstract Syntax Tree, which I can navigate in a similar fashion.

One of the major helpers for Selenium is being able to run XPath expressions in the Chrome inspector and gradually build up a complex rule from simple steps. Reading more of the PMD documentation, I was delighted to find the designer - a GUI where I can write some Apex code, execute an XPath expression against that code and see the matches live. It also has an excellent interactive reference where I can click on one of the tree elements and see its properties. Well worth spending some time with:


Sample Rule

My sample rule was to check that SOQL statements are only present inside test methods or accessor classes, where accessor classes have the naming convention <item>Accessor.

The rule is somewhat lengthy - here it is in its entirety before I break it down into sections:

//UserClass[not(ends-with(@Image, 'Accessor'))]/Method/ModifierNode[@Test=false()]/..//(SoqlExpression | MethodCallExpression[lower-case(@FullMethodName)='database.query'])

The first part checks that the class name doesn't end with Accessor - if it does, I am in an accessor class and we can stop now:

//UserClass[not(ends-with(@Image, 'Accessor'))]

If I'm not in an accessor class, I look for any method that isn't a test method:

/Method/ModifierNode[@Test=false()]

The @Test property is part of the magic of PMD - the method modifier tree element knows whether a method is a test or not, so I don't have to worry about annotations or keywords.

I then back up from the ModifierNode to the method and consider all the child nodes of all non-test methods:

/..//

Then comes the meat of the rule - I look for the union of nodes that are SOQL expressions or have a database.query method call - note that I convert the FullMethodName property to lower case so that I don't have to care about capitalisation:

(SoqlExpression | MethodCallExpression[lower-case(@FullMethodName)='database.query'])

Executing this rule against an Apex class snippet:

class MyClass
{
   void SoqlQuery()
   {
       List<Contact> contacts=[select id from Contact];
   }

   void SoqlQuery2()
   {
        List<Contact> contacts=database.query('select id from Contact');
   }
}

matched as expected against the SOQL query on line 5 and the Database.query method call on line 10:


That's as far as I'm going for part 1 - in part 2 I'll look at how to create a ruleset for this rule and install it into the CLI Scanner. You won't have to wait too long as I'm hoping to write it in the next day or so while it's fresh in my mind.



Saturday, 3 October 2020

Working remotely (and locally)

Work has changed


Work is no longer somewhere you go - it’s something you do

It's something you do from pretty much anywhere, if you can and your team/employer can manage the timezones. The COVID-19 outbreak has compressed 5-10 years' worth of gradual working practice changes into a couple of quarters. It's been 6 months for us in the UK and, bar a short government push to get people back to the office, there's been very little sign of a return to how things were. The last round of advice suggested we are looking at another 6 months, which feels like a number chosen because it's the biggest we will quietly accept.

It probably won't change back


If this is the shape of things to come, rather than a one off pandemic, it would be a brave company that maintains a huge office portfolio for on-again off-again occupation. It’s like the world’s worst Hokey Cokey at the moment – right team in, right team out, left team in, left team out, but with no idea of when you’ll be able to put your whole team back in for any length of time.

Working remotely affects other workers


The lack of commuters to cities is also having an impact on the service industry – coffee, sandwiches, food trucks, restaurants and bars. I used to work out of our London office one day a week and typically spent £10–20 each time on sundries (and maybe a pint after work). I know many who spent this amount every day! I haven’t been for 6 months now and that is not looking likely to change any time soon. 

That money won't entirely go away as people continue to work from home - although it seems like people are still being cautious with spending - and a fair bit will likely end up being spent locally. This isn't a bad thing: instead of someone making a 70-minute commute to serve me a coffee after my 70-minute commute, I'll walk to my local café and be served by someone who also walked there. (Or I would if there were any cafés that I could walk to in under an hour without taking my life in my hands, but you get the idea). Of course there's some irony that in the great enabler of remote working, the coffee shop, the last thing they now want is people working there all day.

Many of the local outlets are suffering too, but for those that can survive, the changes to our habits forced on us by the pandemic might just give them a brighter future. And the cities will have to focus on the people that live there, rather than those that are transported in and out each day. 

Monday, 31 August 2020

Now What for Dreamforce?



The cancellation of the in-person 2020 Dreamforce conference brought my 10-year attendance streak to a shuddering halt. I'll shortly be taking my (late) summer holiday, and it will be the first one I can remember where I'm not building applications, putting together presentations for one or more sessions, or joining conference calls to cover how round table events will be run. While it will be nice to have that time back, I'll very much miss the trip to San Francisco and the chance to catch up with a bunch of people that I rarely see in person.

We're now notionally a couple of months out from the digital version of Dreamforce, which has already been put in doubt by comments made by Marc Benioff to The Information. For what it's worth I don't think a like for like replacement of Dreamforce will take place online, nor would it make a lot of sense to do that.

Duration

Dreamforce is a 4-day event, often bookended by additional summits (think CTA) or training events (get certified for half price!). For those outside of North America it typically involves a week-plus out of the office - this aspect is key - we've all flown over with not much more to do but attend the conference and network.

For a virtual event, nobody is taking a week off work to sit in front of their screen watching presentations. The best that can be hoped for is that the audience watch some of the sessions live, fitting in around their real work. 

Location

Dreamforce takes place in San Francisco, which is a popular destination in its own right. A virtual event takes place in your own home (or possibly in an office now that some have reopened) which doesn't have the same cachet. 

Attendees are also going to be in their own timezone, rather than struggling to adjust to PST. They aren't likely to stay up until 2am to see the latest report builder enhancements, but by the same token they won't be jet-lagged and should be able to stay awake.

Networking

This is one of the biggest attractions for me - the chance to meet with a whole bunch of people, from product managers, to get burning questions answered, to fellow developers from the community, to find out what cool things others are working on. Hang around the Trailhead forest for a few hours and you'll be amazed how many people you can tick off your list. This just doesn't happen at virtual events - you can fire off questions via videoconference chat channels, but the one to one connection simply won't be an option at the scale you can achieve in person.

I'd also include things like parties, the Gala and after parties in the networking column, and clearly none of that is going to happen virtually. You might see some virtual happy hour concept where a group of people sip drinks and try not to talk over each other on a Zoom call, but it won't be the same.

Expo

I've attended a bunch of virtual events this year, sometimes because I'm interested in the topic and sometimes just to see the approach. In my opinion, one area that consistently struggles is the expo. People dialling in from home to watch a session on a specific topic typically sign off afterwards rather than browsing a virtual expo. If you are interested in a product and know Dreamforce is coming up, you might wait to talk to the vendor in person. When you can't do that, you'll sign up for a virtual demo at the first possible opportunity rather than wait for a day when you are already going to be stuck in front of a screen for hours. There is also the swag, or lack thereof. From personal experience, I can tell you that a surprising number of people at any event expo just want free swag, and they aren't joining your virtual expo demo without some incentive.

This feels to me like one area that will have to be completely rethought in these continuing Covid times - simply taking the expo concept online isn't cutting it. Organisers will still want sponsor dollars, but they'll have to find a new mechanism. Some kind of session sponsorship is my guess, which showcases the partner's offering alongside the main product, in this case Salesforce. 

Community Content

This is the other area that will need to be rethought. If it will be a struggle to get people to join sessions involving product management, I really can't see Salesforce giving up space to speakers from the community. They won't have loads of breakout rooms to fill, so I can see them keeping the vast majority of virtual space to themselves. Maybe this will push the community content completely to community events like London's Calling or the many flavours of Dreamin'.

Diluting the Brand

Dreamforce is the Salesforce event of the year. It's something that everyone wants to go to, and it always sells out. By running something less than this online, the name is devalued. I'm sure the numbers would still be pretty good, because of the sheer reach that Salesforce has, but it would always be compared to the in-person event and found wanting.

Now What?

Instead of trying to recreate Dreamforce in a virtual setting, do something different. It could still leverage the Dreamforce name, but not try to be a like for like replacement. Rather than squashing a ton of content into 4 days that we won't have time to consume, spread it out - maybe as a half-day event around a theme every month or six weeks - Dreamforce Bi-Fortnightly has a nice ring, as does Dreamforce Semi-Quarterly. The sessions need to be useful though - if they aren't the audience will vote with their feet, and they won't have the ticket cost to guilt them into staying.

The one exception to the above is the keynote. This is something that should still happen as a one-off with attendant folderol. We need this to see the performance versus the previous year, the focus for the next year and some key customer stories. Also, without the keynote I won't be able to hear about the new features that I won't make the pilot for, and thus won't get access to for a couple of years - I need that envy to keep my interest up. I'm not sure how well the multiple product keynotes would work - maybe a half-day keynotes event that starts with Marc Benioff and friends, and hands over to the various clouds, or perhaps each Dreamforce Semi-Quarterly includes a keynote and has a major focus on a specific cloud.


Saturday, 29 August 2020

Adding IP Ranges from a Salesforce CLI Plug-in


Introduction

I know, another week, another post about a CLI plug-in. I really didn't intend this to be the case, but I recently submitted a (second generation) managed package for security review and had to open up additional IP addresses to allow the security team access to a test org. I'm likely to submit more managed packages for review in the future, so I wanted to find a scriptable way of doing this. 

It turned out to be a little more interesting than I'd originally expected, for a couple of reasons.

It Requires the Metadata API


To add an IP range, you have to deploy security settings via the Metadata API. I have done this before, to enable/disable parallel Apex unit tests, but this was a little different. If I create a security settings file with just the new range(s) and deploy it, per the Metadata API docs, all existing IP ranges will be replaced:

    To add an IP range, deploy all existing IP ranges, including the one you
    want to add. Otherwise, the existing IP ranges are replaced with the ones
    you deploy. 

Definitely not what I want!

It Requires a Retrieve and Deploy


In order to get the existing IP addresses, I have to carry out a metadata retrieve of the security settings from the Salesforce org, add the ranges, then deploy them. No problem here, I can simply use the retrieve method on the metadata class that I'm already using to deploy. Weirdly, unlike the deploy function, the retrieve function doesn't return a promise - instead it expects me to supply a callback. I couldn't face returning to callback hell after the heaven of async/await in my plug-in development, so I used Node's util.promisify function, which turns it into a function that returns a promise. Very cool.

const asyncRetrieve = promisify(conn.metadata.retrieve);
const retrieveCheck = await asyncRetrieve.call(...);

The other interesting aspect is that I get the settings back in XML format, but I want a JavaScript object to manipulate, which I then need to turn back into XML to deploy.

To turn XML into JavaScript I use fast-xml-parser, as this has stood me in good stead with my Org Documentor. To get at the networkAccess element:

import { parse } from 'fast-xml-parser';

  ...

const settings = readFileSync(join(tmpDir, 'settings', 'Security.settings'), 'utf-8');
const md = parse(settings);

let networkAccess = md.SecuritySettings.networkAccess;
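
Adding the new ranges is then a matter of manipulating that JavaScript object - something along these lines, noting that the parser returns a lone object rather than an array when there's only a single existing range (a sketch; variable names are illustrative):

// Normalise to an array so new entries can be pushed
if (!Array.isArray(networkAccess.ipRanges)) {
    networkAccess.ipRanges=networkAccess.ipRanges ? [networkAccess.ipRanges] : [];
}

// Each user-supplied range is start:end, or a single address
for (const range of ranges.split(',')) {
    const [start, end=start]=range.split(':');
    networkAccess.ipRanges.push({start, end});
}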

Once I've added my new ranges, I convert back to XML using xml2js:

import { Builder } from 'xml2js';

  ...

const builder = new Builder(
   {renderOpts:
      {pretty: true,
       indent: '    ',
       newline: '\n'},
    stringify: {
       attValue(str) {
           return str.replace(/&/g, '&amp;')
                     .replace(/"/g, '&quot;')
                     .replace(/'/g, '&apos;');
       }
    },
    xmldec: {
        version: '1.0', encoding: 'UTF-8'
    }
});

const xml = builder.buildObject(md);

The Plug-In


This is available as a new command on the bbsfdx plug-in - if you have it installed already, just run 

sfdx plugins:update

to update to version 1.4.

If you don't have it installed already (I won't lie, that hurts) you can run:

sfdx plugins:install bbsfdx
 
and to add one or more ranges: 

sfdx bb:iprange:add -r 192.168.2.1,192.168.2.4:192.168.2.255 -u <user>

The -r switch defines the range(s) - comma-separate them. For a range, separate the start and end addresses with a colon. For a single address, just skip the colon and the end address.



Saturday, 22 August 2020

App Builder Page Aware Lightning Web Component

Introduction

This week I've been experimenting with decoupled Lightning Web Components in app builder pages, now that we have the Lightning Message Service to allow them to communicate with each other without being nested. 

A number of the components are intended for use in record home and app pages, which can present challenges, in my case around initialisation. Some of my components need to initialise by making a callout to the server, but if they are part of a record home page then I want to wait until the record id has been supplied. Ideally I want my component to know what type of page it is currently being displayed in and take appropriate action.

Inspecting the URL

One way to achieve this is to inspect the URL of the current page, but this is a pretty brittle approach - if Salesforce change the URL scheme, it stands a good chance of breaking. I could use the navigation mixin to generate URLs from page references and compare those with the current URL, but that seems a little clunky and adds a delay while I wait for the promises to resolve.

targetConfig Properties

The solution I found with the least impact was to use targetConfig stanzas in the component's js-meta.xml configuration file. From the docs, these allow you to:

Configure the component for different page types and define component properties. For example, a component could have different properties on a record home page than on the Salesforce Home page or on an app page.

It was this paragraph that gave me the clue - different properties depending on the page type!

You can define the same property across multiple page types, with a different default value for each page type. In my case, I define a pageType property whose default reflects the type of page that I am targeting:

<targetConfigs>
    <targetConfig targets="lightning__RecordPage">
        <property label="pageType" name="pageType" type="String" default="record" required="true"/>
    </targetConfig>
    <targetConfig targets="lightning__AppPage">
        <property label="pageType" name="pageType" type="String" default="app" required="true"/>
    </targetConfig>
    <targetConfig targets="lightning__HomePage">
        <property label="pageType" name="pageType" type="String" default="home" required="true"/>
    </targetConfig>
</targetConfigs>

so for a record page, the pageType property is set to 'record', and so on.

In my component, I expose the page type as a public property with getter and setter methods (you only need the @api decorator on one of the methods, and convention right now seems to be the getter):

@api get pageType() {
    return this._pageType;
}

set pageType(value) {
    this._pageType=value;
    this.details+='I am in a(n) ' + this._pageType + ' page\n';
    switch (this._pageType) {
        case 'record' :
             this.details+='Initialisation will happen when the record id is set (which may already have happened)\n';
             break;
        case 'app' :
        case 'home' :
            this.details+='Initialising\n';
            break;
    }                
 }

and similar for the record id, so that I can take some action when that is set:

@api get recordId() {
    return this._recordId;
}

set recordId(value) {
    this._recordId=value;
    this.details+='I have received record id ' + this._recordId + ' - initialising\n';
}

then I can add the component to the record page for Accounts:



a custom app page:



and the home page for the standard sales application:

and in each case the component knows what type of page it has been added to. 

Of course this isn't foolproof - my Evil Co-Worker could edit the pages and change the values in the app builder, leading to all manner of hilarity as my components wait forlornly for the record id that never comes. I could probably extend my Org Documentor to process the flexipage metadata and check the values haven't been changed, but in reality this is fairly low impact sabotage and probably better that the Evil Co-Worker focuses on this rather than something more damaging.

Show Me The Code!

You can find the code at the Github repo.
