Friday, 14 February 2020

Before Save Flows and Apex Triggers Part 2


TL;DR Flows consume CPU time, but you might not find it easy to check how much.

Introduction


A couple of weeks ago I wrote a blog post about before save flows and triggers. It's fair to say that this caused some discussion in the Salesforce community. Salesforce themselves reached out to me for more information, as the results weren't what they were expecting to see, so I sent them details of the various tests that I'd run. The funny thing was, when I set out to write the post the CPU times were intended to be secondary - the message was supposed to be about being aware of the impact of mixing tools, and thinking about what is already present when adding new automation. So that went well.

Since I wrote the post I've been testing in pre-release, scratch and sandbox orgs, with varying log levels, and every time the results came out much the same. Some differences in the absolute numbers, but always flows using a very small amount of CPU.

I then got a comment on the original post from the legendary Dan Appleman, which included details of the test that he'd run. Here's the key part of it:
This is inserting the opportunities using anonymous Apex, with a Limits.getCPUTime statement before and after the insert, with the debug log captured at Error or None level.
The difference for me was that I wasn't outputting any CPU time during the transaction - I just had the debug levels turned all the way down and looked at the final CPU total from the cumulative limits messages:


I took this approach mainly because, whenever I've tried to debug CPU time usage in the past, I've found the results questionable.

That did mean my setup/teardown code was included in the calculation, but as that was the same in all cases I figured while it might change the exact number, the differences would still stand out.
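For context, the harness is a straightforward anonymous Apex block along these lines - a sketch, with illustrative field values rather than my exact test data:

List<Opportunity> opps=new List<Opportunity>();
for (Integer idx=0; idx<1000; idx++)
{
    // illustrative values - any valid required fields will do
    opps.add(new Opportunity(Name='Test ' + idx,
                             StageName='Prospecting',
                             CloseDate=Date.today().addDays(30),
                             Amount=100));
}
insert opps;   // the before save flow or trigger fires here
delete opps;   // teardown - included in the cumulative CPU figure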

Logging the CPU Time

Adding the following statement to my anonymous apex before the insert:

System.debug(LoggingLevel.ERROR, 'Consumed CPU = ' + Limits.getCpuTime());

had no effect - I was still seeing around 100 msec for the flow, and around 1500 msec for the equivalent trigger. The log message gave the same CPU time as the cumulative limits message.

Adding it after the insert suddenly changed everything - the CPU usage was logged as 2100 msec and the cumulative limits messages had the same value. So flows were consuming CPU time - it just wasn't being surfaced without some additional logging being present. This was a real surprise to me: while figuring out CPU time is always a little fraught, I've not had cause to doubt the cumulative limits before now. Note that this only had an impact in flow-only scenarios - with Apex it didn't change anything materially.

I then changed my anonymous apex so that the entire test was in a loop, and started logging the CPU at various points. Regardless of when I did it, the cumulative CPU time message always reflected the value from the last time I logged it. So if I only logged the first time through, the cumulative would show as 2100; if it was the second time through, the cumulative would go up to 3500, and so on.
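In sketch form, the looped version looked something like this (again with illustrative data):

for (Integer iteration=0; iteration<3; iteration++)
{
    List<Opportunity> opps=new List<Opportunity>();
    for (Integer idx=0; idx<1000; idx++)
    {
        opps.add(new Opportunity(Name='Test ' + idx,
                                 StageName='Prospecting',
                                 CloseDate=Date.today().addDays(30),
                                 Amount=100));
    }
    insert opps;
    delete opps;
    if (0==iteration)
    {
        // logging on the first pass only - the cumulative limits message
        // then reflects the CPU consumed up to this point
        System.debug(LoggingLevel.ERROR, 'Consumed CPU = ' + Limits.getCpuTime());
    }
}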

Based on this testing it appears that if there is no additional apex automation (triggers) then the act of accessing the CPU time sets that value into the cumulative limit tracker. I verified this by changing the log messages to just capture to a local variable, and the cumulative limit messages continued to show the actual CPU rather than a very small number.
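In other words, replacing the debug statement with nothing more than a bare capture:

Integer consumed=Limits.getCpuTime();   // captured, never logged

was enough for the cumulative limits message to report the real figure.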

In Conclusion


Flows do consume CPU time. They don't have a pass that excuses them from CPU duty and there isn't a conspiracy to drive everyone away from Apex. In my cursory testing with simple flows and triggers, triggers were around 33% faster - I intend to do some more testing to try to get some better figures, but right now I need to look at something else for a change!

Getting an accurate report of how much CPU they consume isn't as straightforward as it might appear, and figuring out the impact in a transaction that doesn't involve any Apex may present a challenge.





Saturday, 8 February 2020

Spring 20 Default Record Create Values



Introduction


Spring 20 introduces something rather reminiscent of URL hacks - the capability to provide default values via URL parameters when creating a record. It's not quite the same though - for one thing this is a supported mechanism, whereas Salesforce constantly warned us against using URL hacks in Classic. For another, this is currently only documented as working on record create, whereas URL hacks worked (although they weren't supported, might not work in the future, etc.) across most of the later Classic pages.

How It's Done


The first thing you need is the URL to the create page - this takes the form:

https://<salesforce_instance>/lightning/o/<object_api_name>/new

e.g. for Account this is:

https://<salesforce_instance>/lightning/o/Account/new

While for a custom object with an api name of Webinar__c it is:

https://<salesforce_instance>/lightning/o/Webinar__c/new

Next, you need the parameter that defines the defaults:

?defaultFieldValues=
and then the list of name=value pairs for the fields - again, these are the API names, so the record name would be:

Name=Test
while a custom field would have the API name, including the __c suffix:

Description__c=Test+record+for+blog
(note that I've used '+' to indicate a space in the query part of the URL - I could just as well have used '%20')
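If you're building these links programmatically, Apex's EncodingUtil takes care of the encoding - a sketch, with an invented field value:

String defaults='Description__c=' + EncodingUtil.urlEncode('Test record for blog', 'UTF-8');
String createUrl='/lightning/o/Webinar__c/new?defaultFieldValues=' + defaults;

EncodingUtil.urlEncode turns spaces into '+' characters, matching the URLs shown here.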

Multiple Values

To add multiple default parameter values, use the comma ',' character to separate them - don't use '&' as this will indicate the defaultFieldValues parameter has finished and a new parameter is starting!

Name=Test,Description__c=Test+record+for+blog

All Together Now


Putting together all the elements identified above, I have the following URL:


https://kabdev.lightning.force.com/lightning/o/Webinar__c/new?defaultFieldValues=Name=Test,Description__c=Test+record+for+blog,Planned_Duration__c=60

Entering this in my Spring 20 org takes me to the record create page for the Webinar custom object, with the defaults pre-populated:



Relationship Fields

My Webinar__c custom object has a lookup to a custom survey object, with the relationship field named Survey__c. When entering information on the create page, I type the name of the survey and choose the matching entry from the list, so it's tempting to specify the name of the record in the URL. That doesn't end well:



Leaving aside the frankly hilarious idea that I could take an internal error id from Salesforce to my administrator and that would help in any way, it clearly doesn't like the text. This is because the relationship field needs an ID, so if I specify one of those it populates correctly.
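For example, with a made-up Survey record Id (yours will differ):

https://kabdev.lightning.force.com/lightning/o/Webinar__c/new?defaultFieldValues=Name=Test,Survey__c=a005d000002aBcDAAU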



Out of curiosity I tried a non-existent ID to see if the error message is any more helpful - it is, and tells me that it can't find the record that the ID refers to, which is much better.


I must also say that in both error cases I love that the plucky Save button sticks around even though things have gone badly wrong. Clicking it doesn't do anything, in case you are wondering.



Friday, 31 January 2020

Spring 20 Before Save Flows and Apex Triggers



Introduction

Spring 20 introduces the concept of the before record save flow, analogous to the before insert/update trigger that we've had for over a decade now. Like those triggers, the flow can make additional changes to the record being saved without needing a further save to the database once it has finished its work. Avoiding this extra save makes things a lot faster - a claimed 10 times faster than doing similar work in Process Builder. What the release notes don't tell us is how they compare to Apex triggers, which was the kind of thing I was a lot more interested in.


Scenarios

I've tried a couple of relatively simple scenarios, both of which I've encountered in the real world:
  1. Changing the name of an opportunity to reflect things like the amount and close date. All activity happens on the record being inserted, so this is very simple.
  2. Changing the name of an opportunity as above, but including the name of the account the opportunity is associated with, so an additional record has to be retrieved.
In order to push the trigger/flow to a reasonable degree, I'm inserting a thousand opportunities which are round robin'ed across two hundred accounts and immediately deleting them.
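The anonymous Apex for this is along the following lines - a sketch, with invented field values:

List<Account> accounts=[select Id from Account limit 200];
List<Opportunity> opps=new List<Opportunity>();
for (Integer idx=0; idx<1000; idx++)
{
    // round robin the opportunities across the accounts
    opps.add(new Opportunity(Name='Test ' + idx,
                             StageName='Prospecting',
                             CloseDate=Date.today().addDays(30),
                             Amount=100,
                             AccountId=accounts[Math.mod(idx, accounts.size())].Id));
}
insert opps;
delete opps;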

Scenario 1


Flow

My flow is about as simple as it gets:



The assignment appends '-{opportunity amount}' to the record name:


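In formula terms the assignment amounts to something like the following - illustrative, as record-triggered flows expose the triggering record via the $Record global:

{!$Record.Name} & '-' & TEXT({!$Record.Amount})
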
At the end of the transaction, I have the following limit stats:

Number of SOQL queries: 2 out of 100
Number of query rows: 1219 out of 50000
Maximum CPU time: 116 out of 10000


Trigger

The trigger is also very simple:

trigger Opp_biu on Opportunity (before insert, before update) 
{
    for (Opportunity opp : trigger.new)
    {
        opp.Name=opp.Name + '-' + opp.Amount;
    }
}

and this gives the following limit stats:

Number of SOQL queries: 2 out of 100
Number of query rows: 1219 out of 50000
Maximum CPU time: 1378 out of 10000

So in this case the trigger consumes over a thousand more milliseconds of CPU time. Depending on what else is going on in my transaction, this could be the difference between success and failure.


Scenario 2


Flow

There's a little more to the flow this time:


The Get Account element retrieves the account record associated with the opportunity - I only extract the Name field as that is all I use in my opportunity name: 


I also have a formula that generates the opportunity name, and this is used by the Assignment action:

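The formula is along these lines - illustrative, with Get_Account assumed to be the API name of the lookup element above:

{!Get_Account.Name} & '-' & TEXT({!$Record.CloseDate}) & '-' & TEXT({!$Record.Amount})
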
and this gives the following limit stats:

Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 111 out of 10000


Trigger

The trigger mirrors the flow, with a little extra code to ensure it is bulkified:

trigger Opp_biu on Opportunity (before insert, before update) 
{
    Set<Id> accountIds=new Set<Id>();
    for (Opportunity opp : trigger.new)
    {
        accountIds.add(opp.AccountId);
    }
    
    Map<Id, Account> accountsById=new Map<Id, Account>(
        [select id, Name from Account where id in :accountIds]);
    for (Opportunity opp : trigger.new)
    {
        Account acc=accountsById.get(opp.AccountId); 
        opp.Name=acc.Name + '-' + opp.CloseDate + '-' + opp.Amount;
    }
}

which gives the following limit stats:

Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 1773 out of 10000

Aside from telling us that CPU time measurement isn't an exact science (the figure went down this time), the flow is pretty much the same in spite of the additional work. The trigger, on the other hand, has consumed another 400 milliseconds or so.


All Flow All the Time?

So based on this, should all before insert/update functionality be migrated to flows? As always, the answer is it depends.

One thing it depends on is whether you can do everything you need in the flow - per Salesforce Process Builder best practice:

For each object, use one automation tool.

If an object has one process, one Apex trigger, and three workflow rules, you can’t reliably predict the results of a record change.

It can also get really difficult to debug problems if you have your business logic spread across multiple technologies, especially if some aspects of it are trivial to change in production.

Something that is often forgotten with insert/update automation is what should happen when a record is restored from the recycle bin. In many ways this can be considered identical to inserting a new record. Triggers offer an after undelete variant to allow automated actions to take place - you don't currently have this option in the no-code world.
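The Apex skeleton for this, purely to show the event - the body is down to your requirements:

trigger Opp_undelete on Opportunity (after undelete)
{
    // trigger.new holds the restored records (read-only in after undelete),
    // so any further changes need an explicit DML update
    System.debug('Restored ' + trigger.new.size() + ' opportunities');
}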


One More Thing

A word of warning - you might be tempted to implement your next simple before save requirement as a flow regardless of the existing automation. Let's say a consultant developer created a trigger similar to mine above, and now you need to make an additional change to the record. If you do this with a flow, make sure to test it thoroughly. Out of curiosity, I tested combining my trigger that sets the opportunity name with a flow that tweaks the amount by a very small amount.

The limit stats for this were frankly terrifying:

Number of SOQL queries: 7 out of 100
Number of query rows: 2219 out of 50000
Maximum CPU time: 8404 out of 10000 *****

So the CPU time has increased fivefold by adding in a flow that by itself consumes almost nothing!






Saturday, 18 January 2020

Going GUI over the Salesforce CLI Part 2


Introduction

In part 1 of this series I introduced my Salesforce CLI GUI with some basic commands. In this instalment I’ll cover some of the Electron specifics and add a couple of commands.

Electron

Electron is an open source framework for building cross platform desktop applications with JavaScript, HTML and CSS. The Chromium rendering engine handles the UI and the logic is managed by the Node JS runtime. I found this particularly attractive as I spend a lot of time these days writing code for Node - wrappers around the command line, for example, or plugins for the Salesforce CLI. I'm also keen to do as much in JavaScript as I can, as it helps my Lightning Web Components development too.

An Electron application has a main process and a number of renderer processes. The main process creates the web pages that make up the application UI, and each application has exactly one main process. Each page that is created has its own renderer process to manage the page. Each renderer process only knows about the web page it is managing and is isolated from the other pages.

Renderer processes can't access the operating system APIs - they have to ask the main process to do this on their behalf.

CLI GUI Main Process

The main process for my CLI GUI loads the JSON file that configures the available commands, and runs a couple of CLI commands to determine whether the default user and default dev hub user have been set. It then registers a callback for the ready event of the application, which fires when it is fully launched and ready to display the user interface:

app.on('ready', () => {
    mainWindow=new BrowserWindow({
        width: 800,
        height: 600,
        webPreferences: {
            nodeIntegration: true
          }
        }
    );
    let paramDir=process.argv[2];
    if (paramDir!==undefined) {
        changeDirectory(paramDir);
    }

    mainWindow.webContents.loadURL(`file://${__dirname}/home.html`);
    windows.add(mainWindow);
});

The ready handler creates a new BrowserWindow instance, specifying that node integration should be enabled for the renderer process that manages the page. It then loads the home.html page, aka the GUI home page:

CLI GUI Home Page Renderer

The Node JavaScript behind this page (home.js in the repo) gains access to the main process via the following imports:

const { remote, ipcRenderer } = require('electron');
const mainProcess = remote.require('./main.js');

The remote module gives access to a lot of the modules available in the main process (essentially proxying them), and mainProcess provides access to the functions and properties of the main process instance that created the renderer. The renderer then gets a reference to its window so that it can output the dynamic content:

const currentWindow = remote.getCurrentWindow();

It then iterates the command groups, creating a Salesforce Lightning Design System tab for each one, then iterates the commands inside the group, adding buttons for each of those. There’s a fair bit of code around that to set the various attributes, so a cut down version is shown here:

for (let group of mainProcess.commands.groups) {
    if (0===count) {
        classes.push('slds-is-active');
    }
    let tabsEle=document.createElement('li');
    tabsEle.id='tab-'+ group.name;
    let tabLinkEle=document.createElement('a');   // the anchor that acts as the tab link
    tabLinkEle.id='tab-' + group.name + '-link';
    tabLinkEle.innerText=group.label;

    tabsEle.appendChild(tabLinkEle);

    const tabContainer=document.querySelector('#tabs');
    tabContainer.appendChild(tabsEle);
    let contentEle=document.createElement('div');
    contentEle.classList.add('slds-' + (0===count?'show':'hide'));
    contentEle.setAttribute('role', 'tabpanel');

    let gridEle=document.createElement('div');
    gridEle.classList.add('slds-grid');
    contentEle.appendChild(gridEle);

    const tabsContentContainer=document.querySelector('#tab-contents');
    tabsContentContainer.appendChild(contentEle);

    for (let command of group.commands) {

        let colEle=document.createElement('div');
        colEle.classList.add('slds-col');
        colEle.classList.add('slds-size_1-of-4');

        let colButEle=document.createElement('button');
        colButEle.id=command.name + '-btn';
        colButEle.innerText=command.label;
        colEle.appendChild(colButEle);
        gridEle.appendChild(colEle);
    }
}

This allows the commands to be dynamically generated based on configuration, rather than having a hardcoded set that are the same for everyone in the world and requiring code changes to add or remove commands. Of course there is code specific to each of the commands, but there are a lot of similarities between the commands which allows a lot of code re-use.

Note that I’m using the DOM API to add the elements, rather than HTML snippets, partly because it is slightly faster to render, but mostly because while it takes longer to write the initial version, it’s much easier to maintain going forward.

Note also that the buttons and tabs have dynamically generated ids, based on the command names. This allows me to add the event handlers for when a user clicks on a tab or a button:

for (let group of mainProcess.commands.groups) {
    group.link=document.querySelector('#tab-' + group.name + '-link');
    group.link.addEventListener('click', () => {
        activateTab(group);
    });
    group.tab=document.querySelector('#tab-' + group.name);
    group.content=document.querySelector('#tab-' + group.name + '-content');

    for (let command of group.commands) {
        command.button=document.querySelector('#' + command.name + '-btn');
        command.button.addEventListener('click', () => {
            mainProcess.createWindow('command.html', 900, 1200, 10, 10, {command: command, dir: process.cwd()});
        });
    }
}

The more interesting handler is that for the commands - this invokes a method from the main process to open a new window, which again has been cut down to the salient points:

const createWindow = exports.createWindow = (page, height, width, x, y, params) => {
    let newWindow = new BrowserWindow({ x, y, show: false,
        width: width,
        height: height,
        webPreferences: {
            nodeIntegration: true
        }});
    newWindow.loadURL(`file://${__dirname}/` + page);
    newWindow.once('ready-to-show', () => {
        newWindow.show();
        if (params!==undefined) {
            newWindow.webContents.send('params', params);
        }
    });
    newWindow.on('close', (event) => {
        windows.delete(newWindow);
        newWindow.destroy();
    });
};

The new window is created much like the home window, and loads the page - command.html in this example. Unlike the home page, once the new window receives the ready-to-show event, a handler sends any additional parameters passed to this method to the window - in this case the command that needs to be exposed and the current directory that it will be run in. There’s also a close handler that destroys the window, cleaning up the renderer process.

New Commands

The latest repo code contains a couple of new commands in a Debugging group. As before, I’ve tested this on MacOS and Windows 10.

List Log Files

Choose the username/alias for the org whose log files you are interested in:

Clicking on the ‘List’ button pulls back brief details of the available log files:

Get Log File

To retrieve the contents of a specific log file, first find out which ones are available from a specific org:

Then select the specific file from the dropdown:

And click the 'Get' button to display the contents:


Sunday, 12 January 2020

SalesforceWay Podcast


In mid-2019, Xi Xiao from Finland reached out to ask me if I'd appear on his podcast, SalesforceWay. I've done a podcast or two in the past, but not for some time, so I accepted with gusto. As is so often the case, I had a stack of work that I was already behind on and some half-completed community initiatives that needed some focus. Luckily Xi was a patient man, and shortly (a quarter of a year or so!) afterwards we recorded the episode.

Anyone who has been following my blog over the last couple of years will know that I’m a big fan of the Salesforce CLI, so this seemed like a great topic to talk about. For those who aren’t familiar with the podcast format, I always feel it’s kind of a cross between an interview and a collection of war stories. As a rule they are intended to enlighten rather than train, involving a conversation around a topic and pointing the listener at where they might find out more information.

While my episode is clearly the one you should listen to first, there are a ton of great episodes available, so don’t stop at one! There are also more on the way - I know this as I recommended a few of the guests. Xi is always looking for more, so if you have a topic that you think other Salesforce developers would be interested in, reach out to Xi, or if you are shy then get in touch with me via the usual channels (or the comments section of this post) and I’ll arrange an introduction. You won’t regret it - it’s a lot of fun!

Sunday, 29 December 2019

Going GUI over the Salesforce CLI


Introduction

The Salesforce CLI has been around since 2016, and its predecessor (the Force.com CLI) is even older, debuting at Dreamforce 2013 - both have been making developers' lives better ever since. While these tools aren't just for devs, as a fair amount of admin work can be carried out using them, the command line doesn't always have broad appeal to those who don't spend most of their working lives writing code or manually executing commands.

Now the command line isn’t the only way to access some (but not necessarily all) Salesforce CLI commands - the Salesforce VS Code extensions provide access to a large number through the command palette, and it’s relatively straightforward to add simple use cases by configuring custom tasks. That said, it’s a bit of a sledgehammer to crack a nut, as it’s a full fledged IDE being used to display a couple of dialogs and some output, and there’s a fair bit of installation/configuration required which can quickly take people out of their comfort zone.

Every time I heard about a need for a GUI for the CLI, I'd think to myself that it couldn't be that hard to build one. A lot of it is around wrapping the CLI commands in Node scripts and presenting the output, something I've been doing for years. Typically what happens next is I keep putting off doing anything about it, someone else gets there before me, and I get annoyed at my lack of action. This time I was determined things would be different, and earlier in 2019 this collided with my desire to learn to build cross platform apps using Electron, so I finally got going on YASP (Yet Another Side Project).

This post is a light touch on the technical side of things as there’s a fair bit to cover and I don’t want to put people off using the GUI because of too much detail. For anyone interested in knowing more about how the internals work, rest assured there will be more posts. Many more. Far more than you expect or want.

The GUI App

The GUI is built using Electron, a combination of Node JS and Chromium (essentially open source Chrome). If you can write HTML and JavaScript you can build an Electron app - there’s obviously a bunch of specifics to learn, but I found it a very interesting experience. I have intentionally not used any kind of additional framework, instead I’m manipulating the DOM of the various windows using vanilla JavaScript.

Everything in Electron starts with the main process - this manages the application lifecycle and interacts with the operating system. The main process is responsible for creating the windows that the user interacts with, and the main window for my application looks like this:

The body of the page is a set of tabs, each of which has a grid of buttons, each button associated with a Salesforce CLI command. The tabs and commands are configurable via the app/shared/commands.js file included with the application. At present this is a smallish set of commands, but others are pretty close to ready and will be added in the coming weeks. The file embedded in the app will be the starting point, but users will be able to create a local version to tailor the tabs and commands to their exact requirements.

Clicking a command button opens another window to configure and execute the command:

The page for any command is constructed dynamically based on the configuration from the app/shared/commands.js file, which I’ll cover in a future blog post.  What it does mean is that, as long as the parameters for the command are either simple or already handled, adding a new command is simply a matter of adding a stanza to the commands.js file. 

As this is intended to be helpful to those who aren’t that comfortable with the command line, the command to be executed is displayed as the parameter details are entered. In fact there’s no need to execute a command from the app, a user can simply construct the command then copy and paste it to a command line session if they so desire.

The page header has the following buttons:

Help - this opens up the Help page for the command, with the Overview text again coming from the app/shared/commands.js file, and the Command Help coming directly from the Salesforce CLI.

Change Directory - after following the instructions in the README file, the GUI app starts in the cloned repository directory, which may not be where commands need to be executed from (to pick up a project specific sfdx-project.json file, for example). Clicking this button allows the user to change the working directory. If the directory is changed from the main page, this will apply to all commands executed thereafter, whereas if it is changed from a command page then it only applies to that command window. The current working directory is always displayed on the footer of a window.

Show Log - when you execute a command the output is shown in a Log modal. If you close this you can re-open it at any point in time by clicking the Show Log button.

Supported Commands

For the initial release, the following commands are supported:

  • Login to Org - authenticate a user for a Salesforce instance
  • Logout of Org - clear a previously authenticated user - always do this for orgs containing real data
  • Default Username - set the default user for future commands, either in a specific project directory or globally. 
    Note that the GUI takes this as a default value that the user may wish to change, so it will simply be pre-selected in the Org dropdown
  • Default Dev Hub - set the default developer hub user for future commands, either in a specific project directory or globally. 
    Again, the GUI takes this as a default value that can be overwritten.
  • Open Org - this is the command I use the most by far, as with my various duties as CTO of BrightGen and myriad side projects, I'm forever needing to be in a different org.
  • Create Scratch Org - create a Salesforce DX scratch org
  • Delete Scratch Org - delete a scratch org prior to its scheduled expiry time

I have more commands that are pretty close to ready, so there will be more added over the first weeks of 2020, assuming time allows.

All of the commands implicitly add a --json switch so that the output can be parsed programmatically - this does mean that you won't see any output until the command is complete, so patience is required.
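For example, opening an org from the GUI executes something along the lines of the following, with MyOrg being whichever username/alias was selected:

sfdx force:org:open -u MyOrg --json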

Caching Org Names

Any command that requires a user or dev hub user provides a datalist (a dropdown that you can also search in) with options based on the required org type (scratch, dev hub, any). The Salesforce CLI command to retrieve the orgs the user is currently authenticated against takes quite a while if you are authenticated against a lot of orgs, so this is run at startup and the results cached in a file with any authentication tokens removed. Any commands which change the authenticated orgs (e.g. creating a scratch org) will require the org cache to be refreshed, which can take a few moments. If you carry out any commands outside of the GUI which add or remove orgs, simply click the Refresh Orgs button on the main page and this will update the cached information.
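The command in question is the standard org list, along the lines of:

sfdx force:org:list --json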

Getting Started

Obviously you need to have the Salesforce CLI installed before you can work with the GUI. Assuming you have this, you can simply clone the repository, execute npm install to install the dependencies, then execute npm run start to fire up the GUI and away you go. The dependency installation takes a little while if you are on a slow internet connection, as Electron is a touch on the large side.

Note that I’ve tested the GUI on MacOS Catalina and Windows 10. It might work on other OS and it might not. Caveat emptor.


Saturday, 14 December 2019

Install Packages in Trailhead Playgrounds without Password Resets


The Credentials Conundrum

A number of Trailhead modules require you to install a package into a playground. This can present a challenge, as the package link provided is the generic version that starts with login.salesforce.com, requiring you to log in with the org credentials. Org credentials which, as a rule, you don't know, as you just open a playground by clicking on a link in the module itself:

 

The upshot is you need to reset your password to get something you can log in with - this crops up so often that it has its own module.

The Package Link

Picking at random a Trailhead module that requires a package - Quick Start: Salesforce Connect - right clicking the install link and copying the address gives us:

 

https://login.salesforce.com/packaging/installPackage.apexp?p0=04tE00000001aqG

As mentioned earlier, this points to login.salesforce.com, but it doesn't have to. The part of the URL that drives the installation of the package is:

/packaging/installPackage.apexp?p0=04tE00000001aqG

where packaging/installPackage.apexp is the setup page for installing a package, and p0=04tE00000001aqG identifies the package to be installed.

Applying the Package Link to the Playground

Once you have the package link, this can be applied to any org that you are logged into. Continuing the example from above, here’s a Trailhead playground that I prepared earlier:

From the URL bar, I can see the domain name for the org is: playful-shark-vh3q5k-dev-ed.lightning.force.com. Replacing login.salesforce.com in the package link determined above with this domain gives me:

https://playful-shark-vh3q5k-dev-ed.lightning.force.com/packaging/installPackage.apexp?p0=04tE00000001aqG

Changing the URL in the browser takes me to the installation page and I can install the package without knowing anything about the org password.

A CLI Tangent

It wouldn’t be a Bob Buzzard Blog post without pimping the CLI. Not particularly applicable to this scenario as you need to authenticate the org using the CLI so you’d have gone through the password reset already, but never any harm in learning about the power of the command line :)

If you have the package id, you can install directly from the command line using the following command:

sfdx force:package:install -r -p 04tE00000001aqG -w 10

The -r switch says not to ask for confirmation (useful when scripting an installation to run unattended as part of a CI setup) and -w 10 says to wait for up to 10 minutes for the package to install.