Tuesday, 1 April 2025

Secret Agentforce Pen

The Artificial Intelligence revolution continues to upend the technology industry as Salesforce makes its first move into hardware devices with the Secret Agentforce Pen.

The Secret Agentforce Pen resembles a spy pen that everyone will know and love from their youth, but with a key difference - rather than simply recording a conversation, the embedded Agentforce connection acts on what it hears. 

No more running workshops to find out what your users are struggling with when using your Salesforce implementation - simply place the Secret Agentforce Pen in an unobtrusive place in the office and capture their views as they work.

Any issues or ideas are seamlessly turned into Cases, and the best part is that as your users have no idea they are being recorded, you'll get the unvarnished truth!

Or wear your Secret Agentforce Pen with pride to take action on those casual conversations in the kitchen. Delight your users when Agentforce implements their request by the time they've returned to their desk - they'll think you are some kind of wizard!

According to a source at Salesforce, speaking on condition of anonymity as they were not authorised to share information about the new product, "We've tried various prototypes over the last 18 months, but struggled to find that blend of cutting-edge Generative AI and fun products from the classified pages of kids' comics in the 70s. The Secret Agentforce Pen was the culmination of this search, providing that elusive mix of cheap retro style and modern functionality."

Pricing for the Secret Agentforce Pen has not yet been announced, but the same source confirmed that in keeping with the rest of the Agentforce product set, it will be confusing enough that most customers will be scared to use it. 

Saturday, 22 March 2025

Keep Your Agentforce Dev Org Alive and Kicking


Image generated by OpenAI GPT4o in response to a prompt by Bob Buzzard

Introduction

One of the major announcements at TrailblazerDX '25 was the availability of Salesforce Developer Editions with Agentforce and Data Cloud. This is something that just about everyone has been asking for since the first generative AI features went GA in April 2024. Until now, everything that wasn't associated with a paid contract expired - Trailhead specials after 5 days, while Partner SDOs couldn't be taken past 90 days, and even getting that far required raising a case to extend them past the default 30 days. The icing on the cake was that the metadata support wasn't fantastic, which meant manually recreating features after spinning up a new org - something that gets a bit samey after the fifth time.

Developer Editions don't live forever though - if you don't use them, you lose them: after 180 days for the non-Agentforce variant (now known as legacy), and an evanescent 45 days if you want the generative AI features. To be fair, the signup page states that Salesforce "may" terminate orgs that are unused for 45 days rather than "will", but if you've put a lot of effort in you don't want to take any risks.

45 days sounds like a long time, and it's easy to assume you'll manage to remember a login every 6 weeks or so, but my experience is that it's easy to get distracted and miss a slot. Machines are much better at remembering to do things on a schedule, so this is one task that I prefer to hand off - in this case to a GitHub Action.

Access Dev Org Action

GitHub Actions allows automation of software workflows through YAML (YAML Ain't Markup Language) files. Typically I'd use actions for continuous integration tasks on a project - automated build and test every night, for example - but I can also use them for something simpler: in this case, running a Salesforce CLI command against my dev org. If you set up your environment and secret name as I have, you'll be able to use my YAML file exactly as is.

Set Up Your Dev Org

The first thing to do is get your new dev org and set up CLI access. Sign up for a new Agentforce dev org at:

https://www.salesforce.com/form/developer-signup/?d=pb

Then connect it to the Salesforce CLI using the command:

> sf org login web -o AFDevOrg

Log in with your username/password and approve the OAuth access.

Execute the following CLI command:

> sf org display -o AFDevOrg --json --verbose

and copy the Sfdx Auth Url value from the output.

Now that you have this value, you can move on to the GitHub steps.

Create Your Repository

In order to use GitHub Actions on a free account, your repository will need to be public. This doesn't mean that your Dev Org becomes public property, as you'll be storing the Sfdx Auth Url in an environment secret that only you, as the repository owner, can see:



Create Your Environment and Secret

Once you've created your repository, you need an environment to store your secret - click the Settings tab on the top right and choose the Environments option in the left-hand menu:

Click the New environment button on the resulting page, then name your environment and click the Configure environment button:


On the resulting page, scroll down to the Environment secrets section and click the Add environment secret button:

Name your secret, paste the Sfdx Auth Url value in the Value textbox and click the Add secret button.


Create and Execute Your Action

The final setup step is to create the YAML file for your action. This needs to live in the .github/workflows repository subfolder and can be called anything you like - I've gone for:

.github/workflows/renew.yaml
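
Here's a minimal sketch of what the workflow can look like. The environment name (AFDevOrg) and secret name (SFDX_AUTH_URL) are assumptions for illustration - substitute whatever you chose in the earlier steps:

name: Renew Agentforce Dev Org Lease

on:
  # Run on demand from the Actions tab
  workflow_dispatch:
  # And automatically at 20:30 (UTC) every Monday
  schedule:
    - cron: '30 20 * * 1'

jobs:
  renew:
    runs-on: ubuntu-latest
    environment: AFDevOrg   # the environment holding the secret - name assumed
    steps:
      - name: Install the Salesforce CLI
        run: npm install --global @salesforce/cli

      - name: Authenticate using the stored Sfdx Auth Url
        run: |
          echo "${{ secrets.SFDX_AUTH_URL }}" > authfile.txt
          sf org login sfdx-url --sfdx-url-file authfile.txt --alias AFDevOrg

      - name: Access the org to renew the lease
        run: sf org display --target-org AFDevOrg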

Once the file is present, clicking on the Actions tab will show the name of the action in the All workflows list on the left-hand side - Renew Agentforce Dev Org Lease in this case.


Click on the name to see the run history and other options. There are no runs yet, but I've defined a workflow_dispatch event trigger so that I can run it on demand - in my experience this is well worth adding.


Start a run by clicking the Run workflow dropdown and the resulting Run workflow button:

I find that either the page doesn't refresh or I can't wait for that to happen, so I click the Actions tab again to see the In progress run:


Clicking on the run name gives me a little more detail:


And clicking the card shows me workflow steps - I've waited until they all completed successfully, so my run is green!


Switching to my dev org and checking my user record, I can see an entry matching the action execution, although obviously it comes from a GitHub IP address in the US:


And that's it. As well as the workflow_dispatch event trigger, I've also specified a cron trigger (as in the sketch above) so that it executes at 20:30 every Monday - rather more often than the 45 days requires, but that gives me some wiggle room if my job starts failing and I don't get around to fixing it quickly.

More Information





Wednesday, 5 February 2025

Evaluate Dynamic Formulas in Apex GA

Image created by ChatGPT4o based on a prompt by Bob Buzzard

Introduction

Like zip handling in Apex, the ability to evaluate dynamic formulas in Apex is Generally Available in the Spring '25 release of Salesforce. Unlike zip handling, which we've all been wanting/battling with for years, this might not be such an obvious win. It's definitely something I could have used a few times in my Salesforce career, mostly around rule engines. I've written several of these in my time, to do things like apply a discount to a price based on a number of attributes of a customer, their spend, and whether they have a contract with us. 

In order to avoid everyone having to become an Apex or Flow expert, rules are configured through custom settings or custom metadata types, but defining the criteria to execute rules is trickier to surface for Administrator configuration, especially if you need to look at the values contained by a specific record. Then it's a toss-up between expressing the criteria in pro/low code and making them less configurable, or writing something to validate and parse boolean-style expressions that reference records and fields. Neither option is great.

With the ability to evaluate formula fields in Apex, Admins can express criteria in a familiar format and this can easily be evaluated by the engine to decide whether to apply the rule.
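
To make that concrete, here's a minimal sketch of the shape this takes. The formula text and field choices are illustrative - in practice the criteria string would come from the Admin's configuration record:

// Evaluate admin-configured criteria against a candidate record
String criteria = 'ISPICKVAL(Industry, "Technology") && AnnualRevenue < 50000000';

FormulaEval.FormulaInstance rule = Formula.builder()
                   .withType(Account.class)
                   .withReturnType(FormulaEval.FormulaReturnType.BOOLEAN)
                   .withFormula(criteria)
                   .build();

// The record passed to evaluate must include every field the formula references
Account candidate = [select Industry, AnnualRevenue from Account limit 1];

if ((Boolean) rule.evaluate(candidate))
{
    // Criteria satisfied - apply the rule
}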

The Sample

The examples I'd seen to date mostly queried a record from the database and evaluated a formula that concatenated some of its fields, but I didn't find that overly exciting. Instead I went a different route - I'm envisaging an Administrator creating the formula as plain text in a configuration record, so it would be useful to allow them to check it works before settling on it. I guess it could also be used to test someone's ability to create valid formulas without needing to go into Setup and create a field. So many possibilities!

The page itself is pretty simple:


There's a picklist to choose the sobject type (limited to Account, Opportunity and Contact - this is a demo after all), another picklist to choose the formula return type, and a lookup to select the record to use when evaluating the formula. In this case my test record is in the Technology business and doing pretty well with annual revenue of ten million:


My rule should fire if the candidate account is in the technology business and making less than fifty million a year, so let's try that out:



And it works. But this is hardly an exhaustive test, so let's check what happens if I change the revenue target to less than nine million a year:


Which also works. But why is the result displayed in green regardless of whether it is true or false? Because the formula evaluated successfully, of course - if I introduce an error into my formula and try to compare the website to a numeric value, the result is red:


Show Me The Code!

You can find the code in my Spring '25 samples GitHub repository. The method to evaluate the formula is as follows:

@AuraEnabled
public static String CheckFormula(String formulaStr, String sobjectType,
                                  String returnTypeStr, Id recordId)
{
    // Convert the user-supplied return type name into the FormulaEval enum
    FormulaEval.FormulaReturnType returnType =
              FormulaEval.FormulaReturnType.valueOf(returnTypeStr);

    // Build a formula instance for the chosen sobject type
    FormulaEval.FormulaInstance formulaInstance = Formula.builder()
                                    .withType(Type.forName(sobjectType))
                                    .withReturnType(returnType)
                                    .withFormula(formulaStr)
                                    .build();

    // Query the record with just the fields the formula references
    String fieldNameList = String.join(formulaInstance.getReferencedFields(), ',');
    String queryStr = 'select ' + fieldNameList + ' from ' + sobjectType +
                      ' where id = :recordId LIMIT 1';
    SObject s = Database.query(queryStr);

    // Evaluate the formula against the record and return the result as text
    Object formulaResult = formulaInstance.evaluate(s);
    return formulaResult.toString();
}

The information populated by the user through the various inputs is passed through verbatim and used to build the formula instance. One aspect that has changed since I used the original beta is the ability to have the formula instance tell me which fields I need to query from the record via the getReferencedFields method, so I can drop them into my dynamic query with minimal effort:

String fieldNameList = String.join(formulaInstance.getReferencedFields(),',');
String queryStr = 'select ' + fieldNameList + ' from ' + sobjectType +
                  ' where id=:recordId LIMIT 1';
SObject s = Database.query(queryStr);

More Information




Sunday, 2 February 2025

Zip Handling in Apex GA


Image generated by ChatGPT 4o based on a prompt from Bob Buzzard

Introduction

Anyone who's developed custom solutions in the Salesforce world is likely to have come up against the challenge of processing zip files. You could use a pure Apex solution, such as Zippex, leverage the Metadata API, or push it all to the front end and use JavaScript. What these options all had in common was leaving you dissatisfied. Apex isn't the best language for CPU-intensive activities like compression, so the maximum file size was relatively small. Using the Metadata API this way isn't officially supported and introduces asynchronicity, while forcing the user to the front end simply to extract the contents of a file isn't the greatest experience and might introduce security risks.

One year ago, in the Spring '24 release of Salesforce, we got the developer preview of Zip Handling in Apex, which looked very cool. I encountered a few teething issues, but the preview was more than enough to convince me this was the route forward once it was Generally Available. Fast forward to a week or so from now (31st Jan) and it will be GA in the Spring '25 release. That being the case, I felt I should revisit it.

The Sample

I built the sample in a scratch org before the release preview window opened, and all I had to specify was the ZipSupportInApex feature. The sample is based on my earlier attempts with the developer preview - a Lightning Web Component front end which allows you to select a zip file from Salesforce Files, an Apex controller to extract the entries from the zip file, and a further Lightning Web Component to display the details to the user:


The key differences are (a) I could get the zip entries back successfully this time and (b) I captured how much CPU and heap were consumed processing the zip file. The heap was slightly larger than the size of the zip file in this case, but the CPU was a really nice surprise - just 206 milliseconds consumed.

One expected fun side effect was the ability to breach the heap size limit quite dramatically. As we all know, this limit is enforced through periodic checking rather than a finite heap size, so you can go over as long as it's only for a short while. Processing the entries in a zip file clearly satisfies the "short while" requirement, as I was able to extract the contents of a 24Mb zip file, pushing the heap close to 25Mb:


It was gratifying to see that once again the CPU wasn't too onerous - even when handling a zip file that is theoretically far too large, only 1,375 milliseconds of CPU were consumed. 

Show Me The Code!

You can find the code in my Spring '25 GitHub repository. The key Apex code is as follows:

@AuraEnabled(cacheable=true)
public static ZipResponse GetZipEntries(Id contentVersionId)
{
    ZipResponse response = new ZipResponse();
    List<Entry> entries = new List<Entry>();
    try
    {
        ContentVersion contentVer = [select Id, ContentSize, VersionData, Title, Description, PathOnClient
                                     from ContentVersion
                                     where Id = :contentVersionId];

        // Create a reader on the zip file body and iterate its entries
        Blob zipBody = contentVer.VersionData;
        Compression.ZipReader reader = new Compression.ZipReader(zipBody);
        for (Compression.ZipEntry zipEntry : reader.getEntries())
        {
            System.debug('Entry = ' + zipEntry.getName());
            Entry entry = new Entry();
            entry.name = zipEntry.getName();
            entry.method = zipEntry.getMethod().name();
            entry.compressedSize = zipEntry.getCompressedSize();
            entry.uncompressedSize = zipEntry.getUncompressedSize();
            entries.add(entry);
        }

        // Capture the consumed CPU and heap to return to the front end
        response.entries = entries;
        response.cpu = Limits.getCpuTime();
        response.size = contentVer.ContentSize;
        response.heap = Limits.getHeapSize();
    }
    catch (Exception e)
    {
        System.debug(e);
    }

    return response;
}

ZipResponse is a custom class to return the information in a friendly format to the front end, including the CPU and heap consumption, while Entry is another custom class that wraps the information I want to show the user in an AuraEnabled form. The actual amount of code required to process a zip file is very small - once you have the zip file body as a Blob, simply create a new ZipReader on it and iterate the entries:

Blob zipBody = contentVer.VersionData;
Compression.ZipReader reader = new Compression.ZipReader(zipBody);
for (Compression.ZipEntry zipEntry : reader.getEntries())
{
    // your code here!
}


Performance

Regular readers of this blog will know that I'm always keen to compare performance, so here's the CPU and heap consumption for the new on-platform solution versus Zippex for my test zip files:

Zip Size (bytes)  | On Platform CPU (ms) | On Platform Heap (bytes) | Zippex CPU (ms) | Zippex Heap (bytes)
66,408            | 30                   | 90,306                   | 19              | 142,149
123,746           | 57                   | 143,425                  | 17              | 253,086
3,992,746         | 48                   | 4,022,537                | 208             | 8,000,354
8,237,397         | 58                   | 8,256,426                | 387             | 16,480,442
24,014,156        | 973                  | 24,431,186               | -               | Heap size exception at 48,028,743

Zippex uses less CPU at the start, but that looks to be falling behind as the zip file size increases. Heap is where the on-platform solution really scores though, with Zippex needing 60-100% more and unable to process the largest file.

More Information




Wednesday, 29 January 2025

Book Review - Salesforce Anti-Patterns


 Disclaimer - I didn't purchase this book, I was given a pre-release copy by Packt Publishing to review

I've been interested in anti-patterns since I read a short article, more years ago than I care to remember, that covered 3-4 of them. When I joined the Salesforce ecosystem, the anti-pattern I saw most often was the big ball of mud, which I still reference in talks to this day - "These systems show unmistakable signs of unregulated growth, and repeated expedient repair" - something all of us who have been here for a while are familiar with. I was therefore curious when Packt Publishing reached out and asked me to review the second edition of Lars Malmqvist's book on Salesforce Anti-Patterns - which of my old favourites would I see in there, and what was new in the world of bad solutions to problems that look good at first glance? It's fair to say I wasn't disappointed.

This is a great book for any aspiring architect, as it will teach you the red flags to look out for, which aren't always obvious. Sometimes the problems they indicate take months or even years to appear, but appear they surely will. Recognising this at the outset will save you a lot of angst.

It's also a great book for experienced architects, as it gives you facts rather than feelings with which to argue against a particular approach. Instead of saying that you've seen this kind of thing before and it didn't end well, you can point out that the planned (or current) strategy exhibits the classic symptoms of a named anti-pattern, talk knowledgeably about the likely consequences, and advise on alternatives that will lead to better outcomes. I also like that this book isn't entirely focused on programming or technical anti-patterns - it covers the human side of things too, such as communication, collaboration, and allocation of work.

While there are plenty of the old favourites in there that apply equally to any technology project rather than just Salesforce, there are also a number that you'll only see on a Salesforce project. These relate to specific characteristics of Salesforce, such as sharing, licensing, and the data model, so you are unlikely to encounter them elsewhere.

Each anti-pattern is explained through a scenario, followed by a description of the problem, how it comes to be presented as a solution, the likely outcome of using it (rarely good), and a better alternative. Each chapter concludes with key takeaways and, in a nice touch, advice around the topic area for the CTA review board. A second nice touch is the coverage of how the introduction of, or reliance on, AI can lead you into an anti-pattern that you might otherwise have avoided.

One thing I will call out though - the likely real-world experience is that the worst of the anti-pattern is rowed back, but there still isn't the appetite to accept the better solution - life is rarely that neat and tidy. And that's okay - as architects and consultants our job is to advise and make the customer aware of the risks involved. Once that is done, the final decision rests with the customer, and it will be an informed decision, aware of the potential outcome. As long as all stakeholders agree and accept the risk, nobody can point fingers later. Striving for perfection and coming up a bit short is far better than accepting failure at the outset!

The second edition is available at https://packt.link/U7I6R - you won't regret buying it.



Sunday, 1 December 2024

Guest User Access Comes to BrightSIGN

Over the last couple of years I've received a number of requests to allow Unauthenticated/Guest Users to be able to sign records like internal users. Unfortunately this isn't something that is possible for a security-reviewed package. In order to be able to save a File/Attachment against a record, the user must have Edit access to the record, and since the Spring '21 release of Salesforce, Guest Users can't have Edit access. The Experience Cloud Developer Guide has a handy workaround of executing in system context and without sharing, but if I change the BrightSIGN code to work this way it will fail the security review, and rightly so - I'd be ignoring the security settings of the org and allowing an unauthenticated user to carry out actions that should be blocked.

While I can't publish a package that allows a Guest User to execute code in system context without sharing, there's nothing to stop the owner of the org adding this capability after installing the package. So in version 4.1 of BrightSIGN, Guest Users can capture a signature as a File. There's a caveat to this though - as I can't associate the File with a record, it will be "orphaned". 

Full details of how to configure BrightSIGN to allow Guest User access are available in the Implementation Guide for V4, but the upshot is that rather than the file detail having the following sharing:


It just has the sharing for the Owner:




The admin can then create an Apex trigger on ContentVersion and take appropriate action. This is a bit tricky though, as they'll need to find a way to tie the ContentVersion back to the specific record. The other option is a second component to handle the Signature Captured Event - there's an example of using this to update a record when a signature is captured, and it can easily be tweaked to insert a ContentDocumentLink record to associate the File with the target record, as the sketch below shows.
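
Here's a minimal sketch of that linking step, assuming the handler has already worked out which ContentVersion and target record belong together - the method and parameter names are mine, not part of the package:

// Associate an orphaned signature File with its target record.
// How the two Ids are obtained is handler-specific and not shown here.
public static void linkSignatureFile(Id contentVersionId, Id targetRecordId)
{
    ContentVersion contentVer = [select ContentDocumentId
                                 from ContentVersion
                                 where Id = :contentVersionId];

    insert new ContentDocumentLink(
        ContentDocumentId = contentVer.ContentDocumentId,
        LinkedEntityId = targetRecordId,
        ShareType = 'V');  // view access via the record
}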

Related Posts




Saturday, 26 October 2024

Einstein Copilot vs Complex Queries

Image created by GPT-4o based on a prompt by Bob Buzzard

Introduction

Now that Agentforce (for Service at least) is GA, we have access to the latest Atlas Reasoning Engine. That's the theory at least - I haven't seen a way to find out what version is in use, which models it has access to, and so on, but that doesn't worry me too much as I can't change or influence it anyway. Anecdotally I do feel that the ability to handle requests with more than one step has improved over the last few months, but this sounded like a step change - Chain of Thought reasoning and an iterative approach to finding the best response!

My focus continues to be on Copilot (aka Agentforce for CRM, or whatever name it's going by this week), but I'm writing rather more custom actions than I'd like. Each of these introduces a maintenance overhead and as Robert Galanakis wisely wrote "The code easiest to maintain is the code that was never written", so if there's a chance to switch to standard functionality I'm all over it.

The Tests

Where I've found standard Copilot actions less than satisfactory in the past is around requests that require following a relationship between two objects and applying filters to each object. Show me my accounts created in the last 200 days with contacts that I've got open tasks against, that kind of thing. Typically it would satisfy the account ask correctly but miss the contact requirement. Now I can easily create an Apex action to handle this request, but the idea of an AI assistant is that it handles requests for me, rather than sending me the requirements so I can build a solution!
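
For context, this is roughly the query logic such a custom action ends up wrapping - a sketch only, with illustrative field choices and the invocable plumbing omitted:

// Accounts I own, created in the last 200 days, that have contacts
// with open tasks of mine against them
Set<Id> contactIds = new Set<Id>();
for (Task t : [select WhoId from Task
               where IsClosed = false
               and OwnerId = :UserInfo.getUserId()
               and Who.Type = 'Contact'])
{
    contactIds.add(t.WhoId);
}

Set<Id> accountIds = new Set<Id>();
for (Contact c : [select AccountId from Contact
                  where Id in :contactIds
                  and AccountId != null])
{
    accountIds.add(c.AccountId);
}

List<Account> matches = [select Name from Account
                         where Id in :accountIds
                         and OwnerId = :UserInfo.getUserId()
                         and CreatedDate = LAST_N_DAYS:200];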

I've created 4 products with similar names:

  • Cordless Drill 6Ah
  • Cordless Drill 4Ah
  • Cordless Drill 2Ah
  • Travel Cordless Drill
and created a single Opportunity for the Cordless Drill 8Ah. I then ask some questions about this.

Test #1


Ask Copilot to retrieve Opportunities based on the generic product name 'Cordless Drill'

When I've tested this in the past, Copilot has refined 'Cordless Drill' to one of the four available products and then searched for Opportunities containing that product. Sometimes it gets lucky and picks the right one, but more often than not it picks the wrong one (3-1 odds) and tells me I don't have any.

The latest Copilot gets this right first time.



and checking the Query Records output shows the steps involved:



  • First it looked for products matching 'Cordless Drill'
    A limit of 10,000 seems a bit large, but I guess this isn't being passed on to an LLM and consuming tokens.
  • Then it finds the opportunities which have line items matching any of the 'Cordless Drill' products.
  • Then it pulls the information about the Opportunity.
    Interesting that it only narrows it to my Opportunities at this point - it feels like the line item query could get a lot of false positives.
So I can pick the odd hole, but all in all a good effort.

Test #2


The next test was to add some filtering to the opportunity aspect of the request - I'm only interested in opportunities that are open. Once again, Copilot has no problem handling this request:


Test #3


The final test used a prompt that had failed in earlier tests. This introduced a time component - the opportunities had to be open and created in the last 300 days - and used slightly different wording, asking for opportunities that include "the cordless drill product".

This time Copilot was wrong-footed:


My assumption here was that the date component had tripped it up - maybe it didn't include today - or the "the cordless drill product" had resulted in the wrong product being chosen. Inspecting the queries showed something else though:


  • The products matching 'cordless drill' had been identified correctly
    This was through a separate query, presumably because I'd mentioned 'product'
  • The opportunity line items are queried for the products, but this time the query is retrieving the line item Id rather than the related Opportunity Id
  • An attempt is then made to query opportunity records that are open, created in the last 300 days and whose Id matches the line item Id, which will clearly never be successful. 
Rewording the request to "Show my open opportunities for the last 300 days that include cordless drills" gave me the correct results, so it appears the use of the keyword 'product' changed the approach and caused it to lose track of what it was supposed to be doing.

Conclusion


The latest reasoning engine is definitely an improvement on previous versions, but it still gets tripped up with requests that reference multiple sObject types with specific criteria. 

While rewording the request did give a successful response, that isn't something I can see going down well with users - they just want to ask what is on their mind rather than figure out how to frame the request so that Copilot will answer it.

So I can't retire my custom actions just yet, but to be fair to Salesforce they have said that not all aspects of the Atlas Reasoning Engine will be available until February 2025. That said, I'm not sure I'd be happy if I'd been charged $2 to see it get confused!

Related Posts