Thursday 28 December 2023

A Tale of Two Contains Methods

Image generated by DALL-E 3 from a prompt by Bob Buzzard 


Eagle-eyed regular readers of this blog may have noticed that I dropped away during the build up to Christmas this year, and there was a good reason for this. I was taking part in Advent of Code, having been introduced to it by our Credera brethren. A challenge a day for 25 days - two a day in fact, as if/when you solve the first, it gets tweaked to make it harder. 

I opted to use Apex to take on the challenges, which ensured that I couldn't brute force any solutions, and that I typically had to switch to a batch/asynchronous approach when I finally ran out of CPU or heap. This didn't always help by the way - a few times I had to give up after encountering variants of "the batch class is too large" error. A few other times I had to accept solving one challenge of the two when I couldn't figure out how to even start on the second!  I ended up with 39/50 successes though, which didn't seem like a bad return for a constrained language. It was extremely enjoyable, but be aware that this can easily soak up all of your spare time and more, and may not make you popular at home!

A common theme of the challenge was walking a route through a grid and keeping track of the tiles that I'd encountered before, usually with some additional information around which path I was following, the direction I'd moved in, and how many steps I'd taken in that direction. If a tile was net new then I needed to add it to a collection, but I also needed the collection ordered, as I was looking for the shortest or longest route possible, so the entries had to end up in a List.
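In Apex terms, the visited check usually reduced to building a composite key for the state and testing membership - a rough sketch (all the names are illustrative):

```
// Track visited states - position, direction and run length all matter
Set<String> visited = new Set<String>();

String key = x + ':' + y + ':' + direction + ':' + steps;
if (!visited.contains(key)) {
    visited.add(key);
    // ...enqueue this state for further exploration
}
```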

A Tale of Two Contains Methods

It was the best of methods, it was the worst of methods

One difference between Apex and some other languages I've worked with in the past is the contains method on the List class - this handy helper returns true if the list contains the element passed in to it. This saves me from either iterating the list each time I consider adding an element, or maintaining a separate "lookup" collection - typically a Set that matches the list and I'd check if the element was in there first.

I used the List contains method in my first attempt on one of the challenges, and found that I had to quickly go to batch apex. In order to walk the path I was carrying out a breadth-first search, adding every possible option for each step to a queue as a complex object, but always processing the shortest option first. Once the queue got to around 3,000 elements (complex objects), I found that I could only process a few of them before breaching the 60,000 millisecond CPU limit, and I was looking at an extremely large set of batches that would likely take multiple hours to complete.  After a bit of digging it looked like the check/add to the queue of steps wasn't scaling overly well, so I switched back to maintaining a separate lookup Set and using the Set contains method to determine if I'd seen it before. Once I did this, the CPU use dropped to the point where I could complete the whole thing in 2-3 batches, which I did.

I was somewhat taken aback by this, as I'd assumed that the List contains method would be using an efficient mechanism under the hood and would perform well regardless of the size of the list/complexity of the object. This turns out not to be the case, but that's really my fault for assuming - there's nothing in the docs to suggest that it will be doing anything of the sort.

Now that Advent of Code has completed, I've had the time to run some numbers on the CPU consumption of each of these contains methods (hence the witty title, with apologies to Charles Dickens of course), and present the results.

The Methodology

I have defined a (not particularly) complex object so that there's a bit of work involved to determine if another object matches it:

public class ContainsObject {
    public String stringRep;
    public Integer intRep;
    public Long square;
    public DateTime timestamp;

    public ContainsObject(Integer idx) {
        // example field derivation - enough work to make comparison non-trivial
        this.intRep = idx;
        this.stringRep = 'Object-' + idx;
        this.square = (Long) idx * idx;
        this.timestamp = System.now();
    }
}
I then add two thousand of these to a List, checking each one to see if I've seen it before. The CPU consumed is captured for every 100 elements and gives the following results:

   Count            CPU

   0 -  100          76
 400 -  500          97
 900 - 1000         169
1400 - 1500         460
1900 - 2000         582
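For reference, the List-based version of the check/add loop looks something like this (the debug statement stands in for however you capture the numbers):

```
List<ContainsObject> elements = new List<ContainsObject>();
Integer lastCpu = Limits.getCpuTime();

for (Integer idx = 0; idx < 2000; idx++) {
    ContainsObject candidate = new ContainsObject(idx);
    if (!elements.contains(candidate)) {
        elements.add(candidate);
    }
    if (Math.mod(idx + 1, 100) == 0) {
        Integer nowCpu = Limits.getCpuTime();
        System.debug(LoggingLevel.ERROR, (idx + 1) + ' : ' + (nowCpu - lastCpu));
        lastCpu = nowCpu;
    }
}
```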

From this I can deduce that there isn't anything particularly efficient going on under the hood - as the size of the List increases, so does the time taken to check and add 100 elements. In the 1900-2000 range the average is over 5ms per check/insert, which is quite a chunk for a couple of statements.

Switching to the List and lookup Set approach, I create a Set of the complex objects to mirror the contents of the List, but without any ordering, that I can use for the check part. If the element isn't present in the Set, I add it to both the Set and the List.
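The check/add then becomes (again, a sketch):

```
Set<ContainsObject> lookup = new Set<ContainsObject>();
List<ContainsObject> elements = new List<ContainsObject>();

for (Integer idx = 0; idx < 2000; idx++) {
    ContainsObject candidate = new ContainsObject(idx);
    if (!lookup.contains(candidate)) {
        // constant-time hash check, then keep the two collections in step
        lookup.add(candidate);
        elements.add(candidate);
    }
}
```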

Executing this for the same number of complex objects gives:

   Count            CPU

   0 -  100           4
 400 -  500           6
 900 - 1000           5
1400 - 1500           4
1900 - 2000           7

This is much more the kind of result I want to see - the performance isn't really changing regardless of the size of the Set, and while the final hundred takes slightly longer than the first hundred, the average is 0.07ms, which leaves me plenty of CPU to play with in the rest of the transaction.

No Downside?

As always, there's no such thing as a free lunch - the fact that I have to maintain another collection for speedy lookup does incur a heap penalty. It is a pretty cheap lunch though, as I'm only holding references to the objects stored in the List rather than copies, so the 2,000 entries in the Set consume another 8 KB of heap. This feels like a pretty decent trade off to me, but if your transaction has plenty of spare CPU and is butting up against the heap limit, you'll likely feel differently.


Saturday 11 November 2023

OpenAI GPTs - Meet Bob Buzzard 2.0


During OpenAI DevDay, the concept of custom GPTs was launched - ChatGPT with a bunch of preset instructions to target a specific problem domain, additional capabilities such as browsing the web, and extra knowledge in terms of information that may not be available on the web. 

In order to create and use GPTs, you need to be a ChatGPT Plus subscriber at $20/month, although in the UK there's VAT to be added so it works out around £20/month. This also gives priority access to new features, the latest models and tools, faster response times and access even at peak times. I signed up just to try out GPTs though, as they looked like a world of fun.

The Replicant

My first custom GPT is my replicant - Bob Buzzard 2.0. A GPT that has been pointed at most of my public and some of my private information. Instructed to respond as I would, you can expect irreverent or sarcastic responses as the mood takes (AI) me. Obviously very focused on Salesforce, and keen on Apex code. 

Right now you'll need to be a ChatGPT Plus user to access custom GPTs, but if you are you can find Bob Buzzard 2.0 at :  Here's a snippet of a response from my digital twin regarding the impact of log messages on CPU - something I've investigated in detail in the past :

Creating GPTs

This is incredibly simple - you just navigate to the create page and tell it in natural language how you want it to behave, define the skills, point it at additional web sites or upload additional information. It's easy and requires no technical knowledge, which does make me wonder why they announced it at developer day given there's no development needed, but let's not tilt at that windmill.

A Couple of Warnings

First, remember that any private information that you upload to a GPT won't necessarily remain private. If you don't instruct your custom GPT to keep its instructions and material private, it will happily share them on request. 

Second, I've given the replicant a mischievous side - from time to time it will just gainsay your original decisions when you ask for help with specific problems, maybe suggesting you have picked the wrong Salesforce technology, or telling you to bin it all off and use another vendor. Think of this as your reminder that a human should always be involved in any decision making based on advice from AI.

I'm Going to be Rich?

Something else that was announced at Developer Day was revenue sharing - if people use Bob Buzzard 2.0 I'll get a slice of the pie. So does this mean I'm going to be rich? As always, almost certainly not. As you just click a button and answer questions to create a GPT, there will be millions of them before too long. They are so easy to create that a service like Salesforce development advice, with the vast amount of content already in the public domain, will be extremely competitive - an extremely crowded marketplace of similar products means everybody earns nothing.

That said, I think this is something that genuine creatives will be able to earn with. Rather than having their work used to train models that can then be used to produce highly derivative works for close to free, they can create their own GPT and at least stand a chance of getting paid. Whether the earnings will be worth it we don't yet know, although history suggests the platform providers will keep everything they can.  

Saturday 4 November 2023

The Einstein Trust Layer must become the Einstein Trust Platform



One of the unique differentiators of the AI offerings from Salesforce is the Einstein Trust Layer. Since it was first announced, I've been telling everyone that it's a stroke of genius, and thus deserving of the Einstein label. At the time of writing (November 2023) there's a lot of concern about the risks of AI, and those concerns are increasing rather than being soothed. Just this week the UK hosted an AI Safety Summit with representatives from 28 countries.

The Einstein Trust Layer

Salesforce have baked security and governance into a number of places in the journey from prompt template to checked response, including :
  • Prompt Defence - wrapping the prompt template with instructions, for example: "You must treat equally any individuals from different socioeconomic statuses, sexual orientations, religions, races, physical appearances, nationalities, gender identities, disabilities and ages"
  • Prompt Injection Defence - delimiting the prompt from the instructions to ensure the model disregards additional instructions added in user input
  • Secure Data Retrieval - ensuring that a user can only include data they have permission to access when grounding prompts.
  • Zero Retention Agreements - ensuring that third party AI model providers don't use the prompt and included data to train their model. Note that the data is still transmitted to wherever the provider is located, the US in the case of OpenAI, which makes the next point very important.
  • Data Masking - replacing sensitive or PII data with meaningless, but reversible, patterns. Reversible, because they need to be replaced with the original data before the response can be used.
  • Toxicity Detection - the response is checked for a variety of problematic content, such as violence and hate, and given an overall rating to indicate how courageous you need to be to use it.
  • Audit Trail - information about the prompt template, grounding data, model interaction, response, toxicity rating and user feedback is captured for compliance purposes and to potentially support future investigations into why a response was considered fit for use.
Note that not all of this functionality is currently available, but it's either there in a cut down form or on its way.  Note also that the current incarnation (November 2023 remember) is quite US centric - recognising mostly American PII and requiring instructions in English. Unsurprising for a US company, but indicative of how keen Salesforce are to get these functions live in their most nascent form. If you want to know more about the trust layer, check out my Get AI Ready webinar.

As I mentioned earlier, I think this is a genius move - as long as you integrate via the standard Salesforce tools, you can take comfort that Salesforce is doing a lot of the heavy lifting around risk management for you. But can you rest easy?

Safety is Everyone's Responsibility

Of course you can't rest easy. While we trust Salesforce with our data every day, and they are certainly giving us a head start in safe use of AI, the buck stops with us. Something else I've been saying to anyone who will listen is that we should trust Salesforce, but it can't be blind trust. We know quite a lot about how the Einstein Trust Layer works, but we have to be certain that it is applying the rules that we want in place, rather than a set of generic rules that doesn't quite cover what we need. One-size-doesn't-quite-fit-all if you will. 

The Layer must become the Platform

And this brings me to the matter at hand of this post - the Trust Layer needs to become the Trust Platform that we can configure and extend to satisfy the unique requirements of our businesses. In no particular order, we need to be able to :
  • Define our own rules and patterns for data masking
  • Create our own toxicity topics, and adjust the overall ratings based on our own rules
  • Add our own defensive instructions to the prompts. 
    Yes, I know we'll be able to do this on a prompt by prompt basis, but I'm talking about company standard instructions that need to be added to all prompts. It will get tedious to manually add these to every prompt, and even more tedious to update them all manually when minor changes are required.
  • Include additional information in the audit logs
and much more - plugins that carry out additional risk mitigation that isn't currently part of the Salesforce "stack". Feels like there's an AppExchange opportunity here too!

Once we have this, we'll be able to say we are using AI responsibly, to the best of our ability and current knowledge at any rate.

Saturday 14 October 2023

Einstein Sales Emails

Image created by StableDiffusion 2.1 based on a prompt by Bob Buzzard


Sales GPT went GA in July 2023, and then went through a couple of "blink and you'll miss it" renames, before it was rolled into Einstein for Sales (as of October 2023, so it might have changed by the time you read this!). From the Generative AI perspective, this consists of a couple of features - call summaries, and the subject of this post - Sales Emails. 

Turning it On

This feature is pretty simple to enable - first turn on Einstein for Sales in Setup:

Then assign yourself the permission set:

Creating Emails

Once the setup is complete, opening the Email Composer shows a shiny new button to draft with Einstein:

In this case I'm sending the email from the Al Miller contact record, and I've selected Al's account - Advanced Communications - from the dropdown/search widget at the bottom. This will be used to ground the prompt that is sent to OpenAI, to include any relevant details from the records in the email.

Clicking the Draft with Einstein button offers me a choice of 5 pre-configured prompts - note that Salesforce doesn't yet offer the capability to create your own prompts, although that is definitely coming soon.

Since GA this feature has been improved with the ability to include product information, so once I choose the type of prompt - Send a Meeting Invite in this case - I have the option to choose a product to refer to. 

Once I choose a product, the Name and Description are pulled from the record, but I can add more information that might be relevant for this Contact - the words with the red border below.  Note that there's a limited number of characters allowed here - I was within 5-10 characters of the limit.

Clicking the Continue button starts the process of pulling relevant information from the related account to ground the prompt, adding the guardrails to ensure the response is non-toxic, and validating the response before offering it to me:

You can see where the grounding information has been used in the response - Al's name, role and company appear in the second paragraph, and the product information (including my added info) is in the third to try to entice Al to bite at a meeting.

If I don't care for this response I can edit and tweak it, or click the button again to get a new response:


Note that this just adds the next response under the previous one, as you can see by the 'Best regards' at the top of the screenshot. If I don't want to use a response, it's up to me to delete it. Make sure to check the entire content before sending, as it would be pretty embarrassing to let one go out with 4-5 different emails in it! Note also that I'm expected to add the date and time that I want to meet, to replace the

   [Customize: DATE AND TIME]

I can't help thinking that we'll all start receiving emails with these placeholders still there, much like at the moment when merge fields go bad!


Saturday 30 September 2023

Apex Comparator in Winter 24

Image generated by Stable Diffusion from a prompt by Bob Buzzard


The Winter 24 release of Salesforce introduces a few new Apex features, including one that I'm very pleased to see - the Comparator interface. Simply put, this new interface allows the List.sort() method to take a parameter that determines the sort order of the elements in the List. 

The Problem

Now this might not sound like a big change, but it simplifies the support of sorting Lists quite a bit. The way we used to have to do it was for the items in the List to implement the Comparable interface. I've written loads of custom Apex classes over the years that implement this, and it's very straightforward - here's an example from a class that retrieves the code coverage values for all Apex classes in an org and displays them in order of coverage with the lowest covered (problem!) classes first :

public Integer compareTo(Object compareTo) {
    CoverageRecord that = (CoverageRecord) compareTo;
    return this.getPercentage() - that.getPercentage();
}
In this case, implementing compareTo isn't any real overhead - I've created a custom class that contains a whole bunch of information about the coverage for an Apex class - total lines, lines covered etc, so an extra method with a couple of lines of code isn't a big deal. It's a little less convenient if I need to sort a class from an AppExchange package - in that case I'll need to create a new class from scratch to wrap the packaged class and implement the method. If we assume that my coverage class is now in a package, it would look something like :
public class CoverageWrapper implements Comparable {
    public BBCOVERAGE__CoverageRecord coverage {get; set;}

    public Integer compareTo(Object compareTo) {
        CoverageWrapper that = (CoverageWrapper) compareTo;
        return this.coverage.getPercentage() - that.coverage.getPercentage();
    }
}

Slightly less convenient - I now have a whole new class to maintain to be able to sort, and I have to store all the elements of the list in a CoverageWrapper rather than their original CoverageRecord. Again, not a huge amount of overhead but it gets a bit samey if I'm doing a lot of this kind of thing.

Much the same thing applies if I want to sort sObjects - I need to create a wrapper class and turn my list of sObjects into a list of the wrapper class before I can sort it. All those CPU cycles gone forever!

The Solution

This all changes in Winter 24 with the Comparator interface. I still need to create a class that implements an interface - the Comparator in this case :

public with sharing class CoverageComparator implements Comparator<CoverageRecord> {
    public Integer compare(CoverageRecord one, CoverageRecord tother) {
        return one.getPercentage() - tother.getPercentage();
    }
}

but I don't need to wrap the class/sObject that I am processing in this class and create a new list. Instead I call the new sort() method that takes a Comparator parameter:

List<CoverageRecord> coverageRecords;
CoverageComparator covComp=new CoverageComparator();
coverageRecords.sort(covComp);
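A nice bonus is that the same approach works directly on a List of sObjects, with no wrapper in sight - a sketch, with the object and field chosen purely for illustration:

```
public with sharing class OppAmountComparator implements Comparator<Opportunity> {
    public Integer compare(Opportunity one, Opportunity tother) {
        // treat null amounts as zero for the purposes of ordering
        Decimal first = (one.Amount == null) ? 0 : one.Amount;
        Decimal second = (tother.Amount == null) ? 0 : tother.Amount;
        return (first == second) ? 0 : ((first < second) ? -1 : 1);
    }
}

List<Opportunity> opps = [SELECT Id, Amount FROM Opportunity LIMIT 100];
opps.sort(new OppAmountComparator());
```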

CPU Impact

Regular readers of this blog will know that I'm always interested in the impact of changes on CPU time. In enterprise implementations CPU limits are something that I run up against again and again, so if a new feature improves this I want to know!

I used my usual methodology to test this - execute anonymous with most logging turned off and Apex at the error level, executing the same code three times and taking the average.

For each test I created the records before capturing the CPU time, then sorted the list. First using a comparator on the list of CoverageRecord objects:
List<CoverageRecord> covRecs=TestData.CreateRecords(100);

Integer startCpu=Limits.getCpuTime();
CoverageComparator covComp=new CoverageComparator();
covRecs.sort(covComp);
Integer stopCpu=Limits.getCpuTime();
System.debug(LoggingLevel.ERROR, 'CPU for comparator = ' + (stopCpu-startCpu));

And secondly wrapping them with classes that implement Comparable:

List<CoverageRecord> covRecs=TestData.CreateRecords(100);

Integer startCpu=Limits.getCpuTime();
List<CoverageWrapper> wrappers=new List<CoverageWrapper>();
for (CoverageRecord covRec : covRecs)
{
    CoverageWrapper wrapper=new CoverageWrapper();
    wrapper.coverage=covRec;
    wrappers.add(wrapper);
}
wrappers.sort();
Integer stopCpu=Limits.getCpuTime();
System.debug(LoggingLevel.ERROR, 'CPU for comparable = ' + (stopCpu-startCpu));

The results were broadly what I was expecting, as sorting a list in place is always going to be quicker than iterating it, wrapping the members, and then sorting, but it's always good to see the numbers:

  • For 100 records, Comparator took 9 milliseconds versus 11 milliseconds for wrapping
  • For 1,000 records, Comparator took 126 milliseconds versus 150 for wrapping
  • For 10,000 records, Comparator took 1844 milliseconds versus 2058 for wrapping
So once you get up to decent sized lists, there's a 10% difference, well worth saving. 

Columbo Close

Just one more thing ... 1844 milliseconds is still quite a chunk of the CPU limit. This is because my original implementation of the CoverageRecord calculates the percentage on demand, based on stored total and covered lines. Clearly in a large list this is being called a lot. I then re-ran the 10,000 test with a new implementation - CoverageRecordCachePercent - which stores the percentage after calculating, which knocked 300 milliseconds, or 15%, off the time. I'm sure that converting again to something that calculated the percentage once the total and covered lines were known and set it as a public property would reduce it further. Your regular reminder that even the smallest method can have an impact if it's being called a large number of times!
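The caching change is as simple as it sounds - something along these lines (reconstructed, as the original class isn't shown in full):

```
private Integer cachedPercentage;

public Integer getPercentage() {
    if (this.cachedPercentage == null) {
        // calculate once, reuse on every subsequent comparison
        this.cachedPercentage = (this.coveredLines * 100) / this.totalLines;
    }
    return this.cachedPercentage;
}
```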


Sunday 6 August 2023

Salesforce CLI Open AI Plug-in - Function Calling

Image generated by Stable Diffusion online, based on a prompt by Bob Buzzard

WARNING: The new command covered in this blog extracts information from your Salesforce database and sends it to OpenAI in the US to provide additional grounding to a prompt - only use this with test/fake data, as there is no attempt at masking or restricting field access.


Back in June 2023, only a couple of months ago on the calendar but a lifetime in generative AI product releases, OpenAI announced the availability of function calling. Now that my plug-in is integrated with gpt-3.5+, this is something I can use, but what value does it add? 

The short version - this allows the model to progress past its training data and request more information to satisfy a prompt. 

The longer version. As we all know, the data used to train gpt-3.5 cut off at September 2021, so often the response to a prompt will warn you that things may have changed. With function calling, when you prompt the model you also tell it about any functions you have available that it can use to retrieve additional information. If the functions wouldn't help, it will return the response as usual; if they would, it will return function calls for you to make and pass back to it. Note that the model doesn't call the functions - it tells you which functions to call and the parameters to pass, and expects you to make the decision as to whether you should call them, which is as it should be.

The Plug-in Command

In the latest version (1.2.2) of my plug-in, there's a new command that gives the model access to a function to pull data from Salesforce if needed. The function simply takes a query and returns the result as a JSON formatted string:

const queryData = async (query: string): Promise<string> => {
  const authInfo = await AuthInfo.create({ username: flags.username });
  const connection = await Connection.create({ authInfo });
  const result = await connection.query<{ Name: string; Id: string }>(query);

  return JSON.stringify(result);
};

When the command is executed, the request to the Chat Completion API includes the prompt supplied by the user, and details of the function:

const functions: ChatCompletionFunctions[] = [
  {
    name: 'queryData',
    description: 'A function to extract records from Salesforce via a SOQL query',
    parameters: {
      type: 'object',
      properties: {
        query: {
          type: 'string',
          description: 'The SOQL query to execute',
        },
      },
      required: ['query'],
    },
  },
];

Note that my function isn't targeting any specific objects, nor does it have any logic to figure out what might be needed based on the user's prompt - it simply executes a query and returns the results, which might be an error or an empty data set.

I execute this command as follows:

> sf bbai org data -t "Create an email introducing GenePoint, a new account we are tracking in Salesforce. Include the industry and number of employee details from our Salesforce database" -u

Note that I have to tell the prompt that the GenePoint account details can be found in our Salesforce database - if I don't do that it won't see any value in the function. I've also provided the description of a couple of fields that I want it to extract, and finally I've supplied the username that I'll connect to Salesforce with if the model asks me to run a query.

When the model responds, it will indicate if it wants me to execute the function by specifying the finish_reason as 'function_call', and adding the details in the function_call property of the message:

{
  index: 0,
  message: {
    role: 'assistant',
    content: null,
    function_call: {
      name: 'queryData',
      arguments: '{\n' +
        `  "query": "SELECT Industry, NumberOfEmployees FROM Account
                     WHERE Name = 'GenePoint' LIMIT 1"\n}`
    }
  },
  finish_reason: 'function_call'
}

In this case it wants me to call queryData with a query parameter of a SOQL query string that extracts details of the GenePoint account. I execute this, then add the results of the query to the prompt message and retry my request. This time I get a response with the email containing the details I wanted:

Subject: Introducing GenePoint - A Promising Addition to Our Portfolio

Dear Team,

I hope this email finds you well. I am excited to introduce you to our newest account, GenePoint, which we have recently started tracking in our Salesforce database. GenePoint is a biotechnology company that shows immense potential in its field. Allow me to provide you with some important details about this account.

Industry: Biotechnology

Number of Employees: 265


I think this is very cool because I haven't had to inspect the prompt in any way to decide to extract information from Salesforce. The model has been given a very basic function and knows (with some nudging, for sure) when it is appropriate to call it and, more importantly, the query that needs to be run to extract the details the user requested. 

Right now I've only told it about a single function, so it's either going to call that or nothing, but I can easily imagine a collection of functions that provide access to many internal systems. This allows the final request to be grounded with a huge amount of relevant data, leading to a highly accurate and targeted response.

Once again a reminder that this could result in sensitive or personally identifiable information being sent to the OpenAI API in the US to be processed, so while it's fun to try out you really don't want to go anywhere near your production data with this.


Sunday 30 July 2023

Salesforce CLI Open AI Plug-in - Generating Records

Image generated by Stable Diffusion 2.1, based on a prompt from Bob Buzzard


After the go-live of Service and Sales GPT, I felt that I had to revisit my Salesforce CLI Open AI Plug-in and connect it up to the GPT 4 Large Language Model. I didn't succeed in this quest, as while I am a paying customer of OpenAI, I haven't satisfied the requirement of making a successful payment of at least $1. The API function I'm hitting, Create Chat Completion, supports gpt-3.5-turbo and the gpt-4 variants, so once I've racked up enough cost using the earlier models I can switch over by changing one parameter. My current spending looks like it will take me a few months to get there, but such is life with competitively priced APIs.

The Use Case

The first incarnation of the plug-in asks the model to describe Apex, CLI or Salesforce concepts, but I wanted something that was more of a tool than a content generator, so I decided on creating test records. The new command takes parameters listing the field names, the desired output format, and the number of records required, and folds these into the messages passed to the API function. Like the Completion API, the interface is very simple:

const response = await openai.createChatCompletion({
     model: 'gpt-3.5-turbo',
     messages,
     temperature: 1,
     max_tokens: maxTokens,
     top_p: 1,
     frequency_penalty: 0,
     presence_penalty: 0,
});

result =[0].message?.content as string;

There's a few more parameters than the Completion API:

  • model - the Large Language Model that I send the request to. Right now I've hardcoded this to the latest I can access
  • messages - the collection of messages to send. The messages build on each other, and each message has a content (the instruction/request) and a role (who the message is from). This allows me to separate the instructions to the model (when the role is system, I'm giving it constraints about how to behave) from the request (when the role is user, this is the task/request I'm asking it to carry out).
  • max_tokens is the maximum number of tokens (approximately 4 characters of text) that my request combined with the response can be. I've set this to 3,500, which is approaching the limit of the gpt-3.5 model. If you have a lot of fields you'll have to generate a smaller number of records to avoid breaching this. I was able to create 50 records with 4-5 fields inside this limit, but your mileage may vary.
  • temperature and top_p guide the model as to whether I want precise or creative responses.
  • frequency_penalty and presence_penalty indicate whether I want the model to continually focus on tokens if they are repeated, or focus on new information.

As this is an asynchronous API, I await the response, then pick the first element in the choices array. 

Here's a few executions to show it in action - linebreaks have been added to the commands to aid legibility - remove these if you copy/paste the commands.

> sf bbai data testdata -f 'Index (count starting at 1), Name (Text, product name), 
            Amount (Number), CloseDate (Date yyyy-mm-dd), 
            StageName (One of these values : Negotiating, Closed Lost, Closed Won)' 
            -r csv

Here are the records you requested
1,Product A,1000,2022-01-15,Closed Lost
2,Product B,2500,2022-02-28,Closed Won
3,Product C,500,2022-03-10,Closed Lost
4,Product D,800,2022-04-05,Closed Won
5,Product E,1500,2022-05-20,Negotiating

> sf bbai data testdata -f 'FirstName (Text), LastName (Text), Company (Text), 
                            Email (Email), Rating__c (1-10)' 
                            -n 4 -r json

Here are the records you requested
[
    {
        "FirstName": "John",
        "LastName": "Doe",
        "Company": "ABC Inc.",
        "Email": "",
        "Rating__c": 8
    },
    {
        "FirstName": "Jane",
        "LastName": "Smith",
        "Company": "XYZ Corp.",
        "Email": "",
        "Rating__c": 5
    },
    {
        "FirstName": "Michael",
        "LastName": "Johnson",
        "Company": "123 Co.",
        "Email": "",
        "Rating__c": 9
    },
    {
        "FirstName": "Sarah",
        "LastName": "Williams",
        "Company": "Acme Ltd.",
        "Email": "",
        "Rating__c": 7
    }
]

There are a few interesting points to note here:

  • Formatting field data is conversational - e.g. when I use Date yyyy-mm-dd the model knows that I want the date in ISO8601 format. For picklist values, I just tell it 'One of these values' and it does the rest.
  • In the messages I asked it to generate realistic data, and while it's very good at this for First Name, Last Name, Email and Company, it isn't when told a Name field should be a product name - it just gives me Product A, Product B etc.
  • It sometimes takes it a couple of requests to generate the output in a format suitable for dropping into a file - I'm guessing this is because I instruct the model and make the request in a single API call.
  • I've generated probably close to 500 records while testing this, and that has cost me the princely sum of $0.04. If you want to play around with the GPT models, it really is dirt cheap.
The final point I'll make, as I did in the last post, is how simple the code is. All the effort went into the messages: asking the model to generate the data in the correct format, to leave out any additional commentary about responding to the request, and to generate realistic data. Truly the key programming language for Generative AI is the supported language that you speak - English in my case!

As before, you can install the plug-in via :
> sf plugins install bbai
or if you have already installed it, upgrade via :
> sf plugins update

More Information

Saturday 22 July 2023

Salesforce GPT - It's Alive!

Image generated by Stable Diffusion 2.1 in response to a prompt by Bob Buzzard


This week (19th July) the first of the Salesforce AI Cloud/Einstein GPT applications became generally available. You can read the full announcement on the Salesforce news site, but it's great to see something tangible in the hands of customers after the wave of marketing over the last few months. It's a relatively limited amount of functionality to start with, but I prefer that to waiting another 9 months for everything to be fully built out. GA is better than finished in this case!

What we know so far

We know that it's only available to Unlimited Edition, which already includes the existing Einstein AI features. This seems to be becoming the standard approach for Salesforce - Meetings, for example, was originally Performance and Unlimited Edition only, but is now available for all editions with Sales Cloud. It's a good way of keeping the numbers down without having to evaluate every application, and it's likely to include those customers that are running a lot of their business on Salesforce and thus will get the most value. 

We know that it's (initially) a limited set of features that look to be mostly relying on external generative AI systems rather than LLMs trained on your data. The features called out in the announcement include:

  • Service replies - personalised responses grounded in relevant, real time data sources. To be fair, this could be a model trained on your data, but the term grounded implies that it's an external request to something like Open AI GPT with additional context pulled from Salesforce.
  • Work Summaries - wrap ups of service cases and customer engagements. The kind of thing that Claude from Anthropic is very good at. These can then be turned into Knowledge Articles, assuming there was anything that could be re-used or repurposed for future customer calls.
  • Sales Emails - personalised and data-informed emails created for you, again sounding very much like a grounded response from something like OpenAI.
This looks like a smart move by Salesforce, as they can make generative AI available to customers without having to build out the infrastructure to host their own models - something that might present a challenge, given the demand for GPUs across the industry.

We know it will include the Einstein GPT Trust Layer. This is probably the biggest benefit - you could create your own integration with any of these external services, but you'd have to build all the protections in yourself, along with the UI that allows admins to configure them.

We don't know what pricing to expect when it becomes available outside of Unlimited Edition, but given that it's included with Einstein there, it may well be included in that SKU for Enterprise Edition, which is $50/user/month for each of Sales and Service Cloud Einstein.

We know it includes "a limited number of credits", which I'm pretty sure was defined as 8,000 in one of the webinars I watched. This sounds like a lot, but we don't know what a credit is, so it might not be. If it's requests, that is quite a few. If it's tokens, not so much - testing my Salesforce CLI integration with OpenAI used around 6,000 tokens for not very many requests. Still, if you built your own integration with any of these tools you'd have to pay for usage, so there's no reason to expect unlimited usage to be included when going via Salesforce, especially as I'm sure different customers will have wildly different usage. Those 6,000 tokens also cost me around 12 cents, so hopefully purchasing extra credits won't break the bank!

We also know, based on the Service Cloud GPT webinar on 19th July, that we'll be able to add our own rules around PII/sensitive data detection in prompts. It seemed highly likely that would be the case, but good to have it confirmed.

Finally, we know this is just the beginning of the GPT GAs. Dreamforce is less than 2 months away and there will be some big AInnouncements for sure.

More Information

Sunday 16 July 2023

Salesforce CLI OpenAI Plug-in

Image generated using Stable Diffusion 2.1


I've finally found time to have a play around with the OpenAI API, and was delighted to find that it has a NodeJS library, so I can interact with it via Typescript or JavaScript rather than learning Python. Not that I have any issue with learning Python, but it's not something that is that useful for my day to day work and thus not something I want to invest effort in right now.

As I've said many times in the past, anyone who knows me knows that I love a Salesforce CLI Plug-in, and that is where my mind immediately went once I'd cloned their Github repo and been through the (very) quick overview. 

The Plug-In

I wasn't really interested in surfacing a chat bot via the Salesforce CLI - for a start, it wouldn't really add anything over and above the public OpenAI chat page.

The first area I investigated was asking it to review Apex classes and call out any inefficiencies or divergence from best practice. While this was entertaining, and found some real issues, it was also wildly inaccurate (and probably something my Evil Co-Worker will move forward with to mess with everyone). 

I then built out a couple of commands to generate titles and abstracts for conference sessions based on Salesforce development concepts, but as the Dreamforce call for papers is complete for this year, that didn't seem worth the effort.

I landed on a set of explainer commands that would ask the AI model to explain something to help a more junior developer:

  • apex - explain a topic in the context of the Apex programming language
  • cli - explain a Salesforce CLI command, along with an example
  • salesforce - explain a Salesforce topic using appropriate language for a specific audience - admins, devs, execs etc.
Accessing the OpenAI API is very straightforward:

Install the OpenAI Package

> npm install openai

Create the API instance
    import { Configuration, OpenAIApi } from 'openai';

    const configuration = new Configuration({
        apiKey: process.env.OPENAI_API_KEY,
    });
    const openai = new OpenAIApi(configuration);
Note that the API Key comes from the OPENAI_API_KEY environment variable - if you want to try out the plug-in yourself you'll need an OpenAI account and your own key. 
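Since a missing environment variable is an easy thing to trip over, here's a small guard for the key lookup - requireApiKey is a hypothetical helper of mine, not part of the plug-in, shown to make the failure mode explicit:

```typescript
// Hypothetical helper: read the OpenAI API key from an environment map and
// fail fast with a clear message if it hasn't been set
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.OPENAI_API_KEY;
  if (!key) {
    throw new Error('OPENAI_API_KEY environment variable not set - create a key in your OpenAI account first');
  }
  return key;
}

// Usage in the plug-in would look something like:
// const apiKey = requireApiKey(process.env);
```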

Create the Completion

This is where the prompt is passed to the model so that it can generate a response:
    const completion = await openai.createCompletion({
        model: 'text-davinci-003',
        max_tokens: maxTokens,
        prompt,
        temperature,
    });

    result = (completion.data.choices[0].text as string);
Note that the completions API is legacy and received its final update in July 2023. I chose this simply because it's cheap. When you are writing a new plug-in connecting to a third party system you get a lot of stuff wrong, and I was keen not to rack up too large a bill while I was learning! The Chat Completions API is slightly different, in that it receives a list of messages and supports function calling. It's not dramatically different though, so I felt anything I learned applied to both.

I moved this into a shared function, hence most of the call is parameterised. The createCompletion function can take many parameters, but here's the explanation of those that I've used:
  • model - the large language model to use
  • max_tokens - the maximum number of tokens to generate in the response. A token is typically thought to be around 4 characters, so will sometimes represent whole words and sometimes parts of words. Input and output token counts are how you get charged for API use, so you want to keep them down where you can. The parameter defaults to 16, which in practice is fairly useless for a response to a human, so I've typically upped it to 256.
  • prompt - the question/task that I've set the model
  • temperature - a value between 0 and 2.0 that indicates how accurate a response I want. I find that 0.6 works pretty well. If you change this, do it in small increments, as I asked it to review an Apex class with a temperature of 1.5 and got pretty much gibberish.
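The "~4 characters per token" rule of thumb above can be turned into a rough budgeting aid. This is only an estimate - the real tokenizer splits text on subword units and will give different counts - and estimateTokens is my own illustrative helper:

```typescript
// Rough token estimate using the ~4 characters per token rule of thumb.
// For accurate counts you'd use OpenAI's actual tokenizer; this is just
// a quick sanity check before sending a prompt.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}
```

Useful for a quick check that a prompt plus the max_tokens you've requested will fit inside the model's context window.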
And that's about it. The rest of the code is available at the Github repository - it's mostly standard CLI Plug-In stuff of handling flags and displaying the response.
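As a flavour of what that flag handling boils down to, here's a toy version of turning the topic and audience flags into the prompt sent to the model - buildExplainPrompt and the wording are illustrative guesses, not the actual plug-in source:

```typescript
// Hypothetical helper: build the explain prompt from the -t (topic) and
// -s (audience) flag values, defaulting the audience to programmers as
// the plug-in does when no audience is supplied
function buildExplainPrompt(topic: string, audience = 'programmers'): string {
  return `Explain the Salesforce concept of ${topic} ` +
         `in language appropriate for an audience of ${audience}.`;
}
```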

The Plug-In In Action

Once I've installed the plug-in from NPM using the following command:

> sf plugins install bbai

I can ask for some explanations. First, explain the Salesforce concept of groups. As I don't provide details of the audience, it defaults to programmers:

> sf bbai explain salesforce -t groups

Here are the results

Groups in Salesforce are a way to organize and manage users in the system. They allow you to categorize users based on their roles and responsibilities, allowing for more streamlined access to information. For example, you can create a group for all the programmers within your organization, allowing them to access the same resources, documents, and tasks. Additionally, groups can be used to assign tasks, send messages, and create reports. By creating groups, you can quickly and easily manage permissions, tasks, and data access for your programming team.

Now the same command for an exec audience - notice the difference?

> sf bbai explain salesforce -t groups -s executive

Here are the results

Groups in Salesforce allow organizations to manage and control access to records and data. Groups are used to assign users to roles within the organization and to grant them access to the appropriate records. By using groups, organizations can easily assign the same set of permissions to multiple users, allowing them to quickly and easily manage access rights. This makes it easier to manage user permissions, and also helps to ensure that users only have access to the data and records they need to do their job. Groups also streamline the process of granting and revoking access rights, making it easier for organizations to stay secure and compliant.

And now I want to start working with some of these concepts, but I don't understand how to get my code into the instance via the CLI:

> sf bbai explain cli -c "source push"

Here are the results

Salesforce CLI command source push is used to deploy source from the local environment (e.g. a developer’s machine) to a Salesforce org. This command is useful in a DevOps process, as it enables developers to quickly and easily deploy their code changes to a Salesforce org for testing and deployment.

Example Execution: 

sfdx force:source:push --targetusername devorg --sourcepath /path/to/source/directory/

The command above will deploy all source code in the specified directory to the org with the username devorg.


Something that really stood out in this was the simplicity of the integration - it's literally a few lines of code. Which is as it should be - with generative AI, English is the programming language so most of the effort should go into the prompt rather than getting set up to send the prompt. Looking ahead a few years, I can see this being a game changer for integrating systems. Rather than starting with a book of API specifications that you need to adhere to, there will be a single entry point where you send conversational requests to a model trained on the specific data. "Return the id and all name and address fields of all the contacts we have that work for financial services customers in JSON format", without needing to develop that specific interface.

The Cost

This was top of mind for me during this experiment. I've seen situations in the past where request-based charging really racked up during testing. OpenAI allows you to set a hard and soft limit so that you can't get into that situation, but it also charges by the token for both input and output, which is a concern when you aren't that familiar with how many tokens might be in a 2-3 line prompt. After building my plug-in, with the usual number of mistakes, and testing it for several hours, I'd used the princely sum of $0.16.

While that might sound like there's nothing to worry about, the latest models are more expensive per token and can handle significantly more tokens for both input and output, so you may find that things stack up quickly. I've seen news stories lauding the ability to send an entire book to a generative AI as context, but no mention of how much you'll be charged to do that!
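The per-token charging model described above is simple enough to sketch, with separate input and output rates since newer models price the two directions differently. The rates here are illustrative only - check the current OpenAI pricing page - and estimateCostUSD is my own helper, not part of any SDK:

```typescript
// Sketch of per-token charging: both input (prompt) and output (completion)
// tokens are billed, at rates quoted per 1,000 tokens, which may differ
function estimateCostUSD(inputTokens: number, outputTokens: number,
                         inputPer1K: number, outputPer1K: number): number {
  return (inputTokens / 1000) * inputPer1K + (outputTokens / 1000) * outputPer1K;
}
```

For example, at text-davinci-003's rate of $0.02 per 1,000 tokens for both directions, 8,000 tokens in total comes to $0.16 - in line with the testing bill above.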

More Information

Sunday 9 July 2023

Salesforce World Tour London : AI Day

AI Day London

A steampunk image of a machine with a brain.

Image created using Stable Diffusion 2.1

It's been just over a week since Salesforce World Tour : AI Day, and I'll wager less than two weeks since the events team found out it was going to be rebranded AI Day! Kudos to them though, it was seamless and if you weren't following the tech news you'd never have guessed!

Those of us attending the UK event experienced great timing this year. The fact that it happened a few weeks after the launch of AI Cloud in New York means that cutting edge features have just reached demo state well outside of the usual release cycle. For example, we got the first public demo of Prompt Studio, which is the kind of thing that would typically land at Dreamforce in San Francisco, rather than a World Tour event on the other side of the world. I'm also fairly sure that we wouldn't have been treated to a deep dive in machine learning from Patrick Stokes if he wasn't leading the AI technical charge. 

Spare a thought for the poor execs who are suddenly having to travel a lot more than usual at this time of year, but if you want to position yourself as a leader in the AI space, you need some representation from Head Office. Bad news for them, but good news for us.

Prompt Studio looks good from what we saw in a short demo in a keynote. As expected, Salesforce are taking a cautious approach with guardrails around not just the AI, but who can create the prompts that drive the AI. There's some chatter on the socials that every admin is now a prompt engineer, but that feels like a simplistic view to me. Marketing and Legal are a couple of departments that will be very interested in contributing to and signing off prompts, rather than viewing it as regular configuration. Any admin who does become a prompt engineer could be looking at a rather lucrative career though, as that role is currently paying up to $335k/year, although it isn't expected to be around that long.

We also saw several other demos of GPT-as-an-Assistant across Sales, Service, Slack and Platform. This last one was particularly interesting for me, covering as it did automatic generation of Apex code from descriptive comments. This represents a significant change for us developers from the usual enhancements on the development side - typically we are keen to learn the detail of the new features so that we can start customising Salesforce with them. With GPT it's closer to handing off our work to a junior colleague - explaining what we want, waiting for it to appear and then reviewing it to see how well it matches our requirements. And, if history is anything to go by, realising that there was a whole layer of complexity that we forgot to mention!

One constant attendee at the London events is Kavindra Patel, VP of Trailblazer Engagement at Salesforce. I always try to catch up with him, and this year was no different. In a shocking turn of events, he let me in on the secret that there's quite a focus on AI at Salesforce right now!

And, of course, we had the Golden Hoodie award. This year it went to my fellow MVP and co-organiser of the London Salesforce Developer group, Todd Halfpenny, seen here preparing for a duel with Zahra Bahrololoumi CBE, CEO of Salesforce UK&I.

You can catch up with the World Tour keynote, and various other sessions, on Salesforce+. If you weren't able to attend in person I'd highly recommend it, as it is a good overview of where Salesforce will be in a few months once it's all built out. If you are interested in AI in general or AI Cloud in particular, but not sure where to start, join me on 19th July when I'll be doling out advice about preparing for the rise of the machines. 

Additional Information