
Saturday, 26 October 2024

Einstein Copilot vs Complex Queries

Image created by GPT-4o based on a prompt by Bob Buzzard

Introduction

Now that Agentforce (for Service at least) is GA we have access to the latest Atlas Reasoning Engine. That's the theory at least - I haven't seen a way to find out what version is in use, which models it has access to etc, but that doesn't worry me too much as I can't change or influence it anyway. Anecdotally I do feel that the ability to handle requests with more than one step has improved over the last few months, but this sounded like a step change - Chain of Thought reasoning and an iterative approach to finding the best response! 

My focus continues to be on Copilot (aka Agentforce for CRM, or whatever name it's going by this week), but I'm writing rather more custom actions than I'd like. Each of these introduces a maintenance overhead and as Robert Galanakis wisely wrote "The code easiest to maintain is the code that was never written", so if there's a chance to switch to standard functionality I'm all over it.

The Tests

Where I've found standard Copilot actions less than satisfactory in the past is around requests that require following a relationship between two objects and applying filters to each object. Show me my accounts created in the last 200 days with contacts that I've got open tasks against, that kind of thing. Typically it would satisfy the account ask correctly but miss the contact requirement. Now I can easily create an Apex action to handle this request, but the idea of an AI assistant is that it handles requests for me, rather than sending me the requirements so I can build a solution!

I've created 4 products with similar names:

  • Cordless Drill 6Ah
  • Cordless Drill 4Ah
  • Cordless Drill 2Ah
  • Travel Cordless Drill
and created a single Opportunity for the Cordless Drill 6Ah. I then asked some questions about this.

Test #1


Ask Copilot to retrieve Opportunities based on the generic product name 'Cordless Drill'

When I've tested this in the past, Copilot has refined 'Cordless Drill' to one of the four available products and then searched for Opportunities containing that product. Sometimes it got lucky and picked the right one, but more often than not (3-1 odds against) it picked the wrong one and told me I didn't have any.

The latest Copilot gets this right first time.



and checking the Query Records output shows the steps involved:



  • First it looked for products matching 'Cordless Drill'
    A limit of 10,000 seems a bit large, but I guess this isn't being passed on to an LLM and consuming tokens.
  • Then it found the opportunities which have line items matching any of the 'Cordless Drill' products.
  • Then it pulled the information about the Opportunity.
    Interesting that it only narrows it to my Opportunities at this point - it feels like the line item query could get a lot of false positives.
So I can pick the odd hole, but all in all a good effort.
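
For reference, the chained queries looked roughly like the following sketch. The object and field names are the standard ones I'd expect to be used, but the generated SOQL isn't exposed verbatim, so treat this as illustrative rather than a transcript of what Copilot ran:

// Step 1 - find the products whose names match the search term
List<Product2> products = [SELECT Id FROM Product2
                           WHERE Name LIKE '%Cordless Drill%'
                           LIMIT 10000];

// Step 2 - find the opportunities with line items for any of those products
List<OpportunityLineItem> lineItems = [SELECT OpportunityId FROM OpportunityLineItem
                                       WHERE Product2Id IN :products];

Set<Id> oppIds = new Set<Id>();
for (OpportunityLineItem item : lineItems)
{
    oppIds.add(item.OpportunityId);
}

// Step 3 - pull the opportunity details, only now narrowing to my records
List<Opportunity> opportunities = [SELECT Id, Name, StageName, CloseDate, Amount
                                   FROM Opportunity
                                   WHERE Id IN :oppIds
                                   AND OwnerId = :UserInfo.getUserId()];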

Test #2


The next test was to add some filtering to the opportunity aspect of the request - I'm only interested in opportunities that are open. Once again, Copilot has no problem handling this request:


Test #3


The final test was using a prompt that had failed in earlier tests. This introduced a time component - the opportunities had to be open and created in the last 300 days - and used slightly different wording, asking for opportunities that include "the cordless drill product".

This time Copilot was wrong-footed:


My assumption here was that the date component had tripped it up - maybe it didn't include today - or that the "the cordless drill product" phrasing had resulted in the wrong product being chosen. Inspecting the queries showed something else though:


  • The products matching 'cordless drill' had been identified correctly
    This was through a separate query, presumably because I'd mentioned 'product'
  • The opportunity line items were queried for the products, but this time the query retrieved the line item Id rather than the related Opportunity Id
  • An attempt was then made to query opportunity records that are open, created in the last 300 days, and whose Id matches a line item Id, which will clearly never be successful. 
Rewording the request to "Show my open opportunities for the last 300 days that include cordless drills" gave me the correct results, so it appears the use of the keyword 'product' changed the approach and caused it to lose track of what it was supposed to be doing.
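
To make the failure concrete, here's a sketch of the shape of the last two queries - again illustrative rather than the literal generated SOQL:

// The line item query returns the line item Ids rather than the related Opportunity Ids...
List<OpportunityLineItem> lineItems = [SELECT Id FROM OpportunityLineItem
                                       WHERE Product2Id IN :products];
Set<Id> lineItemIds = new Map<Id, OpportunityLineItem>(lineItems).keySet();

// ...so the final query compares Opportunity Ids against line item Ids and can never
// return a row, regardless of the status and date criteria
List<Opportunity> opportunities = [SELECT Id, Name FROM Opportunity
                                   WHERE Id IN :lineItemIds
                                   AND IsClosed = false
                                   AND CreatedDate = LAST_N_DAYS:300];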

Conclusion


The latest reasoning engine is definitely an improvement on previous versions, but it still gets tripped up with requests that reference multiple sObject types with specific criteria. 

While rewording the request did give a successful response, that isn't something I can see going down well with users - they just want to ask what is on their mind rather than figure out how to frame the request so that Copilot will answer it.

So I can't retire my custom actions just yet, but to be fair to Salesforce they have said that not all aspects of the Atlas Reasoning Engine will be available until February 2025. That said, I'm not sure I'd be happy if I'd been charged $2 to see it get confused!





Saturday, 5 October 2024

Agentforce - The End of Salesforce Human Capital?

Image generated by GPT-4o based on a prompt by Bob Buzzard

It's been a couple of weeks since Dreamforce and according to Salesforce there were 10,000 Agents built by the end of the conference, which means there's probably more than 20,000 now. One of those is mine, which I built to earn the Trailhead badge while waiting for the start of a Data Cloud theatre session and it did only take a few minutes.

So does this mean we need to sit back and prepare for a life of leisure while the Agents cater to our every whim?

The End of Humans + Salesforce?

Is this the end of humans working on/in Salesforce? Well, not for some time in my opinion - these are very entry level tasks right now, and highly reliant on existing automation to do the actual work of retrieving or changing data. I suppose it's possible that eventually we'll end up with a perfect set of Agent actions (and supporting flows/Apex), but given we haven't achieved anything like that kind of reusability in the automation work we've carried out over the last 20-odd years in Salesforce, it seems optimistic to think we'll suddenly crack it, even with the assistance of unlimited Agents. Any reports of our demise will be greatly exaggerated.

Fewer Humans + Salesforce?

This seems more plausible - not because of the current Agent capabilities displacing humans, but the licensing approach that Salesforce are taking. By moving away from per-seat licensing to per-conversation, Salesforce are clearly signalling that they expect to sell fewer licenses once Agentforce is available. 

Again, I don't think we're anywhere close to this happening, but it's the direction of travel. What it probably will do is give organisations pause for thought when creating their hiring plans next year. Will quite so many entry level recruits be required compared to previous years? 

Without Juniors, Where are the Seniors?

Something that often seems to be glossed over when talking about how generative AI will replace great swathes of the workforce is succession planning. Everyone who is now working as a Senior <anything> started out as a Junior <anything>, learned a bunch of stuff, gained experience and progressed. Today's Seniors won't last forever - there are a few years in them certainly, but eventually they'll run out of steam. And if there are no Juniors being hired, where does our crop of Seniors come from by 2034 and beyond?

It's likely to be one of two ways:

  • We grow them differently. Using AI we can fast-track their experience and reduce the amount they have to learn and retain themselves. Rather than people organically encountering teachable moments in customer interactions, we'll engineer those scenarios using an "<insert role here> Coach". The coach will present as a specific type of customer in a specific scenario, receive assistance and then critique the performance.
  • We don't need them, as the AI is so incredibly powerful that it exceeds the sum of all human knowledge and experience and can perform any job at any level.
I'm expecting the first way, but if I'm wrong I'm ready to welcome our Agent overlords.

Conclusion

No, I don't think Agentforce means the end of Human Capital in the Salesforce ecosystem. It does mean we need to do things differently though, and we shouldn't get so excited about the technology that we forget about the people. 


Sunday, 11 August 2024

The Evil Co-Worker presents Evil Copilot - Your Untrustworthy AI Assistant

Image generated by GPT-4o based on a prompt from Bob Buzzard

Introduction

Regular readers of this blog will be familiar with the work of my Evil Co-Worker to worsen the Salesforce user experience wherever possible. The release of Einstein Copilot has opened up a whole raft of possibilities, where the power of Generative AI can be leveraged to make the world a better place .... but only for them! This week we saw the results of their initial experiments when the Evil Copilot was launched on an unsuspecting Sales Team - your untrustworthy AI assistant.

The Evil Actions

Evil Copilot only has a couple of actions, but these are enough to unsettle a Sales team and help the Evil Co-Worker gain control of important opportunities.

What Should I Work On

The first action is What Should I Work On. An unsuspecting Sales rep asks for advice about which accounts and deals they should focus on today, expecting to be pointed at their biggest deals that will close soon, and the high value accounts that should be closely monitored. Instead they are directed to their low value/low probability opportunities and accounts that they haven't dealt with for ages. They are also informed that it doesn't look like Sales is really for them, and advised to look for a different role! Quite the demotivating start to the day:


Opportunity Guidance


Note also that the rep is advised that they have an opportunity that is a bit tricky and they should seek help from E. Coworker. Before they do this, they use Copilot to look up the opportunity:


It turns out this is their biggest opportunity, so the user seeks the sage advice of Copilot, only to hit another evil action and another knock to their confidence - Copilot flat out says that they aren't up to the job and the best bet is to hand it over to, you guessed it, E. Coworker!


With just a couple of actions, Evil Copilot has most of the Sales team focused on trivia, while all the top opportunities end up in the hands of the Evil Co-Worker - not a bad return for a few hours work!

But Seriously Folks!

Now this is all good fun, but in practice my Evil Co-Worker would require administrator access to my Salesforce instance and very little oversight to be able to unleash Evil Copilot on my users. And I've no doubt there are more than a few companies out there where it would be entirely possible to do this, at least until an angry Sales rep called up to ask what was going on!

But a Copilot implementation doesn't have to be intentionally Evil to cause issues. The Large Language Models will follow the instructions given to them in Prompt Templates, even if that wouldn't be a reasonable course of action for a human member of staff - if the instruction is to tell a user they don't appear to be suited to their job, they'll do it without question. While we humans can tell that this isn't appropriate, the models will see it as perfectly reasonable. It won't be picked up as toxic either - simple constructive criticism won't raise any flags. 

That's why you always need to test your Prompt Templates, and your Copilot actions, with a variety of data and users - to ensure that your intended request doesn't turn into something completely different in the wild. We've all sent emails that we were convinced had 100% clarity, only to see someone take them entirely the wrong way, and then when we look at them again we realise how ambiguous or subjective they were. And always have a second pair of eyes reviewing the content of a Prompt Template before making it available to users - Evil Co-Workers are constantly on the lookout for weak points in a process.




Saturday, 25 May 2024

Five Einstein Copilot Gotchas

Image created by DALL-E 3 based on a prompt by Bob Buzzard

Introduction

I've been working with Copilot for a few weeks now, both the current live version in a partner demo org, and the Summer 24 preview in a sandbox. I've learned quite a bit, including a few gotchas that took me by surprise, even though I was expecting a few hurdles with new capabilities like these. 

Read on to make sure they don't catch you out.

1. Only SObjects with User Interface Support can be Displayed

I wrote about this a week or two ago, specifically around Tasks and Events. The docs have now been updated to reflect this:

    Einstein Copilot supports custom and standard User Interface API-supported objects

2. Don't Change Apex Action Outputs

Once you've created an Apex Copilot action and defined the format of the output parameters, you can't change them. Not in a way that works at any rate. You won't get any errors, but Copilot has cast your original formats in stone and if you return anything else it will just get discarded. If you need to change this, delete the action and recreate it. That might sound like a bit of work, but the alternative is scratching your head for a couple of hours and then doing it anyway, so it's really not.
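
As a concrete (and hypothetical - the class and property names are mine) example of the kind of change that silently does nothing, suppose the action was created with an output class like this and a property is added later:

public class ActionOutput
{
    @InvocableVariable
    public String summary;

    // Added after the Copilot action was created - Copilot silently discards this
    // property from the response until the action is deleted and recreated
    @InvocableVariable
    public String riskRating;
}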

3. Don't Change Apex Grounding for Prompt Templates

This arose because my grounding wasn't being returned to the prompt in the Summer 24 sandbox, which I have a case open with Salesforce about, and I was trying to figure out what was happening. This was also in the Developer Console as I didn't have the CLI hooked up. I added some debug to my grounding code, saved the class without error, but when I ran it again the debug didn't appear. After some trial and error I opened the class again to find my debug wasn't there. I couldn't get the changes to stick until I recreated the template.  If you need to change the Apex grounding code, create a new class and reference that instead. Again, the alternative is to spend several hours failing and then do that, so it's better to short circuit.

4. System Admins need Prompt Builder Permissions to Compile Code

This is one that I suspect is a bug in Summer 24. Once I had some grounding Apex classes in the org, if another System Administrator tried to deploy a change and run tests they would get failures that the grounding classes had invocable method capability type decorators that weren't available. Giving them the Prompt Template Manager permission set resolved this, but it felt like a really odd thing to have to do. I also have a case raised about this, to either get confirmation that it's a bug or get it added to the docs.

5. Users need Class Permissions to Access Apex Actions

This one isn't really a surprise, as it matches with quite a lot of other features, but I thought I'd mention it as it's not called out in the documentation at present. Where it gets tricky is figuring out that this is the issue - you log in as another user, try the prompt and it doesn't pick your custom action. Or you don't know whether it considered your custom action or not, as it told you there were no results. It's tricky to debug when this happens, especially if it's a regular user that can't access Copilot Builder. 




Saturday, 11 May 2024

Formatted Output from Copilot Custom Actions


Image generated by DALL-E 3 based on a prompt from Bob Buzzard

Introduction

If you've seen any of the demos of Einstein Copilot, you'll notice that sometimes the responses are nicely formatted using the Lightning Design System, while other times they are simply text on a darker background - e.g. from the TrailblazerDX keynote, the pipeline information is text:


While the contacts are shown as a formatted list, with fields specific to the solution:


This isn't particularly well documented, so it's a matter of trial and error to figure out what works and what doesn't.

Text Responses

Text responses came out the same regardless of what I tried - the text that I return appears on a dark background. If I format it as JSON or CSV, it comes out in JSON or CSV format, even if I included instructions to display as a list of label/value pairs. 

The same goes for HTML markup - it's taken as text and shown to the user. Using \n for a line break works, but aside from that what you return from your custom action is what you see on the screen.
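
As a trivial illustration (the class name and values are invented for this example), the only formatting that survived in my testing was the newline:

public with sharing class TextResponseAction
{
    @InvocableMethod(label='Get Pipeline Summary' description='Returns a plain text summary')
    public static List<String> getSummary(List<String> requests)
    {
        // The \n is honoured as a line break; any HTML or JSON in the string is
        // shown to the user exactly as returned
        return new List<String>{'Pipeline: 1.2m\nOpen Opportunities: 7'};
    }
}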

Custom Class

Returning custom class instances introduces a little more formatting. You can only return a single "value" from your custom action, but this can contain a list of custom class instances and Copilot will render each of the properties in its own "text box". In the example below I return a single instance of my custom class Output, but this contains a list of instances of another custom class. These in turn contain the fields from Task records (as using the Tasks directly throws errors):

public class Output
{
    @InvocableVariable
    public List<CustomRec> recs;
}

public class CustomRec
{
    @InvocableVariable
    public String ident;
        
    @InvocableVariable
    public String subject;
        
    @InvocableVariable
    public String description;
        
    @InvocableVariable
    public String activityDate;

    public CustomRec(Task task)
    {
        // Labels are baked into the property values, as Copilot only renders the values
        this.ident=(null!=task.Id?'ID : ' + task.Id:null);
        this.subject=(null!=task.Subject?'Subject: ' + task.Subject:null);
        this.description=(null!=task.Description?'Description :\n' + task.Description:null);
        this.activityDate=(null!=task.ActivityDate?'Due Date : ' + task.ActivityDate:null);
    }
}

Copilot displays the properties from each record, although there isn't any separation between records. There's also a wrinkle in there I wasn't expecting - the properties are displayed in alphabetical order: activityDate, description, ident, subject. While unexpected, this does give me a way to order the properties if I need to:


sObject Records

As before, I can only return a single item from a custom action, so if I want to send back a list of records I have to wrap them in a containing class:

public class Output
{
    @InvocableVariable
    public List<Opportunity> opportunities;

    public Output()
    {
        opportunities=new List<Opportunity>();
    }
}

And this comes out very nicely, with a card for each record:


sObject Records and more!

This is the one that I'm most pleased about - it allows me to display records and some commentary about each record, while still retaining the Lightning Design System formatting for the record itself.

Once again I'm returning a custom class containing a list of other custom classes, but this time it's a wrapper class containing an sObject record and some additional information:

public class Output
{
    @InvocableVariable
    public List<OpportunityWrapper> opportunities;

    public Output()
    {
        opportunities=new List<OpportunityWrapper>();
    }
}

public class OpportunityWrapper
{
    @InvocableVariable
    public Opportunity opp;

    @InvocableVariable
    public String message;

    @InvocableVariable
    public String trailer;
        
    @InvocableVariable
    public String zzzSeparator='-----------------------------';

    public OpportunityWrapper(Opportunity opp, String message, String trailer)
    {
        this.opp=opp;
        this.message=message;
        this.trailer=trailer;
    }
}

I really didn't expect this to work, but it did!


I've taken advantage of the fact that the properties are displayed in alphabetical order to display a message about how near the close date is, then the record, then some commission information, followed by a rather ugly separator. I think if I was going to use this in production I'd put all the messaging above or below the record so that I didn't have to include a clunky separator, but I wanted to see if both above and below would work.

Conclusion

I don't think this is too bad, given that Copilot is at its heart a text based tool. Hopefully in time we'll be able to apply more formatting to text output, but at least sObject records are styled for the Lightning Experience. Worst case, we all end up polluting our Salesforce org with a bunch of fake records that are just used as carriers for Copilot responses! 




Sunday, 5 May 2024

Einstein Copilot - AI + (most of your) Data + CRM?

Image created by DALL-E 3 based on a prompt from Bob Buzzard

Introduction

Einstein Copilot from Salesforce went GA around a week ago (25th April 2024, for future readers), which changes my attitude to it. When things are in preview/pilot/beta, I'll take a bit of additional effort tracking down issues if it means I get my hands on things earlier. Once a feature goes GA though, the gloves are off - real money is changing hands to use these features so we hold them to a higher standard.

As the title of this post suggests, my first disappointment post-GA was that I can't use all my data across Generative AI - specifically Tasks and Events, part of the Salesforce Activities feature. This won't come as a surprise to old hands in the Salesforce ecosystem - hands up those who remember using text fields to capture IDs because custom lookups weren't available! More recently, they still aren't supported by the User Interface API, which means some of the standard LWC functionality won't work for them, and it's slightly concerning that an idea to add this capability has been open for 3+ years without even a comment from Salesforce. Yes, Activities are really Tasks, Events and Calendars on each other's shoulders in a long overcoat, but this was the choice Salesforce made and they need to own it.

There are workarounds for most areas of the platform where support for Activities is not as complete as it could be, but that isn't the case for the first issue that I encountered.

Prompt Template Parameters

Activities can't be specified as parameters to Prompt Builder. The docs suggest they should be available, as they are standard objects:

My use case was that I wanted to send an Event record to an LLM, containing details of a meeting that had taken place with a customer. The record includes the intended outcome, notes from the meeting, and any comments from company attendees after they've had a short period of time to reflect. The LLM would then recommend the best course of action based on the meeting. This would then be used to create a Copilot action, so my users could focus on doing the next step rather than figuring it out.

Searching for Event in the resource picker gave me pause for thought:


Maybe it's under Activity/Activities? Sadly not:


For the sake of completeness I also checked for Task - same result:


Now this doesn't mean that Activities can't be used with Prompt Builder - I can access the Open Activities and Activity History if I specify another object like Opportunity as the resource:



As this is the only way to specify a record to a Prompt Template, there aren't any workarounds to handle this, or none that don't suck anyway.

I could pass the Who and What information to the template and try to figure out the Event - for example, if the Event was with a Contact to talk about an Opportunity, I could pass the IDs for those in and use Apex/Flow grounding to retrieve the Events that match these IDs and hope I can figure out which one the user is interested in. If there are a number of them, or the user doesn't want something simple like the last one, I'm highly likely to pick the wrong one and give them unwanted advice about it.
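
A minimal sketch of what that grounding might look like (the method and variable names are mine, and the "most recent Event wins" rule is exactly the kind of guess that makes this approach fragile):

// Hypothetical grounding helper - given the Contact and Opportunity Ids passed to the
// template, guess which Event the user means by taking the most recent one
public static List<Event> findCandidateEvent(Id whoId, Id whatId)
{
    return [SELECT Id, Subject, Description, ActivityDate
            FROM Event
            WHERE WhoId = :whoId
            AND WhatId = :whatId
            ORDER BY ActivityDateTime DESC
            LIMIT 1];
}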

I could make the users populate a lookup to a custom object and then pass the ID of that into the Prompt Template. The downside to this is my users have to do extra work in order to make the AI assistant useful - the exact opposite of how it's supposed to work. I'd also have to do a data fix to create custom object records for the recent history of Events.

Rendering Lists of Activities

One of the nice features of Copilot is if my custom action returns a list of sobjects in the correct format, it will apply Lightning Design System formatting:


So I decided to create a custom action that retrieves the Tasks that I should be focusing on - this has a bunch of rules in the action to determine which Tasks are actually important, as anyone who creates a Task for me will naturally mark it as Urgent and High Priority.

I'm able to retrieve the list of Tasks, but Copilot can't render them:


and check out that error message - Copilot fails to render the information provided and suggests you talk to your admin about it! I predict frustration all round!

Undeterred, I changed my code to return a list of custom Apex class instances that mirrored a task, so a property for Id, a property for Subject etc. Slightly better, in that it briefly rendered some text and then errored again.

After quite a bit of trial and error I was able to get a list of custom Apex class instances to render - I had to either remove the Id of the task, or put it into a String property that had any name other than Id. It looked awful by comparison, and I had no way to generate a link to any of the Tasks, as I'd had to put a false nose and dark glasses on the Id. The user could copy and paste the Id, but it seemed like the introduction of an AI assistant was once again leading to more work for the User, not less. So here there is a workaround, but it's not a great user experience.
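
The shape that eventually rendered looked something like this minimal sketch (class and property names are mine) - note the Id smuggled through under an assumed name, which is why there's no way to build a link from it:

public class TaskSummary
{
    // A property actually named 'Id' triggered the rendering error, so the record Id
    // travels under a different name
    @InvocableVariable
    public String taskReference;

    @InvocableVariable
    public String subject;

    @InvocableVariable
    public String dueDate;

    public TaskSummary(Task task)
    {
        this.taskReference = task.Id;
        this.subject = task.Subject;
        this.dueDate = String.valueOf(task.ActivityDate);
    }
}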

My suspicion regarding this problem is that if Copilot can detect that there is either a Task or the Id of a Task in the data, it tries to pull some information about the Task sObject and then errors - perhaps via the User Interface API, which doesn't support Tasks, given this is rendering on the screen.

Conclusion

I'm really surprised at this gap, as Tasks and Events are pretty fundamental to Customer Relationship Management. It's either quite the oversight or left out, because the APIs don't support them, in the hope that nobody notices. It's all well and good having every web and commerce interaction with your customer in Data Cloud, but if you can't ask questions of your standard CRM objects, then we really aren't getting the AI + Data + CRM that we've been promised will change our lives.




Thursday, 18 April 2024

Chaining Einstein Copilot Custom Actions


Image created by DALL-E 3 based on a prompt by Bob Buzzard

Introduction

I've been testing out Salesforce's Einstein Copilot assistant for a few weeks now, but typically I've been tacking a single custom action onto the end of one or more standard actions, and usually a Prompt Template where I can easily ground it with some data from a specific record. 

Something that hasn't been overly clear to me is how I can link a couple of custom actions and pass information other than record ids between them. The kind of information that would be shown to the user if it was a single action, or the kind of information that the user had to provide in a request.

Scenario

The other aspect I was interested in was some kind of DML. I've seen examples where a record gets created or fields get changed based on a request, but the beta standard actions don't have this capability, so it was clear I'd need to build that myself. So the scenario I came up with was: Given an opportunity, its existing related activities, and a few rules (like if it's about to close someone should be in contact every day), Copilot suggests a follow up task and inserts it into the Salesforce database. 

This can't easily (or sensibly, really) be done in a single custom action. I guess I could create a custom Apex action that uses a Prompt Template to get a task recommendation from the LLM and then inserts it, but it seems a bit clunky and isn't overly reusable. What I want here are two custom actions:

  1. A Prompt Template custom action that gets the suggestion for the task from the LLM
  2. An Apex custom action that takes the details of the suggestion and uses it to create a Salesforce task.
I also don't want to have to expend a lot of effort processing the recommendation response to pick out the important details - if I'm writing that much code I might as well figure out the task that I need as well.

Implementation


The key aspect of the Prompt Template is explaining how I want the response to be formatted, so that it can be easily processed by Apex code. I was half expecting to have to use few shot prompting to get this to work, but was pleasantly surprised to find that I could just describe it in natural language with a few rules about labelling:
Generate a recommendation for a follow up task in JSON format, including the following details:
- The subject of the task with the element label 'subject'
- A brief description with the element label 'description' - this should include the names of anyone other than the user responsible for the task who should be involved
- The date the task should be completed by, in DD/MM/YYYY format with the element label 'due_date'
- The record id of the user who is responsible for the task with the element label 'user_id'
- {!$Input:Candidate_Opportunity.Id} with the element label 'what_id'

Do not include any supporting text, output JSON only.
Note that I did have to remind it not to add any text over and above the JSON output - this is something LLMs tend to suffer from a lot in my experience, always wanting to add a "here you go" or "as requested". Nice that they try to be polite, but not helpful when you need structured data!

Trying this out in Copilot Builder showed that I was getting the output that I wanted:


Note that while there is the additional text of 'Great, I have generated ...', that's Copilot rather than the LLM, so if I can chain this to another custom action I'll just get the JSON format data.

My Apex Custom Action code is surprisingly svelte:

public with sharing class CopilotOppFollowUpTask 
{
    @InvocableMethod(label='Create Task' description='Creates a task')
    public static List<String> createTask(List<String> tasksJSON) {

        // Flatten the JSON recommendation into a map of element label -> value
        JSONParser parser=JSON.createParser(tasksJSON[0]);
        Map<String, String> params=new Map<String, String>();

        while (null!=parser.nextToken()) 
        {
            if (JSONToken.FIELD_NAME==parser.getCurrentToken())
            {
                String name=parser.getText();
                parser.nextToken();
                String value=parser.getText();
                System.debug('Name = ' + name + ', Value = ' + value);
                params.put(name, value);
            }
        }

        // Convert the DD/MM/YYYY due date string into a Date instance
        String dateStr=params.get('due_date');

        Date dueDate=Date.newInstance(Integer.valueOf(dateStr.substring(6,10)),
                                      Integer.valueOf(dateStr.substring(3, 5)),
                                      Integer.valueOf(dateStr.substring(0, 2)));

        Task task=new Task(Subject=params.get('subject'),
                           Description=params.get('description'),
                           ActivityDate=dueDate,
                           OwnerId=params.get('user_id'),
                           WhatId=params.get('what_id'));

        insert task;
        
        return new List<String>{'Task created with id ' + task.Id};
    }
}
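
A quick way to sanity check the parsing from Execute Anonymous is to hand it a payload in the same shape the Prompt Template is instructed to produce - the values below are invented, and I'm grabbing an arbitrary Opportunity to act as the what_id:

Opportunity opp = [SELECT Id FROM Opportunity LIMIT 1];

String sampleJSON = '{"subject" : "Follow up on pricing",' +
                    ' "description" : "Confirm the revised pricing with the champion",' +
                    ' "due_date" : "28/04/2024",' +
                    ' "user_id" : "' + UserInfo.getUserId() + '",' +
                    ' "what_id" : "' + opp.Id + '"}';

System.debug(CopilotOppFollowUpTask.createTask(new List<String>{sampleJSON}));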

All that's needed to make it available for a Custom Action is the invocable aspect:

    @InvocableMethod(label='Create Task' description='Creates a task')

When I define the custom action, it's all about the instruction:


which hopefully is enough for the Copilot reasoning engine to figure out it can use the JSON format output from the task recommendation action. Of course I still need to give Copilot the correct instruction so it understands it needs to chain the actions:


And here's the task it came up with:





Sunday, 7 April 2024

Einstein Copilot Custom Actions

Image created by DALL-E 3 based on a prompt by Bob Buzzard

Introduction

One of the key features of Einstein Copilot from Salesforce is its extensibility - while there are a bunch of standard actions, and many more coming, there will always be something more you want to give your users. Custom actions allow you to create an AI assistant that targets your specific business challenges, adding capabilities to your copilot that generate real value for your users.

Scenario

In my last post, I introduced the Sales Coach - a custom Lightning Web Component that you add to an Opportunity record page. When the Sales Coach component renders, it executes Apex code to hydrate a prompt template with details of the Opportunity and request some guidance from an LLM. 

While this works well for users that are viewing an Opportunity record, I also want to provide a way for users to receive guidance on deals while they are in another part of the Salesforce application - their Home Page, for example.

Creating the Custom Copilot Action

Custom Copilot Actions can be created using Apex, Flow or Prompt Templates. I've gone for a Prompt Template, as I already have this in place for the Sales Coach component. Once I choose the Reference Action Type of Prompt Template, I get to select from a list of existing prompts in the Reference Action selector. 


Once I've chosen my action, I provide details of how the Copilot can use it:



The Copilot Action Instructions field is pre-populated from my Prompt Template description. As I want to use it in exactly the same way, I can leave that alone. I then provide the instructions around the Candidate Opportunity input, and check the box in the Output section to Show in conversation - without this, the user won't be able to receive the coaching they so richly deserve.

Note that while the Prompt Template pulls several fields from the Opportunity record - Name, Amount, Close Date, Stage Name - I don't need to specify any of this information here. I don't even need to specify that it's an Opportunity record - that is picked up automatically from the Prompt Template input. Pretty simple really.

Once my custom action is created, I need to add it to the Copilot. I do this via the Copilot Builder, selecting my new action from the list of available actions and then clicking a button to add it to the Active Copilot:



As an aside, this Copilot isn't Active yet, so the text in the button is slightly misleading, but not a big deal as I can only have one Copilot so I'm hardly likely to get confused.

Putting it Together

Remember earlier when I created the Copilot action I gave it details of how to use it? The Copilot Action Instructions field does what it says on the tin - it instructs the Copilot as to the purpose of the action and when it should be used:


I've defined this as providing advice to progress a specific opportunity, so in order to include the action in the Copilot plan, I need to ask it to help me do that. As we're in AI world, I don't need to be as obvious as asking for advice to progress an opportunity - instead I used my own words and asked for "help to win a deal".

Interestingly, when I ran this to capture the screenshots, Copilot told me that it was following the instructions that I'd provided for the action, which I hadn't seen before:




Something to remember when using Prompt Template actions is they will lead to a slower response time, as each one involves a round trip to an LLM. There will always be one round trip, to create the plan, but if you manage to generate a request that involves multiple Prompt Template actions, there will be separate requests made for each of those, and you'll be consuming additional tokens. 

Monday, 1 April 2024

Deploy Code at the Speed of AI with Ship Happens

Image from DALL-E 3 based on a prompt by Bob Buzzard


Introduction


It's been quite the year for Salesforce Einstein - Copilot, Prompt Builder, lots of new Generative AI features, and it's only the beginning of April!

Einstein for Developers has been out in beta for a while now, built on CodeGen - Salesforce's in-house open-source LLM. To date this has been focused on generating code from natural language prompts, auto-completions, and unit tests, but now it's time for artificial intelligence to dip a toe into the Dev Ops space with Ship Happens.

Ship Happens


In keeping with most Generative AI launches, Einstein Dev Ops goes live with a single feature and the expectation that more will come in a few months. It's quite the feature though - with Ship Happens, your code will barely have a chance to touch down in a scratch org before it's flung to production and into the hands of users.

You've Got to be Shipping Me


Research shows that by far the biggest blocker to getting code into production is developers accepting that they have finished. While this sounds like a simple decision, developers can't help polishing - always looking to squeeze another feature in, or a percentage point of unit test coverage.  Ship Happens unblocks your development team by handing the decision off to Generative AI.

As development progresses, the code is constantly analysed by a Large Language Model trained on years of Salesforce deployments, both successful and unsuccessful, and the quality of code for your specific instance. Once the LLM decides the code is as good as it's ever going to be, it's committed to version control and on its way to production before the developer knows what has hit them. To ensure full traceability, Ship Happens reuses the same commit message when it takes the decision to ship a feature - "You've Got to be Shipping Me".

Welcome to Ship Creek


Crunching the numbers after the pilot program showed that users are delighted when Ship Happens. An updated experience every time they login, reduced waiting time for new features, and the oh so familiar bugs they have come to expect with every release. Once they accept their role under our new AI overlords, your developers will enjoy the freedom to focus purely on writing code - with Ship Happens in charge, everyone knows that features will be released exactly when they should be, not a moment earlier and not a moment later. Before long, everyone will be happily floating along Ship Creek and forget they were ever in a different place.