Saturday 26 October 2024

Einstein Copilot vs Complex Queries

Image created by GPT-4o based on a prompt by Bob Buzzard

Introduction

Now that Agentforce (for Service at least) is GA, we have access to the latest Atlas Reasoning Engine. That's the theory at least - I haven't seen a way to find out what version is in use, which models it has access to, etc., but that doesn't worry me too much as I can't change or influence it anyway. Anecdotally I do feel that the ability to handle requests with more than one step has improved over the last few months, but this sounded like a step change - Chain of Thought reasoning and an iterative approach to finding the best response! 

My focus continues to be on Copilot (aka Agentforce for CRM, or whatever name it's going by this week), but I'm writing rather more custom actions than I'd like. Each of these introduces a maintenance overhead and as Robert Galanakis wisely wrote "The code easiest to maintain is the code that was never written", so if there's a chance to switch to standard functionality I'm all over it.

The Tests

Where I've found standard Copilot actions less than satisfactory in the past is around requests that require following a relationship between two objects and applying filters to each object. Show me my accounts created in the last 200 days with contacts that I've got open tasks against, that kind of thing. Typically it would satisfy the account ask correctly but miss the contact requirement. Now I can easily create an Apex action to handle this request, but the idea of an AI assistant is that it handles requests for me, rather than sending me the requirements so I can build a solution!
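
To give a sense of what that custom work looks like, here's a minimal sketch of the sort of Apex action I mean, using the accounts/contacts/open tasks example above. The class name, label and hard-coded 200-day default are mine for illustration - it's the shape of such an action, not the actual one I run.

```apex
public with sharing class OpenTaskAccountsAction {
    // Illustrative custom Copilot action: accounts I own, created in the last N days,
    // that have contacts with open tasks assigned to me.
    @InvocableMethod(label='Get Accounts With Contacts That Have Open Tasks')
    public static List<List<Account>> getAccounts(List<Integer> createdWithinDays) {
        Integer days = (createdWithinDays == null || createdWithinDays.isEmpty()
                        || createdWithinDays[0] == null) ? 200 : createdWithinDays[0];
        DateTime cutoff = DateTime.now().addDays(-days);

        // Open tasks owned by me - collect the contacts they are against
        Set<Id> contactIds = new Set<Id>();
        for (Task t : [SELECT WhoId FROM Task
                       WHERE IsClosed = false
                       AND OwnerId = :UserInfo.getUserId()
                       AND WhoId != null]) {
            contactIds.add(t.WhoId);
        }

        // The accounts those contacts belong to
        Set<Id> accountIds = new Set<Id>();
        for (Contact c : [SELECT AccountId FROM Contact
                          WHERE Id IN :contactIds AND AccountId != null]) {
            accountIds.add(c.AccountId);
        }

        // Finally, restrict to my accounts created inside the window
        List<Account> accounts = [SELECT Id, Name, CreatedDate FROM Account
                                  WHERE Id IN :accountIds
                                  AND OwnerId = :UserInfo.getUserId()
                                  AND CreatedDate >= :cutoff];

        return new List<List<Account>>{ accounts };
    }
}
```

It's not difficult code, but every action like this is another class, another test class and another thing to revisit when the requirements change - hence my enthusiasm for standard functionality.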

I've created 4 products with similar names:

  • Cordless Drill 6Ah
  • Cordless Drill 4Ah
  • Cordless Drill 2Ah
  • Travel Cordless Drill
and created a single Opportunity for the Cordless Drill 8Ah. I then ask some questions about this.

Test #1


Ask Copilot to retrieve Opportunities based on the generic product name 'Cordless Drill'

When I've tested this in the past, Copilot has refined 'Cordless Drill' to one of the four available products and then searched for Opportunities containing that product. Sometimes it gets lucky and picks the right one, but more often than not it picks the wrong one (3-1 odds) and tells me I don't have any.

The latest Copilot gets this right first time.



and checking the Query Records output shows the steps involved:



  • First it looked for products matching 'Cordless Drill'
    A limit of 10,000 seems a bit large, but I guess this isn't being passed on to an LLM and consuming tokens.
  • Then it found the opportunities with line items matching any of the 'Cordless Drill' products.
  • Then it pulled the information about the Opportunity.
    Interesting that it only narrows it to my Opportunities at this point - it feels like the line item query could get a lot of false positives.
So I can pick the odd hole, but all in all a good effort.
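
For anyone who prefers code to screenshots, here's roughly what that chain looks like if you hand-roll it in anonymous Apex. This is my reconstruction of the steps shown in the Query Records output, not the literal queries Copilot generated:

```apex
// Step 1: products whose name matches the search term
// (the Query Records output showed a LIMIT of 10,000 on this lookup)
Map<Id, Product2> products = new Map<Id, Product2>([
    SELECT Id, Name FROM Product2
    WHERE Name LIKE '%Cordless Drill%'
    LIMIT 10000
]);

// Step 2: line items for any of those products, collecting the parent opportunity Ids
Set<Id> oppIds = new Set<Id>();
for (OpportunityLineItem oli : [
        SELECT OpportunityId FROM OpportunityLineItem
        WHERE Product2Id IN :products.keySet()]) {
    oppIds.add(oli.OpportunityId);
}

// Step 3: the opportunity details - only narrowed to my records at this final step
List<Opportunity> myOpps = [
    SELECT Id, Name, StageName, Amount, CloseDate
    FROM Opportunity
    WHERE Id IN :oppIds
    AND OwnerId = :UserInfo.getUserId()
];
```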

Test #2


The next test was to add some filtering to the opportunity aspect of the request - I'm only interested in opportunities that are open. Once again, Copilot has no problem handling this request:


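In query terms the only change from Test #1 is an open-only filter on the final opportunity lookup - something like this, again my reconstruction rather than the literal query:

```apex
// The line item step is the same as Test #1...
Set<Id> oppIds = new Set<Id>();
for (OpportunityLineItem oli : [
        SELECT OpportunityId FROM OpportunityLineItem
        WHERE Product2.Name LIKE '%Cordless Drill%']) {
    oppIds.add(oli.OpportunityId);
}

// ...the only difference is the open-only filter on the opportunity query
List<Opportunity> openOpps = [
    SELECT Id, Name, StageName, CloseDate
    FROM Opportunity
    WHERE Id IN :oppIds
    AND OwnerId = :UserInfo.getUserId()
    AND IsClosed = false
];
```
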
Test #3


The final test used a prompt that had failed in earlier tests. This introduced a time component - the opportunities had to be open and created in the last 300 days - and used slightly different wording, asking for opportunities that include "the cordless drill product".

This time Copilot was wrong-footed:


My assumption here was that either the date component had tripped it up - maybe it didn't include today - or the phrase "the cordless drill product" had resulted in the wrong product being chosen. Inspecting the queries showed something else though:


  • The products matching 'cordless drill' had been identified correctly
    This was through a separate query, presumably because I'd mentioned 'product'
  • The opportunity line items are queried for the products, but this time the query is retrieving the line item Id rather than the related Opportunity Id
  • An attempt is then made to query opportunity records that are open, created in the last 300 days and whose Id matches the line item Id, which will clearly never be successful. 
Rewording the request to "Show my open opportunities for the last 300 days that include cordless drills" gave me the correct results, so it appears use of the keyword 'product' changed the approach and caused it to lose track of what it was supposed to be doing.
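
To make the mismatch concrete, here's roughly what the failing final step was doing versus what it needed to do. Again this is my reconstruction, and it collapses the separate product lookup into the line item query for brevity:

```apex
// What the reasoning engine effectively did: collect line item Ids...
Set<Id> lineItemIds = new Set<Id>();
for (OpportunityLineItem oli : [
        SELECT Id FROM OpportunityLineItem
        WHERE Product2.Name LIKE '%Cordless Drill%']) {
    lineItemIds.add(oli.Id);
}

// ...then filter Opportunity Ids against them - which can never match
List<Opportunity> broken = [
    SELECT Id, Name FROM Opportunity
    WHERE Id IN :lineItemIds
    AND IsClosed = false
    AND CreatedDate = LAST_N_DAYS:300
];

// What it needed to do: collect the parent OpportunityId from the line items instead
Set<Id> oppIds = new Set<Id>();
for (OpportunityLineItem oli : [
        SELECT OpportunityId FROM OpportunityLineItem
        WHERE Product2.Name LIKE '%Cordless Drill%']) {
    oppIds.add(oli.OpportunityId);
}

List<Opportunity> working = [
    SELECT Id, Name FROM Opportunity
    WHERE Id IN :oppIds
    AND IsClosed = false
    AND CreatedDate = LAST_N_DAYS:300
];
```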

Conclusion


The latest reasoning engine is definitely an improvement on previous versions, but it still gets tripped up with requests that reference multiple sObject types with specific criteria. 

While rewording the request did give a successful response, that isn't something I can see going down well with users - they just want to ask what is on their mind rather than figure out how to frame the request so that Copilot will answer it.

So I can't retire my custom actions just yet, but to be fair to Salesforce they have said that not all aspects of the Atlas Reasoning Engine will be available until February 2025. That said, I'm not sure I'd be happy if I'd been charged $2 to see it get confused!

Saturday 5 October 2024

Agentforce - The End of Salesforce Human Capital?

Image generated by GPT-4o based on a prompt by Bob Buzzard

It's been a couple of weeks since Dreamforce and, according to Salesforce, there were 10,000 Agents built by the end of the conference, which means there are probably more than 20,000 now. One of those is mine, which I built to earn the Trailhead badge while waiting for the start of a Data Cloud theatre session - and it did only take a few minutes.

So does this mean we need to sit back and prepare for a life of leisure while the Agents cater to our every whim?

The End of Humans + Salesforce?

Is this the end of humans working on/in Salesforce? Well, not for some time in my opinion - these are very entry-level tasks right now, and highly reliant on existing automation to do the actual work of retrieving or changing data. I suppose it's possible that eventually we'll end up with a perfect set of Agent actions (and supporting flows/Apex), but given we haven't achieved anything like that kind of reusability in the automation work we've carried out over the last 20-odd years in Salesforce, it seems optimistic to assume we'll suddenly crack it - even with the assistance of unlimited Agents. Any reports of our demise will be greatly exaggerated.

Fewer Humans + Salesforce?

This seems more plausible - not because of the current Agent capabilities displacing humans, but the licensing approach that Salesforce are taking. By moving away from per-seat licensing to per-conversation, Salesforce are clearly signalling that they expect to sell fewer licenses once Agentforce is available. 

Again, I don't think we're anywhere close to this happening, but it's the direction of travel. What it probably will do is give organisations pause for thought when creating their hiring plans next year. Will quite so many entry-level recruits be required compared to previous years? 

Without Juniors, Where are the Seniors?

Something that often seems to be glossed over when talking about how generative AI will replace great swathes of the workforce is succession planning. Everyone who is now working as a Senior <anything> started out as a Junior <anything>, learned a bunch of stuff, gained experience and progressed. Today's Seniors won't last forever - there are a few years left in them, certainly, but eventually they'll run out of steam. And if there are no Juniors being hired, where does our crop of Seniors come from in 2034 and beyond?

It's likely to be one of two ways:

  • We grow them differently. Using AI we can fast-track their experience and reduce the amount they have to learn and retain themselves. Rather than people organically encountering teachable moments in customer interactions, we'll engineer those scenarios using an "<insert role here> Coach". The coach will present as a specific type of customer in a specific scenario, receive assistance and then critique the performance.
  • We don't need them, as the AI is so incredibly powerful that it exceeds the sum of all human knowledge and experience and can perform any job at any level.
I'm expecting the first way, but if I'm wrong I'm ready to welcome our Agent overlords.

Conclusion

No, I don't think Agentforce means the end of Human Capital in the Salesforce ecosystem. It does mean we need to do things differently though, and we shouldn't get so excited about the technology that we forget about the people. 
